This article first appeared on 28.10.13 on digitalhealth.net.
I recently did some practice Quality and Outcomes Framework (QOF) visits for our new local area team.
At first, I thought the process chosen was boring and mechanical. However, it brought up some interesting points.
It involved picking patients at random from different disease registers and then asking the practice for proof that the chosen patient had the disease in question, and that the practice had done what it claimed to have done.
Proving the diagnosis
Proving the diagnosis was interesting. Recent diagnoses were not a problem. It was often easy to see where the diagnosis was made and by whom.
For diabetes, it was easy to find the two fasting glucoses or an abnormal glucose tolerance test (an oral test used when fasting results are borderline) that confirmed the diagnosis.
And for heart failure, QOF has said for a while that all new patients must have an echocardiogram to confirm the diagnosis; so, again, these were straightforward diagnoses to prove.
But it became apparent that it was harder to prove some of the older diagnoses, and some diagnoses where the criteria are vaguer or have changed.
It became quite a trawl to show where the diagnosis was made, by whom and on what grounds. We had to get paper notes out and we identified some old chestnuts like x-ray diagnosis of chronic obstructive pulmonary disease with no other evidence or, indeed, any illness related behaviour.
For a large number of hypertensive patients, the diagnosis seemed to have been based on one or two readings above normal. And some heart failure diagnoses appeared to rest on a bit of ankle swelling noted years ago.
It was even harder to work out when the diagnosis had been made in another practice using a different system.
Cleaning the register
It is important to say we found no evidence of gaming. These issues didn’t appear to indicate a deliberate, concerted attempt to up the prevalence of these conditions.
However, you do wonder what effect these issues might have from practice to practice, depending on how strict each one is about making a diagnosis, and what the impact would be on practices with highly transient populations.
In most cases, if anything, the practices were probably working harder than they needed to; chasing these patients, seeing them regularly, and so forth.
So does it matter? Well, the practices could be concentrating on other things. Some patients might be on expensive drugs they don’t need or getting needless side effects.
When I worked for the old primary care trust, I used to try and get the medicines management team interested in doing a “stop unnecessary drugs” project.
This would have been more than another “change to a cheaper drug” programme and I was convinced that it would save more money. Perhaps a “register-clean” project might be the way into this?
Getting IT to help
Certainty of diagnosis can be an issue. A recent patient of mine had what I felt was a mild trigeminal neuralgia-type pain, and it appears to have settled with rest, NSAIDs (non-steroidal anti-inflammatory drugs) and reassurance. Would a specialist give it the same name, or call it an orofacial pain syndrome?
What about the girl I saw who had a pox-like rash that I didn’t think was chickenpox? I coded it as ‘viral rash’, whereas someone else would have put chickenpox in her notes.
Of course, the IT doesn’t help. It doesn’t question you on a new diagnosis beyond, sometimes, asking which side is affected.
It doesn’t bring up any criteria on screen and ask you to check against them. It doesn’t suggest similar diagnoses from the patient’s history that you might want to use.
The facilities to record tentative diagnoses and then evolve them are poor and poorly used. There is no way of linking a diagnosis to the letter, result or ECG that established it.
Getting IT to help a bit more
I think it should be possible to evolve IT systems so that, retrospectively, it is easy to see who made a diagnosis, what grade they were, on what grounds, with what certainty, and what their evidence was.
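As a rough illustration only (every name here is hypothetical, a sketch of the idea rather than any real clinical-system API), the kind of record such a system might keep could look like this:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiagnosisRecord:
    """One coded diagnosis with its provenance attached."""
    code: str        # clinical code, e.g. a SNOMED CT concept
    made_by: str     # clinician who entered it
    grade: str       # e.g. "GP partner", "registrar", "consultant"
    made_on: date
    certainty: str   # e.g. "confirmed", "working", "suspected"
    grounds: str     # the stated basis for the diagnosis
    evidence: list = field(default_factory=list)  # links to letters, results, ECGs

    def add_evidence(self, document_id: str) -> None:
        """Link a supporting document (letter, lab result, ECG) to the diagnosis."""
        self.evidence.append(document_id)

# Example: a type 2 diabetes diagnosis backed by two fasting glucose results
dx = DiagnosisRecord(
    code="44054006",   # SNOMED CT: type 2 diabetes mellitus
    made_by="Dr Example",
    grade="GP partner",
    made_on=date(2013, 10, 1),
    certainty="confirmed",
    grounds="Two fasting glucoses above the diagnostic threshold",
)
dx.add_evidence("lab-result-001")
dx.add_evidence("lab-result-002")
```

With something like this in place, a QOF reviewer could follow the evidence links instead of trawling the paper notes.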
As things stand, it was often difficult on the local area team project to work out who had entered an exception code and on what grounds.
I think there needs to be a special module to handle these issues. However, it needs to be done carefully with information and support to encourage consistency.
A few years ago, the function to record why a drug was stopped was added to the Emis system that my practice uses.
However, it is easy to ignore; so even if you use it, others don’t. Added to which, there aren’t enough options, and the way the information is presented back isn’t that good.
I’m sure there must be a way of linking it to management pathways. Take something like the ABCD of hypertension treatment.
Why, when I click on ‘hypertension’, doesn’t it pop up with a box that asks for – or displays – the evidence for the diagnosis, the correct treatment algorithm, which drugs have been tried, at what doses, and why they were stopped? After all, this would make it easy for me to know where to go next.
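As a toy sketch of what such a pop-up might encode (not clinical guidance, and assuming the older AB/CD rule of thumb, where A = ACE inhibitor, B = beta-blocker, C = calcium channel blocker, D = thiazide-type diuretic), the stepping logic itself is simple:

```python
def next_step_abcd(age: int, black_ethnicity: bool, drugs_tried: int) -> str:
    """Suggest the next treatment step under the AB/CD rule of thumb.

    A = ACE inhibitor (or ARB), B = beta-blocker,
    C = calcium channel blocker, D = thiazide-type diuretic.
    Illustrative toy only, not clinical guidance.
    """
    if drugs_tried == 0:
        # Step 1: younger, non-black patients tend to respond to A (or B);
        # older or black patients to C or D.
        return "A (or B)" if age < 55 and not black_ethnicity else "C or D"
    if drugs_tried == 1:
        return "combine across the divide: A + C or A + D"  # Step 2
    if drugs_tried == 2:
        return "A + C + D"                                  # Step 3
    return "add a further agent or refer for specialist advice"

# Example: a 48-year-old patient not yet on treatment
print(next_step_abcd(48, False, 0))
```

Even a crude rule like this, surfaced at the point of prescribing alongside the drugs already tried and why they were stopped, would tell me where to go next.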
Fit for the future: join me for a debate!
The developer in me cringes at this. I know full well the never-ending list of development requests you can get even for a simple iPhone app.
It must be maddening for something as complex as a clinical system. However, it is one thing to turn down little requests, and another to think about whether your entire system is suitable for the next 10+ years of general practice.
I’m not convinced any of the current systems are fit for the future, which neatly leads me on to a plug.
I’m giving a talk to the BCS Health Northern Specialist Group on Wednesday, 20 November at 7pm at the Manchester conference centre on my thoughts on where primary care IT systems should be going.