I recently took part in some research on how general practitioners code patients with vague symptoms. I was presented with a series of cases of patients with vague symptoms and asked what code I would put in the notes. The researcher is a PhD student who is trying to understand how to extract better quality data from coded notes. Without going into too much detail, he is trying to find out whether the prevalence or incidence of certain diseases is actually greater than a simple search for those diagnosis codes would suggest, because general practitioners use a range of synonyms or vague Read codes to describe the early presentation of those diseases. He wonders whether searching for these patterns would give a better understanding of the true burden of those diseases. A simple example: instead of tonsillitis, do I code a sore throat? A better example: instead of orchitis, do I code pain in testicle?
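To make the idea concrete, here is a minimal sketch of that kind of widened search. The codes and consultations are entirely made up for illustration; real work would use actual Read or SNOMED code sets against a clinical database.

```python
# Hypothetical coded consultations: (patient_id, code).
# The codes here are invented placeholders, not real Read codes.
consultations = [
    ("p1", "TONSILLITIS"),
    ("p2", "SORE_THROAT"),
    ("p3", "SORE_THROAT"),
    ("p4", "COUGH"),  # unrelated symptom
]

# The "official" diagnosis code versus the diagnosis plus its
# vaguer early-presentation synonyms.
DIAGNOSIS_CODES = {"TONSILLITIS"}
SYNONYM_CODES = DIAGNOSIS_CODES | {"SORE_THROAT"}

def patients_matching(records, codes):
    """Return the set of distinct patients with any code in `codes`."""
    return {pid for pid, code in records if code in codes}

narrow = patients_matching(consultations, DIAGNOSIS_CODES)
wide = patients_matching(consultations, SYNONYM_CODES)
print(len(narrow), len(wide))  # prints: 1 3
```

The point of the sketch is simply that the wider, pattern-based search surfaces patients the narrow diagnosis-code search misses, which is the gap the research is trying to quantify.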
While the research was interesting, the real reason I mention it is the way he asked the questions. He gave a clinical scenario and then asked me what my differential diagnosis was. This is how we actually tend to work and think.
Ever since we trained as medical students, doctors have been given a patient with a history, examination findings and test results and asked what is wrong. We tend to create a differential diagnosis and then narrow it down to the correct diagnosis by doing further tests and investigations to rule things in or out. Of course, in examinations you're often meant to be able to guess the right diagnosis from the information given, and it's usually something obscure, whereas in reality common things are common. Also, in exam cases there is often one diagnosis that fits all the symptoms, whereas in reality people can have more than one thing wrong with them. They can have ischaemic heart disease and osteoarthritis and dementia.
It made me reflect on the recent articles I've been writing about general practitioner coding and the EPR. It strikes me that we either have a problem-based record in which we list the medical problems of a patient, rather than their functional ones (which is perhaps a whole blog post in itself), or we list their symptoms: numbness, pain in the knee and so on. What I've not seen is a useful built-in tool that builds a working differential diagnosis from the symptoms and the tests done so far.
In this world of artificial intelligence, it would be nice if the computer created this automatically from symptoms I typed in, though I might want to add to what the computer thinks!
I did see an expert system from one company that claimed to be able to get the diagnosis right in about 99% of all musculoskeletal, orthopaedic or rheumatological conditions, based on the patient answering about 50 questions.
However, I'd settle for some form of widget in which I create a differential diagnosis and then use it as the basis of my plan of action. I might, for example, rule things in or out as tests came back positive or negative, and the widget would keep track of these and let me see where I was up to. Of course, it would be nice if, as I added possible diagnoses, the computer ordered the correct tests, saving me time and making sure that I don't miss something.
Another doctor or healthcare practitioner reading my notes in such a system might find it easier to see what I was thinking, what the plan is and, more specifically, where the patient was up to with that plan. A patient presenting with bloodstained diarrhoea might have inflammatory bowel disease as one of their differential diagnoses, and part of the plan might be a colonoscopy. Looking at the records, you could see whether the colonoscopy had been ordered, whether it had happened, what the result was and whether it helped to confirm or refute the differential. Retrospectively, negative tests might be as useful as positive ones.
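The record-keeping described above could be sketched as a small data structure: each candidate diagnosis carries the tests that would confirm or refute it, and results are recorded as they come back. This is purely illustrative; the diagnoses and test names are examples, not a real clinical system.

```python
class Differential:
    """A toy 'differential diagnosis widget': tracks candidate
    diagnoses and where each is up to with its planned tests."""

    def __init__(self):
        # diagnosis -> {test name: result, or None if still pending}
        self.candidates = {}

    def add(self, diagnosis, tests):
        """Add a candidate diagnosis with the tests planned for it."""
        self.candidates[diagnosis] = {t: None for t in tests}

    def record(self, test, result):
        """Record a result against every diagnosis awaiting that test."""
        for tests in self.candidates.values():
            if test in tests:
                tests[test] = result

    def status(self):
        """Report, per diagnosis, each test's result or 'pending'."""
        return {dx: {t: (r if r is not None else "pending")
                     for t, r in tests.items()}
                for dx, tests in self.candidates.items()}

dd = Differential()
dd.add("inflammatory bowel disease", ["colonoscopy", "faecal calprotectin"])
dd.add("infective colitis", ["stool culture"])
dd.record("stool culture", "negative")
print(dd.status()["infective colitis"])  # {'stool culture': 'negative'}
```

A colleague opening the record would see at a glance that infective colitis has effectively been refuted while the colonoscopy is still pending, which is exactly the "where is the patient up to" view argued for above.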
Currently, when you look at somebody's notes you can see what operations they have had and what major diagnoses they have, but it can sometimes be very difficult to understand how they actually arrived at a diagnosis. At the moment we are running a trial on COPD, and some of our patients with that diagnosis appear, when you look into their records, to have had it coded from a chest x-ray, which isn't current best practice. Some had borderline spirometry, but somewhere somebody gave them the label of COPD and it has stuck ever since. It's interesting that when I type the code into my clinical system it doesn't ask me: why? What evidence do you have for this? Have you used the latest ICD-10 criteria?
This lack of rigour is one reason why some practices are reluctant to share records. I'm convinced that a certain proportion of the junior doctors at my local accident and emergency department give the default diagnosis of urosepsis to almost everybody they see. Patient after patient comes out with a discharge summary saying urosepsis, often with no real evidence as to why, i.e. few symptoms, no positive culture and so on. Yet the letter comes out, the Read coder sees the agreed code and the patient is coded as having urosepsis.
I know that hospital EPRs are taking off; I keep seeing adverts for new ones. I wonder if it's time to rethink the SOAP methodology that primary care uses.