Can AI Deliver a Smarter Problem-Based Past Medical History and Improve Prescribing?

Following on from my previous article on how AI could watch what I'm typing and generate codes from it that made my record better, I've had a few more thoughts on how it could help.

GP records are organised along SOAP lines. This is subtly different to how hospital records or outpatient notes are organised. While SOAP works for a consultation – I'm going to be controversial here – I'm not convinced how useful it is for summarising a patient's significant past medical and surgical history.

At the moment GPs are encouraged to give each consultation a Problem code. Ideally, if it's an ongoing problem, pick a code that is already active on the record and continue it – this can be harder than it sounds, and some GPs are better at it than others. An example: a patient presents with a cough, and the first GP codes it as a cough. They come back, and the second GP codes it as URTI; the third codes it as tonsillitis. Was it all the same episode/infection? Which code was right – the final diagnosis? Should the previous ones be changed? But that wasn't what those GPs were thinking at the time, so is it right to change their codes? Should we somehow link them together or merge them? Well, this functionality exists in my EPR system, but I've yet to meet anyone who uses it properly or consistently.
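To make the linking idea concrete, here's a minimal sketch in Python – the class and code names are entirely made up for illustration, and nothing here reflects how any real EPR stores data. The point is that an episode can keep its full trail of codes while still surfacing the latest one as the working diagnosis, so nobody has to rewrite history:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One clinical episode, keeping every code it evolved through."""
    codes: list = field(default_factory=list)

    @property
    def working_diagnosis(self):
        # The most recent code is the current best label;
        # earlier codes are kept as an audit trail, not overwritten.
        return self.codes[-1]

# The cough example: three consultations, three codes, one episode.
episode = Episode()
for code in ["cough", "URTI", "tonsillitis"]:
    episode.codes.append(code)

print(episode.working_diagnosis)  # tonsillitis
print(episode.codes)              # the full trail is still there
```

Linking like this sidesteps the "should the previous codes be changed?" question entirely – they're all retained, and only the summary view picks the latest.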

Also, take the example of a patient presenting with bowel symptoms – say, a change in bowel habit. This might trigger a referral and tests, including a colonoscopy. Let's say it's normal and the final diagnosis is IBS – how should all this be coded? I recently saw this neatly summarised in the heading of a local consultant's outpatient letter: IBS – diagnosed after presenting on X with change in bowel habit. Colonoscopy date – normal. Gluten test and IBS test results Y.

You could easily tell from the top of the letter that the patient had presented, had some tests, and what the final diagnosis was. Alas, the GP records were largely a mess. The change in bowel habit was coded as a problem, the colonoscopy was coded as a problem, but the blood results weren't. The final diagnosis was also coded, but separately and not linked. Our summarisers spend their lives trying to get all the codes in and sorted – linking, evolving and grouping them is too much work, which leads to messy records – especially as the above was years ago and there were numerous other codes in the record.
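The consultant's heading is effectively a single problem record that keeps the presentation, the investigations and the final diagnosis together. A rough sketch of what that structure might look like – the field names, date and test results below are illustrative only, not from any real record:

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    """One problem heading: final diagnosis plus how we got there."""
    final_diagnosis: str
    presented_with: str
    presented_on: str  # placeholder date, illustrative only
    investigations: dict = field(default_factory=dict)

    def heading(self):
        # Render a one-line summary like the consultant's letter heading.
        tests = ", ".join(f"{name}: {result}"
                          for name, result in self.investigations.items())
        return (f"{self.final_diagnosis} – presented {self.presented_on} "
                f"with {self.presented_with}. {tests}")

ibs = Problem("IBS", "change in bowel habit", "2019-03-01",
              {"colonoscopy": "normal", "coeliac screen": "negative"})
print(ibs.heading())
```

Everything the GP record scattered across four unlinked codes lives under one heading, which is exactly what made the letter readable.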

We have been computerised since 2000. The push to code every consultation, and the fact that my patients attend on average six times a year, means there are often over 100 problems in a patient's record – most of which aren't problems at all. You could tidy them up – I've only ever met one GP who was obsessive about this; most don't. Some tidy them at a medication review or for a medical report. Lots of trivial stuff should probably be removed.

Again, one of the local renal physicians is brilliant at this – his outpatient letters neatly and concisely summarise all the main issues and what's been done or what is planned. You look at his notes and understand what is going on. That's often not the case with GP notes.

Could AI sort this? I'm sure that if we gave an AI millions of records and started tidying them up so it learnt, it could automate the process and create some amazing records that make sense.

My Outlook client now has a folder called Clutter – it keeps putting stuff it thinks I don't want to see in there. Minor ailments could be treated the same way!

I've previously blogged about wanting to put the indication for medications on prescriptions, as well as the cost! In EMIS there is a medication linker – a screen where you can create links between medications and problems. It's very manual, and of course some medications are given for more than one reason – e.g. I give an ACE inhibitor for both hypertension and heart failure. Again, it strikes me this could all be done by AI.
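Underneath, a medication linker is just a many-to-many map between drugs and problems. A tiny sketch – the drug and problem names here are illustrative examples, not real EMIS codes or a formulary:

```python
# One drug can link to several problems, and vice versa.
links = {
    "ramipril": {"hypertension", "heart failure"},  # the ACE inhibitor example
    "levothyroxine": {"hypothyroidism"},
}

def indications_for(drug):
    """Return the set of problems a drug is linked to (empty if unlinked)."""
    return links.get(drug, set())

print(sorted(indications_for("ramipril")))  # ['heart failure', 'hypertension']
print(indications_for("aspirin"))           # set() – an unlinked drug
```

The AI's job would simply be populating (and maintaining) that map from the record, rather than leaving a GP to click through it drug by drug.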

Google, I think, pioneered crowdsourcing help for AI – they had people voting correct or incorrect on AI-labelled photos. The AI had to describe or categorise the photo – e.g. balloon, child, family portrait – and people rated the correctness, which fed the algorithm. Perhaps something similar could be done here. Could we rate the medication linking and make it more accurate?
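The rating loop could be as simple as accepting an AI-suggested link once enough reviewers agree. A toy sketch – the 75% threshold is an arbitrary choice for illustration, not anything Google or EMIS actually uses:

```python
from collections import Counter

def verdict(votes, threshold=0.75):
    """Accept a suggested link if enough raters marked it correct."""
    counts = Counter(votes)
    total = sum(counts.values())
    if counts["correct"] / total >= threshold:
        return "accept"
    return "needs review"

# Four GPs rate one AI-suggested medication link.
votes = ["correct", "correct", "incorrect", "correct"]
print(verdict(votes))  # accept (3 out of 4 agree)
```

Links that fall under the threshold go back into a review queue, and both outcomes become training signal for the next round of suggestions.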

Going back to another of my themes – data quality – the AI might be able to point out gaps. For example: this patient is on thyroxine, but there is no code in her records to explain it. Or the other way round: this patient has a code of osteoporosis, but they aren't on any treatment.
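Checks like these could start life as simple rules before any AI gets involved. A minimal sketch – the drug and code names are illustrative examples, not a real code set or formulary:

```python
# Illustrative treatment list; a real system would use a proper formulary.
OSTEOPOROSIS_TREATMENTS = {"alendronate", "risedronate", "denosumab"}

def check_gaps(medications, codes):
    """Return human-readable warnings for one patient record."""
    gaps = []
    # Drug with no explanatory code.
    if "levothyroxine" in medications and "hypothyroidism" not in codes:
        gaps.append("On thyroxine but no code explaining why")
    # Code with no corresponding treatment.
    if "osteoporosis" in codes and not medications & OSTEOPOROSIS_TREATMENTS:
        gaps.append("Osteoporosis coded but no treatment prescribed")
    return gaps

print(check_gaps({"levothyroxine"}, set()))
print(check_gaps({"ramipril"}, {"osteoporosis"}))
```

Each rule pairs a drug with its expected code, or a code with its expected treatment; an AI could learn thousands of such pairings from tidied records rather than having them hand-written.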

Of course, no doubt the team at Vision are about to comment that they are already working on all of this, and I'm impressed and a little jealous of their forward thinking – in which case, a question: how do I change my interface without the nightmare of changing my system?