This article first appeared on 3.9.13 on digitalhealth.net.
I was lucky enough to attend most of a recent Nuffield Trust conference on predictive risk in London. It was quite interesting, but left me with the overall feeling that it was all a waste of time.
Statistics only get you so far
The basic idea of predictive risk is simple. You get a lot of population-level data analysed using a statistical model that aims to identify those people who might benefit from a specific health intervention or initiative.
Most commonly, the aim is to identify people who might be at risk of an expensive hospital admission, and who could be put on a virtual ward or given some other kind of support to stop that happening.
So, the Nuffield Trust conference had an interesting talk on a London experiment in which GPs were being paid extra to spend up to an hour with identified high risk patients in order to reduce their admission rates.
Yet the first year’s figures showed that despite the time, trouble, effort and expense, this hadn’t made any difference.
Now, a lot of people argued that this was because these interventions require systemic change, and the short timeframe over which the project had been evaluated had not captured that.
They argued that a full three years’ worth of data was needed. I’m sure that’s absolutely right, and it may be that the project will pay back massively in years two and three.
But at the time I was representing my clinical commissioning group, and I found it difficult to go back and wholeheartedly endorse the idea that we should look to do the same.
If we did, we could be copying something that reaps huge benefits; but we could be following a disaster. In the end, I persuaded the CCG to concentrate, in year one at least, on reducing potential harm from drug issues.
Something made a difference, but what?
After the morning break, there was another interesting presentation by Marie Curie Cancer Care, which provides nurses for people with terminal cancer. The organisation seemed to have some data to show that their nurses made a difference to admission rates. Great!
However, the organisation had needed to employ some very clever analysts to come up with a complex matched case-control system to prove this. Yet, when the presenter was asked questions like ‘what was it your nurses did that made the difference?’ he couldn’t really answer.
Also, the charity hadn’t really related its data to time of day or day of week. Was it weekends and/or the evening when they made a difference? Or to put it another way, was it the night sitting service that made the difference or was it the nursing that took place during the day?
Or to put it another way again, was it the nursing skills of the nurses that mattered? Or just that there was someone present who knew the patient, and knew not to call an ambulance when they deteriorated?
My CCG area doesn’t have Marie Curie nurses. But it was difficult for me, as a commissioner, to come away knowing whether we should immediately commission a Marie Curie nursing service or just pay a lot of health care assistants to stay in people’s homes overnight and stop them calling 999.
Do we have the right data?
On the train home, I reflected that the bigger scoring systems really depend on the amount of data you have.
When it comes to pure admission to hospital, it must be relatively easy to pick out people with multiple morbidities or on multiple drugs. But once you get to the others – the people who aren’t yet unwell, aren’t yet coming in, or aren’t yet having any data recorded on them – any system is going to struggle.
At least, they are if they are based on hospital data, which most of these models are. GP data is likely to be richer in terms of its variety and the length of time over which it is collected. So I was interested to hear from Professor Julia Hippisley-Cox that her Qadmission scoring is about to be trialled.
Years ago, I tried to get the local path lab interested in trying to detect patients who might be bleeding before they became anaemic.
I’d had a couple of patients who had developed iron deficiency anaemia, and looking back at their annual full blood counts over five years it became clear that each result had been normal but on a definite downward trend, both for Hb (haemoglobin) and MCV (mean corpuscular volume).
If I’d been a proper researcher, I should probably have done a literature search to see if this had already been done. But as I’m not, and I couldn’t get the path lab interested, I put it on hold.
However, I still wonder whether there are other markers like this that we may record. How about weight? Blood pressure? Renal function? All of these change over time. It must be possible to work out when that change is important.
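The pattern I saw in those two patients can be sketched in a few lines of code. This is a purely illustrative sketch, not clinical guidance: the function names, the ‘normal’ range and the fall-per-year threshold are my own assumptions, and the data is made up to match the scenario above (five annual Hb results, each individually normal, but drifting steadily down).

```python
# Hedged sketch: flag a downward trend in annual blood results even
# while every individual value is still within the normal range.
# Thresholds and data are illustrative assumptions, not clinical advice.

def slope(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

def flag_downward_trend(years, values, normal_range, min_fall_per_year):
    """True if every value looks 'normal' in isolation, yet the fitted
    trend falls faster than min_fall_per_year (a hypothetical cut-off)."""
    lo, hi = normal_range
    all_normal = all(lo <= v <= hi for v in values)
    return all_normal and slope(years, values) <= -min_fall_per_year

# Five annual Hb results (g/L): each inside a 130-180 'normal' range,
# but on a definite downward trend - the pattern described above.
years = [2008, 2009, 2010, 2011, 2012]
hb = [152, 148, 143, 139, 134]
print(flag_downward_trend(years, hb, (130, 180), 2.0))  # True
```

A simple rule like this would never fire on a one-off result, which is exactly why a path lab checking each sample in isolation misses it.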
Monitoring for non-billionaires
A recent TV show featured some billionaire who stores all his urine and faeces for analysis and has regular bloods done. He plots all this data to detect if he is becoming unwell.
That doesn’t sound realistic for everyman; and we’re supposed to be tackling health inequalities. So what about public toilets that detect blood in urine or faeces?
What about pressure plates at urinals that detect weight and height? What about breath analysers in pubs that record both alcohol levels and carry out lung function tests?
Using Bluetooth or similar, these devices could report their results to your smart phone app. Large numbers could be screened this way.
One of the most popular things that my practice ever bought with its patient equipment fund was the sit down blood pressure machines in our waiting rooms.
People come in – often on multiple occasions – to measure their blood pressure. The machine prints the reading on a slip; why not zap it to their smart phone?
These measurements could be collated and presented to the user or, if the user is ok with that, to their medical records. Similar apps might record alcohol and tobacco use.
I’m sure most of the tech already exists; I suspect that what is needed is an open, free to use common data exchange format.
My scales need to know how to send data to your phone when you jump on them in my bathroom. Perhaps we need to be spending some of the risk prediction money on screening and early detection tools?
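A common exchange format need not be complicated. As a purely illustrative sketch – the field names here are my own assumptions, not any existing standard (a real one would define a proper schema, perhaps along the lines of HL7 FHIR) – each device could emit a small, self-describing record like this:

```python
# Hedged sketch of a minimal, open reading-exchange record that any
# device - bathroom scales, waiting-room BP machine, breath analyser -
# could emit to a phone app. Field names are illustrative assumptions.
import json

def make_reading(device_id, measure, value, unit, timestamp):
    """Package one measurement as a small JSON document."""
    return json.dumps({
        "device": device_id,
        "measure": measure,
        "value": value,
        "unit": unit,
        "taken_at": timestamp,
    })

msg = make_reading("bathroom-scales-01", "weight", 82.4, "kg",
                   "2013-09-03T07:30:00Z")
print(msg)
```

The point of an agreed format is that the receiving app never needs to know which manufacturer’s device sent the reading – it just parses the same few fields every time.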