Hospital death rates in the UK

The headline in the Guardian reads “Hospital death rates in England 45% higher than in US, report finds”, and the story reports on Channel 4 coverage on Wednesday of a new study by Brian Jarman, a professor of health statistics at Imperial College London.

Jarman devised an index called the Hospital Standardised Mortality Ratio (HSMR), which compares a hospital’s mortality rates to expected mortality (given diagnosis). According to a paper by Dr Foster (an independent group devoted to providing health care data to the public):

The HSMR is a method of comparing mortality levels in different years, or for different sub-populations in the same year, while taking account of differences in casemix. The ratio is of observed to expected deaths (multiplied conventionally by 100). Thus if mortality levels are higher in the population being studied than would be expected, the HSMR will be greater than 100. For all of the 56 diagnosis groups, the observed deaths are the number that have occurred following admission in each NHS Trust during the specified time period. The expected number of deaths in each analysis is the sum of the estimated risks of death for every patient.
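
In code, the ratio is trivial once the per-patient risks exist. A minimal sketch (the risks below are invented; in practice they would come from a casemix model fitted over the diagnosis groups):

```python
def hsmr(observed_deaths, estimated_risks):
    """Hospital Standardised Mortality Ratio: observed deaths over
    expected deaths (the sum of per-patient estimated risks), times 100."""
    expected_deaths = sum(estimated_risks)
    return 100 * observed_deaths / expected_deaths

# Toy example: invented per-patient risks of death summing to 11.4,
# against 12 observed deaths.
risks = [0.02, 0.15, 0.40] * 20
print(hsmr(12, risks))  # ~105: slightly more deaths than the casemix predicts
```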

The HSMR has become a controversial index. It was credited with bringing to light the Stafford Hospital scandal, which continues to grab the headlines of UK papers with grim stories of how patients were left in their own urine and forced to drink water from flower pots for lack of nursing care. It’s controversial because many people (and not just NHS staff) refuse to believe things can be so bad. Sophisticated apologists for the Trusts poke holes in the methodology of the HSMR index. For example, it’s obviously very sensitive to the way patients’ diagnoses etc. are coded: someone with cancer may be coded as a death from pneumonia.
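
To see how much a single coding error can move the index, here is a toy illustration with invented numbers: a hospital with 100 admissions has one terminal cancer patient (estimated risk of death 0.60) who is miscoded as a pneumonia case (estimated risk 0.05). The death still counts as observed, but the expected deaths shrink, so the HSMR rises for reasons that have nothing to do with care quality:

```python
def hsmr(observed, risks):
    return 100 * observed / sum(risks)

correct_risks = [0.10] * 99 + [0.60]   # one high-risk cancer admission
miscoded_risks = [0.10] * 99 + [0.05]  # same patient coded as pneumonia

print(hsmr(10, correct_risks))   # 10 deaths vs 10.5 expected  -> ~95
print(hsmr(10, miscoded_risks))  # 10 deaths vs 9.95 expected -> ~100.5
```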

The latest controversy concerns a cross-sectional HSMR study of the UK and 6 other countries, including Canada, Holland, Japan and the US. The UK’s hospital mortality rates are 22% higher than the average of the 7 countries and 45% higher than in the US. The comparison with the US is enough for many to dismiss the results out of hand, as America has a lower life expectancy and its healthcare system is widely distrusted by Brits.

No statistical model is without flaws, and data must always be interpreted. But what rankles me are those who criticize a quantitative metric that produces uncomfortable results without offering up an alternative. Hospitals must be held accountable to some objective, quantifiable proxy for “quality care”, or else they are accountable to nothing at all. The coding-error thesis is particularly pathetic, as miscoding is itself a hospital failure. Imagine a company defending its poor performance by saying that its financial statements are misleading because there were errors in the data provided to the auditors!

And “coding errors” might reveal a different aspect of the problem altogether. Maybe it’s not just fat fingers at the keyboard and other flaws in reporting procedures; maybe patients aren’t being diagnosed properly.

Healthcare Costs and Technology

There is an excellent article in the MIT Technology Review this month (The Costly Paradox of Health-Care Technology) pointing out how healthcare is the only industry where technological progress appears to raise rather than lower costs.

The reasons are not a mystery:

Unlike many countries, the U.S. pays for nearly any technology (and at nearly any price) without regard to economic value. This is why, since 1980, health-care spending as a percentage of gross domestic product has grown nearly three times as rapidly in the United States as it has in other developed countries, while the nation has lagged behind in life-expectancy gains.

Other researchers have found that just 0.5 percent of studies on new medical technologies evaluated those that work just as well as existing ones but cost less. The nearly complete isolation of both physicians and patients from the actual prices paid for treatments ensures a barren ground for these types of ideas. Why should a patient, fully covered by health insurance, worry about whether that expensive hip implant is really any better than the alternative costing half as much? And for that matter, physicians rarely if ever know the cost of what they prescribe—and are often shocked when they do find out.

Yet the article concludes with some policy recommendations that range from the vague (organisational change, innovations in health care delivery) to the downright dumb (“drug container caps with motion detectors that let a nurse know when the patient hasn’t taken the daily dose”).

The solution, as I see it, is straightforward: health insurance that pays out a lump sum of cash per diagnosis, to be spent however the patient sees fit (some sort of trust/trustee mechanism would need to exist for those too ill to make the decision themselves). The current framework, whereby insurance pays for whatever treatment the doctor thinks best, provides absolutely no incentive to make the inevitable tradeoffs between cost and expected benefit.
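
To make the incentive difference concrete, a toy sketch with invented numbers (two clinically similar hip implants, benefits expressed as dollar-equivalent willingness to pay; the payout figure is hypothetical). Under fee-for-service the patient bears no marginal cost, so any edge justifies any price; under a lump-sum payout the unspent cash enters the comparison:

```python
# Invented numbers: two clinically similar implants; the expensive one
# is worth slightly more to the patient (benefits in dollar equivalents).
treatments = {
    "premium implant":  {"cost": 20_000, "benefit": 12_000},
    "standard implant": {"cost": 10_000, "benefit": 11_000},
}
PAYOUT = 20_000  # hypothetical lump sum paid out for this diagnosis

# Fee-for-service: the insurer pays the bill, so only benefit matters,
# and the pricier option wins on any marginal edge.
ffs_choice = max(treatments, key=lambda t: treatments[t]["benefit"])

# Lump sum: the patient keeps whatever is not spent, so leftover cash
# counts, and the cost/benefit tradeoff is made explicitly.
lump_choice = max(
    treatments,
    key=lambda t: treatments[t]["benefit"] + PAYOUT - treatments[t]["cost"],
)

print(ffs_choice)   # premium implant
print(lump_choice)  # standard implant: $1,000 of extra benefit isn't worth $10,000
```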

Perhaps when healthcare inflation eventually leads to rationing, patients in America will reconsider the wisdom of this paternalistic model and demand the right to make those decisions themselves.