Recently, Consumer Reports published hospital quality data using its well-known “dot” system, a 5-dot scale that ranks hospitals from best to worst (sources explained here). Lots of local news outlets picked up on it and published articles ranking the hospitals in their area, using CMS and HCAHPS data (here, here, and here). I don’t spend a lot of time on quality metrics other than the infection-related ones, but I know how fraught with issues our NHSN data is. They also reference HAC data (when I say “coded infection data,” you yell “NO”). And then to have all that information boiled down to 5 colored dots that regular people, and health journalists, will try to use to compare hospitals is beyond frustrating. Without a doubt, ICPs will be called on to discuss their facility’s data, and the process repeats itself every time new data is published.
Last week, my state published facility HAI and flu vaccination data. It’s a giant report, with some summary data and then a nice one-page graphic on each facility. I sent the preview to my director because the executive team WILL see this, and they likely will NOT know how to interpret it. This is what our HCW flu vaccination data looks like at 2 of our small hospital sites:
| Name | Total Vaccinated | Total HCP | Percent Vaccinated | 95% Confidence Interval | Hospital % compared to state % |
|---|---|---|---|---|---|
| Hospital A | 21 | 21 | 100.0 | 86.7, - | Similar |
| Hospital B | 243 | 243 | 100.0 | 98.8, - | Higher |
We have 4 hospital sites, and each site vaccinated >97% of its staff. The state average is 93%. However, 2 of our sites are listed as “better” than the state average, and 2 are listed as “similar.” Why is that, asked my director. It’s because the comparison is based on an estimate that comes with a range of uncertainty (the confidence interval), and our 2 smallest facilities have wider intervals (less precision in the measure) that overlap the state average, so they are classified not as solidly better, but as similar to the state average. But we vaccinated 100% of our staff at Hospital A. That is better than the state average of 93%. To a regular person, yes, it is. So, Cathy Consumer, if you were going to choose a hospital by how many staff were vaccinated, which would you choose? The one that’s “similar” to the average, or the one that’s “better”? Trick question.
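For the statistically inclined, here is a minimal sketch of that comparison in Python, using the exact (Clopper-Pearson) binomial confidence interval via SciPy. The state report may use a different interval method, so these bounds won’t match its numbers exactly, but the pattern is the same: the small site’s interval is wide enough to reach below the 93% state average, while the large site’s is not.

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

state_average = 0.93  # state HCW flu vaccination average from the report

# Both sites vaccinated 100% of staff; only the denominators differ.
for name, vaccinated, total in [("Hospital A", 21, 21), ("Hospital B", 243, 243)]:
    lo, hi = clopper_pearson(vaccinated, total)
    verdict = "similar to" if lo <= state_average else "higher than"
    print(f"{name}: {vaccinated}/{total} vaccinated, 95% CI ({lo:.1%}, {hi:.1%}) -> {verdict} state average")
```

Same performance, different denominators, different labels: that is the whole story behind “similar” versus “better.”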
C. difficile is another hot quality metric. Unfortunately, public agencies are calling this an HAI, but it isn’t. It is a surveillance definition, acknowledged by CDC and NHSN as DIFFERENT from the HAI definition, yet this flawed proxy measure is what’s publicly reportable. The numerator is cases in patients who have been inpatients for >3 days, but the denominator is all patients, only a fraction of whom stayed longer than 3 days. This is basic incidence-prevalence-Epi 101: the denominator should not include people who are not eligible for the event in the numerator. While I appreciate the attempt to standardize the rates with bed size and local prevalence data, whatever C. diff “rate” you’re getting from NHSN is far lower than your real rate because of that enormous denominator flaw. Anyplace I’ve worked, I’ve kept 2 sets of data on C. diff: the stuff we report to NHSN, and the actual HAIs we have the opportunity to improve. THOSE are the cases we focus on in quality improvement.
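To make the denominator mismatch concrete, here’s a tiny sketch with purely hypothetical numbers (nothing from my facility or from NHSN): the same count of hospital-onset cases looks far lower when it’s spread over every patient instead of only the patients who were in house long enough to be eligible.

```python
# Hypothetical, invented figures for illustration only.
ho_cdi_cases = 5          # hospital-onset C. diff LabID events (patients in house > 3 days)
all_patients = 2000       # every patient, including the many short stays
eligible_patients = 400   # patients who actually stayed > 3 days and could count as an event

proxy_rate = ho_cdi_cases / all_patients * 100         # the publicly reported style of rate
eligible_rate = ho_cdi_cases / eligible_patients * 100  # rate over patients actually at risk

print(f"Rate over all patients:      {proxy_rate:.2f} per 100 patients")
print(f"Rate over eligible patients: {eligible_rate:.2f} per 100 patients")
# 0.25 vs 1.25 per 100 patients: the oversized denominator makes the number look far lower.
```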
Let’s talk about the MRSA “rates.” This is MRSA bacteremia. How silly. Why, why, why would we only care about one subset of one organism in the bloodstream? I could have a thousand MSSA infections and nobody would care (unless you’re in PA, where everything is reportable). This LabID event is another proxy measure, not an HAI measurement: it counts exactly one thing, a positive lab test, and we don’t review the chart AT ALL. It doesn’t matter what the source is, whether it’s contamination, a culture drawn from an old line, pneumonia from the nursing home, or a surgery gone wrong. That number now defines your hospital’s quality in the eyes of your community.
How about CAUTIs, where you are immediately punished for reducing the number of catheters, since reducing your denominator raises your rates? But don’t worry, everyone will do better next year since the definition changed this past January. Look! Your hospital just got better! Good job!
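Here’s the denominator arithmetic, using invented numbers purely for illustration: if catheter-days fall faster than infections do, the reported rate goes up even while care is actually improving.

```python
# Hypothetical, invented numbers for illustration only.
def cauti_rate(infections, catheter_days):
    """CAUTI rate per 1,000 urinary catheter-days."""
    return infections / catheter_days * 1000

before = cauti_rate(infections=10, catheter_days=4000)  # 2.5 per 1,000 catheter-days
# The unit removes unnecessary catheters: catheter-days drop by half,
# infections drop too, just not quite as fast.
after = cauti_rate(infections=6, catheter_days=2000)    # 3.0 per 1,000 catheter-days

print(f"Before the catheter-reduction push: {before:.1f} per 1,000 catheter-days")
print(f"After the push:                     {after:.1f} per 1,000 catheter-days")
# Fewer catheters, fewer infections, and yet the publicly reported rate went up.
```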
CLABSIs? Sicker patients at higher risk with more lines get counted the same as patients at lower risk, without accounting for that risk. I hope you don’t treat transplant or cancer patients; I bet all those MBIs are adding up. No place to put those non-preventable infections? Just add them to the CLABSI pile and send a press release to the papers. Unless, of course, you’re gaming the system (intentionally or not), in which case your rates are lower.
And lastly, “avoiding surgical infections.” All that’s reportable here is colon surgeries and hysterectomies. Lumped together. Nuff said. Low-volume denominators skew everything, and then it’s all boiled down to the SIR or “better, similar, or worse.”
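For readers who haven’t met the SIR: it is simply observed infections divided by the number NHSN’s risk models predict for your mix of procedures. A quick sketch with invented predicted counts shows why low volumes make the ratio, and the “better/similar/worse” label riding on it, so jumpy.

```python
# Hypothetical predicted counts for illustration; NHSN derives these from its own risk models.
def sir(observed, predicted):
    """Standardized infection ratio: observed infections / statistically predicted infections."""
    return observed / predicted

# A low-volume program: roughly 1 SSI predicted for the whole year.
print(sir(observed=0, predicted=1.1))   # 0.0  -> looks great
print(sir(observed=2, predicted=1.1))   # ~1.8 -> looks alarming
# One or two cases either way swings the ratio (and the label) dramatically.

# A high-volume program with the same underlying performance is far more stable.
print(sir(observed=20, predicted=22))   # ~0.9
print(sir(observed=24, predicted=22))   # ~1.1
```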
So is there any value in any of this data? Sure. The data are standardized, so you can get some sense of where you stand against similar facilities. I oversaw a large dialysis clinic once, and we had no idea how bad our CLABSI rates were until dialysis clinics started reporting to NHSN and getting data back. We immediately made major changes in both product and process, with great success. So that was useful, but I can’t say it’s been helpful since then. Oh, and device utilization, that’s good, too (although accurate collection in a small facility with no EHR is a real issue). But the infection rates, not so much.
What's your data story? What does your newspaper say about YOU?