Tax season is over. Well, it is over if you filed your return in a timely fashion. Don't let this blog stop you from stashing away your W-2s and 1040-Gs for safekeeping. I hope you never need them. But, if you will, indulge me for just a second and leave your calculator out. No, I don't need you to calculate the ever-increasing cost to fill up your gas tank. Let's take a quick look at a few health care statistics. Before you cringe, declare that you 'hate math!' and click back to Facebook, let me share this with you: medical errors occur 10 times more often than previously thought. Maybe that wasn't hard-hitting enough. Let me try again. How about this: mistakes occur in one out of every three hospital admissions!
It's Hard To Measure Without a Yardstick
Despite all of their education and training, medical professionals make mistakes. You know it, I know it, and certainly they know it. I hope that we can also all agree that it is unrealistic to expect our health care providers to be perfect. What is reasonable, however, is to require an accurate accounting of the mistakes that occur in a health care setting. Believe it or not, there is no uniform method for a hospital to classify, track, and otherwise determine what is or is not a medical mistake. A negative outcome at Hospital X in Baltimore might be considered a mistake, and yet if the same negative outcome occurred at Hospital Y in Washington D.C., it would not be considered a mistake. How so?
I don't want to bog you down with the myriad measures that hospitals use to come up with the numbers, but suffice it to say that any hospital administration in the United States could use the Agency for Healthcare Research and Quality's Patient Safety Indicators, the Utah/Missouri Adverse Event Classification technique, an approach developed by the Harvard Medical Practice Study, the Institute for Healthcare Improvement's Global Trigger Tool, or its own analysis of the records (self-reporting).
That was a mouthful. Essentially, a yardstick for measuring the safety of care in hospitals does not exist. Or, at least, a yardstick has not been agreed upon. The two most common methods used, however, are voluntary reporting and the Agency for Healthcare Research and Quality Patient Safety Indicators. And according to a recent study, those two methods are awful. Before you conclude that I am being too harsh, let's take a look.
The Good, the Bad, and the Ugly
The study, conducted by David C. Classen and published in the journal Health Affairs, utilized the Institute for Healthcare Improvement's Global Trigger Tool. The Global Trigger Tool uses specific methods for reviewing medical charts. Reviewers methodically analyze discharge codes, discharge summaries, medications, lab results, operative records, nursing notes, and physician progress notes to determine whether a "trigger" exists. A notation of a trigger leads to further investigation into whether or not an adverse event occurred. Here is how the tools stack up:
Self Reporting (Commonly Used Method #1): 4 adverse events detected
Safety Indicators (Commonly Used Method #2): 35 adverse events detected
Global Trigger Tool: 354 adverse events detected
The Global Trigger Tool is overwhelmingly more sensitive and picked up many, many more adverse events. Overall, the Global Trigger Tool found adverse events in 33.2 percent of hospital admissions, or 91 events per 1,000 patient days. That number is staggering.
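If you left your calculator out as I asked, the gap between the methods is easy to put in rough terms. Here is a minimal sketch (the counts are the ones reported above; the percentages are simple arithmetic on those counts, not figures from the study itself):

```python
# Adverse events detected in the same set of hospital admissions,
# using the counts reported in the Classen study discussed above.
detected = {
    "Self-reporting": 4,
    "AHRQ Patient Safety Indicators": 35,
    "Global Trigger Tool": 354,
}

# Treat the Global Trigger Tool's count as the baseline and ask what
# fraction of those events each method managed to catch.
baseline = detected["Global Trigger Tool"]
for method, count in detected.items():
    share = count / baseline * 100
    print(f"{method}: {count} events ({share:.1f}% of the trigger tool's total)")
```

In other words, self-reporting surfaced roughly 1 percent, and the Patient Safety Indicators roughly 10 percent, of the adverse events the Global Trigger Tool uncovered in the very same charts.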
What kind of "adverse events" are being missed? Medication errors, surgical errors, procedure-related errors, infections, pressure ulcers, device failures, and patient falls. All of them are serious and potentially injurious to a patient. The study indicates that the error detection tool being used by Hospital ABC in Yourtown, USA is probably woefully inadequate.
Why Accurate Error Detection Is Important
Error detection is essential to error correction. A hospital cannot identify the areas that need improvement if it is unable to identify the areas where it is falling down on the job. Failure to use an adequate error detection tool ensures that the same mistakes will continue to happen time and time again. I think the results certainly raise the question: why not adopt a nationwide standard? The Global Trigger Tool, or another similarly sensitive measuring method, strikes me as a reasonable place to begin.
Certainly, there is a financial aspect to this discussion. Extensive chart reviews and lengthy inquiries into negative outcomes are costly and time intensive. Also, what motivation, besides error prevention, does a hospital have to discover its errors? As I have written here before, when errors are discovered, hospitals are penalized. If a hospital's main concern is its bottom line and not patient safety, why not continue to "self-report" or use the Agency for Healthcare Research and Quality Patient Safety Indicators and leave the adverse events undetected? Makes sense if you want to avoid the penalties…
It doesn't say "leave a response" down below for nothing. Feel free to let us know YOUR thoughts.
QUESTION: Have you ever had a negative outcome at a hospital? Were you told that a mistake was made, or were you told otherwise?