Welcome to #DeathClub – an online end-of-life and palliative care journal club for casual readers
DISCLAIMER: This blog is not meant to be a highly academic critique or analysis of the paper. Most of my readers are not researchers or principal investigators, but rather people on the front lines: advocates, patients, family members, and bedside clinicians. For this reason, this blog is NOT meant to be a comprehensive review of the articles, but instead a quick summary of the findings and implications, with a side dish of personal narrative. Find the introduction to #DeathClub here.
Thomas W. DeCato, MD; Ruth A. Engelberg, PhD; Lois Downey, MA; Elizabeth L. Nielsen, MPH; Patsy D. Treece, RN, MN; Anthony L. Back, MD; Sarah E. Shannon, RN, PhD; Erin K. Kross, MD; J. Randall Curtis, MD, MPH
Crit Care Med. 2013 Jun;41(6):1405-11.
Why I Picked This Article:
I’m starting a new job as a pulmonary and critical care attending in an ICU that I’ve never worked in before. It’s my first venture out into the world beyond where I trained. I know the ins and outs of my home institution like the back of my hand, and I have a pretty good sense of where we excelled and where we needed to improve. But here, in this new ICU, when it comes to my end-of-life care research endeavors, I don’t even know where to begin. Quite simply: I need to find out. This article is the first step on my trek down the “yellow brick road.”
Dr. DeCato’s team asks the question: Is there significant variability across individual hospitals in the quality of end-of-life care in the ICU?
Put simply, they want to know whether certain hospitals deliver end-of-life care well and others, perhaps, ‘not so well.’ It’s an interesting question to me, because I can take what they learned in this study and ask myself: where is MY hospital on this spectrum?
What did they do?
The researchers studied 13 hospitals in the Seattle/Tacoma area, making an effort to include a good mix of hospital types: some university teaching hospitals, some private and non-teaching, and so forth. First, they examined medical records to identify patients who died after being in the ICU for at least 6 hours; the 6-hour cutoff was chosen to ensure that physicians had enough time to deliver quality end-of-life care. Surveys were sent to the families 4–6 weeks after the patients died, and to the patients’ nurses, who completed their surveys soon after the death. Two survey instruments were used: the first measures family satisfaction with ICU care (the FS-ICU); the second is a shortened version of the Quality of Death and Dying (QODD) survey, which essentially measures the overall quality of, well, death and dying.

The patients’ medical charts were also reviewed for the following factors, each recorded as either present or absent: life-sustaining therapies (such as CPR) in the last 24 hours, withdrawal of life support in the last 24 hours, symptom assessment in the last 24 hours of life, a family conference within the first 72 hours of the ICU stay, a discussion of prognosis within the first 72 hours of the ICU stay, use of support services, and so forth. In addition, length of stay and days spent on the ventilator were recorded. The researchers then used statistical analysis of these factors to determine whether there was variability in these measures among the hospitals in the study, and whether the measures changed (i.e., improved) over time.
What this Article Shows
Over 3,000 patients were studied, and a little less than half of the families responded to the surveys (a pretty good response rate for a survey!). The baseline statistics are pretty interesting: 81% of patients had a DNR (do-not-resuscitate) order, and 73% underwent withdrawal of life support. Surprisingly, only 38% of the patients had documentation of prognosis in their medical record, and only 82% had their pain assessed in the last 24 hours. The authors didn’t include any measure of disease severity, so it’s not clear to me just how sick this patient population was, but based on the high percentage of patients who had life support withdrawn, I’d guess they were a sick population.
So, how did the hospitals score? I think the main finding here is: differently. Each hospital had significantly different scores across the various areas the researchers surveyed and measured. What’s more interesting from a quality-improvement viewpoint is that hospitals did not perform uniformly across the board: a hospital might score well in one area but less well in another. This was true of the family satisfaction surveys, the quality of death and dying survey, and the measures the researchers abstracted from the charts. Furthermore, in this study, the researchers did not see an improvement in the quality of end-of-life care over time.
How this Article is Helpful
So, it’s pretty obvious that before you can go about studying or improving end-of-life care in an ICU, you need to take some measurement of where the ‘problem areas’ are. This article is helpful to me personally in that it shows there isn’t a ‘one-size-fits-all’ measure; getting a broad picture of my unit is going to be important moving forward. For all hospitals and clinicians, it’s very important to acknowledge that there is tremendous variability in end-of-life care delivery, and that hospitals need to monitor that delivery using a range of quality indicators.
The Next Question…
Are the quality indicators used in this study (the FS-ICU, the QODD, and the chart-abstraction measures) the RIGHT ones? Do these measurements truly represent high-quality care?
Stay tuned. There are many studies that look at this very issue….
-Lauren Van Scoy, MD
If you’re interested in writing a quick and casual #DeathClub review of an article (one you select or we can suggest one!), email firstname.lastname@example.org.