The Institute of Medicine of the National Academy of Sciences (now called the National Academy of Medicine) has just published an important viewpoint article in JAMA about measuring diagnostic errors.
The authors, McGlynn, McDonald, and Cassel, point out that diagnostic errors have received less attention than treatment errors, but are very common and can lead to incorrect treatment and unnecessary costs and harms.
Our work suggests that failure to contextualize care plays an important role in diagnostic error. The article lays out five reasons to measure diagnostic errors, each of which also speaks to the need to better measure and understand contextualization of care:
- Establish the Magnitude and Nature of the Problem
- Determine the Causes and Risks of Diagnostic Error
- Evaluate the Effectiveness of Interventions
- Assess Skills in Education and Training
- Establish Accountability for Diagnostic Performance
Readers familiar with our work will recognize studies of contextual errors that have focused on each of these issues. In our own research publications, we have demonstrated that contextual errors in diagnosis (and, consequently, inappropriate management) occur frequently, contribute to unnecessary health care costs, and are associated with worse outcomes for patients. We have also demonstrated several educational strategies that show promise for reducing these errors, and have discussed direct observation of care as a critical missing component of measuring performance. In our forthcoming book, we further discuss causes of contextual errors in diagnosis and the need for systems of medical education and health care delivery to apply strong measurement tools to reduce these errors.
The work of this IOM committee is an important effort to bring light to an understudied but serious problem in health care.
In a recent Urban Institute paper, The Road to Making Patient-Centered Care Real: Policy Vehicles and Potholes, the authors observe that “Although patient-centered care is not new, increasing emphasis on quality measurement as part of health care reform has led to a renewed focus on it.” They do a nice job of reviewing the current state of patient-centered care activities related to the actual clinical encounter. What stands out, however, is what is missing from those activities: any attempt to measure contextualization of care.
What am I talking about? Consider a patient whose diabetes has become poorly controlled because her arthritis has gotten to the point where she is having trouble filling her insulin syringes three times a day. A patient-centered care plan would address this problem. The care plan would be centered, literally, around the patient. Pre-filled syringes, for instance, are one solution. Sending such a patient out without addressing her dexterity issues, with instructions to simply take more insulin, would NOT be patient-centered. Any disagreement?
Okay, that’s what we mean when we refer to “contextualizing care.” The patient’s fine motor deficit is the context for her poor diabetes control and must be addressed in the care plan. The failure to contextualize care is what we term a “contextual error.” In our analysis of care at two large clinics in the Veterans Administration, we’ve found that in about 40% of encounters, effective care requires attention to patient context. We also found that when those circumstances are addressed, patients have better outcomes. Contextualizing care is a provider-level skill. Some doctors pick up on contextual issues and address them, and others don’t. It seems ripe for measurement. In fact, not measuring attention to context may be considered a gap in quality measurement.
Yet no one, as far as we know, is measuring attention to the patient’s context in care planning, outside of the efforts of a small group of us in the VA. If you have heard otherwise, please do let us know. And if no one is measuring attention to patient context, then no one is assessing whether care is, in fact, patient-centered.
I’m not implying that attention to patient context is the only dimension of patient-centeredness. If a patient has to wait three hours for an appointment, that’s not patient-centered, regardless of whether she walks out with a contextualized care plan. On the other hand, no matter how terrific the “systems” aspects of the care experience are, they won’t matter if the final care plan isn’t going to work for that patient.
How did it come to pass that this core element of patient-centeredness, attention to patient context, is ignored in assessments of patient-centered care? The widely cited IOM definition of patient-centered care (“providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions”) does allude to this critical aspect of the construct, but with only one word: “needs.” Interestingly, every other part of the definition is about respect for patient preferences. But asking patients what they want is not the same as finding out what they need. Both are essential and, in our experience, inattention to patients’ needs is epidemic.
The major reason that attention to patients’ needs is not assessed is that it requires an entirely new approach to measurement, one that involves periodically audio recording visits — call it a “patient-centered care planning audit.” Patients will volunteer to do this if they feel assured that the data will remain confidential, not hurt their doctor, and result in better care. Unannounced standardized patients are another option. Each has pros and cons. And once the audio is collected, there needs to be a systematic way of grading the physician’s performance based on whether the final care plan actually attends to the patient’s expressed needs.
There are many skeptics who think that audio recording some visits and coding the data (termed “audit and feedback”) is too much work and could never scale. What they may not be taking into account is how much money it can save by avoiding unnecessary care. We have a book coming out in January, “Listening for What Matters: Avoiding Contextual Errors in Health Care,” that reviews and synthesizes the evidence that measuring attention to context is feasible and worth it.