Dr. Mirvis is the Editor-in-Chief of this journal and a Professor of Radiology, Diagnostic Imaging Department, University of Maryland School of Medicine, Baltimore, MD.
For eight years of my career I was the quality improvement officer for my department. In those days, now about 10 years ago, the job consisted of gathering data on a few technical parameters, such as X-ray retakes and time to deliver finalized reports, and identifying major complications resulting from misinterpretation or interventional “misadventures.” I also ran the monthly departmental QA meeting and tracked the missed cases on a nice spreadsheet. Attendance at these meetings was sparse, to say the least, but I could always count on my own section to present a fair number of screw-ups. My other major task was to help prepare the radiologists for the Joint Commission visit, to present to the surveyors, and to accompany them as they strolled through the department with their white gloves. Fortunately, most of the faculty disappeared into thin air during that period, so at least I had little worry that any of them could “sink” the department in one stroke by not knowing the location of the nearest fire extinguisher. No one ever asked to see my large binders of data collected over the three years since the last survey. It was rare that anything constructive was done with those data, though not for lack of trying. There was just not much emphasis on measuring and reacting to quality parameters. On the whole, I was dubious and cynical about the value of the activity—not a good bias for someone in my position.
Wow, how things have changed since then. Quality performance measurement and process improvement are now front and center in our practice. Every day, one encounters some activity in pursuit of bettering how we assess and improve our department’s performance. A year ago, the department implemented an American College of Radiology (ACR)-compliant physician peer-review program. Part of the Commissure/RadWhere system (Nuance Healthcare, Burlington, MA), it selects, for every tenth case, a prior study interpreted by another radiologist and checks for discrepant opinions. Major discrepancies are reviewed at a monthly department-wide QA meeting. This gives us an ongoing metric of how the department is doing relative to current national radiology standards.
Each resident and new faculty member must complete a quality project focused on a specific process used in a given area. One such project, now in development, is a surgical follow-up tracker embedded in the picture archiving and communication system that automatically obtains operative results for comparison with preoperative imaging interpretations. These projects not only help our department measure performance and find opportunities for improvement, but also give the resident or faculty member a sense of the value of the entire QA process. One of our staff is even reviewing how quality improvement research should, in some cases, be vetted through the institutional review board when the work has ethical or legal implications. This is a quality assessment of how we do quality assessment.
Perhaps the best change that has come about is that the information we obtain directly affects how we practice. For instance, we identified a problem: in some patients, intravenous access could not be achieved, forcing us to forgo intravenous contrast enhancement for CT even when it was indicated. We also had a higher-than-expected rate of contrast extravasations with peripheral-line injections. These observations led to the adoption of central lines approved for power injection. Making the switch was laborious and somewhat complex, given the many groups invested in the change, but ultimately it proved a very satisfactory solution.
In another example, we observed a delay in notification of emergency medicine physicians about suspected pneumonia diagnosed on ED chest radiographs. Appropriate changes were made to ensure immediate direct communication between the interpreting radiologist and the primary physician. At the Veterans Administration Radiology Department, which is part of our practice, there is constant research to measure a variety of factors impacting the reading environment and how modifications of that environment can improve accuracy and speed of dictation. The entire chain of activities that occurs from study acquisition to the final report has been measured, and this has helped to streamline the processes. In the University of Maryland Department of Radiology, we assess unread/unsigned reports on a daily basis, if not more frequently, resulting in practice modifications that have reduced these delays from a backed-up river to just a trickle.
As a result of these activities and many more, the department is a much tighter ship, with a higher level of performance across these and numerous other quality parameters. We can demonstrate the quality of our service and our product—the interpretation of images and the timely availability of that interpretation. Alas, we still get poor turnouts for the monthly “confessions” conference, another great potential source of identifying ways we can do better, but things are improving. As a card-carrying “Doubting Thomas” of the quality improvement process, I have finally seen beyond my skepticism and recognized the great value it provides. I urge the rest of you doubters to get on board and figure out how to do it better. It’s an ongoing activity because you never reach perfection; you just strive for it.
It’s getting better all the time. Appl Radiol.