Dr. Rubin is Assistant Professor of Radiology and of Medicine, Department of Radiology, Stanford University, Stanford, CA.
Radiologists may not be familiar with the term interoperability, or with how it can have a significant impact on their daily workflow. The term is defined as the ability of two or more systems to exchange information and to use the information that has been exchanged. It is critical that our systems talk to each other, because radiologists work with many systems on a daily basis. For instance, the health information system (HIS) is where patients are first registered. That system should communicate with the radiology information system (RIS), where the orders are placed and the results of radiology imaging ultimately are sent. The imaging modality itself, where the patient actually has the imaging performed, communicates with the picture archiving and communications system (PACS), where the images are stored and interpreted.
These different systems need to communicate with each other and interoperate using the information they exchange. This is the critical informatics issue: How do these systems interoperate, and how important is it for radiologists to be aware of the efforts being made to enable this interoperation to happen more seamlessly?
There are communications standards that allow these systems to interoperate with each other using the raw data they generate. The current standards are DICOM, which allows systems to move the actual images between each other; HL-7, which allows health information systems to move clinical data; and the Integrating the Healthcare Enterprise (IHE) standards, which provide a framework that allows systems to use the above standards to pass information.
While these are important standards for interoperability, what is missing are standards for dealing with what is in the actual content of the reports themselves. How do we communicate the descriptions of the findings or the anatomy? The missing link in interoperability is a standard for the content that goes into a report in terms of what a radiologist says and what the image shows. That information is an integral part of the work product of a radiologist, and standards would allow that information to move seamlessly throughout the enterprise. Standards would also enable some important applications, like data mining or correlations with genomics.
I will focus on two elements: a controlled terminology called RadLex; and new, emerging standards for describing what images show, an initiative called Annotation and Image Mark-up (AIM). These are important initiatives, because they will be incorporated into vendor products and systems, and will greatly enable future applications for radiologists.
The focal point of radiology is the report. But there are no standards for the terms we use when we create a report. There are currently no standards for how a report is structured. So there is great variability amongst radiologists when they are creating reports. Certainly, being expressive is good, but it can also be problematic, because radiologists don’t always use clear and unambiguous terms. Having more report structure will vastly improve communication with our referring clinicians, which is a critical component of what we do.
Part of the problem is that multiple terms exist to describe nearly any radiological finding. The problem with the richness of language and the numerous terms that we can pick from is that our understandings of the terms, both for radiologists and for referring clinicians, can be different. We need a standardized terminology that encapsulates the common meaning in order to reduce confusion.
Variations in terminology
Of course, variations in terminology have existed for decades, but how do they affect practicing radiologists? Consider the Radiological Society of North America's (RSNA) online teaching file called MIRC (Medical Imaging Resource Center). It is an online repository of teaching files. Going to the system and performing a text search for the term "macrocystic adenoma" yielded no results at the time I performed this inquiry. The MIRC search engine operates similarly to Google or Yahoo, and of course the key to any successful search is using the right terminology. In this case, if I used a similar term, such as "macrocystic neoplasm," then I got search results. At a pragmatic level, variations in language can affect a radiologist's ability to use various applications.
Variations in language also affect the quality of reports. Studies have shown that there is great variability in mammography reporting in the use of BIRADS (Breast Imaging Reporting and Data System), for example. There is great variation in the choice of terms used to describe calcifications and masses. This variation in language can have a great impact on our referring clinicians' ability to understand what the radiologist is saying.
RadLex is a controlled terminology, or a lexicon, for improving the clarity of clinical communication. It is the result of an effort by the RSNA, and it is best likened to a large dictionary of preferred terms for describing findings or anatomy in radiology reports. Its goal is to unify the numerous terms with alternative but similar meanings under a preferred term. In this manner, it is intended to improve the consistency and quality of medical records. It will be incorporated into applications for that specific purpose, and it can also be used by applications to index and retrieve cases. RadLex was developed by 15 committees with more than 150 expert participants from more than 30 organizations. It currently consists of approximately 12,000 terms and it continues to grow.
RadLex contains a list of terms and information about those terms; specifically, it contains imaging methods, the names of devices, acquisition parameters, anatomy and imaging results. It also contains synonyms for terms, so that we can unify all the different ways of saying similar things to a preferred way.
The best way to understand RadLex is to view it at www.RSNA.org, where it is displayed as a taxonomy that shows the primary term and all related terms. For instance, thorax would have the following related terms: diaphragm, airway, mediastinum, heart, etc. Clicking on any term will provide more information.
So how will this help the average radiologist? As I mentioned, its goal is to enable new applications, such as those for indexing and searching our radiology reports more effectively. It will also enable applications for indexing our images. Additionally, it will impact structured reporting and decision-support tools. These will help radiologists make better decisions, because if the computers can understand what we are saying, then they can use that information to give us feedback on potentially better interpretations.
There are several applications online that take advantage of RadLex functionality, such as GoldMiner® from the American Roentgen Ray Society (ARRS, goldminer.arrs.org) or the Yottalook™ Radiology Search Engine (www.yottalook.com). These resources are using tools like RadLex to improve information retrieval by recognizing the different ways of saying the same term. For instance, searching for the term "von Recklinghausen" returns results for synonymous cases, such as neurofibromatosis type 1. These tools help improve the ability of users to find information. In this case, the user did not have to immediately know of the synonymous term.
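To make the idea concrete, here is a minimal sketch of how a terminology-aware search engine might expand a query through synonym mappings before retrieving cases. The term mappings and case identifiers are invented for illustration; they are not actual RadLex data or the implementation used by these products.

```python
# Hypothetical synonym table mapping alternative terms to a preferred term.
SYNONYMS = {
    "von recklinghausen disease": "neurofibromatosis type 1",
    "macrocystic adenoma": "macrocystic neoplasm",
}

# Hypothetical index of cases, keyed by preferred term only.
CASES = {
    "neurofibromatosis type 1": ["case-101", "case-102"],
    "macrocystic neoplasm": ["case-203"],
}

def search(query: str) -> list[str]:
    """Map the query to its preferred term, then retrieve indexed cases."""
    preferred = SYNONYMS.get(query.lower(), query.lower())
    return CASES.get(preferred, [])

# A search on the synonym finds the cases indexed under the preferred term.
results = search("Von Recklinghausen disease")
```

Without the synonym table, a plain text search for "von Recklinghausen disease" against cases indexed as "neurofibromatosis type 1" would return nothing, which is exactly the MIRC scenario described above.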
Structured reporting and decision support
Structured reporting initiatives are another way to leverage RadLex. For instance, the RSNA has a goal to create an online library of best-practice radiology templates for reporting. Not only is the structure of the template an important aspect of the initiative but the terms that are used in each section are equally important. That is where RadLex can make a contribution, by providing radiologists the preferred terms for describing anatomy or radiological findings, when they are reporting.
The final application for RadLex is in decision support. These are applications that help radiologists improve their interpretations. In mammography, for example, a decision-support system that uses RadLex terminology could consist of the following workflow. The radiologist dictates the report, and the structured reporting template applies RadLex terminology to the findings. Those findings would then be translated to their appropriate BIRADS classifications, which would then be sent to a decision-support system. This would translate the radiologist's findings into a set of probabilities, a ranked list, that considers the controlled terms (or findings) and reports probabilities for malignant or benign diagnoses. This ranking could help the radiologist determine if the patient should be biopsied or simply followed up.
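The ranking step in such a workflow can be sketched as follows. This is a toy illustration only: the coded findings, diagnosis labels, and weights are invented and carry no clinical meaning; a real decision-support system would use validated statistical models.

```python
def rank_diagnoses(findings: set[str]) -> list[tuple[str, float]]:
    """Turn a set of coded findings into a ranked list of diagnoses.

    The weights below are invented for illustration only.
    """
    weights = {
        "spiculated mass": {"malignant": 0.8, "benign": 0.2},
        "pleomorphic calcifications": {"malignant": 0.7, "benign": 0.3},
        "circumscribed mass": {"malignant": 0.1, "benign": 0.9},
    }
    scores: dict[str, float] = {}
    for finding in findings:
        for diagnosis, weight in weights.get(finding, {}).items():
            scores[diagnosis] = scores.get(diagnosis, 0.0) + weight
    # Normalize the accumulated scores into probabilities and rank them.
    total = sum(scores.values()) or 1.0
    ranked = [(dx, score / total) for dx, score in scores.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)
```

The key point is not the arithmetic but the precondition: this only works because the findings arrive as controlled terms the computer can match, rather than free text.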
Extracting metadata from images
It is also possible to use imaging informatics to describe what a radiologist sees in an image. In this case, the Annotation and Image Mark-up (AIM) initiative can help in labeling images.
In current PACS, when a radiologist annotates an image, those annotations are stored as simple graphics which are not searchable by a computer. The goal of AIM is to make the image annotations computable. This is an important effort in radiology as new applications will leverage this new standard. It will enable radiologists to use not only reports, but images, in a much more powerful way.
So the current problem is that radiologists can add a significant amount of information to images, particularly the measurements on areas of the images, but this information is not computer-accessible. We cannot go to our PACS and retrieve images based on diagnosis or based on the particular location of a lesion in an organ.
When I refer to "mark-up," I mean the graphical symbols, drawings and regions of interest that radiologists may put on the image. And when I refer to "annotation," that is what a radiologist might say about that region, for example: irregular mass in the right lobe of the liver, likely a metastasis. So AIM is an emerging standard, similar to DICOM or HL-7, that enables radiologists to communicate imaging observations, e.g. anatomy, regions of interest, etc., to computers. The project is being developed by the caBIG® (cancer Biomedical Informatics Grid) Imaging Workspace of the National Cancer Institute.
AIM produces an image-structured report. In the current model, radiologists will describe a particular region in an image, but the communication results in text that is disconnected from the image. AIM will allow an application to unify the image with that description, using a rich structure that is computer-accessible.
Because this is richly structured and computer-accessible, the possibility exists to have a large database of images that have now been annotated by radiologists, and that are searchable. Imagine the power of being able to search for all the hyperdense masses, or to search for all the masses that are of a particular size. Or one could compute, in particular patients, how much a certain mass changed over time.
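As a rough illustration of why computability matters, an AIM-style annotation can be thought of as a structured record rather than burned-in graphics. The field names below are invented for illustration; the actual AIM specification defines a much richer information model.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A simplified, hypothetical image annotation record."""
    image_uid: str                   # which image the region belongs to
    roi: tuple[int, int, int, int]   # region of interest as (x, y, w, h)
    finding: str                     # controlled (e.g. RadLex) finding term
    anatomy: str                     # controlled anatomic location
    diameter_mm: float               # measured lesion diameter

annotations = [
    Annotation("img-1", (40, 60, 30, 30), "hyperdense mass", "liver", 22.0),
    Annotation("img-2", (10, 15, 12, 12), "cyst", "kidney", 8.0),
]

# Because the annotations are structured, they can be queried directly,
# e.g. all hyperdense masses larger than 20 mm:
large_masses = [a for a in annotations
                if a.finding == "hyperdense mass" and a.diameter_mm > 20]
```

A PACS storing annotations only as graphic overlays cannot answer such a query; storing them as structured, coded data makes the searches described above straightforward.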
Practical uses for AIM
How will radiologists see this used in actual practice? Several groups are developing tools to implement AIM in a user-friendly fashion. At Stanford, we have been developing a tool called iPad, which is a plug-in to the OsiriX Imaging Software environment (an open-source DICOM viewer for Mac OS X). It allows radiologists to draw on and annotate an image, describe what they're seeing and produce a structured output.
So after a radiologist draws on an image, and indicates the lesion, the iPad tool will generate the structured output. This is best considered as a new workflow for reporting that connects the image to what the report is saying. The important point is that the tool is also using RadLex controlled terminology. AIM is also working with other standards, like HL-7 and DICOM, and there is open-source code that converts AIM documents to these standards to allow interoperability.
Tracking tumor changes
As one case example, these interoperable systems could help us track tumor changes as a result of treatment. Commonly, radiologists will have to identify a lesion on several different modalities. The radiologist marks up and measures the lesions on the images, and then an oncologist, by hand, will have to take the measurements that a radiologist has made, sum them up and apply appropriate criteria to measure tumor response.
This is all computable information, however, and the AIM standard will enable those calculations to be automatically generated, improving the quality and consistency of measurements of lesions, while producing a chart that shows how the sum of the lesions has changed over time. This is a new way in which this quantitative information will be used and consumed by applications. It will enable better quality care. In addition, rules can be applied to determine, given changes in lesion size, if the patient is in remission or if the disease is progressing.
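The calculation itself is simple once the measurements are computable. The sketch below uses simplified, RECIST-style thresholds (roughly, a 30% decrease in summed diameters suggesting response and a 20% increase suggesting progression); it is an illustration of the idea, not a clinically valid implementation of any response criteria.

```python
def tumor_burden(diameters_mm: list[float]) -> float:
    """Sum of lesion diameters at one time point."""
    return sum(diameters_mm)

def assess_response(baseline: float, current: float) -> str:
    """Classify change in tumor burden with simplified thresholds."""
    change = (current - baseline) / baseline
    if change <= -0.30:
        return "partial response"
    if change >= 0.20:
        return "progressive disease"
    return "stable disease"

# Example: two lesions shrink from a 50 mm summed diameter to 30 mm.
baseline = tumor_burden([20.0, 30.0])
followup = tumor_burden([12.0, 18.0])
result = assess_response(baseline, followup)
```

With AIM-style annotations feeding such a calculation, the summing and classification an oncologist currently does by hand could be generated automatically and charted over time.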
The standards that I have discussed are very important for communicating what is in a report and what is in an image, and for helping our medical systems interoperate. RadLex is a controlled terminology that is going to enable interoperability for radiology reporting. New standards in image annotation and mark-up are going to enable interoperability of our image data.
The most important thing for radiologists to be aware of is that important new tools are emerging that will allow them to communicate more clearly and effectively with other physicians.
ELIOT L. SIEGEL, MD: Thanks, Daniel. That was a great overview of structured reporting and some of the tools that we are going to need in order to be able to take advantage of the exciting vision that you’ve portrayed. From a practical perspective, when do you see this actually happening? A fair amount of the work, as you’ve mentioned, is being done for research and clinical trials at the National Cancer Institute. But when do you see the typical radiologist being able to take advantage of RadLex? When do you see us really moving towards a structured reporting environment? Some of the things that you talked about are really exciting but how do we get there from here?
DR. RUBIN: It’s important for radiologists to be aware that it’s already happening. A number of vendors already are incorporating RadLex. There are also other terminologies, even outside of radiology, in electronic medical records. What is more of an issue is that most radiologists and many physicians aren’t aware of these technologies, and there are vendors who are not doing this. Many are simply not aware of the benefits that this provides in terms of indexing, searching and bringing clarity to healthcare communications.
The take-home message is for radiologists to be aware of this functionality, and what this technology enables them to do. They should be asking critical questions of their vendors when considering new purchases: “Do you support controlled terminology?” It will have significant benefits in daily practice. It will help with finding reports.
The vendors who are including RadLex in their systems are doing this in the context of enabling radiologists to look up other studies about particular pathologies. For instance, if you have a patient with a cystic lesion in the pancreas and you are unsure, you can look up relevant studies to aid your diagnosis. Residents can use the system to look up similar cases for educational purposes. You cannot perform those types of searches in current PACS iterations but when more vendors enable controlled terminologies, we will see a gradual shift to having this functionality.
I also see that speech recognition vendors are incorporating the terminologies, and helping radiologists pick better terms right on the fly. It’s not as efficient to use a pick-list of terms for reporting and some vendors are incorporating controlled terminologies into the voice dictation. Being aware that this technology is here, should help radiologists make more informed purchasing decisions and it can help them in pushing vendors to incorporate these tools into future product revisions.
DR. SIEGEL: At University of Maryland, we’re seeing a really interesting change in mindset. As our new residents come into the program, they have never seen anything other than speech recognition, and the use of templates to create reports. And so we’ve seen more change in the way that our residents and fellows dictate now than we have in the last 20 or more years.
The question for our industry participants, Vikram Simha and Bob Cooke, is: Are you hearing from customers that they’re interested in changing the way that they want to do reporting? Is that something for advanced visualization vendors or PACS vendors to be involved with? What are you hearing from your customers, and what are you doing to respond to this really rapid change, and the really futuristic vision that Daniel depicts?
VIKRAM SIMHA: One of the things that’s very attractive about AIM, and one of the things that we hear a lot from our customers, is that we want to generate reports that are usable in a form that can drive workflow. These days, software can detect anatomy automatically. Measurements on images, lung images for instance, can be presented relative to known anatomy, so when the follow-up scan comes, you can automatically reproduce those measurements and display them on the screen. I think AIM goes a long way in helping vendors deliver that functionality.
DR. SIEGEL: How do you feel about the future of reporting? Do you think it’s changing as rapidly as we’re talking about?
BOB COOKE: If we look at other imaging areas, such as the cardiovascular field as a good example, that’s an area that’s already very well-structured and quantitated. Cardiovascular reports are regularly relying on measurements. The format in which those measurements are taken is very well-structured, and the data is derived and reported to national bodies to retain accreditation. That is an area where structure already exists, and the value of the quantitative data is evident.
In radiology, however, one of the challenges is to maintain efficiency. So, during your general interpretation process, how can you deal with and generate structure without necessarily slowing down the radiologist? And one of the opportunities that I think is latent, is connecting the image presentation with the result generation. And there are some great opportunities for generating structure, while also improving efficiency. I think the challenge always remains in trying to take something which is computer-represented and turn it into a human-readable form. You need to do that while creating a value equation that you will deliver clearer, more concise reports to your referral base. It is also necessary to get used to working with a representation of the report that is nontraditional. So a paradigm shift will be required.
DR. SIEGEL: Khan, you’re in a unique position, because you just recently made the transition from working full-time as a clinical radiologist to working in industry. How quickly do you think we’re going to be moving to structured reporting, and where do you see the next few years, from your dual perspective?
KHAN M. SIDDIQUI, MD: I think there are two ways you can approach this. You go on from here, onwards, doing structure, or you try to understand structure in existing data. So there’s going to be a two-way approach, in which understanding structure on natural language processing and existing textual data is out there, and it will drive the structure. Tools like AIM and RadLex and others will create much more prospective structure in the way reporting happens.
The creation of a structured reporting base, of course, is a really big change for radiologists who have trained and established their ways of doing things. So, as you said, as more and more institutions start to adopt these practices, and the new radiologists who are getting trained become familiar with it, the change will happen. Now, waiting for radiologists to change their reporting habits does not prohibit us from going back and getting structure from existing data. It remains to be seen if these methods will be good enough to supplant the traditional ways radiological reports are generated. I think it is an open-ended question right now.
DR. SIEGEL: It really is interesting, the idea of not only prospectively but retrospectively being able to mine the content. There’s so much content in the images themselves, and we only report out on a small subset of it, and being able to capture more of it sounds like a really intriguing idea.