RSNA 2019: Protecting CT scanners from cyberattacks

Tom Mahler, a doctoral student at the Cyber Security Research Center of Ben Gurion University of the Negev in Beer-Sheva, Israel, and colleagues are utilizing machine learning to develop anomaly detection models for medical imaging devices. Mahler described the team’s initial results at the 2019 RSNA annual meeting.

Efforts to protect medical imaging devices from cyberattacks currently focus primarily on the network and endpoint security layers of hospitals and imaging centers. Cyberattacks targeting the internal components of imaging devices are more difficult to detect, but if a device's internal control unit is compromised, its behavior could be manipulated, jeopardizing device mechanics, scanner protocols, and even patient safety.

At RSNA 2017, Mahler explained why and how imaging modalities are vulnerable to malware that can bypass existing security mechanisms. At RSNA 2018, he presented a hypothetical attack on a computed tomography (CT) system, and discussed the difficulty of performing regular security updates and implementing malware patches due to vendor validation requirements and government regulations.

To test five machine learning methods, the team acquired 12,711 raw scan commands sent from the host control of a real CT scanner to its gantry. Each record contained various technical scan parameters, including labels such as the body part being scanned and the protocol used. The data included 491 real anomalies, 432 of which were collected during technical maintenance scanning procedures. An additional 59 malicious anomalies were created manually, with the assistance of an expert technician and a radiologist, using a phantom in a controlled, non-clinical environment.

“The collected data contained many errors which required preprocessing and cleaning before training our models,” explained Mahler. Of 304 features identified in the data, the team retained 67 to train the models, dropping unused features and features that were 100% correlated with others.
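The preprocessing step described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the team's actual pipeline: it drops constant (unused) columns and any numeric column that is perfectly correlated with one kept earlier.

```python
import pandas as pd


def drop_redundant_features(df: pd.DataFrame) -> pd.DataFrame:
    """Drop constant columns and columns 100%-correlated with an earlier column."""
    # Columns with a single unique value carry no information.
    df = df.loc[:, df.nunique() > 1]
    # Pairwise absolute correlations; drop any column perfectly
    # correlated with a column already kept.
    corr = df.corr().abs()
    to_drop = set()
    cols = corr.columns
    for i, a in enumerate(cols):
        if a in to_drop:
            continue
        for b in cols[i + 1:]:
            if corr.loc[a, b] >= 1.0 - 1e-12:
                to_drop.add(b)
    return df.drop(columns=list(to_drop))
```

In a real pipeline the correlation threshold and the handling of categorical columns would need to match the actual data, which the presentation did not detail.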

“We used unsupervised machine learning to detect the anomalies,” Mahler explained. “The models were not given any indication of whether a scan was an anomaly, but the training data did not contain any anomalies.” Apart from two false positives that every model identified, the four models performed differently, and an ensemble combining the four produced the best results. “But for all, we were able to classify scan commands to the appropriate scan labels with 90% to 98% accuracy,” he said. “Our research is showing that it is possible to detect anomaly commands sent to CT scanners and that an ensemble of multiple anomaly detection algorithms outperformed any single algorithm we used.”
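The approach Mahler describes, training several one-class detectors on anomaly-free data and combining their votes, can be sketched with scikit-learn. The specific detectors below are illustrative assumptions; the presentation did not name the algorithms the team used.

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM


class AnomalyEnsemble:
    """Majority-vote ensemble of one-class detectors fit on normal data only."""

    def __init__(self, random_state: int = 0):
        self.detectors = [
            IsolationForest(random_state=random_state),
            OneClassSVM(nu=0.05),
            LocalOutlierFactor(novelty=True),
            EllipticEnvelope(random_state=random_state),
        ]

    def fit(self, X_normal):
        # Novelty-detection setup: the training set contains no anomalies.
        for d in self.detectors:
            d.fit(X_normal)
        return self

    def predict(self, X):
        # Each detector returns +1 (normal) or -1 (anomaly); flag an
        # anomaly when at least half the detectors vote -1.
        votes = np.stack([d.predict(X) for d in self.detectors])
        n_anom = (votes == -1).sum(axis=0)
        return np.where(n_anom >= len(self.detectors) / 2, -1, 1)
```

A simple voting scheme like this is one of many ways to ensemble anomaly detectors; score averaging or stacking are common alternatives.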

“Since data are usually unlabeled, unsupervised learning is useful,” he added. “In addition to being able to detect malicious commands sent by hackers, pre-scan detection could rapidly identify human errors and device malfunctions. Anomaly detection algorithms could also help CT operators optimize scan results with better configurations. Their clinical relevance and potential applications have a wide range of uses that have not yet been investigated.”

Mahler told Applied Radiology that the team plans to improve the models and make their system more robust. “We want to … fine tune the model so that it will be more generalized and so that it may be implemented inside hospitals,” he said.

The team is working with a large medical device vendor that allowed the researchers to create hand-crafted anomalies simulating a real attack and to collect the data. Mahler (tom@mahler.tech) invites interested researchers to collaborate with the Cyber Security Research Center. “This challenge requires collaboration to solve, and it affects many medical imaging device users,” he said.

REFERENCE

  1. Mahler T, Shalom E, Makori A, et al. Protecting patients from cyber-attacks on CT’s using machine learning methods. Radiological Society of North America 2019 Scientific Assembly and Annual Meeting, December 1-6, 2019, Chicago, IL. archive.rsna.org/2019/19008598.html
© Anderson Publishing, Ltd. 2024 All rights reserved. Reproduction in whole or part without express written permission is strictly prohibited.