Securing the Future: Addressing the Cyber Risks of AI in Healthcare

Published Date: July 31, 2025
By News Release

A new special report in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America (RSNA), highlights the urgent need to address cybersecurity vulnerabilities associated with large language models (LLMs) in healthcare. As LLMs like OpenAI’s GPT-4 and Google’s Gemini become increasingly integrated into clinical workflows, researchers are sounding the alarm: security must evolve in step with technological adoption to protect patient data and medical integrity.

“While integration of LLMs in healthcare is still in its early stages, their use is expected to expand rapidly,” said lead author Dr. Tugba Akinci D’Antonoli, a neuroradiology fellow at University Hospital Basel in Switzerland. “This is a topic that is becoming increasingly relevant and makes it crucial to start understanding the potential vulnerabilities now.”

LLMs, capable of understanding and generating human language, are already reshaping many areas of medicine—from clinical decision support and drug discovery to improving patient-provider communication. As their use grows, so too does the threat landscape. The report details how both AI-inherent and ecosystem-based vulnerabilities could be exploited, with potentially severe consequences for patient care and data privacy.

Among the risks discussed are data poisoning, where attackers inject false or malicious data into training sets, and inference attacks, where private information is extracted from model responses. In radiology, the authors warn that such attacks could go as far as manipulating image interpretations, accessing protected patient data, or installing unauthorized software within a system.
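To see why data poisoning is dangerous, consider the following toy sketch (not drawn from the report): injecting a handful of mislabeled samples into a training set is enough to change what a simple nearest-centroid classifier predicts. The feature values, labels, and classifier here are purely illustrative assumptions.

```python
# Illustrative toy example of data poisoning: flipping labels in a tiny
# training set changes a nearest-centroid classifier's output.
# This is a didactic sketch, not a real attack on a medical AI system.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def classify(x, training):
    """Nearest-centroid classifier over {label: [feature values]}."""
    return min(training, key=lambda label: abs(x - centroid(training[label])))

# Clean training data: "benign" findings cluster near 1.0, "malignant" near 5.0.
clean = {"benign": [0.9, 1.0, 1.1], "malignant": [4.9, 5.0, 5.1]}
print(classify(4.0, clean))  # prints "malignant" (4.0 is closest to 5.0)

# An attacker injects a few mislabeled high-value samples into "benign",
# dragging its centroid from 1.0 up to 3.1 and flipping the prediction.
poisoned = {"benign": [0.9, 1.0, 1.1, 5.0, 5.2, 5.4],
            "malignant": [4.9, 5.0, 5.1]}
print(classify(4.0, poisoned))  # prints "benign"
```

Real LLM poisoning is far subtler than this, but the mechanism is the same: corrupted training data silently shifts the model’s decision boundary.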

Dr. D’Antonoli emphasized that radiologists and healthcare professionals need to be proactive. “Radiologists can take several measures to protect themselves from cyberattacks,” she said. “There are of course well-known strategies, like using strong passwords, enabling multi-factor authentication, and making sure all software is kept up to date with security patches. But because we are dealing with sensitive patient data, the stakes (as well as security requirements) are higher in health care.”

The authors call for comprehensive, multi-layered security strategies that extend beyond basic IT hygiene. These include deploying LLMs in secure environments, encrypting interactions, continuously monitoring system activity, and using only vetted, institution-approved AI tools. They also recommend anonymizing sensitive patient data before inputting it into any language model.
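The anonymization step above can be sketched in a few lines. This is a minimal, hypothetical example assuming simple pattern-based identifiers (record numbers, dates, phone numbers); a real deployment would rely on a vetted, institution-approved de-identification tool rather than ad-hoc regexes.

```python
import re

# Hypothetical redaction patterns -- placeholders for illustration only.
REDACTION_PATTERNS = [
    (re.compile(r"\bMRN[:\s]*\d+\b"), "[MRN]"),              # medical record numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # dates like 07/31/2025
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),       # US-style phone numbers
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before any LLM call."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Patient seen 07/31/2025, MRN: 448321, callback 555-867-5309."
print(redact(note))
# prints "Patient seen [DATE], [MRN], callback [PHONE]."
```

Redacting before the text ever leaves the institution limits what an inference attack on the model could recover later.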

“Moreover, ongoing training about cybersecurity is important,” Dr. D’Antonoli added. “Just like we undergo regular radiation protection training in radiology, hospitals should implement routine cybersecurity training to keep everyone informed and prepared.”

Despite the risks, the report doesn’t aim to alarm but to inform. According to Dr. D’Antonoli, while there are valid concerns, patients should remain reassured. “The landscape is changing, and the potential for vulnerability might grow when LLMs are integrated into hospital systems,” she said. “That said, we are not standing still. There is increasing awareness, stronger regulations and active investment in cybersecurity infrastructure.”

With responsible deployment and continued vigilance, the benefits of LLMs in healthcare—like improved efficiency, better diagnostics, and enhanced patient communication—can be realized without compromising safety. The report ultimately calls for a balance between innovation and protection, urging healthcare institutions to prioritize security now to prevent serious breaches in the future.