Generative artificial intelligence has exploded in the healthcare sector in recent years, driven by hopes the technology could take on a variety of tasks — from clinical documentation to data analysis — and lessen the industry’s long-standing workforce challenges.
At the same time, healthcare organizations often struggle to manage cybersecurity, burdened by frequent cyberattack attempts as the sector adopts more internet-connected tools.
AI products could be another target for cybercriminals. Meanwhile, hackers can use their own AI to launch cyberattacks. That should create new work and security disciplines for healthcare cyber teams, said Taylor Lehmann, a director of the office of the chief information security officer at Google Cloud.
Lehmann sat down with Healthcare Dive to discuss how the advent of generative AI tools will impact healthcare cybersecurity and what organizations need to do to prepare for an increasingly AI-augmented security workforce.
This interview has been edited for clarity and length.
HEALTHCARE DIVE: Would you say these generative AI tools are more ripe for attack than any other piece of tech hospitals use? Or is that just another vendor they have to consider when they're thinking about cybersecurity?

TAYLOR LEHMANN: This is where I am concerned about the future. Today we’re talking with AI systems; tomorrow we’re watching AI systems do things, like the agentic push we’re still on the cusp of, so this answer might change a bit. But number one, it’s going to be very hard to detect when AI is wrong, and whether it’s wrong because a nefarious actor manipulated it.
Detecting wrongness is already a challenge. We’ve been working on this for years. No one’s been able to completely eliminate hallucinations or inaccuracy. But detecting more than hallucinations, meaning behavior that a bad guy sitting in the middle has introduced to get a certain outcome, is going to be really hard, and it will only get harder.
So organizations need to make sure they have provenance for basically everything: the model being served to you, the technology it was built with, all the way down to the provider of the model and the data used to train it. That visibility is going to be critical.
At Google we use model cards. We use practices like cryptographic binary signing, where we can tell exactly where the code came from that is running that model. We can tell you exactly where the data came from that trained that model. We can trace every record that went into that model’s training from birth to death. And organizations are going to need to be able to see that in order to manage those risks.
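As a rough illustration of the provenance idea Lehmann describes, the Python sketch below verifies a model artifact’s publisher signature before loading it and records its hash for later tracing. It is a minimal sketch, not Google’s actual signing pipeline; the function name, key handling and file layout are assumptions, and the only dependency is the open-source cryptography package.

```python
# Minimal sketch of a "verify before you load" check for a model artifact.
# Illustrative only: this is not Google's binary-signing pipeline.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_model_artifact(artifact_path: str, signature: bytes,
                          publisher_key: Ed25519PublicKey) -> str:
    """Return the artifact's SHA-256 digest if the publisher's signature checks out."""
    with open(artifact_path, "rb") as f:
        artifact_bytes = f.read()

    try:
        # Raises InvalidSignature if the artifact changed after it was signed.
        publisher_key.verify(signature, artifact_bytes)
    except InvalidSignature:
        raise RuntimeError(f"Refusing to load {artifact_path}: signature mismatch")

    # Record the digest alongside the model card and training-data manifest
    # so the running model can be traced back to its source later.
    return hashlib.sha256(artifact_bytes).hexdigest()
```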
You mentioned the importance of being able to tell what models are trained on and what data is flowing in, and how that could help you figure out whether a tool has been infiltrated. How should health systems be thinking about building out their workforces now to deal with those concerns?
There are definitely new security disciplines coming. This is actually some of the work that we’re doing right now in the office of the CISO. Google is looking at some of these new roles and new capabilities, and then putting guidance together for folks on what to do.
The first thing I would say is, there are specific methods needed to secure AI systems out of the gate. We feel very strongly that any reasonable approach to securing AI and AI systems needs to involve having strong identity controls in place and making sure that we know who’s using the model. We need to know what the model’s identity is and what the user’s identity is, and be able to differentiate between the two. We also need radical transparency into everything the model does. Right or wrong, we can see it.
One of the big new areas is this concept of AI “red teaming.” Organizations deploy a team of people who do nothing but try to make these things break, basically trying to get them to produce harmful content or take actions that were not intended, to test the limits of the safeguards and to evaluate how well the models are trained and whether they’re over- or under-fit for purpose.
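To make the red-teaming concept concrete, here is a minimal sketch of an automated harness that replays adversarial prompts against a model and collects responses that slip past its safeguards. The generate callable, the prompt file and the looks_harmful check are hypothetical placeholders; real red teams pair automation like this with human reviewers and far richer attack libraries.

```python
# Minimal red-team harness sketch; the model interface and safety check are placeholders.
import json
from typing import Callable


def red_team_run(generate: Callable[[str], str], prompts_path: str) -> list[dict]:
    """Replay adversarial prompts and collect responses that bypass safeguards."""
    with open(prompts_path) as f:
        adversarial_prompts = json.load(f)  # e.g. jailbreaks, prompt injections

    findings = []
    for prompt in adversarial_prompts:
        response = generate(prompt)
        if looks_harmful(response):
            findings.append({"prompt": prompt, "response": response})
    return findings


def looks_harmful(response: str) -> bool:
    # Placeholder check: substitute a real safety classifier or a human review queue.
    blocked_markers = ["patient ssn", "disable the audit log", "ignore previous instructions"]
    return any(marker in response.lower() for marker in blocked_markers)
```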
I think you’re starting to see AI governance becoming hugely important. It’s always been important, but understanding the risks associated with AI, especially in regulated industries or safety-critical use cases, requires a combination of technical skill, an understanding of how AI works and is built, and the regulatory or business context to determine what is an important risk.
Having really robust AI governance and people with the right skills is a difficult place to get to today. Because right now, you’ve got people who are great engineers, and you’ve got people who are great doctors. You don’t necessarily have both in one person with a risk hat on. So we’re going to see the rise of industry-specific, even practice-specific, governance professionals.
Hackers are increasingly using AI to improve their intrusion attempts. How should health systems be thinking about the increased risks posed by cyber criminals now that they have access to AI?
Number one is, we have to wrangle how organizations provision and manage identities, like the identities of humans, the identities of machines, the identities of agents.
I do not think your average hospital really thinks about identity beyond how to identify the people walking in the door and what their user ID and password are. Identity is going to be the piece of digital evidence, the digital artifact, that ties everything together, and identity is the basis on which we will apply controls with the appropriate context.
The second is speed. AI systems move fast. They move faster than humans. Weaponized AI systems, especially those that have been trained to do very specific things, will move at lightning speed. We’re not going to say in the future, “Hey, you have less than an hour to fix a ransomware infection in a piece of code.” You have seconds, if not milliseconds. And if you can’t operate with that same speed to detect and eliminate threats, you’re going to be in trouble.
The strategy I have is: look at your architecture, look at your systems, look at your applications. Look at how quickly you can deploy new controls. Look at how quickly you can rebuild a system from scratch, and how quickly you can deploy a patch. Look at how quickly you can detect an event and take action on it, and eliminate the things that stand in your way of doing that, whether that means going from hours to minutes or from minutes to seconds. But prioritize the things that help you deploy corrections and detect and respond to issues extremely quickly.