The last year has seen growing interest in the use of generative artificial intelligence in healthcare. Although generative AI has been hailed as a technology that could drive “new frontiers in productivity”, there have also been reports of AI-generated “hallucinations” and misdiagnoses.
Most healthcare organizations are still unsure how to implement generative AI safely and effectively: A 2023 Bain survey found that only 6% of health systems have a generative AI strategy.
On the patient side, a 2023 Pew Research Center survey found that more than 60% of Americans would be uncomfortable with their healthcare providers relying on AI to direct their care.
Still, some large health systems have successfully piloted generative AI programs; for example, Microsoft and Epic have partnered with UC San Diego Health, UW Health and Stanford Health Care to integrate AI-drafted responses to patient messages.
To find out how industry leaders are addressing the challenges of generative AI in healthcare, HealthTech spoke with Dr. Christopher Longhurst, chief medical officer and chief digital officer at UC San Diego Health; Dr. Cherodeep Goswami, chief information and digital officer at UW Health; Dr. Kevin Johnson, vice dean for applied informatics at the University of Pennsylvania School of Medicine and a member of the Healthcare Artificial Intelligence Code of Conduct Steering Committee; and Eric Berger, a partner in Bain’s Healthcare and Life Sciences practice.
HealthTech: How is your organization currently using generative AI?
Longhurst: We use AI to communicate with patients. A study published in JAMA Internal Medicine by researchers at the University of California San Diego found that licensed nurses rated AI responses to patient questions as, on average, higher quality and more empathetic than human responses. I looked at the responses myself, and it was pretty clear which came from the chatbot and which came from the doctor: The chatbot’s responses were three paragraphs long, and the doctor’s were three sentences.
Johnson: We’re exploring how to replace the time physicians spend documenting encounters with technology that can generate content from voice. We’re also working with Epic to explore how to automatically generate responses to patient portal messages. All of this will be available to most Epic clients over the next one to two years.
HealthTech: What role will humans play in AI-generated communications?
Goswami: We all know from our daily lives how time-consuming it is to reply to every email. That’s what prompted us to adopt AI technology. Generative AI empowers our clinicians to write more comprehensive replies with a dash of empathy. Patients receive not only their test results but also an accompanying explanation that their provider has reviewed and edited from the AI-generated draft.
Longhurst: Our doctors are drowning in inbox overload. In some cases, they’re getting a message every minute. Generative AI is a mechanism that helps solve this problem. After the AI generates a draft answer to the patient’s question in the electronic medical record, the doctor decides on the next step. There are two buttons: “Start with draft” and “Start blank reply.” We always make sure there is a human in the loop.
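To make that human-in-the-loop pattern concrete, here is a minimal sketch in Python of the workflow Longhurst describes. It is not UC San Diego Health’s or Epic’s actual implementation; `PatientMessage`, `generate_draft` and `begin_reply` are hypothetical stand-ins for an EHR-integrated LLM service.

```python
from dataclasses import dataclass

@dataclass
class PatientMessage:
    patient_id: str
    text: str

def generate_draft(message: PatientMessage) -> str:
    """Hypothetical stub for an EHR-integrated LLM drafting service.

    In a real deployment, drafting would happen inside the EHR vendor's
    secure environment, not in application code like this.
    """
    return f"Dear patient, regarding your question about {message.text[:40]}..."

def begin_reply(message: PatientMessage, clinician_choice: str) -> str:
    """The clinician always chooses: start from the AI draft or start blank.

    Nothing is sent automatically; the returned text is only a starting
    point for the clinician to review, edit, and sign off on.
    """
    if clinician_choice == "Start with draft":
        return generate_draft(message)
    if clinician_choice == "Start blank reply":
        return ""
    raise ValueError(f"Unknown choice: {clinician_choice!r}")

msg = PatientMessage(patient_id="p-001", text="Is this medication safe with ibuprofen?")
print(begin_reply(msg, "Start with draft"))
```

The point of the two explicit branches is that the AI never replies on its own; every path ends at the clinician’s editor.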
Johnson: We can anticipate that while patients may find an AI-generated message easier to read and more reassuring, they may also find it condescending or culturally out of step with what the sender intended to convey. If a message turns out to be hurtful to someone, there needs to be a process for addressing that.
Berger: Instead of talking about AI-generated content, it’s best to use the term AI-informed content. If an AI generates 95 percent of a document and a human completes the remaining 5 percent, is it theoretically a human-generated document or an AI-generated document? And more importantly, does it give patients the answers they need?
HealthTech: What are some important security and privacy considerations?
Johnson: Patient-specific information or protected health information shouldn’t be in any of these generative models yet. For example, let’s say you have a patient with a complex medical condition come in, and you discover that ChatGPT can accept all of this patient data. And you say, “Let’s chat about this patient.” Now you’re handing over that entire private record completely to GPT. We shouldn’t be doing that.
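One concrete guardrail implied by Johnson’s point is to scrub obvious identifiers before any text leaves the organization. The sketch below is illustrative only: the regex patterns are naive assumptions, and real de-identification of HIPAA’s 18 identifier categories requires vetted tooling and a secure, contractually covered environment, not a handful of regexes.

```python
import re

# Naive, illustrative patterns only -- not a substitute for real
# de-identification tooling.
REDACTION_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Strip obvious identifiers before text is sent to any external model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt MRN: 4482913, DOB 4/12/1961, cell 619-555-0142, presents with..."
print(redact_phi(note))
# Pt [MRN], DOB [DATE], cell [PHONE], presents with...
```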
Goswami: Today, Microsoft and Epic are building a very safe and secure cloud environment that we, as clients, can leverage. But the really critical success factor is for organizations to evaluate their own culture of acceptance, responsibility, and accountability. It’s a very powerful tool if used properly. If used improperly, it can tarnish an organization’s or provider’s reputation and put at risk the data of the patients they serve.
Longhurst: UC San Diego recently won a $10 million grant to improve cybersecurity in healthcare. We’ve always treated patient data like gold, and with AI, that process is no different. Our approach to AI ensures that all of the data informing our algorithms never leaves our walls. When we work with vendors, we bring them into our secure environment.
Berger: Healthcare organizations are beginning to expand their policies and compliance documentation to cover these types of technologies, with some creating AI councils and even chief AI policy maker positions.
HealthTech: What infrastructure needs should healthcare organizations consider before implementing generative AI solutions?
Goswami: Even if an organization hasn’t updated its EHR, it usually doesn’t take much effort to get to the latest compatible version from Epic, unless the system is very old and hasn’t been patched or upgraded in the past few years. I would be surprised if fewer than 90% of customers met Epic’s or Microsoft’s minimum standards.
Berger: Generally, people build AI applications on top of existing technology infrastructure. Cloud providers pair with different large language models: OpenAI for Microsoft, and Anthropic for Amazon Web Services. You can use LLMs across multiple clouds, but you need to consider the pairings between the cloud players and the LLMs.
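One common way to manage those pairings is to keep application code behind a thin, provider-agnostic interface so a health system can switch clouds or models later. In this sketch, the Azure/OpenAI and AWS/Anthropic pairings come from Berger’s comment, but the client classes are hypothetical stubs rather than real SDK calls.

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Provider-agnostic interface so application code is not
    welded to one cloud's model pairing."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIClient(LLMClient):
    """Stub for Microsoft's pairing with OpenAI (e.g., Azure OpenAI Service)."""
    def complete(self, prompt: str) -> str:
        return f"[azure/openai] {prompt}"

class BedrockAnthropicClient(LLMClient):
    """Stub for AWS's pairing with Anthropic (e.g., via Amazon Bedrock)."""
    def complete(self, prompt: str) -> str:
        return f"[aws/anthropic] {prompt}"

def get_client(cloud: str) -> LLMClient:
    """Swap clouds without touching the calling code."""
    clients = {"azure": AzureOpenAIClient, "aws": BedrockAnthropicClient}
    return clients[cloud]()

print(get_client("azure").complete("Summarize this discharge note."))
```

The design choice is simple dependency inversion: callers depend on `LLMClient`, not on any one vendor’s SDK.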
75 percent of health system executives believe generative artificial intelligence could transform the healthcare industry, but only 6 percent have a strategy for implementing it.
Source: bain.com, “Beyond the Hype: Making the Most of Generative AI in Healthcare Today,” August 7, 2023
HealthTech: How can health systems successfully implement generative AI?
Goswami: This is a brand new technology, and we need to understand where the flaws are. For example, if cancer is found, patients don’t need a six-paragraph auto-generated email outlining all the financial assistance options available to them. No. A provider needs to pick up the phone and have a conversation.
Johnson: Most people in IT and health IT know about the people-process-technology triangle. But what most don’t know is that the necessary IT infrastructure starts with people. To use AI, people need to be willing and trained. And using AI isn’t completely free. AI saves people time, but every message generated by AI comes at a cost.
Longhurst: These are not technologies that should be deployed to every patient immediately and without testing. You need a partner that is committed to evaluating them carefully and ensuring there are no unintended consequences. Share your findings in publications and at vendor conferences so that lessons learned become standard features.