AI is becoming more advanced each day, and healthcare organizations across the country are embracing new models to help alleviate the long list of inefficiencies that plague the industry. Providers and other healthcare companies aren't just jumping on the AI train, though. Most believe this new generation of AI technology has real potential to change healthcare delivery for the better.
While the new AI era is certainly exciting, the healthcare industry still lacks a comprehensive framework to regulate these tools. In the absence of such guidelines, healthcare leaders are creating their own governance strategies to deploy AI responsibly, executives said during a panel discussion on Thursday at MedCity News' INVEST Digital Health conference in Dallas.
Cedars-Sinai vets every AI model introduced into the health system, said Mike Thompson, the organization's vice president of data intelligence. The health system makes sure it knows exactly how the model was developed, who created it, what data it was trained on and how it was validated, he said.
AI models are only as good as the data they’re trained on, so it’s incredibly important for providers to sound the alarm if a product was trained on biased or subpar data, Thompson noted.
"I've never hired a physician without asking them 'What's your experience?' or 'How do you answer this question?' So you should never hire a large language model that gives clinicians answers unless you know that you vetted that model," he explained.
Ginny Torno, Houston Methodist’s executive director for innovation and clinical IT, agreed with Thompson. She said her health system has “several different workgroups and councils” that help it frame its AI strategy.
One helpful way to gauge the worthiness of an AI tool is to ask whether it helps clinicians reach decisions faster, pointed out Matthew McGinnis, vice president of data and analytics at Evernorth.
“Our philosophy on AI is that it’s augmented intelligence. How do we help the human get to the decision faster? How do we help them synthesize and be able to work through the vast amounts of information they’re seeing in a more efficient way?” he asked.
The emphasis on "augmented" is important to McGinnis. For example, doctors may use an AI tool to take clinical notes during a telehealth visit. The note is drafted automatically, but the doctor still reviews and edits it before it's sent to the EHR. By giving doctors an opportunity to review the AI's output and decide whether it's appropriate, providers preserve their doctors' autonomy, McGinnis noted.
Most people in the healthcare industry understand that AI is a complement to doctors rather than a replacement for them, said Ishi Health CEO Ajay Srivastava. But the industry still has some work to do when it comes to figuring out the best use cases for AI and how far it wants to take the technology, he pointed out.
An AI tool that predicts the risk of myocardial infarction is very different from an algorithm that tells us whether a patient needs to come in for a check-up, Srivastava said. For the time being, it may be wiser for providers to focus on "low-hanging fruit" use cases, such as clinical documentation generation and patient engagement, he said.
With generative AI tools still so nascent in healthcare, it's important that providers enact governance guidelines of their own. The industry may lack a comprehensive safety framework at the moment, but that doesn't mean providers and health plans should use AI however they please, the panelists cautioned.
Photo: Walter Lim, Breaking Media