Is Generative AI Safe for Healthcare? Yes, With Expert Oversight

Generative AI is all the buzz of late, with Elon Musk and others sounding alarm bells that it poses a risk to society. Regardless of where you stand, it raises the question: Is it safe for use today in healthcare applications? In an industry where accuracy can have a life-or-death impact, can it be relied upon to interpret data and create accurate new content, which could range from physician guidance to patient instructions and more?

The simple answer is: Yes, with expert oversight. Let’s deconstruct how generative AI works, its current strengths and limitations, and how it can safely be integrated into healthcare applications now and in the future.

How generative AI works

In essence, generative AI aims to create new and original content, mimicking the thinking power of humans, albeit computationally. On a macro level, it seeks to understand how humans think and behave: how, for instance, writers write, painters paint, and innovators innovate. Machine learning algorithms model this creative process, drawing on large amounts of data to “train” neural networks in a particular area. When a user provides a prompt, the model generates what is, in theory, an intelligent response.

While this processing is complex, there are really three parts to the generative AI equation: data, training, and neural networks.
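To make those three parts concrete, here is a deliberately toy sketch (not a real generative model): the “data” is a tiny corpus, “training” tallies which word follows which, and a prompt seeds generation from those learned statistics. Real systems do this with neural networks over web-scale text, but the data–training–generation loop is the same.

```python
from collections import defaultdict
import random

# 1. Data: a tiny corpus standing in for the web-scale text real models train on.
corpus = ("patients respond well when care plans are personalized "
          "and care teams follow up").split()

# 2. Training: learn which word tends to follow which -- a stand-in for the
#    statistical patterns a neural network learns at vastly larger scale.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# 3. Generation: a prompt word seeds a chain of learned continuations.
def generate(prompt: str, length: int = 5) -> str:
    words = [prompt]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no learned continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("care"))
```

The output is plausible-sounding but purely statistical, which previews the point below: fluent output is not the same as verified output.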

Consider, for instance, that you want to create population health content for clinical use. You could engage a person to conduct exhaustive research and manually write content, for example, to support precision medicine for personalized treatment planning. Or you could prompt generative AI to draw from a relevant database, e.g., the patient’s electronic medical record, to compile the information. If the AI is trained on related subjects, such as psychology, mental health, and social determinants of health, the neural network can take the medical content even deeper.

What could go wrong?

Just as our minds don’t always connect the dots accurately, the same holds true for generative AI. While the output can look logical and correct at first glance, it may not be scientifically accurate. Without experts who can verify the content, it is possible to publish false information, which can be very dangerous in the precision medicine example and other use cases. In the hands of a novice, this is particularly problematic, because a novice may not know what is true and what is not. Taking AI-generated content at face value might create more harm than good.

A better approach is to use generative AI in healthcare to aid experts. For instance, if you are a clinician looking for the latest information on Alzheimer’s to create a care plan, you could give the AI a prompt, review what it comes back with, verify and draw on what’s accurate, and discard anything you don’t agree with.

As part of the process, you may also gain surprisingly helpful insights you hadn’t thought of. We all have knowledge gaps, which generative AI can help to fill, generating permutations and combinations of ideas to challenge our thinking. From a creative standpoint, there is no harm in this. As with the population health example, there are many insights that can be drawn from racial, ethnic, or social information. However, just know that this may not be scientifically validated or clinically proven when it comes to patient care.

Biases are also something to look out for. Generative AI is inherently biased because of the underlying data it learns from, including social media dialogue on platforms such as Twitter and informational websites. ChatGPT, for example, was trained on vast amounts of Reddit content. As a rule of thumb, whatever biases exist on the internet exist inside the neural network.

Making healthcare content creation more efficient

Ultimately, generative AI can help adopters realize efficiency gains in healthcare content creation. For instance, in the case of health insurance, generative AI could be trained on an understanding of a health plan to help optimize reimbursements. It could also be used to assess the risk profile of a member based on factors like re-admissions.

From this information, users could create policy game plans for different types of patients. What does the game plan look like if the patient is a healthy teenager? How would it vary for a Baby Boomer with multiple co-morbidities? What if the patient has mental health issues such as depression? In population health, generative AI could reduce the time and money needed to create and validate care plans, or even action plans for specific patients. Again, however, clinicians have to know what is factual and what is not; otherwise, it could create harm.
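To illustrate the kind of risk profiling described above, here is a minimal sketch. The function name and weights are entirely hypothetical, invented for illustration; a real actuarial or clinical model would be trained and validated on outcome data, not hand-tuned like this.

```python
def member_risk_score(readmissions: int, chronic_conditions: int, age: int) -> float:
    """Toy risk score on a 0-1 scale.

    Hypothetical weights for illustration only -- real risk models are
    fit to outcome data and clinically validated before use.
    """
    score = (0.4 * readmissions          # prior re-admissions weigh heavily
             + 0.3 * chronic_conditions  # each co-morbidity adds risk
             + 0.01 * max(age - 40, 0))  # age contributes past 40
    return min(score, 1.0)               # cap at 1.0

# The article's two contrasting patients:
teen = member_risk_score(readmissions=0, chronic_conditions=0, age=16)
boomer = member_risk_score(readmissions=2, chronic_conditions=3, age=68)
```

Here the healthy teenager scores 0.0 and the Baby Boomer with multiple co-morbidities caps out at 1.0, which is the kind of stratification a policy game plan would key off.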

Right now, it is important to note that the AI does not ask clarifying questions – users do. This means that during initial adoption, subject matter experts and consultants who know the nuances need to ask the kinds of questions that frame a meaningful response. This need to draw better responses out of the AI is giving rise to the emerging field of “prompt engineering”: understanding how the models behave and carefully constructing prompts, and sequences of prompts, so that large language models produce the desired, high-quality output.
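In practice, prompt engineering often means assembling structured templates rather than typing free-form questions. The sketch below is a hypothetical example (the function, wording, and fields are assumptions, not any vendor's API): it frames the model's role, injects patient context, and constrains the output so a clinician can verify each claim.

```python
def build_care_plan_prompt(condition: str, patient_summary: str) -> str:
    """Assemble a structured prompt for a large language model.

    Hypothetical sketch: real prompt engineering iterates on role framing,
    context, and output constraints until responses are reliably verifiable.
    """
    return (
        "You are assisting a licensed clinician.\n"
        f"Summarize current, guideline-based care options for {condition}.\n"
        f"Patient context: {patient_summary}\n"
        "List each recommendation with its source so it can be verified. "
        "If the evidence is uncertain, say so explicitly."
    )

prompt = build_care_plan_prompt(
    condition="Alzheimer's disease",
    patient_summary="78-year-old with hypertension and mild cognitive decline",
)
```

The resulting string would then be sent to whatever model the organization uses; the point is that the expert's framing, not the model, does the clarifying.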

Might that new AI career specialization change in the future? Yes, but that is a topic for another day. Today, if you are using generative AI, you need to know what you want to make happen. The key is knowing what you are trying to optimize, rather than letting the tool set the agenda.

The future of generative AI

Generative AI is a promising technology with the potential to create more engagement through chatbots across different use cases. It can also help generate more chatbots faster, using large language models to create and refine more human-like conversations.

We also believe that generative AI can be used to dive deeper into chatbot use cases like post-discharge, exploring the dynamic nature of engagement to improve patient follow-up. Generative AI could help to sketch what the next level of engagement would look like to better inform chatbot design sessions. It could also shrink cycle time, providing designers with a sample that they can take to the next level.

Over time, one thing is certain: generative AI will get smarter and better, transplanting intelligence from one model to another. It is not a replacement for humans – generative AI works with only two senses, the visual and the auditory, unlike humans, who can also touch, taste, and smell – but it will aid us in ways that could impact white-collar workers just as robots have historically impacted blue-collar workers.

While this presents a challenge for society, it will also be a seed for creativity and new growth. Everyone can be a creator, with some ultimately becoming generative AI natives who innovate in ways we haven’t thought of before. We cannot foresee how it will play out, but much like the smartphone with TikTok, it is limited only by our imagination and ingenuity.

By the way, I did not use ChatGPT or any other AI platform to generate this content.

Photo: Ole_CNX, Getty Images


