AI, ChatGPT in Healthcare & Tackling Clinical Staff Skepticism

Narinder Singh, co-founder and CEO at LookDeep Health

The potential of AI in clinical settings is clear, yet adoption of the technology continues to lag. Despite the myriad ways it can alleviate the burden placed on healthcare professionals, there is still significant skepticism among staff about embracing AI. Narinder Singh, co-founder and CEO at LookDeep Health, believes that this reluctance and skepticism result from ambiguity over what AI even means. We recently spoke with Narinder about how clinical executives can address the hesitation to embrace AI and implement it in a way that meets minimal resistance from staff.

Narinder Singh, co-founder and CEO at LookDeep Health: AI is a tool that has the potential to transform medicine. From a computational perspective, no particular kind of AI seems more effective than another (machine learning, deep learning, etc.). The impact is seen in targeted clinical applications that reduce morbidity and mortality while controlling the cost of care. Sometimes “old school” algorithms get the job done; other times, the latest and greatest is needed.

What does have an unquestioned impact is large data sets curated with quality, quantity, and diversity in mind. These data sets are absolutely necessary for every kind of AI, and they drive clinical transformation and impact.

With that in mind, from a clinical perspective, it is most useful to focus on how AI can help clinicians and affect care in a way that maps both to how they already think and to general intuitions about “intelligence.” These four categories are:

Ambient Sensing – Many problems in healthcare result from information that simply does not exist. How did the patient sleep? Did they eat? How are they moving? Today we often rely on nurses and doctors to create that information (e.g. assessments) for themselves and others to use in later decisions. This use of AI is not about producing a prediction (y) but about creating more inputs (x) for human and machine decision-making.
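As a concrete illustration of creating inputs (x) rather than predictions, the sketch below turns a stream of raw bedside observations into derived features a care team or downstream model could consume. The event names and record shape are hypothetical, invented purely for illustration, not LookDeep's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record: one observation an ambient sensing system might emit.
@dataclass
class Observation:
    timestamp: datetime
    event: str  # e.g. "out_of_bed", "eating", "staff_present"

def derive_features(observations, window_hours=24):
    """Turn raw ambient observations into decision inputs (x):
    how often the patient left the bed, whether they ate, staff visits."""
    counts = {}
    for obs in observations:
        counts[obs.event] = counts.get(obs.event, 0) + 1
    return {
        "hours_observed": window_hours,
        "out_of_bed_events": counts.get("out_of_bed", 0),
        "meals_observed": counts.get("eating", 0),
        "staff_visits": counts.get("staff_present", 0),
    }
```

The point is that nothing here predicts anything; it simply manufactures information that previously would have required a nurse's manual assessment.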

Diagnostic – Taking in a lot of data and determining what is happening now is a core clinical activity. Humans are limited in drawing proper conclusions from many distinct data points – doctors are no different. Even a “routine” workup for pneumonia or urinary tract infection can involve dozens of data points. When it comes to very sick patients in the hospital, the data grows exponentially. AI can improve diagnostic accuracy and help when there is conflicting diagnostic data. AI also can help reduce variation in diagnostic performance across providers. 

Predictive – Examining what has happened before with a patient in order to infer what will happen next is a natural progression. “Next” can be different time horizons in different clinical contexts. If you are a large healthcare system, and you want to reduce hospital admissions, you may want to know which of your patients are most likely to visit an emergency room in the next 30 days so care teams can pay more attention to them. If you are a busy tertiary care hospital, you might want to know which of the currently admitted patients may need to be transferred to the ICU in the next 24 hours because their sepsis is getting worse. 
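The 30-day ER example above can be sketched as a simple logistic risk score. The coefficients below are hand-set and hypothetical, chosen only to show the shape of such a model; a real system would train on historical EHR and claims data and be clinically validated.

```python
import math

# Hypothetical, hand-set coefficients for illustration only.
WEIGHTS = {
    "er_visits_last_year": 0.8,
    "chronic_conditions": 0.5,
    "age_over_65": 0.6,
}
BIAS = -3.0

def er_risk_30d(patient):
    """Logistic score: estimated probability of an ER visit in 30 days."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_outreach(patients, threshold=0.5):
    """Return the patients whose risk exceeds the care team's threshold."""
    return [p for p in patients if er_risk_30d(p) >= threshold]
```

A care team would then direct attention to the flagged patients, which is exactly the "pay more attention to them" workflow described above.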

Prescriptive – determining what is happening now or next and what actions should be taken. This takes the next step and starts to incorporate clinical judgment. Take the above example of knowing which patients may visit an emergency room in the next 30 days. Prescriptive AI would identify, say, a heart failure patient at risk of decompensation and recommend a clinical intervention to reduce the likelihood of an ER visit.
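Continuing the heart failure example, the prescriptive step maps a predicted risk and a condition to a suggested action. The rules and action strings below are hypothetical placeholders; real prescriptive systems encode clinical guidelines and keep clinicians in the loop.

```python
# Hypothetical rules mapping (condition, predicted risk) to a suggested
# next action for the care team. Thresholds and actions are illustrative.
def recommend_action(condition, risk):
    """Suggest an intervention given a condition and a 30-day ER risk score."""
    if risk < 0.3:
        return "routine follow-up"
    if condition == "heart_failure":
        return "schedule weight and diuretic check within 48 hours"
    return "nurse outreach call within 72 hours"
```

Note the division of labor: the predictive layer estimates risk, while the prescriptive layer attaches clinical judgment to that estimate.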

Mapping AI solutions to these categories helps establish the context for evaluating their effectiveness. However, no one pattern is inherently more valuable than another. The difficulty of creating higher-level solutions makes them both more compelling and harder to do well. The real insight is that an AI solution must have a clearly demonstrated advantage over current practice (cost, speed, completeness, etc.) and must align with the clinical workflow so that it can quickly permeate practice and drive change.

Narinder Singh: Nurses and doctors are spread thin. Every minute in their day matters. Reviewing new kinds of data, waiting for a special device, or following a new checklist takes time away from their top priority – the patient. Understanding and respecting their workflows is the most straightforward way to build adoption and create clinical champions for new innovations.

Clinical workflows represent decades (if not centuries) of accumulated wisdom about delivering care to sick patients. On one hand, no one would claim these workflows are perfect. On the other, aligning with them demonstrates respect and thoroughness on the part of any innovation. These workflows are also highly “analog,” so applying digital AI technologies requires a great deal of analog-to-digital conversion (and vice versa).

Two concepts can help AI integrate into clinical workflows: driver assist and human + AI integration. Driver assist means that AI won’t autonomously make decisions (just as almost no cars are fully autonomous or self-driving); AI, such as computer vision, is there to help and assist. Human + AI integration means that humans incorporate AI into their existing workflows rather than being displaced by it.

Ultimately, AI in hospitals will incorporate computer vision and instantly analyze the EHR, vital-sign data, and more to aid in key decisions. This, however, is even earlier in maturity than proclamations about self-driving cars were a decade ago. What we need today is AI that prompts nurses and doctors on where their judgment is needed – just as a driver-assist lane-change warning prompts us that our attention is required.

This mode of human + AI allows alignment with existing workflows: the system can survey all patients, investigate based on known patterns, and gather relevant context for the specific scenario (e.g. assessing patients who are immobile in bed for long stretches for potential pressure-injury interventions).

This type of human + AI integration limits the amount of change imposed on the bedside team – for the frontline team, it is simply extra help they desperately need, delivered through the familiar ‘interface’ of another clinical team member. The model allows innovation to quickly have an impact on the patient by lessening the cognitive load on the provider.

Narinder Singh: A study in the journal JAMA Health Network found that burnout was reported by 40 to 45 percent of clinicians in early 2020, 50 percent in late 2020, and 60 percent in late 2021. As this issue becomes more prevalent, we face the reality of an unprecedented number of healthcare workers leaving, or on the verge of leaving, the profession.

Physician burnout is a psychological syndrome characterized by emotional exhaustion, depersonalization, and a reduced sense of personal accomplishment. Fewer staff and sicker patients make it impossible for staff to deliver the level of patient care that they were once able to provide. Not only do patient outcomes suffer as a consequence, but healthcare staff are left with feelings of guilt and a lack of purpose.

The chronic stress that drives burnout has several causes. Attention is pulled in a million different directions, making it hard to know what to focus on. AI has the potential to take in data about all of a doctor’s patients and guide attention to the issues – and the patients – that need care right now. How do I keep an eye on a patient my gut tells me is getting sicker when I don’t have time to check in every hour?

The next generation of AI solutions has the potential to go beyond simple data collection and incorporate computer vision, allowing these systems to sense what is happening with patients in a manner similar to how nurses and doctors do. Computer vision can identify who is in the room, how the patient is moving, whether they are stable in bed, and much more. These measures are easy for a person to understand and validate (e.g. confirming that the patient is in fact out of bed), making it easier to trust but verify what the AI is reporting.

Another source of stress is repetitive tasks that drain time from a doctor's main goals. Automating those tasks with AI would be a huge relief for busy doctors. Ambient sensing can handle routine checks for a patient who is otherwise recovering according to plan. Aggregating and summarizing heterogeneous data scattered across the EHR into easily digestible reports also raises doctors' efficiency by removing repetitive tasks and increasing the relevance of the information they do continue to review.
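The aggregation idea can be sketched simply: collapse many heterogeneous EHR entries into one latest-value snapshot per category. The four-field record format here is an assumption made for illustration, not a real EHR schema.

```python
from collections import defaultdict

def summarize_ehr(records):
    """Group heterogeneous EHR entries by category and keep only the
    most recent value of each measurement, producing one digestible
    snapshot. Records are (iso_timestamp, category, name, value) tuples."""
    latest = {}
    # Sorting by ISO-8601 timestamp string puts entries in time order,
    # so later assignments overwrite earlier values of the same measurement.
    for ts, category, name, value in sorted(records):
        latest[(category, name)] = value
    report = defaultdict(dict)
    for (category, name), value in latest.items():
        report[category][name] = value
    return dict(report)
```

A clinician reviewing the output sees one current value per vital or lab instead of paging through every historical entry.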

Narinder Singh: ChatGPT has increased awareness of AI at every level in every industry, including healthcare. Executives and assistants, clinicians and administrators have all seen the allure of typing a question and simply getting an answer. In an industry where complex systems, changing regulations, and complicated billing rules dominate the time spent caring for patients, this simplicity has particular appeal. At the same time, clinicians have seen AI promise to change medicine (Watson), threaten to eliminate radiology as a profession, and make similarly naive predictions about the future.

ChatGPT and LLMs succeed because they understand the syntax and rules of language; they fail when they extrapolate in novel areas. Given the high stakes of patient care and the massive inefficiency of healthcare administrative functions, this will likely have several implications in the near term.

Poor tools and billing requirements have polluted medical records with bloat. ChatGPT's ability to quickly synthesize that information into the crux of what is happening – and has happened – with the patient will be a perfect antidote. Yet the very EHRs that made it easier to bill will be the ones most responsible for integrating these tools into most physicians' practice.

The base technology behind medical transcription services will be equalized. Whether or not ChatGPT can operate without human review, the companies that address this gap will have base technology as sophisticated as anything from industry giants or from startups that have raised hundreds of millions of dollars. User experience, product design, pricing, and distribution will differentiate future solutions far more than the track record of existing ones.

Developers with ChatGPT expertise are both its biggest proponents and its biggest critics. They see where it hallucinates libraries that do not exist, but they also treasure how quickly it can bring them up to speed. Doctors may have a similar experience when prompting such AI. However, there is no compiler or easy path to validate “the code” – i.e., a treatment plan for a patient. Until this is better addressed, ChatGPT will likely supercharge the questions patients ask more than the answers doctors can rely on.

Finally, much of medical care is closer in analogy to driving a car than to writing a novel. There are concrete rules and dangers with clear right and wrong answers. Traditional ML and AI prediction form the heart and soul of such approaches. Examining imaging, reviewing blood tests, making decisions on drugs, interventions, and surgical steps – these patterns of problems sit within a domain of AI that is already deeply focused on healthcare, where models are trained on right and wrong answers and extrapolate to broader classes of the same problem. These domains hold massive amounts of work and enormous potential if solved – but they will not be addressed by a simple textbox of input.


