5 Egregious Virtual Assistant Fails and How to Overcome Them

Patty Riskind, CEO of Orbita

Three phenomena conspire to drive healthcare organizations towards digitization: 1) patient demand for the same online experience as other consumer sites (e.g., Amazon, Open Table); 2) manual processes contributing to overextended staff made worse by labor shortfalls; and 3) unrelenting need to control costs, optimize revenue and streamline workflows.

But one primary barrier stands in the way: the historically poor performance of chatbots.

Let’s face it. Every single one of us has found ourselves screaming “representative” at some point when struggling with an automated voice system. And who hasn’t simply given up when a website “chatbot” continually returned answers to every question BUT the one we asked? 

Let’s examine five factors that comprise the most egregious virtual assistant fails – and how to overcome them.

1. Bad source content

It stands to reason that any virtual assistant is only as good as the content that feeds it. If the source content is incomplete, inaccurate, or outdated, responses will be anything but helpful.

Keeping source material up to date is no small feat. Healthcare organizations are complex organisms. Providers come and go. Clinical protocols change. Patient needs vary. Hours, locations and parking options fluctuate. Intake and prep instructions undergo modifications. 

The team tasked with ensuring the virtual assistant is up to date is faced with a significant challenge.

The advent of generative AI is an unparalleled antidote to this problem. When used as an enabling technology, generative AI can consume and configure vast amounts of information in hours – when, in the past, it could take weeks or months. Plus, generative AI can be used on an almost endless catalog of materials. The new AI models can ingest content stored not only on websites but in manuals, spreadsheets, PowerPoints, PDFs (those that explain conditions, diagnoses and treatment plans, for example) and educational videos (such as those demonstrating post-surgical wound care or mobility exercises), to name a few.

In other words, the days of untrustworthy content should be behind us. To improve accuracy and completeness further, best-of-breed vendors build virtual assistants solely on provider-approved, validated and authenticated sources rather than pulling from the vagaries of the internet. 
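The content-grounding idea above can be sketched in a few lines: restrict retrieval to a whitelist of provider-approved documents and escalate when nothing matches. The source list, the keyword-overlap scoring, and the fallback message are all illustrative assumptions, not any vendor's actual implementation.

```python
import re

# Illustrative stand-in for a provider-approved, validated content catalog.
APPROVED_SOURCES = [
    {"title": "Visitor Parking Guide",
     "text": "Parking is free in Garage B after 5 pm."},
    {"title": "Pre-Op Instructions",
     "text": "Do not eat or drink after midnight before surgery."},
]

def tokenize(text: str) -> set:
    """Lowercase and strip punctuation so 'parking?' matches 'Parking'."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question: str) -> str:
    """Return the best-matching approved passage, or escalate if none match."""
    words = tokenize(question)
    best, best_score = None, 0
    for doc in APPROVED_SOURCES:
        score = len(words & tokenize(doc["text"]))
        if score > best_score:
            best, best_score = doc, score
    if best is None:
        # Nothing in the approved catalog matched: hand off rather than guess.
        return "I'm not sure -- let me connect you with a representative."
    return f'{best["text"]} (Source: {best["title"]})'

print(answer("Where can I find parking?"))
```

Because the assistant answers only from the approved catalog, an update to a source document immediately changes what patients are told, which is the maintenance advantage the section describes.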

2. Not “speaking the language”

The majority of virtual assistants aren’t equipped to fully understand the meaning and/or intent behind the words and phrases patients might enter into a Q&A dialog. If a patient asks an average virtual assistant a question that contains the phrase “foot doctor” instead of “podiatrist” or “orthopedist,” for example, the chatbot may not be able to process the question and might respond with “I’m sorry, but I don’t understand.” 

This is where generative AI’s veteran companions, conversational AI and natural language processing, come into play. These tools automatically crosswalk the lay terms patients often use into the clinical vernacular that characterizes healthcare. And they are configured to probe for the additional detail needed for hyper-personalized responses. For example, well-configured virtual assistants can respond to a patient query with questions of their own: asking “do you mean X?” or verifying age to better understand whether the best caregiver might be a pediatrician or a gerontologist. Similarly, they can ask patients to supply preferred provider gender, language, location (near home, office, or school?) and hours (during the workday? After school? Weekends?).
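The lay-term crosswalk described above can be sketched as a simple lookup that probes with a follow-up question when a phrase is ambiguous. The term table, the `interpret` function, and the follow-up wording are hypothetical; production systems rely on NLP models and curated medical ontologies rather than a hand-written dictionary.

```python
# Illustrative lay-term -> clinical-specialty table (an assumption, not a
# real ontology). Ambiguous entries map to more than one specialty.
LAY_TO_CLINICAL = {
    "foot doctor": ["podiatrist", "orthopedist"],
    "heart doctor": ["cardiologist"],
    "skin doctor": ["dermatologist"],
}

def interpret(query: str) -> dict:
    """Map lay phrases in a patient query to candidate clinical specialties."""
    q = query.lower()
    for lay, specialties in LAY_TO_CLINICAL.items():
        if lay in q:
            if len(specialties) > 1:
                # Ambiguous term: probe with a "do you mean X?" follow-up.
                options = " or ".join(specialties)
                return {"specialties": specialties,
                        "follow_up": f"Do you mean a {options}?"}
            return {"specialties": specialties, "follow_up": None}
    # No known phrase matched: ask for more detail instead of failing.
    return {"specialties": [],
            "follow_up": "Could you tell me more about your concern?"}

print(interpret("I need a foot doctor near my office"))
```

The key design point mirrors the text: when the mapping is one-to-many, the assistant asks a clarifying question rather than returning “I’m sorry, but I don’t understand.”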

The best of the best take functionality one step further, enabling the patient to schedule an appointment directly from the virtual assistant.

Virtual assistants that are most proficient in “speaking the language” are, not surprisingly, created by technology partners who specialize in healthcare. Virtual assistants not trained in nuanced clinical vocabularies or the complexities of care delivery (such as when pre-authorization is needed) almost always fall short.

3. Use of virtual assistants can mislead

Sometimes, it isn’t clear to the patient whether the dialog they are having is fully automated or if a human agent is behind it. This lack of transparency can quickly breed distrust among users, especially if the interaction with the virtual assistant is negative and affects how users view the provider’s brand overall.

The best defense against this is a good offense: Introduce the virtual assistant (whether delivered on a website, text, or voice) as an automated system. And clearly communicate how patients can exit the automated system and reach a live representative if needed. This provides reassurance that they will not get lost in an endless maze of information that never quite meets their needs.

Most automated virtual assistants are designed to easily manage routine and repetitive questions – which can comprise 75-80% of inbound requests. If this approach doesn’t successfully meet the patient’s needs, the virtual assistant can escalate the conversation to a live agent, who often can handle five or six digital interactions simultaneously. Further, if the digital interaction is not sufficient, the patient can be transferred directly to speak with a staff member by phone.
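The escalation ladder this paragraph describes (automated bot, then live chat agent, then phone) can be sketched as a small routing rule. The intent list, the parameter names, and the `route` function are illustrative assumptions.

```python
# Illustrative set of routine, repetitive intents a bot can close on its own.
ROUTINE_INTENTS = {"hours", "parking", "prescription_refill", "billing"}

def route(intent: str, bot_resolved: bool, digital_sufficient: bool = True) -> str:
    """Pick the next handler for a patient request."""
    if intent in ROUTINE_INTENTS and bot_resolved:
        return "bot"              # routine and answered: stay fully automated
    if digital_sufficient:
        return "live_chat_agent"  # a human joins the same digital channel
    return "phone"                # digital channel isn't enough: staff call

print(route("hours", bot_resolved=True))
```

The ordering encodes the article’s point: automation absorbs the routine majority of requests, and each escalation step is reserved for the cases the previous step could not resolve.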

Interestingly, the inverse process is also available from many virtual assistant providers. Patients may initiate an interaction by phone. However, the system can give them the option of moving the conversation to a virtual assistant via text or email – which is particularly attractive for off-hour calls or when long hold times occur.

4. Virtual assistants can seem uncaring

While use of a virtual assistant can be convenient for the patient and more efficient for the provider, it nevertheless can come across as “cold,” which is antithetical to the guiding principles of personal medical care. 

These challenges can be addressed via conversational AI, with dialogs designed for empathy. As noted earlier, virtual assistants developed specifically for the healthcare industry are especially equipped to address medical and care communications with appropriate sensitivity.

The latest generation of virtual assistants can be integrated with providers’ systems of record such as their EMRs or CRMs. This equips the virtual assistant with the ability to securely access information specific to the patient. This allows for delivery of test results with instructions for next steps depending on those results, updates on outstanding balances and payment terms, and even care instructions unique to the patient’s condition. Patients can access information that relates only to their situation, at their convenience.

5. Ignoring the need for change management

While most patients and providers are familiar with and often prefer online tools, healthcare has been slow to adopt them. The reasons include: 

1) Some patients and providers believe that healthcare can be effective only when delivered personally. They are loath to abandon the one-on-one model they have grown comfortable with. 

2) The relationship between provider and patient often represents a high-trust contract. Any interference with this perception triggers hesitancy and skepticism. 

3) Many healthcare providers are “we’ve always done it this way” holdouts and resist changes fundamental to their experience. Likewise, some patients doubt that an automated system can be trusted with something as precious as their well-being.

No magic wand can change these attitudes overnight. But education and engagement – plus the growing influence of younger patients and providers – are helping healthcare turn the corner. Metrics and measurable outcomes prove how automation can better serve clinical, business and relational performance. Transparent communication – accompanied by handholding and, in some cases, incentives – reassures patients.

And, of course, ensuring that automated systems have “break the glass” options leading to personal interactions when necessary is critical.

Without a doubt, virtual assistants have baggage to overcome. However, the promise and potential they hold to improve both the efficiency and effectiveness of healthcare should make their adoption a priority.

About Patty Riskind

Patty Riskind is CEO of Orbita, a HIPAA-compliant conversational AI platform that powers voice and chat solutions for healthcare and life sciences organizations, improving patient engagement, clinical efficiency, and outcomes. Patty is a dynamic healthcare tech leader with demonstrated success developing innovative analytic software products, selling digital solutions, streamlining operational processes, supporting C-level clients, and managing high-performance teams. She has achieved exponential growth with both small and large companies, reengineered stagnant operations, and energized company cultures for expansion. Before Orbita, Patty held leadership positions with Qualtrics (Head of Global Healthcare) and Press Ganey (Chief Client Experience Officer). She received her BA from Brown University and her MBA from the Kellogg School of Management at Northwestern University.
