In the ever-evolving world of health IT, one growing trend is the introduction of consumer tech, such as virtual assistants, into the medical space.
But because healthcare is such a specialized field, it’s not simply a matter of buying new devices or software and expecting them to work. Any new technology has to meaningfully improve how clinicians deliver care.
That means helping providers more quickly and efficiently diagnose and treat patients, conduct follow-up appointments, and document patient encounters, all within an EHR that protects patient data as it shares that information across multiple care settings.
One of the most exciting developments in this area is MEDITECH Expanse Virtual Assistant, which uses the same technology that people use to ask their phone for driving directions or have their smart speaker add items to their shopping list.
Expanse Virtual Assistant is delivered through Nuance Dragon Medical One, a speech recognition solution featuring sophisticated conversational AI dialogue that allows physicians to navigate their Expanse EHR hands-free and improve the way they document care.
I recently had the opportunity to talk with Anuradha Durfee, Senior Principal Product Manager at Nuance, about the developments in speech recognition that make touchless interaction with an EHR practical for physicians, and about the potential for future improvements to the system.
How would you say that virtual assistants improve speech recognition capabilities?
Nuance has a long history in speech recognition, and while virtual assistant technology has become more pervasive in our everyday lives, the application in the healthcare environment is new. Nuance Dragon Medical One (DMO) is the core technology that supports Expanse Virtual Assistant.
When you interact with commercial products like smart speakers, the vocabulary is less complex, and the context is less sensitive. Maybe you turn a light on and off, or you turn your faucet on and off, or set a countdown timer. These types of commercial interactions do not have the same consequences as those interactions in a healthcare space.
As a result, the Expanse Virtual Assistant is designed to handle the complexity and specificity of healthcare. If you are placing a medication order, you need to include the drug name, dosage, frequency, and administration route as data elements for the system to take action on it. If any of these elements are incorrect, it could have negative consequences.
That’s a level above and beyond what you see with commercial devices used in your home.
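The requirement described above, that a spoken medication order must contain every element before the system acts on it, can be sketched as a simple completeness check. This is an illustrative sketch only; the field names and `MedicationOrder` type are hypothetical, not MEDITECH's or Nuance's actual data model.

```python
from dataclasses import dataclass
from typing import List, Optional

# Required elements of a spoken medication order (illustrative names).
REQUIRED_FIELDS = ("drug_name", "dose", "frequency", "route")

@dataclass
class MedicationOrder:
    drug_name: Optional[str] = None
    dose: Optional[str] = None
    frequency: Optional[str] = None
    route: Optional[str] = None

def missing_elements(order: MedicationOrder) -> List[str]:
    """Return the elements a dialogue system would still need to ask for."""
    return [f for f in REQUIRED_FIELDS if not getattr(order, f)]

# "Order amoxicillin 500 milligrams" leaves frequency and route unspoken,
# so the assistant must prompt for them rather than act on an incomplete order.
partial = MedicationOrder(drug_name="amoxicillin", dose="500 mg")
print(missing_elements(partial))  # ['frequency', 'route']
```

The point is that a healthcare assistant cannot fall back on defaults the way a smart speaker can; any missing or misheard element has to be resolved with the clinician before the order proceeds.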
And with COVID-19 weighing heavily on our health systems, we need to reduce the number of surfaces that clinicians touch, whether that is a mouse and a keyboard or a screen. Being able to navigate the chart using their voice can help decrease the spread.
That’s a great point about how this touchless approach is an advancement over putting your hands on a device, and that’s exciting because it accelerates ordering. The system also allows a clinician to say one thing to launch a command and complete several steps.
The initial set of tasks focuses on reducing the information foraging that clinicians need to get a comprehensive view of their patient’s current health status. These are called "Show me" commands. We also have the capability to search the chart for lab values, diagnoses, prior notes, etc.
Being able to bring that clinician up to speed as fast as you possibly can not only saves time for the provider, but ultimately results in a better health outcome for the patient.
It’s something that helps me, as opposed to simply assisting me — and that leads into my next question: Without getting too technical, how do these systems talk to each other?
In basic terms, Dragon Medical One collects all of the audio and processes it through what we call the Natural Language Understanding engine, or NLU.
What Nuance has done is developed healthcare models that allow for more complex language, as well as context. It can take and parse all of that information, and then send it over to MEDITECH Expanse and say, “This is what the user intends to do.” Then the EHR will take over that experience by displaying information or interacting with the clinician to make sure they have all the information they need to complete the action.
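The handoff described above, where the NLU engine tells the EHR "this is what the user intends to do," can be sketched as intent dispatch. The intent names and payload shape below are hypothetical, invented for illustration; they are not Nuance's or MEDITECH's actual interface.

```python
# Illustrative sketch of the NLU -> EHR handoff: a parsed intent with
# slot values is routed to the matching EHR action. All names are
# hypothetical, not a real Nuance or MEDITECH API.
def handle_intent(intent: dict) -> str:
    """Route a parsed intent to the appropriate EHR action."""
    name = intent.get("intent")
    slots = intent.get("slots", {})
    if name == "show_labs":
        return f"Displaying {slots.get('lab', 'recent')} results"
    if name == "place_order":
        return f"Starting order for {slots.get('drug', '?')}"
    return "Sorry, I didn't understand that request"

# "Show me the latest potassium" might parse to:
parsed = {"intent": "show_labs", "slots": {"lab": "potassium"}}
print(handle_intent(parsed))  # Displaying potassium results
```

In this division of labor the speech layer owns transcription and intent parsing, while the EHR owns the interaction that follows, such as displaying results or prompting for missing order details.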
It’s these action commands that provide a higher level of automation. A physician told me that it’s like the difference between a first-year med student and a third-year med student. The first-year student will be able to answer the question and can get you the information. A third-year student will get you the information and do something with it.
So as we build out virtual assistant capabilities, we’re thinking about what the physician needs help with. What do they actually need to accomplish during the day, and how can the system help them accomplish it?
One of the great features of Expanse Virtual Assistant is how it “talks back” to the clinician. After they give a command, there’s a visual confirmation of what they requested. How does that work?
That’s called text-to-speech, and there are typically two interaction styles: issuing unilateral commands, where you’re requesting information, and having a dialogue with the system, where you’re working together to clarify what needs to get done or what you’re trying to accomplish.
A good example is adding orders. The speech recognition software will collect the information from the clinician and then feed it back in the form of a non-intrusive, temporary message so the provider can confirm that Nuance got it right. This is the sort of feedback we need when translating verbal orders to written orders.
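The read-back step described above, echoing the recognized order so the clinician can confirm it before it becomes a written order, can be sketched as a confirmation message. The field names here are illustrative assumptions, not the product's actual format.

```python
# Hypothetical sketch of the read-back confirmation: the recognized
# order is echoed to the clinician for verification before any action
# is taken. Field names are illustrative only.
def confirmation_message(order: dict) -> str:
    return (f"Ordering {order['drug']} {order['dose']}, "
            f"{order['route']}, {order['frequency']}. Is that correct?")

spoken = {"drug": "lisinopril", "dose": "10 mg",
          "route": "oral", "frequency": "once daily"}
print(confirmation_message(spoken))
# Ordering lisinopril 10 mg, oral, once daily. Is that correct?
```

This mirrors the clinical practice of repeating verbal orders back before transcribing them: the feedback is non-intrusive, but it gives the provider a chance to catch a recognition error before it reaches the chart.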
Right, and they can also enter a descriptor for the data, like the specific condition to be treated, which means that we can spend more time thinking about the patient and less time thinking about how to navigate the EHR. When we think about the system’s future, it’s developing those predictive capabilities to identify the clinician’s intent and answer it with decision support.
Anything to add about the current development?
We have a lot to learn about the appetite of clinicians to get feedback in different ways. We want a virtual assistant to be helpful, but we don’t want physicians to feel like it’s a third person in the room while they’re providing care.
We’re learning that virtual assistants let the patient hear the physician say, “This is what I’m actually documenting in your record,” so both the clinician and the patient become co-creators of the patient record.
On the other hand, we recognize that people might be nervous about having something that is always listening to you, or that it’s recording you. What we are trying to do is find that balance between being helpful but also unobtrusive in that environment.
You really want it to be natural so that it doesn’t interfere with the patient-provider relationship.
Right. Even though we’ve changed the modality from visual to auditory, there can still be an auditory persona that is present in the room whether it’s said or unsaid.
Just being able to dictate to the system and queue up orders can help organize both the patient and the provider because, at the end of a visit, things can get complicated. Being able to queue orders to review later provides me, as the doctor, with an extra level of support. It minimizes confusion and helps me remember what I was thinking earlier.
Exactly. As more clinicians use the system, we’ll see their patterns, and the system will be able to react to explicit requests and implicitly understand what the clinician is trying to do.
Watch a demonstration of Expanse Virtual Assistant by Dr. Steven Jones: