Explanations in online symptom checkers could improve user trust

UNIVERSITY PARK, Pa. -- Have you recently turned to your mobile device or computer to find out if your cough, sniffle or fever could be caused by COVID-19?

The online symptom checker you used may have advised you to stay home and call your medical provider if symptoms worsen, or perhaps told you that you may be eligible for COVID-19 testing. But why did it make the recommendation it did? And how should you know if you can trust it?

Those are questions that researchers at the Penn State College of Information Sciences and Technology recently explored in a project that augmented online symptom checkers with explanations of how the systems generated their probable diagnoses and suggestions -- while also studying users' perceptions of those recommendations.

"People are confused about why online symptom checkers ask certain questions and how they make certain recommendations and decisions," said Chun-Hua Tsai, assistant research professor and first author on the research paper. "These interactions are not very transparent, which is OK if you just have a common cold, but with COVID it could be pretty serious."

Tsai explained that current online symptom checkers, which are powered by machine learning algorithms, use the information that users provide to guide the checker's next steps toward a possible diagnosis. However, the AI-driven systems' lack of transparency and comprehensible language could result in unintended -- and potentially tragic -- consequences if users do not fully understand the recommendations they receive.

For example, if an online symptom checker simply recommended that a user get tested for COVID-19 based on the user's input, it could cause undue worry or unnecessary trips to a medical facility. Likewise, a user who learned from an online symptom checker that they could possibly have the coronavirus might make a poor medical decision, such as taking medication on their own instead of getting tested or seeking proper medical treatment.

"Explanation in medical diagnosis interactions emphasizes the importance of pragmatics," said Jack Carroll, distinguished professor of information sciences and technology and one of the research paper's authors.

The team's work has potential application beyond COVID-19, said Xinning Gui, assistant professor of information sciences and technology and another collaborator on the project.

"Even before COVID-19, tens of millions of people have used symptom checkers to self-diagnose or self-triage for numerous health conditions," she said. "However, little attention is paid to critical issues such as legitimacy, safety, trust and transparency from a user's perspective. Our work is just the start to fill this gap."

In their work, the researchers reproduced a user's interaction with an online symptom checker and added explanations for why the chatbot asked certain questions and how the recommendations were generated -- for example, if the suggestion was drawn from Centers for Disease Control and Prevention guidelines.

"Based on these explanations, our findings showed that the users were more confident (in the accuracy of the symptom checker) when they received these recommendations," said Tsai. "Transparent symptom checkers could be really useful for people to understand their own situation to make a better medical decision. Potentially, this could [also] be a tool to use in responding to the pandemic public health crisis that we're facing today."

In their study, the researchers interviewed users of online symptom checkers to understand whether explanations would improve their experience and their trust in the online tools. The interviews revealed that users are often confused by the questions that chatbots ask and unsure which of their symptoms and answers led to the suggested diagnosis and advice.

"For the possible causes listed to me, (the chatbot) doesn't tell me why my symptoms have a match. It just says something in a statistical way, like how many people might have this cause. I think the app should show the relations, like explain why it thinks this might be a possible cause, which question it asked, and which answers I gave have led me to this diagnosis," said one survey participant, as published in the research paper.

Then, the researchers designed a COVID-19 online symptom checker with three explanation styles: rationale-based, providing an explanation for each question the system posed to the user; feature-based, offering a personalized summary based on the user's answers; and example-based, presenting the case of a patient who gave the same answers and received the same clinical recommendation as the user.
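The paper's implementation is not reproduced here, but a rough sketch can illustrate how the three styles differ. All names, sample questions and message text below are hypothetical, chosen only for illustration, and are not the researchers' code:

```python
# A minimal illustrative sketch (not the researchers' implementation) of how
# a symptom-checker chatbot might attach the three explanation styles.
from dataclasses import dataclass

@dataclass
class Answer:
    question: str   # e.g., "Have you had a fever in the last 48 hours?"
    response: str   # the user's reply, e.g., "yes"
    rationale: str  # why the system asked (shown in the rationale-based style)

def rationale_explanation(answer: Answer) -> str:
    """Rationale-based: explain each question as the system asks it."""
    return f"We asked '{answer.question}' because {answer.rationale}."

def feature_explanation(answers: list[Answer], recommendation: str) -> str:
    """Feature-based: a personalized summary of the answers behind the result."""
    features = "; ".join(f"{a.question} -> {a.response}" for a in answers)
    return f"Based on your answers ({features}), we recommend: {recommendation}."

def example_explanation(recommendation: str) -> str:
    """Example-based: point to a matching case with the same answers and outcome."""
    return (f"Another user who gave the same answers also received the "
            f"recommendation: {recommendation}.")

# Hypothetical usage:
answers = [
    Answer("Have you had a fever in the last 48 hours?", "yes",
           "fever is a common COVID-19 symptom per CDC guidance"),
]
recommendation = "stay home and call your medical provider if symptoms worsen"
print(rationale_explanation(answers[0]))
print(feature_explanation(answers, recommendation))
print(example_explanation(recommendation))
```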

They found that the explanations not only could significantly improve the user experience, but also could facilitate medical decision-making and improve user trust in the diagnosis.

"Explanation could empower health consumers to make informed decisions," said Gui. "Without explanation about how the symptom checkers come to the results and the underpinning evidence, health consumers will face challenges in comprehending or trusting the diagnostic results."

She added, "Our study proves that providing suitable explanations can help users better interpret the results and make informed decisions."

The researchers' findings could inform the future design of online symptom checkers, helping users navigate a range of medical issues beyond COVID-19.

"Our findings could advance the research area of health recommender systems and explainable AI [artificial intelligence] in terms of personal health care, fairness and user trust," said Tsai.
