AI Can Outperform Doctors. So Why Don’t Patients Trust It?

Our recent research indicates that patients are reluctant to use health care provided by medical artificial intelligence even when it outperforms human doctors. Why? Because patients believe that their medical needs are unique and cannot be adequately addressed by algorithms. To realize the many advantages and cost savings that medical AI promises, care providers must find ways to overcome these misgivings.

Medical artificial intelligence (AI) can perform with expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots dispense medical advice for the United Kingdom’s National Health Service in lieu of nurses. Smartphone apps now detect skin cancer with expert accuracy. Algorithms identify eye diseases just as well as specialized physicians. Some forecast that medical AI will pervade 90% of hospitals and replace as much as 80% of what doctors currently do. But for that to come about, the health care system will have to overcome patients’ distrust of AI.

We explored patients’ receptivity to medical AI in a series of experiments conducted with our colleague Andrea Bonezzi of New York University. The results, reported in a paper forthcoming in the Journal of Consumer Research, showed a strong reluctance across procedures ranging from a skin cancer screening to pacemaker implant surgery. We found that when health care was provided by AI rather than by a human care provider, patients were less likely to utilize the service and wanted to pay less for it. They also preferred having a human provider perform the service even if that meant there would be a greater risk of an inaccurate diagnosis or a surgical complication.

The reason, we found, is not the belief that AI provides inferior care. Nor is it that patients think that AI is more costly, less convenient, or less informative. Rather, resistance to medical AI seems to stem from a belief that AI does not take into account one’s idiosyncratic characteristics and circumstances. People view themselves as unique, and we find that this belief includes their health. Other people experience a cold; “my” cold, however, is a unique illness that afflicts “me” in a distinct way. By contrast, people see medical care delivered by AI providers as inflexible and standardized — suited to treat an average patient but inadequate to account for the unique circumstances that apply to an individual.

Consider the results of a study we conducted. We offered more than 200 business school students at Boston University and at New York University the opportunity to take a free assessment that would provide them with a diagnosis of their stress level and a recommended course of action to help manage it. The results: 40% signed up when they were told that a doctor would perform the diagnosis, but only 26% signed up when a computer would perform it. (In both experimental conditions, participants were told that the service was free and that the provider had made the correct diagnosis and recommendation in 82% to 85% of previous cases.)
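For readers curious how meaningful a gap like 40% versus 26% is, here is a minimal Python sketch of a two-proportion z-test. The per-condition sample sizes are an assumption on our part (the study reported only that more than 200 students took part), so the snippet illustrates the kind of test one might run rather than the study's actual analysis.

```python
# Minimal sketch: two-proportion z-test on hypothetical sign-up counts.
# The ~100-per-condition split is assumed, not a figure from the study.
from statsmodels.stats.proportion import proportions_ztest

signups = [40, 26]   # assumed counts implied by the 40% and 26% sign-up rates
totals = [100, 100]  # assumed condition sizes; only the 200+ total is known

z_stat, p_value = proportions_ztest(count=signups, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```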

In another study, we surveyed over 700 Americans from an online panel to test whether patients would choose AI providers when AI’s performance was clearly superior to that of human providers. We asked participants to review information about the performance of two health care providers (called provider X and provider Y): their accuracy in diagnosing skin cancer, their accuracy in making triage decisions for medical emergencies, or the rate of complications in pacemaker implant surgeries they had performed in the past.

We then asked participants to indicate their preference between the two providers on a 7-point scale anchored at 1 (prefer provider X) and 7 (prefer provider Y), where 4 indicated no preference. When participants chose between two human doctors varying in their performance, all participants preferred the human doctor with the higher performance. But when choosing between a human doctor and an AI provider (e.g., an algorithm, a chatbot, or a robotic arm directed remotely through a computer program), participants’ preference for the higher-performing AI provider was significantly weaker. In other words, participants were willing to forgo better health care in order to have a human, rather than an AI, care provider.
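As an illustration of how such condition means could be compared, here is a hedged sketch using Welch’s t-test on simulated 7-point ratings. All values below are invented for illustration; they are not the study’s data.

```python
# Hypothetical sketch: comparing mean 7-point preference ratings between a
# human-vs-human condition and a human-vs-AI condition. Data are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Simulated ratings where 7 = prefer the higher-performing provider
human_vs_human = rng.normal(loc=6.5, scale=0.8, size=100).clip(1, 7)
human_vs_ai = rng.normal(loc=5.2, scale=1.4, size=100).clip(1, 7)

t_stat, p_value = ttest_ind(human_vs_human, human_vs_ai, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```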

Resistance to medical AI also showed up in willingness to pay for the same diagnostic procedure. We gave 103 Americans from an online panel a reference price of $50 for a diagnostic stress test that could be performed by either an AI or a human provider; both had an accuracy rate of 89%. Participants in the AI default condition, for example, were told that the diagnosis cost $50 when administered by an AI. They then indicated what they would be willing to pay to have the diagnosis performed by a human provider instead. Participants were willing to pay more to switch to a human provider when the default provider was AI than they were willing to pay to switch to an AI provider when the default provider was a human.
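A minimal sketch of that asymmetry test appears below. The dollar amounts, distributions, and the roughly even split of the 103 participants across conditions are all assumptions for illustration; only the $50 reference price and the sample total come from the study.

```python
# Hypothetical sketch: willingness to pay to switch away from an AI default
# versus away from a human default. All dollar amounts are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
pay_to_get_human = rng.gamma(shape=2.0, scale=12.0, size=52)  # AI was default
pay_to_get_ai = rng.gamma(shape=2.0, scale=5.0, size=51)      # human was default

t_stat, p_value = ttest_ind(pay_to_get_human, pay_to_get_ai, equal_var=False)
print(f"mean switch-to-human = ${pay_to_get_human.mean():.2f}, "
      f"mean switch-to-AI = ${pay_to_get_ai.mean():.2f}, "
      f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```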

Underscoring the importance of the belief that one’s circumstances are unique, the more participants viewed themselves as unique and different from other individuals, the more pronounced their resistance to an AI provider was. We asked 243 Americans from an online panel to indicate their preference between two providers for a skin cancer screening. Both providers were 90% accurate in their diagnoses. The degree to which participants perceived themselves as unique predicted a stronger preference for a human over an (equally accurate) AI provider; it had no effect on their preference between two human providers.
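Analytically, that pattern is a moderation effect: perceived uniqueness should predict provider preference only when one of the options is an AI. The sketch below simulates such data and fits an OLS model with an interaction term; the data and coefficients are invented for illustration, not the study’s.

```python
# Hypothetical sketch of a moderation analysis: perceived uniqueness predicts
# preference for the human provider only in the human-vs-AI condition.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 243  # matches the study's sample size; everything else is simulated
df = pd.DataFrame({
    "ai_condition": rng.integers(0, 2, size=n),  # 1 = choice involves an AI
    "uniqueness": rng.normal(4.0, 1.0, size=n),  # perceived-uniqueness scale
})
# Build the effect into the interaction term only
df["pref_human"] = (4.0 + 0.6 * df.ai_condition * (df.uniqueness - 4.0)
                    + rng.normal(0.0, 1.0, size=n)).clip(1, 7)

model = smf.ols("pref_human ~ ai_condition * uniqueness", data=df).fit()
print(model.summary().tables[1])  # the interaction coefficient carries the effect
```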

There are a number of steps that care providers can take to overcome patients’ resistance to medical AI. For example, providers can assuage concerns about being treated as an average or a statistic by taking actions that increase the perceived personalization of the care delivered by AI. When we explicitly described an AI provider as capable of tailoring its recommendation for whether to undergo coronary bypass surgery to each patient’s unique characteristics and medical history, study participants reported that they would be as likely to follow the treatment recommendations of the AI provider as they would be to follow the treatment recommendations of a human physician.

Toward that end, for purely AI-based health care services (e.g., chatbot diagnoses, algorithm-based predictive modeling, app-based treatments, feedback from wearable devices), providers could emphasize the information gathered about patients to generate their unique profile, including their lifestyle, family history, genetic and genomic profiles, and details about their environment. Patients might then feel that the AI provider will take into account the kind of information that would be considered by a human provider, such as a general practitioner who has access to their history. This information could also be used to better explain to patients how the care would be tailored to their unique profile.

Exclusively AI-based services could also include cues — like “based on your unique profile” — that suggest personalization. In addition, health care organizations could make a special effort to spread the word that AI providers do deliver personal and individualized health care — for example, by sharing evidence with the media, explaining how the algorithms work, and sharing patients’ reviews of the service.

Having a physician confirm the recommendation of an AI provider should make people more receptive to AI-based care. We found that people are comfortable utilizing medical AI if a physician remains in charge of the ultimate decision. In one study discussed in our paper, participants reported that they would be as likely to use a procedure in which an algorithm analyzed scans of their body for skin cancer and made recommendations to a doctor who made the final call as they would be to utilize care provided from start to finish by a doctor.

AI-based health care technologies are being developed and deployed at an impressive rate. AI-assisted surgery could guide a surgeon’s instrument during an operation and use data from past operations to inform new surgical techniques. AI-based telemedicine could provide primary care support to remote areas without easy access to health care. Virtual nursing assistants could interact with patients 24/7, offer round-the-clock monitoring, and answer questions. But harnessing the full potential of these and other consumer-facing medical AI services will require that we first overcome patients’ skepticism of having an algorithm, rather than a person, making decisions about their care.

Chiara Longoni is an assistant professor of marketing at Boston University. Follow her on Twitter @longoni_chiara.

Carey K. Morewedge is a professor of marketing and Everett W. Lord Distinguished Faculty Scholar at Boston University. Follow him on Twitter @morewedge.
