Why Don't Patients Trust AI Breakthroughs In the Medical Field?
Thu, April 22, 2021

AI can solve complex problems that would be too difficult for humans to tackle on their own. In the medical field, AI is used to diagnose conditions, guide treatment, and predict outcomes / Photo by: Thananit Suntiviriyanon via 123RF

 

Artificial intelligence (AI) uses algorithms, deep learning, pattern matching, and similar techniques to reach an approximate conclusion without human intervention, according to healthcare company IBM Watson Health. AI can solve complex problems that would be too difficult for humans to tackle on their own. In the medical field, AI is used to diagnose conditions, guide treatment, and predict outcomes.

It has the potential to be applied in nearly every area of medicine, from personalized patient treatment plans to drug development and patient monitoring. With AI, doctors can make more informed decisions in treating their patients. It can even outperform doctors in some tasks! Even so, why don’t patients trust it?

Cases of AI Outperforming Health Professionals

1. Pediatric AI Does A Better Job Than Junior Doctors

A Chinese study published in the journal Nature Medicine examined electronic health records (EHRs) collected over 18 months from nearly 600,000 patients at the Guangzhou Women and Children’s Medical Center, as reported by Jeff Rowe of AI Powered Healthcare, a technology and health news platform.

The AI-generated diagnoses were then compared with the assessments made by physicians. The result? The AI “was noticeably more accurate than junior physicians,” and it was about as reliable as the senior ones. The study, conducted by Huiying Liang and colleagues, was commended for the AI’s ability to outperform junior pediatricians in diagnosing common childhood ailments.

In the authors’ research, the AI, a machine learning classifier (MLC), “broke cases down into major organs and infection areas” before branching out further into subcategories, a process similar to how doctors employ deductive reasoning. Another key strength is the amount of data the AI processed: the 1,362,559 outpatient visits from 567,498 patients generated 101.6 million data points for the MLC to consume. This volume enabled the AI to distinguish and accurately choose “from the 55 different diagnosis codes” across the various organ groups and subcategories.
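To make that two-stage structure concrete, below is a minimal sketch of a hierarchical classifier that first predicts an organ group and then a diagnosis code within that group. It only illustrates the idea described above; the toy notes, labels, and scikit-learn models are assumptions, not the study’s actual MLC.

```python
# Illustrative sketch only: a two-stage "organ group, then diagnosis" text
# classifier in the spirit of the hierarchy described above. The toy notes,
# labels, and model choices are hypothetical, not the study's actual MLC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy EHR-style notes labelled with an organ group and a diagnosis code.
notes = [
    "cough fever wheezing for three days",
    "runny nose sore throat mild fever",
    "vomiting diarrhea abdominal pain",
    "abdominal cramps loose stools after meals",
]
organ_groups = ["respiratory", "respiratory", "gastrointestinal", "gastrointestinal"]
diagnoses = ["bronchitis", "upper_respiratory_infection",
             "gastroenteritis", "functional_dyspepsia"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(notes)

# Stage 1: predict the major organ group from the note text.
group_clf = LogisticRegression(max_iter=1000).fit(X, organ_groups)

# Stage 2: one classifier per organ group, choosing among its diagnosis codes.
subtype_clfs = {}
for group in set(organ_groups):
    idx = [i for i, g in enumerate(organ_groups) if g == group]
    subtype_clfs[group] = LogisticRegression(max_iter=1000).fit(
        X[idx], [diagnoses[i] for i in idx]
    )

def diagnose(note: str) -> tuple[str, str]:
    """Return (organ_group, diagnosis_code) for a free-text note."""
    x = vectorizer.transform([note])
    group = group_clf.predict(x)[0]
    return group, subtype_clfs[group].predict(x)[0]

print(diagnose("child with fever and persistent cough"))
```

The second stage lets each organ-group model choose only among the diagnosis codes belonging to that group, mirroring the deductive narrowing the authors describe.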

The role of AI in medicine should not be to replace human doctors but to augment them, observers of the research added. The authors themselves acknowledged augmentation “as the key short-term application of their work.”

2. AI Detects Skin Cancer

Holger Haenssle and colleagues conducted an experiment pitting a convolutional neural network (CNN) against 58 dermatologists, wrote Rachel England of tech news site Engadget. The AI was trained to differentiate dangerous skin lesions from benign ones after the researchers showed it more than 100,000 images.

The research was published in the August 2018 issue of Annals of Oncology, hosted on research portal Oxford Academic. Haenssle’s team found that human dermatologists accurately detected 86.6% of skin cancers “from a range of images.” Meanwhile, the CNN identified 95% of skin cancers.

Haenssle noted that the CNN “missed fewer melanomas,” as it had “higher sensitivity.” It also misdiagnosed “fewer benign moles as malignant melanoma,” resulting in “less unnecessary surgery.” The researchers concluded that the AI could help diagnose skin cancer earlier, faster, and more easily, allowing doctors to intervene before the cancer escalates. The system is still undergoing testing, and the AI will most likely be restricted to professional use.
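For context on what “higher sensitivity” buys, the short sketch below applies the standard definitions of sensitivity and specificity to a hypothetical evaluation set. The 86.6% and 95% sensitivity figures echo those reported above, but every count in the example, including all benign-lesion numbers, is invented purely for illustration.

```python
# Minimal sketch of the metrics behind "higher sensitivity" and "fewer benign
# moles flagged as malignant". All counts below are invented for illustration;
# only the formulas (standard definitions) carry over.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of actual melanomas correctly flagged (fewer missed cancers)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of benign lesions correctly cleared (fewer unnecessary surgeries)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical set of 1,000 melanomas and 1,000 benign lesions.
dermatologists = {"tp": 866, "fn": 134, "tn": 700, "fp": 300}
cnn            = {"tp": 950, "fn": 50,  "tn": 800, "fp": 200}

for name, c in [("dermatologists", dermatologists), ("CNN", cnn)]:
    print(f"{name}: sensitivity={sensitivity(c['tp'], c['fn']):.1%}, "
          f"specificity={specificity(c['tn'], c['fp']):.1%}")
```

Higher sensitivity means fewer missed melanomas, while higher specificity means fewer benign moles flagged as malignant and, in turn, less unnecessary surgery.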

The AI was trained to differentiate dangerous skin lesions from benign ones after the researchers showed it more than 100,000 images / Photo by: Evgeniy Kalinovskiy via 123RF

 

Despite These Breakthroughs, Why Don’t Patients Trust AI? 

Writing in general management magazine Harvard Business Review, Chiara Longoni and Carey K. Morewedge explored AI’s capabilities in a series of experiments conducted with colleague Andrea Bonezzi of New York University and published in Oxford Academic’s Journal of Consumer Research.

They found that when healthcare services were offered by an AI rather than by a human provider, patients were less likely to “utilize the service.” In fact, they wanted to pay less for an AI-provided health service. Patients also preferred a human provider to manage their care even if it entailed “a greater risk of an inaccurate diagnosis or a surgical complication.”

In one experiment, Longoni, Morewedge, and Bonezzi gave 103 Americans a reference price of $50 for a diagnostic test that could be performed by either an AI or a human, both with an 89% accuracy rate. Participants in the AI condition were told that the diagnosis would cost $50 when performed by an AI and then indicated how much they would pay to have a human diagnose them instead. It turned out that patients who had an AI as their default provider were willing to pay more to switch to a human provider than those who had a human as their default.

They found that when healthcare services were offered by an AI rather than by a human provider, patients were less likely to “utilize the service” / Photo by: dolgachov via 123RF

 

This doesn’t mean AI is inferior or more expensive. Rather, patients doubt it because they feel it does not take their idiosyncratic characteristics and circumstances into consideration. People describe an AI’s health service as “inflexible and standardized,” suitable only for an average patient. In their view, services provided by an AI do not account “for the unique circumstances” that apply to each individual.

The authors also found that patients are more comfortable with AI when a physician makes the ultimate decision. For AI to be integrated smoothly into the medical field, patients must overcome their skepticism toward it.

There must also be transparency about how an AI will take care of patients during their stay in the hospital, and health professionals should emphasize the benefits of AI technologies to their patients. It’s normal to be taken aback by AI’s capabilities, but technology should assist and augment humans.

Other sources:

https://www.engadget.com/2018/05/29/ai-outperforms-human-doctors-in-spotting-skin-cancer/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAL3PX9zpsuodA9kq8W7pegRZTJ8YbjW0-W8Np9L3rDAu_En21AEksrwK_3K0VfYqDWo61n4tO_TCsJFp_RJZb-muF_oaemYEfgDpmqhfYpwHydLxdF4Bj-EsKCp7VagOYK7qQHTnIgMQ9E0YIwLTIgLBZPzk7JLrwBjRt-1S1rcj

https://academic.oup.com/annonc/article/29/8/1836/5004443

https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-dont-patients-trust-it

https://academic.oup.com/jcr/advance-article/doi/10.1093/jcr/ucz013/5485292