Philips survey illustrates trust gaps in AI among patients, providers

In its annual survey of clinicians and patients around the world, Philips found significant gaps in people’s trust of artificial intelligence in healthcare.

The medtech’s 10th annual Future Health Index report—which polled nearly 2,000 healthcare professionals and more than 16,000 patients across 16 countries—found that 63% of providers were optimistic about AI’s ability to improve healthcare outcomes, compared with less than half of patients, at 48%.

Older patients were even less positive, with the percentage dropping to 33% among those ages 45 and up. 

However, between 70% and 80% of patients said they would feel more comfortable with AI programs if they came recommended by their doctor or nurse—with about 44% saying they’d feel reassured knowing a human professional was always in control.

Interestingly, the FDA’s stamp of approval alone did not seem to carry that much weight: Only 35% of surveyed patients listed reviews to ensure safety and effectiveness as key to building public trust in AI.

At the same time, 85% of healthcare professionals felt AI has the potential to reduce their paperwork and administrative burdens and enable more face time with patients—areas that are not specifically regulated by the agency. The same proportion, however, said they were wary of where legal liability would fall when relying on an AI-powered diagnostic to guide care.

Other concerns included how to deal with the demographic biases that may be inherent in algorithms’ data sets, according to Jeff DiLullo, Philips’ chief region leader for North America.

“Those are some of the themes that came out, around why they felt a little less confident that AI would be better in driving health outcomes, rather than just enjoying the ability to improve time and productivity,” DiLullo said in an interview.

“First of all, practitioners are evidence-based,” he added. “Everything we do, when we introduce something, has an evidence baseline to it.”

“We have a lot of AI-enabled capabilities that are FDA cleared—and there are hundreds of applications that are on the market today—but I think there's still a lack of awareness or understanding of what AI is,” DiLullo said. “It should be more trustable, and yet people don't understand it, so they're more hesitant.”

The FDA maintains a running list of about 1,000 AI- and machine-learning-powered products that have cleared federal review, with roughly three-quarters of them in radiology, where they help read digital imaging scans.

The survey also comes at a time when clinicians continue to report burnout and workload as massive concerns—with 23% of respondents this year saying that if they could go back in time, they wouldn’t choose a career in healthcare.

That follows 2024’s Future Health Index, which centered on AI automation and its potential effect on staffing gaps—with 55% of healthcare leaders reporting an increased likelihood of employee departures amid widespread delays in specialist care.

“There will be 40 to 50 million MRI scans alone performed this year in the U.S.,” DiLullo said. “If I can compress those scans and make them faster with AI—at 20 minutes per scan, times 40 million scans? That's significant productivity.”
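To put that quote’s arithmetic in perspective, here is a rough back-of-the-envelope sketch using the figures DiLullo cites; the 25% speed-up is a purely hypothetical assumption for illustration, not a number from the survey or from Philips.

```python
# Back-of-the-envelope math behind the MRI example in the quote above.
# The scan volume and scan time come from the quote; the speed-up fraction
# is a hypothetical assumption for illustration only.
scans_per_year = 40_000_000      # lower bound of the 40-50 million U.S. MRI scans cited
minutes_per_scan = 20            # baseline scan time mentioned in the quote
hypothetical_speedup = 0.25      # assumed fraction of scan time AI might shave off

baseline_hours = scans_per_year * minutes_per_scan / 60
hours_saved = baseline_hours * hypothetical_speedup

print(f"Baseline scanner time: {baseline_hours:,.0f} hours/year")  # ~13.3 million hours
print(f"Hours freed at a 25% reduction: {hours_saved:,.0f}")       # ~3.3 million hours
```

Even under that assumed reduction, the freed scanner time scales linearly with whatever fraction AI can actually trim per scan.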

And, looking ahead, staffing challenges may compound the difficulties posed by rising costs and an aging population. “We'll have 10,000 fewer radiologists at the end of this decade than we had when it started,” he said. “So, work will pile up, and care will suffer.”

“This survey is definitely giving us a road map to say, where do we see traction, where we can drive scale and help practitioners gain time back—and then ask, what do we have to work on to drive better trust in the system, and in the AI we're beginning to deploy?” DiLullo said. “How do we build that confidence, so that it can really drive the outcomes that are safe and effective?”