Why Google AI Overviews aren’t safe (for medical advice)

February 16, 2026
Case Studies
#AI in Healthcare
3 min read

A new Guardian report says Google is putting people at risk by how it presents AI generated medical advice in Search. The issue is not just whether AI can be wrong, but how quietly the product signals that risk when the answer is sitting at the very top of the page.

Google has said its AI Overviews are built to steer users toward professional help on sensitive topics like health, rather than letting a summary replace a clinician. But The Guardian found that promise breaks down on the very first screen. When users are initially shown medical advice, there is no clear disclaimer. The warning appears only after someone clicks “Show more” for additional health information, and even then the label sits below the extra AI generated text in smaller, lighter type. In other words, you get the confidence first and the caution later.

Google did not address where the disclaimer appears or how it is styled. A spokesperson argued it is inaccurate to suggest AI Overviews fail to encourage professional medical advice, saying the feature includes a disclaimer and often recommends seeking medical attention within the overview when appropriate. That pushback still leaves the central complaint standing: the product’s first impression is what users see, and what they trust.

That is why AI experts who were shown the Guardian’s findings said the design choice itself is risky. Pat Pataranutaporn at MIT warned that even leading models can hallucinate misinformation or slip into overly agreeable behavior that prioritizes user satisfaction over accuracy, which can be genuinely dangerous in healthcare.

Gina Neff at Queen Mary University of London argued the failures are baked into the feature itself: it is tuned to deliver fast answers rather than careful ones with clinical precision, so small mistakes in health information become dangerous.

This is not the first time the issue has surfaced in health related searches. In January, an investigation found that people were being put at risk by false and misleading health information in Google AI Overviews. Following that round of reporting, Google removed AI Overviews from some, but not all, medical searches.

Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging, said the top-of-page summary can create “a sense of reassurance that discourages further searching, or scrolling through the full summary and clicking ‘Show more’ where a disclaimer might appear.”

Tom Bishop, head of patient information at Anthony Nolan, called for urgent action: “When it comes to health misinformation, it’s potentially really dangerous,” he said, adding, “I’d like this disclaimer to be right at the top.”

Y. Anush Reddy

Y. Anush Reddy is a contributor to this blog.