The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.
The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the "overwhelming majority" of AI Overviews are factual and helpful.
What The Guardian Reported Finding
The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.
One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was "completely incorrect." She added that following that guidance "could be really dangerous and jeopardise a person's chances of being well enough to have treatment."
The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered "very dangerous advice" and were "incorrect, harmful or could lead people to avoid seeking help."
The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was "completely wrong information."
Sophie Randall, director of the Patient Information Forum, said the examples showed "Google's AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people's health."
The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.
Google’s Response
Google disputed both the examples and the conclusions.
A spokesperson told The Guardian that many of the health examples shared were "incomplete screenshots," but from what the company could assess they linked "to well-known, reputable sources and recommend seeking out professional advice."
Google told The Guardian the "overwhelming majority" of AI Overviews are "factual and helpful," and that it "continuously" makes quality improvements. The company also argued that AI Overviews' accuracy is "on a par" with other Search features, including featured snippets.
Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.
See also: Google AI Overviews Impact On Publishers & How To Adapt Into 2026
The Broader Accuracy Context
This investigation lands in the middle of a debate that has been running since AI Overviews expanded in 2024.
During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.
I covered that launch, and the early accuracy concerns quickly became part of the public narrative around AI summaries. The question then was whether the problems were edge cases or something more structural.
More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That's more than double the overall baseline rate in the dataset.
Separate research on medical Q&A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses weren't fully supported by the sources they cited, even when systems provided links.
Why This Matters
AI Overviews appear above ranked results. When the topic is health, errors carry more weight.
Publishers have spent years investing in documented medical expertise to meet Google's quality standards. This investigation puts the same spotlight on Google's own summaries when they appear at the top of results.
The Guardian's reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.
Looking Ahead
Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian signals it expects AI Overviews to be judged like other Search features, not held to a separate standard.