
On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found that people were being put in danger by false and misleading information. The removals came after the newspaper found that Google's generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.
Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: the AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google deactivated the summaries only for the liver test queries, leaving other potentially harmful answers accessible.
The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also did not adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip crucial follow-up care.
Vanessa Hebditch, director of communications and policy at the British Liver Trust, told The Guardian that a liver function test is a group of different blood tests and that understanding the results "is complex and involves a lot more than comparing a set of numbers." She added that the AI Overviews fail to warn that someone can get normal results on these tests while having serious liver disease that requires further medical care. "This false reassurance can be very harmful," she said.
Google declined to comment to The Guardian on the specific removals. A company spokesperson told The Verge that Google invests in the quality of AI Overviews, particularly for health topics, and that "the vast majority provide accurate information." The spokesperson added that the company's internal team of clinicians reviewed what was shared and "found that in many instances, the information was not inaccurate and was also supported by high-quality websites."
