In recent months Google has focused heavily on AI Overviews, the automatic summaries that appear at the top of search results to provide quick answers. But when the system began to touch a delicate area such as health, the mechanism proved far more fragile than expected. It was The Guardian that raised the alarm, describing some of the information provided by the artificial intelligence as "dangerous and alarming" and forcing the Mountain View giant to intervene.
The British newspaper's investigation showed that, for certain medical searches, AI Overviews could return incomplete, decontextualized or even incorrect data, with the real risk of leading users to underestimate serious problems. Following these reports, Google decided to remove several summaries linked to particularly sensitive health queries.
The case of liver tests
One of the most frequently cited examples concerns liver function tests. When asked about normal liver blood test values, Google's AI would return a long series of numbers without indicating fundamental factors such as the patient's age, gender, ethnicity or geographical origin. In medicine these factors are crucial to interpreting results correctly, and their absence can lead to misleading conclusions.
The risk highlighted by the Guardian was clear: a person with genuinely anomalous values could believe they were healthy and decide not to seek further medical advice, thus delaying diagnosis and treatment. After the investigation was published, some of these searches stopped showing AI Overviews, a sign of a first fix to the system.
Google’s response
Faced with the criticism, Google has chosen a cautious line. A spokesperson quoted by the Guardian explained that the company had taken action on some of the flagged queries, but reiterated its commitment to improving AI Overviews when they fail to take context into account. According to the company, much of the disputed information is in any case supported by sources deemed reliable; the problem lies in how it is summarized and presented to users. In practice, AI Overviews have not been eliminated from health topics altogether, but are now filtered more selectively, especially for the most direct and potentially risky queries.
A delicate moment for AI in healthcare
The story comes at a time when more and more technology companies are trying to enter the digital health sector, as demonstrated by OpenAI's launch of ChatGPT Health. Precisely for this reason, the Google episode shows how complex and delicate it is to entrust the synthesis of clinical information to an AI. The message that emerges from the Guardian's investigation is clear: when it comes to health, even a quick and seemingly useful answer can become a risk if the medical context needed to interpret it correctly is missing.