Google Scales Back AI Overviews After Admitting Flaws
Google has acknowledged that its recently launched AI Overviews tool, designed to deliver AI-generated responses to user search queries, requires significant improvements.
Despite extensive testing before its release two weeks ago, the search giant admitted on Thursday that the technology has produced “some odd and erroneous overviews.”
Excel Magazine International reports that examples of these problematic responses include advising users to add glue to pizza to make cheese stick and suggesting they drink urine to quickly pass kidney stones.
This rollback is the latest instance of a tech company prematurely launching an AI product in a bid to establish a leadership position in a highly competitive field.
In a company blog post, Google’s head of search, Liz Reid, explained the decision to scale back the AI Overviews feature.
“Some odd, inaccurate, or unhelpful AI overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted specific areas that we needed to improve,” Reid stated.
Reid noted that nonsensical questions, such as “How many rocks should I eat?”, generated questionable content because little useful, related advice exists online for the tool to draw on.
She also highlighted the tool’s tendency to misinterpret sarcastic content from discussion forums and present it as factual information.
“In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies,” Reid wrote.
To mitigate these issues, Google is introducing restrictions for queries where AI Overviews have been less helpful.
The company is also avoiding the use of AI-generated overviews for hard news topics, where accuracy and timeliness are crucial.
Additionally, Google has updated the tool to limit the inclusion of user-generated content in responses, aiming to reduce the likelihood of misleading advice.