Google scales back AI search answers after it told users to eat glue

Hey there! So, let’s dive into the curious case of Google’s AI telling people to eat glue and rocks. Yes, you read that right. Google recently had to scale back its AI search feature, called AI Overviews, after some hilariously bizarre and downright dangerous advice it dished out to users.

Picture this: you’re making pizza and you want your cheese to stick better. You ask Google’s AI, and it suggests adding a dash of non-toxic glue to your sauce. Yum, right? This gem of culinary advice actually came from an old joke on Reddit, proving once again that the internet is forever and AI has a hard time with context and humor. But wait, there’s more. It also advised people to eat at least one small rock a day and claimed that President James Madison graduated from the University of Wisconsin 21 times, among other absurdities. Talk about historical inaccuracies! The source of this madness? Satirical sites and ancient joke posts that the AI didn’t quite recognize as, well, not serious.

Google’s AI Overviews feature was meant to provide quick summaries of search results using the Gemini AI model, and it was rolled out to users in the U.S. in May 2024. However, the plan to expand it globally hit a snag when these strange suggestions started popping up. Naturally, social media had a field day, and screenshots of the AI-generated bloopers spread far and wide.

In response to the backlash, Google pointed out that these were isolated incidents, not reflective of the overall user experience. They emphasized that most AI Overviews provide high-quality information with links for deeper dives into topics. Google has taken action to correct these errors, refining the AI’s capabilities to prevent future mishaps. But the incident raises important questions about the reliability of AI-generated content and the need for ongoing human oversight.

AI “hallucinations,” as they’re called, are not new. This phenomenon, where a model confidently makes things up, has plagued other systems like ChatGPT as well. These hallucinations can range from amusing to dangerously misleading, highlighting the challenge of building AI that can accurately parse and present information drawn from vast, often unreliable sources on the internet.

In one wild case, AI Overviews suggested that to pass kidney stones, one should drink two quarts of urine daily. Another disturbing response told a user who said they were feeling depressed to jump off the Golden Gate Bridge. These examples clearly demonstrate the potential dangers of unverified AI advice: a pinch of skepticism and a lot of human supervision are still very much needed when AI touches critical applications.

One might wonder how such a sophisticated technology could get it so wrong. It boils down to the sources the AI pulls from. In this case, Reddit posts and satirical sites like The Onion were part of the problem. While these sources can be entertaining, they’re not exactly reliable for factual information. Google’s licensing deal with Reddit, which gives it access to the site’s content for AI training, has made this more apparent, as the AI occasionally regurgitates these humorous or sarcastic tidbits as genuine advice.
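To make that failure mode concrete, here’s a minimal, purely illustrative Python sketch of how a retrieval-augmented summarizer assembles its prompt. Everything in it is a hypothetical stand-in invented for this example (the `SATIRE_DOMAINS` and `LOW_TRUST_DOMAINS` lists, the `build_prompt` function, the sample snippets), not Google’s actual pipeline. The point is simply that without a vetting step, a decade-old joke flows into the prompt looking exactly like real advice:

```python
# Hypothetical sketch: why source quality matters in retrieval-augmented
# summarization. Names and data are illustrative, not Google's pipeline.

SATIRE_DOMAINS = {"theonion.com"}      # assumed blocklist for this sketch
LOW_TRUST_DOMAINS = {"reddit.com"}     # user-generated content, jokes included

def build_prompt(query: str, snippets: list, vet_sources: bool = False) -> str:
    """Concatenate retrieved snippets into a summarization prompt.

    With vet_sources=False, satire and joke posts flow straight into the
    prompt, and the model gets no signal that they aren't serious.
    """
    kept = []
    for s in snippets:
        if vet_sources and s["domain"] in SATIRE_DOMAINS | LOW_TRUST_DOMAINS:
            continue  # drop sources the system can't treat as factual
        kept.append(f'[{s["domain"]}] {s["text"]}')
    context = "\n".join(kept) if kept else "(no trusted sources found)"
    return f"Summarize an answer to: {query}\nSources:\n{context}"

snippets = [
    {"domain": "reddit.com",
     "text": "Add 1/8 cup non-toxic glue to the sauce."},  # old joke post
    {"domain": "seriouseats.example",
     "text": "Let the sauce reduce so the cheese adheres better."},
]

# Unvetted: the joke lands in the prompt right next to real advice.
print(build_prompt("how to make cheese stick to pizza", snippets))
print()
# Vetted: only the trusted snippet survives.
print(build_prompt("how to make cheese stick to pizza", snippets, vet_sources=True))
```

Real systems weigh source reliability in far subtler ways, of course, but the core tension is the same: the summarizer can only be as trustworthy as the text it’s handed.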

Another layer to this issue is the shift in responsibility that comes with using AI as a publisher rather than a traditional search engine. When AI Overviews summarizes content, Google effectively becomes a publisher, bearing responsibility for the accuracy and reliability of the information it presents. This transition is tricky because it demands a level of editorial judgment the AI doesn’t yet have. Human editors verify facts and sources rigorously, a practice that AI, for all its sophistication, still struggles to emulate.

On a lighter note, the internet’s reaction to these AI blunders has been a mix of horror and hilarity, with users poking fun at the absurdity of the suggestions. This public scrutiny has pushed Google not only to fix the specific issues but also to reconsider how it implements and monitors AI features going forward.

In summary, Google’s attempt to enhance search with AI-generated summaries has hit a few bumps in the road, with some comically bad advice slipping through the cracks. These incidents underscore the ongoing challenge of building reliable AI that can handle the complexities and nuances of human language and information. As Google works to refine AI Overviews, the episode serves as a reminder of the importance of human oversight and the unpredictable nature of AI development.

So, next time you ask Google for advice on cooking or health, you might want to double-check before following any tips about glue or rocks. Stay safe, and maybe stick to more traditional recipes for now!