- Google has been in the news for some of its inaccurate AI summaries and AI-generated images.
- Google's vice president of search reportedly said at an all-hands meeting that mistakes shouldn't stop the company from taking risks.
- Experts commented on Google's strategy and the risks that may arise from it.
While all the big tech companies are racing to expand their AI capabilities and launch new products, Google continues to make headlines for its high-profile AI blunders.
Shortly after Google released its AI Overviews feature, which displays AI-generated summaries at the top of the page for some search queries, the internet began buzzing about the search engine recommending putting glue on pizza and eating rocks.
Earlier this year, the company released an image-generation tool in Gemini, but it caused a stir when the chatbot produced historically inaccurate images of people. Google acknowledged the problem and suspended the feature.
But despite the high-profile missteps, Google doesn't appear to be planning to pump the brakes anytime soon.
According to leaked audio obtained by CNBC, Google's vice president of search, Liz Reid, addressed the glue-on-pizza and rock-eating controversy at a recent all-hands meeting and used the opportunity to reiterate the company's AI strategy.
“It's important not to hold back on features just because there might be occasional issues,” Reid reportedly said at the meeting.
Reid also said Google should address issues as they are discovered and "act with urgency," but that doesn't mean it "shouldn't take risks," CNBC reported.
Lorenzo Thione, an AI investor and managing partner at venture capital group Gaingels, told Business Insider that he thinks disclosing that features are experimental is generally the right move, but users need to know when and how they can trust the results. Disclosures should differ, he said, depending on whether the tool is acting as a publisher, a curator, or a moderator.
In its AI Overviews, Google notes that "generative AI is experimental," and its developer safety guidance says generative AI tools "can sometimes lead to unexpected outputs, including inaccurate, biased, or disturbing outputs."
Google isn't the only company to acknowledge the risks of generative AI products. Apple CEO Tim Cook said that while it's entirely possible for Apple Intelligence to make mistakes, he doesn't think they'll happen often. Microsoft also said Thursday that it was delaying the release of AI tools that were supposed to ship with its Copilot+ PCs after privacy concerns emerged.
But Alon Yamin, CEO of the AI platform Copyleaks, told BI that by putting a feature of this scale at the top of Google search results, the company is making these mistakes especially visible.
The alternative, he said, is to release features more incrementally and not put them front and center if they're not fully ready.
Yamin said it's natural that Google would want to release products sooner, given rumors that the company is lagging behind in the AI race. But generative AI isn't completely safe right now, he said, and balancing speed with innovation and accuracy is key.
Reid previously wrote in a blog post that Google "builds quality and safety guardrails into these experiences" and tests them rigorously before release, but CNBC reported that she said at the all-hands meeting that Google "can't always find everything."
It's worth noting that these mistakes weren't common: A Google spokesperson said that the “vast majority” of results were accurate, and that the company found policy violations in “fewer than one in every 7 million unique queries from AI Overviews.”
But while suggesting people put glue on pizza or eat rocks may be a relatively minor mistake, Yamin said, generative AI could raise more serious risks around privacy, security, and copyright, and the faster these tools are rushed out, the greater those risks become.
Have a Google tip? Contact the reporter via non-work email: aaltchek@insider.com.