
Google’s generative AI fails ‘will slowly erode our trust in Google’


It was a busy Memorial Day weekend for Google (GOOG, GOOGL) as the company raced to contain the fallout from a number of wild suggestions by the new AI Overview feature in its Search platform. In case you were sunning yourself on a beach or downing hotdogs and beer instead of scrolling through Instagram (META) and X, let me get you up to speed.

AI Overview is supposed to provide generative AI-based responses to search queries. Normally, it does that. But over the last week it also told users they could use nontoxic glue to keep cheese from sliding off their pizza and eat one rock a day, and it claimed Barack Obama was the first Muslim president.

Google responded by taking down the responses and saying it’s using the errors to improve its systems. But the incidents, coupled with Google’s disastrous Gemini image generator launch that allowed the app to generate historically inaccurate images, could seriously damage the search giant’s credibility.

“Google is supposed to be the premier source of information on the internet,” explained Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if that product is watered down, it will slowly erode our trust in Google.”

Google’s AI flubs
Google’s AI Overview problems aren’t the first trouble the company has run into since it began its generative AI drive. The company’s Bard chatbot, which Google rebranded as Gemini in February, famously produced a factual error in a February 2023 promo video, sending Google shares sliding.

Then there was its Gemini image generator software, which produced photos of diverse groups of people in historically inaccurate contexts, including as German soldiers in 1943.

AI has a history of bias, and Google tried to overcome that by including a wider diversity of ethnicities when generating images of people. But the company overcorrected, and the software ended up rejecting some requests for images of people of specific backgrounds. Google responded by temporarily taking the software offline and apologizing for the episode.

The AI Overview issues, meanwhile, cropped up, Google said, because users were asking uncommon questions. In the rock-eating example, a Google spokesperson said it “seems a website about geology was syndicating articles from other sources on that topic onto their site, and that happened to include an article that originally appeared on the Onion. AI Overviews linked out to that source.”

Those are reasonable explanations, but the fact that Google continues to release products with flaws it then needs to explain away is wearing thin.

“At some point, you have to stand by the product that you roll out,” said Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University’s Tepper School of Business.

“You can’t just say … ‘We are going to incorporate AI into all of our well-established products, and also it’s in constant beta mode, and any kinds of mistakes or problems that it makes we can’t be held responsible for and even blamed for,’ in terms of just trust in the products themselves.”
