

Meta says AI-generated content was less than 1 percent of election misinformation


AI-generated content played a much smaller role in global election misinformation than many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said that AI content made up only a fraction of the election-related misinformation caught and labeled by its fact checkers.

“During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation,” the company shared in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU’s Parliamentary elections.

The update comes after government officials and researchers spent months raising the alarm about the role generative AI could play in supercharging election misinformation in a year when more than 2 billion people were expected to go to the polls. But those fears largely did not play out — at least on Meta’s platforms — according to the company’s President of Global Affairs, Nick Clegg.

“People were understandably concerned about the potential impact that generative AI would have on the forthcoming elections during the course of this year, and there were all sorts of warnings about the potential risks of things like widespread deepfakes and AI-enabled disinformation campaigns,” Clegg said during a briefing with reporters. “From what we’ve monitored across our services, it seems these risks did not materialize in a significant way, and that any such impact was modest and limited in scope.”

Meta didn’t elaborate on just how much election-related AI content its fact checkers caught in the run-up to major elections. The company sees billions of pieces of content every day, so even small percentages can add up to a large number of posts. Clegg did, however, credit Meta’s policies, including its expansion of AI labeling earlier this year, following criticism from the Oversight Board. He noted that Meta’s own AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance and Tim Walz in the month leading up to election day in the US.

At the same time, Meta has increasingly taken steps to distance itself from politics altogether, as well as some past efforts to police misinformation. The company changed users’ default settings on Instagram and Threads to stop recommending political content, and has de-prioritized news on Facebook. Mark Zuckerberg has said he regrets the way the company handled some of its misinformation policies during the pandemic.

Looking ahead, Clegg said Meta is still trying to strike the right balance between enforcing its rules and enabling free expression. “We know that when enforcing our policies, our error rates are still too high, which gets in the way of free expression,” he said. “I think we also now want to really redouble our efforts to improve the precision and accuracy with which we act.”
