AI content made up less than 1% of election-related misinformation in 2024, Meta says


People walk past a Meta Platforms logo during an event in Mumbai. (Image: Reuters)

Meta has found that AI-generated content made up less than 1 per cent of the misinformation that was fact-checked during major elections held in over 40 countries this year, including in India.

The finding was based on the social media giant’s analysis of content that was posted on its platforms during elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content,” Nick Clegg, the president of global affairs at Meta, wrote in a blog post published on Tuesday, December 3.

Meta’s claims suggest that previously raised concerns about the role of AI in spreading propaganda and disinformation did not play out on its platforms such as Facebook, WhatsApp, Instagram, and Threads.

Meta also said that it prevented foreign interference in elections by taking down over 20 new “covert influence operations.”

“We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI,” it said.

The company also noted that it rejected over 5,90,000 requests from users to create election-related deepfakes such as AI-generated images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden on its AI image generator tool called Imagine.

Recently, Meta’s Nick Clegg said that the company regrets its aggressive approach to content moderation during the COVID-19 pandemic.

“No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content,” Clegg was quoted as saying by The Verge.

The top executive also acknowledged that Meta’s moderation error rates were “still too high, which gets in the way of the free expression that we set out to enable.” “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly,” he said.

