The social media giant is imposing strict controls on information related to the upcoming midterm elections
Social media behemoth Meta is beefing up its information-control tactics as the US heads into the 2022 midterm elections, tightening rules on voting misinformation and advertising. The changes were announced in a blog post on Tuesday.
The company will ban new political, social and electoral issue ads during the last week before the election, ensuring no "October surprises" - factual or otherwise - will disturb the information ecosystem. Editing existing ads will also be forbidden, and ads encouraging people not to vote or questioning the legitimacy of the results will not be permitted.
To further ensure the sanctity of the vote, Meta says it is investing in "proactive threat detection" with the aim of countering "coordinated harassment and threats of violence against election officials and poll workers." The company is also holding regular meetings with the National Association of Secretaries of State, the National Association of State Election Directors, state and local elections officials, and the federal Cybersecurity and Infrastructure Security Agency.
Meta is deploying fact-checkers in multiple languages for the midterms and expanding the service to WhatsApp, boasting five new partners in Spanish, including Univision and Telemundo. This is part of a $5 million boost in "fact-checking and media literacy initiatives" ahead of November's vote.
The platform promised to deploy fewer "labels that connect people with reliable information" during the 2022 season, acknowledging that user feedback had indicated such labels were "over-used" in 2020.
Bragging that it had banned more than 270 "white supremacist organizations" and deleted over 2.5 million pieces of content tied to "organized hate" in the first quarter of 2022 alone, the platform revealed that 97% of that content had been removed by its algorithms before anyone reported it - raising the question of how hateful it was, given the absence of an offended party.
Some question whether Facebook is equipped to deal with actual misinformation during elections, however. Climate justice NGO Global Witness says that less than two months before Brazil's presidential election, it submitted 10 fake ads telling users to vote on the wrong day, describing voting methods that are not in use, and questioning the validity of the results before any votes had been cast - only for Meta to approve every single one. The group conducted similar tests ahead of elections in Kenya and found the site's filters "seriously lacking" there as well.
Meta isn't the only social media firm returning to 2020's "election integrity" policies. Twitter announced last week that it would reactivate its own democracy-defending rules, pledging to label and prevent the sharing of "misinformation," promote "reputable" news outlets, and "pre-bunk" narratives that might call the integrity of election results into question, regardless of their veracity. The platform shifted focus from election-related "disinformation" to Covid-19-related content after the 2020 election, but has continued to experiment with redirecting users toward "approved" sources of information.