Meta exempted some of its top advertisers from its normal content moderation process, protecting the business amid internal concerns that the company’s systems were wrongly penalizing top brands.
The owner of Facebook and Instagram introduced a series of “guardrails” to “protect high spenders”, according to a 2023 internal document obtained by the Financial Times.
The previously unreported memo said Meta would “suppress detection” based on the amount advertisers spend on the platform, with some top advertisers being vetted by humans instead.
One document states that a group called “P95 spenders,” those who spend more than $1,500 per day, are “exempt from advertising regulations,” but are still “subject to manual human review.”
The memo predates CEO Mark Zuckerberg’s announcement this week that Meta will end its third-party fact-checking program and scale back automated content moderation ahead of Donald Trump’s return as president.
According to a 2023 document, Meta discovered that its automated system was incorrectly flagging some high-spending accounts as violating the company’s rules.
The company told the FT that high-spending accounts were more likely to receive false notifications about potential breaches. It did not respond to questions about whether the measures outlined in the documents were temporary or permanent.
Meta spokesperson Ryan Daniels said the FT’s reporting was “completely inaccurate” and based on a selective reading of documents, which he said clearly state that the initiative was meant to address something the company has said publicly: preventing mistakes in enforcement.
Advertising accounts for the bulk of Meta’s annual revenue, which was about $135 billion in 2023.
The tech giant typically uses a combination of artificial intelligence and human moderators to screen ads and stop violations of its standards, removing fraudulent or otherwise harmful content.
In a document entitled “High Spender Mistake Prevention,” Meta outlined seven guardrails to protect business accounts that generate more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on advertising over the same period.
The document said the guardrails were designed to help the company “determine whether to proceed with enforcement of detections” and to “suppress detections . . . based on characteristics such as ad spend levels.”
As an example, the document cited companies “in the top 5% by revenue.”
Meta told the FT that it used higher spending as a guardrail because higher spending often means a wider reach for a company’s ads, so the consequences of a company or its ads being wrongly removed can be more severe.
The company also acknowledged that it prevented some high-value accounts from being disabled by automated systems and instead sent them for human review if there were concerns about the system’s accuracy.
However, it said all businesses are still subject to the same advertising standards and no advertisers are exempt from the rules.
In the “High Spender Mistake Prevention” memo, the company rated various categories of guardrails as “low,” “moderate,” or “high” in terms of whether they were “defensible.”
Meta staff noted that the practice of putting spending-related guardrails in place was “unlikely” to be defensible.
Other guardrails, such as using knowledge of a business’s trustworthiness to determine whether detections of policy violations should be acted on automatically, were labeled “highly” defensible.
Meta said the term “defensible” referred to how difficult it would be to explain the concept of a guardrail to stakeholders if it were misunderstood.
The 2023 document does not name the high spenders covered by the company’s guardrails, but the spending thresholds suggest that thousands of advertisers may have been exempted from the typical moderation process.
Market intelligence firm Sensor Tower estimates that the top 10 U.S. spenders on Facebook and Instagram include Amazon, P&G, Temu, Shein, Walmart, NBCUniversal and Google.
Meta has delivered record profits in recent quarters and its stock is trading at all-time highs as the global advertising market recovers from the post-pandemic downturn.
But Mr. Zuckerberg has warned of threats to his business, from the rise of AI to ByteDance-owned rival TikTok, which is growing in popularity among younger users.
A person familiar with the documents claimed the company was “prioritizing revenue and profits over the integrity and health of its users,” adding that there were internal concerns about circumventing the standard moderation process.
Zuckerberg said Tuesday that the complexity of Meta’s content moderation system resulted in “too many mistakes and too much censorship.”
His comments came after Trump last year accused Meta of censoring conservative speech and suggested that Zuckerberg would “spend the rest of his life in prison” if the company interfered in the 2024 election.
Internal documents also show that Meta was considering seeking other exemptions for certain of its highest-spending advertisers.
In one memo, Meta staff suggested “more aggressively providing protection” from over-moderation for so-called “platinum and gold spenders,” who bring in more than half of the company’s ad revenue.
The memo said that false-positive integrity enforcement against high-value advertisers hurt Meta’s revenue and undermined its credibility.
It proposed the option of exempting these advertisers entirely from certain enforcement actions except in “very rare cases.”
However, the memo said staff had concluded that platinum and gold advertisers were “not an appropriate segment” for broad exemptions, as the company’s tests showed that an estimated 73 percent of enforcements against them were justified.
Internal documents also show that Meta discovered multiple AI-generated accounts within the high spender category.
Meta has previously faced intense scrutiny over enforcement exemptions for prominent users. In 2021, Facebook whistleblower Frances Haugen leaked documents showing that the company had an internal system, known as “cross-check,” designed to review content from politicians, celebrities, and journalists to ensure posts were not mistakenly removed.
According to Haugen’s documents, the practice, known as “whitelisting,” was sometimes used to shield some users from enforcement even when they violated Facebook’s rules.
Meta’s Oversight Board, an independent “Supreme Court”-style body funded by the company to oversee its most difficult moderation decisions, found that the cross-check system had left dangerous content online and called for an overhaul of the system, which Meta has since undertaken.