
How Facebook let fake engagement distort global politics: a whistleblower's account

Sophie Zhang detected networks of fake accounts supporting political leaders around the world. Photograph: Jason Henry/The Guardian

The inside story of Sophie Zhang’s battle to combat rampant manipulation as executives delayed and deflected

by Julia Carrie Wong in San Francisco

Shortly before Sophie Zhang lost access to Facebook’s systems, she published one final message on the company’s internal forum, a farewell tradition at Facebook known as a “badge post”.

“Officially, I’m a low-level [data scientist] who’s being fired today for poor performance,” the post began. “In practice, in the 2.5 years I’ve spent at Facebook, I’ve … found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions.”

Over the course of 7,800 scathing words, Zhang outlined Facebook’s failure to combat political manipulation campaigns akin to what Russia had done in the 2016 US election. “We simply didn’t care enough to stop them,” she wrote. “I know that I have blood on my hands by now.”


Zhang knew that this was not a tale that Facebook wanted her to tell, so when she hit publish, she also launched a password-protected website with a copy of the memo and provided the link and password to Facebook employees. Not only did Facebook temporarily delete the post internally, but the company also contacted Zhang’s hosting service and domain registrar and forced her website offline.

Now, with the US election over and a new president inaugurated, Zhang is coming forward to tell the whole story on the record. (Excerpts of her memo were first published in September by BuzzFeed News.) This article is based on extensive internal documentation seen by the Guardian.

“What we have seen is that multiple national presidents believe that this activity is sufficiently valuable for their autocratic ambitions that they feel the need to do it so blatantly that they aren’t even bothering to hide,” Zhang told the Guardian.

“I tried to fix this problem within Facebook … I spoke to my manager, my manager’s manager, different teams, and everyone up to a company vice-president in great detail. I repeatedly tried to get people to fix things … I offered to stay on for free after they fired me, and they said no. I hoped that when I made my departure post it might convince people to change things, but it hasn’t.”

She argues that Facebook is allowing its self-interest to interfere with its responsibility to protect democracy, and that the public and regulators need to know what is happening to provide oversight.

“The whole point of inauthentic activity is not to be found,” she said. “You can’t fix something unless you know that it exists.”

A Facebook spokesperson, Liz Bourgeois, said: “We fundamentally disagree with Ms Zhang’s characterization of our priorities and efforts to root out abuse on our platform.

“We aggressively go after abuse around the world and have specialized teams focused on this work. As a result, we’ve taken down more than 100 networks of coordinated inauthentic behavior. Around half of them were domestic networks that operated in countries around the world, including those in Latin America, the Middle East and North Africa, and in the Asia Pacific region. Combatting coordinated inauthentic behavior is our priority. We’re also addressing the problems of spam and fake engagement. We investigate each issue before taking action or making public claims about them.”

Facebook did not dispute Zhang’s factual assertions about her time at the company.


BEHIND ‘COUNTERFEIT LIKES’

Zhang had been working for Facebook for about six months when she realized that Juan Orlando Hernández, the president of Honduras, was amassing large numbers of fake likes on the content he posted to his 500,000 followers on Facebook. Over one six-week period from June to July 2018, Hernández’s Facebook posts received likes from 59,100 users, more than 78% of which were not real people.

Hernández’s fake engagement stood out not just because of its volume, but because of an apparent innovation in how he acquired it. Most fake likes on Facebook come from fake or compromised user accounts, but Hernández was receiving thousands of likes from Facebook Pages – Facebook profiles for businesses, organizations or public figures – that had been set up to resemble user accounts, with names, profile pictures and job titles. One individual was the administrator for hundreds of those fake Pages, as well as for the official Pages of both Hernández and his late sister, who had served as communications minister.

Sitting behind a computer screen, the administrator could publish a post about how well Hernández was doing his job on the president’s Facebook Page, then use his hundreds of dummy Pages to make the post appear popular, the digital equivalent of bussing in a fake crowd for a speech.

Zhang had been hired that January to work on a relatively new team dedicated to combatting “fake engagement” – likes, comments, shares and reactions made by inauthentic or compromised accounts. In addition to distorting the public’s perception of how popular a piece of content is, fake engagement can influence how that content performs in the all-important news feed algorithm; it is a kind of counterfeit currency in Facebook’s attention marketplace.

The vast majority of the fake engagement on Facebook appears on posts or Pages by individuals, businesses or brands, and seems to be commercially motivated. But Zhang found that it was also being used on what Facebook called “civic” – ie political – targets. The most blatant example was Hernández, who was receiving 90% of all the known civic fake engagement in Honduras as of August 2018.

A rightwing nationalist who supported Honduras’s 2009 military coup, Hernández was elected president in 2013. His re-election in 2017 is widely viewed as fraudulent, and his second term has been marked by allegations of human rights abuses and rampant corruption. US federal prosecutors have named Hernández as a co-conspirator in multiple drug trafficking cases. He has not been charged with a crime and has denied any wrongdoing.

Hernández did not respond to queries sent to his press officer, attorney and minister of transparency.

The tactics boosting Hernández online were similar to what Russia’s Internet Research Agency had done during the 2016 US election, when it set up Facebook accounts purporting to be Americans and used them to manipulate individuals and influence political debates on Facebook. Facebook had come up with a name for this – “coordinated inauthentic behavior” (CIB) – in order to ban it.

But Facebook initially resisted calling the Honduran activity CIB – in part because the network’s use of Pages to create false personas and fake engagement fell into a serious loophole in the company’s rules. Facebook’s policies to ensure authenticity focus on accounts: users can only have one account and it must employ their “real” name. But Facebook has no such rule for Pages, which can perform many of the same engagements that accounts can, including liking, sharing and commenting.

Zhang assumed that once she alerted the right people to her discovery, the Honduras network would be investigated and the fake Pages loophole would be closed. But it quickly became clear that no one was interested in taking responsibility for policing the abuses of the president of a poor nation with just 4.5m Facebook users. The message she received from all corners – including from threat intelligence, the small and elite team of investigators responsible for uncovering CIB campaigns – was that the abuses were bad, but resources were tight, and, absent any external pressure, Honduras was simply not a priority.

“It’s not for threat intel to investigate fake engagement,” an investigator from that team told Zhang. Katie Harbath, Facebook’s then public policy director for global elections, expressed interest in a “scaled way to look for this and action on other politician Pages” but noted that it was unlikely the case would get much attention outside Honduras, and that she didn’t “feel super strongly” about it. Other executives and managers Zhang briefed included Samidh Chakrabarti, the then head of civic integrity; David Agranovich, the global threat disruption lead; and Guy Rosen, the vice-president of integrity.

“I don’t think Honduras is big on people’s minds here,” a manager from the civic integrity team told Zhang in a chat.

Frustrated and impatient after months of inaction, Zhang took her concerns semi-public – within the confines of the company’s internal communication platform. In late March 2019, she published a post to a group for the company’s “election integrity core team” pointing out that Hernández was “the only national president to be directly, actively, and consistently abusing Facebook to exploit fake engagement for himself” and that the company had known of the problem for months without doing anything.

Facebook headquarters in Menlo Park, California. Photograph: Jason Henry/The Guardian

The post succeeded in attracting the concern of an investigator from the threat intelligence team, but a further delay occurred in April when management temporarily suspended investigations into CIB cases that did not involve interference by a foreign government. In June, the investigator began working on the case and quickly confirmed Zhang’s findings: there was a large CIB network in Honduras working to promote Hernández that was linked to the president himself.

“This campaign has persistently boosted a likely illegitimate president in an ARC [at-risk country],” the investigator wrote in a report highlighting its likely “IRL [in-real-life] impact”. The accounts and Pages involved had been established in 2016 and 2017, before Hernández’s disputed re-election, the investigator noted.

On 25 July 2019, nearly one year after Zhang had reported the network to Facebook, the company announced that it was taking down 181 accounts and 1,488 Pages involved in “domestic-focused coordinated inauthentic activity in Honduras”. The campaign was “linked to individuals managing social media for the government of Honduras” and had spent more than $23,000 on Facebook ads, Facebook said.

Agranovich, the global threat disruption lead, praised Zhang for her role in the takedown, writing in an official feedback channel: “These disruptions removed networks on Facebook that used our services to suppress democratic expression, target innocent users on our platform, and enable clandestine geopolitical conflict. This is among the most important work at Facebook, and we could not have done any of these takedowns without your contributions.”

Privately, he added: “The Honduras case would never have happened without your continued advocacy … It means we’ve created a precedent that the Pages-as-Profiles archetype is inauthentic behavior.”

‘NO ONE CAN AGREE ON WHAT TO DO’

Zhang was invigorated by her success with the Honduras takedown and believed that the “precedent” Agranovich spoke of would clear the way for quicker takedowns in the future.

The next day, she alerted the threat intelligence team to a network of fake Pages supporting political leaders in Albania. In August, she discovered and filed escalations for suspicious networks in Azerbaijan, Mexico, Argentina and Italy. Throughout the autumn and winter she added networks in the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia.

“A lot of the time it felt like trying to empty the ocean with an eyedropper,” Zhang told the Guardian.

The networks often failed to meet Facebook’s shifting criteria to be prioritized for CIB takedowns, which are investigated by threat intelligence and announced publicly, but they still violated Facebook’s policies. Networks of fake accounts were dealt with by identity “checkpointing”, a process whereby a user is required to provide proof of their identity or their account is locked. The tactic is an effective way of getting rid of inauthentic accounts, though it did little to address Page abuse that was carried out by authentic accounts. It could also be carried out by Facebook’s “community operations” staff, who greatly outnumbered threat intelligence investigators.

It was through pursuing these cases that Zhang came into repeated contact with Facebook’s policy bureaucracy. Some of Facebook’s policy staff act as a kind of legislative branch in Facebook’s approximation of a global government, crafting the rules and advising the community operations staff who enforce them; others are more like a privatized diplomatic corps, staffing offices around the world to liaise with local businesses, civil society groups, government regulators and politicians. Policy staff may also provide expertise in a given country’s or region’s language, history and political context in order to inform decisions made at Facebook’s Silicon Valley headquarters.

The policy team’s response to the cases that Zhang uncovered varied widely. Policy staffers pushed for networks of fake accounts in South Korea, Taiwan, Ukraine and Italy to be investigated quickly, while allowing others to languish for months without action.

“It felt a bit like the reaction after every mass shooting,” Zhang said. “People agree that it’s bad, but no one can agree on what to do with it … so it’s just thoughts and prayers.”

Zhang: ‘A lot of the time it felt like trying to empty the ocean with an eyedropper.’ Photograph: Jason Henry/The Guardian

At times, Facebook allowed its self-interest to enter into discussions of rule enforcement.

In 2019, some Facebook staff weighed publicizing the fact that an opposition politician in the Philippines was receiving low-quality, scripted fake engagement, despite not knowing whether the politician was involved in acquiring the fake likes. The company had “strategic incentives to publicize”, one researcher said, since the politician had been critical of Facebook. “We’re taking some heat from Duterte supporters with the recent takedowns, and announcing that we have another takedown which involves other candidates might be helpful,” a public policy manager added.

No action was taken after Zhang pointed out that it was possible Duterte or his supporters were attempting to “frame” the opposition politician by purchasing fake likes to make them look corrupt. But discussions like this are among the reasons Zhang now argues that Facebook needs to create separation between the staff responsible for enforcing Facebook’s rules and those responsible for maintaining good relationships with government officials.

She suspects that the inherent conflict of interest for the policy department explains why it decided to maintain the enforcement loophole in Facebook’s rules about using fake Pages. In 2019, the team accepted Zhang’s proposal to ban the use of inauthentic Pages to create fake engagement. However, a separate proposal by Zhang to punish the most prolific abusers by banning their personal accounts was rejected, with policy staff citing discomfort with taking action against people connected to high-profile accounts, such as the Page administrator for the president of Honduras.

The lack of an enforcement mechanism remains a loophole in Facebook’s policies; even if the company takes down dozens of fake Pages, there is nothing to stop a user with an authentic account from creating dozens of new ones the next day.

“If people start realizing that we make exceptions for Page admins of presidents or political parties, these operators may eventually figure that out and deliberately run their CIB out of more official channels,” a researcher from the civic integrity team told Zhang in commiseration after her proposal was shot down.

FINDING ‘THE RIGHT PRIORITIZATION’

Even as Zhang discovered new influence operations, an old one returned. In October 2019, she found that the Honduras network was reconstituting – and that there was little appetite from threat intelligence to take it down again. Facebook says that it monitors CIB networks for efforts to return using manual and automated methods, and that it “continuously” removes new Pages and accounts connected to the networks.

When Zhang complained about the lack of action in another internal post in December, she received a response from Rosen that exemplified how Facebook justified ignoring abuses in small or poor countries that had failed to garner press attention: by pointing to the company’s myriad prioritization frameworks.

Facebook had “moved slower than we’d like because of prioritization” on the Honduras case, Rosen wrote. “It’s a bummer that it’s back and I’m excited to learn from this and better understand what we need to do systematically,” he added. But he also chastised her for making a public complaint, saying: “My concern is that threads like this can undermine the people that get up in the morning and do their absolute best to try to figure out how to spend the finite time and energy we all have and put their heart and soul into it.”

Rosen had joined Facebook in 2013, when the company acquired his startup, Onavo, a mobile web analytics company that Facebook would go on to use to track usage of rival apps. At Facebook, Rosen ran the Internet.org project and was vice-president for growth before being appointed vice-president for integrity, overseeing the company’s safety, integrity and security efforts.

Following Zhang’s firing and the publication of her goodbye post by BuzzFeed News, Rosen downplayed the importance of the abuses she had uncovered, writing on Twitter: “With all due respect, what she’s described is fake likes – which we routinely remove using automated detection. Like any team in the industry or government, we prioritize stopping the most urgent and harmful threats globally. Fake likes is not one of them.”

But Zhang had discussed her work, which was not limited to fake likes, with Rosen on multiple occasions. In April 2019, after she privately briefed him on the Honduras situation, he encouraged her to stick with the prioritizations laid out by management, saying: “We have literally hundreds or thousands of types of abuse (job security on integrity eh!) … That’s why we should start from the end (top countries, top priority areas, things driving prevalence, etc) and try to somewhat work our way down.”

In December 2019, in a private conversation following up on the Honduras recidivism, Zhang told Rosen that she had been informed that threat intelligence would only prioritize campaigns in “the US/western Europe and foreign adversaries such as Russia/Iran/etc”, a framework Rosen endorsed, saying: “I think that’s the right prioritization.”

Zhang responded: “I get that the US/western Europe/etc is important, but for a company with effectively unlimited resources, I don’t understand why this cannot get on the roadmap for anyone … A strategic response manager told me that the world outside the US/Europe was basically like the wild west with me as the part-time dictator in my spare time. He considered that to be a positive development because to his knowledge it wasn’t covered by anyone before he learned of the work I was doing.”

Rosen replied: “I wish resources were unlimited.” At the time, the company was about to report annual operating profits of $23.9bn on $70.7bn in revenue. It had $54.86bn in cash on hand.

World map of cases and when they were resolved.

‘LIMIT THE MEME WE CANNOT CONTROL FACEBOOK’

At Facebook, all the prioritization metrics went out the window the moment the press or general public caught wind of something.

In September 2018, for example, Zhang alerted the policy team that Alejandro Murat Hinojosa, the governor of Oaxaca, Mexico, was receiving substantial amounts of fake engagement. Nothing was done until 7 January 2019, when the threat intelligence team received a draft copy of an article by the Atlantic Council’s DFRLab exposing suspicious likes from south Asian Facebook accounts on Murat’s Pages. Suddenly the fake likes were considered a high-priority escalation, and Zhang was brought in to remove them.

No effort was made to investigate the source of the fake engagement, however, nor to address the broader problem of fake engagement in Mexican politics. In August 2019, Zhang filed escalation tasks for suspicious networks of accounts supporting local politicians in seven Mexican states; in December 2019 she updated the task to include a total of 18 networks acting on behalf of local and state politicians, including one again supporting Murat. The networks were not prioritized and were left in place until August 2020, when, 360 days after Zhang’s report, more than 5,000 fake accounts were finally checkpointed.

Whether inauthentic activity attracts the attention of the press or public is a poor indicator of how serious it is, Zhang argues.

During the 2019 UK general election, the public became alarmed by waves of identical, supportive comments on Boris Johnson’s Facebook Page. Zhang was pulled into multiple “high-priority” escalations to investigate the comments, which did not come from the suspected “Russian bots”, but instead were the result of real-life Brexit supporters pretending to be Russian bots in order to troll Labour voters.

Rosen himself once altered a Facebook policy to avoid a PR fire. On 7 September 2018 – two days before Sweden’s general election – Facebook received notice of a forthcoming Atlantic Council DFRLab article raising questions about disproportionate levels of engagement with the far-right party Alternative for Sweden (AfS). A fringe group with no seats in parliament, AfS was somehow acquiring almost as many Facebook likes on its content as Sweden’s largest political party.

Facebook’s headquarters. Zhang was fired in August last year. Photograph: Jason Henry/The Guardian

Zhang was asked to investigate, and she found that while AfS’s activity was coordinated, it was not inauthentic; it did not violate any of the company’s policies. Nevertheless, Rosen pushed to take action against its activist accounts, arguing in an email thread to the executives Joel Kaplan, Monika Bickert, Nathaniel Gleicher and Rachel Whetstone: “While we’re late here it still seems better to do something and talk about it than not do anything. If we think it’s the right thing.”

Several executives responded negatively, pointing out that he was proposing coming up with a new policy against a behavior Facebook had encouraged users to take (sharing posts into Facebook groups) and applying it hours before voting began in a national election. Rosen overruled them and implemented the new policy. Whetstone crafted a communications plan in case the press asked any questions. The plan’s stated “goals” included “Limit the meme that we’re slow to spot misuse – and can’t control Facebook” and “Limit the meme that we cannot control our systems – or are too slow to spot these different types of abuses”.

HOW WIDESPREAD ABUSE ‘FALLS THROUGH THE CRACKS’

Of all the cases of inauthentic behavior that Zhang uncovered, the one that most concerned her – and that took the longest to take down – was in Azerbaijan. It was one of the largest she had seen, and it was clearly being used to prop up an authoritarian regime with an egregious record on human rights.

The Azerbaijani network used the same tactic that was seen in Honduras – thousands of Facebook Pages set up to look like user accounts – but instead of creating fake likes, the Pages were used to harass. Over one 90-day period in 2019, it produced approximately 2.1m negative, harassing comments on the Facebook Pages of opposition leaders and independent media outlets, accusing them of being traitors and praising the country’s autocratic leader, President Ilham Aliyev, and his ruling party, the YAP.

Facebook did not employ a dedicated policy staffer or market specialist for Azerbaijan, and neither its eastern European nor Middle Eastern policy teams took responsibility for it. Eventually Zhang discovered that the Turkey policy team was supposed to cover the former Soviet republic, but none of them spoke Azeri or had expertise in the country. As of August 2020, Facebook did not have any full-time or contract operations employees who were known to speak Azeri, leaving staff to use Google Translate to try to understand the nature of the abuse.

It took until December 2019 for Facebook to assign someone to look into the harassment campaign, and until January 2020 for that investigation to begin. By early February, the investigator had established that the campaign was “clearly connected to the ruling party YAP”. But one month later, without explanation, the investigator changed the priority level on the escalation from “high” to “low”. Once again Zhang had uncovered an influence operation by a non-democratic government against its own people, and once again Facebook was dragging its feet. After publication of this article, the YAP denied any connection with Pages that recently left harassing comments on an independent news outlet’s Facebook Page.

‘A lot of the time it felt like trying to empty the ocean with an eyedropper,’ Zhang said of tackling abuse on the platform. Photograph: Jason Henry/The Guardian

Zhang again lobbied internally for the Azerbaijan case to be prioritized. “I’ve been told directly by leadership that I should ignore these cases because if they are impactful, we’ll eventually receive PR flak over it and motivate a change,” she said during a presentation at a 2020 internal summit focused on issues in civic integrity, according to her notes. “The assumption is that if a case does not receive media attention, it poses no societal risk … What is our responsibility when societal risk diverges from PR risk?”

In August 2020, following the news that Aliyev was cracking down on opposition leaders and journalists, Zhang again took her case to the internal “election integrity discussions” group.

“Unfortunately, Facebook has become complicit by inaction in this authoritarian crackdown,” she wrote. “Although we conclusively tied this network to elements of the government in early February, and have compiled extensive evidence of its violating nature, the effective decision was made not to prioritize it, effectively turning a blind eye.”

The internally public post helped galvanize a real response, with one researcher in civic integrity declaring the harassment campaign “one of the worst things I have seen” and promising to “spend some political capital” to see action taken. Threat intelligence picked the escalation back up, with an investigator explaining to Zhang that “it had probably just fallen through the cracks”.

By the end of August, threat intelligence had found “solid evidence” of the involvement of the YAP and was preparing to establish the full scope of the network. Zhang would not be at Facebook to see the eventual CIB announcement and takedown of 589 Facebook accounts, 7,665 Pages and 437 Instagram accounts linked to the Youth Union of the YAP on 8 October, however.

She was fired for poor performance in mid-August 2020, a result of her spending too much time focused on uprooting civic fake engagement and not enough time on the priorities outlined by management. Her mental health and job performance had suffered severely under the pressure of trying to do both and the stress of the coronavirus pandemic.

“It shouldn’t have been my job, but at the end of the day, I was the only one who was effectively making any decisions regarding these cases,” she told the Guardian. “Whether a network was taken down or not was effectively based on how much I chose to push it, how much I chose to yell at other people about it.

“I still have trouble sleeping at night, sometimes,” she added. “It was just very overwhelming and frustrating because, frankly, I should never have had this much responsibility and power.”

On her final day, she went through the list of outstanding tasks she had filed about still-active networks of inauthentic accounts, running queries and leaving notes for the staff she hoped would pick up her work after her departure. There were 200 suspicious accounts still boosting a politician in Bolivia, she recorded; 100 in Ecuador, 500 in Brazil, 700 in Ukraine, 1,700 in Iraq, 4,000 in India and more than 10,000 in Mexico.

Then she made one final update to the file on Honduras, writing: “As I’m departing the company today, I did a final sweep in Honduras and turned up what appears to be ~130 recidivist users still active on President [Juan Orlando Hernández] … I realize that this will not be prioritized or tackled, but wished to provide an update for completeness regardless.”

  • The map in this article was amended on 12 April 2021 to clarify the nature of fake accounts in Bolivia. The article was amended on 15 April to incorporate information from the YAP provided after publication.
