Facebook wants you to know it’s trying really hard to deal with the ways people use its platform to cause harm. It just doesn’t know exactly what to do. What separates hate speech from offensive ideas, or misinformation from coordinated disinformation intended to incite violence? What should the company allow to remain on the platform, and what should it ban? Two years after Russians weaponized Facebook as part of a large-scale campaign to interfere with US democracy, the social network is still struggling to answer those questions, as the past two weeks have made clear. But it’s trying to figure it out.
As Facebook has reaffirmed its commitment to fighting fake news in recent weeks, it has also been forced to defend its decision not to ban sites like Alex Jones' InfoWars. Instead, the company says, it reduces the distribution of content that is flagged and confirmed to be false by fact checkers.
On Wednesday, Recode’s Kara Swisher aired a podcast interview with CEO Mark Zuckerberg, in which he outlined Facebook’s approach to misinformation. “The principles that we have on what we remove from the service are, if it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform,” Zuckerberg said. By way of example, he explained that he wouldn’t necessarily remove Holocaust denial posts from Facebook. “I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong,” he said.
People freaked out, and later that day Zuckerberg tried to backtrack, clarifying that "If a post crossed the line into advocating for violence or hate against a particular group, it would be removed."
But Facebook has come under fire for its role in amplifying misinformation that might not cross that line but has still led to violence in countries like India, Sri Lanka, and Myanmar.
‘There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down.’
Since Wednesday, the company has announced a series of changes to its products that appear to address this criticism. Its private-messaging service WhatsApp launched a test Thursday to limit the number of chats to which users can forward messages. "Indian researchers have found that much of the misinformation on WhatsApp is coming from political operatives who have 10 or 20 interlaced groups," says Joan Donovan, of the group Data & Society, who has studied online disinformation and misinformation for years. She described the structure of those disinformation campaigns in India as a honeycomb, at the edges of which are paid operatives forwarding fake messages widely. Limiting their ability to forward messages should help, and WhatsApp said it would continue to evaluate the changes.
Facebook also announced a new policy targeting misinformation specifically—but only when it risks imminent violence. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” a spokesperson said. Deciphering that kind of context is a challenge that Facebook has already encountered when it comes to things like hate speech, and its willingness to take this on now might represent a shift toward taking responsibility for the role its platform plays in society. By deciding what may lead to violence and taking action, Facebook is taking on duties normally reserved for governments and law enforcement. But without more details about how it will decide what falls under the policy, and given past accusations of arbitrary content moderation, some researchers are skeptical of how effective it will be.
Donovan points to Zuckerberg's Holocaust denier example. "That to me is a terrifyingly low estimate of the amount of anti-Semitism in the world," she says. "It shows me that they are not able to truly understand how Holocaust denial has been historically weaponized against Jewish people." Donovan has studied anti-Semitism and white nationalism extensively, and says most of the people posting Holocaust denial online do so with the intent to defame and target Jews. Does that count as causing imminent harm?
The company has already implemented the policy in Sri Lanka, where the government shut down access to its platforms in the wake of sectarian violence earlier this year. Next will be Myanmar, where security forces have been accused of atrocities against the Rohingya minority. The plan, a spokesperson told WIRED, is to eventually apply the policy globally, although how that rollout will proceed is unclear.
To help figure out when misinformation has tipped from “just plain wrong” to “wrong and possibly contributive to violence,” Facebook will partner with local civil society groups that might better understand the specific cultural context. The company has not indicated who its partners are in Sri Lanka, or who they will be in Myanmar or elsewhere. But local groups in Myanmar have been harshly critical of Facebook’s response to violence in their country so far, as WIRED reported earlier this month.
Who those groups are will likely have a large impact on how the policy is enforced—another key decision for the company to make. “There are civil society groups who are anti-semitic. There are civil society groups who are actively in conflict. They are by their nature divisive,” says Donovan. She also worries that this could be just another way for Facebook to pass the buck. “The idea that a civil society group should be made responsible for flagging this content continues a very familiar trope within these platforms, that the users should be the ones policing the platforms.”
‘The idea that a civil society group should be made responsible for flagging this content continues a very familiar trope within these platforms.’
Joan Donovan, Data & Society
Facebook also isn't sure yet whether it will ever make the names of these partners public, or whether it will do so in some cases but not others, according to a spokesperson. Anonymity may be a very important consideration for some civil society groups, which could face violence themselves if parties with a vested interest in spreading misinformation found out they were partnering with Facebook to take it down. On the other hand, granting such anonymity undermines transparency about how the policy defines dangerous misinformation. Facebook needs to come up with a plan for when anonymity makes sense, and for how to protect local partners who may be OK with their names being known but who may find themselves targeted by trolls or bad actors for working with Facebook.
Donovan wonders what will happen if a partner makes a recommendation that Facebook doesn’t follow and then a violent act occurs. Or what if violence erupts in response to misinformation being taken down?
A Facebook representative told WIRED that the company is iterating and working out all these details now. It’s unclear what will happen to the person sharing misinformation found to be in violation of this policy. Their post will be removed, yes, but will they be banned? A spokesperson for Facebook was not sure.
“Facebook’s new policy seems to suggest a post has to be ‘both’ false and shared with the intent to prompt harm for it to be flagged for potential, not definite, removal,” wrote Luke Stark, a sociology researcher at Dartmouth who studies the intersection of technology and behavior, in an email to WIRED.
Stark wonders how the policy will differ by location, not just in terms of what counts as misinformation that could cause harm, but also in how strongly Facebook enforces it. “Facebook seems to be making a distinction between its impact in some (non-Western) countries—where it admits misinformation on the platform has caused physical harm—and its apparently more benign role in the United States,” he wrote. This leads him to suspect the policy will result in a two-tiered system, where Facebook much more actively polices certain non-Western countries, while allowing misinformation to spread in countries like the US.
The policy will be enforced by the team under Monika Bickert, Facebook's head of global policy management. In the coming weeks and months, the team will release more specifics about the policy. Only then will we be able to tell whether this policy is any more coherent than Facebook's other attempts to control what happens on its site.