Last year, as thousands of women shared their stories of sexual assault and harassment with the hashtag #MeToo, Amanda, a 30-year-old from Oregon, was looking for a supportive place to share her own experiences. Soon enough she was invited by a friend to join a Facebook group for survivors of sexual assault that had thousands of members.
The group was easy to find: As recently as this month, the page associated with it ranked higher in some search results than the #MeToo page verified by Facebook. The group, which also had “me too” in the name, looked legitimate to Amanda. Best of all, it was “closed,” meaning that while the group showed up in search results, new members needed an admin’s approval to join and only members could see what was posted in it.
“People shared the most intimate moments of trauma with these people,” says Amanda. (WIRED is declining to include her last name to protect her privacy.)
Then, earlier this month, Amanda noticed the group’s name and photo had been changed. The same day President Trump mocked the #MeToo movement at a rally in Montana, trolls began descending on her community. The group was now advertised as a place for sharing erotica, and an account Amanda didn’t recognize had become the administrator. They began adding new members; many of these profiles, when later examined by WIRED, appeared to be fake.
WIRED spoke to five women who were in the group, including Amanda, some of whom provided screenshots to support their accounts. They described harassment by many of those new profiles, who threatened in some cases to contact their abusers or to call child protective services regarding their children. One troll commented that they had collected all of the women’s posts about abuse in a file, implying they could still be released even if they were deleted from Facebook. The social network subsequently suspended many of the accounts after being contacted by WIRED.
It’s not clear whether the #MeToo group was taken over through some sort of hack, or if it was purposely set up to lure women in with the goal of eventually harassing those who may have joined. After women began reporting their group to Facebook, it was deleted, leaving the original members to piece together what might have happened.
“That was the worst part. Some people had posted that the group was their safe place to talk, then bam, it’s gone,” says Amanda.
“We want people to feel safe to engage and connect with their community. For that reason, we consider authenticity to be the cornerstone of our community and do not tolerate harassment on Facebook. In line with these policies, we disabled this group, the Page, and the identified profiles for violating our Community Standards,” a Facebook spokesperson said in a statement.
‘That was the worst part. Some people had posted that the group was their safe place to talk, then bam, it’s gone.’
There’s no doubt that Facebook groups represent important communities for millions of people. They played an instrumental role in organizing the West Virginia teacher strike earlier this year and in the 2017 Women’s March. But groups have also been used as tools for manipulation, scams, and harassment.
Whitney Phillips, the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Internet Culture, says Facebook groups and pages have long been a place for trolling and harassment. She cites incidents that go as far back as 2010, when trolls hijacked memorial pages set up for loved ones that had recently passed away.
“Overnight the admin would flip it so that it said whoever the person was, they deserved [to die]. That was an established strategy back then,” says Phillips. Since then, malicious actors have continued to set up Facebook pages and groups to exploit tragedies and news events, like mass shootings. “This is a failure of Facebook really taking their own product to its logical extension. Facebook doesn’t seem to understand that their tools are the bread and butter of manipulators.”
Unlike other social platforms that host intimate communities, like Reddit, Facebook requires users to provide their actual names, heightening the potential consequences of abuse. Unless a Facebook user has strict privacy settings enabled, a troll or other malicious actor can find photos of them and their family, where they work or go to school, and additional contact information like phone numbers and email addresses. Which is what happened with the #MeToo group.
That incident came six months after Adam Mosseri, Facebook’s former head of News Feed, announced that group content would be given more prominence as part of an effort to foster more “meaningful” connection on the platform. Last year, Mark Zuckerberg, Facebook’s CEO, also emphasized that Facebook groups can be used as places to share private, sensitive information.
“If you’re diagnosed with a rare disease, you can join a group and connect with people with that condition all around the world so you’re not alone,” Zuckerberg said at Facebook’s Community Summit in June 2017. At the same event, Zuckerberg said Facebook’s goal was to have 1 billion of its over 2 billion users be a part of “meaningful” communities.
More recently, Facebook has continued to emphasize the importance of groups on its platform. In May, the company released a series of new tools designed to keep groups safe, including admin support, “a dedicated place for admins to report an issue or ask a question and get a response from Facebook,” which was rolled out to a limited number of administrators. Facebook is also developing artificial intelligence to more proactively detect things like fake accounts before they’re reported.
But admin support and other tools like it are only useful if an administrator is acting in good faith. In the #MeToo group’s case, where the admin may have been involved in the abuse, regular members had few options.
“It’s a shame that it is impossible to reach out to someone in charge directly when a thing like this is happening,” says another woman in the group, who asked for her name not to be used. “The notifying tools are not sufficient.”
It’s not clear whether the administrators behind the group intended to target survivors of abuse from the beginning. In part that’s because it’s impossible to tell who created it in the first place. The administrator of the group was a Facebook page. Facebook doesn’t require pages to publicize the individual users behind them, and so they effectively provide a way to shield the identity of a group’s administrators.
The ability to create groups that aren’t tied to specific Facebook profiles serves a legitimate purpose; a nonprofit might want to create a group tied to its name rather than to the identity of its social media manager, for example. Yet that same ability allows bad actors to masquerade as legitimate, a loophole exploited by Russian propagandists and others during the lead-up to the 2016 presidential election.
Facebook is aware of the problem. In June, the social network announced it would make pages more transparent by including the date they were created and whether their name had been changed recently. But it stopped short of requiring pages to disclose who created them.
‘Facebook doesn’t seem to understand that their tools are the bread and butter of manipulators.’
While Facebook does have a Safety Center for users and a specific guide for survivors of abuse developed with the National Network to End Domestic Violence, it doesn’t appear to offer specific guidance for users about how they should evaluate the safety or legitimacy of a group. There aren’t warnings, for example, about sharing personal information when you don’t know an administrator’s real identity.
Facebook does provide public verification badges for certain pages. A blue check mark indicates Facebook has “confirmed that this is the authentic Page or profile for this public figure, media company or brand,” while gray check marks are used for businesses and organizations.
The company uses a number of other signals, beyond just verification status, to surface groups and pages algorithmically in search results and recommendation engines, which can vary from user to user based on things like their friends or connections. But when several people at WIRED searched for the hashtag #MeToo, the page behind the now-deleted group was listed higher than the Facebook verified page associated with the #MeToo movement.
“Recommendation algorithms are routinely manipulated across a wide array of digital platforms (this isn’t unique to Facebook), and verified accounts are one way to communicate trust to users,” says Mary Madden, a research lead at Data & Society, a nonprofit that studies social and cultural issues related to new technologies. “However, placement and ranking on a list also matter a great deal, so it’s easy to see why many users might have assumed a page at the very top was legitimate.”
The social network also appears to rank groups in part based on their size. Before it was deleted, the #MeToo group had more than 15,000 members. But as a BuzzFeed News investigation found, it’s easy to buy fake group members to bolster the perceived size of a community.
For years, Facebook has worked to promote itself as a welcoming and safe platform for billions of people to share their real lives. But that information can sometimes be used for purposes users aren’t aware of, even in closed groups run by legitimate, well-intended administrators.
Last week, CNBC reported that Facebook recently removed functionality that allowed third parties to access the names of people in closed groups. While posts in closed groups have always been visible only to group members, previously anyone could see who those members were. A browser extension had been created that marketers and others could use to download member lists, along with other information on users including employers, locations, and email addresses.
The extension was discovered by Andrea Downing, a moderator for a closed support group for women with gene mutations that put them at a higher risk of developing some forms of cancer. She and others in the community grew worried that insurance companies or other parties might be able to access information about their health, even though their groups aren’t public. They reached out to Facebook with their concerns. On June 29, the social network made member lists for closed groups private. (Facebook told CNBC its decision to make the change was not related to their outreach.) The browser extension has also been taken down.
Downing is still not satisfied. “The system-wide design of Facebook’s groups functionality is a major problem,” she says. And yet, she says it would be hard to leave the social network for another platform, because users can’t download the posts they’ve made in groups. Facebook did not immediately respond to a question about whether it would make group data downloadable.
Facebook groups have helped millions of people connect and feel less alone, but they are also enticing targets for trolls and scammers. The social network has a mixed record when it comes to anticipating how its tools will be abused, but when people are using their real names and sharing the most intimate details about their lives, the stakes couldn’t be higher.
“I liked the group a lot, people would share their stories or ask if certain things happened to them or some people asked for advice on things,” says Chloe, another woman who was in the #MeToo group. “We all helped each other or supported each other as best we could.”