As 2018 drew to a close, BuzzFeed analyzed the most-shared fake stories on Facebook and found that the misinformation industry is still thriving. According to the report, the top 50 fake stories earned 22 million shares, reactions, and comments — just 7 percent less than the previous year’s crop, and more than the 2016 winners.
Two years after Facebook began making a serious effort to counter the viral spread of misinformation, hoaxes remain a fact of life on social networks. Whether motivated by profits, politics, or sheer mischief, hoaxers continue to find ways around limitations placed on them by the social networks where they operate.
With fake stories a seemingly permanent fixture of life online — and the threat of convincing fake videos gaining steam — it can be easy to despair. But even as the viral threat evolves, new antibodies are emerging. Amid fears that the boundaries between reality and fiction are dissolving, researchers have begun sketching out proposals to stop misinformation from spreading. Drawing on experts from a variety of fields, advocates are putting together an organized effort to protect the information sphere from scammers and state-sponsored trolls.
Academic researchers, pro-democracy hackers, and tech employees have begun collaborating on initiatives designed to identify and combat misinformation wherever it appears online. And while the work remains in an embryonic stage, advocates say they are at least somewhat optimistic that the worst actors can be reined in — and that trust can be restored to a greater part of the internet.
In a brightly lit coworking space in downtown Austin, Dwight Knell stood before the assembled hackers and asked them to take the next 48 hours seriously.
“Misinformation is really one of the foremost issues of our time,” he told the crowd at the inaugural CredCon, which took place in November. “And we’re just at the tip of the iceberg. Your kids and your grandkids are going to ask you what you did to fight misinformation — and this is not hyperbole.”
Over the next two days, Knell and his fellow organizers led a few dozen attendees through sessions designed to define and classify various types of misinformation, while also outlining potential approaches to reducing its impact. So far, very little has advanced beyond the prototype stage. But the prototypes hint at where efforts to fight fraud will move in 2019.
At one session, a representative from the RAND Corporation grouped tools for fighting misinformation into a still-evolving set of categories. There are tools that try to certify the authenticity of a piece of content, like a photo; tools to detect bots; tools that generate credibility scores; tools that track disinformation as it spreads; and tools to augment web browsers and search engines. Other methods for fighting misinformation under development include whitelists of trusted websites, editorial codes and standards for publishers, and various efforts involving blockchains.
No single tool is expected to solve the problem. But advocates say that collectively, they can make a significant impact. That’s particularly true of financially motivated misinformation, says Jennifer 8. Lee of the journalism nonprofit Hacks/Hackers. The group is one of the organizations that makes up the Credibility Coalition, which sponsors CredCon.
Lee says posting a sensational headline to generate thousands of Facebook shares, and profit from everyone who clicks on your website, is a scam that should be mostly solved within three to five years.
“The process of fighting misinformation has been compared to the flu vaccine,” she says. “You have to update it every year, and you don’t necessarily need 100 percent immunization to be effective. You just need to temper it below a certain threshold.”
The Credibility Coalition comprises Hacks/Hackers and Meedan, a hybrid for-profit and nonprofit enterprise that works on news verification efforts, and is sponsored by companies including Google, Facebook, and Mozilla. CredCon is itself a spinoff of Misinfocon, a more policy-oriented event that has been held five times around the world since 2017. The first-ever CredCon had more of a software-development orientation — its centerpiece was a two-day hackathon in which individuals and groups worked together to build tools addressing various deficits of the information landscape.
Hackathon projects included tools to scour Wikipedia articles for unreliable sources, evaluate a Twitter user’s reputation by the number of times they have shared misinformation previously, and search for math errors in articles. A “proof of patience” bot would highlight when someone had actually read the article they were sharing. Another tool, Tattle, attempted to integrate fact-checking databases into WhatsApp and re-share their findings back to users.
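Of those ideas, the Twitter reputation tool reduces to a simple ratio: how often has this user shared links that appear on a list of known misinformation? A minimal, hypothetical sketch of that idea in Python (the project's actual approach, data sources, and names are not described in the article; everything here is an illustrative assumption):

```python
# Hypothetical sketch of the reputation-scoring idea from CredCon's hackathon:
# rate a user by how many of their past shares match a set of URLs already
# flagged as misinformation. Names and data are illustrative, not real code.

def reputation_score(shared_urls, known_misinfo_urls):
    """Return a score from 0.0 to 1.0, where 1.0 means a clean sharing history."""
    if not shared_urls:
        return 1.0  # no history yet, so no strikes against the user
    strikes = sum(1 for url in shared_urls if url in known_misinfo_urls)
    return 1.0 - strikes / len(shared_urls)

if __name__ == "__main__":
    history = [
        "https://example.com/a",
        "https://hoax.example/b",
        "https://example.com/c",
        "https://hoax.example/d",
    ]
    flagged = {"https://hoax.example/b", "https://hoax.example/d"}
    print(reputation_score(history, flagged))  # prints 0.5
```

A production version would need far more nuance (weighting recent shares, handling URL shorteners, sourcing the flagged list from fact-checking databases), but the two-day prototypes were operating at roughly this level of simplicity.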
These are modest efforts — the projects were designed to be completed in two days, after all — but they point to where future efforts could lead. After the event, attendees could apply for small grants (up to $2,000) to continue their work.
Of course, the biggest opportunity to reduce the spread of misinformation exists at the level of the platforms — and the platforms are paying attention to what CredCon is up to. (Both Facebook and Twitter sent representatives to the event.) Sandro Hawke, a fellow at the World Wide Web Consortium, has been exploring ideas to use browser-based technologies to bolster the credibility of information online. Maybe a browser could denote the credibility of a page with a colored light in the corner, Hawke says, or offer an interstitial warning to users visiting a known misinformation peddler.
In any case, researchers’ operating principle should be to first, do no harm. “Before you do an intervention, you need to have some careful study that it’s going to help,” says Hawke, who chairs the W3C’s Credible Web Community Group. (Mission: “to help shift the web toward more trustworthy content without increasing censorship or social division.”)
“Nobody wants to be misled,” Hawke says. “Everybody knows the feeling of, I’ve been tricked. They might not agree what the truth is. But the moment where they go, Shit, I’ve been had — everyone wants to avoid that. I think we can get everybody on our side at that level.”
Another part of building consensus: attracting a diverse group to participate in the project. CredCon is organized primarily by women, and 15 percent of attendees are underrepresented minorities. Organizers say a diverse group is needed to monitor for blind spots in their software development. Misinformation looks different around the world — but so does credibility. White folks might be more likely to consider a government agency a trusted source; racial minorities might not. Bringing disparate groups together helps misinformation researchers consider the problem from as many dimensions as possible.
The event’s urgency derives from the fact that national politics are on fire, with a president who lies constantly and a social media ecosystem still awash in viral falsehoods. But organizers are quick to point out that misinformation is an equally bad problem in fields such as public health, where deceptive information about vaccines, HIV transmission, and other issues has great potential to do harm.
At the same time, attendees are still having earnest debates about the definitions of words like trust and credibility. Discussions about misinformation can quickly turn into freshman-year philosophy seminars. How do you know what you think you know? What does it mean to be true?
But in an environment where the prevailing mood is bleak, CredCon is notable for harboring a sense of optimism, however faint.
“We’re still figuring out what the hell is going on,” says Sam Sunne, a freelance journalist who has worked with CredCon. “It’s going to take a few years. We’re only just now starting to figure out how Facebook has affected our psyche, or how Twitter has affected public discourse. We have to really figure that out before we can even start to come up with potential solutions. But once you adjust to the idea that improvement is several years down the road, I think it’s looking up.”
Aviv Ovadya, a misinformation researcher who sounded an alarm about fake news before the 2016 election, says he welcomed the increased attention to the problem.
“What’s changed in the last two-and-a-half years has been the degree of momentum, and interest, and funding, and political will,” says Ovadya, who attended the Austin conference. “[But] most of the work is yet to be done.”
The scale of that work became apparent at the conclusion of the event’s first day. Organizer Nat Gyenes, who had begun the day promising attendees an evening field trip to see Austin’s famous urban bat colony, was forced to cancel the excursion due to a scheduling error.
“I’m sorry I’m the purveyor of misinformation,” Gyenes said. “It’s not actually bat season.” The crowd nodded sympathetically. We’d all been there.