When Alex Jones crashed the congressional hearings looking into big tech platforms back in September, Lord Voldemort kept coming to my mind. Even if you haven’t read the Harry Potter books, you probably know that almost no one in the wizarding world will speak this archvillain’s name aloud; he is referred to only as “he who must not be named” or “you know who.” In the final book, Voldemort puts a curse on the name, so that merely uttering it acts like a beacon for the wizard’s crew of Death Eaters.
Eager to communicate something crucial about the evil lord’s latest plot to his friends, Harry at one point blurts out Voldemort’s name. What follows are many, many scary pages.
Jones is a kind of real-world Voldemort. Speak his name to condemn his conspiracy theories and you draw more attention to his hateful ideas. It’s like fighting fire with oxygen tanks instead of fire extinguishers. The tools breathe more life into the flames.
This is attention-gaming, and Jones excels at it. At the hearings, Jones sat behind Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey as they testified, streaming the action from his phone. He heckled Marco Rubio as the senator talked to reporters. His stunts blew up online and got him into The New York Times.
Jones has been spreading his rage-fueled disinformation for a while, but I have rarely written about him publicly. With the exception of three tweets among tens of thousands I’ve posted, I haven’t referred to Jones by name on Twitter. He was “you know who” to me. This was a deliberate decision; I knew that he counted on his critics to amplify his message. I didn’t want to broaden the reach of his curse.
So why am I naming him now? That fuss Jones made at the Capitol was a last gasp. He’d just been banned from YouTube, Facebook, Apple, and Spotify. Soon after, he was also banned from Twitter.
He’d been deplatformed.
Now that his Voldemort-like powers have vanished, it’s not just possible to discuss Jones—it’s necessary. His deplatforming is easy to celebrate. Though some may wish that good speech were the best way to drive out bad speech, the harms he perpetrated can’t be dealt with in the marketplace of ideas. There is no reasoned debate or enlightened compromise with the idea that parents of children gunned down at Sandy Hook Elementary School in Connecticut were just actors in a false-flag operation later used to promote gun control. Nor is there anything to say about his claim that KKK members are “just Jewish actors” pretending to be Nazis. (Lots of actors in his world.) Yet while I’m happy that Jones has lost his megaphone, I’m troubled both by the system that let him have it and by the way it was taken away. Simply put, the influential digital platforms are built to generate more Voldemorts, while also amassing worrisome amounts of centralized power.
The platforms are in the business of harvesting attention, and Jones and his kind are good at delivering it. Jones’ supporters lapped up his content and stoked outrage, leading to even more views. On YouTube, Jones’ channel was so heavily recommended that watching regular political content often led to an autoplay of his red-faced rants. But that wasn’t the end of it. A network of hateful or conspiratorial content suppliers provides these platforms with enormous amounts of “engaging” content to attract users. The platforms’ recommendation and sorting algorithms, designed to maximize engagement and time onsite, spread that content even further.
But if the unaccountable manner in which the tech platforms can amplify harmful content has led to a crisis, so has the facility with which they can eject it. Jones delivered eyeballs for many years. Then the platforms succumbed to pressure and banned him, all within the span of a few weeks.
The tech platforms have arbitrary power to decide what to amplify, and thus what to bury, and they have the power to banish as they wish. There is nothing aside from backlash to stop them from deplatforming, say, tech critics or politicians who call for closing tax loopholes for massive corporations. Without due process or accountability, a frustrated public is left to appeal to a few powerful referees—and cross its fingers.
This is complicated stuff. We’re dealing with three ideas that are structurally in tension: that hate speech, harassment, false accusations, and baseless conspiracies (like antivaccination claims) cause real harm; that free speech is a crucial value; and that it’s necessary to deal with algorithmic amplification and attention-gamers.
Legislators, courts, users, and the platforms themselves have to be involved. There are precedents from older technologies that we could draw on. An updated version of the fairness doctrine, which required radio and television stations to devote time to issues of public importance and to seek out a multiplicity of views, could be revived for the digital age. We could create something like the Fair Credit Reporting Act that gives users the right to challenge a banishment from a platform. There could be antitrust actions against centralized platforms (along with user protections), or upstarts could offer alternatives (with better business models). As with most social problems, we have to accept that there is no single, perfect solution, no avoiding trade-offs, and also that inaction is a decision too.
At a WIRED event in October, Jack Dorsey said people don’t view Twitter as a service. “They see what looks like a public square,” he said, “and they have the same expectation as they have of a public square, and that is what we have to get right.”
There’s lots of work to be done. But getting it right is too important to be left to Dorsey (and Mark Zuckerberg and Susan Wojcicki) alone.
Zeynep Tufekci (@zeynep) is a WIRED contributor and a professor at UNC Chapel Hill.
This article appears in the December issue.