Disinformed democracy

With the US elections fast approaching, the volume of mis- and disinformation circulating online has now reached a point at which it actively threatens the democratic process. Last month an internal Facebook investigation uncovered thousands of groups affiliated with the QAnon conspiracy theory, suggesting that up to three million users spread lunatic fictions that have stoked social and political tensions and occasionally spilled over into offline violence.

Earlier this week Facebook announced that it would restrict political advertising in the week before the election and remove posts that spread misinformation about Covid-19. CEO Mark Zuckerberg promised that the November election would not be “business as usual” and that, in discharging its “responsibility to protect our democracy”, his company would be “helping people register and vote, clearing up confusion about how this election will work, and taking steps to reduce the chances of violence and unrest.”

There is little to suggest that Facebook has the resources to deal with these threats. At least two users flagged a page called “Kenosha Guard” for inciting violence before a self-identified militia member fatally shot two people during the recent protests. Although the shooter was not a follower of the page in question, it is clear that the platform makes it easier for dangerous groups to coordinate their activities. Zuckerberg apologized for the failure to remove the page – it was taken down after the shooting – and cited an “operational mistake” in which a “specialized team that is trained to look for symbolism and innuendo” had not adequately reviewed the content. Since that second review appears to have taken place only after the shooting, the idea that the company’s review team will preempt future threats looks like wishful thinking.

Having supplied the infrastructure that enabled online interference in the 2016 election, Google, Facebook and Twitter are the obvious candidates to lead the response to the flood of disinformation – but can they enforce their promises? The restriction or removal of accounts that engage in “coordinated inauthentic behavior” is a good start; so is more robust verification of political ads. But will it suffice when known malefactors can reconfigure their digital presences so easily?

Joan Donovan, a disinformation researcher at Harvard University, points out that the major digital platforms face a dilemma when trying to rein in groups like the Kenosha Guard and QAnon. Not only do they provide the “base infrastructure” that permits the spread of hateful and dangerous ideas, but they are themselves often targeted by these very groups for being “oppressive regimes that seek to destroy truth.” The platforms’ reluctance to grasp this nettle has already permitted so much disinhibited speech that it is unlikely technical adjustments alone can prevent further damage.

“While people who have bought into these disinformation campaigns are already affected,” notes Donovan, “preventing it from spreading to new groups and new audiences is one intervention, among many, that are needed. Unless there is some kind of coordination between platform companies to get rid of the main QAnon influencers, it will continuously pop back up.”

There are also problems that lie beyond the scope of internal reviews. Trump’s embrace of conspiracy theorists and the GOP’s failure to check his frequent online provocations look set to create a perfect storm of fake news at the end of an already ugly campaign. With potential antitrust action and other forms of regulation looming, the big digital platforms are walking a tightrope between preserving profits and protecting democratic norms. Whether they can prevent the next few weeks and months from lapsing into ‘business as usual’ is an open question, one with major repercussions around the world. In the meantime, America’s conflation of its culture war with its politics will continue to reveal the dangers that laissez-faire digital platforms pose to modern democracy.