Facebook wants to give people more control over political ads. That's a mistake.
Digital Politics is a column about the global intersection of technology and the world of politics.
Facebook's latest solution to stop divisive political content from distorting democracy is as simple as it is wrongheaded: Let the people decide.
In a series of upgrades to be rolled out over the next six months, the world's largest social network is giving its users a greater say over how they're targeted with political ads on the company's global platforms.
People will be able to opt out of viewing paid-for messages. Facebook's transparency tools will be expanded. Researchers will have more power to track digital campaigns.
Facebook also doubled down on its refusal to fact-check political messages from established politicians (even when they make false claims), and said it would continue to allow partisan groups to target people online with direct messages — a tactic that Google had banned on its rival services worldwide.
For a U.S. tech giant, appealing to notions of free speech and individual responsibility makes sense — they're baked into the country's constitution. But the First Amendment — or the tenets of freedom of speech — does not apply in the same way beyond U.S. borders, where even Western democracies tend to put caveats on the extent of free speech, particularly during election seasons.
The approach also ignores the quasi-state paternalism that people in other countries, notably in government-heavy Europe and across Asia, have come to rely on when tackling some of the most difficult questions now facing voters in the 21st century.
When it comes to asking users from San Francisco to Stockholm to Seoul to make their own determination about what content shows up in their feeds, Facebook's new strategy rests on a fundamental flaw — the assumption that users have the time, willingness or ability to do so on their own.
That's wishful thinking.
By outsourcing the policing of political messages to regular people on both its social network and Instagram, the photo-sharing service also owned by Facebook, the tech giant is again sidestepping its obligation to take greater responsibility for what is posted on its platforms.
It's also deputizing its more than 2.4 billion global users in a complex, ever-changing battle against misinformation and overtly partisan material that no one is capable of fighting alone — particularly average internet users who are more accustomed to swiping through photos from family and friends than checking whether a social media post came from a Russian-backed group looking to sow dissent.
Facebook's latest steps will lead to more of the same.
On paper, people will be granted new powers to track and limit who targets them with political ads.
But in reality, few, if any, will take advantage of such powers to change their social media experience by limiting, or even deleting, all forms of paid-for partisan messaging ahead of this year's U.S. presidential election or the litany of other national votes planned in 2020 from Poland to Peru.
Woods from the trees
It's not that individuals shouldn't do more to protect themselves online.
Since the 2016 U.S. presidential election, after which the country's national security agencies alleged that Kremlin-backed groups tried to sway voters, people's awareness of such tactics — and ability to combat them through new transparency tools from Google, Facebook and Twitter aimed at highlighting who's buying online ads — has grown steadily.
But relying on Facebook users to do the heavy lifting won't address some of the systemic problems that have allowed social media platforms to become rich breeding grounds for conspiracy theories, outright hate speech and other politically motivated divisive content.
The real questions are how such material is created in the first place, who is behind campaigns to spread it virally online, and what can be done to suppress the worst offenders — all topics that would require Facebook, not individual users, to step into the breach and take greater responsibility for what is published online.
In the company's defense, it has invested millions in new technologies to weed out some harmful material, hired (mostly contract) workers to moderate digital content and beefed up its security checks in an effort to keep foreign governments at bay.
Yet the tech giant's raison d'être remains one where users, not the company, make the final call. That's a game plan Facebook has already used to sidestep greater regulatory scrutiny.
Take its new plan to let people opt to see fewer (but still some) political ads and to remove themselves from so-called custom audiences, or bespoke lists created by advertisers looking to target voters with similar characteristics.
The company took a similar stance when addressing the barrage of privacy complaints in the aftermath of the Cambridge Analytica scandal, in which the data of more than 87 million people was harvested without their consent.