Facebook’s Oversight Board Is Not Enough

Facebook exerts enormous influence over the creation and spread of media content in the United States and around the world. The company's recent creation of an oversight board to independently adjudicate content disputes is an attempt to solve real problems: the spread of misinformation, voter suppression, and so on. The author argues that the board itself is not enough, however, because the company's business model is predicated on the very techniques that create the problems to begin with: audience segmentation, targeting, and algorithmic optimization of feeds for more eyeballs.

Following Mark Zuckerberg's stated commitment nearly a year ago to improving his company's public accountability measures, Facebook announced detailed plans last month for its new Oversight Board. The body, which the company says will comprise 40 independent experts serving three-year terms, has been described by many as Facebook's own Supreme Court, as it will adjudicate questions of content policy on the company's platforms as they arise. The board is designed to have notable independence; in these judgments it can overrule Zuckerberg himself.

On its surface, the Oversight Board is a remarkable answer to a novel problem. The firm, which serves well over two billion users across the globe, has seen a broad array of serious problems related to the scaled, frictionless dissemination of content that is its core consumer service. Across Instagram, WhatsApp, and the principal Facebook platform, that content has come, inevitably, to encompass not only cat videos and baby photos but also hateful conduct, the incitement of terrorists in politically unstable locales, the spread of disinformation, and the systemic entrenchment of algorithmic bias. The deep public concern over these issues is justified, and accordingly the board's duties include hearing users' grievances related to them and deciding when to bring the hammer down on offending users or posts.

But we must ask: is the board really set up to succeed?

I would contend not. It is not poor execution that is responsible for the company's general troubles with content moderation — it is the business model behind the company's platforms.

This same model lies at the center of the consumer internet as a whole and is based on maximizing consumer engagement and injecting ads throughout our digital experience. It relies on collecting personal data and on sophisticated algorithms that curate social feeds and target those ads. Because there is no earnest consideration of what consumers wish to or should see in this equation, they are subjected to whatever content the platform believes will maximize profits. These practices in turn generate negative externalities of which disinformation is only one.

Take this example: when Russian political operatives sought to subvert our elections, they turned to the internet platforms. We witnessed this during the 2016 elections in many forms, including posts on Twitter and Facebook that inflamed racial tensions in order to suppress voting in certain communities in the United States. These efforts relied on the very same audience segmentation and targeting techniques that allow the platforms to increase traffic (and ad revenue). Already, the FBI and Department of Homeland Security are informing election officials that agents of the Russian government "might seek to covertly discourage or suppress U.S. voters from participating in next year's election." Using the very tools these platforms have perfected, in other words, nefarious actors are identifying the thin cracks in American society and showering them with lies until our political fabric begins to rip apart.

For an oversight board to address these issues, it would need jurisdiction not only over personal posts but also political ads. Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content. These steps are much more of a challenge to a company that relies on these mechanisms for its bread and butter. No matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.

In reality, then, the Oversight Board in its current form cannot address the harms that are perpetrated and perpetuated over Facebook.

Furthermore, the board may be doing more harm than good. Other internet companies are also trying their hands at mitigating these issues, though those efforts are early and have yet to prove effective: YouTube's recent hate speech takedowns and Twitter's update to its operating rules to counteract dehumanizing content targeting religious groups are just a few examples. Seen in this light, I believe Facebook's board becomes a commercial convenience for the company, in both its name and its function. It gives the impression of true oversight by delegating the responsibility of determining what constitutes hate speech to an external party with public credibility. That, in turn, allows the company to skate past the threat of more rigorous regulation from relatively aggressive legislatures that might wish to target the firm's business model itself.

To address these issues, perhaps the Oversight Board's authority should be expanded beyond content takedowns to the more critical concerns at the heart of the company itself. We need oversight of the company's data practices to promote consumer and citizen privacy; oversight of the company's strategic acquisitions and data governance to protect against anticompetitive practice; and oversight of the company's algorithmic decision-making to protect against bias. There are many ways such oversight could be operationalized: through shareholder power, governmental oversight, third-party auditing, industry regulation, or, indeed, extensions of the board's authority.

Facebook exerts enormous influence over the creation and spread of media content in the United States and around the world — to the extent that a majority of users in certain nations believe Facebook is the internet itself. And when we overlay the company's far-reaching control of the media and communications landscape onto the political world, noting that the Russians and other nefarious actors, both foreign and domestic, may have U.S. and other democratic elections in their sights, the necessity of some kind of public oversight of the business practices that cause these problems becomes clear.

Dipayan Ghosh is a Shorenstein Fellow and co-director of the Platform Accountability Project at the Harvard Kennedy School. He was a technology and economic policy advisor in the Obama White House, and formerly served as an advisor on privacy and public policy issues at Facebook. He is the author of Terms of Disservice (forthcoming, November 2019). Follow him on Twitter @ghoshd7.
