Should platforms behave more like democratic governments?

Florent Joly
Mar 13, 2020

In his book published in June 2019, David Kaye argues that Western democracies, Germany and France for example, are delegating the regulation of online speech to tech platforms without oversight. In his view, such precedents can encourage less democratic governments in the Global South to pass their own regulations, aimed less at combating misinformation or hate speech than at protecting politicians and censoring dissent. Rather than letting governments set dangerous precedents for freedom of speech around the world, Kaye argues for proactive measures that platforms themselves must take to govern speech more responsibly. Do such measures amount to platforms behaving more like governments? Let's unpack them.

Decentralized decision making

Kaye sees platforms as having a democratic deficit. He argues that civil society activists and users should play an explicit role in company policy making, and that companies should develop multi-stakeholder councils (similar to Facebook's Oversight Board) to help evaluate the hardest content problems. For him, there should be desk officers in each country whose role is to engage with local civil society, as well as programs that protect the engagement of vulnerable populations and minorities.

Here, we can wonder whether democratic participation is truly compatible with the structure and mandate of companies in a capitalist ecosystem. In the absence of alignment between democratic participation and companies' business models, would such participation become performative rather than true engagement?

This raises another question about the perceptions of democratic participation that companies succeed or fail to create. If platforms already consult stakeholders and run programs and initiatives to protect minorities but simply do not talk about them, do their actions count as democratic participation? Or is the essence of participation seen as democratic that everyone should know about companies' outreach and efforts, and know how to take part?

Human rights standards as norms for content moderation

For Kaye, companies should use human rights law as the basis for their standards. In practical terms, this could mean that platforms make explicit references to human rights law in their rules, both proactively, ahead of takedowns, and reactively, when taking down content.

We should ask whether human rights law provides platforms with detailed enough answers to every content question the Internet poses, or whether such laws were for the most part drafted for an offline world. If the latter, would abiding by the spirit of human rights law end up being a positioning exercise for platforms rather than a true improvement to enforcement?

We could also ask whether human rights law is flexible and adaptable enough to keep up with the rapid pace of "innovation" in the field of problematic content (deepfakes, influencer marketing).

Radical transparency

Finally, Kaye advocates for radically better transparency on platforms like Facebook, Twitter and YouTube to allow users to recover individual agency. He points specifically to two types of transparency:

  • Rule-making transparency, which is about explaining how policy decisions were made and how companies arrived at a policy change.
  • Decisional transparency, which is about explaining why a takedown action was taken and what users can do to appeal the decision.

A common pitfall when it comes to transparency is adopting a nearly ideological stance in favor of platforms sharing more information, one that disregards what we know about information overload and the drivers of trust. Would users pay attention to the additional transparency and rationale shared by platforms unless they absolutely had to? Is more information about why and how a decision was made really more likely to restore users' trust?

Seen together, David Kaye's recommendations around democratic participation, human rights law and radical transparency encourage tech platforms to adopt some of the best practices governments have historically used to build legitimacy. Without waiting for regulation, platforms have already taken steps in each of these three directions.

What do we citizens think?
