Success metric: Societal good?

Florent Joly
2 min read · Apr 12, 2023


‘Societal good’ is too often the missing consideration in assessing the safety of new technology.

In the field of cybersecurity, teams will split into a red team, which plays the role of the attacker by trying to find vulnerabilities and break through defenses — and a blue team, which defends against attacks and responds to incidents when they occur.

In product organizations more generally, teams will consider a range of potential outcomes as they create new features, which they categorize as goals 🎯 (e.g., user engagement) or guardrails 🛡 (e.g., preventing harmful behaviors).

What all these approaches miss is a consideration that goes beyond outcomes for individuals to outcomes for society and institutions as a whole. Does this technology introduce a fairness or equity issue? Could it disrupt local newspapers in ways that hurt civic engagement?

Part of the reason is feasibility: outcomes for individuals can be measured immediately, while outcomes for public goods can take years to play out, making attribution difficult.

Part of it is culture: the Internet as we know it was largely architected around individualistic definitions of freedom, whereas other approaches could have anchored its design in the public good, or in definitions of freedom rooted in the wellbeing of the community.

Compare John Stuart Mill’s conception of individual liberty, which knows only the bounds of self-protection, with Jean-Jacques Rousseau’s Social Contract, whereby man becomes free through obligation, more specifically adherence to laws that reflect the general will.

Fortunately, social media companies in recent years have mitigated gaps in their own definitions of ‘success’ by hiring teams specifically focused on driving societal outcomes (be it Equity, Fairness, or Voter Empowerment).

A similar reckoning is now needed for the development of A.I.

As Aviv Ovadya writes in Wired, this could look like supplementing red teaming and blue teaming with violet teaming, which he defines as “identifying how a system (e.g., GPT-4) might harm an institution or public good, and then supporting the development of tools using that same system to defend the institution or public good.”

This could also start by explicitly mapping integrity protections against societal goals when designing any new technology, using a simple framework like the one Samidh Chakrabarti (former head of Facebook’s Civic Integrity team) introduced me to, which includes both proactive and reactive mitigations.
