Peace and Platforms
The Centre for Human Rights, University of Pretoria, in collaboration with Research ICT Africa, Media Monitoring Africa (Moxii Africa) and Centre for Information Integrity in Africa, Stellenbosch University, held a conference 1-2 December 2025 in Cape Town to explore how to protect human rights and strengthen democratic resilience in the digital era.
Below is the keynote address I delivered at the conference.
It’s a privilege to be here, though I confess the invitation came with a certain irony. We’re convened under the banner of Social Media for Peace at a moment when the platforms themselves have become instruments of geopolitics. But perhaps that’s precisely why we need to talk.
Let me start with a provocation. Ten years ago, in 2015, we could still speak credibly about a near global consensus on internet governance. A broad community of scholars, civil society organizations, and policymakers had done serious work. They sought to understand the structural causes of online harms. They convinced governments to act, helped develop policy responses, iterated governance models based on what worked and what didn’t. The progress was real, if imperfect. Many of you in this room were part of that work. You believed, with good reason, that we were moving in the right direction.
Fast forward to 2025. That consensus is rubble. What we have instead is a patchwork of competing digital sovereignties and the normalisation of great power competition. The shift happened fast enough to give you whiplash, but slowly enough that we could pretend, for a while, that the consensus was holding.
It wasn’t.
So here’s the question we need to reckon with today. Social media platforms are not neutral communication tools. We know this. They are infrastructure: of public discourse, of economic exchange, and increasingly of geopolitical projects themselves. Which means the question we face is no longer “should we regulate platforms?” That debate is over. The question now is: “whose regulation, serving whose interests, in whose world order?”
What we’re living through isn’t a series of isolated platform governance challenges. We’re living through a polycrisis: compounding, cascading emergencies that are reshaping the terrain on which any governance must operate. And the platforms aren’t just affected by this polycrisis. They’re accelerating it.
Take climate. We’re facing mass displacement and resource conflicts on a scale that should terrify anyone paying attention. And what role do platforms play? They’ve become conduits for disinformation campaigns around climate migration, for amplifying narratives that delay the fossil fuel transition, for serving up climate denial to users during the most critical moment of decision-making in human history. The algorithm doesn’t care about atmospheric carbon concentrations. It cares about engagement. Turns out, doubt and delay are very engaging.
Or take democratic backsliding. Surveillance capitalism meets the surveillance state. We thought these were separate problems. One about corporate overreach, one about authoritarian control. What we’re seeing is their convergence. Facial recognition systems, social credit schemes, predictive policing tools; these technologies are being developed in one context and exported to another, creating a global market for digital authoritarianism. And platforms find themselves caught between market access in autocratic regimes and human rights commitments that look increasingly quaint when billions in revenue are on the table.
Then there’s inequality. Here’s a fact that should make us pause. Several Big Tech companies now have market capitalizations that exceed the GDP of most countries on this planet. Class divides litter this terrain. Platform workers versus platform owners, sure, but also data producers in the Global South versus data accumulators in the Global North. It’s extraction, just like the older forms of colonialism, except this time what’s being extracted isn’t rubber or minerals. It’s attention, behavioural data, the raw material of prediction and control.
These crises don’t sit neatly in separate boxes. They bleed into each other, amplify each other, cascade through the digital infrastructure that now mediates so much of social life. Climate refugees become targets of algorithmically amplified xenophobia. Economic precarity breeds the conditions for authoritarian appeals. The surveillance apparatus built to track dissidents gets repurposed for tracking workers, for union-busting, for maintaining the class order.
Now, some of you might be thinking, “but we have institutions for this. UNESCO, the UN, multilateral frameworks built precisely to manage global challenges.” And you’d be right. We had those. Past tense is doing some heavy lifting in that sentence.
The multilateral moment we’re living through is not the one we imagined. What’s replaced it isn’t an alternative consensus. It’s fragmentation. Europe pursues GDPR and the Digital Services Act. China builds PIPL and its own model of cyber-sovereignty. The United States seeks to hinder formal regulation. This isn’t convergence. It’s a divergence driven by geopolitical competition masquerading as principled disagreement over values.
Here’s what gets lost in this scramble. Across the world working people get the worst of every outcome. Services fragment. Small businesses can’t navigate fifty different regulatory regimes. Gig workers in Jakarta and Johannesburg face the same algorithmic management systems as those in London and Los Angeles, but without the protections that even inadequate Northern regulations might provide.
There’s elite consensus, mind you, on certain kinds of regulation. Protecting intellectual property? Yes. Preserving market positions of incumbent players? Absolutely. But worker surveillance, gig exploitation, algorithmic wage theft? That gets less urgent attention somehow. Funny how that works.
What we’re losing in this moment isn’t just coordination on platform policy. We’re losing the possibility of global digital public goods. We’re losing harmonized standards that could actually constrain platform power rather than just rearrange it. We’re losing a shared vision of technology for peace rather than domination. The UN’s convening power, UNESCO’s legitimacy, these things do matter, but they matter less when major powers have decided unilateral advantage beats multilateral cooperation.
So why aren’t current regulatory approaches working? You can point to the usual suspects like regulatory capture, enforcement gaps, the revolving door between Big Tech and agencies. All true. But I think the failure runs deeper.
First, regulation is too slow for platform speed. Laws take years to draft, debate, and implement. Platform features change in weeks. Business models pivot faster than legislatures can convene hearings. By the time a regulation passes, the company has already moved to the next extractive frontier. We’re regulating yesterday’s problems with yesterday’s tools while tomorrow’s harms are being encoded into systems right now.
Second, national regulation can’t match global platforms. Content legal in one jurisdiction is illegal in another. So what do platforms do? They either censor for the most restrictive market (which means authoritarian states get veto power over global speech) or they geoblock and fragment the internet, which defeats the whole premise of global connection. Either way, we lose.
Third, we’re focused on symptoms instead of structures. Content moderation debates miss the point entirely. We’re arguing about what appears in the feed while ignoring why the algorithm amplifies certain content in the first place, why business models are predicated upon engagement. We treat platform harms like product defects, bugs to be fixed, rather than predictable outcomes of attention economies designed to maximize engagement regardless of social cost. You can’t moderate your way out of a business model built on affective polarization.
And fourth, let’s not be naïve, regulation is too often captured by those being regulated. The fines are big enough to generate headlines, small enough to be a cost of doing business. The reforms are theatrical.
Here’s the bind. We can’t coordinate globally because multilateralism is weakened. We can’t regulate nationally because platforms are transnational. The result is regulatory arbitrage, forum shopping, a race to the bottom where platforms play jurisdictions against each other. And capital, as it always does, flows to wherever it faces the least friction.
So what do we do? What would platform governance look like if we actually took the polycrisis seriously? Let me suggest five principles, not as a complete answer but as a starting point for the conversation we need to have.
First, think structural, not symptomatic. Stop regulating content. Regulate the business model. If the problem is algorithmic amplification of rage and conspiracy, address the amplification, not just the conspiracy theory du jour. If the problem is monopoly power, mandate interoperability so users can leave without losing their social graph. Treat platforms like infrastructure, because that’s what they are.
Second, think participatory, not paternalistic. The multi-stakeholder model was supposed to include civil society, but let’s be honest about who had real power in those rooms. We need governance structures where workers hold decision-making authority. Platform cooperatives. Public options. Democratic ownership of digital infrastructure. Not as utopian fantasy but as practical alternative.
Third, think adaptive, not static. Build regulatory capacity for real-time oversight. Algorithmic audits conducted by independent researchers with actual access to systems. Sandboxes with teeth. Continuous review mechanisms instead of waiting a decade between reforms while the platforms run circles around us.
Fourth, think subsidiarity with solidarity. Local authority over local issues when it comes to cultural norms and language because context matters for content decisions. But global coordination on global challenges. Climate disinformation, child exploitation, election interference, these require cooperation, not fifty different national approaches. Which means we need to rebuild multilateral institutions fit for purpose. Not 1945 frameworks pretending they can handle 2025 problems.
Fifth, think peace as a design principle. If we’re serious about social media for peace, we need to evaluate platforms on whether they contribute to conflict or cohesion. Peace impact assessments alongside privacy impact assessments. Systems that elevate bridge-building voices instead of rewarding engagement maximization at any cost. This isn’t soft. It’s practical. Societies can’t function when their communication infrastructure is designed to tear them apart.
We can’t achieve peace on platforms designed to commodify user data harvested through raw engagement, with affective polarization as the secondary incentive that keeps the data flowing. Full stop. And we can’t achieve peace when platform governance is itself a site of geopolitical conflict, when the infrastructure of communication becomes weaponized.
But here’s what I want to leave you with. The digital public square can be a space for understanding across difference. Or it can be an accelerant of every crisis we face. Which future we get isn’t inevitable. It’s a choice. And the time to choose is now, in this room, in this moment of fracture, before the fragments become permanent.

