How 'Platform Democracy' or 'AI Democracy' might interact with existing institutions
Can deliberative democratic processes commissioned by corporations interact helpfully with nation-state, multilateral, and multistakeholder decision-making?
One of the questions that often comes up when talking about platform democracy is how it relates to and fits into our existing institutions and processes. I initially wrote the following piece in March 2023 as a chapter for a publication exploring global views on how “public interests and democratic values are taken into account in the rule-making processes of platforms”. You can find the original here. I'm sharing it in full on this page (with permission from the publishers), as it helps answer some of the questions I've been getting from people following this work.
This piece uses the platform frame, but all of the content fully applies to the non-state democratic governance of AI systems, à la ‘AI democracy’. It focuses on interoperability with respect to established systems and institutions—a sort of interoperability with the status quo. Future pieces will explore interoperability within the ecosystem of democratic and deliberative innovation.
Readers who are already familiar with platform democracy might want to skip the section entitled “What is platform democracy?”.
Interoperable Platform Democracy
Is there a world where corporations not only run democratic processes for their decision-making—but where such processes are actually a good thing? A world where important and controversial choices facing corporate platforms and AI organizations are decided not by leadership fiat but by a truly representative deliberation (largely outside of government)—and where this is not just ‘democracy washing’?
This piece explores what that world might look like, and how such democratic processes—potentially commissioned by corporations—might beneficially interoperate with our existing institutions of national, transnational, and global governance (hereafter referred to simply as institutions).
Such questions are particularly salient given both Meta’s concrete actions to use such processes (initially to develop greenfield policies in the Metaverse), and AI leaders’ exhortations to "align their interests to that of humanity"—where such processes might be particularly applicable.
There were several key guiding questions that led to the approach outlined below:
Who should actually be in charge?
Is it possible to govern tech in a way that moves power to the people being impacted—and away from both corporate leadership and oppressive governments?
What are pragmatic approaches we can try today to rapidly improve the governance of transnational technologies—in a world where such international coordination seems increasingly difficult?
What is platform democracy?
Platform democracy: Governance of the people, by the people, for the people—except within the context of an internet platform (e.g. Facebook, YouTube, or TikTok) instead of a physical nation.
More formally, platform democracy refers to the use of democratic processes to include the populations impacted by a platform in the governance of that platform, in a representative fashion.
In particular, two approaches to such platform democracy are considered here: intensive deliberative democratic platform assembly processes for complex decisions, and lighter-weight collective dialogue processes for decisions that need less context. In both cases, an organization (such as a platform) needs to make a decision that would benefit from democratic legitimacy.
Such questions might include:
What if anything should be done about content that is not strictly false, but which is meant to be misleading?
Under what conditions, if any, should audio or video be recorded in online spaces in order to identify potential harassment, and if so, who should have access to such recordings?
What kinds of content, if any, should not be shown as ‘trending’?
What kinds of outputs are acceptable from generative AI systems?
None of these are theoretical. Meta has already directly explored a version of the first two of these questions through such processes; Twitter would likely have asked the third question had there not been an acquisition, and OpenAI’s CEO has described the fourth as a question for which he would like global democratic input.
How are these questions then answered such that the processes are "democratic"?
A microcosm of the impacted population (e.g., the user base, or the populations of the countries the organization operates in) is convened and facilitated by a neutral third party, such that everyone impacted by the decision has roughly the same opportunity to be selected (through sortition: stratified random sampling). The selected people make the ultimate recommendation to the decision-maker—and unlike a poll, they are given the opportunity to learn from each other's perspectives (and, for decisions that involve significant tradeoffs or context, from stakeholders and experts). These ‘deliberators’ are paid for their time, and ideally for child care, elder care, travel, etc., to reduce self-selection bias.
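To make the sortition step concrete, here is a minimal illustrative sketch of stratified random sampling—the stratum names, shares, quota rule, and candidate pool are all hypothetical, and real assemblies stratify across several dimensions at once (age, region, gender, etc.) with more careful quota balancing:

```python
import random
from collections import defaultdict

def sortition_sample(candidates, strata_shares, assembly_size, seed=None):
    """Draw an assembly whose strata roughly match population shares.

    candidates: list of (person_id, stratum) pairs from the volunteer pool.
    strata_shares: dict mapping stratum -> that stratum's share of the
        impacted population.
    assembly_size: number of deliberators to select.
    """
    rng = random.Random(seed)

    # Group the volunteer pool by stratum.
    pools = defaultdict(list)
    for person, stratum in candidates:
        pools[stratum].append(person)

    assembly = []
    for stratum, share in strata_shares.items():
        # Simplified quota: rounding may not sum exactly to assembly_size
        # in general; real processes balance quotas more carefully.
        quota = round(share * assembly_size)
        pool = pools[stratum]
        # Within each stratum, every candidate has an equal chance.
        assembly.extend(rng.sample(pool, min(quota, len(pool))))
    return assembly

# Hypothetical example: a 10-person panel stratified by region,
# where 60% of the impacted population lives in the "north".
candidates = [(f"person-{i}", "north" if i < 60 else "south")
              for i in range(100)]
panel = sortition_sample(candidates, {"north": 0.6, "south": 0.4}, 10, seed=42)
print(len(panel))  # 10
```

The key property this sketch captures is the one described above: selection is random within each stratum, so no individual—or the platform—can engineer the panel's composition, while the panel still mirrors the impacted population.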
Why are these processes legitimate?
The potential democratic legitimacy of such processes comes from their representative nature—instead of every person voting after spending only a small amount of time on the issue, a much smaller number of people vote, but each is supported with the time and resources to make the best possible decision (without the often perverse incentives of electoral politics or corporate profit). Such processes are not just a techno-optimistic dream; they are already being used by governments around the world. Moreover, as any individual has only a small chance of being selected, it is far more feasible to imagine such processes working across many platforms, even globally, than an electoral representation system.
Can deliberative processes work across many languages and cultures?
Such deliberative democratic processes have now been run across many languages a number of times, including several spanning the EU. Admittedly, interlingual and intercultural deliberation is still imperfect, but there are both process approaches and tools that can help mitigate the risks, and ongoing experimentation to develop best practices around the most challenging aspects (e.g., subtle differences in word connotations across languages).
What happens after the process is complete?
As a slight generalization, when governments run such deliberative processes, the outputs usually serve as recommendations, which must either be implemented or receive a response from the government explaining why the recommendation is not being followed. The same can apply when the commissioning organizations are companies like Meta or OpenAI, though it is also likely possible to make the results binding.
Interoperability with existing institutions
Platform democracy does not exist in isolation—it should be structured to support existing institutions instead of fighting them. There are several places where this can happen. To contextualize the options for interoperability with existing institutions, it’s useful to understand the different organizations potentially involved in such a process.
First, there must be a commissioning organization, e.g. Meta, Twitter, Google, or OpenAI. This could also be a combination of organizations, or even organizations and governments together.
They commission the deliberation with "deliberative infrastructure providers"—organizations that run these sorts of processes as neutral third parties for governments (and now companies) around the world. The deliberative infrastructure providers also select the members of the deliberative body using sortition and facilitate the assembly itself.
These deliberative infrastructure providers may work with existing expert and stakeholder bodies to provide context for the deliberators, or create a new temporary advisory or stakeholder body to help support the deliberation.
Finally, the deliberation members learn from those stakeholders, experts, and each other in order to make the final recommendation.
Impacts of platform democracy outputs
The most obvious touchpoint where such a process interacts with the broader world beyond the commissioning organization relates to the impacts of the process outputs.
Let's not beat around the bush here—platform democracy, in its most limited form, can be considered a form of self-regulation. However, it differs from most forms of self-regulation in that the power to create the mandate is not directly in the hands of the platform. It is instead given to people chosen at random, without any incentive to accede to the platform’s wishes, and facilitated by a third-party deliberation organization. (A rigorous process aiming for strong legitimacy would also use as impartial a method as possible for choosing experts and stakeholders.)
Moreover, even if the recommendation is not binding, the legitimacy of the mandate created by such a representative democratic process makes this kind of self-regulation rather awkward for a company to ignore. In plainer words, it looks very bad to the media (and the governments that follow it) if an organization convened a process to have the people tell it what they want in a democratic fashion—and the organization ignored the outputs.
Perhaps even more exciting, a sufficiently transparent and high-profile process not only educates the members of the deliberation themselves, but allows the broader public, media, and even regulators to follow along through broadcast and social media, enabling learning from experts and stakeholders alongside the members. This can potentially help elevate the overall level of conversation on issues with complex trade-offs more effectively than hearings used to score political points, and also help the public see itself through its reflection in the deliberative microcosm.
Raising the responsibility baseline
In fact, recommendations that come out of a process seen as broadly legitimate are likely to affect not only the organization convening it, but also any other organization facing similar questions, as well as advocacy and interest groups related to the question (assuming it is not too specific to the convening organization). If the question is, for example, around potential responsibility actions, this can help create a corresponding responsibility baseline—a minimal level of action that is seen to be broadly acceptable, which may be higher than the current industry default, raising pressure to implement responsibility practices across the board.
Even if the responsibility baseline is lowered, that is potentially indicative that the impacted population does not actually believe that level of ‘responsibility’ is warranted (for example, one can imagine a deliberative process determining that there should actually be less content moderation around a particular issue—which would be a good thing to know).
Creating a responsibility ‘north star’
Some processes may not change the baseline, but may instead create a north star—responsibility practices that might be too difficult to fully execute on, but can be aspired to and approximated. Such north stars may also exert pressure on the entire industry of the commissioning organization.
Identifying global ‘moral high ground’
For some issues, the challenge is not around the ideal north star, or the minimal baseline for responsibility. Instead, there might be deeply competing notions of what responsibility even is. For example, some organizations developing powerful AI systems say that the responsible thing to do is to share as much as possible—maximizing openness. Others are extremely cautious and barely release any information about their research. Both sides say that they are acting for the good of humanity—in other words that they have the ‘moral high ground’. Both argue their perspective to the public and regulators with the intent of shaping perception and law. Similar differences in approach occur in many domains, including in tradeoffs between privacy and security.
A rigorous global deliberative process can create something closer to an idealized public sphere to actually identify what ‘humanity’1 believes is the moral high ground (that such companies should be aiming for). There are thus potentially strong incentives for organizations that believe that they are closer to the ‘true moral high ground of humanity’ to convene such processes, in order to have their approach validated (assuming that they are correct).
Regulatory and institutional suggestions
Such responsibility baselines, north stars, and moral high grounds may then directly impact the actions of legislators, regulators, standards bodies, multilateral bodies, multi-stakeholder bodies, trade associations, etc., in ways that may be binding. In other words, the commissioning organization is essentially fronting the cost of deep research and input gathering that can then directly feed into these existing processes, some of which may have more binding force. Concretely, this might look like, for example, the UK government, the EU, UNESCO, or the Partnership on AI developing recommendations (or, for governments, even laws) directly based on and referencing those deliberative outputs. This could be true even if the deliberative process was originally convened by Meta or OpenAI—assuming that the process was seen as rigorously impartial and democratic.
There is the option that the convening organization can pre-commit to making an output binding (when otherwise legal), using the legal infrastructure of the jurisdiction(s) they are operating in. There are likely a number of legal instruments that can be used to do this depending on the relevant jurisdiction (e.g. a golden share arrangement).
Conflict with platform democracy outputs
There are some common questions about how this might play out in practice:
What happens when there is conflict with existing law or regulation?
In situations where the outputs of deliberations conflict with existing laws or regulations, the situation is roughly analogous to when a company’s strong ideological stance conflicts with that of a government. In some cases, this may be seen as good, e.g. when a company avoids sharing location information about democracy activists, thus violating the laws of an authoritarian country. In other cases it may be seen as problematic, e.g. when a ride-sharing company ignores local safety regulations. Either way, if organizations do not follow the laws of the nations they are based in, they face the consequences. The main difference is that if the legitimacy of the process used to create the deliberative outputs is higher than that of the government’s own processes (for example, in an authoritarian or extremely partisan context), then there may be significant pressure, both external and internal, pushing for the more democratic outcome.
Could the governments or regulators themselves actually be involved in the process?
Definitely, though of course this can become more challenging with more global processes (and thus more governments). It's also worth noting that one of the benefits of the platform itself running a process is that the process can be specific to features that only that platform has, and it may not be worth the time of government officials to be involved with every platform in such a manner. That said, especially for processes that involve multiple platforms or industry consortia, governments may want to act as co-convenors, and platforms may also want that in order to increase the legitimacy of the outcomes.
Could there be permanent deliberative bodies?
There are many potential models beyond the simple temporary platform assembly or collective dialogue, including institutionalized permanent models built on approaches such as multibody sortition, the Ostbelgien model, and the Paris model, which could directly interact with existing institutions in much more sophisticated ways. It feels somewhat presumptuous to explore this in the context of platforms and companies without more understanding and exploration of the basic model, but it is important to know that, as such processes are refined and combined, it may be possible to give them key decisive power over an entire company. One could even imagine augmenting or replacing a traditional corporate board structure with carefully designed deliberative bodies in order to truly enable democratic governance, with no higher executive or board-level power (though feasibility might depend on the jurisdiction).
What happens if there are multiple representative deliberations with conflicting outcomes, perhaps even some run by the governments themselves?
There is no clear answer to this as this entire regime is too nascent. It is perhaps roughly analogous to having multiple treaties or non-binding agreements that are in conflict in a multilateral context. The ideal is likely that the process that is most rigorous and thus most legitimately democratic wins out—but there are many potential interpretations of rigorous, legitimate, and democratic, and no clear arbiter. This suggests that it is particularly important to create international standards for such processes in order to ensure consistent evaluation.
More generally, any time there are multiple competing decision-makers, potentially of varying quality, and no official hierarchy, there is bound to be tension (ideally productive tension), and there is value in creating institutions to navigate those tensions.
Inputs to platform democracy
Beyond simply interoperating with other organizations through the outputs, democratic process inputs can also interact with existing institutions and organizations at other stages of the process.
The commissioning organization could actually be a joint body involving a partnership of a platform (or platforms) with one or more governments, multilateral institutions, etc. The commissioning organization could itself be an existing multi-constituency body such as the Digital Trust and Safety Partnership.
The expert and stakeholder body could also be an existing multi-stakeholder body such as the Partnership on AI.
Governments could help support the actual process of sortition selection if they already have ‘sortition infrastructure’ (as Mongolia does, as illustrated by the incredible turnout for its deliberative democratic process on constitutional amendments).
Why we might want platform democracy
I might prefer a world where purely public institutions fully governed our technological developments and had kept up with the rate of technological change—change that respects no borders. But we have not evolved our existing governance institutions to take on the challenge of legislating at the speed of technology, and that is unlikely to change very quickly.
The realist question we thus face is:
"How can we practically govern an onslaught of technological disruption—and what are the consequences if we fail to do so?"
This is not theoretical—platforms like Facebook, YouTube, and TikTok have shaped society through their policies, but even more impactfully, they have shaped the incentives of society through their ranking systems. These ranking systems determine what kinds of politicians, journalists, or entertainers succeed and shape the kinds of content they produce. Our existing governance institutions, over a decade after this became clear, have done very little to improve the impact of such systems on society outside of the narrow scopes of personalization and privacy.
We can and must do better, both to tackle these belated issues and to meet the emerging governance challenges around new technologies. This is especially salient for advances in AI, such as foundation models like GPT-4 and products like ChatGPT built on top of them, which are likely to rapidly transform our lives. Perhaps deliberative democracy can help us find a way forward. Given a steady rhythm of convened processes, decisions might be made democratically, even at global scale, within months instead of years or decades.
Platform democracy alone cannot solve our problems, but it perhaps provides a useful new governance option between our status quo of platform autocracy and platform chaos.
1. With the caveat that humanity is not a singular entity—and ideally, processes are convened as locally as possible, following the principle of subsidiarity. Thus only issues that significantly affect everyone should merit global processes.