Reimagining Democracy for AI (in the "Journal of Democracy")
How can our democratic capacity keep up with AI advances?
How can we effectively integrate democracy into AI, and AI into democracy?
What would a democratic future of AI governance and alignment look like?
How should we regulate AI (or not)? What should we align AI systems to?
I was invited to write for a ‘symposium issue’ of the Journal of Democracy focused on AI, and I aimed to sketch answers to some of those questions. That paper, titled “Reimagining Democracy for AI,” was recently published, and you can read it in full here (though this link may be better for sharing, and this is the official paper link).
The abstract:
AI advances are shattering assumptions that both our democracies and our international order rely on. Reinventing our "democratic infrastructure" is thus critically necessary—and possible. Four interconnected and accelerating democratic paradigm shifts illustrate the potential: representative deliberations, AI augmentation, democracy-as-a-service, and platform democracy. Such innovations provide a viable path toward not just reimagining traditional democracies, but also enabling the transnational and even global democratic processes critical for addressing the broader challenges posed by destabilizing AI advances—including those relating to AI alignment and global agreements. We can and must rapidly invest in such democratic innovation if we are to ensure that our democratic capacity increases with our power.
The paper aims to summarize and contextualize many of the approaches discussed in earlier pieces (and in other outputs), but focused entirely on AI and written for a general educated audience.
Below I’ll provide a few key excerpts.
Motivation
If we continue on our current course, advances in AI may take us down one of two possible paths toward a dystopian future: that of autocratic centralization, where powerful corporations or authoritarian countries unilaterally control extraordinarily powerful AI systems, or of ungovernable decentralization, where everyone has unrestricted access to those incredibly powerful systems and, because there are no guardrails, can use them to cause massive, irreversible harm.
I advocate a third path—that of combined democratic centralization and democratic decentralization—and accelerating investment in the democratic infrastructure needed to make such a path viable.
The core message is that we don’t need to settle for autocratic or ungovernable technology. Improved democratic mechanisms provide a third path.
The meat of the paper goes into the key ingredients: representative deliberations (like those discussed here; more detail), AI augmentation (e.g., in collective response systems and consensus language models), democracy-as-a-service (e.g., deliberation infrastructure providers), and platform/AI democracy. As these have mostly already been discussed in this series, I won't excerpt them here. But what I have focused less on in my public writing is how these apply to AI alignment and AI governance.
Applications for “alignment” (heavily simplified to explain to non-experts given the very limited space)
"AI democracy" built upon augmented representative deliberations can help in developing the principles for aligning AI systems—that is, ensuring that an AI system operates according to a set of principles. […] While some of the decisions about such principles may be delegated to the direct user of an AI system, there will always be some base set of values that is encoded by default—and which may be required in order to limit severely harmful activity. Currently it is primarily the AI companies themselves […] that are deciding what generative and general-purpose AI systems should align to.
Unfortunately, if unsurprisingly, differences of perspective around such values appear to be exacerbating mistrust and geopolitical risk, as AI organizations and governments with differing values race to ensure that the most powerful systems are aligned with their values—and one casualty of this race is likely to be critical guardrails. Representative deliberation can help to address these challenges by providing a broadly acceptable mechanism for navigating across those competing values, democracy-as-a-service enables corporations to convene such deliberations while staying at arm's-length, and AI-augmentation may even enable such processes to be feasible globally.
Applications for “transnational AI governance” or “global agreements”
To further address these risks and challenges of powerful AI systems, we are likely to need some form of globally agreed-upon policies around the development, deployment, and distribution of such systems—for example, mandating that AI systems should be trained and aligned not to support the development of chemical and biological weapons.
This may sound straightforward, but it brings up a thorny issue related to open-source AI systems. Open-source systems reduce centralized corporate control of AI and make research easier. Unfortunately, it might not be possible to prevent people from "retraining" an open-source AI system to overcome its alignment guardrails—this has already been done with some of the most powerful open-source models. And it is impossible to "unrelease" an open system once it has been shared publicly, which means that a single actor could irreversibly impact the entire planet. Some argue that if the risks of such open releases are significant enough, we might need a global prohibition on the development or open distribution of certain types of AI systems.
There is currently significant disagreement about how to navigate such dilemmas, and meaningful consensus is exceedingly difficult to achieve, due to challenges including the speed of change; uncertainty and disagreement around the degree and direction of AI impacts; distrust among key actors; ease of replication; and the lack of a broadly trusted process for weighting conflicting ethical obligations. The same democratic innovations may be invaluable here also, providing a complement to more traditional geopolitical negotiations.
There is much more to say here, particularly about how this can be practically achieved given our existing economic and geopolitical incentives and institutions. What is increasingly clear is that “business as usual” will be insufficient—the approaches that have so far averted catastrophe from nuclear and biological weapons, and which have not averted catastrophe in climate, are unlikely to be capable of addressing the severe negative impacts of AI, given the rate of advances and extreme uncertainty.
Momentum and next steps:
In the last nine months, we have gone from having almost no recognition of the necessity to think about democratic innovations to seeing almost every major AI company begin to explore how best to incorporate aspects of deliberative democracy into their work. […]
There is incredible capacity and momentum in the democratic-innovation ecosystem, but the rate of AI advances is far faster. I have therefore been exploring the possibility of setting up a fund focused on democratic innovation to accelerate the design, testing, evaluation, and composition of such processes at increasing scale, working in partnership with civil society, academia, AI companies, and multistakeholder and multilateral organizations for implementation.
I would like to see governments around the world developing similar focused funds to ensure that we can rapidly build the capacity to run complex end-to-end processes for both alignment and policy. Corporations advancing AI should also signal their willingness to invest in democratic governance and alignment, with funds pre-allocated for running processes that can satisfy particular criteria, whether developed in-house or externally. This would create a market incentive for rapid investment in the development of implementable democratic processes.
Related to the above, I’m excited to be creating an organization with deep relevant expertise, both to: (1) advise corporations, governments, etc. on their funding and adoption of strategies for integrating AI and democracy, and (2) act itself as a grantmaker to accelerate targeted innovation and adoption in this space. We have a pre-launch landing page here, and are aiming to hire an operations director soon. I’ll announce major updates—including the launch—through this newsletter.
Conclusion:
It is a great gift that the same technology which is so destabilizing may also be harnessed to help overcome the problems it is creating. […]
There is a tremendous amount that we need to do right now to address present and significant risks and harms—but there is also little time to waste if we want to be ready to tackle the even more significant crises that are coming.
This paper only provides a sketch of what directions we might aim toward and how we might get there. I would be interested to hear what you or your organization find most valuable, confusing, or problematic in it—and what you would like more detail on in order to transform theory into practice. The full paper can be accessed here on my website (the original link here is now paywalled).
Please share this with people who might find it interesting—and tag me if you share on social media: I’m @metaviv, aviv@mastodon.online, and Aviv Ovadya on LinkedIn.
Stay in touch by following me on any of those platforms, reaching out at aviv@aviv.me, and of course, subscribing.