Concepts & artifacts for AI-augmented democratic innovation: Process Cards, Run Reports, and more
We need shared language and documentation practices to support understanding, adoption, interoperability, and evaluation of democratic processes.
(The following is an adapted excerpt from a working paper.)
I believe that we will need to rapidly increase our capacity to govern the impacts of technology—and that this will require democratic innovation of and with AI.
However, rapidly accelerating such democratic innovation requires an ecosystem of actors that can efficiently build on each other’s work. Moreover, while major actors and funders in the space are increasingly interested in applying such democratic innovations, current approaches often don’t take into account the internal structure of democratic processes—structure that is critical for addressing the challenges around competence, alignment, and robustness. Thus, to support democratic innovators, funders, and other key actors, we need a standard way to communicate about the structure and applicability of processes and their subprocesses—one that supports adoption, interoperability, and evaluation.
This piece thus introduces a set of concepts for describing democratic processes, and, at a high level, two key kinds of process documentation or “design artifacts” to support such communication. It is inspired by the work of both the democratic process community (with e.g., Participedia and PolicyKit), and the machine learning community (with e.g., model cards, data sheets, and reward reports).
Though I focus here on concepts and documentation, this is not just theoretical: applying these concepts and documentation practices at varying levels of fidelity has deeply influenced my work over the past few years, and especially over the past six months while advising AI companies and funders, including through the OpenAI Democratic Inputs grant program.
Processes at different levels of abstraction
[The next two sections focus on details of terminology; feel free to skip to ‘Key Artifacts’ if that isn’t your jam.]
There are several different concepts that one might be referring to when one talks about decision-making or information-gathering processes (whether they are focused on democratic outcomes, collective intelligence, alignment, etc.). Part of the confusion is that discussions often happen at different levels of abstraction.
The following is a rough taxonomy for distinguishing between those levels (though the lines may sometimes be blurry), going from the most concrete to the most abstract.
Process Run: An exercise, usually involving people, potentially mediated by machines, with inputs, outputs, and potentially additional state changes.
Examples: the 2020 presidential election; a Polis conversation on data-driven campaigning; a citizens’ assembly on assisted dying.
Corresponding computer science concept: Instance
Process Design: A detailed description of how to run a process, sufficient to enable a Process Run. This might be a plan for people to execute and/or code for a computer to run.1
Examples: presidential elections; Polis processes; a standardized run sheet for a specific style of five-day citizens’ assembly.
Relationships to other terms: Every Process Design may have many Process Runs.
Corresponding computer science concept: Class
Process Pattern: A more abstract specification of the interfaces between the process and the external world (and potentially other processes)—intuitively, the “shape of a process” or the “contract that it satisfies”.
Examples: elections; collective response systems; citizens’ assemblies.
Relationships to other terms: Many different Process Designs may satisfy the same Process Pattern, and a single Process Design may satisfy multiple Process Patterns.
Corresponding computer science concept: An interface or abstract class (in the Java sense)
In sum, one can think of a Process Run as a specific ‘event’ (e.g., the 2020 presidential election); a Process Design (e.g., presidential elections) as a detailed description of how to run such events for people and/or machines; and a Process Pattern as a way of specifying a family of Process Designs with many similar properties (e.g., elections). The sketch below illustrates the computer science analogy in code.
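To make that analogy concrete, here is a minimal TypeScript sketch. All of the names are invented for illustration; none refer to a real library or to any specific Polis implementation.

```typescript
// Illustrative sketch only: invented names, not a real API or library.
type Participant = { id: string };
type Statement = { text: string; agreement: number };

// Process Pattern ≈ interface: the contract that a family of designs satisfies.
interface CollectiveResponseSystem {
  run(prompt: string, participants: Participant[]): Promise<Statement[]>;
}

// Process Design ≈ class: a description detailed enough to actually execute.
class PolisStyleProcess implements CollectiveResponseSystem {
  async run(prompt: string, participants: Participant[]): Promise<Statement[]> {
    // ...collect statements, gather votes, cluster opinions, summarize agreement...
    return [];
  }
}

// Process Run ≈ instance/invocation: one concrete exercise with real inputs and outputs.
const exampleRun = new PolisStyleProcess().run(
  "How should data-driven campaigning be regulated?",
  [{ id: "participant-1" }],
);
```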
At each level of abstraction, these can contain one another (e.g., a process may involve many subprocesses) or feed into each other (e.g., a process may produce some output which is then used as the input to another process).
Process Properties
The next definition is somewhat self-explanatory, but it is still helpful to make explicit, if only to distinguish between its subtypes (below).
Process Property: A characteristic of a process. Some process properties are designed (e.g., the number of participants invited), while others must be measured (e.g., speed, resources consumed) or approximated via measurement (e.g., trust in a democratic process, quality of outputs). A rough code sketch of these subtypes follows the examples below.
Examples:
Designed: The output is a statement of up to length N. This will cost around Y dollars per represented person per year.
Measured: K participants dropped out.
Approximated: Trusted by X% of the participants according to a survey.
Relationships: Every Process Design, Process Pattern, and Process Run has many Process Properties with particular values.
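As a rough illustration, the three subtypes could be represented as a tagged union. This is a hypothetical sketch with invented names and placeholder values, not a proposed standard.

```typescript
// Hypothetical sketch: the three Process Property subtypes as a tagged union.
type ProcessProperty =
  | { kind: "designed"; name: string; value: string }   // fixed by the design
  | { kind: "measured"; name: string; value: number }   // observed directly
  | { kind: "approximated"; name: string; estimate: number; method: string }; // estimated indirectly

// Illustrative values mirroring the examples above (numbers are placeholders).
const outputLength: ProcessProperty = { kind: "designed", name: "maximum statement length", value: "N characters" };
const dropouts: ProcessProperty = { kind: "measured", name: "participant dropouts", value: 12 };
const trust: ProcessProperty = { kind: "approximated", name: "participant trust", estimate: 0.8, method: "post-run survey" };
```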
Key Artifacts: Process Cards and Run Reports
Finally, we move from theoretical concepts to more actionable terms. The following are roughly analogous to approaches to machine learning transparency and documentation that have become increasingly mainstream over the past several years, e.g., the aforementioned “model cards”. They enable practitioners to understand the properties, impacts, and tradeoffs of particular processes. They are not primarily intended for a general audience, but for the process designers, evaluators, procurers, executors, advisors, and communicators (those translating technical details to a general audience).
Run Report: A roughly standardized document providing details about a particular Process Run, including the results of any evaluations of that process.2
Relationships: There may be many Run Reports for the same Process Card.
Process Card: A roughly standardized document providing details about a process and its appropriateness for different goals, including summary results of evaluations (roughly analogous to a model card in machine learning).
Relationships: Complete Process Cards can document a Process Design, while Abstract Process Cards can document Process Patterns, leaving out some details. Much of a Process Card is defined in terms of Process Properties.
Process Benchmark: A specification of a (measured or approximated) Process Property (or set of properties) that can be compared across processes, through some standardized approach to evaluation.
Relationships: Process Benchmarks can be specified at any level of abstraction, from Process Runs to Process Designs to Process Patterns; this enables comparison across designs. Benchmarks can be referenced in Process Cards and Process Designs (see the sketch below).
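A minimal sketch of the idea, assuming a benchmark is a standardized scoring procedure over run-level data; the names and the trust threshold here are invented for illustration.

```typescript
// Hypothetical sketch: a benchmark as a standardized scoring procedure over run-level data,
// so the same Process Property can be compared across different processes and designs.
type RunData = { surveyTrustScores: number[] }; // e.g., 1-5 Likert responses

interface ProcessBenchmark {
  property: string;            // the measured or approximated Process Property
  score(run: RunData): number; // standardized evaluation procedure
}

const participantTrust: ProcessBenchmark = {
  property: "trust in the process (approximated via survey)",
  score: (run) =>
    run.surveyTrustScores.filter((s) => s >= 4).length / run.surveyTrustScores.length,
};
```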
What do Process Cards and Run Reports include?
[To fully describe this with examples is outside of the scope of this excerpt, but please reach out if you are interested in creating Process Cards and Run Reports, or helping develop a standard structure for them; that is an active work-in-progress. The following is a high-level overview.]
A Process Card should include information that helps identify whether a process (a Process Design or Process Pattern) is appropriate for a particular use. It should be general enough to be useful to anyone who might be interested in running that process, and it should not include topic-specific or run-specific information. It minimally includes: intended uses (including the challenges it can and cannot address), inputs, outputs, additional impacts, and relationships to other processes. If the process has been sufficiently evaluated, it would also include a summary of evaluation results (e.g., an analysis of biases, groupthink risks, quality measures, legitimacy measures, etc.).
A Run Report should include the specific details of a particular Process Run—one that has been run or will be run (leaving the more general process design to the Process Card): who was involved, how they were selected, what the specific inputs and outputs were, what the results of any evaluations were (including any relevant benchmarks), and so on.
Combined, Run Reports (for a process and all of its subprocesses), augmented by the corresponding Process Cards (and potentially references to domain-specific resources), should provide most of the information needed to replicate a Process Run. A rough sketch of possible fields for both documents follows below.
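As one possible shape for such documents, here is a hypothetical field sketch in TypeScript. The field names are assumptions on my part, not the work-in-progress standard mentioned above.

```typescript
// Hypothetical field sketch for Process Cards and Run Reports (not a published schema).
interface ProcessCard {
  name: string;
  level: "design" | "pattern";   // Complete vs. Abstract Process Card
  intendedUses: string[];        // including challenges it can and cannot address
  inputs: string[];
  outputs: string[];
  additionalImpacts: string[];
  relatedProcesses: string[];    // subprocesses and processes it composes with
  evaluationSummary?: string;    // e.g., biases, groupthink risks, quality, legitimacy
}

interface RunReport {
  processCard: string;           // reference to the corresponding Process Card
  participants: { count: number; selectionMethod: string };
  inputs: string[];
  outputs: string[];
  evaluationResults?: string[];  // including any relevant benchmark results
}
```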
Why care?
The overall goal of such documentation is to improve the ecosystem by making many critical tasks easier. Somewhat standardized Process Cards and Run Reports are intended to…
Help process innovators and executors:
Learn from each other’s similar processes.
Use each other’s processes and even combine efforts if appropriate, e.g., by enabling the identification of the best subprocesses for complex end-to-end processes (through clear descriptions of intended uses, inputs, outputs, and additional impacts).
Avoid replication of work and enable learning from others' challenges and solutions.
Choose similar questions and topics in order to facilitate comparisons of the results.
Help funders and advisors:
Identify organizations that should connect with each other to overcome challenges.
Route appropriate resources and expertise to innovators.
Help potential process adopters in corporations, government, philanthropy, civil society, etc.:
Identify which groups and processes are useful to connect with for specific goals.
See if existing processes already answer their questions.
Help researchers and other evaluators:
Clarify which evaluation approaches are being used across groups and processes, enabling comparison across them and reducing duplication of effort.
Compare outputs of processes across similar topics and questions.
Support the development of benchmarks that enable understanding of improvements (and regressions) in process quality.
From processes to structures
While Process Cards and Run Reports can help move us toward a world of interoperable design, composition, evaluation, and adoption of democratic processes, they are only a first step. True democratic integration requires more than just processes—it requires evolving institutional structures. Many early pseudo-democratic bodies around the world started out as processes: temporary bodies convened by a king or similar figure. Over the decades and centuries, those temporary bodies became institutionalized structures and took on the forms we know today—including the British Parliament.
Such structures can be thought of as yet another layer of abstraction, connecting processes together into resilient networks, gathering and transforming knowledge, decisions, and power across space and time. Such structures were (at least in some cases) ultimately described and defined in formal documents such as the United States Constitution. As the AI ecosystem develops more experience with innovative processes, or uses of processes, it can begin to explore analogous forms of institutionalization, perhaps documented through standard ‘Structure Sheets’ that designate how processes defined in Process Cards interact.
Shared language and documentation practices alone cannot create innovation, but they can accelerate an ecosystem of innovation and even foster accountability. We have seen this not only in machine learning, but in fields as disparate as nutrition and mechanical design. If we want to ensure that our democratic capacity can keep up with AI advances, we will need all of the innovation and accountability we can muster!
Thanks to Andrew Konya, Jessica Yu, Shannon Hong, Matthew Prewitt, and Tantum Collins for reading drafts of earlier versions.
Please share this with people who might find it interesting or useful—and tag me if you share on social media: I’m @metaviv, aviv@mastodon.online, and Aviv Ovadya on LinkedIn.
Stay in touch by following me on any of those platforms, reaching out at aviv@aviv.me, and subscribing.
Next issue I’ll be sharing a paper laying out a rough vision for the future of AI and Democracy.
1. In practice, for human-run processes, a specification is unlikely to be complete and assumes some tacit human knowledge that may be present in a training regimen. One of the challenges for scaling such processes is related to building capacity for human training or (semi-)automating subtasks that previously required people.
2. Some parts of a Run Report may also be created ahead of time, in the process of planning a Process Run.