Project Language
Minimum Time Commitment
10 hours per week.
Project Abstract
Democracy can be seen as an equilibrium, sustained by specific structural conditions. This project investigates how advances in AI could shift the conditions that sustain democratic governance, and aims to produce concrete forecasts for these dynamics.
We start by studying the "democracy equilibrium": which structural factors cause democracies to emerge or fade? This involves drawing on political science frameworks such as selectorate theory, as popularized by CGP Grey's "The Rules for Rulers" (https://www.youtube.com/watch?v=rStL7niR7gs), and on concrete historical case studies. The goal is to understand why we currently live in a democracy and to build a model that quantifies the stability of a democracy from measurable variables. We then analyze how AI might change those variables, for example through:
(1) reduced dependence of economic productivity on the broader population's labor and wealth,
(2) increased concentration of wealth,
(3) superpropaganda, where AI-powered social media platforms or their future analogs shape public discourse, or
(4) a stronger military-coercive imbalance.
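To make the kind of model we have in mind concrete, here is a minimal toy sketch in Python. All variable names, weights, and example values are hypothetical placeholders chosen for illustration; the actual model, its variables, and its calibration are what the project would develop.

```python
# Toy sketch of a democracy-stability index built from measurable variables.
# Every variable, weight, and value below is a hypothetical placeholder.

from dataclasses import dataclass


@dataclass
class StateVariables:
    labor_share_of_output: float  # 0..1: how much productivity depends on the broad population
    wealth_gini: float            # 0..1: concentration of wealth (higher = more concentrated)
    media_concentration: float    # 0..1: share of public discourse shaped by a few actors
    coercive_imbalance: float     # 0..1: rulers' coercive advantage over the population


def stability_score(s: StateVariables) -> float:
    """Higher is more stable; the linear form and weights are illustrative only."""
    return (
        0.4 * s.labor_share_of_output
        + 0.2 * (1 - s.wealth_gini)
        + 0.2 * (1 - s.media_concentration)
        + 0.2 * (1 - s.coercive_imbalance)
    )


# Illustrative comparison: an AI shift that lowers the labor share and raises
# wealth concentration, media concentration, and coercive imbalance.
baseline = StateVariables(0.7, 0.4, 0.3, 0.3)
post_ai = StateVariables(0.3, 0.6, 0.7, 0.6)

print(round(stability_score(baseline), 2))  # prints 0.68
print(round(stability_score(post_ai), 2))   # prints 0.34
```

Even this crude linear form shows the intended workflow: pick measurable proxies, map them to a stability estimate, and compare a pre-AI baseline against scenarios where AI shifts the inputs.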
Optionally, participants may
1) also try to derive recommendations for how democratic accountability can be preserved, and/or
2) work out how to communicate this risk effectively.
The project outputs are
(1) a research report and
(2) either a recorded talk, a video, or an accessible blog post communicating the key findings to a broader audience.
A lot of the work will go into deriving the general model, but we then want to apply it in more detail to at least one of the US, Europe, or Germany. I will also encourage mentees to get in touch with academics who research the history of democracy or related topics, because I think this is often a fast way to learn.
Theory of Change
Bad frameworks produce bad decisions. The question of machine moral status will increasingly affect AI development and governance. Currently, most people reasoning about it lack adequate conceptual tools. This matters for catastrophic risk in several ways.

Under-reaction: if AI systems develop welfare-relevant internal states and we lack frameworks to recognize this, we may create systems with misaligned interests while dismissing their signals as "mere computation." A system that experiences something like suffering under certain conditions, and whose operators dismiss this, is a system with reason to deceive.

Over-reaction: anthropomorphizing systems that lack morally relevant properties wastes attention and resources, and may constrain beneficial AI development without corresponding benefit.

Poor discourse: without shared conceptual foundations, public debate about AI consciousness polarizes between dismissive and credulous positions. Neither serves good governance.

The primer addresses these by training researchers and practitioners to reason carefully across multiple frameworks, recognize what each assumes, and navigate uncertainty without false confidence. The German focus (incorporating European philosophical traditions, piloting with German-speaking users) builds SAIGE's national infrastructure while contributing to the broader field.

Conceptual clarity is infrastructure. This project builds it.
Desired Mentee Background
Any field; success depends more on skills and resourcefulness than on a particular field of study.
Desired Mentee Level of Education
Any level.
Other Mentee Requirements
It helps if you have done something really cool, something impressive; it doesn't matter what.