Risk-Weighted Compute Permits Under Imperfect Monitoring: Enforcement Design and an EU-Implementable Blueprint
Mentor: Joel Christoph
Project area: Technical AI governance, compute governance, enforcement design, mechanism design, economics of frontier AI oversight
Project Language
English.
Minimum Time Commitment
8 hours per week.
Project Abstract
This project develops and stress-tests a concrete governance instrument for frontier AI: risk-weighted tradable compute permits with enforceable compliance under imperfect monitoring. The core idea is to regulate training-relevant compute as a scarce, auditable input while allowing trade to reduce compliance cost and improve feasibility. “Risk-weighted” means the permits required per unit of compute depend on verifiable risk indicators, especially evaluation outcomes, so higher-risk training runs face tighter effective caps and higher marginal cost.
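For concreteness, one minimal way to write the risk weighting down (our notation, offered as an illustration rather than a design commitment): a developer i whose training run uses compute c_i and carries verifiable risk score r_i must surrender q_i permits, with

\[
  q_i = w(r_i)\, c_i, \qquad w'(r) > 0, \qquad \sum_i q_i \le \bar{Q},
\]

where w is the regulator's risk-weighting rule and \bar{Q} the aggregate cap, so the effective cap on high-risk runs tightens automatically as evaluation outcomes worsen.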
The project has three workstreams.
(1) Formal model: a regulator sets an aggregate cap, a risk-weighting rule, and an enforcement policy; developers choose compute use, reporting, and evasion effort under imperfect monitoring. We derive conditions under which truthful reporting is implementable and under-reporting or hidden training is deterred (an illustrative deterrence condition appears after this list).
(2) Minimal simulation: we implement a lightweight Monte Carlo or agent-based simulation comparing enforcement regimes such as random audits, risk-based targeting, convex penalties, and escalation rules, documenting tradeoffs between compliance, expected harm reduction, and administrative burden (a minimal code sketch follows this list).
(3) Policy translation: we convert the design into an EU-relevant blueprint specifying institutional roles, evidentiary standards, audit triggers, and interfaces with evaluations and incident reporting.
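To fix ideas for workstream (1), the kind of implementability condition we have in mind is a Becker-style deterrence inequality (illustrative; the notation is ours): with true compute c, report \hat{c}, audit or detection probability p(\hat{c}, r), penalty schedule F, permit price \tau, and risk weight w(r), under-reporting is deterred whenever the expected penalty exceeds the permit-cost saving,

\[
  p(\hat{c}, r)\, F(c - \hat{c}) \;\ge\; \tau\, w(r)\,(c - \hat{c}) \qquad \text{for all } \hat{c} < c.
\]

Convex penalties and risk-based audit targeting both relax the audit budget needed to satisfy this inequality, which is the tradeoff workstream (2) quantifies.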
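For workstream (2), a minimal Monte Carlo sketch in Python of the kind of comparison the simulation would run, assuming a quadratic penalty, a unit permit price, a risk weight w(r) = 1 + r, and developers who best-respond to the audit rule; every parameter below is a placeholder, not a calibrated value:

# Minimal Monte Carlo sketch comparing audit regimes (illustrative only).
# All functional forms and parameter values are assumptions made for this
# sketch, not outputs of the project's formal model.
import numpy as np

rng = np.random.default_rng(0)

N = 1_000                    # number of developers
AUDIT_BUDGET = 0.10          # average audit probability per period
PENALTY_SCALE = 5.0          # scale of the convex (quadratic) penalty
compute = rng.lognormal(mean=1.0, sigma=0.5, size=N)  # true compute use
risk = rng.uniform(0.0, 1.0, size=N)                  # verifiable risk score
weight = 1.0 + risk          # assumed risk-weighting rule w(r) = 1 + r

def expected_cost(evasion, audit_prob):
    """Expected cost of an evasion fraction: expected convex penalty
    minus the permit-cost saving (permit price normalized to 1)."""
    shortfall = compute * evasion
    return audit_prob * PENALTY_SCALE * shortfall**2 - weight * shortfall

def simulate(regime, n_rounds=500):
    """Return total chosen under-reporting and average detected per round."""
    if regime == "random":
        audit_prob = np.full(N, AUDIT_BUDGET)
    else:  # "risk_targeted": same average budget, tilted toward high risk
        audit_prob = AUDIT_BUDGET * weight / weight.mean()
    # Each developer best-responds over a coarse grid of evasion fractions.
    grid = np.linspace(0.0, 0.5, 26)
    costs = np.stack([expected_cost(e, audit_prob) for e in grid])
    best = grid[np.argmin(costs, axis=0)]
    detected = 0.0
    for _ in range(n_rounds):
        audited = rng.random(N) < audit_prob
        detected += (compute * best * audited).sum()
    return (compute * best).sum(), detected / n_rounds

for regime in ("random", "risk_targeted"):
    evaded, caught = simulate(regime)
    print(f"{regime:13s}  under-reported compute {evaded:8.1f}  "
          f"detected per round {caught:7.2f}")

Holding the audit budget fixed, the two regimes differ both in how much under-reporting developers choose and in how much is detected, which is the compliance-versus-administrative-burden comparison the full simulation would document across the richer regimes listed above.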
Expected outputs by the end of the main phase are:
(a) a public preprint with a clear proposition spine and robustness checks,
(b) an open-source repository with simulation code and reproducible figures, and
(c) a 6–10 page policy brief describing an EU-implementable pathway.
If the project extends beyond the main phase, we will polish the outputs for submission and incorporate stakeholder feedback.
Theory of Change
Compute is a scarce, auditable input to frontier AI, which makes it a natural anchor for enforceable governance; yet proposals to cap or license compute rarely specify how they survive under-reporting and hidden training runs. This project targets that gap in three ways.
Feasibility: the formal model states explicit conditions under which risk-weighted permits are enforceable when monitoring is imperfect, turning an often-proposed idea into a design whose assumptions can be checked.
Evidence: the simulation replaces intuitions about enforcement with quantified tradeoffs, showing how audit regimes, penalty schedules, and escalation rules trade off compliance, expected harm reduction, and administrative burden.
Adoption: the EU blueprint lowers the cost of uptake by specifying institutional roles, evidentiary standards, audit triggers, and interfaces with existing evaluation and incident-reporting processes.
If the design survives this stress-testing, regulators gain an instrument that prices risk at the margin and remains credible precisely where current proposals are weakest: when developers can misreport or conceal compute use.
Desired Mentee Background
Computer Science/ML, Maths, Economics, Law, International Relations, Political Science
Desired Mentee Level of Education
Masters and above.
Other Mentee Requirements
Strong quantitative reasoning and reliability.
Ability to write clear English.
Comfort reading formal models and translating them into precise prose.
At least one of:
(a) game theory or mechanism design exposure,
(b) strong quantitative microeconomics, or
(c) strong Python ability for simulations.
Consistent weekly progress is mandatory.