Economic Decision-Making Under Deep Uncertainty About AI's Trajectory
Mentors: Pavel Kocourek, Wim Howson Creutzberg
Project area: Economics Theory / Game Theory
Project Language
English only.
Minimum Time Commitment
12 hours per week.
Project Abstract
The future of AI could unfold in very different ways. In one scenario, AI automates most cognitive work and the economy grows explosively. In another, AI brings steady but modest productivity gains, much like earlier waves of IT adoption. These futures have radically different implications for how much people should save, what they should invest in, and which skills will retain their value. Yet the question of how to make such decisions when you genuinely do not know which future is coming has received almost no formal attention.
On the modeling side, Trammell & Korinek (2023) lay out a useful taxonomy of transformative AI (TAI) growth scenarios, and other important contributions — Aghion, Jones & Jones (2018), Acemoglu (2024), Benzell & Ye (2024) — work out the economic consequences of specific AI futures. On the empirical side, Andrews & Farboodi (2025) study what financial markets currently believe about TAI. What is missing is the normative question: given genuine uncertainty over which scenario will materialize, how should a forward-looking decision-maker allocate resources? That is the gap this project aims to fill.
Mentees will build a tractable model in which an investor faces uncertainty over whether AI leads to moderate or explosive growth, and chooses how much to save and how to split wealth across assets — broad equity, AI-intensive capital, human-capital-linked claims, and a safe asset — whose payoffs depend on which future arrives. The project will study how optimal choices shift with the perceived likelihood of explosive TAI, risk aversion, and ambiguity aversion (discomfort with poorly defined probabilities). The approach combines analytical work on a stylized model with numerical illustrations. The intended outputs are a research blog post and a technical working paper, with potential for coauthorship on a subsequent publication.
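To give a concrete flavor of the kind of exercise involved, here is a minimal numerical sketch of a one-period version of the portfolio problem: two scenarios (moderate vs. explosive growth), three assets with scenario-dependent gross returns, CRRA utility, and a grid search over the share held in AI-intensive capital. All return numbers, the utility parameter, and the equal safe/equity split of the remainder are illustrative placeholders, not calibrated values, and the full project would instead work with an intertemporal model.

```python
import math

# Gross returns of each asset in each scenario (hypothetical placeholders):
#               safe   broad equity   AI-intensive capital
RETURNS = {
    "moderate":  (1.02, 1.05, 0.95),
    "explosive": (1.02, 1.20, 2.00),
}

def crra_utility(c, gamma):
    """CRRA utility of consumption/wealth; log utility at gamma = 1."""
    if gamma == 1.0:
        return math.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def expected_utility(weights, p_explosive, gamma=2.0):
    """Expected utility of a portfolio over the two scenarios."""
    eu = 0.0
    for scenario, prob in (("moderate", 1.0 - p_explosive),
                           ("explosive", p_explosive)):
        wealth = sum(w * r for w, r in zip(weights, RETURNS[scenario]))
        eu += prob * crra_utility(wealth, gamma)
    return eu

def best_ai_share(p_explosive, gamma=2.0, steps=100):
    """Grid search over the AI-capital share; the remainder is split
    equally between the safe asset and broad equity (a simplification)."""
    best_w, best_eu = 0.0, float("-inf")
    for i in range(steps + 1):
        w_ai = i / steps
        rest = (1.0 - w_ai) / 2.0
        eu = expected_utility((rest, rest, w_ai), p_explosive, gamma)
        if eu > best_eu:
            best_w, best_eu = w_ai, eu
    return best_w
```

Under these placeholder returns, the comparative static the project would formalize shows up directly: raising the perceived probability of the explosive scenario raises the optimal share in AI-intensive capital.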
Theory of Change
Catastrophic risks from transformative AI go beyond misalignment or misuse — they include the possibility that society is economically unprepared for the world TAI creates. If explosive growth materializes, wages, asset prices, and fiscal revenues could shift dramatically within years. Economically destabilized societies are worse at governing powerful technologies safely.
This project addresses that blind spot. Today, discussions about how to prepare economically for TAI are largely informal: people speculate about AI stocks or reskilling, but no rigorous framework connects these decisions to the range of plausible AI futures. This project will build one. By incorporating ambiguity aversion — discomfort with acting on probabilities you do not trust — it will identify strategies that are robust across scenarios, not just optimal under a single best guess. The framework extends naturally to a social planner allocating resources across safety research, education, and fiscal buffers under TAI uncertainty — a stepping stone toward the broader question of how governments should prepare for a technology whose trajectory is deeply uncertain.
By grounding these questions in formal economic theory, the project aims to give policymakers and the AI safety community practical tools for preparation rather than speculation.
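The ambiguity-aversion logic described above can be sketched in maxmin expected utility terms (Gilboa–Schmeidler): instead of committing to a single probability of explosive growth, the decision-maker entertains an interval of probabilities and ranks actions by their worst-case expected utility. The two stylized actions and their log-utility payoffs below are hypothetical, chosen only to show how widening the probability interval pushes the robust choice toward diversification.

```python
import math

# Terminal wealth under each action in each scenario (hypothetical placeholders):
PAYOFFS = {
    "all_in_ai":   {"moderate": 0.8, "explosive": 3.0},
    "diversified": {"moderate": 1.1, "explosive": 1.6},
}

def worst_case_eu(action, p_low, p_high):
    """Minimum expected log utility over priors p in [p_low, p_high],
    where p is the probability of the explosive scenario."""
    def eu(p):
        pay = PAYOFFS[action]
        return (1.0 - p) * math.log(pay["moderate"]) + p * math.log(pay["explosive"])
    # Expected utility is linear in p, so the minimum sits at an endpoint.
    return min(eu(p_low), eu(p_high))

def robust_choice(p_low, p_high):
    """Maxmin criterion: pick the action whose worst-case expected
    utility over the interval of priors is highest."""
    return max(PAYOFFS, key=lambda a: worst_case_eu(a, p_low, p_high))
```

With a wide interval such as [0.1, 0.9] the maxmin criterion favors the diversified action, while a narrow interval concentrated near certainty of explosive growth favors going all in, which is exactly the sense in which ambiguity aversion selects strategies robust across scenarios rather than optimal under one best guess.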
Desired Mentee Background
Maths, Economics, Finance.
Desired Mentee Level of Education
Master's and above.
Other Mentee Requirements
Familiarity with intertemporal optimization (e.g., consumption-saving problems, Euler equations) and basic probability theory is required. Exposure to decision theory under uncertainty (expected utility, risk aversion) is strongly preferred. Interest in or familiarity with the macroeconomics of AI or transformative technology is a plus but not strictly necessary. No programming is required, though the ability to implement numerical comparative statics in Python or MATLAB would be a significant bonus.