SAIGE Incubator: Spring 2026 Cohort | Project & Mentor Profiles
This national fellowship is designed to fast-track talented students, recent graduates and career professionals, including those new to AI safety, into impactful research, field-building or communication roles while strengthening the AI safety network across Germany. As a mentee, you can apply to any of four tracks:
Communication & Fieldbuilding
AI Governance & Policy
Convergence or Divergence? The Future of Frontier AI Capabilities and Implications for Catastrophic Risk
About Pavel
Project abstract
Economic Decision-Making Under Deep Uncertainty About AI's Trajectory (co-mentored by Pavel and Wim)
About Pavel
About Wim
Project abstract
Estimating AI Harm Rates for Germany: Applying Epidemiological Methods to Incident Monitoring
About Branwen
Project abstract
Technical AI Governance
Risk-Weighted Compute Permits Under Imperfect Monitoring: Enforcement Design and an EU-Implementable Blueprint
About Joel
Project abstract
Stanford SAFE - Designing Interactive Modules to Introduce High-Ranking Decision Makers to Technical AI Fundamentals (co-mentored by Felix and Duncan)
About Felix
About Duncan
Project abstract
Technical AI Safety
Empirical Technical AI Safety Research (topic flexible; see project abstract)
About Joschka
Project abstract
Advancing the Human Empowerment Approach to Safe AI Agents
About Jobst
Project abstract
Outcome-Based Distillation for Jailbreaking Safety Guardrails
About Alexander (Sasha)
Project abstract
Detecting and Mitigating Language-Triggered Value Instability in Multilingual LLMs (co-mentored by Ajay, Jeyashree and Jason)
About Ajay
About Jeyashree
About Jason
Project abstract
Eliciting Encoded Reasoning in Language Models Trained Against Chain-of-Thought Monitors
About Julian
Project abstract