SAIGE Incubator: Spring 2026 Cohort | Project & Mentor Profiles

This national fellowship is designed to fast-track talented students, recent graduates, and career professionals, including those new to AI safety, into impactful research, field-building, or communication roles while strengthening the AI safety network across Germany. As a mentee, you can choose to apply to one of four tracks:


Communication & Fieldbuilding

Fellowship Navigator: A Decision Tool for Career Transition

About Marta

Project abstract

Visualizing AI Safety Risks for German Decision Makers

About Inken

Project abstract

New Fieldbuilding Organisations in Germany

About Tilman

Project abstract

Raising Awareness for AI Existential Risks in Germany

About Karl

Project abstract


AI Governance & Policy

Convergence or Divergence? The Future of Frontier AI Capabilities and Implications for Catastrophic Risk

About Pavel

Project abstract

Economic Decision-Making Under Deep Uncertainty About AI's Trajectory (co-mentored by Pavel and Wim)

About Pavel

About Wim

Project abstract

Estimating AI Harm Rates for Germany: Applying Epidemiological Methods to Incident Monitoring

About Branwen

Project abstract

How Will AI Affect the Democracy Equilibrium?

About Simon

Project abstract


Technical AI Governance

Verification of a Global AI Treaty

About Naci

Project abstract

Risk-Weighted Compute Permits Under Imperfect Monitoring: Enforcement Design and an EU-Implementable Blueprint

About Joel

Project abstract

Stanford SAFE: Designing Interactive Modules to Introduce High-Ranking Decision Makers to Technical AI Fundamentals (co-mentored by Felix and Duncan)

About Felix

About Duncan

Project abstract


Technical AI Safety

Empirical Technical AI Safety Research (topic flexible; see project abstract)

About Joschka

Project abstract

Inoculation Against Model Poisoning

About Florian

Project abstract

Advancing the Human Empowerment Approach to Safe AI Agents

About Jobst

Project abstract

A Meta-Analysis of the AI Safety Research Landscape

About Ihor

Project abstract

A Benchmark for Preventing Emergent Misalignment

About Florian

Project abstract

Outcome-Based Distillation for Jailbreaking Safety Guardrails

About Alexander (Sasha)

Project abstract

Detecting and Mitigating Language-Triggered Value Instability in Multilingual LLMs (co-mentored by Ajay, Jeyashree and Jason)

About Ajay

About Jeyashree

About Jason

Project abstract

Eliciting Encoded Reasoning in Language Models Trained Against Chain-of-Thought Monitors

About Julian

Project abstract