Fellowship Navigator: A Decision Tool for Career Transition

Mentor: Marta Krzeminska

Project Language

English only.

Research management is often a bottleneck: These roles are hard to fill because they require familiarity with AI safety research as well as strong interpersonal skills and management experience. Moreover, impact-oriented people interested in AI safety usually want to do research themselves, rather than manage other people's research! Crucially, though, you often don't need to excel at research yourself to be excellent at research management. People with experience as project managers, people managers, and executive coaches are often a great fit.

There is a shortage of leaders: The technical AI safety field could benefit greatly from more people with backgrounds in strategy, management, and operations. If you have experience leading and developing a team of more than 30 people, you could make a big difference at a leading AI safety organisation, even if you have little direct experience with AI.

We need founders, ecosystem builders, and communicators: There is plenty of room to found new organisations and expand the ecosystem. There is also a lot of funding available, especially in the for-profit space for AI interpretability and security. Our work on the job board also benefits when people start new organisations: they create new roles we can place our users into!

We need more experienced professionals: As more and more work is delegated to AI, we increasingly depend on experienced managers. They can oversee AI-generated outputs, train others in the use of AI tools, and coordinate teams of humans and AIs.

We need people who are excited about "support" roles: It may seem less exciting not to work directly on the core problems. But it is precisely in roles like operations and management that you multiply the impact of others. These areas are often neglected, even though they are highly impactful. And as someone whose job is to help others find jobs, I find this kind of work quite exciting!


Minimum Time Commitment

10 hours per week.

Project Abstract

The AI safety field has a talent bottleneck, and fellowships are a primary pathway for people to build relevant career capital. However, the fellowship landscape is fragmented: MATS, AISC, BlueDot, SERI, Astra, and dozens of others each target different career stages, require different backgrounds, and lead to different outcomes. Currently, information about these programmes travels largely by word of mouth or referral lists shared by organisations. With limited visibility into their options, applicants either apply (potentially to a poor fit) or don't apply at all, unaware that better-matched options exist.

Phase 1: Research & Database. Map AI safety fellowships, including requirements (technical background, career stage, time commitment, funding), focus areas (alignment, governance, field-building), application timelines, and post-fellowship outcomes, plus acceptance rates where available. This should involve verifying the information with the fellowship organisations.

Phase 2: Tool Development. Building an interactive advisor that asks users about their current situation (background, available time, financial constraints, interests, timeline, etc.) and returns tailored recommendations. The MVP version is a filterable database; a more sophisticated version could use decision-tree logic or AI to provide personalised guidance, similar to how Google Flights suggests "fly a day earlier for lower prices."
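The MVP's filtering step could work roughly as follows. This is a sketch under assumptions: the dictionary keys, the hard constraints chosen (time, technical background, funding, focus-area overlap), the ranking rule, and the programme names are all hypothetical, not a specification of the actual tool.

```python
def recommend(fellowships, profile):
    """Return fellowships compatible with a user's constraints, ranked.

    `fellowships` is a list of dicts; `profile` holds the user's answers.
    All keys are illustrative assumptions.
    """
    matches = []
    for f in fellowships:
        if f["min_hours_per_week"] > profile["available_hours"]:
            continue  # too time-intensive for this user
        if f["requires_technical_background"] and not profile["technical_background"]:
            continue  # hard requirement not met
        if profile["needs_funding"] and not f["funded"]:
            continue  # user cannot take an unfunded programme
        if not set(f["focus_areas"]) & set(profile["interests"]):
            continue  # no overlap with the user's interests
        matches.append(f)
    # Rank: funded programmes first, then lower time commitment.
    return sorted(matches, key=lambda f: (not f["funded"], f["min_hours_per_week"]))

# Hypothetical catalogue and user profile for illustration.
catalogue = [
    {"name": "Programme A", "min_hours_per_week": 10,
     "requires_technical_background": False, "funded": True,
     "focus_areas": ["governance"]},
    {"name": "Programme B", "min_hours_per_week": 40,
     "requires_technical_background": True, "funded": True,
     "focus_areas": ["alignment"]},
]
user = {"available_hours": 15, "technical_background": False,
        "needs_funding": True, "interests": ["governance", "field-building"]}

print([f["name"] for f in recommend(catalogue, user)])  # -> ['Programme A']
```

A decision-tree or AI-assisted version would replace the hard filters with softer trade-off advice (the "fly a day earlier" style of suggestion), but the filterable-database MVP only needs constraint matching like this.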

Phase 3: Validation & Launch. Testing with target users, verifying with the organisations, refining recommendations based on feedback, and publishing on a shareable, visually appealing platform. Potentially, we could host it on FIG's website.

Contributions: A publicly available tool that reduces friction in the AI safety talent pipeline, helps candidates self-select into appropriate programmes, and potentially surfaces lesser-known fellowships that might be better fits. Secondary output: a comprehensive fellowship database useful for career advisors, 80,000 Hours, and the broader community.

Theory of Change

Bad frameworks produce bad decisions. The question of machine moral status will increasingly affect AI development and governance. Currently, most people reasoning about it lack adequate conceptual tools. This matters for catastrophic risk in several ways.

Under-reaction: if AI systems develop welfare-relevant internal states and we lack frameworks to recognize this, we may create systems with misaligned interests while dismissing their signals as "mere computation." A system that experiences something like suffering under certain conditions, and whose operators dismiss this, is a system with reason to deceive.

Over-reaction: anthropomorphizing systems that lack morally relevant properties wastes attention and resources, and may constrain beneficial AI development without corresponding benefit.

Poor discourse: without shared conceptual foundations, public debate about AI consciousness polarizes between dismissive and credulous positions. Neither serves good governance.

The primer addresses these by training researchers and practitioners to reason carefully across multiple frameworks, recognize what each assumes, and navigate uncertainty without false confidence. The German focus (incorporating European philosophical traditions, piloting with German-speaking users) builds SAIGE's national infrastructure while contributing to the broader field.

Conceptual clarity is infrastructure. This project builds it.

Desired Mentee Background

Any or all; it's more about skills and resourcefulness than any particular field of study.

Desired Mentee Level of Education

Any level.

Other Mentee Requirements

- Familiarity with the AI safety ecosystem (completed at least one AI safety course/fellowship, or demonstrable engagement with the field)

- Basic experience with no-code tools (Notion, Airtable, Typeform) OR basic web development skills (HTML/CSS)

- Strong research and synthesis skills (ability to gather information from multiple sources and organise it clearly)

- Basic project planning, project management, and quality control skills

- Good written communication in English

- Bonus: UX/design sensibility or experience creating user-facing tools

No specific programming language or technical stack required. The project can adapt to the mentee's existing skills.