Project Language
English only.
Minimum Time Commitment
10 hours per week.
Project Abstract
The AI safety field has a talent bottleneck, and fellowships are a primary pathway for people to build relevant career capital. However, the fellowship landscape is fragmented: MATS, AISC, BlueDot, SERI, Astra, and dozens of others each target different career stages, require different backgrounds, and lead to different outcomes. Currently, information about these programmes travels largely by word of mouth or through referral lists shared by organisations. With limited visibility into their options, applicants either apply, potentially to a poor fit, or don't apply at all, unaware that better-matched options exist.
Phase 1: Research & Database. Map AI safety fellowships, including requirements (technical background, career stage, time commitment, funding), focus areas (alignment, governance, field-building), application timelines, post-fellowship outcomes, and, where available, acceptance rates. This information should be verified with the fellowship organisations themselves.
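The fields listed above could be captured as one structured record per fellowship; below is a minimal sketch in Python, where all field names and the example entry are illustrative assumptions rather than a final schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fellowship:
    """One row of the fellowship database (illustrative fields, not a final schema)."""
    name: str
    focus_areas: List[str]        # e.g. ["alignment", "governance", "field-building"]
    career_stage: str             # e.g. "student", "early-career", "mid-career"
    hours_per_week: int           # expected time commitment
    funded: bool                  # whether participants receive funding
    application_deadline: str     # ISO date, or "rolling"
    acceptance_rate: Optional[float] = None  # often unpublished; None when unknown

# Placeholder entry -- real values would come from Phase 1 research and verification.
example = Fellowship(
    name="Example Fellowship",
    focus_areas=["alignment"],
    career_stage="early-career",
    hours_per_week=10,
    funded=True,
    application_deadline="rolling",
)
```

The same structure maps directly onto a Notion or Airtable base, so the database can live in a no-code tool while keeping fields consistent.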
Phase 2: Tool Development. Building an interactive advisor that asks users about their current situation (background, available time, financial constraints, interests, timeline, etc.) and returns tailored recommendations. The MVP is a filterable database; a more sophisticated version could use decision-tree logic or AI to provide personalised guidance, similar to how Google Flights suggests "fly a day earlier for lower prices."
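At its core, the filterable-database MVP matches user constraints against fellowship records. A minimal sketch of that matching step, with all programme names and sample data invented purely for illustration:

```python
# Each fellowship is a dict of attributes gathered in Phase 1 (placeholder data).
FELLOWSHIPS = [
    {"name": "Programme A", "focus": "alignment", "hours": 30, "funded": True},
    {"name": "Programme B", "focus": "governance", "hours": 5, "funded": False},
    {"name": "Programme C", "focus": "alignment", "hours": 10, "funded": True},
]

def recommend(focus, max_hours, needs_funding):
    """Return names of fellowships compatible with the user's stated constraints."""
    return [
        f["name"]
        for f in FELLOWSHIPS
        if f["focus"] == focus
        and f["hours"] <= max_hours
        and (f["funded"] or not needs_funding)
    ]

# A user with 15 hours/week available, interested in alignment, who needs funding:
matches = recommend("alignment", max_hours=15, needs_funding=True)
# -> ["Programme C"]
```

The decision-tree version would replace this flat filter with a sequence of questions whose answers progressively narrow the candidate set, but the underlying match logic is the same.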
Phase 3: Validation & Launch. Testing with target users, verifying with the organisations, refining recommendations based on feedback, and publishing on a shareable, visually appealing platform. Potentially, we could host it on FIG's website.
Contributions: A publicly available tool that reduces friction in the AI safety talent pipeline, helps candidates self-select into appropriate programmes, and potentially surfaces lesser-known fellowships that might be better fits. Secondary output: a comprehensive fellowship database useful for career advisors, 80,000 Hours, and the broader community.
Theory of Change
One of the bottlenecks in AI safety is talent. Fellowships are one of the most effective mechanisms for helping people transition into the field, providing mentorship, credentials, and networks.
Currently, fellowship discovery is inefficient. People learn about programmes through personal connections, leading to: (1) qualified candidates never hearing about suitable opportunities, (2) candidates applying to poor-fit programmes and getting rejected, which can discourage later attempts, and (3) lesser-known but high-quality programmes remaining undersubscribed while well-marketed ones are oversubscribed.
This tool improves talent allocation by matching candidates to fellowships based on actual fit rather than (social) proximity to information. Better matching means higher acceptance rates, better fellowship experiences, and ultimately more people successfully transitioning into AI safety work.
The downstream impact: a larger, better-matched AI safety workforce. Additionally, a comprehensive fellowship database that benefits career advisors, organisations, and the broader field-building ecosystem.
Desired Mentee Background
Any; the project depends more on skills and resourcefulness than on a particular field of study.
Desired Mentee Level of Education
Any level.
Other Mentee Requirements
- Familiarity with the AI safety ecosystem (completed at least one AI safety course/fellowship, or demonstrable engagement with the field)
- Basic experience with no-code tools (Notion, Airtable, Typeform) OR basic web development skills (HTML/CSS)
- Strong research and synthesis skills (ability to gather information from multiple sources and organise it clearly)
- Basic project planning, project management, and quality control skills
- Good written communication in English
- Bonus: UX/design sensibility or experience creating user-facing tools
No specific programming language or technical stack required. The project can adapt to the mentee's existing skills.