The SAIGE Pivot Track

Designed for experienced professionals interested in pivoting into AI Safety.


You are not starting from scratch. You are bringing the missing piece.



Mid/Senior-career professionals: AI Safety's biggest bottlenecks

A recent 80,000 Hours article, based on a conversation with Ryan Kidd, Co-Executive Director at MATS, highlights some of the most overlooked roles in the AI Safety ecosystem. Specifically, he shared that:


  1. Research management seems to be a substantial bottleneck: These roles can be hard to fill, as they require some familiarity with AI safety research as well as strong interpersonal skills and management experience. Plus, impact-driven people who are interested in AI safety generally want to be researchers themselves — rather than manage the research of others! Crucially, you often don’t need to be a great researcher yourself to be a great research manager: people with experience as project managers, people managers, and executive coaches can all make for excellent research managers.


  2. We lack executive talent: The technical AI safety field could really benefit from more people with backgrounds in strategy, management, and operations. If you have experience managing and growing a team of 30+ people, you could make a big difference at a top-tier AI safety organisation, even if you don’t have a lot of direct experience with AI.


  3. We lack founders, field-builders, and communicators: There’s a lot of room to start new organisations and grow the ecosystem, as well as a lot of available funding, especially in the for-profit AI interpretability and security space. Our work on the Job Board also benefits from people starting new organisations: they create new roles we can match our users to!


  4. We need more mid-career professionals: As more work is delegated to AI, we’ll become increasingly reliant on experienced managers who can oversee AI-generated outputs, train others to use AI tools, and coordinate teams of humans and AIs.


  5. We need people excited for ‘support’ roles: It might seem less exciting to not work directly on top problems, but this means roles in which you multiply the impact of others (e.g. operations and management roles) are neglected despite being very impactful. And, speaking as somebody whose job is to help others get jobs, I find this kind of work can be quite exciting!






The local context: the German ecosystem

Apart from the global situation mentioned above, Germany is the European engine for technical talent: according to the Federal Statistical Office (Destatis), Germany holds the highest share of STEM Master’s degrees in the EU (35%), significantly outperforming the EU average of 25%. Moreover, each year Germany has approximately 300,000 students in STEM fields (source) and more than 110,000 in Law (source).


We see a massive structural bottleneck in the local ecosystem: virtually none of this top-percentile talent is funneled to AI Safety. Instead, this hidden reserve of industrial experts flows almost exclusively into traditional roles (e.g. mechanical engineering with 1.3 million employees). If you have a background in software engineering, ML, or technical R&D, your skills are highly transferable. We explicitly invite you to pivot your engineering rigor toward the problems of AI alignment and robustness.


We lack non-technical talent, too: Germany is a key architect of the EU AI Act, yet its domestic implementation faces a severe shortage of governance and policy experts. If you have experience in compliance, GDPR, or regulatory affairs, you are uniquely positioned to fix this. We need your experience to turn technical safety constraints into coherent national policy. Moreover, while cities like London and San Francisco have mature networks of AI safety organisations, Germany's landscape is fragmented (which was the motivation behind SAIGE). We need more field-builders here, including founders, operations leads, and program managers, to build local capacity (e.g. incubators, think tanks, safety teams).



How SAIGE supports your pivot

SAIGE acts as a central routing hub for experienced professionals looking to transition into the field. After your application to our Pivot Track, we will review your profile to identify the highest-leverage entry point for your skillset.

Depending on the information you provide, we will:

  • Route you to our specialised ecosystem partners (e.g., High Impact Professionals, Impact Academy), and/or

  • Match you with an internal SAIGE advisor for 1:1 strategy and local context.

We're excited to see the first step in your transition path!