Visualizing AI Safety Risks for German Decision Makers

Mentor: Inken Paland
Project area: AI Safety Communication

Project Language

English and German.

Minimum Time Commitment

15 hours per week.

Project Abstract

The primary objective of this project is to close the critical communication gap between high-level AI safety research and the intuitive understanding of German decision-makers. As transformative AI systems advance, technical alignment challenges remain largely invisible to the political sphere. We aim to translate these abstract risks into legible, high-impact visual narratives.

Methodologies:
Research: Deep-diving into technical safety papers, such as those on Deceptive Alignment or Sleeper Agents.

Translation: Utilizing Design Thinking to simplify complex concepts without compromising scientific accuracy.

Production: Creating a series of professional short videos and digital assets specifically tailored for politicians and industry leaders.

Contributions: The project will deliver a modular video campaign designed for the German political landscape. By providing "epistemic clarity" through visual evidence, we empower leaders to make informed decisions on AI governance and the EU AI Act. Simultaneously, we train a new generation of science communicators fluent in both technical safety and strategic media.

Theory of Change

The primary bottleneck for effective AI governance in Germany is the lack of technical intuition regarding catastrophic risks. Without a clear mental model of how AI systems could fail or become misaligned, policy proposals remain vague or focused on minor harms rather than existential safeguards.

Our project addresses this by providing "epistemic clarity" through visual evidence. By creating high-fidelity demonstrations of AI risks, we enable decision-makers to grasp the urgency of safety standards and the implementation of the EU AI Act. This visual translation lowers the barrier for politicians to support technical safety research and robust governance frameworks.

Furthermore, by engaging mentees in this process, we build a talent pipeline of experts who can bridge the gap between technical research and public discourse. Ultimately, better-informed leaders lead to more resilient policies, significantly reducing the probability of loss of control over transformative AI systems.

Desired Mentee Background

Any field of study; skills and resourcefulness matter more than academic background.

There are no specific requirements. Creativity is most important, and team members with diverse backgrounds are ideal.

Desired Mentee Level of Education

Any level.

Other Mentee Requirements

Applicants should have basic proficiency with video editing and visual design tools to ensure high-quality production standards. Mentees should also arrive with a working understanding of core AI safety concepts so that we can focus on strategic communication. You certainly do not need to be an expert, but you should know what the field encompasses. While we will work together as a group to reach a shared understanding, starting entirely from scratch would take up too much of our limited time.

I am looking for individuals who can apply creative methodologies like Design Thinking to translate complex scientific research into accessible narratives. A high degree of autonomy is required because the mentees will function as an independent production unit under my strategic guidance. Furthermore, fluency in German is necessary as the project specifically targets decision-makers and the public within the German political landscape.

I am also open to including team members who are not fluent in German, provided that the majority of the production unit speaks the language fluently to ensure the project remains deeply rooted in the German context.