
Safe AI Germany (SAIGE)

TL;DR: SAIGE is a national research and field-building initiative, started in January 2026. We believe that Germany's talent is critical to the global effort of reducing catastrophic risks from AI. We provide an incubator program, resources, professional support, and events to help redirect some of that talent to work on AI Safety.

Note: At the time of writing, SAIGE is entirely self-funded by its director (me). If you like what we have been doing so far and have any funding leads, please contact me at info@safeaigermany.org. If you don't like what we have been doing so far or have any feedback, please also contact me at info@safeaigermany.org.

The preview image was generated by Nano Banana Pro; nothing else in this post was knowingly produced with LLMs.

A Summary of SAIGE

We aim to address an urgent inefficiency in the current landscape: the shortage of people from Germany positioned to positively influence the trajectory of advanced AI development. In terms of geopolitics, Germany has the political and economic weight to shape the EU AI ecosystem. For example, during the final stages of the EU AI Act, Germany acted as the ultimate swing vote while some major member states pushed back against the provisional agreement, ensuring the successful adoption of the legislation.

Speaking of technical talent alone: according to the Federal Statistical Office (Destatis), Germany holds the highest share of STEM Master's degrees in the EU (35%), significantly outperforming the EU average of 25%. Moreover, Germany has a world-class engineering sector, with approximately 300,000 STEM students per year (source) and more than 110,000 Law students (source). Yet global capacity in technical safety and governance remains critically limited. We see a massive structural bottleneck in the local ecosystem: virtually none of this top-percentile talent is funneled into AGI safety. Instead, this hidden reserve of industrial experts flows almost exclusively into traditional roles (e.g. mechanical engineering, with 1.3 million employees), simply because they lack the context and infrastructure to apply their skills to AGI safety.

Our mission is to build the centralised infrastructure required to bridge this gap. We are moving beyond volatile student initiatives to create a stable national organisation that supports both groups (students and career professionals) through:

Upskilling: We have launched our inaugural SAIGE incubator program, providing coverage for cities that currently lack local hubs. This ensures high-potential students and professionals have a clear path into the field. We received 69 mentorship applications for the Spring 2026 cohort, but due to capacity (since we only started in January 2026!), we could only include 22 of them (acceptance rate ≈ 32%). This was by no means an easy decision. The project review process was done with the help of our board advisors, each of whom is specialised within their field.

Alongside the incubator program, we are also organising events such as discussions on AI middle powers, networking meet-ups, and talks from global experts, so that our community can gain up-to-date information and networking opportunities in AI Safety.

Career support: For career professionals, we have partnered with High Impact Professionals and Impact Academy to provide networking and career guidance.
See our Pivot Track for more details. In addition, we are collaborating with Successif on workshops on how to transition one's career into AI Safety, such as this.

Theory of Change

Currently, the path of least resistance for high-potential German talent is to swarm into standard industry roles. Our theory of change is focused on expanding the AI Safety talent pool by redirection.

A link to our Theory of Change diagram can be seen here. Note that "Sufficient funding" is still pending at the time of writing.

We define a successful "AI Safety role" outcome to include any of the following:
- Employment: full-time permanent positions, short-term fixed positions, or project-based contractor positions at established labs and organisations (e.g. the MATS fellowship);
- Entrepreneurial roles: founding new AI Safety initiatives or non-profits;
- Civic & ecosystem contribution: high-impact pro-bono work such as advising policymakers or giving educational talks.

Note: Since SAIGE is just starting its journey, although we have plenty of activities listed in our Theory of Change, we need to determine which ones to prioritise first, according to our goal. See the planned activities below for more details.

Our Activities

Due to funding constraints, we separate our activities into two phases. Phase I activities are already being carried out; these are relatively low-budget. Phase II activities would mean scaling and institutionalisation, which is contingent on funding.

Phase I:
- the SAIGE incubator program,
- the Pivot Track for career professionals,
- low-budget online events, and
- basic infrastructure support for local groups. We are currently supporting new local groups being set up in Frankfurt, Bonn and Nuremberg.

Phase II:
- in-person events/retreats, incl. a national retreat for local leadership every 6 months, to provide feedback to each other and to SAIGE,
- SAIGE Day,
- in-person hackathons (collaboration with Apart Research already agreed), and
- deployment of a centralised tech stack to relieve local organisers of administrative burdens.

Depending on capacity, in Phase II we could also include events which would likely add to our outreach but are not currently on our priority list, such as an introductory course partnered with AIS Collab to fit the German semester dates, and a weekend-intensive program for career professionals to better suit their schedules and capacity for time commitment. These are not listed in Phase I, since the incubator program already aims to include an introductory course, and we do not yet know the exact, quantitative impact of such a program. However, if we see positive results and receive sufficient funding, we will consider these as well in Phase II.

Our Team

See the "our team" page for who is in our core team and who our board advisors are. Below is more information on everyone.

Core Team

Jessica P. Wang, Director

Background: Educational background in mathematics. Worked at Epoch AI to develop, and later co-organise, the FrontierMath project: specifically, as their Outreach Coordinator sourcing talent for Tier 4, and as co-organiser of the 2025 FrontierMath Symposium, held at Constellation. Top-9 global contributor to Humanity's Last Exam. Previously worked as a reviewer for the $18 million AI for Math Fund at Renaissance Philanthropy, and will continue as their reviewer for the 2026 funding round.
Also worked as the Global Operations Analyst at Calastone, the largest global funds network. Worked at the International Mathematical Olympiad (1300+ attendees) as the only official photographer in 2024, and as a team guide in 2019. In addition, President of the Durham University Maths Society and Ambassador for the Institute of Physics.

Responsibilities: Oversees the overall progress, design, and execution of activities. Communicates with existing and potential collaborators to ensure activities are carried out smoothly. Also responsible for outreach, fundraising, and the entire website.

Manon Kempermann, Tech Lead

Background: Educational background in data science and artificial intelligence. Founder of AI Safety Saarland. Currently writing a thesis at the Max Planck Institute for Software Systems on red-teaming for misalignment in AI agents. A Pathfinder mentor at Kairos. Organised AI Safety events, including a talk with Anthropic with 300+ attendees. Also works as a research assistant at the Interdisciplinary Institute for Societal Computing. Current research focuses on context-sensitivity in AI safety evaluations. Presented her paper, "Challenges of Evaluating LLM Safety for User Welfare", at IASEAI26 in Paris.

Responsibilities: Works with the Director on the nationwide rollout of the Interdisciplinary Research Incubator model, adapting the successful AIS Saarland framework for a much broader German context. Oversees the strategic pairing of technical mentors with participants to maximise research output.

Jessie Kelly, Governance Lead

Background: Educational background in law. Designed and implemented realignment programs and national policies for governments, including the Australian Government. Over 15 years of experience helping governments with new programs and policies, including analysis of technological trends. Alongside SAIGE, she is currently working on a project with the UN and a scientific institute to consider what the ground rules for AI Governance in agriculture should be. She has previously worked with Australia's national science agency (CSIRO), the Australian Embassy in Berlin, and the German Red Cross.

Responsibilities: Oversees and manages the AI Governance track of the SAIGE Research Incubator. Identifies high-quality mentors and helps governance research fellows progress in their projects and careers. Works with the Tech Lead and the Director to ensure the SAIGE incubator runs smoothly.

Franziska Heuschkel, Communications Manager

Background: Educational background in international management and intercultural communication. Spent 7 years shaping brand and visibility initiatives for international corporations, including Coca-Cola and Lufthansa, before working 7 years as a consultant advising start-ups and SMEs in hospitality/prop tech on user-centric positioning and sustainability. Co-founded a Berlin-based agency and think tank designing innovation hubs and co-working spaces based on human-centred design methodologies. Built programs, facilitated cross-functional collaboration, and organised 30+ events, talks, and workshops within Berlin's start-up landscape.

Responsibilities: Works with the Director to design promotional materials for key initiatives. Drives continuous improvement by gathering and analysing feedback from events to better understand audience needs and refine SAIGE's offerings.
Additionally, serves as SAIGE's on-the-ground representative at in-person events and networking opportunities throughout the Berlin ecosystem.

Board Advisors

Since our core team has the potential weakness of being relatively new to AI Safety, we are very grateful to have a list of experts across different fields to help us make good judgment calls in our decisions (including, but not restricted to, mentorship project review for our incubator, advising on program management, leadership structure, etc.). At the time of writing, we are still actively looking for and adjusting our list of board advisors to make sure we have a high-calibre set of experts to reach out to in times of uncertainty and to give us timely feedback. Hence, the list is not yet finalised. There are also some advisors who have been guiding us with their wisdom but do not wish to be publicly named. In any case, the finalised list will contain:

Leadership advisor(s): Have regular contact with the Director to provide feedback, and to ensure SAIGE's activities are aligned with the bigger AI Safety ecosystem. Also make sure that the planned activities are reasonable given the range of capacities within the core team.

Operations advisors: Advise the core team on the practical execution and logistical planning of SAIGE's activities. Provide concrete guidance when operational uncertainties arise, such as determining the optimal format for programs or advising on resource allocation.

Technical advisors: Advise the core team on the technical direction of SAIGE's initiatives, drawing on years of in-depth experience in AI alignment. Provide expert evaluation of technical project proposals for the incubator to ensure mission alignment, identify the most critical and relevant AI Safety topics for today's ecosystem, and resolve any technical uncertainties the core team encounters.

Governance advisors: Analogous to the role of technical advisors, but for the governance and technical-governance directions of SAIGE's activities.

Final Remark

While we are proud of the traction our Incubator and Pivot Tracks have already achieved (plus nearly 300 registrations for our launch event), this is only the beginning of Phase I. The window to positively shape transformative AI is narrow, and leaving Europe's top talent on the sidelines is a systemic failure we can no longer afford. Whether you are someone interested in exploring AI Safety, a professional looking to pivot your career, an expert willing to mentor the next generation, or a funder ready to help us scale our activities, please join our activities and/or reach out!
