
[Hiring] Principia Research Fellows

Published on February 11, 2026 4:30 PM GMT

Principia · London · Fixed-term (6 months) with potential extension · Starting ASAP

We are launching Principia, a new technical research agenda led by Andrew Saxe, focused on theoretical models of representation learning and generalization in modern machine learning systems.

We are hiring 2–3 research fellows to join this effort. We are based in London and offer an initial 6-month fixed-term contract starting ASAP, with the possibility of extension based on mutual interest and funding.

About Principia

Principia’s mission is to develop foundational, predictive theory for modern machine learning systems that can help us understand complex network behaviors, including those critical for AI safety and alignment. We study simple, analytically tractable “model organisms” that capture essential learning dynamics and failure modes, with the goal of explaining and anticipating behaviors observed in large neural networks.

About the research agenda

Our agenda aims to develop analytically and empirically tractable toy models that explain how representations emerge, specialize, and generalize under different training regimes. The broader goal is to build a theory that is:

- mathematically clean and interpretable,
- empirically connected to phenomena observed in modern neural networks, and
- capable of generating testable predictions relevant to AI safety.
Research directions may include learning dynamics, inductive biases, multi-stage training, and generalization phase transitions in simplified model systems.

Our projects will build on prior and ongoing work, including (but not limited to):

- Gated Linear Neural Networks [Link]
- Linear Attention [Link]
- RL Perceptron–style models [Link]

Fellows will work closely with Andrew Saxe, fellow Principia researchers, and collaborators through regular technical discussions, joint problem formulation, and collaborative research projects.

About the role

As a research fellow at Principia, you will contribute directly to this agenda through both independent and collaborative work. This may include:

- developing formal or simplified models,
- conducting theoretical and analytical work, and
- designing and performing empirical experiments to validate theoretical predictions.

The role is research-focused, emphasizing depth, clarity, and conceptual progress.

Who we’re looking for

We are particularly excited about candidates who:

- have demonstrated research experience in a relevant area;
- have a strong theoretical background, for example in theoretical machine learning, physics (especially statistical physics, dynamical systems, or related areas), or applied mathematics and related fields; and
- are comfortable working with mathematical models and analytical arguments.

Formal degrees (including a PhD) are preferred but not required. What matters most is evidence of research ability — for example, through publications, preprints, technical reports, or substantial independent research projects. Experience with empirical or computational work (e.g., PyTorch, JAX, numerical experiments) is a plus.
Working style and environment

- Research hub in London (LISA), with close collaboration and frequent technical discussion with existing collaborators and researchers
- Emphasis on depth, clarity, and conceptual progress, rather than rapid productization
- Support for conference travel and internal research retreats
- Dedicated cloud compute credits to support empirical experiments
- Well-suited for researchers seeking focused time to develop foundational theory relevant to AI safety

Duration, salary, and visas

Contract term: This is initially a 6-month fixed-term position, starting ASAP. We are actively raising additional funding to support longer-term appointments. For exceptional candidates, we may offer a 12-month term from the outset if required. Extensions will be considered depending on funding availability and mutual interest.

Compensation: Salary range of USD 40,000–50,000 for a 6-month contract, depending on experience and seniority.

Visa: While we are unable to offer visa sponsorship in this hiring round, we welcome international applicants and will actively support candidates through the visa process if needed.

How to apply

Please fill in the application form by 25 February 2026 at 11:59 PM GMT, which includes:

- a CV,
- a brief statement of interest (500 words max) describing your background, research experience, and interest in this agenda, and
- contact details for 2–3 referees.
- Optional: supporting materials that reflect your research ability (e.g., preprints, code repositories, or technical writing, if not already included in your CV).

We are reviewing applications on a rolling basis and may make offers before the application deadline, so we encourage applicants to submit the form as early as possible. We will contact selected applicants, ideally within a week of application, to schedule an interview consisting of a brief research talk and 1:1s.

Diversity, Equality and Inclusion

We aim to promote equal opportunities and eliminate discrimination.
We welcome applications from all backgrounds, regardless of gender, ethnicity, religion, or disability, and are happy to discuss any reasonable adjustments you may require throughout the hiring process.

Questions

For any questions, please leave a comment or use this form.
