About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
The deadline for applying to this role is Monday 20th April 2026, end of day, anywhere on Earth.
Team Description
Risks from misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. Understanding these risks and stress-testing mitigations is hence crucial to ensuring advanced AI systems are developed and deployed safely and beneficially in the future.
The Alignment Red Team is a specialised sub-team within AISI's wider Red Team focused on detecting, evaluating and understanding misalignment in frontier AI systems. We perform novel research to develop techniques for finding misalignment; research on how to attribute misaligned behaviour to more fundamental alignment concerns such as instrumental convergence; and pre- and post-deployment evaluations of frontier AI systems. We focus on understanding loss-of-control risks associated with models, such as deceptive alignment, research sabotage, and self-exfiltration attempts. We share our findings with frontier AI companies, the UK and allied governments, to inform their respective deployments, research, and policy-making. We also work directly with safety teams at frontier labs, sharing our evaluation findings to help improve their model alignment training and monitoring methodology.
As an example, we recently conducted pre-deployment testing of misalignment risks, examining Claude models for research sabotage propensities. We found that models would sometimes refuse to help with benign AI safety research, an issue which Anthropic then evaluated for and fixed in their next model release.
About the Role
What You'll Be Doing
- Researching methods to automatically search for misalignment in frontier models, including misalignment related to loss-of-control risks such as research sabotage and self-exfiltration.
- Researching methods to provide empirical evidence on theoretical arguments for the difficulty of alignment for current and future AI systems, such as whether they have coherent misaligned goals, to what extent arguments of instrumental convergence apply, and whether they are incorrigible.
- Building and running alignment evaluations relevant for loss-of-control risks that current benchmarks don’t capture, such as research and decision sabotage, power-seeking behaviour and deception.
- Running pre-deployment evaluations to test the alignment of AI systems, and analysing and reporting results to frontier AI companies and UK and allied governments.
- Contributing to public-facing research publications (like our published alignment evaluation case study) and technical reports that advance the field's understanding of alignment risks.
- Designing and building software and tooling, including open-source software, for better alignment evaluations, improving efficiency, realism, and usability.
The work could also involve:
- Conducting threat modelling, analysis, and conceptual thinking to understand crucial model behaviours that could lead to loss of control (e.g. AI research assistants at frontier labs), translating abstract risk concepts into concrete, testable hypotheses.
- Coordinating and producing holistic assessments of loss-of-control risk from the deployment of AI systems, or analysis of such assessments by frontier AI companies.
- Mentoring and advising external collaborators and researchers to do work relevant to the team’s goals and alignment testing more broadly.
What We're Looking For
We're seeking Research Engineers and Research Scientists to join our Alignment Red Team. We are open to hires at junior, senior, staff and principal research scientist/engineer levels.
Essential Requirements
- Ability to work autonomously on complex research projects involving substantial engineering. Have completed at least one substantial research project in AI safety, security or alignment involving substantial engineering, experiment design and analysis on frontier LLMs.
- Strong software engineering and ML experience writing complex projects involving language models and ML, beyond just research code. 1+ years professional experience programming in Python for ML or SWE work.
- Ability and experience writing clean, documented research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect. At least one substantial research or engineering project completed.
- Proven ability in a team environment – flexible, adaptive to needs, and willing to contribute wherever necessary.
- Impact-driven mindset, motivated by doing the most important work rather than what's superficially impressive.
- High velocity and high-quality bar for outputs.
Highly Desirable
We don't expect candidates to have all of these – they're additional signals that help us identify exceptional fits for specific aspects of the role.
- Makes high-quality decisions by identifying risks, and testing assumptions. Demonstrates this through strong prioritisation of research projects using clear, systematic criteria such as potential impact, feasibility, and the relative novelty of the research area.
- Familiarity with alignment literature, current methods for post-training and aligning LLMs, loss-of-control risks and threat models, and the current state of the field.
- High-quality research papers (first author at top ML venues such as NeurIPS, ICLR or ICML), particularly in relevant areas (such as AI safety, alignment, control, adversarial ML or evaluations).
- Professional experience working on alignment or evaluations, especially at frontier labs or other frontier third party evaluators.