
Research Engineer, Frontier Safety Mitigations, DeepMind

London • Posted Today
Onsite • Full Time • Software Engineering

Minimum qualifications:

  • Bachelor’s degree or equivalent practical experience.
  • 5 years of experience with software development in one or more programming languages.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.

Preferred qualifications:

  • Master's degree or PhD in Computer Science or a related technical field.
  • Experience in Large Language Model (LLM) development, fine-tuning, or safety evaluation methodologies.
  • Experience taking research from concept to product.
  • Experience in areas such as Safety and Alignment.
  • Familiarity with concepts like the Frontier Safety Framework, adversarial attacks, and in-model/out-of-model mitigation strategies.
  • Ability to build large-scale research or engineering systems.

About the job

The goal of our frontier safety mitigations work is to de-risk model launches by researching and implementing defenses against high-stakes frontier safety risks, particularly risks from misuse that could make a model tangibly dangerous as model capabilities increase.

In this role, you will be accountable for the safety and behavior of Google DeepMind’s (GDM) latest Gemini models. You will focus on critical domains such as CBRN (Chemical, Biological, Radiological, Nuclear), Cybersecurity, and Harmful Manipulation, and will ensure that our mitigations continue to enable the beneficial use of our technology. You will employ a wide range of methods, including building novel evaluations, red-teaming, researching and deploying advanced mitigations, monitoring emerging risks, and contributing to model development.

Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.

We are pushing the boundaries across multiple domains. Our global teams offer diverse learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.

Responsibilities

  • Develop high quality evaluations that capture risks arising in frontier safety domains, such as cybersecurity and CBRN.
  • Design, implement, and productionalize a range of safety mitigations, including in-model approaches (e.g., Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) training recipes) and out-of-model solutions (e.g., logging, monitoring).
  • Own the data and evaluation pipeline to measure the effectiveness of our mitigations.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.