Research Scientist, Evaluations, Security and Privacy, DeepMind
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Mountain View, CA, USA; San Francisco, CA, USA.
Minimum qualifications:
- PhD degree in Computer Science, a related field, or equivalent practical experience.
- 4 years of experience with research agendas across multiple teams or projects.
- 3 years of experience designing and implementing benchmarking frameworks for machine learning models.
- 2 years of experience in security and privacy.
- One or more scientific publications submitted to conferences, journals, or public repositories (e.g., CVPR, ICCV, NeurIPS, ICML, ICLR).
Preferred qualifications:
- 3 years of experience in software development or engineering.
- 2 years of experience coding in C++ and Python.
- Passion for AI technology and all of its possibilities.
About the job
As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution, and infrastructure goals, while providing individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science, such as machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, improving compilers for mobile platforms, and core search, among many others.
As a Research Scientist, you'll also actively contribute to the wider research community by sharing and publishing your findings, with ideas inspired by internal projects as well as from collaborations with research programs at partner universities and technical institutes all over the world.
The mission of the team is to develop solutions to contextual security and privacy challenges in Gemini and in Google's agentic products. This research team anticipates upcoming security challenges and solves them, carrying solutions all the way to deployment in Gemini and other Google products.
Artificial intelligence will be one of humanity’s most transformative inventions. At DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.
Responsibilities
- Drive research to safeguard Gemini's flagship foundation models and agentic products against emerging vulnerabilities at massive scale.
- Design, prototype, and evaluate novel defense mechanisms to protect models and agents from adversarial attacks, prompt injections, and contextual security threats.
- Translate theoretical research breakthroughs into practical, real-world security solutions for both training and inference pipelines.
- Work closely with core modeling, engineering, and Trust and Safety teams to seamlessly integrate security innovations into Gemini's infrastructure.
- Stay ahead of the threat landscape by inventing next-generation security techniques specifically designed for autonomous and agentic AI systems.