Senior Security Engineer (AI & Agentic Systems)
Uber
**About the Role**
As AI systems, especially agentic and autonomous AI, become deeply embedded in our products and internal platforms, the security model must evolve. Traditional application security alone is no longer sufficient. We are looking for a Senior AI Red Team Engineer to help us proactively identify, understand, and mitigate AI-native and agent-specific security risks before they reach production.
In this role, you will design, build, and execute adversarial red-teaming exercises against AI models and AI agents, focusing on how they can be manipulated into unsafe, unintended, or harmful behavior. You will work closely with AI platform teams, product engineers, and security partners to stress-test agent logic, tool usage, memory, and autonomy, and translate findings into concrete guardrails and defenses.
This role is ideal for someone who enjoys thinking like an attacker, understands modern AI systems, and wants to work at the intersection of security, AI, and real-world impact.
---- What the Candidate Will Do ----
This role sits at the intersection of offensive security and AI engineering. You will not be limited to traditional penetration testing; instead, you will focus on behavioral, logical, and contextual attacks that cause AI systems to fail in subtle but dangerous ways, often without exploiting classic vulnerabilities. Success in this role means uncovering "unknown unknowns," clearly articulating risk, and helping teams build safer AI systems by design.
**Design and execute AI red-teaming exercises** against LLMs and AI agents, including:
1. prompt injection (direct & indirect)
2. jailbreaking and policy bypass
3. model and tool poisoning
4. memory and context poisoning
5. behavioral drift and unsafe autonomy
6. tool misuse and emergent privilege escalation
**Analyze agent workflows, logic, and tool graphs** to identify systemic security weaknesses beyond prompt-level attacks.
**Develop reusable adversarial test cases, attack libraries, and red-team playbooks** for AI systems.
**Collaborate with AI platform and product teams** to translate red-team findings into actionable mitigations, guardrails, and design changes.
**Partner with broader security teams** (AppSec, InfraSec, Privacy, Risk) to integrate AI red teaming into the SDLC and launch gates.
**Contribute to AI security strategy**, helping define how we evaluate and secure agentic systems at scale.
**Stay ahead of emerging AI threats**, tracking industry research, incidents, and attack techniques relevant to AI and autonomous systems.
---- Basic Qualifications ----
1. 4+ years of experience in security engineering, offensive security, or red teaming.
2. Hands-on experience red-teaming AI models or AI agents, including testing for prompt injection, jailbreaks, unsafe behavior, excessive agency, and model denial-of-service (DoS).
3. Familiarity with AI production patterns such as ReAct, tool use, and multi-agent orchestration.
4. Strong understanding of security fundamentals (threat modeling, secure design, least privilege, defense in depth).
5. Experience analyzing complex systems and reasoning about unintended behavior and emergent risk.
6. Ability to clearly document findings and communicate risk to both technical and non-technical stakeholders.
7. Proficiency in at least one programming language (e.g., Python, Go, Java, or similar).
---- Preferred Qualifications ----
1. Familiarity with AI security tools and frameworks (e.g., PyRIT, AgentDojo, Promptfoo, custom harnesses).
2. Strong understanding of GenAI and LLM architectures, including embeddings, RAG, and agent frameworks.
3. Hands-on experience building or operating AI agents, including tool calling, memory, or workflow orchestration.
4. Offensive security / penetration testing background (e.g., red team, bug bounty, exploit development).
5. Active on bug bounty platforms such as HackerOne, Bugcrowd, or Synack.
For roles based in New York, NY; San Francisco, CA; Seattle, WA; or Sunnyvale, CA: the base salary range for this role is USD $202,000 to USD $224,000 per year. For all US locations, you will be eligible to participate in Uber's bonus program, and may be offered an equity award and other types of compensation. You will also be eligible for various benefits. More details can be found at the following link: https://www.uber.com/careers/benefits.
Uber is proud to be an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know by completing this form: https://docs.google.com/forms/d/e/1FAIpQLSdb_Y9Bv8-lWDMbpidF2GKXsxzNh11wUUVS7fM1znOfEJsVeA/viewform