Security Engineer, Agent Security


About the Team

The team’s mission is to accelerate the secure evolution of agentic AI systems at OpenAI. To achieve this, the team designs, implements, and continuously refines security policies, frameworks, and controls that defend OpenAI’s most critical assets, including the user and customer data within them, against the unique risks introduced by agentic AI.

About the Role

As a Security Engineer on the Agent Security Team, you will be at the forefront of securing OpenAI’s cutting-edge agentic AI systems. You will design and implement robust security frameworks, policies, and controls to safeguard OpenAI’s critical assets and ensure the safe deployment of agentic systems. You will develop comprehensive threat models, partner closely with our Agent Infrastructure group to fortify the platforms that power OpenAI’s most advanced agentic systems, and lead efforts to enhance safety monitoring pipelines at scale.

We are looking for a versatile engineer who thrives in ambiguity, makes meaningful contributions from day one, and ships solutions quickly while maintaining a high standard of quality and security. You should be able to drive innovative solutions that set the industry standard for agent security, bringing expertise in securing complex systems and designing robust isolation strategies for emerging AI technologies while remaining mindful of usability. You will communicate effectively across teams and functions, ensuring your solutions are scalable and robust while working collaboratively in an innovative environment. In this fast-paced setting, you will have the opportunity to solve complex security challenges, influence OpenAI’s security strategy, and play a pivotal role in advancing the safe and responsible deployment of agentic AI.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

You’ll be responsible for:
Architecting security controls for agentic AI – design, implement, and iterate on identity, network, and runtime-level defenses (e.g., sandboxing, policy enforcement) that integrate directly with the Agent Infrastructure stack.
Building production-grade security tooling – ship code that hardens safety monitoring pipelines across agent executions at scale.
Collaborating cross-functionally – work daily with Agent Infrastructure, product, research, safety, and security teams to balance security, performance, and usability.
Influencing strategy & standards – shape the long-term Agent Security roadmap, publish best practices internally and externally, and help define industry standards for securing autonomous AI.
We’re looking for someone with:
Strong software-engineering skills in Python or at least one systems language (Go, Rust, C/C++), plus a track record of shipping and operating secure, high-reliability services.
Deep expertise in modern isolation techniques – experience with container security, kernel-level hardening, and other isolation methods.
Hands-on network security experience – implementing identity-based controls, policy enforcement, and secure large-scale telemetry pipelines.
Clear, concise communication that bridges engineering, research, and leadership audiences; comfort influencing roadmaps and driving consensus.
Bias for action & ownership – you thrive in ambiguity, move quickly without sacrificing rigor, and elevate the security bar company-wide from day one.
Cloud security depth on at least one major provider (Azure, AWS, GCP), including identity federation, workload IAM, and infrastructure-as-code best practices.
Familiarity with AI/ML security challenges – experience addressing risks associated with advanced AI systems (nice-to-have but valuable).
Location: San Francisco