Software Engineer, Inference - TL


About the Team

Our team brings OpenAI’s most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to access state-of-the-art AI models - unlocking new capabilities across productivity, creativity, and more. We focus on high-performance model inference and accelerating research through efficient and reliable infrastructure.

About the Role

We’re looking for a hands-on Tech Lead to drive the design, optimization, and scaling of our inference systems. In this role, you’ll lead engineering efforts to ensure our largest models run with exceptional efficiency in high-throughput, low-latency environments. You’ll be responsible for shaping our CUDA strategy, driving performance at the kernel level, and collaborating across teams to deliver end-to-end production readiness.

In this role, you will:

Lead the design and implementation of core inference infrastructure for serving frontier AI models in production.
Own and optimize CUDA-based systems and kernels to maximize performance across our fleet.
Partner with researchers to integrate novel model architectures into performant, scalable inference pipelines.
Build tooling and observability to detect bottlenecks, guide system tuning, and ensure stable deployment at scale.
Collaborate cross-functionally to align technical direction across research, infra, and product teams.
Mentor engineers on GPU performance, CUDA development, and distributed inference best practices.
You may thrive in this role if you:

Have deep expertise in CUDA, including writing and optimizing high-performance kernels for inference or training workloads.
Have experience leading complex engineering efforts, particularly at the systems and performance layer of large-scale ML infrastructure.
Understand the full inference stack - from model loading and memory management to communication libraries and deployment orchestration.
Are comfortable working in large, distributed GPU environments and debugging performance issues across hardware and software layers.
Have strong familiarity with PyTorch and NVIDIA’s GPU software stack (NCCL, NVLink, MIG, etc.).
Take a systems-level view, but aren’t afraid to dive into low-level code when performance is on the line.
Bonus:

Experience with inference frameworks like TensorRT, vLLM, SGLang, or custom model parallelism infrastructure.
Familiarity with TPU, AMD GPUs, ROCm, HIP, TensorRT-LLM, Ray Serve, Megatron, MPI, or Horovod.
Familiarity with profiling tools (Nsight, nvprof, or custom observability stacks).
Background in HPC or large-scale distributed systems engineering.
Location:
San Francisco
