Software Engineer, Inference - Multi Modal
About the Team
OpenAI’s Inference team powers the deployment of our most advanced models - including our GPT models, 4o Image Generation, and Whisper - across a variety of platforms. Our work ensures these models are available, performant, and scalable in production, and we partner closely with Research to bring the next generation of models into the world. We're a small, fast-moving team of engineers focused on delivering a world-class developer experience while pushing the boundaries of what AI can do.
We’re expanding into multimodal inference, building the infrastructure needed to serve models that handle image, audio, and other non-text modalities. These workloads are inherently more heterogeneous and experimental, involving diverse model sizes and interactions, more complex input/output formats, and tighter coordination with product and research.
About the Role
We’re looking for a software engineer to help us serve OpenAI’s multimodal models at scale. You’ll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other multimodal workloads in production.
This work is inherently cross-functional: you’ll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. You'll build and optimize the systems that let users generate speech, understand images, and interact with models in ways far beyond text.
In this role, you will:
Design and implement inference infrastructure for large-scale multimodal models.
Optimize systems for high-throughput, low-latency delivery of image and audio inputs and outputs.
Enable experimental research workflows to transition into reliable production services.
Collaborate closely with researchers, infra teams, and product engineers to deploy state-of-the-art capabilities.
Contribute to system-level improvements including GPU utilization, tensor parallelism, and hardware abstraction layers.
You might thrive in this role if you:
Have experience building and scaling inference systems for LLMs or multimodal models.
Have worked with GPU-based ML workloads and understand the performance dynamics of large models, especially with complex data like images or audio.
Enjoy experimental, fast-evolving work and collaborating closely with research.
Are comfortable dealing with systems that span networking, distributed compute, and high-throughput data handling.
Have familiarity with inference tooling like vLLM, TensorRT-LLM, or custom model parallel systems.
Own problems end-to-end and are excited to operate in ambiguous, fast-moving spaces.
Nice to Have:
Experience working with image generation or audio synthesis models in production.
Exposure to distributed ML training or system-efficient model design.
Location: San Francisco