Research Engineer / Scientist - Storage for LLM
San Jose | Regular | R&D - Infrastructure | Job ID: A244964

About the Team
The Infrastructure System Lab is a hybrid research and engineering group focused on building next-generation AI-native data infrastructure. Positioned at the intersection of databases, large-scale systems, and AI, the team leads innovation in areas such as vector and multi-modal databases, infrastructure optimization through machine learning, and LLM-based tooling such as NL2SQL and NL2Chart. It also develops high-performance cache systems, including multi-engine key-value stores and KV caches for LLM inference. The team thrives on collaboration, with researchers and engineers working closely to take ideas from paper to prototype to production. Its work supports key products used by millions and is regularly published and deployed at scale.

About the Role
We are seeking a systems researcher or engineer with deep expertise in large-scale distributed storage and caching infrastructure to design and maintain a high-performance KV cache layer for large language model (LLM) inference. The role focuses on improving latency, throughput, and cost-efficiency in transformer-based model serving by optimizing the reuse of attention key-value states and prompt embeddings. You'll work on cutting-edge AI systems problems with real-world impact, alongside a world-class team. The role offers opportunities to publish, contribute to open source, and attend top conferences, along with competitive compensation, generous research resources, and an innovation-driven culture.

Responsibilities
- Design and implement a distributed KV cache system to store and retrieve intermediate states (e.g., attention keys/values) for transformer-based LLMs across GPUs or nodes.
- Optimize low-latency access and eviction policies for caching long-context LLM inputs, token streams, and reused embeddings.
- Collaborate with inference and serving teams to integrate the cache with token-streaming pipelines, batched decoding, and model parallelism.
- Develop cache consistency and synchronization protocols for multi-tenant, multi-request environments.
- Implement memory-aware sharding, eviction (e.g., windowed LRU, TTL; a minimal sketch follows this list), and replication strategies across GPUs or distributed memory backends.
- Monitor system performance and iterate on caching algorithms to reduce compute costs and response times for inference workloads.
- Evaluate and, where needed, extend open-source KV stores, or build custom GPU-aware caching layers (e.g., CUDA, Triton, shared memory, RDMA).
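For context on the eviction bullet above, here is a minimal sketch of a host-side cache index combining LRU ordering with TTL expiry for KV-cache blocks. It is illustrative only and not from the posting: every name (KVCacheIndex, BlockId, DevicePtr) is hypothetical, the "windowed" aspect of windowed LRU is simplified here to plain LRU plus a TTL check, and a production system would shard the index and manage GPU memory explicitly.

```cpp
// Minimal sketch (hypothetical, not from the posting): an in-process index
// over GPU-resident KV-cache blocks with LRU eviction and TTL expiry.
#include <chrono>
#include <cstdint>
#include <list>
#include <optional>
#include <unordered_map>

using Clock = std::chrono::steady_clock;
using BlockId = std::uint64_t;  // e.g., hash of (prefix tokens, layer, range)
using DevicePtr = void*;        // opaque handle to device-resident K/V tensors

class KVCacheIndex {
public:
    KVCacheIndex(std::size_t capacity, Clock::duration ttl)
        : capacity_(capacity), ttl_(ttl) {}

    // Look up a block; refresh its recency and timestamp on a hit,
    // or drop it if its TTL has elapsed.
    std::optional<DevicePtr> get(BlockId id) {
        auto it = map_.find(id);
        if (it == map_.end()) return std::nullopt;
        Entry& e = it->second;
        if (Clock::now() - e.last_used > ttl_) {  // expired entry
            lru_.erase(e.pos);
            map_.erase(it);
            return std::nullopt;
        }
        lru_.splice(lru_.begin(), lru_, e.pos);   // move to MRU position
        e.last_used = Clock::now();
        return e.ptr;
    }

    // Insert (or overwrite) a block, evicting the LRU entry when full.
    void put(BlockId id, DevicePtr ptr) {
        if (auto it = map_.find(id); it != map_.end()) {
            lru_.erase(it->second.pos);
            map_.erase(it);
        }
        if (map_.size() >= capacity_ && !lru_.empty()) {
            map_.erase(lru_.back());              // evict LRU tail
            lru_.pop_back();
        }
        lru_.push_front(id);
        map_[id] = Entry{ptr, lru_.begin(), Clock::now()};
    }

private:
    struct Entry {
        DevicePtr ptr;
        std::list<BlockId>::iterator pos;  // position in the recency list
        Clock::time_point last_used;
    };
    std::size_t capacity_;
    Clock::duration ttl_;
    std::list<BlockId> lru_;               // front = most recently used
    std::unordered_map<BlockId, Entry> map_;
};
```

In a real serving stack, an index like this would more likely map hashes of token prefixes to paged GPU blocks (in the spirit of paged-attention systems such as vLLM), with one shard per device and eviction coordinated with the request scheduler rather than decided locally.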
Qualifications

Minimum Qualifications
- PhD in Computer Science, Applied Mathematics, Electrical Engineering, or a related technical field.
- Strong understanding of transformer-based model internals and of how KV caching affects autoregressive decoding (see the background note at the end of this posting).
- Experience with distributed systems, memory management, and low-latency serving (RPC, gRPC, CUDA-aware networking).
- Familiarity with high-performance compute environments (NVIDIA GPUs, TensorRT, Triton Inference Server).
- Proficiency in languages such as C++, Rust, Go, or CUDA for systems-level development.

Preferred Qualifications
- Prior experience building inference-serving systems for LLMs (e.g., vLLM, SGLang, FasterTransformer, DeepSpeed, Hugging Face Text Generation Inference).
- Experience with memory-hierarchy optimization (HBM, NUMA, NVLink) and GPU-to-GPU communication (NCCL, GDR, GDS, InfiniBand).
- Exposure to cache-aware scheduling, batching, and prefetching strategies in model serving.

About Us
Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut, and Pico, as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.

Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect, and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity, and enrich life, a mission we work towards every day. As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe, and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

Reasonable Accommodation
ByteDance is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs, or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request
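Background note on the KV-caching qualification (a rough cost sketch, not part of the posting): in autoregressive decoding, step t attends over all t tokens generated so far. Without a cache, the key/value projections for those tokens are recomputed at every step, so emitting T tokens costs on the order of 1 + 2 + ... + T = T(T+1)/2, roughly T²/2 projection passes. With a KV cache, each step computes projections only for the newest token, T passes in total, at the price of holding about 2 × num_layers × num_heads × head_dim × T cached values per sequence. That memory footprint is why the eviction, sharding, and replication strategies described in the Responsibilities section matter at scale.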
Location: San Jose, CA
Salary: $125
