Machine Learning Engineer - GenAI Post Train, Monetization Generative AI


About the Generative AI Production Team

The Post-Training pod under the Generative AI Production Team is at the forefront of refining and enhancing generative AI models for advertising, content creation, and beyond. Our mission is to take pre-trained models and fine-tune them to achieve state-of-the-art (SOTA) performance in vertical ad categories and multi-modal applications. We optimize models through fine-tuning, reinforcement learning, and domain adaptation, ensuring that AI-generated content meets the highest quality and relevance standards. We work closely with pre-training teams, application teams, and multi-modal model developers (T2V, I2V, T2I) to bridge foundational AI advancements with real-world, high-performance applications. If you are passionate about pushing cognitive boundaries, optimizing AI models, and elevating AI-generated content to new heights, this is the team for you.

As a Machine Learning Engineer, you will drive innovations in post-training optimization, reinforcement learning, and fine-tuning techniques to maximize the performance of generative AI models. You will work on multi-modal diffusion models, transformer architectures, and various RL algorithms to adapt pre-trained models into highly performant, domain-specific AI solutions.

Responsibilities:
1) Develop and implement fine-tuning strategies for large-scale diffusion models (T2V, I2V, T2I) to achieve SOTA performance in advertising and creative applications.
2) Optimize reinforcement learning methods (e.g., DPO, PPO, GRPO) to refine generative model outputs, ensuring alignment with human preferences and business objectives.
3) Enhance model personalization by integrating domain adaptation, contrastive learning, and retrieval-augmented generation techniques.
4) Work closely with pre-training teams to refine and extend model capabilities, ensuring seamless adaptation from foundational training to specialized, high-precision use cases.
5) Collaborate with application teams to deploy fine-tuned models into real-world content generation pipelines, optimizing for latency, efficiency, and content quality.
6) Advance model evaluation and signal growth strategies, designing innovative objective and subjective evaluation metrics for continuous model improvement.
7) Integrate novel training methodologies, such as self-supervised learning, active learning, and reinforcement learning-based data curation, to enhance generative model quality.
8) Explore cutting-edge techniques from academia and open-source communities, driving innovation in generative AI and maintaining TikTok's leadership in the field.
Minimum Qualifications:
1) B.S., M.S., or Ph.D. in Computer Science, Electrical Engineering, or a related field. 3+ years of industry experience in machine learning, deep learning, and large-scale AI model optimization. Expertise in PyTorch, diffusion models, and transformer architectures.
2) Strong background in fine-tuning large models for vertical applications in multi-GPU settings. Hands-on experience with reinforcement learning (DPO, PPO, GRPO), contrastive learning, and retrieval-based methods. Deep understanding of generative model evaluation, multi-modal learning, and domain adaptation techniques.
3) Experience in scaling model fine-tuning and inference on large GPU clusters. Strong proficiency in model distillation, quantization, and memory-efficient optimization techniques (e.g., LoRA, QLoRA, ZeRO, DeepSpeed). Familiarity with distributed computing frameworks (Ray, Triton, vLLM) for large-scale AI training.
4) Ability to design iterative data curation loops that enhance model learning signals and domain relevance. Experience in active learning, dataset distillation, and self-improving model pipelines.
Location:
San Jose