Research Engineer / Scientist, Health AI


About the Team

The team is dedicated to ensuring the safety, robustness, and reliability of AI models as they are deployed in the real world. OpenAI's charter calls on us to ensure the benefits of AI are distributed widely. Our Health AI team is focused on enabling universal access to high-quality medical information. We work at the intersection of AI safety research and healthcare applications, aiming to create trustworthy AI models that can assist medical professionals and improve patient outcomes.

About the Role

We're seeking strong researchers who are passionate about advancing AI safety and improving global health outcomes. As a Research Scientist, you will contribute to the development of safe and effective AI models for healthcare applications. You will implement practical and general methods to improve the behavior, knowledge, and reasoning of our models in these settings. This will require research into safety and alignment techniques that we aim to generalize toward safe and beneficial AGI.

This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.

In this role, you will:

Design and apply practical and scalable methods to improve the safety and reliability of our models, including RLHF, automated red teaming, scalable oversight, and related techniques.
Evaluate methods using health-related data, ensuring models provide accurate, reliable, and trustworthy information.
Build reusable libraries for applying general alignment techniques to our models.
Proactively assess the safety of our models and systems, identifying areas of risk.
Work with cross-team stakeholders to integrate methods into core model training and launch safety improvements in OpenAI's products.
You might thrive in this role if you:

Are excited about OpenAI's mission of ensuring AGI is universally beneficial and are aligned with OpenAI's charter.
Demonstrate passion for AI safety and improving global health outcomes.
Have 4+ years of experience in deep learning research and LLMs, especially practical alignment topics such as RLHF, automated red teaming, and scalable oversight.
Hold a Ph.D. or other advanced degree in computer science, AI, machine learning, or a related field.
Stay goal-oriented instead of method-oriented, and are not afraid of unglamorous but high-value work when needed.
Have experience making practical model improvements for AI deployments.
Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
Are a team player who enjoys collaborative work environments.
Bonus: have experience in health-related AI research or deployments.
Location:
San Francisco
