San Francisco, CA · Onsite · $230,000 - $400,000 · Posted 2 weeks ago
Full-time · Senior · gpt-4 · gpt-4o · custom
About the Role
OpenAI is looking for an Alignment Research Engineer to join our Superalignment team. You will work on developing scalable oversight techniques, automated red-teaming, and interpretability methods for frontier AI systems.
This is a high-impact role: your work directly contributes to ensuring that advanced AI systems remain safe and beneficial. You will collaborate with world-class researchers on some of the most important open problems in AI.
The ideal candidate has strong ML engineering skills, a passion for safety research, and the ability to rapidly prototype and evaluate new alignment techniques.
Requirements
- 5+ years of ML engineering or research experience
- Experience with RLHF and related training methods
- Strong Python, PyTorch, and distributed training skills
- Familiarity with interpretability methods and mechanistic understanding of models
- PhD or equivalent research experience
- Published work in AI safety, alignment, or related fields preferred
Required Skills
Python · PyTorch · RLHF · Distributed Training · Transformers
About OpenAI
Creating safe AGI that benefits all of humanity. Makers of GPT-4, ChatGPT, and DALL-E.