Steven Adler, a leading safety researcher at OpenAI, revealed today that he left the company in November, warning that the rapid race toward artificial general intelligence is a “very risky gamble” for humanity’s future.
Why it matters: The departure of another senior safety researcher raises fresh questions about OpenAI’s approach to AI development, adding to growing internal discord over how the company balances innovation against safety precautions.
Safety Concerns: Adler spent four years at OpenAI evaluating AI capabilities and developing safety protocols. His departure reflects deepening anxiety among researchers that the industry is moving too fast without adequate safeguards.
- Led dangerous capability evaluations
- Focused on agent safety and control
- Worked on AGI identity systems
What Adler is saying: “Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”
Industry Impact: The resignation adds to a pattern of high-profile departures from OpenAI’s safety teams:
- Jan Leike left citing safety culture concerns
- Ilya Sutskever departed after leadership conflicts
- Superalignment team dissolved
Market Response: OpenAI CEO Sam Altman has indicated the company will accelerate some releases in response to competition from China’s DeepSeek, precisely the type of development race that worries safety researchers.
Looking Forward: While OpenAI maintains its commitment to safe AI development, the exodus of safety researchers suggests growing internal tension over the company’s approach to managing potentially catastrophic risks.