Senior OpenAI Safety Researcher Quits Over ‘Terrifying’ AI Development Pace

Senior OpenAI safety researcher Steven Adler quits, warning of “terrifying” pace of AI development and insufficient safeguards against catastrophic risks.

By Ryan Hansen


Image credit: Wikimedia

Key Takeaways

  • Leading safety researcher departs OpenAI after four years of evaluating dangerous AI capabilities
  • Resignation adds to pattern of safety experts leaving over concerns about development pace
  • Competition with Chinese AI firms intensifies pressure to accelerate potentially risky research

Steven Adler, a leading safety researcher at OpenAI, revealed today that he left the company in November. He warned that the rapid race toward artificial general intelligence represents a “very risky gamble” for humanity’s future.

Why it matters: The departure of another senior safety researcher raises fresh doubts about OpenAI’s approach to AI development, adding to growing internal discord over how the company balances innovation against safety precautions.

Safety Concerns: Adler spent four years evaluating AI capabilities and developing safety protocols at OpenAI. His departure reflects deepening anxiety among researchers that the industry is moving too fast without adequate safeguards.

  • Led dangerous capability evaluations
  • Focused on agent safety and control
  • Worked on AGI identity systems

Steven Adler: “Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”

Industry Impact: The resignation adds to a pattern of high-profile departures from OpenAI’s safety teams:

  • Jan Leike left citing safety culture concerns
  • Ilya Sutskever departed after leadership conflicts
  • Superalignment team dissolved

Market Response: OpenAI CEO Sam Altman has indicated the company will accelerate some releases in response to competition from China’s DeepSeek, precisely the type of development race that concerns safety researchers.

Looking Forward: While OpenAI maintains its commitment to safe AI development, the continuing exodus of safety researchers suggests growing internal tension over how the company manages potentially catastrophic risks.

