OpenAI top executive resigns

Jan Leike, OpenAI's head of alignment, superalignment lead, and executive, announced his resignation on May 17, 2024. His departure marks the end of a significant era in alignment research at the organization.

Leike's announcement, made in a series of tweets, made clear that the decision to leave was not easy. He reflected on his team's accomplishments over the past three years, including launching the first-ever RLHF (Reinforcement Learning from Human Feedback) language model with InstructGPT. The team also made strides in scalable oversight of large language models (LLMs) and pioneered advances in automated interpretability and weak-to-strong generalization.

"I love my team," Leike tweeted, expressing gratitude for the talented individuals he worked with both inside and outside the superalignment team. He highlighted the intelligence, kindness, and effectiveness of OpenAI's talent.

However, Leike's departure was fueled by deep concerns about the company's direction. He disclosed ongoing disagreements with OpenAI's leadership over the organization's core priorities. "We urgently need to figure out how to steer and control AI systems much smarter than us," he stated, emphasizing the need to prepare for the next generations of AI models and to focus on security, monitoring, preparedness, safety, adversarial robustness, alignment, confidentiality, and societal impact.

Leike expressed worry that these critical areas were not receiving the necessary attention and resources. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he tweeted, adding that his team often faced challenges in securing the computational resources needed for their research.

He also voiced concerns about the potential dangers of developing machines smarter than humans, stressing that OpenAI has a significant responsibility to humanity. He criticized the company for prioritizing "shiny products" over safety culture and processes, calling for a shift towards becoming a safety-first AGI (Artificial General Intelligence) company.

In his final message to OpenAI employees, Leike urged them to act with the seriousness appropriate for building AGI and to embrace the necessary cultural changes. "The world is counting on you," he wrote.