
OpenAI Prioritizes 'Shiny Products' Over AI Safety, Ex-Researcher Says

Jan Leike, who resigned from OpenAI this week alongside fellow researcher Ilya Sutskever, says he 'finally reached a breaking point' over the company's core priorities.

By Michael Kan
May 17, 2024
OpenAI logo (Photo by Jaap Arriens/NurPhoto via Getty Images)

A researcher who just resigned from ChatGPT developer OpenAI is accusing the company of not devoting enough resources to ensure that artificial intelligence can be safely controlled. 

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," ex-OpenAI researcher Jan Leike claimed in a tweet on Friday.

A year ago, OpenAI appointed Leike and his colleague, renowned AI researcher Ilya Sutskever, to co-lead a team focused on reining in future superintelligent AI systems to prevent long-term harm. The resulting “superalignment” team was supposed to have access to 20% of OpenAI’s computing resources to research and prepare for such threats. 

But earlier this week, both Leike and Sutskever abruptly resigned from the company. Although Sutskever said he believes the company is on track to develop a “safe and beneficial” artificial general intelligence, Leike took to Twitter/X on Friday to express some serious doubts.

“Over the past few months my team has been sailing against the wind,” Leike alleged in a long tweet thread. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

He also revealed more about why he quit. "I joined because I thought OpenAI would be the best place in the world to do this research,” Leike said in a separate tweet. “However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.”

In another post, Leike noted that “building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” This was posted days after OpenAI debuted GPT-4o, its latest large language model.

Leike’s tweets are bound to raise some serious concerns about OpenAI, which is trying to develop AI systems that can match and eventually exceed human capability. The company didn’t immediately respond to a request for comment. But OpenAI told Bloomberg that the superalignment team Leike and Sutskever were leading has been effectively disbanded. Instead, the company is preparing to integrate the team's remaining members across OpenAI’s research efforts. 

Wired reports that five researchers who focused on safety and policy at OpenAI were fired or resigned in recent months. That said, the company has other groups focused on shorter-term AI safety threats, whereas the superalignment team spent its efforts on far-off, theoretical dangers. 

UPDATE: OpenAI CEO Sam Altman has since responded to Leike. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days," Altman said in his own tweet.



