No one is prepared for AGI, not even OpenAI.
OpenAI disbands another safety team following the departure of a policy leader.
Miles Brundage, OpenAI's senior advisor for artificial general intelligence (AGI) readiness, issued a stark warning when announcing his departure on Wednesday: no one is ready for AGI, not even OpenAI. Brundage, who spent six years shaping the company's safety initiatives, said that neither OpenAI nor any other frontier lab is ready for AGI, and neither is the world at large. He noted that this view should not be controversial among OpenAI's leadership, but stressed that it is a separate question from whether the company and the world are on track to be ready at the relevant time.
Brundage's resignation is the latest in a series of departures by prominent figures from OpenAI's safety teams. Jan Leike, a leading researcher, left after saying that safety culture and processes had taken a back seat to shiny products. Similarly, Ilya Sutskever, one of the company's co-founders, departed to launch his own startup focused on safe AGI development.
The dissolution of Brundage's AGI Readiness team, coming just months after the company dismantled its Superalignment team dedicated to long-term AI risk mitigation, reflects growing tension between OpenAI's original mission and its commercial ambitions. The company reportedly faces pressure to convert from a non-profit organization into a for-profit benefit corporation within two years, or risk having to return funds from its recent $6.6 billion investment round. This drift toward commercialization has long concerned Brundage, who voiced reservations as early as 2019, when OpenAI established its for-profit division.
Explaining his departure, Brundage cited growing constraints on his freedom to research and publish within the high-profile company. He emphasized the importance of independent voices in AI policy debates, free from the biases and conflicts of interest of the industry. Having advised OpenAI's leadership on internal readiness, he believes he can now have a greater impact on global AI governance from outside the organization.
The resignation may also reflect a deeper cultural divide within OpenAI. Many researchers joined to advance AI research and now find themselves in an increasingly product-driven environment. The internal allocation of resources has become a point of contention; reports indicate that Leike's team was denied computing power for its safety research before it was ultimately dissolved.
Despite these frictions, Brundage said that OpenAI has offered to support his future work with funding, API credits, and early model access, with no strings attached.