OpenAI CEO Sam Altman is leaving the internal commission OpenAI created in May to oversee "critical" safety decisions related to the company's projects and operations.
In a blog post today, OpenAI said the committee, the Safety and Security Committee, will become an "independent" board oversight group chaired by Carnegie Mellon professor Zico Kolter and including Quora CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and ex-Sony EVP Nicole Seligman. All are existing members of OpenAI's board of directors.
OpenAI noted in its post that the commission conducted a safety review of o1, OpenAI's latest AI model, after Altman had stepped down. The group will continue to receive regular briefings from OpenAI safety and security teams, said the company, and retain the power to delay releases until safety concerns are addressed.
"As part of its work, the Safety and Security Committee … will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring," OpenAI wrote in the post. "[W]e are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches."
Altman's departure from the Safety and Security Committee comes after five U.S. senators raised questions about OpenAI's policies in a letter addressed to Altman this summer. Nearly half of the OpenAI staff that once focused on AI's long-term risks have left, and ex-OpenAI researchers have accused Altman of opposing "real" AI regulation in favor of policies that advance OpenAI's corporate aims.
To their point, OpenAI has dramatically increased its expenditures on federal lobbying, budgeting $800,000 for the first six months of 2024 versus $260,000 for all of last year. Earlier this spring, Altman also joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which provides recommendations for the development and deployment of AI throughout U.S. critical infrastructure.
Even with Altman removed, there's little to suggest the Safety and Security Committee would make difficult decisions that seriously impact OpenAI's commercial roadmap. Tellingly, OpenAI said in May that it would look to address "valid criticisms" of its work via the commission, though "valid criticisms" is, of course, in the eye of the beholder.
In an op-ed for The Economist in May, ex-OpenAI board members Helen Toner and Tasha McCauley said that they don't think OpenAI as it exists today can be trusted to hold itself accountable. "[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives," they wrote.
And OpenAI's profit incentives are growing.
The company is rumored to be in the midst of raising $6.5+ billion in a funding round that'd value OpenAI at over $150 billion. To cinch the deal, OpenAI could reportedly abandon its hybrid nonprofit corporate structure, which sought to cap investors' returns in part to ensure OpenAI remained aligned with its founding mission: developing artificial general intelligence that "benefits all of humanity."

