When he returned to the White House in January, Donald Trump quickly dismantled the regulatory framework that his predecessor, Joe Biden, had introduced to address the risks of artificial intelligence.
The US president’s actions included revoking a 2023 executive order that required AI developers to submit safety test results to federal authorities when a system posed “serious risk” to national security, the economy, or public health and safety. Trump’s order characterised such guardrails as “barriers to American AI innovation”.
This back and forth on AI regulation reflects a tension between public safety and economic growth also seen in areas such as workplace safety, financial stability, and environmental protection. If regulation prioritises growth, should businesses nevertheless continue to align their governance with the public interest? And what are the advantages and disadvantages of doing so?
OpenAI’s founding as a nonprofit organisation in 2015 by Sam Altman and co-founders including Elon Musk reflected an early concern among investors and technologists to ensure that AI is developed safely, ethically, and for the benefit of humanity.
That concern has led many companies to adopt new corporate structures aimed at balancing economic gains with broader social interests. In 2021, for example, seven former OpenAI employees founded Anthropic and incorporated it as a public benefit corporation. Its founding document states that Anthropic’s purpose is the responsible development and maintenance of advanced AI for the long-term benefit of humanity.
Test yourself
This is part of a regular series of case studies on business dilemmas, written for use in business schools. Before considering the questions raised, read the text and the articles linked within it, as well as related reporting elsewhere. The series forms part of the FT’s wider collection of “instant teaching case studies” exploring business challenges.
The benefit corporation structure, first introduced in Maryland in 2010, has since been adopted in more than 40 US states as well as Washington, DC, and Puerto Rico, and in Italy, Colombia, Ecuador, France, Peru, Rwanda, Uruguay, and the Canadian province of British Columbia.
Such structures have, however, also been adopted by AI companies whose goals are not obviously tied to environmental and social impact. Musk’s xAI, founded as a Nevada benefit corporation, states that its corporate purpose is to “have a significant positive impact on society and the environment as a whole”.
Critics argue that the benefit corporation model lacks teeth. Although it typically includes transparency provisions, the associated reporting requirements fail to provide meaningful accountability as to whether a company is actually achieving its stated purpose.
All this raises the likelihood that such models open the door to “governance washing”. Following a wave of lawsuits against opioid maker Purdue Pharma, its owners, the Sackler family, proposed converting the company into a public benefit company. Final resolution of the numerous lawsuits against the company is still pending.
OpenAI illustrates the governance problems facing the AI sector. In 2019, the company created a for-profit entity in order to receive billions of dollars in investment from Microsoft and others. Many early employees have since departed, reportedly over safety concerns.
Musk sued OpenAI and Sam Altman in 2024, claiming they had compromised the startup’s founding mission of building AI systems for the benefit of humanity.
In December 2024, OpenAI announced plans to restructure itself as a public benefit corporation, and in early 2025 its nonprofit board reportedly worked on splitting OpenAI into two entities. Musk opposed the move, and this month made an unsolicited bid of more than $97bn for OpenAI.
OpenAI’s funding trajectory lends support to the argument, raised by Musk and others, that it prioritises profit over the public interest. In October 2024, the company secured a record investment round at a $157bn valuation. Yet it has still not formalised its ownership structure and governance framework, which will have a major bearing on investors’ influence over the company’s mission and its execution.
Once the company completes its restructuring, should it embrace the industry-friendly vision set out in Trump’s executive order and move away from its focus on safety and humanity? Or, given that other parts of the world, or future US presidents, may view AI companies’ responsibilities differently, should it maintain that focus?
Also, are voluntary mechanisms such as corporate structure and governance sufficient to create accountability while preserving the agility required for innovation? According to some legal experts, such structures are unnecessary, because traditional corporate forms already allow companies to pursue sustainability goals when these serve the long-term interests of shareholders.
To increase accountability, some benefit corporations have created multi-stakeholder oversight boards with representatives from affected constituencies such as technology and civil society. In May 2024, OpenAI established a Safety and Security Committee led by Altman (who later stepped down from it), but critics point out that such voluntary structures can be subordinated to profit targets.
Other options include adopting the EU’s Corporate Sustainability Reporting Directive, which will cover companies such as OpenAI over the next few years, and linking compensation and stock options to safety-related goals.
Alternative accountability mechanisms may yet emerge. In the meantime, the governance of AI companies such as OpenAI raises important questions about how to embed ethical and safety considerations in a largely untested technology.
Questions for discussion
How can companies in the AI sector be held accountable for their social and environmental commitments?
How can voluntary corporate governance safeguards build public trust in an industry often criticised for opacity and potential harm?
What specific metrics and reporting requirements would make benefit corporation status meaningful for AI companies?
What mechanisms could policymakers implement to enhance the effectiveness of the benefit corporation model in high-stakes industries?
Are these models likely to drive systemic change in corporate accountability, or will they remain niche solutions?
How should businesses manage their global impact when operating under differing national legal frameworks?
Christopher Marquis is Sinyi Professor of Chinese Management at the Cambridge Judge Business School.