Nearly ten years ago, tech tycoons Sam Altman and Elon Musk launched OpenAI with a promise to develop AI tech to further the good of humanity.

But as the years passed and the market for artificial intelligence grew, so too did the ambitions of OpenAI's executives. (Musk departed the venture in 2018, and now runs his own AI company, called xAI.)

Now, the venture that began as a transparent tech nonprofit is quickly turning into a typical Silicon Valley startup — complete with whistleblowers speaking out against the company's foray into the private market.

Earlier this week, a group of former OpenAI employees, law professors, activists, and Nobel Prize winners sent a letter to the California and Delaware attorneys general pleading with them to stop OpenAI from transforming itself into a private company. Though a number of OpenAI's operations have pivoted to a commercial model in recent years — like ChatGPT, which charges a subscription for higher-performing versions — the company's current plan is to restructure itself as an entirely for-profit venture.

"OpenAI may one day build technology that could get us all killed," said former OpenAI employee Nisan Stiennon, alluding to the company's pursuit of Artifical General Intelligence (AGI), the hypothetical point at which machine intelligence matches or exceeds human ability. "It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."

At the moment, most of the titans of the AI industry operate as commercial ventures rather than nonprofits. With this move, OpenAI would simply be joining the rat race. But insiders say the restructuring would carry the company and its frontrunning tech away from the humanitarian promise that pushed it to the top in the first place.

As a nonprofit, OpenAI's directors have a legal obligation to follow the company's charter, which currently includes the goal "to ensure that artificial general intelligence benefits all of humanity," according to Todor Markov, a former OpenAI employee who now works for Anthropic, which is overseen by a five-person Long-Term Benefit Trust.

"Directors of the [public benefit corporation] would have no such fiduciary duty," Markov explains of the OpenAI transition. "They would be *allowed* to balance [that duty] against shareholder interests, but not *required* to do so... as long as they haven’t broken any laws, you have no recourse."

Altman was briefly ousted from OpenAI after pulling some shady stunts in 2023, including failing to inform his board of directors about the release of ChatGPT, and approving enhancements to GPT-4 without running them through the company's jointly organized Safety Board. That "blip," as it came to be known, lasted only five days before Altman was reinstalled, but it remains a black mark on the tech tycoon's reputation.

Whether OpenAI is allowed to proceed as a public benefit corporation is up to the two states' attorneys general. Whether it ultimately matters is another question entirely: AGI remains a pipe dream for now, and a growing body of research suggests it may be impossible to achieve, at least by building on today's technology — which would make the AGI threat of a for-profit OpenAI vastly overblown.
