As OpenAI Attracts Billions in New Investment, Its Goal of Balancing Profit With Purpose Is Getting More Challenging to Pull Off


Author: Alnoor Ebrahim

OpenAI, the artificial intelligence company that developed the popular ChatGPT chatbot and the text-to-image program DALL-E, is at a crossroads. On Oct. 2, 2024, it announced that it had obtained US$6.6 billion in new funding from investors and that the business was worth an estimated $157 billion – making it only the second startup ever to be valued at over $100 billion.

Unlike other big tech companies, OpenAI is a nonprofit with a for-profit subsidiary that is overseen by a nonprofit board of directors. Since its founding in 2015, OpenAI's official mission has been “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

By late September 2024, The Associated Press, Reuters, The Wall Street Journal and many other media outlets were reporting that OpenAI plans to discard its nonprofit status and become a for-profit tech company managed by investors. These stories have all cited anonymous sources. The New York Times, referencing documents from the recent funding round, reported that unless this change happens within two years, the $6.6 billion in equity would become debt owed to the investors who provided that funding.

The Conversation U.S. asked Alnoor Ebrahim, a Tufts University management scholar, to explain why the reported plans by OpenAI's leaders to change its structure would be significant and potentially problematic.

How have its top executives and board members responded?

There has been a lot of leadership turmoil at OpenAI. The disagreements boiled over in November 2023, when its board briefly ousted Sam Altman, its CEO. He got his job back in less than a week, and three board members then resigned. The departing directors had advocated for stronger guardrails and for regulation to protect humanity from the potential harms posed by AI.

Over a dozen senior staff members have quit since then, including several other co-founders and executives responsible for overseeing OpenAI's safety policies and practices. At least two of them have joined Anthropic, a rival founded by a former OpenAI executive responsible for AI safety. Some of the departing executives say that Altman has pushed the company to launch products prematurely.

Safety “has taken a backseat to shiny products,” said OpenAI's former safety team leader Jan Leike, who quit in May 2024.


OpenAI CEO Sam Altman, center, speaks at an event in September 2024. Bryan R. Smith/Pool Photo via AP

Why would OpenAI's structure change?

OpenAI's deep-pocketed investors cannot own shares in the organization under its existing nonprofit governance structure, nor can they get a seat on its board of directors. That's because OpenAI is incorporated as a nonprofit whose purpose is to benefit society rather than private interests. Until now, all rounds of investment, including a reported total of $13 billion from Microsoft, have been channeled through a for-profit subsidiary that belongs to the nonprofit.

The current structure allows OpenAI to accept money from private investors in exchange for a future portion of its profits. But those investors do not get a voting seat on the board, and their profits are “capped.” According to information previously made public, OpenAI's original investors can't earn more than 100 times the money they provided. The goal of this hybrid governance model is to balance profits with OpenAI's safety-focused mission.

Becoming a for-profit enterprise would allow investors to acquire ownership stakes in OpenAI, with no cap on their potential profits. Down the road, OpenAI could also go public and raise capital on the stock market.

Altman reportedly seeks to personally acquire a 7% equity stake in OpenAI, according to a Bloomberg article that cited unnamed sources.

That arrangement is not allowed for nonprofit executives, according to BoardSource, an association of nonprofit board members and executives. Instead, the association explains, nonprofits “must reinvest surpluses back into the organization and its tax-exempt purpose.”

What kind of company might OpenAI become?

The Washington Post and other media outlets have reported, also citing unnamed sources, that OpenAI might become a “public benefit corporation” – a business that aims to benefit society and earn profits.

Examples of businesses with this status, known as B Corps, include outdoor clothing and gear company Patagonia and eyewear maker Warby Parker.

It's more typical for an existing for-profit business to become a benefit corporation, according to B Lab, a network that sets standards and offers certification for B Corps. It is unusual for a nonprofit to do so, because nonprofit governance already requires those groups to benefit society.

Boards of companies with this legal status are free to consider the interests of society, the environment and people who aren't shareholders, but they are not required to do so. The board may still choose to make profits a top priority, and it can drop its benefit status to satisfy its investors. That is what online craft marketplace Etsy did in 2017, two years after becoming a publicly traded company.

In my view, any attempt to convert a nonprofit into a public benefit corporation is a clear move away from the nonprofit's mission. And there is a risk that becoming a benefit corporation would be just a ploy to mask a shift toward prioritizing revenue growth and investors' profits.

Many legal scholars and other experts predict that OpenAI will not do away with its hybrid ownership model entirely, because of legal restrictions on placing nonprofit assets in private hands.

But I think OpenAI has a possible workaround: It could try to dilute the nonprofit's control by making it a minority shareholder in a new for-profit structure. This would effectively eliminate the nonprofit board's power to hold the company accountable. Such a move could lead to an investigation by the office of the relevant state attorney general and potentially by the Internal Revenue Service.

What could happen if OpenAI turns into a for-profit company?

The stakes for society are high.

AI's potential harms are wide-ranging, and some are already apparent, such as deceptive political campaigns and bias in health care.

If OpenAI, an industry leader, begins to focus more on earning profits than on ensuring AI's safety, I believe these dangers could get worse. Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his artificial intelligence research, has cautioned that AI may exacerbate inequality by replacing “lots of mundane jobs.” He believes there's a 50% probability “that we'll have to confront the problem of AI trying to take over” from humanity.

And even if OpenAI did retain board members for whom safety is a top concern, the only common denominator among members of its new corporate board would be their obligation to protect the interests of the company's shareholders, who would expect to earn a profit. While such expectations are common on a for-profit board, they constitute a conflict of interest on a nonprofit board, where the mission must come first and board members cannot benefit financially from the organization's work.

The arrangement would, no doubt, please OpenAI's investors. But would it be good for society? The purpose of nonprofit control over a for-profit subsidiary is to ensure that profit does not interfere with the nonprofit's mission. Without guardrails to ensure that the board seeks to limit harm to humanity from AI, there would be little reason for it to prevent the company from maximizing profit, even if its chatbots and other AI products endanger society.

Regardless of what OpenAI does, most artificial intelligence companies are already for-profit businesses. So, in my view, the only way to manage the potential harms is through better industry standards and regulations that are starting to take shape.

California's governor vetoed one such bill in September 2024 on the grounds that it would slow innovation – but I believe slowing it down is exactly what is needed, given the dangers AI already poses to society.


The Conversation
