Inside Musk's Testimony: What the OpenAI Trial Reveals About AI Governance

Elon Musk testified in a lawsuit against OpenAI and Microsoft about how AI companies should be structured. His core complaint: the for-profit side of OpenAI has gained too much control over its nonprofit parent.

Martin Holloway · Published 6d ago · 6 min read · Based on 5 sources
Elon Musk spent nearly three hours being questioned by an OpenAI attorney during his second day testifying in court against OpenAI and Microsoft. The questioning dug into his views on how AI companies should be structured—specifically, whether a nonprofit can operate a for-profit business as a subsidiary without losing sight of its original mission.

What Musk Said About Company Structure

When pressed on the question of hybrid organizations, Musk said he wasn't opposed to a nonprofit creating a for-profit arm. His condition was simple: the for-profit part should ultimately serve the nonprofit's mission, not the other way around.

According to Business Insider, Musk used a striking phrase during his testimony: the "tail is wagging the dog." In other words, his concern isn't that OpenAI has a for-profit division. It's that the for-profit division—the tail—seems to be calling the shots instead of the nonprofit—the dog.

This testimony speaks to a central question in the lawsuit. OpenAI started as a pure nonprofit organization. Over time, it shifted toward a hybrid model where the for-profit side grew much larger and more powerful. Musk is arguing that this shift violated the original promise of the organization.

Musk's own company, xAI, is structured as a public benefit corporation—a legal form meant to balance profit with public good. He founded xAI in March 2023, five years after stepping back from OpenAI's board and just over seven years after helping start OpenAI itself. Now the two companies are in direct competition.

How Musk and OpenAI Got Here

The relationship between Musk and OpenAI leadership has been antagonistic for years. Both sides agree on one key fact: back in 2017, they decided a for-profit structure would be needed for OpenAI to grow. But when it came time to work out the details, negotiations broke down.

OpenAI rejected Musk's proposal to have Tesla acquire OpenAI. They also wouldn't give Musk control of the new for-profit entity. According to Musk's own comments at the time, he thought OpenAI had almost no chance of success without billions in funding, but he stepped back and let the OpenAI team pursue their own path. In January 2018, he officially left OpenAI's board.

One person has occupied a notable position in this story: Shivon Zilis. She joined OpenAI as an advisor in 2016 and sat on the nonprofit's board from 2020 to 2023. During that same period, she held executive positions at two of Musk's companies—Tesla and Neuralink. According to Wired's reporting, OpenAI's lawyers are arguing that Zilis acted as a hidden channel of communication between Musk and OpenAI even after he had officially left the board.

The Antitrust Question

The legal claims go well beyond disagreements about governance. Musk and xAI are accusing OpenAI and Microsoft of violating antitrust law—specifically, the Sherman Act, which prohibits agreements that restrain trade and conduct aimed at monopolizing a market.

The core allegation: that Microsoft and OpenAI told their investors to avoid funding competing AI companies. Think of it like a coordinated effort to squeeze out rivals. Court filings lay out this "fund no competitors" claim as a way of maintaining market dominance in AI.

This kind of allegation has historical echoes. We saw similar accusations against Microsoft in the late 1990s and Google in the 2000s, when the scale of their dominance raised questions about whether they were using their market position to block competitors. The parallel is instructive but incomplete—AI development moves much faster than the software or search markets did, and only a handful of companies worldwide have the resources to train the largest AI models. That concentration of power makes coordination even more impactful as a potential barrier to entry.

What Happens Next, and Why It Matters

The cross-examination continues, with testimony expected from Sam Altman (OpenAI's CEO) and Satya Nadella (Microsoft's CEO). Reuters notes the proceedings are surfacing a fundamental power struggle over who controls OpenAI's direction and, by extension, the shape of the AI industry.

Broadly speaking, the current AI landscape is marked by complex partnerships between companies. Microsoft has invested roughly $13 billion in OpenAI and secured exclusive rights to license its GPT models. Arrangements like this blur the line between healthy competition and potential coordination that harms smaller players.

The outcome of this trial could reshape how we think about nonprofit organizations that transition to for-profit hybrid structures—especially when those organizations carry a public mission. At stake is whether legal and regulatory frameworks can keep AI development competitive when the cost of building leading AI systems has become so high.

The tension runs deep. Training cutting-edge AI models requires enormous computing power, rare talent, and vast datasets. These requirements sit in tension with the open-source ideals that both Musk and OpenAI's original founders promoted. Whether courts, regulators, and the industry itself can find a middle ground between that vision and the realities of modern AI development is one of the questions this trial may begin to answer.