
Why Elon Musk is Suing OpenAI — And What It Reveals About AI Companies

Elon Musk sued OpenAI in 2024, claiming the company abandoned its promise to be a nonprofit focused on benefiting everyone and instead became a profit-driven company partnered with Microsoft.

Martin Holloway · Published 2w ago · 5 min read · Based on 3 sources
Elon Musk filed a lawsuit against OpenAI on February 29, 2024, claiming the company broke its original promise. When OpenAI started in 2015, Musk says it was supposed to be a nonprofit — meaning any profits would go back into the company, not to investors. He claims the company has drifted away from that mission and is now primarily focused on making money through its partnership with Microsoft.

The lawsuit, filed in San Francisco, asks the court to force OpenAI back to its original nonprofit structure and to open-source its AI technology so anyone can use it.

The Money Problem That Changed Everything

OpenAI's response to the lawsuit supplies the crucial missing piece. By 2017, just two years after its founding, OpenAI realized something important: building advanced AI is extremely expensive, and the costs were rising fast.

Training a cutting-edge AI model in 2015 might cost thousands of dollars. By 2020, it was projected to cost hundreds of millions. That's because these AI systems require enormous amounts of computing power, running continuously for months. A nonprofit relying on grants and donations couldn't afford that bill.

OpenAI faced a choice: stay nonprofit and stop building advanced models, or find a way to generate enough money to keep going. The company chose money.

During conversations in late 2017, Musk proposed a solution — a new for-profit company he would control and lead. According to OpenAI's account, Musk wanted majority control and the CEO job. When OpenAI said no, Musk left and started his own AI company instead.

Two Different Ideas About How AI Should Work

This lawsuit brings into focus a real tension in the world of artificial intelligence. One camp believes AI should be open and free for anyone to access, use, and study. The other camp argues that only well-funded companies with massive resources can build the best AI systems, which means some level of profit-driven control is necessary.

Musk's argument is that OpenAI promised the first approach but chose the second. OpenAI's current structure is a middle ground: the company is designed to make profits, but there are supposedly limits — the profits are capped, and the company says it stays focused on beneficial AI development.

This same conflict has played out before in technology. In the 1990s, Netscape had to decide whether to keep its web browser free and open-source or to charge money and protect its work. Companies and governments face versions of this question all the time: do you prioritize access for everyone, or the money you need to build something good?

The stakes here are higher than a web browser, though. If AI systems become as powerful as many experts predict, the question of who controls them and who benefits from them could affect the entire economy.

The Real Constraint: Computing Power

There's another piece to understand. Building a state-of-the-art AI model requires coordinating thousands of powerful computer chips working together continuously for weeks or months. The electricity bill alone is enormous. Only companies with billions of dollars in revenue or wealthy investors backing them can afford to do this.

Some smaller companies and research groups have released their own AI models using more open approaches. These models are freely available for anyone to use and study, but they typically lag behind the most advanced models by a year to eighteen months in capability.

This creates a practical reality worth considering: Musk's demand to return OpenAI to a nonprofit model may not be achievable, regardless of what a court decides. The cost of developing cutting-edge AI has fundamentally changed what kind of organization can afford to do it. Building this kind of technology increasingly requires the kind of money that only for-profit companies and their investors can provide.

What Happens Next

The lawsuit arrives as governments around the world are paying closer attention to AI. The European Union, the U.S. Congress, and individual states are all working on rules for how AI should be developed and used.

This lawsuit may influence those discussions by highlighting a real problem: companies sometimes start with one mission but shift to another as they grow. It also raises questions about whether founding promises should legally bind organizations, especially in rapidly changing fields.

The outcome won't solve the deeper question of how AI should be developed and governed. But it could clarify what happens when a nonprofit becomes a for-profit company, and whether the founders can legally challenge that change. For an industry that is reshaping itself rapidly, that kind of clarity might matter.