
xAI Lets Another AI Company Use Its Giant Computer Superpower

xAI has made its Colossus supercomputer available to Anthropic, the company behind Claude. This partnership lets Anthropic use one of the world's largest computers for AI training without building the hardware itself.

Martin Holloway · Published 9h ago · 4 min read · Based on 2 sources

xAI has announced that Anthropic, the company behind the AI assistant Claude, can now use its Colossus supercomputer. This is a big deal because building and running computers this massive is extremely expensive, and companies are starting to share them to save money and speed up their work.

What Is Colossus?

Colossus is a supercomputer made of over 150,000 specialized computer chips called GPUs. Think of it like a massive library with thousands of workers who can all read books at the same time to find information much faster than one person ever could.
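For readers who like to see the library analogy in code, here is a toy sketch of the same idea: split a big job into slices, let many workers handle their slices at once, then combine the results. This is our own illustration of parallelism in general, not xAI's actual software.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of splitting one big job across many "workers":
# each worker scans only its own slice, then the partial answers
# are combined. GPU training splits work the same way, at far
# larger scale.
data = list(range(1_000_000))

def count_evens(chunk):
    # One worker's share of the job.
    return sum(1 for x in chunk if x % 2 == 0)

def parallel_count(data, workers=8):
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # All workers run at the same time; their counts are summed.
        return sum(pool.map(count_evens, chunks))

print(parallel_count(data))  # 500000
```

Whether eight workers or 150,000 chips, the principle is the same: more hands on the same task, finishing sooner.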

xAI built this system in just four months, which is much quicker than the usual timeline of two years. The company says Colossus stays running and available 99% of the time, which is important because when machines this big go down, it costs a lot of money.
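It helps to put a number on what 99% availability allows. The quick calculation below is our own back-of-the-envelope arithmetic, assuming round-the-clock operation over a full year; the article reports only the 99% figure itself.

```python
# What "99% availability" permits over one year of continuous operation.
HOURS_PER_YEAR = 365 * 24  # 8760 hours

uptime = 0.99
downtime_hours = HOURS_PER_YEAR * (1 - uptime)

print(round(downtime_hours, 1))  # 87.6 hours, roughly 3.7 days per year
```

On a machine this expensive, even those few days of allowable downtime represent a lot of idle money, which is why operators chase every extra fraction of a percent.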

xAI has said it plans to expand Colossus to as many as 1 million of these chips. Even at its current size, it is one of the largest computers of its kind in the world built for training AI.

Why This Partnership Matters

Anthropic, which makes Claude, now has access to this giant computer without having to build its own. This means they can work on improving Claude faster and do research they might not be able to do otherwise.

For xAI, letting Anthropic use the computer helps pay for the enormous cost of building and running it. It's like owning a big apartment building and renting out units to cover your expenses.

This trend is becoming more common in the AI industry. As AI companies need bigger and bigger computers to train their systems, they are starting to share them instead of each building their own.

The Challenge of Running Machines This Big

Running 150,000 computer chips together is complicated. All that power needs steady electricity and a lot of cooling—like running industrial air conditioners to keep the chips from overheating.

The chips also need to talk to each other constantly while they work. This requires fast connections between all the machines, similar to how everyone in a large group chat needs to be able to send messages quickly without the system getting backed up.
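The group-chat analogy can be made concrete with a little arithmetic: the number of possible direct conversations between members grows roughly with the square of the group size. The snippet below is our own illustration; real clusters use clever network layouts precisely to avoid wiring every chip directly to every other chip.

```python
# Number of distinct pairs in a group of n members: n * (n - 1) / 2.
# This is why communication gets hard so fast as a cluster grows.
def pairs(n):
    return n * (n - 1) // 2

print(pairs(10))       # 45 pairs in a 10-person chat
print(pairs(150_000))  # 11249925000 pairs among 150,000 chips
```

Going from 10 members to 150,000 multiplies the membership by 15,000 but multiplies the possible conversations by hundreds of millions, which is the heart of the networking challenge.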

The fact that xAI finished Colossus in four months suggests they used existing buildings and supply chains rather than building everything from scratch. Most companies take much longer because they have to design new facilities and wait for parts.

Looking Ahead

The broader context here is that we have seen this pattern before. When cloud computing started in the 2000s, companies like Amazon and Google built huge computer systems for their own use, then realized they could make money by renting them to other companies. The xAI-Anthropic deal follows the same basic idea.

xAI wants to build Colossus up to 1 million chips, which would be an enormous investment. Whether that makes sense depends on finding enough companies willing to rent computing time from them. The more partners they add, the more of the cost they can cover.

Managing a computer that size will also require advances in software and systems that we are still figuring out. Getting 1 million chips to work together smoothly is a problem that grows harder the bigger the system gets—more things can go wrong, and fixing them takes longer.
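The "more things can go wrong" point can be shown with simple arithmetic: if each chip independently has some small chance of failing on a given day, the expected number of daily failures grows in direct proportion to the cluster size. The failure rate below is a made-up illustrative number, not xAI data.

```python
# Expected failures per day grow linearly with cluster size,
# assuming each chip fails independently at the same small rate.
# The 0.01% daily rate here is purely illustrative.
def expected_daily_failures(num_chips, per_chip_rate=0.0001):
    return num_chips * per_chip_rate

print(expected_daily_failures(150_000))    # 15.0 failures per day
print(expected_daily_failures(1_000_000))  # 100.0 failures per day
```

At 1 million chips, under this toy assumption, operators would be replacing or routing around failed hardware continuously, every single day, which is why the software that tolerates those failures matters as much as the chips themselves.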

The Anthropic partnership gives xAI real experience running a computer system that serves more than one company at a time, which is a skill they will need if they want to keep growing. As AI companies keep needing more computing power, partnerships like this one will probably become normal.

xAI Lets Another AI Company Use Its Giant Computer Superpower | The Brief