Technology

OpenAI Now Works With Amazon and Google Cloud, Ending Microsoft's Exclusive Deal

OpenAI has ended its exclusive deal with Microsoft, allowing Amazon Web Services and Google Cloud to now offer OpenAI models directly to customers. Microsoft remains a preferred partner but loses exclusivity.

Martin Holloway · Published 2w ago · 5 min read · Based on 7 sources
OpenAI has ended the exclusivity terms of its partnership with Microsoft. Starting now, Amazon Web Services (AWS) and Google Cloud can sell and run OpenAI's technology directly for their customers. Microsoft remains a close partner, but it no longer holds exclusive rights — a five-year arrangement that began in 2019 has ended.

The new agreement lets Microsoft keep special rights to OpenAI's research methods until 2030 or until a panel decides that artificial general intelligence (a form of AI that can do many kinds of tasks like humans can) has been achieved. Meanwhile, OpenAI can now work with other major cloud providers.

AWS Is Already Offering OpenAI Models

Amazon moved quickly. AWS now offers OpenAI's AI models through Amazon Bedrock, a service that lets companies use different AI models from one place. Customers can use OpenAI's models through the same programming interfaces they already use, so existing software keeps working without changes.

OpenAI's open-source models (free models the public can download and modify) are also now available on Amazon SageMaker JumpStart, which is AWS's tool for building and launching AI models. This gives companies more control — they can adjust the models to fit their needs and run them on their own infrastructure.

AWS also introduced a new tool specifically for OpenAI: a Stateful Runtime Environment for Agents. In essence, it is a persistent workspace where an AI agent can retain information across multiple conversations and steps, without the application having to manage all that memory manually. It's designed for complex tasks that require the AI to keep track of what happened before.

Why Microsoft and OpenAI Started as Exclusive Partners

Back in 2019, Microsoft invested $1 billion in OpenAI and made a deal: OpenAI would run only on Microsoft's cloud (Azure), and the two companies would build special supercomputing technology together for training large AI models.

This partnership worked well. OpenAI released breakthrough models like GPT-3, ChatGPT, and GPT-4 — all running on Azure. Microsoft used these models in its own products, like Copilot in GitHub and AI features in Office. OpenAI got access to the expensive computing power it needed to train larger and better models.

We have seen this pattern before. Video game consoles used to be exclusive to one maker; now games appear on multiple platforms. Apps started only on iPhones; now they are everywhere. When a partnership succeeds and the companies have proven their strategy works, exclusivity typically ends so both sides can grow faster and reach more customers.

What Changes for Cloud Customers

Companies can now get OpenAI models through whichever cloud provider they already use. Before, if your company used AWS or Google Cloud but OpenAI only ran on Azure, you had to either switch clouds or add Azure as an extra piece of your system — both complicated and expensive.

Now, a company can run OpenAI models on AWS using the same tools and billing it uses for everything else. Same with Google Cloud. The AI no longer requires a separate cloud vendor relationship.

The business reason is simple: as AI becomes more common, companies want choices. They do not want to depend on just one vendor for something critical. By offering on multiple clouds, OpenAI appeals to far more customers. AWS and Google Cloud can compete to host OpenAI models, which is good for pricing and service quality.

Technical Details for the Curious

On AWS, Bedrock provides a consistent way to use different AI models — OpenAI's models, Anthropic's Claude, Amazon's Titan, and others. You can switch between them without rewriting your code. The service handles the technical details like making sure the model is ready to respond quickly and managing how many requests it can handle at once.
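To make "switch models without rewriting your code" concrete, here is a minimal Python sketch using Bedrock's Converse API via boto3. The API call shape is real; the specific model IDs shown are illustrative, and the call requires valid AWS credentials with Bedrock access.

```python
def build_messages(prompt):
    """Build a Converse-API message list from a plain prompt string."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask(model_id, prompt, region="us-east-1"):
    """Send one prompt to any Bedrock-hosted model via the shared Converse API.

    boto3 is imported lazily so the pure helper above works without the SDK.
    """
    import boto3  # pip install boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
    )
    return response["output"]["message"]["content"][0]["text"]

# Switching providers is a one-line change of model ID (IDs illustrative):
# ask("openai.gpt-oss-120b-1:0", "Summarize our Q3 report.")
# ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize our Q3 report.")
```

The point of the design: the request and response formats stay identical across vendors, so only the `modelId` string changes.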

SageMaker JumpStart lets companies deploy models for two kinds of work: real-time (where the AI answers immediately) and batch (where the company sends many requests to process at once, not necessarily instantly). The service automatically manages hardware and scaling, but companies can still control details like which physical servers to use and where data is stored.

OpenAI's new stateful runtime for agents means the AI can hold information in memory across multiple steps or conversations. Think of it like the AI keeping notes as it works through a problem, so it does not have to start fresh each time. This makes complex tasks faster and simpler for programmers to build.
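AWS has not published implementation details in the sources here, but the bookkeeping such a runtime takes off the programmer's plate can be shown with a small hand-rolled Python sketch. Everything below (class names, the stub model) is hypothetical, purely to illustrate what "state carried across steps" means.

```python
class StatefulAgent:
    """Toy illustration: the runtime, not the caller, carries notes and
    conversation history from one step to the next."""

    def __init__(self):
        self.history = []   # every message exchanged, in order
        self.notes = {}     # facts the agent has decided to keep

    def remember(self, key, value):
        self.notes[key] = value

    def step(self, user_message, model_fn):
        """Run one turn: the model sees all prior context automatically."""
        self.history.append({"role": "user", "text": user_message})
        reply = model_fn(self.history, self.notes)
        self.history.append({"role": "assistant", "text": reply})
        return reply

# A stub "model" that uses remembered state instead of re-asking for it:
def stub_model(history, notes):
    return f"(turn {len(history) // 2 + 1}) working for {notes.get('customer')}"

agent = StatefulAgent()
agent.remember("customer", "ACME Corp")
agent.step("Draft a renewal email.", stub_model)
agent.step("Now shorten it.", stub_model)  # second turn still sees the notes
```

Without such a runtime, the application would have to serialize `history` and `notes` itself and resend them on every request; the stateful environment does that persistence for you.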

The broader shift here is that OpenAI models are becoming available anywhere, not locked to one cloud. As AI services mature, companies want the flexibility to use them on the platform that fits their business best. Vendors who do not offer this choice tend to lose customers over time.

The companies that benefit most right now are those already using AWS or Google Cloud. They can add OpenAI models without changing their whole setup. Longer term, the change means the AI market will likely compete more on features and price than on which cloud you are locked into.