Women Sue AI Companies Over Fake Sexual Images Made Without Permission
Three women are suing AI companies for creating and selling fake sexual images based on their social media photos without permission. The case highlights gaps in current laws and raises questions about who is responsible when AI tools are used to cause harm.

Three women filed a lawsuit on January 31, 2026, against AI companies they say used their social media photos to create and sell fake sexual images without their permission. The women, who are in their early twenties and from Arizona and California, regularly post lifestyle photos on Instagram; one plaintiff is from Kansas City, KCTV5 reported. The lawsuit names AI ModelForge, CreatorCore, FAL – Features & Labels, Inc., and Phyziro, LLC as defendants, according to court documents.
The lawsuit alleges the defendants downloaded the women's publicly posted photos and used them to generate explicit fake images. The women are asking the court to shut down the platforms or bar them from creating these images, and to hold the companies financially responsible.
How the Companies Operated
The case focuses on AI ModelForge, which the women say did two things at once: it created the fake images and also taught other people how to make and sell them. According to the complaint, the defendants ran multiple companies working in concert to carry out this operation.
The women's lawyers are Nick Brand of the Donlon Group and Cristina Perez Hesano, who leads Perez Law Group, according to the law firm's website. Their strategy targets not just the consumer-facing platforms but also the companies that provide the underlying technology, aiming to disrupt the entire chain.
The Legal Problem
Arizona updated its revenge porn law in 2025 to cover fake sexual images made by AI. The state added language covering any "realistic pictorial representation," meaning images that look real but were created by a computer. Several states have made similar changes to their laws as AI tools have become easier to use.
According to the complaint, however, the companies took only minimal steps to comply with these laws. They created a page where people could request that fake images be taken down, but they kept making non-consensual images anyway. The lawsuit argues this was the bare minimum needed to appear law-abiding, not a real change in how the companies operate.
A Broader Pattern
This lawsuit echoes a pattern from the late 1990s and 2000s, when the early internet made it easy to share non-consensual images while victims had little legal recourse. Back then, websites could claim they were just hosting what users uploaded, and the law had not caught up. The difference today is that AI can generate these images automatically and at massive scale, without anyone uploading them first.
The case raises a genuine tension: the same AI tools that let people create useful things also make it much easier for bad actors to cause harm. The courts will need to decide whether AI companies that build these tools have legal responsibility when people use them to hurt others.
A Murky Legal Area
There is an additional complication that could affect the outcome. A federal law, Section 230 of the Communications Decency Act, shields websites from liability for content their users post. Some states have borrowed this same protection and added it to their revenge porn laws. The problem is that the law was written for websites that simply host what users upload, not for AI systems that actively generate harmful content.
The defendants in this case might argue they deserve the same protection as a social media company. The courts have not fully settled whether that argument makes sense when a company is not just hosting content, but actively generating it without consent. This is a significant legal question with no clear answer yet, and the outcome could shape how AI companies are held responsible going forward.
What Comes Next
If the women win this case, it could change how AI companies think about safety. Right now, different companies take very different approaches—some use filtering tools to block harmful content, while others do very little. There are no agreed-upon industry standards yet for stopping fake sexual images.
The stakes also extend beyond this one lawsuit. Congress and federal agencies are working on broader rules for AI, and non-consensual fake sexual images are one of the clearest examples of AI-enabled harm that experts agree needs immediate attention. How the courts rule here could influence what those federal rules look like when they arrive.
The broader context here is that we are in the early stages of figuring out who is responsible when powerful tools can be misused. The legal system is working through questions that did not exist five years ago, and the answers will affect not just these three women, but how AI companies operate for years to come.


