How AI-Generated Fake People Are Making Money on Social Media
AI tools have made it easy for individuals to create and monetize entirely fictional personas on social media, targeting specific audiences like conservative political groups. Multiple cases show creators following the same playbook.

A 22-year-old medical student in northern India built a profitable business by creating an entirely fictional woman — an AI-generated conservative activist named Emily Hart — and selling photos, videos, and merchandise of her across social platforms. The case reveals how generative AI has made it dramatically easier to manufacture convincing fake personas and monetize them, while exploiting gaps in how platforms police artificial content.
The Emily Hart Case
The creator, using the name Sam, deliberately targeted the MAGA conservative audience after calculating that this group has higher spending power. The Emily Hart Instagram account and related presences generated thousands of dollars through subscriptions and merchandise sales, with Sam openly describing the AI-generated content as "rage bait" — inflammatory posts designed to maximize engagement.
This was a deliberate business strategy, not a genuine ideological project. Using current AI image generation tools, Sam produced consistent photos and videos of the entirely artificial Emily Hart across multiple platforms and formats, all without manually shooting or editing any real footage.
Multiple Operations, Same Pattern
The Emily Hart operation is far from isolated. Jessica Foster — an AI-generated military officer profile — amassed over one million Instagram followers in just months by posing as a patriotic soldier.
The Jessica Foster account included sophisticated visual storytelling: fake photos showing the nonexistent soldier in barracks, standing beside military vehicles, wearing combat gear. The creator went further, posting doctored images placing the fabricated soldier alongside real world leaders and politicians — building an entire fictional military career narrative.
Yet no public military record of Jessica Foster exists. The account operated on Instagram without any label disclosing that the content was AI-generated, despite Instagram's stated policy against such undisclosed synthetic personas — a gap that highlights how much platforms struggle to police this content at scale.
How the Money Works
These operations exploit the existing creator economy infrastructure that evolved for real influencers. The Jessica Foster account ran a paid membership site selling explicit photos of the fabricated soldier's feet to paying subscribers, before later moving to Fanvue — a platform similar to OnlyFans that explicitly allows AI-generated models, provided they are labeled as such.
The Jessica Foster posts accumulated over 100,000 comments, mostly from accounts with male profile photos. Some verified accounts — including a real Brazilian transportation official — regularly liked and commented on posts, indicating that these fabricated personas were attracting genuine engagement from real people across international borders.
The Technical Side
Creating these personas is straightforward using off-the-shelf tools. Creators use Runway Gen 3 for AI-generated videos and Flux tools for AI images, assembling accessible production pipelines for synthetic content. One person can now maintain multiple convincing personas across different platforms simultaneously without specialized technical knowledge — a dramatic drop from the technical barriers deepfake production demanded just a few years ago.
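To make the one-operator, many-personas claim concrete, here is a minimal sketch of what such a pipeline looks like structurally. The function names (`generate_image`, `daily_posts`) and the persona details are illustrative stand-ins, not real SDK calls from Runway or Flux — the point is only that a single script run can fan content out across every persona and platform.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a real generation API (Runway, Flux, etc.).
# Name and signature are illustrative, not an actual SDK call.
def generate_image(prompt: str) -> str:
    """Pretend to render an image; returns a fake asset ID."""
    return f"img_{abs(hash(prompt)) % 10_000}"

@dataclass
class Persona:
    name: str
    niche: str                        # target audience framing
    platforms: list = field(default_factory=list)

def daily_posts(personas: list) -> list:
    """One operator fans out fresh content for every persona on every platform."""
    queue = []
    for p in personas:
        prompt = f"photo of {p.name}, {p.niche}, photorealistic"
        asset = generate_image(prompt)
        for platform in p.platforms:
            queue.append({"persona": p.name, "platform": platform, "asset": asset})
    return queue

personas = [
    Persona("Emily Hart", "conservative activist", ["instagram", "x"]),
    Persona("Jessica Foster", "military officer", ["instagram", "fanvue"]),
]
queue = daily_posts(personas)
print(len(queue))  # 4 posts queued from a single run
```

The design point is the nested loop: content generation is the expensive step, and it happens once per persona, while distribution multiplies for free across platforms — which is why the marginal cost of each additional fake persona is so low.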
Broader Context: AI Scams and Fraud
Political personas fit into a wider landscape of AI-generated fraud. An elderly woman lost her home and life savings — $81,000 — to an AI deepfake romance scam impersonating actor Steve Burton, while scammers have created deepfake videos of Gwyneth Paltrow promoting fake Goop giveaways.
Fraudulent AI-generated ads for craftspeople appear regularly on Facebook and Reddit — stock images paired with made-up retirement stories and sob stories. The Bank of Italy warned citizens about deepfake videos impersonating Governor Panetta used in financial scams. Insurance companies now sell policies covering AI deepfake risks, a clear sign that institutions recognize this threat as real and growing.
Why Platforms Can't Keep Up
Mainstream platforms like Instagram and TikTok prohibit undisclosed AI-generated content, yet accounts like Jessica Foster operated without penalty. The detection problem is stark: modern AI tools produce content that passes basic authenticity checks, and the fakes exploit the fact that humans are naturally drawn to attractive, ideologically sympathetic personas.
Analysis: The gap between policy and enforcement exists because AI image and video quality has improved faster than platform detection systems can evolve. Major platforms are not designed to catch sophisticated synthetic content at scale; they catch obvious fakes or content that gets reported.
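The enforcement gap described above can be sketched as a simple triage rule: platforms act on accounts that are reported or obviously fake, while convincing synthetic content with no disclosure label slips through. The scores, threshold values, and field names below are illustrative assumptions, not real platform internals.

```python
def triage(account: dict) -> str:
    """Toy moderation rule mirroring the policy-vs-enforcement gap.

    `synthetic_score` stands in for an AI-detection classifier's output
    (0.0 = clearly real, 1.0 = clearly synthetic); the 0.9 cutoff for an
    "obvious fake" is an illustrative assumption.
    """
    if account["reported"]:
        return "review"                       # user reports get human eyes
    if account["synthetic_score"] > 0.9:
        return "remove"                       # only blatant fakes trip detectors
    if account["synthetic_score"] > 0.5 and not account["ai_label"]:
        return "slips_through"                # policy violated, but undetected
    return "ok"

accounts = [
    {"name": "obvious_bot",      "synthetic_score": 0.97, "ai_label": False, "reported": False},
    {"name": "convincing_fake",  "synthetic_score": 0.70, "ai_label": False, "reported": False},
    {"name": "labeled_ai_model", "synthetic_score": 0.80, "ai_label": True,  "reported": False},
]
for a in accounts:
    print(a["name"], "->", triage(a))
```

The middle case is the one the article describes: a high-quality undisclosed fake that violates stated policy but never crosses the automated detection threshold, so it persists until someone reports it.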
A Shift Toward Legitimate Business
Worth flagging: What started as pure deception has begun evolving into a different kind of marketplace. Unlike traditional catfishing or identity theft, these AI-generated operations can now scale profitably without the legal risks of impersonation, because the personas are fully synthetic rather than stolen identities.
Platforms like Fanvue have emerged explicitly supporting labeled AI-generated creators — a regulated alternative to the undisclosed synthetics clogging mainstream social networks. This signals a potential future: instead of eliminating synthetic personas, the market may simply segment them — disclosed AI models sold to knowing audiences on specialized platforms, versus undisclosed fakes exploiting audiences on mainstream networks.
The Regulatory Blind Spot
Current laws struggle with this space. Traditional fraud statutes apply to direct financial deception — stealing money or identities — but the legal status of selling admittedly synthetic content to willing audiences remains murky. Regulators simply have not written rules that directly address AI-generated personas sold transparently.
In this author's view, the market evolution toward regulated synthetic content feels inevitable. The real dividing line is disclosure: selling an AI model to an audience that knows what it is occupies a different ethical and legal space than deploying an undisclosed fake persona to deceive or manipulate. The Emily Hart and Jessica Foster cases are important not because they prove AI is dangerous, but because they show how quickly the creator economy adapts to new tools — and how far behind platform policies and regulatory frameworks have fallen.


