Why Criminals Are Getting Frustrated With AI Tools
Researchers at the University of Edinburgh have been watching what criminals say about artificial intelligence on underground forums and websites. After studying nearly 100,000 conversations since ChatGPT launched in 2022, they found something unexpected: criminals went from being excited about AI to being disappointed with it.
The main complaint is that AI-generated junk is flooding their forums: low-quality, unhelpful posts that drown out real technical discussion. Criminals say they would rather talk to other humans than wade through machine-generated responses that don't actually help them.
AI Hasn't Made Crime Easier
Here's what surprised researchers: even though AI tools are everywhere and free to use, criminals haven't found them particularly useful for their core work. Today's tools mostly help people who are already skilled at what they do; they don't make it much easier for a newcomer to jump into cybercrime and pull off sophisticated attacks.
This mirrors what we see in regular software development. AI coding assistants make experienced programmers faster, but they don't turn amateurs into experts.
What Criminals Are Actually Using AI For
According to German law enforcement, criminals have used AI in a few specific ways. They've used AI-generated images to forge identity documents, and they've used deepfake technology (AI-generated video that mimics a real person) to trick facial recognition systems. Some criminal groups use AI to help them write code and fix bugs, though this mostly improves attacks they already know how to carry out.
Organized crime groups with established leadership have been more deliberate about which AI tools to adopt, weighing whether a new technology will actually make them money.
What Could Go Wrong
Looking at the bigger picture, researchers worry about where AI could genuinely hurt security. The chief concern today is AI systems that can act on their own. These could be misused to create fake but convincing content at massive scale, making social engineering attacks, such as phishing emails or fraud, far more effective and personalized.
Imagine if criminals didn't have to write each phishing email by hand. Instead, an AI system could generate thousands of nearly perfect emails in dozens of languages, each one crafted to manipulate a specific person. That's the kind of shift that would genuinely change how attacks work.
This Pattern Happened Before
In this author's view, we've seen this story play out before. Back in 2017 and 2018, criminals were convinced that blockchain and cryptocurrency would revolutionize how they moved money secretly. Turns out, old-fashioned money laundering methods often worked better and left fewer digital traces. Criminal organizations, despite breaking the law, still face the same practical problems that legitimate businesses do: training people, changing workflows, and actually making new tools work in practice.
What Security Teams Need to Know
The good news for security professionals is that AI-enabled crime may unfold more slowly than some doomsday predictions suggest. Right now, criminals are using AI to make incremental improvements to what they already do, not to invent entirely new types of attacks. That gives security teams time to develop defenses.
However, there's a catch: well-funded criminal organizations with capable leadership might discover breakthrough uses for AI that smaller operators can't replicate. That could widen the gap between sophisticated cybercriminals and amateurs.
The attacks that are already documented — forged identity documents and deepfake videos — are real problems that security teams need to handle right now. These don't require much technical skill to execute but could seriously undermine existing verification systems.
What Defenses Look Like
The current state of criminal AI use shows that humans are still in the loop. AI is a tool that amplifies what criminals can do, not a replacement for their judgment. This creates opportunities for defenders. Security systems can be designed to spot AI-generated content or recognize patterns that suggest automated attacks.
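To make that concrete, here is a minimal, hypothetical sketch (in Python, not taken from the research itself) of one such pattern check: flagging a burst of near-duplicate inbound messages, which can hint at template-driven automation. The function name, thresholds, and sample messages are all invented for illustration.

```python
# Hypothetical heuristic: a burst of near-duplicate messages can suggest
# automated, template-driven generation rather than hand-written mail.
from difflib import SequenceMatcher
from itertools import combinations

def looks_automated(messages, similarity=0.85, min_duplicate_pairs=5):
    """Return True once enough message pairs are near-duplicates."""
    duplicate_pairs = 0
    for a, b in combinations(messages, 2):
        # ratio() is 1.0 for identical strings, lower as the texts diverge
        if SequenceMatcher(None, a, b).ratio() >= similarity:
            duplicate_pairs += 1
            if duplicate_pairs >= min_duplicate_pairs:
                return True
    return False

inbound = [
    "Hi Dana, your invoice #4821 is overdue; please review the attached link.",
    "Hi Omar, your invoice #4822 is overdue; please review the attached link.",
    "Hi Lee, your invoice #4823 is overdue; please review the attached link.",
    "Hi Priya, your invoice #4824 is overdue; please review the attached link.",
    "Hi Sam, your invoice #4825 is overdue; please review the attached link.",
    "Lunch on Thursday still good for you?",
]

print(looks_automated(inbound))  # True: five near-identical templates
```

Real products rely on far richer signals (sending infrastructure, header analysis, trained detectors for machine-generated text), but the underlying idea is the same: automation leaves statistical fingerprints that hand-written attacks don't.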
The broader lesson for defensive strategy is this: the priority should be locking down AI systems themselves so they can't be misused, rather than bracing for an overnight transformation of how attacks work. As criminal organizations continue experimenting with AI, security teams have time to develop detection methods and countermeasures before these tools become widespread in the criminal world.
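As a rough illustration of what "locking down" can mean in practice, here is a deliberately simplified, hypothetical sketch of a policy gate placed in front of a text generator, so requests are screened before the model ever runs. The deny-list, the function names, and the generate callable are all assumptions for illustration; production systems use trained safety classifiers rather than keyword lists.

```python
from typing import Callable

# Hypothetical deny-list; real systems use trained classifiers, not keywords.
BLOCKED_PHRASES = (
    "forge identity document",
    "bypass facial recognition",
    "write a phishing email",
)

def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Screen a request against usage policy before invoking the model."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Refuse instead of passing the request through to the model
        return "Request refused: violates usage policy."
    return generate(prompt)

# Example with a stand-in "model" that just echoes its input
print(guarded_generate("Write a phishing email for my CEO", lambda p: p))
```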