HIPAA-Compliant CRM

AI-Generated Slop: When Humans Weaponize Technology

Between 2022 and 2025, humans weaponized AI tools to flood the internet with fraudulent content, causing tens of billions in losses. Examples include 11,000 retracted academic papers, a 25.6 million dollar deepfake heist at Arup, 1,271 fake news sites, and 17 percent of Google results now AI-generated. State actors, criminals, and opportunists exploited accessible AI for election manipulation. This is not an AI problem but a human decision problem.
AI-Generated Slop: When Humans Weaponize Technology

AI-Generated Slop: When Humans Weaponize Technology

Author: Nic Nevin | Published on: October 21, 2025 | Category: Generative AI | Views: 27

Between 2022 and 2025, humans made deliberate choices to flood the internet with AI-generated content at unprecedented scale—causing tens of billions of dollars in losses, distorting elections, and degrading the global information ecosystem.

This isn't an AI problem. It's a human decision problem.

Across governments, corporations, and individual opportunists, people chose to weaponize generative tools for manipulation, fraud, and profit.

Examples abound: more than 11,000 retracted academic papers tied to AI-assisted fraud at Wiley/Hindawi; a $25.6 million deepfake heist at Arup's Hong Kong office; 1,271 AI "news" sites tracked by NewsGuard; and AI-written pages now appearing in 17% of Google's top results according to Originality.AI.

The scale of this transformation accelerated after ChatGPT's November 2022 release, turning what were once labor-intensive schemes into industrial pipelines of misinformation.

The Disinformation Architects

One of the most prolific human operators is John Mark Dougan, a former Florida deputy sheriff who fled to Russia and now runs a sprawling network of U.S.-branded fake news sites. His outlets, which included DC Weekly and Boston Times, published AI-written stories that infiltrated mainstream discourse—most notoriously the false claim that Ukraine's President Zelenskyy had purchased yachts with U.S. aid.

In a 2025 NewsGuard test, major AI chatbots repeated content from Dougan's network about one-third of the time, highlighting how human disinformation can launder itself through automated systems.

Meanwhile, China's state-backed Spamouflage (or Dragonbridge) operation pivoted to AI content by late 2022. Google's Threat Analysis Group disrupted over 175,000 instances of Dragonbridge activity across YouTube and Blogger by mid-2024. In its April 2024 report, Microsoft's Threat Analysis Center described it as the first documented case of a nation-state using AI-generated media to influence a foreign election—Taiwan's 2024 presidential race.

Russia, not to be outdone, developed "Meliorator," a Kremlin-sponsored AI system to automate social influence campaigns. A U.S. Justice Department advisory released in July 2024 detailed the seizure of servers running 968 fake X (Twitter) accounts tied to the operation—the first publicly confirmed example of a government deploying custom AI software for disinformation.

When Profit Meets AI Capability

The economic incentive to produce "AI slop" became clearest in academia. After acquiring Egyptian publisher Hindawi, Wiley uncovered a vast paper-mill infiltration that forced over 11,000 retractions and cost $35–40 million in lost revenue.

A PLOS Biology study (May 2025) revealed an explosion of formulaic, low-quality research exploiting the U.S. NHANES health database: papers surged from a handful annually before ChatGPT to hundreds in 2024, many following identical "AI-template" structures.

These weren't AI accidents—they were conscious human shortcuts. Paper mills openly advertised "AI writing assistance" on Taobao and similar platforms, charging from $180 to $5,000 per fabricated authorship.

The result: polluted literature, corrupted peer review, and erosion of scientific credibility.

The $25 Million Deepfake Call

In January 2024, a finance officer at Arup's Hong Kong office joined a video call that appeared to include their CFO and colleagues. The participants looked, spoke, and behaved exactly like familiar coworkers—except none of them were real.

Over the call, the employee made 15 transfers totaling $25.6 million USD to five local bank accounts before realizing the deception.

The perpetrators had deliberately trained voice and video models on real meeting footage. This was no AI malfunction—it was a coordinated act of human fraud, exploiting deepfake tools to weaponize trust itself.

Elections and the Liar's Dividend

On January 21, 2024, just before New Hampshire's Democratic primary, thousands received a robocall of President Biden's cloned voice discouraging them from voting. The operation was commissioned by political consultant Steve Kramer, who later claimed it was a "warning" about deepfakes. Regulators disagreed: the FCC fined him $6 million and New Hampshire charged him with 13 felonies for voter suppression.

Similar tactics appeared abroad. In Slovakia's 2023 election, deepfake audio circulated days before voting, falsely depicting opposition leader Michal Šimečka plotting to rig results. Analysts traced distribution to pro-Russian networks. Whether or not it changed the outcome, the human intent to manipulate voters was unmistakable.

The "liar's dividend" has followed: politicians now dismiss real evidence as AI-generated, muddying accountability and deepening distrust.

When Search Engines Turn to Slop

By September 2025, Originality.AI found that 17.31% of Google's top-20 results were AI-written content—a 760% increase since 2019.

NewsGuard identified 1,271 AI-generated "news" sites across 16 languages, many pumping out thousands of auto-written stories per day with minimal oversight.

Google's March 2024 core update targeted low-quality "scaled content" and expired-domain abuse, aiming to cut unoriginal pages by ~40%, but spam networks quickly adapted.

Even legacy outlets stumbled: between November 2022 and January 2023, CNET published 77 AI-authored finance articles, of which 41 required corrections. The decision to conceal AI authorship for SEO gain cost the brand its "generally reliable" rating on Wikipedia.

The Facebook Spam Apocalypse

By 2024, Facebook feeds were flooded with surreal AI images—Christ fused with shrimp, impossible woodworking feats, fake disabled children asking for "birthday wishes." A joint Georgetown/Stanford study traced these to organized content farms in Asia operating for ad revenue.

Meta's own transparency reports show over 4.3 billion fake accounts removed in 2024, alongside millions of impersonator profiles. Despite enforcement, engagement with AI-spam posts routinely hit the millions, thanks to algorithmic amplification.

On X (Twitter), a Clemson University team identified 686 bot accounts posting 130,000+ times around U.S. midterms, shifting from ChatGPT to less-restricted models to evade filters.

The pattern is constant: humans exploit every new generative tool faster than platforms can respond.

Crypto, Scams, and the $10 Billion Question

An elderly Arizona investor named Steve Beauchamp thought he was watching Elon Musk endorse a new trading platform. The video was flawless—and fake. He lost his $690,000 life savings.

Deepfake Musk has become the face of AI-enabled fraud. According to Chainalysis, scam wallets took in $9.9 billion of cryptocurrency in 2024, a 40% year-over-year rise.

Meanwhile, the FTC's Operation AI Comply charged multiple firms with AI-based deception—one, Ascend Ecom, alone defrauded investors of $25 million. These schemes weren't algorithmic accidents; they were conscious human cons dressed in AI language.

Grandparents, Voice Clones, and Emotional Warfare

In 2023, Arizona mother Jennifer DeStefano received a frantic call from her daughter—sobbing, terrified, claiming she'd been kidnapped. The voice was perfect. It was also fake, generated from a short TikTok clip.

"Grandparent" scams now routinely use AI voice cloning to impersonate loved ones. The FTC logged 845,000 imposter scam reports in 2024 with $2.7 billion in losses; officials suspect voice clones account for a growing share.

Here again, the cruelty is human: fraud rings harvest social-media audio, craft emotional emergencies, and exploit empathy for profit.

The Verification Crisis

Humans are losing the ability to tell real from synthetic.

Studies show people correctly identify AI-generated images only about 60% of the time, and deepfakes barely one in four.

NewsGuard found that major chatbots echoed Russian disinformation in roughly a third of prompts, while another analysis in mid-2025 put the rate at around 24% depending on model.

Even legitimate science is infected: retracted AI-generated papers remain discoverable and citable on Google Scholar, feeding false data back into new models—a self-reinforcing misinformation loop.

Amazon claims to have blocked over 250 million fake reviews in 2023, yet third-party auditors like Fakespot estimate up to 43% of product reviews remain unreliable.

Verification can't keep pace with automated falsehood.

The Accountability Gap

Every case stems from human intent.

Steve Kramer chose to commission an AI robocall. Russian engineers chose to code Meliorator. Paper-mill operators chose to mass-produce fake research. CNET executives chose SEO over truth. And social-media entrepreneurs chose engagement over integrity.

Some progress exists—the FTC finalized rules in August 2024 imposing $51,744 per violation for fake reviews; the FCC banned AI-voice robocalls; and Google's March 2024 update temporarily cut AI spam.

Yet enforcement lags behind human creativity in exploitation.

What $20 Billion in Harm Reveals About Human Nature

The evidence is overwhelming: AI-generated slop is not an artificial-intelligence problem—it's a human-intelligence problem.

Given tools that generate convincing content at near-zero cost, thousands of people chose deception over creation. Governments saw propaganda potential. Criminals saw scalability. Corporations saw cheap traffic.

AI merely lowered the price of manipulation. Humans supplied the motive.

The solution, therefore, cannot be purely technological. It must address the human incentives and accountability structures behind every misuse—because every fake paper, scam video, and deepfake call begins with a decision to cause harm.


By the Numbers (2022–2025)

  • 11,300+ academic papers retracted (Wiley/Hindawi)
  • $25.6M stolen in the Arup deepfake heist
  • 1,271 AI-generated "news" sites identified (NewsGuard)
  • 17.31% of top-20 Google results AI-generated (Originality.AI)
  • >175,000 Dragonbridge/Spamouflage instances disrupted (Google TAG)
  • 968 fake accounts in Russia's "Meliorator" network (DOJ 2024)
  • $9.9B crypto scam inflows (Chainalysis 2025)
  • 4.3B fake Facebook accounts removed (Meta 2024)
  • 845,000 imposter-scam reports; $2.7B in losses (FTC 2024)
  • $51,744 maximum fine per fake review (FTC 2024 rule)

Share this post: