Black hats turn their guns on LLMs. Photo credit: Illustration by WhoWhatWhy using Midjourney AI

Saturday Hashtag: #AIBlackHatOperations

How Black-Hat Tactics Are Manipulating AI for Profit

07/19/25

Welcome to Saturday Hashtag, a weekly place for broader context.


Black hat tactics — deceptive methods used to exploit digital systems — now target artificial intelligence large language models (LLMs). Just as corporations and other bad actors have long gamed search engines with SEO tricks, they can now flood platforms like ChatGPT with cloned, low-quality content to manipulate results and thus profit.

The process is simple: Scrape content from legitimate sites, rebrand it with minor edits, and spin up dozens of near-duplicate websites. The sheer number of sites then overwhelms AI models, which prioritize repetition over accuracy, and it all happens outside Google's rankings entirely.

While LLMs are not flooded in real time like Google is by SEO spam, their training data is significantly corrupted by these tactics, so much so that private companies are now actively intervening to mitigate the damage.

LLMs like ChatGPT don’t verify facts; they rely on statistical patterns in the data. So if 15 or 20 similar echo-chamber sites repeat the same claim, the model typically treats it as truth, even if it’s misleading or false.

For instance, if enough fake sites promote a bogus supplement as a cure, the LLM might regurgitate that claim in health-related responses without knowing it has been manipulated. Or agenda-driven sites might deny that an entire war in 1516 ever took place, when in fact it did, and the model could repeat the denial as fact.
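To see why repetition alone can carry a false claim, consider this toy sketch. It is purely illustrative: the sources and claims are invented, and a real LLM learns statistical patterns rather than counting votes, but the effect of sheer volume is similar.

```python
# Toy illustration (not how an LLM actually works): a naive system that
# surfaces whichever claim is repeated most often in its training corpus.
# The "sources" below are hypothetical.
from collections import Counter

corpus = (
    # Three independent, authoritative sources
    ["Supplement X has no proven effect on the condition."] * 3
    # Fifteen near-duplicate clone sites scraped from the same spam network
    + ["Supplement X cures the condition."] * 15
)

claim_counts = Counter(corpus)
most_repeated, count = claim_counts.most_common(1)[0]

print(f"Answer surfaced by repetition ({count} of {len(corpus)} sources):")
print(most_repeated)
# The cloned claim wins purely on volume; nothing checks which sources are real.
```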

An Instagram post by The Vagabond Artist. Photo credit: thevagabondartist / Instagram

The incentives are clear: affiliate revenue, ad clicks, influence over public perception, and outright fraud. It’s cheap, highly scalable, and alarmingly effective.

LLMs reward repetition, not quality, making them easy to manipulate at scale, especially in critical areas like health, finance, and politics, where the consequences can be disastrous. And unlike search engines, which show multiple sources, AI delivers one confident, prefabricated answer that users are often more likely to trust.

This is a critical difference: Users interact with LLMs not as tools for exploration, but as direct sources of truth. There’s no hyperlink list to evaluate, just a single, authoritative-sounding response. 

If allowed to continue, this dynamic will erode both the quality of web content and trust in AI technology. Developers and legislators must act: Add oversight, provenance tracking, real-time fact-checking, and — in particular — filters to detect coordinated manipulation.
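One rough sketch of how such a filter might work is to compare pages for suspiciously high word-for-word overlap. The site names, sample text, and similarity threshold below are hypothetical, and production systems use far more sophisticated techniques, but the basic idea is to flag clusters of near-duplicate pages.

```python
# Minimal sketch of a "coordinated manipulation" filter: flag pages whose
# word-shingle overlap suggests they are near-duplicates of each other.
# Sites, text, and threshold are hypothetical.

def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """Return the set of k-word shingles for a page of text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

pages = {
    "site-a.example": "breaking news the miracle supplement x cures the condition "
                      "overnight according to new research shared by experts across "
                      "dozens of independent health and wellness sites today",
    "site-b.example": "breaking news the miracle supplement x cures the condition "
                      "overnight according to recent research shared by experts across "
                      "dozens of independent health and wellness sites today",
    "site-c.example": "a peer reviewed clinical trial published this month found no "
                      "measurable effect from the supplement on the condition and "
                      "regulators have warned consumers about misleading marketing claims",
}

THRESHOLD = 0.5  # hypothetical cutoff for "suspiciously similar"
names = list(pages)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        sim = jaccard(shingles(pages[x]), shingles(pages[y]))
        if sim >= THRESHOLD:
            print(f"Possible clone pair: {x} <-> {y} (similarity {sim:.2f})")
```

Running this flags only the two near-identical pages, leaving the genuinely independent source alone; provenance tracking and fact-checking would then decide what to do with the flagged cluster.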

Authority should be determined by veracity, not volume or verisimilitude. This is a principle that extends beyond AI and reflects a deeper crisis in how society interprets truth.

Without safeguards, AI risks becoming a powerful amplifier of misinformation, faster, louder, and harder to detect than anything before.


Manufactured Madness: How To Protect Yourself From Insane AIs

From Tech News World: “It began, as these events often do, with a bizarre and sudden failure. Recently, xAI’s Grok, the much-touted ‘rebellious’ and ‘truth-seeking’ artificial intelligence (AI), went completely haywire. Users asking Grok standard questions were met with streams of nonsensical, context-free propaganda and gibberish stitched together from the darkest corners of political forums. … While the incident was quickly framed as a temporary glitch, it was something far more significant — a bright, flashing warning sign, a canary in the coal mine, signaling a deep and growing problem at the heart of the AI revolution.”

Russia Seeds Chatbots With Lies. Any Bad Actor Could Game AI the Same Way.

The author writes, “Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform. Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.”

How the US Public and AI Experts View Artificial Intelligence

From Pew Research Center: “With artificial intelligence no longer the stuff of science fiction, its benefits and risks are being debated by everyone from casual observers to scholars. A new Pew Research Center report examines the views of two key groups: the American public and experts in the field of AI. These surveys reveal both deep divides and common ground on AI. AI experts are far more positive than the public about AI’s potential, including on jobs. Yet both groups want more personal control of AI and worry about lax government oversight.”

AI and the Risk of Consumer Harm

From the Federal Trade Commission: “People often talk about “safety” when discussing the risks of AI causing harm. AI safety means different things to different people, and those looking for a definition here will be disappointed. These discussions can sometimes focus on the possibility of existential risk stemming from some sort of AI agent or cyborg of the future. But speculation about human extinction is well beyond the FTC’s immediate concerns. Instead, the FTC focuses on AI through the lens of our consumer protection and competition mission.”

UN Report Urges Stronger Measures To Detect AI-Driven Deepfakes

The author writes, “Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations’ International Telecommunication Union urged in a report on [this month]. Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its ‘AI for Good Summit’ in Geneva.”

AI Developers Should Counter Misinformation and Protect Fact-Based News, Global Media Groups Say

The author writes, “A global group of broadcasters and publishers is calling on artificial intelligence developers to make sure their technology serves the public by helping to counter misinformation and protect the value of fact-based news. The Geneva-based European Broadcasting Union, an association of public broadcasters that’s also known for running the popular Eurovision Song Contest, says it and the World Association of News Publishers and other partners want to cooperate with tech companies behind AI.”