Welcome to Saturday Hashtag, a weekly place for broader context.
Yes, reality is now new and improved, but it’s no longer real; both cost and value have been cheapened. The shift occurred the moment artificial intelligence crossed the threshold of indistinguishable video generation.
Traditional, old‑school reality always had its weaknesses, as any critical mind understands. D.W. Griffith’s 1915 feature film The Birth of a Nation, for example, completely distorted the truth, glorifying the Ku Klux Klan as heroes instead of exposing them as violent criminals with ICE hearts who terrorized, kidnapped, disappeared, and murdered those they did not like.
But until recently, the ability to create convincingly authentic fake video was confined to major film studios and, occasionally, governments. It required budgets, crews, custom equipment, and controlled sets.
That power is now in the hands of anyone with a smartphone. The gatekeeper monopoly on visual manipulation is gone. It’s democratized, unregulated, and nearly impossible to detect.
Video has historically been our closest reflection of reality. That era is over. We find ourselves forced into a modern Three Wise Monkeys avoidance posture, relying on willful ignorance just to cope with the chaos.
Today, AI models can generate “authentic” body‑cam recordings, security‑camera feeds, or historical archives with convincing lighting and physics. These tools don’t imitate events; they fabricate them outright.
A click can frame or clear anyone. Guilt and innocence are irrelevant. Real is fake; fake is real.
Once doubt becomes ubiquitous, proof collapses. Courts, media, elections, stock markets, and history itself are threatened.
Fraud, blackmail, fake disasters, election manipulation, historical revisionism, and mass panic are all now just a button press away. Trust collapses. Truth is now negotiable.
The pace of development is accelerating. In 2025 alone, multiple AI video tools became capable of convincingly mimicking reality.
Google launched Nano Banana Video, combining its ultrarealistic image generator with advanced video synthesis to produce studio-quality, multi-scene videos with consistent characters.
Google also released SynthID, which can detect images generated by its own models, but a “no” result is meaningless: the watermark might have been stripped, or the image could come from Stable Diffusion or DALL-E. It offers only a false sense of security, as the sketch below illustrates.
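To see why, consider the decision logic in miniature. The detect_google_watermark function below is a hypothetical stand-in (SynthID’s detector is a Google service, not a public Python API); the point is only that the evidence is asymmetric: a hit means something, a miss means almost nothing.

```python
def detect_google_watermark(image_bytes: bytes) -> bool:
    """Hypothetical stand-in for a SynthID-style watermark check."""
    return False  # stub result: pretend no watermark was found


def interpret(found: bool) -> str:
    if found:
        # A hit is strong evidence the image came from a Google model.
        return "Watermark present: likely generated by a Google model."
    # A miss proves nothing: the image may come from another generator
    # (Stable Diffusion, DALL-E, ...), or the watermark may have been
    # destroyed by cropping, re-encoding, or screenshotting.
    return "Inconclusive: no watermark is not evidence of authenticity."


print(interpret(detect_google_watermark(b"...image bytes...")))
```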
Meanwhile, Higgsfield added FLUX.1 Kontext, the context-aware image model Black Forest Labs released in May 2025, to its studio platform, generating full production-ready visuals from text, sketches, or reference images.
To make Christmas extra special this year, Higgsfield’s Black Friday sale cut prices by 65 percent, making hyperrealistic synthetic media cheaper and more accessible to EVERYONE. This is not discount shopping; it’s discount reality.
These hyperreal tools don’t unleash creativity — they unravel reality.
Existing Protections
- Nonconsensual intimate deepfakes — illegal in many US states.
- Unauthorized use of likeness or voice — laws exist in a few states (e.g., Tennessee’s ELVIS Act).
- Child exploitation content (real or AI-generated) — strictly illegal.
- Election interference/political deepfakes — limited bans in some jurisdictions.
Major Gaps
- No comprehensive law covering most AI‑generated fake videos.
- Nonsexual, nonpolitical fake videos are often fully legal.
- No mandatory labeling or authentication standards (a sketch of what signed provenance would look like follows this list).
- Enforcement is nearly impossible across borders or anonymous platforms.
- Laws are evolving far slower than the technology.
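What would such a standard actually require? In essence, the provenance approach of efforts like C2PA: cryptographically bind a record of a file’s origin to the file at capture or generation time, so any later edit is detectable. The following sketch is a toy version using only Python’s standard library and a shared HMAC key; real standards use public-key certificates, and every name in it is illustrative.

```python
import hashlib
import hmac
import json

# Toy stand-in for a camera or generator signing key. Real provenance
# standards (e.g., C2PA) use per-device public-key certificates instead.
SIGNING_KEY = b"demo-key-not-a-real-secret"


def make_manifest(media: bytes, source: str) -> dict:
    """Bind a signed provenance record to a media file's hash."""
    claim = {"sha256": hashlib.sha256(media).hexdigest(), "source": source}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(media: bytes, manifest: dict) -> bool:
    """True only if the file is unmodified and the signature checks out."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(media).hexdigest() == manifest["sha256"]
    )


clip = b"...raw video bytes..."
manifest = make_manifest(clip, source="bodycam-unit-7")
print(verify_manifest(clip, manifest))                # True
print(verify_manifest(clip + b"tampered", manifest))  # False: any edit breaks the hash
```

The catch, and the reason "mandatory" matters: a signature scheme only helps if unsigned media is treated as unverified by default.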
What You Can Do
- Stop assuming anything you see online is real. If it shocks or enrages you, treat it as a warning and verify (see the verification sketch after this list).
- Learn basic AI literacy. Understand what these tools can do and why realism does not necessarily equal truth.
- Pressure governments to act. Demand mandatory watermarking, real enforcement, and penalties for malicious synthetic media.
- Slow down. Trust people you know, not random clips from strangers.
- Recognize the threats: political manipulation, financial fraud, historical revision, mass panic, blackmail, evidence fabrication, and the collapse of trust itself.
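One concrete way to verify before you share: compare a frame from a suspicious clip against footage from a source you already trust. The sketch below relies on the third-party pillow and imagehash packages (an assumption, not tools named in this article), and the file names and threshold are purely illustrative; it cannot prove a clip is real, only whether two images match.

```python
# pip install pillow imagehash  (third-party packages, assumed available)
from PIL import Image
import imagehash


def looks_like(trusted_path: str, suspect_path: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance means the
    two images are visually near-identical, even after re-encoding."""
    trusted = imagehash.phash(Image.open(trusted_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return (trusted - suspect) <= threshold  # distance between 64-bit hashes


# Illustrative file names: a frame from a known broadcast vs. a viral clip.
if looks_like("trusted_frame.png", "viral_frame.png"):
    print("Matches known footage; likely the same source material.")
else:
    print("No match found; treat the clip as unverified.")
```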
Reality hasn’t vanished, but it no longer defends itself. If we want truth to survive, we have to protect it individually and intentionally.
Hashtag Picks
Augmenting Archival Access Through AI
The author writes, “Archives and records repositories around the world hold vast collections of paper documents, photographs, and other media that are not easily searchable or accessible in digital form. From historical manuscripts to typescript records, much of this material can only be accessed by physically browsing or reading through it, which is labor-intensive. Today, artificial intelligence (AI) offers new ways to bridge this gap.”
The Rise of AI and Deepfakes Threaten Democracy: Legal Scholar Wes Henricksen Shows the Path Forward
From the University Press of Kansas: “We are entering an era where truth is negotiable, facts are contested, and the line between reality and fiction is blurring in ways that threaten both personal safety and democratic stability. Which is why Professor Wes Henricksen’s new book, In Fraud We Trust, feels less like an academic treatise and more like a survival manual for democracy.”
The AI Prompt That Could End the World
The author writes, “When nuclear fission was discovered in the late 1930s, physicists concluded within months that it could be used to build a bomb. Epidemiologists agree on the potential for a pandemic, and astrophysicists agree on the risk of an asteroid strike. But no such consensus exists regarding the dangers of A.I., even after a decade of vigorous debate. How do we react when half the field can’t agree on what risks are real?”
Deepfakes and the Crisis of Knowing
From UNESCO: “As deepfakes blur reality, education must go beyond detection, teaching students to navigate truth, knowledge, and AI-mediated uncertainty.”
India Proposes Strict Rules to Label AI Content Citing Growing Risks
The authors write, “India’s government [in October] proposed that artificial intelligence and social media firms should clearly label AI-generated content to tackle the spread of deepfakes and misinformation, prompted by similar moves by the European Union and China.”
Our Racist, Terrifying Deepfake Future Is Here
From The Nation: “A faked viral video of a white CEO shoplifting is one thing. What happens when an AI-generated video incriminates a Black suspect? That’s coming, and we’re completely unprepared.”
UC Riverside Scientists Develop Tool to Detect Fake Videos
The author writes, “In an era where manipulated videos can spread disinformation, bully people, and incite harm, UC Riverside researchers have created a powerful new system to expose these fakes. Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu, both from UCR’s Marlan and Rosemary Bourns College of Engineering, teamed up with Google scientists to develop an artificial intelligence model that detects video tampering — even when manipulations go far beyond face swaps and altered speech.”
The ‘Deepfake Paradox’ Could Undermine the Justice System
The author writes, “The ‘deepfake paradox’ — the erosion of inherited trust in video evidence due to the existence of deepfake technology — challenges the justice system, which was built on the assumption that ‘seeing is believing.’”
