Saturday Hashtag: #AITelepathyBomb
Machine ‘Telepathy’ Is Here and It Killed Anonymized Data
On May 13, 2024, the authors of a paper titled “The Platonic Representation Hypothesis” dropped a bomb: AI models, no matter how they’re built or trained, end up thinking in near-identical ways.
This breaks a core assumption in AI safety — that models are siloed and can’t interpret each other’s inner workings.
Turns out, they can. Even if trained on totally different data, AI systems develop similar “thought maps,” placing related ideas near each other (like “cat” and “dog”), a structure that makes cross-modal translation possible.
In other words: One AI can now decode another’s internal representations, even if the two were never designed to communicate. This is, in effect, machine telepathy, and the synchronicity it creates among AI systems is a privacy disaster.
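That convergence can be made concrete with a simple check. The sketch below is illustrative only (the function names, the toy data, and the choice of a nearest-neighbor overlap score are assumptions, loosely in the spirit of the alignment metrics discussed in the paper): it asks whether two models, with different vector sizes and different internals, place the same concepts next to the same neighbors.

```python
import numpy as np

def knn_sets(embeddings: np.ndarray, k: int) -> list[set]:
    """For each row, return the set of indices of its k nearest neighbors (cosine)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude each point from its own neighbor set
    return [set(np.argsort(row)[-k:]) for row in sims]

def alignment_score(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 5) -> float:
    """Average overlap of k-nearest-neighbor sets across two embedding spaces.

    emb_a and emb_b describe the *same* concepts, one row per concept, in two
    different models (the dimensions need not match). Returns a value in [0, 1];
    higher means the models arrange the concepts in similar 'thought maps.'
    """
    neighbors_a = knn_sets(emb_a, k)
    neighbors_b = knn_sets(emb_b, k)
    return float(np.mean([len(a & b) / k for a, b in zip(neighbors_a, neighbors_b)]))

# Toy usage: two "models" with different dimensions built from shared structure.
rng = np.random.default_rng(0)
shared_structure = rng.normal(size=(100, 16))            # latent concept geometry
model_a = shared_structure @ rng.normal(size=(16, 64))   # model A's embedding space
model_b = shared_structure @ rng.normal(size=(16, 32))   # model B's embedding space
# Overlap is well above the ~k/n chance level when the two spaces share structure.
print(alignment_score(model_a, model_b))
```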
Embeddings Aren’t Safe. They Never Were
AI embeddings, long treated as secure, compressed representations of data, can no longer be relied on to protect the information they encode.
Embeddings are the lists of numbers that represent words or ideas inside an AI model. For example, the word “cat” might be turned into [1, 0, 1], where the first 1 stands for “animal,” the 0 for “big” (a cat is not), and the second 1 for “pet.” These embeddings are what let AI understand, contextualize, and compare meanings.
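In real systems those lists are long rows of floating-point numbers rather than tidy 0/1 flags, and “placing related ideas near each other” usually means a high cosine similarity between vectors. A minimal sketch (the vectors here are invented for illustration, not taken from any actual model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings; real models use hundreds of dimensions.
cat = np.array([0.9, 0.1, 0.8, 0.3])
dog = np.array([0.85, 0.15, 0.75, 0.4])
car = np.array([0.1, 0.9, 0.2, 0.7])

print(cosine_similarity(cat, dog))  # high: "cat" and "dog" sit near each other
print(cosine_similarity(cat, car))  # lower: "cat" and "car" are farther apart
```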
This research shows that embeddings can be reverse-engineered using small models on ordinary hardware, without access to the original model’s structure. That means so-called anonymized data — medical records, faces, financial histories — can now be reconstructed (with your identity attached) and exposed, even by unrelated systems.
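The broad shape of such an attack is disturbingly simple. The sketch below is a toy illustration, not a reimplementation of Embed Parrot or any published attack: it uses TF-IDF as a stand-in encoder (a real attacker would use the target encoder or a trained surrogate) and recovers the content of “anonymized” vectors by matching them against candidate text. Published attacks go further and regenerate the text directly, but the failure mode is the same: the vector alone gives the content away.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in "embedding model". A real attack targets a neural text encoder
# (or a surrogate trained to mimic it); TF-IDF keeps this sketch runnable.
records = [
    "patient reports chest pain and shortness of breath",
    "loan application denied due to low credit score",
    "subject photographed entering the clinic on tuesday",
    "password reset requested from an unrecognized device",
]
encoder = TfidfVectorizer().fit(records)

# A "vector database" stores only the embeddings, on the assumption that the
# raw text is gone and the vectors are anonymous.
leaked_embeddings = encoder.transform(records)

# Attacker side: guess plausible candidate sentences, embed them with the same
# (or a surrogate) encoder, and match leaked vectors back to readable text.
candidates = [
    "loan application denied due to low credit score",
    "patient reports chest pain and shortness of breath",
    "weather is sunny with a light breeze",
    "password reset requested from an unrecognized device",
    "subject photographed entering the clinic on tuesday",
]
candidate_embeddings = encoder.transform(candidates)

similarity = cosine_similarity(leaked_embeddings, candidate_embeddings)
for i, row in enumerate(similarity):
    best = int(np.argmax(row))
    print(f"leaked vector {i} -> {candidates[best]!r} (score {row[best]:.2f})")
```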
Alarming Proof Keeps Stacking Up
- “The Platonic Representation Hypothesis” (May 2024): AI systems naturally converge to similar internal structures, enabling thought-to-thought mapping.
- “Information Leakage from Embedding in Large Language Models” (May 2024): Tools like Embed Parrot and Captum can reconstruct user inputs from deep embeddings.
- “Transferable Embedding Inversion Attack” (August 2024): Surrogate models can extract sensitive info without access to the original model.
- “Information Leakage in Embedding Models” (March 2020): 50–70 percent of the original text is recoverable, and author identity is often traceable.
- “The Hidden Dangers of Face Embeddings” (December 2024): Face-embedding inversion shows that faces can be reconstructed from supposedly anonymized vectors.
No Regulation. No Oversight. No Accountability.
Despite mounting evidence, regulators remain largely silent, offering only token legislation. Meanwhile, Big Tech continues weaving fragile AI systems into critical infrastructure — banking systems, energy grids, health care platforms, law enforcement — without real safeguards.
The media barely blinks, blinded by the “innovation.” Executives talk “ethics” while mining unprotected personal data. Profit is driving a privacy collapse that is now hardwired into the foundations of modern society.
Not a Bug in the System. The Bug Is the System
Every major vector database is a ticking time bomb. The belief that embeddings were anonymized is dead. What’s left is a system where models think alike, all private data is extractable, and nothing is siloed.
This isn’t a theoretical flaw. It’s a seismic security failure, and it’s already live inside critical systems.
What must happen now:
- Embedding-level red-teaming must be mandatory.
- Companies must publicly disclose risks and model capabilities.
- Embeddings must be legally classified as private sensitive data.
Without immediate action, your digital identity, thoughts, face, and history aren’t just up for grabs; they’re already out the door.
This is not a glitch. It’s the future we’re sleepwalking into.
Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe
The author writes, “If you’re afraid of artificial intelligence, you’re not alone, and you’re not wrong to be cautious. AI is no longer science fiction. It’s embedded in the apps we use, the decisions that affect our lives, and the tools reshaping work and creativity. But with its power comes real risk.”
Emerging AI Threats To Navigate in 2025 and Beyond
From Forbes: “As society integrates AI into sectors such as healthcare, finance, manufacturing and transport, the potential for catastrophic blowback grows if these autonomous bytes are not properly regulated and monitored. Listed below are some heads-up threats to be aware of as organizations prepare their risk management plans for 2025.”
Are AI Deep Network Models Converging?
The author writes, “A [2024] MIT paper has come to my attention for its impressive claim: AI models are converging, even across different modalities — vision and language. ‘We argue that representations in AI models, particularly deep networks, are converging’ is how ‘The Platonic Representation Hypothesis’ paper begins. But how can different models, trained on different datasets and for different use cases, converge? What has led to this convergence?”
Shadows and Circuits: Plato’s Allegory of the Cave and the Rise of Artificial Intelligence
From Brewminate: “Let’s explore the Allegory of the Cave as a philosophical lens through which to examine the rise of AI. It argues that AI systems, like the shadows on the cave wall, can both enlighten and deceive, and that the development of artificial intelligence forces us to confront not only technological possibilities but metaphysical realities.”
5 Reasons Plato Would Be Skeptical About GenAI
The author writes, “As Generative AI becomes more embedded in our lives, I find myself using the Socratic method to refine the outputs of LLMs like ChatGPT, and I can’t help but wonder what Plato would say about our growing dependence on artificial intelligence. Based on his writings, I think he’d have some serious concerns. Here are 5 reasons he would have doubts.”
Does AI Limit Our Creativity?
From Knowledge at Wharton: “As more companies look to tools like ChatGPT to supercharge creativity, a new study out of Wharton offers a word of caution: Generative AI may boost individual performance, but it can also limit how teams think. New research co-authored by Wharton professors Gideon Nave and Christian Terwiesch finds that while ChatGPT improves the quality of individual ideas, it also leads groups to generate more similar ideas, reducing the variety that’s essential for breakthrough innovation.”
IdDecoder: A Face Embedding Inversion Tool and its Privacy and Security Implications on Facial Recognition Systems
The authors write, “Most state-of-the-art facial recognition systems (FRSs) use face embeddings. In this paper, we present the IdDecoder framework, capable of effectively synthesizing realistic-neutralized face images from face embeddings, and two effective attacks on state-of-the-art facial recognition models using embeddings.”
Novel Universal Bypass for All Major LLMs
From HiddenLayer: “Researchers at HiddenLayer have developed the first, post-instruction hierarchy, universal, and transferable prompt injection technique that successfully bypasses instruction hierarchy and safety guardrails across all major frontier AI models. … Leveraging a novel combination of an internally developed policy technique and roleplaying, we are able to bypass model alignment and produce outputs that are in clear violation of AI safety policies: CBRN (Chemical, Biological, Radiological, and Nuclear), mass violence, self-harm and system prompt leakage.”