Photo: woman with cellphone. Credit: Marek Mucha / Pexels

As women are increasingly targeted by the spread of deepfake pornographic images online, international and domestic lawmakers are struggling to create legislation that effectively protects victims from these attacks.


Amy Smith, an American currently studying in Paris, was preparing for a midterm in her apartment when a stranger sent her an Instagram message with an ominous threat: “Reply, or these will be sent everywhere.” Hesitating a moment, she clicked on the notification and scrolled past his initial text to see AI-generated images of herself, nude.

Smith, who asked that her name be changed, wasn’t worried at first. The images were so poorly altered that anyone could tell they were fake. She blocked the messenger after saying she would report him to the police if he went any further.

But the harassment continued.

For a week after Smith, 20, received the first message, her online stalker used multiple social media accounts to send increasingly realistic images to her family members, post them to her and her father’s Facebook pages, and even threaten her in messages sent to her personal phone number.

To this day, his motives remain unknown. After speaking with law enforcement, Smith thinks the attacker might simply enjoy manipulating people like this.

Smith’s brother, an IT enthusiast, attempted to trace her attacker’s location using a cheap pocket smartphone after receiving a crude text with a deepfake photo of Smith attached. One of the messages could be traced to Asia, but others were scattered across the globe, suggesting that the attacker used different devices and phone numbers to remain anonymous.

“Being alone in Paris made it a lot scarier for me,” Smith said. “If I could have been with my family, I would have felt safer.”

In the past two years, people with malicious intent have increasingly targeted celebrities, children, and young women alike with fake nude photos and videos. These images, called deepfakes, are made by feeding existing photos of a person into artificial intelligence (AI) tools that generate synthetic images in their likeness. A 2023 report by the US cybersecurity firm Home Security Heroes found that 98 percent of deepfake videos online are pornographic, and the victims targeted by this cybercrime are almost exclusively women.

Last year, the FBI warned the public of these deepfakes and noted an uptick in “sextortion schemes,” in which a perpetrator uses altered images to blackmail victims into either paying large sums of money or sending them actual nude photos.

“Malicious actors use content manipulation technologies and services to exploit photos and videos — typically captured from an individual’s social media account, open internet, or requested from the victim — into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” the FBI said. “Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else.”

The public primarily becomes aware of this problem when it affects celebrities. 

Deepfake pornographic images of Taylor Swift rapidly spread on X for nearly an entire day in January before being taken down. 

However, the spread of nonconsensual deepfake intimate images on social media platforms is not limited to famous people.

After the attacker sent a message to Smith’s personal phone number stating that he knew where she lived and that she was in grave danger, she turned to law enforcement for protection. She contacted the US embassy in France, reported the crime to the French police and the police in her home state of Florida, and even reported the blackmail attempt to the FBI. No one knew how to help. Each time she got the same answer: The chances of finding her attacker were slim to none.

“I was super overwhelmed,” Smith said. “I still don’t understand what this person wanted from me. The hardest part was when they went after my family.”

In the US, only 10 states have passed legislation that specifically mentions deepfakes. In Florida, a 2022 law states that posting AI-created nude photos without the person’s consent is a third-degree felony, the least severe type of felony — only one step up from a misdemeanor — with a maximum sentence of 5 years.

There are currently no national laws that directly regulate AI in France, although proposed legislation would require AI systems to respect copyright laws and to disclose when an image is AI-generated.

But laws targeting individual acts are not enough, according to Sara Jodka, a data privacy and cybersecurity lawyer based in Michigan. New developments in AI technology will continually stay ahead of laws that fix issues retroactively, she said, meaning that proactive legislation is the only way to truly control how AI can be used.

“The law will always be behind technology,” Jodka said. “It will always be behind social media.”

Though she initially believed the photos had only been sent to people connected to her, Smith received another direct message from a stranger on Instagram warning that her photos had been leaked to a Telegram channel that shares nude photos of random women. The admin, after hearing that the images were fake, kicked the original poster out of the channel and deleted the images.

After this event, the threats abruptly stopped and have not resumed.

Smith’s uncertainty over what will happen next, however, has not gone away.

“I don’t know if he has given up, or if he is planning something else,” Smith said.

Even if the attacker’s identity were known, the authorities have only limited paths they can take in a case like Smith’s, and even those come up short. The first and safest option for lawyers is to send a cease-and-desist letter, on the assumption that the attacker engaged in “revenge porn” (when a victim’s real nude images are leaked without consent). But in most deepfake pornographic images, only the person’s face is real, which severely limits the effectiveness of this approach.

The main issue lies in the fact that, in most deepfake pornographic threats like Smith’s case, the attacker is completely anonymous.

“How can we go after a faceless perpetrator? We can’t,” said Jodka. And even if state, federal, or international governments step in, “If that actor isn’t within the state that protected it, then what are we supposed to do?”

But Taylor Swift’s scandal spurred legislators to start taking the onslaught of deepfake pornography seriously. Two bipartisan bills introduced in the Senate this year, the Defiance Act and the Take It Down Act, would, if passed, allow victims to seek civil penalties against those who produced or possessed these “digital forgeries” with the intent to distribute them, and require internet sites to take down deepfake pornographic images within 48 hours, respectively.

“Many women and girls are forever harmed by these crimes, having to live with being victimized again and again,” said Sen. Ted Cruz (R-TX), the sponsor of the Take It Down Act. “By creating a level playing field at the federal level and putting the responsibility on websites to have in place procedures to remove these images, our bill will protect and empower all victims of this heinous crime.”

And, in October 2023, President Joe Biden issued an executive order on AI that attempts to tackle this underlying issue by focusing on what can happen after publication. The order instructs the Commerce Department to create a technology that detects deepfake content and tracks it to its origin.

In its most basic form, this technology will have the creator “sign” the AI content with a cryptographic code. Web browsers, social media platforms, and other sites could then signal to users that specific content is AI-generated. Think of it as something like advertisement disclosures, where posts that contain paid promotions must be marked as an ad.
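
To illustrate the general idea, and not the specific system the executive order calls for, here is a minimal sketch of how such content “signing” could work, using the open-source Python cryptography package: the creator signs the image bytes with a private key, and a platform verifies that signature against the creator’s public key before labeling the content as AI-generated. The key handling and messages below are illustrative assumptions, not part of any mandated standard.

```python
# Minimal sketch (illustrative only): signing AI-generated content so platforms
# can verify its origin. Requires the "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Creator side: generate a keypair and sign the raw bytes of the generated image.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"raw bytes of an AI-generated image"  # placeholder content
signature = private_key.sign(image_bytes)

# Platform side: verify the signature that travels with the image.
# Any change to the bytes, or a signature from a different key, fails verification.
try:
    public_key.verify(signature, image_bytes)
    print("Signature valid: label the content as AI-generated, origin known")
except InvalidSignature:
    print("Signature invalid: content altered or origin unverified")
```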

But this technology can be easily manipulated, some experts say. And since AI models will always be several steps ahead of the technology that detects what they produce, it will be hard to create methods that effectively handle these issues in the future.

Outside of the United States, the EU passed the AI Act in March of this year. The law establishes tiered regulations for AI systems, which are separated into four categories: unacceptable, high risk, limited risk, and low risk. Deepfakes are currently classified as “limited risk,” meaning that their creators must disclose the works’ artificial origins to make viewers less susceptible to manipulation.

While experts acknowledge that this measure is a large step forward, many criticize the fact that the document includes no liability framework dictating who is responsible for deepfake misuse and no legal consequences for individuals who create deepfakes with malicious intent.

“The AI Act’s preventative measures can only stop malicious uses of deepfakes to a very limited extent,” said Philipp Hacker, the chair for Law and Ethics of the Digital Society at European University, when asked about the law’s limits. “Preventing all misuse is difficult without monitoring user behavior, which raises significant privacy and data protection issues.”

Despite these potential technologies and new regulations, Smith isn’t convinced that they will stop attackers from creating and spreading AI pornography.

“The person that did this is not smart,” said Smith. “But if they can still reproduce these images, then anyone can do it. I can see it being used as a means to control people, and it’s scary.”

