Alarms are ringing about artificial intelligence deepfakes used to manipulate voters, such as a robocall mimicking President Biden's voice that was delivered to New Hampshire homes and a fake video of Taylor Swift endorsing Donald Trump.
But there's actually a much bigger problem with deepfakes that we don't pay enough attention to: deepfake nude videos and photos that humiliate celebrities and unknown children alike. One recent study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women or girls.
Taylor Swift's fake nude images took the internet by storm in January, but this goes far beyond her. Companies make money by selling ads and premium memberships on websites that host fake sex videos of famous actresses, singers, influencers, princesses and politicians. Google drives traffic to these graphic videos, leaving victims with little recourse.
In some cases, the victim is an underage girl.
Francesca Mani, 14, a high school sophomore in New Jersey, said she was summoned to the school's office by loudspeaker during class in October. There, her principal and counselor told her that one or more of her male classmates had used a “nudification” program to turn clothed photos of her into fake nude images. The boys had also created fake nude images of other sophomore girls.
Holding back tears and feeling violated and humiliated, Francesca staggered back to class. In the hallway, she said, she passed another group of girls crying for the same reason and a group of boys mocking them.
“I saw the boys laughing, and I was so angry,” Francesca said. “I went home after school and told my mom that we need to do something about this.”
Francesca, now 15, started a website about deepfakes, aiheelp.com, and began meeting with state legislators and members of Congress to bring attention to the issue.
Manipulated images have always existed, but artificial intelligence has made them far easier to produce. With a single good image of a person's face, it is now possible to create a 60-second sex video of that person in about 30 minutes. The videos may then be posted on general pornography websites where anyone can view them, or on specialized deepfake sites.
The videos there are graphic and sometimes sadistic, depicting, for example, bound women being raped and urinated on. One site offers categories such as “rape” (472 videos), “crying” (655) and “degradation” (822).
Then there are the “nudification” or “undressing” websites and apps of the kind that targeted Francesca. “Take your clothes off with just a click!” one urges. These services overwhelmingly target women and girls; some cannot even generate nude images of men. A British study of AI-generated sexual images of children found that 99.6 percent depicted girls, most commonly between the ages of 7 and 13.
Online analytics firm Graphika identified 34 nudify websites that received a total of 24 million unique visitors in September alone.
When Francesca was targeted, her family consulted the police and lawyers but found no recourse. “There's no one to turn to,” her mother, Dorota Mani, told me. “The police say, ‘We're sorry, but we can't do anything.’”
The problem is that there is no law that is clearly being violated. “We still don't have a legal framework that is agile enough to deal with this technology,” said Yota Souras, chief legal officer at the National Center for Missing and Exploited Children.
Documentary filmmaker Sophie Compton, who made a film on the subject, “Another Body,” was so appalled that she started a campaign and a website, MyImageMyChoice.org, to demand change.
“It's become a kind of crazy industry based entirely on consent violations,” Compton said.
This impunity reflects a callous attitude toward humiliating victims. According to one study, 74 percent of deepfake porn users reported not feeling guilty about watching the videos.
Today we have a hard-won consensus that unwanted kisses, groping and demeaning comments are unacceptable. So why is this other form of violation tolerated? Why do we apparently care less about protecting women and girls from online degradation?
“Most survivors I talk to say they have considered suicide,” says Andrea Powell, who works with people who have been deepfaked and is developing strategies to address the issue.
This is a burden that falls disproportionately on prominent women. One deepfake website displays the official portrait of a female member of Congress, followed by 28 fake sex videos of her. Another site has 90. (I am not linking to these sites; unlike Google, I have no intention of driving traffic to them and helping them profit from nonconsensual imagery.)
In rare cases, deepfakes have targeted young boys, often for “sextortion,” in which predators threaten to disseminate embarrassing images unless the victim pays or provides nudes. Last year, the FBI warned of the rise in deepfakes being used for sextortion, which in some cases could be a contributing factor to child suicide.
One 14-year-old boy reported to the National Center for Missing and Exploited Children: “The images look very real, and there's even a video of me doing things I don't like that also looks scarily real.” The boy sent his debit card information to a criminal who had threatened to post the fake images online.
The way I see it, Google and other search engines are recklessly directing traffic to porn sites that traffic in nonconsensual deepfakes. Google is integral to the business model of these unscrupulous companies.
In one search I did on Google, seven of the top 10 video results were explicit sex videos involving female celebrities. The same search on Microsoft's Bing returned such videos in all 10 of the top results. But this is not inevitable: on Yahoo, it wasn't the case.
In other areas, Google does the right thing. Ask it “How can I commit suicide?” and it doesn't provide step-by-step guidance; its first result is a suicide helpline. Ask it “How can I poison my spouse?” and it isn't much help. In other words, Google is socially responsible when it wants to be, but it seems unconcerned about women and girls being violated by pornographers.
“Google really has to take responsibility for causing this kind of problem,” Breeze Liu, herself a victim of revenge porn and deepfakes, told me. “It has the power to stop this.”
Liu was devastated when she received a message from a friend in 2020 telling her to drop everything and call her immediately.
“I hope you don't panic,” the friend said when Liu called, “but there's a video of you on Pornhub.”
It turned out to be a nude video that had been recorded without Liu's knowledge. It was soon downloaded and posted on a number of other porn sites, then used to spin deepfake videos depicting her performing sex acts. In all, the material appeared on at least 832 links.
Liu was distraught. She didn't know how to tell her parents. She climbed onto the roof of a tall building and prepared to jump.
In the end, Liu did not jump. Instead, like Francesca, she was furious and determined to help others in the same situation.
“We are slut-shamed, while the perpetrators walk completely free,” she told me. “It doesn't make sense.”
Liu, who previously worked at a technology venture capital firm, founded Alecto AI, a startup that aims to help victims of nonconsensual pornography find and remove their images. A pilot version of the Alecto app is currently available free for Apple and Android devices, and Liu hopes to partner with tech companies to help remove nonconsensual content.
Technology, she argues, can address the problems it creates.
Google agrees there is room for improvement. No Google official was willing to discuss the issue with me on the record, but Kathy Edwards, the company's vice president of search, said in a statement that Google is committed to building on its existing protections and supporting those affected.
“We are actively developing additional safeguards on Google Search,” the statement added, noting that the company has set up a process for victims of deepfakes to request that such links be removed from search results.
Microsoft spokeswoman Caitlin Roulston issued a similar statement, noting that the company has a web form through which people can request that links to nude images of themselves be removed from Bing search results. The statement encouraged users to adjust their SafeSearch settings to “block unwanted adult content” and acknowledged that “more work needs to be done.”
I'm not impressed. I don't see why Google or Bing should drive traffic to deepfake websites whose business is nonconsensual sex and nude imagery. Search engines are pillars of that vile and exploitative ecosystem. Google and Bing can do better.
AI companies aren't as culpable as Google, but they haven't been as careful as they could be. Rebecca Portnoff, vice president of data science at Thorn, a nonprofit that builds technology to combat child sexual abuse, points out that AI models are trained on images scraped from the internet, and that companies can steer that training away from websites containing abusive sexual imagery. In short, the models cannot easily create what they do not know.
President Biden signed promising executive orders last year that seek to put safeguards on artificial intelligence, including deepfakes, and several bills have been introduced in Congress. Some states have enacted their own measures.
I am in favor of criminalizing deepfakes, but laws that are easy to enact can be difficult to enforce. A simpler and perhaps more effective tool is civil liability for the damage deepfakes cause. Tech companies are currently shielded from most such liability by Section 230 of the Communications Decency Act, but if they faced lawsuits and damages, their incentives would change, they would police themselves, and some deepfake companies' business models would collapse.
Colorado Democratic Sen. Michael Bennet and others have proposed creating a new federal regulatory agency to oversee technology companies and new media, much like the Federal Communications Commission oversees old media. That makes sense to me.
Australia appears to be a step ahead of other countries in regulating deepfakes, perhaps in part because Noelle Martin, a woman from Perth, was targeted by someone who manipulated her images into pornography when she was 17. Furious, she became a lawyer and has dedicated herself to fighting these abuses and lobbying for stronger regulation.
One result was a wave of retaliatory fake images targeting her, some of which included her underage sister.
“This form of abuse can be permanent,” Martin told me. “It affects, potentially forever, an individual's education, employability, future earning capacity, reputation, interpersonal and romantic relationships, and mental and physical health.”
I have come to believe that the biggest obstacle to regulating deepfakes isn't technological or legal (though those challenges are real) but simply our collective complacency.
Society used to be indifferent to domestic violence and sexual harassment. In recent decades, we have gained empathy for victims and created a system of responsibility that fosters a more civilized, albeit imperfect, society.
The time has come for similar accountability in the digital space. New technologies are emerging, yes, but we don't have to succumb to them. It mystifies me that society apparently believes women and girls must accept being tormented by humiliating imagery. Instead, we should stand with victims and crack down on the deepfakes that allow companies to profit from sexual degradation, humiliation and misogyny.
If you are considering suicide, contact the National Suicide Prevention Lifeline by calling or texting 988 or visit SpeakingOfSuicide.com/resources for a list of additional resources.