AI image generators can be used to create art, let shoppers try on clothes in virtual fitting rooms, or help design advertising campaigns.
But experts fear the darker side of these easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered using artificial intelligence or machine learning. Pornography created with the technology began to spread across the Internet several years ago when a Reddit user shared clips that swapped the faces of female celebrities onto the bodies of porn performers.
Since then, deepfake creators have spread similar videos and images targeting online influencers, journalists, and others with a public profile. Thousands of videos exist across a large number of websites. And some sites have been offering users the chance to create their own images, essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or to use the technology to harm former partners.
The problem, experts say, grew as it became easier to make sophisticated and visually appealing deepfakes. And they say it could get worse with the development of generative artificial intelligence tools that train on billions of images from the Internet and spit out novel content using existing data.
“The reality is that technology will continue to proliferate, it will continue to develop, and it will continue to be as easy as pushing a button,” said Adam Dodge, founder of EndTAB, a group that provides training on technology-enabled abuse. “And as long as that happens, people will undoubtedly…continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography, and fake nude images.”
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake pornography of herself 10 years ago when, out of curiosity, she used Google one day to search for an image of herself. To this day, Martin says she doesn’t know who created the fake images, or the videos of her engaging in sexual intercourse that she would later find. She suspects someone took a photo posted on her social media page or elsewhere and turned it into pornography.
Horrified, Martin contacted different websites over several years in an effort to get the images taken down. Some did not respond. Others took them down, only for her to find them again soon after.
“You cannot win,” Martin said. “This is something that is always going to be out there. It’s like it’s ruined you forever.”
The more she spoke out, she said, the more the problem escalated. Some people even told her that the way she dressed and posted images on social media contributed to the harassment, essentially blaming her for the images instead of their creators.
Ultimately, Martin turned her attention to legislation, advocating for a national law in Australia that would fine companies AU$555,000 (US$370,706) if they fail to comply with takedown notices from online safety regulators.
But governing the Internet is nearly impossible when countries have their own laws for content that is sometimes made on the other side of the world. Martin, currently a lawyer and legal researcher at the University of Western Australia, says she believes the problem must be brought under control through some kind of global solution.
Meanwhile, some AI companies say they are already restricting access to explicit images.
OpenAI says it has removed explicit content from the data used to train its image-generating tool DALL-E, limiting users’ ability to create such images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another image generator, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Stability AI spokesman Motez Bishara said the company's filter uses a combination of keywords and other techniques, such as image recognition, to detect nudity and returns a blurred image. But users can tamper with the software and generate whatever they want, because the company releases its code to the public. Bishara said that the Stability AI license "extends to third-party applications based on Stable Diffusion" and strictly prohibits "any misuse for illegal or immoral purposes".
Some social media companies have also tightened their rules to better protect their platforms against harmful material.
TikTok said last month that all deepfakes or manipulated content depicting realistic scenes must be tagged to indicate that they are fake or altered in any way, and that deepfakes of private figures and young people are no longer allowed. The company had previously banned sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
Gaming platform Twitch also recently updated its policies on explicit deepfake images after a popular streamer named Atrioc was found to have a deepfake pornography website open in his browser during a live stream in late January. The site featured fake images of other Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content, even if it’s intended to express outrage, "will be removed and will result in an enforcement", the company wrote in a blog post. And intentionally promoting, creating, or sharing the material is grounds for an instant ban.
Other companies have also tried to ban deepfakes on their platforms, but keeping them away requires diligence.
Apple and Google recently said they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake pornography is not common, but a 2019 report by the artificial intelligence firm DeepTrace Labs found it was almost entirely weaponized against women, and that the people most targeted were Western actresses, followed by South Korean K-pop singers.
The same app removed by Google and Apple had served ads on Meta’s platforms, which include Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company’s policy restricts both AI-generated and non-AI-generated adult content, and that it has restricted the app’s page from advertising on its platforms.
In February, Meta, as well as adult sites such as OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves on the Internet. The reporting site works for regular images as well as AI-generated content, which has become a growing concern for child safety groups.
“When people ask our senior leadership, what are the rocks coming down the hill that we are concerned about? The first is end-to-end encryption and what that means for child protection. And the second is AI and, specifically, deepfakes,” said Gavin Portnoy, a spokesman for the National Center for Missing & Exploited Children, which operates the Take It Down tool.
“We haven’t been able to formulate a direct answer yet,” Portnoy said.