In the age of AI, it’s easy to make deepfake porn. But victims find it hard to undo the damage

Windwhistler

“The shock was something I didn’t expect.”

That’s how L., a 21-year-old law student at the University of Hong Kong, remembers feeling the moment she discovered another student had used artificial intelligence to create pornographic images of her without her consent.

Initially, she blamed herself, wondering if she had done something he had misunderstood. “I was super shocked and quite afraid at that time, and I stopped using social media for a while, because I had no idea who would have access to my photos,” L. told CBC.

More than 700 photos, including AI-generated pornographic images, were discovered by the male student’s former partner when she borrowed his laptop in February, according to three of the alleged victims, L., B. and H. All three spoke to CBC on condition of anonymity, because they didn’t want to be recognized by their classmates.

The laptop contained around 20 folders of women, which included screenshots taken from their social media without their consent and explicit images, created using AI tools, depicting them naked or performing sexual acts, L., B. and H. said.

The three HKU law students decided to go public about the incident in July, setting up an Instagram account to publish a letter detailing what had happened. They said that after being questioned by his ex-partner, the male student admitted using photos of the other students, screenshotted from social media, to create AI-generated pornographic images with their faces.

H. said the University of Hong Kong initially asked him to write an apology letter, which he did, and sent him a warning letter to be kept in his personal file for internal reference. H. didn’t think this went far enough in holding him accountable. B. told CBC she still had to see him in class at least four times after they contacted the university.

Worldwide problem

The university later said it was reviewing the incident and vowed to take further action. In a statement to CBC, a spokesperson for the University of Hong Kong said the case was still under review. John Lee, Hong Kong's chief executive, even weighed in on the case, pledging in July to research international “best practices” on regulating AI.

WATCH | The notorious AI porn kingpin with a double life:

MrDeepFakes was the world’s most notorious website for non-consensual deepfake porn until CBC’s visual investigations team — along with partners Bellingcat, Tjekdet and Politiken — exposed the Canadian pharmacist who played a key role in the site.

Support is available for anyone who has experienced sexual violence. You can access crisis lines and local support services through this Government of Canada website (https://bit.ly/3D1rUmb) or the Ending Violence Association of Canada database (https://bit.ly/3ilpp67). If you’re in immediate danger or fear for your safety or that of others around you, please call 911.

But months later, the three HKU students continue to live without closure. B. told CBC their complaint to Hong Kong’s Equal Opportunities Commission has been closed, because the conduct didn’t amount to sexual harassment under the ordinance and there was insufficient information to continue the investigation. In a statement to CBC, the Equal Opportunities Commission said it was “committed to handling enquiries and complaints under the anti-discrimination ordinances in an impartial, fair, just and objective manner.”

The three women told CBC they haven’t filed a police report, because Hong Kong doesn’t have a law criminalizing the creation of AI-generated pornography.
The case points to a worldwide problem when it comes to policing and regulating AI porn made without consent.

“Once [someone has] possession of these images, you won’t know how they would be used. They could be used personally, or they could be spread underground or publicly,” said H. “You cannot control [it].”

B. said there was also a chance the perpetrator’s computer could be hacked, so whether or not the male student intended to publish the photos, there was still a risk the images could become widely available.

“Criminalization is a signal to the public that this action is publicly wrong,” B. added.

Addressing ‘deepfakes’

In 2021, Hong Kong passed legislation covering publishing or threatening to publish intimate images or videos without consent, including “deepfakes,” which are images or videos that have been digitally altered using AI to replace the face of one person with another. The offences are punishable by up to five years in jail. However, the law does not cover the creation or possession of such material, a loophole legal experts say needs to be closed.

Sharing non-consensual deepfake porn isn’t a crime in Canada. During his 2025 federal election campaign, Prime Minister Mark Carney pledged to pass a law that would criminalize “producing and distributing non-consensual sexual deepfakes.”

South Korea passed a law last year that criminalizes not only the possession but also the consumption of such content. The United Kingdom has also made the creation or distribution of sexually explicit deepfakes a criminal offence, following a surge in incidents in recent years.

“Nudify” apps typically use AI to create fake nude images of people without their consent. Pop star Taylor Swift and U.S. Democratic politician Alexandria Ocasio-Cortez are among the victims of non-consensual explicit deepfakes.

Clare McGlynn, a law professor at Durham University who helped draft the U.K. law that made pornographic deepfakes illegal, told CBC that social media platforms must do more.
Clare McGlynn, a law professor at Durham University in the U.K., says social media platforms must do more to stop the proliferation of deepfake pornography. (Submitted by Clare McGlynn)

She said the issue is increasingly widespread and pointed to the work of San Francisco city attorney David Chiu, who took legal action in August 2024 against 16 of the most-visited websites creating AI-generated non-consensual explicit images.

His office said the websites targeted in the lawsuit had been visited more than 200 million times in the first half of 2024. In June of this year, his office said 10 of the sites were offline or no longer accessible in California. The attorney’s office named some of the companies that operated the websites, including U.S.-based Sol Ecom and Briver and U.K.-based Itai Tech Ltd.

McGlynn said even more mainstream AI tools, such as Elon Musk’s Grok chatbot, have “very few guardrails.” According to a report by The Verge, Grok Imagine’s new “spicy” mode “didn’t hesitate to spit out fully uncensored topless videos” of Taylor Swift without even being asked to produce explicit content. Meanwhile, OpenAI’s popular ChatGPT will soon allow erotic content, according to OpenAI boss Sam Altman.

Canada has been debating regulating platforms, such as by forcing companies to take down content deemed to be child sexual abuse material or intimate photos and videos shared without consent, including deepfakes.

RainLily, a unit of the Hong Kong-based Association Concerning Sexual Violence Against Women, provides counsellors to support victims through the criminal justice process. (Submitted by Vince Chan)

McGlynn believes that if governments crack down on the large platforms, they wouldn’t need to worry as much about the smaller ones.

“Those are the gateways, the pipeline. It’s on TikTok. The boys see the nudify apps advertised…. If we stopped that, then we could really deal with this,” McGlynn said.
Material increasingly lifelike

Vince Chan from the Hong Kong-based Association Concerning Sexual Violence Against Women said it is becoming increasingly difficult to identify AI-generated content, as the technology has advanced quickly. For example, he said that in just the last two years, it has become harder to spot faces swapped onto existing pornographic content.

“Glitches and distortions were more obvious [a couple of years ago], but now, the results can be very lifelike unless the content is scrutinized very closely,” he said.

In the past year, his organization received 11 requests for help involving deepfake intimate images, a 38 per cent increase from the year before.

Hong Kong’s privacy watchdog has launched a criminal investigation into the case of the University of Hong Kong law students. But barrister Michelle Wong, who works with victims of sexual assault and harassment, said the privacy commissioner’s power is limited.

Hong Kong-based barrister Michelle Wong works with victims of sexual assault and harassment and says the privacy commissioner’s power to stop AI deepfake porn is limited. (Submitted by Michelle Wong)

She says the commissioner’s primary focus is the disclosure or misuse of personal data, which makes it difficult to fit this case into its remit.

Wong said Hong Kong criminalizes creating fake pornographic images of children under the age of 16, so a legal framework already exists for these kinds of offences. “I believe the original legislative intent is that children need to be protected. But obviously now we may see a trend saying that actually, adults above 16 should also be protected by the same legal framework,” she said.

’The internet feels quite lawless’

W., 30, who was a victim of AI porn about five years ago and asked to remain anonymous because she doesn’t know the identity of the perpetrator, remembers when friends forwarded her photos they had received through a direct message on social media, asking if it was her.

“They think it’s real. I also thought it looked very, very close to reality in those pictures, including my facial features,” she told CBC, pointing to a mole on her face.

W. said at the time, she cried every day and had nightmares. She skipped work, too afraid to leave the house because she worried the person who sent the photos knew where she lived. She didn’t report it to the police either, because she doubted they would handle her case sensitively.

W. believed the perpetrator took photos from her social media account. Four years later, she still wonders about their identity and wishes they would be arrested. She remains nervous when men she doesn’t know approach her.

While she’s glad there’s growing awareness of AI-generated pornography, she feels helpless about what can be done about it.

“The internet feels quite lawless,” she said. “Even if a law is implemented, how it’s enforced remains a larger question.”
