The Notoriety of Deep Nudes
On the deteriorated state of the internet, cyberbullying with AI, and the pit-filled road ahead.
When I turned thirteen, Facebook turned three.
I entered my teenage years in the same period that Facebook mushroomed into the mainstream. I created an account the minute I stepped into high school. Many of my friends had signed up even earlier; a simple turn of the age dial was enough to get an eleven- or twelve-year-old online.
We entered the world of Facebook at a time when social media as a concept was still unknown to our parents and other boomers. It was a time of unregulated content, zero policies on child protection, and free rein for unconstrained speech and harassment.
I was a fairly private person who preferred fanfiction to Facebook. But even I used to go online to trawl through my classmates’ walls and then gossip with my friends the next day. In the beginning, things were innocent enough. Some comments, a few shy pokes, many hesitant friend requests.
But soon the wide-eyed flirting turned dark. Once teenagers got the hang of the platform, the real trouble began. Fake accounts kept popping up, most of them of girls. They were under different names, of course, but were all used for a range of nefarious purposes, from not-so-harmless pranking to highly dangerous catfishing.
Then the online bullying began. Some girls were slandered by anonymous users, their faces photoshopped onto inappropriate memes and worse. Gossip would run through our narrow school building like wildfire, reputations carelessly thrown out the window.
At that time, there was still one advantage on our side: doctored images were easy to pick out. You could convincingly declare the images fake and yourself innocent.
Sixteen years later, social media bullying has not come to an end. Instead, it has grown to include even more ways to harass and intimidate young children. And unfortunately, it’s no longer as easy to pick apart the fake from the real. Throw in the unceasing patriarchy, profit-focused tech bros, and thoughtless victim blaming, and it’s easy to see that times have only gotten tougher for young girls.
For we are in the age of deep nudes. And any woman can find herself on the receiving end of a crude and vulgar post online.
From Deepfakes to Deep Nudes
The first time an AI was used to scissor away a woman’s face and graft it onto explicit material was in 2017, a full seven years ago, when a redditor under the name ‘deepfakes‘ uploaded such content online. One would expect such behavior to be flagged and shut down at once. But that did not happen.
Instead, his username became legend. His posts became so popular that he started a subreddit named after himself, ‘r/deepfakes‘, dedicated solely to exchanging these creations, many of which involved swapping celebrities’ faces onto the bodies of actors in pornographic videos. The community attracted 15,000 users in just two months. And no, it was not shut down at once.
His idea quickly caught the imagination of many a vile-minded but technologically advanced pervert. Deepfake apps and open-source repositories on Microsoft-owned GitHub proliferated. The AI involved grew increasingly sophisticated as the years passed. And now, with the widespread use of GPTs and the like, what used to be fringe behavior has solidly entered the mainstream.
Generating a doctored and damaging image is as easy as typing a single-line prompt into a chat window. And since the data used to train these models is already heavily biased and sexist, it’s only too easy to generate vulgar images with GenAI. Deepfakes have evolved into deep nudes: nude images of women can be generated in moments by supplying the AI model with a photo plucked off an unsuspecting user’s social media.
The Big Picture
Mainstream media and cybersecurity-focused government agencies are very much aware of deepfakes and the problems they cause. Yet in 2024, the UN remains more concerned with protecting powerful men like Tom Cruise and Rishi Sunak than the thousands of women who get picked apart online.
The public conversation around deepfakes has consistently revolved around political figures and oh-what-if scenarios that might affect businesses. The ongoing tragedies faced by everyday women, high-school girls, and even famous celebrities like Taylor Swift don’t get more than an hour’s worth of coverage. Indeed, in India, the journalist Rana Ayyub was herself made a victim of AI-generated abuse when she spoke up for a child rape victim in 2018.
Our current unregulated, private-equity-minded tech ecosystem enables the worst of our societies. With the conversation turned away from the real, ongoing crimes and the very real damage done to women, perpetrators carry on fearlessly.
The truth is that 98% of deepfakes online are pornographic in nature, and 99% of those depict women. Only 2% of all deepfakes target famous public figures or money-making schemes.
From 2022 to 2023, the number of deep nudes uploaded to Big Tech’s servers increased by more than 54%, a 550% increase over 2019.
The worst part of all this? The majority of those who create and post this despicable content online do not feel guilty about it.
The Fight Ahead
However, there is reason for cautious optimism. Legislators are working to combat deepfakes, since a small portion of them did affect public figures. The attack on Taylor Swift in particular was a wake-up call to lawmakers on both sides of the Atlantic. Meanwhile, parents of teenage girls harassed in American high schools are fighting on the ground, pushing their state representatives to do more and to pass stricter regulations.
Some states, including New York, California, and Virginia, have already banned the sharing of non-consensual deep nudes. Missouri has introduced its own “Taylor Swift Act“ aimed at curbing non-consensual intimate depictions online. Others are working towards the same.
The EU’s AI Act tackles deepfakes by requiring that such content be clearly disclosed as AI-generated, in the hope that increased transparency will hinder bad actors. It remains to be seen how effective that will be. And only in 2027 will creating and sharing such deepfakes become a criminal offense. In a refreshing change, the UK is moving ahead of the EU and has declared the creation of deep nudes illegal.
The problem of deepfakes is even harder to tackle in a country like India, where more than half the population is active online and that online population is predominantly male. Countries like South Korea and India, with entrenched gender norms and sexism combined with an active social media populace, face a tough challenge in combating deep nudes. The Indian government updated the IT Act last year to cover deepfakes, but its efficacy is yet unknown.
Seven years since the dawn of deepfakes, there remains an ineffective hodgepodge of cybersecurity laws and compliance mechanisms. Tech companies continue to wash their hands of any blame. But how hard is it to add an if condition that looks out for innocent women and their bodies?
If, as a teenager living in India, I was aware of the online bullying and degradation of women on the internet, I’m sure Mark and his ilk have gotten wind of it over the past two decades. The truth is that this harmful content left the dark web and entered the everyday internet a long time ago. It is very much possible to detect nude images and videos, and it is an even simpler task to stop that content at its source.
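To make that claim concrete, here is a minimal sketch of what such a check could look like at upload time. Everything in it is illustrative: nsfw_score stands in for whatever nudity-detection model or third-party moderation service a platform already runs, and the threshold is an assumed value, not a recommendation.

```python
# Illustrative sketch of an upload-time moderation gate.
# nsfw_score() is a hypothetical stand-in for a real explicit-content
# classifier; large platforms already operate such models at scale.

from dataclasses import dataclass

NSFW_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this against review data


@dataclass
class UploadDecision:
    allowed: bool
    reason: str


def nsfw_score(image_bytes: bytes) -> float:
    """Return the probability that an image is sexually explicit.

    Placeholder only: plug in the platform's own classifier or a
    third-party moderation API here.
    """
    raise NotImplementedError


def moderate_upload(image_bytes: bytes) -> UploadDecision:
    """The 'if condition' asked about above: score the image before it is
    published, and hold anything explicit for human review instead of
    letting it go live."""
    score = nsfw_score(image_bytes)
    if score >= NSFW_THRESHOLD:
        return UploadDecision(allowed=False, reason=f"held for review (score={score:.2f})")
    return UploadDecision(allowed=True, reason="passed automated screening")
```

The point is not that a dozen lines of Python solve the problem, but that the gating logic itself is trivial once a detection model exists, and such models have existed for years.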
It’s clear to the public and to anyone working in tech that it’s no longer a question of can’t, but a matter of won’t. As the old adage goes, sex sells, and tech companies are willing to look away if they can get more eyes on their ads.
It’s time to change that.
Only if businesses, governments, and the public work together can we even begin to protect the vulnerable members of our society. This is one of those instances where, if you see something, you should definitely say something. In fact, go ahead and raise some hell.