Haeun Rho | haeun.rho@yale.edu
In the last year, an epidemic of digital sex crime using deepfake technology has pervaded South Korea. A deepfake is a form of artificial intelligence-generated media that recreates video and photo content using an individual’s face, voice, or other characteristics. As seen on short-form media platforms like Instagram and TikTok, deepfakes are used to mock politicians or edit civilian faces onto pop stars. Such videos are easily produced through downloadable apps such as Deepfake Studio, Reface, or Face Swapper, which are free and accessible to anyone. However, users may abuse this technology to manipulate the faces of celebrities, friends, teachers, and family members into pornography.
“Send a picture of your crush now”:
Abusers of deepfake technology operate in group chats on a widely used messaging app called Telegram. Some of these deepfake pornography group chats contain over 400,000 international members. Pervasively advertised on X (formerly Twitter) and Instagram, these chats are often promoted with an eye-catching phrase like “send a picture of your crush now.” Telegram chat ‘bots’ are used to convert money into an in-chat currency called “diamonds” (one diamond is equivalent to 650 won, or $0.49). For one diamond, these bots produce deepfakes in a matter of seconds. Users also have the option to customize body parts—similar to how one can adjust the size of their eyes or face in a photo editing app—in return for more diamonds. By offering three free trials upon joining and an additional diamond for each friend invited, these chats rapidly expand their allure and reach.
“Your mom’s video’s highlight is when she…”:
In South Korea, deepfake crimes went viral as Telegram group chats categorized by city, school, and age formed rapidly. These group chats are called 능욕방 (“neung-yeok bang”), which directly translates to ‘humiliation chat,’ and they target 겹지인 (“gyeop-ji-in”), or ‘mutual acquaintances.’ Most chats require new members to send the administrator ten deepfake photos or videos, usually of their own acquaintances, before being admitted. Along with creating deepfake media, members expose the victims’ names, ages, phone numbers, social media handles, residences, and schools to further exploit the vulnerability of their acquaintances.
Many abusers exacerbate the harassment by blackmailing victims, threatening to send convincing deepfakes to their families unless they hand over money or even agree to sexual intercourse. Another common way teenagers torment victims is by trapping them in group chats flooded with inappropriate deepfakes, incessantly re-inviting them each time they leave. They emotionally abuse victims by pretending to warn them about their exposed information and deepfakes while recording their petrified reactions as another form of entertainment. In other cases, dehumanizing images of acquaintances are used interchangeably with emojis and memes in group chats.
“A new playground for teenagers”:
Deepfakes are a relatively new application of AI, and teenagers are among the most likely to understand and use the technology. A 2023 study revealed that 75.8 percent of deepfake victims in Korea were teenagers, a figure that fell to 70 percent in 2024. High school, middle school, and even elementary school students are being told by school administrators to take down any media of themselves on social platforms for protection. In an interview, a Korean female high-school sophomore said the horror is that anyone from her seat partner and trusted friends to teachers might have created or seen a deepfake photo of her, and she would never know. Her friend, who is a victim, fought against endless rumors following a fabricated video. The mental distress caused by deepfakes is unimaginably omnipresent in South Korea. A Korean news reporter on the radio described these Telegram chats as a “new playground for teenagers.”
“No Shame. Yes Anger.”:
Since this surge, there have been over 500 digital sex crime cases in 2024, most ending in impunity. To investigate the owners of Telegram group chats, police departments need permission from Telegram. The company does not cooperate, making it impossible to investigate the deeper workings of this crime. Korean law requires explicit evidence that an individual had malicious intent, and even when perpetrators are charged, it is for spreading false media online and defaming others—offenses punishable by at most ten months in prison. In response to the government’s inadequate support, the public has been fighting for legal protection against AI technology. Campaign phrases like “no shame, yes anger” and “my life is not your porn” circulate among advocacy groups.
On October 10, 2024, the government passed partial legislation criminalizing the possession, purchase, and viewing of sexual deepfake media. It seeks to prohibit the spread of deepfake media with a maximum sentence of three years in prison or a ₩30,000,000 fine (roughly $22,000 at the exchange rate above) for viewers, and seven years for creators and distributors. The measure will take effect once it receives President Yoon’s approval.
Opinions:
According to a study, South Korea ranked first in susceptibility to deepfake pornography in 2023. While researching this topic, I wondered why a country known for its conservative culture would, ironically, suffer the world’s worst outbreak of such an unconservative crime. I took this question and viewed it through a psychological and historical lens. South Korea has a prominent culture of competition in which people are constantly in a race, comparing themselves to others.
South Korea is a comparatively young country, founded in 1948, and even then it had to be rebuilt after the Korean War (1950–1953). Despite this late and impoverished start, Korea today marks its presence as a global leader in industries including technology, pop, fashion, and more. It reached this status in less than a century by firing up internal competition—selecting a small elite from a dense population—which ultimately set people in rivalry with one another. One example of Korea’s competitiveness is its intense study culture. From preschool onward, Korean students attend after-school tutoring academies called “hagwon” to preview content years in advance. All this effort culminates in a single, one-day college entrance exam (“Suneung”). Its results determine which universities students attend, and since only about three universities are considered prestigious by employers, the fierce competition puts students in constant rivalry. In Korea, it seems the goal is not to find one’s own path but to do better than those nearby. In such a small, densely populated country, people constantly look for one more way to be superior to others.
The root cause of Korea’s deepfake incidents lies in the sense of control over others that possessing a deepfake video confers. In a conservative country like South Korea, being the subject of sexually inappropriate media is considered a disgrace to one’s family and is looked down upon regardless of its authenticity. The power these criminals automatically assume by possessing such media of their acquaintances is what draws them further into digital sex crimes.
However, these deepfake incidents are not far away from us. The United States ranked second highest (after Korea) in susceptibility to deepfake pornography in 2023. Although deepfake pornography crimes have not reached the same scale as in South Korea, there have already been many victims. As AI develops further, it would not be surprising if the epidemic of deepfake pornography that South Korea is undergoing soon reaches the United States as well. We must adopt national and global precautionary measures to protect people against the harmful use of AI.
Writer’s Reflection:
My motivation for writing this piece was to shed light on a human rights violation to which anyone may fall victim. A single picture on social media or a phone is enough, and the perpetrator could be someone talking with you at this very moment, leaving no one to trust. If artificial intelligence is supposed to make our lives easier, why does it create discord among us? Through writing this piece, I must record that while technology and innovation are inevitable, continuously failing to regulate their usage can lead to uncontrollable consequences.