AI Fueling A Deepfake Porn Crisis In South Korea


(Asia Times) It's difficult to talk about artificial intelligence without talking about deepfake porn – a harmful AI byproduct that has been used to target everyone from Taylor Swift to Australian schoolgirls.

But a recent report from the startup Security Heroes found that of 95,820 deepfake porn videos analyzed from various sources, 53% featured South Korean singers and actresses – suggesting this group is disproportionately targeted.

So, what's behind South Korea's deepfake problem? And what can be done about it?

Deepfakes are digitally manipulated photos, videos or audio files that convincingly depict someone saying or doing things they never did. Among South Korean teenagers, creating deepfakes has become so common that some even view it as a prank. And they don't just target celebrities.

On Telegram, group chats have been made for the specific purpose of engaging in image-based sexual abuse of women, including middle-school and high-school students, teachers and family members. Women who have their pictures on social media platforms such as KakaoTalk, Instagram and Facebook are also frequently targeted.

The perpetrators use AI bots to generate the fake imagery, which is then sold and/or indiscriminately disseminated, along with victims' social media accounts, phone numbers and KakaoTalk usernames. One Telegram group attracted some 220,000 members, according to a Guardian report.
