North Korean-Linked Hackers Target South Korean Defense Entities
(MENAFN) A hacking collective allegedly connected to North Korea has executed a cyberattack targeting South Korean organizations, including a defense-related institution, by employing artificial intelligence (AI)-generated deepfake images, according to a report from a South Korean security institute on Monday.
The Kimsuky group, a hacking unit believed by Seoul to be backed by the North Korean government, attempted a spear-phishing operation on a military-affiliated organization in July, a news agency reported, referencing findings from the Genians Security Center (GSC).
Spear phishing is a deceptive tactic in which emails made to appear as if they come from trusted sources are used to extract sensitive information.
The GSC report explained that the attackers sent an email disguised as a communication about ID issuance for military-related personnel; the message contained malicious code.
The ID card image employed in the attack is thought to have been created using a generative AI model.
Ordinarily, AI platforms, including ChatGPT, refuse requests to produce copies of military identification, citing legal protections on government-issued documents.
Despite this, the hackers seemingly circumvented safeguards by asking for mock-ups or sample designs framed as "legitimate" requests, instead of directly duplicating real IDs.
The report also noted that incidents like this underscore Pyongyang's "growing attempts to exploit AI services for increasingly sophisticated malicious activities."
