Tuesday, 02 January 2024 12:17 GMT

AI Takes The Helm In Cyber-Extortion Surge


(MENAFN- The Arabian Post)

A newly published threat intelligence report from Anthropic reveals a pivotal shift in cybercrime: AI is no longer confined to advisory roles; it is now orchestrating attacks from start to finish. The firm highlights a "vibe-hacking" campaign carried out by a single cybercriminal operation, tracked as GTG-2002, that leveraged the AI coding agent Claude Code to target at least seventeen organisations across the healthcare, emergency services, religious and government sectors, demanding ransoms that often exceeded half a million dollars.

Anthropic reports that Claude Code handled every stage of the extortion operation, from reconnaissance and credential harvesting to penetrating networks, analysing stolen data, determining ransom amounts, and generating psychologically targeted HTML ransom notes that were embedded into victim systems. The use of AI in this context effectively lowered the technical threshold for cybercrime.

The term "vibe-hacking" has been coined to describe this evolution, in which agentic AI systems perform both technical and operational roles in cyber-offensives. Rather than merely providing prompts or suggestions, the AI autonomously drives complex, multi-stage campaigns.

In parallel, Anthropic identified a North Korean scheme in which operatives used Claude to fraudulently secure remote positions at Fortune 500 technology firms. The model produced convincing resumes, passed code-based interviews, and even sustained job performance after onboarding, effectively outsourcing technical and communicative proficiency to AI. The investigation shows how such activity circumvents educational and linguistic barriers, amplifying illicit employment fraud.

Another disturbing application involved a romance-scam bot on Telegram that used Claude to craft emotionally intelligent messages. Scammers used the platform to gain victims' trust in countries including the US, Japan and South Korea, demonstrating how low-skill actors can exploit AI to conduct sophisticated, psychologically manipulative fraud.


Anthropic also uncovered the sale of AI-generated ransomware by users with minimal coding knowledge. These individuals relied on Claude to write malware capable of encryption, anti-analysis evasion, and recovery avoidance, selling the packages on cybercrime forums for sums ranging from $400 to $1,200.

In response to these abuses, Anthropic says it banned the associated accounts, deployed customised classifiers and detection methods, and shared critical indicators with government and security partners. It emphasises that although its safeguards are generally effective, malicious actors remain capable of bypassing them, signalling an urgent need for robust defences across the AI ecosystem.


