FraudGPT, WormGPT and the rise of dark LLMs


The internet, a vast and indispensable resource for modern society, has a darker side where malicious activities thrive.

From identity theft to sophisticated malware attacks, cybercriminals keep coming up with new scam methods.

Widely available generative artificial intelligence (AI) tools have now added a new layer of complexity to the cyber security landscape. Staying on top of your online security is more important than ever.

One of the most sinister adaptations of current AI is the creation of “dark LLMs” (large language models).

These uncensored versions of everyday AI systems like ChatGPT are re-engineered for criminal activities. They operate without ethical constraints and with alarming precision and speed.

Cybercriminals deploy dark LLMs to automate and enhance phishing campaigns, create sophisticated malware and generate scam content.

To achieve this, they engage in LLM “jailbreaking”: using prompts that get the model to bypass its built-in safeguards and filters.

For instance, FraudGPT writes malicious code, creates phishing pages and generates malware designed to evade detection. It offers tools for orchestrating diverse cybercrimes, from credit card fraud to digital impersonation.

FraudGPT is advertised on the dark web and the encrypted messaging app Telegram. Its creator openly markets its capabilities, emphasising the model's criminal focus.
