AI Defences Escalate While Threats Evolve
Organizations are deploying artificial-intelligence-driven defences to counter increasingly sophisticated cyber threats as attackers harness the same technology to mount agile attacks. Leaders at major cybersecurity firms say the conflict between defenders and adversaries has entered a new phase, where speed, adaptability and automation are critical.
The security firm Palo Alto Networks announced upgraded platforms designed to protect cloud assets and AI-based applications, citing the need to monitor and respond to infrastructure-level threats. Its CEO highlighted the human-in-the-loop design that allows oversight even as its agent-based AI covers the bulk of detection and response workloads.
At the same time, industry analysis shows attackers are using generative AI, large-language-model tools and autonomous malware agents to launch phishing campaigns, deepfake scams and polymorphic malware at scale. One report indicates that more than 70 per cent of major breaches now involve a polymorphic malware component, and that AI-based phishing volumes have risen more than twelvefold.
Key players in the industry include security vendors integrating behavioural analytics, anomaly detection and continuous model retraining into their toolkits. For example, AI systems now monitor user and entity behaviour across cloud and on-premises environments, establishing behavioural baselines and flagging deviations that might signal credential misuse or insider threats. As one expert put it, defenders must deploy “a good-guy AI to fight a bad-guy AI”.
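The baselining approach described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's actual implementation: it maintains a running per-entity baseline using Welford's online algorithm and flags observations that deviate from it by more than a set number of standard deviations. The class name, threshold and warm-up count are all hypothetical choices.

```python
import math
from dataclasses import dataclass


@dataclass
class BehaviouralBaseline:
    """Online baseline for one entity's activity metric (e.g. hourly logins)."""
    threshold: float = 3.0  # flag deviations beyond this many std devs
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0         # running sum of squared deviations (Welford)

    def update(self, value: float) -> bool:
        """Fold in one observation; return True if it looks anomalous."""
        anomalous = False
        if self.n >= 10:  # require a minimum baseline before flagging
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) / std > self.threshold:
                anomalous = True
        # Welford update of the running mean and variance accumulator
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous


# Usage: steady activity builds the baseline; a sudden spike is flagged.
baseline = BehaviouralBaseline()
for rate in [4, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 5]:
    baseline.update(rate)
spike_flagged = baseline.update(500)
```

In a real deployment the same idea would run per user or per entity across many features, but the core pattern of a continuously updated baseline plus a deviation test is the one described above.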
Big technology firms are also introducing AI-agent systems that can triage alerts, prioritise incidents and initiate automated responses, helping relieve security teams overwhelmed by the volume and speed of attacks. Analysts observe a growing trend of “autonomous incident response”, where manual intervention happens only in high-risk or uncertain cases. These developments reflect a shift from static defence systems to dynamic, adaptive cybersecurity postures.
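A minimal sketch of such a triage policy, assuming hypothetical risk and confidence scores produced by an upstream detection model, might route alerts like this. The thresholds and names are illustrative, not any vendor's product logic; the key property is the one analysts describe: automation handles the clear cases, and a human sees anything high-risk or uncertain.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    source: str        # e.g. "edr", "email-gateway"
    score: float       # model-assigned risk score in [0, 1]
    confidence: float  # model confidence in that score, in [0, 1]


def triage(alert: Alert, auto_threshold: float = 0.4,
           confidence_floor: float = 0.8) -> str:
    """Route an alert: automate clear low/medium risk, escalate the rest."""
    if alert.confidence < confidence_floor:
        return "escalate-to-analyst"  # uncertain: a human decides
    if alert.score >= 0.9:
        return "escalate-to-analyst"  # high risk: human oversight required
    if alert.score >= auto_threshold:
        return "auto-contain"         # e.g. isolate host, revoke session
    return "auto-close"               # benign noise: suppress


routed = triage(Alert("edr", score=0.5, confidence=0.95))
```

The confidence gate is what keeps the system "human-in-the-loop": a confident medium-risk alert is contained automatically, while the same score with low model confidence is handed to an analyst instead.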
Opponents of this shift emphasise the potential risks. AI-powered attack tools create a widening attack surface. Adversaries are using AI not only to craft more convincing social-engineering lures and deepfakes but also to launch self-learning attacks with record-fast breakout times. Some researchers warn that agentic malware, malicious code that can autonomously decide and adapt in the wild, may become mainstream in critical sectors like energy, finance and healthcare within the next two years.
This arms-race dynamic is pushing organisations to re-examine their strategies. Security professionals stress that AI should augment, not replace, human analysts. They argue that strong foundations, such as authentication, access control, logging and network segmentation, remain essential, and that AI tools must be integrated within broader security operations centres under human oversight. At the same time, they note that AI models themselves introduce novel risks: adversarial inputs, prompt-injection attacks and vulnerabilities in large language models offer fresh attack vectors.
A growing body of academic work supports the need for hybrid systems. Research outlines how large-language-model-based intelligence copilots can assist human analysts by synthesising threat reports, contextualising alerts and guiding remediation. Another study explains how dynamically retrainable firewalls can respond in real time to shifting traffic patterns, suggesting next-generation defences will require continuous learning and adaptation.
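The "dynamically retrainable" idea can be illustrated with a toy rule whose blocking threshold is periodically refit from a sliding window of recent traffic, so the rule adapts as patterns shift rather than staying static. This is a simplified sketch of the concept, not the cited research's method; all names and parameters are hypothetical.

```python
from collections import deque


class AdaptiveRateFilter:
    """Toy 'retrainable firewall' rule: the blocking threshold is
    periodically recomputed from a sliding window of recent request
    rates, so the rule tracks shifting traffic instead of staying static."""

    def __init__(self, window: int = 100, refit_every: int = 20, k: float = 3.0):
        self.recent = deque(maxlen=window)  # recent per-interval request rates
        self.refit_every = refit_every
        self.k = k
        self.seen = 0
        self.threshold = float("inf")       # permissive until first refit

    def _refit(self) -> None:
        # the 'retraining' step: threshold = window mean + k * window std
        n = len(self.recent)
        mean = sum(self.recent) / n
        var = sum((x - mean) ** 2 for x in self.recent) / n
        self.threshold = mean + self.k * var ** 0.5

    def allow(self, rate: float) -> bool:
        """Decide on the current observation, then learn from it."""
        decision = rate <= self.threshold
        self.recent.append(rate)
        self.seen += 1
        if self.seen % self.refit_every == 0:
            self._refit()
        return decision


# Usage: normal traffic trains the threshold; a flood is then rejected.
fw = AdaptiveRateFilter()
for _ in range(8):
    for rate in (8, 9, 10, 11, 12):
        fw.allow(rate)
flood_allowed = fw.allow(100)
```

A production system would retrain a real model on labelled flows rather than a mean-plus-deviation cutoff, but the continuous observe-decide-refit loop is the pattern the research describes.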
Notice an issue? Arabian Post strives to deliver the most accurate and reliable information to its readers. If you believe you have identified an error or inconsistency in this article, please don't hesitate to contact our editorial team at editor[at]thearabianpost[dot]com. We are committed to promptly addressing any concerns and ensuring the highest level of journalistic integrity.
Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
