AI Agents Deepen Identity Security Worries | Arabian Post
A global survey of 1,100 organisations across the US, UK, France, Germany, Italy, Spain, Australia and Singapore found that 93% either already use or plan to use AI agents for security tasks such as password resets and VPN access. The finding points to a rapid shift from experimental AI use towards direct involvement in identity-linked operations, where a single error or compromise can give attackers access to critical systems.
Security teams are under pressure to automate routine help-desk and access-management work as cyber threats grow in volume and complexity. AI agents can reduce waiting times, enforce standard procedures and help stretched teams handle repetitive requests. Yet their use in password resets, VPN approvals and privileged workflows also creates a new class of non-human identities that must be tracked, authenticated, limited and audited.
Survey data showed that 29% of organisations already use AI agents to manage security-related help-desk tickets, including password resets and VPN access. A further 65% intend to do so within the next year. That adoption curve suggests AI agents are likely to become a standard layer in enterprise security operations, rather than a narrow pilot project confined to innovation teams.
Identity infrastructure has become one of the most sensitive areas of corporate cyber defence. Active Directory, Entra ID, Okta and related platforms control employee access, administrative privileges, device permissions and application credentials. Compromise of such systems can allow attackers to move laterally across networks, escalate privileges and disrupt recovery efforts after ransomware or data-theft incidents.
Concerns are sharpened by the proximity of AI tools to high-value credentials. Ninety-two per cent of organisations said AI is installed on at least some local machines where it can access SSH or encryption keys. That exposure increases the risk that a compromised endpoint or poorly governed agent could leak credentials, execute unintended actions or become a bridge into wider systems.
Confidence in recovery remains uneven. Only 32% of organisations globally said they were very confident they could regain control if AI exposed administrative credentials. The figure was higher in the US, at 53%, but fell sharply in France, where only 12% expressed strong confidence. The gap underlines a broader divide between AI adoption and the operational readiness needed to contain identity-led incidents.
Security leaders are also confronting a governance problem. Sixty-five per cent of organisations said AI identities are fully registered, authenticated and authorised in a formal system, while 6% admitted they do not track them at all. Among organisations that do track AI identities, 57% use the same system as human identities and 43% rely on a separate authentication and authorisation model.
That fragmentation can complicate oversight. AI agents often act across multiple applications, handle tokens, trigger workflows and interact with data that would otherwise require human approval. Without consistent logging, role-based limits and rapid revocation mechanisms, organisations may struggle to determine who authorised an action, which system executed it and whether the action stayed within approved boundaries.
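The oversight requirements described above — a distinct identity per agent, role-based limits, append-only logging and rapid revocation — can be illustrated with a minimal sketch. All names here (`AgentIdentity`, `execute`, `helpdesk-bot-01`) are hypothetical, invented for illustration rather than drawn from any product mentioned in the article:

```python
import time
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    # Hypothetical model: each AI agent carries its own identity,
    # a role-based allowlist of actions, and a revocation flag.
    agent_id: str
    allowed_actions: set
    revoked: bool = False

audit_log = []  # append-only record: who, what, when, allowed?

def execute(agent: AgentIdentity, action: str, target: str) -> bool:
    """Authorise a single agent action and log the decision either way."""
    allowed = (not agent.revoked) and action in agent.allowed_actions
    audit_log.append({
        "agent": agent.agent_id,
        "action": action,
        "target": target,
        "ts": time.time(),
        "allowed": allowed,
    })
    return allowed

helpdesk_bot = AgentIdentity("helpdesk-bot-01", {"password_reset"})
execute(helpdesk_bot, "password_reset", "user@example.com")  # within role, permitted
execute(helpdesk_bot, "grant_admin", "user@example.com")     # outside role, denied but logged
helpdesk_bot.revoked = True                                  # rapid revocation on compromise
execute(helpdesk_bot, "password_reset", "user@example.com")  # now denied
```

Because denials are logged alongside approvals, an investigator can answer the three questions the paragraph raises: who authorised an action, which system executed it, and whether it stayed within approved boundaries.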
Other enterprise surveys have identified data leakage and over-privileged access as the leading barriers to AI agent adoption. Those risks are especially acute when agents are permitted to reset credentials, approve network access or interact with cloud and developer environments. Poorly scoped permissions can turn a productivity tool into a high-impact security weakness.
The appeal of AI agents remains strong. Organisations see them as a way to improve response speed, reduce manual workloads and strengthen service availability. Security teams also use AI for phishing detection, intrusion detection and automated security operations, reflecting a broader shift towards machine-assisted defence. The challenge is that the same autonomy that makes agents useful also makes them risky when controls are weak.
Regulators, insurers and enterprise customers are likely to demand clearer safeguards as AI agents move into privileged workflows. Expected controls include least-privilege access, time-limited permissions, separation between human and agent identities, behavioural analytics, tamper-resistant logs and recovery plans that can restore identity systems to a trusted state after compromise.
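Two of the controls listed above, least-privilege scoping and time-limited permissions, are commonly combined in short-lived credentials. The following is a minimal sketch of that idea; the helper names (`issue_token`, `authorise`) and scope strings are assumptions for illustration, not a real identity provider's API:

```python
import secrets
import time

def issue_token(agent_id: str, scopes: set, ttl_seconds: float) -> dict:
    # Hypothetical helper: mint a short-lived credential bound to an
    # explicit, narrow set of scopes (least privilege).
    return {
        "token": secrets.token_hex(16),
        "agent": agent_id,
        "scopes": frozenset(scopes),
        "expires": time.time() + ttl_seconds,
    }

def authorise(token: dict, scope: str) -> bool:
    # Deny anything outside the granted scopes or past expiry,
    # so a leaked credential has limited blast radius and lifetime.
    return time.time() < token["expires"] and scope in token["scopes"]

tok = issue_token("vpn-approver-01", {"vpn:approve"}, ttl_seconds=300)
assert authorise(tok, "vpn:approve")                  # in scope, in time
assert not authorise(tok, "ad:reset_admin_password")  # out of scope, denied

expired = issue_token("vpn-approver-01", {"vpn:approve"}, ttl_seconds=-1)
assert not authorise(expired, "vpn:approve")          # past expiry, denied
```

The design choice is that compromise of such a token exposes only one narrow capability for a bounded window, rather than a standing privileged credential.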
