Tuesday, 02 January 2024 12:17 GMT

Cisco Unveils Key Strategies For Securing AI Applications Amidst Rapid Adoption In The Middle East


(MENAFN- Mid-East Info) Dubai, United Arab Emirates – January 2026 – Cisco highlights four priority focus areas organizations should consider to secure AI applications as they scale adoption. The guidance outlines how security teams can adapt proven application security practices to AI, helping organizations across the Middle East manage emerging risks and maintain digital trust.

As AI adoption scales across the Middle East, including government, financial services, energy, and critical infrastructure, CISOs and IT leaders are under pressure to secure AI applications across the full lifecycle, from the data they rely on to the models they deploy.

Four focus areas for AI application security:
  • Open-source scanning
    AI application development relies heavily on components such as open-source models, public datasets, and third-party libraries. These dependencies can include vulnerabilities or malicious insertions that compromise the entire system; scanning them before they enter the build pipeline helps surface such supply chain risks early.
  • Vulnerability testing
    Static testing for AI applications involves validating the components of an AI application, including binaries, datasets, and models, to identify vulnerabilities like backdoors or poisoned data. Dynamic testing evaluates how a model responds across various scenarios in production. Algorithmic red-teaming can simulate a diverse and extensive set of adversarial techniques without requiring manual testing.
  • Application firewalls
    The emergence of generative AI applications has given rise to a new class of AI firewalls designed around the unique safety and security risks of LLMs. These solutions serve as model-agnostic guardrails, examining AI application traffic in transit to identify and prevent failures and enforce policies that mitigate threats such as PII leakage, prompt injection, and denial of service (DoS) attacks.
  • Data loss prevention
    The rapid proliferation of AI and the dynamic nature of natural language content make traditional DLP ineffective. Instead, DLP for AI applications examines inputs and outputs to combat sensitive data leakage. Input DLP can restrict file uploads, block copy-paste functionality, or restrict access to unapproved AI tools. Output DLP uses guardrail filters to help ensure model responses do not contain personally identifiable information (PII), intellectual property, or other sensitive data.

Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, commented: “As AI adoption accelerates across the region, organizations are moving quickly from pilots to production, and that shift changes the risk profile. Securing AI applications requires looking beyond traditional application controls to protect the full AI lifecycle, from the data and third-party components feeding models to how those models behave in real-world use. By applying familiar security principles in AI-specific ways, organizations in the Middle East can scale innovation with confidence while reducing risks such as prompt injection and sensitive data leakage.”

Protecting AI applications from development to production

Risk exists at virtually every point in the AI lifecycle, from sourcing supply chain components through development and deployment. The security measures highlighted above help mitigate different risk areas, and each plays an important role in a comprehensive AI security strategy.
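To make the output-DLP idea above concrete, the sketch below applies simple regex-based guardrail filters to a model response before it reaches the user. This is a minimal, hypothetical Python example: the pattern set and the redact_pii helper are assumptions for illustration, not any specific Cisco product API, and production AI firewalls use far richer, context-aware detection.

```python
import re

# Minimal sketch of an output-side DLP guardrail filter, assuming a
# regex-based approach. The pattern set and the redact_pii() helper
# are illustrative assumptions, not a specific vendor API.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?:\+?\d)[\d\s().-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(model_output: str) -> str:
    """Replace detected PII in a model response with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        model_output = pattern.sub(f"[REDACTED {label.upper()}]", model_output)
    return model_output

print(redact_pii("Contact me at jane.doe@example.com or +1 555 123 4567."))
```

In practice, the same filtering step would sit in the response path of an AI firewall or gateway, often backed by named-entity recognition or trained classifiers rather than fixed regular expressions.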




    Legal Disclaimer:
    MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.