Shadow AI vs Managed AI: Kaspersky reviews the use of neural networks for work in KSA
Riyadh, Saudi Arabia – Wednesday, 22 October 2025 – Recent Kaspersky research entitled “Cybersecurity in the workplace: Employee knowledge and behavior” has found that 83% of professionals surveyed in KSA say they use Artificial Intelligence (AI) tools for work tasks. However, only 45.5% have received training on the cybersecurity aspects of using neural networks, one of the critical elements of protection against AI-related risks ranging from data leaks to prompt injections.
The vast majority of survey respondents in KSA (92%) said they understand what the term “generative artificial intelligence” means, and for many employees this knowledge is no longer just theoretical: AI tools have become part of their everyday work. Overall, 83% of respondents use AI tools for work, most often to write or edit texts (63%), for data analytics (59%), to write or edit work e-mails (51%), and to create images or videos with the help of neural networks (50%).
The survey uncovered a serious gap in employee preparedness for AI risks. About a quarter (26%) of professionals reported receiving no AI-related training at all. Among those who had taken courses, 53% said the focus was on how to use AI tools effectively and write prompts, while only 45.5% received guidance on the cybersecurity aspects of AI use.
While AI tools that help automate everyday tasks are becoming ubiquitous in many organizations, they often remain part of ‘shadow IT’, with employees using them without corporate guidance. 72% of respondents said generative AI tools are permitted at their workplace, 20% acknowledged these tools are not allowed, while 8% were unsure.
To make employee use of AI clearer and more secure, organizations should implement a company-wide policy governing it. Such a policy can prohibit AI use in specific functions and for certain types of data, regulate which AI tools are provided to employees, and allow only tools from an approved list (a minimal illustration of such a policy follows below). The policy should be formally documented, and employees should receive proper training. After setting a list of hygiene measures and restrictions, companies should monitor AI usage, identify popular services, and use this information to plan future actions and refine their security measures.
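As a purely illustrative sketch of what such a policy could look like in machine-readable form, the following Python snippet encodes a hypothetical tool allowlist and per-department data rules. The tool names, departments, and data classes are invented for illustration and are not drawn from the Kaspersky research.

```python
# Illustrative sketch of a company-wide AI usage policy expressed as data.
# All names below are hypothetical examples, not part of the research above.

APPROVED_TOOLS = {"corp-chat-assistant", "corp-translator"}  # hypothetical allowlist

# Which data classes each department may submit to an approved AI tool.
DEPARTMENT_DATA_RULES = {
    "marketing": {"public", "internal"},
    "finance":   {"public"},   # stricter: no internal or confidential data
    "hr":        set(),        # AI use prohibited for this function
}

def is_request_allowed(department: str, tool: str, data_class: str) -> bool:
    """Check a request against the documented policy: the tool must be on the
    approved list and the data class permitted for the requesting department."""
    if tool not in APPROVED_TOOLS:
        return False
    return data_class in DEPARTMENT_DATA_RULES.get(department, set())

# Example: marketing may use the approved assistant on internal text...
assert is_request_allowed("marketing", "corp-chat-assistant", "internal")
# ...but finance may not, and no one may use an unapproved tool.
assert not is_request_allowed("finance", "corp-chat-assistant", "internal")
assert not is_request_allowed("marketing", "unapproved-tool", "public")
```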
“When it comes to company-wide AI usage, neither a total ban nor unrestricted freedom is likely to be effective. A more balanced approach is to establish a policy that allows different levels of access to AI based on the type of data handled by different departments. When reinforced with proper training, such a policy will lead to greater flexibility and effectiveness, while still ensuring that AI is used in a secure way,” says Mohamad Hashem, General Manager for Saudi Arabia and Bahrain at Kaspersky.
To secure corporate AI use, Kaspersky recommends that organizations:
· Train employees on responsible AI usage. Courses on AI security from the Kaspersky Automated Security Awareness Platform can help add specialized training to companies’ educational programmes.
· Provide IT specialists with relevant knowledge on exploitation techniques and practical defense strategies. The 'Large Language Models Security' training, part of the Kaspersky Cybersecurity Training portfolio, can enhance both the professional development and the overall cybersecurity of an organization.
· Ensure all employees have a cybersecurity solution installed on every work and personal device used to access business data. Kaspersky Next products protect against a range of threats, including phishing and fake AI tools, which is particularly relevant given the growing trend of scammers embedding infostealers in deceptive AI applications.
· Conduct regular surveys to monitor how frequently AI is being used and for which tasks. Using this information, assess both the benefits and risks of AI use to adjust company policy.
· Use a specialized AI proxy that cleans queries on the fly by removing specific types of sensitive data (such as names or customer IDs) and applies role-based access control to block inappropriate use cases (see the sketch after this list).
· Create a full-fledged policy that addresses the spectrum of relevant risks.
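To make the proxy recommendation concrete, here is a minimal Python sketch of on-the-fly query cleaning combined with role-based access control. The redaction patterns, roles, and task names are hypothetical assumptions for illustration only; this does not describe any specific product.

```python
# Minimal sketch of the query-cleaning idea behind an AI proxy.
# Patterns and role rules are hypothetical illustrations.
import re

# Regexes for sensitive fields the proxy strips before forwarding a prompt.
REDACTION_PATTERNS = {
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),  # hypothetical ID format
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Role-based access control: which use cases each role may send to the AI service.
ROLE_ALLOWED_TASKS = {
    "support_agent": {"summarize_ticket"},
    "analyst":       {"summarize_ticket", "data_analysis"},
}

def sanitize(prompt: str) -> str:
    """Replace each sensitive field with a typed placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

def handle_request(role: str, task: str, prompt: str) -> str:
    """Block disallowed use cases, then forward a cleaned prompt."""
    if task not in ROLE_ALLOWED_TASKS.get(role, set()):
        raise PermissionError(f"role {role!r} may not use task {task!r}")
    return sanitize(prompt)  # a real proxy would now call the AI backend

print(handle_request("analyst", "data_analysis",
                     "Summarize spending for CUST-123456 (jane.doe@example.com)"))
# -> Summarize spending for [CUSTOMER_ID] ([EMAIL])
```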
*The survey was conducted by the Toluna research agency at the request of Kaspersky in 2025. The study sample included 2,800 online interviews with employees and business owners using computers for work in seven countries: Türkiye, South Africa, Kenya, Pakistan, Egypt, Saudi Arabia, and the UAE.