Dryrun Security Finds Over 80% Of Large Language Model Application Risks Go Undetected By Traditional Code Scanners
AUSTIN, Texas, Dec. 09, 2025 (GLOBE NEWSWIRE) -- DryRun Security, the industry's first AI-native, code security intelligence company, today announced analysis of the 2025 OWASP Top 10 for LLM Application Risks. Findings show that legacy AppSec scanners fail to detect 80% of LLM-specific vulnerabilities, such as prompt injection to model poisoning, underscoring the need for AI-native detection and defenses.
As LLMs become foundational to business operations worldwide, vulnerabilities are emerging where models interact with code, data and human workflows, creating a new class of software risk. DryRun Security's analysis of the OWASP Top 10 list gives security and engineering leaders a practical framework to design, build and operate LLM-based systems safely, mapping where failures occur and how to prevent them.
“LLMs are now embedded in every customer workflow, and that means security failures are product failures,” said James Wickett, CEO and co-founder, DryRun Security.“2026 is the year AI security must move from theory to practice. Developers now own part of the AI attack surface but traditional AppSec tools weren't built for this shift. Our testing of the OWASP Top 10 for LLM App Risks proves that AI-native detection is no longer optional, it's the only way to find real risks.”
“At Commerce, we're building AI-driven shopping experiences, and agentic checkouts are changing everything,” said Adam Dyche, Manager, Application Security Engineering, Commerce.“We chose DryRun because OWASP LLM app risks are all about context, and we wanted to build security in from day one. DryRun outperformed every other tool we tested by far, and its contextual security analysis actually understands our code the way our engineers do.”
DryRun Security's new whitepaper,“Building Secure and Safe Agents” expands on the OWASP Top 10 for LLM Applications and provides security leaders with a tested checklist for secure LLM integrations in their production environments. The paper links common LLM risks with real incident examples, reference architectures and operational checklists mapped to common GenAI patterns such as RAG, tool use and agent frameworks. Insights include:
- LLM Security Is Now Software Security: Early AI security thinking treated LLMs as external services. The new reality is that most vulnerabilities now emerge inside of code. AppSec and developer teams need to apply the same methodology they've used for web and API security to the model orchestration layer.
Traditional SAST Falls Short: Legacy code scanners miss LLM-specific vulnerabilities because they were never designed to inspect model orchestration and tool use. They were inherently designed for traditional application flaws like SQL injection, but not for semantic vulnerabilities found in LLM-powered systems. This technology gap proves AI-native analysis outweighs traditional scanners.
“Agentic” Systems Expand Attack Surfaces: The shift to autonomous and tool-using models introduces new failure modes unseen in web app security. This signals a major evolution in AppSec: security boundaries now depend on how models are embedded, not just how they are trained.
Download the whitepaper at: .
About DryRun Security
DryRun Security is the industry's first AI-native, agentic code security intelligence solution. Powered by its proprietary Contextual Security Analysis engine, it secures software built for the future by helping security and developer teams quiet noise, gain insights, and surface risks that pattern-based scanning tools inherently miss. For more information, please visit .
Press Contact
Jason Echols
DryRun Security
Head of Growth
...

Legal Disclaimer:
MENAFN provides the
information “as is” without warranty of any kind. We do not accept
any responsibility or liability for the accuracy, content, images,
videos, licenses, completeness, legality, or reliability of the information
contained in this article. If you have any complaints or copyright
issues related to this article, kindly contact the provider above.

Comments
No comment