The numbers are in, and they're alarming. According to new research released alongside Palo Alto Networks and Google Cloud's expanded partnership announcement, 99% of organizations report experiencing AI-related attacks. As covered by ERP Today, this isn't a future threat—it's the present reality.
The Scope of the AI Attack Landscape
The research reveals that AI-related attacks span a broad spectrum of techniques and targets. According to the Cybersecurity Dive analysis, companies should prioritize identity security and integrate cloud monitoring into the SOC:
These aren't isolated incidents or edge cases. Organizations across every industry and size category are experiencing AI-specific attacks with increasing frequency and sophistication.
The 1% Question
If you believe your organization is in the 1% that hasn't experienced AI attacks, the more likely explanation is that you simply haven't detected them yet. AI attacks can be subtle and often escape traditional security monitoring.
The Palo Alto Networks and Google Cloud Response
In response to this threat landscape, Palo Alto Networks and Google Cloud have expanded their partnership in a deal valued at nearly $10 billion. The partnership focuses on:
Identity Security
Companies should prioritize identity security as the first line of defense. AI systems often operate with elevated privileges, making identity compromise particularly dangerous. The partnership emphasizes zero-trust principles applied specifically to AI workloads, aligned with NIST's Zero Trust Architecture guidance.
Cloud Monitoring Integration
The partners recommend integrating cloud monitoring directly into the Security Operations Center (SOC). AI workloads running in cloud environments generate unique telemetry that traditional security tools often miss.
AI-Native Detection
The partnership will develop detection capabilities specifically designed for AI attack patterns, moving beyond signature-based detection to behavioral analysis of AI systems.
Why Traditional Security Falls Short
The 99% statistic isn't just about the volume of attacks—it reflects a fundamental mismatch between traditional security tools and AI-specific threats:
Different Attack Surface
AI systems present attack surfaces that don't exist in traditional applications: training data, model weights, inference APIs, and natural language interfaces. Traditional firewalls and endpoint protection weren't designed for these vectors. The OWASP Top 10 for LLM Applications documents these new attack surfaces.
Semantic Attacks
Many AI attacks operate at the semantic level—manipulating meaning rather than exploiting code vulnerabilities. Prompt injection, for example, uses natural language to subvert AI behavior without any traditional "exploit."
Invisible Boundaries
AI systems often blur the boundaries between code and data, between trusted and untrusted inputs. This ambiguity creates opportunities for attackers that traditional security models don't address.
The Security Paradigm Shift
Securing AI isn't about applying existing security tools to new systems. It requires a fundamental rethinking of what we're protecting and how attacks manifest. This is why purpose-built AI guardrails are essential.
What Organizations Must Do Now
The research and partnership announcement carry clear implications for enterprise security leaders. The SANS Institute and CISA both emphasize these priorities:
1. Assume Breach Posture for AI Systems
With 99% of organizations already attacked, the question isn't whether you'll face AI threats but how quickly you can detect and respond. Implement monitoring that assumes AI systems are under constant attack.
2. Deploy AI-Specific Controls
General-purpose security tools aren't sufficient. Deploy AI guardrails that understand AI-specific attack patterns and can provide runtime protection for AI workloads.
3. Integrate AI Security into SOC
Following Palo Alto's recommendation, ensure your security operations center has visibility into AI system behavior and the tools to investigate AI-specific incidents.
4. Implement Zero Trust for AI
Apply zero trust principles specifically to AI systems: verify every input, validate every output, and never assume AI behavior is benign without validation.
5. Establish AI Incident Response
Traditional incident response playbooks don't cover AI-specific scenarios. Develop procedures for AI model compromise, prompt injection detection, and AI-enabled data exfiltration.
The Investment Imperative
When industry leaders like Palo Alto Networks and Google Cloud commit nearly $10 billion to AI security, it signals the scale of the challenge and the opportunity. Organizations that underinvest in AI security are essentially operating unprotected systems in a hostile environment.
The economics are clear: the cost of AI security controls is a fraction of the potential damage from AI-related breaches. And with regulatory frameworks like the EU AI Act imposing liability for AI system failures, the risk calculus has shifted decisively toward investment.
Conclusion
The 99% statistic should serve as a wake-up call for any organization still treating AI security as a future concern. AI attacks are happening now, at scale, across virtually every enterprise. The response from industry leaders shows the path forward: purpose-built AI security, integrated into enterprise security operations, with continuous monitoring and protection.
The question for security leaders isn't whether to invest in AI security—it's whether they can afford not to.