CISO Intelligence: Your AI Infrastructure Is Already Compromised
CISO Intelligence for January 29th, 2026
The Zscaler ThreatLabz report landed this week, and if you're not already panicking about your AI deployment, you should be. Meanwhile, criminals aren't waiting for you to catch up; they're already reselling access to your LLM endpoints.
The 16-Minute Window: Zscaler's Damning AI Security Report
Zscaler's ThreatLabz 2026 AI Security Report arrived Monday with numbers that should end careers. Enterprise AI adoption jumped 91% in 2025. Great. The problem? 100% of AI systems tested had critical vulnerabilities. Not "some." Not "most." All of them.
The median time to breach? Sixteen minutes.
Let that sink in. Your shiny new AI deployment—the one the board approved after months of vendor pitches and ROI projections—can be compromised faster than your SOC can finish their coffee. The report analyzed nearly a trillion AI/ML transactions and found 410 million DLP policy violations, with 18,033 terabytes of enterprise data flowing to AI applications like ChatGPT.
CISO Action: Audit every AI endpoint in your environment by Friday. If you don't know where your AI deployments are, that's your first problem.
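If you need a starting point for that audit, here's a minimal sketch: pull your egress proxy logs and count who is talking to known AI SaaS domains. The log path, the column names (`dest_host`, `src_user`), and the domain list are all assumptions for illustration; swap in your own proxy's export format and a fuller domain inventory.

```python
# Minimal shadow-AI discovery sketch: count egress proxy log entries that
# hit known AI SaaS domains. File name, column names, and domain list are
# placeholders -- adapt them to your own proxy's export.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",   # ChatGPT / OpenAI API
    "claude.ai", "api.anthropic.com",      # Anthropic
    "gemini.google.com",                   # Google Gemini
}

hits = Counter()
with open("proxy_egress.csv", newline="") as f:   # hypothetical log export
    for row in csv.DictReader(f):                 # expects dest_host / src_user columns
        host = row.get("dest_host", "").lower()
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            hits[(row.get("src_user", "unknown"), host)] += 1

for (user, host), count in hits.most_common():
    print(f"{user:30} {host:30} {count}")
```

Crude, yes, but a one-afternoon script like this will surface more shadow AI than another quarter of governance meetings.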
Criminals Are Already Selling Your AI Infrastructure
While you're still working on that AI governance framework, criminal groups have moved on to monetizing your infrastructure. Pillar Security's report released yesterday details two campaigns—one dubbed "Operation Bizarre Bazaar"—where attackers are using Shodan and Censys to find exposed corporate LLM and MCP endpoints.
The business model is elegant in its simplicity: hijack exposed Ollama and vLLM instances, steal compute for unauthorized inference, then resell API access on underground markets. It's cryptojacking evolved for the AI age, except the damage goes beyond electricity bills. They're exfiltrating your data and using your infrastructure as a pivot point for lateral movement.
The attack surface is self-inflicted. Organizations deploying self-hosted AI tools without proper authentication are essentially running open inference servers. I've seen this movie before with exposed MongoDB instances circa 2017. We never learn.
CISO Action: Run a Shodan query against your own infrastructure today. Search for exposed ports 11434 (Ollama), 8000 (vLLM), and any MCP endpoints. If you find them, assume compromise.
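While you wait for the Shodan results, you can probe your own inventory directly. A hedged sketch follows: an unauthenticated 200 on Ollama's `/api/tags` or on a vLLM server's OpenAI-compatible `/v1/models` means anyone who can reach the port can enumerate, and likely use, your models. The host list below is a placeholder; feed it your actual external ranges, and only scan infrastructure you own.

```python
# Self-assessment probe for exposed inference servers. An unauthenticated
# 200 response means the endpoint is open to anyone who can reach it.
# HOSTS is a placeholder -- replace with your own inventory.
import requests

HOSTS = ["203.0.113.10", "203.0.113.11"]  # replace with your address space
PROBES = [
    (11434, "/api/tags"),   # Ollama: lists installed models
    (8000, "/v1/models"),   # vLLM: OpenAI-compatible model listing
]

for host in HOSTS:
    for port, path in PROBES:
        url = f"http://{host}:{port}{path}"
        try:
            r = requests.get(url, timeout=3)
            if r.ok:
                print(f"EXPOSED  {url}  -> {r.text[:120]}")
        except requests.RequestException:
            pass  # closed or filtered; nothing listening
```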
AI Finds What Humans Missed for Decades
Here's a more encouraging story. An AI-assisted security team at Aisle Security published findings on 12 newly discovered OpenSSL vulnerabilities this week, some dating back to OpenSSL 1.0.2. That's a decade or more of dormant high-severity bugs, including stack buffer overflows in CMS parsing that could enable remote code execution (CVE-2025-15467).
The team's conclusion? "Humans are the limiting factor." They're not wrong. These vulnerabilities survived years of human code review, penetration testing, and bug bounties. An AI-augmented approach found them in weeks.
This is the productive use of AI in security—amplifying human capabilities rather than replacing judgment. Patches are already available. Apply them.
CISO Action: Verify your OpenSSL versions across all systems. CVE-2025-15467 and CVE-2025-11187 need immediate attention.
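One quick way to build that inventory, sketched under the assumption you have SSH access to a host list: collect `openssl version` from each box, then map the results against the official OpenSSL advisories for those CVEs before setting patch priority. Note this misses statically linked binaries that bundle their own libssl; those need separate attention.

```python
# Inventory sketch: gather `openssl version` over SSH from a host list.
# Fixed-version cutoffs are deliberately not hardcoded here -- check each
# reported version against the official OpenSSL advisories for
# CVE-2025-15467 and CVE-2025-11187. HOSTS is a hypothetical inventory.
import subprocess

HOSTS = ["web01.example.com", "db01.example.com"]  # replace with your own

for host in HOSTS:
    try:
        out = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, "openssl", "version"],
            capture_output=True, text=True, timeout=10,
        )
        print(f"{host:25} {out.stdout.strip() or out.stderr.strip()}")
    except subprocess.TimeoutExpired:
        print(f"{host:25} UNREACHABLE")
```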
Quick Hits
FBI Takes Down RAMP Forum: The Russian cybercrime marketplace got seized this week. Good news, but displaced threat actors will reconstitute elsewhere within weeks. Don't adjust your threat model.
WhatsApp Lockdown Mode: Meta rolled out enhanced security for high-risk users targeted by spyware. If your executives aren't using this, they should be.
WebLogic Exploit Attempts: SANS is tracking odd requests against WebLogic servers, possibly CVE-2026-21962 exploitation or (their words) "AI slop"—malformed exploits generated by AI tools. Either way, patch.
---
Jonathan Care has 33 years in cybersecurity and fraud detection. These are his personal views, not those of his employer.