CISO Intelligence — 05 March 2026
The Week AI Became the Infrastructure
Three stories this week converged on the same uncomfortable truth: security teams built their programmes for a world where AI was a tool you interacted with. That world is gone.
---
1. Agentic AI Is Already in Production — and Security Has No View of It
A briefing published this week by the AIUC-1 Consortium, developed with input from Stanford's Trustworthy AI Research Lab and more than 40 security executives, documents the gap between what enterprises have deployed and what their security teams can observe. The numbers are not reassuring.
Eighty percent of organisations surveyed reported risky agent behaviours — unauthorised system access, improper data exposure — in their existing deployments. Only 21% of executives said they had complete visibility into agent permissions and tool use. One in five organisations had already experienced a breach linked to unauthorised AI.
The briefing frames three overlapping problems. First: agents that operate without per-action human approval and with overprivileged access can cause damage through ordinary, non-adversarial operation. No attacker required. Second: prompt injection moved from academic curiosity to recurring production incident during 2025, and OWASP's LLM Top 10 ranked it first for a reason. Third: agents with memory and external tool access erode single-turn safety controls. The protections that work in a chatbot don't hold when the session extends across hours and external calls.
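The first problem, overprivileged agents acting without per-action review, suggests an obvious control: a gate that holds side-effecting tool calls for human approval. A minimal sketch follows; the `ToolCall` type, the `RISKY_TOOLS` set, and the dispatch flow are illustrative assumptions, not any real agent framework's API.

```python
# Hypothetical per-action approval gate for agent tool calls.
# All names here are illustrative, not from a real framework.
from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str       # e.g. "shell", "http_get", "db_write"
    argument: str   # raw argument the agent supplied


# Tools whose side effects warrant a human in the loop (assumed policy).
RISKY_TOOLS = {"shell", "db_write", "send_email"}


def requires_approval(call: ToolCall) -> bool:
    """Return True when the call must pause for human review."""
    return call.tool in RISKY_TOOLS


def dispatch(call: ToolCall, approved: bool = False) -> str:
    """Execute read-only calls immediately; hold risky ones until approved."""
    if requires_approval(call) and not approved:
        return f"HELD: {call.tool} awaiting human approval"
    return f"EXECUTED: {call.tool}({call.argument})"
```

The point of the sketch is the default: risky actions are held unless explicitly approved, rather than executed unless explicitly blocked.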
The contributors include CISOs from Confluent, Elastic, UiPath, and Deutsche Börse. When people with that level of operational exposure are signing off on a briefing this stark, it is worth paying attention.
Source: Help Net Security / AIUC-1 Consortium Briefing, 3 March 2026
---
2. Cloudflare Is Blocking 230 Billion Threats Per Day — and the Pattern Has Shifted
Cloudforce One published its inaugural annual threat report this week, drawing on telemetry from a network that handles roughly 20% of global web traffic. The headline figure, 230 billion blocked threats per day, is less interesting than what those threats reveal about how breaches now begin.
Infostealers have replaced credential guessing as the dominant access method. Tools like LummaC2 extract live session tokens from infected machines rather than passwords. Those tokens give attackers access to already-authenticated sessions, bypassing MFA entirely. According to Verizon DBIR data cited in the report, 54% of ransomware attacks in 2025 traced back to infostealer-enabled credential theft. Cloudforce One participated in a May 2025 coordinated takedown of LummaC2 infrastructure, and is already tracking successor variants expected to reduce the gap between initial infection and ransomware deployment to hours rather than days.
Bots account for 94% of all login attempts on Cloudflare's network. Of human login attempts, 46% involve credentials already compromised in prior breaches. Nation-state actors are routing activity through legitimate cloud infrastructure — AWS, Google Cloud, Azure, and SaaS platforms including Google Calendar and Dropbox — making network-level detection increasingly unreliable.
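Screening logins against known-breached credentials is a standard countermeasure to that 46% figure. One privacy-preserving pattern is a k-anonymity range query: only the first five hex characters of the password's SHA-1 hash leave the client, and suffix matching happens locally. The sketch below simulates both sides in-process; the `BREACH_CORPUS` set stands in for a real breach-data service.

```python
# Sketch of a k-anonymity style compromised-credential check.
# BREACH_CORPUS is a stand-in for a real breach-data backend.
import hashlib

BREACH_CORPUS = {"password123", "hunter2"}  # illustrative only


def _sha1_upper(pw: str) -> str:
    return hashlib.sha1(pw.encode()).hexdigest().upper()


def range_query(prefix: str) -> set:
    """'Server' side: return suffixes of breached hashes sharing the prefix."""
    return {h[5:] for pw in BREACH_CORPUS
            if (h := _sha1_upper(pw)).startswith(prefix)}


def is_compromised(pw: str) -> bool:
    """'Client' side: send only the 5-char prefix, compare suffixes locally."""
    h = _sha1_upper(pw)
    return h[5:] in range_query(h[:5])
```

The server never sees the full hash, so it cannot learn which password was checked, yet the client gets an exact answer.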
The session token problem deserves more attention than it gets in most security programmes. If your MFA strategy assumes the token is valid, and the token has been stolen, you have no protection at the point that matters.
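One mitigation is to bind each session token to attributes of the client that issued it, so a token replayed from an attacker's machine fails validation. The sketch below uses an HMAC over the session ID and a client fingerprint; the fingerprint source (a TLS channel ID, a device-certificate thumbprint) and the key handling are assumptions, not a specific vendor's design.

```python
# Sketch: binding a session token to client context so a stolen
# token fails when replayed elsewhere. Illustrative scheme only.
import hashlib
import hmac

SERVER_KEY = b"rotate-me-in-production"  # placeholder secret


def bind_token(session_id: str, client_fingerprint: str) -> str:
    """Derive a binding tag from the session and the client context."""
    msg = f"{session_id}|{client_fingerprint}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()


def validate(session_id: str, client_fingerprint: str, tag: str) -> bool:
    """Recompute the tag for the presenting client; reject mismatches."""
    expected = bind_token(session_id, client_fingerprint)
    return hmac.compare_digest(expected, tag)
```

With binding in place, an infostealer that exfiltrates the token alone gets a credential that only works from the victim's own device context.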
Source: Help Net Security / Cloudforce One Threat Report 2026, 3 March 2026
---
3. 43% of MCP Servers Have Catalogued Vulnerabilities — and Most Enterprises Don't Know They're Running Them
Adversa AI's March 2026 agentic security resource roundup documents what is becoming an infrastructure-scale problem: the Model Context Protocol, which allows AI agents to connect to external tools and services, has a catalogued vulnerability rate of 43% across major platforms. These are real CVEs, including command-execution flaws.
The piece introduces a concept worth adding to your threat model vocabulary: the Lethal Trifecta. AI agents that execute transactions (rather than suggest them), combined with persistent memory that can be poisoned, combined with broad tool access, create a compound risk profile that conventional security controls were not designed for. An attacker who can influence an agent's memory or inject into its tool input can hijack its goals across a session without ever touching the underlying application.
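The trifecta lends itself to an automated configuration check: flag any agent that combines all three properties before it ships. A minimal sketch, assuming a hypothetical `AgentConfig` record and an arbitrary threshold for what counts as "broad" tool access:

```python
# Sketch: flagging agent configurations that exhibit the Lethal
# Trifecta. Field names and the threshold are assumptions.
from dataclasses import dataclass, field


@dataclass
class AgentConfig:
    executes_transactions: bool            # acts rather than suggests
    persistent_memory: bool                # state survives across sessions
    tools: set = field(default_factory=set)


BROAD_TOOL_THRESHOLD = 5  # assumption: >5 distinct tools counts as broad


def lethal_trifecta(cfg: AgentConfig) -> bool:
    """True when all three compound-risk factors are present at once."""
    broad_access = len(cfg.tools) > BROAD_TOOL_THRESHOLD
    return cfg.executes_transactions and cfg.persistent_memory and broad_access
```

Any one factor alone is manageable; the check exists because the risk compounds only when all three coincide, which is exactly the case conventional controls miss.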
Dark Reading's polling this week found that 48% of security professionals now rank agentic AI as the top attack vector for 2026. The supply chain angle is already real: a separate incident report noted a compromised plugin ecosystem affecting 47 enterprise deployments, with attacker access to customer data and financial records following from harvested agent credentials.
The EU AI Act's August 2026 conformity deadlines are adding compliance pressure on top of security exposure. Organisations deploying high-risk AI systems need technical documentation finalised and EU database registration completed before that date. Most are behind.
Source: Adversa AI, March 2026 / Dark Reading / Legal Nodes, EU AI Act Compliance Review
---
The Connecting Thread
These three stories are the same story. The agent has shell access, memory, and tool permissions. It operates autonomously. Security teams have no visibility into what it's doing. And the threat actors — whether they're using infostealers to bypass MFA, injecting into agent memory, or exploiting MCP server vulnerabilities — have adapted faster than the defenders.
The question security leaders should be asking is not "should we adopt agentic AI?" Most organisations are past that decision already. The question is: do you know what your agents can do, what they can access, and what it would look like if one were misbehaving? For most organisations, the honest answer is no.
That needs to change before Q3, when the compliance clock runs out and the threat actors know it.
---
CISO Intelligence is a briefing for security executives. Published weekly.

