CISO Intelligence: The AI Security Reckoning
Thales, IBM, and Cisco all published this week. The picture they collectively paint is uncomfortable.
Three major reports landed in the past week. Read together, they should make any CISO uncomfortable: not because the threats are novel, but because the exposure is already inside the enterprise.
Your AI Can Be Fooled. Repeatedly.
Cisco's State of AI Security 2026 found something worth sitting with: multi-turn jailbreak attacks achieved a 92% success rate against eight open-weight models in extended testing. Not single-prompt tricks. Extended conversations, building context, gradually steering the model toward disallowed actions. Single-turn defences did not hold up.
This matters because the agentic systems enterprises are deploying right now operate over long sessions. They carry memory. They have access to internal databases, code repositories, cloud dashboards and ticketing systems. A compromised agent operating inside that context is not a chatbot giving a bad answer. It can open pull requests, query production data, and trigger automated workflows. The attack surface is the entire pipeline the agent touches.
Cisco's Amy Chang put it plainly: multi-turn resilience should be tracked as a separate metric, especially for agents operating over longer sessions. Most enterprises are not measuring it at all.
The Budget Gap Is Real
The 2026 Thales Data Threat Report, drawn from 3,120 respondents across 20 countries, found that 61% now name AI as their top data security risk. Meanwhile, only 30% have a dedicated AI security budget — up from 20% last year, but still a minority. The rest are either folding AI risk into existing security allocations or not funding it distinctly at all.
The same report found that 53% still rely on traditional security budgets built around human users and perimeter controls, not machine identities or automated access patterns. That is a structural mismatch. AI agents authenticate as services, accumulate permissions over time, and do not behave the way user accounts behave. Applying human-centric controls to machine actors and expecting them to hold is optimistic.
Data visibility is not helping. Only 34% of respondents reported complete knowledge of where their data is stored. Encryption coverage in cloud environments fell from 51% in 2025 to 47% in 2026. This is the environment into which organisations are deploying agents with broad read and write access.
Deepfake incidents were reported by 59% of respondents. Reputational damage from AI-generated misinformation reached 48%. Those figures have now crossed from "emerging concern" into baseline operational risk.
IBM Confirms the Basics Are Still Broken
IBM's 2026 X-Force Threat Intelligence Index lands a point that the security industry keeps failing to absorb: the most consequential compromises keep exploiting the same foundational gaps. Exploitation of public-facing applications was the most common initial access vector last year, up 44% year-on-year. Many of the exploited vulnerabilities did not require authentication.
Supply chain and third-party compromises have increased nearly fourfold since 2020. Attackers are targeting developer identities, CI/CD platforms, SaaS integrations and trust relationships between systems. AI coding tools are accelerating development and occasionally pushing unvetted code into those pipelines, adding pressure the industry is not yet equipped to absorb.
In 2025, over 300,000 ChatGPT credential sets were advertised on dark web marketplaces, collected by infostealer operators who have extended their target lists to include AI platforms. Password reuse means a compromised personal AI account is often an indirect path to enterprise access.
The ransomware picture is more fragmented. X-Force tracked 109 distinct extortion groups in 2025, up from 73 in 2024. The share of attacks from the top ten groups fell 25%. Lower barriers to entry, more opportunistic operators, less predictable behaviour.
MCP Has 30 CVEs and a 36% No-Auth Rate
Model Context Protocol was introduced in 2024. It is now on every executive agenda because it has become the connective tissue for enterprise AI. MCP lets organisations plug AI agents into their existing application stack without rebuilding integrations from scratch. That utility explains its rapid adoption. It also explains why the vulnerability count has reached 30 in under two years, with one new CVE published last week.
CVE-2026-27896, disclosed yesterday, affects the official MCP Go SDK. Go's standard encoding/json package matches JSON keys to struct fields case-insensitively when unmarshalling. An attacker can therefore craft MCP messages using field names like "Method" or "PARAMS" instead of the lowercase equivalents: validation logic that checks for the exact lowercase field names passes the message through, while the SDK still decodes and processes it. Any security intermediary sitting between MCP client and server can be bypassed this way.
That is one CVE in one SDK. The broader picture is worse. A scan of deployed MCP servers found that 36% accept connections from any client without authentication. These servers front QuickBooks, Stripe, databases and file system bridges. They are not toy deployments.
The attack surface runs three layers deep: the MCP servers themselves, the SDK libraries parsing MCP messages, and the host environments running MCP clients. A vulnerability at any layer compromises the chain. Community-built MCP connectors extend that surface further. Most will never go through a security review.
RSA Conference submissions for 2026 include a session demonstrating how an MCP vulnerability enables remote code execution and full Azure tenant takeover. This is no longer theoretical.
What the Patterns Suggest
You could read these four stories as separate developments. They are not. The Thales data shows organisations deploying AI without knowing where their data lives or how it flows. The Cisco findings show the agent access models those organisations are building can be manipulated through extended conversation. The IBM data confirms that foundational controls — authentication, patching, supply chain hygiene — are still not where they need to be. And the MCP CVE stream shows a new protocol category absorbing enterprise trust faster than security teams can audit it.
CISOs have been in similar positions before. The cloud adoption wave brought genuine efficiency gains and a decade of identity and access problems that are still not fully resolved. Agentic AI is a comparable inflection point, compressed into a shorter timeline, with agents that can act at machine speed once compromised.
The organisations that came through the cloud transition better were those that treated identity as infrastructure from the start. Non-human identity governance — tracking what agents can access, what they are doing, and how that access changes over time — is where the equivalent work needs to happen now. Not next year. The agents are already there.
---
Sources: Thales 2026 Data Threat Report (S&P Global 451 Research, 3,120 respondents, 20 countries) | IBM 2026 X-Force Threat Intelligence Index | Cisco State of AI Security 2026 | CVE-2026-27896 (MCP Go SDK) | Help Net Security, CIO, DEV Community

