The Week AI Broke the Speed Limit
CrowdStrike and IBM X-Force 2026: 27 seconds from initial access to lateral movement. Your SOC is probably measuring in hours.
Edition — 27 February 2026
---
CrowdStrike's 2026 Global Threat Report and IBM's X-Force Threat Intelligence Index 2026 both dropped this week, and they're telling the same story. The average eCrime breakout time is now 29 minutes, down 65% year-on-year. The fastest observed: 27 seconds from initial access to lateral movement. Most SOCs are still measuring detection in hours. That gap is the problem.
---
The Numbers
CrowdStrike 2026 Global Threat Report (24 February 2026):
Average eCrime breakout time: 29 minutes, down 65% year-on-year
Fastest observed breakout: 27 seconds
89% increase in attacks from AI-enabled adversaries over the prior year
Adversaries are using AI to conduct reconnaissance, generate phishing lures, automate lateral movement, and evade detection at scale
The 27-second figure should reframe every conversation about detection and response SLAs. Most enterprise environments cannot detect, alert, and begin response in under a minute. AI-accelerated attackers are operating inside the OODA loop of conventional security operations.
IBM X-Force Threat Intelligence Index 2026 (25 February 2026):
44% rise in application exploitation year-over-year
Over 300,000 exposed ChatGPT credentials identified
4x increase in large supply chain and third-party compromises since 2020
AI-powered coding tools are accelerating software delivery and simultaneously introducing unvetted code into CI/CD pipelines
Both reports converge on the same point: identity is the primary attack vector, AI is the primary force multiplier. IBM's framing of "securing identities, AI-enhanced detection, and proactive risk management" reflects where the field is landing.
---
Agentic AI: The Attack Surface Nobody Mapped
The more significant story this week isn't AI-assisted attacks on traditional systems. It's autonomous AI agents becoming a new attack surface that most enterprises haven't begun to assess.
Help Net Security (23 February) reported on documented MCP (Model Context Protocol) server compromises. In one confirmed case, a malicious GitHub issue embedded hidden instructions that hijacked an agent's workflow and triggered data exfiltration from private repositories. No malware. No exploit. A poisoned natural-language instruction.
OWASP's Top 10 for Agentic AI (2026) was published this week:
1. Prompt injection — hijacking agent actions via embedded instructions in processed content
2. Privilege escalation — agents operating with excessive permissions relative to their task
3. Memory poisoning — corrupting agent memory stores to manipulate future behaviour
4. Cascading failures — multi-agent systems amplifying errors or malicious actions
5. Supply chain attacks — compromised tools or model providers injecting malicious behaviour
None of these require a CVE or a stolen credential. They operate through the agent's normal reasoning channels. An agent that reads emails, processes documents, or calls external APIs is a potential attack surface at every interaction point.
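One mitigation pattern for this class of attack is to stop trusting the agent's reasoning channel as an authorisation channel: every tool call is checked against an explicit per-task allowlist and logged, regardless of what the processed content asked for. The sketch below illustrates the idea; the class, tool names, and source labels are assumptions for illustration, not any specific framework's API.

```python
# Sketch: gate an agent's tool calls through a per-task allowlist so that
# instructions embedded in untrusted content (e.g. a poisoned GitHub issue)
# cannot trigger actions the task was never granted. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_tools: frozenset                 # tools this task may invoke
    audit_log: list = field(default_factory=list)

    def authorise(self, tool: str, source: str) -> bool:
        """Allow only allowlisted tools; record every request either way."""
        decision = tool in self.allowed_tools
        self.audit_log.append((source, tool, decision))
        return decision

policy = AgentPolicy(allowed_tools=frozenset({"read_issue", "summarise"}))

# The legitimate task proceeds; the injected exfiltration request is denied
# and, crucially, leaves an audit trail for the SOC to review.
assert policy.authorise("read_issue", source="github_issue") is True
assert policy.authorise("export_repo", source="github_issue") is False
```

The point is architectural: the allowlist sits outside the model, so a successful prompt injection can at most request a denied action, not perform it.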
Do you know what AI agents your organisation has deployed? What they can access? What instructions they'll act on?
---
Voice BEC Is Operational
Business Email Compromise has a capable successor. Voice cloning now requires roughly 30 seconds of source audio, readily available from conference recordings, earnings calls, LinkedIn videos, or social media.
Two cases surfaced this week. A Swiss businessman transferred several million Swiss francs after a call from an AI-cloned voice he believed belonged to a known business partner. A UK energy sector CEO authorised a €220,000 transfer after a call using a voice cloned from the CEO of his German parent company.

These are not sophisticated nation-state operations. The barrier to entry is a browser and a credit card.
Kiteworks' State of AI Cybersecurity 2026 report found hyper-personalised phishing the top concern, cited by 50% of respondents, followed by automated vulnerability scanning and exploit chaining (45%), adaptive malware (40%), and deepfake voice fraud (40%).
Finance controllers and executive assistants are the primary targets, not because they're unsophisticated, but because they have the authority and access attackers need. Out-of-band verification for wire transfers is a baseline control now, not a nice-to-have. Voice should not be a sole authorisation channel.
---
LLM Infrastructure as a Target
The IBM X-Force finding of 300,000+ exposed ChatGPT credentials is notable, but the broader dynamic matters more. Enterprises are integrating LLM services into production workflows, often without the credential hygiene applied to traditional enterprise systems.
Research published this week found that exposed LLM API endpoints are becoming exfiltration channels, pivot points, and targets for prompt injection via legitimate API calls.
A separate finding worth flagging: LLM-generated passwords exhibit measurable statistical weaknesses. They look random but lack true CSPRNG entropy. If any developers or systems in your organisation are using LLMs to generate passwords or keys, rotate them.
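The fix is to draw secrets from the operating system's CSPRNG rather than from a language model's token distribution. In Python that means the standard-library `secrets` module, shown here as a minimal sketch:

```python
# Generating credentials from an OS-backed CSPRNG. Unlike text sampled from
# an LLM, whose token distribution is biased and partly predictable, the
# `secrets` module draws from the operating system's CSPRNG.
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Uniform random password over letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def generate_api_token() -> str:
    """URL-safe token with roughly 256 bits of entropy."""
    return secrets.token_urlsafe(32)
```

Any password or key that was produced by an LLM should be treated as weak and rotated using a generator like this, not merely re-prompted.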
---
EU AI Act: Six Months
The EU AI Act's August 2026 deadline for high-risk AI system compliance is six months away. Most enterprises, particularly mid-market, have not completed basic system classification.
Key obligations taking effect August 2026:
Full compliance for operators of high-risk AI systems already deployed
Mandatory documentation, testing, and post-deployment monitoring
Specific requirements for AI used in HR processes
The classification burden sits with the organisation. Regulators will not classify your systems for you. If you haven't determined which deployments fall into high-risk categories — which includes a wider range of operational and HR uses than most assume — you are already behind.
VentureBeat's reporting on Claude Code Security this week found that "formal governance frameworks for reasoning-based scanning tools are the exception, not the norm" across 40+ CISOs interviewed. The tools are arriving faster than the governance.
---
Six Things Worth Acting On
This quarter:
1. Reframe detection SLAs. A 29-minute average breakout time means a 4-hour IR response window is inadequate. If you cannot detect and contain lateral movement within 15 minutes, you have a resourcing gap to put in front of leadership.
2. Map your AI agent deployments. Before your next board meeting: what agents are running, what data they can access, what actions they can trigger, what instruction sources they trust.
3. Implement voice BEC controls. Out-of-band verification for all financial authorisations. Train finance teams on deepfake voice scenarios specifically. Use known-good callback numbers, not caller-ID.
4. Audit LLM credentials. AI API credentials — developer keys, service accounts, integration tokens — should rotate on the same schedule as production credentials.
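A first pass at that audit can be automated: sweep repositories and config stores for key-shaped strings before rotating. The patterns below match two common formats (OpenAI-style `sk-` secret keys and AWS access key IDs) and are a starting point for a sketch, not an exhaustive detector; a production sweep would use a dedicated secret scanner.

```python
# Illustrative sweep for AI/cloud API keys committed to files. The two
# regexes match common key formats; real deployments should extend this
# list or use a purpose-built secret scanner.
import re
from pathlib import Path

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def scan_file(path: Path) -> list[str]:
    """Return every key-shaped string found in the file."""
    text = path.read_text(errors="ignore")
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Every hit is a credential that should be rotated and moved into a secrets manager, on the same schedule as production credentials.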
Before August 2026:
5. Complete your EU AI Act classification. Start with HR systems, then operational AI, then customer-facing. Get legal and compliance aligned on your methodology now.
6. Build an agentic AI governance framework. The OWASP Agentic AI Top 10 is a reasonable starting point. Treat each AI agent as a privileged service account: minimal permissions, audit logging, human escalation paths for high-consequence actions.
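The "privileged service account" model in item 6 can be made concrete with a simple routing rule: low-risk actions execute with audit logging, high-consequence actions are held pending human approval. The action names and risk tiers below are assumptions for illustration.

```python
# Minimal governance gate for agent actions, assuming the privileged
# service account model: audit everything, escalate high-consequence
# actions to a human. Tiers and action names are illustrative.
from datetime import datetime, timezone

HIGH_CONSEQUENCE = {"transfer_funds", "delete_data", "grant_access"}
audit_log = []

def execute(agent_id: str, action: str) -> str:
    """Log the action; hold it for human approval if high-consequence."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "status": ("pending_human_approval"
                   if action in HIGH_CONSEQUENCE else "executed"),
    }
    audit_log.append(entry)
    return entry["status"]
```

Even this crude tiering gives you the two things regulators and incident responders will ask for: a complete action log, and evidence that a human sat in the loop on consequential decisions.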
---
Sources
CrowdStrike 2026 Global Threat Report — 24 February 2026
IBM X-Force Threat Intelligence Index 2026 — 25 February 2026
"Enterprises are racing to secure agentic AI deployments" — Help Net Security, 23 February 2026
"How Exposed Endpoints Increase Risk Across LLM Infrastructure" — The Hacker News, 23 February 2026
OWASP Top 10 for Agentic AI (2026)
"State of AI Cybersecurity 2026" — Kiteworks
"The Deepfake CEO Scam" — ScamWatchHQ, February 2026
"EU AI Act 2026: What Applies Now" — Gosign
"Anthropic's Claude Code Security" — VentureBeat, 23 February 2026
---
CISO Intelligence is a briefing for security executives. Produced by Jonathan Care / KuppingerCole.

