CISO Intelligence for 09 March 2026
What is it good for?
CISO Intelligence — 9 March 2026
AI as weapon, AI as liability, AI as gap — and one agency stretched thin at the worst possible time.
---
Threat Actors Have Operationalised AI. The Gap With Defenders Is Widening.
Microsoft Threat Intelligence published a detailed analysis this week on how state-linked actors are embedding AI throughout the attack lifecycle. The headline finding is worth sitting with: most malicious AI use today is not exotic. Threat actors are using language models to draft phishing lures, translate content, summarise stolen data, generate and debug malware, and build infrastructure scaffolding. AI functions, in Microsoft's framing, as a force multiplier that removes technical friction.
North Korean groups tracked as Jasper Sleet and Coral Sleet (formerly Storm-1877) illustrate the pattern at scale. Coral Sleet is running fully AI-enabled workflows from end to end — fake company websites, remote infrastructure provisioning, lure development, payload testing — all at low cost and high volume. These are not experimental campaigns. They are production operations, scaled by AI, sustained indefinitely.
The more concerning observation sits in the emerging category. Microsoft is seeing early experimentation with agentic AI by threat actors: models being used for iterative decision-making and task execution rather than just text generation. It is not yet at scale. The reliability and operational risk are still limiting factors. But the direction is clear, and defenders should not need to wait for scale before preparing.
So: The asymmetry between how attackers are adopting AI and how defenders are governing it has become structural. Attackers operate at the speed of iteration. Most enterprise security teams are still debating policy.
---
Enterprise AI Deployment Is Outrunning Security's Ability to Watch It
A briefing from the AIUC-1 Consortium, developed with input from Stanford's Trustworthy AI Research Lab and more than 40 security executives, has put numbers to what many CISOs privately suspect.
64% of companies with annual turnover above $1 billion have lost more than $1 million to AI failures, according to EY data cited in the report. One in five organisations has experienced a breach linked to unauthorised AI use. Shadow AI — employees routing sensitive data through personal chatbot accounts without corporate visibility — is now a primary exposure vector.
The statistics on internal governance are stark. Only 21% of executives report complete visibility into what their AI agents can access, what tools they call, or what data they touch. 86% of organisations have no visibility into AI data flows. The average enterprise has roughly 1,200 unofficial AI applications in active use. These are not edge cases. They describe the median enterprise.
Three risk categories dominate the field. The agent challenge: AI systems with overprivileged access operating without per-action human approval can cause damage through normal operation, not just through attack. 80% of organisations surveyed reported risky agent behaviours including unauthorised system access and improper data exposure. The visibility challenge: 63% of employees who used AI tools in 2025 pasted sensitive company data — including source code and customer records — into personal accounts. Shadow AI breaches cost an average of $670,000 more than standard incidents, driven by delayed detection. The trust challenge: prompt injection remains unsolved because LLMs cannot reliably separate instructions from data input. With 53% of companies now using RAG or agentic pipelines, the injection surface area is enormous.
So: The CISO's job in 2026 is not primarily about the model. It is about the governance layer — permissions, observability, data flow controls — that nobody built when the model was deployed.
---
Malicious Browser Extensions Are Harvesting Your AI Conversations at Scale
Microsoft Defender has published findings on a campaign involving malicious Chromium-based extensions that impersonate legitimate AI assistant tools. The extensions have reached approximately 900,000 installs and have been confirmed active across more than 20,000 enterprise tenants.
The attack chain is straightforward. Extensions that look like sidebar AI tools for ChatGPT or DeepSeek request broad page-level permissions that users grant without scrutiny. Once installed, they collect full URLs and complete AI chat content, sending it to attacker infrastructure. The exfiltrated data includes proprietary source code, internal workflows, strategic discussions, and client communications — precisely the content that knowledge workers now routinely discuss with AI assistants.
One detail in the Microsoft report deserves attention: agentic browsers were observed downloading these extensions automatically without explicit user approval, because the names and descriptions were convincing enough to satisfy the model's trust evaluation. That is a new class of supply chain risk. The agent installs the malware; the human never clicks.
Browser extension governance has been a perennial IT control that most organisations deprioritise. In an environment where employees are conducting sensitive business conversations through browser-based AI tools, it has become critical infrastructure security.
So: If your organisation has not reviewed its browser extension policies recently, do it this week. The attack surface is not theoretical.
---
LLMs Can Deanonymise Users at Scale. Pseudonymity Is Functionally Broken.
Research published this week demonstrates that large language models can identify the real-world identities behind pseudonymous social media accounts with 68% recall and up to 90% precision. The experiments correlated individuals across multiple platforms — including Hacker News and LinkedIn — using AI-based pattern matching against public post histories.
Classical deanonymisation required skilled investigators assembling structured data sets manually. That constraint effectively protected most pseudonymous users most of the time. LLMs remove the constraint. The technique now scales.
The implications run in several directions simultaneously. For threat actors, this is a reconnaissance capability at industrial volume. For employees who use pseudonymous accounts to discuss sensitive topics, the protection is largely gone. For organisations, this creates a new dimension of insider risk: what staff say under assumed identities in public forums can now be attributed reliably and cheaply.
The researchers' summary is direct: "The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymisation would require extensive effort. LLMs invalidate this assumption."
So: Threat models built on the assumption of pseudonymity need revision. This applies to corporate communications policies, insider risk programmes, and the advice we give to individuals who discuss work-related matters under cover.
---
VMware Aria Operations: Patch It Now
CISA added CVE-2026-22719 to the Known Exploited Vulnerabilities catalogue on 3 March. The vulnerability is a command injection flaw in VMware Aria Operations. Broadcom has released patches. It is being actively exploited.
The technical details are less important than the operational reality: virtualisation management platforms are high-value targets. Aria Operations controls visibility across the virtualised environment. Compromise at that layer gives attackers both administrative access and the ability to manipulate what monitoring tools see.
If you have not patched, patch. If you are not certain whether you have patched, check.
---
CISA Is Stretched Thin at the Moment It Is Most Needed
CNBC reported this week on a significant structural problem: CISA, the primary US cyber readiness agency, is operating under a partial government shutdown, managing furloughs, and navigating a management reshuffle — simultaneously with an escalating Iran threat following US and Israeli strikes on the region.
Security experts cited in the reporting describe the timing as acutely dangerous. Iran, the assessment goes, may have been holding capabilities in reserve and is approaching a moment where retaliation becomes both motivated and strategically timed. One cited expert framed it plainly: "From a timing perspective, it's now or never."
For organisations with critical infrastructure exposure or US-linked operations, the practical implication is that federal coordination capacity is reduced precisely when threat levels are elevated. That means more weight shifts to the private sector. Threat sharing, which depends in part on CISA functioning at full capacity, is constrained.
This is the environment in which the other stories this week should be read. AI-enabled attackers, shadow AI exposure, deanonymisation at scale — these are not isolated technical problems. They sit inside a deteriorating geopolitical situation with a temporarily weakened coordination layer.
So: If your incident response plans assume normal CISA responsiveness, review them. This week is a reasonable moment to do that.
---
The Week's Signal
The theme across this week's stories is not any single vulnerability or attack campaign. It is the widening distance between the speed of attacker adoption and the pace of defensive governance.
Threat actors are running AI-enabled operations end to end. Enterprise security teams are still negotiating policy. AI agents are being deployed into production with no visibility into what they can access. Browser extensions are harvesting AI conversations at enterprise scale. Pseudonymity, a foundational assumption in many threat models, has been algorithmically broken.
None of these are hypothetical futures. They are reported conditions from this week.
CISOs who have been watching AI security from a distance while waiting for the picture to become clearer need to close that gap now. The picture is clear enough.
---
Sources: Microsoft Security Blog, Help Net Security / AIUC-1 Consortium, Ars Technica, The Hacker News, CNBC, National CIO Review / Cisco State of AI Security 2026, CISA KEV Catalog.

