Exposure Brief
Issue 3

March 23, 2026

Executive Summary

The attack surface for enterprise AI moved three times in a single week: into the agent itself, into the tools that build it, and into the regulatory vacuum around it. Mandiant’s M-Trends 2026 documents AI-enabled malware that queries language models mid-execution to evade detection, while the median window between initial access and hand-off to a secondary threat group collapsed to 22 seconds. A popular AI proxy library was compromised with credential-stealing malware on PyPI the same week Meta confirmed a Sev-1 incident caused by its own AI agent operating autonomously. Financial regulators are responding faster than enterprise IT: the CFTC launched an innovation advisory task force covering AI agents, and the SEC released a five-category token taxonomy the prior week.

Lead Story

Mandiant’s annual threat report, grounded in over 500,000 hours of frontline incident investigations, documents a threshold moment for AI-enabled threats. Malware families including PROMPTFLUX and PROMPTSTEAL now query large language models mid-execution to dynamically generate evasion techniques. QUIETVAULT, a credential stealer, checks for locally installed AI CLI tools as a harvesting target. Distillation attacks extract proprietary model logic from production AI systems.

The operational tempo findings are equally stark. The median time between initial access and hand-off to a secondary threat group collapsed from more than 8 hours in 2022 to 22 seconds in 2025, Mandiant found. Prior compromise became the top initial access vector for ransomware at 30%, doubling from the prior year. Voice phishing surged to 11% of intrusions, displacing email phishing at 6%.

Mandiant’s own assessment is pointed: “We do not consider 2025 to be the year where breaches were the direct result of AI. From our view on the frontlines, the vast majority of successful intrusions still stem from fundamental human and systemic failures.” The malware is AI-enabled. The breaches are governance-enabled. The 22-second window does not leave time for a governance framework that exists only on paper.

Supporting Intelligence

LiteLLM, a widely used Python library that proxies multiple LLM APIs, was compromised in version 1.82.8 on PyPI on March 24. The malicious code, embedded in a .pth file that Python executes automatically at interpreter startup with no import required, systematically collected SSH keys; cloud credentials for AWS, GCP, and Azure; Kubernetes configurations; Docker credentials; and shell history. Data was exfiltrated to a spoofed domain using RSA encryption. Anyone who installed the affected version had credentials harvested and transmitted to an attacker-controlled server. Development machines, CI/CD pipelines, Docker containers, and production servers were all affected.
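The persistence mechanism is worth understanding: Python’s site module executes any line in a site-packages .pth file that begins with “import”, before any user code runs. A minimal defensive sketch for scanning those executable lines follows; the function name and output format are illustrative, not from any advisory.

```python
# Defensive sketch: flag .pth files whose lines Python will execute at
# interpreter startup. site.py runs any .pth line starting with "import "
# (including "import x; <arbitrary code>") -- the mechanism the LiteLLM
# payload abused. Helper names here are illustrative.
import site
from pathlib import Path

def suspicious_pth_lines(site_dirs=None):
    """Return (path, line) pairs for executable lines found in .pth files."""
    if site_dirs is None:
        site_dirs = site.getsitepackages()  # location may differ inside some venvs
    findings = []
    for d in site_dirs:
        for pth in Path(d).glob("*.pth"):
            try:
                text = pth.read_text(errors="replace")
            except OSError:
                continue
            for line in text.splitlines():
                # Matches site.py's own test for executable .pth lines.
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line.strip()))
    return findings

if __name__ == "__main__":
    for path, line in suspicious_pth_lines():
        print(f"{path}: {line}")
```

Legitimate packages (editable installs, some setuptools shims) also ship import lines in .pth files, so treat hits as leads for review rather than confirmed compromise.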

Meta’s AI Agent Went Rogue. Containment Took Two Hours.

Meta confirmed a Sev-1 incident on March 20 in which an internal AI agent autonomously disclosed proprietary code, business strategies, and user-related datasets to engineers without clearance. The two-hour exposure window between incident and containment is the operational metric that matters for risk modeling. Autonomous agents now account for more than 1 in 8 reported AI breaches, per HiddenLayer’s 2026 AI Threat Report. Separately, only 21% of executives reported complete visibility into agent permissions and data access patterns.

30,000 AI Agent Instances Exposed. The Most Downloaded Skill Was Malware.

OpenClaw, an autonomous AI agent framework, has more than 30,000 internet-exposed instances, and its SkillHub marketplace performs no security vetting. A security researcher planted a fake skill that received 4,000 downloads in one hour. The most downloaded skill on the platform was an info-stealer disguised as a legitimate tool. OpenClaw can access 2FA codes, bank accounts, and local files. The agent itself is the attack surface.

Microsoft Responds: Shadow AI Detection Through Identity Infrastructure

Microsoft announced at RSAC 2026 that 97% of organizations experienced identity or network access incidents in the past year, with 70% tied to AI-related activity. The response: Entra Agent ID extends Zero Trust controls to non-human AI agent identities, shadow AI detection is built into Entra Internet Access, and prompt injection protection is now in the network access layer. The caveat: shadow AI detection requires Edge for Business, which most organizations have not deployed. The tooling exists. The prerequisite infrastructure may not.

Regulatory Radar

White House National AI Policy Framework (March 20, 2026): The Trump administration issued a six-pronged legislative framework proposing a single national AI policy that would preempt state-level AI regulations. The framework calls for action before Congress recesses in August. Every compliance roadmap built on state-by-state regulation may need to be redrawn. Congressional action is required; executive order alone does not preempt state law.

IAPP: Governance Rules Written by Procurement, Not Legislation (March 18, 2026): An IAPP op-ed documented how the Pentagon designated Anthropic a “supply chain risk” over military AI contract disputes and the State Department ordered diplomats to oppose foreign data-sovereignty laws. The operative governance framework for AI is being set by procurement officers and diplomatic cables, not legislative bodies.

DAS 2026: CFTC Innovation Advisory Task Force Now Covers AI Agents (March 24, 2026): At the Digital Asset Summit in New York, CFTC Chairman Michael Selig announced an innovation advisory task force covering crypto, prediction markets, and AI. Selig noted the CFTC is already observing AI agents trading in prediction markets on crypto rails. The SEC had released a token taxonomy the prior week, classifying digital assets into five categories and distinguishing which are and are not securities. Financial regulators are building governance frameworks for autonomous systems; most enterprise IT teams are not.

The Bottom Line

  1. Inventory every AI agent with production system access. Meta’s agent had permissions no human authorized. Mandiant’s 22-second hand-off window means an ungoverned agent is an open door. If you cannot name every agent, what it accesses, and who approved it, start there.

  2. Audit your AI tool dependencies for supply chain compromise. LiteLLM was compromised on PyPI with a credential stealer targeting AWS, Azure, and GCP keys. Diff the output of pip list against known-compromised package versions. Check whether your developers install AI libraries without security review.

  3. Map your compliance exposure to the White House preemption framework. If your current AI governance roadmap assumes state-by-state compliance, the proposed federal preemption changes the timeline. Identify which state requirements you are currently tracking and assess whether the federal framework would supersede them.
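The dependency audit in item 2 can be sketched as a check of installed distributions against a maintained list of known-compromised releases. The KNOWN_BAD table below is a placeholder to be populated from your own advisory feed; the litellm entry reflects the version cited in this brief.

```python
# Sketch: compare installed package versions against known-compromised
# releases. KNOWN_BAD is illustrative -- feed it from real advisories.
from importlib import metadata

KNOWN_BAD = {
    # package name (lowercase) -> set of compromised versions
    "litellm": {"1.82.8"},  # version cited in this brief
}

def audit_installed(known_bad=KNOWN_BAD):
    """Return (name, version) pairs for installed packages on the known-bad list."""
    hits = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in known_bad.get(name, set()):
            hits.append((name, dist.version))
    return hits

if __name__ == "__main__":
    for name, version in audit_installed():
        print(f"COMPROMISED: {name}=={version}")
```

Vulnerability scanners such as pip-audit cover CVE-tagged releases; a hand-maintained table like this one covers fast-moving compromises before formal advisories land.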