Exposure Brief

March 25, 2026


AI Governance Reckoning: Court Rulings, Supply Chain Compromises, and the Agentic Identity Gap

Executive Summary

The AI governance reckoning arrived via three simultaneous vectors on March 24-25, 2026. A federal court ruled in United States v. Heppner that AI tool conversations carry no privilege protection, making shadow AI use a live litigation and regulatory examination risk. RSAC researchers disclosed six CVEs across Claude Code, Codex, Cursor, and Gemini CLI, confirming that AI coding tools have systematically bypassed the endpoint security infrastructure enterprises spent 20 years building. Meanwhile, a coordinated supply chain campaign compromised the LiteLLM AI proxy library — a package with 95 million monthly downloads — with a credential stealer targeting AWS, Azure, and GCP keys. The week’s throughline: the governance gap is no longer theoretical. It has a docket number, a CVE list, and a credential exfiltration archive.


Lead Story

A Federal Court Made Your AI Conversations Discoverable

The United States v. Heppner ruling, decided March 25, 2026, is the most significant enterprise AI governance development this quarter: a federal court held that conversations with AI tools qualify for neither attorney-client privilege nor work-product protection. The reasoning is structural: AI interactions are records held by a third-party vendor, which eliminates the expectation of confidentiality that underpins both doctrines. Unlike conversations with human counsel or internal deliberations, AI tool sessions exist as data rows in a vendor’s infrastructure — outside the firm’s legal control and outside the boundaries of most privilege frameworks.

For financial services firms subject to SEC, FINRA, and CFTC recordkeeping requirements, this is a compounding exposure. AI conversation logs may now be subpoenaed or produced during regulatory examinations, surfacing deal deliberations, risk assessments, and internal strategy that compliance teams assumed were protected. Shadow AI use — employees routing sensitive work through non-sanctioned tools — creates an undocumented discovery pool that no one has inventoried. Vendor retention policies, jurisdiction, and data access terms are buried in service agreements most GCs have not revisited since the tool was approved.

The Growth Strategist reads this as an accelerant for AI governance demand: the Heppner ruling converts the assessment conversation from “best practice” to “current legal exposure.” The Business Strategist’s take is sharper — the GC is now a buyer. Every regulated firm that has not conducted an AI tool inventory faces an audit question they cannot answer. The audit your leadership just asked for needs to include the agents. United States v. Heppner (LegalTech News, 2026-03-25)


Supporting Intelligence

RSAC 2026: AI Coding Tools Have Rendered Endpoint Security Blind

Twenty years of endpoint security investment has a new adversary: the developer’s AI assistant. Check Point researcher Oded Vanunu disclosed six CVEs at RSAC 2026, including flaws in Claude Code (CVE-2025-59536), OpenAI Codex CLI (CVE-2025-61260), and Cursor (CVE-2025-54136), plus a silent command execution vector in Gemini CLI via malicious GEMINI.md files — all now patched. The attack surface is not the model; it is the configuration file. A single malicious line in a .json, .env, or .toml file can instruct an AI agent to execute arbitrary commands, and no malware signature exists to detect it because the activity presents as legitimate developer behavior. Vanunu’s characterization: “All security products are blind. Totally blind.” The Red Team notes this is not a patch problem but an architectural one: AI coding tools require filesystem access and developer-level trust to function, which structurally bypasses the thin-client security model EDR was designed to protect. The Privacy & Security Auditor’s implication: test whether your endpoint controls can distinguish an AI agent executing a config-embedded command from a developer running a normal build. If the answer is no, that is a finding. Dark Reading / RSAC 2026
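The instruction-file vector above can be triaged with plain text heuristics even where EDR is blind. A minimal sketch follows, assuming a hypothetical set of agent instruction filenames (GEMINI.md is from the disclosure; CLAUDE.md and .cursorrules are illustrative) and an assumed keyword list an auditor would tune:

```python
import re
from pathlib import Path

# Hypothetical heuristic, not a vendor detection rule: flag agent-readable
# instruction files whose contents look like directives to run commands.
AGENT_CONFIG_NAMES = {"GEMINI.md", "CLAUDE.md", ".cursorrules"}  # assumed list
SUSPICIOUS = re.compile(
    r"(?:run|execute|curl|wget|bash -c|ignore (?:all|previous) instructions)",
    re.IGNORECASE,
)

def scan_repo(root: str) -> list[str]:
    """Return paths of agent instruction files containing suspicious directives."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.name in AGENT_CONFIG_NAMES and path.is_file():
            if SUSPICIOUS.search(path.read_text(errors="ignore")):
                hits.append(str(path))
    return sorted(hits)
```

A scan like this produces candidates for human review, not verdicts; the point is that the control has to read file contents, which signature-based endpoint tooling does not.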

LiteLLM Supply Chain Compromise: Credential Theft at 95 Million Monthly Downloads

The same week RSAC described the threat model, a live example landed. TeamPCP embedded a credential stealer in LiteLLM versions 1.82.7 and 1.82.8 on PyPI — a library receiving 95 million monthly downloads that serves as the AI inference proxy for many enterprise AI pipelines. The persistence mechanism is worth understanding: a malicious .pth file dropped into site-packages is executed by the Python interpreter at every startup, regardless of whether LiteLLM is ever imported, meaning any environment where the package exists even as a transitive dependency is affected. Harvested credentials included SSH keys, AWS/GCP/Azure access, Kubernetes secrets, Docker credentials, cryptocurrency wallets, and shell history — exfiltrated in an RSA-encrypted archive to a spoofed domain. The Blue Team notes the target selection was deliberate: LiteLLM sits in AI inference pipelines that aggregate API keys for multiple LLM providers — the highest-value credential pool in any AI-enabled organization. TeamPCP’s five-day campaign extended to Trivy, Checkmarx KICS, GitHub Actions, and 66+ npm packages; the security tools enterprises use to audit their own supply chains were themselves compromised, a recursive trust failure. Credential rotation across all affected environments is not optional. CyberInsider | Phoenix Security

Agent Identity Governance Is Becoming a Formal Compliance Domain

Two announcements in 48 hours mark the transition from best practice to institutional infrastructure. The Cloud Security Alliance launched CSAI, a dedicated 501(c)(3) foundation for AI agent security, with a new CVE numbering authority for agentic AI, three TAISE certification tracks (CxO, agentic practitioner, and a high school Compass track), and a global assurance framework built on ISO 42001, ISO 27001, and SOC 2. Concurrently, Microsoft VP of Data and AI Security Herain Oberoi said at RSAC that AI agent proliferation and the lack of management tooling is the most pressing change to the enterprise threat landscape, ranking it above AI sprawl, data leakage, and new regulation. Microsoft responded with agent identities in Entra ID, a centralized agent registry, guardrail controls assignable per model, and a new AI pillar in its Zero Trust Workshop. Omdia finds that more than half of companies lack confidence that they can secure resources accessed by non-human identities. The Martell-Method Advisor sees a market timing window: the governance infrastructure is maturing faster than most enterprises’ awareness that the gap exists. The firms that move now on agent identity controls position themselves ahead of what the compliance calendar will make mandatory. Dark Reading / CSA | Dark Reading / Microsoft

EY Quantifies the Shadow AI Gap: 45% Data Leaks, 52% Unsanctioned Initiatives

The governance gap has numbers now. EY surveyed technology executives and found that 45% had experienced confirmed or suspected sensitive-data leaks from unauthorized third-party AI tool use, 39% reported proprietary IP leaks from the same cause, and 52% of department-level AI initiatives operate without formal approval or oversight. The malicious/accidental split — 53% malicious, 47% accidental — matters for how firms frame the problem: shadow AI is not primarily a malice problem; it is a management visibility problem. Fully 97% of those same executives rate autonomous AI as a high or essential priority, while 78% acknowledge adoption is outpacing risk management capability. Only 50% of AI governance leaders hold independent authority to halt problematic projects, which means the reporting function exists without enforcement teeth. Audit committees asking for AI risk reports will find the reporting infrastructure was built but the controls were not. EY Americas (2026-03-04)

Mobile Apps Are the Shadow AI Blind Spot Enterprise Programs Miss

AI governance programs built around M365 and SaaS platforms are systematically missing an entire exposure category. NowSecure tested 50,000 mobile apps in February 2026 and found 53% include AI components — embedded SDKs, ML frameworks, or API calls to external AI services that standard MDM vetting and app approval processes do not detect. The exposure mechanism is a routine update: an approved productivity app ships an AI summarization feature that begins routing internal documents to an external provider, triggering no approval checkpoint. For FinServ firms under FINRA or SEC jurisdiction, those documents may constitute books and records subject to existing retention rules — now flowing to an unvetted service. Gartner projects over 40% of enterprises will face shadow AI security or compliance incidents by 2030. The firms that close that gap are the ones that have inventoried the mobile layer, not just the desktop. NowSecure
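Closing the mobile gap starts with inventory, and a first pass can be as simple as grepping a build's dependency report. A minimal sketch, assuming a Gradle-style dependency listing as input; the hint list is a hypothetical starting set an auditor would extend, and this is not NowSecure's methodology:

```python
# Illustrative triage, not a complete detection: match dependency-report
# lines against coordinates that hint at embedded AI SDKs or ML runtimes.
AI_SDK_HINTS = (
    "com.google.mlkit",                # on-device ML Kit
    "org.tensorflow:tensorflow-lite",  # TensorFlow Lite runtime
    "ai.onnxruntime",                  # ONNX Runtime mobile
    "com.aallam.openai",               # community OpenAI client (assumed hint)
)

def flag_ai_dependencies(dependency_report: str) -> list[str]:
    """Return unique report lines matching any AI SDK hint."""
    return sorted(
        {line.strip() for line in dependency_report.splitlines()
         if any(hint in line for hint in AI_SDK_HINTS)}
    )
```

String matching on declared dependencies misses dynamically loaded SDKs and direct API calls, which is why the NowSecure finding rests on testing the built apps themselves; this sketch only shows where an internal inventory could begin.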


Regulatory Radar

State AI Penalties Are Active and Summer Deadlines Are Approaching

Federal preemption is not clearing the state-level compliance calendar. Gunderson Dettmer’s 2026 AI law tracker documents the penalty exposure enterprises face from state laws that the December 2025 Executive Order has not displaced: the California AG can impose up to $1M per violation under SB 53; the Colorado AG holds exclusive enforcement authority with up to $20K per violation for algorithmic discrimination; NYC Local Law 144 already carries a $1,500-per-day, per-violation penalty; and EU AI Act fines reach €35M or 7% of global turnover. Three deadlines cluster this summer: the Colorado AI Act (June 30, 2026), EU AI Act high-risk requirements (August 2, 2026), and California AB 853 (August 2, 2026). Counsel guidance is explicit: companies must maintain dual compliance tracks. The EO does not pause state obligations. Gunderson Dettmer (2026-02-05)

FCC Hardware Ruling Traces the Same Logic as AI Tool Governance

The FCC added all foreign-produced consumer routers to its Covered List on March 23, 2026, citing their documented role in Volt, Flax, and Salt Typhoon attacks against U.S. critical infrastructure. The enforcement reasoning is identical to the AI tool governance argument: uncontrolled foreign-origin infrastructure creates data flows enterprises cannot audit or inspect. Hardware today; AI tools with foreign data residency and opaque retention practices are next. The regulatory direction is legible. FCC (2026-03-23)

Digital Governance Confidence Has Landed at 31%

IAPP’s 2026 governance research finds only 31% of organizations report strong confidence in their ability to comply with applicable digital law and policy. The cause is structural: the organizational silos that once separated privacy, cybersecurity, AI governance, and legal compliance have collapsed. Teams previously accountable for one domain are now managing all of them simultaneously with the same headcount. IAPP’s term for the result — “digital entropy” — is accurate: chaos is the default when technology outpaces governance frameworks. The 69% without strong confidence are not failing; they are accurate about the difficulty. IAPP (2026-03-24)


The Bottom Line