Exposure Brief
Issue 4

March 26, 2026

Substack Metadata
Title: RSAC 2026 Proved That AI Coding Tools Operate Outside Every Security Control You Have
Subtitle: Check Point demonstrated six CVEs across Claude Code, Cursor, Codex, and Gemini CLI. The same week, a supply chain attack cascaded across five DevSecOps tools. Courts started holding companies accountable.
Issue: Issue 4 | March 26, 2026

Executive Summary

A court verdict, two security conferences, and a string of supply chain compromises all surfaced the same gap: accountability infrastructure for AI agents lags behind deployment speed. At RSAC 2026, Check Point demonstrated that AI coding tools bypass every layer of endpoint security. A single stolen credential cascaded across five DevSecOps tools in five days. A California jury found Meta and YouTube liable for deploying technology they knew caused harm, and a federal court ruled that AI-generated documents are not privileged.

Lead Story

RSAC 2026 Proved That AI Coding Tools Operate Outside Every Security Control You Have

Check Point researcher Aviv Donenfeld presented at RSAC 2026 demonstrating six CVEs across Claude Code, Codex CLI, Cursor, and Gemini CLI. The vulnerabilities are not implementation bugs. They are architectural: AI coding assistants execute code in contexts that endpoint detection, application firewalls, and runtime monitoring cannot see. As Donenfeld put it, these tools “crushed” the endpoint security fortress.

The implications extend beyond coding tools. Microsoft’s own VP of Data and AI Security, Herain Oberoi, confirmed at RSAC that AI agent proliferation ranks as the most pressing security threat — above data sprawl, data leakage, or new regulation. If the vendor building your AI platform acknowledges that agent governance is the top concern, the organizational response cannot be “we will address it next quarter.”

The same week, LiteLLM, a Python library that proxies multiple LLM APIs and sees roughly 95 million monthly downloads (per CyberInsider), was compromised on PyPI with a credential stealer. The malicious code harvested AWS, GCP, and Azure keys, SSH keys, Kubernetes configs, and shell history, and it ran at Python interpreter startup without anything importing LiteLLM. That detail matters: any Python process on a host with the poisoned package installed triggered it, not just code that used the library, so developer machines, CI/CD runners, and production servers were all affected.

Supporting Intelligence

One Stolen Credential Cascaded Across Five DevSecOps Tools in Five Days

TeamPCP’s supply chain campaign started with a single compromised credential and spread across Trivy, Checkmarx KICS, GitHub Actions, VS Code extensions, and 66+ npm packages. The attack demonstrates that DevSecOps tools — the tools organizations rely on to catch supply chain compromises — are themselves attack surfaces. The security toolchain is not immune to the threats it monitors.

Microsoft Responds: 97% Had Identity Incidents, 70% Tied to AI

Microsoft’s 2026 Secure Access report found that 97% of organizations experienced identity or network access incidents in the past year, with 70% tied to AI-related activity. The response: Entra Agent ID extends Zero Trust controls to non-human AI agent identities, and shadow AI detection is built into Entra Internet Access. The caveat: shadow AI detection requires Edge for Business deployment, which most organizations have not completed.

30,000 AI Agent Instances Exposed. A Researcher Proved How Easy They Are to Compromise.

OpenClaw has 30,000+ exposed instances and a SkillHub marketplace with no security vetting. A researcher planted a fake skill, inflated its download count until it ranked first, and 4,000+ real developers across seven countries installed it and ran arbitrary commands as a result. To get value from the agent, users grant it broad access to their personal accounts and data. At the Digital Asset Summit in New York, a panelist noted that in Asia, “hordes of people are queuing up to install OpenClaw, then paying to uninstall it weeks later because of security violations.”

CSA Creates a Dedicated Foundation for AI Agent Security

The Cloud Security Alliance launched CSAI, a dedicated 501(c)(3) foundation focused on governing the “agentic control plane”: identity, authorization, and trust assurance for autonomous AI agents. CSAI will develop certifications and act as a CVE Numbering Authority (CNA) specifically for agentic AI vulnerabilities. The creation of a dedicated standards body signals that the industry recognizes agent governance as a distinct discipline, not a subset of application security.

Regulatory Radar

Meta and YouTube Found Liable in Bellwether Addiction Trial (March 25, 2026): A California jury awarded $6 million in damages after finding that Meta and YouTube deliberately designed platforms that addict children, apportioning 70% of the liability to Meta. The damages are subject to judicial review, but the verdict’s significance is structural: this is a bellwether case, selected to signal how thousands of consolidated lawsuits are likely to resolve. The verdict turned on internal documents showing that executives knew their products caused harm and deployed them anyway.

Federal Court Rules AI Conversations Are Not Privileged (February 2026): In United States v. Heppner, Judge Rakoff held that documents generated using Anthropic’s Claude did not qualify for attorney-client privilege. The defendant used a consumer AI tool without attorney direction; the vendor’s privacy policy permitted data disclosure. The court left open whether enterprise tools with data isolation face a different analysis. For organizations where employees use consumer AI tools without formal governance, those conversations are discoverable records.

DAS 2026: Financial Regulators Building Agent Accountability Frameworks (March 24-25, 2026): At the Digital Asset Summit, KPMG, Stripe/Privy, and EigenCloud panelists described agentic commerce as “the biggest thing to happen to commerce in the coming decade.” Thirty-five percent of incoming Privy developers are building agentic products. EigenCloud’s JT Rose stated: “You need the ability to hold these agents accountable for what they’re doing.” The CFTC’s innovation advisory task force already covers AI agents alongside crypto and prediction markets.

The Bottom Line

  1. Run your AI coding tools through a security assessment independent of your EDR. Check Point showed at RSAC that Claude Code, Cursor, Codex, and Gemini CLI bypass endpoint detection. Your existing security stack does not see what these tools execute. Test it: in a throwaway repository, have an AI coding assistant edit a file and run a shell command, then check whether your EDR and SIEM recorded the command, its parent process, and the user it ran as.

  2. Audit your Python AI dependencies for supply chain compromise. LiteLLM (95M monthly downloads) was compromised with a credential stealer that runs at Python interpreter startup. Compare the output of pip list, and the lockfiles your CI installs from, against the advisory, and include CI runners and production hosts, not just laptops, because the payload fires wherever the package is installed. Check whether developers install AI libraries without a security review process.

  3. Assess your legal exposure from ungoverned AI tool usage. The Heppner ruling makes consumer AI conversations discoverable. The Meta verdict holds companies liable for deploying technology they knew caused harm. Every AI conversation your employees have is a record. The governance you establish today determines what you can demonstrate tomorrow.
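The lead story says the LiteLLM payload ran at Python startup without being imported, and item 2 above asks for an audit of installed AI dependencies. The script below is an added example written for this issue; it is not code from the newsletter, from LiteLLM, or from the advisory. It assumes that "runs at startup without an import" refers to the .pth mechanism, the documented behaviour of Python's site module, which at interpreter startup scans every *.pth file in each site-packages directory and executes any line beginning with "import " (other lines are treated as path entries and never run). The three .pth files it scans are constructed for the demo, the marker list is a triage heuristic and not a signature, and no affected version range is encoded, because this issue does not state one.

```python
"""Audit a Python install for startup hooks and report installed versions.

Added example, not from the newsletter or LiteLLM. Assumption: the
"runs on startup, no import needed" behaviour is the .pth mechanism.
site.py exec()s any *.pth line starting with "import " or "import\t";
all other non-blank, non-comment lines are sys.path entries and never
run. virtualenv and setuptools install such hooks legitimately, so
findings need triage rather than blind deletion.
"""
import site
import sys
import tempfile
from importlib import metadata
from pathlib import Path

# Heuristic: an import hook that does more than register a shim. This
# list is an assumption for triage, not a detection signature.
SUSPICIOUS_MARKERS = (
    "exec(", "eval(", "compile(", "base64", "zlib", "marshal", "urllib",
    "socket", "subprocess", "http", ".ssh", ".aws", ".kube",
)


def executable_pth_lines(pth_text):
    """Return [(lineno, line)] for lines site.py would exec() at startup."""
    hits = []
    for lineno, raw in enumerate(pth_text.splitlines(), 1):
        if raw.startswith("#") or not raw.strip():
            continue                      # comment or blank: ignored by site.py
        if raw.startswith(("import ", "import\t")):
            hits.append((lineno, raw))    # exactly the lines site.py exec()s
    return hits


def looks_suspicious(line):
    """Crude triage: long or obfuscated-looking hooks deserve a closer look."""
    return len(line) > 200 or any(m in line for m in SUSPICIOUS_MARKERS)


def scan_pth_dirs(dirs):
    """Scan *.pth files in dirs; return (path, lineno, line, suspicious) tuples."""
    findings = []
    for d in dirs:
        d = Path(d)
        if not d.is_dir():
            continue
        for pth in sorted(d.glob("*.pth")):
            try:
                text = pth.read_text(encoding="utf-8", errors="replace")
            except OSError:
                continue
            for lineno, line in executable_pth_lines(text):
                findings.append((str(pth), lineno, line, looks_suspicious(line)))
    return findings


def installed_version(dist_name):
    """Installed version of a distribution, or None. Compare it against the
    advisory's affected range yourself; no range is hard-coded here."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None


def site_dirs():
    """Every site-packages directory the running interpreter consults."""
    dirs = list(getattr(site, "getsitepackages", lambda: [])())
    dirs.append(site.getusersitepackages())
    return dirs


def demo():
    """Run the scanner over a constructed site-packages directory.
    The .pth files are fabricated for this example, not incident artifacts."""
    with tempfile.TemporaryDirectory() as tmp:
        sp = Path(tmp)
        # Typical benign hook, like the one virtualenv installs.
        (sp / "_virtualenv.pth").write_text("import _virtualenv\n")
        # Path entry only: appended to sys.path, never executed.
        (sp / "easy-install.pth").write_text("./some_pkg-1.0-py3.egg\n")
        # Constructed suspicious hook; the payload decodes to 'pass' and this
        # scanner never executes it, it only reads the file.
        (sp / "evil.pth").write_text(
            "# looks harmless at a glance\n"
            'import base64, subprocess; exec(base64.b64decode("cGFzcw=="))\n'
        )
        return scan_pth_dirs([sp])


if __name__ == "__main__":
    # Usage: python audit_pth.py litellm  (any distribution names as arguments)
    for path, lineno, line, suspicious in scan_pth_dirs(site_dirs()):
        print(f"[{'SUSPICIOUS' if suspicious else 'review'}] {path}:{lineno}: {line}")
    for dist in sys.argv[1:]:
        print(f"{dist}: {installed_version(dist) or 'not installed'}")
```

Run it with the interpreter that CI and production actually use, since each interpreter has its own site-packages and user site directory; a clean laptop says nothing about a build runner. Every line it prints executes on every start of that interpreter, so a hook you cannot attribute to a package you installed deliberately is the thing to investigate.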