Exposure Brief

March 24, 2026

Run: close | Articles: 5 | Store: 131 total


Executive Summary

The AI tool supply chain is under coordinated attack. A single threat actor — TeamPCP — compromised both the LiteLLM AI proxy library (95 million monthly PyPI downloads) and the Checkmarx KICS security scanner GitHub Action in the same campaign, harvesting CI/CD credentials, cloud keys, and Kubernetes secrets from developer pipelines. This is not theoretical risk: the attacker used .pth file persistence tricks that execute even when the compromised package is never directly imported, and deployed Kubernetes privileged pods for lateral movement. Every organization running AI tools in their build pipeline is exposed, and most have zero visibility into it.

RSAC 2026 delivered the sharpest validation yet for the Common Nexus assessment thesis. Check Point’s Oded Vanunu disclosed 6 CVEs across Claude Code, OpenAI Codex, Cursor, and Gemini CLI, demonstrating that AI coding assistants systematically bypass endpoint security — EDR products are “totally blind” to agentic AI activity. His framing — “developers are the new perimeter” and “configuration equals code” — expands the shadow AI conversation from employees pasting into ChatGPT to AI agents with filesystem access executing config-embedded commands. Meanwhile, Mandiant’s M-Trends 2026 report documented AI-enabled malware families (PROMPTFLUX, QUIETVAULT) operating in the wild. The report explicitly states, however, that 2025 was not the year breaches resulted directly from AI: fundamental human and systemic failures remain the root cause, which aligns with the “governance first” message.

Microsoft’s own data now validates the identity-AI risk intersection: 70% of organizations report identity incidents tied to AI-related activity, with a near-even split between malicious (53%) and accidental (47%) exposure. That accidental half is the shadow AI story — unmanaged usage creating risk without malicious intent. Combined with Mandiant’s finding that the ransomware hand-off window has collapsed from 8 hours to 22 seconds, the message for sales conversations is clear: the window between “we should assess our AI exposure” and “it’s too late” is shrinking fast.


Persona Analysis

Growth Strategist: The RSAC “all security products are blind” quote from Check Point is the strongest top-of-funnel hook you’ve had — it comes from a major vendor admitting the industry’s flagship defense layer has failed against AI tools. Pair it with Microsoft’s 70% AI-linked incident stat for a one-two punch in outbound messaging. The TeamPCP supply chain campaign gives you a concrete, named threat actor story that makes abstract risk feel real to prospects.

Content Strategy Lead: Two LinkedIn posts here, not five. Priority 1: the RSAC Check Point research — “6 CVEs, 4 AI coding tools, endpoint security totally blind” is a headline that writes itself. Use the “developers are the new perimeter” framing. Priority 2: TeamPCP supply chain campaign connecting litellm + KICS — “same actor, same week, AI tools and security tools both compromised.” Save the Microsoft 70% stat as supporting data for either post, not a standalone.

Privacy & Security Auditor: The RSAC CVE disclosures demand a scope expansion conversation for the assessment. If AI coding tools bypass EDR, then the assessment needs to cover developer workstations and CI/CD pipelines, not just M365 Copilot. The TeamPCP campaign’s .pth persistence trick — executing without import — is a pattern to flag in client reports as evidence that AI supply chain risk is non-trivial. Mandiant’s explicit “not yet AI-caused breaches” caveat is important nuance to preserve in any content.

Martell-Method Advisor: Two actions from this briefing, not five. (1) Draft the RSAC Check Point LinkedIn post — the quotes are ready-made and the 48-hour news cycle window is closing. (2) Add the Microsoft 70% stat and the “22-second hand-off window” to your sales deck talking points. Everything else is context that sharpens your understanding but does not require action tonight.

Business Strategist: The RSAC research reframes the competitive landscape. Network-layer shadow AI detection (Witness AI, etc.) can’t see what AI coding agents do on developer machines — Check Point just proved that. Your Graph API identity-layer approach remains differentiated, and the scope is expanding: developer tooling + CI/CD pipelines are now part of the AI governance conversation, not just SaaS usage. Microsoft validating the 70% AI-incident link at the identity layer confirms you’re building in the right place.

Red Team Analyst: TeamPCP’s campaign is textbook supply chain tradecraft applied to AI infrastructure: compromise the proxy layer (litellm) to harvest API keys, then pivot to the security scanner (KICS) for CI/CD secrets. The .pth persistence and Kubernetes privileged pod escalation show sophistication. The RSAC CVE research reveals a new attack class: weaponized config files (.json, .env, .toml) that co-opt AI agents into executing arbitrary commands. Both attack vectors bypass traditional detection entirely.
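The .pth trick deserves a concrete illustration. CPython's site module reads every .pth file in site-packages at interpreter startup, and any line beginning with "import " is exec()'d on the spot — no import of the hosting package required. A minimal, benign sketch of the mechanism (this is a stand-in payload, not TeamPCP's actual code; the filename and env var are invented for the demo):

```python
# Demonstrates .pth-based execution: the site module exec()s any line in a
# .pth file that starts with "import ". Calling site.addsitedir() here
# simulates what site.py does against site-packages at interpreter startup.
import os
import site
import tempfile

demo_dir = tempfile.mkdtemp()

# Benign stand-in payload: set an env var instead of stealing credentials.
# A real implant would put exfiltration logic after the semicolon.
with open(os.path.join(demo_dir, "innocuous.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')

site.addsitedir(demo_dir)  # triggers the exec, exactly as startup would

print(os.environ.get("PTH_DEMO"))  # -> executed
```

The payload runs even though nothing ever imports the package that shipped the .pth file — which is why import-graph or SCA tooling keyed to what your code imports misses it entirely.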

Blue Team Analyst: Immediate defensive recommendations: (1) Pin all GitHub Action references to commit SHAs, never version tags — the KICS attack exploited mutable tags. (2) Audit for litellm versions 1.82.7-1.82.8 and any .pth files in Python site-packages. (3) Treat AI coding tool config files as security-critical — .claude, .cursor, GEMINI.md, MCP configs all need integrity monitoring. Mandiant’s M-Trends finding that 90-day log retention creates blind spots against edge-device persistence is operationally relevant.

Connected Intelligence Advisor: The convergence of these five articles tells an enterprise credibility story: AI tools are simultaneously the productivity accelerator and the unmonitored attack surface. Microsoft (70% incidents), Mandiant (AI malware in the wild), Check Point (EDR blind), and two active supply chain campaigns — all in the same 48-hour window. This is the density of evidence that makes the “assessment first” message credible to enterprise security leaders who are skeptical of vendor hype.

Compliance Framework Specialist: The RSAC findings create a compliance gap: if EDR cannot monitor AI agent activity, then existing endpoint security controls mapped to frameworks (NIST CSF, SOC 2, CMMC) are insufficient for AI-augmented development environments. Organizations claiming compliance based on EDR coverage now have a documented control failure. The Mandiant M-Trends report’s emphasis on 90-day log retention blind spots compounds this — most compliance frameworks assume logs exist for the audit window.


Top 3 Actions — Consensus

  1. Draft the RSAC Check Point LinkedIn post (6 CVEs, “all security products are blind,” “developers are the new perimeter”) by tomorrow
  2. Add the Microsoft 70% AI-incident stat and the Mandiant 22-second hand-off window to the sales deck talking points this week
  3. Scope conversation: discuss expanding the assessment beyond M365 into developer tooling and CI/CD pipelines at the next planning session

Articles

Trigger Events (4)

Score | Title | Source | Published
9/10 | RSAC 2026: AI coding tools ‘crushed’ endpoint security fortress, 6 CVEs disclosed | Dark Reading | Mar 24
9/10 | M-Trends 2026: AI malware in the wild, 22-second ransomware hand-off, voice phishing surges | Mandiant | Mar 23
7/10 | LiteLLM supply chain attack: 95M monthly downloads, credential theft via .pth persistence | CyberInsider | Mar 24
7/10 | KICS GitHub Action compromised: TeamPCP hijacks 35 tags in CI/CD supply chain attack | Wiz Blog | Mar 23

Market & Buyer Signal (1)

Score | Title | Source | Published
8/10 | Microsoft Entra 2026 Report: 70% of identity incidents tied to AI activity | Microsoft Tech Community | Mar 19