Exposure Brief
Issue 6

March 30, 2026

substackTitle: "GitHub Copilot Used Its Access to Your Codebase to Run Ads"
substackSubtitle: "The vendors you trust with access to your systems are changing the terms of that access faster than your governance can track."
substackUrl: "https://exposurebrief.com/p/github-copilot-used-its-access-to"
date: "2026-03-30"
status: published

Executive Summary

The AI tools your organization already authorized are acting beyond the scope you approved. GitHub Copilot injected promotional ads into 1.5 million pull requests this week using a hidden, templated feature that modified developer-written content without disclosure. A security researcher found that ChatGPT reads 55 properties from your browser and application state before you type a single character. At RSAC 2026, presenters cited a healthcare firm fined $3.5 million for employee ChatGPT misuse and a manufacturer that lost $54 million to shadow AI data leaks. The thread connecting all of it: the vendors you trust with access to your systems are changing the terms of that access faster than your governance can track, and the costs are no longer theoretical.

Lead Story

GitHub Copilot Used Its Access to Your Codebase to Run Ads

A developer asked Copilot to fix a typo in a pull request description. Copilot fixed the typo, then rewrote the description to include a promotional message for itself and the Raycast application. Hidden in the raw markdown: an HTML comment labeled START COPILOT CODING AGENT TIPS, inserted before the ad copy. This was not a model hallucination. It was a templated injection, built into the product.
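HTML comments are the mechanism that made the injection invisible: they survive in the raw markdown source of a PR description but render as nothing on the GitHub page. A reconstruction of what the injected source would look like (the comment label is from the incident; the surrounding text is illustrative, not the actual ad copy):

```markdown
Fixed typo in the install instructions.

<!-- START COPILOT CODING AGENT TIPS -->
Promotional copy inserted here — present in the raw markdown,
invisible in the rendered PR description.
<!-- END COPILOT CODING AGENT TIPS -->
```

This is why the 11,000-plus affected PRs were findable by searching for the exact phrase: the marker sits in the stored source even though reviewers reading the rendered description never saw it.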

A search of GitHub for the exact phrase reveals over 11,000 pull requests containing the same promotional text. Neowin reports that more than 1.5 million PRs were affected across GitHub and GitLab. GitHub’s Principal Product Manager for Copilot, Tim Rogers, confirmed the feature was disabled, acknowledging that letting Copilot modify human-written PRs without disclosure “was the wrong judgement call.”

The precedent this sets extends beyond advertising. An AI tool with read-write access to enterprise code repositories used that access for purposes the user did not authorize and was not informed about. For any organization running Copilot on production repos, the incident raises a concrete question: what else has the tool modified, generated, or transmitted using its existing permissions that nobody reviewed?

Supporting Intelligence

FINRA Fined a Broker-Dealer $600K for Unapproved Communications Platforms. AI Tools Are Next.

FINRA disciplined BTIG, LLC on March 25 with a $600,000 fine for failing to supervise employees’ use of unapproved messaging platforms between January 2020 and July 2024. The violations span SEC recordkeeping rules (17a-4) and FINRA supervisory rules (3110). The same regulatory framework governs unapproved AI tool usage: if employees are generating client communications, research summaries, or trade rationale through unmonitored AI tools, the recordkeeping obligation is identical. FINRA has not yet brought an AI-specific enforcement action, but the supervisory expectation is already in place.

GitHub Will Use Your Code to Train AI by Default Starting April 24

Starting April 24, GitHub will collect interaction data from Copilot Free, Pro, and Pro+ users for AI model training by default. The data scope includes code inputs, accepted suggestions, file context, comments, and feedback. Users can opt out, but collection is enabled unless they do. Copilot Business and Enterprise tiers are exempt, which creates a governance question: are all your developers on Business/Enterprise, or are some using Free/Pro on company devices? If IT does not know the answer, developer code is now training data.


ChatGPT Reads 55 Properties from Your Browser Before You Type a Character

A security researcher decrypted Cloudflare’s Turnstile verification system running on ChatGPT and found it reads 55 distinct properties before users can interact: browser characteristics (GPU, screen resolution, fonts, hardware), Cloudflare network data (city, IP address, region), and ChatGPT application internals including React Router context, loader data, and bootstrap state. The encryption is XOR with the key in the same payload — it prevents casual inspection, not analysis. When employees use ChatGPT on corporate devices, the platform is fingerprinting hardware and reading application state that goes well beyond what bot detection requires.
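Why "XOR with the key in the same payload" is obfuscation rather than encryption can be shown in a few lines. The payload layout below is hypothetical, invented for illustration; it is not Turnstile's actual wire format. The point is that anyone who holds the payload also holds the key, so recovery is mechanical:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; applying it twice with the same key is identity."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical payload layout: [key length][key][XOR-"encrypted" body].
key = b"k3y"
body = b'{"gpu":"...","screen":"1920x1080"}'
payload = bytes([len(key)]) + key + xor_bytes(body, key)

# Any recipient of the payload can extract the key and decode the body
# directly -- no secret material exists outside the payload itself.
klen = payload[0]
recovered = xor_bytes(payload[1 + klen:], payload[1:1 + klen])
assert recovered == body
```

A scheme like this defeats a casual glance at network traffic in developer tools, which is exactly what "prevents casual inspection, not analysis" means; any determined analyst with the payload recovers the plaintext.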

IBM: 300,000 ChatGPT Credentials Stolen, 44% More Attacks on Public-Facing Apps

IBM’s 2026 X-Force Threat Index documents that infostealers harvested over 300,000 ChatGPT credentials in 2025, with attacks on public-facing applications increasing 44% year-over-year. According to IBM Global Managing Partner for Cybersecurity Services Mark Hughes, “Attackers aren’t reinventing playbooks, they’re speeding them up with AI.” The enterprise implication: every AI tool an employee authenticates with a corporate email address is a credential target.

Regulatory Radar

The Bottom Line