Vercel confirms a limited security breach tied to a third‑party AI app integration, says core services are unaffected, and LinkedIn voices are treating it as a serious supply‑chain‑style wake‑up call with concrete hardening steps for customers.

Quick Summary of the Breach

  • Vercel says an employee connected a Context AI app to their company account, and that OAuth integration was the entry point for unauthorized access to Vercel systems.
  • The breach affects a “limited subset of customers,” but an attacker claims to be selling stolen data including access keys and employee information.
  • Vercel states its hosting services are still running normally; they advise customers to rotate “non‑sensitive” keys and credentials as a precaution and say they have involved law enforcement.

The “how” is essentially: Context AI’s cloud got popped, attackers stole an OAuth token for a Vercel employee’s Google Workspace, used that to impersonate the employee via Google APIs, then pivoted into Vercel internal systems and read environment variables and credentials that weren’t fully protected.

Below, I break that chain down step by step, explain what the Context AI app is in practice, and call out the key technical levers (OAuth scopes, refresh tokens, token storage, etc.).


What Context AI is in this story

  • Context AI is a third‑party AI “assistant” product that integrates with Google Workspace (Gmail, Drive, Calendar, Docs, etc.) via OAuth, marketed as an “AI Office Suite” that can read your work data and help you draft, summarize, and search.
  • A Vercel employee installed this Context AI app and connected it to their corporate Google Workspace account, granting it broad OAuth permissions (effectively “Allow all” over their Google data).
  • Context AI stored users’ Google OAuth tokens server‑side in its own infrastructure (AWS plus Supabase), so it could call Google APIs on behalf of users in the background.

So “Context AI app” here is not some plugin inside Vercel; it’s an external SaaS that had persistent API access to a Vercel employee’s Google Workspace identity via OAuth.


Step‑by‑step attack chain (HOW it actually happened)

Putting the public reports together, the likely chain looks like this:

  1. Initial compromise at Context AI (infostealer → AWS/OAuth)
    • A Context AI employee’s device was infected with Lumma (an info‑stealer) around Feb 2026, which exfiltrated credentials, session data, and tokens for Context AI infrastructure.
    • Using that data, attackers gained unauthorized access to Context AI’s AWS environment and associated platforms (including Supabase) where OAuth tokens for Google users (like the Vercel employee) were stored.
  2. Stealing Google Workspace OAuth tokens from Context AI
    • Context AI’s investigation indicates that OAuth tokens belonging to users of the AI Office Suite were compromised, including at least one Vercel employee token.
    • These tokens had been issued by Google when users clicked “Allow” during OAuth sign‑in, and were stored by Context AI so it could continuously access Gmail/Drive/etc. on behalf of those users.
  3. Using the stolen token to impersonate the Vercel employee in Google Workspace
    • With the Vercel employee’s OAuth token, the attacker could call Google APIs as that user: access email, Drive, and potentially other Workspace components, depending on scopes granted.
    • Vercel confirms that the attacker took over the employee’s Google Workspace account using this OAuth access, giving them the same effective reach into Vercel’s Google‑integrated resources that the employee had.
  4. Pivot from Google Workspace into Vercel internal systems
    • From that Google Workspace identity foothold, the attacker pivoted into Vercel’s environment: internal tools, dashboards, and services that trusted Google SSO or accepted that user’s identity.
    • Vercel’s own bulletin states they then navigated Vercel systems and enumerated and decrypted non‑sensitive environment variables tied to a subset of customers.
    • TechCrunch and others note they also accessed some internal credentials that were not encrypted (e.g., certain keys or tokens used internally).
  5. Data exposure at Vercel layer
    • The attacker could see (and potentially exfiltrate) environment variables that were not flagged as “sensitive,” which might include API keys, connection strings, and configuration for customer projects depending on how those customers used Vercel.
    • Third‑party analyses say the threat level is primarily supply‑chain: with stolen Vercel tokens, env vars, or linked GitHub/NPM tokens, attackers could attempt downstream access to customers’ own infrastructure or code, not just Vercel itself.

That’s why people are stressing “design for breach” and treating OAuth apps as powerful identity connectors rather than just “productivity tools.”


How OAuth became the entry point (deep dive)

OAuth here is not a bug; it’s the mechanism that made the attack possible once Context AI was compromised.

1. The normal OAuth flow that started it

When the Vercel employee first used Context AI:

  • The app redirected them to Google’s OAuth consent screen.
  • The screen listed the scopes Context AI requested (e.g., read email, read/write Drive, manage calendars; exact scopes have not been fully disclosed but are described as broad “Allow all” type permissions).
  • The employee clicked “Allow,” so Google issued access and refresh tokens tied to Context AI’s client ID and that user account.
  • Context AI stored those tokens in its backend so it could periodically call Google APIs for that user without additional interaction.

That’s all normal OAuth behavior; the problem is what happens when the app’s backend is compromised.
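
The consent step above can be sketched by building the authorization URL a Context‑AI‑style app would redirect the user to. The client ID, redirect URI, and scopes below are illustrative assumptions (the exact scopes Context AI requested have not been disclosed); the point is how a single `access_type=offline` grant turns one click into a long‑lived refresh token.

```python
from urllib.parse import urlencode

# Hypothetical client ID and scopes -- broad "Allow all"-style Workspace
# scopes chosen to illustrate the pattern, not Context AI's actual request.
CLIENT_ID = "example-app.apps.googleusercontent.com"
SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
]

def build_consent_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build the Google OAuth 2.0 authorization URL behind the consent screen."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
        # access_type=offline is what makes Google issue a refresh token,
        # letting the app act for the user long after this one click.
        "access_type": "offline",
        "prompt": "consent",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = build_consent_url(CLIENT_ID, "https://example-app.test/callback", SCOPES)
print(url)
```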

2. Why stolen refresh tokens are so dangerous

Once the attacker had the refresh token from Context AI’s storage:

  • They could exchange it at Google’s token endpoint for fresh access tokens over and over, as long as the token wasn’t revoked and the app wasn’t disabled.
  • Google’s APIs then treat them exactly as the Vercel employee, with the same scopes and privileges that were originally granted.
  • If the employee had elevated rights in Vercel’s Google Workspace (e.g., access to specific shared drives, documents, or services used to manage internal tooling), the attacker inherited that access.

This is why third‑party writeups call it an “OAuth token supply chain compromise”: the trust anchor isn’t passwords, it’s refresh tokens stored inside a vendor’s infrastructure.
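
To make the replay concrete: minting a fresh access token from a stolen refresh token requires only three values, all of which sat together in the vendor's backend. This sketch only constructs the form body (the client ID, secret, and token are fake placeholders); an attacker would POST it to Google's token endpoint and receive an access token as the victim.

```python
from urllib.parse import urlencode

GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> dict:
    """Everything needed to mint a fresh access token. All three values
    lived in the vendor's storage, so anyone who dumped that storage can
    replay this request until the grant is revoked."""
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }

body = build_refresh_request("example-app.apps.googleusercontent.com",
                             "fake-client-secret",
                             "1//fake-stolen-refresh-token")
# An attacker would POST this form body to GOOGLE_TOKEN_ENDPOINT and get
# back an access token (typically valid ~1 hour) as the victim user.
print(urlencode(body))
```

Note that nothing in this exchange proves the caller is the legitimate app server; possession of the stored values is the whole trust model.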

3. How OAuth access turns into Vercel internal access

Several analyses explain the critical step: Vercel internal systems that rely on Google SSO will trust a user identity presented via Google APIs or tokens.

Concrete ways this can play out (based on common patterns, consistent with public descriptions):

  • Internal admin or engineering tools behind Google SSO; the attacker, acting as the employee, can obtain the necessary tokens or links to those tools and sign in.
  • Access to shared Google Drive or Docs where credentials, runbooks, environment snapshots, or service account keys might be stored, which then enable further pivoting.
  • Access to CI/CD or deployment meta‑data stored in Google services that embed secrets or pointers to other systems.

Vercel specifically says the attacker “pivoted into a Vercel environment” and then used that access to enumerate and decrypt non‑sensitive env vars, indicating that once inside Vercel’s internal network, the attacker had a service‑level foothold, not just read‑only Google data.


What exactly is “non‑sensitive environment variable” access?

Vercel distinguishes between “sensitive” and “non‑sensitive” environment variables in their internal design and customer dashboards.

  • “Sensitive” variables are stored with stricter controls: encryption at rest with different keys, additional access checks, and may not be readable in plaintext by normal internal tooling.
  • “Non‑sensitive” env vars were apparently less protected and could be enumerated and decrypted by the attacker once they reached that internal environment.

However, in real‑world projects, teams often put genuine secrets into whatever env‑var mechanism is handy, so the risk is that “non‑sensitive” in Vercel’s classification does not necessarily mean “harmless” in customer practice.

Public writeups and LinkedIn posts emphasize that downstream risk includes:

  • API keys and connection strings for databases, third‑party APIs, or internal services.
  • Tokens for GitHub/NPM or other package registries that could allow code tampering or package hijacks.

That’s why the immediate guidance is “rotate everything” rather than assuming a narrow blast radius.
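
Because "non‑sensitive" tiers accumulate real secrets in practice, it is worth auditing them for secret‑shaped values before an incident forces the issue. A minimal sketch, using a few illustrative patterns (extend these for your own providers):

```python
import re

# Illustrative detection patterns only -- not an exhaustive set.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),
    "Connection string with password": re.compile(r"\w+://[^:/\s]+:[^@/\s]+@"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def audit_env(env: dict[str, str]) -> list[tuple[str, str]]:
    """Return (var_name, pattern_label) pairs for values that look like
    real secrets sitting in a supposedly non-sensitive tier."""
    hits = []
    for name, value in env.items():
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(value):
                hits.append((name, label))
    return hits

findings = audit_env({
    "NEXT_PUBLIC_API_URL": "https://api.example.com",          # plain config
    "DATABASE_URL": "postgres://app:hunter2@db.internal/app",  # a real secret
})
print(findings)
```

Anything this flags belongs in the sensitive tier or an external secrets manager, per the hardening plan below.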


Why this is being called a “double supply‑chain” attack

Several security vendors describe this incident as a double supply‑chain or “SaaS‑to‑SaaS” breach:

  • First hop: attacker compromises Context AI (a vendor), steals OAuth tokens for its customers.
  • Second hop: those tokens give access into Vercel, which is itself a platform that hosts many other customers’ apps and secrets.

From a defender perspective, the crucial point is that Vercel’s own perimeter wasn’t the original entry point; trust in a vendor’s OAuth app effectively extended your perimeter into that vendor’s AWS environment and made it part of your attack surface.


Key technical lessons: focusing on the “HOW” levers

For someone in your role, the things to home in on are:

  • OAuth scope and consent discipline
    Broad “Allow all” permissions for third‑party SaaS apps on corporate identities massively increase blast radius when that SaaS gets popped.
  • Token storage and lifecycle
    The root failure is not that OAuth exists; it’s that long‑lived refresh tokens were stored centrally in Context AI’s backend, then stolen and not quickly revoked.
  • Google Workspace as a high‑value pivot layer
    OAuth tokens here weren’t just about email; they effectively became an SSO key into the victim company’s internal ecosystem, including systems that trust Google identity.
  • Internal segregation of secrets
    Once the attacker reached Vercel’s environment, any secrets not treated as highly sensitive (separate encryption, strict access paths) were more easily enumerated and decrypted.
  • Shadow AI / ungoverned app installs
    A single well‑meaning employee connected a powerful AI tool into the corporate tenant, creating an unvetted, high‑privilege supply‑chain dependency that security governance had to catch only after the fact.

Here’s a concrete hardening plan you can lift into your own environment, focused on OAuth app allow‑lists, token policies, detection, and secret‑handling, using the Vercel/Context AI path as the threat model.


1. OAuth app governance and allow‑lists

Objective: Stop “one click on an AI app” from becoming an org‑wide supply‑chain risk.

  1. Move to an explicit allow‑list model for high‑privilege tenants
    • In Google Workspace / Azure AD / Okta, disable “Users can install any third‑party OAuth app” for core org units; allow only admin‑approved apps, especially for engineering and privileged roles.
    • Require security review and owner sign‑off for any app requesting wide scopes (Drive full access, Gmail full access, Directory, Admin APIs, etc.).
  2. Scope‑based approval rules
    • Automatically block any app that requests risky scopes (e.g., Gmail read/write all, Drive full, Calendar full, Directory read) unless a security admin explicitly allow‑lists it.
    • Require justification and risk assessment for: AI assistants that index email/Drive, SaaS that stores OAuth tokens, tools with access to source code or CI.
  3. Segmentation of identities
    • Prohibit privileged accounts (SRE, prod‑access, Org admin) from installing any third‑party apps; use separate low‑privilege personas for allowed SaaS tools.
    • For devs, use different accounts for “productivity/AI tools” vs “infra/production” so a compromise of the former cannot directly grant access to prod systems.
  4. Periodic SaaS/OAuth inventory review
    • At least quarterly, export the list of all OAuth apps, who installed them, and scopes; kill any unused or high‑risk ones (“shadow AI”) and notify owners.
    • Tag critical apps (CI, source control, SSO bridge, password managers) and treat them as supply‑chain tier‑1 for third‑party risk processes.
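
The scope‑based review above can be partially automated once grants are exported (e.g., from the Google Workspace Admin SDK's `tokens.list` endpoint). The record shape below is a hypothetical normalization; map your real export onto it:

```python
# A few scopes commonly treated as high-risk; extend for your environment.
RISKY_SCOPES = {
    "https://mail.google.com/",                       # full Gmail
    "https://www.googleapis.com/auth/drive",          # full Drive
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_risky_grants(grants: list[dict], allow_list: set[str]) -> list[dict]:
    """Return grants that request risky scopes and are not admin-approved."""
    flagged = []
    for g in grants:
        if g["client_id"] in allow_list:
            continue  # explicitly approved app, skip
        risky = RISKY_SCOPES.intersection(g["scopes"])
        if risky:
            flagged.append({**g, "risky_scopes": sorted(risky)})
    return flagged

grants = [
    {"user": "dev@corp.example", "client_id": "ai-suite.apps.example",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"user": "dev@corp.example", "client_id": "approved-crm.apps.example",
     "scopes": ["https://www.googleapis.com/auth/drive"]},
]
flagged = flag_risky_grants(grants, allow_list={"approved-crm.apps.example"})
print(flagged)
```

Run this quarterly against the full export and feed the flagged entries into the kill/notify step above.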

2. Token lifetime, storage, and revocation policies

Objective: Make stolen refresh tokens much less valuable and easier to invalidate.

  1. Short‑lived access tokens and constrained refresh tokens
    • Prefer providers and IdPs that support short‑lived access tokens (minutes) and refresh tokens bound to device, IP, or client; enable those features wherever possible.
    • For your own apps, never issue multi‑month refresh tokens without rotation; implement rolling refresh with server‑side detection of anomalies (e.g., new geo/ASN).
  2. Centralized revocation workflows
    • Build a playbook: on SaaS vendor incident (like Context AI), security can immediately revoke all tokens for that app at the IdP / Google level (not just in the vendor UI).
    • Automate revocation via API where possible (Google Workspace Security APIs, MS Graph, Okta API) so you can “kill all tokens for app X” quickly.
  3. Token binding and device assurance (where supported)
    • Enable token binding to specific device keys or trusted device posture, so tokens stolen from a vendor backend cannot be replayed from arbitrary infrastructure.
    • If full binding is not available, use conditional access to restrict where certain apps can be accessed from (e.g., corp VPN or known IP ranges).
  4. Vendor requirements for token handling
    • For high‑risk SaaS (AI copilots, code analysis tools, CI vendors), require: encrypted token storage, regular key rotation, SOC2/ISO‑level controls, and documented incident response SLAs.
    • Prefer vendors that support just‑in‑time access via customer‑managed keys or service accounts over long‑lived user refresh tokens.
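
The "kill all tokens for app X" step can be scripted against the Google Admin SDK Directory API, which exposes per‑user token revocation as `DELETE .../users/{userKey}/tokens/{clientId}`. This sketch only constructs the requests; actually sending them requires an admin‑scoped credential, and the user list and client ID are placeholders.

```python
ADMIN_API = "https://admin.googleapis.com/admin/directory/v1"

def revocation_requests(users: list[str], client_id: str) -> list[dict]:
    """One DELETE per (user, app) pair revokes that user's grant to the app
    at the Google level, not just in the vendor's UI."""
    return [
        {"method": "DELETE",
         "url": f"{ADMIN_API}/users/{user}/tokens/{client_id}"}
        for user in users
    ]

reqs = revocation_requests(["dev@corp.example", "sre@corp.example"],
                           "ai-suite.apps.example")
for r in reqs:
    print(r["method"], r["url"])
```

Wiring this into the incident playbook means revocation is minutes of API calls rather than a manual per‑user hunt.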

3. Detection and monitoring ideas (identity + SaaS + infra)

Objective: Detect the Vercel/Context‑style attack as it’s happening or shortly after, and bound blast radius.

  1. Identity‑level detections (Google/M365/IdP)
    • Alert on “impossible travel” or anomalous sign‑ins for admin/dev accounts, especially via OAuth / “granted by third‑party app” rather than direct SSO.
    • Monitor for new OAuth client consents with high‑risk scopes, and send real‑time alerts when they are granted by privileged users or sensitive OUs.
  2. SaaS‑to‑SaaS pivot detection
    • In SIEM, correlate new third‑party app grants with: unusual API access, large data export, or new access to internal admin tools shortly afterward.
    • Create detections for “first time API access from app X to resource Y” when Y is sensitive (e.g., drives shared with security, production runbooks, vault exports).
  3. Vercel / hosting platform telemetry
    • Enable and ship Vercel audit logs (or equivalent) into your SIEM; detect:
      • Viewing or exporting environment variables.
      • Changes to environment variable values.
      • Creation or modification of integrations (GitHub, NPM, deploy keys).
      • Logins from new IPs / user agents for your org’s Vercel users.
    • Treat any spike in env‑var reads or integration changes as a potential incident and auto‑open an investigation.
  4. Downstream repo and package registry monitoring
    • For GitHub/GitLab: alert on new PATs, deploy keys, or OAuth app installs; watch for unusual force‑pushes, new tags, or changes to build pipelines.
    • For NPM/other registries: monitor for unexpected new versions or maintainer changes on your packages, especially if builds are wired to auto‑consume latest.
  5. Deception and honey‑tokens
    • Seed unused “canary” secrets in low‑sensitivity env vars, files, or repos; alert if they are ever used (e.g., DNS callback tokens, fake API keys).
    • This catches env‑var scraping and secret harvesting early, even if the attacker got past identity defenses.
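
As one concrete detection from the list above, an env‑var read spike is easy to express as a sliding‑window threshold over audit events. The event shape here is hypothetical; map your real audit log (e.g., Vercel audit events) onto it.

```python
from datetime import datetime, timedelta

def envvar_read_spikes(events: list[dict], window: timedelta,
                       threshold: int) -> list[str]:
    """Flag actors whose env-var reads within a sliding window exceed the
    threshold -- the enumeration pattern described in this incident."""
    flagged = set()
    by_actor: dict[str, list[datetime]] = {}
    for e in events:
        if e["action"] != "env_var.read":
            continue
        ts = e["time"]
        recent = [t for t in by_actor.get(e["actor"], []) if ts - t <= window]
        recent.append(ts)
        by_actor[e["actor"]] = recent
        if len(recent) > threshold:
            flagged.add(e["actor"])
    return sorted(flagged)

base = datetime(2026, 2, 1, 12, 0)
events = [{"actor": "alice", "action": "env_var.read",
           "time": base + timedelta(seconds=10 * i)} for i in range(20)]
events.append({"actor": "bob", "action": "env_var.read", "time": base})
spikes = envvar_read_spikes(events, window=timedelta(minutes=5), threshold=10)
print(spikes)
```

In a SIEM you would express the same logic as a scheduled query; the threshold should be tuned above your normal deploy‑time read rate.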

4. Secret‑handling patterns to limit blast radius

Objective: Even if an internal env‑var store is compromised, impact to real systems is small and short‑lived.

  1. Use a dedicated secrets manager as the source of truth
    • Store real secrets in Vault, AWS Secrets Manager, GCP Secret Manager, or similar; use Vercel / CI env vars only for short‑lived references (paths, names, or one‑time tokens).
    • At runtime, apps fetch secrets from the manager using narrowly scoped identities instead of reading long‑lived secrets directly from env.
  2. Short‑lived, auto‑rotated credentials
    • Use dynamic DB credentials and short‑lived cloud tokens (STS/assumed roles) so any stolen secret expires quickly.
    • Rotate API keys and DB passwords on a schedule (daily/weekly) and immediately as part of your incident playbooks; automate via pipelines where possible.
  3. Classify and separate “sensitive” vs “config” env vars
    • Enforce a rule: env vars in your hosting platform that are not behind extra encryption/controls must be configuration only (URLs, feature flags, non‑secret IDs).
    • Anything that grants access (passwords, tokens, API keys) belongs in “sensitive” storage or external secrets manager, never in the “non‑sensitive” tier.
  4. Least privilege for service identities
    • Bind each app’s service principal / role to exactly what it needs; a stolen secret for Service A should not grant carte blanche across your estate.
    • Separate prod vs non‑prod secrets and roles; compromise of a dev project’s env vars must not expose prod DBs or core infra.
  5. Secret discovery and hygiene
    • Regularly scan repos, CI configs, and env vars for hard‑coded secrets using tools like truffleHog, gitleaks, or vendor scanners.
    • Feed findings into a remediation backlog with SLAs (e.g., “rotate and remove in 7 days”), and re‑scan after fixes to prevent regression.
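
The "env vars hold references, not secrets" pattern from item 1 can be sketched as a small resolver. The `scheme://path` reference format and the stub fetcher are illustrative conventions; in production the fetcher would call a real secrets manager (e.g., boto3's Secrets Manager `get_secret_value`).

```python
import os

def resolve_secret(ref: str, fetchers: dict) -> str:
    """Resolve a secret *reference* (stored in an env var) to its value via
    an injected fetcher, so the hosting platform's env store never holds
    the secret itself -- exactly the tier that was enumerated here."""
    scheme, _, path = ref.partition("://")
    if scheme not in fetchers:
        raise ValueError(f"no fetcher for scheme {scheme!r}")
    return fetchers[scheme](path)

# Stub "vault" keeps this runnable; swap in a real secrets-manager client.
stub_vault = {"prod/db-password": "hunter2"}
fetchers = {"vault": stub_vault.__getitem__}

os.environ["DB_PASSWORD_REF"] = "vault://prod/db-password"  # reference only
password = resolve_secret(os.environ["DB_PASSWORD_REF"], fetchers)
print(password)
```

If an attacker dumps the env store under this design, they get `vault://prod/db-password`, a pointer that is useless without a separately authenticated identity.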

5. Example “Vercel/Context‑style” playbook you can adapt

You can turn the above into a single playbook for when a third‑party SaaS you use reports a breach:

  1. Identify all users and groups that had OAuth tokens with that vendor.
  2. Revoke all OAuth tokens for that app in your IdP / Google Workspace immediately.
  3. For any accounts with elevated access, force sign‑out and reset credentials, and review recent Google/IdP logs.
  4. In Vercel/hosting:
    • Rotate all environment variables that contain secrets.
    • Recreate and rotate linked GitHub/NPM/cloud integrations and tokens.
    • Review audit logs for env‑var reads, integration changes, deployments.
  5. Downstream: rotate DB credentials, cloud access keys, and any secrets that were ever present or derived from those env vars.
  6. Monitor for canary‑token hits, unusual API calls, code changes, or package pushes for at least 30–60 days.
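
The playbook above lends itself to a small generator that expands a vendor incident into an ordered checklist, so nothing is skipped under pressure. The vendor client ID, user list, and project names below are placeholders.

```python
def vendor_breach_actions(vendor_client_id: str, affected_users: list[str],
                          projects: list[str]) -> list[str]:
    """Expand the six playbook steps into a concrete, ordered checklist
    for one third-party SaaS breach."""
    actions = [f"Revoke all IdP tokens for OAuth client {vendor_client_id}"]
    actions += [f"Force sign-out and credential reset: {u}"
                for u in affected_users]
    for p in projects:
        actions.append(f"Rotate secret-bearing env vars in project {p}")
        actions.append(f"Rotate linked GitHub/NPM/cloud tokens for project {p}")
    actions.append("Review audit logs for env-var reads, integration changes, "
                   "and deployments")
    actions.append("Monitor canary tokens, API calls, and package registries "
                   "for 30-60 days")
    return actions

actions = vendor_breach_actions("ai-suite.apps.example",
                                ["dev@corp.example"], ["web", "api"])
for step in actions:
    print("-", step)
```

Feeding the output into a ticketing system as one task per line gives each rotation an owner and a timestamp, which is also what your post‑incident review will want.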