One Employee Action Exposed Vercel Before Context.ai Was Hacked
The vendor compromise came later. The real breach risk began when an external AI agent was connected to corporate systems with broad permissions and little oversight.
Chris Morosco, Aurascape VP & Head of Marketing
April 22nd, 2026 | 🕐 5 minute read
Introduction
The action that set up the Vercel breach was not taken by the attacker. It happened earlier, when a Vercel employee connected Context.ai’s AI Office Suite to Vercel’s Google Workspace and granted it broad permissions. Context says at least one employee enabled “allow all” on the requested Google Workspace permissions, and that the product let AI agents perform actions across external applications. That was the moment the real exposure began. Everything that came later depended on that connection already existing.
Vercel could not have prevented Context.ai from being breached. It could not have stopped the later abuse of access once that vendor-side compromise happened. What Vercel could have governed was the employee’s use of an external AI agent and the level of access that agent was granted inside the business. That is the part security leaders can actually control, and it is the part they should focus on first. Vercel says the attacker then used that access to take over the employee’s Vercel Google Workspace account and reach internal environments.
That is what makes this incident important. The warning is not just that third-party breaches happen. It is that hidden AI agents can quietly become trusted paths into the business long before anyone treats them like a security dependency.
This Was an AI Agent Problem, Not Just an OAuth Problem
The upstream breach at Context.ai matters, but it is not the main story here. Context says it detected unauthorized access to its AWS environment and later learned that some AI Office Suite user OAuth tokens were compromised during that incident. One of those tokens was then used to access Vercel’s Google Workspace. There is also outside reporting that the chain may have started with malware on a Context.ai employee device, but neither Vercel nor Context has publicly confirmed that exact upstream device-compromise path. For Vercel and every other customer, the lesson is the same either way: no customer-side control would have prevented the vendor compromise itself.
That is why this should not be framed mainly as an OAuth story. OAuth was the breach mechanic. It explains how the attacker cashed in after the upstream compromise. It does not explain the earlier decision that created the blast radius. By the time stolen access was abused, the important trust decision had already happened. An external AI agent had already been granted broad access into enterprise workflows.
That is the shift security leaders need to internalize. The first question is no longer only what employees type into AI tools. It is also which AI agents employees are connecting to business systems, what those agents are allowed to do, and whether anyone approved that level of access.
The Breach Starts With a Quiet Connection
The Drift breach is a useful comparison because it highlights the same customer-side lesson. Security teams generally cannot prevent a third-party vendor from being breached. What they can control is how that service is used inside their own environment. Google’s advisory on the Salesloft Drift incident says attackers used compromised OAuth tokens associated with Drift to target Salesforce customer instances. After the breach, many organizations were left asking the same questions: who used the service, how they used it, what data may have been exchanged before the breach, and how to govern that access going forward.
The difference between Drift and Vercel is what was hidden. In the Drift case, the hidden risk was embedded AI operating inside trusted websites and SaaS apps without enough visibility into what employees were sharing. In the Vercel case, the hidden risk was an external AI agent being connected to corporate Google Workspace with broad permissions. Different setup, same enterprise problem. The trust decision happened quietly, the connection became part of the environment, and only later did the compromise make that hidden dependency obvious.
For CISOs, the practical takeaway is straightforward. Hidden AI agents are becoming a governance problem before they become a breach problem. Many of these tools do not look malicious. They look useful. They help create content, summarize information, draft responses, and automate small tasks. That is exactly why employees connect them. The problem is not always the tool itself. The problem is the quiet combination of hidden adoption, delegated action, and permissions that become broader than the organization realizes.
If a company cannot see which AI agents employees have connected to business systems, what those agents are allowed to do, and where permissions have become too broad, it will struggle to govern AI risk effectively.
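That kind of review can start with something simple: an inventory of which third-party apps hold which OAuth scopes, checked against the scopes the organization considers too broad to grant quietly. The sketch below illustrates the idea in Python. The grant records, app names, and user addresses are hypothetical; the scope URLs are real Google Workspace OAuth scopes, but the inventory format is an assumption, not any specific admin export.

```python
# Sketch: flag third-party OAuth grants whose scopes are broader than
# the organization intends. The grant records below are hypothetical;
# in practice they would come from an admin-console export or an API
# inventory of authorized apps.

# Real Google OAuth scopes that grant wide read/write access.
BROAD_SCOPES = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
}

def flag_broad_grants(grants):
    """Return the grants that hold any scope considered too broad."""
    flagged = []
    for grant in grants:
        risky = sorted(set(grant["scopes"]) & BROAD_SCOPES)
        if risky:
            flagged.append({
                "app": grant["app"],
                "user": grant["user"],
                "risky_scopes": risky,
            })
    return flagged

# Hypothetical inventory: an AI agent granted "allow all" alongside a
# narrowly scoped calendar tool.
inventory = [
    {"app": "ai-office-suite", "user": "dev@example.com",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"app": "calendar-helper", "user": "dev@example.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

for hit in flag_broad_grants(inventory):
    print(f"{hit['app']} ({hit['user']}): {', '.join(hit['risky_scopes'])}")
```

The point of the sketch is the policy question it encodes: which scopes count as "broad" is a decision security teams should make deliberately, before an employee's "allow all" makes it for them.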
How Aurascape Can Help
This is exactly the gap Aurascape is built to close. Aurascape helps organizations safely adopt AI across employee use, embedded AI, and agentic workflows, with inline visibility and control across AI apps, MCP tools, and agents. It can automatically uncover every AI app and agent in use, including those embedded inside SaaS apps, risk-score them, and apply real-time controls based on context and intent.
In practical terms, that means companies can use Aurascape to find hidden AI agents across the organization, understand what they are connected to, see how employees are using them, and control risky access before those connections become normalized and forgotten. In incidents like Vercel, the goal is not to pretend any platform can prevent every vendor-side breach. The goal is to make sure an external AI agent does not quietly gain broad, ungoverned access to your environment without security ever seeing it. That is the control point that matters.
How to Get Ahead of This Risk
See how Aurascape helps security teams uncover hidden AI agents and bring the right visibility and control to AI use across the business.
Aurascape Solutions
- Discover and monitor AI: Get a clear picture of all AI activity.
- Safeguard AI use: Secure data and compliance in AI usage.
- Secure agentic AI: Secure how your teams use AI and build AI agents.
- Copilot readiness: Prepare for and monitor AI Copilot use.
- Coding assistant guardrails: Accelerate development, safely.
- Frictionless AI security: Keep users and admins moving.