What started as a quiet security bulletin has quickly unfolded into one of the most telling cyber incidents of the year, because this time the entry point wasn’t a zero-day or a phishing email, but an AI tool trusted inside a developer’s workflow.
Vercel, the company behind the widely used Next.js ecosystem, has confirmed that attackers gained unauthorized access to parts of its internal systems after compromising an employee account through a third-party AI platform, Context.ai. The breach didn’t begin inside Vercel at all; it began upstream, where the AI tool itself had already been compromised. In many ways, this looks less like a traditional breach and more like a supply chain attack, except the supply chain here wasn’t code or infrastructure but an AI tool embedded directly into everyday work.
From there, attackers leveraged OAuth access to take over the employee’s Google Workspace account, quietly inheriting permissions that opened doors into Vercel’s environments. It wasn’t just OAuth; it was the trust layered into these integrations. Tokens with broad, persistent access became the bridge, letting attackers in without ever needing to “break” anything.
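The risk here is the gap between the scopes an integration actually needs and the scopes it holds. A minimal sketch of how a security team might audit that gap, assuming hypothetical grant data and scope names (none of these are Vercel’s or Google’s actual values):

```python
# Hypothetical sketch: flag third-party OAuth grants whose scopes exceed a
# least-privilege allowlist. App names, scope strings, and the allowlist
# are illustrative placeholders, not real API values.
from dataclasses import dataclass

# Identity-only scopes considered acceptable for a typical integration.
ALLOWED_SCOPES = {"openid", "email", "profile"}

@dataclass
class OAuthGrant:
    app_name: str
    scopes: set

def overbroad_grants(grants):
    """Return grants holding any scope beyond the identity-only allowlist."""
    return [g for g in grants if g.scopes - ALLOWED_SCOPES]

grants = [
    OAuthGrant("calendar-helper", {"openid", "email"}),
    OAuthGrant("ai-assistant", {"openid", "https://mail.example/full-access"}),
]

for g in overbroad_grants(grants):
    print(f"REVIEW: {g.app_name} holds extra scopes: {g.scopes - ALLOWED_SCOPES}")
```

A grant like the second one is exactly the kind of broad, persistent bridge the attackers appear to have walked across: once the upstream tool is compromised, every scope it holds is compromised with it.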
What followed wasn’t chaotic; it was precise. The attacker moved laterally, accessing environment variables, internal logs, and system data: areas that may not always be labeled “sensitive” but often hold the kind of context and access that makes deeper compromise possible. While Vercel emphasized that highly sensitive data remained encrypted and core production systems were not broadly impacted, a limited subset of credentials, access tokens, and internal information may have been exposed.
Importantly, the intrusion appears to have been detected and contained relatively quickly. Vercel revoked compromised access, secured affected systems, and began response measures early, suggesting that while the entry point was subtle, the window of exploitation may have been limited.
Then came the public signal. A threat actor, claiming links to the ShinyHunters group, began advertising stolen data for sale, reportedly including access keys, databases, and even source code, placing a price tag of $2 million on the breach. Some samples allegedly surfaced online, including employee details and activity logs, adding a layer of credibility, even as the attribution itself remains unverified and potentially exaggerated.
Inside Vercel, the response has been measured but serious. The company brought in Mandiant for incident response and forensics, notified law enforcement, and began directly contacting affected customers. The attacker, Vercel noted, appeared “highly sophisticated,” moving with speed and familiarity, raising the possibility that automation, or even AI-assisted techniques, may have played a role in accelerating the attack.
There’s a deeper shift embedded in this incident. This wasn’t just a breach of infrastructure; it was a breach of trust in the growing ecosystem of AI integrations. Tools designed to enhance productivity are now becoming potential attack surfaces, especially when connected through powerful permissions like OAuth. In this case, the compromise didn’t require forcing entry; it required being let in.
Vercel has since urged users to rotate credentials, audit third-party integrations, and monitor activity logs closely. But beyond immediate remediation, the incident points to something broader. As AI tools become embedded in workflows, they are also becoming part of the attack surface, not just individually, but as part of a wider, interconnected supply chain of trust.
Caught feelings for cybersecurity? It’s okay, it happens. Follow us on LinkedIn and Instagram to keep the spark alive.