Was Databricks Compromised by the TeamPCP Supply Chain Attack?

The modern software development lifecycle relies on a delicate web of trust that, when severed, can expose the most sensitive secrets of global enterprises to malicious actors within seconds. In early March 2026, a sophisticated threat group known as TeamPCP, also tracked as PCPcat and ShellForce, launched a massive supply chain attack targeting the very infrastructure developers use to build and secure their applications. This campaign systematically compromised critical ecosystems including GitHub Actions, Docker Hub, PyPI, and NPM, focusing specifically on security-adjacent tools like Aqua Security’s Trivy and Checkmarx scanners. By poisoning these trusted repositories, the attackers were able to inject malicious code into automated build processes across numerous industries. Databricks, a leader in data intelligence, found itself under intense scrutiny after threat intelligence researchers suggested that its internal environments might have been accessed during this widespread wave of automated exploitation.

Anatomy of the TeamPCP Campaign

The core of this offensive operation involved the distribution of a specialized malware known as the TeamPCP Cloud stealer, tracked under the identifier CVE-2026-33634. The tool was designed to activate during the automated build and deployment phases, which are typically treated as secure, isolated environments. Once active, the malware harvested environment variables, Kubernetes configurations, and cloud tokens associated with major providers like Amazon Web Services, Google Cloud, and Microsoft Azure. By targeting CI/CD pipelines, the attackers bypassed traditional perimeter defenses, because the malicious code was executed by the trusted build runners themselves. This approach allowed the threat actors to obtain high-value cloud credentials without needing to breach external firewalls. The precision of the attack demonstrated a deep understanding of modern DevOps workflows, where automated security scanners are often granted broad permissions to inspect code and infrastructure.

To achieve such a wide reach, the TeamPCP group combined typosquatting with the manipulation of fallback package registries, a dependency-confusion technique, to ensnare unsuspecting developers and automated systems. By creating packages with names nearly identical to popular libraries, or by hijacking unmaintained dependencies, they ensured that their malicious payloads would be pulled into legitimate software projects. This method proved particularly effective against organizations that rely on automated dependency updates without rigorous manual verification of every new version. Moreover, the attackers exploited the inherent trust placed in security tools, knowing that many enterprises whitelist these scanners within their network environments. This created a paradoxical situation in which the very tools meant to identify vulnerabilities became the primary vectors for data exfiltration. The sophistication of the campaign suggests a well-resourced adversary capable of monitoring repository changes and reacting quickly to patches.
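Typosquatting of the kind described above can be caught mechanically by comparing incoming dependency names against a trusted list. A minimal sketch, assuming a hypothetical allow-list (a real deployment would draw it from your lockfiles or a curated registry mirror), using the standard library's `difflib` similarity ratio as a stand-in for a proper edit-distance check:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of packages the project actually depends on.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "urllib3", "cryptography"}

def typosquat_suspects(candidates, known=KNOWN_PACKAGES, threshold=0.85):
    """Flag names suspiciously close to -- but not exactly matching --
    a trusted package name, a common typosquatting signature."""
    suspects = []
    for name in candidates:
        if name in known:
            continue  # exact match: a legitimate dependency
        for trusted in known:
            ratio = SequenceMatcher(None, name, trusted).ratio()
            if ratio >= threshold:
                suspects.append((name, trusted, round(ratio, 2)))
    return suspects
```

Wired into a CI gate, such a check would have forced a human review before a near-miss name like `requestss` was ever resolved, which is exactly the manual-verification step the article notes many organizations skipped.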

Defensive Measures and Future Safeguards

Following the initial reports of potential exposure, Databricks immediately mobilized its incident response teams to conduct a comprehensive forensic investigation of its internal systems. The company sought to determine whether any of its developer credentials or cloud environment variables had been intercepted by the TeamPCP Cloud stealer during routine CI/CD operations. The security event coincided with the recent rollout of Databricks’ own AI-driven Lakewatch security platform, which was designed to monitor and protect data lakehouse environments from evolving threats. Despite the alarming nature of the supply chain campaign, a Databricks spokesperson eventually confirmed that no evidence of an actual compromise was discovered within the company’s infrastructure. The organization emphasized its commitment to transparency and requested further documentation from the intelligence sources that first flagged the risk. This proactive stance was intended to reassure customers that their data remained isolated.

Security experts concluded that the most effective response to this campaign was a complete reset of the trust established within the development environment. Organizations that used any of the affected security scanners were advised to assume credential exposure and to rotate all secrets and tokens accessible to their CI runners. Administrators audited GitHub Actions logs for unauthorized outbound traffic directed toward the malicious domains identified in threat reports. Security teams also monitored for the creation of unauthorized repositories following the naming conventions the ShellForce group used during its operations. The incident highlighted the necessity of proactively auditing the entire software development lifecycle to identify hidden threats introduced through third-party dependencies. By implementing stricter controls over repository access and verifying the integrity of automated tools, companies sought to build more resilient defenses.
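The log-audit step described above can be sketched as a simple scan of archived CI logs for outbound URLs whose hosts match published indicators of compromise. The domains below are placeholders, not real indicators from the threat reports:

```python
import re
from pathlib import Path

# Placeholder indicator list -- substitute the domains actually published
# in threat intelligence reports for this campaign.
IOC_DOMAINS = {"teampcp-exfil.example", "shellforce-c2.example"}

# Extract the host portion of any http(s) URL appearing in a log line.
DOMAIN_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def scan_ci_logs(log_dir):
    """Yield (file, line_no, domain) for every outbound URL in the logs
    whose host matches a known indicator of compromise."""
    for path in Path(log_dir).rglob("*.log"):
        text = path.read_text(errors="replace")
        for i, line in enumerate(text.splitlines(), 1):
            for host in DOMAIN_RE.findall(line):
                if host.lower() in IOC_DOMAINS:
                    yield (str(path), i, host)
```

Any hit from a scan like this would justify the full secret rotation the experts recommended, since it indicates a runner actually spoke to attacker infrastructure rather than merely pulling a poisoned package.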
