Shai Hulud attack ships signed malicious TanStack, Mistral npm packages
by Bill Toulas · BleepingComputer

Hundreds of packages across npm and PyPI have been compromised in a new Shai-Hulud supply-chain campaign delivering credential-stealing malware that targets developers.
The attacker hijacked valid OpenID Connect (OIDC) tokens to publish malicious package versions carrying verifiable provenance attestations (SLSA Build Level 3).
Attributed to the TeamPCP threat group, the attack started with compromising dozens of TanStack and Mistral AI packages but quickly extended to other popular projects, like Guardrails AI, UiPath, and OpenSearch.
The Shai-Hulud campaign emerged last September and had multiple iterations [1, 2, 3], some of them exposing hundreds of thousands of developer secrets in automatically generated GitHub repositories. Among more recently compromised projects are the Bitwarden CLI package and the official SAP packages.
The latest attack wave occurred yesterday with the threat actor publishing multiple malicious packages in the TanStack namespaces on the Node Package Manager (npm), and then spreading to other projects using stolen CI/CD credentials.
Application security company StepSecurity notes that the threat actor published the infected packages via the legitimate CI/CD pipeline, carrying valid SLSA provenance attestations issued by npm's signing infrastructure and "tied to the legitimate TanStack/router Release workflow."
Endor Labs reports over 160 compromised packages on npm, Aikido recorded 373 malicious package-version entries, and Socket tracked 416 compromised package artifacts across npm and the Python Package Index (PyPI).
According to TanStack's post-mortem report, the attackers chained three weaknesses: a risky 'pull_request_target' workflow, GitHub Actions cache poisoning, and OIDC token theft from runner memory.
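To illustrate the first link in that chain, the snippet below is a generic example of the risky pattern, not TanStack's actual workflow: 'pull_request_target' runs in the context of the base repository, with access to its secrets, while checking out attacker-controlled pull-request code.

```yaml
# Illustrative sketch only — workflow and job names are invented.
name: pr-checks
on: pull_request_target   # elevated: base-repo secrets are in scope
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checking out the PR head brings untrusted code into the
          # privileged context established above.
          ref: ${{ github.event.pull_request.head.sha }}
      # Untrusted build scripts now run with secrets reachable.
      - run: npm ci && npm run build
```

The safe variant uses the plain 'pull_request' trigger, which runs without access to the base repository's secrets.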
The attackers published 84 malicious versions across 42 TanStack packages that had valid provenance, valid Sigstore attestations, and legitimate GitHub Actions signatures.
From a developer’s perspective, the packages appeared to be cryptographically authentic, and there was no indication of a compromise.
Endor Labs highlights a clever Git commit trick in which attackers abused an orphaned commit pushed to a fork of the TanStack/router repository, making it accessible through GitHub’s shared fork object storage even though it didn't belong to any branch.
The commit was referenced via a malicious optional dependency, causing npm to automatically fetch and execute attacker-controlled code during package installation.
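A manifest entry exploiting this trick could look like the hedged sketch below; the package name, fork, and commit placeholder are invented for illustration, since npm resolves 'github:owner/repo#commit-ish' references by cloning the referenced commit at install time.

```json
{
  "name": "victim-app",
  "optionalDependencies": {
    "innocuous-helper": "github:some-fork-org/router#<orphaned-commit-sha>"
  }
}
```

Because GitHub serves objects from shared storage across a repository's fork network, the orphaned commit resolves even though it belongs to no branch, and npm runs any install scripts the fetched code declares.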
The malware targets developer secrets, including:
- GitHub Actions OIDC tokens and PATs
- Git credentials
- npm publish tokens
- AWS Secrets Manager, IAM, and ECS task credentials
- Kubernetes service account tokens and cluster credentials
- HashiCorp Vault tokens
- SSH keys
- Claude Code configs
- VS Code tasks
- .env files
StepSecurity says that the payload reads GitHub Actions process memory and harvests credentials from more than 100 file paths associated with cloud providers, cryptocurrency tokens, and messaging apps.
To exfiltrate the stolen data, the malware uses the Session P2P network, making its traffic appear as encrypted messenger traffic and complicating detection, blocking, and takedown efforts.
Once an infection occurs, the malware writes itself into Claude Code hooks and VS Code auto-run tasks, so uninstalling the malicious packages does not remove it.
The self-propagation mechanism remains largely unchanged from past waves: it uses stolen GitHub/npm credentials, enumerates the packages linked to the compromised maintainer, modifies tarballs to inject the payload, and then republishes malicious versions.
According to supply-chain security platform SafeDep, although the trigger mechanism is different in compromised Mistral AI and TanStack packages, they drop the same credential-stealing payload.
Microsoft Threat Intelligence analyzed the payload delivered via a malicious Mistral AI package on PyPI. The actor named it 'transformers.pyz', likely to impersonate Transformers, the Hugging Face open-source Python library used to access pre-trained models for natural language processing.
The researchers say the payload drops information-stealing malware on Linux systems. The stealer includes basic geofencing logic, specifically avoiding execution on hosts where Russian language settings are detected.
A destructive secondary routine is also present. In environments that appear to originate from Israel or Iran, the malware introduces a probabilistic sabotage mechanism with a 1-in-6 chance of running a recursive wipe command (rm -rf /).
The behavior resembles the CanisterWorm campaign that TeamPCP deployed in March against Kubernetes platforms: if CanisterWorm landed on machines matching Iran's timezone and locales, it would wipe them.
Lists of compromised packages are available in the reports from various security vendors [1, 2, 3, 4, 5], and it is recommended to check all the resources for a complete view of the impact.
Developers who downloaded an affected package version should assume that credentials were exposed. Researchers recommend that security teams take the following actions:
- check for affected package versions
- check for persistence on developer machines
- rotate all credentials (GitHub tokens, npm tokens, AWS credentials, Vault tokens, Kubernetes service accounts, and CI/CD secrets)
- audit IDE directories for malicious files surviving npm install (e.g., router_runtime.js or setup.mjs)
- block the threat actor's command-and-control infrastructure (api.masscan.cloud, git-tanstack.com, and *.getsession.org) at DNS or proxy level
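The first two steps above can be sketched as a quick triage sweep. This is a hedged example: the payload file names (router_runtime.js, setup.mjs) come from the vendor reports, but the directories scanned are common defaults, not an exhaustive list.

```shell
#!/bin/sh
# Triage sweep sketch — extend the scanned paths for your environment.

# 1. List installed packages so versions can be compared against published IoC lists.
command -v npm >/dev/null && npm ls --all 2>/dev/null | grep -i tanstack

# 2. Look for payload files that persist after the malicious packages are removed.
find "$HOME/.vscode" "$HOME/.claude" . -maxdepth 4 \
     \( -name router_runtime.js -o -name setup.mjs \) 2>/dev/null

# 3. Review editor/agent auto-run hooks for injected commands.
for f in .vscode/tasks.json "$HOME/.claude/settings.json"; do
  [ -f "$f" ] && { echo "== $f"; cat "$f"; }
done
echo "triage sweep complete"
```

A hit in step 2 or an unexpected command in step 3 indicates the persistence mechanism described earlier, and the machine should be treated as compromised.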
Snyk researchers note that because the "attack produces valid SLSA Build Level 3 attestations for malicious packages," provenance verification alone is not enough: it should be paired with a behavioral analysis layer at install time and signature-based checks for known-malicious packages.
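For the provenance-verification part, npm ships a built-in check; the guarded sketch below runs it only where it can succeed. As noted above, in this campaign the attestations themselves were valid, so a passing check is necessary but not sufficient.

```shell
# Verify registry signatures and provenance attestations for the
# dependencies of the current project.
if command -v npm >/dev/null && [ -f package-lock.json ]; then
  npm audit signatures
else
  echo "run inside a project with a package-lock.json"
fi
```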
In the long term, to mitigate the risk from similar attacks, consider enforcing lockfile-only installs, which should prevent auto/silent package updates.
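In npm, lockfile-only installs can be enforced with `npm ci`, as in this guarded sketch:

```shell
# `npm ci` refuses to run without a package-lock.json, installs exactly
# the pinned versions, and never rewrites the lockfile, so a newly
# published malicious version cannot slip in silently.
# --ignore-scripts additionally blocks install-time payloads.
if [ -f package-lock.json ]; then
  npm ci --ignore-scripts
else
  echo "no package-lock.json: commit a lockfile first, then use npm ci in CI"
fi
```

Note that `--ignore-scripts` can break packages that legitimately rely on install scripts, so test it before enforcing it fleet-wide.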
UPDATE [08:36 EST]: Added information from Microsoft Threat Intelligence's analysis of a payload delivered via a compromised Mistral AI package.