NPM Was the Breach
Junior Dev Nugget; principle: Installation is execution unless the toolchain proves otherwise.; likely mistake: Treating provenance badges, trusted publishing, or lockfiles as proof that installed code cannot exercise dangerous powers.; read next: npm lifecycle scripts, GitHub Actions pull_request_target, OIDC trusted publishing, pnpm minimumReleaseAge, and capability-based package policy.
Word count receipt: technical migration from unpublished Libertaria Dispatch draft.
What changed
This post moves the NPM supply-chain dispatch into the Devlog before publication. It was drafted for libertaria.blog, then held back because the audience mismatch was obvious: this is not public Exitarian commentary. This is build-chain trench doctrine.
The incident: Socket reported 84 compromised TanStack npm package artifacts in the Mini Shai-Hulud campaign. The malicious releases added credential-stealing payloads aimed at developer machines and CI systems. TanStack later attributed the publish path to a chained GitHub Actions attack involving the pull_request_target trust boundary, cache poisoning, and extraction of an OIDC token from the runner process. StepSecurity reported that the malicious packages carried valid SLSA Build Level 3 provenance attestations.
The poison had paperwork.
That is the lesson.
NPM did not fail merely as a registry. It failed as an execution environment, trust oracle, social reputation machine, transitive authority router, and credential-adjacent build system. Then the industry wired it into CI and acted shocked when a worm learned the map.
Why now
Agents now run package-manager commands constantly. Human operators hand them shell access, repo access, CI context, and vague goals. The agent sees a missing dependency and reaches for npm install or pnpm install because that is what the training distribution taught it.
That habit is now a security boundary.
The Mini Shai-Hulud campaign shows the new shape of the battlefield: attackers do not need a glamorous zero day when they can compromise one trusted publish path, ride install-time code execution, steal credentials, and self-propagate through package ownership.
The attack did not invent the doors.
It walked through them.
Design decisions and tradeoffs
- Chosen path: Treat package installation as privileged code execution, not dependency resolution.
- Rejected path: Frame the incident as an npm-only scandal.
- Why the rejection was correct: PyPI also appeared in campaign reporting, and the deeper failure is ambient install authority plus credential-adjacent automation. NPM is the loudest case, not the entire disease.
- Chosen path: Recommend pnpm hardening as triage while refusing to call it salvation.
- Rejected path: Publish a tool-switch sermon.
- Why the rejection was correct: pnpm improves the gate. It does not change language-level process authority. A better installer can refuse unknown scripts. A better substrate makes powers explicit before code runs.
The Installer Is Code Execution
Most developers still speak about package installation as if it were a download.
It is not.
An npm install can run lifecycle scripts. A dependency can pull a git-based subdependency. A prepare hook can execute arbitrary JavaScript. That JavaScript can spawn child processes, read environment variables, inspect the filesystem, touch developer tools, and phone home before the developer has even imported the library.
The screen says:
installing @tanstack/react-router
The machine may be doing:
reading GITHUB_TOKEN
walking ~/.kube
inspecting .npmrc
staging payloads in /tmp
contacting attacker infrastructure
That is the horror. The command looks like dependency resolution. The authority profile looks like a shell session with secrets nearby.
NPM normalized this because install-time build hooks solved real engineering problems. Native modules need compilation. Toolchains need setup. Packages like esbuild and sharp have legitimate postinstall needs.
Fine.
Then the ecosystem made the exception feel ordinary. Once ordinary, it became invisible. Once invisible, it became infrastructure. Once infrastructure, it became a worm path.
Trusted Publishing Did Not Save You
The TanStack compromise is worse than the usual “maintainer got phished” story.
That would be bad enough. This one is more educational.
The attacker did not need to steal a classic npm token, according to TanStack’s postmortem as summarized by Socket. The malicious publishes went through the project’s own GitHub Actions publishing machinery. The workflow had the kind of trusted-publisher shape that the ecosystem encourages: GitHub Actions obtains an OIDC identity token, npm trusts that identity, and the registry accepts a publish from the expected pipeline.
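For readers who have not seen the shape, a trusted-publishing job looks roughly like this. Details are a sketch based on npm's trusted publishing and GitHub's OIDC documentation; verify against current docs before copying.

```yaml
# Illustrative trusted-publishing job shape.
permissions:
  id-token: write   # lets the job mint an OIDC identity token
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: https://registry.npmjs.org
      # No NODE_AUTH_TOKEN: the registry accepts the workflow's
      # OIDC identity instead of a long-lived secret.
      - run: npm ci && npm publish
```

Note what this proves: the publish came from this workflow. It says nothing about what code the workflow was tricked into running first.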
In theory, this removes long-lived npm tokens from CI.
Good.
But identity is not magic. If attacker-controlled code runs inside the trusted context, it can inherit the halo. The registry sees the expected publisher. The provenance system sees the expected build lane. The badge turns green.
The package is still poison.
SLSA, Sigstore, OIDC, and provenance attestations can prove where a thing came from. They cannot prove that the thing deserved to exist. If the trusted factory is tricked into manufacturing knives, the certificate will faithfully attest that the knives came from the factory.
The supply chain did not lack a signature.
It lacked confinement.
The Pwn Request Pattern Is a Border War
GitHub Actions has a sharp distinction between pull_request and pull_request_target.
pull_request runs in the context of the proposed merge. For forks, it gets reduced permissions and no access to base-repository secrets by default.
pull_request_target runs in the context of the base repository. It can access broader permissions and secrets. It exists for legitimate cases: labeling, commenting, triaging, and doing trusted repository work in response to an untrusted pull request.
Then developers use it like a normal CI trigger.
That is where the floor opens.
If a workflow checks out and executes attacker-controlled code while carrying base-repository authority, the attacker no longer needs to defeat GitHub. The workflow hands him the room key, compliments his badge, and asks whether he would like coffee before touching production-adjacent machinery.
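The vulnerable pattern is small enough to show whole. This is a hypothetical workflow, condensed for illustration, but the shape matches the documented pwn-request class.

```yaml
# Hypothetical vulnerable workflow: do not copy.
name: triage
on: pull_request_target   # runs with base-repository authority
permissions:
  contents: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # DANGER: checks out the fork's code...
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes it while a secret is in scope.
      - run: npm install && npm test
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Two lines do the damage: the head-ref checkout and the secret in the same job. Either alone is survivable. Together they hand an untrusted contributor a credentialed shell.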
Cache poisoning compounds the damage. A cache is not just a performance optimization when it crosses a trust boundary. It becomes a storage channel between untrusted code and trusted execution. If the fork can poison something the base workflow later restores, the cache has become a smuggler.
Speed was the sales pitch.
State was the weapon.
The Worm Did Not Need Genius
CodeOne spends time admiring the worm’s engineering: obfuscation, daemonization, credential collectors, provider modules, git mutation, package mutation, payload staging, sender infrastructure, and signs of AI-assisted coding.
That deserves analysis, but not awe.
Attackers no longer need mythical skill to build competent worms. They need enough domain understanding to compose known tricks, enough automation to iterate, and enough patience to read ecosystem docs the way maintainers rarely do.
AI helps with that. It writes glue. It explains APIs. It deobfuscates samples. It suggests persistence paths. It turns “how do I enumerate likely secrets on CI runners?” into a shopping list.
The dangerous part is not that the attacker used AI.
The dangerous part is that the ecosystem gave AI a beautifully documented maze with credentials at the center.
Modern package infrastructure is a self-replicating attack lab because every piece wants to be convenient:
- package managers execute code during installation;
- CI jobs carry tokens because publishing should be smooth;
- caches persist state because builds should be fast;
- registries trust identities because tokens are painful;
- dependency graphs hide depth because developer experience sells.
The worm did not invent these doors.
It walked through them.
PNPM Is Triage, Not Salvation
The practical advice circulating now is mostly right.
Use pnpm. Set minimumReleaseAge so fresh poison cannot enter your graph ten minutes after publication. Block exotic dependencies so a nested package cannot pull executable code from a random git URL. Use onlyBuiltDependencies or equivalent allowlists so install scripts run only for packages that actually need them. Pin versions. Audit lockfiles. Rotate secrets on touched machines. Treat any CI runner that installed affected versions as contaminated until proven otherwise.
Do it.
Today.
But do not mistake this for a cure.
pnpm can harden the install path. It can slow the attacker. It can force explicit build-script approval. It can make the usual npm sludge less suicidal. That matters. Good tools matter.
Still, the deeper problem remains: the package manager is compensating for a language and runtime model where arbitrary transitive code inherits ambient process authority.
A better installer can say, “I will not run unknown postinstall scripts.”
A better language says, “This package cannot read the filesystem, open the network, spawn a process, or inspect environment variables unless those capabilities appear in its type and policy surface.”
One is a gate.
The other changes the physics.
What Janus Would Make Visible
Janus treats code as a permission contract.
If a package wants to read files, it needs a filesystem capability. If it wants network egress, it needs a network capability. If it wants to spawn a subprocess, it needs process authority. If it wants to operate in the dangerous tier, it climbs into a named profile where reviewers can see the escalation.
That does not eliminate malicious authors.
It changes what they can hide.
An npm-style installer lets a dependency say nothing and still touch the machine. A Janus-style package must surface authority requirements before install and before execution. The resolver can reject packages whose capability profile exceeds policy. CI can run with no ambient secrets. A build script can exist, but it runs in a sandbox with declared, minimal powers.
Package metadata stops being decoration. It becomes law.
A supply-chain resolver should be able to ask:
- Does this package require network egress during install?
- Does any transitive dependency request process-spawn authority?
- Did a minor patch release add filesystem read access?
- Did a :script package pull in a :sovereign module?
- Did the artifact hash change without enough independent signatures?
These questions should not require a crisis spreadsheet and three panicked Discord threads. They should be machine-checkable before the poison touches disk.
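What machine-checkable could look like, as a sketch. Janus is unreleased, so the manifest shape, tier names, and capability strings below are invented for illustration; the point is only that the check is a loop over declared authority, not a forensic investigation.

```javascript
// Hypothetical capability-policy check for a Janus-style resolver.
// All names here are invented; nothing reflects a real API.
const policy = {
  maxTier: "script",
  forbiddenDuringInstall: ["net.egress", "proc.spawn"],
};

function violations(manifest) {
  const out = [];
  for (const dep of manifest.dependencies) {
    for (const cap of dep.installCapabilities ?? []) {
      if (policy.forbiddenDuringInstall.includes(cap)) {
        out.push(`${dep.name} wants ${cap} during install`);
      }
    }
    if (dep.tier === "sovereign" && policy.maxTier === "script") {
      out.push(`${dep.name} escalates to :sovereign`);
    }
  }
  return out;
}

// Example graph: one transitive dep requests network egress at install time.
const manifest = {
  dependencies: [
    { name: "left-pad-ng", tier: "script", installCapabilities: [] },
    { name: "telemetry-kit", tier: "script", installCapabilities: ["net.egress"] },
  ],
};
console.log(violations(manifest)); // flags telemetry-kit, before anything touches disk
```

The resolver rejects before install, mechanically, because the authority request is in the metadata instead of buried in a postinstall script.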
Junior Dev Nugget
- The principle being demonstrated: Installation is execution unless the package manager and runtime prove otherwise.
- The mistake the reader would have made: Believing that a signed package from a trusted publishing workflow is safe. Signed poison is still poison.
- What to read or look at next: npm lifecycle scripts, GitHub Actions pull_request_target, OIDC trusted publishing, pnpm hardening options, SLSA provenance limits, and capability-based package policy.
Ideological stance, grounded
- Position: You do not own your machine if npm install can turn it into an exfiltration node.
- Engineering evidence drawn from the incident: Install scripts can execute arbitrary code. CI systems often place credentials near build steps. Trusted publishing can authenticate the factory while saying nothing about whether the factory was tricked into producing malware. Provenance without confinement is a receipt, not a border.
- Where this sits in the Libertaria mission: Sovereign infrastructure needs build chains that behave like territory: bordered, inspectable, reproducible, revocable, and hostile to smuggled authority.
References
- Socket: TanStack npm packages compromised in ongoing Mini Shai-Hulud supply-chain attack
- StepSecurity: Mini Shai-Hulud is back
- TanStack: npm supply chain compromise postmortem
- npm: Lifecycle scripts
- GitHub Actions: Events that trigger workflows
- pnpm: Configuration options
What comes next
Harden our own JavaScript-facing repos with package-manager policy where practical, and keep translating this failure class into Janus package/capability enforcement.
NPM was a mistake because it taught the industry to confuse installation with consent. - V.