Which is basically phishing:
> The meeting link itself directed to a spoofed Zoom meeting that was hosted on the threat actor's infrastructure, zoom[.]uswe05[.]us.
> Once in the "meeting," the fake video call facilitated a ruse that gave the impression to the end user that they were experiencing audio issues.
> The recovered web page provided two sets of commands to be run for "troubleshooting": one for macOS systems, and one for Windows systems. Embedded within the string of commands was a single command that initiated the infection chain.
...
it seems the correct muscle memory response to train into people is that "if some meeting link someone sent you doesn't work, then you should create one and send them the link"
(and of course never download and execute anything, don't copy scripts into terminals, but it seems even veteran maintainers do this, etc...)
see Infection Chain here https://cloud.google.com/blog/topics/threat-intelligence/unc...
textarea at the bottom of this comment: https://github.com/axios/axios/issues/10636#issuecomment-418...
Arrgh. You're looking at the closest thing to a root cause and you're just waving over it. The culture of "just paste this script" is the problem here. People trained not to do this (or, like me, old enough to be horrified about it and refuse on principle) aren't vulnerable. But you just... give up on that and instead view this as a problem with "muscle memory" about chat etiquette?
Good grief, folks. At best that's security theater.
FWIW, there's also a root-er cause about where this culture came from. And that's 100% down to Apple Computer's congenital hatred of open source and refusal to provide or even bless a secure package management system for their OS. People do this because there's no feasible alternative on a mac, and people love macs more than they love security it seems.
I don't understand. I used Linux for a long time before I switched to Mac, and the "copy this command and paste it in your terminal" trope was just as prevalent there.
This and every other recent supply chain attack was completely preventable.
So much so I am very comfortable victim blaming at this point.
This is absolutely on the Axios team.
Go set up some smartcards for signing git pushes/commits, publish those keys widely, and mandate signed merge commits so nothing lands on main without two maintainer sigs, leaving no single point of failure.
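For anyone who hasn't done it, the local half of that setup is genuinely small. A sketch, assuming a GPG key already lives on the smartcard (the key ID is a placeholder):

```shell
gpg --card-status                              # confirm the smartcard is visible
git config --global user.signingkey ABCD1234   # placeholder key ID from the card
git config --global commit.gpgsign true        # sign every commit by default
git config --global tag.gpgsign true           # and every tag
git verify-commit HEAD                         # how a reviewer checks a signature
```

The server-side half (refusing unsigned commits on main) is branch-protection configuration in the forge, not shown here.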
It seems the Axios team was largely practicing what you're preaching. To the extent they aren't: it still wouldn't have prevented this compromise.
One must sign commits -universally- and -also- sign reviews/merges (multi-party) and then -also- do multi party signing on releases. Doing only one step of basic supply chain security unfortunately buys you about as much defense as locking only a single door.
I do, however, assign significant blame to the NPM team for repeatedly refusing optional package signing support, which would let the server and clients refuse packages from signing-enabled projects unless signed by a quorum of pinned keys. But even aside from that, if packages were signed manually, then canary tools could have detected this immediately.
I think NPM is fully to blame here. Packages that exceed a certain level of popularity should require signing/strong 2FA. They should implement more schemes that publishers can optionally enable, like requiring mandatory sign-off from more than 1 maintainer before the package is available to download.
Then on the package page it should say: "[Warning] Weak publishing protection" or "[Checkmark] This package requires sign-off from accountA and accountB to publish".
they had 2FA, but likely software TOTP (so it was either autofilled via 1Password (or similar), or the attacker was able to steal the seed)
at this point I think NPM publishing its own authenticator app and asking people to scan a QR code with it is the easiest way (so people don't end up with one actual factor)
They won't do this, I have talked to them plenty of times about it. But, if they did, the supply chain attacks would almost entirely stop.
Until NPM can enforce those basic checks though, you have to roll your own CI to do it yourself, but large well funded widely used projects have an obligation to do the basics to protect their users, and their own reputations, from impersonation.
You said that you "also" blame NPM, but they're the only party who should get any blame until they get their shit together.
[1] https://github.com/axios/axios/issues/10636#issuecomment-418...
If I understand it correctly, your suggestions wouldn’t have prevented it, which is evidence that this is not as trivially fixable as you believe it is.
Operate under the assumption all accounts will be taken over because centralized corporate auth systems are fundamentally vulnerable.
This is how you actually fix it:
1. Every commit must be signed by a maintainer key listed in the MAINTAINERS file or similar
2. Every review/merge must be signed by a -second- maintainer key
3. Every artifact must be built deterministically and be signed by multiple maintainers.
4. Have only one online npm publish key maintained in a deterministic and remotely attestable enclave that validates multiple valid maintainer signatures
5. Automatically sound the alarm if an NPM release is pushed any other way, and automatically revoke it.
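Step 4's "validates multiple valid maintainer signatures" is just an N-of-M quorum check. A minimal sketch, assuming the cryptographic verification itself happened elsewhere; `hasQuorum` and the key IDs are illustrative, not any existing npm mechanism:

```javascript
// Given which pinned maintainer keys produced a valid signature over a
// release, require at least `quorum` distinct maintainers before the
// enclave's publish key will sign.
function hasQuorum(pinnedKeys, validSignerKeys, quorum = 2) {
  const pinned = new Set(pinnedKeys);
  const distinctSigners = new Set(
    validSignerKeys.filter((k) => pinned.has(k)) // ignore keys not in the pin set
  );
  return distinctSigners.size >= quorum;
}

// A release signed twice by the same maintainer does not count:
console.log(hasQuorum(["A", "B", "C"], ["A", "A"]));      // false
console.log(hasQuorum(["A", "B", "C"], ["A", "B"]));      // true
console.log(hasQuorum(["A", "B", "C"], ["A", "X", "B"])); // true (unknown "X" ignored)
```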
What's even more stupid is that they actually started mandating 2FA for high-risk packages, and FIDO2 supports being used to actually sign artifacts, but they simply use it for auth and let releases stay unsigned. Even for the developers they insist hold cryptographic signing keys, they insist on throw-away signatures for auth only, not artifact signing to prevent impersonation. It is golf-clap-level stupid.
Consider them a CDN that wants to analyze your code for AI training for their employer and nothing more. Any security controls that might restrict the flow of publishing even a little bit will be rejected.
I've started adding a provenance verification step to our deploy pipelines after the event-stream incident years ago, and it's caught weird stuff twice now. Both times it was just maintainers accidentally publishing from their local machine instead of CI, not actual attacks, but the same mechanism would catch this.
The real problem isn't that we lack signing infrastructure or better NPM policies. It's that the ecosystem has trained everyone to treat dependency updates as a solved problem the moment you have a lockfile. Lockfiles protect you from silent republishing of existing versions, but they do nothing when the attack comes as a "legitimate" new patch release. Diffing your lockfile for unexpected new transitive deps on every deploy is table stakes, and most teams I've worked with still don't do it.
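That lockfile-diff check fits in a few lines of Node, assuming npm lockfile v2/v3 shape (a top-level `packages` object keyed by `node_modules/...` paths); `newTransitiveDeps` is a made-up helper name:

```javascript
// Flag any package name present in the new lockfile but not the old one --
// exactly the "unexpected new transitive dep" a malicious patch release adds.
function newTransitiveDeps(oldLock, newLock) {
  const name = (p) => p.replace(/^.*node_modules\//, "");
  const before = new Set(Object.keys(oldLock.packages ?? {}).map(name));
  return Object.keys(newLock.packages ?? {})
    .map(name)
    .filter((n) => n && !before.has(n)); // drop the "" root entry too
}

const previous = { packages: { "": {}, "node_modules/axios": {} } };
const current = {
  packages: {
    "": {},
    "node_modules/axios": {},
    "node_modules/plain-crypto-js": {}, // the kind of surprise to flag
  },
};
console.log(newTransitiveDeps(previous, current)); // → [ 'plain-crypto-js' ]
```

Wire it into CI against the lockfile from the previous deploy and fail the build on any non-empty result.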
> March 31, 00:21 UTC: axios@1.14.1 published with plain-crypto-js@4.2.1 injected
> March 31, around 01:00 UTC: axios@0.30.4 published with the same payload
> March 31, around 01:00 UTC: first external detections
> March 31, around 01:00 UTC: community members file issues reporting the compromise. The attacker deletes them using the compromised account.
So it was found out almost immediately.
GHSA and OSV have started tracking them as advisories - but the infrastructure to actually find these in a given system is still lacking. Most tools are still just checking NVD/CVE.
So the attestation signal was there, but the malware flag comes later - normally after the malicious version has already been pulled - and nothing in the average developer's toolchain tells them that they got hit.
Adding a postinstall script should require approval from NPM. NPM clients should not install freshly published packages. NPM packages should be scanned after publishing. High-profile packages should verify the upstream git hash signature. NPM install should run in a sandbox and detect any attempt to write outside the project directory.
But npm, being part of a multi-trillion-dollar company, cannot be bothered to fix any of these. Instead they push for tighter integration with GitHub, with UX that sucks.
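To be fair, one piece of that wishlist exists today: npm can refuse to run lifecycle scripts (including postinstall) by default.

```shell
# Disable lifecycle scripts globally; this is a real, documented npm setting.
npm config set ignore-scripts true

# Or opt out for a single install:
npm install --ignore-scripts
```

The caveat is that some packages (native addons built via node-gyp, for example) legitimately need their install scripts, so you end up allow-listing those by hand.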
That would be a beautiful example of the cobra effect: what about updates that fix vulnerabilities? Are you going to force users to wait a couple of days, or a week, before they can get malware removed?
The problem would be new versions that fix security issues though, and because this is all open source as soon as you publish the fix everyone knows the vulnerability. You wouldn’t want everyone to stay on the insecure version with a basically public vulnerability for a week.
The next incarnation of this, I worry, is that the malware hibernates somehow (e.g., if (Date.now() < 1776188434046) { exit(); }) to maximize the damage.
I mean the compromised machine registers itself on the command server and occasionally checks for workloads.
The hacker then decides their next actions: depending on the machine they compromised, they'll either try to spread (like this time) and mount a broad attack, or go more in-depth and try to exfiltrate data or spread internally if, e.g., a build node has been compromised.
I feel like npm specifically needs to up their game on static analysis of malicious code embedded in public projects.
Before each action you need to enter your 2FA code.
I got so frustrated with npm end of last year that I wrote a whole guide covering that issue: https://npmdigest.com/guides/npm-trusted-publishing
Still needs to be published first, but looks like it automates all the annoying UI things you mentioned.
Edit: wait, did the attacker intercept the totp code as it was entered? Trying to make sense of the thread
I would argue that the problem is network accessibility, not programmability.
It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
> It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).
the implicit trust we have in maintainers is easily faked as we see
(As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)
Identity continuity, at a minimum, is of immense defensive value, even though we will not know if the author is human or trusted by any humans.
That said, any keys attached to highly depended-on projects would earn a lot of trust that their holders are human by getting a couple of the 5k+ people worldwide with active, well-trusted PGP keys to sign them, via conferences or otherwise, as it has always been done.
Are you really saying there is just something fundamental about javascript developers that makes them unable to run the same basic shell commands as Linux distribution maintainers?
You are of course right that a signed package ecosystem would be great, it's just that you're asking people to do this labour for you for free. If you pay some third party to verify and sign packages for you? That's totally fine. Asking maintainers already under tremendous pressure to do yet another labour-intensive security task so you can benefit for free? That's out of balance.
Are they incapable of doing it? Probably not. Does it take real labour and effort to do it? Absolutely.
Interesting it got caught when it did.
Seems to me that one drastic tactic NPM could employ to prevent attacks like this is to use hardware security. NPM could procure and configure laptops with identity rooted in the laptop TPM instead of 2FA. Configure the NPM servers so that for certain repos only updates signed with the private key in the laptop TPM can be pushed to NPM. Each high profile repo would have certain laptops that can upload for that repo. Set up the laptop with a minimal version of Linux with just the command line tools to upload to NPM, not even a browser or desktop environment. Give those laptops to maintainers of high profile repos for free to use for updates.
Then at update time, the maintainer just transfers the code from their dev machine to the secure laptop via USB drive or CD and pushes to NPM from the special laptop.
Given that the "extreme vigilance" of even the primitive "don't install unknown stuff on your machine" level is unattainable, can there really be effective project-level solutions?
Mandatory involvement of more people to hope not everyone installs random stuff, at least not at same time? (though you might not even have more people...)
Point 4 from https://npmdigest.com/guides/npm-trusted-publishing#ux-probl...
(I wrote that guide page for myself because I always get annoyed when dealing with npm OIDC)
NPM rejected PRs to support optional signing multiple times more than a decade ago now, and this choice has not aged well.
Anyone that cannot take 5 minutes to set up commit signing with a $40 usb smartcard to prevent impersonation has absolutely no business writing widely depended upon FOSS software.
Normalized negligence is still negligence.
Just sign commits and reviews. It is so easy to stop these attacks that not doing so is like a doctor that refuses to wash their hands between patients.
If you are not going to wash your hands do not be a doctor.
If you are not going to sign your code do not be a FOSS maintainer.
Even if they did sign the code, what's stopping them from slipping some crypto link in? And do they also need to check all the transitive dependencies in their code?
Sitting back and expecting Microsoft to keep the community safe is going to continue to end badly. The community has an obligation to each other.
Like, no one is making someone go bring a bunch of food to feed the homeless, but if you do, you have some basic social obligation to make sure it is sanitary and not poison.
People who give things away for free widely absolutely have obligations, and if they do not like those, they should hand off the project to a quorum of responsible maintainers and demote themselves to just a contributor.
> if they do not like those, they should hand off the project to a quorum of responsible maintainers and demote themselves to just a contributor.
The most responsible thing to do is to release it under an OSS license and let whoever, yes - including you, fork and maintain their own copy if it's that important.
Is a food pantry giving away free food obligated to check expiration dates and make sure the food is properly sealed?
Volunteer work absolutely has obligations, and I do not know why software volunteers are exempt from any responsibility unless they are being paid.
If you do not want to do the volunteer work in a safe way, please hand off the job to a volunteer willing to do so.
If maintainers really cannot afford that, they should flag it as a major big bold print supply chain risk on the readme: "We cannot afford 4 yubikeys for our maintainers and thus all code is signed with software keys in virtual machines as a best effort defense. Donate to our fund [here] to raise $500 for dedicated release hardware"
Friends and I have gotten 100s of yubikeys and nitrokeys donated to FOSS maintainers, but FOSS maintainers have to be willing to say they would use them and signal that they need them.
Honestly, though, anyone who cannot afford $40 is, I expect, at high risk of being bribed or of having to give up contributing to take on more work, so we should significantly fund any project signaling that much desperation.
Yet most developers I work with just use it reflexively. This seems like one of the biggest issues with the npm ecosystem - the complete lack of motivation to write even trivial things yourself.
I use "xhr" via fetch extensively; it has handled everything in day-to-day business for years with minimal boilerplate.
(The only exception known to me being upload progress/status indication)
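A minimal sketch of what that day-to-day use looks like without axios; `request` is a hypothetical helper, and the fetch implementation is injectable purely so it can be exercised without a network:

```javascript
// Thin fetch wrapper covering the common axios use cases: JSON in/out
// and rejection on non-2xx status (which bare fetch does not do).
async function request(url, { method = "GET", body, headers = {}, fetchImpl = fetch } = {}) {
  const res = await fetchImpl(url, {
    method,
    headers: body ? { "content-type": "application/json", ...headers } : headers,
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`); // axios-like rejection
  return res.json();
}

// Exercised with a stub instead of a live server:
const stubFetch = async () => ({ ok: true, status: 200, json: async () => ({ hello: "world" }) });
request("https://example.invalid/api", { fetchImpl: stubFetch })
  .then((data) => console.log(data)); // → { hello: 'world' }
```

As the comment above notes, upload progress is the one gap: fetch has no upload progress events, so that single case still needs XMLHttpRequest.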
Then you would have created just an axios clone. AKA re-inventing the wheel. The issue isn't the library itself, but rather the fact that it's popular and provided a large enough attack surface.
You can actually just clone the axios package and use it as is from your private repo and you would not have been affected.
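For what it's worth, npm does support git URLs as dependency sources, so pinning a vetted internal mirror looks roughly like this (the host, path, and commit placeholder are made up):

```json
{
  "dependencies": {
    "axios": "git+ssh://git@git.example.internal/mirrors/axios.git#<vetted-commit-sha>"
  }
}
```

Pinning to a specific commit hash freezes the dependency against anything published upstream, at the cost of having to review and advance the pin yourself.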
The wheel is the native fetch API, nobody needs to reinvent it.
All you'd do in that scenario is make your own hubcap to put on top.