AI Is Breaking Two Vulnerability Cultures
83 points by speckx 3 hours ago | 37 comments

rikafurude21 2 hours ago
This feels more like an old problem getting reframed as an AI problem.

people were already diffing kernel commits and figuring out which ones were security fixes long before llms. if a patch lands publicly, the race has basically already started.

also not sure shorter embargoes really help. the orgs that can patch in hours are already fine. everyone else still takes days or weeks.

if anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.

reply
JumpCrisscross 2 hours ago
> people were already diffing kernel commits and figuring out which ones were security fixes

With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.

> not sure shorter embargoes really help

Why 90 days versus 2 years? The author is arguing the factors that set that balance have shifted, given the frequency of simultaneous discovery. The embargo window isn’t an actual window, just an illusion, if the exploit is going to be found by several people outside the embargo anyway.

> cheaper exploit generation probably makes coordinated disclosure more important

I agree. But it also makes it less viable. If script kiddies can find and exploit zero-days, the capacity to coordinate breaks down.

There was always a guild ethic that drove white-hat culture. If the guild is broken, the ethic has nothing to stand on.

reply
Hizonner 29 minutes ago
> With skill, and usually not consistently and systematically.

How do you know? Just because the people who like to crow about vulnerabilities aren't doing it doesn't mean that the people who are actually in a position to exploit them systematically and effectively aren't.

Those embargoes have always been dangerous, because they create a false sense of security. But, as you point out...

> With AI, anyone can do this to any software.

Yep. Even if it hadn't been true before, it's clear that now you just have to assume that everybody relevant will immediately recognize the security impact of any patch that gets published. That includes both bugs fixed and bugs introduced.

... and as the AI gets better, you're going to have to assume that you don't even have to publish a patch. Or source code. Within way less time than it's going to take people to admit it and adjust, any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.

reply
thereisnospork 18 minutes ago
>any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.

Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If the friction of AI vulnerability detection becomes low enough, pre-scanning code will become common (or forced) practice.

reply
Hizonner 12 minutes ago
> it'll become common/forced practice to pre-scan code.

You'd think.

But then you'd think people would do a lot of other things too. I hope, I guess.

The other danger is that "the cloud" may become even more overwhelmingly dominant. Which of course has its own large security costs.

reply
organsnyder 16 minutes ago
Finding a vulnerability by looking at the diff that fixed it is very different from just reading through the code.
reply
ragall 24 minutes ago
> How do you know?

We know because we can see the average rate of vulnerability discovery and exploitation, and it's definitely going up fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people worldwide, which made them quite valuable. Not any more.

reply
awesome_dude 19 minutes ago
That's correlation, not causation.

It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped. The bigger target makes for the easier discovery.

reply
tempestn 9 minutes ago
But don't we know that some of the vulnerabilities being discovered predate ai coding?
reply
ragall 11 minutes ago
> That's correlation, not causation.

Pragmatically, correlation *is* evidence of causation in favour of the best explanation, until somebody finds a better explanation.

> It could equally be argued that the AI slop that's being produced makes for a lot more vulnerabilities being shipped.

This is also true, and it doesn't exclude the other explanation, because for the moment the vast majority of production software in the world (and therefore the bulk of enticing targets) was written before AI. If LLM-generated software becomes prevalent in commercial settings, then LLM-generated code will eventually become the majority of targets.

reply
awesome_dude 5 minutes ago
> Pragmatically, correlation is evidence of causation in favour of the best explanation, until somebody finds a better explanation.

Uh, no.

Correlation is only ever one thing - cause for investigation.

Everything based on correlation alone is speculation.

You can speculate all you like; I have zero issue with that, but it's best prefaced with "I guess".

reply
ragall 2 minutes ago
Very often you only have limited time for investigation and you have to act now. Action is almost always based on educated guesses.
reply
gritspants 30 minutes ago
> white-hate culture

I'm here for white-hate culture. You should know better.
reply
awesome_dude 21 minutes ago
> people were already diffing kernel commits and figuring out which ones were security fixes

> With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.

I would like to see actual evidence of this, not... vibes.

I mean, this reeks of "Anyone is a Principal developer now" when the truth is there is still work to do.

reply
alecco 22 minutes ago
> Torvalds said that disclosing the bug itself was enough, without the pursuant circus that followed when a major problem has been discovered. [1]

So it's not surprising Dirtyfrag was disclosed by a fix in the Linux kernel. [2]

[1] https://www.zdnet.com/article/torvalds-criticises-the-securi...

[2] https://afflicted.sh/blog/posts/copy-fail-2.html

reply
santoshalper 7 minutes ago
I'd say it's an old problem being exacerbated by AI.
reply
oytis 6 minutes ago
Tony Hoare's old saying about "no obvious bugs" vs. "obviously no bugs" holds more than ever in the age of LLMs.
reply
miki123211 13 minutes ago
AI will shorten update windows dramatically. 2026 is the worst year to be thinking about dependency cooldowns; we need to think about dependency warmups instead.

Soon, there will be no such thing as a safe way to disclose a vulnerability in an open source project. Centralized SaaS will have a major security advantage here.

reply
woah 9 minutes ago
You could have a web of trust where Linux-using organizations each spend $x continuously scanning and patching their own dependencies with AI, and sending each other patches and scans.
reply
xiaoyu2006 2 hours ago
The quick test doesn't show much: by asking outright whether this is a security patch, the prompt primes the AI toward agreeing with that assumption. A confusion matrix would be more useful. Of course, this isn't meant to be a detailed AI-capability-testing post.
reply
jefftk 34 minutes ago
[author]

I agree it is not much additional evidence! If someone wanted to try running the same test on a series of N commits from that list, including this one, I'd be very curious to see the answer!
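For anyone who wants to run that experiment, a minimal sketch of such a harness. The `ask_model` stub is a toy stand-in for whatever LLM call you'd actually use, and the labeled sample commits are made up:

```python
def ask_model(diff: str) -> bool:
    """Toy stand-in for an LLM call: does this diff look like a security fix?"""
    return "overflow" in diff  # placeholder heuristic, not a real classifier


def confusion_matrix(commits):
    """Tally (tp, tn, fp, fn) over (diff_text, is_security_fix) pairs."""
    tp = tn = fp = fn = 0
    for diff, truth in commits:
        guess = ask_model(diff)
        if guess and truth:
            tp += 1
        elif guess:
            fp += 1
        elif truth:
            fn += 1
        else:
            tn += 1
    return tp, tn, fp, fn


# Made-up labeled commits standing in for a real list of kernel fixes.
sample = [
    ("fix buffer overflow in parser", True),
    ("rename variable for clarity", False),
    ("clamp index to avoid integer overflow", True),
    ("update documentation", False),
]
print(confusion_matrix(sample))  # (2, 2, 0, 0) on this toy data
```

Swapping the stub for a real model and the sample for actual labeled kernel commits would give the four counts a confusion matrix needs.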

reply
cubefox 50 minutes ago
Yeah, ideally we would need the phi coefficient (aka MCC, the binary Pearson correlation), which can be calculated from a confusion matrix of yes/no LLM classifications for all kernel diffs. (Number of true positives, true negatives, false positives, false negatives.)
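A minimal sketch of computing phi/MCC from those four counts (the numbers below are invented purely for illustration):

```python
from math import sqrt


def mcc(tp, tn, fp, fn):
    """Phi coefficient (Matthews correlation) from a binary confusion matrix."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0


# Invented counts: say 1000 kernel diffs, 50 of them real security fixes.
print(round(mcc(tp=40, tn=920, fp=30, fn=10), 3))  # 0.656
```

A value near +1 would mean the LLM's yes/no answers track ground truth closely; near 0, that it is guessing.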
reply
JumpCrisscross 2 hours ago
> So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher

Why?

> Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective

This is the key. With AI, the “people won't notice, with so many changes going past” assumption fails.

reply
j2kun 13 minutes ago
> Luckily AI can speed up defenders as well as attackers here, allowing embargoes that would previously have been uselessly short.

This is an important facet of the problem space: security risks turning into an arms race for who wants to spend more tokens.

reply
FuriouslyAdrift 6 minutes ago
Reverse engineering vulnerabilities from patches is red team 101...
reply
Analemma_ 2 hours ago
I'd argue it's actually breaking three vulnerability cultures. In addition to the two Jeff mentions, I think the culture of delaying upgrades and staying on stable versions for as long as possible is going to become increasingly untenable, if everything that's not latest can be trivially scanned and exploited. In the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.

reply
tetha 59 minutes ago
> In the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

It may actually be the opposite.

Debian's steady, professional approach to shipping security patches with little to no functional difference actually enables us to consider and work on automated, autonomous weekly (or faster) patching of the entire fleet. And once that's in place and trusted, emergency rollouts become very possible and easy.

We have other projects that "move fast and break things" and ship whatever they want in whatever versions they want; those require constant human attention to work through their shenanigans just to ship any security update.

reply
calvinmorrison 47 minutes ago
Not only that, but Debian has, for example, debsecan, so you can see on any system which CVEs exist and whether your packages are patched. E.g., from my system I ran it and got

> CVE-2026-32105 xrdp

which I see has a fix in sid but not in bookworm.

reply
layer8 2 hours ago
> there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

Debian continuously issues security updates for stable versions, which can be applied via automatic updates. “Stable” doesn’t mean that vulnerabilities aren’t getting fixed.

The argument that could be made is that keeping up with getting vulnerabilities fixed might become such a high workload that fewer releases can be maintained in parallel, and therefore the lifetime and/or overlap of maintained releases would have to be reduced. But the argument for abandoning stable releases altogether doesn’t seem cogent.

It goes both ways: Stable code that only receives security updates becomes less vulnerable over time, as the likelihood of new vulnerabilities being introduced is comparatively low. From that point of view, stable software actually has a leg up over continuous (“eternal beta” in the worst case) functional updates.

reply
ryandrake 52 minutes ago
I can only dream, but this may re-popularize (among the rest of the non-Debian software industry) the general best practice of keeping a "sustaining" branch green, buildable, and with frequent releases, for security fixes.

I hate software that forces you to take new features as a condition of obtaining bug and security fixes. We need to keep old "stable" builds around for longer and maintain them better. I know, I know, it is really upsetting to developers to have to backport things to old versions--they wish that all they had to work on was the current branch. But that just causes guys like me to never upgrade because the downside of upgrading (new features) is worse than the upside (security fixes).

reply
muvlon 2 hours ago
That's not really Debian's culture, to be honest. Yes, they run old major and minor versions, but they do ship patch updates as fast as they can. Even on Debian stable, you absolutely are supposed to update all the time. The culture of "just don't touch it" is a different one (but it also exists; I've seen it).
reply
y3ahd0g 9 minutes ago
Yep. This is why I'm using local AI to edit and build my own copies of the Linux kernel, Wayland... everything a distribution would ship, really.

It's not so daunting for me, having come of age when compiling a kernel specific to a hardware platform was essential.

Custom software that doesn't fit the usual patterns isn't foolproof, but it won't be an obvious target.

Monocultures with all their eggs in one basket are even less secure than truly diverse ecosystems, though.

reply
acranox 2 hours ago
Debian has updated kernel packages out for the stable release. https://security-tracker.debian.org/tracker/CVE-2026-43284

I kind of get your point, but they responded pretty quickly here.

reply
Analemma_ 2 hours ago
Oh yeah, to be clear: Debian has always been good about quickly shipping patches to kernel vulnerabilities, and they will continue to be so. I was more thinking about whether they will get overwhelmed if every bit of software they package just has a firehose of vulnerabilities on everything which isn't latest.
reply
pixl97 2 hours ago
We are now paying for the sins of our fathers (well and mostly ourselves).

We've just kept building more complex things with more exposure, with no recognition that the day of reckoning was coming. And now we're in an untenable situation. With governments spending billions on AI with the big providers, it's likely they've found many of these already.

reply
giancarlostoro 45 minutes ago
Arch Linux to become the only Linux OS left.
reply
papichulo2023 55 minutes ago
Maybe it is about time for Linux to get real CI/CD and start using AI extensively.

Not just for vulnerabilities: having nice AGENTS.md/skills-style definitions would encourage new devs to contribute, instead of dealing with an overworked maintainer repeating the same thing for the nth time.

reply