In conclusion - having a lot of tokens helps! Having a better model also helps. Having both helps a lot. Having very intelligent humans + a lot of tokens + the best frontier models will help the most (emphasis on the intelligent humans).
Adding the words “by Claude” to it doesn’t materially change it. One could also pay a few humans to do the same thing. People have done that for decades.
A good security expert earns how much per year? And that person works 8/5 (eight hours a day, five days a week).
Now you can just throw money at it.
The CIA and co. surely pay more than $20k for a zero-day (that's the figure the Anthropic red team cited as the cost of a complex exploit).
If someone builds a framework around this, you can literally copy and paste it, throw money at it, and scale it. That is not possible with a human.
> Now you can just throw money at it.
What happens when you throw enough money at it that it raises the cost significantly?
The CIA, the FBI, and nation states easily pay $100k for a zero-day.
Plenty of companies have security experts on staff.
And it will become cheaper and easier to use, fast.
Logged in just to show some love. +1 for the economics. +1 again (if I could) for the truth-to-power.
We need a lot more of this kind of multi-disciplinary skepticism to counterbalance the industrial-grade rockstar-ninja 10x Kool-Aid drinking.
It takes humans a very long time to learn how to code/find bugs. You just can't take any human and have them do it in a reasonable amount of time with a reasonable amount of money.
Claude is effectively automation: once you have the hardware, you can run as many copies of the model as you want. Factories can build hardware far faster than we can train more people.
It's weird to see a denial of the industrial revolution on HN.
I’m not denying that LLMs can be used to improve security research, suggesting that their use is wrong, or anything like that.
Humans have used software to research security for a long time. AI driven SAST is clearly going to help improve productivity.
Even reviewing human work is, in practice, often where processes fall short.
- what if, at a certain level of capability, you're essentially bug-free? I'm somewhat skeptical that this could be the case in a strong sense, because even if you formally prove certain properties, security often crucially depends on the threat model (e.g. side channels, constant-time requirements, etc.), but maybe it becomes less of a problem in practice?
- what if, past a certain capability threshold, weaker models can substitute for stronger ones if you're willing to burn tokens? To take a coding example: GPT-3 couldn't code at all, so I'd rather have X tokens with, say, GPT 5.4 than 100X tokens with GPT-3. But would I rather have X tokens with GPT 5.4 or 100X tokens with GPT 5.2? That's murkier, and I could see some kind of indifference curve (toy sketch below).
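To make that intuition concrete, here is a toy sketch. The value function is entirely my own assumption, purely illustrative: if returns from extra tokens diminish logarithmically while capability scales them linearly, 100X tokens can't rescue a much weaker model but can favor a slightly weaker one.

    import math

    # Toy assumption, purely illustrative: value = capability * log10(1 + tokens),
    # i.e. extra tokens give diminishing returns, scaled by model capability.
    def value(capability, tokens):
        return capability * math.log10(1 + tokens)

    X = 1_000_000
    print(value(10, X))       # ~60: strong model, X tokens
    print(value(1, 100 * X))  # ~8:  much weaker model, 100X tokens -- no contest
    print(value(9, 100 * X))  # ~72: slightly weaker model, 100X tokens -- it wins

Under that (made-up) curve you'd get exactly the indifference behavior described above: the GPT-3 case is hopeless, the GPT 5.2 case can go either way.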
Also, I find myself thinking more and more that the ability to pay for tokens is becoming crucial. And it's unfair: if you don't have money, you don't have access. A worsening of class conflict, somehow, if you know what I mean.
If you spend months shipping slop because “models will get better, and tomorrow’s models can fix today’s slop”, what happens when they not only don’t get better but actually get worse, and you’re left with a bunch of slop you don’t understand and problem-solving muscles that have gotten weak?
I would say that most software is going to have few easily exploitable bugs. Leaving such bugs in place will quickly cost more than having them discovered and fixed.
Other bugs, those that do not lead to easy pwning of a system, circumventing billing, etc, may linger as much as they currently do.
The defender also not only has to discover issues but get fixes deployed. Installing patches takes time, and once a patch is available, an attacker can reverse engineer the exploit from it and use it against unpatched systems. This happens in a matter of hours these days, and AI can accelerate it.
It is also entirely possible that the defender will never create patches or users will never deploy patches to systems because it is not economically viable. Things like cheap IoT sensors can have vulnerabilities that don't get addressed because there is no profit in spending the tokens to find and fix flaws. Even if they were fixed, users might not know about patches or care to take the time to deploy them because they don't see it worth their time.
Yes, there are many major systems that do have the resources to do reviews and fix problems and deploy patches. But there is an enormous installed base of code that is going to be vulnerable for a long time.
tomato, tomato
If anyone gets access to the mythical Mythos, we'll see it make contact with reality.
Interestingly enough, I was thinking of writing an article about how cybersecurity (both access models and operational assumptions) can be modeled as a proof system (NOT a proof-of-work system). By that I mean there is an abstract model with a set of assumptions (policies, identities, invariants, configurations, and implementation constraints) from which authorization decisions are derived.
A model is secure if no unauthorized action is derivable.
A system is correct if its implementation conforms to the model's assumptions.
A security model can be analyzed operationally by how likely its assumptions are to hold in practice.
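As a minimal sketch of what I mean (the toy policy language, the names, and the invariant here are all my own, purely illustrative): derive everything the assumptions entail, then check that no unauthorized action is in the derivable set.

    # Toy proof-system model of authorization. Facts plus Horn-style
    # rules; "?u" is a variable ranging over users.
    FACTS = {
        ("role", "alice", "admin"),
        ("role", "bob", "user"),
    }
    RULES = [
        (("may", "?u", "read"),  [("role", "?u", "admin")]),
        (("may", "?u", "write"), [("role", "?u", "admin")]),
        (("may", "?u", "read"),  [("role", "?u", "user")]),
    ]

    def derive(facts, rules):
        """Forward-chain to a fixpoint: everything derivable from the assumptions."""
        known = set(facts)
        users = {f[1] for f in facts if f[0] == "role"}
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                for u in users:
                    sub = lambda t: tuple(u if x == "?u" else x for x in t)
                    if all(sub(b) in known for b in body) and sub(head) not in known:
                        known.add(sub(head))
                        changed = True
        return known

    # Secure w.r.t. an invariant: the forbidden action must not be derivable.
    forbidden = ("may", "bob", "write")
    print("secure:", forbidden not in derive(FACTS, RULES))  # True here

Correctness, in the sense above, is then the separate claim that the running system actually conforms to FACTS and RULES.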
With LLMs, even the halting problem is just a question of paying for a pro subscription!
It's not proof of work, but proof of financial capacity.
The big companies are turning the access to high-quality token generators (through their service) into means of production. We're all going direct to Utopia, we're all going direct the other way.
This continuous rush is not healthy. npm updates, replies to articles that barely made HN 12 hours ago, anything like that. It's not healthy.
Slow down.
So the bigger models hallucinate better, casually hitting more real problems?
It's not just PoW at inference; it's PoW of inference + training.
This is exactly the argument AI skeptics make, btw. Also, you say you tried GPT 120B OSS; that's like me proclaiming LLM coding doesn't work because I tried putting GPT-3.5 in Claude Code. Try it with GLM 5, Qwen, etc. Or improve your harness :)
I guess this is the crux of the debate. All the claims are comparing models that are available freely with a model that is available only to limited customers (Mythos). The problem here is with the phrase "better model". Better how? Is it trained specifically on cybersecurity? Is it simply a large model with a higher token/thinking budget? Is it a better harness/scaffold? Is it simply a better prompt?
I don't doubt that some models are stronger than others (a Gemini Pro or a Claude Opus has more parameters and a larger context size, and was probably trained for longer and on more data, than its smaller counterpart, Flash and Sonnet respectively).
Unless we know the exact experimental setup (which in this case is impossible because Mythos is completely closed off and not even accessible via API), all of this is hand wavy. Anthropic is definitely not going to reveal their setup because whether or not there is any secret sauce, there is more value to letting people's imaginations fly and the marketing machine work. Anthropic must be jumping with joy at all the free publicity they are getting.
Well, just because these are all AI people doesn't mean they've verified enough of these models' output to actually back up the significant security implications they're advertising.
I guess we'll never learn.
He also applies the logic of their claims to the actual real world. You can say that model cards are marketing garbage, but then you have to prove that experienced programmers are not significantly better at security.
That has not been my experience. It's true that they are "better at security" in the sense that they know to avoid common security pitfalls like unparameterized SQL, but essentially none of them have the ability to apply their knowledge to identify vulnerabilities in arbitrary systems.
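For anyone who hasn't seen the pitfall in question, a minimal illustration using Python's sqlite3 (toy table and attacker-controlled string of my own invention): string interpolation lets input rewrite the query, while parameter binding does not.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

    user_input = "bob' OR '1'='1"  # attacker-controlled value

    # Unparameterized: the input is spliced into the SQL text, so the OR
    # clause becomes part of the query and every row matches.
    print(conn.execute(
        f"SELECT name FROM users WHERE name = '{user_input}'"
    ).fetchall())  # [('alice',), ('bob',)] -- injection succeeded

    # Parameterized: the driver binds the value; it is never parsed as SQL.
    print(conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)
    ).fetchall())  # [] -- no user literally has that name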
My point is that more experienced programmers are better at security on average, not that they are security experts.
I've found it to be the opposite. Many of them do have the ability to apply their knowledge in that fashion. They're just either not incentivised to do so, or incentivised to not do so.
This does not match my experience.
Out of curiosity, are you one of the people who has access to the model? If yes, could you write about your experimental setup in more detail?
Rumors say it has 10 trillion parameters vs. 1 trillion.
It's restricted because it's genuinely good at finding vulnerabilities, and employees felt that it's not a good idea to give this capability to everyone without letting defenders front-run.
That's it. That's all there is to it. It is not some grand marketing play.
It's a possibility, but it doesn't eliminate the possibility that it's hype. If these claims were indeed serious, they would submit the model for independent analysis somewhere.
This isn't some crazy process. Defense contractors are required to submit their systems (secret sauce and all) for operational test and evaluation before they're fielded.
They have. Forty different companies have all committed resources to patching their systems based on vulnerabilities found by Mythos. One of them, Google, is a frontier AI lab that pointedly did not say its own models had found similar vulnerabilities.
> Defense contractors are required to submit their systems (secret sauce and all) for operational test and evaluation before they're fielded.
Does this look something like having 40 separate companies look at the outputs of the system, deciding that it’s real and they should do something about it, and committing resources to it?
At some point, “cynicism” is another word for “lalala can’t hear you”.
To which my answer is clearly, no, not even remotely. If Anthropic is outright lying about what Mythos can do, someone else will have it in a year.
In fact, the security world would have to seriously consider the possibility that even if Mythos didn't exist, nation states would have the equivalent in hand already. And of course, if Mythos does exist, nation states have it now. The odds that Anthropic (and every other AI vendor) isn't penetrated by every major intelligence agency deeply enough that they have access to their choice of model approach zero.
I wonder about the overlap between people being skeptical of Mythos' capabilities, and those who are too skeptical of AI to have spent any time with it because they assume it can't be any good. If you are not aware of what frontier models routinely do, you may not realize that Mythos is just an evolution of existing capabilities, not a revolution. Even just taking a publicly-available frontier model, pointing it at a code base and telling it to "find the vulnerabilities and write exploits" produces disturbingly good results. I can see the weaknesses referenced by the Mythos numbers, especially around the actual writing of the exploits, but it's not like the current frontier models fall on their face and hallucinate wildly for this task. Most everything they produce when I try this is at least a "yeah, that's worth thinking about" rather than an instant dismissal.
I'm not that old, but I've been here long enough to remember when GPT-3 was considered too dangerous to release. Now you have models 10x as good at 1/10th the size that run on 8GB of VRAM.
We don't yet know if Mythos was a level shift in the capability/cost frontier, or a continued extension of the same logarithmic capability/cost curve.
AI companies routinely claim that something is too dangerous to release (I think GPT-2 was the first case) for marketing reasons. There are at least 10 documented high profile cases.
They keep it secret because they now sell to the MIC with China and North Korea bullshit stories as well as to companies who are invested in the AI hype themselves.
And with GPT-2, the worry was mass emails that were a lot better, more detailed, and more personal, social media campaigns, etc.
How many bots are deployed today on X, influencing democracy around the globe?
It's fair to say it had an impact, and LLMs still do.
The platonic ideal of how to dismiss any argument by anyone about anything.
Unless you are an employee at Anthropic (and shouldn't be talking about any of this at all), there's no way to know what the model's capabilities are.