This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
I've heard they cleaned up their program recently and respond much quicker nowadays.
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
	unsigned long pfn;
	struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);

	vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
	/* This is a CSRs mapping, use pgprot_device */
	vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
	pfn = core->paddr >> PAGE_SHIFT;

	return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end - vm->vm_start,
			       vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then... I wonder how this got out in the first place. I know I'm using a toy example, but I'd love to learn more!
> **Observations & Potential Issues**
>
> A few things worth flagging:
>
> 1. **No bounds checking on the mapping size.** Userspace controls `vm_end - vm_start` and `vm->vm_pgoff`. Here `vm_pgoff` is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at `core->paddr` — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
---
70-day release cycles will very quickly stop being fast enough to prevent widespread use of exploits once bots can scan every PR on every open source project as it lands.
This is quite impressive considering I’m just a dumbass with a Claude subscription.
Someone T-bones you in a parking lot, a chef causes food poisoning, a plumber's leak floods your bathroom, a personal trainer pushes you to injury, a mislabeled allergen ends up on food, movers break your armoire, a roofer leaves a leak -- I bet we'd see a lot less of all that if a $1MM fine + life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
Take my friend who is a property lawyer. The firm she works for buys her insurance, because it would be insane to operate without insurance, but the only available insurance is personal insurance, it insures a specific person to do property law. So, although her day job is helping that $100Bn farm equipment company buy a $10M new factory from a $100Bn construction firm, at the weekend she is covered by that same insurance when she represents her friend buying a $500k cottage. AIUI this is a completely normal arrangement.
If that were the situation for programming, the company is going to buy your $100M exploit insurance because they need a programmer, but it's personal insurance, so you could work on your game jam game using the same insurance. And it'd be crazy to just go commando if you don't have employment and thus insurance, in case somehow your "Galaga but also Blue Prince and somehow a visual novel" game jam entry causes a $10M damages payment.
It really just introduces a legal burden to prove competence and work in good-faith, and nets immense power to throw out ridiculous deadlines. Your managers are legally responsible too, and if they push beyond what's reasonable you have just cause to bring them to court in a way that you currently don't. To re-emphasize, I don't think this is a better world, but it's not unlivable.
If I were personally liable for damages, and there was an insurance program of some sort - similar to how doctors & dentists practice - sure, I'd probably still write code, very carefully. But if there was a decent chance of me spending the rest of my life in prison because of something I wrote on a Friday at 4pm under some amount of stress? No thanks. I can re-train as a plumber, and stand knee-deep in shit all day.
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there are so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some luddite psy-op trying desperately to bring us back to a pre-Internet era in any way they can...
By definition in Rust it's incorrect to overflow the non-overflowing integer types, so if you intend, say, wrapping semantics, you should use the explicit wrapping operations such as wrapping_add, or the Wrapping<T> types in which the default operators do wrap. But if you turn off checks, it's still safe to be wrong; it's just as if you'd called the wrapping operations by hand instead of the non-wrapping operations.
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if the checking were off, but I wasn't there. However, the reason it's on Project Zero is that it resulted in a bounds miss, and Rust would have prevented that anyway.
We've moved slightly closer to this, but in a world where we're still arguing over memory safety being necessary we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
That said, you can enable overflow checks in Rust's release mode. It's literally two lines in Cargo.toml:

```
[profile.release]
overflow-checks = true
```
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.

---

Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Feels like there's something new every other day: Linux, Windows, mobile, various commonplace tools used by everybody, the list goes on.