What if we did build a clean room as a service but the proceeds from that didn't go to the "Malus.sh" corporation, but to the owners / maintainers of the OSS being implemented. Maybe all OSS repos should switch to AGPL or some viral license with link to pay-me-to-implement.com. Companies that want to use that package go get their own custom implementation that is under a license strictly for that company and the OSS maintainer gets paid.
I wonder what the MVP for such a thing would look like.
The people behind this site/talk clearly don't buy into that. The way they see it, a reckoning must come. We might as well get it over with as soon as possible. Rip off the band-aid, so to speak. So maybe we should shake the system and show that it's falling apart.
// VibeSort
let arr = [51,46,72,32,14,27,88,32];
arr.sort((a, b) => {
  let response = LLM.query(`Which number is larger, number A:${a} or number B:${b}? Answer using "A" or "B" only; if they are equal, say "C".`);
  if (response.includes('C')) return 0;
  if (response.includes('B')) return -1;
  if (response.includes('A')) return 1;
  return 0;
});
console.table(arr);

Arguably Cowen's "Great Stagnation" was driven primarily by not embracing higher energy provision in the form of fission.
> "We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B." - Marcus Wellington III, Former CTO, Definitely Real Corp (Acquired)
> © 2024 MalusCorp International Holdings Ltd. Registered in [JURISDICTION WITHHELD].
> This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
I'm sure they've already received offers from investors who wish to build the next torment nexus.
EDIT: Reading it again it's quite obvious; I was just skimming at first, but still, damn. Hilarious
Satire points out the absurd
E.g. Palantir, the surveillance analytics company named after the magic orb that purports to let you remotely view anything you want, but actually allows its creator to view you while manipulating you by selectively showing some things and not others.
https://github.com/chardet/chardet/issues/327
I really got fooled here for a second. The unfortunate reality is that people will try this soon, and someone will have to litigate it if open source is to survive, which will take years and millions of dollars to resolve.
It's like... a reverse patent troll? I'm not even sure I get it, but the wording "liberation from open source license obligations" makes me want to puke. I also doubt it's legit, but I'm not a lawyer. I hope somebody at the FSF or the Apache Foundation, or whoever else is, will clarify.
"Our proprietary AI systems have never seen" — how can they prove that? An independent audit? By whom? How often?
Satire... yes but my blood pressure?!
I am going to assume it's the latter.
If you in your house take an AGPL program, host it for yourself, and use it yourself, nothing in the AGPL obligates you to publish the source changes.
In fact, even if you take AGPL software and put it behind a paywall and modify it, the only people who the license mandates you to provide the source code for are the people paying.
The AGPL is basically the GPL with the definition of "user" broadened to include people interacting with the software over the network.
And the GPL, again, only requires you to provide the source code, upon request, to users. If you only distribute GPL software behind a paywall, you personally only need to give the source to people paying.
Although in both these cases, nothing stops the person receiving that source code from publishing it under its own terms.
Google “examples of GPL enforced in court” for a few
Yeah it requires finding out, but how do you prove a whistleblower broke their NDA?
I'm missing something here; that's precisely what I'm arguing against. How can it do a clean-room reimplementation when the open source code is most likely in the training data? That only works if you train on everything BUT the implementation you want. It's definitely feasible, but wouldn't that be prohibitively expensive for most, if not all, projects?
But we'd be able to look at his clone code and see it's different, with different algorithms, etc. We could do a compare and see if there are any parts that were copied. It's certainly possible to clone GNU grep without copying any code and I don't think it would fail any copyright claims just because the GNU grep code is in the wild.
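The comparison described above can be done mechanically. Here is a minimal sketch using Python's `difflib` that flags long runs of identical lines shared between an original and a claimed reimplementation; the code snippets and the line-count threshold are purely illustrative:

```python
import difflib

def shared_runs(original: str, clone: str, min_lines: int = 5):
    """Return blocks of min_lines+ consecutive identical lines."""
    a = original.splitlines()
    b = clone.splitlines()
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    return [
        "\n".join(a[m.a : m.a + m.size])
        for m in matcher.get_matching_blocks()
        if m.size >= min_lines
    ]

# Toy example: same algorithm, different identifiers -> no long shared runs.
original = "int i;\nfor (i = 0; i < n; i++) {\n    total += x[i];\n}\n"
clone = "int idx;\nfor (idx = 0; idx < n; idx++) {\n    sum += x[idx];\n}\n"
print(shared_runs(original, clone, min_lines=2))  # []
```

Real plagiarism detection (MOSS and the like) normalizes identifiers and tokenizes first, so this line-level diff is only the crudest version of the idea.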
If that was the case, the moment any code is written under the GPL, it could never be reimplemented with a different license.
So instead of a human cloner, I use AI. Sure, the AI has access to the GPL code - every intelligence on the planet does. But does that mean that it's impossible to reimplement an idea? I don't think so.
Here, neither step is taken.
If you’re supplying a BOM manifest, rather than software constraints, then the only way to assert those constraints is to directly compare against the original project. It doesn’t matter if it’s AI or a human doing that, because either way it’s not “clean room”.
You can do clean room design with AI via SDD (spec driven development). But that’s not what this service (satire or not) offers.
Just because something is trivial enough to copy does not mean it was trivial to conceive of and codify. Mens rea really does matter when we are talking about defrauding intellectual property holders and stealing their opportunity.
But then how can the FSF reimplement AT&T utilities? The FSF didn't invent grep. They wrote a new version of it from scratch under a different license.
I love FOSS but I hate that it’s been used to replicate existing code or serve as a way to outsource corporate tech debt. The shadow of profit looms over it all.
The "clean room" aspect for that came in the way that the people writing the new implementation had no knowledge of the original source material, they were just given a specification to implement (see also Oracle v. Google).
If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
At the end of the day the supposed reimplementation that the LLM generates isn't copyrightable either so maybe this is all moot.
I didn’t RTFA but I suppose that by clean room here they mean you feed the code to ”one” LLM and tell it to write a specification. Then you give the specification to ”another” LLM and tell it to implement the specification.
It's great within the context of people who understand it, enlightening even. Sparks conversations and debates. But outside of it ignorance wields it like a bludgeon and dangerous to everyone around them. Look at all the satirical media around fascism, if you knew to criticize you could laugh, but for fascists it's a call to arms.
"Those maintainers worked for free—why should they get credit?"
"Your shareholders didn't invest in your company so you could help strangers."
"For the first time, a way to avoid giving that pesky credit to maintainers."
"Full legal indemnification [...] through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright"
https://news.ycombinator.com/item?id=27676266
Here is a more recent example I found in Cursor's browser experiment from January:
De jure, not at all.
Parallel creation is a very minimal defense to copyright infringement claims. It is practically impossible to prove in humans, much to the annoyance of musicians. "Go prove in a court that you have never heard this song, not even in the background somewhere."
LLMs, having been trained on all the software they could get their hands on, will fail this test. There is no parallel creation claim to be had. AI firms love to trot out "they learn just like humans", which is both false and irrelevant; it's copyright infringement when humans do it too. If you view a GPL'd repo and later reproduce the code unintentionally? Still copyright infringement.
De facto, though, things are different. The technical details behind LLMs are irrelevant. AI companies lie and frustrate discovery while begging politicians to pass laws legalizing their copyright infringement.
There won't be a copyright reckoning, not anymore. All the dumb politicians think AI is going to bail out their economies.
https://github.com/chardet/chardet/blob/5.0.0/chardet/mbchar...
/home/claude/.cache/uv/archive-v0/nZCy52fMCgTsNaLySn0xf/chardet
/home/claude/.cache/uv/wheels-v6/pypi/chardet
/usr/lib/python3/dist-packages/pip/_vendor/chardet
/usr/local/lib/python3.12/dist-packages/chardet
It's not surprising that they were able to create a new, working version of chardet this quickly. It seems the author just told Claude Code to "do a clean room implementation" and to make sure the code looks different from the original chardet (named several times in the prompt), without considering the training set and the tendency for LLMs to "cheat".

Unbelievable. This is why we can’t have nice things.
If you take 5 minutes to look at the code you'll see that v7 works in a completely different way, it mostly uses machine learning models instead of heuristics. Even if you compare the UTF8 or UTF16 detection code you'll see that they have absolutely nothing in common.
It's just API compatible, and the API is basically 3 functions.
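For context, chardet's public surface is essentially `detect()`, `detect_all()`, and the `UniversalDetector` class. "API compatible" only means honoring those signatures and return shapes. The toy `detect()` below is purely illustrative (a trivial BOM/ASCII/UTF-8 check), nothing like either project's real detection logic:

```python
def detect(byte_str: bytes) -> dict:
    """Toy stand-in returning chardet's result shape:
    {'encoding', 'confidence', 'language'}."""
    if byte_str.startswith(b"\xef\xbb\xbf"):  # UTF-8 byte order mark
        return {"encoding": "UTF-8-SIG", "confidence": 1.0, "language": ""}
    try:
        byte_str.decode("ascii")
        return {"encoding": "ascii", "confidence": 1.0, "language": ""}
    except UnicodeDecodeError:
        pass
    try:
        byte_str.decode("utf-8")
        return {"encoding": "utf-8", "confidence": 0.9, "language": ""}
    except UnicodeDecodeError:
        return {"encoding": None, "confidence": 0.0, "language": None}

print(detect("héllo".encode("utf-8")))
```

Any implementation that returns that dict for those calls is a drop-in replacement, regardless of whether the internals use heuristics or ML models.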
If he had published this under a different name nobody would have challenged it.
I love it. Brilliant satire that foreshadows the future.
We all have access to SOTA LLMs. If I want a "clean room" implementation of some OSS library, and I can choose between paying a third party to run a script to have AI rebuild the whole library for me and just asking Claude to generate the bits of the library I need, why would I choose to pay?
I think this argument applies to most straightforward "AI generated product" business ideas. Any dev can access a SOTA coding model for $20/month. The value-add isn't "we used AI to do the thing fast", it's the wrapping around it.
Maybe in this case the "wrapping" is that some other company is taking on the legal risk?
It's an inevitable outcome of automatic code generation that people will do this all the time without thinking about it.
Example: you want a feature in your project, and you know this github repo implements it, so you tell an AI agent to implement the feature and link to the github repo just for reference.
You didn't tell the agent to maliciously reimplement it, but the end result might be the same - you just did it earnestly.
You need the right kind of person, in the right life circumstances, to have this idea before it happens for real. By having publicity, it becomes vastly more likely that the idea finds someone who meets both criteria, like how it works with other crime (https://en.wikipedia.org/wiki/Copycat_crime). So thanks, Malus :P
It's the difference between a developer taking a job at Palantir out of college because nobody had a better offer, and a guy spending years in his basement designing "Immigrant Spotter+" in the hopes of selling it to the government. Sure, they're both evil, but lots of people pick the first thing, and hardly anybody does the second.
Put differently, this system already exists and is in heavy use today.
WDYM? LLMs are essentially this.
I even recall Baseball Mogul relied on the Lahman DB for a period of time. It does make me wonder if we'll see more of that.
Maybe that's part of the joke, though :)
While such tactics would render certain OSS software licenses absurd, the tactic itself, as a means to get around them, is entirely sound. It just reveals the flawed presupposition of such licenses. And I'm not sure there is really any way to patch them up now.
That’s how deep we are in neoliberal single truth shit now
There will be many questions asked, like why buy some SaaS with way too many features when you can just reimplement the parts you need? Why buy some expensive software package when you can point the LLM into the binary with Ghidra or IDA or whatever then spend a few weeks to reverse it?
But that's not true!
According to binding precedent, works created by an AI are not protected by copyright. NO ONE OWNS THEM!!!
I think maybe this is a good thing, but honestly, it's hard to tell.
https://www.reuters.com/world/us/us-appeals-court-rejects-co...
If I want to relicense some GPL code under MIT, and it ends up in the public domain because it can't be copyrighted, what do I care? I've still got the code I want without the GPL.
We need to replatform them at some point, and ideally I'd like to let some agents "use" the apps as a means to copy them / rebuild. Most of these are desktop apps, but some have browser interfaces. Has anyone tried something like this or can recommend a service that's worked for them?
The biggest trick is that you need to spend 75% of your time designing and building very good verification tools (which you can do with help from the LLM), and having the LLM carefully trace as many paths as possible through the original application. This will be considerably harder for desktop apps unless you have access to something like an accessibility API that can faithfully capture and operate a GUI.
But in general, LLM performance is limited by how good your validation suite is, and whether you have scalable ways to convince yourself the software is correct.
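The validation idea described above can be sketched as a differential-testing harness: drive both the original and the rebuilt implementation with the same generated inputs and collect any disagreements. Here `original_impl` and `rebuilt_impl` are trivial stand-ins for the two programs, not anything from the thread:

```python
import random

def original_impl(xs):   # stand-in for the legacy application's behavior
    return sorted(xs)

def rebuilt_impl(xs):    # stand-in for the LLM-produced rewrite
    return sorted(xs)

def differential_test(f, g, trials=1000, seed=0):
    """Compare f and g on randomly generated inputs; return mismatches."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if f(xs) != g(xs):
            mismatches.append(xs)
    return mismatches

print(len(differential_test(original_impl, rebuilt_impl)))  # 0
```

For a real replatforming job the "inputs" would be recorded user sessions or API traces rather than random lists, but the structure (generate, run both, diff) is the same.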
I was able to get it to rebuild and hack together a .NET application that we don't have source for. This was done in a Linux VM and it gave me a version that I could build and run on Windows.
We're past the point of legacy blackbox apps being a mystery. Happy to talk more, my e-mail is available on my profile.
Unless obfuscated, C# desktop apps are pretty friendly to decompile.
For this to be plausible satire, they need to show how they've trained their models to code without MIT, Apache, BSD, or GPL/AGPL code being in the training set...
https://fosdem.org/2026/schedule/event/SUVS7G-lets_end_open_...
Funny but true.
Ok great: all software and networks are "free." How do you pay for doctors and plumbers and electricians, whose earnings are legally protected by the state, but whose skill bases are also freely available to be used within the margin of error between a professional and a layman?
Issues like this are great to have conversations about, but if people don't start broadening the scope very quickly, it just turns into the IT/CS worker's worth going to zero in a world where everyone else's worth is protected. And history shows that if only one group sees the threat, the remaining trades/industries will let it die.
Focusing overly on corporate structures or specific skills tends to miss the point of how value is assigned in a capitalistic structure when knowledge is cheap. Knowledge has been the capital used by the labor force for hundreds of years. The reason some jobs are resistant is 100% the result of legislation at that point, not anything unique about the job.
"The Trades" seems to be the sales pitch used on the public. In the end they're just labor at that point, since I can pump a 20-year-old full of a master electrician's knowledge, keep one master on staff, and fire everyone else who hits that level once their earnings demand it, the same way we're now firing many mid/upper-level people in their 30s and 40s instead of their 50s and 60s, which is the scenario in tech today.
Software/IT is just the quickest to be absorbed. Many other industries are just in the slow boil, not seeing it yet.
There is a mutual agreement between all collaborating parties that "hey, we ALL need these core fundamental building blocks of software, so why don't we all collaborate in this open space?" And everyone wins.
There is tremendous value in the Linux kernel, and these large open source programs. And this is basically an attack by corporations to attempt to privatize it all.
It's nothing new. This is simply the latest example of capitalist "growth at any cost". We sailed past any immorality hazards a LONG time ago.
Doesn’t apply everywhere though.
* Many of the people maintaining FOSS are paid to do so; and if we counted 'significance' of maintained FOSS, I would not be surprised if most FOSS of critical significance is maintained for-pay (although I'm not sure).
* Publishing software without a restrictive license is not 'generous', it's the trivial and obvious thing to do. It is the restriction of copying and of source access that is convoluted, anti-social, and if you will, "insane".
* Similarly, FOSS is not a "miracle" of human cooperation; it is what you get when it is difficult to sabotage human cooperation. The situation with physical objects - machines, consumables - is more of a nightmare than the FOSS situation is a miracle. (IIRC, an economist named Veblen wrote about the sabotaging role of pecuniary interests on collaborative industrial processes, about a century ago; but I'm not sure about the details.)
* Many people read licenses, and for the short, paragraph-long licenses, I would even say that most developers read them.
* It is not insane to use FOSS from a "fiduciary standpoint".
Well, it's one thing to read licenses as a human and another to read them as a lawyer.
That's why it's useful to pick one of the standard licenses that lawyers have already combed over, even if it's a long one like the GPL.
The fact that this is satire aside, why would a company like this limit the methodology to open source? They could make a "dirty room" AI that uses computer-use models, plays with an app, observes how it looks from the outside (UI) and the inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
And tbh, I cannot see any issues if this is how it is done; you just have to prove that the clean room AI has never been exposed to the source code of the app you're trying to clone.
In order to really do this, they would need to train LLMs from scratch that had no exposure whatsoever to open source code which they may be asked to reproduce. Those models in turn would be terrible at coding given how much of the training corpus is open source code.
it is an illusion because this is a satire site.
:)
For example, the Anthropic Rust C compiler could hardly have copied GCC or any of the many C compilers it surely trained on, because then it wouldn't have spat out reasonably idiomatic and natural looking Rust in a differently organized codebase.
Good news for Rust and Lean, I guess, as it seems like everyone these days is looking for an excuse to rewrite everything into those for either speed or safety or both.
The second part is true. The first is a little trickier. The copyright applies to some fixed media (text in this case) rather than the idea expressed, but the protections extend well beyond copies. For example, in fiction, the narrative arc and "arrangement" is also protected, as are adaptations and translations.
If you were to try and write The Catcher in the Rye in Italian completely from memory (however well you remember it) I believe that would be protected by copyright even if not a single sentence were copied verbatim.
They do say this:
> Is this legal? / our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.
Unless they're rejecting almost all of open source packages submitted by the customer, due to those packages being in the training set of the foundation model that they use, this is really the opposite of cleanroom.
I do not necessarily agree with the phrasing of ActivePatterns' comment, but I also raised an eyebrow at iepathos' comment.
But I'm stupefied at m/y/our own oblivious excitement when extracting our expertise for others in the form of skills we share. It's a profound hacking of our reward system, on the fear of losing a job and the hope of climbing the ladder of abstraction.
Tech companies have for decades subsidized developer training and careers with free tools and tiers, support for developer communities and open-source -- in order to reduce the costs of expertise and to expand their markets. Now skills do both. For developers, the result will be like developing for or at Apple: the lucky few will work in secret, based on personal connections and product skills.
I find it surprising that most of the debate I've heard seems to be about the open source to closed source direction.
It seems to me that the more relevant part of this new development, for the software industry, is a teenager working over a weekend with an LLM and making a functional clone of AutoCAD, for instance.
You take Wikipedia, an LLM rewrites every single article giving them your preferred political spin and generates many more pictures for it. You make it sleeker, and price it at $4.99 per month.
EDIT: That's crazy. They already did that. Waiting for the torment nexus now I guess.
^ For those who haven’t been keeping up on the debacle.
I'd cheer for a company like this.
It seems to dance just on the other side of what's legal, though.
Then I don't think you've thought it through.
This entire software ecosystem depends on volunteering and cooperation. It demands respect of the people doing the work. Adhering to their licensing terms is the payment they demand for the work they do.
If you steal their social currency, they may just walk away for good, and nobody will pick up the slack for you. And if you're a whole society of greedy little thieves, the future of software will be everyone preciously guarding and hiding their changes to the last open versions of software from some decades ago.
You should read Bruce Perens' testimony in the Jacobsen v. Katzer case that explained all this (and determined that licensing terms are enforceable, and you can't just say "his is open mine is open what's the difference?")
https://web.archive.org/web/20100331083827/http://perens.com...
We need to deal with the issues now. The worst possible outcome is a gradual drip-drip-drip of incremental job losses, people shuffling from job to job, taking financial hits, some companies pretending everything is fine, other companies embracing full-bore zero employee work. The longer it goes on, the more wealth and power gets siphoned up by corporations and individuals who already have significant wealth, the bigger the inequality, and the bigger the social turmoil.
Software, graphics design, music, and video (even studio level movies) should cope with this now. It's not going to stop, AI isn't going to get worse, there's not going to be some special human only domain carved out. The sooner we cope with this the better, because it'll set the foundation for the rest of the job loss barreling down on us like the Chicxulub asteroid.
The end result could well be the people bringing out the guillotines for tech executives, or even the Butlerian Jihad.
But I'm not sure everyone would agree we need to race to those dystopian futures. They might prefer a more conservative future where they nip the scamming / copyright infringement at scale / "disruption" in the bud.
The trouble seems to revolve mainly around money. Give enough of it to someone, or even promise it, and so many people just lose their minds and their moral backbone. Politicians in charge of regulating these shenanigans especially so, I'm not sure they had moral backbones to begin with.
Agree, I said this in another comment, AI-generated anything should be public domain. Public data in, public domain out.
This train wreck in slow motion of AI slowly eroding the open web is no good, let's rip the bandaid.
I publish under AGPL and if someone ever took my project and washed it to MIT I would probably just take all my code offline forever. Fuck that.
I do not believe it will ever again make sense to build open source for business. The era of OSS as a business model will be very limited going forward. As sad and frustrating as it is, we did it to ourselves.
Axiom of Reality: “Intellectual Property” does not exist.
Let’s say instead it consolidated a few packages into 1. This might even be a good idea for security reasons.
Then it offered a mandatory 15% revenue tip to the original projects.
So far GPL enforcement usually comes down to “umm, try and sue us lol”.
How much human intervention is needed for it to be a real innovation and not just LLM-generated? Can I pay someone to watch Claude do its thing and press Enter 3 times?
Well, there is one way... You can have a government steal all open source code and force its citizens to only use proprietary hardware and proprietary code, all government sanctioned btw. I wonder if we're headed this way.
It does actually generate a price (which is suspiciously like a fixed rate of $1 per megabyte), and does actually lead you to Stripe. What happens if someone actually pays? Are they going to be refunding everything, or are they actually going to file the serial numbers off for you?
But I love it! The perfect response to the "clean room" AI re-implementation and re-licensing of whatever that library is called.
https://www.hp-lexicon.org/magic/solemnly-swear-no-good/
https://news.ycombinator.com/item?id=47329605
https://www.explainxkcd.com/wiki/index.php/2606:_Weird_Unico...
Also, using the API and docs themselves, though not illegal, seems to defeat the purpose.
Also, it's not right how the creator says "pesky credits to creator".
Just build your own, then. Credit is the least thing everyone using it should give.
> Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright
> If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.
> "Our lawyers estimated $4M in compliance costs. MalusCorp's Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?" - Patricia Bottomline, VP of Legal, MegaSoft Industries
I think they should take some responsibility!
Full legal indemnification* Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright
...
The MalusCorp Guarantee™ If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.
*This has never happened because it legally cannot happen. Trust us.
bad, evil, wicked; ugly; unlucky;
It's an interesting word in Latin, because depending on the phonetic length of the vowel and the gender, it varies greatly in meaning. The word 'malus' (short a, masculine adjective) means wicked, the word 'mālus' (long ā, feminine noun) means apple tree, and 'mālus' (long ā, masculine noun) means the mast of a ship.
"This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services."
> *Full legal indemnification:* Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright
Heh, ok. So, the thinking is:
1. You contract them.
2. The actual Copyright infringement is done by an __offshore__ company.
3. If you get sued by the original software devs, you seek indemnification from the offshore subsidiary.
4. That offshore subsidiary is in a country without copyright laws or with weak laws so "you're good!"
...
5. Profit.
This is a ridiculous legal defense, since this "one-way street" legal process will almost certainly result in you, the company actually using the infringing code, being sued first.
The indemnification is likely worthless since the offshore company won't have any assets anyway and will dissolve once there's a lawsuit and legal process is established.
The "guarantee" is absurd: Their "MalusCorp Guarantee" promises a refund and moving headquarters to international waters if infringement is found. This is not a real legal remedy and is written to sound like a joke, which is telling about their seriousness...
This whole "clean room as a service" concept is a legal gray area at best. In practice, it's extremely difficult to prove that a "clean room" process was truly clean, especially with AI models that have been trained on vast amounts of existing code (including the very projects they are "recreating").
The indemnification is a marketing gimmick to make a legally dangerous service seem safe. It creates a facade of protection while ensuring that any financial liability stays with you, the customer who wants to avoid infringement.
About the only reason nobody would actually build this is that there's no money in it. Who'd pay for a CRaaS version when they're not even paying for the original open source version?
I do think somebody will eventually vibe-code it for the lulz.
> order total = max( $0.50, sum of all packages )
> $0.50 minimum applies per order (Stripe processing floor). No base fee.
Not sure I can trust their output if this simple thing is fluffed
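For what it's worth, the quoted pricing reduces to a one-liner; the ~$1/MB rate is the earlier comment's observation about the site, not a documented figure:

```python
def order_total(package_sizes_mb, rate_per_mb=1.00, floor=0.50):
    """Quoted formula: max($0.50 Stripe floor, sum of per-package charges)."""
    return max(floor, sum(package_sizes_mb) * rate_per_mb)

print(order_total([0.2, 0.1]))  # 0.5 -- the floor applies
print(order_total([3.0, 2.0]))  # 5.0
```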
2. For the sake of argument assume 1 is completely true and feasible now and / or in the near term. If LLM generated code is also non copyrightable... but even if it is... if you can just make a copyleft version via the same manner... what will the licenses even mean any longer?
The scary part - what's today is satire, is tomorrow's stealth mode startup.
How far do they take the satire? If you pay them do they actually generate output?
Obviously it's sarcasm. But the problem with this part is that LLMs actually have seen all the code. So in real life it's worse than this, because no one even pretends.
Let’s hope one of these fake AI grifters doesn’t take this as a serious idea, raise a couple hundred million, and do real damage.
(I’m not against AI, I just don’t like nonsense either in tech, or people)
It's just confirming to me "yes, LLMs can do it so reliably that someone is trying to sell it, so I can probably just ask an LLM then".
The linked post contains a whopping lie - "What does it mean for the open source ecosystem that 90% of our open source supply chain can currently be recreated in seconds with today's AI agents"
It can't. Not even close. Please, do show a working clean-room implementation of a major open source package. (Not left-pad.)
We really need to stop hyperventilating and get back to reality.
> Our process is deliberately, provably, almost tediously legal. One set of AI agents analyzes only public documentation: README files, API specifications, type definitions.
Since nearly all open source dependencies couple the implementation with type definitions, I'm curious how this could pass the legal bar of a clean room.
Even if they claim to strip the implementation during their clean room process, their own staff and services have access to the implementation during the stripping process.
> Those maintainers worked for free—why should they get credit?
ROFL
In practice even with much better AIs this would still be a pretty big risk. The testing you'd need would be extensive.
[1]: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/
When people who rewrite open source libs with a bot then come crying to maintainers that their rewrites have bugs, and that they would like someone to fix said bugs for free, there is absolutely no one who will feel obligated to help them out.
Historically, it was a good license, and was able to keep Microsoft and Apple in check, in certain respects. But it's too played out now. In the past, a lot of its value came from it being not fully understood. Now it's a known quantity. You will never have a situation where NeXT is forced to open source their Objective-C frontend, for example
So the need is real, at least for enshittified libraries.
In this post that I wrote: https://news.ycombinator.com/item?id=47131572 ... I theorised about how a company could reuse a similar technique to re-implement an open source project to change its license. In short: (1) Use an LLM to write a "perfect" spec from an existing open source project. (2) Use a different LLM to implement a functionally identical project in same/different programming language then select any license that you wish. Honestly, this is a terrifying reality if you can pay some service to do it on your behalf.
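A minimal sketch of that two-step technique, with the model calls stubbed out; `llm_write_spec` and `llm_implement` are hypothetical names standing in for calls to two different model providers, not a real API:

```python
def llm_write_spec(source_code: str) -> str:
    # Step (1): one model reads the original and emits only a behavioral
    # spec (API surface, invariants, test cases), never the code itself.
    return "spec: a function add(a, b) returning the sum of two integers"

def llm_implement(spec: str) -> str:
    # Step (2): a different model sees only the spec and writes fresh
    # code, released under whatever license the operator picks.
    return "def add(a, b):\n    return a + b\n"

spec = llm_write_spec("AGPL-licensed original source here")
new_code = llm_implement(spec)

# Sanity-check that the reimplementation is functionally equivalent.
namespace = {}
exec(new_code, namespace)
print(namespace["add"](2, 3))  # 5
```

The "clean room" claim rests entirely on step (2)'s model never having seen the original, which is exactly the premise the surrounding comments dispute, since the original is almost certainly in the training data.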
A favorite example of mine is speed limits. There is a difference between "putting up a sign that says 55 mph and walking away", "putting up a sign that says 55 mph and occasionally enforcing it with expensive humans when they get around to it", and "putting up a sign that says 55 mph and rigidly enforcing it to the exact mph through a robot". Nominally, the law is "don't go faster than 55 mph". Realistically, those are three completely different policies in every way that matters.
We are all making a continual and ongoing grave error thinking that taking what were previously de jure policies that were de facto quite different in the real world, and thoughtlessly "upgrading" the de jure policies directly into de facto policies without realizing that that is in fact a huge change in policy. One that nobody voted for, one that no regulator even really thought about, one that we are just thoughtlessly putting into place because "well, the law is, 55 mph" without realizing that, no, in fact that never was the law before. That's what the law said, not what it was. In the past those could never really be the same thing. Now, more and more, they can.
This is a big change!
Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits than that same law becoming all but free to rigidly enforce.
And without very many people consciously realizing it, we have centuries of laws that were written with the subconscious realization that enforcement is difficult and expensive, and that the discretion of that enforcement is part of the power of the government. Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
Yet we still have almost no recognition that that is an issue. This could, perhaps surprisingly, be one of the first places we directly grapple with this in a legal case someday soon, that the legality of something may be at least partially influenced by the expense of the operation.
The big caveat, though, is that when enforcement becomes more accurate, the rules and penalties need to change. As you point out, a rigidly enforced law is very different from one that is less rigorously enforced. You are right that there is very little recognition of this. The law is difficult to change by design, but it may soon have to change faster than it has in the past, and it's not clear how or if that can happen. Historically, it seems like the only way rapid governmental change happens is by violent revolution, and I would rather not live in a time of violent revolution...
Increasing the precision of enforcement makes a lot more sense for direct-harm laws. You won't find anyone seriously arguing that full 100% enforcement of murder laws is a bad idea. It's the preemptive laws, which were often lazily enforced, especially when no real harm resulted from the action, where this all gets complicated. Maybe this is the distinction to focus on.
If a law being enforced 100% of the time causes problems then rethink the law (i.e. raise the speed limit, or design the road slower).
Isn't this the point of the whole conversation we are having here?
Laws on copyright were not created for current AI usage on open source project replication.
They need to change, because if they are perfectly enforced by the letter, they result in actions that are clearly against the intent of the law itself.
The underlying problem is that the world changes too fast for the laws to be fair immediately.
People don't want the letter of the law enforced, they want the spirit. Using the example from above, speed limits were made for safety. They were set at a time and surprise, cars got safer. So people feel safer driving faster. They're breaking the letter of the law but not the spirit.
I actually like to use law as an example of the limitations of natural languages. Because legalese is an attempt to formalize natural language, yet everyone seems to understand how hard it is to write good rules and how easy it is to find loopholes. But those are only possible if you enforce the letter of the law. Loopholes still exist but are much harder to circumvent with the spirit of the law. But it's also more ambiguous, so not without faults. You have to use some balance.
Of course technically option a is violating the law but no sane police officer will give you a fine in this case. Nor should they! A robot will, however. This is stupid.
Variable 1: The Cayenne is on a train track
Variable 2: The train behind the Cayenne is going 35mph.
You painted with too broad of a brush with that statement.
The point of terryf's example was to point out that for practical reasons, existing laws don't capture every relevant variable. I (but not everyone, it seems) think that visibility obviously influences safety. The point I want to make is that in practice the "precision gap" can't be perfectly rectified by making legality a function of more factors than just speed. There will always be some additional factor that influences the probability of a crash by some small amount -- and some of the largest factors, like individual driving ability, would be objected to on other grounds.
The point is that whether you drove dangerously is not a strict, mechanistic "if-then" assessment. Automatic enforcement of speeding is ridiculous when viewed in this context.
And the people saying "yes but there is more energy in a faster vehicle" have clearly not felt the difference between driving a car with drum brakes vs modern brakes.
Ideally, for a lot of things we want to punish people who knowingly do bad stuff, not people who do bad stuff because they thought it was good.
In fact DUI should be a mitigating circumstance, because when you're drunk your ability to make decisions is impaired -- but the opposite happens, DUI is an aggravating circumstance.
There are numerous cases, both in history and in fiction, that demonstrate as much.
It's very easy to come up with thought experiments to show that technically illegal scenarios are not necessarily more dangerous than some legal scenarios.
The law is often made to be easy to apply, not for precision. Hard to see how anyone could see otherwise.
That's not to say that the laws are necessarily problematic. You have to draw the line somewhere.
>only to allow targeted enforcement in service of harassment and oppression
That's absurd hyperbole. A competent policeman will recognise the difference between me driving 90 km/h on an 80 km/h road because I didn't notice the sign, and me driving 120 km/h out of complete disregard for human life. Should I get a fine for driving 90? Yeah, probably. Is it a first-time offence? Was anyone else on the road? Did the sign get knocked down? Is it day or night? Have I done this 15 times before? Is my wife in labour in the passenger seat? None of those are excuses, but could be grounds for a warning instead.
Why? Plenty of people drive in areas with speed cameras, isn't that exactly how they work?
> That's absurd hyperbole. A competent policeman will recognise the difference between me driving 90 km/h on an 80 km/h road because I didn't notice the sign.
I'm not sure it is hyperbole, or that we should assume competence/good faith. Multiple studies have shown that traffic laws, specifically, are enforced in an inconsistent manner that best correlates with the driver's race.
[0] https://www.aclu-il.org/press-releases/black-and-latino-moto...
[1] https://www.nyu.edu/about/news-publications/news/2020/may/bl...
If you find it impossible to follow a simple speed limit, then getting you off the road is the ideal outcome.
That is precisely why traffic would effectively grind to a halt. Because going even 0,0001 over the limit is so easy, you would have to turtle through traffic to get anywhere while making certain you never go above the limit. 50km zone is now 30km, and you didn't decelerate quickly enough and were going 32km at the threshold. 60km zone, but you accelerated too quickly and hit 61km for a moment. And sometimes, rarely, but sometimes you have to accelerate yourself out of a dangerous situation.
Honestly, if you are arguing for this idea, I strongly suspect you have no experience driving. I've driven for about 25 years. I've received two speeding tickets. One in Germany (I'm Danish), where I got confused by unfamiliar signage and got dinged for going 112 km/h in a 100 km/h zone. And once here I got a ticket for going 54 in a 50 - my mom was at the hospital, possibly about to die (she didn't). Both of those were speed traps.
> 50km zone is now 30km, and you didn't decelerate quickly enough and were going 32km at the threshold.
Is the argument that you and others would be unable to safely achieve the posted speed within the speed-limited area? For example, if you feel you can't drive more precisely than 40-50 when you are aiming for 45, in the above scenario you could start with your goal being 45, then in the 30 zone aim for 25, knowing that you'd be going no faster than 30 when you intend to drive 25.
> 60km zone, but you accelerated too quickly and hit 61km for a moment.
Should you aim for 55, if for example the most precise you can do is +/- 5? Or adjust correspondingly for how precise you are able to keep within a desired range.
And of course:
* In a world where enforcement was more consistent, we might expect speed limits to eventually be adjusted - i.e. are speed limits currently set lower than what is technically safe because we assume that some portion of people will currently break the law?
* With self-driving, or at least automated speed-keeping (but not steering) there will no longer be the issue of someone having the problem of being unable to stay within x km/h of the speed they're targeting.
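The margin argument above is just arithmetic. A toy sketch of it (the function name and numbers are invented for illustration):

```javascript
// Toy model: if your speed-keeping precision is +/- `precision` km/h,
// aim `precision` below the limit and your worst case never exceeds it.
function targetSpeed(limit, precision) {
  return limit - precision;
}

// In a 30 zone with +/-5 precision, aim for 25; worst case is exactly 30.
const aim = targetSpeed(30, 5); // 25
const worstCase = aim + 5;      // 30
console.log(aim, worstCase);
```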
>automated speed-keeping
My car displays what it thinks is the speed limit on the dashboard, and it gets it wrong all the time. If I relied on that in this hypothetical, I would be broke and homeless - possibly in prison, after it once said the limit was 110km on a narrow residential street.
It is perfectly possible to drive and obey all speed limits. It is even technically easy. The fact that we choose not to do so, because we are impatient, feel competitive against other drivers, or just think we can get away with it, does not make it impossible.
So, in the case of speeding:
- Speeding on its own would only automatically "warrant" the police to stop you / interview you / tell you off, and perhaps to follow you around for a while after they pull you over, to ensure you don't start speeding again (and to immediately pull you over again if you do.) I say "warrant" here because this doesn't actually give them any powers that private citizens don't have; rather, it protects them from you suing them for harassment for what they're doing. (Just like a "search warrant" doesn't give the police any additional powers per se, but rather protects them from civil and criminal damages associated with them breaking-and-entering into the specified location, destroying any property therein, etc.)
- But speeding while in the process of committing some other "actual" crime, or speeding that contributes to some other crime being committed, may be an aggravating factor that multiplies the penalty associated with the other act, or changes the nominal charge for the other act.
We might also then see a tweak for "threshold aggravations", such that e.g.
- Speeding while also doing some other dumb thing — having your brake-lights broken, say — may be considered to "cross a threshold" where they add up to an arrest+charge, even though none of the individual violations has a penalty when considered independently.
This would, AFAICT, translate well into a regime where there are little traffic-cop drones everywhere, maximizing speeding enforcement. If speeding is all they notice someone doing, they'd just be catch-and-release-ing people: pulling them over, squawking at them, and flying away. Literal slap-on-the-wrist tactics. Which is actually usefully deterrent on its own, if there are enough of these drones, and they just keep doing it, over and over again, to violators. (Do note that people can't just "not pull over" because they know there are no penalties involved; they would still be considered police, and "not complying with a police stop" would, as always, be a real crime with real penalties; if you run from the drone, it would summon actual cars to chase you!)
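The "threshold aggravation" idea above can be sketched as a scoring rule. All names, weights, and the threshold here are invented for illustration, not a real proposal:

```javascript
// Hypothetical: violations that carry no penalty individually can
// add up past a threshold and become an enforceable offence.
const WEIGHTS = { speeding: 1, brokenBrakeLights: 1, expiredTag: 1 };
const ARREST_THRESHOLD = 2;

function enforcementAction(violations) {
  const score = violations.reduce((sum, v) => sum + (WEIGHTS[v] || 0), 0);
  if (score === 0) return "none";
  // Below threshold: the drone pulls you over, squawks, and flies away.
  if (score < ARREST_THRESHOLD) return "warn-and-release";
  // At or above threshold: the violations co-aggregate into a real charge.
  return "arrest-and-charge";
}

console.log(enforcementAction(["speeding"]));                      // "warn-and-release"
console.log(enforcementAction(["speeding", "brokenBrakeLights"])); // "arrest-and-charge"
```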
---
Oddly, I think if you follow this legal paradigm to its natural conclusion, it could lead to a world where it could even be legal to e.g. drive your car home from the bar while intoxicated... as long as you're driving at 2mph, with your hazards on, and avoiding highways. But miss any of those factors, and it "co-aggravates" with a "driving recklessly for your reaction speed" charge, into an actual crime.
Or perhaps people will not be able to just "not pull over" because the police drones will be given the power to remotely command their car to stop. Heck, why even have the drones? Just require that the car monitor speeding infractions and report them for fines. Serious or repeat offenders can have their throttles locked out to the speed limit of the current road.
I personally happen to think this is a terrible idea, just one cyber attack or regime change away from crippling everyday Americans ability to get around and live their lives, but that probably won't stop it from happening.
Likewise, if you act in a way that makes someone feel that you're going to hit them, that's assault regardless of whether you actually ever touch them.
etc. Many such cases.
Because of that (or rather, to sort out the mess), I always felt that citizens should have a right to be informed of every law that they are expected to obey so that at least in principle, they'd be able to comply (to be effective, plain language explanations would need to be included).
Imagine an app that told you, whenever you cross state boundaries, what is different in the law now from your previous location.
Imprecise law enforcement enables political office holders to arbitrarily leverage the law to arrest people they label as a political enemy, e.g. Aaron Swartz.
If everyone that ever shared publications outside the legal subscriber base was precisely arrested, charged, and punished, I don't think the punishment and current legal terrain regarding the charges leveraged against him would have lasted.
But this is a feature, not a bug.
https://www.fxleaders.com/news/2025/10/29/code-is-law-sparks...
Additionally, law is not logical. Law is about justice and justice is not logical.
Example: the Supreme Court ruled in Ozawa v. United States in 1922 that a Japanese-descended person could not naturalize as a US citizen despite having white skin because he was not technically Caucasian. The next year, in 1923, they ruled in United States v. Bhagat Singh Thind that an Indian-descended man could not naturalize despite being Caucasian because his skin was not white.
Why did the court give two contradictory reasons for the rulings which would each be negated if the reasoning were swapped? I wouldn't say it was for justice. It was because America at that time did not want non-white immigrants, and what 'white' is, is a fiction that means something completely different than what it claims to mean, and the justices were upholding that structure.
If the police had been able to swoop in and arrest the "perpetrators" every time two men kissed, homosexuality would have never been legalized; If they had been able to arrest anyone who made alcohol, prohibition wouldn't have ended; if they had been able to arrest anyone with a cannabis seedling, we wouldn't have cannabis legalization.
If everyone knew they would be immediately arrested the second they sprouted a cannabis seed almost nobody would try.
The present system in many countries is that criminal and civil codes are too large to be comprehended by a single person, too large to be changed rationally, and the processes too subject to corruption to be changed all at once.
But it only happens because you got to know Mark originally. If he was already labeled a 'very dangerous person' and was arrested early on, there's a much better chance you wouldn't have gotten to know him, and the extent to which you would question the law would be very different.
This is difficult to talk about in the abstract, because most examples are obvious cases of bad law (people already recognize the issues) or cases of good law that how dare I question (laws people don't recognize as an issue). People spend a lot of time thinking about a law that has already soured but hasn't been removed from the books, but rarely catch the moment the law originally soured in their mind (and any real-world modern examples that might be changing currently would be, by their nature, controversial and quite easily derail the discussion).
Put shortly, those people are only obviously innocent (not deserving to be punished, but technically guilty of what the law stated) because the law was imperfect in enforcement and you got to know some of them before the law caught them.
Maybe my YouTube algorithm just shows me a lot of it, but there’s no shortage of cops out there violating people’s rights because they think when they ask for something we have to comply and see anything else as defiant.
I think we need perhaps fewer laws, so people can actually know them all. Also, I think we need clarity as to what they are, and it needs to be in simple English, a dummy's-guide-to-law type thing. But there's a lot of issues that simply stem from things like 1) when can a cop ask for your ID? / when do you have the right to say no? 2) similar question as to when do they have a right to enter/trespass onto your property? 3) as every encounter usually involves them asking you questions, even a simple traffic stop, when and how can you refuse to talk to them or even roll down your window or open your car door without them getting offended and refusing to take no as an answer?
I don’t think we generally have any understanding of what our rights actually are in these most likely and most common interactions with law enforcement. However, it’s all cases where I see law enforcement themselves have a poor understanding of what the law and rights are themselves so how are citizens to really know. If they tell you it’s their policy to ID anyone they want without any sort of probable cause then they say you’re obstructing their investigation for not complying or answering their questions or asserting you have to listen to anything they say because it’s a lawful order; it’s just common ways they get people to do what they want, it’s often completely within your right to not comply with a lot of these things though.
Teaching this as a required HS class would be an incredible benefit to society, because, on the flip side, many police encounters escalate to violence because the citizen has an incorrect understanding of where their rights end or don't exist.
The most obvious rule to follow is that you should always assert your rights (correct or incorrect) verbally only, as soon as you involve physical resistance, the situation will deteriorate rapidly (for you.) Any violations of your rights will be argued and dealt with in court, not on the street. Confirm requests/demands from officers are 'lawful orders', and then do them.
> Imperfect enforcement is too easy for law enforcement officers to turn into selective enforcement. By choosing who to go after, law enforcement gets the unearned power
This is by design, in an American context of building a free society. By default, you are allowed to do whatever you like to do in a free society. To constrain behavior through law, first a legislator must decide that it should be constrained, then they must convince their legislator peers that it should be constrained, then law enforcement must be convinced to attempt to constrain it de-facto, then a judge must be convinced that you in particular should have a court case proceed against you; a grand jury must be convinced to bring an indictment, a jury of 12 peers must be convinced to reach a verdict, and even afterwards there are courts of appeal.
The bar to constrain someone's freedom is quite high. By design and by wider culture.
I think there’s a difference between the marketing brochure and reality.
You cannot have precise enforcement with imprecise laws. It’s as simple as that.
The HN favorite in this respect is “fair use” under copyright. It isn’t well specified enough for “precise enforcement”. How do you suggest we approach that one?
Civil disobedience can also be a useful societal force, and with perfect law enforcement it becomes impossible.
Neoliberals and the far left, when forced to work in the real world, both tend to prefer putting power into rules, not giving people in authority the power to make decisions.
The upside is there's less misuse of power by authorities, at least in theory. The bad news is, you now need far more detailed rules to allow for the exceptions, common sense, and nuance that are no longer up to authorities.
The worse news is, that the people who benefit from complex rules are the upper classes, and the authorities who know how to manipulate complex rules.
"Don't be evil" requires a leader with the authority to enforce it.
A 500 employee manual will be selectively implemented, and will end up full of exploits, but hey, at least you can pretend you tried to remove human error from the process.
But if I've learned anything in 20 years of software eng, it's that migration plans matter. The perfect system is irrelevant if you can't figure out how to transition to it. AI is dangling a beautiful future in front of us, but the transition looks... Very challenging
As Edward Snowden once argued in an AMA on Reddit, a zero crime rate is undesirable for democratic society because it very likely implies that it's impossible to evade law enforcement. The latter, however, means that people won't be able to do much if the laws ever become tyrannic, e.g. due to a change in power. In other words, in a well-functioning democratic society it must always be possible (in principle) to commit a crime and get away.
Take some examples of laws that have changed over time. Say, interracial marriage. It was illegal in many places to marry someone of a different race. If this had been perfectly enforced, no one would have ever dated or see couples of different races, and people would have had a lot harder of a time exploring and realizing that the law was wrong.
The same thing could be said about marijuana legalization. If enforcement was perfect, no one would have ever tried marijuana, and there would have never been a movement to legalize by people who used it and decided it was not something that should be banned.
We need to be able to push boundaries so they can move when needed.
The problem with perfect enforcement is it requires the same kind of forethought as waterfall development. You rigidly design the specification (law) at the start, then persist with it without deviation from the original plan (at least for a long time). In your example, the lawmakers may still pass the law because they don't think of their kids as drug users, and are distracted by some outrage in some other area.
Giving the former discretion was a way to sneakily contain the worst excesses of the latter.
Alas, self-interest isn't really something voters seem to really take into account.
Eastern Europe went through a similar transition. Before the iron curtain fell, the eastern bloc operated on favors more than it operated on money. This definitely isn't the case any more.
We are seeing this in the world of digital media, where frivolous DMCA and YouTube takedown reports are used indiscriminately and with seemingly little consequence to the bad actor. Corporations are prematurely complying with bad actors as a risk reduction measure. The de jure avenues to push back on this are weak, slow, expensive, and/or infeasible.
So if you ask me what's the bigger threat right now, stricter or less strict enforcement, I'd argue that it's still generally the latter. Though in the specific case of copyright I'd like to see a bunch of the law junked, and temporal scope greatly reduced (sorry not sorry, Disney and various literary estates), because the de facto effects of it on the digital (and analog!) commons are so insidious.
Hey, I really like this framing. This is a topic that I've thought about from a different perspective.
We have all kinds of 18th and 19th century legal precedents about search, subpoenas, plain sight, surveillance in public spaces, etc... that really took for granted that police effort was limited and that enforcement would be imperfect.
But they break down when you read all the license plates, or you can subpoena anyone's email, or... whatever.
Making the laws rigid and having perfect enforcement has a cost-- but just the baseline cost to privacy and the squashing of innocent transgression is a cost.
(A counterpoint: a lot of selective law enforcement came down to whether you were unpopular or unprivileged in some way... cheaper and automated enforcement may take some of these effects away and make things more fair. Discretion in enforcement can lead to both more and less just outcomes).
The U.S. constitution has been written in an age before phones, automatic and semi-automatic rifles (at least in common use), nuclear weapons, high-bandwidth communications networks that operate at lightning speed, mass media, unbreakable encryption and CCTV cameras.
But since having 300 million people have a detailed, nuanced discussion about anything is impossible, everyone works at the edges.
Specific examples for the UK: inducting politicians into the Privy Council in order to qualify them for security briefings, Henry VIII powers, and ministers' authority deriving from the seal they're given by the sovereign. Which would almost make as much sense if it were a marine mammal as it does being a stamp.
The thing being, they work well enough. And if you want to replace them, you need to work out what to replace them with and how.
https://yalelawjournal.org/pdf/200_ay258cck.pdf
which, as I recall it, suggested that the copyright law effectively considered that it was good that there was a way around copyright (with reverse engineering and clean-room implementation), and also good that the way around copyright required some investment in its own right, rather than being free, easy, and automatic.
I think Samuelson and Scotchmer thought that, as you say, costs matter, and that the legal system was recognizing this, but in a kind of indirect way, not overtly.
I once had small talk with Lawrence Lessig after a conference of his, and when I told him that he was visibly shocked, as if I had told him I was raised to be a criminal.
Now I'm not sure what to think anymore.
There are tons of things I could do every day, knowing that any law I might break would not be enforceable simply because no one would know it was me: stealing, littering in the forest, dumping toxic materials into rivers.
All that works because most people don't do it, only a few.
Many governments around the world have entities to which you can write a letter, and those entities are frequently obligated to respond to that letter within a specific time frame. Those laws have been written with the understanding that most people don't know how to write letters, and those who do, will not write them unless absolutely necessary.
This allows the regulators to be slow and operate by shuffling around inefficient paper forms, instead of keeping things in an efficient ticket tracking system.
LLMs make it much, much easier to write letters, even if you don't speak the language and can only communicate at the level of a sixth-grader. Imagine what happens when the worst kind of "can I talk to your supervisor" Karen gets access to a sycophantic LLM, which tells her that she's "absolutely right, this is absolutely unacceptable behavior, I will help you write a letter to your regulator, who should help you out in this situation."
People are cranking out legal requests and claims with LLMs and sending them to companies. Almost all of them are pretty much meaningless, and should be ignored.
However, they legally can't just ignore them. They have to have someone review the claim, verify that it is bullshit, and then they can ignore it. That takes time, though.
So people can generate and send millions of legal claims instantly, but the lawyers have to read them one by one.
The asymmetry of effort is huge, and causes real issues.
At the end we will just have agents interacting with each other.
As in their post:
"The future of software is not open. It is not closed. It is liberated, freed from the constraints of licenses written for a world in which reproduction required effort, maintained by a generation of developers who believed that sharing code was its own reward and have been comprehensively proven right about the sharing and wrong about the reward."
This applies to open-source but also very well to proprietary software too ;) Reversing your competitors' software has never been easier!
High quality decompilers have existed for a long time, and there's a lot more value in making a cleanroom implementation of Photoshop or Office than of Redis or Linux. Why go after such a small market?
I suspect the answer is that they don't believe it's legal; they just think that they can get away with it because they're less likely to get sued.
(I really suspect that they don't believe that at all, and it's all just a really good satire - after all, they blatantly called the company "EvilCorp" in Latin.)
Because this is satire by FOSS people :)
There’s the old approach of hanging a wanted poster and asking people to “call us if you see this guy”. Then there’s the new approach matching faces in a comprehensive database and camera networks.
The latter is just the perfect, efficient implementation of the former. But it's… different somehow.
* narrow-looking roadway
* speed limit signs
* your car has self-driving
* what everybody else is doing
* speed limiter on your car
* curvy road
* bad weather
* male or female
* risk appetite
* driving experience
* experience of that route
* perceived risk of getting caught
If you fix "speed choice" the problem of speeding diminishes.
Well said.
I think another area where this problem has already emerged is with public records laws.
It's one thing if records of, let's say, real estate sales are made "publicly available" by requiring interested parties to physically visit a local government building, speak in the local language to other human beings in order to politely request them, and to spend a few hours and some money in order to actually get them.
It's quite another thing if "publicly available" means that anyone anywhere can scrape those records off the web en masse and use them to target online scams at elderly homeowners halfway around the world.
To do this, though, you're going to have to get rid of veto points! A bit hard in our disastrously constitutional system.
"Costs matter" is one way to say it, probably a lot easier to digest and more popular than the "Quantity has a quality all its own" quote I've been using, which is generally attributed to Stalin, which is a little bit of a problem.
But it's absolutely true! Flock ALPRs are equivalent to a police officer with binoculars and a post-it for a wanted vehicle's make, model, and license plate, except we can put hundreds of them on the major intersections throughout a city 24/7 for $20k instead of multiplying the police budget by 20x.
A warrant to gather gigabytes of data from an ISP or email provider is equivalent to a literal wiretap and tape recorder on a suspect's phone line, except the former costs pennies to implement and the latter requires a human to actually move wires and then listen for the duration.
Speed cameras are another excellent example.
Technology that changes the cost of enforcement changes the character of the law. I don't think that no one realizes this. I think many in office, many implementing the changes, and many supporting or voting for those groups are acutely aware and greedy for the increased authoritarian control but blind to the human rights harms they're causing.
In the US, the police do not generally need a warrant to tail you as you go around town, but it is phenomenally expensive and difficult to do so. Cellphone location records, despite largely providing the same information, do require warrants because it provides extremely cheap, scalable tracking of anyone. In other words, we allow the government to acquire certain information through difficult means in hopes that it forces them to be very selective about how they use it. When the costs changed, what was allowed also had to change.
And this same principle allows them to build massive friend/connection networks of everyone electronically. The government knows every single person you've communicated with and how often you communicate with them.
It was never designed for this originally.
(There are other problems, I know, but the regulations are crazy).
People complain about the regulations, but they also complain about houses that are structurally unsound, unventilated, flammable, badly isolated acoustically and thermally, and so on... I don't think going back is the way to go. It's true that licensing sometimes takes too long, though.
But then again, we have turned "security" into something absurd which only adds costs.
> Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
I understand your point that changing the enforcement changes how the law is "felt" even though on paper the law has not changed. And I think it makes sense to review and potentially revise the laws when enforcement methods change. But in the specific case of the 55 mph limit, would the consequences really be grave and terrible if the law were enforced by a robot, but the law itself remained the same?
Other than lost time (which compounds: slower driving also increases traffic congestion, so those 10 minutes might turn into 20-25), fuel use and pollution are greatly increased.
Interestingly, there are speed cameras there, and enforcement is not done on these slight violations: without this flexibility, I'd need to ask for traffic lights to be adjusted so they work well for driving under speed limits, and that is slow and an annoying process.
But without an option to "try", I wouldn't even know this is the case, and I wouldn't even be able to offer this as a suggestion.
Whether that accounts for consequences being "grave and terrible", probably not, but very suboptimal for sure.
The potential consequences of mass surveillance come to mind.
Anyway. I come from the UK where we've had camera based enforcement for aeons. This of course actually results in people speeding and braking down to the limit as they approach the camera (which is of course announced loudly by their sat nav). The driving quality is frankly worse because of this, not better, and it certainly doesn't reduce incidence of speeding.
Of course the inevitable car tracker (or average speed cameras) resolve this pretty well.
While it is true that many people do speed, that doesn't make their speeding "the real speed limit".
I've driven behind drivers driving 25km/h in a 40km/h area and not stopping for pedestrians at a crosswalk with right of way (if somebody jumped out elsewhere, they'd probably just run them over at 25 km/h), whereas I always do even if I am driving at 45km/h because my foot would be hanging over the brake near areas of low visibility (like intersections) or near crosswalks or with pedestrians near the road.
Your braking distance is largely a function of your reaction time (attention + pre-prep + reflexes), and your car performance (tyres, brakes) on top of the speed, and speed limits are designed for the less than median "driver". You obviously have most of those under your control, but the speed is the easiest to measure externally.
The obvious counter is that I could be even safer if I also drove at 25 km/h, but it would take me much longer, I'd hit many more red lights, so I might stop being so attentive because I am going "so slow" and taking so long (maintaining focus is hard the longer you need to do it).
However, measuring individual performance is prohibitively expensive if not impossible (as it also fluctuates for the same person, but also road and car conditions), so we use a proxy like speed limit that is easy to measure.
Which is a dangerous line of thinking, as everyone thinks they're above average.
Speed limits are also, to some extent, designed around physics. Higher-speed accidents have more kinetic energy.
If we wanted to strictly enforce speed limits, we would put governors on engines. However, doing that would cause a lot of harm to normal people. That's why we don't do it.
Stop and think about what it means to be human. We use judgement and decide when we must break the laws. And that is OK and indeed... expected.
I would argue that only the last one is a valid reason because it's the only one where it's clear that not speeding leads to direct worse consequences.
Speed limits don't exist just to annoy people. Speeding increases the risk of accident and especially the consequences of an accident.
I don't trust people to drive well in a stressful situation, so why would it be a good idea to let them increase the risk by speeding.
The worst part is that it's not even all that likely that the time saved by speeding ends up mattering.
In the U.S., the average distance from a hospital is 10 miles (in a rural area). Assuming 55 mph speed limits, that means most people are 11 minutes from a hospital. Realistically, “speeding” in this scenario probably means something like 80 mph, so you cut your travel time to 7.5 minutes.
In other words, you just significantly increased your chances of killing your about to be born kid, your wife, yourself, and innocent bystanders just to potentially arrive at a hospital 210 seconds sooner.
Edit: the rushing someone to an ER scenario is possibly more ridiculous, since you can’t teleport yourself, and if the 3.5 minutes in the above scenario would make a difference, then driving someone to the ER is a significantly worse option than starting first aid while waiting for EMTs to arrive.
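The arithmetic above is easy to check. A minimal sketch, using the 10-mile distance and the 55 vs 80 mph speeds from the comment (the exact saving comes out to about 205 seconds; the comment's 210 comes from rounding 10.9 minutes up to 11):

```python
def travel_minutes(distance_miles: float, speed_mph: float) -> float:
    """Travel time in minutes for a given distance and constant speed."""
    return distance_miles / speed_mph * 60

legal = travel_minutes(10, 55)     # at the 55 mph limit: ~10.9 minutes
speeding = travel_minutes(10, 80)  # at 80 mph: 7.5 minutes
saved_seconds = (legal - speeding) * 60

print(f"{legal:.1f} min vs {speeding:.1f} min, saving ~{saved_seconds:.0f} s")
```

So the speeding scenario saves roughly three and a half minutes over the whole trip, which is the number the risk trade-off has to justify.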
If my wife is having a stroke, I can definitely pick her up, toss her in the car, and get to the ER faster than an ambulance can reach my house.
As I'm sure you know, every second counts when it comes to recovery from a stroke.
What kind of first aid do you give to someone having a stroke anyway?
The 1996 movie Trainspotting still sends shivers up my spine with its scene of someone being put in a car and dropped at the ER rather than anyone calling for help. Too many people die needlessly, even today, when well-meaning people load shooting victims, stroke victims, heart attack victims, and so on into their car and drive to the ER without asking their local emergency services for advice.
PS. You can't 100% of the time get to ER faster than the ambulance. There are more ambulances than emergency rooms by number. If an ambulance is at the county hospital they'll be faster than you.
Your argument only makes sense if the only possible bad thing is a car accident -- to make my point clearer, would you take a 1% chance of losing $100 to avoid a 50% chance of losing $10?
Depends how much money you have, but it can be a perfectly rational decision.
The real reason is that speed limits are generally lower than the safe speed of traffic, and enforcement begins at about 10mph over the stated limits.
People know they can get away with it.
If limits were raised 15% and strictly enforced, it would probably be better for society. Getting a ticket for a valid emergency would be easy to have reversed.
...and there's also a large difference between any of those three shifts, and the secular shift (i.e. through no change in regulatory implementation whatsoever!) that occurs when the majority of traffic begins to consist of autonomous vehicles that completely ignore the de facto flow-of-traffic speeds, because they've been programmed to rigorously follow all the laws, including posted de jure speed limits (because the car companies want to CYA.)
Which is to say: even if regulators do literally nothing, they might eventually have to change the letter of the law to better match the de facto spirit of the law, lest we are overcome by a world of robotic "work to rule" inefficiencies.
---
Also, a complete tangent: there's also an even-bigger difference between any of those shifts, and the shift that occurs when traffic calming measures are imposed on the road (narrowing, adding medians, adding curves, etc.) Speed limits are an extremely weird category of regulation, as they try to "prompt" humans to control their behavior in a way that runs directly counter to the way the road has been designed (by the very state imposing the regulations!) to "read" as being high- or low-speed. Ideally, "speed limits" wouldn't be a regulatory cudgel at all; they'd just be an internal analytical calculation on the way to figuring out how to design the road, so that it feels unsafe to go beyond the "speed limit" speed.
I think that the failure to distinguish them is due to a really childish outlook on law and government that is encouraged by people who are simple-minded (because it is easy and moralistic) and by people who are in control of law and government (because it extends their control to social enforcement.)
I don't think any discussion about government, law, or democracy is worth anything without an analysis of government that actually looks at it - through seeing where decisions are made, how those decisions are disseminated, what obligations the people who receive those decisions have to follow them and what latitude they have to change them, and ultimately how they are carried out: the endpoint of government is the application of threats, physical restraint, pain, or death in order to prevent people from doing something they wish to do or force them to do something they do not wish to do, and the means to discover where those methods should be applied. The police officer, the federal agent, the private individual given indemnity from police officers and federal agencies under particular circumstances, the networked cameras pointed into the streets are government. Government has a physical, material existence, a reach.
Democracy is simpler to explain under that premise. It's the degree to which the people that this system controls control the decisions that this system carries out. The degree to which the people who control the system are indemnified from its effects is the degree of authoritarianism. Rule by the ungoverned.
It's also why the biggest sign of political childishness for me are these sort of simple ideas of "international law." International law is a bunch of understandings between nations that any one of them can back out of or simply ignore at any time for any reason, if they are willing to accept the calculated risk of consequences from the nations on the other side of the agreement. It's like national law in quality, but absolutely unlike it in quantity. Even Costa Rica has a far better chance of ignoring, without any long-term cost, the mighty US trying to enforce some treaty regulation than you as an individual have to ignore the police department.
Laws were constructed under this reality. If we hypothetically programmed those laws into unstoppable Terminator-like robots and told them to enforce them without question it would just be a completely different circumstance. If those unstoppable robots had already existed with absolute enforcement, we would have constructed the laws with more precision and absolute limitations. We wouldn't have been able to avoid it, because after a law was set the consequences would have almost instantly become apparent.
With no fuzziness, there's no selective enforcement, but also no discretion (what people call selective enforcement they agree with.) If enforcement has blanket access and reach, there's also no need to make an example or deter. Laws were explicitly formulated around these purposes, especially the penalties set. If every crime was caught current penalties would be draconian, because they implicitly assume that everyone who got caught doing one thing got away with three other things, and for each person who was caught doing a thing three others got away with doing that thing. It punishes for crimes undetected, and attempts to create fear in people still uncaught.
The legal system is mostly a fantasy. It doesn't exist for most people. Currently it only serves large corporate and political interests since only they can afford access.
Meaning that democratizing our existing political structures is a reality today and can be done effectively (think blockchain, think zero knowledge proofs).
On the other hand, the political struggle to actually enact this new democratic system will be THE defining struggle of our times.
Former lawyer here, who worked at a top end law firm. Throwaway account.
In my experience, the legal system and lawyers in general are deeply aware of this. It's the average Joe who fails to realize this, particularly a certain kind of Joe (older men with a strong sense that all rules are sacred, except those that affect them, those are all oppressive and corrupt and may possibly justify overthrowing the government).
Laws are social norms of varying strength. There's the law (stern face) and then there's the law (vague raising of hands). If you owe a bank $2m and you pay back $1m, then you're going to run into the law (stern face). If you have an obligation to use your best efforts to do something, and you don't do it, then we can all have a very long conversation about what exactly 'best efforts' means in this exact scenario, and we're more in the territory of law (vague raising of hands).
Administrative obligations are the vaguest of all, and that's where lawyers are genuinely most helpful. A good lawyer will know that Department so and so is shifting into harsher enforcement of this type of violation but is less concerned about that type of violation. They know that Justice so and so loves throwing the book in this kind of case, but rolls their eyes at that other kind of case. This is extremely helpful to you as a client.
> And without very many people consciously realizing it, we have centuries of laws that were written with the subconscious realization that enforcement is difficult and expensive, and that the discretion of that enforcement is part of the power of the government. Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
Enforcement of laws is a political decision, and there is no way to ever escape this fact. If society gets concerned about something, politicians are going to mobilize old laws to get at it. If society relaxes about something, enforcement wanes. Drugs are an obvious example. A lot of the time the things society are concerned about are deeply stupid (is D&D satanic?), but in a democracy politicians are very sensitive to public sentiment. If you don't like the way the public debate is going, get involved.
> Yet we still have almost no recognition that that is an issue. This could, perhaps surprisingly, be one of the first places we directly grapple with this in a legal case someday soon, that the legality of something may be at least partially influenced by the expense of the operation.
The courts are only ever concerned about de jure legality. (It's the literal meaning of de jure!) There are other outlets for de facto legality in the legal system - e.g. the police can choose not to investigate, prosecutors can choose not to lay charges, or opt for lower-level charges, or seek a lenient sentence.
For example, I've been cheated out of at least $100k net worth by the founder of a crypto project because he decided to abandon tech which was working and switched to a competitor's platform for no reason. Now I was already worried about repercussions outside of the legal system... This is the crypto sector, after all... But also, legally, there's no way I can afford to sue a company which controls almost $100 million in liquid assets and probably has government regulators on its payroll. Even if it's a simple case and I'm right, it would be difficult to win, and the risk of losing is that they could seek reimbursement of lawyers' fees, which they would maximize just to make things difficult for me.
An interesting read, however I'd like to know how to stop websites from screwing around with my scrollbars. In this case it's hidden entirely. Why is this even a thing websites are allowed to do - to change and remove browser UI elements? It makes no sense even, because I have no idea where I am on the page, or how long it is, without scrolling to the bottom to check. God I miss 2005.