Only if you don't have a working content blocker on that device, which you should. Using web devices without content blockers is just as bad for your health as sunbathing at the local garbage dump. Just get a content blocker and protect yourself from the filth. If you happen to use a device that does not allow effective content blocking - something from the fruit factory or something totally googly - you might want to consider getting something that is more aligned with your needs and less with those of the aforementioned companies.
I can't imagine we'll really be able to trust AI without its use in open source software, where we can see how reliable it is.
AI bug reports went from junk to legit overnight, says Linux kernel czar (theregister.com)
58 points by amarant 4 days ago
Nothing has changed this month; it's been good for a while, and a small minority could already see that it was paradigm-shifting.
This is a notice to all of you who are just now changing your minds and crossing over: your cognition of reality is flawed, and you are not as good as you think you are at observing technological progress. The only thing that has changed is that you just now noticed how capable LLMs are. There are many who were telling you before 2026 that it was here, and you all tried to paint us as charlatans.
Reddit is still very far behind. Browsing /r/programming and other software dev related subs like /r/cscareerquestions is like being at a dinosaur museum.
Like you implied, I think a personal threshold crossing gives this false impression that "everything changed" this month or last month or last year. Like you said, the main thing that changed in one particular month was the observer.
But, perhaps the AI epiphany is not waking up to recognize how good AI already was. Instead, it could be when an individual's standards degrade such that the same AI usage is seen as a benefit instead of a liability. Both interpretations yield the same basic pattern of adoption and commentary that we see right now.
The difference will be in the long-term outcome. Some years from now, will we see that this mass adoption yielded a renaissance of productivity and quality, or a cataclysm of slop-induced liability and loss?
It's been fun seeing the cognitive dissonance in anti-LLM tech circles as technical giants that they idolized, from Torvalds through Carmack all the way up to Knuth, say something positive about AI, let alone sing praises of it!
These same artisans complain about how bad AI generated code is. The AI is trained on your bad artisan code. It's like they are looking in the mirror for the first time and being disgusted by what they see.
A lot of people seem stuck with their older (correct at the time) view that these tools still always produce slop.
FWIW I am more of an AI doomer (in the sense that I think the economic results from them will be disastrous for knowledge workers given our political realities) than booster, but in terms of utility to get work done they did pass a clear inflection point quite recently.
So, still pretty likely to produce slop in a large majority of cases.
If the most useful place for them is where you've already specced things out to that degree of precision then they aren't that useful?
Speccing things to that precision is the time-consuming and difficult work anyway, after all.
I wish this wasn't true because I think it will economically upend the industry in which I have a career, but sadly the universe doesn't care what I wish.
IMO this vastly overestimates how good the "untrained masses" are at thinking in a logical, mathematical way. Apparently something as basic as Calculus II has a fail rate of ~50% in most universities.
There's nothing "basic" about Calculus II. Calculus is uniquely cursed in mathematical education because everything that comes before it is more or less rooted in intuition about the real world, while calculus is built on axioms that are far more abstract and not substantiated well (not until later in your mathematical education). I expect many intelligent, resourceful people to fail it and I think it says more about the abstractions we're teaching than anything else.
But also, prompting LLMs to give good results is nowhere near as complex as calculus.
Most people on here don’t belong to that group of people. So ofc they can find a way to create value out of a thing that requires some tinkering and playing with.
The question is can the techniques evolve to become technologies to produce stuff with minimal effort - whilst only knowing the bare minimum. I’m not convinced personally - it’s a pipe dream and overlooks the innate skill necessary to produce stuff.
If they truly did, there wouldn't be a huge amount of humans whose role is basically "Take what users/executives say they want, and figure out what they REALLY want, then write that down for others".
Maybe I've worked for too many startups, and only consulted for larger companies, but everywhere in businesses I see so many problems that are basically "Others misunderstood what that person meant" and/or "Someone thought they wanted X, they actually wanted Y".
I mean, yes. I'm worried about my career too, but for different reasons. I don't think these things are actually good enough to replace me, but I do think it doesn't matter to the people signing the cheques.
I don't believe LLMs are producing anything better than slop. I think people's standards have been sinking for a long time and will continue to sink until they reach the level LLMs produce.
The problem isn't just LLMs and the fact that they produce slop; it's that people are overall pretty fine with slop.
I'm not, though, so there's no place for me in most software businesses anymore.
But I look at software from the perspective of programs being objects.
Since it's intangible, people can't see within it. So something can look pretty even if, underlying it all, it's slop.
However, there is an implicit trade-off: mounting slop makes you more vulnerable from a security standpoint, introduces bugs, etc., which can destroy trust in the software and the experience of using it. This can essentially put the life of a business at risk.
People aren't thinking much about that risk, because it hasn't substantially happened to anyone large yet. What I wonder is: will slop just continue to mount unchecked? Or are people expecting improvements that will let them go back and clean up the slop with more powerful tooling?
If the latter does not come about, I think we will see more firms come under stress.
Overall though, I think too much focus is on the acceleration of output. I never think that's the most important thing. It's secondary to having a crystal clear vision. The problem is that having a clear vision requires doing a lot of grunt work; that work trains and conditions your mind to think in a particular way.
It will be interesting to see how this all plays out.
Opus 4.6 has been a step change. It's simply never wrong anymore. You may need to continue giving it further clarification as to what you want, but it never makes mistakes with what it intends to do now.
I do agree that the Q1 2026 models in general have passed a threshold, but goodness almighty Opus 4.6 still screws up a lot.
Just because you can't tell when Claude is right doesn't mean that you are.
This shit is AGI, with decades + billions of dollars of research and development behind it.
So don't get all high and mighty now, acting like you know better than Claude.
Odd sentiment. It's pretty clear the tools crossed a threshold last year (in April, as I recall) where they became good enough to actually write entire applications, and they just accelerated from there. Today they're amazing, and no one I know is writing artisanal code anymore (at least, not at work).
This is the buried lede. It's a propaganda piece.