- in a non-hobby setting, code is a liability
- I want to solve problems, not write code
- I love writing code as a hobby.
- being paid to do my hobby professionally is amazing.
- I love the idea of the Star Trek Ship’s Computer. To just ask for things and for it to do the work. It sometimes feels like we’re very close.
I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.
Models can RTFM (and code) and do novel things, demonstrably so.
Zero preexisting examples of your particular frameworks.
Huge number of examples of similar existing frameworks and code patterns in their training set though.
Still not a novel thing in any meaningful way, any more than when someone who has coded in dozens of established web frameworks writes against an unfamiliar framework homegrown at their new employer.
"LLMs can only emit things they've been trained on" is wholly obsolete.
Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?
But then we got a neural net that was big enough, and it turns out that feedforward receptive fields ARE enough. We don’t know whether this is how our brains do it, but it’s a humbling moment to realize that you just overthought how complex the problem was.
So I've become skeptical when people start claiming that some class of problem is fundamentally too hard for machines.
People have been doing things for millennia before they understood them. Did primitive people understand the mechanism by which certain medicinal plants worked in the body, or did they just see that when they e.g. boiled and consumed them, they had a certain effect?
But sure, instantiating these capabilities in hardware and software is beyond our current abilities. It seems likely that it is possible though, even if we don’t know how to do it yet.
For example, how much rain is going to be in the rain gauge after a storm is uncomputable. You can hook up a sensor to perform some action when the rain gets so high. This rain algorithm is outside of anything Church-Turing has to say.
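To make that concrete, the computable half of that setup is just a threshold loop. A minimal sketch in Python, where read_rain_level_mm() is a made-up stand-in for real gauge hardware (the physical storm that fills the gauge is the part the argument calls uncomputable):

    import random
    import time

    THRESHOLD_MM = 25.0

    def read_rain_level_mm():
        # Stand-in for polling a real gauge; here we just simulate a reading.
        # The storm that produces this number is the uncomputable part.
        return random.uniform(0.0, 30.0)

    while True:
        level = read_rain_level_mm()
        if level > THRESHOLD_MM:
            print(f"rain level {level:.1f} mm exceeded threshold, triggering action")
            break
        time.sleep(1)  # poll again shortly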
There are many other natural processes that are outside the realm of what is computable. People are bathed in them.
The Church-Turing thesis speaks only to what people can do when constrained to a bunch of symbols and squares.
We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics / imaging techniques).
Even skin cells exchange information in a neuron-like manner, including using light, albeit thousands of times slower.
This recasts the complexity of the human brain as "86 billion quantum computers, each operating thousands of small neural networks, exchanging information over laser-based optical channels."
We don’t even know if they want to. But in general, it’s impossible to conclusively prove that something won’t ever happen in the future.
> unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
For me this isn’t an assumption, it’s a corollary that follows from the Church-Turing thesis.
The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."
The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."
Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.
Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.
> That's irrelevant.
My comment was relevant, if a bit tangential.
Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today because we’re going to kneecap ourselves if we incorrectly believe there is something special about humans. It will stop us from closing that gap.
For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.
The skill itself is pretty much just the documentation and some code examples.
How long can you keep adding novel things into the start of every session's context and get good performance, before it loses track of which parts of that context are relevant to what tasks?
IMO for working on large codebases sticking to "what the out of the box training does" is going to scale better for larger amounts of business logic than creating ever-more not-in-model-training context that has to be bootstrapped on every task. Every "here's an example to think about" is taking away from space that could be used by "here is the specific code I want modified."
The sort of framework you mention in a different reply - "No, it was created by our team of engineers over the last three years based on years of previous PhD research." - is likely a bit special, in that you gain a lot of expressibility for the up-front cost. But this is very much not the common situation for in-house framework development, and it could get even more rare over time with current trends.
Today, yes. I assume in the future it will be integrated differently, maybe we'll have JIT fine-tuning. This is where the innovation for the foundation model providers will come in -- figuring out how to quickly add new knowledge to the model.
Or maybe we'll have lots of small fine tuned models. But the point is, we have ways today to "teach" models about new things. Those ways will get better. Just like we have ways to teach humans new things, and we get better at that too.
A human seeing a new programming language still has to apply previous knowledge of other programming languages to the problem before they can really understand it. We're making LLMs do the same thing.
LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.
Funny, I'd say the same thing about traditional programming.
Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.
That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.
All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.
Not to mention you're probably also using source control, committing code and switching between branches. You have unit tests and CI.
Let's not pretend the C developer experience is what it was 30 years ago, let alone 50.
Reply due to rate limiting:
K&R didn't know about CI/CD, but everything else you mention has either existed for over 30 years or is too trivial to argue about.
Conversely, if you took Claude Code or similar tools back to 1996, they would grab a crucifix and scream for an exorcist.
Now that's extrapolation of the sort that, as you point out elsewhere, no LLM can perform.
At least, not one without serious bugs.
A vice president at Symbolics, the Lisp machine company at their peak during the first AI hype cycle, once stated that it was the company's goal to put very large enterprise systems within the reach of small teams to develop, and anything smaller within the reach of a single person.
And had we learned the lessons of Lisp, we could have done it. But we live in the worst timeline where we offset the work saved with ever worse processes and abstractions. Hell, to your point, we've added static edit-compile-run cycles to dynamic, somewhat Lisp-like languages (JavaScript)! And today we cry out "Save us, O machines! Save us from the slop we produced that threatens to make software development a near-impossible, frustrating, expensive process!" And the machines answer our cry by generating more slop.
This is true for the usual approach, but the whole reason I’m writing the CRDT is to avoid these tombstones! Anyway, long story short, I did eventually convince Claude I was right, but to do it I basically had to write a structural proof to show clear ordering and forward progression in all cases. And even then, compaction tends to reset it. There are a lot of subtleties these systems don’t quite handle yet.
Maybe the current allocation of technical talent is a market failure and disruption to coding could be a forcing function for reallocation.
By generating prototypes that are based on different design models each end product can be assessed for specific criteria like code readability, reliability, or fault tolerance and then quickly be revised repeatedly to serve these ends better. No longer would the victory dance of vibe coding be simply "It ran!" or "Look how quickly I built it!".
I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least), it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers, it would more likely be a case of my doing something stupid that someone who knows how to write a compiler wouldn't.
This makes LLMs incredibly powerful research tools, which can create the illusion of emergent capabilities.
It wasn't Knuth who used Claude, but his friend. Nevertheless, Knuth was quite impressed.
The US patent commissioner in 1899 supposedly wanted to shut down the patent office because "everything that can be invented has been invented" (the story is likely apocryphal). And yet, human ingenuity keeps proving otherwise.
The hype cycle's distasteful of course, but I've accepted that this is how humans figure out what things are. Like a child we have to abuse it before we learn how to properly use it.
I think many of us sense and have sensed that the promises made of agentic programming smell too good to be true, owing to our own experiences as programmers and engineers. But experts in a domain are always the minority, so we have to understand that everyone else is going to have to reach the same intuition the hard way.
AI is already letting me care less about the languages I use and focus more on the algorithms. AI helps me write tests. AI suggests improvements and catches bugs before compiling. AI writes helper scripts/tools for me. All of these things are good enough for me to accept paying a few hundred dollars every month, although I don't have to, because my employer already does that for me.
6 months ago I was arguing that AI wasn't very good and code was more precise than english for specifying solutions. The first part is not true anymore for many things I care about. The second is still true but for many things I care about it doesn't matter.
I'm getting tired of articles that try to tell me what to think about AI. "AI is great and will replace all programmers!"... "AI sucks and will ruin your brain and codebase!"... both of these are tired and meaningless arguments.
I think there's some irony in Russell's quote being used this way. My intent will often be less clear to a reader once encoded in a language bound inextricably to a machine's execution context.
Good abstraction meaningfully whittles away at this mismatch, and DSLs in powerful languages (like ML-family and lisp-family languages) have often mirrored natural(ish) language. Observe that programming languages themselves have natural language specifications that are meaningfully more dense than their implementations, and often govern multiple implementations.
Code isn't just code. Some code encapsulates intent in a meaningfully information and meaning-dense way: that code is indeed poetry, and perhaps the best representation of intent available. Some code, like nearly every line of the code that backs your server vs client time example, is an implementation detail. The Electric Clojure version is a far better encapsulation of intent (https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...). A natural language version, executed in the context of a program with an existing client server architecture, is likely best: "show a live updated version of the servers' unix epoch timestamp and the client's, and below that show the skew between them."
Given that we started with Russell, we could end with Wittgenstein's "Is it even always an advantage to replace an indistinct picture by a sharp one? Isn't the indistinct one often exactly what we need?"
The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.
— Edsger Dijkstra
Yes, we'd love to visit!
But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.
But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.
(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)
Unless the defect rate for humans is greater than LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.
The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.
Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships with things go wrong. In really bad cases, you've got the CEO getting on a plane to go take the important customer out for drinks.
I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that meant customers would encounter bugs. A couple of those startups are valuable at multiple billions of dollars now. Bugs just aren't the end of the world (again, most cases - I worked on B2B SaaS, not medical devices or what have you).
This is broadly true, but not comparable when you get into any detail. The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.
IME, all of the QA measures you mention are more difficult and less reliable than understanding things properly and writing correct code from the beginning. For critical production systems, mediocre code has significant negative value to me compared to a fresh start.
There are plenty of net-positive uses for AI. Throwaway prototyping, certain boilerplate migration tasks, or anything that you can easily add automated deterministic checks for that fully covers all of the behavior you care about. Most production systems are complicated enough that those QA techniques are insufficient to determine the code has the properties you need.
My experience is literally 180 degrees from this statement. And you don’t normally get to choose the humans you work with; for some you may be involved in the interview process, but that doesn’t tell you much. I have seen so much human-written code in my career that, in the right hands, I’ll take (especially latest-frontier) LLM-written code over average human code any day of the week and twice on Sunday.
Citation needed.
To be lenient, I will separate out bugs caused by insufficient knowledge as not being failures in reasoning. Do you have forms of bugs that you think are more common, and are not arguably failures in reasoning, that should be considered?
On edit: insufficient knowledge that I might not expect a competent developer to have is not a failure in reasoning, but a bug caused by insufficient knowledge that I would expect a competent developer in the problem space to have is a failure in reasoning, in my opinion.
I believe the same pattern is inevitable for these higher level abstractions and interfaces to generate computer instructions. The language use must ultimately conform to a rigid syntax, and produce a deterministic result, a.k.a. "code".
That's what happens when you hand everything to a machine without understanding the problem yourself.
AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.
True understanding is indistinguishable from doing.
I know, I know, "skill issue"/"you're holding it wrong". And maybe that's vacuously true, in that it's so hard to guess what will produce correct output, because LLMs are not an abstraction layer in the way that we're used to. Prior abstraction layers related input to output via a transparent homomorphism: the output produced for an input was knowable and relatively straightforward (even with exotic optimization flags). LLMs are not like that. Your input disappears into a maze of twisty little matmuls, all alike (a different maze per run, for the same input!) and you can't relate what comes out the other end in terms of the input except in terms of "vibes". So to get a particular output, you just have to guess how to prompt it, and it is not very helpful if you guess wrong except in providing a wrong (often very subtly so) response!
Back in the day, I had a very primitive, rinky-dink computer—a VIC-20. The VIC-20 came with one of the best "intro to programming" guides a kid could ask for. Regarding error messages it said something like this: "If your VIC-20 tells you something like ?SYNTAX ERROR, don't worry. You haven't broken it. Your VIC-20 is trying to help you correct your mistakes." 8-bit 6502 at 1 MHz. 5 KiB of RAM. And still more helpful than a frontier model when it comes to getting your shit right.
I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.
Edit: I've googled it and I can't find anything relevant. I've been working in software for 20+ years and read a myriad of things, and it's the first time I've heard about it...
I also believe coding isn't going to disappear, but AI skeptics have been mostly doing a combination of moving the goalposts and straight up denial over the last few years.
I haven't seen a lot of goalpost moving on either side; the closest I've seen is from the most hyperbolic of AI supporters, who are keeping the timeline to supposed AGI or AI superintelligence or whatnot a fairly consistent X months from now (which isn't really goalpost-moving).
Over the course of about 2 years, the general consensus has shifted from "it's a fun curiosity" to "it's just better stackoverflow" to "some people say it's good" to "well it can do some of my job, but not most of it". I think for a lot of people, it has already crossed into "it can do most of my job, but not all of it" territory.
So unless we have finally reached the mythical plateau, if you just go by the trend, in about a year most people will be in the "it can do most of my job but not all" territory, and a year or two after that most people will be facing a tool that can do anything they can do. And perhaps if you factor in optimisation strategies like the Karpathy loop, a tool that can do everything but better.
Upper management might be proven right.
I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.
Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"
I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.
Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit dumb project again.
Some specific thoughts for you:
1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.
2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.
3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?
4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
Where the tech argument doesn't land with upper management (business practices, the need to "not be left behind," the urge to leap at anything that promises reducing headcount without reducing revenue), money talks. As long as it's possible to slop something together, charge for it, and profit, slop will win.
I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?".
I remember bailing out asap to another company where we wrote Java Swing and was so happy we could write UIs directly and a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!
Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...
What you are seeing here is that many have attempted to take shortcuts to building production-grade maintainable software with AI, and are now realizing that they built their software on terrible architecture, only to throw it away and rewrite it, with no one now truly understanding the code or able to explain it.
We have a term for that already and it is called "comprehension debt". [0]
With the rise of over-reliance on agents, you will see "engineers" unable to explain technical decisions who will admit to having zero knowledge of what the agent has done.
This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and now requiring engineers to manually review AI changes [2] (which slows them down even with AI).
[0] https://addyosmani.com/blog/comprehension-debt/
[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
[2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...
I've had to work on multiple legacy systems like this where the original devs are long gone, there's no documentation, and everyone at the company admits it's complete mess. They send you off with a sympathetic, "Good luck, just do the best you can!"
I call it "throwing dye in the water." It's the opposite of fun programming.
On the other hand, it often takes creativity and general cleverness to get the app to do what you want with minimally-invasive code changes. So it should be the hardest for AI.
While publicly they might say this is AI driven, I think that’s mostly BS.
Anyway, that doesn’t take away from your point, just adds additional context to the outages.
This isn't any different than the "person who wrote it already doesn't work here any more".
> now requiring engineers to manually review AI changes [2] (which slows them down even with AI).
What does this say about the "code review" process if people can't understand the things they didn't write?
Maybe we have had the wrong hiring criteria. The "leetcode", brain-teaser, FAANG-style write-some-code interview might not have been the best filter for the sorts of people you need working in your org today.
Reading code, tooling up (debuggers, profilers), and durable testing (simulation, not unit) are the skill changes that NO ONE is talking about, and that we have not been honing or hiring for.
No one is talking about requirements, problem scoping, how you rationalize and think about building things.
No one is talking about how your choice of dev environment is going to impact all of the above processes.
I see a lot of hype, and a lot of hate, but not a lot of the pragmatic middle.
It is very different. With empathy you can often deduce why people wrote code the way they did. With LLMs there often is no reason.
Yeah but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.
https://www.invene.com/blog/limiting-developer-turnover has some data, that aligns with my own experience putting the average at 2 years.
I have been doing this a long time: my longest-running piece of code lasted 20 years. My current is 10. Most of my code is long dead and replaced because businesses evolve, close, move on. A lot of my code was NEVER meant to be permanent. It solved a problem in a moment, it accomplished a task, fit for purpose and disposable (and riddled with cursing, manual loops, and goofy exceptions just to get the job done).
Meanwhile I have seen a LOT of god awful code written by humans. Business running on things that are SO BAD that I still have shell shock that they ever worked.
AI is just a tool. It's going from hammers to nail guns. The people involved are still the ones who are ultimately accountable.
Valuable? Yep. World changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.
I interpret non-deterministic here as “an LLM will not produce the same output on the same input.” This is a) not true and b) not actually a problem.
a) LLMs are functions and appearances otherwise are due to how we use them
b) lots of traditional technologies which have none of the problems of LLMs are non-deterministic. E.g., symbolic non-deterministic algorithms.
Non-determinism isn’t the problem with LLMs. The problem is that there is no formal relationship between the input and output.
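A toy sketch of point (a) in Python (not a real LLM, just a fixed-weight network plus two decoding loops): the model itself is a pure function, and the only randomness is in how you sample from its output, which disappears once you pin the seed.

    import numpy as np

    # Fixed "weights": the model is a pure function of its input tokens.
    W = np.random.default_rng(seed=0).normal(size=(16, 16))

    def logits(tokens):
        # Deterministic: same token sequence in, same logit vector out.
        h = np.zeros(16)
        for t in tokens:
            onehot = np.zeros(16)
            onehot[t % 16] = 1.0
            h = np.tanh(W @ (h + onehot))
        return W @ h

    def generate_greedy(tokens, n):
        # Greedy (argmax) decoding is fully deterministic.
        out = list(tokens)
        for _ in range(n):
            out.append(int(np.argmax(logits(out))))
        return out

    def generate_sampled(tokens, n, seed):
        # Temperature sampling only looks non-deterministic until you fix the seed.
        rng = np.random.default_rng(seed)
        out = list(tokens)
        for _ in range(n):
            l = logits(out)
            p = np.exp(l - l.max())
            p /= p.sum()
            out.append(int(rng.choice(len(p), p=p)))
        return out

    assert generate_greedy([1, 2, 3], 5) == generate_greedy([1, 2, 3], 5)
    assert generate_sampled([1, 2, 3], 5, seed=42) == generate_sampled([1, 2, 3], 5, seed=42)

(Real deployments add batching and floating-point reduction-order effects, but those are implementation choices, not properties of the function.)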
Maybe I should just retire a few years early and go back to fixing cars...
Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.
Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.
"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. this would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."
First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.
Second, has the author not seen the METR plot? We went from "LLMs can write a function" to "agents can write working compilers" in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.
Also (and this is coming from someone who thinks it's quite close) "AGI" is not implied by the ability to implement very-long-horizon software tasks. That's not "general" at all.
I believe that even when we have AGI, code will still be super valuable because it'll be how we get precise abstractions into human heads, which is necessary for humans to be able to bring informed opinions to bear.
Usually the response, for the last years, has been "no no you don't get it, it'll get so much better" and then they make the context window slightly larger and make it run python code to do math.
What will really happen is that you, and people like you, will let Claude or some other commercial product write code, which it then owns. The second Claude becomes more expensive, you will pay, because all your tooling, your "prompts saved in commits", etc. will not work the same with any other AI offering.
You've just reinvented vendor lock in, or "highly paid consultant code", on a whole new level.
When you let an LLM author code, it takes ownership of that code (in the engineering sense).
When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?
You have no migration path. Your Codex prompts don't work the same in Claude. All the prompts you developed and saved in commits, all the (probably proprietary) memory the AI vendor saved on their servers to lock you in even more: all of it is worthless without the vendor.
You are inventing "ah heck, we need to pay the consultant another 300 bucks an hour to take a look at this, because nobody else owns this code", but supercharged.
You're locking yourself in, to a single vendor, to such a degree that they can just hold your code hostage.
Now sure, OpenAI would NEVER do this, because they're all just doing good for humanity. Sure. What if they go out of business? Or discontinue the model that works for you, and the new ones just don't quite respond the same to your company's well established workflows?
The fact of reality is that the technology is so complex only for-profit centralized powers can really create these things. Linux and open source was a fluke and even then open source developers need closed source jobs to pay for their time doing open source.
We are locked in, and this is the future. Accept it or deny it; one is delusion, the other is reality. The world is transforming into vibe coding whether you like it or not. Accept reality.
If you love programming, if you care for the craft. If programming is a form of artistry for you, if programming is your identity and status symbol. Then know that under current trends… all of that is going into the trash. Better rebuild a new identity quick.
A lot of the delusional excuse scaffolding people build around themselves to protect their identity is to say "the hard part of software wasn't really programming", which is kind of stupid, because AI covers the hard part too... in fact it covers it better than actual coding. Either way, this excuse is more viable than "AI is useless slop".
They'll hire the person who knows AI, not the human clinging onto claims of artisanal character by character code.
It's entirely possible to engineer well-designed and intentional systems with AI tools and not stochastically "vibe" your way into tech debt.
AI engineers will get hiring preference. That is until we're all replaced by full agentic engineering. And that's coming.
AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.
AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.
You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.
As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.
[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
I spend the other time talking through my thoughts with AI, kind of like the proverbial rubber duck used for debugging, but it tends to give pretty thoughtful responses. In those cases, I'm writing less code but wanting to capture the invariants, expected failure modes and find leaky abstractions before they happen. Then I can write code or give it good instructions about what I want to see, and it makes it happen.
I'm honestly not sure how a non-practitioner could have these kinds of conversations beyond a certain level of complexity.
I don't think the replacement is binary. Instead, it’s a spectrum. The real concern for many software engineers is whether AI reduces demand enough to leave the field oversupplied. And that should be a question of economy: are we going to have enough new business problems to solve? If we do, AI will help us but will not replace us. If not, well, we are going to do a lot of bike-shedding work anyway, which means many of us will lose our jobs, with or without AI.
[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...
"n the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."
And then: "Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K}. The proof does not describe how the function would be extrapolated outside of the region."
NNs, LLMs included, are interpolators, not extrapolators.
And the region NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N" as SolidGoldMagiKarp [2] clearly shows.
[2] https://github.com/NiluK/SolidGoldMagikarp
[0] https://www.sciencedirect.com/science/article/pii/S002200008...
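For reference, a standard one-hidden-layer form of the statement (following Cybenko's version) makes the role of the compact set explicit:

    % For every continuous f on a compact K in R^n and every eps > 0,
    % some finite sum of sigmoid units is uniformly within eps of f on K.
    \forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :
    \quad \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon

Nothing in the guarantee says anything about x outside K, which is exactly the interpolation-vs-extrapolation point above.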
I think a good example is calculations or counting letters: it's trivial to write Turing machines that do this correctly, so you could create neural networks that do just that. From LLMs we know that they are bad at those tasks.
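For contrast, the exact version is a few lines of Python, deterministic every time (the famous "count the r's in strawberry" task that tokenized LLMs stumble on):

    from collections import Counter

    def count_letter(text, letter):
        # Exact, deterministic counting: trivial for a program,
        # notoriously unreliable when an LLM does it token by token.
        return Counter(text.lower())[letter.lower()]

    assert count_letter("strawberry", "r") == 3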
I wrote an article on that: Hard Things in Computer Science
https://blog.est.im/2026/stderr-04
https://news.ycombinator.com/item?id=46669591
Of course! But that's what makes them so powerful. In 99% of cases that's what you want - something that is conventional.
The AI can come up with novel things if it has agency and can learn on its own (using e.g. RL). But we don't want that in most use cases, because it's unpredictable; we want a tool instead.
It's not true that this lack of creativity implies lack of intelligence or critical thinking. AI clearly can reason and be critical, if asked to do so.
Conceptually, the breakthrough of AI systems (especially in coding, but it's to some extent true in other disciplines) is that they have an ability to take a fuzzy and potentially conflicting idea, and clean up the contradictions by producing a working, albeit conventional, implementation, by finding less contradictory pieces from the training data. The strength lies in intuition of what contradictions to remove. (You can think of it as an error-correcting code for human thoughts.)
For example, if I ask AI to "draw seven red lines, perpendicular, in blue ink, some of them transparent", it can find some solution that removes the contradictions from these constraints, or ask clarifying questions about the domain so it can decide which contradictory statements to drop.
I actually put it to Claude and it gave a beautiful answer:
"I appreciate the creativity, but I'm afraid this request contains a few geometric (and chromatic) impossibilities: [..]
So, to faithfully fulfill this request, I would have to draw zero lines — which is roughly the only honest answer.
This is, of course, a nod to the classic comedy sketch by Vihart / the "Seven Red Lines" bit, where a consultant hilariously agrees to deliver exactly this impossible specification. The joke is a perfect satire of how clients sometimes request things that are logically or physically nonsensical, and how people sometimes just... agree to do it anyway.
Would you like me to draw something actually drawable instead? "
This clearly shows that AI can think critically and reason.
As a big fan of Vi Hart I was surprised to read that she wrote or was involved in that "classic comedy sketch".
As far as I can tell, after a few minutes searching, she was not.
On the line test, I guess it's highly probable that the joke and a few hundred discussions or blog pieces about it were in its training data.
It's not a SAT solver (yet) and will have trouble to precisely handle arbitrarily large problems. So you have to lead it a bit, sometimes.
What percentage of developers advance the state of the art, what percentage of juniors advance the state of the art?
Lots of people have ideas for programming languages; some of those ideas may be original, but many of those people lack the time/skills/motivation to actually implement their ideas. If AI makes it easier to get from idea to implementation, then even if all the original ideas still come from humans, we may still stand to make much faster progress in the field than we have previously.
More proximately, the creator of the Clang C compiler.
"Needed to advance the state of the art" and actually deployed to do so are two different things. More likely either AI will learn to advance the state of the art itself, or the state of the art wont be advancing much anymore...
>CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.
And also
> The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.
So, setting aside the fact that we now have magic that can just produce "conventional" compilers, take it to a Moore's Law situation. Start 1000 create-a-compiler projects; have each use a temperature to try new things, experiment, mutate. Collate, find new findings, reiterate: another 1000 runs with some of the novel findings. Assume this is effectively free to do.
The stance that this - which can be done (albeit badly) today and will get better and/or cheaper - won’t produce new directions for software engineering seems entirely naive.
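As a conceptual sketch only (generate_compiler and score are hypothetical stand-ins for an LLM-driven build step and an evaluation harness; nothing here is a real API), the loop being described is basically population search:

    import random

    def generate_compiler(temperature, hints):
        # Stand-in for "point an agent at the task with some prior findings".
        return f"compiler(temp={temperature:.2f}, hints={len(hints)})"

    def score(candidate):
        # Stand-in for a test suite / benchmark harness.
        return random.random()

    hints = []
    for generation in range(3):  # "reiterate"
        population = [
            generate_compiler(random.uniform(0.0, 1.5), hints)  # "mutate"
            for _ in range(1000)  # "start 1000 projects"
        ]
        best = sorted(population, key=score, reverse=True)[:10]
        hints.extend(best)  # "collate - find new findings"

Whether this finds genuinely new directions depends entirely on the score function, which is the verifiable-rewards problem raised in a reply below.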
In fact, in statistics we have another principle, which states that as you increase parameters, you increasingly risk overfitting. And overfitting already seems to be a major problem with state-of-the-art LLMs. When you start overfitting, you are pretty much just re-creating stuff that is already in the dataset.
The issue is you need verifiable rewards for that (and a good environment set-up), and it's hard to get rewards that cover everything humans want (security, simplicity, performance, readability, etc.)
However AI systems in 2026-ε were utterly inadequate at coding
And AI systems in 2026+ε might not have the present limitations
Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.
Couldn't you say that about 99% of humans too?
And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions) but its impact can still be felt throughout the economy.
How? By also "synthesizing the data they were trained on" (their experience, education, memories, etc.).
> if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more
This isn't a throwaway comment. I do this all the time myself, at work. Everywhere I've worked, I do this. I challenge the assumptions and try to make things better. It's not a rare thing at all, it's just not revolutionary.
Revolutions are rare. Perhaps only a handful of them have ever happened in any one particular field. But you simply will not ever go from Aristotelian physics to Newtonian physics to General Relativity by merely "synthesizing the data they were trained on", as the previous comment supposed.
Edit: I should also say something about experimentation. You can't do it from an armchair, which is all an LLM has access to (at present). Real people learn things all the time by conducting experiments in the world and observing the results, without necessarily working as formal scientists. Babies learn a lot by experimenting, for example. This is one particular avenue of new knowledge which is entirely separate from experience, education, memories, etc. because an experiment always has the potential to contradict all of that.
Of course it does, but only after the fact. You don't have any experience of the result of the experiment before you perform it.
Sure, they can't have apples fall on their heads like Newton did, but they can totally observe an apple falling on someone's head in a video.
I have strong doubts that LLMs have any understanding whatsoever of what's happening in images (let alone videos). The claim (I've sometimes heard) that they possess a world model and are able to interpret an image according to that model is an extremely strong one, that's strongly contradicted by the fact that they: a) continue to hallucinate in pretty glaring ways, and b) continue to mis-identify doctored (adversarial) images that no human would mis-identify (because they don't drastically alter the subject).
That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we do humans, because the real goal of such commentary is to attempt to dismiss a perceived competitive threat.
People also "synthesize from the data they were trained on". Intelligence is a result of that. So this dead-end argument then turns into begging the question: LLMs don't have intelligence because LLMs can't have intelligence.
And yet the AI probably did better than 99% of human devs would have done in a fraction of the time.
what’s your point again?
It's like Stephen King saying an AI-generated novel isn't as good as his. Fine, but most of us have much lesser ambitions than topping the work of the most successful people in the field.