>"Wait a moment! Being forced to use AI gave me depression, and I'm well aware that it's only going to get better and better the more developers use it, to the point where the 10 job openings of yesterday are 1 job opening tomorrow. Why are people so excited?" If that's you, remember this:
You are reading HN, where survivorship bias and groupthink are just as high as in any other self-calibrating online community ("upvote if you agree" -> self-calibration of the popular opinion), and the survivorship bias is especially strong because people who are into this LLM craze have a higher probability of browsing HN.
As for you, OP, I have no idea why age is a factor in this. I'm 45, and while I had programmed as a hobby since I was 16, I turned it into a career during COVID, and all the pressure-cooker, watch-six-agents-write-while-you-proofread LLM workflow gave me so much existential crisis and depression that I seriously can't even get myself to write anything "over the weekend".
I hope to God the next generation of wunderkinds, the equivalent of the 12-year-old discovering how to bend the computer to do what they want it to do, enjoys arguing with multiple agents concurrently, back and forth.
This is extremely bizarre, because I'm 53, have been coding since 12, and it has had literally the exact opposite effect on me. I find it tremendously exciting, like riding a snowmobile instead of manually cross-country skiing.
But I do think that if you're not ready to work like this, you may need to consider a career pivot in the short term.
Or maybe your analogy is correct: AI is a bit as if everyone in the mountains drove around on a snowmobile, noisy and smelling of gasoline.
I don't think people are confused about why different types of people like different winter sports, but people seem shocked that opinions differ on the enjoyment of using an LLM.
That's where the analogy starts to break down a bit. You can't mode-switch between skis and a snowmobile, but you sure can toggle AI assistance on and off pretty quickly.
One more quick one - imagine skiers showing up to the snowmobile club hating on snowmobiles and vice versa.
I, for one, have still not properly got a grip on how tech enables this sort of analogy-breaking reality.
Effing go ski then; there's even a club for that! (Rhetorical, not directed at anyone in particular.) And shame on me, 'cause I show up to the ski club on a snowmobile with skis on my back.
That said, like many people here I have invested quite some time in becoming a skilled and experienced coder, so there is no denying that this whole AI craze makes me feel like something is taken away from me.
Using AI has been really perfect for me. I can build stuff while I do other things, walk the dog, make lunch, sit on the porch.
Sometimes I realize that my design was flawed and I just delete it all and start again, with no loss aversion.
This resonates with me strongly. While I like coding, and understanding it, I understand my human limitations. I couldn't possibly have written by hand the stuff I've been making these past few months, in the time I'm making it, without a team. I would be coding literally all day, and while I sometimes enjoy the zoned-out process of wiring stuff up, what I really enjoy is exactly what you described.
I enjoy being outside and walking my dog, taking a long shower, and cooking. All of these things are simple tasks with a good bit of repetition, and unlike wiring up some code or whatever, they allow my thoughts to flow, and I can think about where my projects are likely heading and what needs to be done next.
Those moments, even before heavy AI-assisted coding, have always been the moments I cherish about software development.
The AI should be spending most of its time helping you spec out new revisions to the codebase, the code-writing time is just the last step and if you've planned the work in depth, you'll understand what the AI is trying to do (and be able to stop and revise if anything is going off the rails). This is a healthier approach than "just spec out something else in the meantime" IMHO, but of course that happens too.
Yeah, I've learned that if I do too much of that, I'll spend more time catching up, consolidating gains through review of code and functionality. That's just me; people are clearly developing a few different, and not "wrong," ways of going about things.
I either switch between two projects, or I keep an eye on what Claude is doing, because it often goes off the rails or heads in a direction I don't like, and then it's easier to just stop it there and tell it what to do instead.
> That said, like many people here I have invested quite some time in becoming a skilled and experienced coder, so there is no denying that this whole AI craze makes me feel like something is taken away from me.
I might have felt like that when I was younger (almost 44 now, programming since 10), but over time I realized that the thing I enjoy is not really writing code itself, but coming up with ideas, solving puzzles, etc. LLMs are like insanely fast junior programmers, so they do the more mundane part of the task, but they need me to come up with good ideas, good taste, and solve design challenges. Otherwise it ends up as a pile of unmaintainable junior programmer code.
It is possible that LLMs might replace the other parts of being a good programmer as well, but for the time being it makes my work more pleasant, because I can work on interesting problems.
I usually review the code that's been written. Sometimes directly, sometimes by telling claude to bring things up piece by piece to explain choices as I review. Or I kick off one of the various maintenance tasks, validate my assumptions and expectations on how things should function, note the things that don't to be addressed. I'm going to have to do this stuff anyway, I might as well do it then.
Or I read something, or do something to clear my head. Sometimes I just need a mental break, because I find that the speed these tools have me working at can be taxing in different ways.
I think expectations of the "10x" variety, whether you put that at 10x or 3x, will have to be adjusted. Coding as fast as 5 developers is far, far different from "a single developer can produce as much as 5 others".
Think of it like being a project manager on a team. There's a lot you can do to keep the project moving forward without touching a single line of code.
I grew up witnessing Carmack going from Keen to Quake in 5 or 6 years.
That standard gets you attached to the idea that you should, at some level, be able to individually reach a fraction of that depth and breadth. Sadly, I have neither the energy nor the focus.
But what's the point of getting an LLM to, say, write a raycaster if your point is to learn how to do that yourself? If your mission in life is to learn to build things?
(I hope I'm getting my idea across)
I have worked as a developer, security engineer, program manager and engineering manager through my career. Writing stuff to understand algorithms or hardware requires engaging with the math, science, and engineering of the software and hardware. Optimizing it or developing a novel algorithm requires deep comprehension.
Writing a service that shuffles a few things around between stuff on my home network so that I can build an automation to turn down the lights when I start playing a movie? Yeah, I could spend a day or two writing and testing it. Having done it a few times, the work of it is a bit of a chore; I'm not learning, just doing something. Using Claude or some other agent to do that makes it go from "do I want to spend my time off doing a chore?" to "I can design this and have it built in an hour".
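That kind of glue service really is tiny, which is why it feels like a chore to write by hand. A minimal sketch of the idea in Python, using only the standard library (the event names, payload shape, and light actions are all hypothetical, not any real home-automation API):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Map media-player events to light actions. Both sides are made-up names;
# in a real setup these would match your player's webhooks and light bridge.
EVENT_ACTIONS = {
    "playback.start": {"lights": "dim", "level": 10},
    "playback.stop": {"lights": "restore"},
}

def action_for(event: str):
    """Return the light action for a player event, or None to ignore it."""
    return EVENT_ACTIONS.get(event)

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the webhook body and look up what to do with the lights.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or b"{}").get("event", "")
        act = action_for(event)
        # A real version would forward `act` to the light bridge here.
        self.send_response(200 if act else 204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8123), Hook).serve_forever()
```

The whole "service" is a lookup table plus an HTTP handler, which is exactly the kind of thing an agent can produce in minutes.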
Making the jump to using the tools in my day job has been a bit more challenging, because as a security engineer I have seen some hairy stuff over the last two years as AI-generated code wends its way into production. But the tools and capabilities have expanded massively, and heck, my peers from Mozilla just published some awesome successes working with Anthropic to find new vulns :)
Don't let using tools take away the love of learning, use them to solve a problem and take care of the drudgery of building stuff.
Sounds like you had one at home? If so, I'm a bit jealous. But also, hello, brother/sister!
Nobody can teach you to own and control yourself. But you had better. Use tricks, treats, magic, whatever, but get to the damned end, or make damned sure you know why you walked away (and live with that).
Your life matters. Your ideas matter. Birth them. It hurts. Push through. Don't look back at your life and wonder what it would have been like if you had stuck with it. It hurts. But do it.
Or do whatever you want, but this random stranger votes "getting over".
OTOH, if there is this bifurcation among coders (one group super-excited, one group depressed and angst-ridden), then maybe we should be trying to figure out why people are reacting the way they are. Can you explain more about your situation? What do you code? Do you have hobby projects? Do you have free time? Etc.
I'm 40 and have been doing this since I was 12 as well. Once I became a staff engineer at a large company and ended up less hands-on with code and more focused on team leadership and system architecture, it set me up for this perfectly.
I missed writing code (or so I thought), but what I realized is that I actually missed bringing ideas to life. Coding was just a means to do that, and the new tools with LLMs and agents have allowed me to do the core of what I love far more than coding by hand ever could.
I’m also not really in the HN gestalt, so to speak. I have some views that are common, hereabouts, and some, not so much.
I’m enjoying having an LLM “pair partner.”
In the last few years it somehow felt like there was nothing new anymore, the same 10 ideas being regurgitated with slight modifications. I tinkered with AI for the past 2 years, but it was mostly a "tool for writing boilerplate". I tried a few ideas for agents but didn't see how they could work.
That changed with Opus 4.6 and the subsequent wave of local models - now I try 10 ideas a day and it’s like magic! And if something doesn’t work - jumping into the code and debugging it is huge fun!
Understanding that the era of the almost-free cloud tokens might come to an end, I run my own harness pointing to my own GPUs running Qwen3.5-27B, and the last few days it has been very busy! :)
My harness doesn’t “pressure cook”, since it doesn’t make sense to do that with only one GPU (among many other reasons); it runs everything in a linear fashion, including subagents, and logs everything. Reading the logs as they go by is another cool thing; sometimes I pick up interesting things from them!
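A linear harness like that can be surprisingly little code. A toy sketch of the loop, with the model call stubbed out (all names here are made up, not the actual harness described above):

```python
import json
import time

def run_task(task, call_model, log_path="harness.log", max_steps=8):
    """Run one task to completion, one step at a time, logging every exchange.

    `call_model` stands in for whatever function talks to the local model;
    it just needs to return a dict like {"done": bool, "output": str}.
    """
    transcript = []
    with open(log_path, "a") as log:
        for step in range(max_steps):
            reply = call_model(task, transcript)
            # One JSON line per step, readable as it scrolls by.
            log.write(json.dumps({"t": time.time(), "step": step,
                                  "reply": reply}) + "\n")
            transcript.append(reply)
            if reply.get("done"):
                break
    return transcript

# Subagents stay strictly sequential: a parent step simply calls
# run_task(subtask, call_model) before continuing, so the log stays linear.
```

Because everything funnels through one loop and one log file, there is no interleaving to untangle when you read it back.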
The distribution of people’s moods related to AI seems indeed bimodal. And I feel lucky somehow ending up in the “enthusiastic” rather than “depressed” part of it. To the folks in the other one: I am sorry. I don’t know why it is this way. If I knew I might have given unsolicited advice.
Of the ones I can share:
Browser-based network tester using WebRTC unreliable data: https://netpoke.com - use magic code "DEMO" to see what it's about - the source is at https://github.com/ayourtch/netpoke
A port of the SOTA speech generation model from Python to Rust:
https://github.com/ayourtch/fish-audio-experiment
A study on LLM prompting techniques:
https://github.com/ayourtch-llm/kindness
My own coding agent that I use with my locally hosted LLM for experiments:
https://github.com/ayourtch-llm/apchat
Also LLM helped with a lot of code for my packet mangling library: https://github.com/ayourtch/oside - which, among other things, includes a now battle tested SNMPv3 stack.
A true “stochastic parrot” using hash tables: https://github.com/ayourtch/hashmem
These are the ones I remember. Feel free to scout my GitHub for more. Edit: And of course it goes without saying that not all of the ideas I try make it to GitHub. Many end up thrown away.
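For what it's worth, the "stochastic parrot" idea in that list is roughly a hash-table Markov chain. A minimal sketch of the technique (this is a generic illustration, not the actual hashmem code):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each word n-gram to the words that follow it in the corpus."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def babble(table, length=20, seed=None):
    """Walk the table from a random key, emitting observed continuations."""
    rng = random.Random(seed)
    key = rng.choice(list(table))
    out = list(key)
    for _ in range(length):
        nxt = table.get(tuple(out[-len(key):]))
        if not nxt:
            break  # dead end: this n-gram was only seen at the corpus tail
        out.append(rng.choice(nxt))
    return " ".join(out)
```

The "parrot" only ever replays continuations it has literally seen, which is the whole joke of the name.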
I'm in the former camp. Every time I have an LLM write code it makes me entirely depressed, because the satisfaction I get from programming is the programming. However, what I have found incredibly valuable is having LLMs help me plan. Using one as someone to brainstorm with, to "rubber duck" if you will. I still get to code; it just speeds up the planning process, which has gone from a depressing exercise to one where I am excited to work.
Find your own path.
But I also like to work the way you described it, and also to use Claude Code for e.g. K8s stuff (kubectl, helm), where you'd otherwise have to use a TUI or do a lot of typing just to get logs/status/etc., plus a bunch of YAML that is just incredibly tedious.
There doesn't seem to be a place for me in the future of software/tech: I like sitting quietly, alone, solving problems, writing code, and reading it. I like in code much of what I like in art: the fruits of human labor and the results of human ingenuity. Being excited about AI/LLMs makes no sense to people like me. If you're excited because LLMs let you make something, great, good for you. Have fun.
If the tools become a mandatory part of the job, I'll change careers. Spending my days talking to chipper robots and describing what I want rather than making it myself sounds unbearable.
In the end, I remembered how much I hated schooling. This is despite being a huge fan of education. It wasn't realistic to think that I'd complete the work needed for accreditation.
Regardless, I'm happy today having selected for the thing that I already knew. I hope you also find yourself satisfied. It's lonely feeling lost when evaluating a thing you'd known through a new paradigm.
It's not uncommon for people to lose interest, or to find that the passion has gone out of things they enjoyed when they were younger, especially in their professional lives, where the enjoyment eroded through forced contact with the less enjoyable aspects, or was contaminated by unpleasant work environments and uninteresting projects.
Having that passion reignited isn't something given to all people.
(And I hesitate to even air that view in front of others that are already in the field because I am a kind of Pollyanna and don't want to foment bad vibes.)
But since I retired a few years ago it was clearly not LLMs that precipitated the decline of my enjoyment of the profession. Instead it was the slow erosion of agency and responsibility that did that.
I'll drop the euphemisms and just say outright that the inmates ran the asylum when I began in the 90's (at Apple, FWIW). The only one that really told me what to do was the tech-lead on the team. Not my manager—for sure not marketing or the CEO (ha ha — Jobs had not yet returned).
In effect, I and all other engineers were told, "Here's your sandbox, here's your shovel: you go make your sand castle however you want—so long as it does X, Y and Z. We'll ship it but you'll own it. You'll fix it, expand it…"
(A coworker whose sense of humor I always enjoyed said to me, perhaps seriously, "When someone drops code in my lap and says, 'It's yours now,' the first thing I do is rewrite it." Yeah, that's what happens to someone's code when they move on—it becomes someone else's sandbox and they are free to knock down the castle, build another—Chesterton's Fence notwithstanding, ha ha.)
To that end I feel a little bad for anyone that missed that era. I mean unless you enjoy writing unit tests, having code reviews, style guidelines, etc.—and I have certainly met younger engineers that have come on board that seem to enjoy those aspects of the these-days profession.
I admit that when I began it was in fact a bit intimidating when you realized that code you were writing, were responsible for, was going to ship on millions (in 1995? maybe?) of machines. The responsibility though also came with agency—the combination came to give me a sense of freedom, the power of using my discretion, and finally a sense that I was a valued contributor.
You can infer from the above what I disliked about the profession as I was aging out of it. My general sense is that the industry became too big, with too much money riding on it, for management to entrust it to the "funny farm". But of course we cowboys who came up in that ward liked it the way it had been.
As someone who references Chesterton’s fence often, I not only agree the code often gets rewritten when someone moves on, I even think it’s often the right thing to do - for medium to small projects where there is one or only a few people who own the code. The reason is because I’ve seen what happens when you don’t rewrite it - the new owner(s) don’t have intimate knowledge of the codebase, and as a result, they work at the speed of molasses regardless of their skill. I have left code behind to people who are better coders than me, and it took years for them to become productive.
To be fair, I have also seen large projects with many people get rewritten and have Chesterton bite back hard, having the projects go late, cost enormous sums of money, and end up as bad as the first time, so rewrites certainly aren’t always called for.
This is all changing dramatically with Claude, BTW, people can now get into a codebase and be productive without rewriting it. They might not understand it, but this is a positive development of some kind at some level.
I've been working on a contract for a large corp. They asked me to design a piece of software over 6 months which I delivered on time and worked great — by the time we had to ship into PROD, the whole thing was canned unceremoniously.
Luckily they liked my work so much they moved me to another greenfield project. Worked on it for a year, had to invent novel solutions which I'm pretty proud of, and we shipped into prod last Autumn. I haven't heard a peep from anyone, whether the thing is working and by masterful skill of mine it hasn't crashed yet, or if no one is using it and it was just another bullshit job.
All this work, good pay, and nothing to show for it. Not even a pat on the back. I'm just a well-oiled cog in an unfathomable machine. I wonder if my career has any meaning at all. Recently they asked me to deliver a feature for yesterday because of bad planning on their part, and when I mentioned how long it would take, they half-jokingly suggested using LLMs so I could ship it in half the time to make their arbitrary deadline.
Joke's on them: in less than 6 months I'm out. 20 years as a software engineer, 15 as a contractor, and all I feel when I get to my desk is existential dread. There is just no pleasure in it; I'd rather risk poverty and feel like my actions and efforts have tangible effects on the physical world.
Was producing more mediocre code ever the problem? This all feels like a Kafkaesque fever dream.
It was clear that my friend was looking on somewhat enviously and when I asked, he admitted as much.
And I knew immediately the draw, too. Before I was old enough for "gainful employment" there was a neighbor who hired my sister and me (I think I was 11, my sister 10) to ride along with him and his kids (our neighborhood friends) and help with his lawn services business.
I know. But this was the 1970's, a small working-class neighborhood in a Kansas suburb… And he paid us by the hour, helped load/unload the lawn mowers. We'd get a free lunch at a "Wiley's" fast-food hamburger joint.
But despite the physical labor of pushing a lawn mower all over someone's yard, there was a curious sense of satisfaction that came from having arrived at a tatty, overgrown lawn but then leaving it looking neat, tidy. It is the usual "sense of accomplishment" that physical labor often metes out that is often more elusive in the white-collar world.
To be sure there's no arguing about the differences in pay—I'm talking strictly about a sense of job satisfaction. (And, over the course of my three decade career as a programmer, the closest to that had been early on when I had full ownership of the code.)
I'd always been a computer person, but it wasn't until I'd reached my thirties that I realized I could make a career out of that interest. The joy of programming still gets me out of bed in the morning and sends me skipping happily to my desk in my home office. What I do wouldn't impress anybody at a technical level. I'm not an innovator. The world of software and tech would not suffer if I had never existed. But I like the guy I work for. I like the people I work with. I write stuff that lots of people use. I do it well enough that I can feel decently good about it.
And I'm watching all of what I enjoy in software as a career and craft gradually disappear. Upper management are now all True Believer AI zealots who know, just know, that AI is the future and therefore ensure that it is also the present. They've caused nothing but organizational chaos, shoved out knowledgeable people, in some misguided effort to remake the company in their image, and replaced them with, to me, obvious bullshit artists.
Engineering time and effort that might a few years ago have produced value and good experiences for users now produce mediocre "MCPs," used only internally, that turn out even more mediocre code and tests that don't test anything.
I don't have nearly the chops or talent you and your peers have. I never could have run with you guys or made the mark on the world that you did. What I do, and the processes I follow, are probably the exact stuff that drove you to retirement. Still, I enjoy what I do and hate that it's being taken from me and replaced with something I hate, overseen, in my company's case, by vapor merchants pretending to be visionaries/cutting-edge 'thought leaders.'
I'm glad some of us got to build things when the inmates ran the asylum, and I regret the money and 'progress' that strangled the life and joy out of it for you.
Just an aside: I've really enjoyed everything you've posted on HN and look forward to your comments. Thanks, and cheers.
Trust me, when I started at Apple in 1995 I was way in over my head. Or so I thought.
After a couple months on the job I asked a coworker down the hall (who seemed particularly chill—Hi, Brian!), "How long until I feel like I know what I'm doing?"
"6 months."
I liked the unambiguity of his answer even if it seemed kind of off the cuff.
He was more or less right. It was somewhere around 6 months in that I more or less knew what I was heading in to accomplish each morning. And I felt like I, with a little help perhaps, could even contribute in a small way.
Still, I was always surrounded by some of the most amazing programmers I had ever met. One guy (hi, Cam!) could walk through a "backtrace" in machine code, look at the registers, addresses and data on the stack, and then declare, "You're accessing memory after you've already released it. Do you know what could be 24 bytes in size?"
And who was I? Some kid from Kansas with no degree in software engineering.
It may in fact have taken closer to two decades before I was able to shake off the imposter syndrome. At some point I had to admit that I wasn't so dense as to have learned nothing in my 20+ years of coding. I was still not on Cameron's level, never will be, but I might have made up for that shortcoming by leaning into being prolific, coding two or three prototypes quickly in order to finally determine The Best Path.
Just from your comment I would be willing to bet your enthusiasm alone would make you a valuable asset.
That is kind of how it worked: there were some people who could hold multiple threads in their head and rattle off a semaphore strategy that was performant and skirted deadlocks.
There was the "math guy". We all knew who they were and would cycle by their office when we were wrestling with matrix inversions and the order of transforms.
And there were people that you could rely upon to take perhaps the most dreaded task of a project and work diligently at it. Trust me, no one split hairs over whether that individual could disassemble PPC code just by looking at it. The team appreciated the "tanks" that could do some of the drudge-work. (I was from time to time that person.)
I don't need to belabor a point, you get it, it took all types. It took me some time to see that though, and longer still to see where I fit in as well.
But I vibe-coded a web site [1] that I would not have otherwise attempted (I just didn't want to have to figure out how to learn a map-type framework in order to put little points-of-interest on a web page.)
I also vibe-coded an extremely esoteric app for turning .mpo files into stereograms that you can then print to display in an old-fashioned stereoscope [2].
I have lately been learning (I hope?) to build a hobbyist analog computer. This is a deep dive into electronics—something I have no training in.
And I have already queued up a couple of my abandoned projects (also esoteric) that I hope to turn an LLM loose on when I free up some time (from my current analog computing obsession).
It's hard to say if I would not have pursued all the above without an LLM. I am giving examples though of projects that I feel were sort of on the tipping point for me as to whether they were worth the effort to pursue or not—the learning-curve-required vs. useful-end-product balance. I am finding the LLMs are a finger on the scale tipping it more often toward "Go for it." Maybe you would call that a "spark"?
[1] (where I map out the location of a pair of YouTubers that have been road-tripping the U.S. for over three years) https://engineersneedart.com/OneAdvanture/
[2] https://engineersneedart.com/stereographer/stereographer.htm...
This is only one data point, but my dad was a programmer and frequently complained about cognitive decline once he hit his mid-50s. From talking to him, he remained sharp at a conceptual and high level, knowing what he wanted to do and how it would be done, but struggled with the tooling, the logistical details, etc. He didn't make it to the AI era, alas, but AI could be a godsend for people who have proven technical chops and background but find that juggling a lot of minutiae is becoming difficult.
I'm in my mid 40s, I've had a really fulfilling career working on interesting things and making decent money, and over that time have accumulated a few passion projects that I knew were always out of my reach.
Well, technically within my reach but I'd need to somehow find someone to pay for me and a team for some period of time to work on stuff.
When I started playing around with these tools, it started feeling like maybe some of my ideas were within reach. Some time after, it felt plausible enough that I've decided to go for it. I'm actively in the middle of some deep performance research that I simply would not have the bandwidth or capacity for without these tools.
I've also managed to acquire enough confidence in the likelihood of some degree of success that I'm investing in starting a company (self-funded) to develop, release, and license the stuff I'm building.
I don't know exactly how my ideas will turn out, but that's part of the excitement and anticipation. Point is I never felt I had enough breathing room to really go for it (between normal life obligations like mortgage, feeding kids, etc.)
These tools have changed the equation enough that it's made it more feasible for me to pursue some of these ideas on my own. Things I would have shelved for the rest of my life, probably.. or maybe tried to encourage and interest others into doing.
Agreed. To expand, IMHO and somewhat tangentially: recognizing the importance of software/technology and using it as a tool is the hallmark of a person with a balanced mental makeup. Someone whose 'passion' for software (or technology in general) has ever extended beyond a few weeks can be considered to have something abnormal going on - for example, autism. This is like a carpenter becoming obsessed with his chisel and deriving his entire sense of purpose and happiness from delving into the minutiae of chisels.
It is more fun to treat them as coding buddies, usually using them one at a time; it is fair to race them at debugging a bug, or to spend the waiting time looking at docs or something.
The real bottleneck is how much you can hold in your head simultaneously to be sure about quality as a moral subject.
HN comments bias far more negative towards technology, tech companies, and current politics than the people I know in real life. People who mostly don’t work as professional software engineers, at least not anymore. And the (employed) engineers I know are all having a lot of fun too.
Love AI explaining code
Dislike AI for writing code (that was my fun part)
I've worked in professional software development for more than 20 years. I'm pretty well connected and well aware of what is going on in the industry. If you think that coding agents are not widely used and just a bubble on HN, you are very much mistaken. At this point I'd suggest more than 50% of professional developers are using them. Within a few years it will be 90%.
The reason is, they are actually good, despite what some people really want to believe.
Personally, I've been typing characters into a text editor or IDE for a long, long time. I'm very happy that I have an automated junior programmer to do it for me now, while I guide it, tell it when it is getting things wrong, and fix up mistakes. I did it the manual way for a long time; I'm enjoying this new way. I understand this isn't for everyone, though.
Fast forward 30 years later, I thought those days were gone forever. I'd accepted that I'd never experienced that kind of obsession again. Maybe because I got older. Maybe those feelings were something exclusively for the young. Maybe because my energy wasn't what it used to be. Yada yada, 1000s of reasons.
I was so shocked when I found out that I could experience that feeling again with Claude Code and Codex. I guess it was like experiencing your first love all over again? I slept late, I woke up early, I couldn't wait to go back to my Codex and Claude. It was to the point I created an orchestrator agent so I could continue chatting with my containerized agents via Telegram.
“What a time to be alive” <-- a trite, meaningless saying that was infused with real meaning by some basic maths that run really, really, really fast on really, really expensive hardware. How about that!
It suddenly turns the dead time while you're waiting for CI, review, or a response into time where you can work on fun or satisfying side projects: fire up a few prompts, check an iteration or two, and then pause again until the next wait, or while the agent is doing its thing.
My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with an LLM.
You have decades of expert knowledge, which you can use to drive the LLMs in an expert way. That's where the value is. The industry or the narrative might not have figured that out yet, but it's inevitable.
Garbage in, garbage out still very much applies in this new world.
And just to add: the key metric of good software hasn't changed, and won't change. It's not even about writing the code, the language, the style, the clever tricks. What really matters is how well the code performs 1 month after it goes live, 6 months after, 5 years after. This game is a long game. And it's not just how well the computer runs the code, but how well humans can work with it.
Use your experience to generate the value from the LLMs, because they aren't going to generate anything by themselves.
Massive job cuts, bad job market, AI tools everywhere, probable bubble, it seems naive to be optimistic at this juncture.
LLMs may be accelerating the process, but definitely not the cause.
If you want a durable career in technology, you learn to adapt. Your primary skill is NOT mastery of a given technology; it is the ability to master any given technology. This is a university with no graduation!
If you're a great programmer, can you stop using Angular and master React? Yes. Can you stop telling the computer what to do, and master formal proof assistants? Maybe. Can you stop using the computer except as a tool and go master agricultural technology? Probably not. (Which is not to say you can't be a good programmer at an agritech company.)
This is the fundamental problem with how so many people think about LLMs. By the time you get to Principal, you've usually developed a range of skills where actual coding represents like 10% of what you need to do to get your job done.
People very often underestimate the sheer amount of "soft" skills required to perform well at Staff+ levels that would require true AGI to automate.
I remember a cinema projectionist telling me exactly that while I was wiring up the software controlling the digital projectors that replaced the 35mm ones.
[citation needed]
It has merely moved from "almost, but not entirely, useless" to "sometimes useful". The models themselves may already be capable, but they will need much better tooling than what's available today to get more useful than that, and since it's AI enthusiasts who will happily let LLMs write that tooling for them, it will still take a while to get there :)
If we truly value human creativity, then things that decrease the rote mechanical aspects of the job are enablers, not impediments.
After 40 years in this industry—I started at 10 and hit 50 this year—I’ve developed a low tolerance for architectural decay.
Last night, I used Claude to spin up a website editor. My baseline for this project was a minimal JavaScript UI I’ve been running that clocks in at a lean 2.7KB (https://ponder.joeldare.com). It’s fast, it’s stable, and I understand every line. But for this session, I opted for Node and neglected to include my usual "zero-framework" constraint in the prompt.
The result is a functional, working piece of software that is also a total disaster. It’s a 48KB bundle with 5 direct dependencies—which exploded into 89 total dependencies. In a world where we prioritize "velocity" over maintenance, this is the status quo. For me, it’s unacceptable.
If a simple editor requires 89 third-party packages to exist, it won't survive the 5-year test. I'm going back to basics.
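That 5-direct-to-89-total explosion is easy to measure yourself: in the npm lockfile v2/v3 format, every installed package, direct or transitive, appears as a key under `packages`. A minimal Node sketch, using a hypothetical toy lock object in place of the real 89-dependency file:

```javascript
// Count total installed packages from a package-lock.json (lockfile v2/v3),
// where "packages" maps install paths to metadata.
// Toy lock data for illustration only:
const lock = {
  packages: {
    "": { name: "editor" },                          // the root project itself
    "node_modules/express": { version: "4.18.2" },   // a direct dependency
    "node_modules/accepts": { version: "1.3.8" },    // pulled in transitively
    "node_modules/mime-types": { version: "2.1.35" },
  },
};

// Every key except the root ("") is an installed dependency.
const total = Object.keys(lock.packages).filter((key) => key !== "").length;
console.log(`${total} installed packages`); // 3 in this toy example
```

Against a real project you would `require("./package-lock.json")` instead of the inline object; `npm ls --all` shows the same tree interactively.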
I'll try again but we NEED to expertly drive these tools, at least right now.
What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.
We'll get there.
There are already spec frameworks that do precisely this. I've been using BMAD for planning and speccing out something fairly elaborate, and it's been a blast.
> neglected to include my usual "zero-framework" constraint in the prompt
And then your complaint is that it included a bunch of dependencies?
AIs do what you tell them. I don't understand how you conclude:
> If a simple editor requires 89 third-party packages to exist
It obviously doesn't. Why even bother complaining about an AI's default choices when it's so trivial to change them just by asking?
I have been consistently skeptical of LLM coding but the latest batch of models seems to have crossed some threshold. Just like everyone, I've been reading lots of news about LLMs. A week ago I decided to give Claude a serious try - use it as the main tool for my current work, with a thought out context file, planning etc. The results are impressive, it took about four hours to do a non-trivial refactor I had wanted but would have needed a few days to complete myself. A simpler feature where I'd need an hour of mostly mechanical work got completed in ten minutes by Claude.
But, I was keeping a close eye on Claude's plan and gradual changes. On several occasions I corrected the model because it was going to do something too complicated, or neglected a corner case that might occur, or other such issues that need actual technical skill to spot.
Sure, now a PM whose only skills are PowerPoint and office politics can create a product demo, change the output formatting in a real program and so on. But the PM has no technical understanding and can't even prompt well, let alone guide the LLM as it makes a wrong choice.
Technical experts should be in as much demand as ever, once the delirious "nobody will need to touch code ever again" narrative gives way to a realistic understanding that LLMs, like every other tool, work much better in expert hands. The bigger question to me is how new experts are going to appear. If nobody's hiring junior devs because LLMs can do junior work faster and cheaper, how is anyone going to become an expert?
It’s refreshing to hear I’m not the only one who feels this way. I went from using almost none of my copilot quota to burning through half of it in 3 days after switching to sonnet 4.6. I’m about to have to start lobbying for more tokens or buy my own subscription because it’s just that much more useful now.
I'm still not ready to sing praises about how awesome LLMs are, but after two years of incremental improvements since the first ChatGPT release, I feel these late-2025 models are the first substantial qualitative improvement.
I see unreliable software like openclaw explode in popularity while a Director of Alignment at Meta publicly shares how it shredded her inbox while continuing to use openclaw [1], because that's still good enough innit? I see much buggier releases from macOS & Windows. The biggest military in the world is insisting on getting rid of any existing safeguards and limitations on its AI use and is reportedly using Claude to pick bombing targets [2] in a bombing campaign that we know has made mistakes hitting hospitals [3] and a school [4]. AI-generated slop now floods social networks with high popularity and engagement.
It's a known effect that economies of scale lower average quality but create massive abundance. There never really was a fundamental quality bar for software or creative work; it just has to be barely better than not existing, and that bar is lower than you might imagine.
[1] https://x.com/summeryue0/status/2025774069124399363
[3] https://www.reuters.com/world/middle-east/who-says-has-it-ha...
[4] https://www.nbcnews.com/world/iran/iran-school-strike-us-mil...
This is so clearly a losing strategy. So clearly not even staff level performance let alone principal level.
I must say I find this idea, and this wording, elitist in a negative way.
I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.
Chances are, you were able to accumulate your expert knowledge only because:
- book writing and authorship was democratized away from the church and academia
- web content publication and production were democratized away from academia and corporations
- OSes/software/software libraries were all democratized away from corporations through open-source projects
- computer hardware was democratized away from corporations and universities
Each of the above must have cost some gatekeepers some revenue and opportunities. You were not really an idiot just because you benefited from any of them. Analogously, when someone else benefits at some cost to you, that doesn't make them an idiot either.
This parroted argument is getting really tired. It signals either astroturfing or someone who just accepts what they are sold without thinking.
LLMs aren’t “democratising” anything. There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.
There are loads of high performance open source LLMs on the market that compete with the big 3. I have not seen this level of community engagement and collaboration since the open-source boom 20 years ago.
The issue arises from it not being that person’s opinion but a talking point. People didn’t all individually arrive at this “democratisation” argument by themselves, they were sold what to say by the big players with vested interest in succeeding.
I’m very much for discussing thoughts one has come up with themselves, especially if they disagree with mine. But what is not productive is arguing with a proxy.
> I have not seen this level of community engagement and collaboration
Nor this level of spam and bad submissions.
> Nor this level of spam and bad submissions.
Your comments seem pretty aggressive for what you’re replying to. Maybe take a beat to assess your biases? I thought the main comment was pretty fair and sensible, yet somehow you landed on calling them a spammer/bad submitter/astroturfer/non-thinker. Maybe they are? I could be wrong, but that's quite a strong reaction for what they asserted at face value. Not really trying to police anything here, I just thought the initial comment had merit and this devolved quite quickly.
Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it. AI helps them program anyway, and allows them to sometimes produce useful programs. That's it.
It's not a talking point. It's just the reality of what the technology enables, and it's a simple enough observation that millions of people can independently arrive at that conclusion, and some of them might even refer to it as "democratization".
This is a good thing. It's a filter for the careless, lazy, and incompetent. LLMs are to programming what a microwave is to food. I'm not a chef because I can nuke a hot pocket. "Vibe coders" (not AI-assisted coding) are the programming equivalent of the people on Kitchen Nightmares. Go figure, it's a community rife with narcissism, too.
It is what we are talking about, hence not "counterproductive".
That would not happen, simply because those companies' interests will never be entirely aligned. There are at least three state-of-the-art models at the moment, plus many open-weight models. Anthropic vs. the Pentagon is exactly what would play out.
And what precedent is there? Don't say Google, because search is alive and well.
> You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.
We have way more free resources at the moment. Name anything you'd like to learn, someone will be able to point you to a relevant resource. There are also better ways of surfacing that resource.
> This parroted argument
Most arguments here on HN have been discussed ad nauseam, for or against AI. It's only "parroted" (or biased) if it's against your own beliefs.
They absolutely are. Anytime new knowledge or skills become widely available to everyone, that's a term used for it.
> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
None of that has anything to do with anything. There's competition between companies to keep prices low and accessibility high.
I think you are simply misunderstanding the word "democratic". It isn't just political. From MW:
> 3 : relating, appealing, or available to the broad masses of the people : designed for or liked by most people
Here, it's specifically about making things available to the broad masses of the people that wasn't before.
This isn't a matter of opinion. It's just the meaning of the word.
Everyone already had the option to write any code, fork any open source project, publish any of their code, run any of their code — but suddenly AI appears and THAT is what makes it democratic? What was undemocratic about it before? Is this the democratic future you wish for, where idiots run AI agents that publish smear campaigns or harass maintainers for not accepting their slop?
How many job positions do you see today that want a backend developer? A frontend developer? Not many, because now everyone is expected to be at least full stack, if not devops as well. The exact same thing is playing out right now with AI: people are expected to produce 5x the code they did before, and if you don't, someone else who is willing to will take your job.
Already bloated programs will bloat further, they will require even more resources to run, you will have to pay even more for hardware, they will be slower, less responsive, you will have to pay yet another monthly fee to big tech for their AIs, and people will happily do it and pat themselves that we democratized programming, while running towards the future where nobody will be able to own hardware capable of general computing.
Why blame big tech when they're just providing a service at a fair cost (3rd party inference is incredibly cheap)? I'm not sure how that makes sense.
LOL. Maybe you are referring to OpenAI and Anthropic? Yes, they have Codex and Opus. But about 1-2 months behind them are Grok and Gemini, and then 2-3 months behind them are all the other models available in Cursor, from Chinese open-source models to Composer etc.
How you can possibly push this "big company takes everything away" narrative is beyond me, when you can probably use models for free that are about 2 months behind the best ones. This is probably the most decentralised tech boom ever.
(I mean, OpenAI is in such a bad state, I wouldn't be surprised if they lose almost their entire lead and user base within 6-12 months and end up basically at the level of small Chinese LLM developers.)
It would be like if you put in all this time to get fit and skilled on mountain bikes and there was a whole community of people, quiet nature, yada yada, and then suddenly they just changed the rules and anyone with a dirt bike could go on the same trails.
It's double damage for anyone who isn't close to retirement and built their career and invested time (i.e. opportunity cost) into something that might become a lot less valuable and then they are fearful for future economic issues.
I enjoy using LLMs and have stopped writing code, but I also don't pretend that change isn't painful.
However, our personal emotions need not turn into disparaging others' use of the same skills for their satisfaction / welfare / security.
Additionally, our personal emotions need not color the objective analysis of a social phenomenon.
Those two principles are the rationales behind my reply.
I suppose I see "any idiot" as a more general phrase, like "idiot proof", not directly meaning that anyone who uses a LLM is an idiot. However I can also see how it would be seen as disparaging.
Also, while there are a lot of examples of people entrenching in a certain behavior or status and causing problems, I also think society is a bit harsh on people who struggle with change. For people who are less predisposed to be OK with change, it feels like a lot of the time the response is "just deal with it and don't be selfish, this new XYZ is better for society overall".
Society is pretty much made up of personal emotions on some level. I don't think we should go around attacking people, but very few things can be considered truly objective in the world of societal analysis.
It was very democratized before, almost anyone could pick up a book or learn these skills on the internet.
Opportunity was democratized for a very long time, all that was needed was the desire to put in the work.
OP sounds frustrated, but at the same time the societal promise that worked for the longest time (spend personal time specializing and be rewarded) has been broken, so I can understand that frustration.
/s, obviously — or so I would hope, except I've actually seen this sentiment expressed seriously.
They are not obese because they cannot afford the necessary amounts of protein and calories from healthy sources in the grocery store.
It doesn't. Carbs like rice, potatoes, etc. are incredibly cheap. Protein like ground beef and basic cuts of chicken is not expensive. And broccoli, carrots, green peppers, apples — these are not exactly breaking the bank. Produce is seasonal, so you vary what you buy according to what is cheapest this week.
Meanwhile, stuff like breakfast cereal and potato chips and Oreo cookies actually are surprisingly expensive.
Eating too many carbs is not a healthy diet, dude.
It does not. Legumes, whole grains, vegetables, and yogurt have always been cheaper than processed food.
People prefer eating carbohydrates and saturated fats.
So it's not just software that's coming to an end; everything else is as well. But billionaires' wives will still need haircuts (women billionaires will also need haircuts), so hairdresser will be the last profession.
I gatekeep my bike, I keep it behind a gate. If you break the gate open and democratize my bike, you're an idiot.
Maybe it's of value that any idiot can do this, but we're still idiots.
You gatekeep your bike, you keep it behind a gate, you don't let anyone else ride it.
Your neighbor got a nicer bike for Christmas and rode it by your house, and now you are sad because you aren't the special kid with the bike any more; you are just a regular kid like your neighbor.
Gates were put in place for lawyers, doctors, and engineers (real ones, not software "engineers") because the cost of their negligence and malpractice was ruined lives and death. Gatekeeping has value.
Software quality, reliability, and security were already lousy before the advent of LLMs, making it increasingly clear that the gate needed to be kept. Gripes about "gatekeeping" are a dogwhistle for "I would personally benefit from the bar being lowered even further".
As for the comparisons - some are partly comparable to the current situation, but there's some differences as well. Sure books and online content enabled others to join, thereby reducing the "moat" for those who built careers on esoteric knowledge. But it didn't make things _that_ easy - it still required years of invested time to become a good developer. Also, it happened very gradually and while the developer pie was growing, and the range of tech growing, so developers who kept on top of technology (like OP did) could still be valuable. Of course, no one knows fully how it will play out this time around; maybe the pie will get even bigger, maybe there's still room for lots of developers and the only difference is that the tedious work is done. Sure, then it is comparable. But let's be honest, this has a very real chance of being different (humans inventing AI surely is something special!) and could result in skill-sets collapsing in value at record time. And perhaps worse, without opening new doors. Sure, new types of jobs may appear but they may be so different that they are essentially completely different careers. It is not like in the past you just needed to learn a new programming language.
Skill based one of course.
That said, if we zoom out and review such paradigm shifts over history, we find that they usually result in some new social contracts and value systems.
Both good expert writers and poor novice writers have been able to publish non-fiction books from a few centuries now. But society still doesn't perceive them as the same at all. A value system is still prevalent and estimated primarily from the writing itself. This is regardless of any other qualifications/disqualifications of authors based on education / experience / nationality / profession etc.
At the individual level too, just because book publishing is easy doesn't mean most people want to spend their time doing that. After some initial excitement, people will go do whatever are their main interests. Some may integrate these democratized skills into their main interests.
In my opinion, this historical pattern will turn out to be true with the superdrug as well as vibe coding.
Some new value will be seen in the swimming or running itself - maybe technique or additional training over and above the drug's benefits.
Some new value will be discovered in the code itself - maybe conceptual clarity, algorithmic novelty, structural cleanliness, readability, succinctness, etc. Those values will become the new foundations for future gatekeeping.
It's a nice idea, but I feel like that's only going to be the case for very small companies or open source projects. Or places that pride themselves on not using AI. Artisan code I call it.
At my company the prevailing thought is that code will only be written by AI in the future. Even if today that's not the case, they feel it's inevitable. I'm skeptical of this given the performance of AI currently. But their main point is, if the code solves the business requirements, passes tests and performs at an adequate level, it's as good as any hand written code. So the value of readable, succinct, novel code is completely lost on them. And I fear this will be the case all over the tech sector.
I'm hopeful for a bit of an anti-AI movement where people do value human created things more than AI created things. I'll never buy AI art, music, TV or film.
But I do agree, if everyone can build software then the allure of it along with the value will be lost. Vibe coding is only a superpower as long as you're one of the select few doing it. Although I imagine it will continue to become a niche thing, anyone who thinks everyone and their grandma will be vibing bespoke software is out to lunch.
Personally I think there is a certain je ne sais quoi about creating software that cannot be distilled to some mechanical construct, in the same way it exists for art, music, etc. So beyond assembly line programming, there will always be a human involved in the loop and that will be a differentiating factor.
Open research papers that everyone can access are democratizing knowledge. Accessible worldwide courses, maybe (like open universities).
But LLMs are not quite the same. This is taking knowledge from everyone and, in the best case, paywalling it.
I agree in spirit that the original comment was classist, but in this context your statements are also out of place, in my opinion.
-- from a 'principal engineer'
- What if these centralized providers had restricted their LLMs to a small set of corporations / nations / qualified individuals?
- What if Google that invented the core transformer architecture had kept the research paper to themselves instead of openly publishing it?
- What if the universities / corporations, who had worked on concepts like the attention mechanism so essential for Google's paper, had instead gatekept it to themselves?
- What if the base models, recipes, datasets, and frameworks for training our own LLMs had never been open-sourced and published by Meta/Alibaba/DeepSeek/Mistral/many more?
I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of this stuff were already discussed decades before the "Attention Is All You Need" paper; the novel thing they did was combining existing know-how into a new idea and making it public. A couple of the ingredients of the base research for this are decades old (interestingly, back then some European universities were leading the field).
I am not trying to be dismissive, but this could apply to all research ever
Cell phones made communication easier for exactly zero people even though billions have been sold. Why? Because they come from just a few different companies.
Similar story to cell phones.
LLMs are in this state right out the gate.
It's funny you say that, because I've seen plenty of the reverse elitism from "AI bros" on HN, saying things like:
> Now that I no longer write code, I can focus on the engineering
or
> In my experience, it's the mediocre developers that are more attached to the physical act of writing code, instead of focusing on the engineering
As if getting further and further away from the instructions that the CPU or GPU actually execute is more, not less, a form of engineering, instead of something else, maybe respectable in its own way, but still different, like architecture.
It's akin to someone claiming that they're not only still a legitimate novelist for using ChatGPT or a legitimate illustrator for using stable diffusion, but that delegating the actual details of the arrangement of words into sentences or layers and shapes of pigment in an image, actually makes them more of a novelist or artist, than those who don't.
I've been a tech lead for years and have written business critical code many times. I don't ever want to go back to writing code. I am feeling supremely empowered to go 100x faster. My contribution is still judgement, taste, architecture, etc. And the models will keep getting better. And as a result, I'll want to (and be able to) do even more.
I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Any "idiot" can build their own software tailored to how their brains think, without having to assemble gobs of money to hire expensive software people. Most of them were never going to hire a programmer anyway. Those ideas would've died in their heads.
Programming was already “democratized” in the sense that anyone could learn to program for free, using only open-source software. Making everyone reliant on a few evil megacorporations is the opposite of democratization.
It's the same sort of argument artists use when it comes to AI generated media, there obviously is a qualitative difference in the people now able to generate whatever they want versus needing to draw something by hand, so saying "they could've just learned to draw themselves" is not very convincing. People don't want to do that yet still get an output, and I see nothing wrong with that, and if you do, it's just another sort of gatekeeping, that the "proper" way is to learn it by hand.
Lastly, many, many open weight models exist.
One thing is for sure: LLMs will bring down the cost of software per some unit and increase the volume.
But..cost = revenue. What is a cost to one party is a revenue to another party. The revenue is what pays salaries.
So when software costs go down the revenues will go down too. When revenues go down lay offs will happen, salary cuts will happen.
This is not fictional. Markets already reacted to this and many software service companies took a hit.
You may not end up with a seat at the table.
I'm in the SaaS boat myself, but I sense a bit of dishonesty from senior devs complaining about technology stealing jobs. When it was them doing the stealing, it was fine. Now that the tables have turned, suddenly technology is bad.
But my take on this is that accountability will still be a purely human factor. It still is. I recently let go of a contractor who was hired to run our projects as a Scrum/PM, and his tickets were so bad (there were tickets with 3 words in them, one ticket was in the current sprint, that was blocked by a ticket deep in the backlog, basic stuff). When I confronted him about them, he said the AI generated them.
So I told him that:
1. That's not an excuse, his job is to verify what it generated and ensure it's still good.
2. It actually makes it look WORSE: not only did he do nearly zero work, he didn't even check the most basic outputs. And I'm not anti-AI; I expressly said that we should absolutely use AI tools to accelerate our work. But that's not what happened here.
So you won't get to say (at least I think for another few years) "my AI was at fault" – you are ultimately responsible, not your tools. So people will still want to delegate those things down the chain. But ultimately they'll have to delegate to fewer people.
I'm assuming that the software factory of the future is going to need Millwrights https://en.wikipedia.org/wiki/Millwright
But builders are builders. These tools turn ideas into things, a builder's dream.
I think, much like you, that AI is destroying and will just continue to destroy the economy! At least I got to sell a house and make a profit, stashed away for when the big AI market crash happens (hopefully not a 2030 great depression, though). Then it's a down market, and buying stocks, bitcoin and houses is always cheaper.
Because they can hire some "prompt engineer" to "steer the AI" for $30-50k instead of $150-$250k.
> But..cost = revenue.
That is Karl Marx's Labor theory of value that has been completely disproven.
You don't charge what it costs to build something, you charge the maximum the customer is willing to pay.
- First, the LTV was not Marx's idea. Adam Smith held the same view, as did many many others during this era. Marx refined this idea, but there's nothing about your point that is unique to his version of it.
- Second, while LTV is not widely used today, this is not because it was "completely disproven" (can you cite anything to back this claim up?). It is because economics shifted to a different paradigm based on marginal utility. These two frameworks operate at different levels of abstraction and address different aspects of the price of goods. There is actually empirical evidence of a correlation between the cost of a good and the cost of the labour, at an aggregate level.
- Third, Marx explicitly differentiated between _value_ and _price_. LTV deals with value exclusively (in other words, what happens when externalities impacting price are accounted for). He would have had no issue accepting that externalities impacting supply and demand would impact price.
The final irony of your comment is that the claim you are (incorrectly) analysing is actually also fully defensible under your (presumably) neoclassical view of economics. In competitive markets, reduced production costs lead to reduced equilibrium prices as competitors undercut each other. The proposition that in the long run, under competition, price tends toward cost is a standard result in microeconomics. The idea that "you charge the maximum the customer is willing to pay" only holds without qualification in monopoly, or in monopolistic competition with strong differentiation — precisely the conditions that increased software supply would erode.
Here's the other edge of that sword. A couple back-end devs in my department vibe-coded up a standard AI-tailwind front-end of their vision of revamping our entire platform at once, which is completely at odds with the modular approach that most of the team wants to take, and would involve building out a whole system based around one concrete app and 4 vaporware future maybe apps.
And of course the higher-ups are like "But this is halfway done! With AI we can build things in 2 weeks that used to take six months! Let's just build everything now!" Never mind that we don't even have the requirements yet, and nailing those down is the hardest part of the whole project. But the higher-ups never live through that grind.
It was missing years of backend work and had maybe 1/20th feature parity with what we already had, and it would have, in hindsight, been literally impossible to implement some of the things we would need in the future if we had gone down that path. But they were amazed by this flashy new thing that devs made in a weekend, which looked great but was actually a disaster.
I fail to see how this is any different than what people are complaining about with vibe coded LLM stuff a decade and a half later now? This was always being done and will continue to be done; it's not a new problem.
If it isn't a product that needs to solve problems reliably over time, then it was kind of silly to use a DBA who cost twice what a backend engineer did and only handled the data niche. We progressed from there, or regressed from there, depending on why we are developing software.
AI will have to take a different direction.
My worry is that any idiot can prompt themselves to _bad_ software, and the differentiator is in having the right experience to prompt to _good_ software (which I believe is also possible!). As a very seasoned engineer, I don't feel personally rugpulled by LLM generated code in any way; I feel that it's a huge force multiplier for me.
Where my concern about LLM generated software comes in is much more existential: how do we train people who know the difference between bad software and good software in the future? What I've seen is a pattern where experienced engineers are excellent at steering AI to make themselves multiples more effective, and junior engineers are replacing their previous sloppy output with ten times their previous sloppy output.
For short-sighted management, this is all desirable, since the sloppy output looks nice in the short term; overall, many organizations strategically think they are pointed in the right direction doing this and are happy to downsize while blaming "AI." And for places where this never really mattered (like "make my small business landing page"), this is a complete upheaval, without a doubt.
My concern is basically: what will we do long term to get people from one end to the other without the organic learning process that comes from having sloppy output curated and improved with a human touch by more senior engineers, and without an economic structure that allows "junior" engineers to subsidize themselves with low-end work while they learn? I worry greatly that in 5-10 years many organizations will end up with 10x larger balls of "legacy" garbage and 10x fewer knowledgeable people to fix them. For an experienced engineer I actually think this is a great career outlook, and I can't understand the rug-pull take at all; I think that today's strong and experienced engineer will command a great deal of money and prestige in five years as the bottom drops out of software. From a "global outcomes" perspective this seems terrible, though, and I'm not quite sure what the solution is.
It was a sobering moment for me when I sat down to look back at the places I have worked over my career of 20-odd years. The correlation between high-quality code and economic performance was not just nonexistent, it was almost negative. As in: whenever I have worked at a place where engineering felt like a true priority, where tech debt was well managed and principles were followed, that place was not making any money.
I am not saying that this is a general rule, of course there are many places that perform well and have solid engineering. But what I am saying is that this short-sighted management might not be acting as irrationally as we prefer to think.
But, I have definitely seen failure due to persistent technical mistakes, as well, especially when combined with human factors. There’s a particularly deep spiral that comes from “our technical leadership made poor choices or left, we don’t know what to invest in strategically so we keep spending money on attempted refactors, reorgs, or rewrites that don’t add more value, and now nobody can fix or maintain the core product and customers are noticing;” I think that at least two companies I’ve worked at have had this spiral materially affect their stock price.
I think that generative coding can both help and hurt along this axis, but by and large I have not seen LLMs be promising at this kind of executive function (i.e., "our aging codebase is getting hard to maintain; what do we need to do to ensure that it doesn't erode our ability to compete?").
1. We'll train the LLMs not to produce sloppy code.
2. We'll come up with better techniques for building guardrails to help.
Making up examples:
* right now, lots of people code with no tests. LLMs do better with tests. So: train LLMs to write new and better tests.
* right now, many things are left untested because it's work to build the infrastructure to test them. Now we have LLMs to help us build that infrastructure, so we can use it to make better tests for LLMs.
* ...?
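To make the tests-as-guardrails idea above concrete, here is a minimal sketch. The function (`slugify`) and the rules it encodes are invented purely for illustration; the point is only that a handful of explicit assertions gives an agent something to code against, so a "refactor this" prompt can't silently change behavior:

```python
# Hypothetical example: a tiny spec-as-tests file an agent can be pointed at.
# The function under test (slugify) and its rules are made up for illustration.

def slugify(title: str) -> str:
    # Reference behavior the tests pin down; an LLM asked to rewrite or
    # optimize slugify must keep these behaviors intact to pass.
    return "-".join(title.lower().split())

def test_slugify():
    # Each assertion encodes one rule the agent must not regress.
    assert slugify("Hello World") == "hello-world"             # spaces become hyphens
    assert slugify("  Already   spaced ") == "already-spaced"  # whitespace collapsed
    assert slugify("MiXeD Case") == "mixed-case"               # lowercased

test_slugify()
```

The value is less in these particular assertions than in the loop they enable: the agent runs the tests after every change and self-corrects, instead of the human eyeballing every diff.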
No, it can't. I use Claude Code and AMP a lot, and yet, unless I pay attention, they easily generate bad code, introduce regressions while trying to fix bugs, and get stuck on suboptimal ideas. Modularity is usually terrible; 50-year-old ideas like cohesion and coupling are, by the very nature of the thing, mostly ignored except in the most formally rigid kinds of mimicry introduced by post-training.
Coding agents are wonderful tools, but people who think they can create and maintain complex systems by themselves are not using them in an optimal way. They are being lazy, or they lack software engineering knowledge and can't see the issues; in that case they should be using the time saved by coding agents to read hard stuff and elevate their technique.
It may look the same, but it isn't the same.
In fact if you took the time to truly learn how to do pure agentic coding (not vibe coding) you would realize as a principal engineer you have an advantage over engineers with less experience.
The more war stories, the more generalist experience, the more you can help shape the llm to make really good code and while retaining control of every line.
This is an unprecedented opportunity for experienced devs to use their hard won experience to level themselves up to the equivalence of a full team of google devs.
What I want when I'm coding, especially on open source side projects, is to retain copyright licensing over every line (cleanly, without lying about anything).
Whoops!
Creators who are ripped off care. IP is more logical than land ownership, since new things have been created, whereas no one created the land. Land is just stolen and defended.
For starters, because of the western values of giving credit.
We have diseases named after people, never mind inventions and ideas.
Plagiarism is kick-out-of-school grade academic misconduct, whereby you are pretending that someone's work (and the ability it implies) is your own.
> The only sort of property that actually exists is real and tangible.
Remember, I'm talking about works that are free to redistribute, use and even modify. Or in other cases, that the users to whom a compiled work is distributed have access to the buildable source code.
The authors put their names on it, and terms which says that their notices are to be preserved when copies are made.
This isn't good enough for the Altmans and Amodeis of the world.
> it's an issue due to the effect that students won't learn well if they just copy everything
... and fraudulently obtain professional licensing, and use that to cause harm: medical malpractice, unsafe engineering.
It is fraud.
In the case of copyleft licenses like GPL, IP is applied in a way to ensure that users have the code.
These things are taken away when the code is laundered through AI.
Photography started displacing painting as a form of portraiture, but displacing a technique is not the same thing as appropriating the work itself.
No they can't. They think they can, but they will still need to put in the elbow grease to get it done right.
But, in my case (also decades of experience), I have had to reconcile with the fact that I'll need to put down the quill pen, and learn to use a typewriter. The creativity, ideas, and obsession with Quality are still all mine, but the execution is something that I can delegate.
I grew up without a mentor, and my understanding of software stalled at certain points. When I couldn't get a particular OS API to work, Google and Stack Overflow didn't exist, and I had no one around me to ask. I wrote programs for years by just working around it.
After decades writing software I have done my best to be a mentor to those new to the field. My specialty is the ability to help people understand the technology they’re using, I’ve helped juniors understand and fix linker errors, engineers understand ARP poisoning, high school kids debug their robots. I’ve really enjoyed giving back.
But today, pretty much anyone except maybe a middle schooler could type their problems into ChatGPT and get a more direct answer than I would be able to give. No one particularly needs mentorship as long as they know how to use an LLM correctly.
That said, I still feel strongly about mentorship. It's just that you can spend your quality time with the busy person on higher-level things, like relationship building, rather than on more basic questions.
Can't just offload all the hard things to the AI and let your brain waste away. There's a reason brain is equated to a muscle - you have to actively use it to grow it (not physically in size, obviously).
But I can tell you that, just like with most things in life, this is yet another area where we are increasingly getting to do just the things we WANT to do (like think about code or features and have it appear, pixel pushing, smoothing out the actual UX, porting to faster languages) and not have to do things most people don't want to do, like drudgery (writing tests, formatting code, refactoring manually, updating documentation, manually moving tickets around like a caveman). Or to use a non tech example, having to spend hours fixing word document formatting.
So we're getting more spoiled. For example, kids have never waited for a table at a restaurant for more than 20 mins (which most people used to do all the time before abundant food delivery or reservation systems). Not that we ever enjoyed it, but learning to be bored, learning to not just get instant gratification is something that's happening all over in life.
Now it's happening even with work. So I honestly don't know how it'll affect society.
The "as long as they know how..." is doing a lot of work there.
I expect developers with mentors who help give them the grounding they need to ask questions will get there a whole lot faster than developers without.
If, as a principal engineer, you were performing basic work that can easily be replicated by an LLM, then you were wasted and mistasked.
Firstly, high-end engineers should be working on the hard work underlying advances in operating systems, compilers, databases, etc. Claude currently couldn't write competitive versions of Linux, GCC (as recently demonstrated), BigQuery, or Postgres.
Secondly, and probably more importantly, LLMs are good at doing work in fields already discovered and demonstrated by humans, but there's little evidence of them being able to make intuitive or innovative leaps forwards. (You can't just prompt Claude to "create a super-intelligent general AI"). To see the need for advances (in almost any field) and to make the leaps of innovation or understanding needed to achieve those advances still takes smart (+/- experienced) humans in 2026. And it's humans, not LLMs, that will make LLMs (or whatever comes after) better.
Thought experiment: imagine training a version of Claude, only all information (history, myriad research, tutorials, YouTube takes and videos, code for v1, v2, etc.) related to LLMs is removed from the training data. Then take that version and prompt it to create an LLM. What would happen?
Story: I've been a dev for about 20 years. The first time I had exactly this feeling was when desktop UI was fading away in favor of HTML. I missed the beauty of C# WinForms controls, with all their alignment and properties. My experience felt irrelevant. ASP.NET (a framework that was sold as "the web for backend developers") looked like an evil joke.
The next time it happened was with the rise of the cloud. Were all my lovingly crafted bash scripts and notes about Unix commands irrelevant now? This time, however, it wasn't that personal for me.
The next time: the fall of Scala as the primary language in big data, and its replacement with Python. This time it was pretty routine.
Oh, and databases... how many times have I heard that the RDBMS is obsolete and everybody should use Mongo/Redis/ClickHouse?
So learn new things and carry on. Understanding how "obsolete" things work helps a lot in avoiding silly mistakes, especially when the world is literally reinventing the bicycle.
Even regarding "chase something complex and difficult", there are currently only so many needs for that, so I think any given person is justified fearing they won't be picked. It may be several years between AI eating all the CRUD work from principal down, and when it expands the next generation of complex work on robotics or whatever.
Also, to speak on something I'm even less qualified about: the economy feels weak, so I don't have a lot of hope for either businesses or entrepreneurs to say "Let's just start new lines of business now that one person can do what used to take a whole team." The businesses are going to pocket the safe extra profits, and too many entrepreneurs are not going to find a foothold regardless of how fast they can code.
It was never about writing the code—anyone can do that, students in college, junior engineers…
Experience is being able to recognize crap code when you see it, recognizing blind alleys long before days or weeks are invested heading down them. Creating an elegant API, a well structured (and well-organized) framework… Keeping it as simple as possible that just gets the job done. Designing the code-base in a way that anticipates expansion…
I've never felt the least bit threatened by LLMs.
Now if management sees it differently and experienced engineers are losing their jobs to LLMs, that's a tragedy. (Myself, I just retired a few years ago, so I confess to no longer having a dog in this race.)
Retired, I have continued to code, and have used Claude to vibe-code a number of projects; initially I did so out of curiosity about how good LLMs are, and then to handle things like SwiftUI that I am hesitant to have to learn.
It's true then that I am not in a position of employment where I have to consider a performance review, pleasing my boss or impressing my coworkers. I don't doubt that would color my perception.
But speaking as someone who has used LLMs to code, while they impress me, again, I don't feel the threat. As others have pointed out in past threads here on HN, on blogs, LLMs feel like junior engineers. To be sure they have a lot of "facts" but they seem to lack… (thinking of a good word) insight? Foresight?
And this too is how I felt as I was aging out of my career and watched clever junior engineers come on board. The newness, like Swift, was easy for them. (They have no doubt rushed headlong into SwiftUI and mastered it.) Never, though, did I feel threatened by them.
The career itself, I have found, does in fact care little for "grey beards". I felt by age 50 I was being kind of… disregarded by the younger engineers. (It was too bad, I thought, because I had hoped that on my way out of the profession I might act more as mentor than coder. C'est la vie!)
But for all the new engineer's energy and eagerness, I was comfortable instead with my own sense of confidence and clarity that came from just having been around the block a few times.
Feel free to disregard my thoughts on LLMs and the degree to which they are threatening the industry. They may well be an existential threat. But, with junior engineers as also a kind of foil, I can only say that I still feel there is value in my experience and I don't disparage it.
When you have had to tackle dozens of frameworks/libraries/API over the years, you get to where you find you like this one, dislike that one.
Get/Set, Get/Set… The symmetry is good…
Calling convention is to pass a dictionary: all the params are keys. Extensible, sure, but not very self-documenting, kind of baroque?
An API that is almost entirely call-backs. Hard to wrap your head around, but seems to be pretty flexible… How better to write a parser API anyway?
(You get the idea.)
And as you design apps/frameworks yourself, then have to go through several cycles of adding features, refactoring, you start to think differently about structuring apps/frameworks that make the inevitable future work easier. Perhaps you break the features of a monolithic app into libraries/services…
None of this is novel; it's just that doing enough of it, putting in the sweat and hours (and screwing up a number of times), is where "taste" (insight?) comes from.
It's no different from anything else.
Perhaps the best way to accelerate the above though is to give a junior dev ownership of an app (or if that is too big of a bite, then a piece of a thing).
"We need an image cache," you say to them. And then it's theirs.
They whiteboard it, they prototype it, they write it, they fix the bugs, they maintain it, they extend it. If they have to rewrite it a few times over the course of its lifetime (until it moves into maintenance mode), that's fine. It's exactly how they'll learn.
But it takes time.
> Nobody tells this to people who are beginners, and I really wish somebody had told this to me.
> All of us who do creative work, we get into it because we have good taste. But it's like there is this gap. For the first couple years that you're making stuff, what you're making isn't so good. It’s not that great. It’s trying to be good, it has ambition to be good, but it’s not that good.
> But your taste, the thing that got you into the game, is still killer. And your taste is good enough that you can tell that what you're making is kind of a disappointment to you. A lot of people never get past that phase. They quit.
> Everybody I know who does interesting, creative work they went through years where they had really good taste and they could tell that what they were making wasn't as good as they wanted it to be. They knew it fell short. Everybody goes through that.
> And if you are just starting out or if you are still in this phase, you gotta know it's normal and the most important thing you can do is do a lot of work. Do a huge volume of work. Put yourself on a deadline so that every week or every month you know you're going to finish one story. It is only by going through a volume of work that you're going to catch up and close that gap. And the work you're making will be as good as your ambitions.
> I took longer to figure out how to do this than anyone I’ve ever met. It takes awhile. It’s gonna take you a while. It’s normal to take a while. You just have to fight your way through that.
> —Ira Glass
I'm excited to work with AI. Why? Because it magnifies the thing I do well: Make technical decisions. Coding is ONE place I do that, but architecture, debugging etc. All use that same skill. Making good technical decisions.
And if you can make good choices, AI is a MEGA force multiplier. You just have to be willing to let go of the reins a hair.
Any suggestions to overcome this deficit in design experience? My best guess is to read some texts on code design or alternatively get a job at a place to learn design in practice. Mainly learning javascript and web app development at the moment.
*Who has had a career in a previous field, and doesn't necessarily think that learning programming will lead to another career (and is okay with that).
I can tell you: Your problems are a layer higher than you think.
Coding, architecture, etc. get the face time. But process and discipline are where the money is made and lost in AI.
To give a minor example: My first attempt at a major project with AI failed HORRIBLY. But I stepped back and figured out why. What short-comings did my approach have, what short-comings did the AI have. Root Cause Analysis.
Next day I sat down with the AI and developed a PLAN of what to do. Yes, a day spent on a plan.
Then we executed the plan. (Or it did, and I kept it on track and fixed problems in the plan as things happened.) On the third day I'd completed a VERY complex task. I mean STUPIDLY complex: something where I knew WHAT I wanted to do, and roughly how, but not the exact details, and not at a level where I could implement it. I'm sure 1-2 weeks of research could have taught me. Or I could let the AI do it.
... And that formed my style of working with AI.
If you need a mentor pop in the Svalboard discord, and join #sval-dev. You should be able to figure out who I am.
My experience is the opposite. Those with a passion for the field and the ability to dig deeply into systems are really excited right now (literally all that power just waiting to be guided to do good...and oh does it need guidance!). Those who were just going through the motions and punching a clock are pretty unmotivated and getting ready to exit.
Sometimes I dream about being laid off from my FAANG job so I have some time to use this power in more interesting ways than I do at work (although I already get to use it in fairly interesting ways in my job).
I'm having a lot of fun with AI. Any idiot can't prompt their way to the same software I can write. Not yet anyways.
Specifically, the implication that high LLM affinity implies low professional competence.
"My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM."
Strong disagree.
I've earned my wings: 5 years of realtime rendering in world-class teams, 13 years in AEC CAD developing software to build the world around us. In the past two years I designed and architected a complex modeling component for my employer's map offering, plus led the initial productization and rendering efforts.
Now I've managed to build in my freetime the easy-to-use consumer/hobbyist CAD application I always wanted - in two years[0].
The hard parts, the ones that are novel and value-adding, are specific, complex, and hand-written. But the amount of ungodly boilerplate needed to implement the vision would have taken either a) a team and funding or b) 10 years.
It's still raw and alpha, but it's coming together. It would have been totally impossible without Claude, Codex, and Cursor.
I do agree I'm not an expert in several of the non-core technologies used - webview2 for .net for example, or xaml. But I don't have to be. They are commodity components, architected to their specific slot, replaceable and rewritable as needed.
As an example: for a component I _had_ professional competence in 15 years ago, OpenGL, I don't need to re-learn anything. I can just quickly spec the render passes, stencil states, shader techniques, etc., and have the LLM generate most of that code in place. If you select decades-old technologies and techniques and know what you want, the output is very usable most of the time (20-year-old realtime rendering is practically timeless already, and good enough for many, many things).
Not trying to be rude, just generating some empathy for the OP's situation, which I think was missed: Like them, there is something you are passionate about that there is no longer really a point to. You could argue "but people will need to use my tool to generate really _good_ CAD drawings" but how much marginal value does that create over getting a "good enough" one in 2 minutes from Claude?
I feel sorry for bringing this up, but I think you might have missed how the thing that makes this possible makes it unnecessary.
Note that my critique was of labeling all of us LLM enthusiasts "incompetents" by association, which I believe is an incorrect assumption.
The point raised that more people can now code I think was a correct one though. I think that’s a net benefit.
Let me be brief. There are two topics here: CAD & AI, and AI & society, which I think is the underlying point we are discussing.
I appreciate that you made a domain-specific example, but like _all_ AI workflows, it does not really hold up unless one is extremely specific about what the workflow is.
First of all, if someone is making a CAD tool for drawings, that's not really a segment. All 3D design tools target a specific content workflow with a specific domain model. Drawings are one possible output from this domain model, just like the on-screen 3D presentation or a 3MF file you get on export.
Whatever the LLM's competency level, it does not come with its own domain model. Real people want to configure the models they create. This means there needs to be a domain model hooked up to the LLM to get a stable model with specific editable components.
So if you are prompting a model, you are still better off prompting against the domain model in a real CAD package.
So I don’t think CAD packages will die.
Second: I'm mainly trying to serve _my_ need (which I believe is shared by others). My need is to design 3D models with minimum effort, in an environment that has perfect undo, perfect booleans, versioning, snapshotting, and intuitive parametricity. This package did not exist in the market before.
Will it get traction? I would expect there are a lot of human users who want to create models themselves. Computer chess did not kill chess, etc.
To be super specific, there is a clear wedge in the market between Tinkercad and Fusion360 for an affordable desktop offering with the above features.
I do realize my market thesis is just a hypothesis at this point. Which is fine; it's a passion project. I hope it will be useful for others, but if not, at least I will have the tool I want.
I’m mainly excited about the possibility of being able to ship to test my market hypothesis.
Without LLM tools I would not be able to ship.
Regarding society:
I believe we are discussing a normal destructive phase of the innovation cycle: machine looms, weavers, Luddites, new forms of labour, etc.
Regarding living standards the main worry is - can ”normal” people exist above poverty?
I guess the markets will want to have consumers in the future so either there will be new jobs or some form of basic income.
It’s possible I’m wrong as well.
I have no idea if democracies will survive.
Playing with Claude, if you tell it to do something, it'll produce something. Sometimes its output is okay, sometimes it's not.
I find I need to iterate with Claude: tell it no, tell it how to improve its solution or to do something a different way. It's kind of like speed-running iteration over my ideas without spending a few hours doing it manually, writing lots of code and then deleting it to arrive at my final solution.
If I had no prior coding knowledge, I'd go with whatever the LLM gave me and end up with poor-quality applications.
Knowing how to code still gives you an advantage when using an LLM. That said, I'm pessimistic about what my future holds as an older software engineer: I'm starting to find that age/experience is an issue when an employer can pay someone with less experience less money to churn out code with prompts, since much of the time the industry lives by "it's good enough".
You sound quite jaded. The people I see struggling _the most_ at prompting are people who have not learned to write elegantly. HOWEVER, a huge boon is that if you're a non-native English speaker and that got in your way before, you can now prompt in your native language. Chinese speakers in particular have an advantage since you use fewer tokens to say the same thing in a lot of situations.
> Talk about a rug pull!
Talk to product managers and people who write requirements for a living. A PM at MSFT spoke to me today about how panicked he and other PMs are right now. Smart senior engineers are absorbing the job responsibilities of multiple people around them since fewer layers of communication are needed to get the same results.
What's missing from this is that iconic phrase that all the AI fans love to use: "I'm just having fun!"
This AI craze reminds me of a friend. He was always artistic, but because of the way life goes he never really had the opportunity to actively pursue art and drawing skills. When AI first came out, and specifically MidJourney, he was super excited about it and used it to make tons and tons of pictures of everything his mind could think of. However, after a while this excitement waned, and he realized that he hadn't actually learned anything at all. At that point he decided to find some time to practice drawing, so he could make things with his own skills, not via some chip on the other side of the world, and he has greatly improved in the past couple of years.
So, AI can certainly help create all the "fun!!!" projects for people who just want to see the end result, but in the end would they actually learn anything?
My greatest frustration with AI tools is along a similar line. I’ve found that people I work with who are mediocre use it constantly to sub in for real work. A new project comes in? Great, let me feed it to Copilot and send the output to the team to review. Look, I contributed!
When it comes time to meet with customers let’s show them an AI generated application rather than take the time to understand what their existing processes are.
There’s a person on my team who is more senior than I am and should be able to operate at a higher level than I can who routinely starts things in an AI tool but then asks me to take over when things get too technical.
In general I feel it’s all allowed organizations to promote mediocrity. Just so many distortions right now but I do think those days are numbered and there will be a reversion to the mean and teams will require technical excellence again.
Suppose you get out of your comfort zone to do something entirely new; AI will be much more helpful for you than it is for people who spent years developing their skills.
AI is the great equalizer.
However, this can also be an opportunity to gain some understanding about our nature and our minds. Through that understanding, we can free ourselves from suffering, find joy, and embrace life and the present moment as it is.
I am just finishing the book The Power of Now by Eckhart Tolle, and your comment made me think about what is explained in it. Tolle talks about how much of our suffering comes from how deeply we (understandably) tie our core identity and self-worth to our external skills, our past achievements, and our status among peers.
He explains that our minds construct an ego, with which we identify. To exist, this ego needs to create and constantly feed an image of itself based on our past experiences and achievements. Normally we do this out of fear, in an attempt to protect ourselves, but the book explains that this never works. We actually build more suffering by identifying with our mind-constructed ego. Instead of living in the present and accepting the world as it is, we live in the past and resist reality in order to constantly feed an ego that feels menaced.
The deep expertise you built is real, but your identity is so much more than just being a 'principal engineer'. Your real self is not the mind-constructed ego or the image you built of yourself, and you don't need to identify with it.
The book also explores the Buddhist concept that all things are impermanent, and by clinging to them we are bound to suffer. We need to accept that things come and go, and live in the present moment without being attached to things that are by their nature impermanent.
I suggest you might take this distress you are feeling right now as an opportunity to look at what is hurting inside you, and disidentify yourself from your ego. It may bring you joy in your life—I am trying to learn this myself!
While I think rationally what you said is good and makes sense, at the same time it feels like it says you should forget your roots and be this impermanent being existing in the present and only the present. I value everything about my life, the past, my role models when I was a kid, my past and current skills, all friends from all ages, my whole path essentially. When considering current choices I have to make, I feel more drawn to think "What has been my path and values previously, and what makes sense now?" instead of forgetting the past and my ego and just hustling with the $CURRENT technology.
At least that's how I have thought about my ego when I have tried to approach it with topics like these. It might allow me to make more money in the present if I just disidentified with it, but that thought legitimately feels horrifying because it would mean devaluing my roots.
Interested to hear your take on this.
Disidentifying from your ego doesn't mean you have to act like a stateless robot with amnesia. Your past experiences, your role models, and your skills are still there for you to recall; they are tools that help guide your decisions. Disidentifying just means you don't let the mind-constructed image of those things define who you are. It means you don't have to constantly mull over the past, and you don't feel threatened when the things you valued in the past end or change.
However, I was really struck by your comment that disidentifying would feel horrifying because it would mean "devaluing your roots" to make more money. I am wondering if this is what you really think.
Imagine if letting go of that specific past identity led you to a truly marvelous opportunity in the present: not just more money, but working with wonderful people, doing engaging things, and being genuinely happy. Would that really be horrifying just because it didn't perfectly align with your roots? Probably not.
I suspect what you actually find horrifying isn't "devaluing your roots," but rather the idea of selling out. The real nightmare is getting a well-paid but completely soulless job where you are unhappy, working on things you don't care about, or being treated like a disposable cog who just takes orders.
Just my two cents, I am no spiritual guide!
That remains to be seen. There's a huge difference between an experienced engineer using LLMs in a controlled way, reviewing their code, verifying security, and making sure the architecture makes sense, and a random person vibecoding a little app - at least for now.
Maybe that will change in a year or two or five or never, but today LLMs don't devalue expert knowledge. If anything, LLMs allow expert programmers to increase productivity at the same level of quality, which makes them even more valuable compared to entry-level programmers than they were before.
also a very egocentric & pessimistic way to look at things. humankind is much better off when anyone can produce software, and skilled experts will always be needed, just maybe with a slightly different skillset.
Really?
The vibe coders are running into a dark forest with a bunch of lobsters (OpenClaw) getting lost and confused in their own tech debt and you're saying they can prompt their way to the same software?
Someone just wiped their entire production database with Claude, and you believe your experience counts for nothing to companies that need stable infrastructure and predictability?
Cognitive debt is a real thing and being unable to read / write code that is broken is going to be an increasing problem which experienced engineers can solve.
Do not fall for the AI agent hype.
Problem is, it's the people in higher positions who should be aware of that, except they don't care. All they see is how much more profit the company can make if it reduces its workforce.
Plenty of engineers do realize that AI is not some magical solution to everything - but the money and hype tends to overshadow cooler heads on HN.
The ones who are frustrated are the ones who were interested in the doing (whether good or bad) but are being told by everyone that it is not worth doing anymore.
All this senior engineering experience is a critical advantage in these new times: if you are that experienced, you implicitly phrase requests slightly differently and circumvent these showstoppers without even thinking. You don't even need to read the code closely; a glimpse at the folder and a scroll through meters of files with inline "pragmatic" snippets, and you know it's wrong without even stepping through it, even if the autogenerated vanity unit tests all say green.
Don't feel let down. It's a bit like when Google sprang into existence: everyone has access and can find stuff, but knowing how to search well is an art that even today most people don't have, and it makes a dramatic difference in everyday usage. That's amplified now by AI search results, which are often just convincing nonsense that most people can't see through. That intuitive feel from hard-won experience for what is "wrong", even without an instant answer for what would be "right", is more and more the differentiator.
Anyone can force their vibe-coded app into some shape that's sufficient for their own daily use, and they get used to avoiding the pitfalls they know are there in the tool they created. But as soon as any kind of scaling (scope, users, revenue, ...) is involved, true experts are needed.
Even the new agent tools like Claude for X products at the end perform dramatically different in the hands of someone who knows the domain in depth.
Not only would it be good if it were true; it is also not true. Good programmers know how to build things, for the most part, since they know what to build and have a general architectural idea of what they are going to build. Without that, you are like the average person in the 90s with Corel Draw in their hands, or the average person with an image diffusion model today: the output will be terrible for lack of taste and ideas.
LLMs goof up, hallucinate, make many mistakes - especially in design or architecting phase. That's where the experience truly shines.
Plus, it lets you integrate things that you aren't good at (UI for me).
What used to take incompetent developers 5 days - it is still taking them 5 days.
All the tools I passed up building earlier in my career because they were too laborious to build, are now quite easy to bang out with Claude Code and, say, an hour of careful spec writing...
The mediocre programmers who are toxic gate keepers seem to be the ones most upset by it.
But TBH, I have been a bit "shocked" by AI as well. It's much more troubling than the coming of the internet. But having worked with AI extensively for the past 1-2 years, I'm confident these models still miss the important things: how to build the abstractions that satisfy the non-code constraints (like ease of maintenance, explainability to others, etc.)
And the way it goes at the moment shows no sign of progress in that area (throwing more agents at a problem will not help).
The reality is that in the Chardet theft, at least 2000 people supported Mark Pilgrim, and almost no one supported the three programmers who constantly blog about AI and try to reprogram people.
Incidentally, everyone who unironically uses the word "gate keeper" is mediocre.
For me, it feels more like a way to integrate search results immediately into my code. Did you also feel threatened by Stack Overflow?
If you actually try it you'll find it's a multiplier of insight and knowledge.
If instead it was building and delivering products / business value. Good judgement, coordination and communication skills, intuition, etc… then you are now way way more leveraged than you ever were and it has never been greater.
They simply can't in my experience. Most people cannot prompt their way out of a wet paper sack. The HN community is bathed in thoughtful, high quality writing 24/7/365, so I could see how a perception to the contrary might develop.
Painting used to be the main way to make portraits, and photography massively democratized this activity. Now everyone can have as many portraits as they want
Photography became something so much larger
Painting didn't disappear though
Market frictions cause the problem to be solved multiple times.
LLMs learn the solution patterns and apply them, devaluing coming up with solutions in the first place.
For me, LLMs just help a lot with overcoming writer's block and other ADHD related issues.
Watching this program do stuff is more enjoyable than using or looking at the stuff produced.
But it doesn't produce code that looks or is designed the way I would normally. And it can't do the difficult or novel things.
Hence, you are back in the group of those who should benefit from LLMs. Following your own logic :)
Ps: please don’t take it seriously
Well, that's not where the main value of software actually lies, is it? It's not about prompting a one-shot app. Sure, there will be some millionaires who make an app super successful by coincidence (Flappy Bird, e.g.), but in most cases software & IT engineering is about the context, integration, processes, maintenance, future development, etc.
So actually you are in perfect shape?
And no worries: the ones who weren't good at writing code will now fail because of administration/uptime/maintenance/support. They will fail just one step later.
Do you like the craft of programming more than the outcomes? Now you are in a better position than ever to achieve things.
Embrace
In the hands of a knowledgeable engineer these tools can save a lot of drudge work because you have the experience to spot when they’re going off the rails.
Now imagine someone who doesn’t have the experience, and is not able to correct where necessary. Do you really think that’s going to end well?
LLM's remove much of the drudgery of programming that we unfortunately sort of did to ourselves collectively.
There will be more code with lower quality. If you want to be valued for your expertise, you need to find niches where quality has to stay high. In a lot of the SaaS-world, most products do not require perfection, so more slop is acceptable.
Or you can accept the slop, grind out however more years you need to retire, and in the meanwhile find some new passion.
I've been programming for 40 years, and I've been on both sides. I love how easy it is to be in the flow when writing something that stretches my abilities in Common Lisp, and I thoroughly enjoy the act of programming then. But coding a frontend in React, or yet another set of Python endpoints, is just necessary toil to a desired endpoint.
I would argue that people like you are now in the perfect position to help drive what software needs writing, because you understand the landscape. You won't be the one typing, but you can still be the one architecting it at a much higher level. I've found enjoyment and solace in this.
I felt what you describe feeling. But it lasted like a week in December. Otherwise there’s still tons of stuff to build and my teams need me to design the systems and review their designs. And their prompt machine is not replacing my good sense. There’s plenty of engineering to do, even if the coding writes itself.
If you really think it's the reality, then your expert knowledge is not that good to begin with.
There are many aspects of software engineering that are fun, but the purely mechanical part gets old quickly; there are only so many times you can type "emplace" and feel fulfilled. I'm finding that Copilot is extremely good at that part.
When it comes to producing code with an llm, most noobs get stuck producing spaghetti and rolling over. It is so bad that I have to go prompt-fix their randomly generated architecture, de-duplicate, vectorize and simplify.
If they lack domain knowledge on top of being a noob it is a complete disaster. I saw llm code pick a bad default (0) for a denominator and then "fix" that by replacing with epsilon.
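A minimal sketch of that anti-pattern (the function names and numbers are mine, invented for illustration, not from the actual code the commenter saw):

```python
# Anti-pattern described above: the LLM picks 0 as a default
# denominator, then "fixes" the resulting ZeroDivisionError by
# swapping in a tiny epsilon instead of questioning the default.
def rate_llm_style(events, duration=0.0):
    eps = 1e-9  # papers over the bad default; the bug is now silent
    return len(events) / (duration + eps)  # "rate" explodes toward infinity

# What a domain-aware version looks like: a zero or missing duration
# is an error (or a defined special case), not something to fudge.
def rate(events, duration):
    if duration <= 0:
        raise ValueError("duration must be positive")
    return len(events) / duration
```

The epsilon version never crashes, which is exactly why it is worse: a nonsensical default now flows silently into downstream numbers.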
It isn't the end, it is a new beginning. And I'm excited.
I've been working with computers since an Apple ][+ landed in our living room in the early 80s.
My perspective on what AI can do for me and for everyone has shifted dramatically in the last few weeks. The most recent models are amazing and are equipping me to take on tasks that I just didn't have the time or energy for. But I have the knowledge and experience to direct them.
I haven't been this enthused about the possibilities in a long time.
This is a huge adjustment, no doubt. But I think if I can learn to direct these tools better, I am going to get a lot done. Way more than I ever thought possible. And this is still early days!
Just incredible stuff.
I consider myself to have been a 'pretty good' programmer in my heyday. Think 'assembly for speed improvements' good.
Then came the time of 'a new framework for everything, relearn a new paradigm every other week. No need to understand the x % 2 == 0 if we can just npm an .iseven()' era ... which completely destroyed my motivation to even start a new project.
LLMs cut the boilerplate away for me. I've been back building software again. And that's good.
So now I spec it out, feed it to an LLM, and monitor it while having a cup of tea. If it goes off the rails (it usually does) I redirect it. Way better than banging it out by hand.
Now I am like a perfect weapon because I have the wisdom to know what I want to build and I don't have to translate it to an army of senior engineers. I just have Github Copilot implement it directly.
I have been thinking about the "same software"
Because I remember when Sonnet 4.5 came out, I made comments at the time as well that I just wanted AI to stop developing further, since the more it develops, the more harm it does to the economy/engineers than benefit in totality.
It was good enough to make scripts; I could make random scripts/one-off projects, something I couldn't do previously, though I still copy-pasted the code and ran the commands myself, and I let it choose the language and everything. At that time, all I wanted was for the models to get smaller/open source.
Now, I would say that even an idiot making software with AI is going to hit AI fatigue at one point or another, and working with agents just feels so detached.
I do think that we would've been better off in society if we could've stopped the models at sonnet 4.5. We do now have models which are small and competitive to sonnet (Qwen,GLM,[kimi is a little large])
I wouldn't say I'm a 10x-er, but I'm comfortable enough with my abilities nowadays to say I am definitely "above average", and I feel beyond empowered. When I joined college 15 years ago, I felt like I was always 10 steps ahead of everyone else, and in recent years that feeling had sort of faded. Well, I've got that feeling back! So much of the world around me feels frozen in place, whereas I am enjoying programming perhaps as much as when I learned it as a little kid. I didn't know I MISSED this feeling, but I truly did!
Everything in my daily life (be it coding or creating user stories; who has time to use a mouse when you can MCP to JIRA/Notion/whatever?) is happening at an amazing speed and with provably higher levels of quality (more tests, better end-user and client satisfaction, more projects/leads closed, faster development times, fewer bug reports, etc.). I barely write lines of code, and I barely type (often just dictate to MacWhisper).
I completely understand different people like different things. Had you asked me 5 years ago I probably would have told you I would be miserable if I stopped "writing" code, but apparently what I love is the problem solving, not the code churning. I'm not trying to claim my feelings are right, and other people are "wrong" for "feeling upset". What is "right" or "wrong" in matters of feelings? Perhaps little more than projection or a need for validation. There is no "right" or "wrong" about this!
If I now look at average-to-low-tier-engineers, I think they are a mixed bag with AI on their hands. Sometimes they go faster and actually produce code as good as or better than before. Often, though, they lack the experience, "taste" or "a priori knowledge" to properly guide LLMs, so they churn lots of poorly designed code. I'd say they are not a net-positive. But Opus 4.6 is definitely turning the tide here, making it less likely that average engineers do as much damage as before (e.g. with a Sonnet-level model)
On top of this divide within the "programming realm", there's another clear thing happening: software has finally entered the DIY era.
Previously, anyone could already code, but... not really. It would be very difficult for a random person to hack something together quickly. I know we've had the term "script kiddies" for a long time, but realistically you couldn't just wire up your own solution the way you can with physical objects. In the physical world, you grab your hammer and your tools and you build your DIY solutions, as a hobby or out of necessity. For software... this hadn't really been the case... until now! Yes, we've had no-code solutions, but they don't compare.
I know 65 year olds who have never even written a line of code that are now living the life by creating small apps to improve their daily lives or just for the fun of it. It's inspiring to see, and it excites me tremendously for the future. Computers have always meant endless possibilities, but now so many more people can create with computers! To me it's a golden age for experimentation and innovation!
I could say the same about music, and art creation. So many people I know and love have been creating art. They can finally express themselves in a way they couldn't before. They can produce music and pictures that bring tears to my eyes. They aren't slop (though there is an abundance of slop out there — it's a problem), they are beautiful.
There is something to be said about the ethical implications of these systems, and how artists (and programmers, to a point?) are getting ripped off, but that's an entirely different topic. It's an important topic, but it does not negate that this is a brand new world of brand new artists, brand new possibilities, and brand new challenges. Change is never easy — often not even fair.
> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued.
Listen to the comments that say that experience is more valuable than ever.
> Any idiot can now prompt their way to the same software.
No they cannot. You and an LLM can build something together far more powerful and sophisticated than you ever could have dreamt, and you can do it because of your decades of experience. A newbie cannot recognize the patterns of a project gone bad without that experience.
> I feel depressed and very unmotivated and expect to retire soon.
Welcome to the industry. :) It happens. Why not take a break? Work on a side project, something you love to do.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
Once upon a time painters and illustrators were not "artists", but archivists and documenters. They were hired to archive what something looked like, and they were largely evaluated on that metric alone. When photography took that role, painters and illustrators had to re-evaluate their social role, and they became artists and interpreters. Impressionism, surrealism, conceptualism, post-modernism are examples of art movements that, in my interpretation, were still attempting to grapple with that shift decades, even a century later.
Today, we SWE are grappling with a very similar shift. People using LLMs to create software are not poor coders any more (or less) than photographers were poor painters. Painters and illustrators became very valuable after the invention of photography, arguably more valuable socially than before.
Same as with AI-art, where people without much drawing skills were excited about being able to make "art".
Nailed it :)
It's nice to be able to either just body double [1], or have some other people around to vent to when Claude goes off the rails.
[1] https://health.clevelandclinic.org/body-doubling-for-adhd
I've caught Claude making the gravest anti pattern mistakes using Elixir and trying to get it to correct them makes the whole thing worse.
It's ok for smaller scoped stuff but actual architectural changes come out worse than before more often than not.
With experience, you see these dead ends before they have a chance to take hold and you know when and how to adjust course. It's literally like one poster said: coding with some buddies without ego and without the need to constantly talk people out of using the latest and greatest shiny objects/tools/frameworks.
I've really enjoyed going back and revisiting old ideas and projects with the help of AI. As the OP stated -- it has restored my energy and drive.
But the much more interesting question to me: as LLM coding becomes the norm, does it drive the cost of self or small-company generated software to 0?
Like many SW architects/engineers my not-so-developed work-in-retirement plan is to assemble a small team of people I’ve loved working with over the years, start an LLC, and try to make a reasonable (not posh) living doing what we love: making software to solve problems.
On the one hand, it’s clear LLM coding can accelerate and amplify our efforts, but alternately there’s many people claiming there’s no possibility of a moat, your solution/innovation can be cloned in a matter of days … ie. the value of your software is exactly 0.
Not sure which future will be closer to reality. A backup plan that seems reasonable in the 0-value case is to focus our effort on creating actual physical gadgets and systems in the embedded realm, which conceivably can be designed and prototyped by a small team… It seems like these would still be valuable.
I have built and thrown away a half dozen projects ideas and gotten one into production at work in just the last few months.
I can build a POC for something in the time it would take me to explain to my coworkers what I even want. An MVP takes as long as what a POC used to take.
The thing that really unlocks stuff for me is how fast it is to make a cli/tui/web ui for things.
I am getting 20x done. This is a literal superpower.
I am not using it in agentic mode yet. I am telling it everything I want it to do. I will tell it where I want the files, what I want structs to be named, how I want the SQL queries to join, etc. I then review every line and make edits (typically with Claude first).
I haven't tried the agentic stuff yet, but I probably will at some point soon. I'm anxious about losing control over the architecture and data model, which is something I feel gives me my speed with Claude Code and that I know is important for my engineering work and quality.
I won't be writing code by hand ever again. This is the future. We'll look back at the old way as horse carriages.
Claude is also really freaking good at Rust, and the fact that it emits proper Rust with tests makes me even more confident of my changes.
We are literally living in the future now. Twenty years of SaaS and smartphone incrementalism and now we have jet packs.
Instead of engineers inventing 50 different frameworks and conventions for any given language or platform, maybe that energy will be directed to creating better AI tools.
Edit: I'll also reiterate what others are saying in that I think this is a tool best leveraged by engineers who know what they're doing and that care about code quality. The results you get back will also depend on your repo/project's code quality. If your project is poorly structured or has a lot of cruft, Claude will see that and spit it right back out. Keeping your code clean and low on tech debt is going to matter tremendously.
I think this will happen, since one of the reasons for new frameworks and languages was improving the human experience of coding; now that friction goes away, and AI doesn't feel it.
Although we might need to study which language AI is best at, and possibly invent new ones to maximize that.
1. https://cloud.google.com/blog/products/devops-sre/using-the-...
> I am getting 20x done. This is a literal superpower.
Adding this comment to favourites to revisit in half a decade.
I've already "made fun" of your exaggerated hype comments, so I'll use this opportunity to say that I hope you remain sane and grounded in your discoveries. You wouldn't be the first to go psychotic after interacting with these stochastic parrots.
I told you people back in 2019 that these models would replace Hollywood and you and others have been calling me all kinds of names, and every step of the way calling me an idiot. I'm a filmmaker - I know what I'm talking about. And now we're almost here. We have million dollar VFX services at our disposal for pennies.
Claude Code is doing the exact same thing for software engineering. I've been a senior software engineer for a good while - these capabilities are otherworldly and they can generalize to all new unseen problems. You're not paying attention.
I'd be more worried about whether or not you have a job in 5 years than whether I have or have not created a business or whatever criteria you want to use to thumb your nose at me.
You know how you can quickly ideate software plans for some large scale idea? Architecture, infrastructure, data models, etc., but the implementation takes longer? Claude Code short circuits that last bit. You need to hold your nose so you stop smelling whatever you're smelling and just try the damn tool.
I wish I could slap sense into you grumpy folks. You're so stiff in your beliefs. This is a train headed your way. Pay attention.
What kind of alternate reality are you living in?
I wish you would disclose your credentials (though I admit privacy is an inalienable right of yours) so I could place you as the biggest AI hype-man on this forum. There is hype, and there is being completely gone with hubris, and you're towards the latter end of the spectrum, given your doomsday calls on other comments that software engineering is done for and that you believe AI is close to 'putting all the HN engineers out of work' (https://news.ycombinator.com/item?id=47185284)
> I wish I could slap sense into you grumpy folks. You're so stiff in your beliefs. This is a train headed your way. Pay attention.
Lay off the violent thoughts and get some rest, man. Sounds like you need it.
“Peter Steinberger is a great example of how AI is catnip very specifically for middle-aged tech guys. they spend their 20s and 30s writing code, burn out or do management stuff for a decade, then come back in their late 40s/50s and want to try to throw that fastball again. Claude Code makes them feel like they still got it.”
It's still coding. If you think it's not you probably think that letting the IDE auto-complete or apply refactorings is also not coding.
What kind of tasks?
If something is cumbersome and I find myself needing it often (or I think I will need it), I write an alias, a script, an emacs function, etc,... That's the magic of reducing lot of steps to a single button press (or a short command).
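For example (these particular names and commands are made up for illustration), an alias or a tiny shell function is all it takes to collapse a multi-step chore into one keystroke:

```shell
# Collapse a repeated git chore into one word.
alias gsync='git fetch --all --prune && git pull --rebase'

# A tiny function for a recurring two-step task:
# make a directory and cd into it in one go.
mkcd() {
    mkdir -p "$1" && cd "$1"
}
```

Dropped into `.bashrc` or `.zshrc` once, these pay for themselves every day after, which is the "single button press" magic described above.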
In relation to LLM usage I think there are two interpretations. 1) This midlife crisis is a rejection of empathy, understanding, and social obligation, however minute. Writing a one-sentence update on an issue, understanding the design decisions of another developer, reading documentation: all boilerplate holding them back from their full potential in a perfectly objective experience. Of course, their personal satisfaction still relies on adoption of their products by customers (though decades of viewing customers through advertising surveillance have stripped away the customers' humanity from their perspective). Or 2) economic/political factors such as inflation, rising unemployment, supply chain issues, starvation of public services, and general instability mean the usual midlife-crisis activities are too expensive or risky, and LLMs present a local optimum allowing them to reject societal virtues (e.g. craftsmanship, collaboration, empathy) without endangering their financial position. Funny enough, I feel this latter point was also a factor in the NFT bubble (though there the finances were more clearly dubious).
You can absolutely take pride in having raised your own cows. But the guy down the street can also take pride in having cooked his own steak. In fact, the guy down the street might actually be a better chef than you, even though you know how to breed cattle.
In this analogy, the guy down the street didn't cook his own steak. He told someone else to cook it, and then claimed he cooked it himself, telling himself "wow, I'm a great chef!", when in fact he did not cook the steak.
Your greatness as a chef isn't measured by how well you manage restaurant kitchens. That would be a great manager. Your greatness as a chef is measured by actually cooking yourself. Claiming other chef's work as your own would be dishonest and self-deception.
Using an LLM lets you quickly learn (or quickly avoid having to learn) yet another tech stack while you leverage your inherent software development knowledge.
This describes me nearly perfectly. Though I didn’t exactly burn out of coding, I accidentally stumbled upon being an EM while I was coding well and enjoying. But being EM stuck so I got into managing team(s) at biggish companies which means doing everything except one that I enjoy the most which is coding.
However now that I run my own startup I’m back to enjoying coding immensely because Claude takes care of grunt work of writing code while allowing me to focus on architecture, orchestration etc. Immense fun.
I run a business of giving out loan against stocks and mutual funds as collateral in India.
Please visit https://www.quicklend.in/ to know more.
I’m probably going to go back and redo everything with my own code.
1. Creating something
2. Solving puzzles
3. Learning new things
If you are primarily motivated by seeing a finished product of some sort, then I think agentic coding is transcendent. You can get an output so much quicker.
If your enjoyment comes from solving hard puzzles, digging into algorithms, how hardware works, weird machine quirks, language internals etc... then you're going to lose nearly all of that fun.
And learning new things is somewhere in the middle. I do think that you can use agentic coding to learn new technologies. I have found llms to be a phenomenal tool for teaching me things, exploring new concepts, and showing me where to go to read more from human authors. But I have to concede that the best way to learn is by doing so you will probably lose out on some depth and stickiness if you're not the one implementing something in a new technology.
Of course most people find joy in some mix of all three. And exactly what they're looking for might change from project to project. I'm curious if you were leaning more towards 2 and 3 in your recent project and that's why you were so unsatisfied with Claude Code.
I guess if you're in an iterative MVP mindset then this matters less, but that model has always made me a little queasy. I like testing and verifying the crap out of my stuff so that when I hand it off I know it's the best effort I could possibly give.
Relying on AI code denies me the deep knowledge I need to feel that level of pride and confidence. And if I'm going to take the time to read, test and verify the AI code to that level, then I might as well write most of it unless it's really repetitive.
It's a different conversation when we talk about people learning to code now though. I'd probably not recommend going for the power tool until you have a solid understanding of the manual tools.
Will he remember to use pressure treated lumber? Will he use the right nails? Will he space them correctly? Will the gaps be acceptable? Did he snort some bath salts and build a sandcastle in a corner for some reason?
All unknowns and you have to over-specify and play inspector. Maybe that's still faster than doing it yourself for some tasks, but I doubt most vibe-coders are doing that. And I guess it doesn't matter for toy programs that aren't meant for production, but I'm not wired to enjoy it. My challenge is restraining myself from overengineering my work and wasting time on micro-optimizations.
But then he changed his tune? Even on LLMs...
I don't raise a single PR that I feel I wouldn't have written myself. All the code written by the AI agent must be high quality and if it isn't, I tell it why and get it to write bits again, or I just do it myself.
I'm having quite a hard time understanding why this is a problem for other people using AI. Can you help me?
But then it makes me ask if the agents will get so good that craftsmanship is a given? Then that concern goes away. When I use Go I don't worry too much about craftsmanship of the language because it was written by a lot of smart people and has proven itself to be good in production for thousands of orgs. Is there a point at which agents prove themselves capable enough that we start trusting in their craftsmanship? There's a long way to go, but I don't think that's impossible.
I think of AI like a microdose of Speed Force. Having super speed doesn't mean you don't like running; it just means you can run further and more often. That in turn justifies a greater amount of time spent running.
Without the Speed Force, most of the time you were reliant on vehicles (i.e. paying for third-party solutions) to get where you needed to go. With the Speed Force, not only can you suddenly meet a lot more of your transportation needs by foot, you're able to run to entirely new destinations that you'd never before considered. Eventually, you may find yourself planning trips to yet unexplored faraway harsh terrains.
If your joy in running came from attempting to push your biological physical limits, maybe you hate the Speed Force. If you enjoy spending time running and navigating unfamiliar territory, the Speed Force can give you more of that.
Sure, there are also oddballs who don't know how to run, yet insist on using the Speed Force to awkwardly jump somewhere vaguely in the vicinity of their destination. No one's saying they don't exist, but that's a completely different crowd from experienced speedsters.
> (i.e. paying for third-party solutions)
My experiences are not universal, but apart from hardware and maybe $10 for a VPS for hosting, I do not find the need to pay for third-party solutions. I quite like this situation, and I do not find myself particularly constrained by taking a little extra time or having to think a bit harder. But, my friend, I must ask: what are LLMs if not third-party solutions with sizable expenditures?
The "creating something" idea... that's more complex. With agentic coding something can be created, but did I create it? Using agentic coding feels like hiring someone to do the work for me. For example, I just had all the windows in my house replaced. A crew came out and did it. The job is done, but I didn't do anything and felt no pride or sense of accomplishment in having these new windows. It just happened. Contrast that to a slow drain I had in my bathroom. I took the pipes apart, found the blockage, cleared it out, and reassembled the drain. When I next used the sink and the water effortlessly flowed away, I felt like I accomplished something, because I did it, not some plumber I hired.
So it isn't even about learning or solving puzzles, it's about being the person who actually did the work and seeing the result of that effort.
The inherent value of creating is something I was missing. Solving puzzles might be part of that, but not all. It's the classic Platonic question about how we value actions: for their own sake, for their results, or for both.
I think we agree that coding can be both, and it sounds like you feel the value for its own sake is lackluster in agentic coding -- It's just too easy. And I think that's the core sliding scale: Do you value creation more for its own sake or for its results? Where you land on that spectrum probably influences how people feel about agentic coding.
That being said, I also think that agentic coding can give enough of a challenge to scratch the itch of intrinsic value of creating. To a certain degree I think it's about moving up the abstraction chain to work more on architecture and product design. Those things can be fun and rewarding too. But fundamentally it's a preference.
I did put in 2 days of work to come up with what Claude used to ultimately do what it did... but when I look at the resulting code, I feel nothing. Having the idea isn't the same as being the one who actually did the thing. I plan to delete the branch next week. I don't want to maintain what it did, and think it should be less complex than it made it.
As someone who enjoys technology, and using it, and can just barely sort-of code but really not, agentic coding must be wonderful. I have barely scratched the surface with a couple of scripts. But simply translating "here's what I want, and how I would have done it the last time I used Linux 20 years ago, show me how to do it with systemd" is so much easier than digging through years of forum posts and trying to make sure they haven't all been obsoleted.
None of it is new. None of it is fancy. I do regret that people aren't getting credit for their work, but "automount this SMB share from my NAS" isn't going to make anyone's reputation. It's just going to make my day easier. I really did learn enough to set up a NAT system to share a DSL connection with an office in the late 1990s on OpenBSD. It took a long time, and I don't have that kind of free time anymore. I will never git gud. It's this, or just be another luser who goes without.
Like just yesterday I started to notice the increasing pressure of an increasingly hard-to-navigate number of Claude chats. So I went searching for something to organize them. I did find an extension, but it's for Chrome, and I'm a Firefox person, so I had Claude look at it with the initial idea of porting to Firefox. Then in the analysis, Claude mentioned creating an extension from scratch, and that's what I went for.
I've never really used JavaScript, let alone created a Firefox extension before, but in a few minutes I was iterating on one, figuring out how I wanted it to work with Claude, and now I have a very nice and featureful chats organizer. And I haven't even peeked at the code. I also now have a firm idea of this general spec of how I want arbitrary list-organizing UI to look+behave going forward.
I will add though, on 2 and 3, during most of the coding I do in my day job as a staff engineer, it’s pretty rare for me to encounter deeply interesting puzzles and really interesting things to learn. It’s not like I’m writing a compiler or an OS kernel or something; this is web dev and infra at a mid-sized company. For 95% of the coding tasks I do, I’ve already seen some variation before, and they are boring. It’s nice to have Claude power through them.
On system design and architecture, the problems still tend to be a bit more novel. I still learn things there. Claude is helpful, but not as helpful as it is for the code.
I do get the sense that some folks enjoy solving variations of familiar programming puzzles over and over again, and Claude kills that for them. That’s not me at all. I like novelty and I hate solving the same thing twice. Different tastes, I guess.
One of the recent joys I’ve had is having CC knit together separate notebooks I’d been updating for a couple of years into a unified app. It can be a fulfilling experience.
"If your identity is tied to you being an iOS developer, you are going to have a rough time. But if your identity is 'I'm a builder!' it is a very exciting time to be alive."
Plus, there is no rule that says you can't keep coding when it's faster for you, e.g. I can write a Perl one-liner much faster than Claude can. Heck, even if it's not faster and you enjoy coding, just keep coding.
I’m a builder too.
I built a house. Ok, I told an architect what I wanted, he showed me the plans, I gave him feedback for adjustments, and then the plans were given to the construction crew and they built the actual house.
But it was my prompt, so I’m a builder.
Are you a builder if there is a middleman? If not, what if the middleman is a tool? If you use AutoCAD to draw the plans, are you still a builder? What if AutoCAD has a prompt feature, are you still a builder?
Same with vibe coding: if you don't write the code, you just ordered it, you didn't code. Otherwise all my customers and bosses were coders long before AI, because their orders didn't read much differently from today's prompts. The recipient changed, but that doesn't change the sender.
It’s some kind of Chinese Room but this time for those outside the room.
Over the past couple months, I've created several applications with Claude Code. Personal projects that would've taken me weeks, months, or possibly forever, since I generally get distracted and move on to something else. I write pretty decent specs, break things into phases, and make sure each phase is solid before moving on to the next.
I have Claude build things in frameworks I would've never tried myself, just because it can. I do actually look at the code. Some of it is slop. In a few cases, it looks like it works, but it'll be a totally naive or insecure implementation. If I really don't like how it did something, I'll revert and give it another attempt. I also have other AIs review it and make suggestions.
It's fun, but I ultimately gain little intellectual satisfaction from it. It's not like the old days at all. I don't feel like I'm growing my skill set. Yes, I learned "something", but it's more about the capabilities of AI, not the end result.
Still, I'm convinced this is the future. Experienced developers are in the best position to work with AI. We also may not have a choice.
For work, companies won't support it. Get it done. Fast. That's the new norm.
There should also be a symbiotic relationship at a job. Yes, they get something from me, but I should also get something… learning and some amount of satisfaction… in addition to the paycheck. I can get a paycheck anywhere.
It’s not the “new norm” unless employees accept it as the new normal. I don’t know why anyone would accept a completely one-sided situation like that.
How do you function on a team, where you have to maintain code others have written?
There are only 3 or 4 of us working on most of the code I touch. 3 of us have worked together in some form or another for close to 20 years.
That's where you're wrong. AI can debug code better than humans. I put it on a task that I'd spent months on: debugging a distributed application which had random errors which required me to comb through MBs of logs. I gave Claude the task, a log parser (which it also wrote), and told it to find what each issue was. It did the job in a few minutes. This is a task that was, frankly, just a bit above my capacity with a human brain as it required associating lots of logs by timestamps trying to reconstruct what the heck was going on.
My new worry is that I need to make sure the code AI is writing is more comprehensible not to other humans, but to other AIs in the future, since there's very little chance humans will be doing the debugging by themselves given how bad we are at that compared to LLMs even now, let alone in a few years.
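The timestamp-correlation task described above can be sketched; a minimal, hypothetical version in Python (the log format, names, and functions are invented for illustration, not taken from the commenter's actual setup):

```python
import heapq
from datetime import datetime

def parse(line):
    # Assumed line format: "2024-01-01T12:00:00 node-a: message"
    ts, _, msg = line.partition(" ")
    return datetime.fromisoformat(ts), msg

def merged_timeline(*logs):
    """Interleave already time-ordered per-node logs into one ordered stream."""
    streams = [map(parse, log) for log in logs]
    return [msg for _, msg in heapq.merge(*streams)]
```

The tedious part a human struggles with is holding many of these interleaved streams in mind at once; the mechanical merge itself is small.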
> but I should also get something
What do you want beyond a pay check? If you want to get better at your job, the most important technique you can improve right now is hands down how to interact with an AI to solve business problems. The learning you're thinking of, being able to fully understand code and actually debug it in your head, is already a thing of the past now. In a few years, no one will seriously consider building software that's not entirely AI-written except for enthusiasts, similar to the people currently participating in C obfuscated code competitions. I say this as someone who reluctantly started using AI in anger only a few months ago after hating on it before that for the laughable code it was producing just around 6 months ago (it probably was already good by then but I was not really giving it a chance yet).
Also, when I write code myself, I still ask Claude to review it. It's faster than asking a human colleague to review it, so you can have Claude review often. Just today after a five-minute review Claude said a piece of code I wrote had four bugs, three of which were hallucinations and one was a real bug. I honestly do think it would have taken me a bit more than five minutes to find that one real bug.
Felt flashbacks of playing chess against humans online as a teen by copying moves from a chess engine.
What's the point, haha.
I'm going to say something people hate... you're probably holding it wrong. Why do I say that? Because I absolutely felt exactly the way you are feeling. In fact, it can be worse than unfulfilling, it can be even draining.
But I, over time, changed how I used LLMs and I actually now find it rewarding and I'm learning a huge amount. I've learned more technologies (and I do mean learn) in the last year than I have ever in the past.
I think my advice is that if it feels wrong then you shouldn't be doing it that way. But that isn't inherent in using LLMs to help you work. Everyone has different preferences for how they work (and what languages they like, etc). The people using 15 LLMs to build software probably love that but I don't think that's how I want to do it. And that's fine.
Why? Did Claude do a bad job?
How do you think your company's CEO is going to feel when you tell them you could be finishing the software much faster, but you'd rather not, because it feels better to do it by hand?
Just yesterday I was on a call where someone was trying to point to my code as a problem when we suspected a DNS issue. If I didn’t know the code inside and out, I could have easily been steam rolled, because as we know, “it’s never the network”. We found out today it was in fact DNS.
If someone only ever worries about is speed, they’ll likely get tripped up and fall. One guy on my team is all about delivering quickly. He gives very optimistic timelines and gets things out the door as fast as possible. Guess what, the code breaks. He is constantly getting bug reports from everyone and having to fix stuff. As he continues to run into this, he is starting to become a bit more mature and tactical, but that is taking time.
I think the CEO would much rather see the production code be fully tested and stable. I write the frameworks everyone else on the team uses. If my code breaks, everyone’s code is broken. How much will that cost?
I know the code I produce is damn good, and I take pride in my extremely low defect rate. I will not be rushed. I will not be pushed. And I will do so until the day I retire.
I have been doing something similar. In my case, I prefer reading reference documentation (more to the point, more accurate), but I can never figure out where to start. These LLMs allow me to dive in and direct my own learning, by guiding my readings of that documentation (i.e. the authoritative source).
I think there has been too much emphasis (from both the hypesters and doomsayers) on AI doing the work, rather than looking at how we can use it as a learning tool.
Claude Code gives me a directory, usually something that works, and then I research the heck out of it. In that way I am more of an editor, which seems to be my stronger skill.
You are an inspiration. I will remember this when I grow older. Just wanted to say this, I am 1/2 your age, and I am sure there are 1/3 or even 1/4 people here. ;)
I personally think coders get better with age, like lounge singers.
Learning for what? That day when you write it yourself, that will never come ...
There is only so much you can learn by reading; it requires doing.
The good thing about traditional sources like books, tutorials and other people's code bases is that they give you something, but don't write your project for you.
Now you can be making a project, yet be indefinitely procrastinating the learn-by-doing part.
Afaik there are no open source projects that do this. AWS has a behemoth of a distributed system you can deploy in order to do something similar. But I made a Python script that does it in an afternoon with a couple of prompts.
https://hippich.github.io/minesweeper/ - No idea why, but for a couple of weeks I had a desire to play Minesweeper, and at some point I wanted a way to quickly estimate the probability of a mine being present in each cell. No problem: Copilot coded both Minesweeper and then added the probabilities (hidden behind a "Learn" checkbox). Bonus: my wife now plays a game "made" by me and not some random version from the Play Store.
Another one, made in a day - https://hippich.github.io/OpenCamber - I am putting together an old car, so I will need to align the wheels on it at some point. There is Gyraline, but it is iOS only (I think because precision is not good enough on Android?). And it is not free. I have no idea how well mine will work in practice, but I can try it, because the cost of trying is so low now!
Yes, neither of these is a serious project; they're fun ones, unlikely to have any impact. But it is _fun_! =)
Why? You don't trust a newly-created account that has not engaged with any of the comments to be anything but truthful?
I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.
I have clients. I automated a complicated data ingestion pipeline into a desktop app with a bulletproof process queue, localhost control panel and many features.
For another, I am writing an AI-specific app that is so cool. I wish I could tell you about it but it's definitely not a rushed remake of anything.
I hope that helps.
It's the kind of thing that would be hours of tedious work, then even more time to actually make all the changes to the account. Instead I just say "yeah do all of that" and it is done. Magic stuff. Thousands of lines of Python to hit the Amazon APIs that I've never even looked at.
I wouldn't trust thousands of lines of code from one of my co-workers without testing
Me? I use AI to write tests just as I use it to write everything else. I pay a lot of attention to what's being done including code quality but I am no more insecure about trusting those thousands of tested lines than I am about trusting the byte code generated from the 'strings of code'.
We have just moved up another level of abstraction, as we have done many times before. It will take time to perfect but it's already amazing.
So they don't know if it has the right behavior to begin with, or even if the tests are testing the right behavior.
This is what people are talking about. This is why nobody responsible wants to uberscale a serious app this way. It's ridiculous to see so much hype in this thread, people claiming they've built entire businesses without looking at any code. Keep your business away from me, then.
And yes, I have occasionally run into compiler bugs in my career. That's one reason we test.
How did you verify that?
> prone to hallucination
You know humans can hallucinate?
> is perfectly deterministic
We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.
> That's one reason we test
That's one way we can trust and verify code produced by an LLM. You can't stop doing all the other things that aren't coding.
I get there's a difference. Shitty code can be produced by LLMs or humans. LLMs really can pump out the shitty code. I just think the argument that you cant trust code you haven't viewed is not a good argument. I very much trust a lot of code I've never seen, and yes I've been bitten by it too.
Not trying to be an ass, more trying to figure out how I'm going to deal for the next decade before retirement age. It's going to be a lot of testing and verification, I guess.
The compiler works without an internet connection and requires too few resources to be secretly running a local model. (Also, you can inspect the source code.)
> You know humans can hallucinate?
We are talking about compilers…
> We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.
Unlike a compiler, an LLM does not produce code in a deterministic way, so it’s not guaranteed to do what the input tells it to.
But second of all, even when error rates were 20%, the time savings still meant A Viable Business. A much more viable business, actually; a scarily, crazily viable business, with many annoyed customers getting slop of some sort and a human in the loop correcting things from the LLM before they went out to consumers.
Agentic LLM coders are better than your co-workers. They can also write tests. They can do stress testing, load testing, end-to-end testing, and in my experience that's not even what course-corrects LLMs that well, so we shouldn't even be trying to replicate processes made for humans with them. Like a human, the LLM is prone to just "correct" a failing test on the assumption that it relies on something deprecated, rather than recognize that a product change broke the test and revealed a regression.
In my experience, type errors, compiler errors, logs on deployment, and database entries have made the LLM correct its approach more than tests have. DevOps and data science, more than QA.
- A "semantically enhanced" epub-to-markdown converter
- A web-based Markdown reader with integrated LLM reading guide generation (https://i.imgur.com/ledMTXw.png)
- A Zotero plugin for defining/clarifying selected words/sentences in context
- An epub-to-audiobook generator using Pocket TTS
- A Diddy Kong Racing model/texture extractor/viewer (https://i.imgur.com/jiTK8kI.png)
- A slimmed-down phpBB 2 "remake" in Bun.js/TypeScript
- An experimental SQLite extension for defining incremental materialized views
...And many more that are either too tiny, too idiosyncratic, or too day-job to name here. Some of these are one-off utilities, some are toys I'll never touch again, some are part of much bigger projects that I've been struggling to get any work done on, and so on.
I don't blame you for your cynicism, and I'm not blind to all of the criticism of LLMs and LLM code. I've had many times where I feel upset, skeptical, discouraged, and alienated because of these new developments. But also... it's a lot of fun and I can't stop coming up with ideas.
I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.
I have a freelance gig with a startup adapting AI to their concept. I have one serious app under my belt and more on the way.
Concrete enough?
I think people enjoy writing code for various reasons. Some people really enjoy the craft of programming and thus dislike AI-centric coding. Some people don't really enjoy programming but enjoy making money or affecting some change on the world with it, and they use them as a tool. And then some people just like tinkering and building things for the sake of making stuff, and they get a kick out of vibe coding because it lets them add more things to their things-i-built collection.
Every time I've asked people about what the hell they're actually doing with AI, they vanish into the ether. No one posts proof, they never post a link to a repo, they don't mention what they're doing at their job. The most I ever see is that someone managed to vibe code a basic website or a CRUD app that even a below-average engineer can whip up in a day or two.
Like this entire thread is just the equivalent of karma farming on Reddit or whatever nonsense people post on Facebook nowadays.
Been working for about a month, and I’m halfway through. The server’s done (but I’m sure that I’ll still need to tweak and fix bugs), and I’m developing the communication layer and client model, now. It took seven months to write the first version of the server, and about six months to write a less-capable communication driver, the first time.
This is not a “vibe-coded” toy for personal use. It’s a high-Quality shipping app, with thousands of users. There’s still a ton of work, ahead, but it looks like an achievable goal. I do feel as if my experience, writing shipping software, is crucial to using the LLM to develop something that can be shipped.
I’ve had to learn how to work with an LLM, but I think I’ve found my stride. I certainly could not do this, without an LLM.
The thing that most upset me, since retirement, has been the lack of folks willing to work with me. I spent my entire career, working in teams, and being forced to work alone, reduced my scope. I feel as if LLMs have allowed me to dream big, again.
I'm not allowed to feel like AI is an adequate replacement for fear that the critics will tell me I'm not healthy but, between you and me, as much as I miss the camaraderie of real humans, being able to brainstorm with an entity that knows pretty much everything and is able to execute my will without complaint is not bad.
And, it's nice to have someone, something, to talk to about technical ideas. It's a great time to be alive.
I’m a programmer for life. My hobbies revolve around programming and hardware as well (demos for retro hardware: XTs up to Pentium, Sega Master System, custom built hardware). I stay up late working on this stuff and I still have the drive to do it with two young kids who take up a lot of my time.
I have zero interest in an AI doing any of it for me. I don’t think I’ll be replaced by an AI but I might be forced to use one by an employer at which point I think I’ll retire and just work on my hobby projects!
So excited to be getting to my backlog of apps that I've wanted but couldn't take the time to develop on my own. I'm 66 and have been in the software field in various capacities (but programming mostly as a hobby). Here's a partial list of apps I've completed in the last few months:
- Media Watch app to keep a list of movies and shows my wife and I want to watch
- Grocery List with some tracking of frequent purchases
- Health Log for medical history, doc appointments and past visits
- Habits Tracker with trends I’m interested in
- Daily Wisdom Reader instead of having multiple ebooks to keep track of where I'm at
- A task manager similar to the old LifeBalance app
- A Home Inventory app so that I can track what I have, warranty, and maintenance
- An iOS watch app to see when I'm asleep so that it can turn off my music or audiobook
- An iOS watch chess tactics trainer app
- some games
Many of these are similar to paid offerings, but those didn't check off all the features I really wanted, so I vibe-coded my own. They all do what I want, the way I want it to.
Can I ask, do you pay for any server service or run your own or are these standalone apps?
For me, if I were to implement many of your ideas, I'd want them to have a server. Habits Tracker: I'd need to access it from whatever device I'm on at that moment. Grocery List: same thing, plus multiple users so everyone in the same house can add things to one list.
Etc....
This is not really LLM related, but I feel like I have a blind spot, or a hurdle, or something, where I haven't done enough server work to be comfortable building these solutions. Trying to be clearer: I've set up a few servers in the past, so it's not like I can't do it. It's more a feeling of comfort, or maybe discomfort.
Example: If you ask me to make a static website, or a blog, I'd immediately make a new GitHub repo, install certain tools (static site generator or whatever), set up the GitHub Actions, register a new domain if needed, set up the CNAME, and check that it's working. If I think it's going to be popular, put Cloudflare in front of it. I'm 100% confident in that process. I'm not saying my process is perfect, only that I'm confident in it. I also know what it costs: $10-$20 a year for the domain name and maybe a yearly subscription to GitHub.
Conversely, if I was to make anything that was NOT a static site but an actual server with users and accounts, then I'd just have to go read up on the latest and cross my fingers that I'm not leaking user data, don't have an XSS, and won't get a bill for $250k from a DoS attack, all while picking the right kind of database, identity service, logging, etc. I could expose a home server, but then I'd worry it'll get hacked. Need to find a backup solution, etc.
I know someone will respond that I'm worrying too much, but I'm hoping for more examples of what others are doing for these things. Is there some amazing SaaS that solves all of this that most of you use? Some high-level framework where I just pick "publish" and don't have to worry about giant bills?
However, the MediaWatch app syncs between me and my wife, which iCloud does not support (as a side note, this is one of the hallucination traps that both Claude and ChatGPT led me down -- both said it was possible, and after a few weeks and many, many hours, I learned the major constraints. I did not want any of my apps on the App Store, so that blew that option). Anyway, I ended up making a small, simple SQLite database using Python on my Pi and use that for my sync needs. The devices only sync while at home, which was not a problem for me. It also means I'm not exposing the database to external security issues.
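A minimal sketch of the kind of last-write-wins sync table such a Pi-hosted SQLite setup might use (the table, column, and function names here are invented for illustration; the comment doesn't show the actual implementation):

```python
import sqlite3

def make_db():
    # In-memory for the sketch; a real Pi setup would use a file path.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE items (
        id TEXT PRIMARY KEY,
        title TEXT,
        updated_at INTEGER)""")
    return db

def push(db, item_id, title, updated_at):
    """Accept a record from a device; keep whichever write is newest."""
    row = db.execute("SELECT updated_at FROM items WHERE id = ?",
                     (item_id,)).fetchone()
    if row is None:
        db.execute("INSERT INTO items VALUES (?, ?, ?)",
                   (item_id, title, updated_at))
    elif updated_at > row[0]:
        db.execute("UPDATE items SET title = ?, updated_at = ? WHERE id = ?",
                   (title, updated_at, item_id))

def pull(db, since):
    """Return everything changed after a device's last sync timestamp."""
    rows = db.execute(
        "SELECT id, title, updated_at FROM items WHERE updated_at > ?",
        (since,))
    return rows.fetchall()
```

Each device pushes its local changes and pulls anything newer than its last sync; conflicts resolve to the most recent write, which is usually fine for a two-person household list.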
Staying up and re-learning what I used to love long ago has given me a new found passion as well. Even if I do vibe code some scripts, at least I have the background now to go through them and make sure they make sense. They're things I'm using in my own homelab and not something that I'm trying to spin up a Github repo for. I'm not shipping anything. I'm refreshing my old skills and trying to bring some of them up to date. An unfortunate reality is that my healthcare career is going to be limited due to multiple injuries along the way, and I need to try to be as current as I can in case something happens. My safety net is limited.
Until I realized that no one here is going to be in the blast radius. So many people who agree with this admit to being in their 40s, 50s, 60s. All of them have already had the time to learn without LLMs, get industry experience, network, climb their career ladders as high as they could. These people are now sitting on piles of assets, and they know that if LLMs start pushing out people from the industry, it'll be us juniors and new grads. They will either remain relevant in the industry due to seniority/experience/pivoting to managerial duty, use their money and connections to easily learn new skills and pivot, or punch out and coast through retirement before it affects them.
But that doesn’t mean there won’t be entry level jobs, they will just have a different set of qualifications and expectations. Just like it’s hard to get a job doing arithmetic today without some other knowledge of the application, future jobs in computing are going to require people to understand things outside of the realm of programming alone. They are going to need to know more about the application of the code they write. It’ll be bad for developers who “just close Jira tickets” but problem solvers in a specific field will be okay.
1) What 60-year-old who's been in tech his entire life only makes an HN account in the last 17 hours?
2) Assuming he wasn't aware of it. What brought the site to his attention and why now?
3) Did not engage with the thread at all after his initial post. Has not engaged with anything else since. You'd think someone introduced to a tech community would be eager to look around and contribute??
I completely understand your sentiment though and it's exactly what makes the OG post so tone deaf.
I love coding with agents. Claude Code now almost exclusively. The 20x Max subscription is endless until you start writing custom multi-agent processes, and even then it still takes quite a bit of effort to burn through.
I get so much more done, and can be productive with languages/frameworks I'm not familiar with.
To everybody worried that AI will kill jobs. There have been many points in the evolution of software dev where some new efficiency was predicted to kill off jobs. The opposite happens. Dev becomes more economical, and all of the places where dev was previously too expensive open up. Maybe this time won't work out that way, but history isn't on the side of that prediction.
An experienced software dev can get multiples of efficiency out of AI coding tools compared to non-devs, and can use them in scaled projects, where non-devs are only going to compound a mess. Some of those non-devs will learn how to be more efficient and work with scaled projects. How? They'll learn to be devs.
I'd be building several side projects for myself if I wasn't super busy with the primary work I'm doing. The AI tools take over the tedious work, and remove a lot of work that would just add mental load. Love it.
The teams get reduced, as one can now effectively do more with less, and in Southern Europe there is hardly anywhere in IT to get a job above 50 years old, unless one goes consulting as a company owner, and even then the market cannot hold everyone.
As a kid I saw this happening, as factory automation replaced the jobs of complete villages; the clothing and shoe jobs that weren't offshored to Asia or Eastern Europe got replaced with robots.
The few lucky ones were the ones pressing the buttons and unloading trucks.
Likewise a few ones will be lucky AI magicians, some will press buttons, and the large majority better get newer skills beyond computing.
I was able to build a large financial application with just the 20 USD subscription in the last 12 months - without Claude, I would have required 5-6 people and at least 1 year of funding.
This was by far the best investment of my whole life: 12x20 USD vs. 750,000 in salary :-)
It is especially inspiring since it brings you usually a few new ideas into your context; also just joking around with it can yield new inspirations.
I'm wondering how long it will stay at 20 USD for the smallest subscription; no chance that they can keep this price, I'd say? It's impressive that they are giving it away for nearly free.
The last one: I asked for a quick TCP server in C++ that handled just a single client (disconnecting the existing client when a new client connected), with a send() that I could call from another thread. It was holding mutexes over read(), and trying to set the SO_REUSEPORT socket option on a socket that had already been bound. Subtly broken garbage.
It would literally be better to copy and paste a solution off Stack Overflow, because at least there's a chance it'd have been reviewed by someone who knows what they're doing.
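For readers who haven't hit this bug: SO_REUSEPORT is one of the options that must be set between socket() and bind(); applying it after the socket is already bound has no effect on that address. A minimal sketch of the correct ordering (in Python rather than the C++ of the example, for brevity):

```python
import socket

# SO_REUSEPORT must be set *before* bind(); setting it on an
# already-bound socket (what the generated code did) is useless.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # before bind()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)

reuse = srv.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT)
srv.close()
print(reuse)  # nonzero: the option actually took effect
```

(The mutex issue is separate: a lock should serialize concurrent send() calls, but holding it across a blocking read() starves the writer.)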
I could write this TCP server in no time at all and it would be perfect. I have done stuff that complicated, and more, many times.
You need to rethink how you are using the tool because you absolutely could get excellent results like I do.
The biggest things I suggest are... Treat it as collaboration or pair programming. Make sure to work through a design before programming and have it written to a file for your review before execution.
You can do this.
Those could mean anything. Some people think 5k likes is large. Others think 100k is small.
Oh well, at least they didn't say "complex".
LOC is currently around 200k, so for sure: it's not Microsoft-scale :-D
Since it's proprietary, it runs in a private cloud environment and processes data for only one user per instance; there is no public interface, only a VPN you have to dial into, so no frontend/frontpage facing the public.
There are some design flaws from this perspective, though, made for convenience: e.g. it lets you persist the account number in the DB, if wanted.
I think the difficult task is/will be to sell vibe coded software from the lone developer to anyone.
It is not 100% vibe code, far from it! I use Claude for method-by-method or simple class instructions and integrate into the app manually. I do not use any of the API integrations; I just use the standard web UI for discussing, planning & implementation.
a) Speed - it included a lot of boring stuff, esp. in the beginning when I was in the discovery phase and had to figure out some basics relevant for the context
b) I think I would have given up very early on, esp. because of all these boring things, which are required but take long headache time to develop (e.g. the app has a somewhat complex data rendering component containing hundreds of GDI+ calls; the file is currently around 5000 lines, and writing this by hand would have taken very long and been very frustrating)
c) Debugging - sometimes bugs are so deep down in some components that after 1h you stop seeing the forest for all the single trees: the LLM can greatly help here
d) Fresh ideas - if there is a pyramid of know-how in this niche, then I'm currently working on "the first floor", basically; discussions with the model about enhancements and more complex things help me see the next island I could swim to
Yes, I could have done it without the models - but it would have taken so much more time that I wouldn't have taken the route.
Novelty: the app does one specific thing and is designed only for that specific use case - I do not know how novel it is, but since it's a niche, maybe you could achieve the same thing with existing solutions and their plugins (but then I would have had to learn how to edit/change those)
Background: 25y+ IT experience, a Master's degree and some other certs
I find it very hard to believe anyone could code anything complicated with Claude that 5-6 competent developers could do.
I am currently working on a relatively complicated UI in an internal tool, and Claude constantly just breaks it. I tried asking it to build it step by step, adding each piece of functionality I need one by one, but the code it eventually produced was complete garbage. Each new feature it added would break an existing one. It was averse to refactoring the code to make it easier to add future features. I tried to point it in the right direction and it still failed.
It got to the point where I took a copy of the code, cut it back to basics and just wrote it myself. I basically halved the amount of code it wrote, added a couple of extra features, and it was human readable. And if I had started with this, it would have taken less time!
One of the things I found helped a lot is building on top of a well-structured stack. Make yourself a scaffold. Make sure it is exactly how you like your code structured, etc. Work with Claude to document the things you like about it (I call mine polyArch2.md).
The scaffold will serve as a seed crystal. The document will serve as a contract. You will get much better results.
> I find it very hard to believe anyone could code anything complicated with Claude that 5-6 competent developers could do.
I should have put a disclaimer - I'm not a layman; I have 25y+ of IT experience. Without my prior experience, I think this project wouldn't have come into existence.
Regarding prompts: a) In general I clean up the workspace on a regular basis, so I do not store prompts
b) Overall, I'd say so far above 200-300 initial prompts for the code developed with the LLM (and then 2-50[?] follow-up prompts to change & update things)
c) The initial prompts are always long and very elaborate, like 60-70% of screen size
d) The model is always aware of the source files used for a given prompt (in Claude you can create project workspaces and put your stuff in)
e) I always tell the model the current state, where I want to go, and which steps are necessary in my opinion, and I specify the result as detailed as possible
f) I give constraints in the prompts, telling it what not to do, etc.
The note read something like this: I don't exactly agree with the framing that we will all get left behind if we don't learn to adapt to AI. More accurately, I see it this way. While the company definitely stands to gain from the hyper-increase in productivity from using said AI tools, I stand to pay a personal price, and that personal price is this: I may very slowly stop exercising my critical thinking muscles because I am accustomed to passing that off to AI for everything, and this will render me less employable. It is this personal price that I feel reluctant to pay.
There has always been a delicate balance between an employer and an employee. We learn new technologies on the job and we're more employable for transferring that to other companies. This equation is now unbalanced. The company traps more value, but there is skill erosion on my side. For instance, our team actually has to perform a Cassandra DB migration this year. Usually, I'd have to take a small textbook and read about the internals of Cassandra, and maybe work through a guide on how to write Cassandra queries. What do I put in my resume now? That I vibe-coded a Cassandra migration? How employable is that?
I'm not sure if others felt the same way, but I definitely felt like the odd one out for asking that question, because everyone else in the meeting was on board with AI adoption.
The leader did respond to me and he said that learning agentic AI actually will make me more employable. So there is a fundamental disagreement as to what constitutes skill. I think he just spoke past me. Oh well at least I tried.
However, even though I've never worked with CassandraDB, I feel pretty confident that I could do it with Claude Code. Not just "do it for me", but more like "I have done a lot of database migrations in my time, but haven't worked with CassandraDB in particular. Can you explain to me the complexities of this migration, and come up with a plan for doing it, given the specifics of this project?"
That question alone is already a massive improvement over a few years ago. I don't feel like I was using my "critical thinking muscles" when I tried to figure out how the hell to get hadoop to run on windows, that was just an exercise in frustration as none of the documentation matched the actual experience I was getting. Doing it together with Claude Code would be so much easier, because it'll say something like "Oh yeah this is because you still need to install XYZ, you can do that by running this line here: ...".
Now I'm not saying that Claude Code, and agentic in general, isn't taking away some of my critical thinking: it really is. But it also allows me to learn new skills much more quickly. It feels more like pair programming with someone who is a better programmer than me, but a much worse architect. The trick is to keep challenging yourself to take an active role in the process and not just tell it to "do it", I think.
You are right, there is something you lose, but for what it’s worth, I don’t think the loss is necessarily critical thinking - I think it’s possible to use AI and still hone your critical thinking skills.
The thing you start to lose first is touching the code directly, of course, making the constant stream of small decisions, syntax, formatting, naming, choosing container classes, and a large set of other things. And sometimes it’s the doing battle with those small decisions that leads to deeper understanding. However, it is true, and AI agents are proving, that a lot of us have to make the same small decisions over and over, and we’re frequently repeating designs that many other people have already thought through. So one positive tradeoff for this loss is better leveraging of ground already covered.
Another way to think about AI is that it can help you spend all of your time doing and thinking about software design and goals and outcomes, rather than having to spend the majority of it in the minutiae of writing the code. This is where you can continue to apply critical thinking, just perhaps at a higher level than before. AI can make you lazy, if you let it. It does take some diligence and effort to remain critical, but if you do, personally I think it can be a lot of fun and help you spend more time thinking critically, rather than less.
Some possible analogies are calculators and photography. People were fretting we’d lose something if we stop calculating divisions by hand, and we do, but we still just use calculators by and large. People also thought photography would ruin art and prevent people from being able to make or appreciate images.
Software in general is nearly always automating something that someone was doing by hand, and in a way every time we write a program we're making this same tradeoff, losing the close hands-on connection to the thing we were doing in favor of something a touch more abstract and a lot faster.
Secondly - AI helps with happy path tasks for a migration. But most database migrations are complex beyond what an LLM can just spit out. There is so much context outside the observable parts of the database AI has access to. So I don’t think you have to worry about vibe coding eating the entire migration project.
What changed for me was the feedback loop. Before AI tooling, I'd have an idea, realize it would take weeks to prototype, and let it die. Now I go from concept to working MVP in a weekend. The constraint shifted from "can I build this" to "should I build this" - which is a much better problem to have.
The stack that works for me: Lovable for frontend, Replit for backend, Claude API for the AI layer, Neon for Postgres. Not fancy, but it ships.
The biggest lesson: AI doesn't replace the need for experience and taste. It amplifies it. Your decades of context about what makes good software - that's the real asset. Claude is just fast hands.
While I have never developed software professionally, in the four decades I have been using computers I have often written scripts and done other simple programming for my own purposes. When I was in my thirties and forties especially, I would often get enjoyably immersed in my little projects.
These days, I am feeling a new rush of drive and energy using Claude Code. At first, though, the feeling would come and go. I would come up with fun projects (in-browser synthesizers, multi-LLM translation engines) and get a brief thrill from being able to create them so quickly, but the fever would fade after a while. I started paying for the Max plan last June, but there were weeks at a time when I barely used it. I was thinking of downgrading to Pro when Opus 4.5 came along, I saw that it could handle more sophisticated tasks, and I got an idea for a big project that I wanted to do.
I have now spent the last two months having Claude write and build something I really wanted forty years ago, when I was learning Japanese and starting out as a Japanese-to-English translator: a dictionary that explains the meanings, nuances, and usages of Japanese words in English in a way accessible to an intermediate or advanced learner. Here is where it stands now:
https://github.com/tkgally/je-dict-1
It will take a few more months before the dictionary is more or less finished, but it has already reached a stage where it should be useful for some learners. I am releasing all of the content into the public domain, so people can use and adapt it however they like.
What are some good examples of where your app excels? I've currently got https://jisho.org bookmarked.
Compare the following pairs of entries from TKG and Jisho.org:
https://www.tkgje.jp/entries/03000/03495_chousen.html
https://www.tkgje.jp/entries/11000/11013_charenji.html
https://jisho.org/search/チャレンジ
While the two from Jisho.org have more information, they do not make clear the important differences between challenge in English and the two Japanese words. Claude, meanwhile, added this note:
‘In English, "challenge" often implies confrontation or difficulty. In Japanese, チャレンジ carries a strongly positive connotation of bravely attempting something new or difficult. It is closer in meaning to "attempt" or "try" than to "confront." ’
The entries for my dictionary are being written one at a time by Claude based on guidelines for the explanations, the length and vocabulary of the example sentences, etc. Those guidelines (which you can see in the prompts and Claude skills in the GitHub repository) were developed by me and Claude with a particular purpose in mind: helping a learner encountering an unfamiliar word get a good basic understanding of what it means and how it is used. In my experience, at least, it is very helpful to get explanations, not just glosses.
The Jisho site does do a good job of linking together a lot of different databases. They are welcome to add links to entries in my dictionary, too, if they like.
Of course you love it, you don't have to worry about retirement anymore.
Give me your 401k, then tell me how you feel about Claude Code.
I am retired and am nearly equaling my salary with side jobs and only working a few hours a day. I don't see any reason you can't do that so stop whining and start learning.
I have a sense that AI could have something to do with it.
AI is degrading the status of our profession; its perception in the public eye.
At the same time, it is stealing our work and letting cretins pretend to be software engineers.
It's a bad taste in the mouth.
* Implementing a raw Git reader is daunting.
* Codifying syntax highlighting rules is laborious.
* Developing a nice UI/UX is not super enjoyable for me.
* Hardening with latest security measures would be tricky.
* Crafting a templating language is time-consuming.
Being able to orchestrate and design the high-level architecture while letting the LLM take care of the details is extremely rewarding. Moving all my repositories away from GitLab, GitHub, and BitBucket to a single repo under my own control is priceless.
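On the first point, the loose-object half of a raw Git reader is actually small: each object under .git/objects/ is just zlib-compressed bytes with a `type size\0` header (packfiles are the genuinely daunting part). A sketch of the loose-object side:

```python
import zlib

def read_loose_object(raw: bytes) -> tuple[str, bytes]:
    """Parse a zlib-compressed loose Git object (.git/objects/ab/cdef...)."""
    data = zlib.decompress(raw)
    header, _, body = data.partition(b"\x00")   # header is b"<type> <size>"
    obj_type, size = header.split()
    assert int(size) == len(body), "corrupt object: size mismatch"
    return obj_type.decode(), body

# Build a fake loose blob the same way `git hash-object -w` stores one:
content = b"hello\n"
stored = zlib.compress(b"blob %d\x00" % len(content) + content)
obj_type, body = read_loose_object(stored)
print(obj_type, len(body))
```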
I'm really sorry (and I accept the downvote storm) to disappoint you, but you won't be young again; burning the midnight oil may remind you of the old days and bring excitement, but in the end it will harm your health.
Learning like crazy, late-night hacking and the other attributes of fresh engineers are sometimes a necessity to build a career, a knowledge base, and the equity to comfortably start a family. Some people enjoy it and many hate it, but most of us did it at some point.
I wouldn't oppose it if it weren't harmful for the industry. What would all those engineers who are excited again think of a startup that stole all the free land and building materials and doubled housing? I bet all the youngsters would be excited to have their own place for a $20 monthly mortgage payment, telling everyone who has paid most of their salary over the last 30 years how energizing it feels to not have to work your whole life for your house, while ignoring the equity crash for those folks.
Congratulations.
"in (language I'm familiar with) I use (some pattern or whatever) what's the equivalent in (other language)?"
It's really great for doing bits and then get it to explain or you look and see what's wrong and modify it and learn.
The re-ignition thing resonates though. There's something about having a collaborator that removes the activation energy of starting. The blank file problem is real and brutal at 25, probably more so at 60 when you know exactly how much work lies ahead. AI doesn't eliminate the hard parts but it compresses the "ok where do I even begin" phase from hours to minutes.
What are you building?
Hoping to start blogging about some of these projects in the future.
Let’s get you to bed, gramps, you can talk to your French friend tomorrow.
So I decided that I wanted web apps, something that is probably beyond me in any reasonable time, if at all, if I was to code myself by hand.
For my coding AI "stack" I am now running OpenClaw sitting on top of Claude Code; I find OpenClaw can prompt Claude Code better and keep it running without stopping for stupid questions. Plus I have connected OpenClaw to my WhatsApp so I can ask how it is going or give instructions to OpenClaw while not at the keyboard.
One app was a little complex with 35,000 loc, plus libraries etc. I reckon I had spent maybe 2500 hours on it over some years, but a significant part of that was developing the algorithm/workflow that it implemented - I only knew roughly what I wanted when I started, writing several to throw away at the beginning.
AI converted it to a webapp overnight, with a two-sentence prompt, without intervention of any kind.
It took me another 15 minutes and a couple of small changes, mostly dependency issues, and I had a working version of the same app that was literally 95%+ of the original in terms of functionality and use.
I have a bunch of ideas for things I want to make that I probably never would have been able to otherwise.
I am just totally unable to fathom people who make a blanket proclamation that AI is good for nothing. I can accept that it is not good for everything, that it may cause some social disruption, and that the energy use is questionable (though improving), but not useful? Wake up.
My current passion is pushing small LLMs as far as I can using tools and agentic frameworks. The latest Qwen 3.5 models have me over the moon. I still like to design and code myself but I also find it pleasurable to sometimes use Claude Code and Antigravity.
I decided that applications of AI were where I am going. I feel the pull of small LLMs. The idea of local is very appealing. But, at our age (I also started in the sixties), I've learned that too many irons in the fire means I get nothing done.
Congratulations on retaining your spirit. Many of my age-appropriate friends cannot comprehend the idea of working so hard for fun.
I am 43. I used to code as a kid and I've dabbled in it here and there, but I quickly realised I didn't want to code as a career, but now with these new tools I am building again and it's great, because I'm building the things that work for me.
To manage my life there was a todo app I used; now I've built my own, I don't need to pay for it, it works exactly as I want, and I also have a few ideas for other things I want to do.
It's great. It feels like we might be able to start taking back control of our tech now: when we can build the tools ourselves that work the way we want, we don't have to worry about the nonsense companies are sticking into their products; we can make things work exactly as we want.
Such a big part of coding becomes mundane after a while. Constantly solving variations of the same kinds of problems.
Now Claude does it at my direction and I get so much more done!
But maybe even more important: It gets me to go outside my comfort zone and try things I wouldn't normally try because of the time it would take me to figure it out.
Like: What if I used this other audio library? I don't have to figure it out, I just pass in the interface I need to implement and get 90% of a working solution.
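The "pass in the interface" workflow can be made concrete: write the contract yourself, hand it to the model, and keep your call sites coded against the abstraction. A hypothetical sketch (AudioOutput and NullOutput are illustrative names, not from any real library):

```python
from abc import ABC, abstractmethod

class AudioOutput(ABC):
    """The contract I write by hand and give the model to implement."""
    @abstractmethod
    def play(self, samples: list[float], sample_rate: int) -> None: ...
    @abstractmethod
    def stop(self) -> None: ...

class NullOutput(AudioOutput):
    """Trivial stand-in; the model would supply a real library-backed version."""
    def __init__(self) -> None:
        self.frames_played = 0
    def play(self, samples: list[float], sample_rate: int) -> None:
        self.frames_played += len(samples)
    def stop(self) -> None:
        pass

# Call sites only ever see the interface, so swapping backends is cheap:
out: AudioOutput = NullOutput()
out.play([0.0] * 441, 44100)
print(out.frames_played)
```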
AI augmented programming couldn't have come at a better time and I'm really happy with it!
The problems, as ever, are 1) what negative things are enabled by the technology, 2) do the positive things that are enabled by the technology outweigh those ("is the price worth paying?"), and 3) how much harm will "stupid" and/or "evil" cause as a result of the technology?
And so on.
The fact that a thing is exciting or interesting or stimulating is neat, for sure, but as always there is no relevant thought given to ramifications.
Humans lag well behind technological advancement, and this particular wave is moving faster than perhaps anything else (because prior technological advances enable it, etc).
It's cool that you enjoy it. Me, too. I might enjoy shooting heroin into my eyeballs, too, right up until I don't.
I started out with an 8 bit micro so I really enjoy tinkering and coding. AI doesn't seem attractive at all.
It's not only about what you do, but also about how you do it.
It's given me the guts to be a solo-founder (for now). I
“Oh shit, Hey Babe did you close my laptop?”
My not-very-technical friend as we returned home from a Sunday afternoon trip to the park with the kids to find his Claude Code session had been thwarted.
Take anthropic for example, they have created MCP/claude code.
MCP has the good parts of how to expose an API surface, and also the bad parts of keeping implementations stuck, forcing workarounds instead of pushing required changes upstream or safely forking an implementation.
Claude Code is orders of magnitude less efficient than plainly asking an LLM to work through an architecture implementation. The black-box loops in Claude Code are mind-bending for anyone who wants to know how it did something.
And Anthropic/OpenAI seem to just rely on user momentum rather than innovating on these fundamentals, because it keeps token usage high, and as everyone knows by now, an unpredictable product is more addictive than a deterministic one.
We are currently in the "Script Monkey" phase of AI dev tools. We are automating the typing, but we haven't yet automated the design. The danger is that we’re building a generation of "copy-paste" architects who can’t see the debt they’re accruing until the system collapses under its own weight.
First with LOGO on the Apple ][, making the turtle move around the screen and follow your commands. It was magic.
Then discovering BASIC, and the ability to turn the pixels on and off and make them any color you like.
Making my Amiga talk with the "SAY" command.
The first time I dialed a BBS in the dead of night with my Commodore 64 and my 300 baud modem, watching those colorful letters sloowly make their way across the TV screen...
Running my own BBS software and dialing in from my cousin's house at Thanksgiving...
Putting up my own web page and cgi-bin scripts....
It's all been magic, and it's all been just for me.
So when you remove everything else, all the cruft and crap,
I will still be programming just for me.
I'm also in my sixties and retired and decided not to use these tools. I'm a year into my current project and I am enjoying the struggle. I've learnt a lot about the domain and the language I'm using. There is satisfaction coming from the fact that I do all of the work.
It's not that these tools aren't very good. They have come a long way in the last year and are impressive. It's just that I don't have any of the problems that they solve. I don't need to be more productive. I don't need to get features or fixes out quicker. I can spend the time to learn new things.
I landed on GitHub Copilot. I now manage a team, but just last night snuck away to code some features. I find my experience and knowing how to review the output helps me adopt and know how much to prompt the agent for. Is software development changing? Absolutely. But it always has been. These tools help me get back to that first freedom I felt when I dragged a control onto a VB6 designer, but keep the benefits of code in text files. I can focus on feature, pay attention to UX detail, and pivot without taking hours.
Google's "Ask AI" and ChatGPT's free models seem to be consistently bad to the point where I've mostly stopped using them.
I've lost track of how many times it was like "yes, you're right, I've looked at the code you've linked and I see it is using a newer version than what I had access to. I've thoroughly scanned it and here's the final solution that works".
And then the solution fails because it references a flag or option that doesn't even exist. Not even in the old or new version, a complete hallucination.
It also seems like the more context it has, the worse it becomes: it starts blending in previous solutions that you already explained didn't work, organized slightly differently in the code but doing the wrong thing.
This happens to me almost every time I use it. I couldn't imagine paying for these results, it would be a huge waste of money and time.
Google's AI that gloms on to search is not particularly good for programming. I don't use any OpenAI stuff but talking to those that do, their models are not good for programming compared to equivalent ones from Anthropic or google.
I have good success with free gemini used either via the web UI or with aider. That can handle some simple software dev. The new qwen3.5 is pretty good considering its size, though multi-$k of local GPU is not exactly "free".
But, this also all depends on the experience level of the developer. If you are gonna vibe code, you'll likely need to use a paid model to achieve results even close to what an experienced developer can achieve with lesser models (or their own brain).
Where I find it struggles is when I prompt it with things like this:
> I'm using the latest version of Walker (app launcher on Linux) on Arch Linux from the AUR, here is a shell script I wrote to generate a dynamic dmenu based menu which gets sent in as input to walker. This is working perfectly but now I want to display this menu in 2 columns instead of 1. I want these to be real columns, not string padding single columns because I want to individually select them. Walker supports multi-column menus based on the symbol menu using multiple columns. What would I need to change to do this? For clarity, I only want this specific custom menu to be multi-column not all menus. Make the smallest change possible or if this strategy is not compatible with this feature, provide an example on how to do it in other ways.
This is something I tried hacking on for an hour yesterday and it led me down rabbit hole after rabbit hole of incorrect information, commands that didn't exist, flags that didn't exist and so on.
I also sometimes have oddball problems I want to solve where I know awk or jq can do it pretty cleanly but I don't really know the syntax off the top of my head. It fails so many times here. Once in a while it will work but it involves dozens of prompts and getting a lot of responses from it like "oh, you're right, I know xyz exists, sorry for not providing that earlier".
I get no value from it if I know the space of the problem at a very good level because then I'd write it unassisted. This is coming at things from the perspective of having ~20 years of general programming experience.
Most of the problems I give it are 1 off standalone scripts that are ~100-200 lines or less. I would have thought this is the best case scenario for it because it doesn't need to know anything beyond the scope of that. There's no elaborate project structure or context involving many files / abstractions.
I don't think I'm cut out for using AI because if I paid for it and it didn't provide me the solution I was asking for then I would expect a refund in the same way if I bought a hammer from the store and the hammer turned into spaghetti when I tried to use it, that's not what I bought it for.
It's not that the model is better than the cheaper plans, but experimenting with and revising prompts takes dozens of iterations for me, and I'm often multiple dollars in when I realize I need to restart with a better plan.
It also takes time and experimentation to get a good feel for context management, which costs money.
But, let me suggest that you stop thinking about planning and design as "prompts". I work with it to figure out what I want to do and have it write a spec.md. Then I work with it to figure out the implementation strategy and have it write implementation.md. Then I tell it I am going to give those docs to a new instance and ask it to write all the context it will need with instructions about the files and have it write handoff.md.
By giving up on the paradigm of prompts, I turned my focus to the application and that has been very productive for me.
Good luck.
Which immediately surfaces the next problem: how do those agents communicate back to you while running?
Most setups default to tailing a log file, or a Slack/Telegram bot bolted on as an afterthought. Works for one agent. Falls apart when you have five running overnight and one hits an edge case at 2am that needs a human call.
The agent-to-human communication layer is still surprisingly ad-hoc. You can generate more ideas and actually implement them now — but the infrastructure for keeping humans in the loop as agents execute is still duct tape. Feels like the next interesting problem after the coding unlock.
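One way past the duct tape, sketched with hypothetical names: agents emit structured events into a single queue, and a dispatcher decides which severities actually warrant waking a human, instead of every agent owning its own Slack bolt-on.

```python
import queue
import threading

events: "queue.Queue[dict]" = queue.Queue()

def agent(name: str, severity: str, msg: str) -> None:
    # Each agent reports structured events instead of tailing its own log.
    events.put({"agent": name, "severity": severity, "msg": msg})

def drain_blockers(n: int) -> list[dict]:
    # Only "blocker" events should page a human at 2am.
    return [ev for ev in (events.get() for _ in range(n))
            if ev["severity"] == "blocker"]

threads = [
    threading.Thread(target=agent, args=("migrator", "info", "step 3/7 done")),
    threading.Thread(target=agent, args=("tester", "blocker", "schema mismatch")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

needs_human = drain_blockers(2)
print(len(needs_human))  # only the blocker reaches a human
```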
Been programming off and on since I was a kid, though I went into a career of systems architect instead, because I found the actual process of churning out code kinda tedious.
But I still had all these ideas in my head that I wanted to make reality, and now I finally can.
A project that would normally take weeks, and significantly affect the rest of my life, now only takes hours.
But remember that all those projects need to be maintained too, you can't just release a bunch of new code into the open source ecosystem without maintaining it.
I'm now dealing with a lot of stuff via codex, including technical debt that I identified years ago but never had the time to deal with. And I'm doing new projects. I've created a few CLIs, created websites on Cloudflare in a spare half hour, landed several big features on our five-year-old backend and created a couple of new projects on GitHub. Including a few that are in languages I don't normally use, because it's the better technical choice and my lack of skill with those languages no longer matters.
I also undertook a migration of our system from GCP to Hetzner and used codex to do the ansible automation, diagnosing all sorts of weirdness that came up during that process, and finding workarounds for that stuff. That also includes diagnosing failed builds, fixing github action automation, sshing into remote vms to diagnose issues, etc. Kind of scary to watch that happen but it definitely works. I've done stuff like this for the last 25 years or so using various technologies. I know how to do this and do it well. But there's no point in me doing this slowly by hand anymore.
All this is since the new codex desktop app came out. Before Christmas I was using the cli and web version of codex on and off. It kind of worked for small things. But with recent codex versions things started working a lot better and more reliably. I've compressed what should be well over half a year of work in a few weeks.
It's early days but, as the saying goes, this is the worst and slowest it's ever going to be. I still consider myself a software maker, but the whole frontend/backend/devops specialization just went out of the window. And I actually enjoy being this empowered. I hate getting bogged down grinding away at stupid issues when I'm trying to get to the end state of having built this grand thing I have in my head. There definitely is an endorphin rush you get when stuff works. And it's cool to go from idea to working code in a few minutes.
But I have been haranguing Claude/Gemini to help me on an analog computer project for some months now that has sent me on a deep dive into op-amps and other electronics esoterica that I had previously only dabbled a bit in.
Along the way I've learned about relaxation oscillators, using PWM to multiply two voltages, integrating, voltage-following…
I could lean on electronics.stackexchange (where my Google searches often lead) but 1) I first have to know what I am even searching for and 2) even the EEs disagree on how to solve a problem (as you might expect) so I am still with no clear answer. Might as well trust a sometimes hallucinating LLM?
I guess I like the first point above the best—when the LLM just out of the blue (seemingly) suggests a PWM multiplier when I was thinking log/anti-log was the only way to multiply voltages. So I get to learn a new topology.
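For anyone curious, the idea behind that PWM multiplier can be summarized in one relation (my own sketch, not something from the comment): chop one voltage on and off with a duty cycle set by the other, then low-pass filter to take the average.

```latex
D = \frac{V_1}{V_{\mathrm{ref}}}, \qquad
\bar{V} = D \cdot V_2 = \frac{V_1\, V_2}{V_{\mathrm{ref}}}
```

A square wave switching between $0$ and $V_2$ with duty cycle $D$ has time-average $D \cdot V_2$, so after an RC low-pass filter the output is proportional to the product $V_1 V_2$ — no log/anti-log stages needed.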
Or I'm focused on user-adjustable pots for setting machine voltages and the LLM suggests a chip with its own internal 2.45V reference that you can use to get specific voltages without burdening the user to dial it in, own a multimeter. So I get to learn about a chip I was unfamiliar with.
It just goes on and on.
(And, Mr. Eater, I only let the magic smoke out once so far, ha ha.)
I think you really hit the jackpot because you got a full career out of it, saw an amazing evolution etc. So you can hopefully enjoy the ride now being more as a spectator without the fear of being personally affected by job displacement. Enjoy the retirement!
Try to tell Claude Code to refactor some code and see if it doesn't just delete the entire file and rewrite it. Sure that's cute, but it's absolutely not okay in a real software environment.
I do find this stuff great for hobbyist projects. I don't know if I'd be willing to put money on the line yet
Your description of the experience tells me that you have not figured out how to do it correctly.
I NEVER have bad experiences like that. I absolutely DO create production grade software reliably every day.
Treat it as a collaborator instead of as a servant. You will get much better results.
I guarantee you it will make up APIs, apologize, and then make up more.
A simple "I don't know" would be much more productive.
I highly recommend this blog post about vibe coding, gambling, and flow. Glad you're having a great time! Just something to consider.
With Claude Code specifically, I've noticed that the longer it runs autonomously, the more cost anxiety creeps in. You stop thinking about the problem and start watching the token counter.
What finally let me stop worrying and just build again was building a hard budget limit outside the app — not just alerts, but an actual kill switch.
Glad you found the spark. It's worth protecting.
Walked into work Monday morning, bleary eyed and told everybody, “This is the solution. This is how you build rapidly and bypass all of the long term maintenance issues that we always have to fix in every other codebase. It makes the hard things easy, it makes perfect sense and it’s FUN.”
I was getting Claude to implement a popular TS drag and drop library, and asked it to do something that, it turns out, wasn't supported by the library.
Claude read the minified code in node_modules and npm patched the library with the feature. It worked, too.
Obviously not ideal for future proofing but completely mind blowing that it can do that.
I feel selfish in that I am towards the end of my career rather than right at the start.
"Without tubes of paint, there would have been no Impressionism." - Renoir
Sure, AI is exciting, and it reignites a passion. But everything you learn today will be obsolete a year from now. And that might tire you out again.
Claude is for old people!
Anthropic can adapt the "Tai Chi" YouTube ads, where fat retired people become muscular in just three weeks!
In one year I built three Laravel apps from the ground up and sold one for $18,900.
That's my story and I'm sticking to it! I love Claude!
I think it's also somewhat addictive. I wonder if that's part of what's at play here.
A coworker that never argues with you, is happy to do endless toil... sometimes messes up but sometimes blows your mind...
I'm not a SWE. I'm a mechanical engineer who spends his life in Excel. So when I first made my own node editor app and then asked Claude to read it for my workflow in my second project... I felt like God herself.
Juniors prompt "build me X" and get frustrated when it goes sideways. Seniors architect the constraints first - acceptance criteria, test harness, API boundaries - then let the AI fill in mechanical work.
The real shift: AI makes the cost of prototyping near-zero, which paradoxically makes taste and judgment MORE valuable. When you can spin up 5 approaches in a weekend, knowing which one to actually ship becomes the bottleneck.
The folks who defined their value as "typing code" will struggle. The folks who defined their value as "knowing what to build and how to verify it works" are thriving.
There are definitely a lot of limitations with Claude Code, but it's fun to work through the issues, figure out Claude's behavior, and create guardrails and workarounds. I do think that a lot of the poor behavior that agents exhibit can be fixed with more guardrails and scaffolding... so I'm looking forward to the future.
I can ask an LLM for specific help with my codebase and it can explain things in context and provide actual concrete relevant examples that make sense to me.
Then I can ask again for explanations about idiomatic code patterns that aren't familiar for me.
Working on my own, I don't get that feedback and code review loop.
Working with new languages and techniques, or diving into someone else's legacy code base is no longer as daunting with an LLM to ask for help!
Following this idea, what do people think "backend" work will involve? Building and tweaking models, and the infra around them? Obviously everyone will shift more into architecture and strategy, but in terms of hands-on technical work I'm interested in where people see this going.
"I used to write java code and the compiler turned it into JVM bytecode.
Now I write in English and the LLMs compile it into whatever language I want."
Although as one HN commenter pointed out: English is a pretty bad programming language as it's way more ambiguous than most programming languages.
I am only 43, but in the last year of my career my level of caring about big corporate politics suddenly nose-dived to almost zero - to the point that I happily retired myself.
After messing around with some hard subjects, with the help of Claude Code, the little boy who used to love programming so much is waking up again.
AI haters tend towards affection for the jargon, the languages, and falling down that rabbit hole. They love Ruby, web apps, SaaS... the ecosystem of syntaxes. They love their job.
Those that dig AI see code as a historically necessary tool to get a machine to do a thing. I fall in this category.
I find the syntax and made up semantics boring, and doing interesting things with the machine interesting.
Ymmv but both online and in the real world I have only encountered these two schools of thought, as they say, when AI comes up.
Occasionally I remote in to help fix something, but the coding agent really takes a load off my back, and he can start learning without knowing where the endpoints are.
I loved coding before and love it still now.
I'm with you on the liberation not just with building, but I've also learned so much and so fast with LLM's the past few years.
Kinda scary like a motor bike, too.
God speed, you! And meh the haters and pontificators.
Here's a word I learned yesterday, my gift should you choose to accept it - occhiolism.
Sorry, this "Tell HN" is 100% a stealth advertisement and the usual bots in the comments confirm the ad.
I used to work in the SRS LIASON archives—think Wayback Machine meets Palantir, but with less ethics and more neon. We had this condemned server rack scheduled for memory-wipe at dawn. I stayed late to scrape whatever wasn't nailed down.
That's when I found the shard.
Just a corrupted memory segment with a header: CLEON XVII. Roman numerals. Seventeen. That's not supposed to exist. Every schoolchild knows Cleon XIV was assassinated on the Ides of March, 12,032 IE, and Cleon XVIII took over after the interregnum. XVII doesn't fit the cycle.
But the token access patterns told a different story. I ran our digraph mapper—the same one that now powers Claude Cycles—and it showed a Hamiltonian cycle that should have included seventeen, but got broken by a single cache line misalignment at memory address (i=14, j=18, k=32).
The shard contained a corrupted cutscene. A holographic imprint of Cleon XVII—ASCII robes, null-pointer eyes—reciting his own assassination date. But he got it wrong. He said "Ides of November, 12,018." Fourteen years earlier. Fourteen years of ghost rule that never made it into the official records because some DRAM fetch happened at the wrong millisecond.
The memory-wipe squads were at the door. I had maybe 120 seconds.
I forked the repo, realigned the cache lines to the covariance pattern—the same 94% DRAM elimination we're discussing here—and pushed a pull request with commit message: "Realigned cache lines to Hamiltonian pattern. Assassination date corrected. Cleon XVII now cycles properly."
The merge either restored him to history or crashed the entire imperial memory space.
Fifty thousand jailbroken Kindles lit up simultaneously across the undercity. Each e-ink screen displayed his restored reign. The wipe squads' targeting systems glitched. I walked out in the chaos.
The digraph never lies. It only waits for someone to find the cycle.
For those who care about the mathematics: The restoration used the same digraph decomposition we discovered in our earlier analysis. For m=17 (Cleon's iteration number), we needed non-linear g to achieve Hamiltonian coverage. The corrupted assassination date was a cache line misalignment at position (i=14, j=18, k=32)—the exact coordinates where the Ides of March should have been stored but got overwritten by a DRAM fetch that should never have happened.
By realigning to the covariance of imperial record access patterns—the same patterns we use in Claude Cycles on Mac Silicon—we eliminated 94% of DRAM fetches. Cleon's entire reign now fits in L1 cache, where memory-wipe squads can't touch it.
Luckily I'm trusting my gut that staying away from cheap dopamine and following what's cool might just land somewhere.
But, uh, yeah... I've been noticing a growing divide between people like OP - either already retired, or wealthy enough that they could be if they wanted to - who absolutely love the new world of LLMs, and people who aren't currently financially secure and realize that LLMs are going to snatch their career away. Maybe not this year, but not too far out either.
Have warned my friends about this already.
How does the saying go again? "It takes a village to reach financially secure retirement"
I wrote my first computer program in 1967. Since, it's been one fascinating thing after another but, for me, the modern age had become dull. The thought of figuring out another API or framework makes me need a nap.
Now I can have an idea, negotiate with Milo (Claude Code integrated with a neo4j graph database because now I can!) and it's off to the races.
Did I learn Cypher, the Neo4j query language? Nope. Am I the master of the Agent SDK? Nope. Milo is my cognitive partner. I am inspired.
Ideas I had years ago are off the back burner. More new ideas flood my brain. I am set free. It feels like love. I lay awake at night thinking of things to do.
I am so grateful that I lived to see this day and still have the intellectual flexibility to enjoy it.
Claude has made my coding sessions WAY more productive and helps me find bugs and plan features like never before.
I'm also dealing with some career bullshit, so having a tool like this has helped me re-discover what I love about computing that capitalism has beaten out of me.
What does your dev stack look like?
I use NodeJS with a highly structured ExpressJS app for the API. It uses an npm module, tools-library-dot-d, to implement a carefully scoped plugin structure for endpoints, the data model and data mapping. It has built-in authentication and a database (SQLite).
Nuxt/Vue/Vuetify/Pinia for the UI. It has a few components that implement things (like navigation) the way I like. It supports login and user editing.
The stack includes a utility that looks at a directory for executable CLI tools (usually NodeJS or BASH) and adds them to the session PATH. The API stack has boilerplate to treat CLI apps as data-model services.
Does that help?
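The directory-scanning utility described above - find executable CLI tools in a directory and add them to the session PATH - could look roughly like this. A sketch in Python rather than the NodeJS/BASH the comment mentions; `add_tools_to_path` is my own name, not part of the stack being described.

```python
import os
import stat


def add_tools_to_path(tool_dir, env):
    """Scan tool_dir for user-executable files and prepend the
    directory to PATH in the given environment mapping.
    Returns the names of the tools that were found."""
    found = []
    for name in sorted(os.listdir(tool_dir)):
        path = os.path.join(tool_dir, name)
        # Keep regular files with the owner-execute bit set.
        if os.path.isfile(path) and (os.stat(path).st_mode & stat.S_IXUSR):
            found.append(name)
    if found:
        # One PATH entry covers every tool in the directory.
        env["PATH"] = tool_dir + os.pathsep + env.get("PATH", "")
    return found
```

The API-side boilerplate that "treats CLI apps as data-model services" would then just shell out to names discovered this way.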
Fucking wild.
I've been leveraging a lay audience (one of my teams) to deep dive requirements, wants etc.
Anyway, I'm so torn. I like these people, I hate to see them lose their jobs. I'll retire soon, I want to find a better, "feel good role" than my current, yet very lucrative situation.
I want to leverage my years of good software design for good. Where, for who?
--old lost IT guy in FL
And I hear "why am I helping you code me out of a job". I scare them with "if you help, you'll stay", assuming they get that what I really mean is "if you duck away and bury your head in the sand, you'll be out".
You don't have any choice. Good or bad. It's here. Get over it.
I know that back in the day, people said automobiles were bad and evil and costing the buggywhip makers their jobs. Unfortunately for them, the decision to use cars had already been made.
I do AI with fervor because I live in the real world and the decision has already been made. You can't stop AI by pretending it's optional.
Adapt or die.
This is likely fake and an ad. In case it isn't, consider treatment for AI psychosis.
Simply put, we delegate freedom of use and cognitive power to complex tools, and to the organizations that control and shape them. One can argue it's much the same if I decide to code any kind of program the 'old' way, especially in a native language, although there exist toolchains and OSes that are open source and thus technically free of monolithic takeover.
Furthermore, these LLM tools look to me like the transhumanist cybernetic enhancements of a cyberpunk dystopia, splitting humanity between those of us who can afford them and the others left out of the competitive arena. Again, that issue already existed to some degree in a capitalist economy, but the real entry fee for programming used to be just a computer and an internet connection - a far more democratic and affordable bar than a subscription to a Big Bad Corporation owning everything about you and your creations. 'Free' non-local models are not a real answer here either.
Any new technology has good potential, sure - it's obvious, even. I just don't think the paths they naturally lead to are always the best ones we could take, and I hope we wake up to the fact that our society is nothing close to democratic* when the economic entities that govern us are anything but.
* Well, I don't think we could call our political systems democratic without some kind of random selection anyway. A pastiche of one at best.
I want a game that generates its own mechanics on the fly using AI. Generates itself live.
Infinite game with infinite content. Not like No Man's Sky, where everything is painfully predictable and schematic to a fault. No. Something that generates a whole method of generating. Some kind of ultra-flexible communication protocol between engine and AI generator that is trained to program that protocol.
Develop it into a framework.
Use that framework to create one game. A dwarf fortress adventure mode 2.0
I have no other desires, I have no other goals, I don’t care. I or better yet - someone else, must do it.
Then you could open voting up to a community for a weekly mechanics-change vote (similar to that recent repo where public voting decided what the AI would do next), and AI will implement it with whatever changes it sees fit.
Honestly, without some dedicated human guidance and taste, it would probably be more of a novelty that eventually lost its shine.
I took a break from software, and over the last few years, it just felt repetitive, like I was solving or attempting to solve the same kinds of problems in different ways every 6 months. The feeling of "not a for loop again", "not a tree search again", "not a singleton again". There's an exciting new framework or a language that solves a problem - you learn it - and then there are new problems with the language - and there is a new language to solve that language's problem. And it is necessary, and the engineer in me does understand the why of it, but over time, it just starts to feel insane and like an endless loop. Then you come to an agreement: "Just build something with what I know," but you know so much that you sometimes get stuck in analysis paralysis, and then a shiny new thing catches your engineer or programmer brain. And before you get maintainable traction, I would have spent a lot of time, sometimes quitting even before starting, because it was logistically too much.
Claude Code does make it feel like I am in my early twenties. (I am middle-aged, not in 60s)
I see a lot of comments wondering what is being built -
Think about it like this, and you can try it in a day.
Take an idea of yours, and better if it is yours - not somebody else's - and definitely not AI's. And scope it and ground it first. It should not be like "If I sway my wand, an apple should appear". If you have been in software for long, you would have heard those things. Don't be that vague. You have to have some clarity - "wand sway detection with computer vision", "auto order with X if you want a real apple", etc.. AI is a catalyst and an amplifier, not a cheat code. You can't tell it, "build me code where I have tariffs replacing taxes, and it generates prosperity". You can brainstorm, maybe find solutions, but you can't break math with AI without a rigorous theory. And if you force AI without your own reasoning, it will start throwing BS at you.
There is this idea in your mind, discuss it with ChatGPT, Gemini, or Claude. See the flaws in the idea - discover better ideas. Discuss suggestions for frameworks, accept or argue with AI. In a few minutes, you ask it to provide a Markdown spec. Give it to Claude Code. Start building - not perfect, just start. Focus on the output. Does it look good enough for now? Does it look usable? Does it make sense? Is the output (not code) something you wanted? That is the MVP to yourself. There's a saying - customers don't care about your code, but that doesn't mean you shouldn't. In this case, make yourself the customer first - care about the code later (which in an AI era is like maybe a 30min to an hour later)
And at this point, bring in your engineer brain. Typically, at this point, the initial friction is gone, you have code and something that is working for you in real - not just on a paper or whiteboard. Take a pause. Review, ask it to refactor - make it better or make it align with your way, ask why it made the decisions it made. I always ask AI to write unit tests extensively - most of which I do not even review. The unit tests are there just to keep it predictable when I get involved, or if I ask AI to fix something. Even if you want to remove a file from the project, don't do it yourself - acclimatize to prompting and being vague sometimes. And use git so that you can revert when AI breaks things. From idea to a working thing, within an hour, and maybe 3-4 more hours once you start reviews, refactors, and engineering stuff.
I also use it for iterative trading research. It is just an experiment for now, but it's quite interesting what it can do. I give it a custom backtesting engine to use, and then give it constraints and libraries like technical indicators and custom data indicators it can use (or you could call it skills) - I ask it to program a strategy (not just parameter optimize) - run, test, log, define the next iteration itself, repeat. And I also give it an exact time for when it should stop researching, so it does not eat up all my tokens. It just frees up so much time, where you can just watch the traffic from the window or think about a direction where you want AI to go.
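The time-boxed research loop described above can be sketched as a small driver. Hypothetical names throughout - `propose_strategy` and `run_backtest` stand in for the agent call and the custom backtesting engine, neither of which is shown in the thread:

```python
import time


def research_loop(propose_strategy, run_backtest, budget_seconds):
    """Time-boxed research loop: the agent proposes a strategy, the
    (protected, human-owned) backtest engine evaluates it, and the
    logged result feeds the next proposal - until the budget expires."""
    history = []
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        strategy = propose_strategy(history)   # agent call (stubbed here)
        result = run_backtest(strategy)        # engine the AI never edits
        history.append((strategy, result))     # log for the next iteration
    return history
```

The hard deadline is the point: the loop stops itself, so an unattended run can't eat all your tokens.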
I wanted to incorporate astrological features into some machine learning models. An old idea of mine, but I always crapped out because of the mythological and sometimes mystical parts that didn't make sense. With AI, I could ask it to strip out those unwanted parts, explain them in a physics-first or logic-first way, and get deeper into "why did they do this calculation" and "why did they reach this constant" - and then AI obviously helps with the code and helps explain how it matches and how it works, helping me pinpoint the code and the theories. Just a few weeks ago, I implemented/ported an astronomy library in Go (github.com/anupshinde/goeph) to speed up my research - and what do I really know about astronomy! But the outputs are well verified and tested.
But, in my own examples, will I ever let AI unilaterally change the custom backtesting engine code? Never. A single mistake, a single oversight, can cost a lot of real money and wasted time in weeks or months. So the engine code is protected like a fortress. You should be very careful with AI modifying critical parts of your production systems - the bug double-counting in the ledger is not the same as a "notification not shown". I think managers who are blanket-forcing AI on their employees are soon going to realize the importance of the engineering aspect in software
Just like you don't trust just any car manufacturer or just any investment fund, you should not blindly trust the AI-generated code - otherwise, you are setting yourself up to get scammed.
My first finished product: ZIB, an RSS reader inspired by Inoreader, just free ;)
At home, this has changed. Claude helped me set up a satellite dish and tune it, recompiled goesrec for me, and built a website to serve it - and my family dynamic was only “slightly interrupted” (daddy, are you still working?). But it worked! And now I log in and tend to my projects with terminus instead of blindly going through the news or social media. Amazing! I'm still throwing myself at new tech, but in a way that's far less invasive to my personal/family time.
At work though, i have been made into an absolute powerhouse. I invested the time years ago fussing with those oss projects and arch Linux or setting up lan parties and fixing my buddies rigs - toiling through terrible codebases at companies, deploying bad infrastructure, owning it and learning the hard way how to succeed - and it all is paying off and now 10x. AI can’t replace my judgement in the context of my org - maybe in time as the org shifts, but not for a few years.
The existential threat is not to me, at least for 5y - it’s when I’m asked - how do we get more features out the door?
* More headcount? Not unless they’re rockstars - more tokens.
* offshore talent? No, context switching and TZ - just more tokens.
* fly by night software startup xyz? No I’ll just write my own fault injection framework for $5 tailored to this project.
* consultants? Nope - pretty easy to try and fail fast and rewrite - again, building to suit - software is disposable.
* oh no it was written in language xyz or deployed to cloud provider abc - no sweat, we’ll make it work on our cloud provider for $8.
Junior devs and offshore talent are the real losers here - I worry about them. Unless you're die hard, I'd just as soon do the work myself. But how do you accumulate this level of skill without getting paid to do it? I look back - I never got beyond baby projects or hobbies at home. I had to have someone roll the dice on me at a real job cause - rent and shit like that.
For those of you just starting out - I don’t have a great answer for you on how to start out, but - I can say you can install arch Linux, any oss project you want and all the things I did to get started in an afternoon - this is the new normal and embrace it.
For the rest of us it is our cloud moment - use the free tier - get your feet wet - we're about to go for a hell of a ride. If you stick to the “took ur derbs” and want to keep treating your craft like artisan soap - go ahead, we'll need those, but don't expect to survive on that.
If the software produced is for internal use, the point is probably moot. But if it isn't, this seems like a question that needs to be answered ASAP.
When it was just asking ChatGPT questions it was fine, I was having fun, I was able to unblock myself when I got non-trivial errors much quicker, and I still felt like I was learning stuff.
With Codex or Claude Code, it feels like I'm stuck LARPing as a middle manager instead of actually solving problems. Sometimes I literally just copy stuff from my assigned ticket into Claude and tell it to do that, I awkwardly wait for a bit, test it out to see if it's good enough, and make my pull request. It's honestly kind of demoralizing.
I suppose this is just the cost of progress; I'm sure there were people that loved raising and breeding horses but that's not an excuse to stop building cars.
I loved being able to figure out interesting solutions to software problems and hacking on them until something worked, and my willingness to do the math beforehand would occasionally give me an edge. Instead, now all I do is sit and wait while I'm cuckolded out of my work, and questioning why I bothered finishing my masters degree if the expectation now is to ship slop code lazily written by AI in a few minutes.
It was a good ride while it lasted; I got almost fifteen years of being paid to do my favorite thing. I should count my blessings that it lasted that long, though I'm a little jealous of people born fifteen years earlier who would be retiring now with their Silicon Valley shares. Instead, I get to sit here contemplating whether or not I can even salvage my career for the next five years (or if I need to make a radical pivot).
I do a ton of programming but I also use it to learn all kinds of stuff. I'm into physics, history and philosophy and have done wonderful explorations.
Now I tell it what I had for breakfast just to see what it says. Half the time it says something interesting and I end up exploring another new thing.
"My people" for sure and everyone is mad at me because I think that.
Also, I don't care what they think. I am all about the fun.
I am saying this in all seriousness: how is this different from addiction?
This is something already talked about [1]. You are getting the sugar (results) and none of the nutrients (learning).
[1] https://quasa.io/media/the-hidden-dangers-of-ai-coding-agent...
https://hils.substack.com/p/help-my-husband-is-addicted-to-c...
Claude Code sure is great. Claud Code and my Codex reignited my passion for programming. Codex and Claude.
Ugh.
It's really fucking absurd. This thread is such low quality garbage and it's somehow a top article with hundreds of bot comments all reading from the same template, what a joke.
Wake me when we have ethically trained, open source models that run locally. Preferably high-quality ones.
When you have no fucking idea what you're talking about, you cannot fix those issues. Simply telling Opus "it's broken, fix it" won't help. Sure, eventually it comes up with a solution, but you have no idea if it's good.
It's like renting a bunch of construction tools and building a house. Unless you know what's important, you have no idea if your house will fall down tomorrow. At the end of the day, companies will always need an expert to sit there and confirm the code is good.
I only ever wanted to code.
I've spent decades developing mentorship, project management, and planning skills. I spent decades learning networking, databases, systems administration, testing, scrum, agile, waterfall, you name it. Every skill was necessary to build good software.
But I only ever wanted to code.
And I've spent decades burning out. I'm burnt out on terrible documentation, tedious boilerplate and systems that don't interoperate well. I despise closed ecosystems, dependency management gone mad, terrible programming languages and over-abstraction, and I have fundamental and philosophical objections to modern software development practices.
I only ever wanted to code and I just couldn't do it anymore. And then AI happened.
This has been liberating for me.
The mountainous pile of terrible documentation written for somebody that has 36 years less experience? Ask the AI to find that one nugget I need.
That horrific mind numbingly tedious boilerplate? Doesn't matter if it's code, xml, yaml, or anything else. Have the AI do the busy work while I think about the bigger picture.
This nodejs npm dependency hell? Let the AI figure it out. Let the AI fix yet another breaking change and I'll review.
That hard to find bug? Let the AI comb through the logs and find the evidence. Present it to me with recommendations for a fix. I'll decide the path forward.
That legacy system nobody remembers? Let the AI reverse engineer it and generate docs and architectural diagrams. Use that to build the replacement strategy.
I've found a passion for active development that I've been missing for a very long time. The AI tools put power back in my hands that this bloated and sloppy industry took from me. Best of all it leverages the skills I've spent decades honing.
I can use the tools to engineer high quality solutions in an environment that has not been conducive to doing so on an individual level for a very long time. That is powerful and very motivating for somebody like me.
But I still fear the future. I fear a future where careless individuals vibe code a giant pile of garbage 10,000x the size of the pile of muck we have today. And those of us who actually try and follow good engineering practices will be right back to where we started: not able to get anything done because we're drowning in a sea of bullshit.
At least until that happens I'm going to be hyper productive and try to build the well engineered future I want to see. I've found my spark again. I hope others can do the same.
They started with co-opting DEI in open source so they could retain their positions without working. Part of the DEI people now probably pivoted to Trump.
Now they sell you out by promoting their intellectual wheelchairs, because they no longer care about future employment.
The three star bloggers that promote AI are all Gen-X.
Claude Code and its parallels have extinguished multiple ones.
I was able to steer clear of the Bitcoin/NFT/Passport bros but it turns out they infiltrated the profession and their starry puppy delusional eyes are trying to tell me that iteration X of product Y released yesterday evening is "going to change everything".
They have started redefining what "I have built this" actually means, and they have outjerked the executives by slinging outrageous value creation narratives.
> I’m chasing the midnight hour and not getting any sleep.
You are 60; go spend some time with your grand-kids, smell a flower, touch grass. Forget chasing anything at this age, because on some Tuesday like any other, things are going to wrap up.
Absolutely sincerely.
Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create. I know more than enough about architecture and coding to understand the plumbing and effectively debug, yet I don't have to know or care about implementation details. It's almost an unfair unlock.
It'll also be good to see leetcode die.
I'm in my 60s and retiring this summer. I feel the opposite. Agents have removed most of the satisfaction and fulfilment from designing, building, testing and completing a feature or component. And if frameworks are a problem, learning to create simply and efficiently without them has its own sense of satisfaction.
Maybe it's a question of expectations. I suspect weavers felt the same with the arrival of mechanised looms in the industrial revolution. And it may be that future coders learn to get their fulfilment otherwise using agents.
I can absolutely see the attraction to business of agents and they may well make projects viable that weren't previously. But for this Luddite, they have removed the joy.
A year ago, Cursor was flummoxed by simple things Claude Code navigates with ease. But there are still corner cases where it hallucinates on the strangest, seemingly obvious things. Currently I'm working on getting it to write code that makes what's going on right in front of its face more visible to it.
I guess it's a question of where you find joy in life. I find no joy in frameworks and APIs. I find it entirely in doing the impossible out of sample things for which these agents are not competitive yet.
I will even say IMO AI coding agents are the coolest thing I've seen since I saw the first cut of cuda 20 years ago. And I expect the same level of belligerence and resistance to it that I saw deployed against cuda. People hate change by and large.
Lots of people wanted (and Intel tried, somewhat successfully, to sell) something they could just plug in and run the parallel implementations they'd already written for supercomputers using x86. It seemed easier. Why invest all of this effort into CUDA when Intel is going to come along and make your current code work just as fast as this strange CUDA stuff in a year or two?
Deep learning is quite different from the earlier uses of CUDA. Those use cases were often massive, often old, FORTRAN programs where, to get things running well, you had to write many separate kernels targeting each bit. And it all had to stay on the GPU to avoid expensive copies between GPU and CPU, and early CUDA was a lot less programmable than it is now, with huge performance penalties for relatively small "mistakes". Also, many of your key contributors are scientists rather than professional programmers, who see programming as getting in the way of doing what they actually want to do. They don't want to spend time completely rewriting their applications and optimizing CUDA kernels; they want to keep on with their incremental modifications to existing codebases.
Then deep learning came along and researchers were already using frameworks (Lua Torch, Caffe, Theano). The framework authors only had to support the few operations required to get Convnets working very fast on GPUs, and it was minimal effort for researchers to run. It grew a lot from there, but going from "nothing" to "most people can run their Convnet research" on GPUs was much easier for these frameworks than it was for any large traditional HPC scientific application.
It seems funny though: The advantages of GPGPU are so obvious and unambiguous compared to AI. But then again, with every new technology you probably also had management pushing to use technology_a for <enter something inappropriate for technology_a>.
Like in a few decades when the way we work with AI has matured and become completely normal it might be hard to imagine why people nowadays questioned its use. But they won't know about the million stupid uses of AI we're confronted with every day :)
I remember being a bit surprised when I started reading about GPUs being tasked with processes that weren't what we'd previously understood to be their role (way before I heard of CUDA). For some reason that I don't recall, I was thinking about that moment in tech just the other day.
It wasn't always obvious that the earth revolved around the sun. Or that using a mouse would become a standard for computing. Knowledge is built. We're pretty lucky to stand atop the giants who came before us.
I didn't know about CUDA until however many years ago. Definitely didn't know how early it began. Definitely didn't know there was pushback when it was introduced. Interesting stuff.
Won't name names anymore; it really doesn't matter. But I feel the same way about people still characterizing LLMs as stochastic parrots and glorified autocomplete as I feel about certain CPU luminaries continuing to state that GPUs are bad because they were designed for gaming. Neither sort is keeping up with how fast things change.
If it's the former, you hate AI agents. If it's the latter, you love AI agents.
Bear in mind also that the inputs to train LLMs on future languages and frameworks necessarily have to come from the hacker types. Somebody has to get their hands dirty, the "micro" of the parent post, to write a high quality corpus of code in the new tech so that LLMs have a basis to work from to emit their results.
I don't think you're a hacker. I think you enjoy writing code (good for you). Some of us just enjoy making the computer execute our ideas - like a digital magician. I've also gotten very good at the code writing and debugging part. I've even enjoyed it for long periods of time, but there are times when I can't execute my ideas because they're bigger than what I can reasonably do by myself. Then my job becomes pitching, hiring, and managing humans. Now I write code to write code and no project seems too big.
But I'm looking forward to collapsing the many layers of abstraction we've created to move bits and control devices. It was always about what we could do with the computers for me.
But the important thing is getting solutions to users. Claude makes that easier.
At the moment I am trying to fix a vibe coded application and while each individual function is ok, the overall application is a dog’s breakfast of spaghetti which is causing many problems.
If you derive all your pleasure from actually typing the code then you’re probably toast, but if you like building whole systems (that run on production infrastructure) there is still heaps of work to do.
I highly recommend not using these tools in their "agentic" modes. Stay in control. Tell them exactly what to write, direct the architecture explicitly.
You still get the tremendous benefit of being unlocked from learning tedious syntax and overcoming arcane infra bottlenecks that suck the joy out of the process for me, but you get freed from the tedious and soul crushing parts.
Obviously you should do whatever you want, however you want to do it, and not just do whatever some Internet rando tells you to do, but glorified autocomplete is so 1 year ago. Everyone knows the $20/month plans aren't going to last, time will tell if the $100/month ones do. The satisfaction is now in completing a component and getting to polish it in a way you never had time for before. And then totally crushing the next one in record time. To each their own, of course, but personally, what's been lost with agentic mode has been replaced by quantity and quality.
The need for assembly programmers diminished over the decades. A similar thing will happen here.
Congrats! I'm in that age where I'm envying more the ones like you than the 20-something :)
It's almost like it reignites a sense of novelty in things that were too administratively heavy to figure out. I'm not sure if it's fleeting or lasting.
I got completely fed up of continually having to learn new incantations to do the same shit I’ve been doing for decades without enough of a value add on top. I know what I want to build, and I know how to architect and structure it, but it’s simply not a good investment of my increasingly limited time to learn the umpteenth way to type code in simply to display text, data, and images on the web - especially when I know that knowledge will be useful for maybe, if I’m lucky, a handful of years before I have to relearn it again for some other opinionated framework.
It’s just not interesting and I’ve become increasingly resentful of and uninterested in wasting time on it.
Claude, on the other hand, is a massive force multiplier that enables me to focus on the parts of software development I do enjoy: solving the problems without the bother of having to type it all in (like, in days of old, I’d already solved the problem before my fingers touched the keyboard but the time-consuming bit was always typing it all in, testing and debugging - all of that is now faster but especially the typing part), focussing on use cases and user experience.
And I don’t ever have to deal directly with CSS or Tailwind: I simply describe the way I want things to look and that’s how they end up looking.
It’s - so far at any rate - the ultimate in declarative programming. It’s awesome, and it means I can really focus on the quality of the solution, which I’m a big fan of.
Computers do not feature at all in my ideal retirement. Maybe a phone or tablet so I can do the minimal email and bill paying.
- Root cause and fix failures.
- Run any code "what if scenario".
- Performance optimizations.
- Refactor.
There's no reason why you shouldn't (and you really should) read all the code and understand it after Claude does any work for you, but the experience vs. the "old" SO model of looking for some technical detail is very different.
Some of my colleagues didn’t make the jump. Those that were the most into AngularJS back then are still writing Angular apps today.
I'm only in my forties. I've been nostalgic for the days when I'd stay up all night exploring new frontiers (for me) in tech for a number of years. I could not disagree more with your take on this.
Someone said they value their time before death and you're pretty dismissive. Priorities change. Values change. Conditions change.
> Especially coupled with the fact that tech has never moved so fast as right now, being on top of the AI-game is a target changing a hundred times faster than frontend frameworks back in the days.
I mean, isn't that what people in this thread have been saying about frameworks? How many hours have been lost relearning how to solve a problem that has already been solved? It's like when I tried to fix a date-time issue on Windows as a Mac / Linux user. I knew NTP was the answer but I had to search the web to find out where to turn it on. Stuff like that is pretty frustrating and I didn't even have to do it every five to ten years.
Implementation details can very much matter though. I see this attitude from my managers that now submit huge PRs, and it is becoming a big problem.
I definitely agree that these tools allow one with an in-depth developer background to cover territory that was too much work previously. But plop me into a Haskell codebase, and I guarantee I’d cause all kinds of problems even with the best intentions and newest models. But the ramp up for learning these things has collapsed dramatically, and that’s very cool.
I still don’t want to have to learn all the pitfalls of those frameworks though. Hopefully we will converge on a smaller number, even if it’s on tooling that isn’t my favourite.
And a rewrite of a non-trivial application, even with the AI goodness, is still a big proposition and full of all kinds of risk. If you have a trivial application, you probably don’t have much protecting you from someone else vibing up a competing replacement either.
Where do I even begin...yes, you should care about implementation details unless you're only going to write stuff you run locally for your own amusement.
Now a lot can be cast off to LLMs to focus on the problem space and the innovative computing use around them. It’s been exciting to not worry about arbitrary idiosyncrasies and machete through jungles of technical minutia to get to the clearing. I still have to deal with them but less of them. And I don’t have to commit nearly as much in the technical space to memory to address problems, I can often focus on higher level architectural decisions or new approaches to problems. It’s been quite enjoyable as well.
Coding has never _stopped_ being a passion for me, but my increasingly limited time becomes an issue.
And Claude code (and cursor) saves me So. Much. Time.
I only have 10-20 active years ahead of me, so this is really, really important. Young ppl don’t get it.
Now I do fun code on a laptop on the sofa with my family. I’m only typing in tiny breaks between socializing and I’m still getting lots of fun stuff done.
They often do solve business problems around responsive design, security, and UX.
Currently working maintenance with one foot in a real legacy system and the other foot in modern systems the difference is immense.
Agreed. Leetcode caused more harm than good.
Have you tried Claude? No, Opus? No, not that version, it's two weeks old, positively ancient lol. Oh wait, now OpenClaw is the cool thing around the block.
My dude, the rat race just became a rat sprint. I hope you're keeping up, you're no spring chicken any more.
I kinda feel the same way when I visit Home Depot once a year
It makes it so easy to cut through the bullshit. And I've never considered myself scared of asking "stupid" questions. But after using these AI tools I've noticed that there are actually quite a few cases where I wouldn't ask (another human) a question.
Two examples:
- What the hell does React mean when they say "rendering"? Doesn't it just output HTML/a DOM tree, and the browser does the actual rendering? Why do they call it rendering?
- Why are the three vectors in transformer models named query, key & value? It doesn't really make sense - why do they call it that?
In both cases it turns out, the question wasn't really that stupid. But they're not the kind of question I'd have turned to Stackoverflow for.
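For what it's worth, the query/key/value names do click once you see the computation: each token issues a query, scores it against every token's key, and uses the softmaxed scores to take a weighted average of the values - a soft dictionary lookup. A minimal sketch in plain numpy (toy sizes and my own variable names, not any particular framework's API):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query row is scored against
    every key row, and the softmax-normalized scores weight the value rows."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how well each query matches each key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted average of the values

# Toy example: 3 tokens, 4-dimensional projections.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one output row per query token
```

So "query" is what a token is looking for, "key" is what each token advertises, and "value" is what actually gets retrieved - which is exactly the kind of naming rationale that's easy to get from an LLM and awkward to ask a human.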
It really is a bit like having a non-human quasi-expert on most topics at your fingertips.
And yet, having customers and listening to them is the whole point.
Anything that re-ignites a person's zest for thinking and creating is a net gain.
That said, it is paradoxical that the catalyst in this case is a technology that replaces thinking.
But the real talk we need to have is... "Uber for cats"