A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
Maybe it's the kind of work I'm doing, or maybe I just suck, but code, to me, is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
Outsourcing this to an LLM is similar to an airplane stall .. I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I no longer have any incentive to think, create, or solve anything.
Still blows my mind how differently people approach some fields. I see people at work who are drooling over being able to have code made for them .. but I'm not in that group.
It probably helps that I have 40 years of experience with producing code the old ways, including using punch cards in middle school and learning BASIC on a computer with no persistent storage when I was ten.
I think I've done enough time in the trenches and deserve to play with coding agents without shame.
people seem to have an inability to predict second and third order effects
the first order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts
but the second order effect is: unless you're in the top 10%, you will now lose your job, permanently
and the third order effect is the economy collapses as it is built on consumer spending
What I believe is going to be interesting is what happens when non-engineers adopt building with agentic AI. Maybe 70 or 80% of their needs will be met without anyone else directly involved. My suspicion is that it will just create more work: making those generated apps work in a trustworthy manner, giving the agents more access to build context and make decisions, turning those one-off generated apps into something maintainable, etc.
These people just drool over being able to have work done for them to begin with. Are you sure it is just "code"?
We have always survived it, so we can probably also survive mediocre coders not reading the code the LLM generates for them, because they are unable to see problems they were never able to see in their handwritten code either.
That said, the next generation may struggle, but they’ll find their way.
AI is not capable yet of automating my job completely – I anticipate this will happen within two years, maybe even this year (I’m an ML researcher).
a) Specify what you want them to do.
b) Check if the result meets your expectations.
Does your current job include neither a nor b?
But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.
It's not just likely, but it's guaranteed to happen if you're not keeping an eye on it. So much so that it's really reinforced my existing prejudice towards typed and compiled languages to reduce some of the checking you need to do.
Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests though. (Which raises the question: is the dynamic language still saving you time?)
Meanwhile they are hollowing out work forces based on those metrics.
If we make doing the right thing career limiting this all gets rather messy rather quickly.
OR,
something like Mercor, which is currently being paid really well by Meta, OpenAI, Anthropic and Gemini to pay very smart humans really well to proofread language model outputs.
But it is sad if good programmers lose sight of the opportunities the future will bring (future as in the next few decades). If anything, software expertise is likely to be one of the most sought-after skills - only a slightly different kind of skill than churning out LOCs on a keyboard faster than the next person: people who can harness the LLMs, design prompts at the right abstraction level, verify the code produced, understand when someone has injected malware, etc. These skills will be extremely valuable in the short to medium term AFAICS.
But ultimately we will obviously become obsolete if nothing (really) catastrophic happens first; when that happens, all human labor will likely be obsolete too, and society will need to be organized around something other than exchanging labor for money for the means of sustenance.
Any implementation moat around rapid prototyping, and any fundraising moat around hiring a team of 10 to knock out your first few versions, seems gone now. Trying to sell MVP-tier software is real hard when a bunch of your potential customers will just think "thanks for the idea, I'll just make my own."
The crunch for engineers, on the other hand: even if engineers are needed to "orchestrate the agents" and manage everything, there could be a feature-velocity barrier for the software that you can still sell (either internally or externally). Changing stuff more rapidly can quickly hit a point of limited ROI if users can't adjust, or are slowed by constant tooling/workflow churn. So at some point (for the first time in many engineers' careers, probably) you'll see product say "ok, even though we built everything we want to test, we can't roll it all out at once!". But maybe what is learned from starting to roll those things out will keep necessitating more changes, which will still need some level of staffing. Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.
In both of those cases the biggest challenge seems to be "how do you keep it from toppling down over time" which has been the biggest unsolved problem in consumer software development for decades. There's a prominent crowd right now saying "the agents will just manage it by continuing to hack on everything new until all the old stuff is stable too" but I'm not sure that's entirely realistic. Maybe the valuable engineering skills will be putting in the right guardrails to make sure that behavioral verification of the code is a tractable problem. Or maybe the agents will do that too. But right now, like you say, I haven't found particularly good results in conceptualizing better solutions from the current tools.
yeah, and i'm surprised nobody talks about this much. prompting is not that hard, and some non-software people are smart enough to absorb the necessary details (especially since the llm can tutor them along the way) and then let the loop produce the MVP.
> Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.
Interesting thought
I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.
Maybe I'm just as naive as those who said that photographs lack the soul of paintings. But I'm not 100% convinced we're done for yet, if what you're actually selling is thinking, reasoning and understanding.
Some will have to crash and burn their company before they realize that having no human at all in the loop is nonsense. Let them touch fire and make up their minds, I guess.
People are also non-deterministic. When I delegate work to a team of five or six mid-level developers, or God forbid outsourced developers, I'm going to have to check and review their work too.
It's been over a decade since my vision/responsibility could be carried out by just my own two hands and be done on time within 40 hours a week - until LLMs.
And there are no computer professionals who haven’t heard about hallucinations.
Reviewing whether the code meets requirements through manual and automated tests - and that’s all I cared about when I had a team of 8 under me - is the same regardless. I wasn’t checking whether John used a for loop or while loop in between my customer meetings and meetings with the CTO. I definitely wasn’t checking the SOQL (not a typo) of the Salesforce consultants we hired. I was testing inputs and outputs and UX.
Perhaps outsourcing is a good analogy. But in that case I'd call it outsourcing without accountability. LLMs feel more like an infinite chain of outsourcing.
I've said repeatedly over the past couple of days that if a web component was done by someone else, it might as well have been created by Claude; I haven't done web development in a decade. If something isn't right or I need modifications, I'm going to either have to Slack the web developer or type a message to Claude.
People will say "oh, it's the same as when the printing press came, people were afraid we'd get lazy from not copying text by hand", or any of a myriad of other innovations that made our lives easier. I think this time it's different though, because we're talking about offloading the very essence of humanity – thinking. Sure, getting too lazy to walk after cars became widespread was detrimental to our health, but if we get too lazy to think, what are we?
I can't help but resist this line of thinking as a result. If the end is nigh for us, it's nigh for everyone else too. Imagine the droves of less technical workers in the workforce who will be unseated before software engineers. I don't think it is tenable for every worker in the first world to become replaced by a computer. If an attempt at this were to occur, those smart unemployed people would be a real pain in the ass for the oligarchs.
Frankly, I am not sure there is a place in the world at all for me in ten years.
I think the future might just be a big enough garden to keep me fed while I wait for lack of healthcare access to put me out of my misery.
I am glad I am not younger.
I think you have every right to doubt those telling us that they run 5 agents to generate a new SAAS product while they are sipping latté in a bar. To work like that, I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite a useful help with some of the trivial but time-consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm halfway there in 2-3 hours.
The implication of your statement seems to me to be: "you'll never have to directly care about it yourself, so why do you care about it?" Unless you were talking about the codebase in a user-application relationship, in which case feel free to ignore the rest of my post.
I don't believe that the code will ever become an implementation detail. When all you do is ship an MVP to demonstrate what you're building, then no one cares, before or after LLM assistance. But any codebase that lives more than a year and serves real users while generating revenue deserves to have engineers who know what's happening beyond authoring markdown instructions to multiple agents.
Your claim seems to push us towards a territory where externalizing our thought processes to a third party is the best possible outcome for all parties, because the models will only get better and stay just as affordable.
I will respond to that by pointing out that models which are ultimately flawless at code generation will be worth a fortune in terms of added value, and any corporation that wins the arms race would be killing itself by not raising the cost of access to its services by a metric ton. This is because there will be only a few LLM providers actually worth using by then, and because oligopoly is a thing.
So no. I don't expect that we'll ever reach a point where the average person will be "speaking forth" software the same way they post on Reddit, without paying cancer treatment levels of money.
But even if it's actually affordable... Why would I ever want to use your app instead of just asking an LLM to make me one from scratch? No one seems to think about that.
Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.
We largely moved back away from "work in a graphic tool, then spit out HTML from it" because it wasn't robust at that level of change/iteration pace. This wasn't exactly my domain, but IIRC there were especially a lot of problems around small-looking changes becoming surprisingly big changes in the generated output, with a large blast radius in terms of the other things (like interactivity) that had been added on top.
Any time you do a refactor that changes contract boundaries between functions/objects/models/whatever, and you have to update the tests to reflect this, you run a big risk that your new tests don't cover exactly the same set of component interactions that your old tests did. LLMs don't change this. They can iterate until the tests are green, but certain changes will require changing the tests, and now "iterating until the tests are green" could be resolved by changing the tests in a way that subtly breaks surprising user-facing things.
The value of good design in software is having boundaries aligned with future desires (obviously this is never perfect foresight) to minimize that risk. And that's the scary thing to myself about not even reading the code.
So I think this question needs to be asked in the context of particular projects, not as an industry-wide yes or no answer. Does your particular project still need humans involved at the code level? Even just for review? If so, then you probably ought to retain human-oriented software design and coding techniques. If not, then, whatever. Doesn't matter. Aim for whatever efficiency metric you like.
I would guess that >90% of all web crud can already be done better by an LLM managed by a decent developer, than purely by the developer himself.
https://mitpress.mit.edu/9780262526401/artificial-intelligen...
envisions a future of AI assistance that looks not too far off from today.
The difference being that compilers and related tools are deterministic, and we can manage the outputs using mathematical proof of correctness.
The LLMs driving this new abstraction layer are another beast entirely.
Also, we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar while a machine does your work? Use all your time to make money for me, or you're fired."
AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
That's a bit too cynical for me. After all, yes, your boss is not paying you for sipping lattes, but for producing value for the company. If there is a tool that maximises your output, why wouldn't he want you to use it to full effect?
Put differently, would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?
Because I deny that? It's not fun for me.
> would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?
Why not? If that makes enough money to keep going.
You might argue that in theoretical ideal market companies who're not utilizing every possible trick to improve productivity (including AI) will lose competition, but let's be real, a lot of companies are horribly inefficient and that does not make them bankrupt. The world of producing software is complicated.
I know that I deliver. When I'm asked to write code, I deliver it and I'm responsible for it. I enjoy the process and I can support this code. I can't deliver with AI. I don't know what it'll generate. I don't know how much time it would take to iterate to the result that I precisely want. So I can no longer be responsible for my own output. Or I'd spend more time baby-sitting the AI than it would take me to write the code. That's my position. Maybe I'm wrong, they'll fire me and I'll retire, who knows. AI hype is real, and my boss is often copy-pasting ChatGPT output and asking me to argue with it. That's super stupid and irritating.
I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.
If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.
Sitting in a cafe enjoying a latte is not "producing value for the company." If having "5 agents to generate a new SAAS-product" matches your non-AI capacity and gives you enough free time to relax in a cafe, he's going to want you to run 50 agents generating 5 new SAAS products, until you hit your capacity.
If he doesn't need 5 new SAAS products, just one, then he's going to fire you or other members of your team.
Think of it this way: you're a piece of equipment to your boss, and every moment he lets you sit idle (on the clock) is money lost. He wants to run that piece of equipment as hard as he can, to maximize his profit.
That's labor under capitalism.
The way I see it, we've reverted to a heavier spec-type approach; however, the turnaround time with agents is so fast that it can still feel very iterative, simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests, when applicable) as the real work now. I front-load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature, or on the overall approach to a feature, as I discover (with the agent) that I'm just not happy with the gotchas that come to light.
AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.
For mission-critical applications I wonder if making "writing the actual code" so much cheaper means that it would make more sense to do more formal design up front instead, when you no longer have a human directly in the loop during the writing of the code to think about those nasty decisions that pop up on the fly.
Love this! Be it design specs or a mock from the designer. So many unaccounted for decisions. Good devs will solve many on their own, uplevel when needed, and provide options.
And absolutely it means more design up front. And without a human in the direct loop, maybe people won't skimp on this!
You can check it out here: https://ai-lint.dosaygo.com/
Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions… long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.
> I use LLMs in the same way i used to use stack overflow, if I go much farther to automate my work than that I spend more time on cleanup compared to if I just write code myself.
yea, same here. i've asked an ai to plan and set up some larger, non-straightforward changes/features/refactorings, but it usually devolves into burning tokens and me clicking the 'allow' button and re-clarifying over and over while it keeps trying to confirm the build works etc...
when i'm stuck though, or when i'm curious about some solution, it usually opens the way to finish the work, similar to stack overflow
We vibe around a lot in our heads and that's great. But it's really refreshing, every so often, to be where the rubber meets the road.
This past week, I spent a couple of days modifying a web solution written by someone else + converting it from a Terraform-based deployment to CloudFormation using Codex - without looking at the code, as someone who hasn't done front-end development in a decade - I verified the functionality.
More relevantly but related, I spent a couple of hours thinking through an architecture - cloud + an Amazon managed service + infrastructure as code + actual coding - diagramming it, labeling it, and thinking about the breakdown and phases to get it done. I put all of the requirements - which I would have done anyway - into a markdown file and told Claude and Codex to mark off items as I tested each one and to summarize what it did.
Looking at the amount of work, between modifying the web front end and the new work, it would have taken two weeks with another developer helping me before AI based coding. It took me three or four days by myself.
The real kicker though is that while it worked as expected for a couple of hundred documents, it was brought completely to its knees when I threw 20x the documents into the system. Before LLMs, this would have made me look completely incompetent, telling the customer I had now wasted two weeks' worth of time and 2 other resources.
Now, I just went back to the literal drawing board, rearchitected it, did all of the things with code that the managed services abstracted away with a few tweaks, created a new mark down file and was done in a day. That rework would have taken me a week by itself. I knew the theory behind what the managed service was doing. But in practice I had never done it.
It's been over a decade since I was responsible for a delivery that I could do by myself without delegating to other people, or that was simple enough that I wouldn't start with a design document for my own benefit. Now, within the past year, I can take on larger projects by myself without the coordination/"Mythical Man-Month" overhead.
I can also in a moment of exasperation say to Codex “what you did was an over complicated stupid mess, rethink your implementation from first principles” without getting reported to HR.
There is also a lot of nice-to-have gold plating that I will do now, knowing that it will be a lot faster.
This is so on point. The spec as code people try again and again. But reality always punches holes in their spec.
A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed that drawing is, you can't drive it, and it hides 90% of the complexity.
To me the value of LLMs is not so much in the code they write. They're usually too verbose, and they start building weird things when you don't constantly micromanage them.
But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.
The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.
With AI, the correct approach is to think more like a software architect.
Learning to plan things out in your head upfront, without having to figure things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.
To some this comes naturally, for others it is very hard.
I don't think any complex plan should be planned in your head. But drawing diagrams, sketching components, listing pros and cons: 100%. Not jumping directly into coding might look more like jumping into writing a spec or a PoC.
It shouldn't be your only source of a plan as you'd likely wind up dropping something, but figuring out how to jiggle things around before getting it 'on paper' is something I've found helpful.
The same kind of planning you're describing can and does happen sans LLM, usually on the sofa or in front of a whiteboard, or by reading some research materials. No good programmer rushes into coding without a clear objective.
But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.
I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.
Two principles I have held for many years which I believe are relevant both to your sentiment and this thread are reproduced below. Hopefully they help.
First:
When making software, remember that it is a snapshot of
your understanding of the problem. It states to all,
including your future-self, your approach, clarity, and
appropriateness of the solution for the problem at hand.
Choose your statements wisely.
And: Code answers what it does, how it does it, when it is used,
and who uses it. What it cannot answer is why it exists.
Comments accomplish this. If a developer cannot be bothered
with answering why the code exists, why bother to work with
them?
Actually those same markdown files answer the second question.
Most people can't answer why they themselves exist, or justify why they are taking up resources rather than eating a bullet and relinquishing their body-matter.
According to the philosophy herein, they are therefore worthless and not worth interacting with, right?
I completely agree but my thought went to how we are supposed to estimate work just like that. Or worse, planning poker where I'm supposed to estimate work someone else does.
If you need that, don't use AI for it. What is it that you don't enjoy coding, or think is tangential to your thinking process? Maybe while you focus on the code, have an agent build a testing pipeline, or deal with other parts of the system that aren't very ergonomic or need some cleanup.
> If you need that, don't use AI for it.
this is the right answer, but many companies mandate AI use (burn x tokens and y percent of code) now, so people are bound to use it where it might not fit
Is the entire AI bubble just the result of taking performance metrics like "lines of code written per day" to their logical extreme?
Software quality and productivity have always been notoriously difficult to measure. That problem never really got solved in a way that allowed non technical management to make really good decisions from the spreadsheet level of abstraction... but those are the same people driving adoption of all these AI tools.
Engineers sometimes do their jobs in spite of poor incentives, but we are eliminating that as an economic inefficiency.
On the other hand, every time the matter is seriously empirically studied, it turns out that overall:
* productivity gains are very modest, if not negative
* there are considerable drawbacks, including most notably the brainrot effect
Furthermore, AI spend is NOT delivering the promised returns to the extent that we are now seeing reversals in the fortunes of AI stocks, up to and including freakin' NVIDIA, as customers cool on what's being offered.
So I'm supposed to be an empiricist about this, and yet I'm supposed to switch on the word of a "cool story bro" about how some guy built an app or added a feature the other day that he totally swears would have taken him weeks otherwise?
I'm like you. I use code as a part of my thought process for how to solve a problem. It's a notation for thought, much like mathematical or musical notation, not just an end product. "Programs must be written for people to read, and only incidentally for machines to execute." I've actually come to love documenting what I intend to do as I do it, esp. in the form of literate programming. It's like context engineering the intelligence I've got upstairs. Helps the old ADHD brain stay locked in on what needs to be done and why. Org-mode has been extremely helpful in general for collecting my scatterbrained thoughts. But when I want to experiment or prove out a new technique, I lean on working directly with code an awful lot.
If I have to say, we're just waiting for the AI concern caucus to get tired of performing for each other and justifying each other's inaction in other facets of their lives.
My hierarchy of static analysis looks like this (hierarchy below is Typescript focused but in principle translatable to other languages):
1. Typesafe compiler (tsc)
2. Basic lint rules (eslint)
3. Cyclomatic complexity rules (eslint, sonarjs)
4. Max line length enforcement (via eslint)
5. Max file length enforcement (via eslint)
6. Unused code/export analyser (knip)
7. Code duplication analyser (jscpd)
8. Modularisation enforcement (dependency-cruiser)
9. Custom script to ensure shared/util directories are not overstuffed (built this using dependency-cruiser as a library rather than an exec; a sketch is below)
10. Security check (semgrep)
I stitch all of the above into a single `pnpm check` command and define an agent rule to run it before marking a task as complete.
Finally, I also run `pnpm check` as part of a pre-commit hook, to make sure that the agent has indeed addressed all the issues.
This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.
(Edit: added mention of pre-commit hook which I missed mention of in initial comment)
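To give a flavour of what the item-9 check could look like, here is a minimal illustrative sketch (simplified, not the actual script; it assumes dependency-cruiser's `cruise()` library API, and the threshold and directory names are made up):

    // check-shared-size.ts -- illustrative sketch only, not the real script.
    // Uses dependency-cruiser as a library to fail `pnpm check` when the
    // shared/util directories accumulate too many modules.
    import { cruise } from "dependency-cruiser";

    const MAX_SHARED_MODULES = 25; // made-up threshold
    const WATCHED_PREFIXES = ["src/shared/", "src/utils/"]; // made-up paths

    async function main(): Promise<void> {
      // Cruise the source tree; `output` holds the analysed module graph.
      const { output } = await cruise(["src"]);
      const modules = (output as any).modules ?? [];
      const stuffed = modules.filter((m: { source: string }) =>
        WATCHED_PREFIXES.some((prefix) => m.source.startsWith(prefix))
      );
      if (stuffed.length > MAX_SHARED_MODULES) {
        console.error(
          `shared/util holds ${stuffed.length} modules (limit ${MAX_SHARED_MODULES}); move things closer to where they are used.`
        );
        process.exit(1);
      }
      console.log(`shared/util module count OK: ${stuffed.length}/${MAX_SHARED_MODULES}`);
    }

    main();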
BUT, what is the point of max line length enforcement, just to see if there are crazy ternary operators going on?
Remember these are still fundamentally trained on human communication and Dale Carnegie had some good advice that also applies to language generators.
I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.
For the pre-commit hook, I assume you run it on just the files changed?
> Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)
Would you share this?
This was pushed back on hard by management because it "took too much time to create a ticket". I fought it for some months, but in the end I stopped, and I also really lost the ability and patience to do it. Juniors suffered, implementation took more time. Time passed.
Now, I am supposed to do the exact same thing, but even better and for yesterday.
By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.
A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.
I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.
“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.
Next: vibe brain surgery.
/i
Brain surgery is highly technical AND highly vibe based.
You need both in extremely high quantities. Every brain is different, so the super detailed technical anatomies that we have is never enough, and the surgeon needs constant feedback (and insanely long/deep focus).
But there’s more time to do some of these other things if the actual coding time is trending toward zero.
And the importance of it can go up with AI systems because they do actually use the documentation you write as part of their context! Direct visible value can lead people to finally take more seriously things that previously felt like luxuries they didn’t have time for.
Again, if you've been a developer for more than 10 minutes, you've had the discouraging experience of painstakingly writing very good documentation only for it to be ignored by the next guy. This isn't how LLMs work. They read your docs.
I think you'll find even less time - as "AI" drives the target time to ship toward zero.
One of the problems with writing detailed specs is that it assumes you understand the problem, but often the problem is not yet understood - you learn to understand it through coding and testing.
So where are we now?
The main difference now is that the parrots have reduced the cost of the wrong program to near zero, thereby eliminating much of the perceived value of a spec.
It’s also nothing new, as it’s basically Joe Armstrong's programming method. It’s just not prohibitively expensive for the first time in history.
Before I also had to code it and then make sure it had no issues.
Now I can skip the coding and just have something spat out that I can evaluate as to whether I believe it is a good implementation of my solution or not.
Of course, you need the skill to know good from bad but for medium to senior devs, AI is incredibly useful to get rid of the mundane task of actually writing code, while focusing on problem solving with critical review of magically generated code.
The danger is treating the output as a black box. If you skip the review step and just accept whatever it produces, yes, you'll lose proficiency and accumulate debt. But if you stay engaged with the code, reading it as critically as you would a junior dev's PR, you maintain your understanding while moving faster.
The technical debt concern is valid but it's a process problem, not an inherent flaw. We solved "juniors write bad code" with code review, linting, and CI. We can solve "LLMs write inconsistent code" with the same tools - hannofcart's 10-layer static analysis stack is a good example. The LLM lies about passing checks? Pre-commit hook catches it.
And in practice, I am happy enough that the LLM helps me to eliminate some toil, but I think you need to know when it is time to fold your cards, and leave the game. I prefer to fix small bugs in the generated code myself, than asking the agent, as it tends to go too far when fixing its own code.
Coding agents made me really get something back from the money I pay for my O'Reilly subscription.
So, coding agents are making me a better engineer by giving me time to dive deeper into books instead of having to just read enough to do something that works under time pressure.
If the issue is SNR and the ratio of "good" vs "bad" practices in the input training corpus, I don't know if that's getting better.
Yes!
> by some of the smartest coders in the world
Hmm... How will it filter out those by the dumbest coders in the world?
Including those by parrots?
Just because of a hype?
https://xcancel.com/hamptonism/status/2019434933178306971
And all that after stealing everyone's output.
In reality that man is hoping to IPO in 6-12 months, if anyone is wondering why the “use claude or you’re left behind” is so heavy right now.
Define data structures manually, then ask the AI to implement specific state changes. So: JSON, C .h, or other source files of function signatures, with the prompts put right in there. I've never tried the Agents.md monolithic definition file approach.
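For illustration, in TypeScript that kind of file might look something like the sketch below (the names and prompts are invented for the example, not taken from any real project):

    // orders.ts -- hypothetical example of hand-written data structures and
    // function signatures, with the prompt to the AI kept next to them as comments.
    export interface Order {
      id: string;
      items: { sku: string; quantity: number; unitPriceCents: number }[];
      status: "draft" | "submitted" | "cancelled";
    }

    // PROMPT: implement submitOrder. Reject empty orders and orders not in
    // "draft" status; return a new Order object, never mutate the input.
    export function submitOrder(order: Order): Order {
      throw new Error("not implemented -- to be filled in by the agent");
    }

    // PROMPT: implement orderTotalCents as a pure function over order.items.
    export function orderTotalCents(order: Order): number {
      throw new Error("not implemented -- to be filled in by the agent");
    }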
Also, I demand it stick to a limited set of processing patterns - usually dynamic, recursive programming techniques and functions. They just make the most sense to my head, and using one style I can spot-check faster.
I also demand it avoid making up abstractions and stick to mathematical semantics. Unique namespaces are not relevant to software in the AI era. It's all about using unique vectors as keys to values.
Stick to one behavior or type/object definition per file.
Only allow dependencies that are designed as libraries to begin with. There is a ton of documentation to implement a Vulkan pipeline so just do that. Don't import an entire engine like libgodot.
And for my own agent framework I added observation of my local system telemetry via common Linux files and commands. This data feeds back in to be used to generate right-sized sched_ext schedules and leverage bpf for event driven responses.
Am currently experimenting with generating small models of my own data - a single path of images, for example, not the entire Pictures directory. Each small model is spun up akin to a Docker container.
LLMs are monolithic (massive) zip files of the entire web. No one is really asking for that. And anyone who needs it already has access to the web itself.
1. Keep things small and review everything AI-written, or 2. Keep things bloated and let the AI do whatever it wants within the designated interface.
Initially I drew this line for API services / UI components, but it later expanded to other domains. E.g. for my hobby rust project I try to keep "trait"s single-responsibility, non-overlapping, easy to understand, etc., but I never look at AI-generated "impl"s as long as they pass some sensible tests and conform to the traits.
I find rust generally easier to reason about, but can't stand writing it.
The compiler works well with LLMs; there's plenty of good tooling and LSPs.
If I'm happy with the shape of the code (I usually write the function signatures / module APIs), and the compiler is happy that it compiles, then the errors, if any, are usually logical ones I should catch in review.
So I focus on function, the compiler focuses on correctness, and the LLM just does the actual writing.
Tl;dr: I don't mind reading rust, I hate writing it, and the compiler meets me in the middle.
Writing rust scares me, but I can read it just fine. I've come up with super masochistic linting rules that claude isn't allowed to change and that has improved things quite a bit.
I wish there was a mature framework for frontend that can be configured to be as strict as rust.
The way to control these systems is not to give them more words at the start. Agentic coding needs to work in a loop with dedicated context.
You need to think about how to convey as much intent as possible in as few words as possible.
You can build a tremendous number of custom lint rules the AI never needs to read unless it trips over them.
Every pattern in your repo gets repeated; the repo will always win over documentation, and when your repo is well structured you don't need to repeat this to the AI.
It's like dev always has been: watch what has gone wrong and make sure that whole type of error can't happen again.
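A minimal sketch of that kind of custom rule, in ESLint terms (the rule name and the `fetch` example are invented for illustration; the point is just that the agent runs into the rule instead of having to read a style doc):

    // no-raw-fetch.ts -- illustrative custom ESLint rule, not from the comment above.
    // Flags direct fetch() calls so every generated pattern has to go through
    // the project's own HTTP wrapper: the "make that error impossible" idea.
    import type { Rule } from "eslint";

    const rule: Rule.RuleModule = {
      meta: {
        type: "problem",
        schema: [],
        messages: {
          noRawFetch: "Use the shared httpClient wrapper instead of calling fetch() directly.",
        },
      },
      create(context) {
        return {
          CallExpression(node) {
            // Only plain `fetch(...)` is reported; member calls like api.fetch() are left alone.
            if (node.callee.type === "Identifier" && node.callee.name === "fetch") {
              context.report({ node, messageId: "noRawFetch" });
            }
          },
        };
      },
    };

    export default rule;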
> repo will always win over documentation
it really does seem like this... also new devs are like that too: "i just copied this pattern in use over here and there, what's wrong?" is something i've heard over and over lol. i think languages that allow expressing "this is deprecated, use x instead" will be useful for that too
https://github.com/ryanthedev/code-foundations
I’m currently working on a checklist and profile based code review system.
As long as you review the code and correct it, it's no different than using Stack Overflow - a Stack Overflow that reads your code and helps stitch together the context.
One session's scaffold assumes one pattern. Second session scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.
Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.
1. Have the LLM write code based on a clear prompt with limited scope.
2. Look at the diff and fix everything it got wrong.
That's it. I don't gain a lot in velocity, maybe 10-20%, but I've seen the code, and I know it's good.
https://github.com/glittercowboy/get-shit-done
You still need to know the hard parts: precisely what you want to build, all domain/business knowledge questions solved, but this tool automates the rest of the coding and documentation and testing.
It's going to be a wild future for software development...
I do allow it to write the tests (lots of typing there), but I break them manually to see how they fail. And I do think about what the tests should cover before asking LLM to tell me (it does come up with some great ideas, but it also doesn't cover all the aspects I find important).
Great tool, but it is very easy to be led astray if you are not careful.
The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have written this text by myself, except for like 2 or 3 sentences which I iterated on with an LLM to nail down flow and readability. I would interpret that as completely written by me.
> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as I describe things here. Hence the blogpost, to share what I have found works best.
> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have pondered the idea and also wrote a few anecdotal experiences, but I deleted them again because I think it is hard to nail down the right balance, and it is also highly dependent on the project, which renders examples a bit useless.
And I also kind of liked the short and lean nature of it over the last few days while I worked on the blogpost. I might make a few more blogposts that expand on a few points.
Thank you for your feedback!
I've always advocated for using a linter and consistent formatting. But now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore I feel like linting does not matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of very solidly documented test cases and, much like compiled code, how the underlying implementation code looks or is organized won't really matter
I too use AI, but mostly to generate scripts (the most useful use of AI is 100-200 line scripts imho), test _cases_ (I write the test itself; the data inside is generated) and HTML/CSS/JS shenanigans (the logic I code; at presentation I'm inferior to any random guy on the internet, so I might as well use an AI). I also use it for stuff that never ends up in the repository, for exploration/proofs of concept and out-of-scope tests (I like to understand how stuff works; that helps), or to summarize PowerPoint presentations so I can do actual work during 60-person "meetings" and still get the point.
This is the opinion of someone who has not tried to use Claude Code, in a brand new project with full permissions enabled, and with a model from the last 3 months.
There are a lot of engineers who will refuse to wake up to the revolution happening in front of them.
I get it. The denialism is a deeply human response.
So far the only output is the "How I use AI" blogs, AI marketing blogs, more CVEs, more outages, degraded software quality, and not much actual shipping.
Are there any examples of real products, and not just anecdotes of "I'm 10x more productive!"?
The two main takeaways: create a CLAUDE.md file that defines everything about the project, and have Claude feed back into the file when it makes mistakes and how to fix them.
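A sketch of the general shape of such a file (everything below is a placeholder, not the real CLAUDE.md):

    # CLAUDE.md (illustrative sketch -- placeholders only)

    ## Project
    - <one-paragraph description of what the application does>
    - <languages, frameworks, and the build/test commands to run>

    ## Conventions
    - Run the full test suite before marking any task complete.
    - Do not add new dependencies without asking first.

    ## Past mistakes and their fixes
    - <whenever Claude gets something wrong, the mistake and the correct
      approach get appended here so it is not repeated>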
Now it creates well structured code and production level applications. I still double check everything of course, but the level of errors is much lower.
An example application it created from a CLAUDE.md I wrote: the application reads multiple PDFs, finds the key stakeholders and related data, then generates a network graph across those files and renders it as an explorable graph in Godot.
That took 3 hours to make and test. It also supports OpenAI (lmstudio), Claude and Ollama for its LLM callouts.
One issue I can see happening is the duplication of assets at work: instead of finding an asset someone else has built, people have been creating their own.
It’s not like with AI we’re making miraculous things you’ve never seen before. We’re shipping the same kinda stuff just much faster.
I don’t know what you’re looking for. Code is code it’s just more and more being written by AI.
Even if I like this tech, I still don't want to support the companies who make it. I have yet to pay a cent to these companies, still using the credits given to me by my employer.
On the flip side, anyone who believes you can create quality products with these tools without actually working hard is also deluded. My productivity is insane, what I can create in a long coding session is incredible, but I am working hard the whole time, reviewing outputs, devising GOOD integration/e2e tests to actually test the system, manually testing the whole time, keeping my eyes open for stereotypically bad model behaviors like creating fallbacks, deleting code to fulfill some objective.
It's actually downright a pain in the ass and a very unpleasant experience working in this way. I remember the sheer flow state I used to get into when doing deep programming where you are so immersed in managing the states and modeling the system. The current way of programming for me doesn't seem to provide that with the models. So there are aspects of how I have programmed my whole life that I dearly miss. Hours used to fly past me without me being the wiser due to flow. Now that's no longer the case most of the times.
Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
Religiously, routinely refactor. After almost every feature I do a feature-level code analysis and refactoring, and every few features, a codebase-wide code analysis and refactoring.
I am quite happy with the resulting code - much less shameful than most things I've created in 40 years of being passionate about coding.