This is a great article. I’ve been trying to see how layered AI use can bridge this gap, but the current models do seem to be lacking in the ambiguous design phase. They are amazing at the local execution phase.
Part of me thinks this is a reflection of software engineering as a whole. Most people are bad at design. Everyone usually gets better with repetition and experience. However, as there is never a right answer, just a spectrum of tradeoffs, it seems difficult for the current models to replicate that part of the human process.
It also reduces my hesitation to get started on something I don't yet understand well enough. Time 'wasted' on vibe-coding felt less painful than time 'wasted' on heads-down manual coding down a rabbit hole.
Nowhere is this more obvious in my current projects than with CRUD interface building. It will go nuts building these elaborate labyrinths while I sit there baffled, bemused, foolishly hoping that THIS time it will recognise that a single SQL query is all that’s needed. It knows how to write complex SQL if you insist, but it never wants to.
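To make the complaint concrete, here is a minimal sketch (with an entirely hypothetical `orders` schema) of the kind of CRUD "read" screen that needs one aggregate query, not an elaborate layer of repositories and services:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
INSERT INTO orders (customer, total)
VALUES ('alice', 12.5), ('bob', 40.0), ('alice', 7.25);
""")

# The entire "list customers with order counts and spend" screen
# is a single query, no extra abstraction layers required.
rows = conn.execute(
    "SELECT customer, COUNT(*) AS n_orders, SUM(total) AS spend "
    "FROM orders GROUP BY customer ORDER BY spend DESC"
).fetchall()

for customer, n_orders, spend in rows:
    print(customer, n_orders, spend)
```

The point is only that the database can already do the grouping, counting, and sorting; the labyrinth the model builds usually reimplements exactly this in application code.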
But even with those frustrations, damn it is a lot faster than writing it all myself.
90 percent of the things users want either A) don't exist or B) are impossible to find, install, and run without being deeply technical.
These things don't need to scale, and they don't need to be well designed. They are, for the most part, targeted, single-user, single-purpose artifacts: migration scripts between services, quick-and-dirty tools that make bad UIs and workflows less manual and more manageable.
These are the use cases I am seeing people OUTSIDE the tech sphere adopt AI coding for. It is what "non-techies" are using things like open claw for. People who in the past would have been told "No, I will not fix your computer" now talk to me excitedly about running cron jobs.
Not everything needs to be Snap-on quality; the bulk of end users are going to be happy with Harbor Freight quality, because it is better than NO tools at all.
There is no doubt that, when used in the right way, an AI coding assistant can be very helpful, but using it in the right way does not yield the fantastic productivity multipliers claimed by some. TFA describes a way of using AI that seems right, and it also describes the temptations of using AI wrong, which must be resisted.
More important is whether the productivity improvement is worth the subscription price. Nothing I have seen so far convinces me of this.
On the other hand, I believe that running a good open-weights coding assistant locally, so that you do not have to worry about token prices or about exceeding subscription limits at a critical moment, is very worthwhile.
Unfortunately, thieves like Altman have ensured that running locally has become much more difficult than last year, due to the huge increases in the prices of DRAM and SSDs. In January I had to replace an old mini-PC, but I was forced to put only 32 GB of DDR5 in the new one, the same amount as in the 7-year-old machine it replaced. If I had made the upgrade a few months earlier, I would have put in 96 GB, which would have made it much more useful. Fortunately, I also have older computers with 64 GB or 128 GB of DRAM, where bigger LLMs can be run.
This is one thing I also wonder about. If it's a really good programming helper, making 20% of your job 5x faster, then you can compute the value: that 20% slice now takes 4% of your time, freeing 16% of the year. For a $250K SWE this looks like roughly $40k/year. You don't want to hand 100% of that value to the LLM providers or you've just broken even, so then maybe it is worth $200/mo.
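The back-of-envelope arithmetic can be sketched like this (all inputs are the comment's assumed figures, not measurements):

```python
salary = 250_000   # assumed fully loaded SWE cost, $/year
fraction = 0.20    # share of the job the tool accelerates
speedup = 5        # how much faster that share gets

# The accelerated 20% slice now takes 20%/5 = 4% of the time,
# so the freed capacity is 20% * (1 - 1/5) = 16% of the year.
value = salary * fraction * (1 - 1 / speedup)
print(f"${value:,.0f}/year")
```

By the same formula, the break-even subscription price is that value divided by twelve, which is why capturing only a small slice of it (like $200/mo) still leaves the buyer ahead.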
For now, there is a lot of unpredictability in the future cost of AI, whenever you do not host it yourself.
If you pay per token, it is extremely hard to predict how many tokens you will need. If you have an apparently fixed subscription, it is very hard to predict whether you will hit limits at the most inconvenient moment, after which you may have to wait a day or so for the limits to reset.
Recently, there have been many reports of AI providers continuously reducing the limits allowed by a subscription. There is also a lot of uncertainty about future increases in subscription prices, as the most important providers appear, for now, to be pricing below cost.
Therefore, while I agree with you that when something provides definite benefits you should be able to assess whether paying for it provides a net gain for you, I do not believe that using an externally-hosted AI coding assistant qualifies for such an assessment, at least not for now.
https://techcrunch.com/2025/03/11/google-has-given-anthropic...
They don't care. They want software engineers replaced by any means necessary. They know generative AI isn't a big business; that is why they slow-walk it themselves.
Replacement won't work, of course, which is why marketing blog posts are needed.
This experience is familiar to every serious software engineer who has used AI code gen and then reviewed the output:
> But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti. I didn’t understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision.
Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever.
Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything.
This blog post shows the journey that everyone outside those two vocal minorities is going through right now: a realization that AI coding tools can be a large accelerator, but that you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time, and it’s a little disappointing to read that hard work is still required, but it is a realistic and balanced take on the state of AI coding.