How to effectively write quality code with AI
276 points by i5heu 20 hours ago | 226 comments

aenis 19 minutes ago
I'd add:

Religiously, routinely refactor. After almost every feature I do a feature level code analysis and refactoring, and every few features - codebase wide code analysis and refactoring.

I am quite happy with the resulting code - much less shameful than most things I've created in 40 years of being passionate about coding.

reply
OptionOfT 18 hours ago
I wonder, at the end of all this, if it's still worth the risk.

A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.

Maybe it's the kind of work I'm doing, or maybe I just suck, but the code to me is a forcing mechanism into ironing out the details, and I don't get that when I'm writing a specification.

reply
agumonkey 18 hours ago
I second this. This* is the matter against which we form understanding. This here is the work at hand: our own notes, discussions we have with people, the silent walk where our brain kinda processes errors and ideas .. it's always been like this since I was a kid, playing with construction toys. I never ever wanted somebody else to play while I wait to evaluate if it fits my desires. Desires that often come from playing.

Outsourcing this to an LLM is similar to an airplane stall .. I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem" but I have no more incentives to think, create, solve anything.

Still blows my mind how different people approach some fields. I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

reply
joquarky 8 hours ago
My think/create/solve focus is on making my agentic coding environment produce high quality code with the least cost. Seems like a technical challenge worth playing with.

It probably helps that I have 40 years of experience with producing code the old ways, including using punch cards in middle school and learning BASIC on a computer with no persistent storage when I was ten.

I think I've done enough time in the trenches and deserve to play with coding agents without shame.

reply
doug_durham 18 hours ago
I'll push back against this a little bit. I find any type of deliberative thinking to be a forcing function. I've recently been experimenting with writing very detailed specifications and prompts for an LLM to process. I find that as I go through the details, thoughts occur to me; things I hadn't thought about in the design come to me. This is very much the same phenomenon as when I was writing the code by hand. I don't think this is a binary either-or. There are many ways to have a forcing function.
reply
hed 17 hours ago
I think it's analogous to writing and refining an outline for a paper. If you keep going, you eventually end up at an outline where you can concatenate what are basically sentences together to form paragraphs. This is sort of where you are now, if you spec well you'll get decent results.
reply
agumonkey 17 hours ago
I agree, I felt this a bit. The LLM can be a modeling peer in a way. But the phase where it goes to validate / implement is also key to my brain. I need to feel the details.
reply
blibble 16 hours ago
> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.

people seem to have an inability to predict second and third order effects

the first order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts

but the second order effect is: unless you're in the top 10%, you will now lose your job, permanently

and the third order effect is the economy collapses as it is built on consumer spending

reply
cowlby 13 hours ago
Alternatively, another second order effect is that you can't sip the latte anymore because you're orchestrating 8 bots to do the work, and you're back to 80%-100% time saturation.
reply
coldtea 7 hours ago
The previous second order effect is more likely. For the one orchestrating 8 bots, 7 others are not needed anymore.
reply
jochem9 6 hours ago
So far in my career I have always had more requests coming in than implementations going out. If I can go 3 or 10 times faster, then I will still have plenty of work. Especially for the slew of ideas that are never even put towards a dev, because they're already considered too low-value to be built. Or the ideas that are so far-fetched they were never considered feasible. I am not worried work will dry up.

What I believe is going to be interesting is what happens when non-engineers adopt building with agentic AI. Maybe 70 or 80% of their needs will be met without anyone else directly involved. My suspicion is that it will just create more work: making those generated apps work in a trustworthy manner, giving the agents more access to build context and make decisions, turning those one off generated apps into something maintainable, etc.

reply
dagss 7 hours ago
Or, there is just a lot more software written as the costs drop. I think most people work with software not tailored enough for their situation..
reply
re-thc 38 minutes ago
> I see people at work who are drooling about being able to have code made for them

These people just drool at being able to have work done for them to begin with. Are you sure it is just "code"?

reply
CTDOCodebases 17 hours ago
I wonder over the long term how programmers are going to maintain the proficiency to read and edit the code that the LLM produces.
reply
elzbardico 9 hours ago
There were always many mediocre engineers around, some of them even with fancy titles like "Senior," "Principal", and CTO.

We have always survived it, so we can probably also survive mediocre coders not reading the code the LLM generates for them, because they are unable to see the problems they were never able to see in their handwritten code either.

reply
agumonkey 17 hours ago
Personally I planned to allocate weekly challenges to stay sharp.
reply
therealdrag0 8 hours ago
Honestly it's not that hard. I already coded less and less as part of my job as I got more senior and just didn't have time, but it was still easy to do code reviews and fix bugs, or sit down and whip out a thousand lines in a power session. Once you learn it, it doesn't take much practice to maintain it. A lot of traditional coding is very inefficient. With AI it's like we're moving from combustion cars to EVs: the energy efficiency is night and day for doing the same thing.

That said, the next generation may struggle, but they’ll find their way.

reply
p1esk 9 hours ago
I don’t read or edit the code my claude code agent produces. That’s its job now. My job is to organize the process and get things done.
reply
shinycode 6 hours ago
In this case, why can't other agents just automate your job completely? They are capable of that. What do you bring to the process by still doing the organization manually?
reply
p1esk 2 hours ago
I still have to tell it what to do, and often how to do it. I manage its external memory and guidelines, and review implementation plans. I’m still heavily involved in software design and test coverage.

AI is not capable yet of automating my job completely – I anticipate this will happen within two years, maybe even this year (I’m an ML researcher).

reply
shinycode 2 hours ago
Do you mean, from your perspective, within 2 years humans won’t be able to bring anything of value to the equation in management and control ?
reply
p1esk 2 hours ago
No, I mean that my job in its current form – as an ML researcher with a PhD and 15 years of experience – will be completely automated within two years.
reply
fauigerzigerk 32 minutes ago
If you want a machine (or in fact another human) to do something for you, there are two tasks you cannot delegate to them:

a) Specify what you want them to do.

b) Check if the result meets your expectations.

Does your current job include neither a nor b?

reply
p1esk 18 minutes ago
A and B happen at different abstraction levels. My abstraction level will be automated. My manager's level will probably last another year or so.
reply
fauigerzigerk 7 minutes ago
So your assumption is that it will ultimately be the users of software themselves who will throw some everyday language at an AI, and it will reliably generate something that meets those users' intuitive expectations?
reply
dadandang 4 hours ago
simonw alert!!!
reply
ath3nd 6 hours ago
[dead]
reply
Akranazon 18 hours ago
Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.
reply
gtowey 17 hours ago
Maybe.

But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.

reply
rapind 17 hours ago
> But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt.

It's not just likely, but it's guaranteed to happen if you're not keeping an eye on it. So much so, that it's really reinforced my existing prejudice towards typed and compiled languages to reduce some of the checking you need to do.

Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests, though (which begs the question: is the dynamic language still saving you time?).

reply
Wobbles42 14 hours ago
Companies aren't evaluating on "keeping an eye on technical debt", but they ARE directly evaluating on whether you use AI tools.

Meanwhile they are hollowing out work forces based on those metrics.

If we make doing the right thing career limiting this all gets rather messy rather quickly.

reply
joquarky 8 hours ago
> If we make doing the right thing career limiting this all gets rather messy rather quickly.

This has already happened. The gold rush brogrammers have taken over.

Careers are over. Company loyalty is a relic. Now it's a matter of adapting quickly to earn enough to survive.

reply
zingar 16 hours ago
Tests make me faster. Dynamic or not feels irrelevant when I consider how much slower I’d be without the fast feedback loop of tests.
reply
rapind 11 hours ago
You can (and probably should) still write tests, but there's an entire class of errors you know can't happen, so you need far fewer tests, focusing only on business logic for the most part.
reply
recursive 14 hours ago
Static type checking is even faster than running the code. It doesn't catch everything, but if finding a type error in a fast test is good, then finding it before running any tests seems like it would be even better.
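The compile-time point can be sketched in a few lines (the `Invoice` shape below is invented purely for illustration): the type annotation makes a whole class of generated-code mistakes unrepresentable before any test even runs.

```typescript
// Invented example: summing invoice totals kept as integer cents.
interface Invoice {
  amountCents: number; // integer cents, not a dollar float
  currency: string;
}

function totalCents(invoices: Invoice[]): number {
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

// Checked by `tsc` before anything runs:
const total = totalCents([
  { amountCents: 1250, currency: "EUR" },
  { amountCents: 499, currency: "EUR" },
]);
console.log(total); // 1749

// An agent-generated call like the one below is rejected at compile
// time, while the equivalent plain-JavaScript code would only fail at
// runtime (or worse, silently produce NaN):
// totalCents([{ amount: "12.50", currency: "EUR" }]);
```

The point from the comment above: the bad call never reaches a test suite, because it never compiles.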
reply
agumonkey 18 hours ago
Oh, I'm well aware of this. I admitted defeat in a way.. I can't compete. I'm just at a loss, and unless LLMs stall and break for some reason (AI bubble, enshittification..) I don't see a future for me in "software" in a few years.
reply
anjel 10 hours ago
The future is either a language model trained on AI code bloat and on the ways to optimize that bloat away,

OR,

something like Mercor, currently getting paid really well by Meta, OpenAI, Anthropic and Gemini to pay very smart humans really well to proofread language model outputs.

reply
stareatgoats 15 hours ago
Somehow I appreciate this type of attitude more than the one that reflects total denial of the current trajectory. Fervent denial and AI trash-talking have been maybe the single most dominant sentiment on HN over the last year, by all means interspersed with a fair amount of amazement at our new toys.

But it is sad if good programmers lose sight of the opportunities the future will bring (future as in the next few decades). If anything, software expertise is likely to be one of the most sought-after skills, only a slightly different kind of skill than churning out LOC on a keyboard faster than the next person: people who can harness the LLMs, design prompts at the right abstraction level, verify the code produced, understand when someone has injected malware, etc. These skills will be extremely valuable in the short to medium term AFAICS.

But ultimately we will obviously become obsolete if nothing (really) catastrophic happens first; when that happens, likely all human labor will be obsolete too, and society will need to be organized differently than exchanging labor for money for means of sustenance.

reply
agumonkey 15 hours ago
I get crazy over the "engineers are not paid to write LOC" line; nobody is sad because they don't have to type anymore. My two issues are that it levels the delivery game (for the average web app, anybody can now output something acceptable), and that it doesn't help me conceptualize solutions better, so I revert to letting it produce stuff that is not malleable enough.
reply
majormajor 12 hours ago
I wonder about who "anybody can now output something acceptable" will hit most - engineers or software entrepreneurs.

Any implementation moat around rapid prototyping, and any fundraising moat around hiring a team of 10 to knock out your first few versions, seems gone now. Trying to sell MVP-tier software is real hard when a bunch of your potential customers will just think "thanks for the idea, I'll just make my own."

The crunch for engineers, on the other hand, seems like that even if engineers are needed to "orchestrate the agents" and manage everything, there could be a feature-velocity barrier for the software that you can still sell (either internally or externally). Changing stuff more rapidly can quickly hit a point of limited ROI if users can't adjust, or are slowed by constant tooling/workflow churn. So at some point (for the first time in many engineers' career, probably) you'll probably see product say "ok even though we built everything we want to test, we can't roll it all out at once!". But maybe what is learned from starting to roll those things out will necessitate more changes continually that will need some level of staffing still. Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.

In both of those cases the biggest challenge seems to be "how do you keep it from toppling down over time" which has been the biggest unsolved problem in consumer software development for decades. There's a prominent crowd right now saying "the agents will just manage it by continuing to hack on everything new until all the old stuff is stable too" but I'm not sure that's entirely realistic. Maybe the valuable engineering skills will be putting in the right guardrails to make sure that behavioral verification of the code is a tractable problem. Or maybe the agents will do that too. But right now, like you say, I haven't found particularly good results in conceptualizing better solutions from the current tools.

reply
agumonkey 10 hours ago
> your potential customers will just think "thanks for the idea, I'll just make my own."

yeah, and i'm surprised nobody talks about this much. prompting is not that hard, and some non software people are smart enough to absorb the necessary details (especially since the llm can tutor them on the way) and then let the loop produce the MVP.

> Or maybe cheaper code just means ever-more-specialized workflows instead of pushing users to one-size-fits-all tooling.

Interesting thought

reply
almostdeadguy 15 hours ago
If the world comes to that it will be absolutely catastrophic, and it's a failure to grapple with the implications when many AI company executives think you can paper over the social upheaval with some UBI. There will be no controlling what happens, and you don't even need to believe in some malicious autonomous AI to see that.
reply
acedTrex 17 hours ago
Yep, it's a rather depressing realization, isn't it. Oh well, life moves on I suppose.

I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.

reply
agumonkey 16 hours ago
i'm sorry if I pulled everybody down .. but it's been many months since gemini and claude became solid tools, and regularly i have this strong gut feeling. i tried reevaluating my perception of my work, goals, value .. but i keep going back to nope.
reply
gspr 7 hours ago
I hear you. And maybe you're right. Maybe I'm deluding myself, but: when I look at my skilled colleagues who vibecode, I can't understand how this is sustainable. They're smart people, but they've clearly turned off. They can't answer non-trivial questions about the details of the stuff they (vibe-)delivered without asking the LLM that wrote it. Those using the code downstream aren't gonna stand (or pay!) for this long-term! And the skills of the (vibe-)authors will rapidly disappear.

Maybe I'm just as naive as those who said that photographs lack the soul of paintings. But I'm not 100% convinced we're done for yet, if what you're actually selling is thinking, reasoning and understanding.

reply
shinycode 6 hours ago
The difference with a still photograph is that code is a functional encoding of an intention. An LLM's code could be perfect and still not encode the intended product; I've seen that on many occasions. Many people don't understand what code really is about and think they now have a printer toy, so we don't need pencils anymore. That's not at all the same thing. Code is intention, logic, specific use case all at once. With a non deterministic system and vague prompting there will be misinterpreted intentions from LLM because the model makes decisions to move forward. The problem is the scale of it, we're not talking about 1000 loc. In a month you can generate millions of loc, in a year hundreds of millions of loc.

Some will have to crash and burn their company before they realize that having no human at all in the loop is nonsense. Let them touch the fire and make up their minds, I guess.

reply
raw_anon_1111 4 hours ago
> Code is intention, logic, specific use case all at once. With a non deterministic system and vague prompting there will be misinterpreted intentions from LLM because the model makes decisions to move forward. The problem is the scale of it, we’re not talking about 1000 loc. In a month you can generate millions of loc, in a year hundreds of millions of loc.

People are also non deterministic. When I delegate work to team of five or six mid level developers or God forbid outsourced developers, I’m going to have to check and review their work too.

It’s been over a decade that my vision/responsibility could be carried out by just my own two hands and be done on time within 40 hours a week - until LLMs

reply
shinycode 2 hours ago
Of course people are non deterministic. But usually we expect machines to be. That's why we trust them blindly and don't check the calculations. We review people's work all the time, though. Here, people will stop reviewing LLM code, treating it as a kind of source of truth like in other areas. That's my point: reviewing code takes time, and even more time when no human wrote it. It's a dangerous path to stop reviews out of trust in the machine, now that the machine is just kind of like humans: non deterministic.
reply
raw_anon_1111 2 hours ago
No one who has any knowledge or who has ever used an LLM expects determinism.

And there are no computer professionals who haven’t heard about hallucinations.

Reviewing whether the code meets requirements through manual and automated tests - and that’s all I cared about when I had a team of 8 under me - is the same regardless. I wasn’t checking whether John used a for loop or while loop in between my customer meetings and meetings with the CTO. I definitely wasn’t checking the SOQL (not a typo) of the Salesforce consultants we hired. I was testing inputs and outputs and UX.

reply
skydhash 4 minutes ago
There are so many types of requirements though. Security is one, performance is another. No one has cared about while/for for a long time.
reply
gspr 2 hours ago
People are indeed not deterministic. But they are accountable. In the legal sense, of course, but more importantly, in an interpersonal sense.

Perhaps outsourcing is a good analogy. But in that case I'd call it outsourcing without accountability. LLMs feel more like an infinite chain of outsourcing.

reply
raw_anon_1111 2 hours ago
As a former tech lead and now staff consultant who leads cloud implementations + app dev, I am ultimately responsible for making sure that projects are done on time, on budget and meets requirements. My manager nor the customer would allow me to say it’s one of my team members fault that something wasn’t done correctly any more than I could say don’t blame me blame Codex.

I’ve said repeatedly over the past couple of days that if a web component was done by someone else, it might as well have been created by Claude, I haven’t done web development in a decade. If something isn’t right or I need modifications I’m going to either have to Slack the web developer or type a message to Claude.

reply
andhuman 6 hours ago
I have this nagging feeling that I'm more and more skimming text, not just what the LLMs output, but all types of text. I'm afraid people will get too lazy to read when the LLM is almost always right. Maybe it's a silly thought. I hope!
reply
agumonkey 4 hours ago
there are some youtube videos about the topic, be it pupils in high school addicted to llms or adults losing skills, and not devs only; society is starting to see strange effects
reply
atentaten 2 hours ago
Can you provide links to these videos?
reply
gspr 5 hours ago
This is my fear too.

People will say "oh, it's the same as when the printing press came, people were afraid we'd get lazy from not copying text by hand", or any of a myriad of other innovations that made our lives easier. I think this time it's different though, because we're talking about offloading the very essence of humanity – thinking. Sure, getting too lazy to walk after cars became widespread was detrimental to our health, but if we get too lazy to think, what are we?

reply
untrust 12 hours ago
Imagine everyone who is in less technical or skilled domains.

I can't help but resist this line of thinking as a result. If the end is nigh for us, it's nigh for everyone else too. Imagine the droves of less technical workers in the workforce who will be unseated before software engineers. I don't think it is tenable for every worker in the first world to become replaced by a computer. If an attempt at this were to occur, those smart unemployed people would be a real pain in the ass for the oligarchs.

reply
Wobbles42 14 hours ago
I feel the same.

Frankly, I am not sure there is a place in the world at all for me in ten years.

I think the future might just be a big enough garden to keep me fed while I wait for lack of healthcare access to put me out of my misery.

I am glad I am not younger.

reply
Der_Einzige 11 hours ago
Yup. The majority of this website is going to find out they were grossly overpaid for a long time.
reply
sdf2erf 12 hours ago
So why haven't you been fired already?

.......

reply
agumonkey 10 hours ago
gemini has only been deployed in the corp this year, but the expectations are now higher (doubled). i'll report by the end of the year..
reply
jeppester 18 hours ago
That's also how I feel.

I think you have every right to doubt those telling us that they run 5 agents to generate a new SAAS product while they sip a latté in a bar. To work like that, I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.

Yet I think coding agents can be quite a useful help for some of the trivial, but time consuming chores.

For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.

They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!

Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm half-way there in 2-3 hours.

reply
frankc 17 hours ago
I think we really need to have a serious think about what "good quality" means in the age of coding agents. A lot of the effort we put into maintaining quality has to do with maintainability, readability, etc. But is it relevant if the code isn't for humans? What is good for a human is not necessarily what is good for an AI (not to say there is no overlap). I think there are clearly measurable things we can agree still apply around bugs, security, etc., but I think there are also going to be some things we need to just let go of.
reply
Thanemate 4 hours ago
>But is it relevant if the code isn't for humans?

The implication of your statement seems to me to be: "you'll never have to directly care about it yourself, so why do you care about it?". Unless you were talking about the codebase in a user-application relationship, in which case feel free to ignore the rest of my post.

I don't believe that the code will become an implementation detail, ever. When all you do is ship an MVP to demonstrate what you're building then no one cares, before or after LLM assistance. But any codebase that lives more than a year and serves real users while generating revenue deserves to have engineers who know what's happening beyond authoring markdown instructions to multiple agents.

Your claim seems to push us towards a territory where externalizing our thought processes to a third party is the best possible outcome for all parties, because the models will only get better and stay just as affordable.

I will respond to that by pointing out that models that are ultimately flawless at code generation will be worth a fortune in terms of added value, and any corporation that wins the arms race would actually be killing itself by not raising the cost of access to its services by a metric ton. This is because there will be few LLM providers actually worth it by then, and because oligopoly is a thing.

So no. I don't expect that we'll ever reach a point where the average person will be "speaking forth" software the same way they post on Reddit, without paying cancer treatment levels of money.

But even if it's actually affordable... Why would I ever want to use your app instead of just asking an LLM to make me one from scratch? No one seems to think about that.

reply
tiny-automates 10 hours ago
i've been building agent tooling for a while and this is the question i keep coming back to. the actual failure mode isn't messy code, agents produce reasonably clean, well-typed output these days. it's that the code confidently solves a different problem than what you intended. i've had an agent refactor an auth flow that passed every test but silently dropped a token refresh check because it "simplified" the logic. clean code, good types, tests green, security hole. so for me "quality" has shifted from cyclomatic complexity and readability scores to "does the output behaviour match the specification across edge cases, including the ones i didn't enumerate." that's fundamentally an evaluation problem, not a linting problem.
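A hypothetical sketch of what such a behavioural check might look like (every name here is invented, not from any real auth library): rather than linting the agent's output, the test pins down the intent "an expired token must be refreshed before use", which survives however the agent rewrites the flow internally.

```typescript
// Invented token-refresh sketch: the behavioural contract, not the code shape.
type Clock = () => number;

interface Token { value: string; expiresAt: number }

function makeAuth(refresh: () => Token, now: Clock) {
  let token: Token = refresh();
  return {
    // Must return a token that is valid at call time.
    getToken(): Token {
      if (token.expiresAt <= now()) token = refresh();
      return token;
    },
  };
}

// The behavioural assertion an agent's "simplification" must not break:
let calls = 0;
const refresh = () => ({ value: `t${++calls}`, expiresAt: calls * 100 });
let t = 0;
const auth = makeAuth(refresh, () => t);

auth.getToken();        // token still valid, no refresh
t = 150;                // advance the fake clock past expiry
const tok = auth.getToken();
console.assert(tok.value === "t2" && calls === 2,
  "expired token must trigger a refresh");
```

If an agent "simplifies" away the expiry check, this test fails even though the code stays clean and well-typed, which is exactly the evaluation-not-linting distinction above.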
reply
mlaretallack 9 hours ago
This is where I think it's going: it feels like in the end we will end up with an "LLM" language, one more suited to how an LLM works and less to how humans read.
reply
skydhash 17 hours ago
You can’t drop anything as long as a programmer is expected to edit the source code directly. Good luck investigating a bug when the code is unclear semantically, or updating a piece correctly when you’re not really sure it’s the only instance.
reply
tjr 17 hours ago
I think that's the question. Is a programmer expected to ever touch the source code? Or will AI -- and AI alone -- update the code that it generated?

Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.

reply
majormajor 12 hours ago
> Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.

We largely moved away from "work in a graphical tool, then spit out HTML from it" because it wasn't robust at that pace of change/iteration. This wasn't exactly my domain, but IIRC there were a lot of problems around "small-looking changes are now surprisingly big changes in the generated output, with a large blast radius in terms of the other things (like interactivity) we've added in."

Any time you do a refactor that changes contract boundaries between functions/objects/models/whatever, and you have to update the tests to reflect this, you have a big risk of your new tests not covering exactly the same set of component interactions that your old tests did. LLM's don't change this. They can iterate until the tests are green, but certain changes will require changing the tests, and now "iterating until the tests are green" could be resolved by changing the tests in a way that subtly breaks surprising user-facing things.

The value of good design in software is having boundaries aligned with future desires (obviously this is never perfect foresight) to minimize that risk. And that's the scary thing to myself about not even reading the code.

reply
bornfreddy 16 hours ago
So, like we went from assembler to higher-level programming languages, we will now move to specifications for LLMs? Interesting thought... Maybe, once the "compilers" get good enough, but for mission-critical systems they are not nearly good enough yet.
reply
tjr 16 hours ago
Right. I work in aerospace software, and I do not know if this option would ever be on the table. It certainly isn't now.

So I think this question needs to be asked in the context of particular projects, not as an industry-wide yes or no answer. Does your particular project still need humans involved at the code level? Even just for review? If so, then you probably ought to retain human-oriented software design and coding techniques. If not, then, whatever. Doesn't matter. Aim for whatever efficiency metric you like.

reply
Gud 5 hours ago
Not everyone works in aerospace engineering, though.

I would guess that >90% of all web crud can already be done better by an LLM managed by a decent developer, than purely by the developer himself.

reply
9dev 15 hours ago
Then again, would anyone have guessed we’d even be seriously discussing this topic 10, 20, 40 years ago?
reply
tjr 15 hours ago
Maybe. This book from 1990

https://mitpress.mit.edu/9780262526401/artificial-intelligen...

envisions a future of AI assistance that looks not too far off from today.

reply
9dev 14 hours ago
It's also pretty close to Steve Jobs's initial vision of computing in the future (https://stevejobsarchive.com/stories/objects-of-our-life, 1983), but my point is that whatever it is we now call AI became reality much faster than anyone really saw coming. Even if the pace slows down (and it hasn't yet), things are improving so massively all the time that the world can't keep up changing to accommodate.
reply
Wobbles42 14 hours ago
This is exactly what is happening from a levels of abstraction standpoint.

The difference being that compilers and related tools are deterministic, and we can manage the outputs using mathematical proof of correctness.

The LLMs driving this new abstraction layer are another beast entirely.

reply
palmotea 17 hours ago
> I think you have every right to doubt those telling us that they run 5 agents to generate a new SAAS-product while they are sipping latté in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.

Also we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar while a machine does your work? Use all your time to make money for me, or you're fired."

AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.

reply
9dev 15 hours ago
> AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.

That’s a bit too cynical for me. After all, yes, your boss is not paying you for sipping lattes, but for producing value for the company. If there is a tool that maximises your output, why wouldn’t he want you to use that to great efficiency?

Put differently, would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?

reply
vbezhenar 9 hours ago
> why wouldn’t he want you to use that to great efficiency

Because I deny that? It's not fun for me.

> would a carpenter shop accept employees rejecting the power saw in favour of a hand saw to retain their artisanal capability?

Why not? If that makes enough money to keep going.

You might argue that in a theoretically ideal market, companies that don't utilize every possible trick to improve productivity (including AI) will lose out to the competition, but let's be real: a lot of companies are horribly inefficient and that doesn't make them bankrupt. The world of producing software is complicated.

I know that I deliver. When I'm asked to write code, I deliver it and I'm responsible for it. I enjoy the process and I can support this code. I can't deliver with AI. I don't know what it'll generate. I don't know how much time it would take to iterate to the result that I precisely want. So I can no longer be responsible for my own output. Or I'd spend more time baby-sitting the AI than it would take me to write the code myself. That's my position. Maybe I'm wrong, they'll fire me and I'll retire, who knows. The AI hype is real, and my boss often copy-pastes from ChatGPT, asking me to argue with it. That's super stupid and irritating.

reply
gombosg 6 hours ago
I can totally relate to your experience.

I started this career because I liked writing code. I no longer write a lot of code as a lead, but I use writing code to learn, to gain a deeper understanding of the problem domain etc. I'm not the type who wants to write specs for every method and service but rather explore and discover and draft and refactor by... well, coding. I'm amazed at creating and reading beautiful, stylish, working code that tells a story.

If that's taken away, I'm not sure how I could retain my interest in this profession. Maybe I'll need to find something else, but after almost a decade this will be a hard shift.

reply
palmotea 10 hours ago
> That’s a bit too cynical for me. After all, yes, your boss is not paying you for sipping lattes, but for producing value for the company. If there is a tool that maximises your output, why wouldn’t he want you to use that to great efficiency?

Sitting in a cafe enjoying a latte is not "producing value for the company." If having "5 agents to generate a new SAAS-product" matches your non-AI capacity and gives you enough free time to relax in a cafe, he's going to want you to run 50 agents generating 5 new SAAS products, until you hit your capacity.

If he doesn't need 5 new SAAS products, just one, then he's going to fire you or other members of your team.

Think of it this way: you're a piece of equipment to your boss, and every moment he lets you sit idle (on the clock) is money lost. He wants to run that piece of equipment as hard as he can, to maximize his profit.

That's labor under capitalism.

reply
recursive 14 hours ago
If the power saw ran itself without any oversight, the carpenter shop wouldn't accept any type of employees.
reply
9dev 14 hours ago
But that’s the exact opposite of what the GP was arguing; you will be expected to stick with the agent more, not less.
reply
coldtea 7 hours ago
You or someone else might be expected. The rest will just be expected to be fired.
reply
rapind 17 hours ago
I still do this, but when I'm reviewing what's been written and / or testing what's been built.

How I see it is we've reverted back to a heavier spec type approach, however the turnaround time is so fast with agents that it still can feel very iterative simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests when applicable) as the real work now. I front load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature or the overall approach to a feature as I discover (with the agent) that I'm just not happy with the gotchas that come to light.

AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.

reply
majormajor 13 hours ago
Historically software engineering has been seen as "assembly line" work by a lot of people (see all the efforts to outsource it through spec handoffs and waterfall through the years) but been implemented in practice as design-as-you-build (nobody anticipates all the questions or edge cases in advance, software specs are often an order of magnitude simpler than the actual number of branches in the code).

For mission-critical applications I wonder if making "writing the actual code" so much cheaper means that it would make more sense to do more formal design up front instead, when you no longer have a human directly in the loop during the writing of the code to think about those nasty pops-up-on-the-fly decisions.

reply
wes-k 12 hours ago
> software specs are often an order of magnitude simpler than the actual number of branches in the code

Love this! Be it design specs or a mock from the designer. So many unaccounted for decisions. Good devs will solve many on their own, uplevel when needed, and provide options.

And it absolutely means more design up front. And without a human directly in the loop, maybe people won’t skimp on this!

reply
keepamovin 4 hours ago
I think of it differently: I’ve been coding so long that ironing out the details and working through the specification with AI comes extremely naturally. It’s like how I would talk to a colleague and iterate on their work. However, the quality of the code produced by LLMs needs to be carefully managed to ensure it’s of a high standard. That’s why I formalized a system of checks and balances for my agentic coding that contains architectural guidelines as well as language-specific taste advice.

You can check it out here: https://ai-lint.dosaygo.com/

reply
wasmainiac 18 hours ago
I also second this. I find that I write better by hand, though I work on niche applications, not really standard CRUD or React apps. I use LLMs the same way I used to use Stack Overflow; if I go much farther to automate my work than that, I spend more time on cleanup compared to if I just write the code myself.

Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions… long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.

reply
andrekandre 12 hours ago

  > I use LLMs the same way I used to use Stack Overflow; if I go much farther to automate my work than that, I spend more time on cleanup compared to if I just write the code myself.
yea, same here.

i've asked an ai to plan and set up some larger non-straightforward changes/features/refactorings, but it usually devolves into burning tokens and me clicking the 'allow' button and re-clarifying over and over while it keeps trying to confirm the build works etc...

when i'm stuck though, or when i'm curious about some solution, it usually opens the way to finish the work, similar to stack overflow

reply
discreteevent 18 hours ago
Exactly. 30 years ago a mathematician I knew said to me: "The one thing that you can say for programming is that it forces you to be precise."

We vibe around a lot in our heads and that's great. But it's really refreshing, every so often, to be where the rubber meets the road.

reply
roysting 4 hours ago
I liken it to manual versus automated industrial production. I think manual coding will always have its place, just as there are still people who craft things by manual labor - woodworkers using only hand tools, or blacksmiths who still manually stoke coke fires - producing very unique and custom products; versus the highly automated production lines that efficiently produce acceptable forms of something, and enough of them that many people can have them.
reply
raw_anon_1111 16 hours ago
In 1987 when I first started coding, I would either write my first attempt in BASIC and see it was too slow and rewrite parts in assembly or I would know that I had to write what I wanted from the get go in assembly because the functionality wasn’t exposed at all in BASIC (using the second 64K of memory or using double hires graphics).

This past week, I spent a couple of days modifying a web solution written by someone else + converting it from a Terraform based deployment to CloudFormation using Codex - without looking at the code, as someone who hasn’t done front end development in a decade - and I verified the functionality.

More relevantly but related, I spent a couple of hours thinking through an architecture - cloud + an Amazon managed service + infrastructure as code + actual coding - diagramming it, labeling it, and thinking about the breakdown and phases to get it done. I put all of the requirements - that I would have done anyway - into a markdown file and told Claude and Codex to mark off items as I tested each item and summarize what they did.

Looking at the amount of work, between modifying the web front end and the new work, it would have taken two weeks with another developer helping me before AI based coding. It took me three or four days by myself.

The real kicker though is that while it worked as expected for a couple of hundred documents, it was brought completely to its knees when I threw 20x the documents into the system. Before LLMs, this would have made me look completely incompetent, telling the customer I had wasted two weeks’ worth of time and two other resources.

Now, I just went back to the literal drawing board, rearchitected it, did all of the things with code that the managed services abstracted away with a few tweaks, created a new mark down file and was done in a day. That rework would have taken me a week by itself. I knew the theory behind what the managed service was doing. But in practice I had never done it.

It’s been over a decade since I was responsible for a delivery that I could do by myself without delegating to other people, or that was simple enough that I wouldn’t start with a design document for my own benefit. Now, within the past year, I can take on larger projects by myself without the coordination/“Mythical Man-Month” overhead.

I can also in a moment of exasperation say to Codex “what you did was an over complicated stupid mess, rethink your implementation from first principles” without getting reported to HR.

There is also a lot of nice-to-have gold plating that I will do now, knowing that it will be a lot faster.

reply
carlmr 7 hours ago
>but the code to me is a forcing mechanism into ironing out the details, and I don't get that when I'm writing a specification.

This is so on point. The spec-as-code people try again and again, but reality always punches holes in their specs.

A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed that drawing is, you can't drive it, and it hides 90% of the complexity.

To me the value of LLMs is not so much in the code they write. They're usually too verbose and start building weird things when you don't constantly micromanage them.

But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.

reply
gombosg 6 hours ago
I love using LLMs as rubber ducks as well - what does this piece of code do? How would you do X with Y? etc.

The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.

reply
the_duke 17 hours ago
That's because many developers are used to working like this.

With AI, the correct approach is to think more like a software architect.

Learning to plan things out in your head upfront, without figuring things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.

To some this comes naturally, for others it is very hard.

reply
mejutoco 6 hours ago
> Learning to plan things out in your head

I don't think any complex plan should be worked out only in your head. But drawing diagrams, sketching components, listing pros and cons - 100%. Not jumping directly into coding might look more like jumping into writing a spec or a PoC.

reply
Nasrudith 4 hours ago
Maintaining a 'mental RAM Cache' is a powerful tool to understanding the system as a whole on a deep and intuitive level, even if you can only 'render' sections at a time. The bigger it is the more you can keep track of to be able to foresee interactions between distant pieces.

It shouldn't be your only source of a plan as you'd likely wind up dropping something, but figuring out how to jiggle things around before getting it 'on paper' is something I've found helpful.

reply
skydhash 17 hours ago
I think what GP is referring to are technical semantics and accidental complexity. You can’t plan for those.

The same kind of planning you’re describing can and does happen sans LLM, usually on the sofa or in front of a whiteboard, or by reading some research materials. No good programmer rushes into coding without a clear objective.

But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.

reply
chasd00 18 hours ago
Using AI or writing your own code isn't an xor thing. You can still write the code but have a coding assistant or something an alt/cmd-tab away. I enjoy writing code, it relaxes me so that's what I do but when I need to look something up or i'm not clear on the syntax for some particular operation instead of tabbing to a browser and google.com I tab to the agent and ask it to take a look. For me, this is especially helpful for CSS and UI because I really suck at and dislike that part of development.

I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.

reply
AdieuToLogic 10 hours ago
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.

Two principles I have held for many years which I believe are relevant both to your sentiment and this thread are reproduced below. Hopefully they help.

First:

  When making software, remember that it is a snapshot of 
  your understanding of the problem. It states to all, 
  including your future-self, your approach, clarity, and 
  appropriateness of the solution for the problem at hand. 
  Choose your statements wisely.
And:

  Code answers what it does, how it does it, when it is used, 
  and who uses it. What it cannot answer is why it exists. 
  Comments accomplish this. If a developer cannot be bothered 
  with answering why the code exists, why bother to work with 
  them?
reply
raw_anon_1111 8 hours ago
To your first point - so are my many markdown files, which I tell Codex/Claude to keep updated while I work, including the reasons why I told them to do certain things. They have detailed documentation of my initial design goals and decisions that I wrote myself.

Actually those same markdown files answer the second question.

reply
8fu8uf8 10 hours ago
> If a developer cannot be bothered with answering why the code exists, why bother to work with them?

Most people can't answer why they themselves exist, or justify why they are taking up resources rather than eating a bullet and relinquishing their body-matter.

According to the philosophy herein, they are therefore worthless and not worth interacting with, right?

reply
mrklol 3 hours ago
I am similar, but I think we just have to adjust. Learn and improve at writing specs with all the details.
reply
mcny 10 hours ago
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.

I completely agree but my thought went to how we are supposed to estimate work just like that. Or worse, planning poker where I'm supposed to estimate work someone else does.

reply
gchamonlive 14 hours ago
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.

If you need that, don't use AI for it. Is it that you don't enjoy coding, or that you think it's tangential to your thinking process? Maybe while you focus on the code, have an agent build a testing pipeline, or deal with other parts of the system that are not very ergonomic or need some cleanup.

reply
andrekandre 12 hours ago

  > If you need that, don't use AI for it.
this is the right answer, but many companies now mandate ai use (burn x tokens, write y percent of code with ai), so people are bound to use it where it might not fit
reply
PeterStuer 18 hours ago
Any sufficiently detailed specification converges on code.
reply
shinryuu 18 hours ago
I couldn't agree more. It's often when I'm in the depths of the details that I make important decisions on how to engineer the continuation.
reply
jofla_net 18 hours ago
Yes, I look at this in a similar vein to the (Eval <--> Apply) cycle in the SICP textbook, as a (Design <--> Implement) cycle.
reply
tiny-automates 10 hours ago
i go back and forth on this. when i'm working on something where the hard part is the actual algorithm, say custom scheduling logic or a non-trivial state machine, i need my hands in the code because the implementation is the thinking. but for anything where the complexity is in integration rather than logic, wiring up OAuth flows, writing CRUD endpoints, setting up CI pipelines, agents save me hours and the output is usually fine after one review pass. the "code as thought" argument is real but it applies to maybe 20% of what most of us ship day to day. the other 80% is plumbing where the bottleneck is knowing what to build, not how.
reply
vunderba 18 hours ago
Sounds like the coders equivalent of the Whorfian hypothesis.
reply
Wobbles42 14 hours ago
I sometimes wonder if the economics of AI coding agents only work if you totally ignore all the positive externalities that come with writing code.

Is the entire AI bubble just the result of taking performance metrics like "lines of code written per day" to their logical extreme?

Software quality and productivity have always been notoriously difficult to measure. That problem never really got solved in a way that allowed non technical management to make really good decisions from the spreadsheet level of abstraction... but those are the same people driving adoption of all these AI tools.

Engineers sometimes do their jobs in spite of poor incentives, but we are eliminating that as an economic inefficiency.

reply
bitwize 11 hours ago
I dunno. On the one hand, I keep hearing anecdata, including hackernews comments, friends, and coworkers, suggesting that AI-assisted coding is a literal game changer in terms of productivity, and if you call yourself a professional you'd better damn well lock the fuck in and learn the tools. At the extreme end this takes the form of, you're not a real engineer unless you use AI because real engineering is about using the optimal means to solve problems within time, scale, and budget constraints, and writing code by hand is now objectively suboptimal.

On the other hand, every time the matter is seriously empirically studied, it turns out that overall:

* productivity gains are very modest, if not negative

* there are considerable drawbacks, including most notably the brainrot effect

Furthermore, AI spend is NOT delivering the promised returns to the extent that we are now seeing reversals in the fortunes of AI stocks, up to and including freakin' NVIDIA, as customers cool on what's being offered.

So I'm supposed to be an empiricist about this, and yet I'm supposed to switch on the word of a "cool story bro" about how some guy built an app or added a feature the other day that he totally swears would have taken him weeks otherwise?

I'm like you. I use code as a part of my thought process for how to solve a problem. It's a notation for thought, much like mathematical or musical notation, not just an end product. "Programs must be written for people to read, and only incidentally for machines to execute." I've actually come to love documenting what I intend to do as I do it, esp. in the form of literate programming. It's like context engineering the intelligence I've got upstairs. Helps the old ADHD brain stay locked in on what needs to be done and why. Org-mode has been extremely helpful in general for collecting my scatterbrained thoughts. But when I want to experiment or prove out a new technique, I lean on working directly with code an awful lot.

reply
tayo42 18 hours ago
I was just thinking this the other day after I did a coding screen and didn't do well. I know the script for the interviewee is that you're not supposed to write any code until you talk through the whole thing, but I think I would have done better if I could have just written a bunch of throwaway code to iterate on.
reply
positron26 16 hours ago
Are there still people under the impression that the correct way to use Stack Overflow all these years was to copy & paste without analyzing what the code did and making it fit for purpose?

If I have to say, we're just waiting for the AI concern caucus to get tired of performing for each other and justifying each other's inaction in other facets of their lives.

reply
rkafbg 16 hours ago
Lab-grown meat slop producer defends AI slop.
reply
positron26 15 hours ago
So now we're pro-slaughter and low-yield agriculture as long as we get to ride the keyboard eh?
reply
throwaway613746 17 hours ago
[dead]
reply
hannofcart 14 hours ago
The post touches on linting only very briefly, in point 7. For me, setting up a large number of static code analysis checks has had the highest impact on code quality.

My hierarchy of static analysis looks like this (hierarchy below is Typescript focused but in principle translatable to other languages):

1. Typesafe compiler (tsc)

2. Basic lint rules (eslint)

3. Cyclomatic complexity rules (eslint, sonarjs)

4. Max line length enforcement (via eslint)

5. Max file length enforcement (via eslint)

6. Unused code/export analyser (knip)

7. Code duplication analyser (jscpd)

8. Modularisation enforcement (dependency-cruiser)

9. Custom script to ensure shared/util directories are not overstuffed (built this using dependency-cruiser as a library rather than an exec)

10. Security check (semgrep)

I stitch all the above into a single `pnpm check` command and define an agent rule to run this before marking a task as complete.

Finally, I make sure `pnpm check` is run as part of a pre-commit hook to make sure that the agent has indeed addressed all the issues.
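As a rough sketch of what such a stitched-together script might look like in package.json (the tool names come from the list above, but the exact flags and config paths here are my own illustrative assumptions, not the actual setup):

```json
{
  "scripts": {
    "check": "tsc --noEmit && eslint . && knip && jscpd src && depcruise src && semgrep scan --error"
  }
}
```

Chaining with `&&` means the command exits nonzero on the first failing tool, which is what makes it usable both as an agent rule and in a pre-commit hook.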

This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.

(Edit: added mention of pre-commit hook which I missed mention of in initial comment)

reply
altern8 5 hours ago
Very nice.

BUT, what is the point of max line length enforcement? Just to catch crazy ternary operators?

reply
tiny-automates 10 hours ago
this is close to what i've landed on too. the pre-commit hook is non-negotiable. i've had Claude Code report "all checks pass" when there were 14 failing eslint rules. beyond the static analysis though, i keep hitting a harder problem: code that passes every lint rule, compiles clean, and greens the test suite but implements a subtly wrong interpretation of the spec. like an API handler that returns 200 with an empty array instead of 404, technically valid but semantically wrong. evaluating behavioural correctness against intent, not just syntax or type safety, is the gap nobody's really cracked yet. property-based testing helps but it still requires you to formalize the invariants upfront, which is often the hard part.
reply
Tade0 14 hours ago
My setup has some of the things mentioned and I found that occasionally the LLM will lie that something passes, when it doesn't.
reply
joquarky 8 hours ago
Make the error message much more dramatic and it will be less likely to miss it. Create a wrapper if you can't change the error message.

Remember these are still fundamentally trained on human communication and Dale Carnegie had some good advice that also applies to language generators.
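As a sketch of that wrapper idea - a small POSIX-sh function (the name and banner text are my own invention) that runs any check command and, on failure, repeats the failure loudly enough that an agent is unlikely to skim past it:

```shell
# loud_check: run any command; if it fails, restate the failure dramatically.
# Hypothetical helper -- wrap e.g. `loud_check pnpm check` in your agent's
# verification step so a nonzero exit is impossible to gloss over.
loud_check() {
  "$@"
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "===================================================="
    echo "!! CHECKS FAILED (exit $status) -- DO NOT PROCEED !!"
    echo "!! Fix every reported issue, then rerun the check !!"
    echo "===================================================="
  fi
  return "$status"
}
```

The function preserves the wrapped command's exit status, so it can drop into any existing hook or CI step without changing its semantics.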

reply
hannofcart 14 hours ago
Yup I have run into the same.

I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.

reply
chrysoprace 9 hours ago
That's something I find to be incredibly frustrating. I have to keep reminding it that we're not done, no matter how much I enforce that the lints must pass before we're done.
reply
esperent 7 hours ago
If you're using Claude, try the hookify plugin and ask it to block commits unless the rules pass.
reply
ppoooNN 12 hours ago
These kinda things aren’t really the issues I run into. Lack of clarity of thought, overly verbose code, needlessly defensive programming - that’s the stuff that really rots a codebase. Honestly, some of the rules above I’d want the LLM to ignore at times if we’re going for maximum maintainability.
reply
esperent 8 hours ago
Except for dependency cruiser which I hadn't heard of, this is almost exactly what I've built up over the past few weeks.

For the pre-commit hook, I assume you run it on just the files changed?

> Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)

Would you share this?

reply
not_that_d 58 minutes ago
The funny thing is, when I got a lead position in my job, I used to do really detailed ticket descriptions, going into technical considerations and possible cross-domain problems. I did it for the juniors and - to be honest - for myself, since I knew that if I took that ticket, between that moment and the moment I put some code down, I could just forget stuff.

This was pushed back hard by management because it "took too much time to create a ticket". I fought it for some months, but in the end I stopped, and I also really lost the ability and patience to do it. Juniors suffered, implementation took more time. Time passed.

Now, I am supposed to do the exact same thing, but even better and for yesterday.

reply
whynotminot 18 hours ago
The real value that AI provides is the speed at which it works, and its almost human-like ability to “get it” and reasonably handle ambiguity. Almost like tasking a fellow engineer. That’s the value.

By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.

A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.

I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.

reply
adriand 17 hours ago
It’s a solid post overall and even for people with a lot of experience there’s some good ideas in here. “Identify and mark functions that have a high security risk, such as authentication, authorization” is one such good idea - I take more time when the code is in these areas but an explicit marking system is a great suggestion. In addition to immediate review benefits, it means that future updates will have that context.

“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.

reply
chrisjj 2 hours ago
> By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage.

Next: vibe brain surgery.

/i

reply
aeonik 54 minutes ago
Brain surgery is probably a bad example... or maybe a good one, but for different reasons?

Brain surgery is highly technical AND highly vibe based.

You need both in extremely high quantities. Every brain is different, so the super detailed technical anatomies that we have are never enough, and the surgeon needs constant feedback (and insanely long/deep focus).

reply
blauditore 16 hours ago
I can't help but keep finding it ridiculous how everyone now discovers basic best practices (linting, documentation, small incremental changes) that have been known for ages. It's not needed because of AI, you should have been doing it like this before as well.
reply
whynotminot 15 hours ago
Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.

But there’s more time to do some of these other things if the actual coding time is trending toward zero.

And the importance of it can go up with AI systems because they do actually use the documentation you write as part of their context! Direct visible value can lead people to finally take more seriously things that previously felt like luxuries they didn’t have time for.

Again, if you’ve been a developer for more than 10 minutes, you’ve had the discouraging experience of painstakingly writing very good documentation only for it to be ignored by the next guy. This isn’t how LLMs work. They read your docs.

reply
chrisjj 2 hours ago
> Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.

> But there’s more time to do some of these other things if the actual coding time is trending toward zero.

I think you'll find even less time - as "AI" drives the target time to ship toward zero.

reply
chrisjj 2 hours ago
These best practice protections become essential only when you give the work to really bad programmers - such as parrots.
reply
feastingonslop 11 minutes ago
I don’t understand the interest in “quality code.” I never need to look at the code itself. I just make sure it runs right.
reply
acbart 10 minutes ago
It makes it easier to make sure it runs right. Code that is easy to verify is quality code; code that is hard to verify is not.
reply
jweir 17 hours ago
Remember having to write detailed specs before coding? Then folks realized it was faster and easier to skip the specs and write the code? So now are we back to where we were?

One of the problems with writing detailed specs is that it presumes you understand the problem, but often the problem is not yet understood - you learn to understand it through coding and testing.

So where are we now?

reply
chrisjj 2 hours ago
Skip specs, and you often ended up writing the wrong program - at substantial cost.

The main difference now is the parrots have reduced the cost of the wrong program to near zero, thereby eliminating much of the perceived value of a spec.

reply
exitb 16 hours ago
We’re not „thinking with portals” about these things enough yet. Typically we’d want a detailed spec beforehand, as coding is expensive and time-consuming, thus we want to make sure we’re coding the right thing. With AI though, coding is cheap. So let AI skip the spec and write the code badly. Then have it review the solution, build understanding, design a spec for a better solution, and have it write it again. Rinse and repeat as many times as you need.

It’s also nothing new, as it’s basically Joe Armstrong's programming method. It’s just not prohibitively expensive for the first time in history.

reply
chrisjj 2 hours ago
Joe should sue.
reply
exitb 48 minutes ago
That’d be challenging for him right now.
reply
bitwize 16 hours ago
Astronaut 1, AI-assisted developers: You mean, it's critical to plan and spec out what you want to write before you start in on code?

Astronaut 2, Tim Bryce: Always has been...

reply
emsign 17 hours ago
Sounds like an awful lot of work and nannying just to avoid writing code yourself. Coding used to be fun and enjoyable once...
reply
shockwaverider 17 hours ago
I’m finding it to be the opposite. I used to love writing everything by hand but now Claude is giving me the ability to focus more on architecture. I like just sitting down with my coffee and thinking about the next part of my project, how I’d like it to be written and Claude just fills it in for me. It makes mistakes at times but it also finds a lot of mine that I hadn’t even realized were in my code base.
reply
xandrius 16 hours ago
Yep, I get that some people love the act of literally typing "x = 2;" but to me coding is first and foremost problem solving. I have a problem (either truly mine or someone else's), I come up with a solution in my head and slowly implement it.

Before I also had to code it and then make sure it had no issues.

Now I can skip the coding and then just have something spit out something which I can evaluate whether I believe is a good implementation of my solution or not.

Of course, you need the skill to know good from bad but for medium to senior devs, AI is incredibly useful to get rid of the mundane task of actually writing code, while focusing on problem solving with critical review of magically generated code.

reply
jatora 9 hours ago
A good bit of scaffolding and babysitting allows you to let the model run much faster and more efficiently. Building your tool faster. I don't code to code, I code to build something I want.
reply
clarity_hacker 13 hours ago
The forcing function doesn't disappear - it shifts. When you read and critique AI-generated code carefully, you get a similar cognitive workout: Why did it structure this that way? What edge case did it miss? How does this fit the broader architecture?

The danger is treating the output as a black box. If you skip the review step and just accept whatever it produces, yes, you'll lose proficiency and accumulate debt. But if you stay engaged with the code, reading it as critically as you would a junior dev's PR, you maintain your understanding while moving faster.

The technical debt concern is valid but it's a process problem, not an inherent flaw. We solved "juniors write bad code" with code review, linting, and CI. We can solve "LLMs write inconsistent code" with the same tools - hannofcart's 10-layer static analysis stack is a good example. The LLM lies about passing checks? Pre-commit hook catches it.
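The pre-commit idea in that last sentence can be made concrete. Here is a minimal sketch of such a hook as a Python script saved to `.git/hooks/pre-commit`; the `ruff`/`pytest` commands are illustrative, not something the comment prescribes:

```python
import subprocess
import sys

def run_checks(checks):
    """Run each check command; return False on the first failure."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"pre-commit: {' '.join(cmd)} failed", file=sys.stderr)
            return False
    return True

# Hypothetical checks -- swap in your real linter/test runner, e.g.
# ["ruff", "check", "."] and ["pytest", "-q"]. Installed as an
# executable .git/hooks/pre-commit, a non-zero exit blocks the commit:
#
#   if not run_checks([["ruff", "check", "."], ["pytest", "-q"]]):
#       sys.exit(1)

# Demo with one check that always passes and one that always fails:
ok = run_checks([[sys.executable, "-c", "pass"]])
bad = run_checks([[sys.executable, "-c", "raise SystemExit(1)"]])
```

Because the checks actually execute, an agent claiming "all tests pass" cannot get a failing change past the hook.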

reply
teaearlgraycold 13 hours ago
Pre commit hook is definitely necessary. One thing I’ve seen a lot with Opus recently is it lying that a new linter warning or error was there before it made a change. They’ve learned from us too well!
reply
elzbardico 9 hours ago
In general, I prefer to do the top-level design and the big abstractions myself. I care a lot about cohesion and coupling; I like to give a lot of thought to my interfaces.

And in practice, I am happy enough that the LLM helps me eliminate some toil, but I think you need to know when it is time to fold your cards and leave the game. I prefer to fix small bugs in the generated code myself rather than asking the agent, as it tends to go too far when fixing its own code.

reply
elzbardico 9 hours ago
Ironically, I use the time saved using agents to read technical books ferociously.

Coding agents made me really get something back from the money I pay for my O'Reilly subscription.

So, coding agents are making me a better engineer by giving me time to dive deeper into books instead of having to just read enough to do something that works under time pressure.

reply
lz400 13 hours ago
The best thing about this is that AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now by some of the smartest coders in the world and the next gen AI will incorporate all of this, making them ironically unnecessary.
reply
kimixa 2 hours ago
None of this is new, it was pretty much all "best practice" for decades and so already in the training data for the first generation.

If the issue is SNR and the ratio of "good" vs "bad" practices in the input training corpus, I don't know if that's getting better.

reply
chrisjj 2 hours ago
> AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now

Yes!

> by some of the smartest coders in the world

Hmm... How will it filter out those by the dumbest coders in the world?

Including those by parrots?

reply
coldtea 7 hours ago
The more AI-produced crap each generation of AI consumes as training data, the worse it gets. This has been mathematically proven.
reply
klysm 13 hours ago
They will also be reading all of the slop generated by the current and previous generations of LLMs
reply
aristofun 2 hours ago
Why do shallow and likely generated posts by a “Knowledge Management Advocate” get so many points on HN?

Just because of a hype?

reply
egrtah 18 hours ago
Too bad that software developers are carrying water for those who hate them and mock them for being obsolete in 6-12 months, while they eat caviar (probably evading sanctions) and clink champagne glasses in Davos:

https://xcancel.com/hamptonism/status/2019434933178306971

And all that after stealing everyone's output.

reply
atomic128 17 hours ago
Underground Resistance Aims To Sabotage AI With Poisoned Data

https://news.ycombinator.com/item?id=46827777

reply
red75prime 16 hours ago
Textile workers sabotage mechanical looms. History repeats itself.
reply
Nasrudith 4 hours ago
What do we want? Meaningless preventable toil! When do we want it? Now!
reply
kergonath 8 minutes ago
Nobody wants meaningless preventable toil. What people want is a living. Nobody would be afraid of AI taking their jobs if it didn’t mean that they’d get fired.
reply
anonnon 5 hours ago
The enthusiasm so many devs show for it is also quite bizarre, saying things like "AI makes me so much more productive," with the implication that they will be its primary beneficiaries, and that it won't result in a massive reduction in demand, compensation, and status for developers, adversely affecting them. Even more bizarre when you realize these devs aren't the ones optimizing some popular video codec or writing avionics software for a fighter jet, but instead gluing together NPM packages--probably the first or second rung on on the software "innovator's dilemma" ladder of disruption.
reply
ppoooNN 12 hours ago
I’ll believe it when those same engineers fix CC’s awful performance (mostly kidding, though I do wonder why they can’t. Feels like it’s doable).

In reality that man is hoping to IPO in 6-12 months, if anyone is wondering why the “use claude or you’re left behind” is so heavy right now.

reply
joriJordan 16 hours ago
My tricks:

Define data structures manually, ask AI to implement specific state changes. So JSON, C .h files, or other source files of function signatures, with prompts put in there. Never tried the monolithic Agents.md definition-file approach.
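A tiny Python sketch of that split, with the data structure and signature hand-written and only the body delegated to the model (the `Account` example is invented purely for illustration):

```python
from dataclasses import dataclass

# Data structure defined by hand -- the agent is not allowed to change this.
@dataclass(frozen=True)
class Account:
    owner: str
    balance_cents: int

# Signature and contract written by hand; only the body is delegated.
def apply_deposit(account: Account, amount_cents: int) -> Account:
    """Return a new Account with the deposit applied. Must not mutate input."""
    if amount_cents < 0:
        raise ValueError("deposit must be non-negative")
    return Account(account.owner, account.balance_cents + amount_cents)
```

Freezing the dataclass and fixing the signature means a generated body can be spot-checked quickly: it either honors the contract or it doesn't.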

Also I demand it stick to a limited set of processing patterns. Usually dynamic, recursive programming techniques and functions. They just make the most sense to my head and using one style I can spot check faster.

I also demand it avoid making up abstractions and stick to mathematical semantics. Unique namespaces are not relevant to software in the AI era. It's all about using unique vectors as keys to values.

Stick to one behavior or type/object definition per file.

Only allow dependencies that are designed as libraries to begin with. There is a ton of documentation to implement a Vulkan pipeline so just do that. Don't import an entire engine like libgodot.

And for my own agent framework I added observation of my local system telemetry via common Linux files and commands. This data feeds back in to be used to generate right-sized sched_ext schedules and leverage bpf for event driven responses.

Am currently experimenting with generation of small models of my own data. A single path of images for example not the entire Pictures directory. Each small model is spun akin to a Docker container.

LLMs are monolithic (massive) zip files of the entire web. No one is really asking for that. And anyone who needs it already has access to the web itself.

reply
undeveloper 15 hours ago
small agents.md files are worth it, at least for holding some basic information (look at build.md to read how to build, the file structure looks like so), rather than have the agent burn double the tokens searching for it anyway.
reply
blmarket 17 hours ago
Some pattern I found from my hobby project.

1. Keep things small and review everything the AI writes, or
2. Keep things bloated and let AI do whatever it wants within the designated interface.

Initially I drew this line for API service / UI components, but it later expanded to other domains. e.g. For my hobby Rust project I try to keep "trait"s single-responsibility, never overlapping, easy to understand, etc., but I never look at AI-generated "impl"s as long as they pass some sensible tests and conform to the traits.
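A rough Python analogue of that boundary (the original comment is about Rust traits; `RateLimiter` and the implementation here are made-up names): the hand-written interface stays small and single-responsibility, and generated implementations are only ever checked against it by tests, never read line by line.

```python
from typing import Protocol

# The hand-written boundary: one small, single-responsibility interface.
class RateLimiter(Protocol):
    def allow(self, key: str) -> bool:
        """Return True if this key may proceed right now."""
        ...

# A generated implementation you never read, as long as it passes the tests.
class FixedWindowLimiter:
    def __init__(self, limit: int):
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, key: str) -> bool:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

# The sensible tests that guard the boundary: with a limit of 2, the
# third call for the same key must be rejected.
def check_conforms(limiter: RateLimiter) -> bool:
    return limiter.allow("a") and limiter.allow("a") and not limiter.allow("a")
```

The review burden then falls on the interface and its tests, which stay small, rather than on every generated body.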

reply
rustyhancock 17 hours ago
I'm finding Rust is perfect for me with LLMs.

I find rust generally easier to reason about, but can't stand writing it.

The compiler works well with LLMs; there's plenty of good tooling and LSPs.

I usually write the function signatures / module APIs myself. If I'm happy with the shape of the code, and the compiler is happy that it compiles, then the errors, if any, are usually logical ones I should catch in review.

So I focus on function, compiler focuses on correctness and LLM just does the actual writing.

reply
bwestergard 17 hours ago
Do you think Rust will end up getting a boost from LLM adoption?
reply
rustyhancock 17 hours ago
It definitely has for me! I just replied to the parent explaining why.

Tl;Dr I don't mind reading rust I hate writing it and the compiler meets me in the middle.

reply
gck1 16 hours ago
Same here. I had to do a lot of being in the loop with Python, but with rust - compiler gives Claude all the information it may need and then it figures things out without me.

Writing rust scares me, but I can read it just fine. I've come up with super masochistic linting rules that claude isn't allowed to change and that has improved things quite a bit.

I wish there was a mature framework for frontend that can be configured to be as strict as rust.

reply
jwpapi 15 hours ago
The first rule is an antipattern. I think describing your architecture or ANY kind of documentation for your AI is an anti-pattern: it blows the context window, leading to worse results and actually more deviation.

The controlling mechanism is not giving it more words at the start. Agentic coding needs to work in a loop with dedicated context.

You need to think about how you can give as much intent as possible with as few words as possible.

You can build a tremendous amount of custom lint rules the AI never needs to read unless it violates them.

Every pattern in your repo gets repeated; the repo will always win over documentation, and when your repo is well structured you don’t need to repeat this to the AI.

It’s like dev has always been: watch what has gone wrong and make sure that whole type of error can’t happen again.

reply
andrekandre 12 hours ago

  > repo will always win over documentation
it really does seem like this... also new devs are like that too: "i just copied this pattern used over here and there, what's wrong?" is something i've heard over and over lol

i think languages that allow expression of "this is deprecated, use x instead" will be useful for that too

reply
ryanthedev 13 hours ago
I created my own Claude skill to enforce this and be sure it weaves in all the best practices we learned.

https://github.com/ryanthedev/code-foundations

I’m currently working on a checklist and profile based code review system.

reply
Sparkyte 10 hours ago
I use it for scaffolding and often correct it for the layout I prefer. Then I use it to check my code, and then scaffold in some more modules. I then connect them together.

As long as you review the code and correct it, it is no different from using Stack Overflow. A Stack Overflow that reads your code and helps stitch the context.

reply
anupamchugh 9 hours ago
"Stack Overflow that reads your codebase" — perfect. But Stack Overflow is stateless. Agent sessions aren't.

One session's scaffold assumes one pattern. Second session scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.

Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.

reply
InsideOutSanta 16 hours ago
My approach:

1. Have the LLM write code based on a clear prompt with limited scope
2. Look at the diff and fix everything it got wrong

That's it. I don't gain a lot in velocity, maybe 10-20%, but I've seen the code, and I know it's good.

reply
scherlock 15 hours ago
Same. Small units of work, iterate on it till it's right, commit it, push it, then do the next increment of work. It's how I've always worked, except now I sometimes let someone else figure out the exact API calls (I'm still learning React, but Claude helps get the basics in place for me). If the AI just keeps screwing up, I'll grab the wheel and do it myself. It sometimes helps me get things going, but it hasn't been a huge increase in productivity. Then again, I'm not paying the bill, so whatever.
reply
wreath 5 hours ago
so is the 10-20% in velocity worth the money and the process-complexity added? I'm assuming you're measuring your own velocity, not your team's, since that includes time to review and deploy etc.
reply
kbaker 16 hours ago
The GSD tool (get-shit-done) automates a very similar process to this, and has been mind-blowing for larger projects and refactors.

https://github.com/glittercowboy/get-shit-done

You still need to know the hard parts: precisely what you want to build, all domain/business knowledge questions solved, but this tool automates the rest of the coding and documentation and testing.

It's going to be a wild future for software development...

reply
bornfreddy 16 hours ago
I found an easier way that Works For Me (TM). I describe the problem to the LLM and ask it to solve it step by step, but strictly in Ask mode, not Agent mode. Then I copy or even type the lines into the code. If I wouldn't write the line myself, it doesn't go in, and I iterate some more.

I do allow it to write the tests (lots of typing there), but I break them manually to see how they fail. And I do think about what the tests should cover before asking the LLM to tell me (it does come up with some great ideas, but it also doesn't cover all the aspects I find important).
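That "break it and watch it fail" step can be sketched in a few lines of Python (function names invented for illustration): a generated test is only trusted once it has been seen to fail on deliberately broken code.

```python
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def leap_year_test(fn) -> bool:
    """A (hypothetically LLM-drafted) test: True iff fn matches all cases."""
    cases = {2000: True, 1900: False, 2024: True, 2023: False}
    return all(fn(y) == expected for y, expected in cases.items())

# The test passes on the real implementation...
assert leap_year_test(is_leap_year)
# ...and deliberately breaking the code confirms the test can actually fail.
assert not leap_year_test(lambda y: y % 4 == 0)  # misses the century rule
```

If the broken version had also passed, the test would be asserting nothing useful, which is exactly the failure mode being guarded against.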

Great tool, but it is very easy to be led astray if you are not careful.

reply
sakopov 17 hours ago
Every engineering org should be pleading with devs not to let AI write tests. They're awful, and sometimes they literally don't even assert on the code that was generated; instead they assert against a copy of the logic inside the tests themselves.
reply
bigstrat2003 10 hours ago
Every engineering org should be pleading devs to not let AI write code, period. They continue to routinely get stuff wrong and can't be trusted any further than you can throw them.
reply
dwheeler 14 hours ago
I also made a list of tips on writing code with AI, with a special focus on security. Others may find the tips useful. Here they are: https://openssf.org/blog/2026/01/05/ai-software-development-...
reply
raphman 18 hours ago
Hi i5heu. Given that you seem to use AI tools for generating images and audio versions of your posts, I hope it is not too rude to ask: how much of the post was drafted, written or edited with AI?

The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).

If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.

reply
i5heu 17 hours ago
Hi raphman,

I have written this text myself except for 2 or 3 sentences, which I iterated with an LLM to nail down flow and readability. I would interpret that as completely written by me.

> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).

Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as what I describe here. Hence the blog post, to share what I have found works best.

> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.

I have pondered the idea and also wrote a few anecdotal experiences, but I deleted them again because I think it is hard to nail down the right balance, and it is also highly dependent on the project, which renders examples a bit useless.

And I also kind of liked the short and lean nature of it these last few days while I worked on the blog post. I might make a few more blog posts that expand on a few points.

Thank you for your feedback!

reply
orwin 17 hours ago
First article about writing code with AI I can get behind 100%. Stuff I already do, stuff I've thought about doing, and ideas I've never thought of doing ("Mark code review levels" especially is a _great_ idea)
reply
Frannky 8 hours ago
I want to give a try to gsd + open code + Cerebras code. Any experience?
reply
rektlessness 16 hours ago
All this boils down to is that AI wins when it amplifies engineers, not replaces them. And the best code still comes from devs who ultrathink.
reply
ewuhic 60 minutes ago
AI slop article. Just show me the prompt.
reply
nxobject 16 hours ago
In her defence, I use most of those strategies myself as well...
reply
krashidov 17 hours ago
> Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.

I've always advocated for using a linter and consistent formatting. But now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore, I feel like linting does not matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of very solidly documented test cases, and, much like compiled code, how the underlying implementation looks or is organized won't really matter.

reply
orwin 16 hours ago
That's the opposite. I've never read and re-read code more than I do today. The new hires generate 50 times more code than they used to, and you _have_ to check it or have compounding production issues (been there, done that). And the errors can now be anywhere; before, you more or less knew what the person writing the code was thinking and could understand why some errors were made. LLM errors could hide _anywhere_, so you have to check it all.
reply
bornfreddy 16 hours ago
Isn't that a losing proposition? Or do you get 50 times the value out of it too? In my experience the more verbose the code is, the less thought out it is. Lots of changes? Cool, now polish some more and come back when it's below 100 lines change, excluding tests and docs. I don't dare touch it before.
reply
orwin 3 hours ago
I agree, but I'm shouting at the cloud. Stuff needs to be done, and it seems to work at first, so either I just abandon quality and let things rot, or I read everything and flag every code smell.

I too use AI, but mostly to generate scripts (the most useful use of AI is 100-200 line scripts imho), test _cases_ (I write the test itself; the data inside is generated) and HTML/CSS/JS shenanigans (the logic I code; on presentation I'm inferior to any random guy on the internet, so I might as well use an AI). I also use it for stuff that never ends up in a repository, for exploration/proof of concept and out-of-scope tests (I like to understand how stuff works; that helps), or to summarize PowerPoint presentations so I can do actual work during 60-person "meetings" and still get the point.

reply
gck1 16 hours ago
They serve as guardrails for agents to not do stupid things.

If your goal is for AI to write code that works, is maintainable and extensible, you have to include as many deterministic guardrails as possible.

reply
johnsmith1840 16 hours ago
How to write good code with AI -> put in as much effort as you did before on 20% more code than you used to work with.
reply
IhateAI 14 hours ago
How to write quality code with AI? Don't let it write the code.
reply
einpoklum 18 hours ago
That sounds like the advice of someone who doesn't actually write high-quality code. Perhaps a better title would be "how to get something better than pure slop when letting a chatbot code for you" - and then it's not bad advice I suppose. I would still avoid such code if I can help it at all.
reply
Akranazon 18 hours ago
Man, you are really missing out of the biggest revolution of my life.

This is the opinion of someone who has not tried to use Claude Code, in a brand new project with full permissions enabled, and with a model from the last 3 months.

reply
bigstrat2003 10 hours ago
People have been saying "the models from (recent timeframe) are so much better than the old ones, they solve all the problems" for years now. Since GPT-4 if not earlier. Every single time, those goalposts have shifted as soon as the next model came out. With such an abysmal track record, it's not reasonable to expect people to believe that this time the tool actually has become good and that it's not just hype.
reply
Akranazon 7 hours ago
When is the last time someone said that, motivating you to try the latest model? If it was 6 or more months ago, my reply is that the sentiment expressed was partially incorrect in the past, but it is not incorrect now. If a conspiracy theorist is always wrong about a senior citizen being killed, that does not make the senior immortal.
reply
whynotminot 18 hours ago
This is a fading but common sentiment on hacker news.

There’s a lot of engineers who will refuse to wake up to the revolution happening in front of them.

I get it. The denialism is a deeply human response.

reply
bopbopbop7 12 hours ago
Where is all the amazing software and/or improvements in software quality that is supposed to be coming from this revolution?

So far the only output is the "How I use AI blogs", AI marketing blogs, more CVEs, more outages, degraded software quality, and not much of shipping anything.

Are there any examples of real products, and not just anecdotes of "I'm 10x more productive!"?

reply
EagnaIonat 46 minutes ago
I was in the same mindset until I actually took the Claude Code course they offer. I was doing so much wrong.

The two main takeaways: create a CLAUDE.md file that defines everything about the project, and have Claude feed back into the file when it makes mistakes and how to fix them.

Now it creates well structured code and production level applications. I still double check everything of course, but the level of errors is much lower.

An example application it created from a CLAUDE.md I wrote: it reads multiple PDFs, finds the key stakeholders and related data, then generates a network graph across those files and renders it as an explorable graph in Godot.

That took 3 hours to make and test. It also supports OpenAI (lmstudio), Claude and Ollama for its LLM callouts.

One issue I can see happening is duplication of assets at work: instead of finding an asset someone already built, people have been creating their own.

reply
whynotminot 8 hours ago
Sounds like a skill issue. I’ve seen it rapidly increase the speed of delivery in my shop.
reply
osn9363739 4 hours ago
Why is it so hard to find examples?
reply
whynotminot 2 hours ago
You’re asking to see my company’s code base?

It’s not like with AI we’re making miraculous things you’ve never seen before. We’re shipping the same kinda stuff just much faster.

I don’t know what you’re looking for. Code is code it’s just more and more being written by AI.

reply
falloutx 17 hours ago
It's only revolutionary if you think engineers were slow before or software was not being delivered fast enough. It's revolutionary for some people, sure, but everyone is in a different situation, so one man's trash can be another man's treasure. Most people are treading both paths, as automation threatens their livelihood and the work they loved, while still not able to understand why people would pay companies that are actively trying to convince your employer that your job is worthless.

Even if I like this tech, I still don't want to support the companies who make it. I have yet to pay a cent to these companies, still using the credits given to me by my employer.

reply
whynotminot 17 hours ago
Of course software hasn’t been delivered fast enough. There is so so so much of the world that still needs high quality software.
reply
pickleRick243 10 hours ago
Do you have this same understanding for all the people whose livelihoods are threatened (or already extinct) due to the work of engineers?
reply
falloutx 4 hours ago
Yes, but who did we automate out of a job by building crappy software? Accountants are more threatened by AI than by any of the software we created before; same with lawyers and teachers. We didn't automate any physical labourers out of a job either.
reply
computerex 17 hours ago
It's insane! We are so far beyond gpt-3.5 and gpt-4. If you're not approaching Claude Code and other agentic coding agents with an open mind with the goal of deriving as much value from them as possible, you are missing out on super powers.

On the flip side, anyone who believes you can create quality products with these tools without actually working hard is also deluded. My productivity is insane, what I can create in a long coding session is incredible, but I am working hard the whole time, reviewing outputs, devising GOOD integration/e2e tests to actually test the system, manually testing the whole time, keeping my eyes open for stereotypically bad model behaviors like creating fallbacks, deleting code to fulfill some objective.

It's actually downright a pain in the ass and a very unpleasant experience working in this way. I remember the sheer flow state I used to get into when doing deep programming where you are so immersed in managing the states and modeling the system. The current way of programming for me doesn't seem to provide that with the models. So there are aspects of how I have programmed my whole life that I dearly miss. Hours used to fly past me without me being the wiser due to flow. Now that's no longer the case most of the times.

reply
pletnes 16 hours ago
Claude Code is great at figuring out legacy code! I don't get the «for new systems only» idea, myself.
reply
notpachet 18 hours ago
> in a brand new project

Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.

reply
bigfishrunning 17 hours ago
Brand new projects have a way of turning into legacy codebases
reply
bornfreddy 16 hours ago
What are you talking about? Exploring and explaining the legacy codebases is where they shine, in my experience.
reply
dasil003 17 hours ago
This take is pretty uncharitable. I write high quality code, but also there's a bunch of code that could be useful, but that I don't write because it's not worth the effort. AI unlocks a lot of value in that way. And if there's one thing my 25 years as a software engineer has taught me is that while code quality and especially system architecture matter a lot, being super precious about every line of code really does not.

Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.

reply
computerex 18 hours ago
Can you be specific? You didn't provide any constructive feedback, whatsoever.
reply
einpoklum 17 hours ago
The article did not provide a constructive suggestion on how to write quality code, either. Nor even empirical proof in the form of quality code written by LLMs/agents via the application of those principles.
reply
computerex 17 hours ago
Yes it did, it provided 12 things that the author asserts helps produce quality code. Feel free to address the content with something productive.
reply
xandrius 16 hours ago
Look up luddites on Wikipedia, might be too deep to see the similarities though.
reply
bopbopbop7 12 hours ago
I heard that about NFTs not long ago.
reply
theywillnvrknw 3 hours ago
TLDR: Know what you are doing and outsource the typing to LLM
reply
geenkeuse 5 hours ago
[dead]
reply
rulerviper 14 hours ago
[dead]
reply
ath3nd 7 hours ago
[dead]
reply
th0ma5 18 hours ago
[dead]
reply
dadandang 4 hours ago
[flagged]
reply