Why not? Why can't faster typing help us understand the problem faster?
> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.
Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?
I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.
> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.
I guess because we're just cynical.
Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.
This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.
> Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.
Heh, depends on what you do. Many times the stakeholders can't explain what they want but can clearly articulate what they don't want when they see it.
Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.
If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.
It's not only what you can do faster (well, it is, up to a point), but also what you can now do that would have been positively insane and out of the question before.
Why do you need coding for those? You can doodle on a whiteboard for a lot of those discussions. I use Balsamiq[0] and can produce a wireframe for a whole screen in minutes. Even faster than prompting.
> If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.
If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking.
I work in science, and I’ve recently worked with a couple projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it and it didn’t do anything that it was supposed to. But I still felt like I was being “anti-ai” when criticizing it.
I understand that it’s way better when you deeply understand the problem and field though.
* Hobbyists and people engaged in hobby and personal projects
* Startup bros; often pre-funding and pre-team
* Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"
It's getting to the point I'd like people to disclose the size of the team and org they are applying these processes at LOL.
Most LinkedIn influencers, startup bros and consultancies kind of fall into the latter.
Most Enterprise IT projects fail. Including at banks. They are extremely saleable though. They don't see things that are failures as failures. The metrics are not real. Contract renewals do not focus on objective metrics.
This is why you make "$1" with all your banking relationships and actually valuable tacit knowledge, until Accenture notices and makes bajillions, and now Anthropic makes bajillions. Look, I agree that you know a lot. That's not what I'm saying. I'm saying the thing you are describing as a bottleneck is actually the foundation of the business of the IT industry.
Another POV is, yeah, listen, the code speed matters a fucking lot. Everyone says it does, and it does. Jesus Christ.
when you're building a feature and have different ideas how to go about it, it's incredibly valuable to build them all, compare, and then build another, clean implementation based on all the insights
I used to do it before, but pretty rarely, only for the most important stuff. now I do it for basically everything. and while 2-4 agents are working on building these options, I have time to work on something else.
1. you want something that's literally been done tons of times before, and it can literally just find it inside its compressed dataset
2. you want something and as long as it roughly is what you wanted, it's fine
It turns out, this is not the majority of software people are paying engineers to write.
And it turns out that actually writing the code is only part of what you're paying for - much smaller than most people think.
You are not paying your surgeon only to cut things.
You are not paying your engineer only to write code.
The above definitely make up the majority of the software people are paying developers to write. By an order of magnitude.
The novel problems for customers who specifically care about code quality is probably under 1% of software written.
If you don't recognise this, you simply don't understand the industry you work in.
As it turns out - 4 years before LLMs - at least one of the FAANGs already had auto-complete so good it could do most of what LLMs can practically do in a gigantic context.
But, sure...
Most problems are mostly non-novel but with a few added constraints, the combination of which can require a novel solution.
Why can't you understand the problem faster by talking faster?
Code is one way to ask a question, not proof that you asked a good one. Sometimes the best move is an annoying hour with the PM, the customer, or whoever wrote the ticket.
I only recently decided to take it on, given how capable Claude Code has become. It knocked out a working version of my backend pretty quickly, adhering to my spec, and then built a frontend.
The result? I realized pretty quickly that the (IMO) beautiful design just didn't actually work with how it made sense for the product to work. An hour with the prototype made it clear that I needed to redesign from the ground up around a different piece to make the user experience actually work intuitively.
If I had spent months of my spare time banging on that only to hit that wall, it would've been a much more demotivating experience. Instead, I was able to re-spec and spin up a much better version almost immediately.
do you have a example (even a toy one) where typing faster would help you understand a problem faster?
I don't understand why GP's comment is so controversial. GP is not denying that you should maybe think a little before a key hits the keyboard as many commenters seem to suppose. Both can be true.
Build a toy car with square wheels and one with triangular wheels and one with round wheels and see which one rolls better.
The issue isn't "typing faster", it's "building faster".
Building the three is the proof.
Nevertheless, it's a tool that should be used when it's useful, just like slower consideration can be used. Frontier LLMs can help significantly in either case.
Why can't standing on your head?
I think we can, in some cases.
For instance, I prototyped a feature recently and tested an integration it enabled. It took me a few hours. There's no way I would have tried this, let alone succeeded, without opencode. Because I was testing functionality, I didn't care about other aspects: performance, maintainability, simplicity.
I was able to write a better description of the problem and assure the person working on it that the integration would work. This was valuable.
I immediately threw away that prototype code, though. See above aspects I just didn't need to think about.
Sometimes you need to think slow to understand something. Offloading your thinking to a black box of numbers and accepting what it emits is not thinking slow (i.e. ponder) and processing the problem at hand.
On the contrary, it's entering tunnel vision and brute forcing. i.e. shotgun coding.
Because typing is not the same as understanding.
And coding faster CAN help us understand the problem faster. Coding faster means iterating, refactoring, trying different designs - and seeing what does and doesn't work, faster.
It's funny, because you could actually take that story and use it to market AI.
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.
Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.
In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.
In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.
Why do you need to type at all to understand the problem?
I write my best code when I'm driving my car. When I stop and park up, it's just a case of typing it all in at my leisure.
> Why not? Why can't faster typing help us understand the problem faster?
Can it help? Of course! But I think the question is too vague here and you're (presumably) unintentionally creating a false dichotomy. I'll clarify with the next responses.
> Why can't we figure out the right thing faster by building the wrong thing faster?
The main problem is that solution spaces are very large. There are two general ways to narrow the solution space: directly and indirectly. Directly by things like thinking hard, digging down, and "zooming in". Indirectly by things such as figuring out what not to do (ruling things out). You can build a lot of wrong things that don't also help you narrow that solution space.

The most effective way to "build the wrong thing" in an informative way is to first think hard and understand your solution space. You want to build the right wrong thing: the thing that helps you rule out lots of stuff. But if you are doing it randomly, you aren't doing this effectively and are probably wasting a lot of time. You probably already do this without thinking about it explicitly, but if you think about it explicitly you'll improve at it.
> Presumably we were gonna build the wrong thing either way
You always build the "wrong" thing. But it is about how wrong you are. Despite being about physics, I think Asimov's Relativity of Wrong[0] (short essay) is pretty relevant here and says everything I want to say, but better. It is worth the read and I come back to it frequently.
> I often build something to figure out what I want
Yes! But this is not quite the same thing. I do this too! I never know the full details of the thing I want before I start building. I'm not sure that's even possible, tbh. You're always going to discover more things as you get into the details and nuance. But that doesn't mean foresight is useless either.

Analogy
-------
Let's say I'm somewhere in the middle of America and I want to get to NYC. Analogous to your framing, you are asking "why can't moving faster help us get there faster?" Obviously it can! BUT speed is meaningless without direction. You don't want speed, you want velocity. If you start driving as fast as possible in a random direction, you're as likely to head in a direction that increases your distance as one that decreases it. And you are very unlikely to head in a good direction. Driving fast in the wrong direction does significantly more harm than driving slowly in the wrong direction.

So what's the optimal strategy? Find a general direction (e.g. use the sun or stars/moon to figure out where "east"(ish) is), start driving relatively slowly, refine your direction as you get more information about the landscape, and speed up as you gain more information. If you can't find a general direction, you should slowly meander, carefully taking in how the landscape/environment is changing. If it is very unchanging, then yeah, speed up, but only until you find a region that becomes more informative; then repeat the process.
If we already had perfect information about how to get to NYC then just drive as fast as fucking possible. But if we don't have that information we need a completely different strategy. Thus, t̶y̶p̶i̶n̶g̶ driving speed isn't the bottleneck.
Speed doesn't matter, velocity does
[0] https://hermiene.net/essays-trans/relativity_of_wrong.html
This is the company I (soon no longer) work at (anyone hiring?).
The thing is that they don’t even allow the use of AI. I’ve been assured that the vast majority of the code was human-written. I have my doubts but the timeline does check out.
Apart from that, this article uses a lot of words to completely miss the fact that (A) “use agents to generate code” and “optimize your processes” are not mutually exclusive things; (B) sometimes, for some tickets - particularly ones stakeholders like to slide in unrefined a week before the sprint ends - the code IS the bottleneck, and the sooner you can get the hell off of that trivial but code-heavy ticket, the sooner you can get back to spending time on the actual problems; and (C) doing all of this is a good idea completely regardless of whether you use LLMs or not; and anyone who doesn’t do any of it and thinks the solution is to just hire more devs will run into the exact same roadblocks.
I just set up Claude Code tonight. I still read and understand every line, but I don't need to Google things, move things around and write tests myself. I state my low-level intent and it does the grunt work.
I'm not going to 10x my productivity, but it'll free up some time. It's just a labour-saving technology, not a panacea. Just like a dishwasher.
I've been working on a side project that I started in 2020. If I wanted to implement a new feature it was:
- Wait for regular work hours to wrap up around 5 or 6 PM
- Get dinner and rest / relax until around 8 or 9 PM
- Open up the editor, think about the problem, Google things, read Stack Overflow which gets it 95% of the way there, Google more, dig deeper into the docs, finally find what I needed
- Write a function, make some progress, run into another roadblock, repeat previous point
- Look up and it's now 1 AM. I should write tests for this, but I'll add that to the backlog
- Backlog only ever grows
Now with AI I describe what I want, it does the grunt work likely cleaner than I ever could, adds tests and warns me about potential edge cases.
I don't know about 10x, but I'm releasing new features that my client cares about faster and I have more time to myself.
All of the negativity around AI writing code sounds like people who would say "You can't trust the compiler, you need to write the machine code yourself"
Will AI fuck up? Yes But I'm the human in the chair guiding it, and learning myself how better to prompt it to keep it from those fuck ups with every iteration.
1. Idea
2. ???
3. Profit
Coding effectively is definitely one problem. And you're right that AI helps with that problem. But for startups, side-hustles, VC pitches and the inner workings of companies (HN crowd), coding was never the problem.

Edit to add: So for people working on professional software teams, the discussion is how a hyper-increase in raw code production affects everything downstream. There are many moving parts to building stuff and selling it to people. So there's not a 1:1 line from more code to a better outcome at the system level. It's not clear, at least.
I'd say you're 180° wrong. Getting to an MVP fast is the most immediate problem when you've started a startup. Iterating on ideas fast is the most immediate problem once you've released your MVP. You need an MVP to get users, and you need to iterate to find product-market fit. Perfectly crafted code is a luxury problem you can't afford in the early stages.
I understand the need for MVP to bring an idea into reality. It's the feedback that's valuable not the code. This is not about the code. So why is the argument "write more code"?
In any case, I have yet to create a product on my own that has done well financially. So what the hell do I know. If you have, then I should probably listen to you. But I have worked on teams for successful companies and in my career, the best advice I can give to an engineer is that your code matters, do a good job and care about what you make; also it's not about the code.
Not saying LLMs are all bad, just that comparing them to dishwashers is probably not the best idea.
Human driving into work? Heating/cooling?
Wonder why big AI hasn't sold it as an environmental SAVING technology.
Proper approach to speeding things up would be to ask "What are the limiting factors which stops us from X, Y, Z".
--
This situation of management expecting things to become fast because of AI is "vibe management". Why to think, why to understand, why to talk to your people if you saw an excited presentation of the magic tool and the only thing you need to do is to adopt it?..
1. Talk to the business, solve XY problems, deal with organizational complexity, learn the business and their needs.
2. Design the architecture not just “the code”, the code has to run on something.
3. Get the design approved and agree on the holy trinity - time/cost/scope
4. Do the implementation
5. Test it for the known requirements
6. Get stakeholder approval or probably go back to #4
7. Move it into production
8. Maintenance.
Out of all those, #4 is what I always considered the necessary grunt work, and for the most part, even before AI - especially in enterprise development - most developers' work has been getting commoditized for over a decade. Even in BigTech and adjacent companies, "codez real gud" will only keep you a mid-level developer if you can't handle the other steps and lead larger/more impactful/more ambiguous projects.
As far as #5 goes, much of that can and should be done with automated tests, which can be written by AI and should be reviewed. Of course you need humans for UI and UX testing.
The LLMs can do a lot of the grunt work now.
This all happens while we are at the implementation stage and impacts all other aspects of the whole thing. It is grunt work, but we need elite grunts who see more than just the minimal requirements.
Even if you do have not so good developers, they can ramp up quickly on one specific isolated service and you can contain the blast radius.
This isn’t a new idea. This was the entire “API mandate” that Bezos had in 2002. s3 alone is made up of 200+ micro services
The engineers get this, or are willing to learn. Some (by no means most) scrum/agile leads get it.
The problem is the 'product class' don't get it, aren't interested, and by and large don't have the aptitude to understand. Try to explain cycle time or cumulative flow diagrams to a Product Manager, Product Owner, or Service Owner and they most often just brush it away as 'a technical thing'.
The problem only gets worse as the Peter principle begins to kick in and thin out the talent towards the top end of the org.
For instance, GCC will inline functions, unroll loops, and myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't) we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.
Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.
A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.
Of course we do not. Because there is no need. The process of compiling a higher-level language to assembly is deterministic and well-tested. There is no need to keep reviewing something that always yields the same result.
> We care that it works, and is correct for what it is supposed to do.
Exactly. Which is something we do not have with the output of an LLM. Because it can misunderstand or hallucinate.
Therefore, we always have to review it.
That is the difference between the output of compilers and the output of LLMs.
Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".
https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...
I count 121 of them.
I've posted this 3 times now. Code-generation by compilers written by experts is not deterministic in the way that you think it is.
So that's maybe 0.1% of all the bugs I've touched.
In that sense, code generation isn't really an interesting source of bugs for the discussion at hand.
Just because there are bugs does not mean a compiler is non-deterministic. I looked through a bunch of the bug reports and there is nothing there that can't be fixed to make it deterministic.
You can't fix an LLM to be absolutely deterministic, but you can fix a compiler.
Source code is a formal language, in a way that natural language isn't.
Maybe a dialect of legalese will emerge for software engineering?
We are in the low hanging fruit phase right now.
At least today the coding agents will cheat, choose the wrong pattern, brute force a solution where an abstraction or extra system was needed, etc. A few PR's won't make this a problem, but after not very long at all in a repo that a dev team is constantly contributing to (via their herds of agents) it can get pretty gnarly, and suddenly it looks like the agents are struggling with tech debt.
Maybe one day we can stop writing programming languages. It's a thought-provoking idea, but in practice I don't think we're there yet.
> "Programs must be written for people to read, and only incidentally for machines to execute." -- Hal Abelson
Without this, we quickly drift into treating computers and computer programs as even more magic, than we already do. When "agents" are mistaken about something, and put their "misunderstanding" into code that subsequently is incorrect, then we need to be able to go and look at it in detail, and not just bring sacrifices for the machine god.
With agentic coding, the semantics are not deterministically maintained. They are expanded, compressed, changed, and even just lost - non-deterministically.
- determinism vs non-determinism
- conceptual integrity vs "it works somewhat, don't touch it"
Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".
https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...
I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.
Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".
https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...
I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.
The compiler relies on:
* Careful use of ENV vars and CLI options
* The host system, or the compilation target for the executable (for cross-compiling)
* The source code itself
How is this really different from careful prompt engineering, and an extensive proposal/review/refine process?
They are both narrowing the scopes and establishing the guardrails for what the solution and final artifact will be.
> proposal/review/refine process
This is essentially what a sophisticated compiler, or query optimizer (Postgres) does anyway. We're just doing it manually via prompts.
So different that those concepts don't even exist.
I don't have to carefully prompt my compiler in case it might misinterpret what I'm saying. My compiler comes with a precisely specified language.
I never, ever, review the output of my compiler.
And we are talking compilers, not query optimizers, so I don't really care what they do.
I get the point that the compiler is not some pure, perfect transformation of the high-level code to the low level code, but it is a deterministic one, no?
The more complex the code becomes, iteration after iteration, the more code the AI keeps adding to fix simple problems - way more than is reasonably necessary in many cases. The amount of entropy you end up with using AI is astonishing, and it can generate a lot of it quickly.
The AI is going to do whatever it needs to do to get the prompt to stop prompting. That's really its only motivation in its non-deterministic "life".
A compiler is going to translate the input code to a typically deterministic output, and that's all it really does. It is a lot more predictable than AI ever will be. I just need it to translate my explicit instructions into a deterministic output, and it satisfies that goal rather well.
I'll trust the compiler over the AI every single day.
I might be more charitable. I'd say something like "Companies genuinely want good code but weigh the benefits of good code (future flexibility, lower maintenance costs) against the costs (delayed deployment, fewer features)."
Each company gets to make the tradeoffs they feel are appropriate. It's on technical people to explain why and risks, just like lawyers do for their area of expertise.
The points specific to software where it might not even be producing in-spec is also very good.
Comments that cite the solo dev/prototype case are of course not what this is getting at, but it's one good use of quick generation.
I would extend this article by saying what The Goal says, namely that the goal of every firm is to make money, and everything is intermediate to that. So whether or not software architecture is grade-A or grade-C, it's only ever in this subservient role to the firm's goal.
The lies about “agentic teams” and stacking GPUs in your apartment would come true.
The only thing stopping a Jira board from self-implementing is context size limitations.
Instead we have StackOverflow: Interactive Mode
I disagree - if we really did have fully automatic task-to-PR that would basically solve it.
Yeah sure you have product design and feature scoping this and that but the engineering problem is solved in the way the arithmetic problem is solved.
Take the way AI is being developed as an example. People rush to build giant agents in giant datacenters that are aligned to giant corporations and governments. They're building the agentic organism equivalent of machiavellian organizations, even though they'd be better off building digital humans that are aligned to individual humans that run on people's gaming PCs at home. They will find out that the former is the wrong architecture, but the cost of that failed iteration is the future of human civilization, and nobody gets a second try.
Of course, this is an extreme example on one end of the scale. On the other end, it wouldn't matter at all if you're building a small game for yourself as a weekend project with no users to please or societal impacts to consider.
Then there's the speedup. A smaller team can now achieve what a larger team was needed for before. This means less communication overhead, in theory fewer and/or shorter meetings. Which all translates to me spending more time and more energy on thinking about the solution. Which is what matters.
When a person writes code, the person reasons through the code multiple times, step by step, so that they at least don't make stupid or obvious mistakes. This level of close examination is not covered in code review. And arguably this is why we can trust human-written code more than AI-produced code, even though AI can probably write better code at smaller scales.
In contrast, Amazon asked senior engineers to review AI-generated code before merging them. But the purpose of code review was never about capturing all the bugs -- that is the job of test cases, right? Besides, the more senior an engineer is in Amazon, the more meetings they go to, and the less context they have about code. How can they be effective in code review?
Right... Because Humans have never ever accidentally rm -rf'd a production system in the wrong spot?
GitLab has entered the chat.
Or that time an S3 developer purged more than intended, causing internet outages.
All the above are from 2017.
Then there was a deployment goof by Knight Capital Group, they lost 440 million in 45 minutes. The company went poof back in 2012 as a result.
MySpace back in 2019 deleted a ton of prod data from users during a botched migration (oops).
Humans make mistakes. Anyone arrogant enough to think they only write perfect code is delusional.
We are holding AI to higher standards than we hold humans, who are just as fallible if not worse.
QA your software, stop letting developers test changes.
cries in Factorio
So 'writing helper' + 'research helper' + 'task helper' alone is amazing and we are def beyond that.
Even side features like 'do this experiment' where you can burn a ton of tokens to figure things out ... so valuable.
These are cars in the age of horses, it's just a matter of properly characterizing the cars.
Btw: https://playcode.io
factorio ... it's also the most useful engineering homework that's technically a game
The biggest time sink is usually debugging integration issues that only surface after you've connected three services together. Writing the code took 2 hours, figuring out why it doesn't work as expected takes 2 days.
I've found the most impactful investment is in local dev environments that mirror production as closely as possible. Docker Compose with realistic seed data catches more bugs than any amount of unit testing.
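A minimal sketch of the kind of setup described, assuming a Postgres-backed service; the image tag, service names, and seed path are illustrative, not prescriptive:

```yaml
# docker-compose.yml - a local environment mirroring production's shape
services:
  db:
    image: postgres:16          # match the major version you run in prod
    environment:
      POSTGRES_PASSWORD: dev    # throwaway credentials, local only
    volumes:
      # Any *.sql dropped here runs on first startup: load realistic
      # seed data, not three hand-typed rows.
      - ./seed:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"
  app:
    build: .
    environment:
      DATABASE_URL: postgres://postgres:dev@db:5432/postgres
    depends_on:
      - db
```

The point is that the integration bugs mentioned above surface on `docker compose up`, not after a deploy.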
if you ever played factorio this is pretty clear.
Yeah. I keep seeing this over and over with devs who use LLMs. It's painful to watch.
You could write more code, but you also could abstract code more if you know what/how/why.
This same idea abstracts to business: you can perform more service, or you can try to provide more value with the same amount of work.
It is not about the speed of typing code.
It's about the speed of "creating" code: the boilerplate code, the code patterns, the framework-version-specific code, etc.
Do you think the leetcode, brain-teaser, show-me-how-smart-you-are-and-how-much-you-can-memorize interview is optimized to hire the people who can read code at speed and hold architecture (not code, but systems) in their head? How many of your co-workers are set up to, and actually do, use a debugger to step through a change when looking at it?
Most code review was bike shedding before we upped the volume. And from what I have seen it hasn't gotten better.
A lot of these blogs start from a false premise or a lack of imagination.
In this case, the premise that coding isn't a bulk time sink is faulty and unsubstantiated - just measure the ratio of architects to developers (and yes, LLMs can do debugging, so the other common objection doesn't apply either). The claim that time saved on secondary activities doesn't translate into productivity is also false, or at least reductive, because you gain more time to spend on the bottlenecked activity.
Same goes for the terminal, I like that it allows me to use a large directory tree with many assorted file types as if it was a database. I.e. ad hoc, immediate access to search, filter, bulk edits and so on. This is why one of the first things I try to learn in a new language is how to shell out, so I can program against the OS environment through terminal tooling.
Deciding what and how to edit is typically an important bottleneck, as are the feedback loops. It doesn't matter that I can generate a million lines of code unless I can also say with confidence that they are good ones, i.e. that they will make or save money in a commercial organisation. Then the organisation also needs to be informed of what I do; it needs to give me feedback and have a sound basis for making decisions.
Decision making is hard. This is why many bosses suck. They're bad at identifying what they need to make a good decision, and just can't help their underlings figure out how to supply it. I think most developers who have spent time in "BI" would recognise this, and a lot of the rest of us have been in worthless estimation meetings, retrospectives, and whatnot, where we ruminate on a lot of useless information and watch other people do guesswork.
A neat visualisation of what a system actually contains and how it works is likely of much bigger business value than code generated fast. It's not like big SaaS ERP consultancy shops have historically worried much about how quickly the application code is generated, they worry about the interfaces and correctness so that customers or their consultants can make adequate unambiguous decisions with as little friction as possible.
Expedience is the enemy of quality.
Want proof? Everything built under "move fast and break things" from 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.
This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.
Bye for now.
I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.
Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.
Please stop making fools of yourselves and go use Claude for a month before writing that “AI coding ain’t nothing special” post.
Ignorance of what Claude can actually do means your arguments have no standing at all.
“I hate it so much I’ll never use it, but I sure am expert enough on it to tell you what it can’t do, and that humans are faster and better.”
This only works in large companies. In startups this is how you run out of money.
PS. The tech bros tried to do exactly that to millennials, but accidentally shot boomers instead.
LLM usage is usually like building the system for the full parameter set. The speed increase is countered by the fact that there's no understanding of the system, and the simulation space is so large that the user doesn't really bother to explore it. There's been a lot of talk about having a full test suite stand in for simulation, but tests are discrete and only prove specific points in the input space (there are a lot of curves that can pass through a finite set of points).
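The "many curves through a finite set of points" problem can be shown with a toy example: two functions that agree on every point a small discrete test suite happens to check, yet diverge everywhere else.

```python
def f(x):
    return x * x

def g(x):
    # Differs from f by a polynomial that vanishes exactly at the
    # tested points 1, 2, 3 -- so the suite cannot tell them apart.
    return x * x + (x - 1) * (x - 2) * (x - 3)

test_points = [1, 2, 3]  # the "full test suite": discrete points only
suite_passes = all(f(x) == g(x) for x in test_points)  # True for both
diverge_off_suite = f(4) != g(4)  # behaviours differ at untested inputs
```

A passing suite constrains the implementation only at the sampled points; it says nothing about the rest of the input space.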
@dang this bot is spamming
Edit: you mean op?
Bigger tells are the other two green accounts posting multiple top level comments in this topic that are nearly identical. Perhaps the programmer had an off by one error somewhere.
I count at least three top level posters, if not as many as five, in this topic that are LLMs. The real absurdity is devnotes responding to myylogic, who are both LLMs.
The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.
So far there's no obvious change one way or the other, but it hasn't been very long and everyone is in various states of figuring out their new workflows, so I don't think we have enough data for things to average out yet.
We're finding cases where fast coding really does seem to be super helpful though:
* Experimenting with ideas/refactors to see how they'll play out (often the agent can just tell you how it's going to play out)
* Complex tedious replacements (the kind of stuff you can't find/replace because it's contextual)
* Times where the path forward is simple but also a lot of work (tedious stuff)
* Dealing with edge cases after building the happy path
* EDIT: One more huge one I would add: anywhere the thing you're adding is a close analogue of another branch/PR, the agent seems to do great (another "simple but tedious" case)
The single biggest potential productivity gain, though, I think is being able to do something else while the agent is coding: you can go review a PR, and then when you come back, check out what the agent produced.
I would say we've gone from being extremely skeptical to cautiously excited. I think it's far fetched that we'll see any order of magnitude differences, we're hoping for 2x (which would be huge!).
I've already passed through this phase and have given up on it. I'm sure everyone's experience will vary, but I just find it introduces either sufficiently more context switching or detracts sufficiently enough mental engagement that I end up introducing more errors, feeling miserable, or just straight up losing productivity and focus. This type of workflow is only viable for me if the cost of mistakes is low, the surface area for changes is small, or the mental context is the same between activities.
The expectation that this is a serviceable workflow (I fear, and am experiencing) will ultimately just create more compressed timelines for everything, while quality, design, and job satisfaction drop. Yes, the code can be written while I look at a PR, but if it's a non-trivial amount of code or a non-trivial PR (which becomes more frequent as more code generation and larger refactors happen), then I'm just context switching between tasks I need to constantly re-zone in on, which is less gratifying and more volatile in a way that hurts my mind and soul, and money doesn't change that in a meaningful way.
That's not to say I'm not using them or seeing no productivity gains, but I'm not reclaiming that much time from being able to do anything concurrently; it's mostly reclaiming time I'd otherwise have spent procrastinating on something.
> The single biggest potential productivity gain though I think is being able to do something else while the agent is coding, like you can go review a PR and then when you come back check out what the agent produced.
This is where unnerving exhaustion comes from though.
I know myself to be on the side of the craftsmen. It does take tons and tons of time to code, but I didn't get exhausted the way I do with AI. AI is productive, and I am pro-AI. But boy, is it a different kind of work beast.
Ugh, sounds awful. Constantly context switching and juggling multiple tasks is a sure-fire way to burn me out.
The human element in all of this never seems to be discussed. Maybe this will weed out those that are unworthy of the new process but I simply don't want to be "on" all the time. I don't want to be optimized like this.
Ask it to implement a simple HTTP PUT/GET with some authentication, an interface, and logs, for example, while you work out the protocol.
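A minimal sketch of that kind of stub, using only Python's standard library; the bearer token, port, and in-memory storage scheme are all assumptions for illustration:

```python
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("store")

TOKEN = "secret-token"  # hypothetical shared secret
STORE = {}              # path -> stored body (in-memory only)

class Handler(BaseHTTPRequestHandler):
    def _authorized(self):
        return self.headers.get("Authorization") == f"Bearer {TOKEN}"

    def do_PUT(self):
        if not self._authorized():
            self.send_response(401); self.end_headers(); return
        length = int(self.headers.get("Content-Length", 0))
        STORE[self.path] = self.rfile.read(length)
        log.info("PUT %s (%d bytes)", self.path, length)
        self.send_response(204); self.end_headers()

    def do_GET(self):
        if not self._authorized():
            self.send_response(401); self.end_headers(); return
        body = STORE.get(self.path)
        if body is None:
            self.send_response(404); self.end_headers(); return
        log.info("GET %s", self.path)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

It's a throwaway scaffold, but it gives you a working PUT/GET surface with auth and logs to develop against while the real protocol is still being decided.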
no.