Later that week, now that things were working, I profiled the n^2 search. The software controlled a piece of industrial test equipment, and the actual test process took around 4 hours to complete. Using the very worst-case, far-beyond-reasonable data set, leaving the n^2 behavior in would have added something like 6 seconds to that 4-hour runtime.
(Ultimately I fixed it anyway, but because it was easy, not because it mattered.)
"There's a third thing [beyond speed and memory] that you might want to optimize for which is much more important than either of these, which is years of your life required per program implementation." This is of course from the perspective of a solo indie game developer, but it's a good and interesting perspective to consider.
That also means that "just do an array of flat records" is a very sane default even if it seems brutish at first.
Even storing something as simple as an array of complex numbers as a structure of arrays (SoA) rather than an array of structures (AoS) can unlock a lot of optimizations: for example, fewer permutes/shuffles and more arithmetic instructions.
Depending on how many fields you actually need when you iterate over the data, you prevent cache pollution as well.
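For a concrete (if simplified) picture of the difference, here's a minimal C++ sketch; the types and the multiply routine are invented for illustration, not taken from any particular codebase:

```cpp
#include <cstddef>
#include <vector>

// AoS: real and imaginary parts are interleaved, so vectorized code needs
// shuffles/permutes to separate the components before doing arithmetic.
struct ComplexAoS {
    float re;
    float im;
};

// SoA: all real parts are contiguous and all imaginary parts are contiguous,
// so wide loads feed the multiply-adds directly, and a loop that only touches
// one component doesn't drag the other through the cache.
struct ComplexSoA {
    std::vector<float> re;
    std::vector<float> im;
};

// Element-wise complex multiply over the SoA layout; out must be pre-sized
// to match a and b. Plain arithmetic over flat arrays, easy to auto-vectorize.
void multiply(const ComplexSoA& a, const ComplexSoA& b, ComplexSoA& out) {
    const std::size_t n = a.re.size();
    for (std::size_t i = 0; i < n; ++i) {
        out.re[i] = a.re[i] * b.re[i] - a.im[i] * b.im[i];
        out.im[i] = a.re[i] * b.im[i] + a.im[i] * b.re[i];
    }
}
```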
It's a good consideration tbh.
This is me with hash maps.
* Braid was 3 years
* Cave Story was 5 years
* World of Goo was 2 years
* Limbo was about 3 years (but with 8-16 people)
So Braid seems pretty average.
In practice what I see fail most often is not premature optimization but premature abstraction. People build elaborate indirection layers for flexibility they never need, and those layers impose real costs on every future reader of the code. The irony is that abstraction is supposed to manage complexity, but applied prematurely it just creates a different kind.
This matches my experience as well.
Someone here commented once that abstractions should be emergent, not speculative, and I loved that line so much I use it with my team all the time now when I see the craziness starting.
I truly believe this comes from devs who want to feel smart by "architecting" solutions to future problems before those problems have become well defined.
Compare and contrast https://people.mpi-sws.org/~dreyer/tor/papers/wadler.pdf
I'm sure it's super flexible, but exactly the same thing could have been achieved with 8 YAML files, and 60% of the content between them would be identical.
I've found that over time my senses have been honed to more quickly identify the things that are important to study deeply and plan right now, versus the areas where I can skimp and fix things later if problems develop. I don't know if there was a shortcut to honing those senses that didn't involve a lot of pain as I picked apart and reworked oversights.
If you've been monitoring properly, you buy yourself time before it becomes a problem as such, but in my experience most developers who don't anticipate load scaling also don't monitor properly.
I've seen a "senior software engineer with 20 years of industry experience" put code into production that ended up needing 30 minute timeouts for a HTTP response only 2 years after initial deployment. That is not a typo, 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org received because of this engineering philosophy.
There is nothing to suggest you should wait to optimize under pressure, only that you should optimize only after you have measured. Benchmark tests are still best written during the development cycle, not while running hot in production.
Starting with the naive solution helps quickly ensure that your API is sensible and that your testing/benchmarking is in good shape before you start poking at the hard bits where you are much more likely to screw things up, all while offering a baseline score to prove that your optimizations are actually necessary and an improvement.
This is something I tend to consider far, far worse than "AI slop" in practice. I always hated Microsoft Enterprise Library's Data Access Application Block (DAAB). I've literally only ever seen one product supporting multiple database backends that actually necessitated that level of abstraction... but I've seen that library in use well over a dozen times. Just as a specific example.
IMO, abstractions should generally serve to make the rest of the codebase reasonable more often than not... abstractions that hide complexity are useful... abstractions that add complexity much less so.
To address our favorite topic: while I use LLMs to assist on coding tasks a lot, I think they're very weak at this. Claude is much more likely to suggest or expand complex control flow logic on small data types than it is to recognize and implement an opportunity to encapsulate ideas in composable chunks. And I don't buy the idea that this doesn't matter since most code will be produced and consumed by LLMs. The LLMs of today are much more effective on code bases that have already been thoughtfully designed. So are humans. Why would that change?
Having implemented my share of highly complex, high-performance algorithms in the past, the key was always to figure out how to massage the raw data into structures that allow the algorithm to fly. It requires both a decent knowledge of the various algorithm options you have, as well as being flexible enough to see that the data could be presented a different way to get to the same result orders of magnitude faster.
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)
I think rule 5 is often ignored by a lot of distributed services, where you have to make several calls, each with their own HTTP, DB and "security" overhead, when one would do. Then these each end up with caching layers because they are "slow" (in aggregate).
Very few software services built today are doing it right. Most assume they need to scale from day one, pick a technology stack to enable that, and then alter the product to reflect the limitations of the tech stack they picked. Then they wonder why they need to spend millions on sales and marketing to convince people to use the product they've built, and millions on AWS bills to scale it. But then, the core problem was really that their company did not need to exist in the first place and only does because investors insist on cargo-culting the latest hot thing.
This is why software sucks so much today.
I'll add one more modification if you're like me (and apparently many others): go too far with your distribution and pull it back to a sane (i.e. small handful) number of distributed services, hopefully before you get too far down the implementation...
For most of my career, sticking to rule 3 made the most sense. When the CS major would be annoying and talk about big-O they usually forgot n was tiny. But then my job changed. I started working on different things. Suddenly my job started sounding more like a leetcode interview people complain about. Now n really is big and now it really does matter.
Keep in mind that Rob Pike comes from a different era, when programming for 'big iron' looked a lot more like programming for an embedded microcontroller does now.
Of course there is a balance to this, the engineering time to implement both options is an important consideration. But given both algorithms are relatively easy to implement I will default to the one that is faster at large sizes even if it is slower at common sizes. I do suspect that there is an implicit assumption that "fancy" algorithms take longer and are harder to implement. But in many cases both algorithms are in the standard library and just need to be selected. If this post focused on "fancy" in terms of actual time to implement rather than speed for common sizes I would be more inclined to agree with it.
I wrote an article about this a while back: https://kevincox.ca/2023/05/09/less-than-quadratic/
It is important to remember that the art of sw engineering (like all engineering) lives in a balance between all these different requirements; not just in OPTIMIZE BIG-O.
Most people don't need an FFT-based algorithm for multiplying large numbers; Karatsuba's algorithm is fine. But in some domains the difference does matter.
Personally I usually see the opposite effect - people first reach for a too-naive approach and implement some O(n^2) algorithm where it wouldn't have even been more complex to implement something O(n) or O(n log n). And n is almost always small so it works fine, until it blows up spectacularly.
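A typical case of that, sketched below in C++ with invented names: deduplicating via a nested scan is O(n^2), while using a hash set is O(n) expected and no harder to write.

```cpp
#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// O(n^2): every element is scanned against everything kept so far.
std::vector<std::string> dedup_quadratic(const std::vector<std::string>& in) {
    std::vector<std::string> out;
    for (const auto& s : in)
        if (std::find(out.begin(), out.end(), s) == out.end())
            out.push_back(s);
    return out;
}

// O(n) expected: a hash set answers "seen before?" in constant time on average.
std::vector<std::string> dedup_linear(const std::vector<std::string>& in) {
    std::vector<std::string> out;
    std::unordered_set<std::string> seen;
    for (const auto& s : in)
        if (seen.insert(s).second)
            out.push_back(s);
    return out;
}
```

Both are fine while n is small; only one of them is still fine when n isn't.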
I've always been a KISS/DRY person, but over a decade there have been plenty of moments where you're tempted to reach for a fancier database or rewrite something in a trendier stack. What's actually kept things running well at scale is boring, known technology and optimizing only in the places where it actually matters.
We wrote our principles down recently and it basically just reads like Pike's rules in different words: https://www.geocod.io/code-and-coordinates/2025-09-30-develo...
For example, I've often heard "premature optimization is the root of all evil" invoked to support opposite sides of the same argument. Pike's rules are much clearer and harder to interpret creatively.
Also, it's amusing that you don't hear this anymore:
> Rule 5 is often shortened to "write stupid code that uses smart objects".
In context, this clearly means that if you invest enough mental work in designing your data structures, it's easy to write simple code to solve your problem. But interpreted through an OO mindset, this could be seen as encouraging one of the classic noob mistakes of the heyday of OO: believing that your code could be as complex as you wanted, without cost, as long as you hid the complicated bits inside member methods on your objects. I'm guessing that "write stupid code that uses smart objects" was a snappy bit of wisdom in the pre-OO days and was discarded as dangerous when the context of OO created a new and harmful way of interpreting it.
Dean is saying (implicitly) that you can estimate performance, and therefore you can design for speed a priori - without measuring, and, indeed, before there is anything to measure.
I suspect that both authors would agree that there's a happy medium: you absolutely can and should use your knowledge to design for speed, but given an implementation of a reasonable design, you need measurement to "tune" or improve incrementally.
But e.g. if you want to do fast math, you really need to design your pipeline around cache efficiency from the beginning – it's very hard to retrofit. Whereas reducing memory allocations in order to make parallel algorithms faster is something you can usually do after profiling.
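Loop blocking in a matrix multiply is the classic example of a cache decision that shapes the whole pipeline. A rough C++ sketch (the block size is a guess you'd normally tune to the target CPU, and c is assumed zero-initialized):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kBlock = 64;  // illustrative; tune per cache size

// Blocked/tiled n x n matrix multiply over row-major data. The naive triple
// loop strides through b column-wise and misses cache constantly for large n;
// tiling keeps small blocks of a, b, and c hot while they are being reused.
void matmul_tiled(const std::vector<double>& a, const std::vector<double>& b,
                  std::vector<double>& c, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += kBlock)
        for (std::size_t kk = 0; kk < n; kk += kBlock)
            for (std::size_t jj = 0; jj < n; jj += kBlock)
                for (std::size_t i = ii; i < std::min(ii + kBlock, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + kBlock, n); ++k) {
                        const double aik = a[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + kBlock, n); ++j)
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```

Retrofitting that onto code that already passes single elements around is a much bigger job than writing it this way to begin with.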
It's so true. When speccing things I always try to focus on the DDL first, because even the UI falls into place from there. It's also a place I see Claude Opus fail when building things.
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)
The thing is, if you build enough of the same kinds of systems in the same kinds of domains, you can kinda tell where you should optimize ahead of time.
Most of us tend to build the same kinds of systems and usually spend a career or a good chunk of our careers in a given domain. I feel like you can't really be considered a staff/principal if you can't already tell ahead of time where the perf bottleneck will be just on experience and intuition.
I have several times done performance tests before starting a project to confirm it can be made fast enough to be viable, the entire approach can often shift depending on how quickly something can be done.
It's the same thing with programmers who believe in BDUF or disbelieve YAGNI - they design architectures for anticipated futures which do not materialize instead of evolving the architecture retrospectively in line with the future which did materialize.
I think it's a natural human foible. Gambling, for instance, probably wouldn't exist if humans' gut instincts about their ability to predict the future defaulted to realistic.
This is why no matter how many brilliant programmers scream YAGNI, don't do BDUF and don't prematurely optimize, there will always be some comment saying the equivalent of "akshually sometimes you should...", remembering that one time when they metaphorically rolled a double six and anticipated the necessary architecture correctly, when it wasn't even necessary to do so.
These programmers are all hopped up on a different kind of roulette these days...
Don't insist on file-based data ingestion being a wrapper around a json-rpc api just because most similar things are moving that direction; what matters is whether someone has specifically asked for that for this particular system yet.
Not all decisions can be usefully revisited later. Sometimes you really do need to go "what if..." and make sure none of the possibilities will bite too hard. Leaving the pizza cave occasionally and making sure you (have contacts who) have some idea about the direction of the industry you're writing stuff for can help.
> Sure, don't build your system to keep audit trails until after you have questions to answer so that you know what needs to go in those audit trails...what matters is whether someone has specifically asked for that for this particular system yet.
I spent ~15 years in life sciences. You're going to build an audit trail, no matter what. There's no validated system in LS that does not have an audit trail.
It's just like e-commerce; you're going to have a cart and a checkout page. There's no point in calling that a premature optimization. Every e-commerce website has more or less the same set of flows with simply different configuration/parameters/providers.
Audit trails are commonly neglected because somebody didn't ask the right questions, not because somebody didn't try to anticipate the future.
Rules are "kinda" made to be broken. Be free.
I've been sticking to these rules (and will keep sticking to them) for as long as I've been programming, which is about 30 years now.
IMHO, you can feel that a bottleneck is likely to occur, but you definitely can't tell where, when, or how it will actually happen.
Notice my use of the word "Novelty".
I get hired because I'm very good at building specific kinds of systems so I tend to build many variants of the same kinds of systems. They are generally not that different and the ways in which the applications perform are similar.
I do not generally write new algorithms, operating systems, nor programming languages.
I don't think it's that hard to understand the nuance between Pike's advice and what we "mortals" do in our day-to-day to earn a living.
If you'd said Plan 9 and UTF-8 I'd agree with you.
Unless you meant to imply that UNIX isn't cool.
Unix was created by Ken Thompson and Dennis Ritchie at Bell Labs (AT&T) in 1969. Thompson wrote the initial version, and Ritchie later contributed significantly, including developing the C programming language, which Unix was subsequently rewritten in.
contribute < wrote.
His credits are huge, but I think saying he wrote Unix is misattribution.
Credits include: Plan 9 (successor to Unix), the Unix window system, UTF-8 (maybe his most universally impactful contribution), articulating the Unix philosophy, strings/grep/other tools, regular expressions, and the C successor work that ultimately led him to Go.
"Premature optimization is the root of all evil."
First, let's not besmirch the good name of Tony Hoare. The quote is from Donald Knuth, and the missing context is essential.
From his 1974 paper, "Structured Programming with go to Statements":
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
He was talking about using GOTO statements in C. He was talking about making software much harder to reason about in the name of micro-optimizations. He assumed (incorrectly) that we would respect the machines our software runs on.
Multiple generations of programmers have now been raised to believe that brutally inefficient, bloated, and slow software is just fine. There is no limit to the amount of boilerplate and indirection a computer can be forced to execute. There is no ceiling to the crystalline abstractions emerging from these geniuses. There is no amount of time too long for a JVM to spend starting.
I worked at Google many years ago. I have lived the absolute nightmares that evolve from the willful misunderstanding of this quote.
No thank you. Never again.
I have committed these sins more than any other, and I'm mad as hell about it.
I've seen people write some really head-shaking code that makes remote calls in a loop even though the calls don't actually depend on each other. I wonder to what extent they are thinking "don't bother with optimization / speed for now".
But second, I'd remove "optimization" from consideration here. The code you're describing isn't slow, it's bad code that also happens to be slow. Don't write bad code, ever, if you can knowingly avoid it.
It's OK to write good, clear, slow code when correctness and understandability are more important than optimizing that particular bit. It's not OK to write boneheaded code.
(Exception: After you've written the working program, it turns out that you have all the information to make the query once in one part of the broader program, but don't have all the information to make it a second time until flow reaches another, decoupled part of the program. It may be the lesser evil to do that than rearrange the entire thing to pass all the necessary state around, although you're making a deal with the devil and pinky swearing never to add a 3rd call, then a 4th, then a 5th, then...)
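To make the loop point concrete: when the calls genuinely don't depend on each other, overlapping them is a small change. A rough C++ sketch, with a hypothetical fetch_user() standing in for the remote call:

```cpp
#include <future>
#include <string>
#include <vector>

// Hypothetical blocking remote call; a stand-in for an HTTP/RPC client.
std::string fetch_user(int id) { return "user-" + std::to_string(id); }

// Serial: total latency is the sum of every round trip.
std::vector<std::string> fetch_all_serial(const std::vector<int>& ids) {
    std::vector<std::string> out;
    for (int id : ids) out.push_back(fetch_user(id));
    return out;
}

// Concurrent: independent calls overlap, so total latency is roughly the
// slowest single round trip (plus thread overhead), not the sum of them.
std::vector<std::string> fetch_all_concurrent(const std::vector<int>& ids) {
    std::vector<std::future<std::string>> pending;
    pending.reserve(ids.size());
    for (int id : ids)
        pending.push_back(std::async(std::launch::async, fetch_user, id));
    std::vector<std::string> out;
    for (auto& f : pending) out.push_back(f.get());
    return out;
}
```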
> look, I'm sorry, but the rule is simple: if you made something 2x faster, you might have done something smart
> if you made something 100x faster, you definitely just stopped doing something stupid
I think there is just a current of thinking (I've seen it mostly in junior engineers) that you should just ignore any aspect of performance until "later".
"Later" never comes, and all critical performance issues are either ignored, hot-patched externally with caches of varying quality, or papered over with more expensive hardware.
Sometimes, especially when it comes to distributed systems, going from a working solution to a fast working solution requires a full-blown redesign from scratch.
1. I have seen too many "make it work first" projects that ended up as absolute shitshows that were notoriously difficult to do anything with. You can build the software right the first time.
2. The "fast" part is what I think too many people focus on, and in my experience the "THEN" part always misses resource utilization and other types of inefficiency that are not necessarily related to speed. I have seen absolute messes of software that run really fast.
Far too often we generalise a piece of logic that we need in one or two places, making things more complicated for ourselves whenever they inevitably start to differ. And chances are very slim we will actually need it more than twice.
Premature generalisation is the most common mistake that separates a junior developer from an experienced one.
The goal is to have code that corresponds to a coherent conceptual model for whatever you are doing, and the resulting codebase should clearly reflect the design of the system. Once I started thinking about code in these terms, I realized that questions like "DRY vs YAGNI" were not meaningful.
It's not about copying identical code twice, it's about refactoring similar code into a shared function once you have enough examples to be able to see what the shared core is.
I too often see junior engineers (and senior data scientists…) write code procedurally, with giant functions and many, many if statements, presumably because in their brain they’re thinking about “1st I do this if this, 2nd I do that if that, etc”.
Instead, I tend to ask: if I change this code here, will I always also need to change it over there?
Copy-paste is good as long as I'm just repeating patterns. A for loop is a pattern. I use for loops in many places. That doesn't mean I need to somehow abstract out for loops because I'm repeating myself.
But if I have logic that says that button_b.x = button_a.x + button_a.w + padding, then I should make sure that I only write that information down once, so that it stays consistent throughout the program.
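In code that might look something like this (names invented for illustration): the spacing rule is written down exactly once, so every button stays consistent if the rule ever changes.

```cpp
#include <vector>

struct Button {
    float x = 0, y = 0, w = 0, h = 0;
};

constexpr float kPadding = 8.0f;  // the one place the spacing rule lives

// Lay buttons out left to right. Every position is derived from the single
// rule above, instead of hand-writing "previous.x + previous.w + padding"
// at each call site and hoping the copies stay in sync.
void layout_row(std::vector<Button>& buttons, float start_x) {
    float x = start_x;
    for (Button& b : buttons) {
        b.x = x;
        x += b.w + kPadding;
    }
}
```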
Your example is a pretty good one. In most practical applications, you do not want to be setting button x coordinates manually. You want to use a layout manager, like CSS Flexbox or Jetpack Compose's Row or Java Swing's FlowLayout, which takes in a padding and a direction for a collection of elements and automatically figures out where they should be placed. But if you only have one button, this is overkill. If you only have two buttons, this is overkill. If you have 3 buttons, you should start to realize this is the pattern and reach for the right abstraction. If you get to 10 buttons, you'll realize that you need to arrange them in 2D as well and handle how they grow & shrink as you resize the window, and there's a good chance you need a more powerful abstraction.
If you have two copies of some piece of code, and you can reasonably say that if you ever want to update one copy then you will almost certainly want to update the other copy as well, then it's probably a good idea to try to merge them and keep that logic in some centralized place.
On the other hand, if you have three copies of the same piece of code, but they kind of just "happen to" be identical and it's completely plausible that any one of the copies will be modified in the future for reasons which won't affect the other copies, maybe keeping them separate is a good idea.
And of course, it's sometimes worth it to keep two or more different copies which do share the same "reason to change". This is especially clear when you have the copies in different repositories, where making the code "DRY" would mean introducing dependencies between repositories which has its own costs.
An expensive consultant suggested creating a pristine implementation and then writing a rule layer that would modify things as needed, deploying the whole thing as a pile of lambda functions.
I copy-pasted the protocol consumer file per producer and made all the necessary changes with proper documentation and mocks. Got it working quickly, and we could add new ones without affecting the others.
If I'd tried to keep it DRY, I think it would be a leaky mess.
Well, it turns out that 3 of the APIs changed the way they return the data, so instead of separating the logic, someone kept adding a bunch of if statements to a single function in order to avoid repeating the code in multiple places. It was a nightmare to maintain and I ended up completely refactoring it; even though some of the code was repeated, it was much easier to maintain and to accommodate the API changes.
Having identical logic in multiple places (even only 2) is a big contributor to technical debt: if we're searching for something, find it, and fix it once, we often think of the job as done. Staying DRY avoids the "there is still a bug, but I already fixed that" confusion.
Sometimes four or five doesn’t seem too bad, sometimes two is too many
If two pieces of code use the same functionality by coincidence but could possibly evolve differently then don't refactor. Don't even refactor if this happens three, four, or five times. Because even if the code may be identical today the features are not actually identical.
But if you have two uses of code that are actually semantically identical and will assuredly evolve together, then go ahead and refactor to remove the duplication.
Extract a method or object if it's something that feels conceptually like a "thing", even if it has only one use. Most tools for DRYing your code also provide a bit of encapsulation, which does a great job of tidying things up and forces you to think about "should I be letting this out-of-domain stuff leak in here?"
DRY is one step removed from that goal and people use it to make very unmaintainable code because they confuse any repeated code with unmaintainability. (or their theory that some day we might want to repeat this code so we might as well pre-DRY it)
The result is often a horrendous complex mess. Imagine a cookbook with a cookie recipe that resided on 47 different pages (40 of which were pointers on where to find other pointers on where to find other pointers on where to find a step) in attempts to never write the same step twice in the whole book or your planned sequels in a 20 volume set.
The problem is zealots. Zealotry doesn't work for indeterminate things that require judgement, like "code quality" or "maintainability", but a simple rule like "don't repeat yourself" is easy to be zealous about. They take a rule and shut down any argument with "because the rule!"
If you're arguing about code quality and maintainability without one sentence rules then you actually have to make arguments. If the rule is your argument there's no discussion only dogma.
As a result? Easy to distill rules spread fast, breed zealots, and result in bad code.
Oh yes, I'd recommend everyone who uses the phrase reads the rest of the paper to see the kinds of optimisations that Knuth considers justified. For example, optimising memory accesses in quicksort.
Tips like "don't try to write smart code" are often repeated but useless (not to mention that "smart" here means over-engineered or overly complex, not smart).
1. Somebody verifies with the users that speed is actually one of the most burning problems.
2. They profile the code and discover a bottleneck.
3. Somebody says "no, but we shouldn't fix that, that's premature optimization!"
I've heard all sorts of people like OP moan that "this is why pieces of shit like Slack are bloated and slow" (it isn't) when advocating skipping steps 1 and 2, though.
I don't think they misunderstand the rule, either; they just don't agree with it.
Did pike really have to specify explicitly that you have to identify that a problem is a problem before solving it?
Sometimes this is too late.
C++98 introduced `std::set` and `std::map`. The public interface means they are effectively constrained to being red-black trees, with poor cache locality and suboptimal lookup. It took until C++11 for `std::unordered_map` and `std::unordered_set`, which brought with them the adage that you should probably use them unless you know you want ordering. Now, since C++23, we finally have `std::flat_set` and `std::flat_map`, with contiguous memory layouts. 25 years to half-solve an optimisation problem, and naive developers will still be using the wrong thing.
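For anyone who hasn't followed that progression, the three containers look interchangeable at the call site, which is part of the trap: the interface hides completely different memory behavior. (`std::flat_map` needs C++23 and a very recent standard library; treat this as a sketch.)

```cpp
#include <flat_map>      // C++23: keys and values in two contiguous arrays
#include <map>           // C++98: red-black tree, a pointer chase per node
#include <string>
#include <unordered_map> // C++11: hash table, O(1) expected lookup, unordered

int main() {
    std::map<std::string, int> tree;
    std::unordered_map<std::string, int> hashed;
    std::flat_map<std::string, int> flat;

    // Identical call-site code for all three.
    tree["rule"] = 5;
    hashed["rule"] = 5;
    flat["rule"] = 5;
}
```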
As soon as the interface made contact with the public, the opportunity to follow Rob Pike's Rule 5 was lost. If you create something where you're expected to uphold a certain behaviour, you need to consider if the performance of data structures could be a functional constraint.
At this point, the rule becomes cyclical and nonsensical: it's not premature if it's the right time to do it. It's not optimisation if it's functional.
std::set/std::map got into trouble because they chose the algorithm first and then made the data model match. Rule 5 suggests choosing the right data model first, indicating that it is most important.
When building interfaces you are bound to make mistakes which end users will end up depending on (not just regarding optimization).
The correct lesson to learn from this is not "just don't make mistakes" but to try to minimize migration costs, to prevent these mistakes from getting tightly locked in, and to try to detect these mistakes earlier in the design process with more coordinated experimentation.
C++ seems pretty bad at both. It's not unusual, either - migration and upgrade paths are often the most neglected part of a product.
I wish Knuth would come out and publicly chastise the many decades of abuse this quote has enabled.
I am almost certain that people building bloated software are not willfully misunderstanding this quote; it's likely they never heard of it. Let's not ignore the relevance of this half-century-old advice just because many programmers do not care about efficiency or do not understand how computers work. Premature optimization is exactly that: the fact that it is premature makes it wrong, regardless of whether it's about GOTO statements in the 70s or some modern equivalent where, in the name of craft or fun, people make their apps a lot more complex than they should be. I wouldn't be surprised if some of the brutally inefficient code you mention was so because people optimized prematurely for web scale and their app never needed those abstractions and extra components. The advice applies both to hackers doing micro-optimizations and to architecture astronauts dreaming too big, IMHO.
And then of course later is too late; you can't optimise most Python.
Profiling never achieved its place in most developers’ core loop the way that compiling, linting, or unit testing did.
How many real CI/CD pipelines spit out flame graphs alongside test results?
I find 98% of the time that users are clamoring to get something implemented or fixed that isn't speed-related, so I work on that instead.
When I do drill down what I tend to find in the flame graphs is that your scope for making performance improvements a user will actually notice is bottlenecked primarily by I/O not by code efficiency.
Meanwhile my less experienced coworkers will spot a nested loop that will never take more than a couple of milliseconds and demand it be "optimised".
Also, the rule (quote?) says "speed hack". I don't think he is saying to ignore runtime complexity totally, just don't go crazy with really complex stuff until you are sure you need it.
People don't ask for software to be fast and usable because it obviously should be. Why would they ask? They might complain when it's unusably slow. But that doesn't mean they don't want it to be fast.
It's true that premature optimization (that is, optimization before you've measured the software and determined whether the optimization is going to make any real-world difference) is bad.
The reality, though, is that most programmers aren't grappling with whether their optimizations are premature, they're grappling with whether to optimize at all. At most companies, once the code works, it ships. There's little, if any, time given for an extra "optimization" pass.
It's only after customers start complaining about performance (or higher-ups start complaining about compute costs) that programmers are given any time to go through and optimize things. By which point refactoring the code is much harder than it would've been originally.
Maybe I’ve had an unrepresentative career, but I’ve never worked anywhere where there’s much time to fiddle with performance optimisations, let alone those that make the code/system significantly harder to understand. I expect that’s true of most people working in mainstream tech companies of the last twenty years or so. And so that quote is basically never applicable.
Actually, I do not believe devs are to blame, or that CS education is to blame; I believe it's an unfortunate law of society that complexity piles up faster than we can manage it. Of course the economic system rewards shipping today at the expense of tomorrow's maintenance, and also rewards splitting systems into seemingly independent subsystems that are simpler in isolation but result in more complex machinery overall (cloud, microservices...).
I'm even wondering if it's not a more fundamental law than that, because adding complexity is always simpler than removing it, right? Kind of a second law of thermodynamics for code.
(AI will probably make this worse as well, having a bloat tendency all of its own)
It's just complaining about others making a different value judgement for what is a worthwhile optimization. Hiding behind the 'true meaning of the quote' is pointless.
I believe people don't think about Knuth when they choose to write app in Electron. Some other forces might be at play here.
Devs are obsessed with introducing functional-style constructs everywhere, just for the sake of it. FP is great for some classes of software, but baseline crufty for anything that requires responsiveness (front-ends basically), let alone anything at real interactive speeds (games, geo-software, ...)
The "premature optimization" quote is then always used as a way to ignore that entire code paths will be spamming the heap with hundreds of thousands of temporary junk, useless lexical scopes, and so forth. Writing it lean the first time is never considered, because of adherence to these fetishes (mutability is bad, oo is bad, loops lead to off-by-one errors, ...). It's absolutely exhausting to have these conversations, it's always starting from the ground up and these quotes like "premature optimization is the root of all evil" are only used as invocations to ward of criticism.
> He was talking about using GOTO statements in C.
I don’t think he was talking about C. That paper is from December 1974, and (early) C is from 1972, and “The UNIX Time-Sharing System” (https://dsf.berkeley.edu/cs262/unix.pdf) is from July 1974, so time wise, he could have known C, but AFAICT that paper doesn’t mention C, and the examples are PL/I or (what to me looks like) pseudocode, using ‘:=’ for assignment, ‘if…fi’ and ‘while…repeat’ for block, ‘go to’ and not C’s ‘goto’, etc.
100%
While it might not be necessary to spend hours fine-tuning every function, code optimization should be part of the mindset of every programmer, no matter what they are coding.
How many fewer data centers would we need if all that software running in them was more efficient?
https://didgets.substack.com/p/finding-and-fixing-a-billion-...
That's not to knock the engineer for their shortcomings. Even the most experienced and educated engineer might find themselves outside their comfort zone, implementing code without the ability to anticipate the performance characteristics under the hood. A mental model of computation can only go so far.
Articulated more succinctly, one might say "Use the profiler, and use it often."
If you don't know enough to pick good starting points you probably won't know enough to optimize well. So don't optimize prematurely.
If you are experienced enough to pick good starting points, still don't optimize prematurely.
If you see a bad starting point picked by someone else, by all means, point it out if it will be problematic now or in the foreseeable future, because that's a bug.
While you were seeing those problems with Java at Google, I was seeing them with Python.
So many levels of indirection. Holy cow! So many unneeded superclasses and mixins! You can’t reason about code if the indirection is deeper than the human mind can grasp.
There was also a belief that list comprehensions were magically better somehow and would expand to 10-line monstrosities of unreadable code when a nested for loop would have been more readable and just as fast but because list comprehensions were fetishized nobody would stop at their natural readability limits. The result was like reading the run-on sentence you just suffered through.
I can write bubble sort, it is simple and I have confidence it will work. I wrote quicksort for class once - I turned in something that mostly worked but there were bugs I couldn't fix in time (but I could if I spent more time - I think...)
However, writing bubble sort is wrong, because any good language has a sort in the standard library (likely timsort or something other than quicksort in the real world).
I think that's due to people doing premature optimization! If people took the quote to heart, they would be less inclined to increasing the amount of boilerplate and indirection.
Of course, today this has changed. You can have multiple agents micro-optimizing everything and have your pie and eat it too.
Sorry, folks, but that's just an excuse to make dumb choices. Premature _micro_optimization is the root of all evil.
EDIT: It was great training for when I started working on browser performance, though!
Like you, I've seen people produce a lot of slow code, but it's mostly been from people who would have a really hard time writing faster code that's less wrong.
I hate slow software, but I'd pick it anytime over bogus software. Also, generally, it's easier to fix performance problems than incorrect behavior, especially so when the error has created data that's stored somewhere we might not have access to. But even more so, when the harm has reached the real world.
We can and should have both.
This is a fraud, made up by midwits to justify their leaning towers of abstraction.
And I'd agree that "simple secure" is better than "complex secure" but you're kind of side-stepping what I said, what about "not secure at all", wouldn't that lead to simpler code? Usually does for me, especially if you have to pile it on top of something that is already not so secure, but even when taking it into account when designing from ground up.
"Do less and things get faster" is a very wide class of fixes. e.g. you could do tons of per-packet decision making millions of times per second for routing and security policies, or you could realize the answer changes slowly in time, and move that to upfront work, separating your control vs data processing, and generally making it easier to understand. Or you could build your logic into your addressing/subnets and turn it into a simple mask and small table lookup. So your entire logic gets boiled down to a table (incidentally why I can't understand why people say ipv6 is complex. Try using ipv4! Having more bits for addresses is awesome!).
Sort of. But if you keep the software simple, then it is easier to optimize the bottlenecks. You don't really need to make everything complicated to make it faster, just a few well selected places need to be refactored.
Same. I, too, am sick of bloated code. But I use the quote as a reminder to myself: "look, the fact that you could spend the rest of the workday making this function run in linear instead of quadratic time doesn't mean you should – you have so many other tasks to tackle that it's better that you leave the suboptimal-but-obviously-correct implementation of this one little piece as-is for now, and return to it later if you need to".
Yes, software is bloated, full of useless abstractions and bad design. You kids (well, anyone programming post-1980, so myself included) should be ashamed. Also, let's not forget that those abstractions helped us solve problems, and our friends in Silicon Valley (ok, that no longer makes sense, but imagine if SillyValley still just made HW) covered our mistakes. But yeah, we write crap a lot of the time.
But as other folks have said, it doesn't mean "don't optimize."
I've always used my own version of the phrase, which is: "Don't be stupid." As in, don't do dumb, expensive things unless you need to for a prototype. Don't start with a design that is far from optimal and slow. After profiling, fix the slow things. I'm pretty sure that's what most folks do on some level.
I don't think you can blame this phrase if people are going to drop an entire word out of an eight word sentence. The very first word, no less.
How do you know which code was written with this quote in mind?
The average university CS student in USA (and India I presume) is taught to "hack it" at any cost, and we see the results.
This is probably the worst use of the word "shortened" ever, and it should be more like "mutilated"?
I get where he's coming from, but I've seen people get this very wrong in practice. They use an algorithm that's indeed faster for small n, which doesn't matter because anything was going to be fast enough for small n, meanwhile their algorithm is so slow for large n that it ends up becoming a production crisis just a year later. They prematurely optimized after all, but for an n that did not need optimization, while prematurely pessimizing for an n that ultimately did need optimization.
> Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
My #1 programming principle would be phrased using a concept from John Boyd: make your OODA loops fast. In software this can often mean simple things like "make compile time fast" or "make sure you can detect errors quickly".
I'm a big fan of Data Oriented Design. Once you conceptualize how data is stored and transformed in your program, it just has to be reflected in data structures that make it possible.
Modern design approaches tend to focus on choosing the right abstraction, like columnar/row layout, caching, etc. They mostly fail to work with the data optimally. Optimal in this case means getting the most out of the underlying hardware capabilities: for example, reading large and preferably contiguous blocks of data from magnetic storage, parallel data processing, keeping intermediate results in CPU caches, and utilizing all physical SSD queues.
It's all use-case and priority-specific, and I think the more varied your experience and the more tools you have in the tool belt, the better positioned you are to bring the right solution to bear. Of course, then you think you have the right solution in mind (let's say using partitions in Postgres for something) but you find the ORM your service uses doesn't support it, and what is "best" becomes not only problem-specific but also tool-specific. Finally, even if you have the best solution and your existing ecosystem supports it but the rest of the engineering staff is unfamiliar with it, it may again no longer be "best".
This ladder of problem-fit, ecosystem-fit, staffing-fit is something I have grappled with in my career.
LLMs are only so-so at any of the above (even when including the agent as "staff".)
Of course, with experience you start to feel when the straightforward suboptimal code will cause massive performance issues. In that case it's critical to take action up front to avoid the mess. It's called software architecture, I guess.
If you can't tell in advance what is performance critical, then consider everything to be performance critical.
I would then go against rule #3 "Fancy algorithms are slow when n is small, and n is usually small". n is usually small, except when it isn't, and as per rule #1, you may not know that ahead of time. Assuming n is going to be small is how you get accidentally quadratic behavior, such as the infamous GTA bug. So, assume n is going to be big unless you are sure it won't be. Understand that your users may use your software in ways you don't expect.
Note that if you really want high performance, you should properly characterize your "n" so that you can use the appropriate technique, it is hard because you need to know all your use cases and their implications in advance. Assuming n will be big is the easy way!
About rule #4: fancy algorithms are often not harder to implement; most of the time it means using the right library.
About rule #2 (measure), yes, you absolutely should, but it doesn't mean you shouldn't consider performance before you measure. It would be like saying that you shouldn't worry about introducing bugs before testing. You should do your best to make your code fast and correct before you start measuring and testing.
What I agree with is that you shouldn't introduce speed hacks unless you know what you are doing. Most performance comes from giving it consideration at every step: avoiding a copy here, using a hash map instead of a linear search there, etc. If you have to resort to a hack, it may be because you didn't consider performance early enough. For example, if you took care of making a function fast enough, you may not have to cache its results later on.
As for #5, I agree completely. Data is the most important thing. It applies to performance too, especially on modern hardware. To give you a very simplified idea, RAM access is about 100x slower than running a CPU instruction, which means you can get massive speed improvements by making your memory footprint smaller and using cache-friendly data structures.
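A toy illustration of that last point (the exact ratios vary wildly by machine, so treat it as something to benchmark rather than a claim): the two functions below have the same big-O, but one walks contiguous cache lines and the other chases pointers scattered across the heap.

```cpp
#include <cstdint>
#include <list>
#include <numeric>
#include <vector>

// Contiguous storage: the prefetcher streams whole cache lines of values.
std::uint64_t sum_vector(const std::vector<std::uint32_t>& v) {
    return std::accumulate(v.begin(), v.end(), std::uint64_t{0});
}

// Node-based storage: each element is a separate allocation, so the traversal
// stalls on memory latency for nodes that aren't already in cache.
std::uint64_t sum_list(const std::list<std::uint32_t>& l) {
    return std::accumulate(l.begin(), l.end(), std::uint64_t{0});
}
```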
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
Always preferred Perlis' version, that might be slightly over-used in functional programming to justify all kinds of hijinks, but with some nuance works out really well in practice:
> 9. It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
> I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
-- Linus Torvalds
> -- Linus Torvalds
What about programmers
- for whom the code is a data structure?
- who formulate their data structures in a way (e.g. in a very powerful type system) such that all the data structures are code?
- who invent a completely novel way of thinking about computer programs such that in this paradigm both code and data structures are just trivial special cases of some mind-blowing concept ζ of which there exist other special cases that are useful to write powerful programs, but these special cases are completely alien from anything that could be called "code" or "data (structures)", i.e. these programmers don't think/worry about code or data structures, but about ζ?
This kind of exploration can be a really positive use case for AI I think, like show me a sketch of this design vs that design and let's compare them together.
My recommendation is to truly learn a functional language and apply it to a real world product. Then you’ll learn how to think about data, in its pure state, and how it is transformed to get from point A to point B. These lessons will make for much cleaner design that will be applicable to imperative languages as well.
Or learn C where you do not have the luxury of using high-level crutches.
Not sure if SoTA codegen models are capable of navigating the design space and coming up with optimal solutions. For something like cybersecurity, maybe specialized models (like DeepMind's Sec-Gemini), if there are any, might?
I reckon, a programmer who already has learnt about / explored the design space, will be able to prompt more pointedly and evaluate the output qualitatively.
> sometimes a barrier to getting started for me
Plenty great books on the topic (:
Algorithms + Data Structures = Programs (1976), https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures...
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)
This is what I've observed with using AI on relatively small (~1000 line) programs. When I add a requirement that requires a different data structure, Claude will happily move to the new optimal data structure, and rewrite literally everything accordingly.
I've heard that it gets dicier when you have source files that are 30K-40K lines and programs that are in the million+ line range. My reports have reported that Gemini falls down badly in this case, because the source file blows the context window. But even then, they've also reported that you can make progress by asking Gemini to come up with the new design, and then asking it to come up with a list of modules that depend upon the old structure, and then asking it to write a shim layer module-by-module to have the old code use the new data structure, and then have it replace the old data structure with the new one, and then have it remove the shim layer and rewrite the code of each module to natively use the new data structure. Basically, babysit it through the same refactoring that an experienced programmer would use to do a large-scale refactoring in a million+ line codebase, but have the AI rewrite modules in 5 minutes that would take a programmer 5 weeks.
You don't need to be able to pass a leet code interview, but you should know about big O complexity, you should be able to work out if a linked list is better than an array, you should be able to program a trie, and you should be at least aware of concepts like cache coherence / locality. You don't need to be an expert, but these are realities of the way software and hardware work. They're also not super complex to gain a working knowledge of, and various LLMs are probably a really good way to gain that knowledge.
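A trie really is a working-knowledge-sized exercise; a minimal lowercase-ASCII version with insert and exact lookup fits in a few dozen lines of C++ (this sketch assumes inputs are 'a'..'z' only):

```cpp
#include <array>
#include <memory>
#include <string>

class Trie {
    struct Node {
        std::array<std::unique_ptr<Node>, 26> next{};
        bool terminal = false;  // true if a word ends at this node
    };
    Node root_;

public:
    void insert(const std::string& word) {
        Node* n = &root_;
        for (char c : word) {
            auto& slot = n->next[c - 'a'];
            if (!slot) slot = std::make_unique<Node>();
            n = slot.get();
        }
        n->terminal = true;
    }

    bool contains(const std::string& word) const {
        const Node* n = &root_;
        for (char c : word) {
            const auto& slot = n->next[c - 'a'];
            if (!slot) return false;
            n = slot.get();
        }
        return n->terminal;
    }
};
```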
Bill Gates, for example, always advocated for thinking through the entire program design and data structures before writing any code, emphasizing that structure is crucial to success.
While developing Altair BASIC, his choice of data structures and algorithms enabled him to fit the code into just 4 kilobytes.
This is the worst sort of documentation; technically true but quite unenlightening. It is, in the parlance of the Fred Brooks quote mentioned in a sibling comment, neither the "flowchart" nor the "tables"; it is simply a brute enumeration of code.
To which the fix is, ask for the right thing. Ask for it to analyze the key data structures (tables) and provide you the flow through the program (the flowchart). It'll do it no problem. Might be inaccurate, as is a hazard with all documentation, but it makes as good a try at this style of documentation as "conventional" documentation.
Honestly one of the biggest problems I have with AI coding and documentation is just that the training set is filled to the brim with mediocrity and the defaults are inferior like this on numerous fronts. Also relevant to this conversation is that AI tends to code the same way it documents and it won't have either clear flow charts or tables unless you carefully prompt for them. It's pretty good at doing it when you ask, but if you don't ask you're gonna get a mess.
(And I find, at least in my contexts, using opus, you can't seem to prompt it to "use good data structures" in advance, it just writes scripting code like it always does and like that part of the prompt wasn't there. You pretty much have to come back in after its first cut and tell it what data structures to create. Then it's really good at the rest. YMMV, as is the way of AI.)
My interpretation of his point of view is that what you need is a process/interpreter/live object that 'explains' the data.
https://news.ycombinator.com/item?id=11945722
EDIT: He writes more about it in Quora. In brief, he says it is 'meaning', not 'data' that is central to programming.
One part of it has interesting new resonance in the era of agentic LLMs:
alankay, June 21, 2016:
This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen. Think about this as one of the consequences of massive scaling ...
Nowadays, rather than the methods associated with data objects, we are dealing with "context" and "prompts".
I don't take it like that. A map could be the right data structure for something people typically reach for classes to do, and then you get a whole bunch of functions that can already operate on a map-like thing for free.
If you take a look at the standard library and the data structures of Clojure, you'll see this approach taken to a fairly extreme degree.
> Busywork code is not important. Data is important. And data is not difficult. It's only data. If you have too much, filter it. If it's not what you want, map it. Focus on the data; leave the busywork behind.
If I have learned one thing in my 30-40 years spent writing code, it is this.
You never know how requirements are going to change over the next 5 years, and pure structures are always the most flexible to work with.
You still have to worry about someone using kg when you use g, but you avoid a large class of problems and make your logic easier.
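One cheap way to shrink even that residual worry is a strong unit type, so "kg where g was expected" becomes a compile error rather than a data bug. A minimal C++ sketch with invented names:

```cpp
#include <iostream>

struct Grams {
    double value;
};

constexpr Grams operator+(Grams a, Grams b) { return {a.value + b.value}; }
constexpr Grams from_kilograms(double kg) { return {kg * 1000.0}; }

int main() {
    Grams net{250.0};
    Grams tare = from_kilograms(0.05);   // the conversion is explicit
    std::cout << (net + tare).value << " g\n";
    // Grams oops = net + 0.05;          // does not compile: raw doubles rejected
}
```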
> 2. Functions delay binding; data structures induce binding. Moral: Structure data late in the programming process.
https://ocw.mit.edu/courses/6-001-structure-and-interpretati...
which I found very helpful in (finally) managing to get through that entire text (and do all the exercises).
What I am getting at is that when one has such gigantic data structure there is no separation of concerns.
Anytime one has access to a database, one has access to one large global data structure that one can access from anywhere in a program.
This same concept goes for one's global state in one's game if one is making a game.
The key difficulty is that identifying what these are is far from obvious upfront, and so often an index appears adjacent to a table that represents what the table should have been in the first place.
This is why I fundamentally find SQL too conservative and outdated. There are obvious patterns for cross-cutting concerns that would mitigate things like this but enterprise SQL products like Oracle and MS are awful at providing ways to do these reusable cross-cutting concerns consistently.
> Good programmers worry about data structures and their relationships.
> -- Linus Torvalds
I was specifically thinking about the "relationship" issues. The worst messes to fix are the ones where the programmer didn't consider how to relate the objects together - which relationships need to be direct PK bindings, which can be indirect, which things have to be cached vs calculated live, which things are the cache (vs the master copy), what the cardinality of each relationship is, which relationships are semantically ownerships vs peers, which data is part of the system itself vs configuration data vs live, how you handle changes to the data (event sourcing vs changelogging vs append-only vs yolo update), etc.
Not quite "data structures" I admit but absolutely thinking hard about the relationship between all the data you have.
SQL doesn't frame all of these questions out for you but it's good getting you to start thinking about them in a way you might not otherwise.
That's great
When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.Pike is right.
“If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident.”
And:
“Use simple algorithms as well as simple data structures.”
A data structure general enough to solve enough problems to be meaningful will either be poorly suited to some problems or have complex algorithms for those problems, or both.
There are reasons we don’t all use graph databases or triple stores, and rely on abstractions over our byte arrays.
Let's say you're working for the DMV on a program for driver's licenses. The idea is to use one structure for driver's license data, as opposed to using one structure for new driver's licenses, a different one for renewals, and yet a third for expired ones, and a fourth one for name changes.
It is not saying that you should use byte arrays for driver's license records, so that you can use the same data structure for driver's license data and missile tracks. Generalize within your program, not across all possible programs running on all computers.
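A sketch of that distinction, with field names invented for illustration: one record type with the lifecycle expressed as data, rather than four parallel structures that drift apart.

```cpp
#include <string>

enum class LicenseStatus { New, Renewed, Expired, NameChanged };

// One structure for driver's license data; the cases that might have become
// separate NewLicense / Renewal / Expired / NameChange types are just values.
struct DriversLicense {
    std::string number;
    std::string holder_name;
    std::string previous_name;  // empty unless status == NameChanged
    int expiry_year = 0;
    LicenseStatus status = LicenseStatus::New;
};

// Code that works on licenses only has to be written once.
bool needs_renewal_notice(const DriversLicense& dl, int current_year) {
    return dl.status != LicenseStatus::Expired && dl.expiry_year <= current_year;
}
```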
I learnt about rule-5 through experience before I had heard it was a rule.
I used to do tech due diligence for acquisitions of companies. I had a very short time, about a day. I hit upon a great time-saving idea: asking them to show their DB schema and explain it. It turned out to be surprisingly effective. Once I understood the schema, most of the architecture explained itself.
Now I apply the same principle while designing a system.
edit: s/data/data structure/
Knuth later attributed it to Hoare, but Hoare said he had no recollection of it and suggested it might have been Dijkstra.
Rule 5 aged the best. "Data dominates" is the lesson every senior engineer eventually learns the hard way.
Number 5 is timeless and relevant at all scales, especially as code iterations have gotten faster and faster, data is all the more relevant. Numbers 4 and 3 have shifted a bit since data sizes and performance have ballooned, algorithm overhead isn't quite as big a concern, but the simplicity argument is relevant as ever. Numbers 2 and 1 while still true (Amdahl's law is a mathematical truth after all), are also clearly a product of their time and the hard constraints programmers had to deal with at the time as well as the shallowness of the stack. Still good wisdom, though I think on the whole the majority of programmers are less concerned about performance than they should be, especially compared to 50 years ago.
There are a lot of systems where useless work and other inefficiencies are spread all over the place. Even though I think garbage collection is underrated (e.g. Rustifarians will agree with me in 15 years) it's a good example because of the nonlocality that profilers miss or misunderstand.
You can make great prop bets around "I'll rewrite your Array-of-Structures code to Structure-of-Arrays code and it will get much faster"
https://en.wikipedia.org/wiki/AoS_and_SoA
because SoA is usually much more cache friendly, and AoS makes the memory hierarchy perform poorly in a way profilers can't see. The more time somebody spends looking at profilers, and the more they quote Rule 1, the more they get blindsided by it.
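A toy Go illustration of why the bet tends to pay off (my own example, not the commenter's): when you only need one field per iteration, the SoA layout loads just that field's contiguous slice, while the AoS layout drags every whole record through the cache.

    // Hypothetical AoS vs SoA sketch: both compute the same sum of one field.
    package main

    import "fmt"

    // Array of Structures: iterating pulls all 32 bytes of each record into cache.
    type Particle struct {
        X, Y, Z, Mass float64
    }

    func totalMassAoS(ps []Particle) float64 {
        var sum float64
        for _, p := range ps {
            sum += p.Mass
        }
        return sum
    }

    // Structure of Arrays: the masses are contiguous, so only they are loaded.
    type Particles struct {
        X, Y, Z, Mass []float64
    }

    func totalMassSoA(ps Particles) float64 {
        var sum float64
        for _, m := range ps.Mass {
            sum += m
        }
        return sum
    }

    func main() {
        aos := []Particle{{1, 2, 3, 4}, {5, 6, 7, 8}}
        soa := Particles{X: []float64{1, 5}, Y: []float64{2, 6}, Z: []float64{3, 7}, Mass: []float64{4, 8}}
        fmt.Println(totalMassAoS(aos), totalMassSoA(soa)) // 12 12
    }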
On #5, I think most people tend to just lean on relational databases for a lot of data access patterns. It helps to have some fundamental understanding of where/how/why you can optimize databases, as well as where it makes sense to consider non-relational (NoSQL) databases too. A poorly structured database can crawl under a relatively small number of users.
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=38097031 - Nov 2023 (259 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=24135189 - Aug 2020 (323 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15776124 - Nov 2017 (18 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=15265356 - Sept 2017 (112 comments)
Rob Pike’s Rules of Programming (1989) - https://news.ycombinator.com/item?id=7994102 - July 2014 (96 comments)
If the next generation doesn't even want to learn a programming language, they're definitely not going to learn how to write _clean_ code.
Maybe I'm just overly pessimistic about junior engineers at the moment because of that conversation lol.
Anyway, I've found that if you want to get a coworker into reading technical books, the best way is with a novel or three. I've had good success with The Martian. The Phoenix Project might work too. Slip them fun books until they've built a habit and then drop The Mythical Man Month on them. :)
Random side note: my teen son has grown up with iPhone-level tech, yet likes and finds my old Casio F91 watch very interesting. I still have faith :)
PS: Not that we don't have people working at all levels of the stack today, just that each level of the stack, like the discussion going on today about Python's JIT compiler, will be handled by a few (dozen or hundred) specialists. Everyone else can work with prompts.
I’m hoping the situation with LLMs will be the same. Teach the basics and allow people to fall back on them for at least the simpler tasks for their lifetimes. I know people, by the way, who can still use an abacus and a slide rule. I can too, but with a refresher beforehand because I seldom use those.
There's several orders of magnitude less discussion available about selecting data structures for problem domains than there is code.
If the underlying information is implicit in the high volume of code available, then maybe the models are good at it, especially when driven by devs who can/will prompt in that direction. And that assumption seems likely to be related to how much code was written by devs who focus on data.
I believe that's what most algorithms books are about. And most OS books talk more about data than algorithms. And if you watch livestreams or read books on practical projects, you'll see that a lot of refactoring is first selecting a data structure, then adapting the code around it. DDD is about data structures.
Based on everything public, Pike is deeply hostile to generative AI in general:
- The Christmas 2025 incident (https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/)
- he's labeled GenAI as nuclear waste (https://www.webpronews.com/rob-pike-labels-generative-ai-nuc...)
- ideologically, he's spent his career chasing complexity reduction, advocating for code sobriety, resource efficiency, and clarity of thought. Large, opaque, energy-intensive LLMs represent the antithesis.
The whole article is an AI hallucination. It refers to the same "Christmas 2025 incident". The internet is dead for real.
Even knowing with 100% certainty that performance will be subpar, requirements change often enough that it's often not worth the cost of adding architectural complexity too early.
I think there is value in attempting to do something the "wrong way" on purpose to some extent. I have walked into many situations where I was beyond convinced that the performance of something would suck only to be corrected harshly by the realities of modern computer systems.
Framing things as "yes, I know the performance is definitely not ideal in this iteration" puts that monkey in a proper cage until the next time around. If you don't frame it this way up front, you might be constantly baited into chasing the performance monkey around. Its taunts can be really difficult to ignore.
That said, management did not quite understand. They thought that I should have known about the bottleneck (actually I did, but I was told not to prematurely optimize).
I ended up writing the program three times; the final solution was honestly beautiful.
Management was not happy. The customer was happy.
What I should have done was point to Rob's third rule (either in my comment or in the resulting threads)
[0] https://news.ycombinator.com/threads?id=awesome_dude&next=47...
This axiom has caused far and away more damage to software development than premature optimization ever will.
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Rule 2 follows rule 1.
Rules 3 & 4 are a variation of Keep It Simple, Stupid (KISS) from the 1960s.
... and... now I feel stupid, because I read the last part, which is summarizing it in the same way.
I believe that's why Golang is a very simple but powerful language.
Funny handwritten html artifact though:
<title> <h1>Rob Pike's 5 Rules of Programming</h1> </title>
1(a) Torvalds: "Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
1(b) Pike Rule 5: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."
— versus —
2. Perlis 2: "Functions delay binding; data structures induce binding. Moral: Structure data late in the programming process."
---
Ignorant as I am, I read these to advise that I ought to put data structures centrally, first, foremost — but not until the end of the programming process.
When you know what and how to build, commit to good data structures. Do the types, structs, classes, tries, CRDTs, XML, Protobuf, Parquet and whatnot where appropriate. Instrument your program. The efficiency of the final product counts.
So not really a contradiction, just Perlis talking about the functional shell and Torvalds/Pike talking about the imperative core.
Good structure comes from exploring until you understand the problem well AND THEN letting data structure dominate.
Thankfully newer languages like Nim, Odin, and Swift lean hard into value semantics. They drastically reduce the cost of focusing on data structures and writing obvious algorithms. Then, when bottlenecks appear, you can choose to opt into fine-tuning.
I think that Rob Pike was far more of a wordcel than a shape rotator for a famous computer scientist (a group that historically leaned very much to the shape rotator side).
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
If you want to solve a problem, it's natural to think about the logic flow and the code that implements it first, with the data structures as an afterthought, but Rule 5 is spot on.
Computers are machines that transform an input to an output.
It is?
How can you conceive of a precise idea of how to solve a problem without a similarly precise idea of how you intend to represent the information fundamental to it? They are inseparable.
Do you start with the logical task first and structure the data second, or do you actually think about the data structures first?
Let's say I have an optimisation problem: I have a simple scoring function, and I just want to find the solution with the best score. Starting with the logic:
for all solutions, score, keep if max.
Simple, eh? The problem is that it's a combinatorial solution space. The key to solving this before the entropic death of the universe is to think about the structure of the solution space.
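The comment doesn't name a specific problem, so as a stand-in, here is a Go sketch using 0/1 knapsack: the naive "for all solutions, score, keep if max" loop enumerates 2^n subsets, while exploiting the structure of the solution space (a DP table indexed by remaining capacity) makes it tractable.

    // Hypothetical stand-in (0/1 knapsack): the naive search scores every subset;
    // the DP below exploits the structure of the solution space instead.
    package main

    import "fmt"

    // bestScore returns the maximum total value achievable with total weight <= capacity.
    func bestScore(weights, values []int, capacity int) int {
        best := make([]int, capacity+1) // best[w] = best value reachable with weight budget w
        for i := range weights {
            for w := capacity; w >= weights[i]; w-- {
                if v := best[w-weights[i]] + values[i]; v > best[w] {
                    best[w] = v
                }
            }
        }
        return best[capacity]
    }

    func main() {
        weights := []int{3, 4, 5}
        values := []int{4, 5, 6}
        fmt.Println(bestScore(weights, values, 7)) // 9: take the first two items
    }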
Neither data structures nor algorithms, but entities and tasks, from the user POV, one level up from any kind of implementation detail.
There's no point trying to do something if you have no idea what you're doing, or why.
When you know the what and why you can start worrying about the how.
If this is your 50th CRUD app, you can probably skip this stage. But if it's greenfield development, no.
eg sort a list.
That's why a collection of "obvious" things formulated in a convincing way by a person with big street cred is still useful and worth elevating.
An LLM's work will never be reproducible by design.