Really like the article, I think it is awesome, and I strongly believe AI for coding is here to stay, but I also believe that we still need a strong understanding of why we are building things and what they should look like.
An example being the common attitude that [advanced tech] is just a math problem to be solved, and not a process that needs to play itself out in the real world, interacting with it and learning, then integrating those lessons over time.
Another way to put this is: experience is undervalued, and knowledge is overvalued. Probably because experience isn’t fungible and therefore cannot be quantified as easily by market systems.
1. Probably not his original idea, and now that I think about it this is kind of more Hegelian. I’m not familiar enough with Hegel to reference him though.
I've been using AI to help me write it and I've come to a couple conclusions:
- AI can make working PoCs incredibly quickly
- It can even help me think of story lines, decision paths etc
- Given that, there are still a TON of decisions to be made, e.g. what artwork to use, what makes sense from a story perspective
- Playtesting alone + iterating still occurs at human speed b/c if humans are the intended audience, getting their opinions takes human time, not computer time
I've started using this example more and more as it highlights that, yes, AI can save huge amounts of time. However, as we learned from the Theory of Constraints, there is always another bottleneck somewhere that will slow things down.
Coming up with a genuinely interesting gameplay loop with increasing difficulty levels and progressively revealed gameplay mechanics is a fascinating and extremely difficult challenge, no matter how much AI you throw at the problem.
You fill a jar with sand and there is no space for big rocks.
But if you fill the jar with big rocks, there is plenty of space for sand. Remove one of the rocks and the sand instantly fills that void.
Make sure you fit the rocks first.
You fill the bottle with half of the water, you put the fish in, you can fill in the other half. If you start with the first half, you will end up with more water.
Yes, there are bad metaphors, and people who take metaphors too seriously. But the fact that you can conjure a bad metaphor with semantics somewhat similar to some other metaphor does not mean that said metaphor is bad.
That water overflow step is missing / implicit. But that's an observable event.
then you fill the 3 liter bottle again, and pour the contents into the 5 liter bottle until the 5 liter one is full
empty the 5 liter bottle, and pour the 1 liter in the 3 liter bottle into the 5 liter bottle
fill the 3 liter bottle again and pour that into the 1 liter already in the 5 liter bottle to get 4 liters of water
> Given a 3-liter container and a 5-liter container, both initially empty, and access to tap water, how can you measure exactly 4 liters of water without using any additional containers
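The steps described above can be checked mechanically. Here's a minimal Python sketch that simulates the pours (the `pour` helper is my own, just for illustration):

```python
def pour(src, dst, src_cap, dst_cap):
    # Pour from src into dst until src is empty or dst is full.
    amount = min(src, dst_cap - dst)
    return src - amount, dst + amount

# Capacities: 3-liter and 5-liter bottles, both start empty.
three, five = 0, 0

three = 3                              # fill the 3L bottle
three, five = pour(three, five, 3, 5)  # pour it into the 5L (5L now holds 3)
three = 3                              # fill the 3L again
three, five = pour(three, five, 3, 5)  # top off the 5L, leaving 1L in the 3L
five = 0                               # empty the 5L
three, five = pour(three, five, 3, 5)  # move the leftover 1L into the 5L
three = 3                              # fill the 3L one last time
three, five = pour(three, five, 3, 5)  # pour it in: 1 + 3 = 4 liters

print(five)  # 4
```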
I've offered and received some convoluted metaphors recently, love leaning hard into this one.
Not sure, I used to be better at diagnosing this type of episode.
Lost me in paragraph three. We pay for those things because they're recognizable status symbols, not because they took a long time to make. It took my grandmother a long time to knit the sweater I'm wearing, but its market value is probably close to zero.
The fact that those items took a long time to make is part of what makes them status symbols though, because if you pay a lot of money for something that took no time to make at all (see most NFTs) you look like an idiot to a lot of people.
This sort of thing was done at a time when everybody did it, and now that it's not done, nobody does it
No kid ever said "did you see the sweater that Timmy's grandma knitted for him? That kid is so cool! "
Mostly because they all had grams' sweaters as well.
I don't know what term you were looking for, but a handmade present for someone dear is about the furthest thing from a "status symbol" that I can think of:
- it can't be bought
- it can't be transferred without losing almost all value (ie: it's only valuable to you, or at most your family, eBay doesn't want it)
- it provides no improvement whatsoever in one's social standing
I can't connect it at all to your listed points. An Olympic medal is about as obvious a status symbol as I can imagine, but it can't (meaningfully) be bought or transferred.
The status signified with a knit sweater is membership (and good standing!) in a caring family with elders not yet fully subsumed into their phones.
People, acquaintances and strangers alike, frequently comment on the knit socks I often wear, ask after who made them, and all of a sudden we're on "how's your mom" terms.
https://www.ebay.com/b/Olympic-Medal/27291/bn_55191416?_sop=...
> People, acquaintances and strangers alike, frequently comment on the knit socks I often wear,
Ok, that explains pretty much everything about your line of thought.
Thanks.
> https://www.ebay.com/b/Olympic-Medal/27291/bn_55191416?_sop=...
Of course you can buy an Olympic medal. You can't buy the status conferred by the medal (of Olympic champion / nth runner up).
> Ok, that explains pretty much everything about your line of thought.
I don't understand this either. Are you insulting me?
I'm also completely unimpressed by someone wearing a Rolex though, so different mileage for different people.
Understanding words does not require being impressed by anything, nor caring about the opinion of kids.
The old rich don't give a shit about Rolex watches beyond noticing the newb rich using them to tell on themselves.
If people don't consider someone with more money to be of a higher status, then symbols of that wealth aren't meaningful.
I think a lot of people have an ingrained belief that "more money == more status"
Agentic coding very much feels like a "video game" in the sense of you pull the lever and open the loot box and sometimes it's an epic +10 agility sword and sometimes it's just grey vendor trash. Whether or not it generates "good" or even "usable" code fades to the background as the thrill of "I just asked for a UI to orchestrate microservices and BLAMMO there it was!" moves to the fore.
Consider the idea of trying to determine how quickly an unknown number of timers will go ping. It could be 10,000 timers that go ping when finished or 1,000,000 timers that go ping when finished. I don't know when they are going to go ping, just that all the timers are running at different speeds spread over some distribution.
After one time period, 5,000 pings have been detected. Should you conclude that timers are pinging fairly quickly?
You cannot tell the overall duration of timers if you don't know the number of timers there are out there. Your only data that the timer exists is the ping, consequently you cannot tell if a small population is at high speed or a large population is at a moderate speed. In both cases the data you receive are the fastest of the population.
In other words we haven't yet seen what the 10 year project made using these tools is like (or even if it exists/will exist), because they haven't been around for 10 years.
Think about the analogy of transaction speed of money transfer vs actual delivery of good. With AI, we would make all digital tasks instantaneous, but the physical world will hum along at its own speed unless we speed it up with dark factories and what not.
Yes, you cannot build years of community and trust in a weekend. But sometimes it's totally sufficient to plant a seed, give it a small amount of water, and leave it on its own to grow. Go ask my father, who now has to deal with the huge maple tree that I planted 30 years ago and never cared for.
Open Source projects sometimes work like this. I created a .NET library for Firebase Messaging in a weekend a few years ago… and it grew on its own with PRs flowing in. So if your weekend project generates enough interest and continues to grow a community without you, what's the bad thing here? I don't get it.
Sometimes a tree dies and an Open Source project wasn’t able to make it.
That said, I’ve just finished rewriting four libraries to fix long-standing issues that I hadn’t been able to fix for the past 10 years.
It's been great to use Gemini as a sparring partner to fix the API surface of these libraries, which had been problematic all that time. I could validate and invalidate ideas so quickly.
Having once been one of the biggest LLM haters, I have to say that I immensely enjoy it right now.
LLMs only make creating these wrong things cheaper. Since developers now spend less time and effort to create the wrong thing, they don't feel the need to validate or reflect on it so much.
The risk is not the tool itself, but the over-reliance on it and forgoing feedback loops that have made teams stronger, e.g. debugging, testing, and reasoning why something works a particular way.
I think of it differently. Speed is great because it means you can change direction very easily, and being wrong isn't as costly. As long as you're tracking where you're going, if you end up in the wrong place, but you got there quickly and noticed it, you can quickly move in a different direction to get to the right place.
Sometimes we take time mostly because it's expensive to be wrong. If being wrong doesn't cost anything, going fast and being wrong a lot may actually be better as it lets you explore lots of options. For this strategy to work, however, you need good judgment to recognize when you've reached a wrong position.
I feel this new world sucks. We have new technology that boosts the productivity of the individual engineer, and we could be doing MUCH better work, instead of just rushed slop to meet quotas.
I feel I'm just building my replacement, to bring the next level of profits to the c-suite. I just wish I wasn't burning out while doing so.
I don’t think it’s exclusive to startups or tech either, it seems more like a downstream consequence of the fact that there’s no real innovation anymore. Capitalism demands constant growth, and when there are real technological improvements you can achieve that growth through higher productivity. If there are none, you have to achieve that growth through other means like forcing employees to work longer or cutting costs. The alpha is all coming from squeezing the labor force right now.
This doesn't sound right to me. We are currently getting smacked upside the head by an enormous technological innovation. I believe that, even within the framework of capitalism, this problem has social and political roots. The "robber baron" period late 19th century America has strong similarities to what we are seeing today, and technological stagnation was not the cause.
What's slower now are threats to production - even minor regulations take years or decades, and often appear only when workarounds have surfaced.
So what changed in the last 40+ years is the many tools businesses now have to shape the conditions of their business: the downstream market, upstream suppliers, and regulatory support/constraints. This is extremely patient work over generations of players, sometimes by individuals, but usually by coalitions of mutual corporate self-interest, where even the largest players couldn't refuse to participate.
It's evolution.
I do wonder if productivity with AI coding has really gone up, or if it just gives the illusion of that, and we take on more projects and burn ourselves out?
Here's the thing: we never had a remotely sane way to measure productivity of a software engineer for reasons that we all understand, and we don't have it now.
Even if we had it, it's not the sort of thing that management would even use: they decide how productive you are based on completely unrelated criteria, like willingness to work long hours and keeping your mouth shut when you disagree.
If you ask those types whether productivity has gone up with AI, they'll probably say something like "of course, we were able to let go a third of our programmers and nothing really seems to have changed"
"Productivity" became a poisoned word the moment that the suits realized what a useful weapon it was, and that it was impossible to challenge.
It doesn’t matter how fast we can make our widgets and chatbots when what you need is a self-sufficient workforce. We have outsourced everything material and valuable for society. Now we are left with industries of gambling, ad machines, and pharmaceuticals, with a government that is functionally bankrupt and politicians that have completely sold out.
ps: it's strange that YouTubers are talking about the same thing. People in different dev circles. Agentic coding feels like doom-scrolling in the IDE.
It definitely hasn't for me. I spent about an hour today trying to use AI to write something fairly simple and I'm still no further forward.
I don't understand what problem AI is supposed to solve in software development.
When Russians invaded Germany during WWII, some of them (who had never seen a toilet) thought that toilets were advanced potato washing machines, and were rightfully pissed when their potatoes were flushed away and didn't come back.
Sounds like you're feeling a similar frustration with your problem.
Why is AI supposed to be good?
I ended up having to type hundreds of lines of description to get thousands of lines of code that doesn't actually work, when the one I wrote myself is about two dozen lines of code and works perfectly.
It just seems such a slow and inefficient way to work.
Vibe slop-ing at supersonic speeds and waiting years to grow aren't the only options, there's something in between where you have enough signal to keep going and enough speed to not waste years on the wrong thing.
I feel that today's VCs have completely disregarded the middle and are focused on getting as big as possible as fast as possible without regard to the effect it's having on the ecosystem.
But anyhow, you can buy large-ish burlapped trees but they aren’t as healthy, often die, and nothing close to a 100+ yr old estate oak tree or a decades old rose garden. You just can’t make it faster, transplanting plants that old will kill them.
Most of the trees do just fine, and these nurseries will typically provide a warranty.
absolutely although i wonder how different 'trust' is in the culture of tomorrow? will it 'matter' as much, be as cherished, as earned over the fullness of time?
i suspect it is a pendulum - and we are back to oak trees at some point - but which way is the pendulum swinging right now?
Refactoring decent-sized components is an order of magnitude easier than it was, but the more important signal is still: why are you refactoring? What changed in your world or your world-view that caused this?
Good things still take time, and you can't slop-AI code your way to a great system. You still need domain expertise (as the EXCELLENT short story from the other day explained, Warranty Void if Regenerated (https://nearzero.software/p/warranty-void-if-regenerated) ). The decrease in friction does definitely allow for more slop, but it also allows for more excellence. It just doesn't guarantee excellence.
This is a bad start. Louis XIV at Versailles and Marly famously made whole forests appear or disappear overnight, to the utter dismay of Saint-Simon, the memorialist, who thought this was an unacceptable waste of money and energy.
And this was before the industrial revolution. Today I'm sure many more miracles happen every day.
Imagine a world in which the promise of AI was that workers could keep their jobs, at the same compensation as before, but work fewer hours and days per week due to increased productivity.
What could you do with those extra hours and days? Sleep better. Exercise more. Prepare healthy meals. Spend more time with family and friends. The benefits to physical and mental well-being are priceless. Even if you happened to earn extra money for the same amount of work, your time can be infinitely more valuable than money.
Unfortunately, that's not this world. Which is why the "increased productivity" promise doesn't seem to benefit workers at all.
If you look at the technological utopias that people imagined 50, 60+ years ago, they involved lives of leisure. If you would have told them that advances in technology would not reduce our working hours at all, maybe they would have started smashing the machines back then. Now we're supposed to be happy with more "stuff", even if there's no more time to enjoy stuff.
What AI allows us to do are those things we would never have been able to prioritize before: "write" those extra tests, add that minor feature, or solve that decade-old bug. It's not perfect, it's sometimes sloppy, but at least it's getting shit done. It doesn't matter if you solve 10% of your problem perfectly if you never have time for the remaining 90%.
I do miss the coding, _a lot_, but productivity is a drug and I will take it.
But no one wants to go out of their house.
Social connections. Trust. Facetime. All matter more than ever.
Want a moatable software business? Know your customers on a personal level. Have a personal relationship. Know the people that sign the contracts, know their kids names, where they vacationed last winter, their favorite local restaurant.
Get out of the house.
Oh, I thought it was because they're a way to show off about being rich.
> We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Even if she could reach the pedals, my 4yo doesn't have the attention span to drive. This isn't a "lived experience" thing, it's a physical brain development thing. IIRC there are effects with learning math where starting earlier had limited impact on being able to move to certain more advanced topics earlier; i.e. there's more going on than just hours of experience.
The standard age for voting is also the age for being a legal adult. There are sound logical reasons that these ages should match.
The standard drinking age is due to pressure by activists, and AIUI is lower in other countries.
Maybe for some. I think these examples were carefully chosen. Hermès are made in France, and "Swiss watch" doesn't automatically mean Rolex, though in that case Rolex does own most of their manufacturing (and there is a whole world of carefully made watches out there that don't cost 10K). As for old properties... there is a huge range there, but unless you are living in a castle, most people, at least in my city, are likely silently thinking: "I'm so sorry for them that they have to live in that old house."
Undoubtedly a lot of that comes down to production cost and safety. A plane is far more likely to kill people and it costs a shitload more to produce than an app (though plenty of software is mission critical). But now in software we can move quickly enough up front that if we don't start applying some discipline it's going to bite us in the ass in the long run.
They have spent the last decade building processes and guardrails for getting consistent average performance from people. But now, some talented people who worked at those companies are building their own new companies without the overhead and moving much, much more quickly.
I think what we assume is "vibe slop at inference speed" is not as simple as people make it out to be. From one perspective, I think it's generally people trying to save their jobs.
I'm seeing more slop come out of larger, older companies than the new ones (with experienced operators).
And the speed is somewhat scary. For a smaller team, it doesn't take as much effort to build a deep, beautiful product anymore.
The bottleneck was never the ability of an engineer to code. It was the 16 layers between the customer and the programmer, which have vanished in smaller companies and still force larger ones to produce slop.
I'm reading Against The Machine by Paul Kingsnorth, and now, reading this blog piece, it's hard not to make connections with the points of the book: the use of the tree as a counter-argument to the machine's automation credo in the blog post very much aligns with what I've read so far.
Not true; we do this because 99% of the time it's true. However, there are people who would be perfectly competent and responsible drivers before reaching the age of 16-18. Same with voting: there are humans who have a deep understanding of and intelligence about politics at a younger age than suffrage. Equally, there are people who will be reckless drivers at 40 and vote on a whim at 60.
We have these rules not because sophistication only comes through lived experience; we have them because it's strongly correlated and covers most error cases.
To take this to AI: run the model enough times with a high enough temperature, and perhaps it can solve your challenges with a high enough quality - just a thought.
I’ve been noticing that this simple reality explains almost all of both the good and the bad that I hear about LLM-based coding tools. Using AI for research or to spin up a quick demo or prototype is using it to help plot a course. A lot of the multi-stage agentic workflows also come down to creating guard rails before doing the main implementation so the AI can’t get too far off track. Most of the success stories I hear seem to be in these areas so far. Meanwhile, probably the most common criticism I see is that an AI that is simply given a prompt to implement some new feature or bug fix for an existing system often misunderstands or makes bad assumptions and ends up repeatedly running into dead ends. It moves fast but without knowing which direction to move in.
This is a real problem when the "direction" == "good feedback" from a customer standpoint.
Before, we had a product person for every ~20 people generating code; now we're all product people, and the machines are writing the code (not all of it, but enough of it that I will -1 a ~4000 line PR and ask someone to start over, instead of digging out of the hole in the same PR).
Feedback takes time on the system by real users to come back to the product team.
You need a PID-like smoothing curve over your feature changes.
Like you said, Speed isn't velocity.
Specifically, if you have a decent experiment framework to keep this disclosure progressive in the customer base, going in the wrong direction isn't the huge penalty it used to be.
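As a sketch of what a "PID-like smoothing curve" could mean in practice, here's a minimal proportional controller on a rollout fraction (the function name, gain, and error thresholds are all hypothetical, just to illustrate the idea):

```python
def next_rollout_fraction(current, error_rate, target_error=0.02, gain=0.5):
    # Proportional control: grow the rollout while the observed error
    # rate stays under target, shrink it when errors spike.
    adjustment = gain * (target_error - error_rate)
    return min(1.0, max(0.0, current + adjustment))

rollout = 0.05  # start the feature at 5% of users
for observed_error in [0.00, 0.01, 0.01, 0.08, 0.02, 0.01]:
    rollout = next_rollout_fraction(rollout, observed_error)
    print(f"rollout: {rollout:.3f}")
```

The rollout fraction climbs while feedback is good, then automatically backs off when the error rate spikes, which is exactly the smoothing effect over feature changes being described.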
I liked the PostHog newsletter about the "Hidden dangers of shipping fast", I can't find a good direct link to it.
https://newsletter.posthog.com/p/the-hidden-danger-of-shippi...
I suppose there is an argument that if you are building the wrong thing, build it fast so that you can find out more quickly that you built the wrong thing, allowing you to iterate more quickly.
Way too often that is used as an excuse for various forms of laziness; to not think about the things you can already know. And that lack of thinking repeats in an endless cycle when, after your trial and error, you don't use what you learned because "let's look forward not backward", "let's fail fast and often" and similar platitudes.
Catchy slogans and heartfelt desires are great but you gotta put the brains in it too.
Discovery is great and all but if what you discover is that you didn't aim well to begin with that's not all that useful.
A lot of people are so enamored by speed, they are not even taking the time to carefully consider the full picture of what they are building. Take the HN frontpage story on OpenCode: IIRC, a maintainer admitted they keep adding many shallow features that are brittle.
Speed cannot replace product vision and discipline.
It also moves fast with a tendency to pick the wrong direction (according to the goal of the prompter) at every decision point (known or unknown).