But yeah, my hunch is "the old way" - although I'm not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
Someone thought I was naive when I said my vibe-coded internal web admin site met the security requirements without my having looked at a line of code.
I knew that because the requirements were that anyone with access to the site could do anything on it, the site was secured with Amazon Cognito credentials, and the Lambda that served it had a least-privileged role attached.
If either of those invariants were broken, Claude would have found a major AWS vulnerability.
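Those two invariants are also cheap to spot-check mechanically. A minimal sketch, assuming boto3 and purely hypothetical function and policy names:

```python
# Hypothetical spot-check of the two invariants: the serving Lambda's role
# exists and carries only the one least-privilege policy we expect.
import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

# "admin-site-handler" is a made-up name for illustration.
fn = lambda_client.get_function_configuration(FunctionName="admin-site-handler")
role_name = fn["Role"].split("/")[-1]  # role ARN -> role name

attached = iam.list_attached_role_policies(RoleName=role_name)
policy_arns = {p["PolicyArn"] for p in attached["AttachedPolicies"]}

# Hypothetical least-privilege policy ARN; anything beyond it breaks the invariant.
expected = {"arn:aws:iam::123456789012:policy/admin-site-least-priv"}
assert policy_arns == expected, f"unexpected policies attached: {policy_arns - expected}"
```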
I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?
I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!
Most folks do RC between jobs, either because they quit their job specifically to do RC or because they lost their job and then decide to apply. Other common ways are as part of a formal sabbatical (returning either to an industry job or to academia), as part of garden leave, or while on summer break (for college and grad students). We also get a fair number of freelancers/independent contractors (who stop doing their normal work during their batches), as well as some retirees.
Some folks use RC as a way to enter the industry (both new grads and folks switching careers), though the majority of people who attend have already worked professionally as programmers.
We've had people aged 12 to early 70s attend, though most Recursers are in their 20s, 30s, and 40s.
Unless you can swing a six-week sabbatical and return to your current job.
Once upon a time we wrote code in assembly language. Then we moved to C or other compiled languages. Assembly programming remained a very useful but niche skill. You compile your code and trust the compiler. You can examine the compiler output and that is at times necessary, but that's not something most developers know how to do.
We may be looking at something similar: most development work moving to the LLM abstraction level, with the key skills being writing good prompts and managing the context window, agents, memories, and so on. Some developers will be able to examine LLM-generated code and spot problems there, but most will not have that skill.
I'm not sure how to feel about it. From when ChatGPT showed up until a couple of months ago, I was firmly skeptical of LLM programming. We had new models every few weeks and I felt like each one was just a different twist on the same low-quality slop output. But recently the models seem to have crossed some threshold where their capabilities really improved, and I have now used Claude - still sparingly - to implement features in much less time than I'd need myself, or to locate a bug based on just log output. I don't yet buy the "coding is solved" hype, but we're at least looking at the biggest change to programming since the adoption of high-level programming languages.
Then, when credits run out, it's show time! The code is neatly organized, the abstractions make sense, the comments are helpful, so I have solid ground to do some good old organic human coding. I make sure that when I'm approaching the limits, I ask the AI to set the stage.
I used to get frustrated when credits ran out because the AI was making something I would need to study to comprehend. Now I'm eager for the next "brain time hand-out".
It sounds weird, but it's a form of teamwork. I have the means to pay for a larger plan, but I'd rather keep my brain active.
I am seeing non-technical people getting involved in building apps with Claude. After the Openclaw and other agentic obsession trends, I just don't see it as pragmatic to continue down the road of AI obsession.
In most other aspects of life my skills were valued for my ability to care about details under the hood and to get my hands dirty on new problems.
Curious to see how the market adapts and how people find ways to communicate this ability for nuance.
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.
Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense ever since coding LLMs showed up has been that continuing to write the code myself is important for keeping it coherent and manageable as a whole, including my mental model of it.
And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.
Despite what some might say, there isn't a big moat between those who use LLMs for programming and those who don't. So if I ever truly need to use LLMs to survive, I'll just have to start paying for a subscription.
In the meantime, I'll be keeping my own skills sharp and see how that turns out in a few years. I'm afraid software quality is going to take a nosedive in the near future; it was already on a downward trend.
> 15 years of Clojure experience
My God I’m old.
I still keep hoping there'll be a glut of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way:
https://m.youtube.com/watch?v=J1W1CHhxDSk
But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.
What scares the shit out of me are all these new CS grads who admit they have never coded anything more complex than basic class assignments by hand, who just let LLMs push straight to main for everything, and who get hired as senior engineers.
It is like hiring an army of accountants who have never done math on paper and exclusively let TurboTax do all the work.
If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.
But also, I feel this way about the industry long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.
Job one of everyone I mentor is to build Linux from scratch, and if you want an LLM, to build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.
It's not though. It's fundamentally different, because TurboTax will still work with clear deterministic algorithms. We need to see that the jump to AI is not a jump from handwritten math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.
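To make the contrast concrete, here's a toy sketch (illustrative only, nothing like a real tax engine): the first function is deterministic in the way TurboTax's rules are, the second is right only most of the time, like the magic machines.

```python
import random

def tax_deterministic(income: float) -> float:
    # Same input -> same output, every single time.
    return round(income * 0.20, 2)

def tax_stochastic(income: float) -> float:
    # "Sort of works 90% of the time": occasionally plausible but wrong.
    answer = income * 0.20
    if random.random() < 0.10:
        answer *= random.choice([0.9, 1.1])
    return round(answer, 2)
```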
They probably wouldn't think that the calculator makes them faster either
If we assume 50 weeks per year, this gives us about 400-500 lines of code per week. Even at a long average of 65 characters per line, that comes to no more than about 33K bytes per week. Your comment is about 1,250 bytes long; if you wrote four such comments per day for a whole week, you would exceed that 33K-byte limit.
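Spelled out, with the same numbers as above:

```python
lines_per_week = 500                # upper end of the 400-500 estimate
chars_per_line = 65                 # generous average line length
code_bytes_per_week = lines_per_week * chars_per_line  # 32,500 ~= 33K

comment_bytes = 1250
comment_bytes_per_week = comment_bytes * 4 * 7         # 35,000

assert comment_bytes_per_week > code_bytes_per_week    # the comments win
```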
I find this amusing.
My software engineering experience now spans almost 37 years (December will be the anniversary), six to seven years more than the median age of Earth's human population. I had two burnouts in that time, but no carpal tunnel syndrome symptoms at all. When I code, I prefer to factor subproblems out; it reduces typing and support costs.
So are only the old hands allowed from now on, or how are we going to provide these learning opportunities at scale for new developers?
Serious question.
Employers were already refusing to hire juniors, even when paying a junior for 0.5-1 years would be cheaper than spending the same on a senior.
They'll never accept intentionally "slower" development for the greater good.
That comes post Chernobyl.
Always happy to mentor people at stagex and hashbang (orgs I founded).
Also being a maintainer of an influential open source project goes on a resume, and helps you get seen in a crowded market while boosting your skills and making the world better. Win/win all around.
I don't think SWE is a promising career to get started in today.
But pro-AI posts never seem to pin themselves down on whether code checked in will be read and understood by a human. Perhaps a lot of engineers work in “vibe-codeable” domains, but a huge amount of domains deal with money, health, financial reporting, etc. Then there are domains those domains use as infrastructure (OS, cloud, databases, networking, etc.)
Even where it is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical for that company.
We have a completely broken internet with almost nothing using memory encryption, deterministic builds, full source bootstrapping, secure enclaves, end to end encryption, remote attestation, hardware security auth, or proper code review.
There are decades of human cognitive work to be done here, even with LLM help, because the LLMs were trained to keep doing things the old way unless we direct them otherwise from our own base of experience in cutting-edge security research that no models are sufficiently trained on.
That has been exactly the situation for years. Once graduated, accountants are not doing maths. They are using software (Excel, Xero, etc.). They do need to know some basic formulas, e.g. NPV.
What they need to know is the law, current business practices, etc.
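NPV itself is a one-liner; a minimal sketch of the formula, with made-up numbers, not accounting advice:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[0] is the cash flow at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: invest 1000 now, receive 400/year for 3 years, at a 10% rate.
print(round(npv(0.10, [-1000, 400, 400, 400]), 2))  # -5.26: slightly negative
```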
If that's true, then you likely used to produce slop for code. :-(
> I did things the old way for 25 years and my carpal tunnels are wearing out.
You wrote so much code as to wear out your carpal tunnels? Are you sure it isn't the documentation and the online chatter with your peers? :-(
... anyway, I know it's corny to say, but - you should have, and should now, improve the ergonomics of your setup. Play with things like the depth of your keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.
> Job one of everyone I mentor is to build Linux from scratch
"from scratch" can mean any number of things.
Local models are quite good now, and can jump right in to projects I coded by hand, and add new features to them in my voice and style exactly the way I would have, and with more tests than I probably would have had time to write by hand.
Three months ago I thought this was not possible, but local models are getting shockingly good now. Even the best rust programmers I know look at output now and go "well, shit, that is how I would have written it too"
That is a hard thing to admit, but at some point one must accept reality.
> anyway, I know it's corny to say, but - you should have, and should now, improve the ergonomics of your setup. Play with things like the depth of your keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.
I already type in Colemak on a split keyboard, each half separated and tented 45 degrees, on a saddle stool at a sit/stand desk that I alternate between. I have read all the research and applied all of it that I can. Without having done all that, I probably would have had to change careers.
> "from scratch" can mean any number of things.
As far as I know I was the first person alive to deterministically build Linux from 180 bytes of machine code, up through tinycc, to gcc, to a complete LLVM-native Linux distribution.
When I say from scratch, I mean from scratch. Also, all of this was before AI, without any help from AI, though I sure do appreciate it now for helping with package maintenance and debugging while I am sleeping.
This is exactly how you learn to create better abstractions and write clear code that future you will understand.
I do the former for fun. The latter to provide for my family.
There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.
(I swapped the title for the subtitle earlier because I thought it was more informative. What I missed was the flamebaity effect that "the old way" would have. Obvious in hindsight!)
Why would you think that? The landscape is fast-moving. Prompting tricks and "AI skills" of yesterday are already dated and sometimes actively counterproductive. The explicit goal of the companies working on the tech is to lower the barriers to entry and make it easier to use, building harnesses and doing refinement that align LLMs to an intuitive mode of interaction.
Do you think they'll fail? Do you think we've plateaued in terms of what using a computer looks like and your learnings for wrangling the agents of this year will be relevant for whatever the new hotness is next year? It's a strong claim that demands similarly strong argument to support.
How? I just open multiple terminal panes, use git worktrees, and then basically it's good old software dev practices. What am I missing?
Claude Opus is going to give zero fucks about your attempts to manage it.
It is hard indeed. I find it really quite exhausting.
Personally, I feel like I have always been a very competent programmer. I'm embracing the new way of working, but it seems like quite a different skillset. I somewhat believe that it will be relevant for a long time, because there is an incredibly large gap in outcomes between members of my team using AI. I've had good results so far, but I'm keen to improve.
For the good stuff, there's no alternative but to know and to have taste. LLMs change nothing.
> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!
Twenty whole minutes. Us old-timers (I am 39) are chortling.
I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.
Though a lot of the time this is more an inefficiency of the documentation and Google rather than something only LLMs could do.
* Ask someone to come over and look
* Come back the next day, work on something else
* Add comment # KNOWN-ISSUE: ...., and move on and forget about it.
But yeah, I've spent days on a bug at work before, ha ha!
This is a tried and true way of working on puzzles and other hard problems.
I generally have 2-4 important things in flight, so I find myself doing this a lot when I get stuck.
Just a note that, for chronic procrastinators, having 2 to 4 important things going on is a trigger, and they'd wind up not completing anything.
I wonder, for such folks, if SoTA LLMs help with procrastination?
This can be set to as long as 1 hour of being stuck. It can also be 5 minutes. But by default it is 30 seconds.
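Mechanically it's a tiny feature. A sketch of the idea (assumed, not the game's actual code):

```python
import time

class HintTimer:
    """Offer a nudge after `delay` seconds without player progress."""

    def __init__(self, delay: float = 30.0):  # default 30s; configurable up to 3600s
        self.delay = delay
        self.last_progress = time.monotonic()

    def progress(self) -> None:
        # Call whenever the player advances the puzzle.
        self.last_progress = time.monotonic()

    def should_offer_hint(self) -> bool:
        return time.monotonic() - self.last_progress >= self.delay
```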
My inner kid was screaming "that's cheating!" :-D but on second thought it is a very cool feature for us busy adults. However, it's sad to see the extremes gamedevs have to go to in order to appease the short-term, mindless consumers of today's TikToks.
But more seriously, where's the joy of generating long-standing memories of being stuck for a while on a puzzle that will make you remember that scene for 30 years? An iconic experience that separates this genre from just being an animated movie with more steps.
I couldn't imagine "Monkey Island II but every 30 seconds we push you forward". Gimme that monkey wrench.
TFA and this comment just made me have this thought about today's pace of consumption, work, and even gaming.
If you want to solve the problem quickly, then just use the resources you have; if you want to become someone who can solve problems quickly, then you need to spend hundreds of hours banging your head against a wall.
But just today a bug was reported by a customer (we are still in testing, so not a production bug). I implemented this project myself from an empty git repo and an empty AWS account, including 3 weeks of pre-implementation discovery.
I reproduced the issue and threw the problem at Claude with nothing but two pieces of information - the ID of the event showing the bug and the description.
It worked backwards looking at the event stream in the database, looking at the code that stored the event stream, looking at the code that generated the event stream (separate Lambda), looking at the actual config table and found the root cause in 3 minutes.
After looking at the code locally, it even looked at the cached artifacts of my build and verified that what was deployed was the same thing that I had locally (same lambda deployment version in AWS as my artifacts). I had it document the debug steps it took in an md file.
Why make life harder on myself? Even if it were something I was doing as a hobby, I have a wife who I want to spend time with, I’m a gym rat and I’m learning Spanish. Why would I waste 6 hours doing something that a computer could do for me in 5 minutes?
Assuming he has a day job and gets off at 6, he would be spending all of his off time chasing down a bug, time he could be using to do something else.
If you’re experienced as you are, you’re not learning the same way a junior assigned this might learn from it.
I also used Codex and asked questions about how the codebase worked to refresh my own memory. Why wouldn’t a junior developer do the same?
I mentioned that I had Codex describe in detail how it debugged it. It walked through each query it did, the lines of code it looked at, and the IaC. It jogged my memory about code I wrote a year ago, after I had been on other projects since.
Just because it worked this time doesn’t mean it always will.
If you need further explanation of why you might want to spend more time resolving a bug to learn about the systems you’re tasked with maintaining then I’m at a loss sorry.
> But he was doing this for education, not for work.
That's why he should spend 6 hours on it, and not give up and run to the gym. That's like saying "I shouldn't spend an hour at the gym this week, lifting weights is hard and I want to watch TV. I'll just get my forklift to lift the weights for me!"

Having a tool that instantly searches through the first 50 pages of Google and comes up with a reasonable solution is just speeding up what I would have done manually anyway.
Would I have learned more about (and around) the system I‘m building? Absolutely. I just prefer making my system work over anything else, so I don’t mind losing that.
Just so many confusing things go wrong in real-world software, and it is asinine to think that Mythos finding a ton of convoluted memory errors in legacy native code means we've solved debugging. People should pay more attention to the conclusion of "Claude builds a C compiler" - eventually it wasn't able to make further progress, the code was too convoluted and the AI wasn't smart enough. What if that happens at your company in 2027, and all the devs are too atrophied to solve the problem themselves?
I don't think we're "doomed" like some anti-AI folks. But I think a lot of companies - potentially even Anthropic! - are going to collapse very quickly under LLM-assisted technical debt.
The euphoria I felt after fixing bugs that I stayed up late working on is like nothing else.
The time wasted thinking our craft matters more than solving real world problems?
The amount of ceremony we're giving bugs here is insane.
Paraphrasing some of y'all,
> "I don't have to spend a day stepping through with a debugger hoping to repro"
THAT IS NOT A PROBLEM!
We're turning sand into magic, making the universe come alive. It's as if we just got electricity and the internet and some of us are still reminiscing about whale blubber smells and chemical extraction of kerosene.
The job is to deliver value. Not miss how hard it used to be and how much time we wasted finding obscure cache invalidation bugs.
Only algorithms and data structures are pure. Your business logic does not deserve the same reverence. It will not live forever - it's ephemeral, to solve a problem for now. In a hundred years, we'll have all new code. So stop worrying and embrace the tools and the speed up.
This is both a strawman and a false dichotomy.
Too many of our engineering conversations are dominated by veneration of the old. Let me be hyperbolic so that I can interrupt your train of thought and say this:
We're starting to live in the future.
Let go of your old assumptions. Maybe they still matter, but it's also likely some of them will change.
The old ways of doing things should be put under scrutiny.
In ten years we might be writing in new languages that are better suited for LLMs to manipulate. Frameworks and libraries and languages we use today might get tossed out the door.
All energy devoted to the old way of doing things is perhaps malinvested into a temporary state of affairs. Don't over-index on that.
If you can't fix the bug, just slop some code over it so it's more hidden.
This is all gonna be fascinating in 5-10 years.
But for juniors, it's invaluable experience. And as a field we're already seeing problems resulting from the new generations of juniors being taught with modern web development, whose complexity gets badly in the way of debugging.
I worked on a project that depended on an open source but deprecated/unmaintained Linux kernel module that we used for customers running RHEL[1]. There were a number of serious bugs causing panics that we encountered, but only for certain customers with high VFS workloads. I spent days to a week+ on each one, reading kernel code, writing userland utilities to repro the problem, and finally committing fixes to the module. I was the only one on the team up to the task.
We couldn't tell the customers to upgrade, we couldn't write an alternative module in a reasonable timeframe, and they paid us a lot of money, so I did what I had to do.
I'm sure there are lots of other examples like this out there.
[1] Known for its use of ancient kernels with 10000 patches hand-picked by Red Hat. At least at the time (5-10 years ago).
Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.
We had a total of 10 hours of class + lab where I taught them about assembly language and told them about the registers, instructions, and addressing modes of the chip, memory map and monitor routines of the Apple, and after that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.
At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we told them they should do before coding in previous classes, but they didn't do because a powerful editor was right there so why not use it?...
And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.
They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.
Today I program 6502/7 asm for my Atari to help me unwind and it grounds me and gives me joy, while in my day job I'm easily 10 levels of abstractions higher.