The threat is comfortable drift toward not understanding what you're doing
181 points by zaikunzhang 4 hours ago | 103 comments

hgo 51 seconds ago
I like this article and it reads well, but I have to say that, to me, it really reads as something written by an LLM. Probably under supervision by a human who knew what it should say.

I don't know if I mind.

Example. This paragraph, to me, has an eerily perfect rhythm. The ending sentence perfectly delivers the twist. Like, why would you write an argument piece in the science realm in perfect prose?

> Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

reply
sd9 2 hours ago
The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

To be honest, I’m looking at leaving software because the job has turned into a different sort of thing than what I signed up for.

So I think this article is partly right: Bob is not learning the skills we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.

I don’t like it, but I’m trying to face up to it.

reply
djaro 2 hours ago
> So if Bob can do things with agents, he can do things.

The problem arises when Bob encounters a problem too complex or unique for agents to solve.

To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, a real cook will eventually be able to make far better meals than anything you can buy at a grocery store.

The market will always value the exact things LLMs cannot do, because if an LLM can do something, there is no reason to hire a person for that.

reply
jacquesm 2 hours ago
Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.
reply
roenxi 43 minutes ago
That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable; he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents, he'll get through life achieving at least a normal level of success.

But there is also a more subtle thing, which is that we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. She's not going to win that battle in the long term.

> The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.

reply
kelnos 12 minutes ago
> we're trending towards superintelligence with these AIs

The article addresses this, because, well... no, we aren't. Maybe we are. But it's far from clear that we aren't heading toward a plateau in what these agents can do.

> Whether a human does actual work or not isn't particularly exciting to a market.

You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough, and be able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.

reply
b00ty4breakfast 4 minutes ago
>But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

do you have any evidence for that, though? Besides marketing claims, I mean.

reply
roenxi 2 minutes ago
[delayed]
reply
mattmanser 29 minutes ago
The author's point went a little over your head.

It doesn't matter if Bob can be normal. There was no point in paying him to be on the program.

From the article:

> If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

reply
uoaei 34 minutes ago
"Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

reply
wizzwizz4 13 minutes ago
From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.

reply
raldi 17 minutes ago
To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.
reply
b112 41 minutes ago
Worse, soon fewer and fewer people will ever taste good food, as even higher-end restaurants increasingly just use pre-made meals.

As fewer people know what good food tastes like, the entire market will enshittify towards lower and lower calibre food.

We already see this with, for example, fruit in cold climates. I've known people who had only ever bought it from the supermarket, then tried it at a farmers' market during the two weeks it's in season. The look of astonishment on their faces at the flavour is quite telling. They simply had no idea how dry and flavourless supermarket fruit is.

Nothing beats an apple picked just before you eat it.

(For reference, produce shipped to supermarkets is often picked, even locally, before it is entirely ripe. It lasts longer, and handles shipping better, than perfectly ripe fruit.)

The same will be true of LLMs. They're already out of "new things" to train on. I question whether they'll ever learn new languages; who will they observe to train on? And what does it matter if the code is unreadable by humans regardless?

And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.

What then? Who will be able to decipher such gibberish?

Literally all true advancement will stop, for LLMs never invent, they only mimic.

reply
QuantumNomad_ 32 minutes ago
> if Bob can do things with agents, he can do things

I’ve been reminded lately of a conversation I had with a guy at a hackerspace cafe in Berlin around ten years ago.

He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.

I think about this now when I think about agentic coding. Perhaps over time most software development will be done without knowledge of the high-level programming languages we know today. There will still be people who work in those languages and are intimately familiar with them, just as today there are still people who work in assembly, even if their share of all programmers has shrunk over time.

And just like there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary and vibe coding alone won’t cut it. But the percentage of people working in high-level languages will go down, relative to the number of people vibe coding and never even looking at the code the LLM is writing.

reply
sd9 27 minutes ago
Lovely story, thanks for sharing.

I wonder how many assembly programmers got over it and retrained, versus moved on to do something totally different.

I find the agentic way of working simultaneously more exhausting and less stimulating. I don’t know if that’s something I’m going to get over, or whether this is the end of the line for me.

reply
staindk 56 minutes ago
They aren't going away, but for some they may become prohibitively expensive after all the subsidies end.

I do think coding with local agents will keep improving to a good level, but if deep-thinking cloud tokens become too expensive, you'll reach the limits of what your local, limited agent can do much more quickly (i.e., be even less able to do the more complex work other replies mention).

reply
tonfa 43 minutes ago
> They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

Even if inference were subsidized (afaik it isn't when paying through API calls; subscription plans might indeed lose money on heavy users, but that's how any subscription model typically works, and it can still be profitable overall), models are still improving and getting cheaper, so that seems unlikely.

reply
ernst_klim 7 minutes ago
It probably is still subsidized, just not as much. We won't know if these APIs are profitable unless these companies go public, and till then it's safe to bet they are underpriced to win market share.
reply
codemonkey5 12 minutes ago
Some people probably enjoyed writing assembly (I am not one of those people, especially when I had to do it on paper in university exams), and code agents can probably do it well. But for the hard tasks, the tasks that are net new, code agents will produce bad results, and you still need the people who enjoy that kind of work to show the path forward.

Code agents are great template generators and modifiers, but for net-new (innovative!) work they're often barely usable without a ton of handholding or "non-code-generation coding".

reply
jurgenaut23 16 minutes ago
I understand your point, but this is a purely utilitarian view, and it doesn’t account for the fact that even if agents can do everything, it doesn’t mean they should, in both a normative and a positive sense.

There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.

The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled by one-sided arguments.

reply
sd9 12 minutes ago
The people who pay my bills operate in a largely utilitarian fashion.

They’re not going to pay me to manually program because I find it more enjoyable, when they can get Bob to do twice as much for less.

This is why I say I don’t like it, but it is what it is.

reply
torben-friis 29 minutes ago
Can you run an industry-level LLM at home?

If not, you're trading learning to cook for Uber-only meals.

And since the alternative is starving, Uber will boil the pot.

Don't give up your self sufficiency.

reply
sd9 26 minutes ago
I’m very good at the handcrafted stuff; I’ve been doing this a while. I don’t feel like giving up my self-sufficiency, I just feel like the writing is on the wall.
reply
torben-friis 21 minutes ago
By "you" I actually meant this hypothetical person who's only good enough for AI assisted. Though even for us who are already experienced, we should keep the manual stuff even if it's just as going to the gym. I don't see myself retaining my skills for long by just reviewing LLM output.
reply
sd9 20 minutes ago
Yes sorry, I didn’t think you were addressing me directly, just adding my own thoughts.

I agree totally with the sentiment, and I am concerned about my own skills atrophying.

reply
gbro3n 21 minutes ago
I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True, I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid-for) specialised help.
reply
b00ty4breakfast 15 minutes ago
And just like modern vehicles rob the user of autonomy, so too for coding agents. Modern tech moves further and further away from empowering normal people and increasingly serves to grow the influence of corporations and governments over our day to day lives.

It's not inherent, but it is reality unless folks stop giving up agency for convenience. I'm not holding my breath.

reply
qsera 33 minutes ago
>The thing is, agents aren’t going away...

Aren't they currently propped up by investor money?

What happens when the investors realize the scam that it is and stop investing, or start investing less?

reply
rustyhancock 8 minutes ago
The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.

Didn't PhD projects use to be about advancing the state of the art?

Maybe we'll get back to that.

reply
nidnogg 45 minutes ago
I don't like it either. But what really guarantees other markets won't flunk similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it." They were, sure enough, computer and/or math people at heart from an early age.

More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can tack onto? That's all I care about, to be honest.

I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs, which have graced me and many others with quality time around family, friends, nature and sports while off work.

reply
sd9 41 minutes ago
I don’t know, it’s only since about December that I felt things really start to shift, and February when my job started to become very different.

Personally I’m looking at more physical domains, but it’s early days in my exploration. I think if I wanted to stick to remote work (which I have enjoyed since 2020), then the AI story would just keep playing out.

I’m also totally open to taking a big pay cut to do something I actually enjoy day to day, which I guess makes it easier.

reply
throwanem 18 minutes ago
So recent? I've been on sabbatical (the real kind, self-funded) for eighteen months, and while my sense has been things have not stopped heading downhill since I stepped off the ride back in 2024, to hear of such a sudden step change is somewhat novel. "Very different" just how, if you don't mind my asking?

(I'm also looking for local, personally satisfying work, in exchange for a pay cut. Early days, and I am finding the profession no longer commands quite the social cachet it once did, but I'm not foolish enough to fail to price for the buyer's market in which we now seek to sell our labor. Besides, everyone benefits from the occasional reminder to humility! "Memento mori" and all that.)

reply
sd9 15 minutes ago
I feel like the models and harnesses had a step change in capability around December, as somebody who’s been using them daily since early/mid 2025. It’s gone from me doing the majority of the programming, to me doing essentially none, since December. And that change felt quite sudden.

The more recent shift after December is mostly explained by people at my company catching up with the events that happened in December. And that’s more about drastically increased productivity expectations, layoffs, etc.

I’m also considering a self funded sabbatical. I could do it. What sort of thing have you been up to, any advice?

reply
bakugo 15 minutes ago
Bob can't do things, Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.

But let's assume Bob continues to have an active role, because the people above him bought into the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?

I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.

reply
plato65 2 hours ago
> So if Bob can do things with agents, he can do things.

I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.

That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.

reply
mattmanser 5 minutes ago
There's a long, detailed, often repeated answer to your open question in the article.

Namely, if you can't do it without the AI, you can't tell when it's given you plausible-sounding bullshit.

So Bob just wasted everyone's time and money.

reply
troupo 47 minutes ago
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?

Any skill is practice first and foremost. If Bob has had no practice, what then?

reply
sd9 38 minutes ago
My point is it doesn’t matter whether he can fix things without agents. The real world isn’t an exam hall where your boss tells you “no naughty AI!” You just get stuff done, and if Bob can do that with agents, nobody cares how he did it.
reply
troupo 36 minutes ago
> The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares.

Indeed. That's why Anthropic had to hire real engineers to make sure their vibe-coded shit doesn't consume 68GB of RAM. Because real world: https://x.com/jarredsumner/status/2026497606575398987

reply
sd9 29 minutes ago
If your job has been totally unaffected by AI, then I am jealous.

I’m not trying to argue that AI can do everything today. I acknowledge that there are many things that it is not good at.

reply
DavidPiper 13 minutes ago
I've just started a new role as a senior SWE after 5 months off. I've been using Claude a bit in my time off; it works really well. But now that I've started using it professionally, I keep running into a specific problem: I have nothing to hold onto in my own mind.

How this plays out:

I use Claude to write some moderately complex code and raise a PR. Someone asks me to change something. I look at the review and think, yeah, that makes sense, I missed that and Claude missed that. The code works, but it's not quite right. I'll make some changes.

Except I can't.

For me, it turns out having decisions made for you and fed to you is not the same as making the decisions and moving the code from your brain to your hands yourself. Certainly every decision made was fine: I reviewed Claude's output, got it to ask questions, answered them, and it got everything right. I reviewed its code before I raised the PR. Everything looked fine within the bounds of my knowledge, and this review was simply something I didn't know about.

But I didn't make any of those decisions. And when I have to come back to the code to make updates - perhaps tomorrow - I have nothing to grab onto in my mind. Nothing is in my own mental cache. I know what decisions were made, but I merely checked them, I didn't decide them. I know where the code was written, but I merely verified it, I didn't write it.

And so I suffer an immediate and extreme slow-down, basically re-doing all of Claude's work in my mind to reach a point where I can make manual changes correctly.

But wait, I could just use Claude for this! For now I don't, because I've seen this before: just a few moments ago. Using Claude has made things significantly slower whenever I need to use my own knowledge and skills.

I'm still figuring out whether this problem is transient (because this is a brand new system that I don't have years of experience with), or whether it will actually be a hard blocker to me using Claude long-term. Assuming I want to be at my new workplace for many years and be successful, it will cost me a lot in time and knowledge to NOT build the castle in the sky myself.

reply
theteapot 10 minutes ago
I have a vaguely unrelated question re:

> You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year ...

Is that how PhD projects are supposed to work? The supervisor is a subject matter expert and comes up with a well-defined achievable project for the student?

reply
loveparade 5 minutes ago
I think it just really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but doesn't have time to do himself. That's pretty common. Sometimes advisors don't have that, and/or want students to come up with their own project proposals, etc.
reply
InkCanon 6 minutes ago
Often at the start, yes. That way the student gets a bit of recognition, a bit of experience and a bit of knowledge.
reply
stavros 2 hours ago
I see this fallacy being committed a lot these days. "Because LLMs, you will no longer need a skill you don't need any more, but which you used to need, and handwaves that's bad".

Academia doesn't want to produce astrophysics (or any field) scientists just so the people who became scientists can feel warm and fuzzy inside when looking at the stars, it wants to produce scientists who can produce useful results. Bob produced a useful result with the help of an agent, and learned how to do that, so Bob had, for all intents and purposes, the exact same output as Alice.

Well, unless you're saying that astrophysics as a field literally does not matter at all, no matter what results it produces, in which case, why are we bothering with it at all?

reply
djaro 2 hours ago
The problem is that LLMs stop working after a certain point of complexity or specificity, which is very obvious once you try to use it in a field you have deep understanding of. At this point, your own skills should be able to carry you forward, but if you've been using an LLM to do things for you since the start, you won't have the necessary skills.

Once they have to solve a novel problem that was not already solved for all intents and purposes, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.

It seems to me that "high-skill human" > "LLM" > "low-skill human"; the trap is that people with low skill levels will see a fast improvement in their output, at the hidden cost of the slow build-up of skills that has a much higher ceiling.

reply
brookst 36 seconds ago
This whole argument can be made for why every programmer needs to deeply understand assembly language and computer hardware.

At a certain point, higher level languages stop working. Performance, low level control of clocks and interrupts, etc.

I’m old enough that dropping into assembly to be clever with the 8259 interrupt controller really was required. Programmers today? The vast majority don’t really understand how any of that works.

And honestly I still believe that hardware-up understanding is valuable. But is it necessary? Is it the most important thing for most programmers today?

When I step back this just reads like the same old “kids these days have it so easy, I had to walk to school uphill through the snow” thing.

reply
stavros 2 hours ago
Then test Bob on what you actually want him to produce, i.e. novel problems, instead of trivial things that won't tell you how good he is.

Why is it a problem of the LLM if your test is unrelated to the performance you want?

reply
skydhash 24 minutes ago
What people forget about programming is that it is a notation for formal logic, one that can be executed by a machine. That formal logic is for solving a problem in the real world.

While we have a lot of abstractions that solve some subproblems, we still need to connect those solutions to solve the main problem. At some point that combination becomes its own technical challenge, and the skill it requires is the same one you build by solving simpler problems with common algorithms.

reply
troupo 46 minutes ago
How can Bob produce novel things when he lacks the skills to do even trivial things?

I didn't get to be a senior engineer by immediately being able to solve novel problems. I can now solve novel problems because I spent untold hours solving trivial ones.

reply
stavros 42 minutes ago
Because trivial things aren't a prerequisite for novel things, as any theoretical mathematician who can't do long division will tell you.
reply
Folcon 36 minutes ago
There's a difference between needing no trivial skills to do novel things and not needing specific prerequisite trivial skills to do a novel thing.
reply
troupo 40 minutes ago
Ah yes. The famous theoretical mathematicians who immediately started on novel problems in theoretical mathematics without first learning and understanding a huge number of trivial things like how division works to begin with, what fractions are, what equations are and how they are solved etc.

Edit: let's look at a paper like Some Linear Transformations on Symmetric Functions Arising From a Formula of Thiel and Williams https://ecajournal.haifa.ac.il/Volume2023/ECA2023_S2A24.pdf and try to guess how many of those trivial things were completely unneeded to write a paper like this.

reply
stavros 35 minutes ago
Seems that teaching Bob trivial things would be a simple solution to this predicament.
reply
pards 2 hours ago
> Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him. He shipped a product, but he didn't learn a trade.

We're minting an entire generation of people completely dependent on VC funding. What happens if/when the AI companies fail to find a path to profitability and the VC funding dries up?

reply
Paradigma11 22 minutes ago
What will happen is pretty obvious. Those companies will either be classified as too important to fail and get government support or go bankrupt and will be bought for pennies on the dollar. For the customers nothing much will change since tokens are getting cheaper every year and the business is already pretty profitable. Progress will slow down massively till local open weight models catch up to pre-crash SotA and go on from there.
reply
stavros 2 hours ago
Do you think that'll take a generation to happen?
reply
rafterydj 45 minutes ago
ChatGPT 3.5 came out coming on 4 years ago now. I don't think a human generation (~20-30 years) needs to be the benchmark here; new juniors in the industry for a handful of years can be said to be a whole "generation". That's how I was reading OP.
reply
nandomrumber 2 hours ago
> why are we bothering with it at all?

Because we largely want people who have committed to tens of thousands of dollars of debt to feel sufficiently warm and fuzzy enough to promote the experience so that the business model doesn’t collapse.

It’s difficult to think anyone would end up truly regretting doing a course in astrophysics, or any of the liberal arts and sciences, if they have a modicum of passion, but it’s very believable that a majority of them won’t go on to have a career in it, whatever it is, directly.

They’re probably more likely to gain employment on their data science skills, or whatever other core competencies they honed, or just the fact that they’ve proven they can learn highly abstract concepts, or whatever their field generalises to.

Most jobs don’t hinge on one highly specific academic outcome.

reply
hirako2000 2 hours ago
I was reading in the article that what matters is the process that leads to the (typically useless) result: what the people get out of it.

Once I realized that the white-on-black contrast was hurting my eyes, I decided to stop, as I didn't want to see stripes for too long when looking away.

Some activity has outcomes that aren't strictly in the results.

reply
stavros 2 hours ago
Yeah, it was saying that what matters is the process of training people to be good scientists, so they can produce other, more useful, results. That's literally what training is, everywhere.

This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool, but with LLMs we seem to have forgotten that line of reasoning entirely.

reply
rglullis 43 minutes ago
> so they can produce other, more useful, results

But to even *know* what is more useful, it is crucial to have walked the walk. Otherwise we will all end up with a bunch of people trying to reinvent the wheel, over and over again, like JavaScript "developers" who keep reinventing frameworks every six months.

> which nobody would buy for any other tool

I don't know about you, but I wasn't allowed to use calculators in my calculus classes precisely to learn the concepts properly. "Calculators are for those who know how to do it by hand" was something I heard a lot from my professors.

reply
hirako2000 2 hours ago
There is an argument to be made that tools that speed up a process whilst keeping acuity intact are legitimate.

LLMs, the way they typically get used, are solely about saving time by handing over nearly the entire process. In that sense acuity can't remain intact, let alone improve over time.

reply
stavros 57 minutes ago
So?
reply
hirako2000 18 minutes ago
Your previous comment reads as if LLMs get some unjustified different treatment.

Do you agree the different treatment is justified? (Many do not.) Or are you asking: so what if acuity is diminished, so long as an LLM does the job equally well?

reply
defrost 2 hours ago
> This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool,

This is false. There absolutely are people who fall back on older tools when fancy tools fail. You will find such people in the military, in emergency services, in agriculture, generally in areas where getting the job done matters.

Perhaps you're unfamiliar.

The other week I finished putting holes in fence posts with a bit and brace, as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.

Ukrainians, and others, need to fall back on strategies that work without GPS, and have done so for a few years now.

etc.

reply
thijson 21 minutes ago
In the '80s the Americans thought the Russians were backward for still using vacuum tubes in their military vehicles. Later they found out the tubes were being used because they are more tolerant of EMP from a nuclear blast.
reply
nathan_compton 2 hours ago
People say this in a very large number of other contexts. Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand. This pattern is very common.
reply
mzhaase 2 hours ago
Why should we only do things that produce some sort of value? Do we really want to reduce all of human existence to increasing profits?
reply
stavros 56 minutes ago
You said "value" and "profit". I said "useful".
reply
nemo44x 54 minutes ago
What’s a better method for determining how to utilize and distribute resources? To determine where energy should be used and where it should be moved from?
reply
sega_sai 29 minutes ago
You missed the argument. When we are talking about faculty, yes, their result is the only thing that matters, so if it was produced more quickly with an LLM, that's great. But when we are talking about the student, there is a drastic difference between the with-LLM and without-LLM cases: in the latter, they gain a much better understanding. And that matters in a system where we are educating future physicists.
reply
gedy 4 minutes ago
We aren't talking pocket calculators here; LLMs are hugely expensive things made and controlled behind costly commercial subscriptions, and likely in the middle of a huge investment bubble. So we all need to be careful about "gee, we don't need that skill or person anymore", etc.
reply
nathan_compton 2 hours ago
Is that what "academia" wants? Last I checked, "academia" is not a dude I can call and ask for an opinion, or for a definition of what it's interested in.

I will make an explicit, plausible, counterpoint: academia wants to produce understanding. This is, more or less, by definition, not possible with an AI directly (obviously AIs can be useful in the process).

Take GR as an example. The vast majority of the dynamical character of the theory is inaccessible to human beings. We study it because we wanted to understand it, and only secondarily because we had a concrete "result" we were trying to "achieve."

A person who cares only about results and not about understanding is barely a person, in my opinion.

reply
selimthegrim 28 minutes ago
Completely missed the point of the blog post, which was that the product is the scientist, not the result.
reply
scrpgil 7 minutes ago
As a CTO, I see the Alice/Bob split play out in hiring every month. The uncomfortable part isn't that Bob exists — it's that I can't tell Alice from Bob in a 60-minute interview anymore.

But I think the article underestimates Bob. Bob isn't static. Bob-with-agents who ships for five years will eventually develop intuition — just from a different path. Not from reading papers, but from pattern-matching across hundreds of agent-assisted outcomes. It's a worse path for deep understanding, but it's not zero.

The real danger isn't Bob. It's the organization that can't tell the difference and stops investing in creating Alices.

reply
oncallthrow 2 hours ago
I think this article is largely, or at least directionally, correct.

I'd draw a comparison to high-level languages and language frameworks. Yes, 99% of the time, if I'm building a web frontend, I can live in React world and not think about anything that is going on under the hood. But, there is 1% of the time where something goes wrong, and I need to understand what is happening underneath the abstraction.

Similarly, I now produce 99% of my code using an agent. However, I still feel the need to thoroughly understand the code, in order to be able to catch the 1% of cases where it introduces a bug or does something suboptimally.

It's possible that in future, LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on. When doing straightforward coding tasks, I think they're already there, but I think they aren't quite at that point when it comes to large distributed systems.

reply
spicyusername 35 minutes ago
So we already have this problem and things are "fine"?
reply
mbbutler 16 minutes ago
In my personal experience, the rate at which Claude Code produces suboptimal Rust is way higher than 1%.
reply
kgwxd 35 minutes ago
> LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.

The problem is, they're nothing like transistors, and never will be. Those are simple: they work or they don't, consistently, in an obvious or easily testable way.

LLMs are more akin to biological things. Complex. Not well understood. Unpredictable behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.

I like working on computers because it minimizes the amount of biological-like things I have to work with.

reply
oncallthrow 19 minutes ago
I suppose transistors is a bad example.

Perhaps a better analogy would be the Linux kernel. It's built by biological humans, and fallible ones at that. And yet, I don't feel the need to learn the intricacies of kernel internals, because it's reliable enough that it's essentially never the kernel's fault when my code doesn't work.

reply
AlexWilkins12 20 minutes ago
Ironically, this article reeks of AI-generated phrases. Lots of "It's not X, it's Y", e.g.:

- "The failure mode isn't malice. It's convenience"

- "You haven't saved time. You've forfeited the experience that the time was supposed to give you."

- "But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding."

And indeed running it through a few AI text detectors, like Pangram (not perfect, by any means, but a useful approximation), returns high probabilities.

It would have felt more honest if the author had included a disclaimer that it was at least partly written with AI, especially given its length and subject matter.

reply
patcon 31 minutes ago
The exciting and interesting thing to me is that we'll probably need to engage "chaos engineering" principles and encode intentional fallibility into these agents, to keep us (and them) good collaborators, and specifically on our toes, to help all minds stay alert and plastic.

If that comes to pass, we'll be rediscovering the same principles that biological evolution stumbled upon: the benefits of the imperfect "branch" or "successive limited comparison" approach of agentic behaviour, which perhaps favours heuristics (that clearly sometimes fail), interaction between imperfect collaborators with non-overlapping biases, etc etc

https://contraptions.venkateshrao.com/p/massed-muddler-intel...

> Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely.

reply
throwaway132448 22 minutes ago
The flip side I don’t see mentioned very often is that having a product where you know how the code works becomes its own competitive advantage. Better reliability, faster fixes and iteration, deeper and broader capabilities that allow you to be disruptive while everything else is being built towards the mean, etc etc. Maybe we’ve not been in this new age for long enough for that to be reflected in people’s purchasing criteria, but I’m quite looking forward to fending off AI-built competitors with this edge.
reply
grafelic 15 minutes ago
"He shipped a product, but he didn't learn a trade." I think is the key quote from this article, and encapsulates the core problem with AI agents in any skill-based field.
reply
mikeaskew4 8 minutes ago
“The world still needs empirical thinkers, Danny.”

- Caddyshack

reply
jerkstate 5 minutes ago
Nobody actually understands what they're doing. When you're learning electronics, you first learn about the "lumped element model", which allows you to simplify Maxwell's equations. I think it is a mistake to think that solving problems with a programming language is "knowing how to do things" - at this point, we've already abstracted assembly language -> machine instructions -> logic gates and buses -> transistors and electronic storage -> lumped matter -> quantum mechanics -> ???? - so I simply don't buy the argument that things will suddenly fall apart by abstracting one level higher. The trick is to get this new level of abstraction to work predictably, which admittedly it doesn't yet, but look how far it's come in a short couple of years.

This article first says that you give juniors well-defined projects and let them take a long time because the process is the product. Then it goes on to lament that they will no longer have to debug Python code, as if debugging Python code is the point of it all. The thing that LLMs can't yet do is pick a high-level direction for a novel problem and iterate until the correct solution is reached. They absolutely can and do iterate until a solution is reached, but it's not necessarily correct. Previously, guiding the direction was the job of the professor. Now, in a smaller sense, the grad student needs to be guiding the direction and validating the details, rather than implementing the details with the professor guiding the direction. This is an improvement - everybody levels up.

I also disagree with the premise that the primary product of astrophysics is scientists. Like any advanced science, it requires a lot of scientists to make the breakthroughs that trickle down into technology that improves everyday life; without them, those breakthroughs would be impossible. Gauss discovered the normal distribution while trying to understand the measurement error of his telescope. Without general relativity we would not have GPS or precision timekeeping. Astrophysics uncovers the rules that will allow us to travel between planets. Understanding the composition and behavior of stars informs nuclear physics, reactor design, and solar panel design. The computation systems used by advanced science prototyped many commercial advances in computing (HPC, cluster computing, AI itself).

So not only are we developing the tools to improve our understanding of the universe faster, we're leveling everybody up. Students will take on the role of professors (badly at first, but are professors good at first? Probably not; they need time to learn under the guidance of other faculty). Professors will take on the role of directors. Everybody's scope will widen because the tiny details will be handled by AI, but the big picture will still be in the domain of humans.

reply
efields 58 minutes ago
I literally don't know how compilers work. I've written code for apps that are still in production 10 years later.
reply
layer8 16 minutes ago
You don’t need to understand compilers because the code they compile, when valid according to the language specification, is supposed to work as written, and virtually always does. There is no language specification and no “as written” with LLMs.
reply
Herbstluft 44 minutes ago
Are you working on compilers? If not, it seems you did not understand what is being talked about here.

Do you lack fundamental understanding of those apps you built that are still in use? Did you lack understanding of their workings when you built them?

reply
bakugo 49 minutes ago
Have you written a compiler, though?
reply
inatreecrown2 33 minutes ago
Using AI to solve a task does not give you experience in solving the task, it gives you experience in using AI.
reply
squirrel 15 minutes ago
The article is well-written and makes cogent points about why we need "centaurs", human/computer hybrids who combine silicon- and carbon-based reasoning.

Interestingly, the text has a number of AI-like writing artifacts, e.g. frequent use of the pattern "The problem isn't X. The problem is Y." Unlike much of the typical slop I see, I read it to the end and found it insightful.

I think that's because the author worked with an AI exactly as he advocates, providing the deep thinking and leaving some of the routine exposition to the bot.

reply
ghc 2 hours ago
As straw men go, this is an attractive one, but...

When I was fresh out of undergrad, joining a new lab, I followed a similar arc. I made mistakes, I took the wrong lessons from grad student code that came before mine, I used the wrong plotting libraries, I hijacked Python's module import logic to embed a new language in its bytecode. These were all avoidable mistakes, and I didn't learn anything except that I should have asked for help. Others in my lab, who were less self-reliant, asked for and got help avoiding the kinds of mistakes I confidently made.

With 15 more years of experience, I can see in hindsight that I should have asked for help more frequently because I spent more time learning what not to do than learning the right things.

If I had Claude Code, would I have made the same mistakes? Absolutely not! Would I have asked it to summarize research papers for me and to essentially think for me? Absolutely not!

My mother, an English professor, levies similar accusations about the students of today, and how they let models think for them. It's genuinely concerning, of course, but I can't help but think that this phenomenon occurs because learning institutions have not adjusted to the new technology.

If the goal is to produce scientists, PIs are going to need to stop complaining and figure out how to produce scientists who learn the skills that I did even when LLMs are available. Frankly I don't see how LLMs are different from asking other lab members for help, except that LLMs have infinite patience and don't have their own research that needs doing.

reply
jacquesm 58 minutes ago
AI does not give you knowledge. It magnifies both intelligence and stupidity, with zero bias towards either. If you are of above-average intelligence, you may be able to do a little more than before, assuming you were trained before AI came along. And if you are not so smart, you will be able to make larger messes.

The problem, and I think the article indirectly points at this, is that the next generation won't learn to think for themselves first. So on average they will end up on the 'B' track rather than developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything, because the AI can do that for them, or so they believe. They don't see that if you don't learn to think about smaller problems, the larger ones will be completely out of reach.

reply
skydhash 14 minutes ago
Students are given student-level problems not because someone wants the result, but because solving them teaches how problem-solving works. Solving those easy problems with an LLM does not help anyone.
reply
robot-wrangler 19 minutes ago
Another threat is that you can find tons of papers pointing out how neural AI still struggles with simple logical negation. Who cares, right? We use tools for symbolics, yada yada. Except what's really the plan? Are we going to attempt parallel formalized representations of every piece of input context just to flag the difference between "please DON'T delete my files" and "please DO"? This is all super boring though, and nothing bad happened lately, so back to perusing the latest AGI benchmarks...
reply
djoldman 2 hours ago
These themes have been going around and around for a while.

One thing I've seen asserted:

> What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days... The equations seemed right... Then Schwartz read it, and it was wrong... It faked results. It invented coefficients...

The argument that AI output isn't good enough is somewhat in opposition to the idea that we need to worry about folks losing or never gaining skills/knowledge.

There are ways around this:

"It's only evident to experts and there won't be experts if students don't learn"

But at the end of the day, in the long run, the ideas and results that last are the ones that work. By work, I mean ones that strictly improve outcomes (all outputs are the same, with at least one better). This is because, with respect to technological progress, humans are pretty well modeled as a slightly-better-than-random search for optimal decisions, where we tend not to go backwards permanently.

All that to say that, at times, AI is one of the many things that we've come up with that is wrong. At times, it's right. If it helps on aggregate, we'll probably adopt it permanently, until we find something strictly better.

reply
jacquesm 54 minutes ago
AI is extremely good at producing well formatted bullshit. You need to be constantly on guard against stuff that sounds and looks right but ultimately is just noise. You can also waste a ton of time on this. Especially OpenAI's offering shows poorly in this respect: it will keep circling back to its own comfort zone to show off some piece of code or some concept that it knows a lot about whilst avoiding the actual question. It's really good at jumping to the wrong conclusions (and making it sound like some kind of profound insight). But the few times that it is on the money make up for all of that noise. Even so, I could do without the wasted time and endless back and forths correcting the same stuff over and over again, it is extremely tedious.
reply
tom-blk 35 minutes ago
Strongly agree; we see this almost everywhere now.
reply
sam_lowry_ 2 hours ago
See also Profession by Isaac Asimov [0] and his short story The Feeling of Power [1]. Both are social dramas about societies that went far down the path of ignorance.

[0] http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...

[1] https://s3.us-west-1.wasabisys.com/luminist/EB/A/Asimov%20-%...

reply
simianwords 33 minutes ago
> Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

The author is a bit naive here:

1. Society only progresses when people are specialised and can delegate their thinking

2. Specialisation has been happening for millennia. Agriculture allowed people to become specialised due to the abundance of food

3. We accept delegation of thinking in every part of life. A manager delegates thinking to their subordinates. I delegate some thinking to my accountant

4. People will eventually get the hang of using AI to do the optimal amount of delegation, such that they still retain what is necessary and delegate what is not. People who don't do this optimally will get outcompeted

The author just focuses on some local problems, like skill atrophy, but does not see the larger picture and how this specific pattern has repeated throughout humanity's history.

reply
zajio1am 5 minutes ago
A related quote from A. N. Whitehead:

> It is a profoundly erroneous truism ... that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

reply
jeremie_strand 2 hours ago
[dead]
reply
huflungdung 54 minutes ago
[dead]
reply
garn810 3 hours ago
Academia has always been full of narcissists chasing status with flashy papers and half-baked "brilliant" ideas (70%? maybe). LLMs just made the whole game trivial: now literally anyone can slap together something that sounds deep without ever doing the actual grind. LLMs are just speeding up the process; it's only a matter of time before this exposes what the entire system has been all along.
reply