"Token anxiety", a slot machine by any other name
246 points by presbyterian 2 days ago | 221 comments

HolyLampshade 11 hours ago
I know I'm running a bit late to the party here, but maybe someone can provide some color that I (on the slightly older end of the spectrum when it comes to this) don't fully understand.

When people talk about leaving their agents to run overnight, what are those agents actually doing? The limited utility I've had using agent-supported software development requires a significant amount of hand holding, maybe because I'm in an industry with limited externally available examples to build am model off of (though all of the specifications are public, I've yet to see an agent build an appropriate implementation).

So it's much more transactional...I ask, it does something (usually within seconds), I correct, it iterates again...

What sort of tasks are people putting these agents to? How are people running 'multiple' of these agents? What am I missing here?

reply
jascha_eng 10 hours ago
My impression so far is that the parallel agent story is a fabrication of "ai influencers" and the labs themselves.

I might run 3-4 claude sessions because that's the only way to have "multiple chats" to e.g. ask unrelated things. Occasionally a task takes long enough to keep multiple sessions busy, but that's rather rare and if it happens its because the agent runs a long running task like the whole test suite.

The story of running multiple agents to build full features in parallel... doesn't really add up in my experience. It kinda works for a bit if you have a green field project where the complexity is still extremely low.

However once you have a feature interaction matrix that is larger than say 3x3 you have to hand hold the system to not make stupid assumptions. Or you prompt very precisely but this also takes time and prevents you from ever running into the parallel situation.

The feature interaction matrix size is my current proxy "pseudo-metric" for when agentic coding might work well and at which abstraction level.

reply
nerdsniper 9 hours ago
This is exactly my experience as well. The feature interaction matrix is growing as models get better, and I tend to build "prompt library components" for each project which saves time on "you prompt very precisely but this also takes time".

But so far that doesn't change the reality - I can't find any opportunities to let an agent run for more than 30 minutes at best, and parallel agents just seem to confuse each other.

reply
jascha_eng 8 hours ago
idk I haven't really hit the point with any llm that it comes up with useful abstractions on its own unless those abstractions have been in the training data.

E.g. imagine building a google docs clone where you have different formatting options. Claude would happily build bold and italic for you but if afterwards you add headings, tables, colors, font size, etc. It would just produce a huge if/else tree instead of building a somewhat sensible text formatting abstraction.

Tbf I wouldn't actually know how to build this myself but e.g. bold and italic work together but if you add a "code block" thing that should probably not work with font color and putting a table inside that also makes no sense.

Claude might get some of these interactions intuitively correct but at some point you'll have so many NxM interactions between features that it just forgets half of them and then the experience becomes sloppy and crashes on all edge cases.

The point of good software engineering is to simplify the matrix to something that you can keep arguing about e.g. classify formatting options into categories and then you only have to argue and think about how those categories interact.

This is the kind of thing LLMs just aren't really good at if the problem space isn't in the training data already => doing anything remotely novel. And I haven't seen it improve at this either over the releases.

Maybe this kind of engineering will eventually be dead because claude can just brute force the infinitely growing if/else tree and keep it all in context but that does not seem very likely to me. So far we still have to think of these abstraction levels ourselves and then for the sub-problems I can apply agentic coding again.

Just need to make sure that Claude doesn't breach these abstractions, which it also happily does to take short cuts btw.

reply
nerdsniper 6 hours ago
FWIW I’ve used LLMs to invent new things. Not super groundbreaking fundamental research, but they were able to use physics to design a device that didn’t exist yet, from first principles.
reply
_345 4 hours ago
Would you share a bit more?
reply
mrguyorama 5 hours ago
Pics or it didn't happen

More seriously, what in the world "novel" physics device did you invent?

reply
nerdsniper 4 hours ago
I didn’t say “novel physics” or “physics device”.
reply
mrguyorama 3 hours ago
Okay, so rereading it as pedantically as you seem to insist:

You "invented" ("Designed") a "device" "using physics", and nobody has designed that "device" before, making it novel.

"From first principles" is a fun statement because people like Aristotle also thought they were reasoning from "first principles" and look how far it got them. The entire point of science is that "first principles" are actually not something we have access to, so we should instead prioritize what literally happens and can be observed. It's not possible as far as we know to trick mother nature into giving us the answer we want rather than the real answer.

Did you ever actually build or test this "device"?

reply
rubenflamshep 8 hours ago
Same. The only situation when I've consistently gotten a system to run for 20+ minutes was a data-analysis with tight guardrails and explicit multi-phase operations.

Outside that I'm juggling 2-3 sessions at most with nothing staying unattended for more than 10 minutes.

reply
jonahrd 10 hours ago
I might be able to shine a little light on this.

I came from embedded, where I wasn't able to use agents very effectively for anything other than quick round trip iterative stuff. They were still really useful, but I definitely could never envision just letting an agent run unattended.

But I recently switched domains into vaguely "fullstack web" using very popular frameworks. If I spend a good portion of my day going back and forth with an agent, working on a detailed implementation plan that spawns multiple agents, there is seemingly no limit* to the scope of the work they are able to accurately produce. This is because I'm reading through the whole plan and checking for silly gotchyas and larger implementation mistakes before I let them run. It's also great because I can see how the work can be parallelized at certain parts, but blocked at others, and see how much work can be parallelized at once.

Once I'm ready, I can usually let it start with not even the latest models, because the actual implementation is so straightforwardly prompted that it gets it close to perfectly right. I usually sit next to it and validate it while it's working, but I could easily imagine someone letting it run overnight to wake up to a fresh PR in the morning.

Don't get me wrong, it's still more work that just "vibing" the whole thing, but it's _so_ much more efficient than actually implementing it, especially when it's a lot of repetitive patterns and boilerplate.

* I think the limit is how much I can actually keep in my brain and spec out in a well thought out manner that doesn't let any corner cases through, which is still a limit, but not necessarily one coming from the agents. Once I have one document implemented, I can move on to the next with my own fresh mental context which makes it a lot easier to work.

reply
QuadrupleA 9 hours ago
The amount of boilerplate people talk about seems like the fault of these big modern frameworks honestly. A good system design shouldn't HAVE so much boilerplate. Think people would be better off simplifying and eliminating it deterministically before reaching for the LLM slot machine.
reply
jonahrd 7 hours ago
I'm not so sure I agree. To me it's somewhat magical that I can write even this amount of code and have this stuff just magically work on pretty much every platform via docker, the web platform, etc. Maybe this again is me having started with embedded, but I am blown away at the ratio of actual code to portability we currently have.
reply
ASalazarMX 7 hours ago
> To me it's somewhat magical that I can write even this amount of code

It's because you're not writing it, you adopted the role of Project Manager or Chief Engineer. How much cognitive debt are you accumulating?

reply
rubenflamshep 8 hours ago
Interesting. What would you say is your ratio of "sit down and make the implementation" time to "multi-agent system builds the thing" time?
reply
LeonidBugaev 9 hours ago
I had a few useful examples of this. In order to make it work you need to define your quality gates, and rather complex spec. I personally use https://github.com/probelabs/visor for creating the gates. It can be a code-review gate, or how well implementation align with the spec and etc. And basically it makes agent loop until it pass it. One of the tips, especially when using Claude Code, is explictly ask to create a "tasks", and also use subagents. For example I want to validate and re-structure all my documentation - I would ask it to create a task to research state of my docs, then after create a task per specific detail, then create a task to re-validate quality after it has finished task. You can also play around with the gates with a more simple tooling, for example https://probelabs.com/vow/

Hope it helps!

reply
thwarted 6 hours ago
> One of the tips, especially when using Claude Code, is explictly ask to create a "tasks", and also use subagents. For example I want to validate and re-structure all my documentation - I would ask it to create a task to research state of my docs, then after create a task per specific detail, then create a task to re-validate quality after it has finished task.

This is definitely a way to keep those who wear Program and Project manager hats busy.

reply
HolyLampshade 6 hours ago
That is interesting. Never considered trying to throw one or two into a loop together to try to keep it honest. Appreciate the Visor recommendation, I'll give it a look and see if I can make this all 'make sense'.
reply
meta_1995 10 hours ago
Not a dev but doing some side projects.

As i build with agents, i frequently run into new issues that arent in scope for the task im on and would cause context drift. I have the agent create a github issue with a short problem description and keep going on the current task. In another terminal i spin up a new agent and just tell it “investigate GH issue 123” and it starts diving in, finds the root cause, and proposes a fix. Depending on what parts of the code the issue fix touches and what other agents ive got going, i can have 3-4 agents more or less independently closing out issues/creating PRs for review at a time. The agents log their work in a work log- what they did, what worked what didnt, problems they encountered using tools - and about once a day i have an agent review the worklog and update the AGENTS.md with lessons learned.

reply
Leynos 9 hours ago
With 5.3 Codex, the execplans skill and a well specified implementation task, you can get a good couple of hours work in a single turn. That's already in the scope of "set it up before bed and review it in the morning".

If you have a loop set up, e.g., using OpenClaw or a Ralph loop, you can stretch that out further.

I would suggest that when you get to that point really, you want some kind of adversarial system set up with code reviews (e.g., provided by CodeRabbit or Sourcery) and automation to feed that back into the coding agent.

reply
dudeinhawaii 5 hours ago
If you visualize it as AI Agents throwing a rope to wrangle a problem, and then visualize a dozen of these agents throwing their ropes around a room, and at each other -- very quickly you'll also visualize the mess of code that a collections of agents creates without oversight. It might even run, some might say that's the only true point but... at what cost in code complexity, performance waste, cascading bugs, etc.

Is it possible? Yes, I've had success with having a model output a 100 step plan that tried to deconflict among multiple agents. Without re-creating 'Gas town', I could not get the agents to operate without stepping on toes. With _me_ as the grand coordinator, I was able to execute and replicate a SaaS product (at a surface level) in about 24hrs. Output was around 100k lines of code (without counting css/js).

Who can prove that it works correctly though? An AI enthusiasts will say "as long as you've got test coverage blah blah blah". Those who have worked large scale products know that tests passing is basically "bare minimum". So you smoke test it, hope you've got all the paths, and toss it up and try to collect money from people? I don't know. If _this_ is the future, this will collapse under the weight of garbage code, security and privacy breaches, and who knows what else.

reply
mikemarsh 10 hours ago
> what are those agents actually doing?

Providing material for attention-grabbing headlines and blog posts, primarily. Can't (in good conscience, at least) claim you had an agent running all night if you didn't actually run an agent all night.

reply
mmasu 7 hours ago
I will give you an example I heard from an acquaintance yesterday - this person is very smart but not strictly “technical”.

He is building a trading automation for personal use. In his design he gets a message on whatsapp/signal/telegram and approves/rejects the trade suggestion.

To define specifications for this, he defined multiple agents (a quant, a data scientist, a principal engineer, and trading experts - “warren buffett”, “ray dalio”) and let the agents run until they reached a consensus on what the design should be. He said this ran for a couple of hours (so not strictly overnight) after he went to sleep; in the morning he read and amended the output (10s of pages equivalent) and let it build.

This is not a strictly-defined coding task, but there are now many examples of emerging patterns where you have multiple agents supporting each other, running tasks in parallel, correcting/criticising/challenging each other, until some definition of “done” has been satisfied.

That said, personally my usage is much like yours - I run agents one at a time and closely monitor output before proceeding, to avoid finding a clusterfuck of bad choices built on top of each other. So you are not alone my friend :-)

reply
candiddevmike 9 hours ago
Maybe it's the programmer equivalent of rolling coal.
reply
cootsnuck 8 hours ago
Fellow Midwesterner?
reply
cbg0 9 hours ago
This is very dependent on what kind of work you're asking the agent to do. For software, I've had quite a bit of success providing detailed API specifications and asking an LLM to build a client library for that. You can leave it running unattended as long as it knows what it's supposed to build and it won't need a lot of correction since you're providing the routes, returned statuses and possible error messages.

Do some people just create complete SaaSlop apps with it overnight? Of course, just put together a plan (by asking the LLM to write the plan) with everything you want the app to do and let it run.

reply
flerchin 10 hours ago
This is my experience of it too. Perhaps if it was chunking through a large task like upgrading all of our repos to the latest engine supported by our cloud provider, I could leave it overnight. Even then it would just result in a large daylight backlog of "not quite right" to review and redo.
reply
HolyLampshade 9 hours ago
I think that's the issue I have with using these tools so far (definitely professionally, but even in pet projects for embedded systems). The mental load of having to go back through and make sure all of the lines of code do what the agent claims they do, even with tests, is significantly more than it would take to learn the implementation myself.

I can see the utility in creating very simple web-based tools where there's a monstrous wealth of public resources to build a model off of, but even the most recent models provided by Anthro, OpenAI, or MSFT seem prone to not quite perfection. And every time I find an error I'm left wondering what other bugs I'm not catching.

reply
flerchin 6 hours ago
What I tell my kids is: You know how when you ask AI about something you know very well, how its answers are always somewhat wrong? It's like that for things you do not know very well too.
reply
mrnotcrazy 7 hours ago
I have agents run at night to work through complicated TTRPG campaigns. For example I have a script that runs all night simulating NPCs before a session. The NPCs have character sheets + motivations and the LLMs do one prompt per NPC in stages so combat can happen after social interactions. IF you run enough of these and make the prompts well written you can save a lot of time. You can't like... simulate the start of a campaign and then jump in. Its more like you know there is a big event, you already have characters, you can throw them in a folder to see how things would cook all else being equal and then use that to riff off of when you actually write your notes.

I think of my agents like golems from disc world, they are defined by their script. Adding texture to them improves the results so I usually keep a running tally of what they have worked on and add that to the header. They are a prompt in a folder that a script loops over and sends to gemeni(spawning an agent and moving to the next golem script)

I also was curious to see if it could be used it for developing some small games, whenever I would run into a problem I couldn't be bothered to solve or needed a variety of something I would let a few llms work on it so in the morning I had something to bounce off. I had pretty good success with this for RTS games and shooting games where variety is something well documented and creativity is allowed. I imagine there could be a use here, I've been calling it dredging cause I imagine myself casting a net down into the slop to find valuables.

I did have an idea where all my sites and UI would be checked against some UI heuristic like Oregon State's inclusivity heuristic but results have been mixed so far. The initial reports are fine, the implementation plans are ok but it seems like the loop of examine, fix, examine... has too much drift? That does seem solvable but I have a concern that this is like two lines that never touch but get closer as you approach infinity.

There is some usefulness in running these guys all night but I'm still figuring out when its useful and when its a waste of resources.

reply
BatteryMountain 8 hours ago
Spin up a mid sized linux vm (or any machine with 8 or 12 cores will do with at least 16GB RAM with nmve). Add 10 users. Install claude 10 times (one per user). Clone repo 10 times (one per user). Have a centralized place to get tasks from (db, trello, txt, etc) - this is the memory. Have a cron wake up every 10 minutes and call your script. Your script calls claude in non-interactive mode + auto accept. It grabs a new task, takes a crack at it and create a pull request. That is 6 tasks per hour per user, times 12 hours. Go from there and refine your harnesses/skills/scripts that claude's can use.

In my case, I built a small api that claude can call to get tasks. I update the tasks on my phone.

The assumption is that you have a semi-well structured codebase already (ours is 1M LOC C#). You have to use languages with strong typing + strict compiler.You have to force claude to frequently build the code (hence the cpu cores + ram + nmve requirement).

If you have multiple machines doing work, have single one as the master and give claude ssh to the others and it can configure them and invoke work on them directly. The usecase for this is when you have a beefy proxmox server with many smaller containers (think .net + debian). Give the main server access to all the "worker servers". Let claude document this infrastructure too and the different roles each machine plays. Soon you will have a small ranch of AI's doing different things, on different branches, making pull requests and putting feedback back into task manager for me to upvote or downvote.

Just try it. It works. Your mind will be blown what is possible.

reply
hattmall 7 hours ago
So is this something you do with a monthly subscription or is this using API tokens?
reply
BatteryMountain 6 hours ago
At first used Claude Max x5, but we are using the api now.

We only give it very targeted tasks, no broad strokes. We have a couple of "prompt" templates, which we select when creating tasks. The new opus model one shots about 90% of tasks we throw at it. Getting a ton of value from diagnostic tasks, it can troubleshoot really quickly (by ingesting logs, exceptions, some db rows).

reply
YetAnotherNick 10 hours ago
There has only been one instance of coding where I let the agent run for like 7 hours. To generate playwright test. Once the scaffolding is done, it is just matter of writing test for each of the component. But yeah even for that I didn't just fire and forget.
reply
mikrotikker 10 hours ago
I wrote a program to classify thousands of images but that was using a model running on my gaming PC. Took about 3 days to classify them all. Only cost me the power right?
reply
irishcoffee 8 hours ago
Power, gaming rig, internet, somewhere to store the rig, probably pay property taxes too.

You can draw the line wherever you want. :) Personally, I wish I'd built a new gaming rig a year ago so I could mess with local models and pay all these same costs.

reply
wiseowise 10 hours ago
> what are those agents actually doing

Generate material for yet another retarded twitter hype post.

reply
ath3nd 5 hours ago
[dead]
reply
ctoth 2 days ago
The gambling analogy completely falls apart on inspection. Slot machines have variable reward schedules by design — every element is optimized to maximize time on device. Social media optimizes for engagement, and compulsive behavior is the predictable output. The optimization target produces the addiction.

What's Anthropic's optimization target??? Getting you the right answer as fast as possible! The variability in agent output is working against that goal, not serving it. If they could make it right 100% of the time, they would — and the "slot machine" nonsense disappears entirely. On capped plans, both you and Anthropic are incentivized to minimize interactions, not maximize them. That's the opposite of a casino. It's ... alignment (of a sort)

An unreliable tool that the manufacturer is actively trying to make more reliable is not a slot machine. It's a tool that isn't finished yet.

I've been building a space simulator for longer than some of the people diagnosing me have been programming. I built things obsessively before LLMs. I'll build things obsessively after.

The pathologizing of "person who likes making things chooses making things over Netflix" requires you to treat passive consumption as the healthy baseline, which is obviously a claim nobody in this conversation is bothering to defend.

reply
crystal_revenge 17 hours ago
> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

What makes you believe this? The current trend in all major providers seem to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

> Slot machines have variable reward schedules by design

LLMs by all major providers are optimized used RLHF where they are optimized in ways we don't entirely understand to keep you engaged.

These are incredibly naive assumptions. Anthropic/OpenAI/etc don't care if you get your "answer solved quickly", they care that you keep paying and that all their numbers go up. They aren't doing this as a favor to you and there's no reason to believe that these systems are optimized in your interest.

> I built things obsessively before LLMs. I'll build things obsessively after.

The core argument of the "gambling hypothesis" is that many of these people aren't really building things. To be clear, I certainly don't know if this is true of you in particular, it probably isn't. But just because this doesn't apply to you specifically doesn't mean it's not a solid argument.

reply
co_king_5 12 hours ago
> The current trend in all major providers seem to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

Well stated

reply
tokioyoyo 16 hours ago
> What makes you believe this?

Simply, cut-throat competition. Given multiple nations are funding different AI-labs, quality of output and speed are one of the most important things.

reply
philipwhiuk 10 hours ago
Dating apps also have cut-throat competition and none of them are optimised for minimising the time you spend on the app.
reply
dymk 9 hours ago
They don’t, they’re all owned by Match group
reply
_345 3 hours ago
~90% of them are owned by Match
reply
forgetfreeman 14 hours ago
sigh We're doing this lie again? Quality of Outcome is not, has never been, and if the last 40 years are anything to go on will never be a core or even tangential goal. Dudes are trying to make the stock numbers go up and get paid. That's it. That's all it ever is.
reply
wordpad 10 hours ago
You're just being pedantic and cynical.

Goal of any business in principle is profit, by your terms all of them are misaligned.

Matter of fact is that customers are receiving value and the value has been a good proxy for which company will grow to be successful and which will fail.

reply
piperswe 9 hours ago
I mean, yeah. All businesses are misaligned, unless a fluke aligns the profit motive with the consumers for a brief period.
reply
forgetfreeman 9 hours ago
I'm being neither pedantic nor cynical. Do you need a refresher on value proposition vs actual outcomes on the last few decades of breathlessly hyped tech bubbles? Executive summary: the portions of tech industry that attract the most investment consistently produce the worst outcomes, the more cash the shittier the result. It's also worth noting that "value" is defined as anything you can manipulate someone to pay for.
reply
co_king_5 8 hours ago
When he says:

> You're just being pedantic and cynical.

What he means is that your point does not align with his narrow world view, and he's labelling you as a pedant and a cynic to justify writing off your opinion altogether.

It's a projection of his fragile world view. Don't take it personally.

reply
stoneforger 11 hours ago
Hey man people either get it or they don't. We're doomed.
reply
delusional 16 hours ago
How is nation-states funding private corporations "cut-throat competition"?
reply
tokioyoyo 14 hours ago
Ok, to be very honest I wrote that in the middle of having a couple of drinks. I guess, what I mean is, countries are funding AI labs because it can turn into a “winner-takes-it-all” competition. Unless the country starts blocking the leading providers.

Private companies will turn towards the best, fastest, cheapest (or some average of them). Country borders don’t really matter. All labs are fighting to get the best thing out in the public for that reason, because winning comes with money, status, prestige, and actually changing the world. This kind of incentives are rare.

reply
irishcoffee 8 hours ago
> countries are funding AI labs because it can turn into a “winner-takes-it-all” competition.

Winner takes what exactly? They can rip off react apps quicker than everyone else? How terrifying.

reply
tokioyoyo 43 minutes ago
Like I understand this commentary, but it’s so detached from reality. My dad in his 70s is writing Excel macros, even though he never touched that in his life. There are a ton of cases like this, but people can’t see reality out of their domains.
reply
enraged_camel 14 hours ago
What does this even mean? Are you disputing the fact that AI labs are competing with each other because they are funded by nation-states?
reply
malfist 11 hours ago
Why do you have to compete if you can just say "but China!" And get billions more dollars from the government
reply
co_king_5 11 hours ago
He's disputing the idea that nationally funded business initiatives are competitive.
reply
psychoslave 14 hours ago
Cut throat competition between nations is usually called war. In war, gathering as much information as possible on everyone is certainly a strategic wanna do. Selling psyops about how much benefits will come for everyone willing to join the one sided industrial dependency also is a thing. Giving significant boost to potentially adversarial actors is not a thing.

That said universe don't obligate us to think the cosmos is all about competition. Cooperation is always possible as a viable path, often with far more long term benefits at scale.

Competition is superfluous self inflict masochism.

reply
d1sxeyes 9 hours ago
There’s a line to be trod between returning the best result immediately, and forcing multiple attempts. Google got caught red-handed reducing search quality to increase ad impressions, no reason to think the AI companies (of which Google is one) will slowly gravitate to the same.
reply
NeutralCrane 8 hours ago
My (possibly dated) understanding is that OpenAI/Anthropic are charging less than it costs right now to run inference. They are losing money while they build the market.

Assuming that is still true, then they absolutely have an incentive to keep your tokens/requests to the absolute minimum required to solve your problem and wow you.

reply
hnbad 16 hours ago
> The current trend in all major providers seem to be: get you to spin up as many agents as possible so that you can get billed more and their number of requests goes up.

I was surprised when I saw that Cursor added a feature to set the number of agents for a given prompt. I figured it might be a performance thing - fan out complex tasks across multiple agents that can work on the problem in parallel and get a combined solution. I was extremely disappointed when I realized it's just "repeat the same prompt to N separate agents, let each one take a shot and then pick a winner". Especially when some tasks can run for several minutes, rapidly burning through millions of tokens per agent.

At that point it's just rolling dice. If an agent goes so far off-script that its result is trash, I would expect that to mean I need to rework the instructions and context I gave it, not that I should try the same thing again and hope that entropy fixes it. But editing your prompt offline doesn't burn tokens, so it's not what makes them money.

reply
reasonableklout 15 hours ago
Cursor and others have a subagent feature, which sounds like what you wanted. However, there has to be some decision making around how to divide up a prompt into tasks. This is decided by the (parent) model currently.

The best-of-N feature is a bit like rolling N dice instead of one. But it can be quite useful if you use different models with different strengths and weaknesses (e.g. Claude/GPT-5/Gemini), rather than assigning all to N instances of Claude, for example. I like to use this feature in ask mode when diving into a codebase, to get an explanation a few different ways.

reply
YetAnotherNick 17 hours ago
Bill is unrelated to their cost. If they can produce answer in 1/10th of the token, they can charge 10x more per token, likely even more.
reply
Drakim 16 hours ago
That is simply not true, token price is largely determined by the token price of their rival services (even before their own operational costs). If everybody else charges about $1 per millions of tokens, then they will also charge about $1 per millions of tokens (or slightly above/below) regardless of how many answers per token they can provide.
reply
sixtyj 16 hours ago
This applies when there is a large number of competitors.

Now companies are fighting for the attention of a finite number of customers, so they keep their prices in line with those around them.

I remember when Google started with PPC - because few companies were using it, it cost a fraction of recent prices.

And the other issue to solve is future lack of electricity for land data centers. If everyone wants to use LLM… but data centers capacity is finite due to available power -> token prices can go up. But IMHO devs will find an innovative approach for tokens, less energy demanding… so token prices will probably stay low.

reply
Aerroon 13 hours ago
Opus 4.6 costs about 5-10x of GLM 5.
reply
YetAnotherNick 14 hours ago
It only matters if the rivals have same performance. Opus pricing is 50x Deepseek, and like >100x of small models. It should match rival if the performance is same, and if they can produce model with 10x lower token usage, they can charge 10x.

Gemini increased the same Flash's price by something like 5x IIRC when it got better.

reply
shafyy 13 hours ago
I bet that the actual "performance" of all the top-tier providers is so similar, that branding has bigger impact on if you think Claude or ChatGPT peforms better.
reply
co_king_5 8 hours ago
I don't know if "performance" is relevant in this context, where these "tools" are marketed to non-technical developers (read: "vibe coders") who are by definition unable to verify the quality of the code produced by their LLMs;

I think branding is the entire game.

My illiterate, LLM-addict cousin is convinced that Claude is the answer to the ultimate question of life, the universe, and everything.

Criticisms of the code he (read: Claude) generates are not relevant to him -- Claude is the most intelligent being to ever exist, therefore, to critique its output is a naive waste of breath.

reply
wordpad 9 hours ago
Performance or perception of performance

Potato potato Tomato tomato

reply
lelanthran 13 hours ago
What businesses charge for a product is completely unrelated to what it costs them.

They charge what the market will bear.

If "what the market will bear" is lower than the cost of production then they will stop offering it.

reply
Hasnep 13 hours ago
Companies make a loss on purpose all the time.
reply
lpnam0201 12 hours ago
Not forever. If that's their main business then they will eventually have to profit or they die.
reply
CGMthrowaway 2 days ago
> The gambling analogy completely falls apart on inspection. Slot machines have variable reward schedules by design — every element is optimized to maximize time on device. Social media optimizes for engagement, and compulsive behavior is the predictable output. The optimization target produces the addiction.

Intermittent variable rewards, whether produced by design or merely as a byproduct, will induce compulsive behavior, no matter the optimization target. This applies to Claude

reply
ctoth 2 days ago
Sometimes I will go out and I will plant a pepper plant and take care of it all summer long and obsessively ensure it has precisely the right amount of water and compost and so on... and ... for some reason (maybe I was on vacation and it got over 105 degrees?) I don't get a good crop.

Does this mean I should not garden because it's a variable reward? Of course not.

Sometimes I will go out fishing and I won't catch a damn thing. Should I stop fishing?

Obviously no.

So what's the difference? What is the precise mechanism here that you're pointing at? Because sometimes life is disappointing is a reason to do nothing. And yet.

reply
roblh 2 days ago
It's a not a binary thing, it's a spectrum. There are many elements of uncertainty in every action imaginable. I'm inclined to agree with the other commenter though, the LLM slot machine is absolutely closer on that spectrum to gambling than your example is.

Anthropic's optimization target is getting you to spend tokens, not produce the right answer. It's to produce an answer plausible enough but incomplete enough that you'll continue to spend as many tokens as possible for as long as possible. That's about as close to a slot machine as I can imagine. Slot rewards are designed to keep you interested as long as possible, on the premise that you _might_ get what you want, the jackpot, if you play long enough.

Anthropic's game isn't limited to a single spin either. The small wins (small prompts with well defined answers) are support for the big losses (trying to one shot a whole production grade program).

reply
Aurornis 2 days ago
> Anthropic's optimization target is getting you to spend tokens, not produce the right answer.

The majority of us are using their subscription plans with flat rate fees.

Their incentive is the precise opposite of what you say. The less we use the product, the more they benefit. It's like a gym membership.

I think all of the gambling addiction analogies in this thread are just so strained that I can't take them seriously. Even the basic facts aren't even consistent with the real situation.

reply
samrus 20 hours ago
Thats a bit naive. Anthropic makes way more money if they gey you to use past your plans limit and wonder if you should get the next tier or switch to tokens
reply
mikkupikku 13 hours ago
The price jump between subscription tiers is so high that relatively few people will upgrade instead of waiting a few more hours, and even if somebody does upgrade to the next subscription level, Anthropic still has an incentive to provide satisfactory answers as quickly as possible, to minimize tokens used per subscription, and because there is plenty of competition so any frustrated users are potential lost customers.

I swear this whole conversation is motivated reasoning from AI holdouts who desperately want to believe everybody else is getting scammed by a gambling scheme, that they don't stop and think about the situation rationally. Insofar as Claude is dominant, it's only because Claude works the best. There is meaningful competition in this market, as soon as Anthropic drops the ball they'll be replaced.

reply
co_king_5 8 hours ago
> The price jump between subscription tiers is so high that relatively few people will upgrade instead of waiting a few more hours

Not so for functionally-code-illiterate addicts. If they need to maintain the illusion that they know how to write code (at the level of their coworkers), they will easily pay $200 to keep the machine spinning

reply
RGamma 14 hours ago
And we're still in the expansion phase, so LLM life is actually good... for now.
reply
Aerroon 13 hours ago
It's not going to get worse than now though. Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available. They will likely get cheaper to run over time as well (better hardware).
reply
RGamma 13 hours ago
That's good to hear. I'm not really up-to-date on the open models, but they will become essential, I'm sure.
reply
jplusequalt 10 hours ago
>Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available.

https://apxml.com/models/glm-5

To run GLM-5 you need access to many, many consumer grade GPUs, or multiple data center level GPUs.

>They will likely get cheaper to run over time as well (better hardware).

Unless they magically solve the problem of chip scarcity, I don't see this happening. VRAM is king, and to have more of it you have to pay a lot more. Let's use the RTX 3090 as an example. This card is ~6 years old now, yet it still runs you around $1.3k. If you wanted to run GLM-5 I4 quantization (the lowest listed in the link above) with a 32k context window, you would need *32 RTX 3090's*. That's $42k dollars you'd be spending on obsolete silicon. If you wanted to run this on newer hardware, you could reasonable expect to multiply that number by 2.

reply
RGamma 10 hours ago
I mean it would make sense to see this as a hardware investment into a virtual employee, that you actually control (or rent from someone who makes this possible for you), not as private assistant. Ballparking your numbers, we would need at least an order of magnitude price-performance improvement for that I think.

Also, how much bang for the buck do those 3090s actually give you compared to enterprise-grade products?

reply
8note 2 days ago
im on a subscription though.

they want me to not spend tokens. that way my subscription makes money for them rather than costing them electricity and degrading their GPUs

reply
sweetjuly 2 days ago
Wouldn't that apply only to a truly unlimited subscription? Last I looked all of their subs have a usage limit.

If you're on anything but their highest tier, it's not altogether unreasonable for them to optimize for the greatest number of plan upgrades (people who decide they need more tokens) while minimizing cancellations (people frustrated by the number of tokens they need). On the highest tier, this sort of falls apart but it's a problem easily solved by just adding more tiers :)

Of course, I don't think this is actually what's going on, but it's not irrational.

reply
samrus 20 hours ago
For subscription isers, anthropic makes mkre money if you hit your usage limit and wonder idlf the next plan, or switching to tokens would be better. Especially given the FOMO you probably have from all these posts talking about peoples productivity
reply
lelanthran 13 hours ago
> im on a subscription though.

Understood.

> they want me to not spend tokens.

No, they want you to expand your subscription. Maybe buy 2x subscriptions.

reply
mikkupikku 13 hours ago
He's not going to do that if all Claude can do is waste tokens for hours.
reply
pixl97 2 days ago
> you'll continue to spend as many tokens as possible for as long as possible.

I mean this only works if Anthropic is the only game in town. In your analogy if anyone else builds a casino with a higher payout then they lose the game. With the rate of LLM improvement over the years, this doesn't seem like a stable means of business.

reply
tsimionescu 17 hours ago
While I don't know if this applies to AI usage, but actual gambling addicts most certainly do not shop around for the best possible rewards: they stick more or less to the place they got addicted at initially. Not to mention, there's plenty of people addicted to "casinos" that give 0 monetary rewards, such as Candy Crush or Farmville back in the day and Genshin Impact or other gacha games today.

So, if there's a way to get people addicted to AI conversations, that's an excellent way to make money even if you are way behind your competitors, as addicted buyers are much more loyal that other clients.

reply
mikkupikku 13 hours ago
You're taking the gambling analogy too seriously. People do in fact compare different LLMs and shop around. How gamblers choose casinos is literally irrelevant because this whole analogy is nothing more than a retarded excuse for AI holdouts to feel smug.
reply
krackers 24 hours ago
The timescale is one difference, it's hard to get "sucked in" in the gambling-like mindless state when the timescales are over seasons as opposed to minutes. There's a reason gambling isn't done in a correspondence format.
reply
sph 13 hours ago
In human physiology/psychology as well, the chance of addiction itself is a function of timescale. This is why a nicotine patch is much less addictive than insufflated nicotine (hours to reach peak effect vs seconds), or why addictive software have plenty of sensory experiences attached to every action, to keep the user engaged.
reply
DANmode 20 hours ago
Are you a pepper farmer taking this approach to feed your family,

or a hobbyist gardener?

reply
outofpaper 2 days ago
??? I'm pretty sure you know what the differences are. Go touch grass and tell me it's the same as looking at a plant on a screen.

Dealing with organic and natural systems will, most of the time, have a variable reward. The real issue comes from systems and services designed to only be accessible through intermittent variable rewards.

Oh, and don't confuse Claude's artifacts working most of the time with them actually optimizing to be that way. They're optimizing to ensure token usage. I.E. LLMs have been fine-tuned to default to verbose responses. They are impressive to less experienced developers, often easier to detect certain types of errors (eg. Improper typing), and will make you use more tokens.

reply
squeaky-clean 2 days ago
So gambling is fine as long as I'm doing it outside. Poker in a casino? Bad. Poker in a foresty meadow, good. Got it.
reply
mikkupikku 2 days ago
Basically true tbqh. Poker is maybe the one exception, but you're almost always better off gambling "in the wild" e.g. poker night with your buds instead of playing slots or anything else where "the house" is always winning in the long run. Are your losses still circulating in your local community, or have they been siphoned off by shareholders on the other side of the world? Gambling with friends is just swapping money back and forth, but going to a casino might as well be lighting the money on fire.
reply
Aurornis 2 days ago
> Intermittent variable rewards, whether produced by design or merely as a byproduct, will induce compulsive behavior, no matter the optimization target.

This is an incorrect understanding of intermittent variable reward research.

Claims that it "will induce compulsive behavior" are not consistent with the research. Most rewards in life are variable and intermittent and people aren't out there developing compulsive behavior for everything that fits that description.

There are many counter-examples, such as job searching: It's clearly an intermittent variable reward to apply for a job and get a good offer for it, but it doesn't turn people into compulsive job-applying robots.

The strongest addictions to drugs also have little to do with being intermittent or variable. Someone can take a precisely measured abuse-threshold dose of a drug on a strict schedule and still develop compulsions to take more. Compulsions at a level that eclipse any behavior they'd encounter naturally.

Intermittent variable reward schedules can be a factor in increasing anticipatory behavior and rewards, but claiming that they "will induce compulsive behavior" is a severe misunderstanding of the science.

reply
bonoboTP 2 days ago
And that's only bad if it's illusory or fake. This reaction evolved because it's adaptive. In slot machines the brain is tricked to believe there is some strategy or method to crack and the reward signals make the addict feel there is some kind of progress being made in return to some kind of effort.

The variability in eg soccer kicks or basketball throws is also there but clearly there is a skill element and a potential for progress. Same with many other activities. Coding with LLMs is not so different. There are clearly ways you can do it better and it's not pure randomness.

reply
pixl97 2 days ago
>Intermittent variable rewards,

So you're saying businesses shouldn't hire people either?

reply
scuff3d 2 days ago
Right. A platform who makes money the more you have to use it is definitely optimizing to get you the right answer in as few tokens as possible.

There is absolutely no incentive to do that, for any of these companies. The incentive is to make the model just bad enough you keep coming back, but not so bad you go to a competitor.

We've already seen this play out. We know Google made their search results worse to drive up and revenue. Exact same incentives are at play here, only worse.

reply
ctoth 2 days ago
Please go read how the Anthropic max plan works.

IF I USE LESS TOKENS, ANTHROPIC GETS MORE MONEY! You are blindly pattern matching to "corporation bad!" without actually considering the underlying structure of the situation. I believe there's a phrase for this to do with probabilistic avians?

reply
eaglelamp 21 hours ago
As an investor in Anthropic which pricing strategy would you support? That's the question you need to ask, not what there current pricing strategy in the win the market phase happens to be.
reply
materielle 16 hours ago
It’s sort of surprising how naive developers still are given the countless rug pulls over the past decade or two.

You’re right on the money: the important thing to look at are the incentive structures.

Basically all tech companies from the post-great financial crisis expansion (Google, post Balmer Microsoft, Twitter, Instagram, Airbnb, Uber, etc) started off user-friendly but all eventually converged towards their investment incentive structure.

One big exception is Wikipedia. Not surprising since it has a completely different funding model!

I’m sure Anthropic is super user friendly now, while they are focused on expansion and founding devs still have concentrated policial sway. It will eventually converge on its incentive structures to extract profit for shareholders like all other companies.

reply
scuff3d 21 hours ago
The Max plan has usage limits, and you can buy more... Which is exactly what I'm talking about...

And the incentive is even strong for the lower tiers. They want answers to be just good enough to keep you using it, but bad enough that you're pushed towards buying the higher tier.

reply
mikkupikku 13 hours ago
Have you actually used a max plan? You have to try really damn hard to get close to the max plan usage. I don't think that's something that realistically happens by accident, you have to be deliberately spawning a huge number of subagents or something.
reply
philipwhiuk 10 hours ago
Isn't this why OpenAI just hired the OpenClawd guy? To encourage people to build more agents?
reply
maplethorpe 2 days ago
What if I use zero tokens, as I'm currently doing? Do they get any money then?
reply
otikik 9 hours ago
We are on the pre-enshittification phase.
reply
RamblingCTO 9 hours ago
Thank you! I don't get how so many people want to see dark patterns everywhere. All arguments miss the big counterargument: in a world where you have competitors, even free ones, you can't fuck around. You need to get it working. it's not a slot machine for me. How on earth are people using it? And if it would be I'd take my money elsewhere (kimi for example, openrouter or whatever). It needs to do my work as correct as possible. That's the business they are in. Tech folks talking about economics is so cringe. It's always just "corporations bad". As if they exist in a vacuum.
reply
mh2266 16 hours ago
Anthropic themselves have described CC as a slot machine:

https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a...

(cmd-f "slot machine")

reply
wiseowise 10 hours ago
No, no, you misunderstand! It’s means something else!
reply
mrbungie 2 days ago
> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

Are you totally sure they are not measuring/optimizing engagement metrics? Because at least I can bet OpenAI is doing that with every product they have to offer.

reply
samrus 20 hours ago
> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

That is a generous interpretation. Mighr be correct. But they dont make as much money if you quickly get the right answer. They make more money if you spend as many tokens as possible being on that "maybe next time" hook.

Im not saying theyre actually optimizng for that. But charlie munger said "show me the incentives, and ill show you the outcome"

reply
cedilla 14 hours ago
I know for sure that each and every AI I use wants to write whole novellas in response to every prompt unless I carefully remind it to keep responses short over and over and over again.

This didn't used to be the case, so I assume that it must be intentional.

reply
djaro 12 hours ago
I've noticed this getting a lot worse recently. I just want to ask a simple question, and end up gettig a whole essay in response, an 8-step plan, and 5 follow-up questions. Lately ChatGPT has also been referencing previous conversations constantly, as if to prove that it "knows" me.

"Should I add oregano to brown beans or would that not taste good?"

"Great instinct! Based on your interests in building new apps and learning new languages, you are someone who enjoys discovering new things, and it makes sense that you'd want to experiment with new flavor profiles as well. Your combination of oregano and brown beans is a real fusion of Italian and Mexican food, skillfully synthesizing these two cultures.

Here's a list of 5 random unrelated spices you can also add to brown beans:

Also, if you want to, I can create a list of other recipes that incorporate these oregano. Just say the words "I am hungry" and I will get right back to it!"

Also, random side note, I hate ChatGPT asking me to "say the word" or "repeat the sentence". Just ask me if I want it and then I say yes or no, I am not going to repeat "go oregano!" like some sort of magic keyphrase to unlock a list of recipes.

reply
Aurornis 2 days ago
> The gambling analogy completely falls apart on inspection.

The analogy was too strained to make sense.

Despite being framed as a helpful plea to gambling addicts, I think it’s clear this post was actually targeted at an anti-LLM audience. It’s supposed to make the reader feel good for choosing not to use them by portraying LLM users as poor gambling addicts.

reply
randusername 9 hours ago
Disagree. Unreliability is intractable because of the human, not the tool.

Even a perfect LLM will not be able to produce perfect outputs because humans will never put in all the context necessary to zero-shot any non-trivial query. LLMs can't read your mind and will always make distasteful assumptions unless driven by users without any unique preferences or a lot of time on their hands to ruminate on exactly how they want something done.

I think it will always be mostly boring back-and-forth until the jackpot comes. Maybe future generations will align their preferences with the default LLM output instead of human preferences in that domain, though.

reply
timcobb 11 hours ago
> The gambling analogy completely falls apart on inspection.

yeah I think the bluesky embed is much more along the lines of what I'm experiencing than the OP itself.

reply
mossTechnician 2 days ago
At one point, people said Google's optimization target was giving you the right search results as soon as possible. What will prevent Anthropic from falling into the same pattern of enshittification as its predecessors, optimizing for profit like all other businesses?
reply
mikkupikku 2 days ago
I stopped using Google years ago because they stopped trying to provide good search results. If Anthropic stops trying to provide a good coding agent, I'll stop using them too.
reply
trashb 13 hours ago
Slightly off topic actually but ill put it here.

I found it interesting that Google removed the "summary cards" supposedly "to improve user experience" however the AI overview was added back.

I suspect the AI overview is much more influenceable by advertisement money then the summary cards where.

reply
pjc50 15 hours ago
> "person who likes making things chooses making things over Netflix"

This is subtly different. It's not clear that the people depicted like making things, in the sense of enjoying the process. The narrative is about LLMs fitting into the already-existing startup culture. There's already a blurry boundary between "risky investment" and "gambling", given that most businesses (of all types, not just startups) have a high failure rate. The socially destructive characteristic identified here is: given more opportunity to pull the handle on the gambling machine, people are choosing to do that at the expense of other parts of their life.

But yes, this relies on a subjective distinction between "building, but with unpredictable results" and "gambling, with its associated self-delusions".

reply
bandrami 18 hours ago
> What's Anthropic's optimization target??? Getting you the right answer as fast as possible!

Wait, what? Anthropic makes money by getting you to buy and expend tokens. The last thing they want is for you to get the right answer as fast as possible. They want you to sometimes get the right answer unpredictably, but with enough likelihood that this time will work that you keep hitting Enter.

reply
theon144 2 hours ago
Given that pre-paid plans are the most popular way to subscribe to Claude, it quite plainly is a "the less tokens you use, the more money Anthropic makes" kind of situation.

In an environment where providers are almost entirely interchangeable and tiniest of perceived edges (because there's still no benchmark unambiguously judging which model is "better") make or break user retention, I just don't see how it's not ludicrous on its face that any LLM provider would be incentivized to give unreliable answers at some high-enough probability.

reply
phplovesong 7 hours ago
Claude RARELY get it right on the fifth time. Usually i write the damn thing when my account is on "cooldown".
reply
jplusequalt 10 hours ago
>The pathologizing of "person who likes making things chooses making things over Netflix" requires you to treat passive consumption as the healthy baseline, which is obviously a claim nobody in this conversation is bothering to defend

I think their greater argument was to highlight how agentic coding is eroding work life balance, and that companies are beginning to make that the norm.

reply
evmaki 2 days ago
The LLM is not the slot machine. The LLM is the lever of the slot machine, and the slot machine itself is capitalism. Pull the lever, see if it generates a marketable product or moment of virality, get rich if you hit the jackpot. If not, pull again.
reply
ASalazarMX 7 hours ago
I don't know why you were downvoted. This is the FOMO that encourages agent gambling, automated experimentation in the hopes of accidentally striking digital gold before your peers do. A million monkeys racing 24/7 to create the next Harry Potter first.

Ideas are a dime a dozen, now proofs of concept are a load of tokens a dozen.

reply
mikkupikku 13 hours ago
[flagged]
reply
toss1 2 days ago
Doesn't the alignment sort of depend on who is paying for all the tokens?

If Dave the developer is paying, Dave is incentivized to optimize token use along with Anthropic (for the different reasons mentioned).

If the Dave's employer, Earl, is paying and is mostly interested in getting Dave to work more, then what incentive does Dave have to minimize tokens? He's mostly incentivized by Earl to produce more code, and now also by Anthropic's accidentally variable-reward coding system, to code more... ?

reply
beepbooptheory 10 hours ago
You may have a point but either way: immediately taking it personally like this and creating a whole semi-rant that includes something to the effect "I've been doing this since before you were born" really makes you sound like a person with a gambling problem.

Trust me, we all feel like the house is our friend until its isn't!

reply
BoxFour 2 days ago
I wish the author had stuck to the salient point about work/life balance instead of drifting into the gambling tangent, because the core message is actually more unsettling. With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.

To the bluesky poster's point: Pulling out a laptop at a party feels awkward for most; pulling out your phone to respond to claude barely registers. That’s what makes it dangerous: It's so easy to feel some sense of progress now. Even when you’re tired and burned out, you can still make progress by just sending off a quick message. The quality will, of course, slip over time; but far less than it did previously.

Add in a weak labor market and people feel pressure to stay working all the time. Partly because everyone else is (and nobody wants to be at the bottom of the stack ranking), and partly because it’s easier than ever to avoid hitting a wall by just "one more message". Steve Yegge's point about AI vampires rings true to me: A lot of coworkers I’ve talked to feel burned out after just a few months of going hard with AI tools. Those same people are the ones working nights and weekends because "I can just have a back-and-forth with Claude while I'm watching a show now".

The likely result is the usual pattern for increases in labor productivity. People who can’t keep up get pushed out, people who can keep up stay stuck grinding, and companies get to claim the increase in productivity while reducing expenses. Steve's suggestion for shorter workdays sound nice in theory, but I would bet significant amounts of money the 40-hour work week remains the standard for a long time to come.

reply
nharada 2 days ago
Another interesting thing here is that the gap between "burned out but just producing subpar work" and "so crispy I literally cannot work" is even wider with AI. The bar for just firing off prompts is low, but the mental effort required to know the right prompts to ask and then validate is much higher so you just skip that part. You can work for months doing terrible work and then eventually the entire codebase collapses.
reply
Aurornis 2 days ago
> With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.

This isn't generally true at all. The "all tech companies are going to 996" meme comes up a lot here but all of the links and anecdotes go back to the same few sources.

It is very true that the tech job market is competitive again after the post-COVID period where virtually nobody was getting fired and jobs were easy to find.

I do not think it's true that the median or even 90th percentile tech job is becoming so overbearing that personal time is disappearing. If you're at a job where they're trying to normalize overwork as something everyone is doing, they're just lying to you to extract more work.

reply
BoxFour 2 days ago
It would never show up as some explicit rule or document. It just sort of happens when a few things line up: execs start off-handedly praising 996, stack ranking is still a thing, and the job market is bad enough that getting fired feels genuinely dangerous.

It starts with people who feel they’ve got more to lose (like those supporting a family) working extra to avoid looking like a low performer, whether that fear is reasonable or not. People aren’t perfectly rational, and job-loss anxiety makes them push harder than they otherwise would. Especially now, when "pushing harder" might just mean sending chat messages to claude during your personal time.

Totally anecdotal (strike 1), and I'm at a FAANG which is definitely not the median tech job (strike 2), but it’s become pretty normal for me to come back Monday to a pile of messages sent by peers over the weekend. A couple years ago even that was extremely unusual; even if people were working on the weekend they at least kept up a facade that they weren't.

reply
simonw 2 days ago
I know it's popular comparing coding agents to slot machines right now, but the comparison doesn't entirely hold for me.

It's more like being hooked on a slot machine which pays out 95% of the time because you know how to trick it.

(I saw "no actual evidence pointing to these improvements" with a footnote and didn't even need to click that footnote to know it was the METR thing. I wish AI holdouts would find a few more studies.)

Steve Yegge of all people published something the other day that has similar conclusions to this piece - that the productivity boost for coding agents can lead to burnout, especially if companies use it to drive their employees to work in unsustainable ways: https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163

reply
saulpw 2 days ago
Yeah I'm finding that there's "clock time" (hours) and "calendar time" (days/weeks/months) and pushing people to work 'more' is based on the fallacy that our productivity is based on clock time (like it is in a factory pumping out widgets) rather than calendar time (like it is in art and other creative endeavors). I'm finding that even if the LLM can crank out my requested code in an hour, I'll still need a few days to process how it feels to use. The temptation is to pull the lever 10 times in a row because it was so easy, but now I'll need a few weeks to process the changes as a human. This is just for my own personal projects, and it makes sense that the business incentives would be even more intense. But you can't get around the fact that, no matter how brilliant your software or interface, customers are not going to start paying in a few hours.
reply
simonw 2 days ago
> The temptation is to pull the lever 10 times in a row because it was so easy, but now I'll need a few weeks to process the changes as a human

Yeah I really feel that!

I recently learned the term "cognitive debt" for this from https://margaretstorey.com/blog/2026/02/09/cognitive-debt/ and I think it's a great way to capture this effect.

I can churn out features faster, but that means I don't get time to fully absorb each feature and think through its consequences and relationships to other existing or future features.

reply
mrbungie 2 days ago
If you are really good and fast validating/fixing code output or you are actually not validating it more than just making sure it runs (no judging), I can see it paying out 95% of the time.

But for what I've seen both validating my and others coding agents outputs I'd estimate a much lower percentage (Data Engineering/Science work). And, oh boy, some colleages are hooked to generating no matter the quality. Workslop is a very real phenomenon.

reply
biophysboy 2 days ago
This matches my experience using LLMs for science. Out of curiosity, I downloaded a randomized study and the CONSORT checklist, and asked Claude code to do a review using the checklist.

I was really impressed with how it parsed the structured checklist. I was not at all impressed by how it digested the paper. Lots of disguised errors.

reply
baq 2 days ago
try codex 5.3. it's dry and very obviously AI; if you allow a bit of anthropomorphisation, it's kind of high-functioning autistic. it isn't an oracle, it'll still be wrong, but it's a powerful, completely different from claude tool.
reply
biophysboy 2 days ago
Does it get numbers right? One of the mistakes it made in reading the paper was swapping sets of numbers from the primary/secondary outcomes.
reply
baq 2 days ago
it does get screenshots right for me, but obviously I haven't tried on your specific paper. I can only recommend trying it out, it's also has a much more generous limits in the $20 tier than opus.
reply
biophysboy 2 days ago
I see. To clarify, it parsed numbers in the pdf correct, but assigned them the wrong meaning. I was wondering if codex is better at interpreting non text data
reply
enraged_camel 14 hours ago
Every time someone suggests Codex I give it a shot. And every time it disappoints.

After I read your comment, I gave Codex 5.3 the task of setting up an E2E testing skeleton for one of my repos, using Playwright. It worked for probably 45 minutes and in the end failed miserably: out of the five smoke tests it created, only two of them passed. It gave up on the other three and said they will need “further investigation”.

I then stashed all do that code and gave the exact same task to Opus 4.5 (not even 4.6), with the same prompt. After 15 mins it was done. Then I popped Codex’s code from the stash and asked Opus to look at it to see why the three m of the five tests Codex wrote didn’t pass. It looked at them and found four critical issues that Codex had missed. For example, it had failed to detect that my localhost uses https, so the the E2E suite’s API calls from the Vue app kept failing. Opus also found that the two passing tests were actually invalid: they checked for the existence of a div with #app and simply assumed it meant the Vue app booted successfully.

This is probably the dozenth comparison I’ve done between Codex and Opus. I think there was only one scenario where Codex performed equally well. Opus is just a much better model in my experience.

reply
baq 13 hours ago
moral of the story is use both (or more) and pick the one that works - or even merge the best ideas from generated solutions. independent agentic harnesses support multi-model workflows.
reply
enraged_camel 12 hours ago
I don't think that's the moral of the story at all. It's already challenging enough to review the output from one model. Having to review two, and then comparing and contrasting them, would more than double the cognitive load. It would also cost more.

I think it's much more preferable to pick the most reliable one and use it as the primary model, and think of others as fallbacks for situations where it struggles.

reply
baq 11 hours ago
you should always benchmark your use cases and you obviously don't review multiple outputs; you only review the consensus.

see how perplexity does it: https://www.perplexity.ai/hub/blog/introducing-model-council

reply
r00tanon 2 days ago
I was going to mention Yegge's recent blog posts mirroring this phenomena.

There's also this article on hbr.org https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...

This is a real thing, and it looks like classic addiction.

reply
fdefitte 2 days ago
That 95% payout only works if you already know what good looks like. The sketchy part is when you can't tell the diff between correct and almost-correct. That's where stuff goes sideways.
reply
energy123 17 hours ago
Being on a $200 plan is a weird motivator. Seeing the unused weekly limit for codex and the clock ticking down, and knowing I can spam GPT 5.2 Pro "for free" because I already paid for it.
reply
Retr0id 2 days ago
It's 95% if you're using it for the stuff it's good at. People inevitably try to push it further than that (which is only natural!), and if you're operating at/beyond the capability frontier then the success rate eventually drops.
reply
Kiro 2 days ago
Just need to point out that the payout is often above 95% at online casinos. As long as it's below 100 the house still wins.
reply
mikkupikku 2 days ago
He means a slot machine that pays you 95% of the time, not a slot machine that pays out 95% of what you put in.

Claude Code wasting my time with nonsense output one in twenty times seems roughly correct. The rest of the time it's hitting jackpots.

reply
fy20 2 days ago
> It's more like being hooked on a slot machine which pays out 95% of the time because you know how to trick it

Right but the <100% chance is actually why slot machines are addictive. If it pays out continuously the behaviour does not persist as long. It's called the partial reinforcement extinction effect.

reply
jrflowers 2 days ago
> It's more like being hooked on a slot machine which pays out 95% of the time because you know how to trick it.

“It’s not like a slot machine, it’s like… a slot machine… that I feel good using”

That aside if a slot machine is doing your job correctly 95% of the time it seems like either you aren’t noticing when it’s doing your job poorly or you’ve shifted the way that you work to only allow yourself to do work that the slot machine is good at.

reply
globular-toast 8 hours ago
> It's more like being hooked on a slot machine which pays out 95% of the time because you know how to trick it.

I think you are mistaken on what the "payout" is. There's only one reason someone is working all hours and during a party and whatnot: it's to become rich and powerful. The payout is not "more code", it's a big house, fast cars, beautiful women etc. Nobody can trick it into paying out even 1% of the time, let alone 95%.

reply
zem 16 hours ago
thanks, that steve yegge piece was a very good read.
reply
aljarry 13 hours ago
This does seem like a person getting hooked on idle games, or mobile/online games with artificially limited progress (that you can pay to lift). It's a type of delayed gratification that makes you anxious to get next one.

Not everyone gets hooked on those, but I do. I've played a bunch of those long-winded idle games, and it looks like a slight addiction. I would get impatient that it takes so long to progress, and it would add anxiety to e.g. run this during breaks at work, or just before going to sleep. "Just one more click".

And to be perfectly honest, it seems like the artificial limits of Anthropic (5 hour session limits) dig into similar mechanism. I do less non-programming hobbies since I've got myself a subscription.

reply
sph 12 hours ago
I’d rather grind Runescape at this point, than become addicted at trying to automate away my job.
reply
symfrog 2 days ago
If you are trying to build something well represented in the training data, you could get a usable prototype.

If you are unfamiliar with the various ways that naive code would fail in production, you could be fooled into thinking generated code is all you need.

If you try to hold the hand of the coding agents to bring code to a point where it is production ready, be prepared for a frustrating cycle of models responding with ‘Fixed it!’ while only having introduced further issues.

reply
cadamsdotcom 16 hours ago
Rather than let results be random, iteratively and continuously add more and more guardrails and grounding.

Tests, linting, guidance in response to key events (Claude Code hooks are great for this), automatically passing the agent’s code plan to another model invocation then passing back whatever feedback that model has on the plan so you don’t have to point out the same flaws in plans over and over.. custom scripts that iterate your codebase for antipatterns (they can walk the AST or be regex based - ask your agent to write them!)

Codify everything you’re looping back to your agent about and make it a guardrail. Give your agent the tools it needs to give itself grounding.

An agent without guardrails or grounding is like a person unconnected to their senses: disconnected from the world, all you do is dream - in a dream anything can happen, there’s nothing to ensure realism. When you look at it that way it’s a miracle coding agents produce anything useful at all :)

reply
dcre 2 days ago
How are we still citing the (excellent) METR study in support of conclusions about productivity that its authors rightly insist[0] it does not support?

My paraphrase of their caveats:

- experts on their own open source proj are not representative of most software dev

- measuring time undervalues trading time for effort

- tools are noticeably better than they were a year ago when the study was conducted

- it really does take months of use to get the hang of it (or did then, less so now)

Before you respond to these points, please look at the full study’s treatment of the caveats! It’s fantastic, and it’s clear almost no one citing the study actually read it.

[0]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

reply
Shank 2 days ago
I think that in a world where code has zero marginal cost (or close to zero, for the right companies), we need to be incredibly cognizant of the fact that more code is not more profit, nor is it better code. Simpler is still better, and products with taste omit features that detract from vision. You can scaffold thousands of lines of code very easily, but this makes your codebase hard to reason about, maintain, and work in. It is like unleashing a horde of mid-level engineers with spec documents and coming back in a week with everything refactored wrong. Sure you have some new buttons but does anyone (or can any AI agent, for that matter) understand how it works?

And to another point: work life balance is a huge challenge. Burnout happens in all departments, not just engineering. Managers can get burnout just as easily. If you manage AI agents, you'll just get burnout from that too.

reply
dxxmxnd 10 hours ago
> is it really work if all you're doing is telling the computer what to do and then reviewing it to make sure it didn't do anything wrong and also babysitting it all hours of the day?

It’s funny because this is what we do already at many jobs but now its just telling a computer to tell a computer what to do. A higher level of abstraction.

reply
estimator7292 8 hours ago
It seems like some of us treat tokens as the level of fuel in the code machine. When it runs out, you simply go do something else.

What's wild to me is that there's a whole other segment of people that treat tokens as, I dunno, some kind of malicious gatekeeping to the magical program generator. Some kind of endorphin rush of extracting functional code from a naive and poorly formed idea.

To the former group, the gambling metaphor is flatly ridiculous. The AI is a tool and tokens are your allocation for tool time. To the latter, someone is trying to stifle you and strangle your creativity behind arbitrary limits.

I don't know how to feel about this other than uneasy and worried.

reply
co_king_5 8 hours ago
> I don't know how to feel about this other than uneasy and worried.

Stop using "AI" and get better at writing software than the people (read: dummies) who are.

reply
mh2266 16 hours ago
what kind of lame parties is the bluesky poster going to? is this a San Francisco thing?
reply
Sharlin 14 hours ago
I certainly hope the Bsky post is satire, but I honestly can't tell anymore.
reply
mikkupikku 13 hours ago
Yeah, honestly seems like that guy is looking for a scapegoat to blame for himself being lame. If you can't put work down and let loose, that's a you problem, not a technology problem.
reply
kledru 2 hours ago
well, the most interesting part of this post was ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86
reply
delegate 14 hours ago
It's very tempting to agree to the 'gambling' part, given that both a jackpot and progress towards the goal in your project will give you a hit of dopamine.

The difference is that in gambling 'the house always wins', but in our case we do make progress towards our goal of conquering the world with our newly minted apps.

The situation where this comparison holds is when vibe coding leads nowhere and you don't accomplish anything but just burn through tokens.

reply
squeefers 14 hours ago
if a message board that allows the sharing of videos is addictive (facebook, tiktok), then LLMs are 100% addictive. and by the same retard logic books are addictive. its hysteria, and just like people REALLY BELIEVED TV ROTS YOUR BRAINS people REALLY BELIEVE AI SLOP ROTS YOUR BRAINS.

watch as the hysteria passes, and just like the tv scare, nobody cares anymore in roughly 20 years or so. shame on all of you

reply
mikkupikku 13 hours ago
Sounds like you've had too much TV. It really does rot your brain, this is obvious to anybody who doesn't watch TV, but completely imperceptible to those who do.
reply
squeefers 12 hours ago
> It really does rot your brain, this is obvious to anybody who doesn't watch TV, but completely imperceptible to those who do.

how do you block video on your PC? or do you literally mean audiovisual information broadcast onto actual television sets is the evil?

reply
mikkupikku 12 hours ago
When you watch television, or television on your computer screen (that makes no difference) you get hypnotized by the tube into a passive state of consumption. Watch people when they watch TV. Watch their slack jawed faces when the commercials stay on and their attention stays glued to the advertisements pitching Alzheimer drugs. Critical though suspended, minds off in space.

In short, read a book.

reply
squeefers 10 hours ago
what you said is true about books, and people made the exact same arguments when the printing press hit the scene

- "you get hypnotized by the tube into a passive state of consumption"

- "Watch their slack jawed faces....and their attention stays glued"

both statements apply equally to books. read here if you dont believe me.

https://engines.egr.uh.edu/talks/what-people-said-about-book...

youve got a case of the feelies my friend

reply
wiseowise 10 hours ago
Books are net positive for you, slop smoothens your brain IF you completely outsource your thinking to it. It’s not a rocket science.
reply
squeefers 8 hours ago
unless the book contains instructions on how to do things... then youre just outsourcing thinking to the book right? people have to remember less with the printed word full stop. so whats the difference?
reply
xyzsparetimexyz 14 hours ago
> The difference is that in gambling 'the house always wins', but in our case we do make progress towards our goal of conquering the world with our newly minted apps.

What? Your vibe coded slop is just going to be competing with someone else's vibe coded slop.

reply
mikkupikku 13 hours ago
The motivations for wanting to make the slop could be commercial profit, or it could be simply you trying to solve a problem for yourself. In either case, the slop is the goal and, if the agent isn't giving you complete trash, you should be converging towards your goal. The gambling analogy doesn't work.
reply
nl 14 hours ago
I don't think gambling is the right analogy at all.

I do think it can be addictive, but there are many things that are addictive that aren't gambling.

I think a better analogy is something like extreme sport, where people can get addicted to the point it can be harmful.

reply
askl 14 hours ago
At least with gambling, there's the chance of hitting a jackpot.
reply
andix 9 hours ago
I don't get it. I have a ton of minutes and data volume included in my cell phone plans. Most of the months I only use up less than 10% of the included quota. Doesn't make me anxious at all.
reply
htfu 2 days ago
Probably the best we can hope for at the moment is a reduction in the back-and-forth, increase in ability to one-shot stuff with a really good spec. The regular human work then becomes building that spec, in regular human (albeit AI-assisted) ways.
reply
ErroneousBosh 2 days ago
Is the "back and forth" thing normal for AI stuff, then? Because every time I've attempted to use Claude or Copilot for coding stuff, it's been completely unable to do anything on its own, and I've ended up writing all of the code while it's just kind of introduced misspellings into it.

Maybe someone can show me how you're supposed to do it, because I have seen no evidence that AI can write code at all.

reply
htfu 2 days ago
Very much normal yes. This is why I've been (so far) still mainly sticking to having it as an all-knowing oracle telling me what I need to know, which it mostly does successfully.

When it works for pure generation it's beautiful, when it doesn't it's ruinous enough to make me take two steps back. I'll have another go at getting with all the pure agentic rage everyone's talking about soon enough.

reply
jazzyjackson 2 days ago
Step 1: deposit money into an Anthropic API account

Step 2: download Zed and paste in your API Key

Step 3: Give detailed instructions to the assistant, including writing ReadMe files on the goal of the project and the current state of the project

Step 4: stop the robot when it's making a dumb decision

Step 5: keep an eye on context size and start a new conversation every time you're half full. The more stuff in the context the dumber it gets.

I spent about 500 dollars and 16 hours of conversation to get an MVP static marketplace [0], a ruby app that can be crawled into static (and js-free!) files, without writing a single line of code myself, because I don't know ruby. This included a rather convoluted data import process, loading the database from XML files of a couple different schemas.

Only thing I had to figure out on my own was how to upload the 140,000 pages to cloudflare free tier.

[0] https://motorcycledealer.com/

reply
ErroneousBosh 2 days ago
> Step 4: stop the robot when it's making a dumb decision

Yeah I can't stop myself when I'm about to make a dumb decision, just look at my github repo. I ported Forth to a 1980s sampler and wrote DSP code on an 8-bit Arduino.

How am I going to stop a robot making dumb decisions?

Also, this all sounds like I'm doing a lot of skivvy work typing stuff in (which I hate) and not actually writing much code (which is the bit I like).

reply
hyperadvanced 2 days ago
The robot will output text like “Oh, I see, the user wants me to make a Lovecraftian horror with asynchronous subprocess calls instead of HTTP endpoints, so I better suggest we reinstall the dependencies that are already installed so we can sacrifice this project to Mammoth”

It is at this point where you can say “NONONO YOU ABSOLUTE DONKEY stop that we just want a FastAPI endpoint!!” And it will go “You’re absolutely right, I was over complicating this!”

reply
jazzyjackson 22 hours ago
Correct.

I did waste about 20 minutes trying to do a recursive link following crawl (to write each rendered page to file), because Opus wanted to write a ruby task to do it. It wasn’t working so I googled it and found out link following is a built in feature of cURL…

reply
littlestymaar 5 hours ago
> I spent about 500 dollars and 16 hours of conversation to get an MVP static marketplace

If I wasn't already convinced that agentic tools were slot machines, here's a very strong argument in favor of that theory…

reply
verdverm 2 days ago
Step 1 is where Anthropic lost me.

1. If you don't use it soon enough, they keep it (shame on them, do the things you need to in order to be a money transmitter, you have billions of dollars)

2. Pay-go with billing warning and limits. You can use Claude like this through Google VertexAI

reply
8note 2 days ago
there's a lot of back and forth for describing what you actually want, design, the constraints, and testing out the feeback loops you set up for it to be able to tell tell if its on the right track or not.

when its actually writing code its pretty hands off, unless you need to course correct to point it in a better direction

reply
shaokind 2 days ago
One of my recent thoughts is that Claude Code has become the most successful agent partially because it is more of a black box than previous implementations of the agent pattern: the actual code changes aren't shoved in your face like Cursor (used to be), they are hidden away. You focus more on the result rather than the code building up that result, and so you get into the "just one more feature" mindset a lot more, because you're never concerned that the code you're building is sloppy.
reply
mikkupikku 2 days ago
It's because claude code will test its work and adjust the code accordingly. The old way, with the way Cursor used to be or the way I used to copy and paste code from ChatGPT doesn't work because to iterate in towards a working solution requires too much human effort, making the whole thing pointless.
reply
shaokind 2 days ago
Cursor & its various clones (Cline, Roo Cline/Code) did that too, before Claude Code was even released.
reply
surrTurr 16 hours ago
I wrote a similar blogpost just a few days ago: "The Vibe Coding Slot Machine" (https://news.ycombinator.com/item?id=47022282)
reply
yayitswei 10 hours ago
How ironic that agents were supposed to free us from labor, but instead they're ushering in 996 culture.
reply
chasd00 10 hours ago
It’s like how adding a lane to a highway doesn’t decrease traffic, traffic will just increase to consume the additional capacity. Efficiency gains at work just mean more work haha
reply
bb123 14 hours ago
What's with the lack of capitalisation at the start of sentences? It makes it hard to parse where one sentence ends and the next begins.
reply
squeefers 14 hours ago
why arent books accused of addictive engineering? simply move the printed words from paper to digital and it becomes addictive somehow.
reply
Aurornis 2 days ago
After actually using LLM coding tools for a while, some of these anti-LLM thinkpieces feel very contrived. I don’t see the comparison to gambling addiction at all. I understand how someone might believe that if they only view LLMs through second hand Twitter hot takes and think that it’s a process of typing a prompt and hoping for the best. Some people do that, but the really effective coders work with the LLM and drive the coding, writing some or much of the code themselves. The social media version of vibe coding where you just prompt continuously and hope for the best is not going to work in any serious endeavor where details matter. We see claims of it in some high profile examples like OpenClaw, but even OpenClaw has maintainers and contributors who look at the code and make decisions. It’s also riddled with security problems as a result of the YOLO coding style.
reply
squeefers 10 hours ago
shame on him for linking his ditzy wife's article where she thinks shes the first to notice lootboxes are gambling
reply
rexpop 9 hours ago
Oh my god! Don't call women "ditzy" you cretin. It's misogynistic as hell.
reply
squeefers 8 hours ago
i could use "airheaded" instead, "vacuous" maybe? dim, dizzy, duncey or daft if we want to stick with the D's
reply
throwthrowuknow 18 hours ago
This is simply yet another outdated analogy from haters that are failing to keep pace with the current frontier because they are too busy getting high on the anti-hype.

We’re well past the need to retry the same prompt multiple times in order to get working code. The models with their harnesses are properly agentic now, they can find the right context, make a plan, write the code, run the tests and fix the bugs with little to no intervention from a human.

The hardest part now is keeping up with them when it comes to approving the deliverables and updating the architecture and spec as new things are discovered by using the software. Not new bugs but corrections to your own assumptions you had before the feature was built.

The hard part is almost entirely management.

That’s something to seriously think about.

reply
xyzsparetimexyz 14 hours ago
> This is simply yet another outdated analogy from haters that are failing to keep pace with the current frontier because they are too busy getting high on the anti-hype.

Touché.

reply
Havoc 23 hours ago
And a related issue that if you have a coding plan with time based limits they there is pressure to make maximum use of it
reply
scuff3d 2 days ago
Simple fix for this. When the work day is done, close the laptop and walk away. Don't link notifications to personal devices. Whatever slop it produced will be waiting for you at 8am the next morning.
reply
zvqcMMV6Zcr 9 hours ago
Please tell me this is some kind of satire because people checking on agents makes as much sense as people waking up early only to checking progress on Gentoo rebuilding itself from sources. Which definitely happened but as a niche not something common enough to observe.
reply
coldtea 2 days ago
Ironically the linked text by this Kellog guy is 100% AI slop itself
reply
verdverm 2 days ago
what's ironic? The conversation under the post adds a lot of color to Tim's thoughts
reply
coldtea 2 days ago
It's ironic that the linked thoughts in the post (the black and white text screenshots) lamening AI overtaking our lives are themselves written by AI
reply
TZubiri 17 hours ago
Not sure if the original post is satire, but it reads like an alcoholic's submission to https://xkcd.com/1227/
reply
mikemarsh 9 hours ago
Always interesting how this kind of snark never seems to go to bat for modern man's average patience and self-control, thus actually proving the naysayers wrong, but always just assumes that "everyone knows" modern times are best and those silly past naysayers are thus wrong.
reply
karel-3d 15 hours ago
wait wait wait the BlueSky post is not a parody? It's actually serious???

I really cannot tell

reply
jauntywundrkind 2 days ago
Funemployed right now joyously spending way way more time than 996, pulling the slot machine arm to get tokens, having a ball.

But that's for personal pleasure. This post is receeding from the concerns about "token anxiety," about the addiction to tokens. This post is about work culture & late capitalism anxiety, about possible pressures & systems society might impose.

I reflect a lot on AI doesn't reduce the work, it intensifies it. https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies... The spirit of this really nails something core to me. We coders especially get help doing so much of menial now. Which means we spend a lot more time making intense analysis and critiques, are much more doing the hard thought work of 'is what we have here as good as it can be'. Finding new references or patterns to feed back into the AI to steer already working implementations towards better outcomes.

And my heart tells me that corporations & work life as we know it are almost universally just really awful about supporting reflective contemplative work like this. Work wants output. It doesn't want you sit in a hammock and think about it. But increasingly I tell you the key to good successful software is Hammock Driven Development. It's time to use our brains more, in quiet reflection. https://github.com/matthiasn/talk-transcripts/blob/master/Hi...

996 sounds like garbage on its own, as a system of toil. But I also very much respect an idea of continuous work, one that also intersperses rest throughout the day. Doing some chores or going to the supermarket or playing with the kid can be an incredibly good way to let your preconscious sift through the big gnarly problems about. The response to the intensity of what we have, to me, speaks of a need to spread out the work, to de-concentrate it, to build in more than hammock time. I was on the fence about whether the traditional workday deserved to survive before AI hit, and my feels about it being a gross mismatch have massively intensified since.

As I started my post with, I personally have a much more positive experience, with what yes feels like a token addiction. But it doesn't feel like an anxiety. It feels like the greatest most exciting adventure, far beyond what I had hoped for in life ever. This is wildly fun, going far far further out than I had ever hoped to get to see. I'm not "anxiously" pulling the lever arm on the token machine, I'm just thrilled to get to do it. To have time to reflect and decide, I have 3-8 things going at once (and probably double they back burnered but open, on Niri rows!) to let myself make slower decisions, to analyze, while keeping the things that can safely move forwards moving forwards.

That also seems like something worker exploitative late capitalism is mostly hot garbage at too! Companies really try to reduce in flight activities. Sprint planning is about crafting deliberate work. But our freedom and agency here far outstrips these dusty old practices. It is anxiety inducing to be so powerful so capable & to have a bureaucracy that constraints and confines, that wants only narrow windows of our use.

Also, shame on Tim Kellogg for not God damned linking the actual post he was citing. Garbagefire move. https://writing.nikunjk.com/p/token-anxiety https://news.ycombinator.com/item?id=47021136

reply
cjrp 13 hours ago
> 996 sounds like garbage on its own, as a system of toil. But I also very much respect an idea of continuous work, one that also intersperses rest throughout the day. Doing some chores or going to the supermarket or playing with the kid can be an incredibly good way to let your preconscious sift through the big gnarly problems about.

I _kind_ of get this if we're talking about working on big, important, world-changing problems. If it's another SaaS app or something like that, I find it pretty depressing.

reply
lifeline82 4 hours ago
[dead]
reply
wormpilled 2 days ago
[flagged]
reply