https://chatgpt.com/share/69aaab4b-888c-8003-9a02-d1df80f9c7...
Claude's Cycles [pdf] - https://news.ycombinator.com/item?id=47230710 - March 2026 (362 comments)
Research institutes like those Terence Tao is founding in our current present feel like they will align with this future almost perfectly on a long enough timeline -- though on a shorter timeline this area of research is almost certain to provide a ton of useful ways to advance our current AI systems. Our systems are still at a stage where literally anything that can generate new information that is "accurate" in some way -- like our current theorem-prover engines -- is an enormously valuable part of our still manually curated training loops.
> * After EVERY exploreXX.py run, IMMEDIATELY update this file [plan.md] before doing anything else.
> * No exceptions. Do not start the next exploration until the previous one is documented here.
Is this known to improve performance for advanced problem solving? If so, why this specific prompt?
feels like half the battle with AI tools is not the UX, but just having stable access to the models behind them
How long will it take before they rob a bank?
If they do either of those things will the results have been intentional from the simian’s POV?
"oh awesome let's see if he can solve p!=np!"
Edit: This is going to have huge ramifications for the tech security industry, as these systems will be able to break security systems as easily as this one solved the proof. The sooner the good guys, if there are any left, understand this, the better it will be for everybody.
> Super interesting but what does this mean for us mere mortals?
I would go for a 2- or 3-hour walk with my phone, using the remote-control feature and checking in every 5-10 minutes to make sure it didn't need human help. I went to the coffee shop and drank very good coffee while listening to music. Then at night I sat and had a beer, thinking about T.S. Eliot's 'The Waste Land', the effect of industrialization in England at that time, and his views on how ennui affected the aristocracy.
Well, for those among us who are not aristocracy already: except for the vanishingly small number of people required to oversee such processes, this is probably the closest we’re going to get to it. If they don’t need people to do the tech labor, we’ve got way more people than we need, so that’s a huge oversupply of tech skills, which means tech skills are rapidly becoming worthless. Glad to see how fast we’re moving in our very own race to the bottom!
Sounds like a great starting plot for an interesting story.
However…
I have to acknowledge my craft of SE has been putting people out of work for decades. I myself came up with a business process improvement that directly let the company release about 20 people. I did this twice.
So… fair play.
Yeah, but why does it need to take the fun jobs first, like painting, writing poems, coding, making music, ...
I want the AI to cook, do the dishes, take out the trash, etc.
It truly was joyful to have this available to me. It didn’t have to have mass appeal or need me to pay the right artists the right amounts. I had it in moments.
It’s a wonderful world.
Citation needed. Do you have an example of someone in the arts losing their job because of AI?
Like beg on the corners and starve in the street? Trying to figure out how the basics of capitalism, where labor is exchanged for money, are supposed to keep working when the only jobs left are side gigs. Something will have to change, and a lot of people will fight said change.
The work will become even more fulfilling, however.
1) It’s not my job to fix all the problems of Capitalism. It’s painful to try to fight the system without collective action. My family and I have to eat too.
2) We have had a solution all along for the particular problem of AI putting devs out of work. It’s called professional licensure, and you can see it in action in engineering and medical fields. Professional Software Engineers would assume a certain amount of liability and responsibility for the software they develop. That’s regardless of whether they develop it with LLM tools or something else.
For example, you let your tools write slop that you ship without even looking? And it goes on to wreak havoc? That’s professional malpractice. Bad engineer.
If we do this then Software Engineers become the responsible humans in the loop of so-called “AI” systems.
Say you found a job shooting people in the head for money. Like if you work for ICE or something…
You need to feed your family. Is this job ok? You may decide yes. I decided no. I will find another way to feed my family.
You don’t get to escape consequences because you are a small cog in a large system.
In the bigger picture, automation should free people from labor. But that requires some very greedy people to relax their grip ever so slightly. I imagine they see automation as a way to reduce reliance on labor, and if they don’t need labor, they don’t need people. So let them starve and stop having kids.
It’s not even the money-making skill: it’s the application of it. People who are good at shooting people can be beneficial to society as protectors, or they can be the business end of systemic oppression. People with software development skills don’t have to help optimize the motor in the brand-new shiny capitalism juicer.
To a point. Then it just frees up people to do nothing.
> The goal should be to put everyone out of a job.
That is in fact the goal. The less labor capital needs, the more money (and power) the capitalists get to keep for themselves.
What can the good guys do? Fire up Claude to improve their systems? Unless you have it working fully autonomously to counteract abuse, I don't see how you can beat the "bad guys". There may be some industries where this is a solved problem (e.g. you can do all the validation server-side and religiously follow best practices to prevent and mitigate abuse), but a lot of stuff like multiplayer video games will be doomed unless they move to a "you must use a locked-down system we control" model. I honestly don't consider it liberating: as someone who has various hobby projects, in addition to plain old DDoS I'll now also have people spinning up layer-7 attacks with just their credit card. It almost makes me want to give up instead of pushing forward in a world where the worst of the worst has access to the best of the best.
I was putting off security updates on my npm dependencies in my personal project because it's a pain to migrate if the upgrade isn't trivial. It's not a critical website, but I run npm scripts locally, and dependabot is telling me things.
I told Claude Code to make a migration plan to upgrade my deps. It updated code for breaking changes (there were API changes, not all fixes are minor version upgrades) and replaced abandoned unmaintained packages with newer ones or built-in Node APIs. It was all done in an hour. I even got unit tests out of it to test for regressions.
In this case, I was able to skip the boring task of maintaining code and applying routine updates and focus on the fun feature stuff.
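If anyone wants to scope the same kind of upgrade before handing it to an agent, here's a rough sketch of the triage step I mean -- `npm outdated --json` is real, but the major/minor heuristic is just my illustration, not what Claude actually ran:

    import json, subprocess

    # `npm outdated --json` exits nonzero whenever anything is outdated,
    # so don't pass check=True here.
    raw = subprocess.run(["npm", "outdated", "--json"],
                         capture_output=True, text=True).stdout

    for name, info in json.loads(raw or "{}").items():
        cur, latest = info.get("current"), info.get("latest")
        if not (cur and latest):
            continue  # not installed locally, skip
        # Major-version jumps are where the breaking API changes hide;
        # those are the ones worth a real migration plan.
        tag = "MAJOR" if cur.split(".")[0] != latest.split(".")[0] else "minor"
        print(f"{tag}  {name}: {cur} -> {latest}")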
That is a nightmarish scenario tbh
Most likely your 3 hours will be filled with managing 36 different AI sessions at a time and it will slowly break your brain.
At least if we keep doing capitalism the way we are.
Later this boredom was described by the Stones, "And though she’s not really ill / There’s a little yellow pill / She goes running for the shelter of a mother’s little helper".
It is a nightmare. Mostly what I'm thinking about while the agents are running is how bored I'm going to be. That is the joke: my deep thoughts on T.S. Eliot are about the wasteland this thing is going to create.
> After a week, scores of iterations, it can reverse engineer any website
Cool, let’s see the proof.
It’s insane how insufferable this place is now.
> There is no proof, just a self-congratulatory word salad with dubious authenticity.
I worked 8 days straight on that and have been working non-stop on the second draft, which is much cleaner and safer. I'm a human being. Please don't be mean. If humanity does come to an end, it won't be because of AI; it will be because we can't stop being assholes to each other.
[0] https://github.com/adam-s/intercept/tree/main?tab=readme-ov-...
It is a proof-of-concept. It seriously burns some tokens (~80k - ~200k), but afterwards it doesn't require AI to scrape and automate a website, so if all the people at Browser Use, Browser Base, and everyone pounding every website used it, I think the net benefit would be in the billions. I would recommend using it in isolation. Nonetheless, it works very, very well on my machine.
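To make the shape of it concrete (the recipe format below is a hypothetical illustration, not the repo's exact output): the expensive AI pass happens once and emits a deterministic recipe; afterwards plain code replays it with no model in the loop.

    import json
    import requests

    # The one-time AI pass (the ~80k-200k token part) would have produced
    # something like recipe.json; from here on, no model is involved.
    with open("recipe.json") as f:  # hypothetical format
        recipe = json.load(f)

    session = requests.Session()
    session.headers.update(recipe.get("headers", {}))

    for step in recipe["steps"]:
        resp = session.request(step["method"], step["url"],
                               params=step.get("params"),
                               data=step.get("body"))
        resp.raise_for_status()
        data = resp.json()
        # Extraction is plain key lookup baked into the recipe -- the
        # "intelligence" was spent once, up front.
        print({field: data.get(key) for field, key in step["extract"].items()})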
> This type of slop comment is somehow worse than spam.
Please don't be mean.
> I think, the net benefit would be in the billions.
I think you must forgive people if they are somewhat hostile, if not sick and tired of these claims. It’s quite frustrating seeing individuals constantly saying things like this. Meanwhile, I don’t think a lot of people are seeing the structural shifts that these claims imply. This is not an original idea: the disruption claim has been made for the past several years in various fields, and the goalposts keep getting moved. AI will absolutely change and render some jobs moot, even in its current state, if Claude/GPT are able to make a profitable business model. But if it turns out that Claude is really being subsidized by investors, and that the $200/month subscription is really $5,000/month when Claude has to stand on its own, I’m not sure what’s going to happen.
It’s clear you’ve gotten some good, if expensive, use out of AI, but I’m not sure that experience scales or that it will exist in 5 years.
2-3 hours "walking" while having to check in every 5-10 minutes?
If I have to check in every 5-10 minutes, I won't taste coffee or hear that there's good music playing.
However, I do not trust AI anywhere near as much as I trust the humans. The AI is super capable but also occasionally a psychopath toddler. I sat in amused astonishment when, faced with job 2 not running because job 1 was failing, Claude went into the database, changed the failure record to success, triggered job 2 (which produced harmful garbage), and then claimed victory. Only the most troubled person would even think of doing that, but Claude thought it was the best solution.
There is some real power in AI, for sure. But as I have been working with it, one thing is very clear. Either AI is not even close to a real intelligence (my take), or it is an alien intelligence. As I develop a system where it iterates on its own contexts, it definitely becomes probabilistically more likely to do the right thing, but the mistakes it makes become even more logic-defying. It's the coding equivalent of a hand with extra fingers.
I'm only a few weeks into really diving in. Work has given me infinite tokens to play with. I'm building my own orchestrator system that's purely programmatic and will spawn agents to do work. Treat them as functions: defined inputs and defined outputs. Don't give an agent more than one goal; I find that giving it the goal of building a system often leads it to assert that it works when it does not, so the verifier is a different agent. I know this is not new thinking; as I said, I am new.
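For what it's worth, the shape I mean, as a minimal sketch; `run_agent` is a stand-in for whatever call actually spawns an agent in your stack, not a real API:

    from dataclasses import dataclass

    @dataclass
    class Task:
        goal: str      # exactly one goal per agent
        inputs: dict   # defined inputs; the return value is the defined output

    def run_agent(prompt: str) -> str:
        """Stand-in for whatever actually spawns an agent (SDK, subprocess, ...)."""
        raise NotImplementedError

    def worker(task: Task) -> str:
        return run_agent(f"Goal: {task.goal}\nInputs: {task.inputs}")

    def verifier(task: Task, result: str) -> bool:
        # A *different* agent grades the work, so the worker never
        # gets to assert that its own output is fine.
        verdict = run_agent(f"Does this output satisfy the goal "
                            f"{task.goal!r}?\n{result}\nAnswer PASS or FAIL.")
        return verdict.strip().upper().startswith("PASS")

    def orchestrate(task: Task, max_attempts: int = 3) -> str:
        for _ in range(max_attempts):
            result = worker(task)
            if verifier(task, result):
                return result
        raise RuntimeError(f"no verified result for goal: {task.goal}")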
For me the most useful way to think about it has been considering LLMs to be a probabilistic programming language. It won't really error out, it'll just try to make it work. This attitude has made it fun for me again. Love learning new languages and also love making dirty scripts that make various tasks easier.
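A tiny sketch of what that framing buys you in practice (`llm` here is a placeholder for whatever client you use): since the call never errors out on its own, you bolt the error-raising on yourself.

    import json

    def llm(prompt: str) -> str:
        """Placeholder for your model call; it always returns *something*."""
        raise NotImplementedError

    def sample_json(prompt: str, tries: int = 3) -> dict:
        # A probabilistic function never fails loudly on its own, so we
        # re-sample until the output parses -- then fail loudly ourselves.
        for _ in range(tries):
            try:
                return json.loads(llm(prompt))
            except json.JSONDecodeError:
                continue
        raise ValueError(f"no parseable sample after {tries} tries")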
we've had AlphaFold for a while. it's not novel that we have ML solutions that can find, erm, novel solutions.
however, by and large, most LLMs as typically used by most individuals aren't solving novel problems. and in those scenarios, we often end up with regurgitated/most common/lowest common denominator outputs... it's a probability distribution thing.
Also that it is now good enough to make researchers faster.
If we seriously expect white collar jobs to not be a thing anymore, then I am not seeing trades having nearly enough capacity to absorb all the released workforce.
The AI CEOs are pointing out that when chess was "solved", in that Kasparov was famously beaten by Deep Blue, there was a window of time after that event when grandmasters + computers were the strongest players. The knowledge/experience of a grandmaster paired with the search/scoring of the engines was an unbeatable pair.
However, that was just a window in time. Eventually engines alone were capable of beating grandmaster + engine pairs. Think about that carefully. It implies something. The human involvement eventually became an impediment.
Whether you believe this will transfer to other domains is up to you to decide.
It's like pairing with the fastest person on the team, except he is wrong often enough to cost you time and still sounds sure.
Math seems difficult to us because it's like using a hammer (the brain) to twist in a screw (math).
LLMs are discovering a lot of new math because they are great at low depth high breadth situations.
I predict that in the future people will ditch LLMs in favor of AlphaGo style RL done on Lean syntax trees. These should be able to think on much larger timescales.
Any professional mathematician will tell you that their arsenal is ~ 10 tricks. If we can codify those tricks as latent vectors it's GG
Ergo these tricks are latent vectors in our brain. We use analogies, like leaning on geometric intuition, in order to use Algebraic Geometry to solve problems in Number Theory.
An AI trained on Lean syntax trees might develop its own weird versions of intuition that might actually properly contain ours.
If this sounds far-fetched, look at chess. I wonder if anyone has dug into Stockfish using mechanistic interpretability.
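To gesture at what "AlphaGo style RL done on Lean syntax trees" could even mean, here is a one-ply cartoon of value-guided tactic selection, UCB-style. Every name in it (`tactics`, `apply`, `value`) is a hypothetical stand-in, not any real Lean API:

    import math

    # One-ply cartoon of value-guided proof search. `goal` is a proof
    # state, tactics(goal) lists legal moves, apply(goal, t) returns the
    # remaining subgoals, and value(g) is a learned estimate in [0, 1]
    # of how provable a goal looks. All four are hypothetical stand-ins.
    def best_tactic(goal, tactics, apply, value, sims=100, c=1.4):
        stats = {t: [0, 0.0] for t in tactics(goal)}  # visits, total score
        for _ in range(sims):
            total = sum(n for n, _ in stats.values()) + 1
            # UCB1: exploit tactics that scored well, explore rarely-tried ones.
            t = max(stats, key=lambda t: stats[t][1] / (stats[t][0] + 1e-9)
                    + c * math.sqrt(math.log(total) / (stats[t][0] + 1)))
            subgoals = apply(goal, t)
            # No subgoals left means the tactic closed the goal: score 1.
            # Otherwise back up the weakest link among the new subgoals.
            score = 1.0 if not subgoals else min(value(g) for g in subgoals)
            stats[t][0] += 1
            stats[t][1] += score
        return max(stats, key=lambda t: stats[t][0])  # most-visited tactic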
https://arxiv.org/abs/2504.13837
That said, reachability and novel strategies are somewhat overlapping areas of consideration, and I don't see many ways in which RL in general, as mainly practiced, improves upon models' reachability. And even when it isn't clipping weights it's just too much of a black box approach.
But none of this takes away from the question of raw model capability on novel strategies, only such with respect to RL.
[0] https://arxiv.org/pdf/2506.14245
[1] https://www.vice.com/en/article/a-human-amateur-beat-a-top-g...
This is far from unsolvable. It just means that the "apply RL like AlphaGo" attitude is laughably naive. We need at least one more trick.
I see posts like yours all the time, comforting themselves that humans still matter, and every time, people like you are describing a human owning an ever-shrinking section of the problem space.
It used to be the case that the labs were prioritising replacing human creativity, e.g. generative art, video, writing. However, they are coming to realise that just isn't a profitable approach. The most profitable goal is actually the most human-oriented one: the AI becomes an extraordinarily powerful tool that may be able to one-shot particular tasks. But the design of the task itself is still very human, and there is no incentive to replace that part. Researchers talk a bit less about AGI now because it's a pointless goal. Alignment is more lucrative.
Basically, executives want to replace workers, not themselves.
The paradigm shift has already happened to me and there will be more shifts to come.
People can use other people as tools. An LLM being a tool does not preclude it from replacing people.
Ultimately it’s a volume problem. You need at least one person to initialize the LLM. But after that, in theory, a future LLM can replace all people with the exception of the person who initializes the LLM.
And if we can train the systems to discover new tricks, whoa Nelly.
I love this and have a corollary saying: the last job to be automated will be QA.
This wave of technology has triggered more discussion about the types of knowledge work that exist than any other, and I think we will be sharper for it.
You forgot to include resources:
What makes people with capital able to turn things into more capital is their ability to buy labor and resources. If people with more capital can generate capital faster than people with less capital, then (unless they are constrained, for example, by law or conscience) the people with the most capital will eventually own effectively all scarce resources, such as land. And that's likely to be a problem for everyone else.
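To make the compounding concrete with made-up rates (5% for the large holder, 2% for the small one):

    big, small = 1_000_000.0, 10_000.0   # starting capital, 100x apart
    for year in range(1, 101):
        big *= 1.05     # bigger pools buying better returns
        small *= 1.02
        if year % 25 == 0:
            print(f"year {year}: ratio {big / small:,.0f}x")
    # 1.05 / 1.02 is about 1.029, so the gap roughly doubles every ~24
    # years: ownership concentrates even though nobody did anything
    # unusual in any single year.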
If you don't have capital, the only way to get it is by trading resources or labor for it. Most poor people don't have resources, but they do have the ability to do labor that's valued. But AI is a substitute for labor. And as AI gets better, the value of many kinds of labor will go towards zero.
If it was hard for poor people to escape poverty in the past, it's going to be even harder with AI. Unless we change something about the structure of society to ensure that the benefits of AI are shared with poor people.
You have to believe that LLM scaling (down) is impossible or will never happen. I assure you that this is not the case.
This is certainly my hope.
In my spare time, I'm slowly, very slowly, inching towards a prototype of something that could work like that.
For example, there was a recent post here about GPT-5.4 (and later some other models) solving a FrontierMath open problem: https://news.ycombinator.com/item?id=47497757
That would definitely be considered "new math" if a human did it, but since it was AI people aren't so sure.
The most obvious example of this thinking: if LLMs are replacing developers, why is OpenAI still hiring?
So devs are being replaced.
And other stories people tell themselves to sleep better at night
In any other context than when your paycheck depends on it, you would probably not be following orders from a random manager. If your paycheck depended on following the instructions of an AI robot, the world might start to look pretty scary real soon.
That's already the case, minus AI, for gig workers. Their only agency is to accept or decline a ride/delivery, the rest is follow instructions.
- Coherent customer interaction
- Common sense judgements
- Scheduling
- Quality control
All of which are baked into humans, but not so much into LLMs.
Even if it were legal to have an LLM as a GM, I think it would fare poorly.
Imagine if McDonald's management enforced dog-related rules. No more filthy muppets! If a dog harassed customers, the AI would call the cops and sue for a restraining order! If a dog defecated in the middle of the restaurant, everything would get disinfected, not just smeared with towels!
Nutters would crucify AI management!
1. https://mppbench.com/
Of course, because it takes multi-modal intelligence to manage a McDonald's. I.e., it requires human intelligence.
> I predict that in the future people will ditch LLMs in favor of AlphaGo style RL
Same for coding as well. LLMs might be the interface we use with other forms of AI, though.
Programming is more multimodal than math.
Something like performance engineering might be free lunch though
I have no idea how you come to this conclusion, when the evidence on the ground for those training models suggests it is precisely the opposite.
We are much further along the path of writing code than writing new maths, since the latter often requires some degree of representational fluency of the world we live in to be relevant. For example, proving something about braid groups can require representation by grid diagrams, and we know from ARC-AGI that LLMs don't do great with this.
Programming does not have this issue to the same extent; arguably, it involves the subset of maths that is exclusively problem solving using standard representations. The issues with programming are primarily on the difficulty with handling large volumes of text reliably.
I feel like something people miss when they talk about intelligence is that humans have incredible breadth. That is really what differentiates us from artificial forms of intelligence as well as other animals. Plus we have agency, the ability to learn, and the ability to think critically from first principles.
Also animals thrive in underspecified environments, while AIs like very specific environments. Math is the most specified field there is lol
One difference between human intelligence and artificial intelligence is that humans can thrive with extremely limited training data, whereas AI requires a massive amount of it. I think if anybody is worried about being replaced by AI, they should look at maximising their economic utility in areas which are not well specified.
Don't argue. If you think Hackernews is a representative sample of the field then you haven't been in the field long enough.
What LLMs have actually done is put the dream of software engineering within reach. Creativity is inimical to software engineering; the goal has long been to provide a universal set of reusable components which can then be adapted and integrated into any system. The hard part was always providing libraries of such components, and then integrating them. LLMs have largely solved these problems. Their training data contains vast amounts of solved programming problems, and they are able to adapt these in vector space to whatever the situation calls for.
We are already there. Software engineering as it was long envisioned is now possible. And if you're not doing it with LLMs, you're going to be left behind. Multimodal human-level thinking need only be undertaken at the highest levels: deciding what to build and maybe choosing the components to build it. LLMs will take care of the rest.
I was thinking the other day of how things would go if some of my less tech savvy clients tried to vibe code the things I implement for them, and frankly I could only imagine hilarity ensuing. They wouldn't be able to steer it correctly at all and would inevitably get stuck.
Someone needs to experiment with that actually: putting the full set of agentic coding tools in the hands of grandma and recording the outcome.
Basically, when every single line needs to be reviewed extremely closely, the time taken to write the code is not a bottleneck at all; if using AI, you would actually gain a bottleneck in the time spent removing the excess and superfluous code it produces.
And my intuition is that the line between those two kinds of programming - let's call them careful and careless programming, to coin an amusing terminology - may not shrink as far back as some think, and it definitely won't shrink to zero.
AI usage is a useless metric, look at results. Thus far, results and AI usage are uncorrelated.
1) there hasn't been a whole lot of research into AI productivity period;
2) many of the studies that have been done (the 2025 METR study for example) are both methodologically flawed and old, not taking into account the latest frontier models
3) corporate transitions to AI-first/AI-native organizations are nowhere near complete, making companywide productivity gains difficult to assess.
However, it isn't hard to find stories on Hackernews from devs about how much time generative AI has saved them in their work. If the time savings is real, and you refuse to take advantage of it, you are stealing from your employer and need to get with the program.
As for IDEs, if you're working in C# and not using Visual Studio, or Java and not using JetBrains, then no—you are not working as efficiently as you could be.
People and corporations have been trying for at least the last five decades to reduce software development to a mechanistic process, in which a system is understandable solely via its components and subcomponents, which can then be understood and assembled by unskilled labourers. This has failed every time, because by reducing a graph to a DAG or tree, you literally lose information. It's what makes software reuse so difficult, because no one component exists in isolation within a system.
The promise of AI is not that it can build atomic components which can be assembled like my toaster, but rather that it can build complex systems not by ignoring the edges but by managing them. It has not shown this ability at scale yet, and it's not conclusive that current architectures ever will. Saying that LLMs are better than most professional programmers is also trivially false; you do yourself no favours making such outlandish claims.
To tie back into your point about creativity, it's that creativity which allows humans to manage the complexity of systems, their various feedback loops, interactions, and emergent behaviour. It's also what makes this profession broadly worthwhile to its practitioners. Your goal being to reduce it to a mechanistic process is no different from any corporation wishing to replace software engineers with unskilled assembly line workers, and also completely misses the point of why software is difficult to build and why we haven't done that already. Because it's not possible, fundamentally. Of course it's possible AI replaces software developers, but it won't be because of a mechanistic process, but rather because it becomes better at understanding how to navigate these complex phenomena.
This might be beside the point, but I also wish AI boosters such as yourself would disclose any conflicts of interest when it comes to discussing AI. Not in a statement, but legally bound; otherwise it’s worthless. Because you are one of the biggest AI boosters on this platform, and it’s hard to imagine the motivation for spending so much time hardlining a specific narrative just for the love of the game, so to speak.