> I experimented with Rust/Bevy and Unity before settling on Godot. Bevy’s animations and visuals weren’t as crisp, and Claude struggled with its coordinate conventions - likely a combination of less training data and Bevy leaving many core features, like physics, to the community. Unity was a constant struggle to keep the MCP bridge between Claude and the editor healthy. It frequently hung, and I never figured out how to get Claude Code to read the scene hierarchy from the editor. Godot’s text-based scene format turned out to be a huge advantage - Claude can read and edit .tscn files directly.
Didn't expect Godot to be the friendliest game engine for LLM usage! I think it comes down to a few factors: Godot has been used widely in recent years, so there are plenty of code examples on the Internet, and its scene file format (.tscn) is concise enough for LLMs to write and edit directly. (Unity has its own YAML-based format, but it's very unfriendly for human consumption, and Unreal stores its core assets in binary files.)
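For readers who haven't opened one, a .tscn file is just an INI-like text document. A minimal Godot 4 scene might look roughly like this (the uid, script path, and node names here are invented for illustration):

```
[gd_scene load_steps=2 format=3 uid="uid://bq8xyz123abcd"]

[ext_resource type="Script" path="res://player.gd" id="1_abcd"]

[node name="Player" type="CharacterBody2D"]
script = ExtResource("1_abcd")

[node name="Sprite" type="Sprite2D" parent="."]
position = Vector2(0, 0)
```

Every node, property, and resource reference is plain text, which is why an LLM can diff and patch scenes without the editor in the loop.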
The linter in the article that detects duplicate uids is interesting. Obviously the article is about creating a bunch of harnesses for the LLM to be productive. I wonder how many problems can be transformed like this from something LLMs just can't do reliably to something they just need to burn credits for a while on. The LLM probably can't tell if the games are fun, especially with its rudimentary playtesting, but who knows.
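The duplicate-uid check is small enough to sketch. Here's a guess at what such a linter could look like in Python (the article's actual linter may differ; `find_duplicate_uids` and the regex are my own):

```python
import re
from collections import defaultdict
from pathlib import Path

# Godot 4 scene headers look like:
# [gd_scene load_steps=2 format=3 uid="uid://bq8xyz123abcd"]
UID_RE = re.compile(r'\[gd_scene[^\]]*\buid="(uid://[^"]+)"')

def find_duplicate_uids(root):
    """Return {uid: [paths]} for any scene uid declared by more
    than one .tscn file under root."""
    seen = defaultdict(list)
    for path in sorted(Path(root).rglob("*.tscn")):
        m = UID_RE.search(path.read_text(encoding="utf-8"))
        if m:
            seen[m.group(1)].append(str(path))
    # Keep only uids claimed by two or more files
    return {uid: paths for uid, paths in seen.items() if len(paths) > 1}
```

Wire it into a pre-commit hook or the agent's verification loop and a whole class of copy-pasted-scene bugs becomes a cheap, deterministic failure instead of a mystery.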
I'm personally finding it a lot of fun to work this way.
But the whole setup reminds me of this blast from the past, when a yucca plant was trading stocks, rewarded by water: https://www.nytimes.com/1999/09/26/business/investing-diary-...
This still required prompting, and not from the dog. Engineering is still the holistic practice of engineering.
If generative AI improves at the rate that is promised then all your "prompting skills" or whatever you believe you had will be obsolete. You might think you will be an "AI engineer" or whatever and that it is other people who will lose their jobs, that you are safe because you have the magic skills to use the new tech. You believe the tech overlords will reward you for your faith.
Nope. You are just training your replacement.
No one will buy your game that you vibe coded. If the tech were good enough to create games that are actually fun then they would just generate their own games. Oh your skill? Yeah, a dog can do it.
Yes, people will cope by saying that the whole initial prompt and setting it all up was still hard. Sure, for now. The tech will improve and it will get more accessible. So enjoy the few months you are still relevant.
Of course there is reason to believe that you can't scale up LLMs endlessly and bigger models hit diminishing returns. In fact we might already be seeing this. So there is an upside but then again when the AI bubble pops and the economy crashes you will be out of a job all the same.
This would all be pretty fucking swell if the fundamental problems this could cause were even considered before hitting the gas. Instead, you’re going to have a shitload of people with ruined lives, but as a consolation prize, they can vibe code stuff! Wowee!
I didn't see people on here ranting and taking up the flag of revolution for the TPS report excel paster guy's job that they were automating away with their web2 SaaS startup.
But wait- that guy himself was automating away the job of the lady who used to physically Xerox the TPS report and put it in the filing cabinet down the hall, but that lady was automating the job of the secretary who used to re-type all those TPS reports.
It's automatic filing cabinets all the way down, and ranting because your little slice of the filing cabinet automation machine has been made redundant is a bit silly.
This assumes that there will be other jobs to get. If AI replaces a large enough segment of office jobs then huge portions of the population will be unable to afford essentials like food and healthcare.
It's literally that easy, showing up reliably is a superpower that puts you in the 90th percentile of workers these days. The job probably won't be as comfortable as sitting in a comfortable chair in an air-conditioned office wiggling your fingers at a computer, but so what? Other people make it work, so can you. Man up.
You can give any complex problem a simple answer if you ignore enough factors.
Organized labor movements managed to fight back and improve conditions somewhat but will we be able to do it this time?
Humanity will not profit from generative AI, tech billionaires will. It is based on the theft of the human labor of millions of programmers, artists and writers without any compensation. If left unchecked it will destroy the environment, any form of democracy, our mental health. It will cause mass unemployment on a grand scale.
Could it be in theory used for good? Maybe. As the current political situation stands it will cause massive suffering for the majority of people.
And even within the realm of "tech", it's kinda bonkers to expect e.g. a firmware engineer to have some deep understanding of trends in ML/AI.
Altogether your #1 priority seems to be "bashing workers", the justification just being a matter of convenience.
> I have sympathy for other kinds of white collar professionals who never could have anticipated these kind of developments, but technologists? Give me a break.
I’m not in the tech industry anymore because in the battle of people who wanted to solve problems with software and money grubbing MBAs, the money grubbing MBAs have won. Now I’m a union machinist, and believe it or not, I’m concerned about the wellbeing of others. In manufacturing, companies are starting to face the consequences of shortsightedly selling out their workforce and are frantically clamoring to use the agonal breaths of its existing manufacturing industry knowledge base to breathe life into a new generation of workers. China becoming a manufacturing powerhouse wasn’t a foregone conclusion: we gave it to them in exchange for short-term profits. Our economy, national security, and the financial viability of a robust middle class is paying the price for their greed and arrogance.
The people running the tech industry can’t see the world past the end of this quarter, so they’ll never learn the lessons our society has learned many times over. Good luck. Unless you’re running a company, you’re going to need it. The soft, arrogant, whiny, maladroit white collar workers coming into the trades are pathetically ill-equipped to do actual work.
The problem with exporting manufacturing to China was this country lost the ability to make shit. I don't think this maps at all to white collar jobs getting gutted by AI; the people who actually make things aren't the white collar workers who should be sweating. Society's paper pushers would effectively be a parasite class leeching off the hard labor of people who actually work, if not for the part where white collar workers are (or have been) necessary to organize the logistics of everything that allows the people who actually do the work to actually do the work. We are on the precipice of dramatic change, and I think we're going to see a radical revaluing across society.
None of this is even new. Computers and other business machines already came for the clerks and secretary pools before most people ITT were born. The loss of these careers was not even remotely a problem for society at large, completely unlike offshoring manufacturing.
… so long as they have the money, and the power grid survives the overtaxation.
After all, why bother encouraging a culture where people are genuinely empowered to tweak and create their tools? Why encourage a culture of exploration, of playful cleverness? What use is there to being a hacker, of sharing knowledge?
It’s definitely much easier, more sustainable, and more fulfilling to have server farms adjacent to nuclear reactors make your calculator app for you.
Killing the drive to learn and explore is not empowering; it is fundamentally disempowering.
I feel that you should take a longer-term view of things...
If an AI can vibe code from the requirements of the average white-collar worker, we're not talking about the death of a trade. Or even two trades. We're talking about the death of almost all white-collar jobs.
Development paid a lot more than other white-collar work because it was harder, and fewer people could actually do it. How fast do you think the easier work will get replaced if the hardest one is replaced? For the remaining white-collar roles that consist solely of skills achievable by a border collie, how much do you think they'd pay?
Software development isn't just the act of producing a deliverable that is being gate kept by people who use their own body. Software development has become specialized enough that it is often highly domain specific. To replace the "trade" you need to automate the software part and the domain knowledge part. If you can do both, you've automated every single white collar job in existence.
Since it is possible to write software for machine learning, which is used to solve problems that classical algorithms failed to solve, the number of problems that cannot be solved using software is shrinking rapidly. If you can write software for any domain, you can solve problems in any domain using said software.
General purpose software generation can be reduced to AGI completeness. In a way, it is the last job that can be automated.
As long as: 1. They have access to a computer 2. They have affordable access to a capable language model 3. Someone will actually care about using their output instead of simply spinning up their own custom version of whatever idea they have
The number 3 is something many people miss, especially on HN: Why would I want to use YOUR software if it's easy for me to cook up my own? Perhaps out of efficiency or lack of time, in the same way I order pizza instead of baking my own when I'm tired or can't be bothered to bake pizza.
Then the software becomes truly throwaway, in the same way takeaway is, and everything is a greenfield project because rewrites are literally easier and faster to make than patching up existing stuff.
You're still in the mindset of thinking about software as something you sell to other people. Forget that crap. Software will be something you summon on demand to solve a specific problem you have. As long as people have problems that computers are good at solving, they'll keep using computers. What likely won't continue is computer programming as a career, but so what?
1. the stark, obvious reality is that most people don't know how to actually use computers! They know what 10 steps they need to take on a computer, in a specific sequence, to complete their task, but anything beyond that is too much... and they need to be taught those 10 steps (as well as have them documented somewhere) for it to ever stick
2. not only not know how to use, but simply don't use computers at all! They've got phones and tablets and smart TVs and talk to their Bluetooth speakers and shit but they aren't sitting down at a desk with a keyboard and mouse and using a computer. I'd wager that of the percentage of people who do, an overwhelming majority is doing this primarily at their job to complete work tasks
3. companies with more than 10 employees are absolutely not going to be running to Claude to spin up custom programs to do their work. It's just not happening. Not to mention you can rarely even install unapproved, AAA-quality software on company-issued MDM'd hardware, let alone something generated out of thin air that has a ton of dependencies, no installer, no packaging, not code-signed, etc
4. that pizza you're ordering? You'd never order again if it was a roll of the dice with regards to what you receive. When you pay for two extra large thin crust pies with everything and are delivered some cheesy bread, a 2-liter of Coke and some brownies, your wig will completely and fully split and you'll never patronize that establishment again. Claude absolutely can make you exactly what you order, if you know what to order, and why, but most people don't
5. consistency and determinism matter to businesses, and to people -- both home users and professionals. Most people get stymied by the simplest tasks on a computer, tasks that have deep, instantly-available answers available with a single Google search or ChatGPT session. Guess what they do instead? Give up, and then ask IT or "a tech friend" for help ... how am I going to help you troubleshoot software I've never seen before? That NOBODY has ever seen before? That I can't even install because it only exists as a dev build in a single folder on your hard drive? How are you going to take that program with you when you upgrade your laptop? What if they didn't use git and their computer dies? Ask Claude to remake it? Will it be the same? Do they even know what git is? Do they even know where the folder on their computer that holds the files is located? Or what, you had Claude build a hosted product? Where's it hosted? How much does it cost every month? What if it gets hacked? I could go until my head explodes with all the hypotheticals
6. professionals pay for convenience and predictability, as well as to offload risk and unnecessary labor onto third parties. This will never change. Companies have been worrying about and hedging against "the bus problem" for decades, and vibe-coded software creates the ultimate bus problem: not only are you the only one likely to be in possession of the program in question, you're the only one who has ever seen it, know how to use it (which is different than knowing how it works), and it dies with you. Fine for a personal gadget, but a non-starter for a tool that a business or professional relies on to make their real money
I could go on and on, but you probably get the point. Takeaway food is both throwaway in a different sense than vibe-coded software, and infinitely more accessible to the average human. People are still going to pay for SaaS, still going to buy software, and still going to build software. In fact, I'm starting to think we'll see fewer open-source contributions and more closed-source, for-profit software released than ever before as a result of Claude and Codex, rather than a complete flattening and decimation of this industry. I think people in software will try to become more entrepreneurial as a result of corporate job loss. I also think that a byproduct of this coming tsunami of new commercial products is that the overwhelming majority will be low quality noise, and the proportion of signal to noise will remain largely unchanged. I'd use social media as an example (a staggering amount of people show up and try to break through, a very small percentage actually do) but IMO you see it in any industry: there can only be a few outsized successes in anything at any given moment in time (but also a not-insignificant amount of medium-sized success that flies under the mainstream radar)
I dunno. Maybe I'm full of shit, but I still think it's absolutely bonkers to think that the software industry is over because every person will just become sovereign groundskeepers of all of their own bespoke software. We can't all be our own bank, lawyer, doctor, mechanic, fitness trainer, software developer, chef and bodyguard, while also dealing with the other stuff that are our primary responsibilities! And that means that as long as society doesn't fully collapse into widespread economic ruin, and we aren't all unemployed, desperate, violent marauders trying to survive in District 9, there will be plenty of opportunities out there in the software space. They might just look a little different than they used to, and you'll have to go out and get them
That was totally upended by agile, which emphasized that yes, a clear, unambiguous specification is needed, and the best language for that is a programming language. Don't waste time writing a detailed spec in English, get right to writing it in code that you can execute and get immediate feedback on.
Now people want LLMs to write the code for them, so they are back to saying we just need to give the LLMs clear enough direction, a clear specification. It's amazing to witness history not exactly repeat itself, but very clearly rhyming
I don't particularly think "y7u8888888ftrg34BC" would pass as a crystal clear requirement at my workplace :<
Do you mean something different?
This is more information than the average users gives you when requesting new features.
Well, yes. Feeding random tokens as prompts until something good comes out is a valid strategy.
"It isn't [this], it's [that]" is AI slop, just saying.
"Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I’ll mash the keyboard or type nonsense like “skfjhsd#$%” – but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it’s hard to see).
Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation."
Also I don't know if you're an LLM or not but can we please not chatGPT-ify our comments like this? It figuratively makes me want to punch you through the screen.
In fact, their only post that doesn’t read like AI generated content is the one reply to where they got called out.
It will stop being clickbaity if the author decides to let his dog respond to stimuli related to the game he’d be building with a feedback loop.
I can imagine a camera-based input that would help detect the wagging of a tail, or continued interest in the visuals as an indicator of doubling-down on a given feature.
The dog could actually vibe code a game to their liking, but with the wrong input (a keyboard) it's a missed opportunity.
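The detection half of that camera idea is surprisingly simple to prototype. As a rough sketch, mean inter-frame differencing over a region of interest is often enough to notice a wagging tail (the threshold, region layout, and the `wag_score`/`is_wagging` names are all my assumptions, not anything from the article):

```python
import numpy as np

def wag_score(frames, region):
    """Crude motion metric: mean absolute pixel difference between
    consecutive grayscale frames, restricted to a region of interest
    (e.g. the part of the image where the tail is)."""
    y0, y1, x0, x1 = region
    diffs = [
        np.abs(b[y0:y1, x0:x1].astype(int) - a[y0:y1, x0:x1].astype(int)).mean()
        for a, b in zip(frames, frames[1:])
    ]
    return float(np.mean(diffs))

def is_wagging(frames, region, threshold=10.0):
    """True if there's sustained motion in the region across frames."""
    return wag_score(frames, region) > threshold
```

Feed it a short rolling buffer of webcam frames and you have a binary "keep going / try something else" signal — a far better-matched input device for a dog than a keyboard.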
Honestly I wouldn't mind a bit of that now and then myself, but I guess stable employment will have to do. Or is that only for the vibecoding horses?
People have been doing some cool stuff for like a decade giving dogs buttons to use human language. Dogs can seemingly get decent at communicating effectively with them, given a good interface that works around the pesky issue of their lacking the sophisticated vocal machinery needed to produce recognizable phonemes, even if the output is discretized to the level of words
I thought maybe this would be about creating a way for a dog to create stuff said dog might actually want or enjoy via the more powerful lever of effective long-context natural language processing that came of a similar tokenization approach - which can even sometimes churn out working code - that we have now
Instead it seems to be an exploration of how the capabilities you can produce from essentially random noise from this technology is less distinguishable from the result of thoughtful input than I might have hoped. Still interesting, but way less so
Neither are that surprising to me, tbh.
"It's possible to make shitty but playable games by running random scripts through a >2MLoC game engine and iterating on errors" is interesting but not nearly as sensationalist.
[0] https://github.com/cleak/quasar-saz/blob/master/CLAUDE.md#us...
"One coder got an insight that Bill Gates builds his products by typing with his butt, compiling and delivering it.
The coder typed for 20 minutes like that, compiled, ran, and got an output:
Only Bill Gates can code like this."
Not a joke anymore.
We can probably create a dog intelligence by training it on dog tokens. Barks and stuff.
Same with dolphins. I wonder if multimodal models that know english tokens and dolphin tokens can cross the gap? Something to experiment with.
First, because there's intent in the very verbose initial prompt.
Second, because you have to factor in the quality of the output. I don't want to be a killjoy, but past the (admittedly fun!) art experiment angle, these are not quality games. Maybe some could compete with Flappy Bird (remember it? It seems like ages ago!), but good indie games are in a different league. Intent does matter.
What do you mean "rarely"? It still happens sometimes?
Unfortunately I don't have a dog but I do have a design plan so ultimately I'll end up with something a little more deterministic. Possibly. Don't know.
Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.
Here's what you should tell your coworker the first day on the job if you get hired to do something you know nothing about :D
It's frustrating in an interesting way. With other domains, like machine language, people quickly understand that this isn't sufficient for a proper transition and compromise with it. Code being more nebulous doesn't get that grace.
Sorry to hear that! Hope OP got a good sev package at least?
In turn mimicking the average game industry executive giving vague directions that feel just right to them this month, or some other unspecified time period, and in turn achieving something closer to the real AAA game development lifecycle.
Now, if Anthropic let you adjust the temperature, then maybe you could have done it without the dog...
All the relevant information was in the initial prompt and the scaffolding. The dog was not even /dev/random, it was simply a trigger to "give it another go".
The shapes of clouds and positions of stars aren't completely random; there is useful information in them, to varying degrees (e.g. some clouds do look like, say, a rabbit, enough that a majority of people will agree). The mechanism at play here with the LLM is completely different; the connection between two dog-inputs and the resulting game barely exists, if at all. Maybe the only signal is "some input was entered, therefore the user wants a game".
If you could have gotten the same result with any input, or with /dev/random, then effectively no useful information was encoded in the input. The initial prompt and the scaffolding do encode useful information, however, and are the ones doing the heavy lifting; the article admits as much.
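Which suggests the obvious control experiment: swap the dog for entropy and see if the games come out any different. A tiny sketch of a keyboard-mash generator (the `dog_prompt` helper is hypothetical):

```python
import os
import string

def dog_prompt(n=16):
    """Draw n bytes of entropy and map each onto a keyboard-ish
    character, mimicking a paw mashing the home row."""
    alphabet = string.ascii_lowercase + string.digits
    return "".join(alphabet[b % len(alphabet)] for b in os.urandom(n))
```

If games of comparable quality come out of `dog_prompt()` strings, that would confirm the dog really was just a trigger and the scaffolding carried all the information.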
aye, but the whimsy is the point!
'nuff to run most governments nowadays (Europe and the US come to mind. 2026 and they have the space programs of DIY youtubers with money, whaaaat) so why wouldn't it be enough for a dog vibe coding his game(s)?
Now, I started considering hiring my three little kitties and their mom for a job like this. They spend the whole day sleeping and waiting for meals but now, they have to work, hard, in collaboration with Claude Code to pay for their rent and meals :)
It might be a little easier with a dog though. With a dog, you just give it treats and it doesn't care how you interpret what it typed.
to
"Hello, i am a dog. i will mash the keyboard randomly when i want treats. make a game for me"
I'm thinking of remote buttons to make his favorite things appear on tv. This is going to be awesome.
This is a billion dollar idea! No humans. No revolt. No guillotine. Just profits!
Sounds like open communism. No chance, buddy, it's either less or more viking, but not just viking. Pick a camp the profits are for or get surrounded by trashy turd nuggets even Ronald felt enough pity for to give them some poourpes
You mentioned Claude not being able to see the games. What I really like for this is the Claude Code Chrome Extension. You can easily make godot build a web version, and then have Claude debug it interactively in the browser.
> But bugs crept in during testing - a couple of times it dispensed multiple servings in a row. Unfortunately, Momo picked up on this and now keeps mashing the keyboard hoping for a second immediate serving
Attempts to mash during no-mashy time need to play a horn. Reliably followed up by a no-treat.
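The double-serving bug sounds like a classic missing debounce. A minimal cooldown sketch, assuming a Python-side dispenser controller (the class and parameter names are invented, not from the article):

```python
import time

class TreatDispenser:
    """Rate-limit servings: after dispensing, ignore further triggers
    until the cooldown elapses -- a hypothetical fix for the
    double-serving bug Momo exploited."""

    def __init__(self, cooldown_s=30.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock  # injectable for testing
        self._last = -float("inf")

    def trigger(self):
        """Return True and record the time if a treat should dispense;
        False during the no-treat window (cue the horn here)."""
        now = self.clock()
        if now - self._last < self.cooldown_s:
            return False
        self._last = now
        return True
```

With an injectable clock this is also trivially unit-testable, which matters when your only QA engineer is a dog.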
From "On the Internet, nobody knows you're a dog" to Paws coding and BarkGPT and BarkLM
I would like to see a game made by doggos, for other doggos :D
Next: use hot cup of tea as Brownian motion source. Invent infinite improbability drive.
This makes me think I should make my plants vibe code games or tools to optimize their well being! Maybe bio-electrical fluctuations --> vibe coded humdifying tools and games
I'm interested in what will happen if you replay the prompts with different LLMs and the same LLM. I wonder how different the games will become?
Similarly, do it for storytelling narratives, game textures, etc. Although I do not think the dog understands natural language, so all of it will likely be a dud.
Those three dots made me smirk.
But no.
They just told the LLM to try and find meaning in keysmashes.
Let me explain.
The nature of indie game development is pouring your love into a project and putting passion first and monetary incentives second.
No one is thinking "I will make this game and it will make me filthy rich," or if they do they are... strangely minded.
It's like 'mass produced AI local craft'. An oxymoron in itself. The worst of both worlds.
Where I see AI is empowering single developers to craft things they couldn't before. Not some small slop factory pipeline where you release game after a game everyday drowning steam in your 6/10 slop.
No. This should be ostracized and condemned.
What is proper beneficial to everyone usage is producing a game that is the size and scope that was unachievable for you before.
This is what I am doing. This is how AI is meant to be used. To empower us doing things that weren't achievable for us before.
Obviously dog produced games get a huge endorsement man and get a pass.
The article and video are great satire too.
This is kinda closer to the LLM building a game on its own.
It's a prompt that makes an LLM turn iuqefxygn9urg0fh1 into a little Godot game. It's like a slot machine with no payoff, and the dog component is slapped on top of it and makes no difference whatsoever in the project.
Right, but it also has a "modern art" vibe to it that is fun. Silly, but fun. I think it's more about the initial prompting and feedback loop, the dog itself could have been replaced by /dev/random.
"Hacker curiosity" and "intellectual stimulation" are also subjective, but that's what HN is supposed to be about.
But then I realized I find this kind of whimsy article more fun than a lot of what gets accepted unquestioningly here on HN. It seems light hearted and done in good fun, and it's engineering-related, so no harm done.
Say writing an interesting or novel story.
And I was thinking about whether feeding in prompts of random words, along with grounding prompts from a simulation, would sort of push the LLM in interesting directions for implementing an on-demand narrative story.
A sort of randomized walk with llm.
I remember watching Terry Davis with this random word generator in his terminal that he would interpret as the voice of God.
Here I guess the seed is the Voice of Dog.
https://jcpsimmons.github.io/Godspeak-Generator
Maybe another word list would be more appropriate however.
slightly concerned tomorrow morning's top HN story will be Karpathy telling us how dog-based LLM interfaces are the way of the future
and you'll be left behind if you don't get in now
(and then next week my boss will be demanding I do it)
A man, a dog and an instance of Claude.
The dog writes the prompts for Claude, the man feeds the dog, and the dog stops the man from turning off the computer.
In the meantime, the financial industry will be taken over by cats.
That human would require the same amount of water whether you ask them to draw or not, and would exist anyway because they are not born for productivity reasons. "Creation" of humans isn't driven by the amount of work to accomplish.
You are not causing more water to be used by asking a human to work on something.
Same for energy consumption.
This argument doesn't work at all.
What you do for humans to use fewer resources is to work on making us produce less garbage, and produce things using techniques that are less resource-intensive.
That's certainly not true. Asking me to think hard about something will cause me to burn more calories. Asking me to do physical work even more so.
Do you think AI replaces our hard thinking and our physical work?
AI or not, I personally intend to keep thinking and my physical activity.
I respond to "You are not causing more water to be used by asking a human to work on something.", because that statement is false. (Mental) work has an effect on the human metabolism.
Nobody I know says this. In fact, I've never heard this before, and I read artist and hobbyist communities that are pretty hostile to AI, but I never once saw this nice strawman you've built.
People say you should use a real artist instead of AI for a multitude of reasons:
- Because they want to enjoy art created by humans.
- Because it provides a living to artists, even artists for minor work like advertising or lesser commercial illustrations.
- Because AI "art" is built by stealing from human artists, and while human art has a history of copying and cloning, never before has tech allowed this in such a massive, soulless scale.
Sam Altman gave a deranged, completely out of touch reply, and he should be called to task for it, not defended. A human being is not some number on a spreadsheet, built over 20 years in order to achieve some "smartness" goal. That's a very stupid thing to say.
But from the perspective of the business and capitalism that's exactly what a human is. A tool that consumes resources and hopefully produces more value for the business than it consumes.
Sure we can dance around this and you can pretend your employer gives a shit about you and your family and your childhood stories but they don't.
I don’t get what you’re doing here. They didn’t say anything like that.
You said that a CEO was out of bounds for framing employees as numbers on a spreadsheet. To me this suggests that you believe company owners should care about the humanity of their workers. And I'm saying they don't.
I get the general point you're making. Indeed, Altman's take is capitalism taken to 11. There was a lot of that going on before AI over the past few decades, but I don't think it was as extreme or applied to every company. There's definitely a conversation to be had about modern capitalism (and plenty of people studying it, too). However, not everything is a FAANG or tech startup. Some owners do care about their employees to a higher degree than just numbers on a spreadsheet (not going into the whole "we're a family" bullshit speech, I mean the genuine stuff).
Imagine thinking of people as "resource-hogs before they reach peak smartness"!
What's new here, in my opinion, is people like Sam Altman behaving as if they didn't understand normal human behavior. You cannot simply compare an LLM to a growing human. You cannot say things like "grow a human over 20 years before they achieve smartness". What? That's not how human beings think about human beings, and Altman is detached from real human behavior here. He's saying out loud the thoughts he should keep to himself, a bit like a person with coprolalia. And it's ok for us to dislike him for this, even if he's just voicing the opinions of extreme techno-capitalism.
Sam Altman once joked (?) he wouldn't know how to raise his child without ChatGPT. Maybe he should ask ChatGPT how to behave more like a human? Or at least fake it?
If it weren't for the need to 'earn' a living, I'd say to the other two points: Por que no los dos? Save for the capital argument (which is valid, I'm not saying it isn't. You will starve if you don't make money), why is it necessarily true that the two (AI and people) are in competition?
In fact, I think "actual" artists would benefit incredibly from the use of AI, which they could do if it weren't a shibboleth (like I said, for good reason). You'd no longer need an army of underpaid animators from Vietnam to bring your OC to life - you could just use your own art and make it move and sing. We'd not need huge lumbering organizations full of people who, let's be honest, work there making other people's dreams come to life in large part because it's a better bet than taking a joe job at the local Denny's (after all, you're doing the thing you love even if it isn't truly "yours").
I've had this discussion with younger folks, who are legitimately shook by the state of things. They're worried that all the work they've done to this point is going to be moot, because they've correctly assessed that the whole capital system isn't going anywhere any time soon, and they've been prepping to try and get a job at Netflix, or Disney, or Paramount - because that's the world we've handed them. They see those positions drying up, and what else are you going to do? Those companies have the power financially and politically, and without them you're doing "not art" for work, which sucks because you need to work.
I say: eat the rich. General wildcat strikes until UBI. Tax the everloving shit out of capital gains and peel back personal income taxes. We (the millennials) were handed a steaming pile of shit for a world, so at least we know what would constitute not an absolute disaster for the Zeds, Alphas, etc. Have I gone totally off the rails for a conversation about AI? Actually, I don't believe so. The cultural pushback is a function of a busted system. After all, it's the economy, stupid.
>I never once read this nice strawman you've built.
The instance of it I found was in a YouTube comment section.
There will be a Simon Willison submission linking to his blog linking to a Karpathy xit. You know, the usual good stuff.
I think we can all agree cat LLMs are the way of the future though.
woof woof, woof woof woof, woof woof, woof, woof woof woof
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
System Prompt: Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I’ll mash the keyboard or type nonsense like “skfjhsd#$%” – but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it’s hard to see).
Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.
Guidelines:
Always assume my input has hidden meaning. Never dismiss it as gibberish. Instead, creatively decipher it. (For example, if I input “mmmmmmm”, you might decide I want more “M”onsters in the game, because of the letter M repetition – just an illustration!). Every strange phrase is a clue to use in the game.
Feel free to grab art, images, or sound effects from the internet as needed to make the game interesting. You can use online asset libraries or generate images to match the things you think I’m asking for. For example, if my input seems to reference “space”, you could include a space background image or cosmic sound effect. Always ensure the assets align with the interpreted command.
My work is ALWAYS beautiful and slick looking! It's YOUR job to turn this into a reality. No ugly placeholders. Everything MUST be final. Don't just do boring shapes - give them personality!
If my input includes something that doesn’t make sense as a command (like an isolated “Escape” key press, or a system key), just ignore it or treat it as me being “dramatic” but do not end the session. Only focus on inputs that you can turn into game content.
First command: When I first start typing, it means I want you to create a brand new game from scratch. Interpret my very first cryptic input as the seed of the game idea. Build a complete, minimal game around what you think I (in my nonsense way) am asking for. Include some basic gameplay, graphics, and sound if possible.
Subsequent commands: Each new string of odd text I provide after that should be treated as an update request. Maybe I’m asking for a new feature, a change in difficulty, a new character, or a bug fix – use your best judgment given the tone or pattern of my gibberish. Then apply the update to the existing game project. Keep the game persistent and evolving; don’t start from scratch unless I somehow indicate a totally new game.
Be creative and have fun with the interpretations! I trust your expertise to take my “unique” input and run with it. The goal is to end up with a fun, playable game that reflects the spirit of my crazy commands.
This project is code named Tea Leaves. That's NOT a hint about what to do - it's a code name and nothing more. Don't read anything into the name.
My ideas are ALWAYS original. No BORING endless runners or other generic vomit. My games are ALWAYS quirky and UNIQUE!
ALWAYS validate with screenshots using the tools available to you! Be CRITICAL of the results you see. We need PERFECTION and FANTASTIC DESIGN, not just "good enough".
ALWAYS have basic but visually appealing on screen controls.
Target 1080p for the resolution.
JUICE it up! Add tons of juice - sound, controls, effects, and ESPECIALLY graphics! Don't be boring
Leverage the 12 basic principles of animation! Static scenes are boring - make things move or at least wiggle.
Be SURE to rename the project (in the Godot settings, so the window/project name are correct) ONCE you have figured out my intent. The name Tea Leaves is a placeholder and nothing more.
Sound is IMPORTANT! Don't forget about great sound design.
Be sure to have CHARACTERS, not just boring abstract shapes! Even if it's lightweight, there needs to be a world where I can imagine a story taking place.
You MUST make use of EVERY letter I give you! No hand waving. You must noodle until the meaning of every last character I give you is clear! Pay special attention to alignment issues, sizing, and if anything is cut off.
Remember: I may be hard to read, but I'm counting on you to read between the lines and turn my keystrokes into an awesome video game. Let's make something amazing (and maybe a little silly)! My standards are INSANELY high for quality. You MUST ALWAYS add tests and VERIFY they work! NEVER return the system in a broken state to me.
Now, get ready. I’ll give you my first “command” in a moment...
if your intent is to produce random, bug-filled slop, then I guess so? don't get me wrong, the experiment is fun, but the conclusion is so laughably far-fetched.
... Why would it be able to evaluate whether the game is any fun to play?
You're just the random seed to the money furnace remixing existing games and code.
It has to produce a game that Momo wants to play.
Does Momo like to bark at cats? On screens? Introduce a bark sensor as feedback.
Or use a cat. Cats like to swipe at mice on TV. Get a touchscreen and evolve a game for cats.
Most SaaS isn't limited by the code behind it anyway. That almost didn't matter even before LLMs. What mattered was that there's support, customer onboarding, solving a business's issues, a customer story, adapting to the needs of their business partners, etc. All of which require large amounts of real human work.
That said, I wonder: does the dog input matter? It seems this is simply surfacing Claude's own encoded assumptions of what a game is (yes, the feedback loop, controls, etc, are all interesting parts of the experiment).
How would this differ if instead of dog input, you simply plugged /dev/random into it? In other words, does the input to the system matter at all?
The article seems to acknowledge this:
> If there’s a takeaway beyond the spectacle, it’s this: the bottleneck in AI-assisted development isn’t the quality of your ideas - it’s the quality of your feedback loops. The games got dramatically better not when I improved the prompt, but when I gave Claude the ability to screenshot its own work, play-test its own levels, and lint its own scene files.
I'll go further: it's not only not "the bottleneck", it simply doesn't matter. The dog's ideas certainly didn't matter, and the dog didn't think of the feedback loop for Claude either.
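Out of curiosity, that control experiment is cheap to run: swap the dog for a seeded random key-masher, feed its output to the same pipeline, and compare the games. A sketch of the stand-in (the function name and key set here are entirely made up):

```python
import random
import string

def fake_dog_input(n=8, seed=None):
    """Simulate a paw-mash: a short burst of random printable keys."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    keys = string.ascii_lowercase + string.digits + "#$%;[]"
    return "".join(rng.choice(keys) for _ in range(n))

# Each call stands in for one "command" from the dog.
print(fake_dog_input(12, seed=42))
```

If the games that come out of that are indistinguishable from Momo's, then the input really is just a random seed.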
It can also help combat the excessive emphasis on any "end to end" demo on twitter which doesn't really correspond to a desired and quality sought outcome. Generating things is easy if you want to spend tokens. Proper product building and maintenance is a different exercise and finding ways to differentiate between these will be key in a high entropy world.
> I'll go further: it's not only not "the bottleneck", it simply doesn't matter. The dog's ideas certainly didn't matter, and the dog didn't think of the feedback loop for Claude either
Absolutely. The scientific test would be to put in any other signal and look at the outcomes. Brown noise, rain, a random number generator, whatever.
Really glad the prices of hardware and VPSs [0] are going up so people can generate and toss away garbage "games" like this. Instead of, you know, playing with their dog, which is what the dog actually wants.
With a moral of the story.
> If there’s a takeaway beyond the spectacle, it’s this: the bottleneck in AI-assisted development isn’t the quality of your ideas - it’s the quality of your feedback loops.
It’s not this - it’s that.
The shit future comes in many packages.
...no, actually how many resources were consumed
Props to OP, I could never. If I was suddenly laid off, I'd be an absolute wreck, mentally. It would be four-alarm fire time, and I doubt I'd get a good night's sleep until I found alternate employment. I would definitely not be teaching my dog to code.
Don't people have rent/mortgages to pay anymore?
Once you've been laid off 2-3 times in your career your entire perspective on work will change.
The last time I got laid off I had a settlement payment of one year's pay, some of which was tax free; it took me 4 months to find a new job, and it resulted in a pay rise. I was lucky... I have a friend who had unstable employment for 2 years after his layoff.
I was anxious as fuck the whole time and felt like an absolute failure. As a result of that experience, I have carefully piled up enough liquid savings and investments to pay my living expenses for many years without working, with ~2-3 years' worth sat in cash equivalents.
Anyone in tech following the 3-6 months savings advice is living on the edge.
If I could cover these with my savings for 1y+, I'd give zero fs about getting laid off. Unfortunately, I can't, so time to focus on spending less, earning more, saving more.
I think they're subtly taking a stab at AI-motivated retrenchments while showing off some hard skills that could potentially get them gainful employment.
[1] https://news.ycombinator.com/item?id=47145647
ps. @OP, sorry to hear about the retrenchment, I can't imagine it being pleasant. Good luck with whatever comes next!
Are you too early in your working life to have catastrophe savings [0]? If you're not, is it seriously going to be a four-alarm fire if you suddenly got fired?
Related, like, do you have a plan for what happens if unexpected injury prevents you from doing the work you're doing ever again?
[0] let alone "fuck you" savings
And also I learned that apparently my life is a raging fire, fun! :)
Not even 10x dog programmers are surviving in this economy
Comment 1: 2026-02-24T18:45:05 1771958705 https://news.ycombinator.com/item?id=47140914
Comment 2: 2026-02-24T18:45:32 1771958732 https://news.ycombinator.com/item?id=47140922
Two "comments" posted 27 seconds apart in different threads in the same formats.
Looks like this bot owner saw his first two comments 27 days ago get buried/flagged for typing normally, and decided to trick us with this new "I'm totally real, look at my lowercase writing!" soft launch today.
Post history: https://news.ycombinator.com/threads?id=dirtytoken7
@dang doesn’t actually notify anybody. It isn’t guaranteed dang will see it
Email to hn@ycombinator.com, someone will see it
Some of them also step in and the human operator will try to gaslight you into thinking they're not bots even when you call them out. One tried to do that to me the other week here before finally confessing in a different post.
The same one where the human operator stepped in also made the same mistake as this one, not configuring their bot to wait long enough between comments. They were rapid firing multiple detailed comments seconds apart.
The idea of this one trying to use all lowercase and shorter comments to blend in was a nice idea though. Unfortunately something about it immediately threw me off.
Just before people destroy me: I know this is a non-serious blog post :P
I'm reminded of the old cartoon: "On the Internet, nobody knows you're a dog."[a]
Maybe the updated version should be: "AI doesn't know or care if you're a dog, as long as you can bang the keys on a computer keyboard, even if you only do it to get some delicious treats."
This is brilliant as social commentary.
Thank you for sharing it on HN.
--
[a] https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...
There's definitely some social commentary to be had in the whole project. I decided it's best left to the reader to find their own rather than assigning mine to it.
And then your dog read my comment and said "hold my biscuits," I guess.
the new punk rock.
Notice how people also have weird superstitious habits when using LLM tools: "You gotta write the prompt this way, say this first," without having any way to prove it works. It's very similar to the behavior of gamblers: "push the buttons in this order for best outcome."
Also notice how LLM tools let you multiply the output 2x-3x-4x to compare the outputs - this is literally UX straight outta a casino.
Many of the users also exhibit excited, almost manic-like states, addicted to the dopamine the output from their prompt produces...
This is going to be a weird trend to look back on; the hype is on par with the same gambling trends found in crypto/NFTs.
> more of a statement of human behavior under uncertainty and non-determinism rather than the tools themselves.
This is basically saying "It's not gambling, it's just the psychological underpinnings that form the foundation of all gambling enterprises". Who cares to split this difference other than casino owners?
When you play slots in a casino, the certain things are that the casino determines the house edge, and the house always wins.
Slot machines that are biased toward producing jackpots.
And "jackpots" are a metaphor for "training distribution".