>Kent Pitman
He left out Guessing Animals!
https://en.wikipedia.org/wiki/Kent_Pitman
>While in high school, he saw output from one of the guess the animal pseudo-artificial intelligence (AI) games then popular. He considered implementing a version of the program in BASIC, but once at the Massachusetts Institute of Technology (MIT), instead he implemented it in several dialects of Lisp, including Maclisp.
Kent Pitman's Lisp Eliza from MIT-AI's ITS History Project (sites.google.com)
https://news.ycombinator.com/item?id=39373567
https://sites.google.com/view/elizagen-org
https://climatejustice.social/@kentpitman/111236824217096297
https://web.archive.org/web/20131102031307/http://open.salon...
This, together with grand claims that obviously don't hold up in reality, does make you an AI advocate no matter how much you dislike the label.
If your comment were more measured and had a nuanced view, then I'd understand wanting to push back on it. But then you also say stuff like "Even a legacy code base can be poured into an LLM, which will grok it instantly", so no wonder others see you as an AI advocate.
Did you catch all the security holes while you were reviewing it, or did you leave those to the machine as well?
Which LLM can read a whole code base? Embeddings do not count.
Before the incessant AI hype it was crypto, and before that it was JavaScript frameworks and before that it was ...
But the force-multiplier effects of LLMs are not to be denied, even if you are that kind of hacker. Eric S. Raymond doesn't even write code by hand anymore; he has ChatGPT do everything. And he's produced more correct code faster with LLMs than he ever did by hand, so now he's one of those saying "you're not a real software engineer if you don't use these tools". With the latest frontier models, he's probably right. Your puny human brain is not going to keep pace with other developers using LLMs, which is going to make contributing to open source projects more difficult unless you too are using LLMs. And open source projects which forbid LLM use are going to get lapped by those which allow it. This will probably be the next major Linux development after Rust: the remaining C code base may well be lifted into Rust by ChatGPT, after which contributing kernel code in C will be forbidden throughout the entire project. Won't that be a better world!
I’m comfortable declaring that macros are not the most powerful thing about Lisp; the concept of an environment is. Even in 2026, many languages implement the idea of evaluating code and making it immediately available, but nothing is like Lisp.
Lower-level programming languages today all still require compilation. Lisp is one of the few I’ve found where you can eval code and it’s immediately usable, and probably the only one that really relies heavily on REPL-driven development.
Env+REPL, imo, is the true power, still far ahead of other languages: I can explore the memory of my program while it is running, change the code, and see the changes in real time.
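To make that concrete, here is a minimal sketch of the kind of REPL session I mean (the function name is just illustrative):

    ;; define a function and call it in the running image
    (defun greet (name)
      (format nil "Hello, ~a" name))

    (greet "world")   ; => "Hello, world"

    ;; poke at live state without restarting anything
    (describe 'greet)

    ;; redefine on the fly; callers pick up the new definition immediately
    (defun greet (name)
      (format nil "Hi there, ~a!" name))

    (greet "world")   ; => "Hi there, world!"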
The issue is that CL is old, and Clojure would be so close to perfect if it weren’t for Java. Clojure replaces Java, not CL, and this is both its strength and its weakness.
Also, your Lisp will always behave exactly as you intended rather than hallucinate its way to weird destinations.
What they can certainly do is iterate with a listener, with you acting as a crude cut-and-paste proxy. It will happily give you forms to shove into a REPL and will process their results. I’ve done it, in CL. I’ve seen it work. It made some very interesting requests.
I’ve seen the LLM iterate, for example, with source code by running it, adding logging, running it again, processing the new log messages, and cycling through that, unassisted, until it found its own “aha” and fixed a problem.
What difference does it make whether it’s talking to a shell or a CL listener? It’s not like it cares. Again, the mechanics of hooking up an LLM to a listener directly, I don’t know. I haven’t dabbled enough in that space to matter. But that’s a me problem, not an LLM problem.
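FWIW, the plumbing itself needn’t be much. Here’s a toy sketch in CL; LLM-NEXT-FORM is a made-up placeholder for whatever call would fetch the model’s next suggested form, not a real API:

    ;; toy read-eval loop driven by model output instead of a human typist;
    ;; LLM-NEXT-FORM is hypothetical: imagine it returning the model's next
    ;; suggested s-expression as a string, or NIL when the session is done
    (loop for reply = (llm-next-form)
          while reply
          do (let ((form (read-from-string reply)))
               (print (eval form))))  ; evaluating untrusted forms is obviously risky

Feed what PRINT emits back to the model and the human cut-and-paste step disappears.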
As for hallucinations, I believe those are like version 0 of the thing we call lateral thinking and creativity when humans manifest it. Hallucinations can be controlled and corrected for. And again—you really need to spend some time with the paid version of a frontier model because it is fundamentally different from what you've been conditioned to expect from generative AI. It is now analyzing and reasoning about code and coming back with good solutions to the problems you pose it.
It is NOT reasoning about code. It's a glorified autocomplete that wastes energy. Applying the word "reasoning" to it is anthropomorphization.
And calling hallucinations "lateral thinking" is a fucking stretch.
"Let's use tool `foo` with flag `-b`" even if the man page doesn't even mention said flag.
Sure, they might be able to create numerous iterations of containers, test them, burn resources... but that is literally a thousand monkeys smashing their heads on typewriters to crank out 4chan posts.
FWIW, I also think performant languages like Rust will gain way more prominence. Their main downside is that they’re more “involved” to write. But they’re fast and have good type systems. If humans aren’t writing code directly anymore, would a language being simpler or cleverer to read and write ultimately matter? Why would you ask a model to write your project in Python, for instance? If only a model will ever interact with the code, the choice of language will be purely functional. I know we’re not fully there yet, but the latest models like Opus 4.6 are extremely good at reasoning and often one-shot solutions.
Going back to lower level languages isn’t completely out of the picture, but models have to get way better and require way less intervention for that to happen.
I used to appreciate Lisp for the enhanced effectiveness it granted to the unaided human programmer. It used to be one of the main reasons I used the language.
But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
And no, LLMs are doing more than just generating text, spewing nonsense into the void. They are solving problems. Try spending some time with Claude Opus 4.6 or ChatGPT 5.3. Give it a real problem to chew on. Watch it explain what's going on and spit out the answer.
You are working on the assumption that humans don't ever need to look at the code again. At this point in time, that is not true.
The trajectory over the last 3 years does not lead me to believe that it will be true in the future.
But, let's assume that in some future it is true: if that is the case, then Lisp is a better representation than those other languages for LLMs to program in; after all, why have the LLMs write in JavaScript (or Java, or Rust, or whatever), which a compiler parses into an AST, which then gets lowered into machine code?
Much better to program in the AST itself.
IOW, why program in an intermediate language like JS, Java, Rust, etc., when you can program in the lowered language?
For humans, JS, Java, or Rust lets us verbosely describe the AST in terms we can understand; however, the more compact AST is unarguably better suited to the way LLMs work (token prediction).
So, in a world where all code is written by LLMs, using a verbose intermediate language is not going to happen unless the prompter specifically forces one.
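A toy example of the collapse I mean (purely illustrative):

    ;; JavaScript source, as a human would write it:
    ;;   function add(a, b) { return a + b; }
    ;; roughly the tree a parser produces from it:
    ;;   FunctionDecl(add, [a, b], Return(BinOp(+, a, b)))
    ;; the equivalent Lisp form already *is* that tree, written out directly:
    (defun add (a b)
      (+ a b))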
Everything changed in November of 2025 with Opus 4.5, and with GPT 5.2 a short time later. StrongDM is now building out complex systems with zero human intervention. Again, stop and actually use these models first, then engage in discussion about what they can and can't do.
> But, let's assume that in some future it is true: if that is the case, then Lisp is a better representation than those other languages for LLMs to program in; after all, why have the LLMs write in JavaScript (or Java, or Rust, or whatever), which a compiler parses into an AST, which then gets lowered into machine code?
That's your human brain thinking it knows better. The "bitter lesson" of AI is that more data = better performance, and even if you try to build a system that encapsulates human-brain common sense, it will be trounced by a system simply trained on more data.
There is vastly, vastly more training data for JavaScript, Java, and Rust than there is for Lisp. So, in the real world, LLMs perform better with those. Unlike us, they don't give a shit about notation. All forms of token streams look alike to them, whether they involve a lot of () or a lot of {;}.
I feel you glossed over what I was saying.
Let me try to rephrase: if we ever get to a future where humans are not needed to look at or maintain code again, all the training data would be LLM-generated.
In that case, the ideal language for representing logic in programming is still going to be a Lisp-like one.