Think, for instance, of the Debian package configuration dialogs -- they're far more comfortable than the same questions without a TUI, and they still work over a serial console if you have to use one.
For tools like the various kinds of "top", there are many tools you can use to the same end, and you can intentionally pick one that draws CPU graphs over one that just displays a number. Graphs are much easier to interpret than a column of numbers.
In many cases they're the optimal choice given some constraint -- like the desire to have minimal dependencies, working over SSH, and being usable without breaking the flow. Yeah, you could make a tunnel to a tool that runs a local webserver and delivers graphs by HTTP, but the ergonomics of that are terrible.
TUIs which are just TUI views of data you can get otherwise are fine; TUIs which are the only way to interact with something... less so.
The thing is, Windows 98 let you throw up an HTML window with almost zero overhead (the OS used the same libraries), and JavaScript could easily get data from another process via COM etc.
Now like 25 years later, apparently our choices are shipping bespoke copies of Chrome and Node, OR making shit work on an emulated 1981 DEC terminal. Lack of vision is exactly right.
.NET Core => Avalonia UI
Java => JavaFX

I'd really want explicit UIs from 2000, but in the meantime TUIs feel like an improvement.
The whole benefit of a text-based interface is that search, filtering, and transformation are always available and completely independent of the running program.
I can see the visual-discoverability advantage of GUIs, but in terms of visual layout, GUIs and TUIs are on par; the difference is rather in the rendering mechanism: pixel- vs. character-based.
I agree that they can get very clunky for non-text-based tasks, or anything where you actually need custom text formatting.
You'd think that, but you'd be wrong. Case in point: everything from Emacs/Vim and the Borland IDEs to Claude, plus all kinds of handy utils, from mc and htop to mutt.
> They flatten the structure of a UI under a character stream. You’re forced to use it exactly the way it was designed and no different. Modern GUIs, even web pages too, expose enough structure to the OS to let you use it more freely
That's not necessarily bad. Not everything has to be open ended.
> That's not necessarily bad. Not everything has to be open ended.
I think it is necessarily bad and everything should be open ended. Bad in the sense of low quality, but if we’re talking about critical accessibility (someone is unable to use your application at all), morally bad too.
In many ways, GUI was developed as the natural evolution of TUI. X server, with its client-server architecture, is meant to allow you to interact with remote sessions via "casted" GUI rather than a terminal.
Countless engineers spent many man-hours to develop theories and frameworks for creating GUI for a reason.
TUI just got the nostalgia "coolness".
How many people eat microwave meals? How many eat gourmet Michelin star dishes?
I don't care "how many use VSCode". My argument is that Emacs/Vim have great, well-loved TUIs, and they are used by a huge number of the most respected coders in the industry. Whether a million React jockeys use VSCode doesn't negate this.
> Countless engineers spent many man-hours to develop theories and frameworks for creating GUI for a reason.

Yes, it sells to the masses. Countless food industry scientists spend many man-hours to develop detrimental ultra-processed crap for a reason too.
You'd be surprised. Most people can't eat anything adventurous or out of the junk-food category with some comfort food staples thrown in.
They... are not great. They provide the absolute bare minimum of a UI.
A UI, even a terminal one, is more than a couple of boxes with text in them. Unfortunately, actual great TUIs more or less died in the 1990s. You can google Turbo Vision for examples.
Perhaps I'm in some sort of "TUI bubble", but I'd bet good money that Emacs/Vim users outnumber VSCode users by an order of magnitude. But maybe I'm just surrounded by *nix devs.
https://survey.stackoverflow.co/2025/technology#1-dev-id-es
Note that respondents may use multiple tools, but around 76% answered VSCode, whereas 24% answered Vim.
So, I’d wager you’re indeed in a *nix bubble.
Care to bet even those 24% vim devs code circles around the VSCode ones?
https://meta.stackoverflow.com/q/437921/1593077
Not that your conclusion is necessarily wrong of course.
If you need to support screen readers, your UI would have to be totally different: You should allow the user to snapshot the system state and navigate it. Generate succinct summary text to impart the same sense that a dashboard would to a visual user. "Normal: All systems OK" "Critical: Boeing RPA servers down since 2:17PM PDT and 54 others". Once you've done this work, a CLI tool could expose this just as screen-readable:
$ cli status
all systems OK, last outage resolved 2:27 PDT
$ cli topjob cpu
117 Boeing RPA, 78% CPU
434 SAIC PDM, 43% CPU
$ cli downtime today 117
Boeing RPA down 10 minutes today, resolved now

Guess it's like the separation between backend and frontend. When the logic is neatly wrapped in a nice API, you can potentially get a lot of reusability from that, since the API can be integrated into other things with other use cases.
But a TUI probably doesn't naturally come with a separate backend. However, if a CLI is built in a non-TUI way, it is about as flexible as a backend: output can be streamed into pipes, etc.
I can't stream k9s output into a pipe or variable but I can with kubectl.
Would be nice if we could have our cake and eat it too here. Can TUI frameworks encourage having it both ways?
So ... Like all Apple products?
Huh? "Feels"? Plenty of Electron/Tauri apps feel perfectly normal. Like I've been saying, the TUI craze is just a fad.
Isn't this ... everything though? Even the browser which you mention as better in the next paragraph.
Similar to WebApps, it's only since the November'25 renaissance that I felt I could use them to create TUIs. Once I had that revelation, I started going into my backlog and using it.
I maintain a TUI Charting library, NTCharts. In January, I fixed a bug - totally obvious once identified - that I personally failed to find earlier. But the test harness, prompting, and Gemini got it done [1]. Gemini's spatial understanding was critical in completing the task.
I've been vibe-crafting a local LLM conversation viewing tool called thinkt. After scraping ~/.claude and making a data model, this is the point in PROMPTS.md where I start creating the TUI using BubbleTea. [2].
[1] https://github.com/NimbleMarkets/ntcharts/issues/7#issuecomm...
[2] https://github.com/wethinkt/go-thinkt/blob/main/PROMPTS.md#2...
Go watch copilot drive VS2026 if you've never seen it in action. There is no way you are going to be able to communicate this same amount of information via plain text in the same amount of time. I can catch a lot of bad stuff mid-flight because I can actually multitask my UI and click into diffs as files are edited in real time.
It helps that VSCode has really improved in the last couple of releases -- before then, the features available in Claude Code were useful enough that it was worth using despite the baggage, and there's still a handful of things I miss in VSCode. But I think the visual information density and acuity that you can get out of a GUI application is far beyond what you can ever achieve in a TUI, and as these tools start reaching something like feature parity, that makes GUIs a lot nicer to use.
Want to do that with web technologies? You’ll need a browser AND a server, or you'll have to build an app using Electron or Tauri.
I was (am) excited for VS Code's new native Claude Code integration, but it’s pretty buggy and unreliable.
Everyone knows all the best programmers are using the command line, firing off one-line Awk scripts that look like runic incantations, occasionally opening vim to do stuff at blazing warp speed.
So the AI tools people build want to take on those trappings to convince people they are serious tools for grown ups.
Ignore that they are basically a full web stack React/CSS conglomeration -- feel the l33t hackerness of 'using the command line'. No IDE like a scrub developer; you are using a text console, you are a real programmer now.
I've achieved 3 and 4 orders of magnitude CPU performance boosts and 50% RAM reductions using C in places I wouldn't normally and by selecting/designing efficient data structures. TUIs are a good example of this trend. For internal engineering, to be able to present the information we need while bypassing the millions of SLoC in the webstack is more efficient in almost every regard.
If your business requirements are stable and you have a good test suite, you're living in a golden age for leveraging your current access to LLMs to reduce your future operational costs.
Making 50 SOTA AI requests per day ≈ running a 10W LED bulb for about 2.5 hours per day
Given I usually have 2-3 lights on all day in the house, that's like 1500 LLM requests per day (which is quite a bit more than I actually make).
So even a month's worth of requests for building some software doesn't sound like much. Having a beefy local traditional build server compiling or running tests for 4 hours a day would be like ~7,600 requests/day.
This seems remarkably far from what we know. I mean, just to run the data centre aircon will be an order of magnitude greater than that.
So, back of the napkin, for a decently sized 1000-token response you’re talking about 8 s / 3600 s/h × 1000 W ≈ 2 Wh, which even in California is about $0.001 of electricity.
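For reference, the napkin math above can be written out explicitly. The ~8 s of generation time and ~1 kW of server draw are assumptions carried over from the comment, not measured figures, as is the ~$0.30/kWh price:

```go
package main

import "fmt"

// responseEnergy estimates watt-hours and dollar cost for one LLM response,
// assuming `seconds` of generation on hardware drawing `watts`.
func responseEnergy(seconds, watts, pricePerKWh float64) (wh, dollars float64) {
	wh = seconds / 3600 * watts   // watt-hours consumed
	dollars = wh / 1000 * pricePerKWh // Wh -> kWh, then priced
	return
}

func main() {
	wh, cost := responseEnergy(8, 1000, 0.30)
	fmt.Printf("%.1f Wh, $%.4f per response\n", wh, cost) // ≈ 2.2 Wh, $0.0007
}
```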
It was like two-shot, cos the first version had some issues with CJK chars.
I was impressed as it would have taken me a bunch of screwing around on lining up all the data etc when I wanted to concentrate on the scraping algorithm, not the pretty bits.
I do like CLIs though, especially the ones that are properly capable of working in pipelines. Composing a pipeline of simple command-line utilities to achieve exactly what you want is very powerful.
Having said that....
If one is willing to build one's own HTTP server with integrated MAC, etc., and is able to demonstrate mitigations against known vulnerabilities, one may be able to get the certifying bodies on board. Time will tell.
Yes, this is very niche, but TUIs are in general niche.
I also find TUIs are easier to program for the same reason they’re limited. Fewer human interface aspects in play and it’s not offensive to use the same UI across OSes. (There are still under-the-hood differences across OSes, e.g. efficient file event watching.)
And that helps? I tried that a while ago and it very often said this is not a good way of doing something even though it was objectively the best way of doing something. I removed it after a while because it was too random.
Other CLI tools benefit from this "have a minimal UI in the workflow for the one step where it makes sense" approach.
1. Navigating all my chat sessions and doing admin work. It's super fast to push a single key to go in and see what it was about before deleting it.
2. Testing out features and code changes without the web UI / vs code extension complexity.
3. Places where I cannot connect VS Code. I still want to chat and see diffs, a TUI is much easier than a CLI for this.
It also has a CLI, basically three interfaces (CLI, TUI, GUI (vscode/webapp)) to the core features of my personal swiss army knife (https://github.com/hofstadter-io/hof)
It seems like it already was like this from the start, though? I’m not a frontend / TUI dev, but why are these issues so hard to fix?
E.g., the user hit ESC -> internal state is CANCELED/WAIT FOR USER -> internal GUI representation now includes a prompt that asks the user to tell Claude what to do differently -> rendering output actually shows said prompt.
Any stateful UI needs a state management backend and a rendering frontend, React isn’t a bad choice for the former.
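The ESC flow described above can be handled by a tiny hand-rolled state machine; no framework needed. The states, events, and prompt text here are illustrative, not Claude Code's actual internals:

```go
package main

import "fmt"

// state models the UI's mode; the renderer just draws whatever it says.
type state int

const (
	running  state = iota
	canceled // waiting for the user to say what to do differently
)

// step advances the machine on one input event and returns text to show.
func step(s state, key string) (state, string) {
	switch {
	case s == running && key == "ESC":
		return canceled, "What should Claude do differently?"
	case s == canceled && key != "":
		return running, "resuming with: " + key
	}
	return s, ""
}

func main() {
	s, prompt := step(running, "ESC")
	fmt.Println(prompt)
	_, msg := step(s, "use tabs")
	fmt.Println(msg)
}
```

The point is that "user hit ESC -> state changes -> prompt appears" is a few lines of explicit transition logic, whichever rendering frontend sits on top.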
There are about a million other ways of doing state management besides retrofitting React into a TUI.
Parent comment talks about using React for reconciliation, which is React-speak for "we take a diff between the current state of the UI and the new state of the UI, and apply that diff". Which is entirely unnecessary not just for TUIs, but for the vast majority of GUIs in general, especially non-DOM-based ones.
As an example, in Claude Code this insanity leads to them spending 16ms "creating a scene" and rendering a couple of hundred of characters on screen: https://x.com/trq212/status/2014051501786931427
For example Claude Code could emit a strange symbol and if the terminal has to go and load a font from disk to be able to rasterize something that can eat into the budget and prevent the terminal from having a smooth frame rate by causing a frame drop.
So they literally take 16ms to rasterize just a few hundred characters on screen. Of those, 11ms are spent in "React scene graph", and they have 5ms to do the extremely complex task of rendering a few characters.
16ms is an eternity. A game engine renders thousands of complex 3D objects in less time. You can output text to a terminal at hundreds of frames per second in JavaScript: https://youtu.be/LvW1HTSLPEk?si=G9gIwNknqXEWAM96
> and if the terminal has to go and load a font from disk to be able to rasterize something that can eat into the budget
Into which budget? They spend 11ms "laying out a scene" for a few hundred characters. "Reading something from disk" to render something is a rare event. And that's before we start questioning assumptions about read speeds [1], whether something needs to be rendered in a TUI at 60fps etc.
[1] Modern SSDs can probably load half of their contents into cache before you even begin to see the impact on frames. Unless it's a Microsoft terminal for which they claim they need a PhD to make it fast.
Did you measure this yourself? Where is this number coming from? I am talking about a budget. Even if it takes 1ms total as long as that is under 16 ms that is fine.
--- start quote ---
A typical random read [on a HDD] can be performed in 1-3 milliseconds.
A random read on an SSD varies by model, but can execute as fast as 16μs (μs = microsecond, which is one millionth of a second).
--- end quote ---
If you drop frames in a TUI on an HDD/SSD read for a font file (10-20 KB), you're likely completely incompetent.
For the record I can’t stand CC flickering and general slowness and ditched Claude subscription entirely.
In TUIs you can re-render the entire screen at hundreds of frames per second. Has nothing to do with state management. Doesn't need React to "figure out what to render".
Again, React is not a state management library. And it's not really applicable to non-DOM rendering approaches.
https://github.com/rothgar/awesome-tuis
https://terminaltrove.com/explore/
Building for Charm, ratatui and many others is really getting much easier than before thanks to AI.
https://github.com/microsoft/MS-DOS/blob/main/v4.0/src/TOOLS...
We should be saying "Building X is faster now" instead. But I guess that doesn't induce a god complex as effectively.
https://pchalasani.github.io/claude-code-tools/tools/aichat/...
Well, it is like code completion on a higher level.
I still don't like this approach. Besides, who is going to maintain that code? Such code will probably forever be required to be maintained via Claude. So no humans involved, just autogenerated stuff. I dislike this idea a lot.
Humans are slower, ok, but they built excellent software before Claude. What is coming next? Claude Linux-like Kernel? Top500 supercomputers will run it?
> Besides, who is going to maintain that code?
I maintain the code. If Claude gets sunset tomorrow, I'll still be able to maintain and write it - I've already rewritten parts of it.
You could make the same argument for a team member leading a project that you've worked on. Is that code forever required to be maintained by one team member?
Previously the overhead of ensuring code quality when the development process was driven by Claude Code was greater than just writing the code myself. But that was different for this project.
For the demo at https://tui.hatchet.run, to answer some messages asking about it: I built this with the fantastic ghostty-web project (https://github.com/coder/ghostty-web). It's been a while since I've used WASM for anything and this made it really easy. I deployed the demo across six Fly.io regions (hooray stateless apps) to try to minimize the impact of keystroke latency, but I imagine it's still felt by quite a few people.
I was also intrigued by it being a lot of Go dependencies, as I have taken a bit of a fancy to this language recently.
I.e. something that is lightweight, lightning fast, great to use with just a thumb or so and looks a bit boring and dated, yet also inviting?
They have a bunch of functions that concatenate strings, which may not be very efficient compared to using strings.Builder, but I haven't yet had performance problems.
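For what it's worth, the strings.Builder point is easy to demonstrate; both helpers below are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// concatNaive re-copies the accumulated string on every iteration:
// O(n^2) bytes moved across the whole loop.
func concatNaive(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// concatBuilder appends into one growing buffer: amortized O(n).
func concatBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"ba", "na", "na"}
	// Same result, very different cost once `parts` gets large.
	fmt.Println(concatNaive(parts) == concatBuilder(parts)) // true
}
```

For the handful of short strings a typical TUI frame joins, the difference is usually unmeasurable, which matches the "haven't yet had performance problems" experience.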
However I haven't had such a great experience with AI, IMO they're bad at ASCII art.
All of the skills I saw demonstrated were deterministic. So does this end in a Functional Core, Imperative Shell scenario that looks like a Terraform Plan and a search engine-style natural language processor out front?
I can’t stand Gemini CLI. That TUI gets in the way constantly.
I’m mixed on jj’s TUI. It’s better than no UI tho.
Mostly tho I’m curious when I’d want a TUI. Most of the time in a terminal I don’t want one.
I want my interfacing with computers to be mouseless, and TUIs offer that. I don’t think I’ve ever run into a GUI, no matter how many hotkeys it has and how many I know, where I didn’t have to reach for the mouse.
CLI only also requires remembering commands, some of which I use very infrequently, thus need to look up every time I use them.
I think TUIs hold a very nice spot between GUIs and CLI.
I use the TUI from a terminal tab in VS Code, my agent works with that and the custom extension with a webapp based interface, seamlessly and concurrently
GUIs, TUIs, and PR/kanban all make sense in different situations. We'll all use at least two of them on regular basis for coding agents.
TUIs make way less sense for your average user
It was good enough for ncurses, it's good enough today.
Isn't everyone else remoting into a Claude instance on their phones?
I think the only reasonable option seems to be reimplementing one yourself, which is massively stupid.
Have you tried porting your test app to a web page? I'd really like to have a good TUI experience on the web.
No idea what this means.
https://news.ycombinator.com/item?id=46580844
Here's a similar situation: a submission called "webdev is fun again", and what you find inside is just gushing about how good AI is. Genuinely, what value does it bring? I think this phenomenon is literally "clickbaiting", but on Hacker News.
Other users from here seem to see the same thing that I do:
If you wanted to write a shell that has mouse support you could certainly do so, and this would be based on sending escape codes to the terminal to tell it to return mouse events as input rather than let the terminal consume them itself. The shell could then itself support the clipboard if it wanted to, as well as support mouse editing.
I just googled it, and apparently "fish shell" does exactly this, but your hypothetical user is more likely to stumble upon a bash shell which is letting the terminal emulator consume the mouse events and support copy and paste.
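The escape codes involved are standard xterm sequences. A minimal sketch of turning mouse reporting on and off; once enabled, clicks arrive on stdin as input instead of being consumed by the terminal emulator:

```go
package main

import "fmt"

// Standard xterm control sequences for mouse reporting. A TUI (or a
// shell like fish) emits these so the terminal forwards mouse events.
const (
	mouseOn  = "\x1b[?1002h\x1b[?1006h" // button-event tracking + SGR encoding
	mouseOff = "\x1b[?1006l\x1b[?1002l" // restore normal click handling
)

func main() {
	fmt.Print(mouseOn)
	// ...read stdin here: in SGR mode a left-button press at column 10,
	// row 5 arrives as "\x1b[<0;10;5M" (release ends in 'm' instead of 'M').
	fmt.Print(mouseOff)
}
```

Always emitting the "off" sequence on exit matters; otherwise the user's next shell session inherits a terminal that swallows mouse clicks.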
https://github.com/lrstanley/bubblezone
There are a lot of components that resemble things you find in web component libraries
It runs poorly, loses keystrokes, and easily gets bogged down with too much terminal input.
I don't want candy coated monospace ASCII graphics. I want something fast and functional. The graphics are _entirely_ secondary. You've missed the point of what a TUI is.
It's somewhat ironic that a web page about performant terminal user interfaces uses gratuitously complex CSS mask compositing and cubic gradients which reduce smooth scrolling on my 1 year-old, high-end Dell XPS laptop (>$3k) to Commodore 64 level (on default 'Balanced' battery mode). While it's pretty, it's also just a very subtle, non-critical background animation effect. Not being a CSS guru myself, here's what Gemini says:
> "Specifically, this is a Scrim or Easing Gradient. Instead of a simple transition between two colors, it uses 16 color stops to mimic a "cubic-bezier" mathematical curve. This creates a smoother, more natural fade than a standard linear gradient, but it forces the browser to calculate high-precision color math across the entire surface during every scroll repaint."
My Firefox smooth-scrolls like butter on thousands of pages, so you might want to ask your web designer to test on non-Mac, iGPU laptops with hiDPI, and to consider the performance cost of web pages with always-running subtle background animations in a world of diverse hardware platforms. In case it helps, here's the animation with the gradient layers disabled so you can see all 6,400,000 pixels being recalculated on every scroll line (https://i.imgur.com/He3RkEu.jpeg).
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
If you check the source (not the DOM) the actual content is loaded in `<div hidden="" id="S:0"> ...` which is then moved/copied into the proper main content div in the DOM using a JS event it seems.
That’s unfair to the C64, which could smooth-scroll very well.
Shaking effects that did not require memory copy were even easier.
> here's what Gemini says
Surely, if people care to see LLM generated text, they can do it themselves.
The real issue is something was causing the container with the gradient to repaint on every scroll.
As for why people don't like LLMs being wrong versus a human being wrong, I think it's twofold:
1. LLMs have a nasty penchant for sounding overly confident and "bullshitting" their way to an answer in a way that most humans don't. Where we'd say "I'm not sure," an LLM will say "It's obviously this."
2. This is speculation, but at least when a human is wrong you can say "hey you're wrong because of [fact]," and they'll usually learn from that. We can't do that with an LLM because they don't learn (in the way humans do), and in this situation they're a degree removed from the conversation anyway.