What does " 2>&1 " mean?
405 points by alexmolas 2 days ago | 241 comments

wahern 2 days ago
I find it easier to understand in terms of the Unix syscall API. `2>&1` literally translates to `dup2(1, 2)`, and indeed that's exactly how it works. In the classic Unix shells that's all that happens; more modern shells may do some additional internal bookkeeping to remember state. Understanding it as dup2 makes it easier to see how successive redirections combine, though you also have to know that redirection operators are executed left to right, and traditionally each operator was executed immediately as it was parsed. The pipe operator works similarly, though it's a combination of fork and dup'ing, with the command being forked off from the shell as a child before the remainder of the line is processed.

Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.
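The left-to-right rule is easy to see with a scratch file; a minimal sketch (the log file names are made up):

```shell
# Each redirection is performed left to right, as if by dup2() at that point.
tmp=$(mktemp -d)

# fd 1 -> file first, then 2>&1 makes fd 2 a copy of fd 1: both streams land in the file.
{ echo out; echo err >&2; } > "$tmp/both.log" 2>&1

# 2>&1 first copies the *current* fd 1 (terminal/pipe) into fd 2, then fd 1 -> file:
# only stdout lands in the file; stderr keeps its old destination.
{ echo out; echo err >&2; } 2>&1 > "$tmp/out-only.log"
```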

reply
jez 24 hours ago
Another fun consequence of this is that you can initialize otherwise-unset file descriptors this way:

    $ cat foo.sh
    #!/usr/bin/env bash

    >&1 echo "will print on stdout"
    >&2 echo "will print on stderr"
    >&3 echo "will print on fd 3"

    $ ./foo.sh 3>&1 1>/dev/null 2>/dev/null
    will print on fd 3
It's a trick you can use if you've got a super chatty script or set of scripts whose output you want to silence or slurp up entirely, while still allowing some mechanism for printing directly to the terminal.

The danger is that if you don't open fd 3 before running the script, you'll get an error:

    $ ./foo.sh
    will print on stdout
    will print on stderr
    ./foo.sh: line 5: 3: Bad file descriptor
reply
hielke 8 hours ago
With exec you can open file descriptors of your current process.

  # If fd 3 is not already open, open it to /dev/null
  # (the /proc/$$/fd check is Linux-specific)
  if [[ ! -e /proc/$$/fd/3 ]]; then
      exec 3>/dev/null
  fi
  >&3 echo "will print on fd 3"
This will fix the error you are describing while keeping the functionality intact.

Now with that exec trick the fun is only getting started, because you can redirect to process substitutions, and subshells inherit the redirections of their parent:

  set -x # when debugging, print all commands run, prefixed with CMD:
  PID=$$
  BASH_XTRACEFD=7
  LOG_FILE=/some/place/to/your/log/or/just/stdout
  exec 3> >(gawk '!/^RUN \+ echo/{ print strftime("[%Y-%m-%d %H:%M:%S] <PID:'$PID'> "), $0; fflush() }' >> $LOG_FILE)
  exec > >(sed -u 's/^/INFO:  /' >&3)
  exec 2> >(sed -u 's/^/ERROR: /' >&3)
  exec 7> >(sed -u 's/^/CMD:   /' >&3)
  exec 8>&1 #normal stdout with >&8
  exec 9>&2 #normal stderr with >&9
And now your bash script will have a nice log with stdout and stderr prefixed with INFO and ERROR and has timestamps with the PID.

Now the disclaimer is that you unfortunately have no guarantee that the relative order of stdout and stderr lines will be preserved, even though we run everything unbuffered (-u and fflush).

reply
casey2 3 hours ago
Nice! Not really sure of the point though, since AI can bang out a much more maintainable (and synchronized) wrapper in Go in about 0.3 seconds

(if runners have sh then they might as well have a real compiler; scratch > debian > alpine, "don't debug in prod")

reply
account42 13 hours ago
If you just want to print to the terminal even if the normal stdout/stderr is redirected away, you can also use >/dev/tty, but obviously that is less flexible.
reply
47282847 23 hours ago
Interesting. Is this just literally “fun”, or do you see real world use cases?
reply
nothrabannosir 20 hours ago
The aws cli has a set of porcelain commands for s3 access (aws s3) and plumbing commands for lower-level access to advanced controls (aws s3api). The plumbing command aws s3api get-object doesn't support stdout natively, so if you need it and want to use it in a pipeline (e.g. with pv), you would naively do something like

  $ aws s3api get-object --bucket foo --key bar /dev/stdout | pv ...
Unfortunately, aws s3api already prints the API response to stdout and error messages to stderr, so if you do the above you'll clobber your pipeline with noise, and using /dev/stderr has the same problem with error messages.

You can, though, do the following:

  $ aws s3api get-object --bucket foo --key bar /dev/fd/3 3>&1 >/dev/null | pv ...
This will pipe only the object contents to stdout, and the API response to /dev/null.
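The same descriptor shuffle works for any chatty command; a minimal sketch with a stand-in function (`noisy` is invented for illustration):

```shell
# A stand-in for a chatty command: payload on fd 3, noise on stdout/stderr.
noisy() {
    echo "api response"        # would clobber the pipeline
    echo "some warning" >&2    # more noise
    echo "payload" >&3         # the data we actually want
}

# fd 3 -> the pipe (current stdout), then silence the original stdout/stderr.
noisy 3>&1 >/dev/null 2>/dev/null | tr 'a-z' 'A-Z'    # prints PAYLOAD
```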
reply
stabbles 17 hours ago
Would be nice if `curl` had something to dump headers to a third file descriptor while outputting the response on stdout.
reply
homebrewer 15 hours ago
This should work?

  curl --dump-header /dev/fd/xxx https://google.com
or

  mkfifo headers.out
  curl --dump-header headers.out https://google.com
unless I'm misunderstanding you.
reply
stabbles 15 hours ago
Ah yeah, `/dev/fd/xxx` works :) somehow thought that was Linux only.
reply
xantronix 7 hours ago
(Principal Skinner voice) Ah, it's a Bash expression!
reply
jez 22 hours ago
I have used this in the past when building shell scripts and Makefiles to orchestrate an existing build system:

https://github.com/jez/symbol/blob/master/scaffold/symbol#L1...

I did not have control over the existing build system, which would produce output on stdout/stderr. I wanted my build scripts to show the build system's output only if building failed (and there might have been multiple build system invocations leading to that failure). I also wanted the second level to be able to log progress messages that were shown to the user immediately on stdout.

    Level 1: create fd=3, capture fd 1/2 (done in one place at the top-level)
    Level 2: log progress messages to fd=3 so the user knows what's happening
    Level 3: original build system, will log to fd 1/2, but will be captured
It was janky and it's not a project I have a need for anymore, but it was technically a real world use case.
reply
figmert 15 hours ago
One of my use cases previously has been enforcing ultimate or full trust of a gpg signature.

    tmpfifo="$(mktemp -u -t gpgverifyXXXXXXXXX)"
    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3>$tmpfifo
    grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)' $tmpfifo
It was a while ago that I implemented this, but iirc the reason was to validate that the key that signed this is actually trusted, and that the signature isn't just cryptographically valid.

You can also redirect specific file descriptors into other commands:

    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3> >(grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)')
reply
1718627440 9 hours ago
This is often used by shell scripts to wrap another program, so that its input and output can be controlled. E.g. Autoconf uses this to invoke the compiler and also to control nested log output.
reply
jas- 22 hours ago
Red Hat and other RPM-based distributions' recommended kickstart scripts use tty3 via a similar method
reply
post-it 23 hours ago
Multiple levels of logging, all of which you want to capture but not all in the same place.
reply
skydhash 22 hours ago
Wasn't the idiomatic way the `-v` flag (repeated for more verbosity)? And then stderr for errors (maybe warnings too).
reply
post-it 8 hours ago
Yes, but sometimes you want just the important non-error logs to go to the console or journal, those plus the verbose logs to go to a file that gets rotated, and then also stderr on top of that.
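A minimal sketch of that kind of split (the helper names and the temp log file are made up for illustration):

```shell
# Split logging: important lines to the console, everything to a file, errors to stderr too.
LOG=$(mktemp)
exec 3>>"$LOG"                                        # fd 3: the (rotated) log file
log()  { echo "INFO: $*";       echo "INFO: $*" >&3; }
vlog() {                        echo "VERBOSE: $*" >&3; }
err()  { echo "ERROR: $*" >&2;  echo "ERROR: $*" >&3; }

log "starting"
vlog "details only the file sees"
err "something broke"
```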
reply
notpushkin 19 hours ago
It is, and all logs should ideally go to stderr. But that doesn’t let you pipe them to different places.
reply
goku12 16 hours ago
This is probably one of the reasons why many find POSIX shell languages to be unpleasant. There is too much syntactic sugar that abstracts too much of the underlying mechanism away, to the point that we don't get it unless someone explains it. Compare this with Lisps, for example. There may be only one branching construct or one looping construct, yet they provide more options than regular programming languages, via macros. And this fact is not hidden from us: you know that all of them ultimately expand to a limited number of special forms.

The shell's syntactic sugar also has some weird gotchas. The &2>&1 question and its answer are a good example of that. You're just trading one complexity (low-level knowledge) for another (a long list of syntax rules). Shell languages break the rule of not letting abstractions get in the way of insight and intuition.

I know that people will argue that shell languages are not programming languages, and that terseness is important for the former. And yet we still have people complaining about it. This is the programmer ego and the sysadmin ego clashing with each other; after all, nobody is purely just one of those two.

reply
skywal_l 15 hours ago
There must be a law of system design about this, because this happens all the time. Every abstraction creates a class of users who are powerful but fragile.

People who build a system or at least know how it works internally want to simplify their life by building abstractions.

As people come later to use the system with the embedded abstractions, they only know the abstractions but have no idea of the underlying implementations. Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error-prone for uninitiated users.

reply
shevy-java 9 hours ago
> Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error-prone for uninitiated users.

I don't think 2>&1 ever made any sense.

I think shell language is simply awful.

reply
goku12 4 hours ago
> I don't think 2>&1 ever made any sense.

It's not that hard. Consider the following:

  $ command &2>&1
The shell thinks that you're trying to run the portion before the & (command) in the background and the portion after the & (2>&1) in the foreground. There is just one problem: the second part (2>&1) is a redirection with no command attached (similar to how you can set environment variables for a command invocation), but you haven't specified the command that follows. So the shell just applies the redirection to an empty command, which accomplishes nothing useful (in some shells it can even appear to hang). Try it and see for yourself.

  $ command 2>1
Here the shell redirects stderr/fd2 to a file named 1. It doesn't know that you're talking about a file descriptor instead of a filename, so you need to use &1 to indicate your intention. The same confusion doesn't arise on the left side (fd2), because that will always be a file descriptor. Hence the correct form is:

  $ command 2>&1
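Both behaviors are easy to reproduce in a scratch directory (sketch):

```shell
cd "$(mktemp -d)"

# Without the &, "1" is just a filename: stderr ends up in a file literally named 1.
ls /nonexistent 2>1 || true
cat 1                         # the error message is in the file

# With &1, it's a descriptor: stderr is duplicated onto stdout and can be piped.
ls /nonexistent 2>&1 | grep -c nonexistent    # prints 1
```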
> I think shell language is simply awful.

Honestly, I wish I could ask the person who designed it, why they made such decisions.

reply
lukan 10 hours ago
I like abstractions when they hide complexity I don't need to see nor understand to get my job done. But if abstractions misdirect and confuse me, they are not syntactical sugar to me, but rather poison.

(But I won't claim that I am always able to strike the right balance here)

reply
taneq 14 hours ago
Seems related to the Law of Leaky Abstractions?
reply
carlmr 12 hours ago
It's not necessarily a leaky abstraction. But a lack of _knowledge in the world_.

The abstraction may be great, the problem is the lack of intuitive understanding you can get from super terse, symbol heavy syntax.

reply
reacweb 15 hours ago
make 2>&1 | tee m.log is in my muscle memory, like adding a & at the end of a command to launch a job, or ctrl+z then bg when I forget it, or tar cfz (without the minus, so that the order is not important). Without this terseness, people would build myriads of personal aliases.

This redirection relies on foundational concepts (file descriptors, stdin 0, stdout 1, stderr 2) that need to be well understood when using unix. IMO, this helps to build insight and intuitiveness. A pipe is not magic, it is just a simple operation on file descriptors. Complexity exists (buffering, zombies), but not there.

reply
cpach 12 hours ago
Are you sure you understood the comment you replied to?

I agree that 2>&1 is not complex. But I think I speak for many Bash users when I say that this idiom looks bad, is hard to Google, hard to read and hard to memorize.

reply
skywhopper 12 hours ago
It’s not like someone woke up one morning and decided to design a confusing language full of shortcuts to make your life harder. Bash is the sum of decades of decisions, some poorly planned, many contradictory, made by hundreds of individuals working all over the world in different decades, to add features that solve and work around real-world problems, keep backwards compatibility with decades of working programs, and attempt to provide a shared glue language usable across many platforms. Most of the special syntax was developed long before Google existed.

So, sure, there are practical issues with details like this. And yet, it is simple. And there are simple methods for learning and retaining little tidbits like this over time if you care to do so. Bash and its cousins aren’t going away, so take notes, make a cheat sheet, or work on a better replacement (you’ll fail and make the problem worse, but go ahead).

reply
simoncion 10 hours ago
Yeah, seriously. It's as if people want to playact as illiterate programmers.

The "Redirections" section of the manual [0] is just seven US Letter pages. This guy's cheat sheet [1] that took me ten seconds to find is a single printed page.

[0] <https://www.gnu.org/software/bash/manual/html_node/Redirecti...>

[1] <https://catonmat.net/ftp/bash-redirections-cheat-sheet.pdf>

reply
goku12 3 hours ago
> The "Redirections" section of the manual [0] is just seven US Letter pages.

"Just" seven US Letter pages? You're talking about redirections alone, right? How many such features exist in Bash? I find Python, Perl and even Lisps easier to understand. Some of those languages wouldn't have been even conceived if shell languages were good enough.

There is another shell language called 'execline' (to be precise, it's a replacement for a shell). The redirections in its commands are done using a program named 'fdmove' [1]. It doesn't leave any confusion as to what it's actually doing. fdmove doesn't mention the fact that it resorts to FD inheritance to achieve this. However, the entire 'shell' is based on chain loading of programs (fork, exec, FD inheritance, environment inheritance, etc). So fdmove's behavior doesn't really create any confusion to begin with. Despite execline needing some clever thinking from the coder, I find it easier to understand what it's actually doing, compared to bash. This is where bash and other POSIX shell languages went wrong with abstractions. They got carried away with them.

[1] https://www.skarnet.org/software/execline/fdmove.html

reply
simoncion 27 minutes ago
> "Just" seven US Letter pages?

Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.

If you want documentation that's done up in the "modern" style, then you'll prefer that one-page cheat sheet that that guy made. I find that "modern" documentation tends to leave it up to each reader to discover the non-obvious parts of the behavior for themselves.

> I find Python ... easier to understand.

Have you read the [0] docs for Python's 'subprocess' library? The [1] docs for Python's 'multiprocessing' library? Or many of the other libraries in the Python standard library that deal with nontrivial process and I/O management? Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.

[0] ...twenty-five pages of...

[1] ...fifty-nine pages of...

reply
miki123211 9 hours ago
Shell is optimized for the minimal number of keystrokes (just like Vim, Amadeus and the Bloomberg Terminal are). Programming languages are primarily optimized for future code readability, with terseness and intuitiveness coming second or third (depending on the language).
reply
darkwater 12 hours ago

  ? (defun even(num) (= (mod num 2) 0))
  ? (filter '(6 4 3 5 2) #'even)
I'm zero Lisp expert and I don't feel comfortable at all reading this snippet.
reply
goku12 3 hours ago
This:

> I'm zero Lisp expert

and this:

> I don't feel comfortable at all reading this snippet

are related. The comfort in reading Lisp comes from how few syntactic/semantic rules there are. There's a standard form and a few special forms. Compare that to C - possibly one of the smallest popular languages around. How many syntactical and semantic rules do you need to know to be a half decent C programmer?

If you look at the Lisp code, it has just 2 main features - a tree in the form of nested lists and some operations in prefix notation. It needs some getting used to for regular programmers. But it's said that programming newbies learn Lisps faster than regular programming languages, due to the fewer rules they have to remember.

reply
emmelaich 2 days ago
Yep, there's a strong unifying feel between the Unix api, C, the shell, and also say Perl.

Which is lost when using more modern languages, or languages foreign to Unix.

reply
tkcranny 2 days ago
Python too under the hood, a lot of its core is still from how it started as a quick way to do unixy/C things.
reply
momentoftop 9 hours ago
> The pipe operator works similarly, though it's a combination of fork and dup'ing

Any time the shell executes a program it forks, not just for redirections. Redirections use dup before exec in the child process. Piping involves two forks and, obviously, the `pipe` syscall, with one process having its stdout dup'd to the write end of the pipe and the other having its stdin dup'd to the read end.

Honestly, I find the Bash manual to be excellently written, and it's probably available on your system even without an internet connection. I'd always go there rather than rely on Stack Overflow or an LLM.

https://www.gnu.org/software/bash/manual/bash.html#Redirecti...

reply
kccqzy 24 hours ago
And just like dup2 allows you to duplicate into a brand new file descriptor, shells also allow you to specify bigger numbers so you aren’t restricted to 1 and 2. This can be useful for things like communication between different parts of the same shell script.
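For example, a function can hand back a result on fd 3 while keeping its chatter on stdout (a sketch; `get_value` is a made-up name):

```shell
# A function hands its result back on fd 3 while logging on stdout.
get_value() {
    echo "log: computing..."    # chatter on the normal stdout
    echo "42" >&3               # the actual result, on the side channel
}

# Around the call, swap things: fd 3 -> the captured stdout, stdout -> stderr.
result=$( { get_value >&2; } 3>&1 )
echo "result=$result"           # prints result=42
```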
reply
ontouchstart 5 hours ago
I did a Google search for “dup2(2, 1)” in a fresh private tab in Safari on my iPhone, and this thread came up second, between

https://man7.org/linux/man-pages/man2/dup.2.html

and

https://man.archlinux.org/man/dup2.2.en

A lot of bots are reading this. Amazing.

reply
jolmg 13 hours ago
> Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.

Since they're both just `dup2(1, 2)`, `2>&1` and `2<&1` are the same. However, yes, `2<&1` would be misleading because it looks like you're treating stderr like an input.
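That equivalence is easy to check (in Bash at least; a minimal sketch):

```shell
# Both operators perform the same dup2(1, 2); the bracket direction only
# changes which fd is assumed when the number on the left is omitted.
a=$( { echo oops >&2; } 2>&1 )
b=$( { echo oops >&2; } 2<&1 )
[ "$a" = "$b" ] && echo "identical"    # prints identical
```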

reply
ifh-hn 24 hours ago
Haha, I'm even more confused now. I have no idea what dup is...
reply
jpollock 24 hours ago
There are a couple of ways to figure it out.

open a terminal (OSX/Linux) and type:

    man dup
open a browser window and search for:

    man dup
Both will bring up the man page for the function call.

To get recursive, you can try:

    man man unix
(the unix is important, otherwise it gives you manly men)
reply
Bender 24 hours ago
> otherwise it gives you manly men

That's only just after midnight [1][2]

[1] - https://www.youtube.com/watch?v=XEjLoHdbVeE

[2] - https://unix.stackexchange.com/questions/405783/why-does-man...

reply
ifh-hn 15 hours ago
I love that this situation occurred.
reply
trashb 15 hours ago
You may also consider GNU info:

  info dup
reply
niobe 21 hours ago
I find it very intuitive as is
reply
manbash 19 hours ago
Respectfully, what was the purpose of this comment, really?

And I also disagree: your suggestion is not easier. The & operator is quite intuitive as it is, and conveys the intention.

reply
goku12 17 hours ago
Perhaps it is intuitive for you based on how you learned it. But their explanation is more intuitive for anyone dealing with low level stuff like POSIX-style embedded programming, low level unix-y C programming, etc, since it ties into what they already know. There is also a limit to how much you can learn about the underlying system and its unseen potential by learning from the abstractions alone.

> Respectfully, what was the purpose of this comment, really?

Judging by its replies alone, not everyone considers it purposeless. And even though I know enough to use shell redirections correctly, I still found that comment insightful. This is why I still prefer human explanations over AI. It often contains information you didn't think you needed. HN is one of the sources of the gradually dwindling supply of such information. That comment is still on-topic. Please don't discourage such habits.

reply
raincole 21 hours ago
The comments on Stack Overflow take the words out of my mouth, so I'll just copy & paste here:

> but then shouldn't it rather be &2>&1?

> & is only interpreted to mean "file descriptor" in the context of redirections. Writing command &2>&1 is parsed as command & and 2>&1

That's where all the confusion comes from. I believe most people can intuitively understand that > is redirection, but the asymmetrical use of & throws them off.

Interestingly, PowerShell also uses 2>&1. Given a once-in-a-lifetime chance to redesign the shell, out of all the Unix relics, this is one they chose to keep (borrow).

reply
jcotton42 15 hours ago
PowerShell actually has 7 streams. Success, Error, Warning, Verbose, Debug, Information, and Progress (though Progress doesn't get a number) https://learn.microsoft.com/en-us/powershell/module/microsof...
reply
ptx 14 hours ago
Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:

  dir C:\, fakepath 2>&1 > .\dir.log
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".

[1] https://learn.microsoft.com/en-us/powershell/module/microsof...

reply
b40d-48b2-979e 10 hours ago
IIRC PowerShell would convert your command's stream to your console encoding. I forget whether this followed how `chcp.com` was set or how `[Console]::OutputEncoding` was set (which is still a pain I feel in my bones for knowing today).

It's also not a file descriptor. It's a PowerShell stream, of which there are five(?) you can redirect to, similar to log levels.

reply
layer8 8 hours ago
I agree that it adds to the confusion, but note that `file1>file2` also wouldn’t work (in the sense of “send the output currently going to file1 to file2”) and isn’t symmetrical in that sense as well. Or take `/dev/stderr>/dev/stdout` as the more direct equivalent.
reply
cesaref 13 hours ago
The way I read it, the prefix to the > indicates which file descriptor to redirect, and the default when no file descriptor is indicated is stdout.

So, >foo is the same as 1>foo
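That default is easy to confirm (a throwaway sketch):

```shell
tmp=$(mktemp -d)
echo hello >  "$tmp/a"    # bare > defaults to file descriptor 1
echo hello 1> "$tmp/b"    # explicit form, identical effect
cmp -s "$tmp/a" "$tmp/b" && echo "same"    # prints same
```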

If you want to get really into the weeds: appending to a file descriptor makes no sense (or maybe, truncating to a file descriptor makes no sense is what I mean), and Bash actually rejects 2>>&1 as a syntax error rather than creating a file called 1. Why it works out that way is probably down to decisions made 50 years ago in sh, although I'd be surprised if this was codified anywhere, or relied upon in scripts.

reply
xeyownt 15 hours ago
I don't get the confusion.

You redirect stdout with ">" and stderr with "2>" (a two-letter operator).

If you want to redirect to stdout / stderr, you use "&1" or "&2" instead of putting a file name.

reply
zwischenzug 19 hours ago
Isn't that because of posix?
reply
TheDong 17 hours ago
PowerShell is not POSIX compliant and does not pretend to be. Conditionals using `()` instead of `[]`, for example, are already a clear departure from POSIX.
reply
b40d-48b2-979e 10 hours ago
I don't think they were talking about pwsh? pwsh actually has types and is its own programming language unlike *sh, so it doesn't rely on builtin command exit codes.
reply
ontouchstart 9 hours ago
Sometimes all you need is to RTFM from the source instead of relying on Nth-hand information (N > 1):

https://www.gnu.org/software/bash/manual/html_node/Redirecti...

reply
GuB-42 9 hours ago
Great if you know where to look, but most people who ask themselves this question don't know that they have to look in the bash manual's "Redirection" section.

The usual thing (before LLMs) is to Google the question, but for the question to appear in Google, someone has to ask it first, and here we are.

Also, the Stack Overflow answers give different perspectives, context, etc., rather than just telling you what it does, which is useful to someone unfamiliar with how redirections work. As I said, someone who doesn't know about "2>&1" is unlikely to be an expert given how common the pattern is, so a little hand-holding doesn't hurt.

reply
layer8 8 hours ago
> Great if you know where to look, but most people who ask themselves the question don't know they have to look up the bash manual in the "redirection" section.

Where else would you look but in the manual of your shell? And you don’t have to know which section to look in; you can just search for “2>&1” in the bash man page.

reply
GuB-42 7 hours ago
What is a command and what is shell syntax is not always obvious, especially to a beginner, which I assume most people asking this question are.

Take the command "ls -l ~/.. ; fg" for instance. What is interpreted by the shell and what are commands? If you have some experience in bash, you probably know, and therefore you know which part to look up in which man page, but then you probably also know "2>&1".

Spoiler: "-l" is part of the command, so look in the "ls" manpage. "~" is expanded by the shell, ";" is shell syntax and "fg" is a builtin; all three are in the "bash" manpage. ".." is part of the filesystem.

reply
shevy-java 9 hours ago
And how do you find that?

Google search is literally useless for this these days, for the average Joe.

reply
momentoftop 8 hours ago
Try "info bash" on your system. It's the same manual.

In Emacs, when I hit C-h i I get a menu of all my info manuals, and I can read the bash one there.

reply
ontouchstart 8 hours ago
That is correct. And also simple `man bash`:

REDIRECTION

       Before a command is executed, its input and output may be redirected
       using a special notation interpreted by the shell.  Redirection may
       also be used to open and close files for the current shell execution
       environment.  The following redirection operators may precede or
       appear anywhere within a simple command or may follow a command.
       Redirections are processed in the order they appear, from left to
       right.

       In the following descriptions, if the file descriptor number is
       omitted, and the first character of the redirection operator is <, the
       redirection refers to the standard input (file descriptor 0).  If the
       first character of the redirection operator is >, the redirection
       refers to the standard output (file descriptor 1).

       The word following the redirection operator in the following
       descriptions, unless otherwise noted, is subjected to brace expansion,
       tilde expansion, parameter expansion, command substitution, arithmetic
       expansion, quote removal, pathname expansion, and word splitting.  If
       it expands to more than one word, bash reports an error.

       Note that the order of redirections is significant.  For example, the
       command

              ls > dirlist 2>&1

       directs both standard output and standard error to the file dirlist,
       while the command

              ls 2>&1 > dirlist

       directs only the standard output to file dirlist, because the standard
       error was duplicated as standard output before the standard output was
       redirected to dirlist.
...
reply
momentoftop 8 hours ago
Ah, thanks, didn't realise they put the whole manual into the manpage. For other tools (e.g. make), the info manual is complete but the manpage is just a summary.
reply
giglamesh 9 hours ago
Do these days include the nearly seventeen years ago when this question was posted? ;)
reply
ontouchstart 8 hours ago
https://www.gnu.org has been there for decades, much older than SO and Google.

I know GNU -> Google -> SO -> LLM is a culture shift; this is how human attention goes. The search engine and the LLM only capture the latest group attention, and we have short memories. That is why reading is a critical skill for us as human beings: we can't afford to outsource it to machines. (Same with writing.)

reply
solomonb 23 hours ago
Man, I miss Stack Overflow. It feels so much better to ask humans a question than the machine, but it feels impossible to put the lid back on the box.
reply
rkachowski 14 hours ago
It's really jarring to see this wave of nostalgia for "the good old days" appear since ~2025. Suddenly the rose-tinted glasses have come on, and everything before LLM usage became ubiquitous was a beautiful romantic era of human collaboration, understanding and craftsmanship.

I still acutely remember the gatekeeping and hostility of peak stack overflow, and the inanity of churning out jira tickets as fast as possible for misguided product initiatives. It's just wild yo

reply
mrpopo 13 hours ago
Probably the people complaining about AI today were fine with Stack Overflow before and didn't have anything to complain about back then.

I also had a better experience with Stack Overflow than with AI. The AI was unable to tell me that I couldn't assign a new value to my std::optional in my specific case, and kept hallucinating copy-constructor rules. A Stack Overflow question matching my problem cleared that up for me.

Sometimes you need someone to tell you no.

reply
ruszki 12 hours ago
Or, like me, the kind of questions I'm interested in are answered far worse by LLMs than they ever were on Stack Overflow.

I have and have had problems with Stack Overflow. But LLMs are nowhere near it, and unfortunately, as we can see, Stack Overflow is basically dead, which is very problematic for fairly new things like Android Compose. There was never a time when, for example, Opus could give the best option on the first try, even for something simple like wanting a zero WindowInsets object; it gives an answer for sure, but completely ignores the simplest one. And that happens all the time. I'm not saying Stack Overflow was good at this, but it was better for sure.

reply
rkachowski 11 hours ago
> A Stack Overflow question matching my problem cleared that up for me.

Perhaps if there had been no question already available you'd have had a different experience. Getting clearly written and specific questions promptly closed as duplicates of related, yet distinct, issues was part of the fun.

I find that AI hallucinates in the same way that someone can be very confident and wrong at the same time, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.

reply
mrpopo 10 hours ago
> someone can be very confident and wrong at the same time

And sometimes that someone can be you, and AI is notoriously bad at telling you that you're wrong (because it has to please people)

reply
rkachowski 8 hours ago
I've found recent claude code to be surprisingly good at dispelling false assumptions and incorrect framing. I say this as someone who experimented with it last summer and found it to be kinda stupid; since December last year it's turned a corner - it's not the sycophantic nonsense it used to be.
reply
skydhash 11 hours ago
I don’t think I’ve ever asked a question on Stack Overflow, but I’ve consulted it several times. Even when I haven’t found my exact use case, there’s always something similar or related that gave me the right direction for research (a book or an article reference, the name of a concept to use as a keyword,…)

It’s kinda the same feeling when browsing the faq of a project. It gives you a more complete sense of the domain boundaries.

I still prefer to refer to a book or SO instead of asking the AI. Coherency and purposefulness matter more to me than a direct answer that may be wrong.

reply
tdb7893 5 hours ago
I think most people found StackOverflow pretty easy and useful, since only a small minority of people ever asked questions on it; many never interacted with the more annoying parts at all.
reply
LatencyKills 13 hours ago
MSGA: Make Software Great Again? /s
reply
numbers 21 hours ago
and no ai fluff to start or end the answer, just facts straight to the point.
reply
jamesnorden 12 hours ago
Perhaps you mean searching for your question first, before asking. :)
reply
globular-toast 17 hours ago
It is possible. Many people choose a healthy lifestyle instead of becoming morbidly obese and incapable which is easy to do in our society.
reply
webdevver 12 hours ago
> It feels so much better to ask humans a question then the machine

I could not disagree more! With pesky humans, you have all sorts of things to worry about:

- is my question stupid? will they think badly of me if i ask it?

- what if they dont know the answer? did i just inadvertently make them look stupid?

- the question i have is related to their current work... i hope they dont see me as a threat!

and on and on. asking questions in such a manner as to elicit the answer, without negative externalities, is quite the art form, as i'm sure many stack overflow users will tell you. many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?", with the implication being "you really are stupid!", totally useless to the question-asker and a much more frustrating time-waster than even the most moralizing LLM.

with LLMs, you don't have to play these 'token games'. you throw your query at it, and irrespective of the word order, word choice, or the nature of the question - it gives you a perfectly neutral response, or at worst politely refuses to answer.

reply
skydhash 11 hours ago
That’s a level of paranoia that I can’t really understand. I just do my research, then for information I can’t access, don’t know how to access, or can’t comprehend, I reach out. People have the right to not want to share information. If it’s in a work setting and the situation is blocking, I notify my supervisor.

> many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication begin "you really are stupid!"

You may have heard of the XY problem, where people ask a Y question only because they have an incorrect answer to X. A question has a goal (unless rhetorical), and to the person being asked, it may be confusing. You may have a valid reason to go against common sense, but if the other person is not your tutor or a fellow researcher, they may not be willing to accommodate you and spend their time on a goal they have no context about.

Remember the car wash question for LLMs? Some phrasings have the pattern of a trick question, and that's another thing people watch out for.

reply
james_marks 10 hours ago
Claude’s answer, which is the only one that clicked for me:

Normally when you do something like command > file.txt, you’re only capturing the normal output — errors still go to your screen.

2>&1 is how you say: “send the error pipe into the same place as the normal output pipe.” Breaking it down without jargon:

• 2 means “the error output”

• > means “send it to”

• &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)
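To see that description in action, here's a small sketch (the file name is made up):

```shell
# stdout goes to out.txt; then 2>&1 points stderr at the same place
{ echo "normal"; echo "oops" >&2; } > out.txt 2>&1

cat out.txt   # contains both "normal" and "oops"
```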

reply
DonaldPShimoda 9 hours ago
> Claude’s answer

This response is essentially just the second answer to the linked question (the response by dbr) with a bunch of the important words taken out.

And all it cost you to get it was more water and electricity than simply clicking the link and scrolling down — to say nothing of the other costs.

reply
james_marks 9 hours ago
FWIW, I clicked the link, scanned the SO thread, then scanned the HN thread. The "bunch of important words taken out" is exactly the service I paid AI for.

"I didn't have time to write you a short letter, so I wrote you a long one." is real.

reply
NekkoDroid 9 hours ago
> • 2 means “the error output” • > means “send it to” • &1 means “wherever the normal output is currently going” (the & just means “I’m referring to a pipe, not a file named 1”)

If you want it with the correct terminology:

2 means "file descriptor 2", > means "redirect the previously mentioned to the following", &1 means "file descriptor 1" (and not a file named "1")
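Since redirections are applied left to right, the ordering matters. A sketch, using a made-up function that writes to both streams:

```shell
noisy() { echo "data"; echo "warn" >&2; }

# stdout to the log first, then stderr to wherever stdout now goes:
noisy > both.log 2>&1       # both lines land in both.log

# stderr duplicated first (still the terminal), then stdout moved:
noisy 2>&1 > only.log       # "warn" stays on the terminal
```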

reply
r4bbb1t 9 hours ago
[dead]
reply
amelius 2 days ago
It's a reminder of how archaic the systems we use are.

File descriptors are like handing pointers to the users of your software. At least allow us to use names instead of numbers.

And sh/bash's syntax is so weird because the programmer at the time thought it was convenient to do it like that. Nobody ever asked a user.

reply
zahlman 2 days ago
At the time, the users were the programmers.
reply
amelius 24 hours ago
This is misleading because you use the plural for both; I'm sure most of these UX missteps were _each_ made by a _single_ person, and there were >1 users even at the time.
reply
Msurrow 24 hours ago
I think he meant that at that time all users were programmers. Yes, _all_ .
reply
zahlman 21 hours ago
It was a bit of an over-generalization, but yes that's basically what I was going for.
reply
ifh-hn 24 hours ago
> and there were >1 users even at the time.

Are you sure there wasn't >&1 users... Sorry I'll get my coat.

reply
mjevans 20 hours ago
I think that's likely to work as a no-op
reply
worldsavior 12 hours ago
Get out.
reply
NooneAtAll3 12 hours ago
did you mean to write "<1"?
reply
andoando 24 hours ago
programmers are people too! bash syntax just sucks
reply
booi 2 days ago
arguably if you're using the CLI they still are
reply
spiralcoaster 23 hours ago
Yeah but now they're using npm to install a million packages to do things like tell if a number is greater than 10000. The chances of the programmer wanting to understand the underlying system they are using is essentially nil.
reply
spott 23 hours ago
Yea, they are just much higher level programmers… most programmers don’t know the low level syscall apis.
reply
kube-system 24 hours ago
nah, we have long had other disciplines using the CLI who do not write their own software, e.g. sysadmins
reply
xenadu02 23 hours ago
> At least allow us to use names instead of numbers.

You can for the destination. That's the whole reason you need the "&": to tell the shell the destination is not a named file (which itself could be a pipe or socket). And by default you don't need to specify the source fd at all. The intent is that stdout is piped along but stderr goes directly to your tty. That's one reason they are separate.

And for those saying "<" would have been better: that is used to read from the RHS and feed it as input to the LHS so it was taken.
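That separation is easy to see with a pipe (a minimal sketch):

```shell
# stdout flows into the pipe; stderr skips it and goes to the tty
{ echo "data"; echo "warning" >&2; } | wc -l   # counts only "data"
```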

reply
agentdrek 23 hours ago
It should be a lesson to learn on how simple, logical and reliable tools can last decades.
reply
bool3max 23 hours ago
… Or how hard it is to replace archaic software that’s extremely prevalent.
reply
phailhaus 22 hours ago
Bash syntax is anything but simple or logical. Just look at the insane if-statement syntax. Or how the choice of quotes fundamentally changes behavior. Argument parsing, looping, the list goes on.
reply
Steltek 8 hours ago
You could make a list of WTFs about any language.

Bash syntax is the pinnacle of Chesterton's Fence. If you can't articulate why it was done that way, you have no right to remove it. Python would be an absolutely unusable shell language.

reply
phailhaus 2 hours ago
I didn't say that there wasn't a reason. I said it was absolute trash to use. It's so bad that the moment I need even the slightest bit of complexity, I will switch away from bash. Can't really say that for any other language.
reply
akdev1l 21 hours ago
if statements are pretty simple

if command; then <thing>; else <thing>; fi

You may be complaining about the syntax for the test command specifically or bash’s [[ builtin

Also the choice of quotes changing behavior is a thing in:

1. JavaScript/TypeScript

2. Python

3. C/C++

4. Rust

In some cases it’s the same difference, eg: string interpolation in JavaScript with backticks

reply
viraptor 21 hours ago
> Also the choice of quotes changing behavior is a thing in:

In those languages they change what's contained in the string. Not how many strings you get. Or what the strings from that string look like. ($@ being an extreme example)
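A sketch of that difference in sh/bash, where quoting changes how many words you get:

```shell
count() { echo $#; }   # prints the number of arguments received

var="a b c"
count $var      # unquoted: split into 3 words
count "$var"    # quoted: stays 1 word
```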

reply
phatskat 18 hours ago
> $@ being an extreme example

From the bash man page via StackOverflow:

> @ Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" ... If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed).

That’s…a lot. I think Bash is interesting in the “I’m glad it works but I detest having to work with it” kind of way. Like, fine if I’m just launching some processes or tail’ing some logs, but I’ve rarely had a time when I had to write an even vaguely complex bash script where I didn’t end up spending most of my time relearning how to do things that should be basic.

Shellcheck was a big game changer at least in terms of learning some of the nuance from a “best practice” standpoint. I also think that the way bash does things is just a little too foreign from the rest of my computing life to be retained.
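A sketch of what that man-page paragraph means in practice, comparing "$@" with "$*":

```shell
show() { for arg in "$@"; do printf '[%s]' "$arg"; done; echo; }

set -- "one two" three    # two positional parameters
show "$@"   # [one two][three]  - word boundaries preserved
show "$*"   # [one two three]   - joined into a single word
```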

reply
skydhash 11 hours ago
Complex and bash script should not be in the same sentence. If a script you have is becoming complex, that's a hint to use an amenable programming language with proper data types and structures.

Shell scripts are for automating shell sessions.

reply
Towaway69 17 hours ago
Are taxes simple?

Why does Bash syntax have to be "simple"? For me, Bash syntax is simple.

reply
phailhaus 9 hours ago
Uh, reading a bash script shouldn't be as hard as doing your taxes. Bash syntax has to be simple because bash code is going to be read and reasoned about by humans. Reading even a simple if statement in bash requires a TON of knowledge to avoid shooting yourself in the foot. That's a massive failure of usability just to save a couple of keystrokes.

This is like saying "what's wrong with brainfuck??? makes sense to me!" Every syntax can be understood, that does not automatically make them all good ideas.

reply
crazygringo 22 hours ago
It's more like how the need for backwards compatibility prevents bad interfaces from ever getting improved.
reply
varenc 20 hours ago
You can do:

   2>/dev/stdout
Which is about the same as `2>&1` but with a friendlier name for STDOUT. And this way `2> /dev/stdout`, with the space, also works, whereas `2> &1` doesn't, which confuses many. But its behavior isn't exactly the same and might not work in all situations.

And of course I wish you could use a friendlier name for STDERR instead of `2>`

reply
goku12 16 hours ago
> You can do:
>
>     2>/dev/stdout

The situation where this is going to cause confusion is when you do this for multiple commands. It looks like they're all writing to a single file. Of course, that file is not an ordinary file - it's a device file. But even that isn't enough. You have to know that each command sees its own incarnation of /dev/stdout, which refers to its own fd1.
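A sketch of that per-process resolution (assuming Linux semantics, where /dev/stdout goes through /proc/self/fd/1; other systems may differ):

```shell
# each command's /dev/stdout is its *own* fd 1, so redirecting the
# command changes where /dev/stdout "points"
( echo hello > /dev/stdout ) > f.txt   # hello ends up in f.txt
```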

reply
nusl 22 hours ago
I quite like how archaic it is. I am turned off by a lot of modern stuff. My shell is nice and predictable. My scripts from 15 years ago still work just fine. No, I don't want it to get all fancy, thanks.
reply
Steltek 8 hours ago
For a while, there was a strong trend of "I want to do everything in one singular language". Your coding is in language XYZ. Your build tools will be configured/written in XYZ. Your UI frontend will be generated from XYZ. Everything will be defined in XYZ.

Shell is from a time when you had a huge selection of languages, each for different purposes, and you picked the right one for the job. For complex applications, you would have multiple languages working together.

People look at Bash and think, "I would never dare do $Task with that language!". And you'd be right, because you're thinking you only have one tool in the toolbox.

reply
fulafel 18 hours ago
They're more like capabilities or handles than pointers. There's a reason in Rust land many systems use handles (indices to a table of objects) in absence of pointer arithmetic.

In the C API of course there's symbolic names for these. STDIN_FILENO, STDOUT_FILENO, etc for the defaults and variables for the dynamically assigned ones.

reply
minitech 17 hours ago
What they point to are capabilities, but the integer handles that user space gets are annoyingly like pointers. In some respects, better, since we don’t do arithmetic on them, but in others, worse: they’re not randomized, and I’ve never come across a sanitizer (in the ASan sense) for them, so they’re vulnerable to worse race condition and use-after-free issues where data can be quietly sent to the entirely wrong place. Unlike raw pointers’ issues, this can’t even be solved at a language level. And maybe worst of all, there’s no bug locality: you can accidentally close the descriptor backing a `FILE*` just by passing the wrong small integer to `close` in an unrelated part of the program, and then it’ll get swapped out at the earliest opportunity.
reply
eichin 17 hours ago
BITD the one "fd sanitizer" I ever encountered was "try using the code on VxWorks" which at the time was "posix inspired" at best - fds actually were pointers, so effectively random and not small integers. It didn't catch enough things to be worth the trouble, but it did clean up some network code (ISTR I was working on SNTP and Kerberos v4 and Kerberized FTP when I ran into this...)
reply
1718627440 9 hours ago
Handles and pointers are the same concept, the difference is just who resolves them. Pointers don't represent hardware addresses either.
reply
csours 24 hours ago
The conveniences also mean that there are more than ~one~ ~two~ several ways to do something.

Which means that reading someone else's shell script (or awk, or perl, or regex) is INCREDIBLY inconvenient.

reply
amelius 24 hours ago
Yes. There are many reasons why one shouldn't use sh/bash for scripting.

But my main reason is that most scripts break when you call them with filenames that contain spaces. And they break spectacularly.

reply
nixon_why69 22 hours ago
Counter reason in favor is that you can always count on it being there and working the same way. Perl is too out of fashion and python has too many versioning/library complexities.

You have to write the crappy sh script once but then you get simple, easy usage every time. (If you're revising the script frequently enough that sh/bash are the bottleneck, then what you have is a dev project and not a script, use a programming language).

reply
ndsipa_pomu 23 hours ago
You're not wrong, but there are fairly easy ways to deal with filenames containing spaces - usually just enclosing any variable use in double quotes is sufficient. It's trickier to deal with filenames that contain things such as line breaks, as that usually involves using null-terminated filenames (null and the slash being the only characters not allowed in filenames), e.g. find . -type f -print0
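Both techniques in one sketch (the filename is made up):

```shell
# double quotes keep a name with spaces as a single word
file="name with spaces.txt"
echo "hi" > "$file"
cat "$file"

# NUL-terminated names survive spaces and even embedded newlines
find . -type f -print0 | xargs -0 ls -l
```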
reply
gbacon 5 hours ago
Ah, but then there are the unusual cases. See “The shell and its crappy handling of whitespace.”

https://blog.plover.com/Unix/whitespace.html

reply
deathanatos 21 hours ago
You're not wrong, but at my place, our main repository does not permit cloning into a directory with spaces in it.

Three factors conspire to make a bug:

  1. Someone decides to use a space
  2. We use Python
  3. macOS
Say you clone into a directory with a space in it. We use Python, so thus our scripts are scripts in the Unix sense. (So, Python here is replacable with any scripting language that uses a shebang, so long as the rest of what comes after holds.) Some of our Python dependencies install executables; those necessarily start with a shebang:

  #!/usr/bin/env python3
Note that space.

Since we use Python virtualenvs,

  #!/home/bob/src/repo/.venv/bin/python3
But … now what if the dir has a space?

  #!/home/bob/src/repo with a space/.venv/bin/python3
Those look like arguments, now, to a shebang. Shebangs have no escaping mechanism.

As I also discovered when I discovered this, the Python tooling checks for this! It will instead emit a polyglot!

  #!/bin/bash

  # <what follows in a bash/python polyglot>
  # the bash will find the right Python interpreter, and then re-exec this
  # script using that interpreter. The Python will skip the bash portion,
  # b/c of cleverness in the polyglot.
Which is really quite clever, IMO. But, … it hits (2.). It execs bash, and worse, it is macOS's bash, and macOS's bash will corrupt^W remove for your safety! certain environment variables from the environment.

Took me forever to figure out what was going on. So yeah … spaces in paths. Can't recommend them. Stuff breaks, and it breaks in weird and hard to debug ways.

reply
joshuaissac 21 hours ago
If all of your scripts run in the same venv (for a given user), can you inject that into the PATH and rely on env just finding the right interpreter?

I suppose it would also need env to be able to handle paths that have spaces in them.

reply
ndsipa_pomu 15 hours ago
What a headache!

My practical view is to avoid spaces in directories and filenames, but to write scripts that handle them just fine (using BASH - I'm guilty of using it when more sane people would be using a proper language).

My ideological view is that unix/POSIX filenames are allowed to use any character except for NULL, so tools should respect that and handle files/dirs correctly.

I suppose for your usage, it'd be better to put the virtualenv directory into your path and then use #!/usr/bin/env python

reply
skydhash 11 hours ago
For the BSDs and Linux, I believe that shebangs are interpreted by the kernel directly and not by the shell. /usr/bin/env and /bin/sh are guaranteed by POSIX to exist, so your solution is the correct one. Anything else is fragile.
reply
balnaphone 11 hours ago
These are part of the rituals of learning how a system works, in the same way interns get tripped up at first when they discover ^S will hang an xterm, until ^Q frees it. If you're aware of the history of it, it makes perfect sense. Unix has a personality, and in this case the kernel needs to decide what executable to run before any shell is involved, so it deliberately avoids the complexity of quoting rules.

I'd give this a try, works with any language:

  #!/usr/bin/env -S "/path/with spaces/my interpreter" --flag1 --flag2
Only if my env didn't have -S support, I might consider a separate launch script like:

  #!/bin/sh
  exec "/path/with spaces/my interpreter" "$0" "$@"
But most decent languages seems to have some way around the issue.

Python

  #!/bin/sh
  """:"
  exec "/path/with spaces/my interpreter" "$0" "$@"
  ":"""
  # Python starts here
  print("ok")
Ruby

  #!/bin/sh
  exec "/path/with spaces/ruby" -x "$0" "$@"
  #!ruby
  puts "ok"
Node.js

  #!/bin/sh
  /* 2>/dev/null
  exec "/path/with spaces/node" "$0" "$@"
  */
  console.log("ok");
Perl

  #!/bin/sh
  exec "/path/with spaces/perl" -x "$0" "$@"
  #!perl
  print "ok\n";
Common Lisp (SBCL) / Scheme (e.g. Guile)

  #!/bin/sh
  #|
  exec "/path/with spaces/sbcl" --script "$0" "$@"
  |#
  (format t "ok~%")
C

  #!/bin/sh
  #if 0
  exec "/path/with spaces/tcc" -run "$0" "$@"
  #endif
  
  #include <stdio.h>
  
  int main(int argc, char **argv)
  {
      puts("ok");
      return 0;
  }
Racket

  #!/bin/sh
  #|
  exec "/path/with spaces/racket" "$0" "$@"
  |#
  #lang racket
  (displayln "ok")
Haskell

  #!/bin/sh
  #if 0
  exec "/path/with spaces/runghc" -cpp "$0" "$@"
  #endif
  
  main :: IO ()
  main = putStrLn "ok"
Ocaml (needs bash process substitution)

  #!/usr/bin/env bash
  exec "/path/with spaces/ocaml" -no-version /dev/fd/3 "$@" 3< <(tail -n +3 "$0")
  print_endline "ok";;
reply
xorcist 11 hours ago
> At least allow us to use names instead of numbers

Many people probably think in terms of "fd 0" and "fd 1" instead of "standard in" and "standard out", but should you wish to use names at least on modern Linux/BSD systems do:

  echo message >/dev/stdout
  echo error_message >/dev/stderr
reply
vbezhenar 9 hours ago
I don't have macos right now but I think that it doesn't have these files. What's worse is that bash emulates these files so they might even somewhat work, but not in all situations. I distinctly remember issues with this command:

    install /dev/stdin file <<EOF
    something
    EOF
reply
mechanicalpulse 4 hours ago
I do and it does.

    $ ls -al /dev/std*
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ ls -n /dev/fd/[012]
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/0
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/1
    crw--w----  1 501  4  0x10000000 Feb 27 13:38 /dev/fd/2
    $ uname -v
    Darwin Kernel Version 24.6.0: Mon Jan 19 22:00:55 PST 2026; root:xnu-11417.140.69.708.3~1/RELEASE_ARM64_T6000
    $ sw_vers
    ProductName:  macOS
    ProductVersion:  15.7.4
    BuildVersion:  24G517
Lest you think it's some bashism that's wrapping ls, they exist regardless of shell:

    $ zsh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ csh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ tcsh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
    $ ksh -c 'ls -al /dev/std*'
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stderr -> fd/2
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdin -> fd/0
    lr-xr-xr-x  1 root  wheel  0 Feb 24 15:08 /dev/stdout -> fd/1
I tried the install example that you provided and it worked on macOS as well as Linux.
reply
burnt-resistor 3 hours ago
Bash and zsh also allow this, and modern Bourne-compatible shells (sh) might too:

   echo >&2 error_message
On Linux, /dev/std* requires the kernel to do file name resolution in the virtual file system because it could point to something nonstandard that isn't a symlink to something like /proc/self/fd/XX and then the kernel has to check that that should hopefully point to a special character device.
reply
Dylan16807 20 hours ago
> At least allow us to use names instead of numbers.

You can use /dev/stdin, /dev/stdout, /dev/stderr in most cases, but it's not perfect.

reply
murphyslaw 19 hours ago
> You can use /dev/stdin, /dev/stdout, /dev/stderr in most cases

Never ever write code that assumes this. These dev shorthands are Linux specific, and you'll even need a certain minimum Linux version.

I cringe at the number of shell scripts that assume bash is the system interpreter, and not sh or ksh.

Always assume sh, it's the most portable.

Linux != Unix.

reply
homebrewer 15 hours ago
It's a waste of time unless you're specifically targeting and testing mac, all of the BSDs, various descendants of Solaris, and other flavors of Unix. I wrote enough "portable shell" to run into so many quirks and slight differences in flags, in how different tools handle e.g. SIGPIPE.

Adding a new feature in a straightforward way often makes it work only on 4/7 of the operating systems you're trying to support. You then rewrite it in a slightly different way (because it's shell — there's always 50 ways to do the same thing). This gets you to 5/7 working systems, but breaks one that previously worked. You rewrite it yet another way, fixing the new breakage, but another one breaks. Repeat this over and over again, trying to find an implementation that works everywhere, or start adding workarounds for each system. Spend an hour on a feature that should have taken two minutes.

If it's anything remotely complicated, and you need portability, then use perl/python/go.

reply
eichin 17 hours ago
Actually, while the Actual Nodes are a linux thing, bash itself implements (and documents) them directly (in redirections only), along with /dev/tcp and /dev/udp (you can show with strace that bash doesn't reference the filesystem for these, even if they're present.)

So, you're not wrong, but...

reply
Dylan16807 18 hours ago
You shouldn't be assuming I'm writing code for Unix.
reply
lpln3452 18 hours ago
lol truly informative and clearly something no one here knew. But your terminology is inaccurate. Please change it to GNU/Linux != Unix
reply
kristopolous 15 hours ago
I've long wanted easy, trivial multichannel i/o with duplication

I want to be able to route x independent input and y independent output trivially from the terminal

Proper i/o routing

It shouldn't be hard, it shouldn't be unsolved, and it shouldn't be esoteric

reply
bmicraft 14 hours ago
That's what named pipes do.
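A minimal named-pipe (FIFO) sketch, for reference:

```shell
mkfifo chan
echo "hello" > chan &   # writer blocks until a reader opens the FIFO
cat chan                # reads "hello"; both sides then proceed
wait
rm chan
```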
reply
direwolf20 9 hours ago
They don't. They're single reader and, if I remember correctly, sequential single writer.
reply
spiralcoaster 23 hours ago
Who do you imagine the users were back when it was being developed?
reply
crazygringo 22 hours ago
People who were not that one programmer?

Even if you're a programmer, that doesn't mean you magically know what other programmers find easy or logical.

reply
themafia 13 hours ago
> At least allow us to use names instead of numbers.

Sure. Here's what that looked like:

https://en.wikipedia.org/wiki/Job_Control_Language

reply
HackerThemAll 2 days ago
> bash's syntax is so weird

What should be the syntax according to contemporary IT people? JSON? YAML? Or just LLM prompt?

reply
bigstrat2003 23 hours ago
Nushell, Powershell, Python, Ruby, heck even Perl is better. Shell scripting is literally the worst language I've ever seen in common use. Any realistic alternative is going to be better.
reply
murphyslaw 19 hours ago
It always exists on any Unix system. Even a busybox root environment. Why do you want to save a few bytes to compromise portability?
reply
bashkindasucks 17 hours ago
But it isn't portable unless you stick to the POSIX subset, which kinda sucks. You'll use some feature that some dude on an ancient shell doesn't have, and then he'll complain to you. And the list of such features is LONG: https://oneuptime.com/blog/post/2026-02-13-posix-shell-compa...

If you're using shell-specific features in a tightly controlled environment like a docker container then yeah, go wild. If you're writing a script for personal use, sure. If you're writing something for other people to run, then your code will be working around all the missing features POSIX hasn't been updated to include. You can't use arrays, or the ((…)) arithmetic command, nothing. It sucks to use.

Besides, if you're writing a script it is likely that it will grow, get more complicated, and you will soon bump up against the limitations of the language and have to do truly horrible workarounds.

This is why if I need something for others to run then I just use python from the beginning. The code will be easier to read and more portable. At this point the vast majority of OS's and images have it available anyway so it's not as big a barrier as it used to be.

reply
ifh-hn 24 hours ago
Nushell! Or powershell, but I much prefer nushell!
reply
sigwinch 23 hours ago
There's a movement to write JSON to fd 3, as a machine-parsable alternative to rickety fd 1.
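I haven't seen a standard for this convention, but mechanically it would look something like the sketch below (function and file names are hypothetical):

```shell
# chat with humans on stdout, emit machine output on fd 3
report() {
  echo "working..."             # human-readable, fd 1
  echo '{"status":"ok"}' >&3    # machine-readable, fd 3
}

report 3> result.json           # the caller decides where fd 3 goes
cat result.json
```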
reply
mjevans 20 hours ago
Anything that is infected by UCS-2 / UTF-16 garbage should be revised and reconsidered... Yeah UTF-8 has carve outs for those escape sequences... However JSON is even worse, you _have_ to use UTF-16 escapes. https://en.wikipedia.org/wiki/JSON#Character_encoding
reply
nazgul17 23 hours ago
Trying to be language agnostic: it should be as self-explanatory as possible. 2>&1 is anything but.

Why is there a 2 on the left, when the numbers are usually on the right? What's the relationship between 2 and 1? Is the 2 for stderr? Is that `&` meant as "reference"? The fact that you only grok it if you know the POSIX syscalls means it's far from self-explanatory. And given the proportion of people who know POSIX syscalls among those who use Bash, I think it's a bit of an elitist syntax.

reply
stephenr 22 hours ago
POSIX has a manual for shell. You can read 99% of it without needing to know any syscalls. I'm not as familiar with it but Bash has an extensive manual as well, and I doubt syscall knowledge is particularly required there either.

If your complaint is "I don't know what this syntax means without reading the manual" I'd like to point you to any contemporary language that has things like arrow functions, or operator overloading, or magic methods, or monkey patching.

reply
nazgul17 12 hours ago
No, the complaint is that the syntax is not intuitive even knowing the simpler forms of redirection: this one isn't a composition of them, but rather an ad-hoc one.

I know about manuals, and I have known this specific syntax for half of my life.

Arrow functions etc are mechanisms in the language. A template you can build upon. This one is just one special operator. Learn it and use it, but it will serve no other purpose in your brain. It won't make anything easier to understand. It won't help you decipher other code. It won't help you draw connections.

reply
stephenr 7 hours ago
> the syntax is not intuitive even knowing the simpler forms of redirection

The MDN page for arrow functions in JS has, I shit you not, 7 variations on the syntax. And your complaint is these are not intuitively similar enough?

    call > output
    call 2>&1
    call > output 2> error
    call 1> output 2> error

Give me a fucking break.

reply
marxisttemp 15 hours ago
Tcl
reply
xeonmc 24 hours ago
Haskell
reply
amelius 24 hours ago
Honestly, Python with the "sh" module is a lot more sane.
reply
Normal_gaussian 24 hours ago
Is it more sane, or is it just what you are used to?

Python doesn't really have much that makes it a sensible choice for scripting.

It's got some basic data structures and a std-lib, but it comes at a non-trivial performance cost, a massive barrier to getting out of the single thread, and non-trivial overhead when managing downstream processes. It doesn't protect you from any runtime errors (no types, no compile checks). And I wouldn't call Python in practice particularly portable...

Laughably, NodeJS is genuinely a better choice - while you don't get multithreading easily, at least you aren't trivially blocked on IO. NodeJS also has pretty great compatibility for portability; and can be easily compiled/transformed to get your types and compile checks if you want. I'd still rather avoid managing downstream processes with it - but at least you know your JSON parsing and manipulation is trivial.

Go is my goto when I'm reaching for more; but (ba)sh is king. You're scripting on the shell because you're mainly gluing other processes together, and this is what (ba)sh is designed to do. There is a learning curve, and there are footguns.

reply
gdevenyi 22 hours ago
The programmers were the users. They asked. They said it was ok.
reply
jballanc 22 hours ago
Wait until you find out where "tty" comes from!
reply
arjie 24 hours ago
Redirects are fun but there are way more than I actually routinely use. One thing I do is the file redirects.

    diff <(seq 1 20) <(seq 1 10)
I do that with diff <(xxd file.bin) <(xxd otherfile.bin) sometimes when I should expect things to line up and want to see where things break.
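Under the hood, process substitution hands the command a /dev/fd-style path backed by a pipe, which you can see directly (a quick sketch; the exact fd number varies by shell and OS):

```shell
# <(...) expands to a path like /dev/fd/63, backed by a pipe
echo <(true)

# diff just sees two readable "files"
diff <(printf '1\n2\n') <(printf '1\n3\n') || true
```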
reply
Calzifer 22 hours ago
That's process substitution, and calling it a file redirect is a bit misleading, because it's implemented with named pipes. That becomes relevant when the command tries to seek in them, which fails.

Also the reason why Zsh has an additional =(command) construct which uses temporary files instead.

reply
wmanley 10 hours ago
It's a shame that unix tools don't support file descriptors better. The ability to pass a file (or stream, or socket etc) directly into a process is so powerful, but few commands actually support being used this way and require filenames (or hostnames, etc) instead. Shell is so limited in this regard too.

It would be great to be able to open a socket in bash[^1] and pass it to another program to read/write from without having an extra socat process and pipes running (and the buffering, odd flush behaviour, etc.). It would be great if programs expected to receive input file arguments as open fds, rather than providing filenames and having the process open them itself. Sandboxing would be trivial, as would understanding the inputs and outputs of any program.

It's frustrating to me because the underlying unix system supports this so well, it's just the conventions of userspace that get in the way.

[^1]: I know about /dev/tcp, but it's very limited.

reply
1718627440 9 hours ago
Yeah I started to design all my (sub)programs this way. If it should also be invoked by the shell, I make a wrapper program that sets the fds correctly.
reply
gnabgib 2 days ago
Better: Understanding Linux's File Descriptors: A Deep Dive Into '2>&1' and Redirection https://news.ycombinator.com/item?id=41384919 https://news.ycombinator.com/item?id=39095755
reply
murphyslaw 19 hours ago
O'Reilly's Essential System Administration [1], I never do a job interview without it.

[1]: https://www.oreilly.com/library/view/essential-system-admini...

reply
MathMonkeyMan 20 hours ago
I regularly refer to [the unix shell specification][1] to remember the specifics of ${foo%%bar} versus ${foo#bar}, ${parameter:+word} versus ${parameter:-word}, and so on.

It also teaches how && and || work, their relation to [output redirection][3] and [command piping][2], [(...) versus {...}][4], and tricky parts like [word expansion][5], even a full grammar. It's not exciting reading, but it's mostly all there, and works on all POSIXy shells, e.g. sh, bash, ksh, dash, ash, zsh.

[1]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html

[2]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...

[3]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...

[4]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...

[5]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...

reply
ptaffs 7 hours ago
I understood the point of the question to be that how shells work seems very context driven. An & here means something different from an & there. In IFS=\| read A B C <<< "first|second|third" the read is executed and the IFS assignment is local to that one command. echo hello     this will print "hello this", even though in the assignment above the space was significant. An & at the end of a line runs the task in the background, but in the middle of a redirect it doesn't. All these things can be learned, but it's hard to explain the patterns, I think.
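The context-sensitivity described above can be seen in a few lines (a sketch; the variable names are arbitrary):

```shell
# the IFS assignment applies only to this one read
IFS='|' read A B C <<< "first|second|third"
echo "$B"               # second

# unquoted word splitting collapses the run of spaces
echo hello     this     # prints "hello this"

# & at end of line backgrounds the command; in >&1 it marks a file descriptor
true &
echo done >&1
```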
reply
csours 24 hours ago
If you need to know what 2>&1 means, then I would recommend shellcheck

It's very, very easy to get shell scripts wrong; for instance the location of the file redirect operator in a pipeline is easy to get wrong.
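One placement mistake such tools catch: the 2>&1 has to go on the producing side of the pipe, not the consuming side (a minimal sketch; "noisy" is a stand-in for any command that writes to both streams):

```shell
noisy() { echo out; echo err >&2; }

noisy 2>&1 | sort     # both lines go through the pipe
noisy | sort          # only stdout is piped; "err" still reaches the terminal
```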

reply
TacticalCoder 24 hours ago
As someone who uses LLMs to generate, among other things, Bash scripts, I recommend shellcheck too. Shellcheck catches lots of things and will really make your Bash scripts better. And if for whatever reason there's an idiom you use all the time that shellcheck doesn't like, you can simply configure shellcheck to ignore that one.
reply
vessenes 2 days ago
Not sure why this link and/or question is here, except to say LLMs like this incantation.

It redirects STDERR (2) to where STDOUT is piped already (&1). Good for dealing with random CLI tools if you're not a human.

reply
WhyNotHugo 24 hours ago
Humans used this combination extensively for decades too. I'm not aware of any other simple way to grep both stdout and stderr from a process. (grep, or save to file, or pipe in any other way).
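For the grep case, the pattern is just this (a sketch; "chatty" is a made-up stand-in command):

```shell
chatty() { echo "progress: 50%"; echo "error: disk full" >&2; }

# to grep both streams, merge stderr into stdout before the pipe
chatty 2>&1 | grep -c error     # 1: the stderr line is found too
```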
reply
TacticalCoder 24 hours ago
"not humans" are using this extensively precisely because humans used this combination extensively for decades. It's muscle-memory for me. And so is it for LLMs.
reply
GetTheFacts 21 minutes ago
>It's muscle-memory for me. And so is it for LLMs.

LLMs have neither muscles nor memories. They're token combinators based on statistical correlation, no more, no less.

That's not to say LLMs can't be useful when they string together tokens. Quite the contrary, in fact. But let's not pretend LLMs are something they're not.

reply
ElijahLynn 2 days ago
I found the explanation useful, about "why" it is that way. I didn't realize the & before the 1 means to tell it is the file descriptor 1 and not a file named 1.
reply
weavie 2 days ago
I get the ocassional file named `1` lying around.
reply
hrmtst93837 17 hours ago
The distinction between file descriptors and regular files trips up many people at first. Recognizing that `&` signifies a file descriptor clears up the confusion about the syntax.
reply
LtWorf 24 hours ago
It's an operator called ">&", the 1 is the parameter.
reply
WJW 24 hours ago
Well sure, but surely this takes some inspiration from both `&` as the "address of" operator in C as well as the `>` operator which (apart from being the greater-than operator) very much implies "into" in many circumstances.

So `>&1` is "into the file descriptor pointed to by 1", and at the time any reasonable programmer would have known that fd 1 == STDOUT.

reply
anitil 24 hours ago
I've also found LLMs seem to love it when calling out to tools; I suppose for them having stderr messages interspersed in their input doesn't make much difference
reply
lgeorget 11 hours ago
It reminds me of this answer I made some years ago: https://unix.stackexchange.com/a/138046

The question was how to remember it's "2>&1" and not "2&>1". If you think of "&1" as the address/destination of, the syntax is quite natural.

reply
NoSalt 6 hours ago
I always said it as: "2 goes into the address of 1", so wherever 1 is pointing, that's where 2 is going.
reply
ucarion 2 days ago
I've almost never needed any of these, but there's all sorts of weird redirections you can do in GNU Bash: https://www.gnu.org/software/bash/manual/bash.html#Redirecti...
reply
keithnz 2 days ago
agentic ai tends to use it ALL the time.
reply
kazinator 24 hours ago
It means redirect file descriptor 2 to the same destination as file descriptor 1.

Which actually means that an underlying dup2 operation happens in this direction:

   2 <- 1   // dup2(1, 2)
The file description at [1] is duplicated into [2], thereby [2] points to the same object. Anything written to stderr goes to the same device that stdout is sending to.

The notation follows I/O redirections: cmd > file actually means that a descriptor [n] is first created for the open file, and then that descriptor's description is duplicated into [1]:

   n <- open("file", O_WRONLY|O_CREAT|O_TRUNC)
   1 <- n
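Put together, the two duplications mean stdout and stderr end up writing through the same open file description, which is easy to confirm (a sketch using a temp file):

```shell
tmp=$(mktemp)
{ echo out; echo err >&2; } > "$tmp" 2>&1
cat "$tmp"     # out, then err
rm -f "$tmp"
```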
reply
xg15 16 hours ago
Always wondered how the parser managed the ambiguity between & for file descriptors and & to start background tasks. (And without a good mental model, I kept forgetting where to put the & correctly in redirects)

Treating ">&" as a distinct operator actually makes an elegant solution here. I like the idea.

reply
wodenokoto 2 days ago
I enjoyed the commenter asking “Why did they pick such arcane stuff as this?” - I don’t think I touch more arcane stuff than shell, so asking why shell used something that is arcane relative to itself is to me arcane squared.
reply
Normal_gaussian 23 hours ago
I love myself a little bit of C++. A good proprietary C++ codebase will remind you that people just want to be wizards, solving their key problem with a little bit of magic.

I've only ever been tricked into working on C++...

reply
Normal_gaussian 23 hours ago
I know the underlying call, but I always see the redirect symbols as indicating that "everything" on the big side of the operator fits into a small bit of what is on the small side of the operator. Like a funnel for data. I don't know the origin, but I'm believing my fiction is right regardless. It makes <(...) make intuitive sense.

The comment about "why not &2>&1" is probably the best one on the page, with the answer essentially being that it would complicate the parser too much / add an unnecessary byte to scripts.

reply
antonvs 2 hours ago
It means that whoever designed it didn’t have very good taste regarding language ergonomics.
reply
k3vinw 11 hours ago
Perhaps it’s the odd placement of the ampersand. Something like >2&1 would make more sense to me.

On the other hand, pipe “|” is brilliant!

reply
emmelaich 2 days ago
A gotcha for me originally and perhaps others is that while using ordering like

   $ ./outerr  >blah 2>&1
sends stdout and stderr to blah, imitating the order with pipe instead does not.

   $ ./outerr  | 2>&1 cat >blah
   err
This is because | is not a mere redirector but a statement terminator.

    (where outerr is the following...)
    echo out 
    echo err >&2
reply
time4tea 24 hours ago
Useless use of cat error/award

But also | isnt a redirection, it takes stdout and pipes it to another program.

So, if you want stderr to go to stdout, so you can pipe it, you need to do it in order.

bob 2>&1 | prog

You usually dont want to do this though.

reply
kazinator 24 hours ago
The point is that the order in which that is processed is not left to right.

First the | pipe is established as fd [1]. And then 2>&1 duplicates that pipe into [2]. I.e. right to left: opposite to left-to-right processing of redirections.

When you need to capture both standard error and standard output to a file, you must have them in this order:

  bob > file 2>&1
It cannot be:

  bob 2>&1 > file
Because then the 2>&1 redirection is performed first (and usually does nothing because stderr and stdout are already the same, pointing to your terminal). Then > file redirects only stdout.

But if you change > file to | process, then it's fine! process gets the combined error and regular output.
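The difference is easy to see with line counts (a sketch; "both" is a stand-in command writing one line to each stream):

```shell
both() { echo out; echo err >&2; }
f=$(mktemp)

both > "$f" 2>&1; wc -l < "$f"     # 2: both streams captured
both 2>&1 > "$f"; wc -l < "$f"     # 1: err escaped to the terminal first
both 2>&1 | wc -l                  # 2: the pipe is set up before 2>&1 runs
rm -f "$f"
```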

reply
emmelaich 6 hours ago
Try it without the `cat` and tell me what you get.
reply
murphyslaw 19 hours ago
You can pipe the fd directly:

# echo 1 >&2 2>| echo

reply
inigyou 24 hours ago
Why would that second one be expected to work?
reply
charcircuit 22 hours ago
I am surprised that there still is no built-in way to pipe stdout and stderr. *| would be much more ergonomic than 2>&1 |.
reply
gaogao 22 hours ago
Doesn't |& work with bash?
reply
b5n 21 hours ago
&>
reply
maxeda 24 hours ago
> I am thinking that they are using & like it is used in c style programming languages. As a pointer address-of operator. [...] 2>&1 would represent 'direct file 2 to the address of file 1'.

I had never made the connection of the & symbol in this context. I think I never really understood the operation before, treating it just as a magic incantation but reading this just made it click for me.

reply
jibal 24 hours ago
No, the shell author needed some way to distinguish file descriptor 1 from a file named "1" (note that 2>1 means to write stderr to the file named "1"), and '&' was one of the few available characters. It's not the address of anything.

To be consistent, it would be &2>&1, but that makes it more verbose than necessary and actually means something else -- the first & means that the command before it runs asynchronously.

reply
kazinator 24 hours ago
It's not inconsistent. The & is attached to the redirection operator, not to the 1 token. The file descriptor being redirected is also attached:

Thus you cannot write:

  2 > &1

You also cannot write

  2 >& 1
However you may write

  2>& 1
The n>& is one clump.
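The tokenization can be checked directly: the fd number binds tightly to the operator, while the target may be separated (a quick sketch):

```shell
echo hi 2>& 1      # "2>&" is one token, "1" is its target; prints "hi"
echo hi 2 >& 1     # here "2" is just an argument word; prints "hi 2"
```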
reply
kalterdev 16 hours ago
rc [1] replaced it with a far more telling >[1=2] and >[1=] for closing.

1: https://p9f.org/sys/doc/rc.html

reply
zem 2 days ago
back when stackoverflow was still good and useful, I asked about some stderr manipulation[0] and learnt a lot from the replies

[0] https://stackoverflow.com/questions/3618078/pipe-only-stderr...

reply
nikeee 23 hours ago
So if I happen to know the numbers of other file descriptors of the process (listed in /proc), can I redirect to other files opened in the current process? 2>&1234? Or is it restricted to 0/1/2 by the shell?

Would probably be hard to guess since the process may not have opened any file once it started.

reply
hugmynutus 20 hours ago
> Or is it restricted to 0/1/2 by the shell?

It is not. You can use any arbitrary numbers provided they're initialized properly. These values are just file descriptors.

For Example -> https://gist.github.com/valarauca/71b99af82ccbb156e0601c5df8...

I've used (see: example) to handle applications that just dump pointless noise into stdout/stderr, which is only useful when the binary crashes/fails. Provided the error is marked by a non-zero return code, this will then correctly display the stdout/stderr (provided there is <64KiB of it).
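A minimal version of that pattern — the helper name here is made up, not from the linked gist — buffers everything and replays it only on failure:

```shell
# hypothetical helper: run a noisy command, replay its output only on failure
run_quiet() {
  local log status
  log=$(mktemp)
  "$@" > "$log" 2>&1
  status=$?
  if [ "$status" -ne 0 ]; then
    cat "$log" >&2     # replay the combined stdout/stderr
  fi
  rm -f "$log"
  return "$status"
}

run_quiet true                                                # silent
run_quiet sh -c 'echo oops >&2; exit 3' || echo "failed: $?"  # replays "oops"
```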

reply
viraptor 21 hours ago
No restrictions. You can create your own beautiful monsters that way.

> Would probably be hard to guess since the process may not have opened any file once it started.

You need to not only inspect the current state, but also race the process before the assignments change.

reply
casey2 3 hours ago
This is why I dislike sites like stackoverflow. If I needed a quick lookup the v7 manpage explains it better, the v6 doesn't have it, but that's because unix didn't have bourne shell til V7

https://man.cat-v.org/unix_7th/1/sh#:~:text=%3C%26digit%0A%2...

Seriously when it comes to unix RTFM RTFM RTFM and you'll get the top comment on SO and HN rolled into one.

reply
adzm 2 days ago
I always wondered if there ever was a standard stream for stdlog, which seems useful and comes up in various places, but usually just as an alias to stderr
reply
knfkgklglwjg 24 hours ago
Powershell has ”stdprogress”
reply
jibal 24 hours ago
/dev/stderr on Linux
reply
nurettin 2 days ago
I saw this newer bash syntax for redirecting all output some years ago on irc

    foo &> file  
    foo |& program
reply
rezonant 24 hours ago
I didn't know about |&, not sure if it was introduced at the same time. So I'd always use &> for redirection to file and 2>&1 for piping
reply
ndsipa_pomu 23 hours ago
I think the "|&" is the most intuitive syntax - you can just amend an existing pipe to also include STDERR
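In bash, appending the & to an existing pipe does exactly that (a sketch; "chatty" is a stand-in command):

```shell
chatty() { echo out; echo err >&2; }

chatty | wc -l      # 1: stderr bypasses the pipe
chatty |& wc -l     # 2: bash shorthand for "chatty 2>&1 | wc -l"
```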
reply
AnimalMuppet 21 hours ago
Somewhat off topic, but related: I worked at this place that made internet security software. It ran on Windows, and on various flavors of Unix.

One customer complained about our software corrupting files on their hard disk. Turns out they had modified their systems so that a newly-spawned program was not given a stderr. That is, it was not handed 0, 1, and 2 (file descriptors), but only 0 and 1. So whenever our program wrote something to stderr, it wrote to whatever file had been the first one opened by the program.

We talked about fixing this, briefly. Instead we decided to tell the customer to fix their broken environment.

reply
otikik 17 hours ago
To me it means "I didn't want to come up with an intelligible syntax for this". Shell scripts have many dark corners and sharp edges, and this is one of them.
reply
whatever1 21 hours ago
Awesome. Next week I will forget it again.
reply
simoncion 11 hours ago
While you're still thinking about it, make sure to bookmark the "redirections" section of the manual. [0] Also useful might be the "pipelines" section [1] to remind you of the "|&" operator.

[0] <https://www.gnu.org/software/bash/manual/bash.html#Redirecti...>

[1] <https://www.gnu.org/software/bash/manual/bash.html#Pipelines...>

reply
tempodox 18 hours ago
That’s nothing, try `&>`.
reply
oguz-ismail2 15 hours ago
This is one of those places where Bash diverges from POSIX. The standard says `echo &>/dev/null' is two commands, namely `echo &' and `>/dev/null', but Bash interprets it as redirect both stdout and stderr of `echo' to `/dev/null' both in normal and POSIX mode.
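The divergence is observable if both shells are installed (a sketch; the temp-file path is arbitrary):

```shell
f=$(mktemp)
# bash (normal and POSIX mode): one command, both streams go to the file
bash -c "echo hi &>$f"
cat "$f"     # hi
rm -f "$f"

# dash, following the letter of the standard, would instead see two commands:
#   "echo hi &"  (backgrounded)   and   ">$f"  (which just truncates the file)
```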
reply
jolmg 14 hours ago
Also known as `>&`.

  cmd >&out-and-err.txt
reply
hinkley 16 hours ago
I first encountered this thirty four years ago and I still hate it. Almost as much as I hate when people ask me to explain it.

Look man, I didn’t invent this stupid shit, and I’m not telling you it’s brilliant, so don’t kill the messenger.

I thought I’d seen somewhere that zsh had a better way to do this but I must have imagined it. Or maybe I’m confusing it with fish.

reply
kuon 13 hours ago
It was never fully clear to me why the order mattered.
reply
JackAcid 22 hours ago
A.I. has made the self-important neckbeards of Stack Overflow obsolete.
reply
alwillis 3 hours ago
Yes! And they're not happy about it.
reply
everyone 11 hours ago
stackoverflow, how quaint. Anyone here remember when it was actually useful and questions like the one featured could be asked and answered?
reply
joelthelion 10 hours ago
Closed as not a real question.
reply
nodesocket 24 hours ago
I understand how this works, but wouldn’t a more clear syntax be:

command &2>&1

Since the use of & signifies a file descriptor. I get that what this ACTUALLY does is run command in the background and then run 2, sending its stdout to stdout. That's completely not obvious, by the way.

reply
dheera 24 hours ago
even clearer syntax:

command &stderr>&stdout

reply
jolmg 14 hours ago
You're not limited to the standard file descriptors.

  command 4>&3
reply
aichen_dev 13 hours ago
[dead]
reply
parasti 13 hours ago
Cool tip - never knew this. I always figured piping to `tee` is a must in order to view-and-save command output at the same time. Turns out I can do "command >&1 >file.txt" instead!
reply
stuartjohnson12 13 hours ago
Unfortunately you are replying to an AI spambot
reply
cpach 12 hours ago
If you see an account that you suspect is a spambot, please send an email to hn@ycombinator.com, then the mods can take action.
reply
twocommits 5 hours ago
[dead]
reply
datawars 12 hours ago
[dead]
reply
esafak 23 hours ago
It means someone did not bother to name their variables properly, reminding you to use a shell from this century.
reply