> but then shouldn't it rather be &2>&1?
> & is only interpreted to mean "file descriptor" in the context of redirections. Writing command &2>&1 is parsed as command & and 2>&1
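A sketch of that parse, using `true` as a stand-in command:

```shell
# `&` means "file descriptor" only inside a redirection operator like `>&`.
# So `true &2>&1` parses as `true &` (a background job) followed by
# `2>&1` (a redirection applied to an empty command), not as one redirection.
true &2>&1
wait            # reap the background `true`; the `2>&1` part already ran
echo "still alive"
```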
That's where all the confusion comes from. I believe most people can intuitively understand > is redirection, but the asymmetrical use of & throws them off.
Interestingly, PowerShell also uses 2>&1. Given a once-in-a-lifetime chance to redesign the shell, out of all the Unix relics, they chose to keep (borrow) this.
dir C:\, fakepath 2>&1 > .\dir.log
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".[1] https://learn.microsoft.com/en-us/powershell/module/microsof...
It's also not a file descriptor. It's a PowerShell stream, of which there are five or so that you can redirect; they're similar to log levels.
So, >foo is the same as 1>foo
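A quick check of that equivalence (the scratch file names are mine):

```shell
# `>` with no fd number defaults to fd 1 (stdout), so these are identical:
echo hello >  implicit.txt
echo hello 1> explicit.txt
cmp -s implicit.txt explicit.txt && echo "identical"
```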
If you want to get really into the weeds: I think 2>>&1 will create a file called 1. Appending to a file descriptor makes no sense (or rather, truncating to a file descriptor makes no sense, is maybe what I mean), but why this is the case is probably an oversight 50 years ago in sh. I'd be surprised if this was codified anywhere, or relied upon in scripts.
You redirect stdout with ">" and stderr with "2>" (a two-letter operator).
If you want to redirect to stdout / stderr, you use "&1" or "&2" instead of putting a file name.
https://www.gnu.org/software/bash/manual/html_node/Redirecti...
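Putting those two rules together (file names as targets vs `&N` fd targets), a sketch:

```shell
# stdout and stderr to separate files
sh -c 'echo out; echo err >&2' > out.txt 2> err.txt
# `&1` as the target means "wherever fd 1 currently points", so both
# streams land in the same file:
sh -c 'echo out; echo err >&2' > both.txt 2>&1
```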
The usual thing (before LLMs) is to Google the question, but for the question to appear in Google, someone has to ask it first, and here we are.
Also the Stackoverflow answers give different perspectives, context, etc... rather than just telling you what it does, which is useful to someone unfamiliar with how redirections work. As I said, someone who doesn't know about "2>&1" is unlikely to be an expert given how common the pattern is, so a little hand holding doesn't hurt.
Where else would you look but in the manual of your shell? And you don’t have to know in which section to look, you can just search for “2>&1” in the bash man page.
Take the command "ls -l ~/.. ; fg" for instance. What is interpreted by the shell and what are commands? If you have some experience in bash, you probably know, and therefore you know which part to look in which man page, but you probably also know "2>&1".
Spoiler: "-l" is part of the command, so look in the "ls" manpage. "~", is expanded by the shell, ";" is shell syntax and "fg" is a builtin, all three are in the "bash" manpage. ".." is part of the filesystem.
Google search is literally useless these days, at least for the average Joe.
In Emacs, when I hit C-h i I get a menu of all my info manuals and I first read the bash one there.
REDIRECTION

Before a command is executed, its input and output may be redirected using a special notation interpreted by the shell. Redirection may also be used to open and close files for the current shell execution environment. The following redirection operators may precede or appear anywhere within a simple command or may follow a command. Redirections are processed in the order they appear, from left to right.
In the following descriptions, if the file descriptor number is
omitted, and the first character of the redirection operator is <, the
redirection refers to the standard input (file descriptor 0). If the
first character of the redirection operator is >, the redirection
refers to the standard output (file descriptor 1).
The word following the redirection operator in the following
descriptions, unless otherwise noted, is subjected to brace expansion,
tilde expansion, parameter expansion, command substitution, arithmetic
expansion, quote removal, pathname expansion, and word splitting. If
it expands to more than one word, bash reports an error.
Note that the order of redirections is significant. For example, the
command
ls > dirlist 2>&1
directs both standard output and standard error to the file dirlist,
while the command
ls 2>&1 > dirlist
directs only the standard output to file dirlist, because the standard
error was duplicated as standard output before the standard output was
redirected to dirlist.
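The two orderings quoted above can be checked directly; a small sketch (the `emit` helper is mine):

```shell
emit() { echo out; echo err >&2; }
# stdout is pointed at the file first, then stderr is duplicated onto it:
emit > dirlist 2>&1        # dirlist gets both lines
# stderr is duplicated onto the *old* stdout first, then stdout moves:
emit 2>&1 > dirlist2       # dirlist2 gets only "out"
```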
...I know GNU -> Google -> SO -> LLM is a culture shift; this is how human attention drifts. Search engines and LLMs only capture the latest group attention, and we have short memories. That is why reading is a critical skill for us as human beings: we can't afford to outsource it to machines. (Same with writing.)
I still acutely remember the gatekeeping and hostility of peak stack overflow, and the inanity of churning out jira tickets as fast as possible for misguided product initiatives. It's just wild yo
I also had a better experience with Stack Overflow than with AI. The AI was unable to tell me that I couldn't assign a new value to my std::optional in my specific case, and kept hallucinating copy constructor rules. A Stack Overflow question matching my problem cleared that up for me.
Sometimes you need someone to tell you no.
I have and had problems with StackOverflow. But LLMs are nowhere near that, and unfortunately, as we can see, StackOverflow is basically dead, and that's very problematic with kinda new things, like Android compose. Not once could, for example, Opus give the best option on the first try, even for something simple, like wanting a zero WindowInset object: it gives an answer for sure, and completely ignores the simplest one. And that happens all the time. I'm not saying that StackOverflow was good regarding this, but it was better for sure.
Perhaps if there was no question already available you'd have had a different experience. Getting clearly written and specific questions promptly closed as duplicates of related, yet distinct issues, was part of the fun.
I find that AI hallucinates in the same way that someone can be very confident and wrong at the same time, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.
And sometimes that someone can be you, and AI is notoriously bad at telling you that you're wrong (because it has to please people)
It’s kinda the same feeling when browsing the faq of a project. It gives you a more complete sense of the domain boundaries.
I still prefer to refer to book or SO instead of asking the AI. Coherency and purposefulness matter more to me then a direct answer that may be wrong.
I could not disagree more! With pesky humans, you have all sorts of things to worry about:
- is my question stupid? will they think badly of me if i ask it?
- what if they don't know the answer? did i just inadvertently make them look stupid?
- the question i have is related to their current work... i hope they don't see me as a threat!
and on and on. asking questions in such a manner as to elicit the answer, without negative externalities, is quite the art form, as i'm sure many stack overflow users will tell you. many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication being "you really are stupid!", totally useless to the question-asker and a much more frustrating time-waster than even the most moralizing LLM.
with LLMs, you don't have to play these 'token games'. you throw your query at it, and irrespective of the word order, word choice, or the nature of the question - it gives you a perfectly neutral response, or at worst politely refuses to answer.
> many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication being "you really are stupid!"
You may have heard of the XY problem, where people ask a question about Y only because they have an incorrect answer to X. A question has a goal (unless rhetorical), and to the person being asked, it may be confusing. You may have a valid reason to go against common sense, but if the other person is not your tutor or a fellow researcher, they may not be willing to accommodate you and spend their time on a goal they have no context about.
Remember the car wash question for LLMs? Some phrasings have the pattern of a trick question, and that's another thing people watch out for.
Normally when you do something like command > file.txt, you’re only capturing the normal output — errors still go to your screen.
2>&1 is how you say: "send the error pipe into the same place as the normal output pipe." Breaking it down without jargon:

- 2 means "the error output"
- > means "send it to"
- &1 means "wherever the normal output is currently going" (the & just means "I'm referring to a pipe, not a file named 1")
This response is essentially just the second answer to the linked question (the response by dbr) with a bunch of the important words taken out.
And all it cost you to get it was more water and electricity than simply clicking the link and scrolling down — to say nothing of the other costs.
"I didn't have time to write you a short letter, so I wrote you a long one." is real.
If you want it with the correct terminology:
2 means "file descriptor 2", > means "redirect the previously mentioned to the following", &1 means "file descriptor 1" (and not a file named "1")
File descriptors are like handing pointers to the users of your software. At least allow us to use names instead of numbers.
And sh/bash's syntax is so weird because the programmer at the time thought it was convenient to do it like that. Nobody ever asked a user.
You can for the destination. That's the whole reason you need the "&": to tell the shell the destination is not a named file (which itself could be a pipe or socket). And by default you don't need to specify the source fd at all. The intent is that stdout is piped along but stderr goes directly to your tty. That's one reason they are separate.
And for those saying "<" would have been better: that is used to read from the RHS and feed it as input to the LHS so it was taken.
Bash syntax is the pinnacle of Chesterton's Fence. If you can't articulate why it was done that way, you have no right to remove it. Python would be an absolutely unusable shell language.
if $command; then <thing>; else <thing>; fi
You may be complaining about the syntax for the test command specifically or bash’s [[ builtin
Also the choice of quotes changing behavior is a thing in:
1. JavaScript/TypeScript
2. Python
3. C/C++
4. Rust
In some cases it’s the same difference, eg: string interpolation in JavaScript with backticks
In those languages they change what's contained in the string. Not how many strings you get. Or what the strings from that string look like. ($@ being an extreme example)
From the bash man page via StackOverflow:
> @ Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" ... If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed).
That’s…a lot. I think Bash is interesting in the “I’m glad it works but I detest having to work with it” kind of way. Like, fine if I’m just launching some processes or tail’ing some logs, but I’ve rarely had a time when I had to write an even vaguely complex bash script where I didn’t end up spending most of my time relearning how to do things that should be basic.
Shellcheck was a big game changer at least in terms of learning some of the nuance from a “best practice” standpoint. I also think that the way bash does things is just a little too foreign from the rest of my computing life to be retained.
Why does Bash syntax have to be "simple"? For me, Bash syntax is simple.
This is like saying "what's wrong with brainfuck??? makes sense to me!" Every syntax can be understood, that does not automatically make them all good ideas.
2>/dev/stdout
Which is about the same as `2>&1` but with a friendlier name for STDOUT. And this way `2> /dev/stdout`, with the space, also works, whereas `2> &1` doesn't, which confuses many. But its behavior isn't exactly the same and might not work in all situations. And of course I wish you could use a friendlier name for STDERR instead of `2>`.
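A quick check that the simple case agrees (scratch file names are mine; this relies on /dev/stdout existing, which is true on Linux, macOS, and most BSDs but not guaranteed by POSIX — and note `/dev/stdout` reopens the target, so the two descriptors get independent file offsets, unlike `2>&1`):

```shell
# both forms land stderr in the same file as stdout here
sh -c 'echo oops >&2' > merged1.txt 2>&1
sh -c 'echo oops >&2' > merged2.txt 2>/dev/stdout
```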
The situation where this is going to cause confusion is when you do this for multiple commands. It looks like they're all writing to a single file. Of course, that file is not an ordinary file - it's a device file. But even that isn't enough. You have to know that each command sees its own incarnation of /dev/stdout, which refers to its own fd1.
Shell is from a time when you had a huge selection of languages, each for different purposes, and you picked the right one for the job. For complex applications, you would have multiple languages working together.
People look at Bash and think, "I would never dare do $Task with that language!". And you'd be right, because you're thinking you only have one tool in the toolbox.
In the C API of course there's symbolic names for these. STDIN_FILENO, STDOUT_FILENO, etc for the defaults and variables for the dynamically assigned ones.
Which means that reading someone else's shell script (or awk, or perl, or regex) is INCREDIBLY inconvenient.
But my main reason is that most scripts break when you call them with filenames that contain spaces. And they break spectacularly.
You have to write the crappy sh script once but then you get simple, easy usage every time. (If you're revising the script frequently enough that sh/bash are the bottleneck, then what you have is a dev project and not a script, use a programming language).
Three factors conspire to make a bug:
1. Someone decides to use a space
2. We use Python
3. macOS
Say you clone into a directory with a space in it. We use Python, so thus our scripts are scripts in the Unix sense. (So, Python here is replacable with any scripting language that uses a shebang, so long as the rest of what comes after holds.) Some of our Python dependencies install executables; those necessarily start with a shebang: #!/usr/bin/env python3
Note that space. Since we use Python virtualenvs, the shebang is actually:
#!/home/bob/src/repo/.venv/bin/python3
But … now what if the dir has a space? #!/home/bob/src/repo with a space/.venv/bin/python3
Those look like arguments, now, to a shebang. Shebangs have no escaping mechanism. And as I discovered along the way, the Python tooling checks for this! It will instead emit a polyglot!
#!/bin/bash
# <what follows in a bash/python polyglot>
# the bash will find the right Python interpreter, and then re-exec this
# script using that interpreter. The Python will skip the bash portion,
# b/c of cleverness in the polyglot.
Which is really quite clever, IMO. But, … it hits (2.). It execs bash, and worse, it is macOS's bash, and macOS's bash will corrupt^W remove for your safety! certain environment variables from the environment. Took me forever to figure out what was going on. So yeah … spaces in paths. Can't recommend them. Stuff breaks, and it breaks in weird and hard to debug ways.
I suppose it would also need env to be able to handle paths that have spaces in them.
My practical view is to avoid spaces in directories and filenames, but to write scripts that handle them just fine (using BASH - I'm guilty of using it when more sane people would be using a proper language).
My ideological view is that unix/POSIX filenames are allowed to use any character except for NULL, so tools should respect that and handle files/dirs correctly.
I suppose for your usage, it'd be better to put the virtualenv directory into your path and then use #!/usr/bin/env python
I'd give this a try, works with any language:
#!/usr/bin/env -S "/path/with spaces/my interpreter" --flag1 --flag2
Only if my env didn't have -S support would I consider a separate launch script like:

#!/bin/sh
exec "/path/with spaces/my interpreter" "$0" "$@"
But most decent languages seem to have some way around the issue.

Python:
#!/bin/sh
""":"
exec "/path/with spaces/my interpreter" "$0" "$@"
":"""
# Python starts here
print("ok")
Ruby:
#!/bin/sh
exec "/path/with spaces/ruby" -x "$0" "$@"
#!ruby
puts "ok"
Node.js:
#!/bin/sh
/* 2>/dev/null
exec "/path/with spaces/node" "$0" "$@"
*/
console.log("ok");
Perl:
#!/bin/sh
exec "/path/with spaces/perl" -x "$0" "$@"
#!perl
print "ok\n";
Common Lisp (SBCL) / Scheme (e.g. Guile):
#!/bin/sh
#|
exec "/path/with spaces/sbcl" --script "$0" "$@"
|#
(format t "ok~%")
C:
#!/bin/sh
#if 0
exec "/path/with spaces/tcc" -run "$0" "$@"
#endif
#include <stdio.h>
int main(int argc, char **argv)
{
puts("ok");
return 0;
}
Racket:
#!/bin/sh
#|
exec "/path/with spaces/racket" "$0" "$@"
|#
#lang racket
(displayln "ok")
Haskell:
#!/bin/sh
#if 0
exec "/path/with spaces/runghc" -cpp "$0" "$@"
#endif
main :: IO ()
main = putStrLn "ok"
OCaml (needs bash process substitution):
#!/usr/bin/env bash
exec "/path/with spaces/ocaml" -no-version /dev/fd/3 "$@" 3< <(tail -n +3 "$0")
print_endline "ok";;

Many people probably think in terms of "fd 0" and "fd 1" instead of "standard in" and "standard out", but should you wish to use names, at least on modern Linux/BSD systems do:
echo message >/dev/stdout
echo error_message >/dev/stderr

install /dev/stdin file <<EOF
something
EOF

$ ls -al /dev/std*
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stderr -> fd/2
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdin -> fd/0
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdout -> fd/1
$ ls -n /dev/fd/[012]
crw--w---- 1 501 4 0x10000000 Feb 27 13:38 /dev/fd/0
crw--w---- 1 501 4 0x10000000 Feb 27 13:38 /dev/fd/1
crw--w---- 1 501 4 0x10000000 Feb 27 13:38 /dev/fd/2
$ uname -v
Darwin Kernel Version 24.6.0: Mon Jan 19 22:00:55 PST 2026; root:xnu-11417.140.69.708.3~1/RELEASE_ARM64_T6000
$ sw_vers
ProductName: macOS
ProductVersion: 15.7.4
BuildVersion: 24G517
Lest you think it's some bashism that's wrapping ls, they exist regardless of shell:

$ zsh -c 'ls -al /dev/std*'
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stderr -> fd/2
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdin -> fd/0
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdout -> fd/1
$ csh -c 'ls -al /dev/std*'
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stderr -> fd/2
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdin -> fd/0
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdout -> fd/1
$ tcsh -c 'ls -al /dev/std*'
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stderr -> fd/2
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdin -> fd/0
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdout -> fd/1
$ ksh -c 'ls -al /dev/std*'
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stderr -> fd/2
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdin -> fd/0
lr-xr-xr-x 1 root wheel 0 Feb 24 15:08 /dev/stdout -> fd/1
I tried the install example that you provided and it worked on macOS as well as Linux.

echo >&2 error_message
On Linux, /dev/std* requires the kernel to do file name resolution in the virtual file system, because it could point to something nonstandard that isn't a symlink to something like /proc/self/fd/XX, and then the kernel has to check that that should hopefully point to a special character device. You can use /dev/stdin, /dev/stdout, /dev/stderr in most cases, but it's not perfect.
Never ever write code that assumes this. These /dev shorthands are not universally available, and even on Linux you'll need a certain minimum kernel version.
I cringe at the amount of shell scripts that assume bash is the system interpreter, and not sh or ksh.
Always assume sh, it's the most portable.
Linux != Unix.
Adding a new feature in a straightforward way often makes it work only on 4/7 of the operating systems you're trying to support. You then rewrite it in a slightly different way (because it's shell — there's always 50 ways to do the same thing). This gets you to 5/7 working systems, but breaks one that previously worked. You rewrite it yet another way, fixing the new breakage, but another one breaks. Repeat this over and over again, trying to find an implementation that works everywhere, or start adding workarounds for each system. Spend an hour on a feature that should have taken two minutes.
If it's anything remotely complicated, and you need portability, then use perl/python/go.
So, you're not wrong, but...
I want to be able to route x independent input and y independent output trivially from the terminal
Proper i/o routing
It shouldn't be hard, it shouldn't be unsolved, and it shouldn't be esoteric
Even if you're a programmer, that doesn't mean you magically know what other programmers find easy or logical.
Sure. Here's what that looked like:
What should be the syntax according to contemporary IT people? JSON? YAML? Or just LLM prompt?
If you're using shell specific features in a tightly controlled environment like a docker container then yeah, go wild. If you're writing a script for personal use, sure. If you're writing something for other people to run then your code will be working around all the missing features posix hasn't been updated to include. You can't use arrays, or arithmetic context, nothing. It sucks to use.
Besides, if you're writing a script it is likely that it will grow, get more complicated, and you will soon bump up against the limitations of the language and have to do truly horrible workarounds.
This is why if I need something for others to run then I just use python from the beginning. The code will be easier to read and more portable. At this point the vast majority of OS's and images have it available anyway so it's not as big a barrier as it used to be.
Why is there a 2 on the left, when the numbers are usually on the right? What's the relationship between 2 and 1? Is the 2 for stderr? Is that `&` meant as "reference"? The fact that you only grok it if you know POSIX sys calls means it's far from self-explanatory. And given the proportion of people who know POSIX sys calls among those who use Bash, I think it's a bit of an elitist syntax.
If your complaint is "I don't know what this syntax means without reading the manual" I'd like to point you to any contemporary language that has things like arrow functions, or operator overloading, or magic methods, or monkey patching.
I know about manuals, and I have known this specific syntax for half of my life.
Arrow functions etc are mechanisms in the language. A template you can build upon. This one is just one special operator. Learn it and use it, but it will serve no other purpose in your brain. It won't make anything easier to understand. It won't help you decipher other code. It won't help you draw connections.
The MDN page for arrow functions in JS has, I shit you not, 7 variations on the syntax. And your complaint is these are not intuitively similar enough?
call > output
call 2>&1
call > output 2> error
call 1> output 2> error
Give me a fucking break.
Python doesn't really have much that makes it a sensible choice for scripting.
It's got some basic data structures and a std-lib, but it comes at a non-trivial performance cost, a massive barrier to getting out of the single thread, and non-trivial overhead when managing downstream processes. It doesn't protect you from any runtime errors (no types, no compile checks). And I wouldn't call Python in practice particularly portable...
Laughably, NodeJS is genuinely a better choice - while you don't get multithreading easily, at least you aren't trivially blocked on IO. NodeJS also has pretty great compatibility for portability; and can be easily compiled/transformed to get your types and compile checks if you want. I'd still rather avoid managing downstream processes with it - but at least you know your JSON parsing and manipulation is trivial.
Go is my goto when I'm reaching for more; but (ba)sh is king. You're scripting on the shell because you're mainly gluing other processes together, and this is what (ba)sh is designed to do. There is a learning curve, and there are footguns.
diff <(seq 1 20) <(seq 1 10)
I do that with diff <(xxd file.bin) <(xxd otherfile.bin) sometimes, when I expect things to line up and want to see where they break. Also the reason why Zsh has an additional =(command) construct, which uses temporary files instead.
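What `<(...)` expands to is just a path the command can open like a filename; a sketch (run under bash explicitly, since process substitution isn't POSIX sh):

```shell
# each <(...) becomes a /dev/fd/N path handed to diff as an argument
bash -c 'diff <(printf "a\nb\n") <(printf "a\nc\n") > diffs.txt; true'
# the substitution is visible as an ordinary argument:
bash -c 'echo <(true)' > procsub_path.txt   # prints a /dev/fd/N path
```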
It would be great to be able to open a socket in bash[^1] and pass it to another program to read/write from without having an extra socat process and pipes running (and the buffering, odd flush behaviour, etc.). It would be great if programs expected to receive input file arguments as open fds, rather than providing filenames and having the process open them itself. Sandboxing would be trivial, as would understanding the inputs and outputs of any program.
It's frustrating to me because the underlying unix system supports this so well, it's just the conventions of userspace that get in the way.
[^1]: I know about /dev/tcp, but it's very limited.
[1]: https://www.oreilly.com/library/view/essential-system-admini...
It also teaches how && and || work, their relation to [output redirection][3] and [command piping][2], [(...) versus {...}][4], and tricky parts like [word expansion][5], even a full grammar. It's not exciting reading, but it's mostly all there, and works on all POSIXy shells, e.g. sh, bash, ksh, dash, ash, zsh.
[1]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html
[2]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[3]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[4]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
[5]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/chap2.html...
It's very, very easy to get shell scripts wrong; for instance the location of the file redirect operator in a pipeline is easy to get wrong.
It redirects STDERR (2) to where STDOUT is piped already (&1). Good for dealing with random CLI tools if you're not a human.
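For example, if a noisy tool logs to stderr (the `noisy` helper here is hypothetical):

```shell
noisy() { echo ok; echo oops >&2; }
# without merging, the pipe only carries stdout (stderr silenced here
# just to keep the demo's output clean):
noisy 2>/dev/null | wc -l > without.txt   # 1 line reaches the pipe
# with 2>&1, stderr rides along through the pipe:
noisy 2>&1 | wc -l > with.txt             # 2 lines reach the pipe
```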
LLMs have neither muscles nor memories. They're token combinators based on statistical correlation, no more, no less.
That's not to say LLMs can't be useful when they string together tokens. Quite the contrary, in fact. But let's not pretend LLMs are something they're not.
So `>&1` is "into the file descriptor pointed to by 1", and at the time any reasonable programmer would have known that fd 1 == STDOUT.
The question was how to remember it's "2>&1" and not "2&>1". If you think of "&1" as the address/destination, the syntax is quite natural.
Which actually means that an underlying dup2 operation happens in this direction:

2 <- 1 // dup2(1, 2)

The file description at [1] is duplicated into [2], thereby [2] points to the same object. Anything written to stderr goes to the same device that stdout is sending to. The notation follows I/O redirections: cmd > file actually means that a descriptor [n] is first created for the open file, and then that descriptor's description is duplicated into [1]:

n <- open("file", O_WRONLY|O_CREAT|O_TRUNC)
1 <- n

Treating ">&" as a distinct operator actually makes an elegant solution here. I like the idea.
I've only ever been tricked into working on C++...
The comment about "why not &2>&1" is probably the best one on the page, with the answer essentially being that it would complicate the parser too much / add an unnecessary byte to scripts.
On the other hand, pipe “|” is brilliant!
$ ./outerr >blah 2>&1
sends stdout and stderr to blah; imitating that order with a pipe instead does not:

$ ./outerr | 2>&1 cat >blah
err
This is because | is not a mere redirector but a statement terminator. (where outerr is the following...)
echo out
echo err >&2

But also, | isn't a redirection; it takes stdout and pipes it to another program.
So, if you want stderr to go to stdout, so you can pipe it, you need to do it in order.
bob 2>&1 | prog
You usually don't want to do this though.
First the | pipe is established as fd [1]. And then 2>&1 duplicates that pipe into [2]. I.e. right to left: opposite to left-to-right processing of redirections.
When you need to capture both standard error and standard output to a file, you must have them in this order:
bob > file 2>&1
It cannot be: bob 2>&1 > file
Because then the 2>&1 redirection is performed first (and usually does nothing, because stderr and stdout are already the same, pointing to your terminal). Then > file redirects only stdout. But if you change > file to | process, then it's fine! process gets the combined error and regular output.
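A sketch contrasting the two cases (the `outerr` helper and file names are mine):

```shell
outerr() { echo out; echo err >&2; }
# with a file: 2>&1 first copies the script's current stdout, so the
# file later receives only stdout ("err" still hits the screen)
outerr 2>&1 > only_out.txt
# with a pipe: fd 1 is wired to the pipe before redirections run,
# so 2>&1 sends stderr into the pipe too
outerr 2>&1 | sort > combined.txt
```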
I had never made the connection of the & symbol in this context. I think I never really understood the operation before, treating it just as a magic incantation but reading this just made it click for me.
To be consistent, it would be &2>&1, but that makes it more verbose than necessary and actually means something else -- the first & means that the command before it runs asynchronously.
[0] https://stackoverflow.com/questions/3618078/pipe-only-stderr...
Would probably be hard to guess since the process may not have opened any file once it started.
It is not. You can use any arbitrary numbers provided they're initialized properly. These values are just file descriptors.
For Example -> https://gist.github.com/valarauca/71b99af82ccbb156e0601c5df8...
I've used (see: example) to handle applications that just dump pointless noise into stdout/stderr, which is only useful when the binary crashes/fails. Provided the error is marked by a non-zero return code, this will then correctly display the stdout/stderr (provided there is <64KiB of it).
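A minimal sketch of an arbitrary descriptor, using `exec` to open fd 3 (the file name is mine):

```shell
# any small integer can be a descriptor once it's opened
exec 3> notes.txt        # open fd 3 for writing
echo "hello fd three" >&3
exec 3>&-                # close fd 3
```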
https://man.cat-v.org/unix_7th/1/sh#:~:text=%3C%26digit%0A%2...
Seriously when it comes to unix RTFM RTFM RTFM and you'll get the top comment on SO and HN rolled into one.
foo &> file
foo |& program

One customer complained about our software corrupting files on their hard disk. Turns out they had modified their systems so that a newly-spawned program was not given a stderr. That is, it was not handed 0, 1, and 2 (file descriptors), but only 0 and 1. So whenever our program wrote something to stderr, it wrote to whatever file had been the first one opened by the program.
We talked about fixing this, briefly. Instead we decided to tell the customer to fix their broken environment.
[0] <https://www.gnu.org/software/bash/manual/bash.html#Redirecti...>
[1] <https://www.gnu.org/software/bash/manual/bash.html#Pipelines...>
Look man, I didn’t invent this stupid shit, and I’m not telling you it’s brilliant, so don’t kill the messenger.
I thought I’d seen somewhere that zsh had a better way to do this but I must have imagined it. Or maybe I’m confusing it with fish.
command &2>&1
Since the use of & signifies a file descriptor. I get that what this ACTUALLY does is run command in the background and then apply 2>&1 as a bare redirection on an empty command. That's completely not obvious, by the way.
Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(1, 2) as 2<1, as in "make fd 2 a duplicate of fd 1", but in terms of abstract I/O semantics that would be misleading.
The danger is that if you don't open it before running the script, you'll get an error:
Now with that exec trick the fun only gets started. Because you can redirect to subshells and subshells inherit their redirection of the parent:
And now your bash script will have a nice log with stdout and stderr prefixed with INFO and ERROR, with timestamps and the PID. Now the disclaimer is that you will not have guarantees that the order of stdout and stderr will be correct, unfortunately, even though we run it unbuffered (-u and fflush).
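A deterministic cousin of that `exec` trick, keeping a dedicated log descriptor open for the whole script (the `log` helper and `run.log` name are mine; unlike the process-substitution version it can't reorder lines):

```shell
# keep fd 3 pointed at a log file for the whole script
exec 3>> run.log
log() { printf '%s [%s] %s\n' "$(date +%T)" "$$" "$*" >&3; }
log "INFO  starting"
log "ERROR something broke"
exec 3>&-                # close the log descriptor
```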
(if runners have sh then they might as well have a real compiler scratch > debian > alpine , "don't debug in prod")
You can, though, do the following:
This will pipe only the object contents to stdout, and the API response to /dev/null.

https://github.com/jez/symbol/blob/master/scaffold/symbol#L1...
The existing build system I did not have control over, and would produce output on stdout/stderr. I wanted my build scripts to be able to only show the output from the build system if building failed (and there might have been multiple build system invocations leading to that failure). I also wanted the second level to be able to log progress messages that were shown to the user immediately on stdout.
It was janky and it's not a project I have a need for anymore, but it was technically a real world use case.

You can also redirect specific file descriptors into other commands:
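For example (using bash's process substitution; mycmd is a stand-in for any program that writes to both streams):

```shell
# stand-in command that writes to both stdout and stderr
mycmd() { echo "normal output"; echo "oops" >&2; }

# feed only stderr into sed; stdout passes through untouched (bash-only)
mycmd 2> >(sed 's/^/stderr: /' >&2)

# fd juggling: capture stderr in a variable while stdout goes to a file
err=$( { mycmd 1>out.txt; } 2>&1 )
echo "captured: $err"   # captured: oops
```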
The shell's syntactic sugar also has some weird gotchas. The &2>&1 question and its answer are a good example of that. You're just trading one complexity (low-level knowledge) for another (the long list of syntax rules). Shell languages break the rule of not letting abstractions get in the way of insight and intuitiveness.
I know that people will argue that shell languages are not programming languages, and that terseness is important for the former. And yet, we still have people complaining about it. This is the programmer ego and the sysadmin ego clashing with each other. After all, nobody is purely just one of those two.
People who build a system or at least know how it works internally want to simplify their life by building abstractions.
As people come later to use the system with the embedded abstractions, they only know the abstractions but have no idea of the underlying implementations. Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error prone for non initiated users.
I don't think 2>&1 ever made any sense.
I think shell language is simply awful.
It's not that hard. Consider the following:
The shell thinks that you're trying to run the portion before the & (command) in the background and the portion after the & (2>&1) in the foreground. There is just one problem: the second part (2>&1) is a redirection waiting for a command to apply to (similar to how you can set environment variables in front of a command invocation), but you never specified that command, so it applies to an empty command and does nothing useful. Try it and see for yourself.

Now consider command 2>1. Here the shell redirects stderr/fd2 to a file named 1. It doesn't know that you're talking about a file descriptor instead of a filename. So you need to use &1 to indicate your intention. The same confusion doesn't happen for the left side (fd2) because that will always be a file descriptor. Hence the correct form is command 2>&1.

> I think shell language is simply awful.

Honestly, I wish I could ask the person who designed it why they made such decisions.
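Both parses are easy to demonstrate (the messages and file names here are mine):

```shell
# "1" after a plain > is a FILE name, not fd 1:
sh -c 'echo oops >&2' 2>1      # creates a file literally named "1"
cat 1                          # oops

# &1 means "whatever fd 1 currently points at":
sh -c 'echo oops >&2' 2>&1 | grep -c oops   # stderr flows through the pipe: prints 1
```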
(But I won't claim that I am always able to strike the right balance here)
The abstraction may be great, the problem is the lack of intuitive understanding you can get from super terse, symbol heavy syntax.
This redirection relies on foundational concepts (file descriptors, stdin 0, stdout 1, stderr 2) that need to be well understood when using unix. IMO, this helps to build insight and intuitiveness. A pipe is not magic, it is just a simple operation on file descriptors. Complexity exists (buffering, zombies), but not there.
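One way to build that intuition: a named FIFO exposes the same plumbing that `|` sets up anonymously (a sketch; the FIFO name is mine):

```shell
# a | b connects a's fd 1 to b's fd 0 through a kernel pipe;
# with a FIFO you wire up the same connection by hand
mkfifo mypipe
printf 'hello\n' > mypipe &   # writer: stdout -> pipe
read -r line < mypipe         # reader: stdin  <- pipe
echo "$line"                  # hello
rm mypipe
```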
I agree that 2>&1 is not complex. But I think I speak for many Bash users when I say that this idiom looks bad, is hard to Google, hard to read and hard to memorize.
So, sure, there are practical issues with details like this. And yet, it is simple. And there are simple methods for learning and retaining little tidbits like this over time if you care to do so. Bash and its cousins aren’t going away, so take notes, make a cheat sheet, or work on a better replacement (you’ll fail and make the problem worse, but go ahead).
The "Redirections" section of the manual [0] is just seven US Letter pages. This guy's cheat sheet [1] that took me ten seconds to find is a single printed page.
[0] <https://www.gnu.org/software/bash/manual/html_node/Redirecti...>
[1] <https://catonmat.net/ftp/bash-redirections-cheat-sheet.pdf>
"Just" seven US Letter pages? You're talking about redirections alone, right? How many such features exist in Bash? I find Python, Perl and even Lisps easier to understand. Some of those languages wouldn't have been even conceived if shell languages were good enough.
There is another shell language called 'execline' (to be precise, it's a replacement for a shell). The redirections in its commands are done using a program named 'fdmove' [1]. It doesn't leave any confusion as to what it's actually doing. fdmove doesn't mention the fact that it resorts to FD inheritance to achieve this. However, the entire 'shell' is based on chain loading of programs (fork, exec, FD inheritance, environment inheritance, etc). So fdmove's behavior doesn't really create any confusion to begin with. Despite execline needing some clever thinking from the coder, I find it easier to understand what it's actually doing, compared to bash. This is where bash and other POSIX shell languages went wrong with abstractions. They got carried away with them.
[1] https://www.skarnet.org/software/execline/fdmove.html
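For a flavor of the difference (the execline line follows fdmove's documented `fdmove -c to from prog...` form; the bash line below it is the equivalent redirection, with mycmd as a stand-in program):

```shell
# execline:  fdmove -c 2 1 mycmd   # -c = copy, i.e. dup2(1, 2), same as 2>&1
# bash equivalent:
mycmd() { echo out; echo err >&2; }
mycmd 2>&1 | sort   # both streams now travel down the pipe
```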
Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.
If you want documentation that's done up in the "modern" style, then you'll prefer that one-page cheat sheet that that guy made. I find that "modern" documentation tends to leave it up to each reader to discover the non-obvious parts of the behavior for themselves.
> I find Python ... easier to understand.
Have you read the [0] docs for Python's 'subprocess' library? The [1] docs for Python's 'multiprocessing' library? Or many of the other libraries in the Python standard library that deal with nontrivial process and I/O management? Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.
[0] ...twenty-five pages of...
[1] ...fifty-nine pages of...
> I'm zero Lisp expert
and this:
> I don't feel comfortable at all reading this snippet
are related. The comfort in reading Lisp comes from how few syntactic/semantic rules there are. There's a standard form and a few special forms. Compare that to C - possibly one of the smallest popular languages around. How many syntactical and semantic rules do you need to know to be a half decent C programmer?
If you look at the Lisp code, it has just 2 main features - a tree in the form of nested lists and some operations in prefix notation. It needs some getting used to for regular programmers. But it's said that programming newbies learn Lisps faster than regular programming languages, due to the fewer rules they have to remember.
Which is lost when using more modern languages, or languages foreign to Unix.
Any time the shell executes a program it forks, not just for redirections. Redirections will use dup before exec on the child process. Piping will be two forks and obviously the `pipe` syscall, with one process having its stdout dup'd to the input end of the pipe, and the other process having its stdin dup'd to the output end.
Honestly, I find the BASH manual to be excellently written, and it's probably available on your system even without an internet connection. I'd always go there than rely on stack overflow or an LLM.
https://www.gnu.org/software/bash/manual/bash.html#Redirecti...
https://man7.org/linux/man-pages/man2/dup.2.html
and
https://man.archlinux.org/man/dup2.2.en
A lot of bots are reading this. Amazing.
Since they're both just `dup2(1, 2)`, `2>&1` and `2<&1` are the same. However, yes, `2<&1` would be misleading because it looks like you're treating stderr like an input.
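Easy to check (both forms end up as dup2(1, 2); the direction of the bracket only changes how it reads):

```shell
sh -c 'echo to-stderr >&2' 2>&1 | grep -c to-stderr   # prints 1
sh -c 'echo to-stderr >&2' 2<&1 | grep -c to-stderr   # prints 1 as well
```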
open a terminal (OSX/Linux) and type:
open a browser window and search for:

Both will bring up the man page for the function call.

To get recursive, you can try:
(the unix is important, otherwise it gives you manly men)

That's only just after midnight [1][2]
[1] - https://www.youtube.com/watch?v=XEjLoHdbVeE
[2] - https://unix.stackexchange.com/questions/405783/why-does-man...
And I also disagree, your suggestion is not easier. The & operator is quite intuitive as it is, and conveys the intention.
> Respectfully, what was the purpose of this comment, really?
Judging by its replies alone, not everyone considers it purposeless. And even though I know enough to use shell redirections correctly, I still found that comment insightful. This is why I still prefer human explanations over AI. It often contains information you didn't think you needed. HN is one of the sources of the gradually dwindling supply of such information. That comment is still on-topic. Please don't discourage such habits.