Say you've got the following bash command line, which runs a series of commands in a backgrounded subshell:
exec 3< <(sleep 1 && echo success && cat file-not-found)
To read the stdout of the subshell, you can read file descriptor 3:
cat <&3
# success
The trouble is that the stderr of the subshell is connected to the stderr of the parent bash process and is printed to the terminal 1 second after the subshell starts.
cat: file-not-found: No such file or directory
You want to capture this error separately from stdout, to read it and ensure that it is empty. How would you go about it?
On Linux (and most likely nowhere else) a read-write redirection in Bash will work as an “anonymous pipe”, to an extent. This makes it possible to run a command in the background and process its standard and error outputs independently, without using any pipes visible in the file system.
The “anonymous pipe” comes with a caveat though: Unlike a fully fledged pipe node created using mkfifo, this “pipe” is merely a file descriptor. This implies that it remains open and there is no close()-like behavior at the end of an input, e.g. when a Bash statement with redirection terminates.
Consequently, cookbook examples like readarray -t ... or while IFS= read -r line; do ... ; done will freeze. This caveat can be circumvented using a suitable convention (protocol) to explicitly announce the end of input instead of relying on close(). In the snippets below, all lines are prefixed with a space (removed on the other side of the pipe), so that a completely empty line can serve as an end-of-input signal.
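To make the idiom and its caveat concrete, here is a minimal, self-contained sketch (Linux-only; the variable name fd is arbitrary). Opening the process substitution <(:) read-write yields a descriptor that behaves like an unnamed FIFO, readable and writable from the same shell:

```shell
# Minimal sketch (Linux-only): ':' exits immediately, but opening its
# process substitution read-write keeps both ends of the underlying
# pipe open in this shell, like an anonymous mkfifo.
exec {fd}<> <(:)

echo 'hello' >&"$fd"        # Write into the pipe...
IFS= read -r line <&"$fd"   # ...and read the same data back.
printf '%s\n' "$line"

# A further read would block forever: this shell still holds a write
# end open, so EOF never arrives. That is the caveat described above.

exec {fd}>&-                # Close the descriptor.
```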
Let’s assume we have a command called background_process. It may look like this:
background_process() {
    sleep 1
    echo ' stdout a' # Notice the leading space.
    sleep 1
    echo ' stderr a' >&2 # Notice the leading space.
    sleep 1
    echo ' stdout b' # Notice the leading space.
    sleep 1
    echo ' stderr b' >&2 # Notice the leading space.
    sleep 1
    echo # Empty line announces end of standard output.
    echo >&2 # Empty line announces end of error output.
}
Somewhat counter-intuitively, because we want to accumulate the error output of the background process and inspect it later, the error output is read and accumulated in the foreground, while the standard output is processed in a backgrounded subshell:
exec {out}<> <(:) # Standard output pipe.
exec {err}<> <(:) # Error output pipe.
background_process 2>&"$err"- >&"$out"- &
background_pid="$!"
while IFS= read -r line && [[ "$line" ]]; do
    printf 'standard output: %s\n' "${line:1}"
done <&"$out"- {err}<&- & # Standard output processing runs in the background.
reader_pid="$!"
exec {out}>&- # Closing standard output pipe for good.
declare -a errors
while IFS= read -r line && [[ "$line" ]]; do
    errors+=("${line:1}")
done <&"$err"- # Accumulation of errors runs in the foreground.
exec {err}>&- # Closing error output pipe for good.
wait "$background_pid" || echo "Background process failed with status $?"
wait "$reader_pid" || echo "Error output reader failed with status $?"
for error in "${!errors[@]}"; do
    printf 'standard error %d: %s\n' "$error" "${errors[error]}"
done
For the sake of brevity, this example honors the “space convention” directly in the “business logic” of background_process. In practice one could improve the separation of concerns using a suitable wrapper:
space_prefixing_wrapper() {
    "$@" \
        2> >(while IFS= read -r line; do printf ' %s\n' "$line"; done >&2) \
        > >(while IFS= read -r line; do printf ' %s\n' "$line"; done)
}
# ...
space_prefixing_wrapper some arbitrary command 2>&"$err"- >&"$out"- &
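As a quick sanity check, the wrapper can also be exercised on its own (a minimal sketch using a trivial echo as the wrapped command; capturing with command substitution is reliable here because $(...) reads until the stdout process substitution exits and closes the pipe):

```shell
space_prefixing_wrapper() {
    "$@" \
        2> >(while IFS= read -r line; do printf ' %s\n' "$line"; done >&2) \
        > >(while IFS= read -r line; do printf ' %s\n' "$line"; done)
}

out=$(space_prefixing_wrapper echo hello)
printf '[%s]\n' "$out"   # The stdout line now carries the leading space.
```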