I'm writing a bash script that is supposed to be "transparent" to the user. It reads commands from the user and intercepts them, allowing only some of them to be executed by bash, depending on some criteria. It (basically) works like this:
while true; do
    read COMMAND
    can_be_done $COMMAND
    if [ $? == 0 ]; then
        eval $COMMAND
        if [ $? != 0 ]; then
            echo "Error: command not found"
        fi
    fi
done
The problem is that when the command fails, its error output still gets printed to the console. But if I capture the result in a variable and only print it when the command succeeds, like so:
RESULT=$(eval $COMMAND)
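Spelled out in full, the variant I mean is something like this:

    RESULT=$(eval $COMMAND)
    if [ $? == 0 ]; then
        echo "$RESULT"
    fi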
Then there's another problem: the special formatting gets lost (for example, "ls --color" doesn't show colors anymore).
My question is: Is there a way to have the command print to STDOUT if successful, but to /dev/null if it fails?
Do you really need the second part, replacing the output of the command with an error message? Linux commands print their own error messages, which aren't necessarily "command not found". You'd be hiding the true error (permission denied, file not found, out of memory, segfault, etc.) with an oftentimes incorrect error message (command not found).
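For example, a failing command already reports a specific, useful error on stderr. With GNU coreutils ls (exact wording and exit code vary by implementation):

    $ ls /nonexistent
    ls: cannot access '/nonexistent': No such file or directory
    $ echo $?
    2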
If you remove that check, you could simplify the loop to something like this:
while true; do
    read -e COMMAND
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done
A couple of notes:

- read -e uses readline to obtain the command, making the prompt a lot more shell-like (↑ and ↓ for history, for instance).
- command; if [ $? == 0 ]; then is more idiomatically written as if <command>; then.
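can_be_done is whatever policy check you already have; just make sure it returns 0 to allow and nonzero to refuse. A hypothetical sketch, using a whitelist of allowed command names (the list here is purely illustrative):

    can_be_done() {
        # Strip everything from the first space to get the command name.
        case "${1%% *}" in
            ls|pwd|echo|cat) return 0 ;;   # allowed (example list)
            *)               return 1 ;;   # everything else is refused
        esac
    }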