Quoting the Bash manual page:
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
All this isn't so clear, but it actually means that if a simple command (or a pipeline, etc.) fails inside a function whose return status the caller checks, that command will not be subject to set -e.
In practice, it pretty much means that using functions renders set -e moot:
set -e
foo() {
    false   # this command returns 1
    true
}
foo                      # this call would fail: false triggers set -e and aborts the script
foo && echo "No errors!" # this one prints "No errors!" and passes fine: set -e is ignored
                         # inside foo, which thus runs to the end and returns 0
All this basically means set -e is untrustworthy when using shell functions to factor out code. Worse, it gives an impression of safety where there isn't any.
See also SO answer 37191242 and the bash-bug thread linked there for a "rationale" (can't say I understand why they don't introduce a custom shell option for this, though).
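One way to avoid being bitten is not to rely on -e inside functions at all: make functions propagate failures explicitly, and check the result at the call site. A minimal sketch, reusing the toy foo from above:

set -e
foo() {
    false || return 1   # propagate the failure explicitly instead of relying on -e
    true
}
if ! foo; then          # this check works whether or not -e is honored inside foo
    echo "foo failed" >&2
    exit 1
fi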
[edit] I was confused about set -E, so I scrapped that part. It's basically unrelated anyway: it only affects ERR traps.
To import the environment of another process, one can use while read -d '' -r ev; do export "$ev"; done <"/proc/$(pgrep -u "$USER" -x PROCNAME)/environ" (when using bash).
This is particularly handy e.g. when connecting to a machine through SSH while a graphical session is running, and you want to interact with that session (X, DBus, etc.).
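For instance, a sketch of importing the environment of a running graphical session from an SSH login (gnome-session is just an example process name here, any long-lived process of the session would do):

# import the session's environment into the current shell...
while read -d '' -r ev; do export "$ev"; done \
    <"/proc/$(pgrep -u "$USER" -x gnome-session)/environ"
# ...so that session-aware tools (DBus, X, etc.) now work, e.g.:
notify-send "hello from SSH"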
This leverages some Bash specifics, like read -d '' to use NUL as the record separator. There are solutions using only POSIX constructs, but the only one I know involves a temporary file, which is not as handy. Before discovering read -d '' I was using another Bashism, process substitution, in the form of <(tr '\0' '\n' </proc/$(pgrep -u "$USER" -x PROCNAME)/environ). It isn't as good, as it doesn't properly handle newlines in environment values, but it could easily be converted to a POSIX-compliant construct using a temporary file. Note that the naive alternative of piping the same thing into the while loop (and thus into read) will not work, as it would run the loop in a subshell, leaving the environment of the current shell unaffected. Another alternative would be to eval the output of a subshell that echoes the assignments, but that would require escaping the values, for which I don't know a robust POSIX solution (there are plenty of handmade ones around, but most fail in odd corner cases -- and no, printf %q is not in POSIX).
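For reference, a minimal sketch of that temporary-file variant (mktemp itself isn't strictly POSIX, but is ubiquitous; PROCNAME is a placeholder as before, and the same newline caveat applies):

tmp=$(mktemp) || exit 1
tr '\0' '\n' <"/proc/$(pgrep -u "$USER" -x PROCNAME)/environ" >"$tmp"
# redirecting from a file instead of piping keeps the loop in the current shell
while IFS= read -r ev; do
    export "$ev"
done <"$tmp"
rm -f "$tmp"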
There are some interesting things here, but it's missing what seems to me the most important point for any notion of defensiveness in (Ba)SH: quoting. Roughly speaking, all variables must be quoted, apart from a few special cases. For example, the very simple snippet used to obtain PROGNAME probably behaves unexpectedly if $0 contains spaces:
# with a script "/tmp/foo bar.bash":
$ bash foo\ bar.bash
foo
# with a script "/tmp/a b c d.bash":
$ bash a\ b\ c\ d.bash
basename: extra operand ‘c’
Try 'basename --help' for more information.
# ouch.
All this is because, without quoting, variable expansion is subject to word splitting and pathname expansion (for fun, try with a script named *), which is in general (very) dangerous. The solution is simple: quote all substitutions (which includes $()); the PROGNAME line, for example, becomes readonly PROGNAME=$(basename "$0"). Note the one exception here: assigning to a variable, which does not need quoting; I would still recommend always quoting, even there, because it protects against a refactoring mistake, for example.
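To make the fix concrete, a before/after sketch of that line:

# unquoted: the result of $0 undergoes word splitting and pathname expansion,
# so basename may receive several arguments
readonly PROGNAME=$(basename $0)    # breaks on spaces or glob characters in $0
# quoted: the expansion is passed to basename as a single argument
readonly PROGNAME=$(basename "$0")  # safe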
Also, as Sky mentions in their comment (in the middle of the rest), using set -e (abort the script if a command whose return status is not checked fails) is an excellent practice which avoids a whole cascade of problems when a command fails unexpectedly, and one I strongly recommend for any new script. Note however that it does not work in all the cases one might expect: for example, inside a function, local foo=$(false) will not terminate the script (the exit status of the local builtin, 0, masks the failure of the substitution; the same goes for export and declare). I'm in the habit of having a function along the lines of die() { echo "$@" >&2; exit 1; } and of also checking this kind of assignment, with foo=$(false) || die "failed to do something".
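Putting it together, a minimal sketch of that pattern (the names are just examples):

#!/bin/bash
set -e

die() { echo "$@" >&2; exit 1; }

f() {
    # pitfall: 'local foo=$(false)' would return local's status (0) and
    # set -e would not trigger; declaring first, then assigning and
    # checking explicitly, avoids that
    local foo
    foo=$(false) || die "failed to do something"
}
f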