Better Command Execution with bash

In this second part of a two-part series on executing commands with the bash shell, you will learn how to use fewer if statements, display error messages when failures occur, and more. This article is excerpted from chapter four of the bash Cookbook: Solutions and Examples for bash Users, written by Carl Albing, JP Vossen and Cameron Newham (O’Reilly, 2007; ISBN: 0596526784). Copyright © 2007 O’Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O’Reilly Media.

4.6 Using Fewer if Statements


As a conscientious programmer, you took to heart what we described in the previous recipe, Recipe 4.5, "Deciding Whether a Command Succeeds." You applied the concept to your latest shell script, and now you find that the script is unreadable, what with all those if statements checking the return code of every command. Isn’t there an alternative?


Use the double-ampersand operator in bash to provide conditional execution:

  $ cd mytmp && rm *


Two commands separated by double ampersands tell bash to run the first command and then to run the second command only if the first command succeeds (i.e., its exit status is 0). This is very much like using an if statement to check the exit status of the first command in order to protect the running of the second command:

  cd mytmp
  if (( $? == 0 )); then rm * ; fi

The double ampersand syntax is meant to be reminiscent of the logical and operator in C Language. If you know your logic (and your C) then you’ll recall that if you are evaluating the logical expression A AND B , then the entire expression can only be true if both (sub)expression A and (sub)expression B evaluate to true. If either one is false, the whole expression is false. C Language makes use of this fact, and when you code an expression like if (A && B) { … } , it will evaluate expression A first. If it is false, it won’t even bother to evaluate B since the overall outcome (false) has already been determined (by A being false).

So what does this have to do with bash? Well, if the exit status of the first command (the one to the left of the && ) is non-zero (i.e., failed) then it won’t bother to evaluate the second expression—i.e., it won’t run the other command at all.
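You can watch the short-circuit behavior with the built-ins true and false, which do nothing but succeed and fail, respectively:

```shell
# true succeeds (exit status 0), so the echo runs
true && echo "you will see this"

# false fails (exit status 1), so the echo is never run
false && echo "you will not see this"

# the exit status of the whole list is that of the last command run
echo $?    # prints 1, the status of the false
```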

If you want to be thorough about your error checking, but don’t want if statements all over the place, you can have bash exit any time it encounters a failure (i.e., a non-zero exit status) from every command in your script (except in while loops and if statements where it is already capturing and using the exit status) by setting the -e flag.

  set -e
  cd mytmp
  rm *

Setting the -e flag will cause the shell to exit when a command fails. If the cd fails, the script will exit and never even try to execute the rm * command. We don’t recommend doing this on an interactive shell, because when the shell exits it will make your shell window go away.

See Also

• Recipe 4.8, "Displaying Error Messages When Failures Occur" for an explanation of the || syntax, which is similar in some ways, but also quite different from the && construct

4.7 Running Long Jobs Unattended


You ran a job in the background, then exited the shell and went for coffee. When you came back to check, the job was no longer running and it hadn’t completed. In fact, your job hadn’t progressed very far at all. It seems to have quit as soon as you exited the shell.


If you want to run a job in the background and expect to exit the shell before the job completes, then you need to nohup the job:

  $ nohup long &
  nohup: appending output to `nohup.out'


When you put the job in the background (via the &), it is still a child process of the bash shell. When you exit an instance of the shell, bash sends a hangup (hup) signal to all of its child processes. That’s why your job didn’t run for very long. As soon as you exited bash, it killed your background job. (Hey, you were leaving; how was it supposed to know?)

The nohup command simply sets up the child process to ignore hang-up signals. You can still kill a job with the kill command, because kill sends a SIGTERM signal, not a SIGHUP signal. But with nohup, bash won’t inadvertently kill your job when you exit.

The message that nohup gives about appending your output is just nohup trying to be helpful. Since you are likely to exit the shell after issuing a nohup command, your output destination will likely go away—i.e., the bash session in your terminal window would no longer be active. So, where would the job be able to write? More importantly, writing to a non-existent destination would cause a failure. So nohup redirects the output for you, appending it (not overwriting, but adding at the end) to a file named nohup.out in the current directory. You can explicitly redirect the output elsewhere on the command line and nohup is smart enough to detect that this has happened and doesn’t use nohup.out for your output.
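For example, to pick your own logfile instead of nohup.out (here sleep 60 stands in for your real long-running job, and the logfile name is just an example):

```shell
# start the job immune to hangups, with stdout and stderr
# both going to a log file we chose ourselves
nohup sleep 60 > mylog.out 2>&1 &

# $! holds the process ID of the background job
echo "started as PID $!"
```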

See Also

4.8 Displaying Error Messages When Failures Occur


You need your shell script to be verbose about failures. You want to see error messages when commands don’t work, but if statements tend to distract from the visual flow of statements.


A common idiom among some shell programmers is to use the || with commands to spit out debug or error messages. Here’s an example:

  cmd || printf "%b" "cmd failed. You're on your own\n"


Similar to how the && didn’t bother to evaluate the second expression if the first was false, the || tells the shell not to bother to evaluate the second expression if the first one is true (i.e., succeeds). As with && , the || syntax harkens back to logic and C Language where the outcome is determined (as true) if the first expression in A OR B evaluates to true—so there’s no need to evaluate the second expression. In bash, if the first expression returns 0 (i.e., succeeds) then it just continues on. Only if the first expression (i.e., exit value of the command) returns a non-zero value must it evaluate the second part, and thus run the other command.

Warning—don’t be fooled by this:

  cmd || printf "%b" "FAILED.\n" ; exit 1

The exit will be executed in either case! The OR is only between those two commands. If we want to have the exit happen only on error, we need to group it with the printf so that both are considered as a unit. The desired syntax would be:

  cmd || { printf "%b" "FAILED.\n" ; exit 1 ; }

Due to an oddity of bash syntax, the semicolon after the last command and just before the } is required, and that closing brace must be separated by whitespace from the surrounding text.

See Also

4.9 Running Commands from a Variable


You want to run different commands in your script depending on circumstances. How can you vary which commands run?


There are many solutions to this problem—it’s what scripting is all about. In coming chapters we’ll discuss various programming logic that can be used to solve this problem, such as if/then/else, case statements, and more. But here’s a slightly different approach that reveals something about bash. We can use the contents of a variable (more on those in Chapter 5) not just for parameters, but also for the command itself.

  FN=/tmp/x.x
  PROG=/usr/bin/ls
  $PROG $FN

  PROG=cat
  $PROG $FN


We can assign the program name to a variable (here we use $PROG), and then when we refer to that variable in the place where a command name would be expected, it uses the value of that variable ($PROG) as the command to run. The bash shell parses the command line, substitutes the values of its variables and takes the result of all the substitutions and then treats that as the command line, as if it had been typed that way verbatim.
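A small, self-contained illustration (the variable names and the use of echo are just for demonstration):

```shell
PROG=echo                 # the command to run
ARGS="hello world"        # its parameters

# bash expands the variables first, then runs the result,
# exactly as if we had typed: echo hello world
$PROG $ARGS

# for arguments that themselves contain spaces, a bash array
# keeps each word intact:
CMD=(printf "%s\n" "two words")
"${CMD[@]}"
```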

Be careful about the variable names you use. Some programs such as InfoZip use environment variables such as $ZIP and $UNZIP to pass settings to the program itself. So if you do something like ZIP='/usr/bin/zip', you can spend days pulling your hair out wondering why it works fine from the command line, but not in your script. Trust us. We learned this one the hard way. Also, RTFM.

See Also

4.10 Running All Scripts in a Directory


You want to run a series of scripts, but the list keeps changing; you’re always adding new scripts, but you don’t want to continuously modify a master list.


Put the scripts you want to run in a directory, and let bash run everything that it finds. Instead of keeping a master list, simply look at the contents of that directory. Here’s a script that will run everything it finds in a directory:


  for SCRIPT in /path/to/scripts/dir/*
  do
      if [ -f $SCRIPT -a -x $SCRIPT ]
      then
          $SCRIPT
      fi
  done


We will discuss the for loop and the if statement in greater detail in Chapter 6, but this gives you a taste. The variable $SCRIPT will take on successive values for each file that matches the wildcard pattern *, which matches everything in that directory (except invisible dot files, whose names begin with a period). If it is a file (the -f test) and has execute permissions set (the -x test), the shell will then try to run that script.

In this simple example, we have provided no way to specify any arguments to the scripts as they are executed. This simple script may work well for your personal needs, but wouldn’t be considered robust; some might consider it downright dangerous. But we hope it gives you an idea of what lies ahead: some programming-language-style scripting capabilities.
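If you do want something a bit more defensive, one common tightening-up is to quote the variable (so filenames containing spaces survive) and to report failures instead of silently moving on; the directory name is still just a placeholder:

```shell
#!/usr/bin/env bash
for SCRIPT in /path/to/scripts/dir/*
do
    # -f: a regular file; -x: we have permission to execute it
    if [ -f "$SCRIPT" ] && [ -x "$SCRIPT" ]
    then
        "$SCRIPT" || printf "%b" "$SCRIPT failed\n"
    fi
done
```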

See Also

Chapter 6 for more about for loops and if statements
