A builtin
is a command contained within the Bash tool
set, literally built in. This is either
for performance reasons -- builtins execute faster than external
commands, which usually require forking off a separate process
-- or because a particular builtin needs direct access to the
shell internals.
When a command or
the shell itself initiates (or
spawns) a new
subprocess to carry out a task, this is called
forking. This new process
is the child, and the process
that forked it off is the
parent. While the child
process is doing its work, the
parent process is still
executing.
Example 11-1. A script that forks off multiple instances of itself
#!/bin/bash
# spawn.sh
PIDS=$(pidof sh $0) # Process IDs of the various instances of this script.
P_array=( $PIDS ) # Put them in an array (why?).
echo $PIDS # Show process IDs of parent and child processes.
let "instances = ${#P_array[*]} - 1" # Count elements, less 1.
# Why subtract 1?
echo "$instances instance(s) of this script running."
echo "[Hit Ctl-C to exit.]"; echo
sleep 1 # Wait.
sh $0 # Play it again, Sam.
exit 0 # Not necessary; script will never get to here.
# Why not?
# After exiting with a Ctl-C,
#+ do all the spawned instances of the script die?
# If so, why?
# Note:
# ----
# Be careful not to run this script too long.
# It will eventually eat up too many system resources.
# Is having a script spawn multiple instances of itself
#+ an advisable scripting technique?
# Why or why not?
Generally, a Bash builtin
does not fork a subprocess when it executes within
a script. An external system command or filter in
a script usually will fork a
subprocess.
A builtin may be a synonym to a system command of the same
name, but Bash reimplements it internally. For example,
the Bash echo command is not the same as
/bin/echo, although their behavior is
almost identical.
#!/bin/bash
echo "This line uses the \"echo\" builtin."
/bin/echo "This line uses the /bin/echo system command."
A keyword
is a reserved word, token or
operator. Keywords have a special meaning to the shell,
and indeed are the building blocks of the shell's
syntax. As examples, "for",
"while", "do", and
"!" are keywords. Similar to a builtin, a keyword is hard-coded into
Bash, but unlike a builtin, a keyword is
not by itself a command, but part of a larger command structure.
[1]
I/O
echo
prints (to stdout) an expression
or variable (see Example 4-1).
echo Hello
echo $a
An echo requires the
-e option to print escaped characters. See
Example 5-2.
Normally, each echo command prints
a terminal newline, but the -n option
suppresses this.
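A minimal sketch of the -n and -e options (the strings are arbitrary):
echo -n "No newline here --> "    # The -n option suppresses the terminal newline.
echo "so this continues the same line."
echo -e "A tab:\tand a new line:\nvia the -e option."   # -e enables escape sequences.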
An echo can be used to feed a
sequence of commands down a pipe.
if echo "$VAR" | grep -q txt # if [[ $VAR = *txt* ]]
then
echo "$VAR contains the substring sequence \"txt\""
fi
Be aware that echo `command`
deletes any linefeeds that the output
of command
generates.
The $IFS (internal field
separator) variable normally contains
\n (linefeed) as one of its set of
whitespace
characters. Bash therefore splits the output of
command at linefeeds
into arguments to echo. Then
echo outputs these arguments,
separated by spaces.
bash$ ls -l /usr/share/apps/kjezz/sounds
-rw-r--r--    1 root   root   1407 Nov  7  2000 reflect.au
-rw-r--r--    1 root   root    362 Nov  7  2000 seconds.au

bash$ echo `ls -l /usr/share/apps/kjezz/sounds`
total 40 -rw-r--r-- 1 root root 716 Nov 7 2000 reflect.au -rw-r--r-- 1 root root 362 Nov 7 2000 seconds.au
So, how can we embed a linefeed within an
echoed character string?
# Embedding a linefeed?
echo "Why doesn't this string \n split on two lines?"
# Doesn't split.
# Let's try something else.
echo
echo $"A line of text containing
a linefeed."
# Prints as two distinct lines (embedded linefeed).
# But, is the "$" variable prefix really necessary?
echo
echo "This string splits
on two lines."
# No, the "$" is not needed.
echo
echo "---------------"
echo
echo -n $"Another line of text containing
a linefeed."
# Prints as two distinct lines (embedded linefeed).
# Even the -n option fails to suppress the linefeed here.
echo
echo
echo "---------------"
echo
echo
# However, the following doesn't work as expected.
# Why not? Hint: Assignment to a variable.
string1=$"Yet another line of text containing
a linefeed (maybe)."
echo $string1
# Yet another line of text containing a linefeed (maybe).
# ^
# Linefeed becomes a space.
# Thanks, Steve Parker, for pointing this out.
This command is a shell builtin, and not the same as
/bin/echo, although its behavior is
similar.
bash$ type -a echo
echo is a shell builtin
echo is /bin/echo
printf
The printf, formatted print, command is an
enhanced echo. It is a limited variant
of the C language printf() library
function, and its syntax is somewhat different.
printf format-string... parameter...
This is the Bash builtin version
of the /bin/printf or
/usr/bin/printf command. See the
printf manpage (of the system command)
for in-depth coverage.
Older versions of Bash may not support
printf.
Example 11-2. printf in action
#!/bin/bash
# printf demo
PI=3.14159265358979
DecimalConstant=31373
Message1="Greetings,"
Message2="Earthling."
echo
printf "Pi to 2 decimal places = %1.2f" $PI
echo
printf "Pi to 9 decimal places = %1.9f" $PI # It even rounds off correctly.
printf "\n" # Prints a line feed,
# Equivalent to 'echo' . . .
printf "Constant = \t%d\n" $DecimalConstant # Inserts tab (\t).
printf "%s %s \n" $Message1 $Message2
echo
# ==========================================#
# Simulation of C function, sprintf().
# Loading a variable with a formatted string.
echo
Pi12=$(printf "%1.12f" $PI)
echo "Pi to 12 decimal places = $Pi12"
Msg=`printf "%s %s \n" $Message1 $Message2`
echo $Msg; echo $Msg
# As it happens, the 'sprintf' function can now be accessed
#+ as a loadable module to Bash,
#+ but this is not portable.
exit 0
Formatting error messages is a useful application of
printf:
E_BADDIR=65
var=nonexistent_directory
error()
{
printf "$@" >&2
# Formats positional params passed, and sends them to stderr.
echo
exit $E_BADDIR
}
cd $var || error $"Can't cd to %s." "$var"
# Thanks, S.C.
read
"Reads" the value
of a variable from stdin, that
is, interactively fetches input from the keyboard. The
-a option lets read
get array variables (see Example 26-6).
Example 11-3. Variable assignment, using read
#!/bin/bash
# "Reading" variables.
echo -n "Enter the value of variable 'var1': "
# The -n option to echo suppresses newline.
read var1
# Note no '$' in front of var1, since it is being set.
echo "var1 = $var1"
echo
# A single 'read' statement can set multiple variables.
echo -n "Enter the values of variables 'var2' and 'var3' (separated by a space or tab): "
read var2 var3
echo "var2 = $var2 var3 = $var3"
# If you input only one value, the other variable(s) will remain unset (null).
exit 0
A read without an associated variable
assigns its input to the dedicated variable $REPLY.
Example 11-4. What happens when read has no
variable
#!/bin/bash
# read-novar.sh
echo
# -------------------------- #
echo -n "Enter a value: "
read var
echo "\"var\" = "$var""
# Everything as expected here.
# -------------------------- #
echo
# ------------------------------------------------------------------- #
echo -n "Enter another value: "
read # No variable supplied for 'read', therefore...
#+ Input to 'read' assigned to default variable, $REPLY.
var="$REPLY"
echo "\"var\" = "$var""
# This is equivalent to the first code block.
# ------------------------------------------------------------------- #
echo
exit 0
Normally, inputting a \
suppresses a newline during input to
a read. The -r
option causes an inputted \ to be
interpreted literally.
Example 11-5. Multi-line input to read
#!/bin/bash
echo
echo "Enter a string terminated by a \\, then press <ENTER>."
echo "Then, enter a second string, and again press <ENTER>."
read var1 # The "\" suppresses the newline, when reading $var1.
# first line \
# second line
echo "var1 = $var1"
# var1 = first line second line
# For each line terminated by a "\"
#+ you get a prompt on the next line to continue feeding characters into var1.
echo; echo
echo "Enter another string terminated by a \\ , then press <ENTER>."
read -r var2 # The -r option causes the "\" to be read literally.
# first line \
echo "var2 = $var2"
# var2 = first line \
# Data entry terminates with the first <ENTER>.
echo
exit 0
The read command has some interesting
options that permit echoing a prompt and even reading keystrokes
without hitting ENTER.
# Read a keypress without hitting ENTER.
read -s -n1 -p "Hit a key " keypress
echo; echo "Keypress was "\"$keypress\""."
# -s option means do not echo input.
# -n N option means accept only N characters of input.
# -p option means echo the following prompt before reading input.
# Using these options is tricky, since they need to be in the correct order.
The -n option to read
also allows detection of the arrow keys
and certain of the other unusual keys.
Example 11-6. Detecting the arrow keys
#!/bin/bash
# arrow-detect.sh: Detects the arrow keys, and a few more.
# Thank you, Sandro Magi, for showing me how.
# --------------------------------------------
# Character codes generated by the keypresses.
arrowup='\[A'
arrowdown='\[B'
arrowrt='\[C'
arrowleft='\[D'
insert='\[2'
delete='\[3'
# --------------------------------------------
SUCCESS=0
OTHER=65
echo -n "Press a key... "
# May need to also press ENTER if a key not listed above pressed.
read -n3 key # Read 3 characters.
echo -n "$key" | grep "$arrowup" #Check if character code detected.
if [ "$?" -eq $SUCCESS ]
then
echo "Up-arrow key pressed."
exit $SUCCESS
fi
echo -n "$key" | grep "$arrowdown"
if [ "$?" -eq $SUCCESS ]
then
echo "Down-arrow key pressed."
exit $SUCCESS
fi
echo -n "$key" | grep "$arrowrt"
if [ "$?" -eq $SUCCESS ]
then
echo "Right-arrow key pressed."
exit $SUCCESS
fi
echo -n "$key" | grep "$arrowleft"
if [ "$?" -eq $SUCCESS ]
then
echo "Left-arrow key pressed."
exit $SUCCESS
fi
echo -n "$key" | grep "$insert"
if [ "$?" -eq $SUCCESS ]
then
echo "\"Insert\" key pressed."
exit $SUCCESS
fi
echo -n "$key" | grep "$delete"
if [ "$?" -eq $SUCCESS ]
then
echo "\"Delete\" key pressed."
exit $SUCCESS
fi
echo " Some other key pressed."
exit $OTHER
# Exercises:
# ---------
# 1) Simplify this script by rewriting the multiple "if" tests
#+ as a 'case' construct.
# 2) Add detection of the "Home," "End," "PgUp," and "PgDn" keys.
The -n option to read
will not detect the ENTER (newline)
key.
The -t option to read
permits timed input (see Example 9-4).
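A brief sketch of timed input (the 5-second limit is an arbitrary choice):
if read -t 5 -p "Quick, enter your name: " name
then
  echo "Hello, $name."
else
  echo                    # Move past the prompt line.
  echo "Too slow!"        # 'read' returns a nonzero exit status on timeout.
fi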
The read command may also
"read" its variable value from a file
redirected to
stdin. If the file contains
more than one line, only the first line is assigned
to the variable. If read
has more than one parameter, then each of
these variables gets assigned a successive whitespace-delineated
string. Caution!
#!/bin/bash
read var1 <data-file
echo "var1 = $var1"
# var1 set to the entire first line of the input file "data-file"
read var2 var3 <data-file
echo "var2 = $var2 var3 = $var3"
# Note non-intuitive behavior of "read" here.
# 1) Rewinds back to the beginning of input file.
# 2) Each variable is now set to a corresponding string,
# separated by whitespace, rather than to an entire line of text.
# 3) The final variable gets the remainder of the line.
# 4) If there are more variables to be set than whitespace-terminated strings
# on the first line of the file, then the excess variables remain empty.
echo "------------------------------------------------"
# How to resolve the above problem with a loop:
while read line
do
echo "$line"
done <data-file
# Thanks, Heiner Steven for pointing this out.
echo "------------------------------------------------"
# Use $IFS (Internal Field Separator variable) to split a line of input to
# "read", if you do not want the default to be whitespace.
echo "List of all users:"
OIFS=$IFS; IFS=: # /etc/passwd uses ":" for field separator.
while read name passwd uid gid fullname ignore
do
echo "$name ($fullname)"
done </etc/passwd # I/O redirection.
IFS=$OIFS # Restore original $IFS.
# This code snippet also by Heiner Steven.
# Setting the $IFS variable within the loop itself
#+ eliminates the need for storing the original $IFS
#+ in a temporary variable.
# Thanks, Dim Segebart, for pointing this out.
echo "------------------------------------------------"
echo "List of all users:"
while IFS=: read name passwd uid gid fullname ignore
do
echo "$name ($fullname)"
done </etc/passwd # I/O redirection.
echo
echo "\$IFS still $IFS"
exit 0
cat file1 file2 |
while read line
do
echo $line
done
However, as Björn Eriksson shows:
Example 11-8. Problems reading from a pipe
#!/bin/sh
# readpipe.sh
# This example contributed by Bjon Eriksson.
last="(null)"
cat $0 |
while read line
do
echo "{$line}"
last=$line
done
printf "\nAll done, last:$last\n"
exit 0 # End of code.
# (Partial) output of script follows.
# The 'echo' supplies extra brackets.
#############################################
./readpipe.sh
{#!/bin/sh}
{last="(null)"}
{cat $0 |}
{while read line}
{do}
{echo "{$line}"}
{last=$line}
{done}
{printf "nAll done, last:$lastn"}
All done, last:(null)
The variable (last) is set within the subshell but unset outside.
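One common workaround, sketched here under the assumption of an input file named
"data-file": redirect into the loop rather than piping into it, so the 'while read'
runs in the current shell and the variable survives.
#!/bin/bash
last="(null)"
while read line
do
  echo "{$line}"
  last=$line
done < data-file          # Redirection instead of a pipe -- no subshell.
echo "last = $last"       # Retains the value set inside the loop.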
The gendiff script, usually found in
/usr/bin on many Linux distros, pipes the
output of find to a
while read construct.
find $1 \( -name "*$2" -o -name ".*$2" \) -print |
while read f; do
. . .
Filesystem
cd
The familiar cd change directory
command finds use in scripts where execution of a command
requires being in a specified directory.
(cd /source/directory && tar cf - . ) | (cd /dest/directory && tar xpvf -)
The -P (physical) option to
cd causes it to ignore symbolic
links.
cd - changes to $OLDPWD, the previous working
directory.
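A short sketch; the symlink /usr/tmp -> /var/tmp is assumed for illustration only.
cd -P /usr/tmp     # With -P, the shell resolves the symbolic link . . .
pwd                # . . . so this reports /var/tmp, the physical directory.
cd -               # Returns to the previous working directory ($OLDPWD).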
The cd command does not function
as expected when presented with two forward slashes.
bash$ cd //
bash$ pwd
//
The output should, of course, be /.
This is a problem both from the command line and in a script.
pwd
Print Working Directory. This gives the user's
(or script's) current directory (see Example 11-9). The effect is identical to
reading the value of the builtin variable $PWD.
pushd, popd, dirs
This command set is a mechanism for bookmarking working directories,
a means of moving back and forth through directories in an orderly
manner. A pushdown stack is used to keep track of directory names.
Options allow various manipulations of the directory stack.
pushd
dir-name pushes the path
dir-name onto the directory
stack and simultaneously changes the current working
directory to dir-name.
popd removes
(pops) the top directory path name off the directory stack
and simultaneously changes the current working directory
to that directory popped from the stack.
dirs lists the contents of the directory
stack (compare this with the $DIRSTACK variable).
A successful pushd or
popd will automatically invoke
dirs.
Scripts that require various changes to the current
working directory without hard-coding the directory name
changes can make good use of these commands. Note that
the implicit $DIRSTACK array variable,
accessible from within a script, holds the contents of
the directory stack.
Example 11-9. Changing the current working directory
#!/bin/bash
dir1=/usr/local
dir2=/var/spool
pushd $dir1
# Will do an automatic 'dirs' (list directory stack to stdout).
echo "Now in directory `pwd`." # Uses back-quoted 'pwd'.
# Now, do some stuff in directory 'dir1'.
pushd $dir2
echo "Now in directory `pwd`."
# Now, do some stuff in directory 'dir2'.
echo "The top entry in the DIRSTACK array is $DIRSTACK."
popd
echo "Now back in directory `pwd`."
# Now, do some more stuff in directory 'dir1'.
popd
echo "Now back in original working directory `pwd`."
exit 0
# What happens if you don't 'popd' -- then exit the script?
# Which directory do you end up in? Why?
Variables
let
The let command carries out arithmetic
operations on variables. In many cases, it functions as a less
complex version of expr.
Example 11-10. Letting "let" do arithmetic.
#!/bin/bash
echo
let a=11 # Same as 'a=11'
let a=a+5 # Equivalent to let "a = a + 5"
# (Double quotes and spaces make it more readable.)
echo "11 + 5 = $a" # 16
let "a <<= 3" # Equivalent to let "a = a << 3"
echo "\"\$a\" (=16) left-shifted 3 places = $a"
# 128
let "a /= 4" # Equivalent to let "a = a / 4"
echo "128 / 4 = $a" # 32
let "a -= 5" # Equivalent to let "a = a - 5"
echo "32 - 5 = $a" # 27
let "a *= 10" # Equivalent to let "a = a * 10"
echo "27 * 10 = $a" # 270
let "a %= 8" # Equivalent to let "a = a % 8"
echo "270 modulo 8 = $a (270 / 8 = 33, remainder $a)"
# 6
echo
exit 0
eval
eval arg1 [arg2] ... [argN]
Combines the arguments in an expression or list of
expressions and evaluates them. Any
variables contained within the expression are expanded. The
result translates into a command. This can be useful for
code generation from the command line or within a script.
bash$ process=xterm

bash$ show_process="eval ps ax | grep $process"

bash$ $show_process
1867 tty1     S      0:02 xterm
2779 tty1 S 0:00 xterm
2886 pts/1 S 0:00 grep xterm
Example 11-11. Showing the effect of eval
#!/bin/bash
y=`eval ls -l` # Similar to y=`ls -l`
echo $y #+ but linefeeds removed because "echoed" variable is unquoted.
echo
echo "$y" # Linefeeds preserved when variable is quoted.
echo; echo
y=`eval df` # Similar to y=`df`
echo $y #+ but linefeeds removed.
# When LF's not preserved, it may make it easier to parse output,
#+ using utilities such as "awk".
echo
echo "==========================================================="
echo
# Now, showing how to "expand" a variable using "eval" . . .
for i in 1 2 3 4 5; do
eval value=$i
# value=$i has same effect. The "eval" is not necessary here.
# A variable lacking a meta-meaning evaluates to itself --
#+ it can't expand to anything other than its literal self.
echo $value
done
echo
echo "---"
echo
for i in ls df; do
value=eval $i
# value=$i has an entirely different effect here.
# The "eval" evaluates the commands "ls" and "df" . . .
# The terms "ls" and "df" have a meta-meaning,
#+ since they are interpreted as commands,
#+ rather than just character strings.
echo $value
done
exit 0
Example 11-12. Forcing a log-off
#!/bin/bash
# Killing ppp to force a log-off.
# Script should be run as root user.
killppp="eval kill -9 `ps ax | awk '/ppp/ { print $1 }'`"
# -------- process ID of ppp -------
$killppp # This variable is now a command.
# The following operations must be done as root user.
chmod 666 /dev/ttyS3 # Restore read+write permissions, or else what?
# Since doing a SIGKILL on ppp changed the permissions on the serial port,
#+ we restore permissions to previous state.
rm /var/lock/LCK..ttyS3 # Remove the serial port lock file. Why?
exit 0
# Exercises:
# ---------
# 1) Have script check whether root user is invoking it.
# 2) Do a check on whether the process to be killed
#+ is actually running before attempting to kill it.
# 3) Write an alternate version of this script based on 'fuser':
#+ if [ fuser -s /dev/modem ]; then . . .
Example 11-13. A version of "rot13"
#!/bin/bash
# A version of "rot13" using 'eval'.
# Compare to "rot13.sh" example.
setvar_rot_13() # "rot13" scrambling
{
local varname=$1 varvalue=$2
eval $varname='$(echo "$varvalue" | tr a-z n-za-m)'
}
setvar_rot_13 var "foobar" # Run "foobar" through rot13.
echo $var # sbbone
setvar_rot_13 var "$var" # Run "sbbone" through rot13.
# Back to original variable.
echo $var # foobar
# This example by Stephane Chazelas.
# Modified by document author.
exit 0
Rory Winston contributed the following instance of how
useful eval can be.
Example 11-14. Using eval to force variable
substitution in a Perl script
In the Perl script "test.pl":
...
my $WEBROOT = <WEBROOT_PATH>;
...
To force variable substitution try:
$ export WEBROOT_PATH=/usr/local/webroot
$ sed 's/<WEBROOT_PATH>/$WEBROOT_PATH/' < test.pl > out
But this just gives:
my $WEBROOT = $WEBROOT_PATH;
However:
$ export WEBROOT_PATH=/usr/local/webroot
$ eval sed 's%\<WEBROOT_PATH\>%$WEBROOT_PATH%' < test.pl > out
# ====
That works fine, and gives the expected substitution:
my $WEBROOT = /usr/local/webroot;
### Correction applied to original example by Paulo Marcel Coelho Aragao.
The eval command can be
risky, and normally should be avoided when there
exists a reasonable alternative. An eval
$COMMANDS executes the contents of
COMMANDS, which may
contain such unpleasant surprises as rm -rf
*. Running an eval on
unfamiliar code written by persons unknown is living
dangerously.
set
The set command changes
the value of internal script variables. One use for
this is to toggle option
flags which help determine the behavior of the
script. Another application for it is to reset the positional parameters that
a script sees as the result of a command (set
`command`). The script can then parse the
fields of the command output.
Example 11-15. Using set with positional
parameters
#!/bin/bash
# script "set-test"
# Invoke this script with three command line parameters,
# for example, "./set-test one two three".
echo
echo "Positional parameters before set \`uname -a\` :"
echo "Command-line argument #1 = $1"
echo "Command-line argument #2 = $2"
echo "Command-line argument #3 = $3"
set `uname -a` # Sets the positional parameters to the output
# of the command `uname -a`
echo $_ # unknown
# Flags set in script.
echo "Positional parameters after set \`uname -a\` :"
# $1, $2, $3, etc. reinitialized to result of `uname -a`
echo "Field #1 of 'uname -a' = $1"
echo "Field #2 of 'uname -a' = $2"
echo "Field #3 of 'uname -a' = $3"
echo ---
echo $_ # ---
echo
exit 0
Invoking set without any options or
arguments simply lists all the environmental and other variables
that have been initialized.
Using set with the --
option explicitly assigns the contents of a variable to
the positional parameters. When no variable follows the
--, it unsets
the positional parameters.
Example 11-16. Reassigning the positional parameters
#!/bin/bash
variable="one two three four five"
set -- $variable
# Sets positional parameters to the contents of "$variable".
first_param=$1
second_param=$2
shift; shift # Shift past first two positional params.
remaining_params="$*"
echo
echo "first parameter = $first_param" # one
echo "second parameter = $second_param" # two
echo "remaining parameters = $remaining_params" # three four five
echo; echo
# Again.
set -- $variable
first_param=$1
second_param=$2
echo "first parameter = $first_param" # one
echo "second parameter = $second_param" # two
# ======================================================
set --
# Unsets positional parameters if no variable specified.
first_param=$1
second_param=$2
echo "first parameter = $first_param" # (null value)
echo "second parameter = $second_param" # (null value)
exit 0
export
The export command makes
variables available to all child processes of the
running script or shell. Unfortunately, there
is no way to export variables back
to the parent process, that is, to the process that called or
invoked the script or shell. One important
use of the export command is in startup files, to initialize
and make environmental variables accessible
to subsequent user processes.
Example 11-18. Using export to pass a variable to an
embedded awk script
#!/bin/bash
# Yet another version of the "column totaler" script (col-totaler.sh)
#+ that adds up a specified column (of numbers) in the target file.
# This uses the environment to pass a script variable to 'awk' . . .
#+ and places the awk script in a variable.
ARGS=2
E_WRONGARGS=65
if [ $# -ne "$ARGS" ] # Check for proper no. of command line args.
then
echo "Usage: `basename $0` filename column-number"
exit $E_WRONGARGS
fi
filename=$1
column_number=$2
#===== Same as original script, up to this point =====#
export column_number
# Export column number to environment, so it's available for retrieval.
# -----------------------------------------------
awkscript='{ total += $ENVIRON["column_number"] }
END { print total }'
# Yes, a variable can hold an awk script.
# -----------------------------------------------
# Now, run the awk script.
awk "$awkscript" "$filename"
# Thanks, Stephane Chazelas.
exit 0
It is possible to initialize and export
variables in the same operation, as in export
var1=xxx.
However, as Greg Keraunen points out, in certain
situations this may have a different effect than
setting a variable, then exporting it.
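A hedged sketch of the two forms; the array case is one situation where results
have been reported to differ between Bash versions, so treat it as illustrative
rather than definitive.
var1=xxx; export var1           # Set the variable, then export it.
export var2=xxx                 # Initialize and export in one operation.
# With arrays, the one-step form has not always behaved the same way
#+ on every Bash version:
#   colors=(red green); export colors    # Array assignment, then export.
#   export colors=(red green)            # The one-step form.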
declare, typeset
The declare and
typeset commands specify
and/or restrict properties of variables.
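A minimal sketch of some common attributes:
declare -i number=5        # Integer attribute: arithmetic in assignments.
number=number+3
echo "$number"             # 8
declare -r LIMIT=100       # Read-only, equivalent to 'readonly LIMIT=100'.
declare -a colors          # Explicitly declares an (indexed) array variable.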
readonly
Same as declare -r,
sets a variable as read-only, or, in effect, as a
constant. Attempts to change the variable fail with
an error message. This is the shell analog of the
C language const
type qualifier.
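A quick sketch (the variable name is arbitrary):
readonly PI=3.14159
PI=3.14                    # Fails: "PI: readonly variable".
echo $?                    # Nonzero exit status from the failed assignment.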
getopts
This powerful tool parses command-line arguments passed
to the script. This is the Bash analog of the getopt external command and the
getopt library function familiar to
C programmers. It permits passing
and concatenating multiple options
[2]
and associated arguments to a script (for
example scriptname -abc -e
/usr/local).
The getopts construct uses two implicit
variables. $OPTIND is the argument
pointer (OPTion INDex)
and $OPTARG (OPTion
ARGument) is the (optional) argument attached
to an option. A colon following the option name in the
declaration tags that option as having an associated
argument.
A getopts construct usually comes
packaged in a while
loop, which processes the options and
arguments one at a time, then increments the implicit
$OPTIND variable to step to the
next.
The arguments passed from the command line to
the script must be preceded by a
minus (-). It is the
prefixed - that lets
getopts recognize command-line
arguments as options.
In fact, getopts will not process
arguments without the prefixed -,
and will terminate option processing at the first
argument encountered lacking it.
The getopts template
differs slightly from the standard while
loop, in that it lacks condition brackets.
The getopts construct replaces
the deprecated getopt
external command.
while getopts ":abcde:fg" Option
# Initial declaration.
# a, b, c, d, e, f, and g are the options (flags) expected.
# The : after option 'e' shows it will have an argument passed with it.
do
case $Option in
a ) # Do something with variable 'a'.
b ) # Do something with variable 'b'.
...
e) # Do something with 'e', and also with $OPTARG,
# which is the associated argument passed with option 'e'.
...
g ) # Do something with variable 'g'.
esac
done
shift $(($OPTIND - 1))
# Move argument pointer to next.
# All this is not nearly as complicated as it looks <grin>.
Example 11-19. Using getopts to read the
options/arguments passed to a script
#!/bin/bash
# Exercising getopts and OPTIND
# Script modified 10/09/03 at the suggestion of Bill Gradwohl.
# Here we observe how 'getopts' processes command line arguments to script.
# The arguments are parsed as "options" (flags) and associated arguments.
# Try invoking this script with
# 'scriptname -mn'
# 'scriptname -oq qOption' (qOption can be some arbitrary string.)
# 'scriptname -qXXX -r'
#
# 'scriptname -qr' - Unexpected result, takes "r" as the argument to option "q"
# 'scriptname -q -r' - Unexpected result, same as above
# 'scriptname -mnop -mnop' - Unexpected result
# (OPTIND is unreliable at stating where an option came from).
#
# If an option expects an argument ("flag:"), then it will grab
#+ whatever is next on the command line.
NO_ARGS=0
E_OPTERROR=65
if [ $# -eq "$NO_ARGS" ] # Script invoked with no command-line args?
then
echo "Usage: `basename $0` options (-mnopqrs)"
exit $E_OPTERROR # Exit and explain usage, if no argument(s) given.
fi
# Usage: scriptname -options
# Note: dash (-) necessary
while getopts ":mnopq:rs" Option
do
case $Option in
m ) echo "Scenario #1: option -m- [OPTIND=${OPTIND}]";;
n | o ) echo "Scenario #2: option -$Option- [OPTIND=${OPTIND}]";;
p ) echo "Scenario #3: option -p- [OPTIND=${OPTIND}]";;
q ) echo "Scenario #4: option -q-\
with argument \"$OPTARG\" [OPTIND=${OPTIND}]";;
# Note that option 'q' must have an associated argument,
#+ otherwise it falls through to the default.
r | s ) echo "Scenario #5: option -$Option-";;
* ) echo "Unimplemented option chosen.";; # DEFAULT
esac
done
shift $(($OPTIND - 1))
# Decrements the argument pointer so it points to next argument.
# $1 now references the first non option item supplied on the command line
#+ if one exists.
exit 0
# As Bill Gradwohl states,
# "The getopts mechanism allows one to specify: scriptname -mnop -mnop
#+ but there is no reliable way to differentiate what came from where
#+ by using OPTIND."
source, . (dot command)
This command, when invoked from the command line,
executes a script. Within a script, a
source file-name loads the
file file-name. Sourcing a file
(dot-command) imports
code into the script, appending to the script (same effect
as the #include directive in a
C program). The net result is the
same as if the "sourced" lines of code were
physically present in the body of the script. This is useful
in situations when multiple scripts use a common data file
or function library.
Example 11-20. "Including" a data file
#!/bin/bash
. data-file # Load a data file.
# Same effect as "source data-file", but more portable.
# The file "data-file" must be present in current working directory,
#+ since it is referred to by its 'basename'.
# Now, reference some data from that file.
echo "variable1 (from data-file) = $variable1"
echo "variable3 (from data-file) = $variable3"
let "sum = $variable2 + $variable4"
echo "Sum of variable2 + variable4 (from data-file) = $sum"
echo "message1 (from data-file) is \"$message1\""
# Note: escaped quotes
print_message This is the message-print function in the data-file.
exit 0
File data-file for Example 11-20, above. Must be present in same
directory.
# This is a data file loaded by a script.
# Files of this type may contain variables, functions, etc.
# It may be loaded with a 'source' or '.' command by a shell script.
# Let's initialize some variables.
variable1=22
variable2=474
variable3=5
variable4=97
message1="Hello, how are you?"
message2="Enough for now. Goodbye."
print_message ()
{
# Echoes any message passed to it.
if [ -z "$1" ]
then
return 1
# Error, if argument missing.
fi
echo
until [ -z "$1" ]
do
# Step through arguments passed to function.
echo -n "$1"
# Echo args one at a time, suppressing line feeds.
echo -n " "
# Insert spaces between words.
shift
# Next one.
done
echo
return 0
}
If the sourced file is itself
an executable script, then it will run, then
return control to the script that called it.
A sourced executable script may use a
return for this
purpose.
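A brief sketch, assuming a hypothetical helper script named "setup.sh" whose
last line is a 'return':
# ----- setup.sh -----
# echo "Running setup . . ."
# return 3                 # Hands control (and a status) back to the caller.
# --------------------
. ./setup.sh               # Source it from the main script.
echo "setup.sh returned status $?."    # 3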
It is even possible for a script to
source itself, though this does not
seem to have any practical applications.
Example 11-21. A (useless) script that sources itself
#!/bin/bash
# self-source.sh: a script sourcing itself "recursively."
# From "Stupid Script Tricks," Volume II.
MAXPASSCNT=100 # Maximum number of execution passes.
echo -n "$pass_count "
# At first execution pass, this just echoes two blank spaces,
#+ since $pass_count still uninitialized.
let "pass_count += 1"
# Assumes the uninitialized variable $pass_count
#+ can be incremented the first time around.
# This works with Bash and pdksh, but
#+ it relies on non-portable (and possibly dangerous) behavior.
# Better would be to initialize $pass_count to 0 before incrementing.
while [ "$pass_count" -le $MAXPASSCNT ]
do
. $0 # Script "sources" itself, rather than calling itself.
# ./$0 (which would be true recursion) doesn't work here. Why?
done
# What occurs here is not actually recursion,
#+ since the script effectively "expands" itself, i.e.,
#+ generates a new section of code
#+ with each pass through the 'while' loop,
# with each 'source' in line 20.
#
# Of course, the script interprets each newly 'sourced' "#!" line
#+ as a comment, and not as the start of a new script.
echo
exit 0 # The net effect is counting from 1 to 100.
# Very impressive.
# Exercise:
# --------
# Write a script that uses this trick to actually do something useful.
exit
Unconditionally terminates a script. The
exit command may optionally take an
integer argument, which is returned to the shell as
the exit status
of the script. It is good practice to end all but the
simplest scripts with an exit 0,
indicating a successful run.
If a script terminates with an exit
lacking an argument, the exit status of the script is the exit
status of the last command executed in the script, not counting
the exit. This is equivalent to an
exit $?.
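A minimal sketch of that equivalence:
#!/bin/bash
# exit-status.sh (hypothetical name)
false       # Last command executed; returns exit status 1.
exit        # Same as 'exit $?' -- the script exits with status 1.
# Afterwards, from the command line:
#   echo $?   # 1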
exec
This shell builtin replaces the current process with
a specified command. Normally, when the shell encounters
a command, it forks off a
child process to actually execute the command. Using the
exec builtin, the shell does not fork,
and the command exec'ed replaces the shell. When used in
a script, therefore, it forces an exit from the script when
the exec'ed command terminates.
[3]
Example 11-22. Effects of exec
#!/bin/bash
exec echo "Exiting \"$0\"." # Exit from script here.
# ----------------------------------
# The following lines never execute.
echo "This echo will never echo."
exit 99 # This script will not exit here.
# Check exit value after script terminates
#+ with an 'echo $?'.
# It will *not* be 99.
Example 11-23. A script that exec's itself
#!/bin/bash
# self-exec.sh
echo
echo "This line appears ONCE in the script, yet it keeps echoing."
echo "The PID of this instance of the script is still $$."
# Demonstrates that a subshell is not forked off.
echo "==================== Hit Ctl-C to exit ===================="
sleep 1
exec $0 # Spawns another instance of this same script
#+ that replaces the previous one.
echo "This line will never echo!" # Why not?
exit 0
An exec also serves to reassign
file descriptors. For example, exec
<zzz-file replaces stdin
with the file zzz-file.
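A short sketch of file descriptor reassignment; it assumes a file named
"zzz-file" exists in the current directory.
#!/bin/bash
exec 6<&0           # Save stdin by linking file descriptor 6 to it.
exec < zzz-file     # stdin now comes from "zzz-file".
read line1          # Reads the first line of the file.
read line2          # Reads the second line.
echo "$line1"
echo "$line2"
exec 0<&6 6<&-      # Restore the original stdin, then close fd 6.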
The -exec option to
find is
not the same as the
exec shell builtin.
shopt
This command permits changing shell options on the fly (see
Example 24-1 and Example 24-2). It often
appears in the Bash startup
files, but also has its uses in scripts. Needs
version 2 or later of Bash.
shopt -s cdspell
# Allows minor misspelling of directory names with 'cd'
cd /hpme # Oops! Mistyped '/home'.
pwd # /home
# The shell corrected the misspelling.
caller
Putting a caller command
inside a function
echoes to stdout information about
the caller of that function.
#!/bin/bash
function1 ()
{
# Inside function1 ().
caller 0 # Tell me about it.
}
function1 # Line 9 of script.
# 9 main test.sh
# ^ Line number that the function was called from.
# ^^^^ Invoked from "main" part of script.
# ^^^^^^^ Name of calling script.
caller 0 # Has no effect because it's not inside a function.
A caller command can also return
caller information from a script sourced within another
script. Like a function, this is a "subroutine
call."
You may find this command useful in debugging.
Commands
true
A command that returns a successful
(zero) exit status, but does
nothing else.
# Endless loop
while true # alias for ":"
do
operation-1
operation-2
...
operation-n
# Need a way to break out of loop or script will hang.
done
false
A command that returns an unsuccessful exit status,
but does nothing else.
# Testing "false"
if false
then
echo "false evaluates \"true\""
else
echo "false evaluates \"false\""
fi
# false evaluates "false"
# Looping while "false" (null loop)
while false
do
# The following code will not execute.
operation-1
operation-2
...
operation-n
# Nothing happens!
done
type [cmd]
Similar to the which external command,
type cmd gives the full path name to
"cmd". Unlike which,
type is a Bash builtin. The useful
-a option to type
identifies keywords
and builtins, and also locates
system commands with identical names.
bash$ type '['
[ is a shell builtin

bash$ type -a '['
[ is a shell builtin
[ is /usr/bin/[
hash [cmds]
Record the path name of specified commands -- in the
shell hash table
[4]
-- so the shell or script will not need to search
the $PATH on subsequent calls to those
commands. When hash is called with no
arguments, it simply lists the commands that have been hashed.
The -r option resets the hash table.
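A quick sketch:
hash                    # Lists the commands hashed so far (hit counts and paths).
ls /tmp > /dev/null     # Running an external command adds it to the table.
hash                    # Now includes an entry for 'ls'.
hash -r                 # Empties the table; lookups search $PATH again.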
bind
The bind builtin displays or modifies
readline [5]
key bindings.
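A couple of illustrative invocations; these matter only in interactive shells,
and the Ctl-T binding is an arbitrary choice.
bind -P | grep yank      # Show which keys invoke the readline "yank" functions.
bind '"\C-t": "pwd\n"'   # Rebind Ctl-T to type and run 'pwd'.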
help
Gets a short usage summary of a shell builtin. This is
the counterpart to whatis,
but for builtins.
bash$ help exit
exit: exit [n]
Exit the shell with a status of N. If N is omitted, the exit status
is that of the last command executed.
An option is an argument that acts as a
flag, switching script behaviors on or off. The
argument associated with a particular option indicates
the behavior that the option (flag) switches on or
off.
Hashing is a method of
creating lookup keys for data stored in a table. The
data items themselves are
"scrambled" to create keys, using one of
a number of simple mathematical algorithms.
An advantage of hashing is that it
is fast. A disadvantage is that "collisions" --
where a single key maps to more than one data item -- are
possible.