Shell Script Snippets

which demonstrate uses and options of commands like
awk, cut, date, df, du, find, grep, ls, sed, tr, xargs, etc.

These are snippets of code that are useful in writing shell scripts.

! Preliminary ! More snippets (and links) are to be collected !

< Go to Table of Contents, below. >

INTRODUCTION :

Below are small samples of the use of Linux/Unix/Gnu shell commands --- grouped in alphabetical order by command name, and including commands such as:

  • awk
  • cut
  • date
  • df
  • du
  • find
  • grep
  • ls
  • ps
  • sed
  • sort
  • tr
  • xargs

Some of the examples ('snippets' of code) are composed of a combination of shell commands. For example, there are many examples which involve 'pipes' of the output of one command into the input 'file handle' of another command.

So I will often make a 'judgment call' in assigning a particular example (snippet of code) to a 'command-group'.

I will generally avoid putting the same snippet in multiple groups. But I may put a somewhat different snippet in other groups. For example, if I put a snippet involving 'awk' and 'sort' in the 'awk' group, I will probably put a different (but somewhat similar) snippet involving 'awk' and 'sort' in the 'sort' group of snippets.


Considerations in ordering the snippets :

Many of these examples are similar to examples in the most practical and useful book on shell scripting that I have ever found --- the O'Reilly book, 'Unix Power Tools'.

In that book, the examples are arranged in chapters by subject matter, such as 'organizing your home directory', 'setting your shell prompt', or 'finding files' --- about 50 different subject areas (chapters). Even with that many chapters, it turns out that the index in the back of the book is often the best way to find what one is looking for. The index lists both commands and subject areas.

    If the index failed me, I would resort to the Table of Contents at the front of the book. Simply browsing through the book was seldom practical, except when desperate, because the book is over 1,100 pages long.

I am often looking for examples of how a particular command is used --- that is, examples of specifying its 'most popular' or 'most useful' options and parameters. In other words, I often know the command(s) that are suitable to do the job at hand. So I have grouped the 'snippets' by a major command used in the snippet.

No matter how one assembles the 'snippets', there is need for a flexible, multi-faceted search system. In this regard, better than an index is the ability to search a web page for any text string.


Finding 'snippets' on this page :

Use the table-of-contents below to go directly to code 'snippets' in the command-name groups. OR simply scroll down this page to spot samples of interest.

    (If this page starts getting so long that it becomes difficult to maintain, I may start putting the command groups on their own separate pages.)

Alternatively, use the 'Find text' option of your web browser to look for keywords on this page.

For example, if you are looking for examples ('snippets') having to do with disk space, and you do not know which commands might be useful for that topic, use the keyword 'space' or 'disk'.

Another example: When looking for 'snippets' having to do with extracting the first character of a word into an environment variable, use a keyword like 'char' or 'first'.

If you know the command for which you want examples of its use, then simply search for that command. For example, if you are looking for examples of use of the 'grep' or 'sed' or 'awk' or 'cut' command, one could search for 'grep' or 'sed' or 'awk' or 'cut'.

If you are looking for examples of 'for' or 'while' loops, then search for ' for ' or 'while'.

If you are looking for examples of if-then-else-fi code, then search for ' fi '.

    I will generally try to remember to put a leading space in front of command words like 'for' or 'fi' or 'if', so that you can search for ' for ' or ' fi ' or ' if ' (space in front, and perhaps behind). That way, a text search avoids hitting the many ordinary words that contain those sequences of non-space characters.

I may not have established groups for some commands, like 'head' and 'tail'. But such commands are used in many of the snippets. So you can simply search for any command, to see if it is used in any of the snippets. Thus you may be provided with examples of its use.


Shell considerations :

Most of these 'snippets' are intended to run with the Bourne or the Korn shells (command interpreters), as well as with the Bash command interpreter. Most of these samples employ commands (and their parameters) available to all three interpreters, so the scripts (and aliases) will generally run in any of these three interpreters.

In fact, the Korn and Bash interpreters are offshoots of the Bourne command interpreter.

Most of these 'snippets' will work on Unix systems as well as Linux systems. But the Gnu versions of many of these commands have been extended to have many more options than the same command on a Unix system such as Sun Solaris, IBM AIX, HP UX, or SGI Irix.

For example, the Gnu commands have many options that start with double-hyphen (--), whereas most Unix commands have options indicated by a single hyphen only.

I usually use the single-hyphen option when both a single-hyphen and double-hyphen option are available for the same function. This makes it more likely that the command will work on Unix as well as Linux.
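For instance (a minimal illustration; 'sort' is used here only because its long option is easy to remember), the following two commands produce the same output on a Gnu system, but only the single-hyphen form is likely to work on an older Unix:

```shell
## Portable single-hyphen form --- reverse-sort two lines:
printf 'apple\nbanana\n' | sort -r

## Gnu long-option equivalent --- may not exist on older Unix systems:
printf 'apple\nbanana\n' | sort --reverse

## Both print 'banana' then 'apple'.
```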

For the same reason, I usually use command options that appear in the POSIX standard. This helps assure that the 'snippet' will run in any of the major Unix distributions as well as running, in most cases, on Gnu/Linux-based distros.


Warnings :

These 'snippets' may not be tested thoroughly. For example, those using filenames may not have been tested against filenames with embedded blanks. Use at your own risk.

The 'snippets' are presented here mostly for my own personal use and reference (from various computers; at home or away). This assembly of snippets provides a means for me to archive and backup the snippets.

If you use any of the snippets, try them on test data before using in a 'serious' mode. Or use ideas/techniques from the 'snippets' to develop your own code and scripts.

There are examples of complete (useful, practical) scripts available through my Handy Scripts page --- a page which also provides access to this 'snippets' page.

Table of Contents: (of this page)                   < Return to Top of Page, above. >

Shell 'snippets' involving :

  • awk
  • cut
  • date
  • df
  • du
  • eval
  • find
  • grep (and egrep)
  • head
  • ls
  • ps
  • sed
  • sort
  • tr
  • xargs

  • Nautilus scripts for Audio, Image, Video processing, as well as for File Listings and other utility functions, on a separate Nautilus scripts page

  • Links (at the bottom of this page) to web sources of 'snippets' of shell code


End of Table of Contents. See warnings above. See the notes immediately below.

How the code samples (below) are presented :

These code 'snippets' are generally presented in a monospace (fixed-width) font. If the snippets are rather long they may be put in an HTML 'textarea' so that you can scroll the text, vertically or horizontally.

Generally, in the snippets, I will not put comment statements if the snippets are rather simple in nature. However, if it seems advisable to add useful comments, I will put them at the top of, or within, the snippet, so that the comments can easily be pasted into a script along with the code.

I use a double-# (##) to indicate a true comment. That is, it is not an executable statement and should never be de-commented.

I use single-# (#) to indicate an executable statement that has been commented. Typically, this is because I might want to decomment the statement when doing testing of the script in a terminal window. Generally, the statement should be re-commented --- or a neighboring alternative statement should be commented --- when finished testing.

I do not spend time trying to make one-liner snippets, nor do I try for the ultimate in processing efficiency. (But I am aware of several things to avoid that would make script segments run as slow as molasses. In particular, avoid massive loops, especially massive loops that involve lots of number crunching or file manipulation.)

I DO try to make the snippets readable (to me). At the same time, they generally execute quite rapidly --- as rapidly as the target files and the utilities called allow, even on a directory containing over a hundred files or when sweeping through a file many Megabytes in size.

In some 'snippets', you will see that I may be using the Linux/Unix script variable

$@
or the Linux/Unix script variable
$*

to 'pick up' a list of parameters passed into the script.

If this proves to be a problem (for example, with embedded spaces in some filenames, when there are filenames being passed into a script), one may have to resort to other techniques to handle filenames with spaces embedded in the names.

This is a sign that these code snippets are NOT the 'be all and end all'. You (and I) will probably have to edit them somewhat to handle particular issues.
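A minimal sketch of the usual repair for embedded spaces: quote the variable as "$@" so that each parameter stays one word. (The 'count_args' function and the sample filenames below are made up for the demonstration.)

```shell
## Demonstrates why "$@" (quoted) is safer than $* when parameters
## (such as filenames) contain embedded spaces.

count_args() {
   echo $#
}

## Simulate a script called with two filenames, each containing a space:
set -- "file one.txt" "file two.txt"

count_args $*      ## prints 4 --- the names get split at the spaces
count_args "$@"    ## prints 2 --- each name stays a single parameter
```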


A programming technique to be aware of :

There are times when one is having a heck of a time getting a line of code to execute without syntax errors. Often the problem involves a judicious use of double-quotes and/or single-quotes and/or the 'eval' command.

The 'eval' command can be applied to a string of code to cause all variable substitutions to take place before the line of code is finally allowed to be executed.

An example of 'eval' usage is when you need to make a set of variable names with a sequence of integers in the variable names (in a 'for' loop) --- and you are assigning values to the variables. Example code :

    xcoord$i=such-and-such

will not work (the shell does not recognize it as an assignment), whereas

    eval xcoord$i=such-and-such

will work. (Note that there must be no spaces around the '=' sign.)

There are many other cases in which a judicious use of double-quotes or single-quotes, along with the 'eval' command, will help you make a line of code that is free of syntax errors.
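A sketch of the 'numbered variable names' case mentioned above, using made-up variable names xcoord1 through xcoord3 and made-up values:

```shell
## Build the variables xcoord1, xcoord2, xcoord3 in a 'for' loop.
for i in 1 2 3
do
   eval xcoord$i=point$i
done

echo "$xcoord2"        ## prints: point2

## To read a variable back when its number is held in a variable,
## 'eval' is needed again --- note the escaped dollar-sign:
i=3
eval echo \$xcoord$i   ## prints: point3
```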


One thing I keep forgetting :

'  2>&1 ' is the proper way to send 'standard error' output along with 'standard out' to a listing file --- NOT some other combination of 1, 2, greater-than, and ampersand. Note that the ' 2>&1 ' must come AFTER the redirection of standard output, as in the first example below.

Another handy file-descriptor-2 technique :

' 2> /dev/null' can be added to a command line to shunt error messages to the 'bit bucket'. This is helpful, for example, when a command (like 'find' or 'ls') is traversing directories owned by root, to which you do not have permission. ' 2> /dev/null' helps avoid a lot of messy 'cannot access' messages in a listing file.

Examples:

find /data/test2 -name "*.mfh" -print  >  ${OUTLIST} 2>&1

ls -d $DIRNAME 2> /dev/null

'awk' snippets :


< Return to Table of Contents, above. >

awk-Snippet 1 :

## Gets the middle name of a filename --- between directory and first period.
## Demonstrates use of the awk '-F' (field delimiter) option.
## We use the back-quotes to put the output in a new environment variable.
##
MIDFILNAM=`basename ${filename1} | awk -F. '{print $1}'`

awk-Snippet 2 :

## Gets the first 2 words of filetype for a given filename. Example: ASCII text
## The " " is needed to print out fields 1 and 2 separate from each other.
##
FILETYPE=`file $FILEIN | cut -d: -f2 | awk '{print $1" "$2}'`

awk-Snippet 3 :

## Prints out the large (> 500KB) directories under /var.
## Normally runs within a second or two.
## Typically we would be concatenating output like this into a report file,
## represented here by $OUTLIST.
##
du -kl /var | sort -n -r | awk ' ( $1 > 500 ) { print $0 }' >> $OUTLIST

awk-Snippet 4 :

## Gets the list of mount directories, as reported by the 'df' command.
## We are using the tail command to strip off the heading line of df.
##
FILSYSLIST=`df -l | tail -n +2 | awk '{print $6}'`

awk-Snippet 5 :

## Gets the size in bytes of a file whose name is in $FILEIN.
##
FILEIN_BYTES=`ls -l $FILEIN | awk '{print $5}'`

awk-Snippet 6 :

## Gets the owner of a given directory.
##
DIROWNER=`ls -ld ${dirname1} | awk '{print $3}'`

awk-Snippet 7 :

## Gets a list of the sub-directories of a given directory.
## Filters out the regular-files from the 'ls -l' output.
## Illustrates use of the awk $NF (number of fields) var.
##
DIRLIST=`ls -l "$dirname1" | grep "^d"| awk '{ print $NF }'`

awk-Snippet 8 :

## Prints the network interfaces, usually eth0 and lo.
##
netstat -i | tail -n +3 | awk '{ print $1 }'

awk-Snippet 9 :

## Shows the ends of lines, after column 50.
## Uses 'ls -l' as an example. In this case, the ends of lines are filenames,
## including pointers to any symbolic-links.
## I might get around to providing a more useful example.
## This is mainly to demonstrate how one can set variable names in awk.
##
ls -l | awk '{LEN0=length($0) ; print substr($0,51,LEN0 - 50)}'

awk-Snippet 10 :

## Shows the ends of lines, after column 48.
## This is a more compact way of doing awk-Snippet#9.
## This example shows the command and parameters used to start each process.
##
ps -ef | awk '{print substr($0,49)}'

NOTE: Most of these awk snippets are one-liners. You can really do some heavy duty file reformatting with awk, and most of those kinds of examples involve quite a few lines of awk code. I intend to put some examples of that kind here, as links to text files NOT incorporated into this page. (Thus a text search on this file will not yield hits in those separate pages.)
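In the meantime, here is one small taste of a multi-line awk program. (The colon-delimited 'name:score' record layout is made up for the demonstration.) It reformats the records into padded columns and has the END block add a TOTAL line:

```shell
## Reformat colon-delimited 'name:score' records into padded columns,
## accumulating a total that is printed by the END block (175 here).
printf 'alice:90\nbob:85\n' | awk -F: '
   {
     printf "%-10s %5s\n", $1, $2
     total = total + $2
   }
   END {
     printf "%-10s %5d\n", "TOTAL", total
   }'
```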

awk-Snippet 11 :

more to come


'cut' snippets :


< Return to Table of Contents, above. >

cut-Snippet 1 :

## Gets the 1st character in a given filename.
FIRSTCHAR=`echo "$FILEIN" | cut -c1-1`

cut-Snippet 2 :

## Gets the first 11 chars and all chars after column 50, from 'ls -l' output.
## 'cut -c' could also be handy with 'ps -ef' output.
ls -l | cut -c1-11,51-

cut-Snippet 3 :

## Gets the current seconds count from the 'date' command.
## Can be useful in semi-randomizing the action in a script ---
## like periodically removing the temp files from a temp directory.
CURSEC=`date | cut -d: -f3 | cut -d" " -f1`
CURMIN=`date | cut -d: -f2`

cut-Snippet 4 :

## Gets the first 3 characters of a filename.
FIRST3=`echo "${filename1}" | cut -c1-3`

cut-Snippet 5 :

## Gets only the first 16 characters of the basename of a file.
## This can be handy for displaying a filename in a window widget, say,
## where, for really long filenames, you want to keep just the first 16 chars.
## There are at least 15 spaces of padding in the echo statement, to
## handle cases where the filename may be as short as 1 character.
BASENAME=`basename "$FILENAME"`
BASENAME=`echo "$BASENAME                " | cut -c1-16`

cut-Snippet 6 :

## Outputs all fields of the /etc/passwd records EXCEPT the old
## field 2 (the encrypted password).
## NOTE: We do NOT use 'cat' and pipe to pipe the contents of the file
##       into the cut command. As some wise man has said in the past,
##       the 'cat' command is highly over-used.
cut -d: -f1,3,4,5,6,7 /etc/passwd

cut-Snippet 7 :

## Gets a list of printer names as known to the 'lpstat -p' command.
PRTRS=`lpstat -p | cut -d' ' -f2`

cut-Snippet 8 :

## Gets the tty ID number of the current shell window.
## Example: 0 from /dev/pts/0
TTYID=`tty | cut -d/ -f3`

cut-Snippet 9 :

## Gets an IP address for a given hostname.
## We could use 'sed' to strip off the left and right parentheses that this yields.
HOSTNAME="google.com"
IPADDR=`arp $HOSTNAME | cut -d" " -f2`


'date' snippets :


< Return to Table of Contents, above. >

date-Snippet 1 :

## An example of putting a formatted date line in a report file.
## Example output: 2010 May 19  Wed  19:58:14PM EDT
echo "*** `date '+%Y %b %d  %a  %T%p %Z'` ****" >> $OUTLIST

date-Snippet 2 :

## Example output: 2010 May 19 19:59:28PM
date '+%Y %b %d %T%p'

date-Snippet 3 :

## Example output: 2010May19_20:02:36PM
## (No spaces in the string. Almost suitable for filenames.)
TIMESTAMP=`date +%Y%b%d_%T%p`

date-Snippet 4 :

## Example output: 2010May19_20h07m
## (More suitable for filenames. The 'h' and 'm' are to make it pretty
##  obvious that those numbers are for hours and minutes.
##  2010May19_2007 would be more cryptic.)
## We have avoided colons in the string by using %H and %M instead of %T.
date +%Y%b%d_%Hh%Mm

date-Snippet 5 :

## Example output: 20100519 (yyyymmdd, all numeric)
## Suitable for all-numeric date stamps, with no embedded spaces, in filenames.
YYYYMMDD=`date '+%Y%m%d'`


'df' snippets :


< Return to Table of Contents, above. >

df-Snippet 1 :

## Sort the output of df by %-used, lowest to highest.
## The 'sed' is used to remove the header line of the 'df' output, which
## would otherwise get sorted among the data records of the 'df' output.
## The '-n' (numeric sort) option is needed so that, say, 5% sorts
## before 45% --- a plain 'dictionary' sort would put 45% first.
df -k | sed '1d' | sort -k5 -n


'du' snippets :


< Return to Table of Contents, above. >

du-Snippet 1 :

## Shows the space (in KB) occupied by the directories under the .thumbnails directory
## of your home directory --- sorted on the first field (space used), numerically
## (rather than a 'dictionary' sort), and in reverse order.
du -k ~/.thumbnails | sort -k1 -nr


'eval' snippets :


< Return to Table of Contents, above. >

eval-Snippet 1 :

## Puts line number 76 of a given file into a (new) environment variable
## named LINE76. The line is captured in the variable LINE first, so
## that the 'eval'-ed assignment can be safely double-quoted --- which
## protects any spaces embedded in the line.
CNT=76
LINE=`head -$CNT "$FILENAME" | tail -1`
eval LINE$CNT=\"\$LINE\"

eval-Snippet 2 :

## Resets the number of columns and lines in the current X window.
## Sample output of the 'resize' command is several lines:
##   COLUMNS=111;
##   LINES=24;
##   export COLUMNS LINES;
## Applying 'eval' to that output (re)sets those environment variables,
## which commands use to determine where to break lines of output to
## 'wrap' the lines properly for the size of the terminal window.
## A more useful form of this might be to save the output of 'resize'
## in a variable when the window first comes up, say. Later, 'eval'
## those env-var settings to restore the window size. Something like:
##    TEMP=`resize`
##    ...
##    eval "$TEMP"
eval `resize`


'find' snippets :


< Return to Table of Contents, above. >

find-Snippet 1 :

## Use 'find' in combination with 'file' to show the FILE TYPES of all
## the files (except directories) under the 'Desktop' directory in the
## user's home directory.
##
## 'find' traverses ALL sub-directories, at all levels, under ~/Desktop.
## The traversal is said to be 'recursive'.
##
## The escaped semi-colon tells '-exec' where its command line ends ---
## there is nothing more on its to-do list.
##
find  ~/Desktop -type f -exec file {} \;

find-Snippet 2 :

## Use 'find' to print the FULLY-QUALIFIED filenames of all files
## under each sub-directory under the 'Pictures' directory in the
## user's home directory. Traverses ALL sub-directories, at all levels.
## Lists directories as well as regular files.
##
find ~/Pictures -print

find-Snippet 3 :

## Uses 'find' to print the FULLY-QUALIFIED directory names 
## under the 'Pictures' directory in the user's home directory.
## Traverses ALL sub-directories, at all levels.
## Lists directories only --- NOT regular files.
##
find ~/Pictures -type d -print
##
## Similarly, try
##
find $HOME/.macromedia -type d -print
##
## $HOME is the equivalent of '~'.

find-Snippet 4 :

## Use 'find' in combination with 'ls -1' and 'wc -l' (line count) to show the
## the NUMBER OF FILES in each sub-directory under the 'Pictures' directory
## in the user's home directory.
##
## It would be nice if this would work :
##          find ~/Pictures -type d -exec "ls -1 {} | wc -l" \;
## BUT the 'exec' wants a single command (executable file), not a pipe involving
## multiple commands. You get error messages like the following.
##          find: `ls -1 /home//Pictures | wc -l': No such file or directory
##
## If you would put the command line
##          ls -1 $1 | wc -l
## in a script, say named $HOME/scripts/linecount.sh,
## then the following might work.
##
find ~/Pictures -type d -exec $HOME/scripts/linecount.sh {} \;
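Alternatively, on systems where 'find' can pass arguments to an inline shell command (Gnu and most modern Unix versions of 'find' can), the helper script can be avoided. Below is a sketch, demonstrated on a throw-away directory made with 'mktemp' so that it can be run anywhere; substitute your own directory (such as ~/Pictures) in real use. The '_' fills the $0 slot of the inline script, and each found directory arrives as $1.

```shell
## Make a small throw-away directory with 3 files in it.
TESTDIR=`mktemp -d`
touch $TESTDIR/file1 $TESTDIR/file2 $TESTDIR/file3

## Count the files in each directory found --- no helper script needed.
find $TESTDIR -type d -exec sh -c 'ls -1 "$1" | wc -l' _ {} \;

## Clean up the throw-away directory.
rm -r $TESTDIR
```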

find-Snippet 5 :

## Show all the files under your home directory that are symbolic-links.
## (Typically you would have about 10.)
find $HOME -type l -print

find-Snippet 6 :

## A simple example of using the '-prune' parameter with 'find'.
## This command specifies that if a directory named 'Icons' is found,
## that directory is 'pruned' --- none of the files under it are processed.
## But all other files (and directories) are printed, fully-qualified.
##
find ~/Pictures \( -type d -name Icons -prune \) -o -print


NOTE: The 'find' command syntax can get very complicated --- especially when using the '-prune' option. But the '-prune' can be useful to avoid a huge amount of printout when you are applying 'find' to directories that may have huge numbers of files and directories --- such as your home directory, or /usr, or, of course, the root directory (/).

Generally, you do not want to apply 'find' to the root directory, unless you do a heck of a lot of pruning. And during the testing process on huge directories, you will probably want to do testing on smaller subdirectories, and work your way up to the parent directory.
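As a starting point, here is a sketch of pruning more than one directory name in a single 'find' command. (The directory names '.git' and 'tmp' are arbitrary examples.)

```shell
## Print everything under the current directory EXCEPT anything in
## (or under) directories named '.git' or 'tmp'.
find . \( -type d \( -name .git -o -name tmp \) -prune \) -o -print
```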

As I find handy-looking uses of the 'find' command in magazines and books, I will put them here.


'grep' (and 'egrep') snippets :


< Return to Table of Contents, above. >

grep-Snippet 1 :

## Strips out lines starting with # in column 1.
## We assume we have loaded suitable names into vars FILENAME and FILENAME_STRIPPED.
## We have in mind here shell script files, in which comment lines start with the
## '#' character.
##
## If the '-v' were not used, only those comment lines would be listed/extracted.
##
grep -v '^#' "$FILENAME" > "$FILENAME_STRIPPED"

grep-Snippet 2 :

## Since comment lines in scripts can be indented, the following 'egrep'
## can be used to avoid listing the indented comments. ('egrep' allows for
## specifying multiple strings to search for. The strings are to be separated
## by the vertical bar character.)
##
## The 'egrep' is to remove comment lines --- more specifically:
##          a) lines starting with zero or more spaces and '##' 
##          b) lines starting with zero or more spaces and '# '.
##
## We imagine $FILENAME contains the name of a script file.
##
egrep -v '^ *##|^ *# ' "$FILENAME"
##
## In the string '^ *', the '^' represents the start of the line, and
## the ' *' represents zero or more blank spaces, at the start of the line
## before encountering a '#' character.
##
## We are using egrep with two criteria rather than the single criterion
##    grep -v '^ *#' "$FILENAME"
## because some script lines may start with a hex (color-specifying)
## string such as '#cc33ff'. We do not want to eliminate those lines.

grep-Snippet 3 :

## In the current directory, list all files but the ones that include the
## string '.old' or '.OLD' or '_old' or '_OLD' or '.bkup' or '_bkup' in
## the file (or directory) names.
##
## The periods are escaped with back-slashes, because an unescaped
## period is a regular-expression metacharacter that matches ANY
## single character.
##
## If the '-v' were not used, only those old/backup files would be listed.
##
ls | egrep -v '\.old|\.OLD|_old|_OLD|\.bkup|_bkup'

grep-Snippet 4 :

## 'egrep' can be used to help check file type of a filename fed into a script,
## and exit if the file is not of the right type.
##
FILETYPE=`file $FILENAME1`
FILETYPE_CHK=`echo "$FILETYPE" | egrep 'text|Mail'`
if test "$FILETYPE_CHK" = ""
then
   exit
fi

grep-Snippet 5 :

## Use 'egrep' to search a log file for lines containing words indicating
## an error occurred. We use the '-n' option of 'egrep' to have it
## display the line numbers along with the match lines.
##
egrep -n 'ABEND|error|Fail' $LOGFILE
##
## To make the search case-insensitive, use the '-i' option.
##
egrep -ni 'abend|error|fail' $LOGFILE

grep-Snippet 6 :

## Use 'grep' along with 'wc' to count the number of lines in a file 
## that contain a specified string. In this example, the string is
## 'Source file:'.
##
grep 'Source file:' $FILE | wc -l
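Note that 'grep' can also do the counting itself, via its '-c' option, which saves the extra 'wc' process. A self-contained illustration (the sample lines are made up):

```shell
## 'grep -c' counts the matching lines directly.
printf 'Source file: a.c\nother line\nSource file: b.c\n' | grep -c 'Source file:'
## prints: 2
```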

grep-Snippet 7 :

## 'grep' can be used, in combination with 'ls', to make some handy
## aliases to use when working at the command line.
##
## 'lsd' shows only the directories in the current directory.
## 'lsf' shows only the non-directories in the current directory.
## 'lss' shows both --- directories first.
##
alias lsd='ls -F | grep '\''/$'\'''
alias lsf='ls -F | grep -v '\''/$'\'''
alias lss='lsd;lsf'
##
## By adding the '-a' option to the 'ls' commands, we can show
## the 'hidden' directories and 'hidden' non-directory files.
##
alias lsad='ls -a -F | grep '\''/$'\'''
alias lsaf='ls -a -F | grep -v '\''/$'\'''
alias lsas='lsad;lsaf'

grep-Snippet 8 :

## We can use 'grep' with 'ls -l' to put a list of subdirectory names
## of a given directory into a (new) environment variable, say DIRLIST.
##
DIRLIST=`ls -l "$dirname1" | grep "^d"| awk '{ print $NF }'`

grep-Snippet 9 :

## For a given directory, $DIRNAME, load a couple of variables with
## 1) the names of the sub-directories of the directory and
## 2) the names of the non-directory files in the directory.
##
LS_OUT_DIRS=`ls -lAp "$DIRNAME" | grep '/$'`
LS_OUT_FILS=`ls -lAp "$DIRNAME" | grep -v '/$' | grep -v '^total'`

grep-Snippet 10 :

## We can use 'grep' with 'ping' to check if we can connect to a host.
##
HOSTLIST="homePC1 homePC2 homePC3 homePC4"
OUTLIST="/tmp/pingable_hosts.lis"

echo "\
******************************** `date '+%Y %b %d  %T%p'` *********************

CURRENTLY PING-ABLE HOSTS

among hosts: $HOSTLIST
" > $OUTLIST

for HOST in $HOSTLIST
do

   PINGOUT=`ping -c 3 $HOST 2>&1`
   PING_100LOSS=`echo $PINGOUT | grep '100% packet loss'`
   PING_RESOLVE=`echo $PINGOUT | grep 'unknown host'`

   if test ! "$PING_100LOSS" = ""
   then
      echo  "$HOST 100% NOT PING-ABLE" >> $OUTLIST
   elif test ! "$PING_RESOLVE" = ""
   then
      echo  "$HOST IS AN UNKNOWN HOST" >> $OUTLIST
   else
      echo  "$HOST seems ACCESSIBLE" >> $OUTLIST
   fi
done

nedit -read $OUTLIST

grep-Snippet 11 :

## We can use 'grep' to extract lines from a file that look like
## they contain an IPv4 address. Each '[0-9]\{1,3\}' matches one to
## three digits. (With a plain '[0-9]*' --- ZERO or more digits ---
## a run of bare periods like '...' would also match.)
##
grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $FILENAME

grep-Snippet 12 :

more to come


'head' snippets :


< Return to Table of Contents, above. >

head-Snippet 1 :

## Gets the first 4 characters of the first line of a given file.
FIRST4=`head -1 "$FILENAME" | cut -c1-4`

head-Snippet 2 :

See eval-Snippet#1 above for use of 'head' and 'tail' together
to extract a specified line (given by line number) from a file.


'ls' snippets :


< Return to Table of Contents, above. >

ls-Snippet 1 :

## Gets the blocks-used number that is printed in the first line
## ('total NN') of 'ls -l' output.
BLOCKS=`ls -l | head -1 | cut -d" " -f2`

ls-Snippet 2 :

more to come


'ps' snippets :


< Return to Table of Contents, above. >

ps-Snippet 1 :

The format of the 'ps' command output can vary some among various Unixes and Linux.
Some examples of using 'ps' (in non-trivial ways) on Linux may be added here.


'sed' snippets :


< Return to Table of Contents, above. >

sed-Snippet 1 :

## Changes all commas to spaces, in all records of a file (or a data stream
## piped into this sed command).
##
sed 's|,| |g' $FILENAME

sed-Snippet 2 :

## Removes numerals, commas, and periods from all records of a file (or
## a data stream piped into this sed command).
##
sed 's|[0-9,.]||g' $FILENAME

sed-Snippet 3 :

## Changes colons to periods in all records of a file (or a data stream
## piped into this sed command).
##
## Note that if you use double quotes, you need to 'escape' (with the
## back-slash) any characters that keep a special meaning to the shell
## INSIDE double quotes --- such as '$' and the back-quote. The colon
## and period are not among those, so this command works the same
## with or without back-slashes in front of them.
##
sed "s|:|.|g" $FILENAME

sed-Snippet 4 :

## Deletes the first line of a file (or data stream
## piped into this sed command).
##
cat $FILENAME | sed '1d'
##
## OR (better form)
##
sed '1d' $FILENAME
##
## "sed '1d'" provides a substitute for the "tail -n +2" command
## used in awk-Snippet#4 above.
##
FILSYSLIST=`df -l | sed '1d' | awk '{print $6}'`

sed-Snippet 5 :

more to come


'sort' snippets :


< Return to Table of Contents, above. >

sort-Snippet 1 :

See awk-Snippet#3 and df-Snippet#1 and du-Snippet#1 above.
Some other 'sort' examples may be added later.


'tr' snippets :


< Return to Table of Contents, above. >

tr-Snippet 1 :

## Use the 'tr' (translate) command to convert lower-case alphabet in a string to
## upper-case.
##
STRING2=`echo "$STRING1" | tr "[a-z]" "[A-Z]"`
##
## As you would expect, tr "[A-Z]" "[a-z]" translates the other way.
## The POSIX character-class form --- tr "[:lower:]" "[:upper:]" ---
## does the same job, and also handles accented letters in
## non-English locales.

tr-Snippet 2 :

## Use 'tr' to remove carriage-return characters from a file, such as
## a file from Microsoft Windows.
##
tr -d '\015' < $FILEIN > $FILEOUT

tr-Snippet 3 :

## Use 'tr' to remove space characters from a line.
##
echo "$LINE" | tr -d  ' '

tr-Snippet 4 :

## Use 'tr' to check if a string is numeric --- that is, all integers and decimal points.
## '-cd' causes the deletion of characters OTHER THAN period and zero thru 9.
## '-c' indicates 'complementary to'.
## (The character set is written '.0-9' --- a bracketed form like
##  "[0-9]" would also let literal '[' and ']' characters through
##  the check.)
##
STR_NUMCHEK=`echo "$STRING" | tr -cd '.0-9'`
if test ! "$STR_NUMCHEK" = "$STRING"
then
   echo "String < $STRING > is NOT a valid numeric response here."
   exit
fi

Here is an old (pre-2005) Unix shell script file that would convert between MS-Windows, Unix, and Mac (pre-2005) text files. It makes heavy use of 'tr' to translate carriage-return and line-feed characters --- deleting them or switching them or adding them.
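A sketch of the kinds of translations such a script performs. Since 'tr' can only delete or replace characters one-for-one, the Unix-to-DOS direction (which must INSERT a character on each line) uses 'awk' here instead. The octal codes are 015 for carriage-return and 012 for line-feed.

```shell
## DOS to Unix: delete the carriage-return characters.
printf 'line1\r\nline2\r\n' | tr -d '\015'

## Mac (pre-OS X) to Unix: carriage-returns become line-feeds.
printf 'line1\rline2\r' | tr '\015' '\012'

## Unix to DOS: append a carriage-return to each line with 'awk'.
printf 'line1\nline2\n' | awk '{ printf "%s\r\n", $0 }'
```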


'xargs' snippets :


< Return to Table of Contents, above. >

xargs-Snippet 1 :

## This is an example from the man page on 'xargs'.
## It uses the /etc/passwd file to generate a sorted list of the userids defined
## to your host computer. It uses 'xargs echo' to put all the userids on
## one line, rather than one userid per line.
##
cut -d: -f1 < /etc/passwd | sort | xargs echo
##
## As the 'man' page says, xargs can be used to
## "build and execute command lines from standard input".
##
## That said, I have found the cases where I really needed to use 'xargs'
## very rare.
##
## 'xargs' might be useful for cancelling a bunch of queued print jobs,
## with a command like the following. ('xargs' appends the job IDs
## to the 'cancel' command.)
##
lpstat -o | awk '{print $1}' | xargs cancel
##
## I once found it useful to use 'xargs' to 'kill' a bunch of processes
## started by a certain command. Here is an example of how I did it.
## $CMD represents the command that was used to start the processes.
## ('xargs' appends the process IDs to the 'kill -9' command.)
##
ps -ef | grep $CMD | grep -v grep | awk '{ print $2 }' | xargs kill -9 2> /dev/null
##
## You might find it useful to use 'xargs' to compress the files in a directory.
## Example:
ls ${dirname1} | xargs  -i gzip ${dirname1}/{}
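When filenames may contain embedded spaces (see the Warnings section above), the Gnu extensions '-print0' on 'find' and '-0' on 'xargs' keep the names intact by separating them with null characters instead of whitespace. A throw-away demonstration:

```shell
## Make a test directory holding a filename with embedded spaces.
TESTDIR=`mktemp -d`
touch "$TESTDIR/name with spaces.txt"

## The null-separated names survive the pipe intact.
find "$TESTDIR" -type f -print0 | xargs -0 ls -l

rm -r "$TESTDIR"
```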


LINKS : ( SOURCES of shell snippets )


< Return to Table of Contents, above. >

Brief background on shell scripting :

This page assumes the reader is familiar with many of the basics of shell scripting and with using the CLI (Command Line Interface).

However, for a basic introduction to Linux/Unix shell scripting, try the Wikipedia page on Shell_scripting.

In particular, see the Wikipedia page on the Bourne_shell. You may find it helpful to follow some of the links on those Wikipedia pages.


Shell scripting tutorials :

There are many web sites that provide tutorials for shell scripting. Someday I may provide a separate web page of such sites. Unfortunately, many such sites go dead quite rapidly.

In the meantime, here are a few links to some tutorials :

A problem with tutorials is that they often contain many code 'snippets' but not many complete code samples that accomplish very much. To accomplish much, they would have to be too long for a tutorial.


Sources of Shell Script code samples :


Web Archives:

Many instructive shell script code samples are available through code archive web sites such as the 'Shelldorado' site.

I found the Shelldorado archive by doing a web search on the string 'shell script'.

I used the string 'shell script' because there are many other scripting languages, such as Perl, Python, and Tcl/Tk. See my Tcl-Tk pages available via my Computer Topics (Ref Info) page.

If I had used 'shell' and 'script' as separate words, I probably would have 'hit' a lot more Perl, Python, and other script web pages.


Web Script-Code Searches:

Since many shell scripts start off with a call to the '#!/bin/sh' or '#!/bin/ksh' or '#!/bin/bash' interpreter, one way to find code samples is to do web searches on keyword strings such as "#!/bin/sh" --- but see the NOTE below.

NOTE:
Since Google essentially replaces special characters in your search strings with a blank character (and seemingly replaces leading blanks by a null character),

  • a search on "bin/sh"
  • a search on "bin sh"
  • a search on "#!/bin/sh", and
  • a search on "   bin sh"
will usually return the same 'hits'.


Forums:

Another handy source of help and code samples for shell scripting is forums, such as

Bottom of my SNIPPETS of Shell Code page.

To return to a previously visited web page location, click on the
Back button of your web browser a sufficient number of times.
OR, use the History-list option of your web browser.
OR ...

< Return to Table of Contents, above. >
< Return to Top of Page, above. >
< Return to Computer menu page. >

Page created 2010 May 19. Changed 2011 Jun 06.