<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML
><HEAD
><TITLE
>Real Life Examples </TITLE
><META
NAME="GENERATOR"
CONTENT="Modular DocBook HTML Stylesheet Version 1.7"><LINK
REL="HOME"
TITLE="From VMS to Linux HOWTO"
HREF="index.html"><LINK
REL="PREVIOUS"
TITLE="Useful Programs "
HREF="useful-programs.html"><LINK
REL="NEXT"
TITLE="Tips You Can't Do Without"
HREF="x811.html"></HEAD
><BODY
CLASS="SECT1"
BGCOLOR="#FFFFFF"
TEXT="#000000"
LINK="#0000FF"
VLINK="#840084"
ALINK="#0000FF"
><DIV
CLASS="NAVHEADER"
><TABLE
SUMMARY="Header navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TH
COLSPAN="3"
ALIGN="center"
>From VMS to Linux HOWTO</TH
></TR
><TR
><TD
WIDTH="10%"
ALIGN="left"
VALIGN="bottom"
><A
HREF="useful-programs.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="80%"
ALIGN="center"
VALIGN="bottom"
></TD
><TD
WIDTH="10%"
ALIGN="right"
VALIGN="bottom"
><A
HREF="x811.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
></TABLE
><HR
ALIGN="LEFT"
WIDTH="100%"></DIV
><DIV
CLASS="SECT1"
><H1
CLASS="SECT1"
><A
NAME="EXAMPLES"
></A
>11. Real Life Examples</H1
><P
>UNIX's core idea is that many simple commands can be linked
together via pipes and redirection to accomplish even very complex tasks.
Have a look at the following examples. I'll only explain the most complex
ones; for the others, please study the above sections and the man pages.</P
><P
><EM
>Problem</EM
>: <TT
CLASS="LITERAL"
>ls</TT
> is too quick and the file names fly away.</P
><P
><EM
>Solution</EM
>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ ls | less</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
><EM
>Problem</EM
>: I have a file containing a list of words. I want to sort it
in reverse order and print it.</P
><P
><EM
>Solution</EM
>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ cat myfile.txt | sort -r | lpr</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
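><P
>Note that <TT CLASS="LITERAL">sort</TT> reads file names given on the command
line, so the <TT CLASS="LITERAL">cat</TT> is not strictly needed:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ sort -r myfile.txt | lpr</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P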
><P
><EM
>Problem</EM
>: my data file has some repeated lines! How do I get rid of them?</P
><P
><EM
>Solution</EM
>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ sort datafile.dat | uniq &#62; newfile.dat</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
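><P
>Most versions of <TT CLASS="LITERAL">sort</TT> can do the same in a single
step with the <TT CLASS="LITERAL">-u</TT> switch:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ sort -u datafile.dat &#62; newfile.dat</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P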
><P
><EM
>Problem</EM
>: I have a file called 'mypaper.txt' or 'mypaper.tex' or some
such somewhere, but I don't remember where I put it. How do I find it?</P
><P
><EM
>Solution</EM
>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ find ~ -name "mypaper*" </PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>Explanation: <TT
CLASS="LITERAL"
>find</TT
> is a very useful command that lists all the files
in a directory tree (starting from <TT
CLASS="LITERAL"
>~</TT
> in this case). Its output
can be filtered to meet several criteria, such as <TT
CLASS="LITERAL"
>-name</TT
>.</P
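><P
>For instance, to restrict the search to regular files modified within the
last week, you could combine a couple of criteria:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ find ~ -type f -name "mypaper*" -mtime -7</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P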
><P
><EM
>Problem</EM
>: I have a text file containing the word 'entropy' in this
directory; is there anything like <TT
CLASS="LITERAL"
>SEARCH</TT
>?</P
><P
><EM
>Solution</EM
>: yes, try</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ grep -l 'entropy' *</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
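><P
>The <TT CLASS="LITERAL">-l</TT> switch prints only the names of the matching
files; drop it if you also want to see the matching lines:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ grep 'entropy' *</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P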
><P
><EM
>Problem</EM
>: somewhere I have text files containing the word 'entropy'; I'd
like to know which ones they are and where. Under VMS I'd use <TT
CLASS="LITERAL"
>search entropy
[...]*.*;*</TT
>, but <TT
CLASS="LITERAL"
>grep</TT
> can't recurse subdirectories. Now what?</P
><P
><EM
>Solution</EM
>: </P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ find . -exec grep -l "entropy" {} \; 2&#62; /dev/null</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>Explanation: <TT
CLASS="LITERAL"
>find .</TT
> outputs all the file names starting from the
current directory, <TT
CLASS="LITERAL"
>-exec grep -l "entropy"</TT
> is an action to be
performed on each file (represented by <TT
CLASS="LITERAL"
>{}</TT
>), <TT
CLASS="LITERAL"
>\;</TT
>
terminates the command, and <TT CLASS="LITERAL">2&#62; /dev/null</TT> discards
the error messages. If you think this syntax is awful, you're right.</P
><P
>Alternatively, write the following script:</P
><P
>&#13;<TABLE
BORDER="0"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="PROGRAMLISTING"
>#!/bin/sh
# rgrep: recursive grep
# usage: rgrep --switches 'pattern' 'directory'
if [ $# != 3 ]
then
  echo "Usage: rgrep --switches 'pattern' 'directory'"
  exit 1
fi
# run grep on every file under the given directory, discarding error messages
find "$3" -name "*" -exec grep $1 "$2" {} \; 2&#62; /dev/null</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>Explanation: <TT
CLASS="LITERAL"
>grep</TT
> works like <TT
CLASS="LITERAL"
>search</TT
>, and combining it with
<TT
CLASS="LITERAL"
>find</TT
> we get the best of both worlds.</P
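><P
>For instance, assuming you saved the script as <TT CLASS="LITERAL">rgrep</TT>
somewhere in your path and made it executable with
<TT CLASS="LITERAL">chmod +x rgrep</TT>, you could search a hypothetical
<TT CLASS="LITERAL">~/papers</TT> directory tree like this:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ rgrep -i 'entropy' ~/papers</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>(Recent versions of GNU <TT CLASS="LITERAL">grep</TT> also offer an
<TT CLASS="LITERAL">-r</TT> switch for recursive searches, but the
<TT CLASS="LITERAL">find</TT> method works with any <TT CLASS="LITERAL">grep</TT>.)</P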
><P
><EM
>Problem</EM
>: I have a data file that has two header lines, then every
line contains 'n' data values, not necessarily equally spaced. I want the 2nd
and 5th value of each line. Shall I write a Fortran program...?</P
><P
><EM
>Solution</EM
>: nope. This is quicker:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ awk 'NR &#62; 2 {print $2, "\t", $5}' datafile.dat &#62; newfile.dat</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>Explanation: the command <TT
CLASS="LITERAL"
>awk</TT
> is actually a programming language:
for each line starting from the third in <TT
CLASS="LITERAL"
>datafile.dat</TT
>, print out
the second and fifth field, separated by a tab. Learn some <TT
CLASS="LITERAL"
>awk</TT
>---it
saves a lot of time.</P
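><P
>If the fields were separated by commas instead of whitespace, the
<TT CLASS="LITERAL">-F</TT> switch would set the field separator; a sketch,
assuming a hypothetical <TT CLASS="LITERAL">datafile.csv</TT>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ awk -F',' 'NR &#62; 2 {print $2, "\t", $5}' datafile.csv &#62; newfile.dat</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P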
><P
><EM
>Problem</EM
>: I've downloaded an FTP site's <TT
CLASS="LITERAL"
>ls-lR.gz</TT
> to check its
contents. For each subdirectory, it contains a line that reads "total xxxx",
where xxxx is the size in kbytes of the directory's contents. I'd like to get
the grand total of all these xxxx values.</P
><P
><EM
>Solution</EM
>:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ zcat ls-lR.gz | awk ' $1 == "total" { i += $2 } END {print i}'</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
><P
>Explanation: <TT
CLASS="LITERAL"
>zcat</TT
> outputs the contents of the <TT
CLASS="LITERAL"
>.gz</TT
> file and pipes
to <TT
CLASS="LITERAL"
>awk</TT
>, whose man page you're kindly requested to read ;-)</P
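><P
>If you prefer the grand total in megabytes, let <TT CLASS="LITERAL">awk</TT>
do the division as well (assuming, as above, that the "total" figures are in
kbytes):</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ zcat ls-lR.gz | awk ' $1 == "total" { i += $2 } END {print i/1024, "MB"}'</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P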
><P
><EM
>Problem</EM
>: I've written a Fortran program, <TT
CLASS="LITERAL"
>myprog</TT
>, to calculate a
parameter from a data file. I'd like to run it on hundreds of data files
and get a list of the results, but it's a nuisance to type each file name by
hand every time. Under VMS I'd write a lengthy command file; what about Linux?</P
><P
><EM
>Solution</EM
>: a very short script. Make your program look for the data
file '<TT
CLASS="LITERAL"
>mydata.dat</TT
>' and print the result on the screen (stdout), then
write the following script:</P
><P
>&#13;<TABLE
BORDER="0"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="PROGRAMLISTING"
>#!/bin/sh
# myprog.sh: run the same command on many different files
# usage: myprog.sh *.dat
for file in $*   # for all parameters (e.g. *.dat)
do
  # append the file name to results.dat
  echo -n "${file}: " &#62;&#62; results.dat
  # copy the current argument to mydata.dat, run myprog
  # and append the output to results.dat
  cp ${file} mydata.dat ; myprog &#62;&#62; results.dat
done</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
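><P
>Assuming the script is saved as <TT CLASS="LITERAL">myprog.sh</TT> and made
executable, a typical run would look like this:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ ./myprog.sh *.dat
$ less results.dat</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P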
><P
><EM
>Problem</EM
>: I want to replace `geology' with `geophysics' in all my
text files. Shall I edit them all manually?</P
><P
><EM
>Solution</EM
>: nope. Write this shell script:</P
><P
>&#13;<TABLE
BORDER="0"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="PROGRAMLISTING"
>#!/bin/sh
# replace $1 with $2 in $*
# usage: replace "old-pattern" "new-pattern" file [file...]
OLD=$1          # first parameter of the script
NEW=$2          # second parameter
shift ; shift   # discard the first 2 parameters: the next are the file names
for file in $*  # for all files given as parameters
do
  # replace every occurrence of OLD with NEW, save on a temporary file
  sed "s/$OLD/$NEW/g" ${file} &#62; ${file}.new
  # rename the temporary file as the original file
  /bin/mv ${file}.new ${file}
done</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
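><P
>For example, to carry out the substitution above on all your
<TT CLASS="LITERAL">.txt</TT> files (assuming the script is saved as
<TT CLASS="LITERAL">replace</TT> and made executable):</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ ./replace geology geophysics *.txt</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P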
><P
><EM
>Problem</EM
>: I have some data files whose length I don't know, and I have to
remove their last-but-one and last-but-two lines. Er... manually?</P
><P
><EM
>Solution</EM
>: of course not. Write this script:</P
><P
>&#13;<TABLE
BORDER="0"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="PROGRAMLISTING"
>#!/bin/sh
# prune.sh: removes the last-but-one and last-but-two lines from files;
# the pruned copy of each file is saved as file.new
# usage: prune.sh file [file...]
for file in $*   # for every parameter
do
  LINES=`wc -l $file | awk '{print $1}'`  # number of lines in file
  LINES=`expr $LINES - 3`                 # LINES = LINES - 3
  head -n $LINES $file &#62; $file.new  # output the first LINES lines
  tail -n 1 $file &#62;&#62; $file.new  # append the last line
done</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P
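><P
>Run it on your data files and check the results; the script leaves the pruned
copies in <TT CLASS="LITERAL">.new</TT> files instead of overwriting the
originals:</P
><P
>&#13;<TABLE
BORDER="1"
BGCOLOR="#E0E0E0"
WIDTH="100%"
><TR
><TD
><FONT
COLOR="#000000"
><PRE
CLASS="SCREEN"
>$ ./prune.sh *.dat
$ ls *.new</PRE
></FONT
></TD
></TR
></TABLE
>&#13;</P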
><P
>I hope these examples whetted your appetite...</P
></DIV
><DIV
CLASS="NAVFOOTER"
><HR
ALIGN="LEFT"
WIDTH="100%"><TABLE
SUMMARY="Footer navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
><A
HREF="useful-programs.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="index.html"
ACCESSKEY="H"
>Home</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
><A
HREF="x811.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
>Useful Programs</TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
>&nbsp;</TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
>Tips You Can't Do Without</TD
></TR
></TABLE
></DIV
></BODY
></HTML
>