Linux Gazette... making Linux just a little more fun!
Copyright © 1996-97 Specialized Systems Consultants, Inc. linux@ssc.com
_________________________________________________________________
Welcome to Linux Gazette! (tm)
Sponsored by:
InfoMagic
Our sponsors make financial contributions toward the costs of
publishing Linux Gazette. If you would like to become a sponsor of LG,
e-mail us at sponsor@ssc.com.
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
* Help Wanted -- Article Ideas
* General Mail
_________________________________________________________________
Help Wanted -- Article Ideas
_________________________________________________________________
Date: Mon, 23 Jun 1997 22:40:39 -0500
From: Tom Cannon tomc@usit.net
Subject: COBOL
Are there any COBOL compilers that will run under Linux? I have a
serious need to move some code to the Linux platform if there is
something available. Thanks.
(Check with Acucobol Inc., info@acucobol.com,
http://www.acucobol.com --Editor)
_________________________________________________________________
Date: Sat, 21 Jun 1997 16:02:04 -0400
From: Linda Brooks lbrooks@stc.net
Subject: Packard Bell SOUND16A Soundcard
I have a Packard Bell Pack-Mate 4990CD, which has a soundcard
apparently called a "SOUND16A" (the documentation doesn't make it
clear whether PB or Aztech made it, or if it was a joint production).
It is a 16 bit sound card, which I can use under Windows 95 as such.
However, in Linux, the best I can do is 8 bit sound (via Sound Blaster
Pro 2.0 emulation). The card claims to support MSS, but nowhere in the
documentation or setup program does it specify which IRQ this runs at,
although it does tell what port. I have contacted Packard Bell's tech
support, but they say they only support Windows software for free, and
that if I wanted to talk about Linux or some such operating system, I
would need to get "special support", which would cost a ridiculously
high amount.
As a struggling college student, I don't have much money to spend on
the computer (it is actually my family's that I scratched up enough
space to install Linux on), so I can't get a new sound card, and I am
not even sure if the commercial sound drivers support this particular
sound card.
I'm probably spoiled by Windows, but I don't think it's asking too
much to want 16 bit sound so I can listen to 44KHz samples in stereo
(I'm quite a MOD fan), listen to MP2's or MP3's, etc...
I'm not much of a coder, so I can't go about writing my own drivers.
If anyone knows how to set up this sound card for full 16 bit
sound, please inform me. Or, if you know of any 8 bit .MP? players,
that would work too =)
_________________________________________________________________
Date: Thu, 19 Jun 1997 12:03:00 -0400
From: Albert Race race@dgms.com
Subject: Linux HELP!
I would like to install Linux on a Sun 386i machine with 16 MB of
RAM, two 350 MB SCSI drives, a color video adapter, a tape drive, and
network support. When I try to install using a boot disk, I get the
following message.
Boot: Device fd(0,0,0): Invalid Boot Block
This occurs with any boot disk except for Sun's. Is there a way I can
get Linux to install on this system? Any suggestions would be greatly
appreciated. If you cannot help me, please redirect this message to
someone who can. I don't know where to get this type of information.
I received these machines for free and would like to put them to use
with Linux.
Thank you Albert F. Race
_________________________________________________________________
Date: Mon, 16 Jun 1997 11:44:43 +0200
From: Claudio cricci@cpiprogetti.it
Subject: Matrox
Is there a way to correctly configure a Matrox Mystique with 4MB of
RAM under X, or must I throw it away?
_________________________________________________________________
Date: Mon, 2 Jun 1997 00:11:40 -0300 (EST)
From: Rildo Pragana rpragana@acm.org
Subject: Interfacing Genius Color Page-CS Scanner
Hello, please help me interface my Color Page CS desktop scanner with
Linux. Right now, I can scan only from Windows (Argh!!) and it would
be nice to have The Gimp access my scanned material. I can program in
C and Tcl/Tk, if I can at least get the information on its SCSI card
and the scanner itself. Any information you may have is precious to
me. When I
have this job done, of course, I'll be happy to publish my adventures
in the Gazette.
best regards,
Rildo Pragana
Greetings from Recife, the Brazilian Venice
_________________________________________________________________
Date: Thu, 12 Jun 1997 14:36:09 -0400 (EDT)
From: David Bubar bubarda@sch.ge.com
Subject: Q: How do you un virtual a virtual screen?
My screen may be 800x600, but my virtual screen is set at something
like 1600x1200. How do I change this? Note:
1. This is not about virtual desktops; I like the use of the PAGER.
2. I wish you would put out a configuration guide for X that does NOT
have to be a TOME but a small book(let) that helps users customize
X to work the way they want.
_________________________________________________________________
Date: Mon Jun 16 13:46:14 1997
From: Ade Bellini, AdeBellini@aol.com
Subject: *2+ Processing
Sir, I am 35, from Sweden, at present using a dual (*2) Pentium 90
with NT4, Slackware 1.2.13 and Red Hat 3.0 (and DOS 6.22!) all on the
same machine. (paranoid!) I am interested in knowing how to take
advantage of the *2 CPUs on a Linux-based machine. Anything regarding
*2+ processing is of interest to me, as I use NT4 as a server and
would like to try using Linux instead. Many thanks in advance,
Ade.
_________________________________________________________________
Date: Tue, 3 Jun 1997 07:33:50 -0700 (PDT)
From: David Mandel dmandel@transport.com
Subject: CD Burners, Scanners, Digital Cameras, etc.
I have a mess of family photographs and possibly 35mm slides that I
want to preserve. One idea I'm considering is scanning these and
putting them on CDs. So I have a few questions.
1. Will a Sony CDU926S burner work with xcdroast? The documentation
says a Sony CDU920S will work, but I don't know the differences
between the CDU920S and CDU926S. A bare bones (no docs, drivers,
software) CDU926S is only $265. The MS ready version is $350, but
who would want that?
2. What is a good, but cheap flatbed scanner to use? (Good means 24
bit color and >= 300dpi optical resolution.) What software (in
Linux) supports the scanner?
3. I can't afford one, but... Are there any 35mm slide scanners on
the market with Linux support?
4. And as long as I'm asking dumb questions... Does Linux have
support for any digital cameras yet? Someday many of us will want
to change to digital photography, and it would be awful to have to
learn Windows to do this.
Thank you for your time and help,
Dave Mandel
(We'll have to depend on our readers for 1 and 3. As to 2, we use
the HP5P flatbed scanner, which fits your qualifications for good.
As to cheap, it depends on your definition--it sells for around
$400. The Linux software that supports all HP scanners is XVscan,
and a very nice program it is. As to 4, the answer is yes; Hitachi
MP-EG1A, http://www.mpegcam.net/. --Editor)
_________________________________________________________________
Date: Tue, 3 Jun 1997 09:01:06 +0100 (BST)
From: Andrew Philip Crook shu96apc@reading.ac.uk
Subject: Ascii Problems with FTP
When I use a DOS ftp program (in ASCII mode) to download a Linux
script -- because Linux is not running yet -- the script fails to
work when installed. This is because a ^M is appended to every line;
take them out and it works.
What's happening?
How can I stop it?
Or how can I filter all the ^M's out?
Many Thanks
Andrew Crook.
(In a couple of last year's issues, there are several Tips & Tricks
for getting rid of ^M. You can't stop them from happening. I
personally get rid of them in vi using a global replace (e.g.,
:%s/^M//g); one command and they're gone forever. --Editor)
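(A command-line alternative, as a sketch -- the file names here are
illustrative:
tr -d '\015' < script.dos > script.unix
Octal 015 is the carriage return (^M); tr deletes every one and
writes the cleaned copy to script.unix. --Editor)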
_________________________________________________________________
Date: Thu, 05 Jun 1997 23:08:08 -0400
From: Steve Malenfant, smalenfant@cablevision.qc.ca
Subject: Problems with XFree86
I'm a new Linux user and the problem is still XFree86! So I tried
to figure out what I can do for the Linux community. In Issue #16,
you said that the problem is not the video card but monitor timings.
So why does Windows 95 have all these presets for monitors and Linux
doesn't? Why can't we take the entries in the Microsoft library and
transfer them into the database of XF86Setup or something like that?
Because it's true that the dotclock and all this is very scrambled!
Why not just resolution and vertical refresh -- that's all we need
to know; the program could do the rest! We shouldn't have to know
what horizontal frequency and dotclock it is!
Steve Malenfant
_________________________________________________________________
Date: Thu, 19 Jun 1997 15:39:58 -0700
From: Kevin Hartman KevinH@hsaeug.com
Subject: Afterstep
Would anyone be interested in an Afterstep customization how-to/where
to get?
Kevin
(Have you got one set up, or are you just trying to find out if
there's a need? --Editor)
_________________________________________________________________
Date: Sat, 07 Jun 1997 02:34:57 -0400
From: sinyz, sinyz@superlink.net
Subject: Need Help
Hi, if you happen to have time on your end, please be so kind as to
answer a few questions for a newbie!
Well, here is the situation, and I need to get some serious advice
from people like you. I have been reading the newsgroups and HOWTOs.
They have been quite informative, and increasingly so as I continue.
Now, thank GOD I got my Linux (Red Hat 4.1) box set up and running on
my slave drive with Win95 on the master. It detected my CDROM and I
also configured my X Windows (X11R6).
But there are a couple of questions:
1. I have a video card of type Diamond S3 ViRGE 3D 2000. The S3
driver was a choice in XF86Setup, which I chose, and everything
seems to work fine. Also I chose the 800x600 resolution SVGA
monitor. I have been hearing rumors from friends that the video
card, when being used by X Windows, may mess up the monitor. This
has been troubling me quite a bit. What's up with this?
2. I read using the dmesg command that Linux at boot time does not
notice that there is a device on tty1. The specific lines read
like this:
Serial driver version 4.13 with no serial options enabled
tty00 at 0x03f8 (irq=4) is a 16550A
tty03 at 0x02e8 (irq=3) is a 16550A
There seems to be no mention of tty1 (COM2, IRQ 3), where my modem is
installed! How do I fix this? By the way, my modem happens to be
a plug-n-play modem -- a Supra 28.8bps. I have heard that pnp modems
have problems with Linux and that there are fixes for pnp types --
please recommend any. (In effect, how do I get my modem to work?)
3. Also, I did not notice during the boot time messages anything to
do with the PPP protocol, which I definitely need to dial up to an
ISP. Does that mean recompiling the kernel -- how? (If the Red Hat
distribution has a specific or simpler way of doing things, then
let me know.) Thanks a lot in anticipation.
_________________________________________________________________
Date: Wed, 11 Jun 1997 11:06:57 +0200 (MET DST)
From: Martin Lersch lersch@athene.informatik.uni-bonn.de
Subject: User-Level Driver For HP ScanJet 5p?
Hello! Can you point me in some direction where I can find a
user-level driver for the HP ScanJet 5p? There exists the HPSCANPBM
driver, which works in part, but does not support the -width and
-height options for the ScanJet 5p. I guess it was written for a
ScanJet 4c or something like that. BTW: The home page of HP does not
give much support to Linux users. They do not publish the ESCAPE
sequences of the scanners.
Regards, Martin Lersch
_________________________________________________________________
General Mail
_________________________________________________________________
Date: Sun, 01 Jun 1997 00:56:52 -0500
From: Piotr Mitros pmitors@mit.edu
Subject: WordPerfect for Linux
Before more users spend many hours downloading the 50 megabyte (!)
WordPerfect for Linux, you may want to note that the beta download
lets you get a demo version that times out after just 15 days. They
seem to have demo versions of WordPerfect 6 available, so it is not
that big a deal.
However, I would like to see a comparison of WordPerfect for Linux,
StarOffice's word processor and what is planned for GNU WP.
Piotr
(I'd like to see that comparison too. --Editor)
_________________________________________________________________
Date: Thu, 12 Jun 1997 06:42:47 -0400
From: Stephen L. Cito al256@detroit.freenet.org
Subject: Question about downloading the archive
Hello, I'd like to download the past issues of LG (having enjoyed LJ
now since last fall), but I don't think I could even get an 11 meg
file downloaded over my 14.4 modem within the 1 hour that I have
before my local Internet connection (the Greater Detroit Free Net)
times out on me. Is there any way to download the past issues in
smaller "chunks"?
Thanks and have a real nice day...
SC, Novi, MI
(Hmmm, that is a problem. No, I don't save the individual tar files
of previous issues separately. There is, of course, the TWDT
option for each issue, which gives you the issue as one great
big file. Not as nice as the normal multi-file format, but it's very
popular so it must work for some. --Editor)
______________________________________________________________
Date: Wed, 04 Jun 1997 22:52:36 -0700
From: James Zubb jimz_@ecom.net
Subject: ActiveX for Linux
Hi, I read the ActiveX for Linux question in the Answer Guy's
article, so I did a little looking and came up with a web site:
http://www.sagus.com/Prod-i~1/Net-comp/dcom/index.htm
I don't know if this is actually the ActiveX port for Linux or not --
I didn't feel like trying to figure it out -- but there is a beta for
Linux there. Beats me what it does or how it does it...
-- Jim Zubb
______________________________________________________________
Date: Fri, 6 Jun 1997 19:01:40 +0100 (BST)
From: Adrian Bridgett apb25@cam.ac.uk
Subject: Re: X Color Depth (In response to the message by Roland
Smith)
Normally 8-bit displays use 256 colours chosen from 2^24
(16,777,216), and 15/16/24/32 bit displays just use a fixed number
of colours spread "evenly" throughout the colour spectrum.
16-bit displays use 5 bits for red, 6 bits for green and 5 bits for
blue; however, the 65536 colours cannot be changed, so the overall
"resolution" of each colour component is lower than on an 8-bit
(256-colour) display. For instance you can only have 2^5 different
shades of red, rather than 2^8.
Adrian
______________________________________________________________
Date: Thu June 12 08:39:19 PDT 1997
From: Timothy Gray timgray@lambdanet.com
Subject: CNE Certification for Linux?
Oh, no not a certification suggestion......
Linux was developed as a better and free version of UNIX. Now
someone wants to make a CNE for Linux? As a successful Linux
network administrator (and business owner that proudly states no
Microsoft here!), I am appalled at charging tens of thousands of
dollars for a piece of paper that states I can do my job. As an
Internet service provider and an avid Linux, Freeware, and Free
Software Foundation supporter, I hire my network administrators and
engineers (we call them system administrators) based on their
abilities and trainability. A CNE paper does not and will not ever
impress me. Even suggesting such an idea for Linux is appalling.
Let's keep our last bastion of freedom from the clutches of
corporate greed! If we must have a Linux CNE, make it 100% free and
available to everyone on the planet.
Thank you, Timothy Gray
______________________________________________________________
Published in Linux Gazette Issue 19, July 1997
______________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1997 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun! "
______________________________________________________________
More 2¢ Tips!
Send Linux Tips and Tricks to gazette@ssc.com
______________________________________________________________
Contents:
* Rude Getty
* Keeping Track of File Size
* What Packages Do I Need?
* Sound Card Support
* InstallNTeX is Dangerous
* Reply to Dangerous InstallNTeX Letter
* Monitoring An FTP Download
* Programming Serial Ports
* Grepping Files in a Directory Tree
* More Grepping Files
* Still More on Grepping Files
* More on Grepping Files in a Tree
* Grepping
* Untarring/Zip
* Hard Disk Duplication
* Reply to ncftp
* Sockets and Pipes
* Hex Dump
* More on Hex Dump
* Reply to Z Protocol
______________________________________________________________
Rude Getty
Date: Mon, 23 June 1997 21:12:23
From: Heather Stern star@starshine.org
I have a fairly important UNIX box at work, and I have come across
a good trick to keep around.
Set one of your console gettys to a nice value of "very rude" -- -17
or worse. That way, if a disaster comes up and you have to use the
console, it won't take forever to respond to you (because of
whatever went wrong).
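(One way to arrange this is in /etc/inittab -- the following is a
sketch; the getty command, tty and runlevels are assumptions, so
match them to your own inittab, and note that -n is GNU nice syntax:
c1:1235:respawn:/bin/nice -n -17 /sbin/agetty 38400 tty1
Because init respawns the getty through nice, the login shell you
get on that console inherits the -17 priority. --Editor)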
______________________________________________________________
Keeping Track of File Size
Date: Mon 16 June 1997 13:34:24
From: Volker Hilsenstein vhilsens@aixterm1.urz.uni-heidelberg.de
Hello everyone, I just read Bob Grabau's 2-cent tip for keeping track
of the size of a file. Since it is a bit inconvenient to type all
these lines each time you download something, I wrote this little
script:
#!/bin/bash
# This script monitors the size of the files given
# on the command line.
while :
do
clear
for i in "$@"; do
echo File $i has the size `ls -l "$i" | tr -s " " | cut -f 5 -d " "` bytes
done
sleep 1
done
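(A usage note -- the script name below is made up. Save the script
as, say, watchsize, then:
chmod +x watchsize
./watchsize file1.tar.gz file2.tar.gz
Press Ctrl-C to stop the loop. --Editor)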
Bye ... Volker
______________________________________________________________
Reply to "What Packages do I Need?"
Date: Tue 24 June 1997 11:15:56
From: Michael Hammel, mjhammel@long.emass.com
You asked about what packages you could get rid of, and mentioned
that you have AcceleratedX and that because of this you "can get rid
of a lot of the X stuff". Well, that's not really true.
AcceleratedX provides the X server, but you still need to hang onto
the X applications (/usr/X11R6/bin/*) and the libraries and include
files (/usr/X11R6/lib and /usr/X11R6/include) if you wish to
compile X applications or run X binaries that require shared
libraries.
Keep in mind that X is actually made up of three distinct parts:
the clients (the X programs you run like XEmacs or Netscape or
xterm), the server (the display driver that talks to your video
adapter), and the development tools (the libs, header files, imake,
etc). General users (non-developers) can forego installation of the
development tools but need to make sure to install the runtime
libraries. Each Linux distribution packages these differently, so
just be careful about which ones you remove.
One caveat: I used to work for Xi Graphics, but that was over a
year and a half ago. Although I keep in touch with them, I haven't
really looked at the product line lately. It's possible they ship
the full X distribution now, but I kind of doubt it. If they are
shipping the full X distribution (clients, server, development
tools), then disregard what I've said.
Hope this helps.
-- Michael J. Hammel
______________________________________________________________
Sound Card Support
Date: Mon 24 June 1997 11:16:34
From: Michael Hammel, mjhammel@long.emass.com
With regards to your question in the LG about support for the MAD16
Pro from Shuttle Sound System under Linux, you might consider the
OSS/Linux product from 4Front Technologies. The sound drivers they
supply support a rather wide range of adapters. The web page
http://www.4front-tech.com/osshw.html gives a list of what is and
isn't supported. The Shuttle Sound System 48 is listed as being
supported, as is generic support for the OPTi 82C929 chipset
(which you listed as the chipset on this adapter).
This is commercial software but it's only $20. I've been thinking of
getting it myself. I have used its free predecessor, known at times
as OSS/Lite or OSS/Free, and found it rather easy to use. I just
haven't gotten around to ordering (mostly cuz I never seem to have
time for doing installation or any other kind of admin work). I
will eventually.
4Front's web site is at http://www.4front-tech.com.
Hope this helps.
-- Michael J. Hammel
______________________________________________________________
InstallNTeX is Dangerous
Date: Fri 06 June 1997 12:31:14
From: Frank Langbein langbein@mathematik.uni-stuttgart.de
Dear James:
On Fri, 6 Jun 1997, James wrote:
You have still
make_dir " LOG" "$VARDIR/log" $DOU 1777
make_dir " TMP-FONTS" "$VARDIR/fonts" $DOU 1777
If I hadn't (now) commented-out your
(cd "$2"; $RM -rf *)
then both my /var/log/* and /var/fonts/* files and directories
would have been deleted!
Actually VARDIR should also be a directory reserved for NTeX only
(something like /var/lib/texmf). Deleting VARDIR/log is not really
necessary unless someone has some MakeTeX* logs in there which are
not user writable. Any pk or tfm files from older or non-NTeX
installations could cause trouble later. Sometimes the font metrics
change, and if some old metrics are used with a new bitmap or
similar, the resulting document might look rather strange. Further,
log and fonts have to be world writable (there are ways to prevent
this, but I haven't implemented a wrapper for the MakeTeX* scripts
yet), so placing them directly under /var is not really a good
idea. I am aware that the documentation of the installation
procedure is minimal, which makes it especially hard to select the
directories freely.
The real problem is allowing the directories to be chosen freely.
Selecting the TDS or the Linux filesystem standard is rather safe,
and at most other TeX files are deleted. The only really secure
option would be to remove the free choice and offer only the Linux
filesystem standard, the one from web2c 7.0 (which is also TDS
conformant), and a TDS-conformant structure in a special NTeX
directory. The free selection would not be accessible to a new user.
I could add some expert option which still allows a totally free
selection. Additionally, instead of deleting the directories, they
could be renamed.
There are plans for a new installation procedure, also supporting
such things as read-only volumes/AFS, better support for multiple
platform installation, etc. This new release will not be available
before I have managed to implement all the things which were planned
for 2.0. But that also means that there will probably be no new
release this year, as I have to concentrate on my studies.
Nevertheless, I will add a warning to the free selection in
InstallNTeX. That's currently the only thing I can do without
risking adding further bugs to InstallNTeX. Considering that my
holiday starts next week, I can't do more this month.
BTW, on another point, I had difficulty finding what directory was
searched for the packages to be installed. Only in the ntex-guide,
seemingly buried, is there:
This is caused by different ways of looking for the package in
NTeX-install, the text version of InstallNTeX and the Tcl/Tk
version of InstallNTeX. Therefore you get some warnings even if
NTeX-install would be able to install the packages. The minimal
documentation is one of the really big drawbacks of NTeX. I'm
currently working on a complete specification for the next release
which will turn into real documentation.
Thanks for pointing out the problems with the free selection of
the paths. So far I have concentrated on setting the installation
paths to non-existing directories.
Regards,
Frank
______________________________________________________________
Reply to Dangerous InstallNTeX Letter
To: Frank Langbein, langbein@mathematik.uni-stuttgart.de
Date: Sat, 07 Jun 1997 10:11:06 -0600
From: James james@albion.glarp.com
Dear Frank:
The hidden application of the operation
rm -rf *
to the unpredictable and unqualified input from a broad base of
naive users is highly likely to produce unexpected and undesired
results for some of these users. This is the kind or circumstance
more usually associated with a "prank". If this is _not_ your
intent, then further modifications to the script "InstallNTeX" are
required.
The script functions at issue include: mk_dirchain() ($RM -f $P),
make_dir() ($RM -rf * and $RM -f "$2"), make_tds_ln() ($RM -f
"$3"), and link_file() ($RM -rf "$2"). The impact of the operations
when using unexpected parameters, from misspellings or
misinterpretations, for instance, should be considered.
You might simply replace these operations with an authorization
dialog, or you could create a dialog with several recovery options.
(For the moment, I have replaced them with `echo "<some warning
parm>"'.)
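(As a sketch of such an authorization dialog -- the function name is
hypothetical, and a Bourne-style shell is assumed:
confirm_rm () {
# show the user exactly what is about to be destroyed
echo "About to delete everything under: $1"
echo -n "Type yes to continue: "
read answer
if [ "$answer" = "yes" ]; then
(cd "$1" && rm -rf *)
else
echo "Skipped $1"
fi
}
Anything other than a literal "yes" leaves the directory untouched.
--Editor)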
James G. Feeney
______________________________________________________________
Monitoring An FTP Download
Date: Tue, 10 Jun 1997 19:54:25 +1000 (EST)
From: Nathan Hand Nathan.Hand@anu.edu.au
I saw the recent script someone posted in the 2c column to monitor
an ftp download using the clear ; ls -l ; sleep trick. I'd just
like to point out there's A Better Way.
Some systems will have the "watch" command installed. This command
works pretty much like the script, except it uses curses and
buffers for lightning fast updates. You use it something like
watch -n 1 ls -l
And it prints out the current time, the file listing, and it does
the refreshes so fast that you don't see the ls -l redraws. I think
it looks a lot slicker, but otherwise it's the same as the script.
I don't know where the watch command comes from. I'm using a stock
standard Red Hat system (4.0) so hopefully people with similar
setups will also have a copy of this nifty little tool.
______________________________________________________________
Programming Serial Ports
Date: Wed 18 June 1997 14:15:23
From: Tom Verbeure to_verbeure@mietec.be
Hello, A few days ago, I had to communicate using the serial port
of a Sun workstation. A lot of information can be found here:
http://www.stokely.com/stokely/unix.serial.port.resources and here:
http://www.easysw.com/~mike/serial
Reading chapters 3 and 4 of that last page can do wonders. It took
me about 30 minutes to communicate with the machine connected to
the serial port. The code should work on virtually any Unix
machine.
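(For quick experiments you can even drive a port from the shell --
a sketch, where the device name and settings are illustrative:
stty 9600 raw -echo < /dev/ttyS1
cat < /dev/ttyS1 &
echo "hello" > /dev/ttyS1
The stty line sets the port to 9600 baud raw mode, the backgrounded
cat prints whatever the remote side sends, and the echo sends a line
out the port. For anything serious, the termios-based C interface
from those chapters is the robust route. --Editor)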
Hope this helps, Tom Verbeure
______________________________________________________________
Another Way of Grepping Files in a Directory Tree
Date: Thu 12 June 1997 15:34:12
From: Danny Yarbrough danny@interactive.visa.com
That's a good tip. To work around the command line length
limitation, you can use xargs(1):
find . -name "\*.c" -print | xargs grep foo
This builds a command line containing "grep foo" (in this case),
plus as many arguments (one argument for each line of its standard
input) as it can, to make the largest (but not too long) command
line it can. It then executes the command. It continues building
command lines and executing them until it reaches the end of file
on standard input.
(Internally, I suppose xargs doesn't build command lines, but an
array of arguments to pass to one of the exec*(2) family of system
calls. The concept, however, is the same.)
xargs has a number of other useful options for inserting arguments
into the middle of a command string, running a command once for
each line of input, echoing each execution, etc. Check out the man
page for more.
Cheers! Danny
______________________________________________________________
More Grepping Files
Date: Mon 16 June 1997 08:45:56
From: Alec Clews Alec@tca.co.uk
grep foo `find . -name \*.c -print`
The only caveat here is that UNIX limits the maximum number of
characters in a command line, and the "find" command may generate a
list of files too huge for the shell to digest when it tries to run
the grep portion as a command line. Typically this limit is 1024
chars per command line.
You can get around this with
find . -type f -name \*.c -exec grep foo {} /dev/null \;
Notes: The -type f skips directories (and soft links; use -follow
if needed) whose names happen to end with a c.
The /dev/null is required to make grep display the name of the file
it's searching. grep only displays the file name along with the
matched line when there are multiple files to search, and /dev/null
is a zero-length file.
Regards,
Alec
______________________________________________________________
Still More On Grepping Files
Date: Sat 14 June 1997 10:57:34
From: Rick Bronson rick@efn.org
Here is a similar way to grep for files in a directory tree. This
method uses xargs, and as such does not suffer from the limit on
the maximum number of characters in a command line.
sea ()
{
find . -name "$2" -print | xargs grep -i "$1"
}
I've defined it as a function in my .bashrc file, you would use it
like:
sea "search this string" '*.[ch]'
Rick
______________________________________________________________
Grepping
Date: Thu 19 June 1997 09:29:12
From: David Kastrup dak@neuroinformatic.ruhr-uni-buchum.de
Reply to "Grepping Files in a Tree Directory"
Well right. That's why most solutions to this problem are given
using the xargs command which will construct command lines of
appropriate size.
You'd write
find . -name \*.c -print|xargs grep foo
for this. This can be improved somewhat, however. If you suspect
that you have files containing newlines or otherwise strange
characters in them, try
find . -name \*.c -print0|xargs -0 grep foo --
This will use a special format for passing the file list from find
to xargs which can properly identify all valid filenames. The --
tells grep that even strange file names like "-s" are to be
interpreted as file names.
Of course, we would want to have a corresponding file name listed
even if xargs passes only a single file to grep in one of its
invocations. We can
manage this with
find . -name \*.c -print0|xargs -0 grep foo -- /dev/null
This will have at least two file names for grep (/dev/null and one
given by xargs), so grep will print the file name for found
matches.
The -- is a good thing to keep in mind when writing shell scripts.
Most of the shell scripts searching through directories you find
flying around get confused by file names like "-i" or "xxx\ yyy"
and similar perversities.
David Kastrup
______________________________________________________________
More on Grepping Files in a Tree
Date: Mon 02 June 1997 15:34:23
From: Chris Cox ccox@central.geasys.com
My favorite trick for looking for a string (or strings - egrep) in a
tree:
$ find . -type f -print | xargs file | grep -i text |
cut -f1 -d: | xargs grep pattern
This is a useful technique for other things...not just grepping.
______________________________________________________________
Untarring/Zip
Date: Sun 22 June 1997 13:23:14
From: Mark Moran mmoran@mmoran.com
I read the following 2-cent tip and was excited to think that I've
finally reached a point in my 'linux' expertise where I COULD
contribute a 2-cent tip! I typically run:
tar xzf foo.tar.gz
to unzip and untar a program. But as Paul mentions, sometimes the
directory structure isn't included in the archive and it dumps into
your current directory. Well, before I do the above I run:
tar tzf foo.tar.gz
This will dump out to your console what's going to be unarchived,
easily allowing you to see if there's a directory structure!
Mark
______________________________________________________________
An Addition to Hard Disk Duplication (LG #18)
Date: Thu 12 June 1997 15:34:32
From: Andreas Schiffler schiffler@zkm.de
Not surprisingly, Linux can of course do that for free -- even
from a floppy boot image, for example (i.e. the Slackware bootdisk
console).
For identical harddrives the following will do the job:
cat /dev/hda >/dev/hdb
For non-identical harddrives one has to repartition the target
first:
fdisk /dev/hda            # record the partitions (size, type)
fdisk /dev/hdb            # create the same partitions
cat /dev/hda1 >/dev/hdb1  # copy the partitions
cat /dev/hda2 >/dev/hdb2  # ...
To create image files, simply redirect the target device to a file.
cat /dev/hda >image-file
To reinstall the MBR and lilo, just boot with a floppy using
parameters that point to the root partition (as in LILO> linux
root=/dev/hda1) and rerun lilo from within Linux.
Have fun
Andreas
______________________________________________________________
Reply to ncftp (LG #18)
Date: Fri 20 June 1997 14:23:12
From: Andrew M. Dyer, adyer@mcs.com
To monitor an ftp session I like to use ncftp, which puts up a nice
status bar. It comes in many Linux distributions. When using the
standard ftp program you can also use the "hash" command, which
prints a "#" for every 1K bytes received. Some ftp clients also have
the "bell" command, which will send a bell character to your console
for every file transferred.
For grepping files in a directory tree I like to use the -exec
option to find. The syntax is cryptic, but there is no problem with
overflowing the shell argument list. A version of the command shown
in #18 would be like this:
find . -name \*.c -exec grep foo {} /dev/null \;
(note the /dev/null forces grep to print the filename of the
matched file). Another way to do this is with the mightily cool
xargs program, which also solves the overflow problem and is a bit
easier to remember:
find . -name \*.c -print | xargs grep foo /dev/null
(this last one is stolen from "UNIX Power Tools" by Jerry Peek, Tim
O'Reilly and Mike Loukides - a whole big book of 2-cent tips.)
For disk duplication we sometimes use a Linux box with a secondary
IDE controller, and use dd to copy the data over:
dd if=/dev/hdc of=/dev/hdd bs=1024k
This would copy the contents of /dev/hdc to /dev/hdd. The bs=1024k
tells dd to use a large block size to speed the transfer.
______________________________________________________________
Sockets and Pipes
Date: Thu, 12 Jun 1997 23:22:38 +1000 (EST)
From: Waye-Ian Cheiw, itchy@jaguar.snafu.com
Hello!
Here's a tip!
Ever tried to pipe things, then realised what you want to pipe to
is on another machine?
spiffy $ sort < file
sh: sort: command not found
spiffy $ # no sort installed here! gahck!
Try "socket", a simple utility that's included in the Debian
distribution. Socket is a tool which can treat a network connection
as part of a pipe.
spiffy $ cat file
c
b
a
spiffy $ cat file | socket -s 7000 & # Make pipe available at port 7000.
spiffy $ rlogin taffy
taffy $ socket spiffy 7000 | sort # Continue pipe by connecting to spiffy.
a
b
c
It's also very handy for transferring files and directories in a
snap.
spiffy $ ls -F
mail/ project/
spiffy $ tar cf - mail project | gzip | socket -qs 6666 &
spiffy $ rlogin taffy
taffy $ socket spiffy 6666 | gunzip | tar xf -
taffy $ ls -F
mail/ project/
The -q switch will close the connection on an end-of-file and
conveniently terminate the pipes on both sides after the transfer.
It can also connect a shell command's input and output to a socket.
There is also a switch, -l, which restarts that command every time
someone connects to the socket.
spiffy $ socket -s 9999 -l -p "fortune" &
spiffy $ telnet localhost 9999
"Baseball is ninety percent mental. The other half is physical."
Connection closed by foreign host.
This will make a cute service on port 9999 that spits out fortunes.
-- Ian!!
______________________________________________________________
Hex Dump
Date: Tue 24 June 1997 22:54:12
From: Arne Wichmann aw@math.uni-sb.de
Hi.
One of my friends once wrote a small vi-compatible hex-editor. It
can be found (as source) under
vieta.math.uni-sb.de:/pub/misc/hexer-0.1.4c.tar.gz
______________________________________________________________
More on Hex Dump
Date: Wed, 18 Jun 1997 10:15:26 -0700
From: James Gilb p27451@am371.geg.mot.com
I liked your gawk solution to displaying hex data. Two things
(which people have probably already pointed out to you):
1. If you don't want similar lines to be replaced by a single *, use
the -v option to hexdump. From the man page:
-v The -v option causes hexdump to display all input data. Without
the -v option, any number of groups of output lines, which would
be identical to the immediately preceding group of output lines
(except for the input offsets), are replaced with a line comprised
of a single asterisk.
2. In emacs, you can get a similar display using ESC-x hexl-mode. The
output looks something like this:
00000000: 01df 0007 30c3 8680 0000 334e 0000 00ff ....0.....3N....
00000010: 0048 1002 010b 0001 0000 1a90 0000 07e4 .H..............
00000020: 0000 2724 0000 0758 0000 0200 0000 0000 ..'$...X........
00000030: 0000 0760 0004 0002 0004 0004 0007 0005 ...`............
00000040: 0003 0003 314c 0000 0000 0000 0000 0000 ....1L..........
00000050: 0000 0000 0000 0000 0000 0000 2e70 6164 .............pad
00000060: 0000 0000 0000 0000 0000 0000 0000 0014 ................
00000070: 0000 01ec 0000 0000 0000 0000 0000 0000 ................
00000080: 0000 0008 2e74 6578 7400 0000 0000 0200 .....text.......
00000090: 0000 0200 0000 1a90 0000 0200 0000 2a98 ..............*.
(I don't suppose it is surprising that emacs does this; after all,
emacs is not just an editor, it is its own operating system.)
______________________________________________________________
Reply to Z Protocol
Date: Mon 09 June 1997 19:34:54
From: Gregor Gerstmann gerstman@tfh-berlin.de
In reply to my remarks regarding file transfer with the Z protocol
in Linux Gazette issue 17, April 1997, I received an e-mail that may
be interesting to others too:
Hello!
I noticed your article in the Linux Gazette about the sz command,
and really don't think you need to split up your downloads into
smaller chunks.
The sz command uses the ZMODEM protocol, which is built to handle
transmission errors. If sz reports a CRC error or a bad packet, it
does not mean that the file produced by the download will be
tainted. sz automatically retransmits bad packets.
If you have an old serial UART chip ( 8250 ), then you might be
getting intermittent serial errors. If the link is unreliable, then
sz may spend most of its time tied up in retransmission loops.
In this case, you should use a ZMODEM window to force the sending
end to expect an `OK' acknowledgement every few packets.
sz -w1024
will specify a window of 1024 bytes.
-- Ian!!
______________________________________________________________
Published in Linux Gazette Issue 19, July 1997
______________________________________________________________
______________________________________________________________
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1997 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
______________________________________________________________
News Bytes
Contents:
* News in General
* Software Announcements
______________________________________________________________
News in General
______________________________________________________________
SPAM Counter Attack!
If you'd like to have your voice heard regarding SPAM mail, why
don't you consider writing a letter to your representative?
If you're not sure of who your representatives are, check the
Congressional websites:
* House: http://www.house.gov/writerep/
* Senate: http://www.senate.gov/senator/index.html
The postal addresses for your members are:
* The Honorable (Senator name), Washington, DC 20510
* The Honorable (Rep. name), Washington, DC 20515
The letter doesn't have to be long... two paragraphs is as
effective as 10 pages. And you don't need to write different
letters, the same one can be sent to each Member. (Just remember to
change the mailing address!)
______________________________________________________________
Linux-Access Web Pages
The Center for Disabled Student Services at the University of Utah
in Salt Lake City, Utah, today announced its newly redesigned
linux-access web pages. linux-access is a mailing list hosted by
CDSS which is used by both developers and users of the Linux
operating system in order to aid development and integration of
access related technology into the Linux OS and available software.
Both users and developers of Linux are encouraged to join the
mailing list and help Linux become more accessible to everyone.
Among those encouraged to subscribe to the list are companies
making Linux distributions so that they can incorporate access
technology into their products as well as get valuable feedback
from users.
Location of the new pages is at:
http://ssv1.union.utah.edu/linux-access/.
Location of the blinux FTP mirror is at
ftp://ssv1.union.utah.edu/pub/mirrors/blinux/.
An archive of the mailing list can be found on the Linux v2
Information HQ site at:
http://www.linuxhq.com/lnxlists/linux-access/.
______________________________________________________________
Supreme Court Ruling
The U.S. Supreme Court extended free-speech rights to cyberspace in
its recent ruling striking down a federal law that restricted
indecent pictures and words on the Internet computer network.
The court declared the law that bans the dissemination of sexually
explicit material to anyone younger than 18 unconstitutional.
"Notwithstanding the legitimacy and importance of the congressional
goal of protecting children from harmful materials, we agree ...
that the statute abridges 'freedom of speech' protected by the
First Amendment," Justice John Paul Stevens said for the court
majority in the 40-page opinion.
The ruling represented a major victory for the American Civil
Liberties Union (ACLU) and groups representing libraries,
publishers and the computer on-line industry, which brought the
lawsuit challenging the law.
______________________________________________________________
The Power OS
Matthew Borowski has created a new website featuring Linux
information. Entitled "Linux - THE POWER OS", and featuring Linux
links, software, help, and a discussion forum, Linux - THE POWER OS
is also a member of the Linux Webring.
The software listing is top-of-the-line, featuring a list of
powerful applications that will change the way you make use of
Linux. The modem setup section will help you get your modem working
under Linux, and the StarOffice-miniHOWTO will help fix Libc
problems when installing Staroffice under Linux.
If you have a chance, visit "Linux - THE POWER OS" at:
http://www.jnpcs.com/mkb/linux or http://www.mkb.home.ml.org/linux/
For more information write to mkb@poboxes.com
______________________________________________________________
June 1997 PowerPC Project
The Linux for PowerPC project announces its June 1997 CD of the
Linux operating system for the PowerPC. The CD is the second
release following the first one in January 1997. The June release
is significantly faster and has improved memory handling. It now
contains over 400 different software packages and everything needed
to install and run Linux on any of the PowerPC machines
manufactured by Be Inc, Apple Computer, IBM, Motorola and most
other manufacturers of PowerPC computers. Go to
http://www.linuxppc.org/ to order your own CD or to find out more
about the project.
______________________________________________________________
Sunsite Link
Check out http://sunsite.unc.edu/paulc/liv
This lets you view the contents of SunSITE's /pub/Linux/Incoming
directory, but extracts all the descriptions out of the map files
(.lsm) and displays them in a table. It has links for 24 hour, 7
day, 14 day and 28 day lists.
______________________________________________________________
GLUE Announcement
Caldera has announced that it will give a free copy of OpenLinux
Lite on CD-ROM for each group member of GLUE. Caldera, Inc.
(http://www.caldera.com/) is located in Provo, Utah. For full
details on GLUE and to register your group as a member, visit the
GLUE web site at http://www.ssc.com/glue.
______________________________________________________________
Software Announcements
______________________________________________________________
Woven Goods for LINUX
World-Wide Web (WWW) applications and hypertext-based information
about LINUX. It comes ready-configured for the Slackware Distribution
and is currently tested with Version 3.2 (ELF). The Power Linux LST
Distribution contains this collection as an integral part, with some
changes.
The collection consists of five parts, so it can be used for
multiple purposes depending on the installed parts.
The five parts of Woven Goods for LINUX are:
1. World-wide Web Browser The World-wide Web Browser from Netscape
for X11 and Lynx for ASCII terminals.
2. LINUX Documents The LINUX Documents contain the HTML Pages of
Woven Goods for LINUX, FAQs, HOWTOs, LDP Documents and more in
different formats like Hypertext Markup Language (HTML), Text, PDF
and Postscript.
3. World-wide Web Server The Apache World-wide Web Server with
additional CGI Scripts for Statistics, viewing MAN Pages and
Counters, Glimpse Search Engine and the Documentation for Apache
Server. Furthermore the Apache Module PHP/FI as well as the BSCW
system and the necessary Python interpreter are included.
4. Hypertext Markup Language The HTML editor asWedit allows the
creation of HTML pages. Some graphic tools allow the creation and
modification of GIFs.
5. External Viewers The external viewers are necessary to present
information which cannot be viewed by the WWW browsers. Only the
useful viewers (xanim, acroread, ia, raplayer, str, splay,
swplayer, vrweb, etc.) are included which are not part of the
Slackware Distribution (xv, ghostview, showaudio).
Availability & Download
Woven Goods for LINUX is available via anonymous FTP from:
ftp://ftp.fokus.gmd.de/pub/Linux/woven
Installation
For Installation Instructions see the Installation Guide:
ftp://ftp.fokus.gmd.de/pub/Linux/woven/README.install or
http://www.fokus.gmd.de/linux/install.html
______________________________________________________________
Qbib Version 1.1
Qbib is a bibliography management system based on Qddb. Features
include the Qddb database, import of BibTeX .bib files, custom
export options and a friendly user interface, just to name a few.
For more information about Qbib (including an on-line manual), see
http://www.hsdi.com/qddb/commercial
To order Qbib or other Qddb products/services, visit the Qddb
store: http://www.hsdi.com/qddb/orders
______________________________________________________________
WipeOut Version 1.07
WipeOut is an integrated development environment for C++ and Java.
It contains a project manager, class browser, make tool, central
text editor with syntax highlighting and a debugger frontend.
WipeOut is available for Linux and SunOS/Solaris, both under XView.
For the new release we have especially extended the class browser
and the text editor. Check out the changes list for all new
features and fixed bugs.
You can obtain the software and documentation at:
http://www.softwarebuero.de/ndex-eng.html
______________________________________________________________
Published in Linux Gazette Issue 19, July 1997
______________________________________________________________
______________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1997 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
______________________________________________________________
The Answer Guy
By James T. Dennis, jimd@starshine.org
Starshine Technical Services, http://www.starshine.org/
______________________________________________________________
Contents:
* Mounting Disks Under Red Hat 4.0
* Weird LILO Problem
* Running FileRunner
* Adding Linux To a DEC XLT-366
* Disk Support
* Legibility
* MetroX Problems
* Installing Linux
* Adding Programs to the Pull Down Menus
* Linux Skip
* ActiveX for Linux
* Bash String Manipulations
* Blinking Underline Cursor
* File Permissions
______________________________________________________________
Mounting Disks Under Red Hat 4.0
From: Bigby, Bruce W. bbigby@frontiernet.net
Hi. The RedHat 4.0 control-panel has an interesting problem. I have
two entries in my /etc/fstab file for my SCSI Zip Drive--one for
mounting a Win95 Zip removable disk and another for mounting a
removable Linux ext2fs disk--
/dev/sda4 /mnt/zip ext2fs rw,noauto 0 0
/dev/sda4 /mnt/zip95 vfat rw,noauto 0 0
I do this so that I can easily mount a removable zip disk by
supplying only the appropriate mount point to the mount
command--for example, by supplying
mount /mnt/zip
when I want to mount a Linux ext2fs disk, and
mount /mnt/zip95
when I want to mount a Windows 95 Zip disk.
Yes, I do this all the time (except that I use the command line for
all of this -- and vi to edit my fstab). I also add the "user" and
a bunch of "nosuid,nodev,..." parameters to my options field. This
allows me or my wife (the only two users with console access to the
machine) to mount a new magneto optical, floppy, or CD without
having to 'su'.
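For example, the two Zip lines with those options added would look
something like this (a sketch -- note that mount(8) spells the type
"ext2", and the exact option list is a matter of taste):
/dev/sda4 /mnt/zip ext2 rw,noauto,user,nosuid,nodev 0 0
/dev/sda4 /mnt/zip95 vfat rw,noauto,user,nosuid,nodev 0 0
The "user" option is what lets a non-root console user type
'mount /mnt/zip' directly.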
Unfortunately, the control-panel's mount utility treats the two
lines as duplicates and removes the additional lines that begin
with /dev/sda4. Consequently, the control panel's mount utility
only sees the first line,
/dev/sda4 /mnt/zip ext2fs rw,noauto 0 0
In addition, the utility also modifies my original /etc/fstab. I do
not desire this behavior. I prefer that the utility be fairly dumb
and not modify my original /etc/fstab. Has RedHat fixed this problem
in 4.2?
Bummer! Since I don't use the GUI controls I never noticed that.
I don't know. There are certainly enough other fixes and upgrades
to make it worth installing (although -- with a .1 version coming out
every other month -- maybe you want to just download selective
fixes and wait for the big 5.0).
(My current guess -- totally unsubstantiated by even an inside
rumor -- is that they'll shoot for integrating glibc -- the GNU C
library -- into their next release. That would be a big enough job
to warrant a jump in release numbers).
Can I obtain the sources and modify the control-panel's mount
utility so that it does not remove, "so-called," duplicates?
Last I heard, the control-panel was all written in Python (I think
they converted all the TCL to Python by 4.0). In any event I'm pretty
sure that it's TCL, Python and Tk (with maybe some bash for some
parts). So you already have the sources.
The really important question here is why you aren't asking the
support team at RedHat (or at least posting to their "bugs@"
address). This 'control-panel' is certainly specific to Red Hat's
package.
According to the bash man page, bash is supposed to source the
.profile, or .bash_profile, in my home directory. However, when I
log in, bash does not source my .profile. How can I ensure that bash
sources the .profile of my login account--$HOME/.profile?
The man page and the particular configuration (compilation) options
in your binary might not match.
You might have an (empty?) ~/.bash_profile or ~/.bash_login (the
man page looks for these in that order -- with .profile being the
last -- and bash sources only the first one that it finds).
You might have something weird in your /etc/profile or /etc/bashrc
that's preventing your ~/.bash_* or ~/.profile from being sourced.
Finally you might want to double check that you really are running
bash as your login shell. There could be all sorts of weird bugs in
your configuration that effectively start bash and fail to signal
to it that this is a "login" shell.
Normally login exec()'s bash with an "ARG[0]" of "-bash" (preceding
the name with a dash). I won't get into the gory details -- but if
you were logging in with something that failed to do this, bash
wouldn't "know" that it was a login shell -- and would behave as
though it were a "secondary" shell (as though you had invoked it
from your editor).
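A quick check from the prompt:
echo $0
If it prints "-bash" (with the leading dash) you have a login shell;
a plain "bash" means a secondary shell, which will skip the profile
files.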
If all else fails go over to prep.ai.mit.edu and grab the latest
version of the GNU bash sources. Compile them yourself.
-- Jim
______________________________________________________________
Weird LILO Problem
From: David Runnels david_runnels@smb.com
Hi Jim. I read your column in the Linux Gazette and I have a
question. (If I should have submitted it some other way I
apologize.)
I recommend using the tag@starshine.org address for now. At some
point I hope to have SSC set up a tag@gazette.ssc.com address -- or
maybe get linux.org to give me an account and set up some custom
mail scripts.
I've been using Linux casually for the last couple of years and
several months ago I installed RedHat 4.0 on the second IDE drive
of a Win95 system. Though I've used System Commander in the past I
don't like using it with Win95 so I had the RedHat install process
create a boot floppy. This has always worked fine, and I made a
second backup floppy (using dd) which I also made sure booted fine.
This probably isn't really a "boot" floppy. It sounds like a "lilo"
floppy to me. The difference is that a boot floppy has a kernel on
it -- a "lilo" floppy just has the loader on it.
The confusing thing about Linux is that it can be booted in so many
ways. In a "normal" configuration you have Lilo as the master boot
program (on the first hard drive -- in the first sector of track 0
-- with the partition table). Another common configuration places
lilo in the "superblock" (logical boot record) of the Linux "root"
partition (allowing the DOS boot block, or the OS/2 or NT boot
manager -- or some third party package like System Commander) to
process the partition table and select the "active" partition --
which *might* be the Linux root partition.
Less common ways of loading Linux: use LOADLIN.EXE (or
SYSLINUX.EXE) -- which are DOS programs that can load a Linux
kernel (kicking DOS out from under them so to speak), put Lilo on a
floppy (which is otherwise blank) -- or on a non-Linux boot block
(which sounds like your situation).
Two others: You can put Lilo on a floppy *with* a Linux kernel --
or you can even write a Linux kernel to a floppy with no lilo. That
last option is rarely used.
The point of confusion is this: LILO loads the Linux kernel using
BIOS calls. It offers one the opportunity to pass parameters to the
kernel (compiled into its boot image via the "append" directive in
/etc/lilo.conf -- or entered manually at boot time at the lilo
prompt).
Another source of confusion is the concept that LILO is a block of
code and data that's written to a point that's outside the
filesystems on a drive -- /sbin/lilo is a program that writes this
block of boot code according to a set of directives in the
/etc/lilo.conf. It's best to think of the program /sbin/lilo as a
"compiler" that "compiles" a set of boot images according to the
lilo.conf and writes them to some place outside of your filesystem.
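To make the "compiler" analogy concrete, here's a minimal
/etc/lilo.conf sketch (device names and paths are illustrative):
boot=/dev/fd0          # where to write the boot block (here, a floppy)
prompt                 # stop at the boot: prompt
image=/boot/vmlinuz    # kernel image to map in
label=linux
root=/dev/hda2         # root filesystem to hand the kernel
Running /sbin/lilo then "compiles" these directives into a boot
block and map on the floppy.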
Yet another source of confusion is that the Linux kernel has a
number of default parameters compiled into it. These can be changed
using the 'rdev' command (which was originally used to set the
"root device" flags in a kernel image file). 'rdev' basically
patches values into a file. It can be used to set the "root
device," the "initial video mode" and a number of other things.
Some of these settings can be over-ridden via the LILO prompt and
append lines. LOADLIN.EXE can also pass parameters to the kernel
that it loads.
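For instance (a sketch; the kernel path is illustrative):
rdev /boot/vmlinuz /dev/hda2    # patch the default root device into the image
rdev /boot/vmlinuz              # with no second argument, report the setting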
There's a big difference between using a kernel image written
directly on a floppy -- and a LILO that's built to load an image
that's located on a floppy filesystem (probably minix or ext2fs).
With LILO the kernel must be located on some device that is
accessible with straight BIOS calls.
This usually prevents one from using LILO to boot off of a third
IDE or SCSI disk drive (since most systems require a software
driver to allow DOS or other OS' to "see" these devices). I say
"usually" because there are some BIOS' and especially some BIOS
extensions on some SCSI and EIDE controllers that may allow LILO to
access devices other than the first two floppies and the first two
hard drives. However, those are rare. Most PC hardware can only
"see" two floppy drives and two hard drives -- which must be on the
same controller -- until an OS loads some sort of drivers.
In the case where a kernel is directly located on the raw floppy --
and in the case where the kernel is located on the floppy with LILO
-- the kernel has the driver code for your root device (and
controllers) built in. (There are also complex new options using
'initrd' -- an "initial RAM disk" -- which allows a modular kernel
to load the drivers for its root devices.)
Yet another thing that's confusing to the DOS user -- and most
transplants from other forms of Unix -- is that the kernel doesn't
have to be located on the root device. In fact LOADLIN.EXE requires
that the kernel be located on a DOS filesystem.
To make matters more complicated you can have multiple kernels on
any filesystem, any of them might use any filesystem as their root
device, and these relationships (between kernel and root
device/filesystem) can be set in several ways -- i.e. by 'rdev' or
at compile time, vs. via the LOADLIN or LILO command lines.
I recommend that serious Linux users reserve a small (20 or 30 Mb)
partition with just a minimal installation of the root/base Linux
software on it. This should be on a separate device from your main
Linux filesystems.
Using this you have an alternative (hard drive based) boot method
which is much faster and more convenient than digging out the
installation boot/root floppies (or having to go to a working
machine and build a new set!). I recommend the same thing for most
Solaris and FreeBSD installations. If you have a DOS filesystem on
the box -- at least stash a copy of LOADLIN.EXE and a few copies of
your favorite kernels in C:\LINUX\ (or wherever).
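Booting from that stash is then a one-liner at the DOS prompt -- a
minimal sketch, assuming a kernel copy named VMLINUZ in C:\LINUX and
a Linux root filesystem on /dev/sda5 (both hypothetical):
C:\> CD \LINUX
C:\LINUX> LOADLIN VMLINUZ root=/dev/sda5 ro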
Now that more PC SCSI cards support booting off of CD-ROM's (a
feature that's been long overdue!) you can get by without heeding
my advice -- IF YOU HAVE SUCH A CONTROLLER AND A CD TO MATCH.
(Incidentally -- I found out quite by accident that the Red Hat 4.1
CD is "bootable" on Adaptec 2940 controllers -- if you have the
Adaptec configured to allow it. I've also heard that the NCR
SymBIOS PCI controller supports this -- though I haven't tested
that yet).
In any event we should all make "rescue disks" -- unfortunately
these are trickier than they should be. Look for the Bootdisk HOWTO
for real details about this.
About a week ago I put the Linux floppy in the diskette drive,
reset the machine and waited for the LILO prompt. Everything went
fine, but all I got were the letters LI and everything stopped. I
have tried several times, using the original and the backup
diskette, with the same results.
Did you add a new drive to the system? (When LILO stops after
printing just "LI" it generally means the first stage loader found
the second stage but couldn't run it -- classically the result of a
disk geometry change, or of moving /boot/boot.b without re-running
/sbin/lilo.)
I have done nothing (that I can think of!) to my machine and I'm at
a loss as to what might be causing this. Just to ensure that the
floppy drive wasn't acting funny, I've booted DOS from it and that
went fine.
When you booted DOS were you able to see the drive? I'd get out
your installation floppy (or floppies -- I don't remember whether
Red Hat 4.0 had a single floppy system or not -- 4.1 and 4.2 only
require one for most hardware). Boot from that and choose "rescue"
or switch out of the installation script to a shell prompt. You
should then be able to attempt mounting your root filesystem.
If that fails you can try to 'fsck' it. After that it's probably a
matter of reinstallation and restoring from backups.
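A minimal sketch of that rescue-shell session, assuming your root
filesystem lives on /dev/hda2 (substitute your own device):
mount -t ext2 /dev/hda2 /mnt
# ... if the mount fails, check the filesystem while it's unmounted:
e2fsck -f /dev/hda2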
Any ideas you have would be appreciated. Thanks for your time.
Dave Runnels
Glad I could help.
_________________________________________________________________
Running FileRunner
David E. Stern kotsya@u.washington.edu I wanted to let you know that
you were right about relying too heavily on rpm. In the distant past,
I used text-based file archiving utilities, so I tried them again
and tarballs are actually quite nice. I also found that rpm --nodeps
will help. Tarballs are also nice because not all apps are distributed
with rpm. (bonus! :-) I'm also told that multiple versions of tcl/tk
can peaceably coexist, although rpm won't allow it by default. Another
ploy with rpm which I didn't see documented was that to avoid circular
dependencies, you can update multiple rpms at the same time; i.e.: rpm -Uvh
app1.rpm app2.rpm app3.rpm . Another thing I learned about was that
there are some non-standard (contributed) libraries that are required
for certain apps, like afio and xpm. Thanks for the great ideas and
encouragement.
The end goal: to install FileRunner, I simply MUST have it! My
intermediate goal is to install Tcl/Tk 7.6/4.2, because FileRunner
needs these to install, and I only have 7.5/4.1. However, when I try
to upgrade tcl/tk, other apps rely on older tcl/tk libraries, at
least that's what the messages allude to:
libtcl7.5.so is needed by some-app
libtk4.1.so is needed by some-app
(where some-app is python, expect, blt, ical, tclx, tix, tk,
tkstep,...)
I have enough experience to know that apps may break if I upgrade the
libraries they depend on. I've tried updating some of those other
apps, but I run into further and circular dependencies--like a cat
chasing its tail.
In your opinion, what is the preferred method of handling this
scenario? I must have FileRunner, but not at the expense of other
apps.
It sounds like you're relying too heavily on RPM's. If you can't
afford to risk breaking your current stuff, and you "must" have the
upgrade you'll have to do some stuff beyond what the RPM system seems
to do.
One method would be to grab the sources (SRPM or tarball) and manually
compile the new TCL and tk into /usr/local (possibly with some changes
to their library default paths, etc). Now you'll probably need to grab
the FileRunner sources and compile that to force it to use the
/usr/local/wish or /usr/local/tclsh (which, in turn, will use the
/usr/local/lib/tk if you've compiled it all right).
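Here's a minimal sketch of that first method, assuming the stock
tcl7.6 and tk4.2 source trees unpacked side by side (tk's configure
script looks for the Tcl build in the sibling directory):
cd tcl7.6/unix
./configure --prefix=/usr/local && make && make install
cd ../../tk4.2/unix
./configure --prefix=/usr/local && make && make install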
Another approach is to set up a separate environment (separate disk, a
large subtree of an existing disk -- into which you chroot, or a
separate system entirely) and test the upgrade path where it won't
inconvenience you by failing. A similar approach is to do a backup,
test your upgrade plan -- (if the upgrade fails, restore the backup).
Thanks, -david
You're welcome. This is a big problem in all computing environments
(and far worse in DOS, Windows, and NT systems than in most multi-user
operating systems). At least with Unix you have the option of
installing a "playpen" (accessing it with the chroot call -- or by
completely rebooting on another partition if you like).
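A minimal sketch of such a "playpen," assuming a spare base
installation under /mnt/play (a hypothetical path):
chroot /mnt/play /bin/bash
# ... test the upgrade in here; nothing outside /mnt/play is touched.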
Complex interdependencies are unavoidable unless you require that every
application be statically linked and completely self-sufficient
(without even allowing their configuration files to be separate). So
this will remain an aspect of system administration where experience
and creativity are called for (and a good backup may be the only thing
between you and major inconvenience).
-- Jim
_________________________________________________________________
Adding Linux to a DEC XLT-366
From: Alex Pikus alex@webexpress.net
I have a DEC XLT-366 with NTS4.0 and I would like to add Linux to it.
I have been running Linux on an i386 for a while.
I have created 3 floppies:
* Linload.exe and MILO (from DEC site)
* Linux kernel 2.0.25
* RAM disk
I have upgraded AlphaBIOS to v5.24 (latest from DEC) and added a Linux
boot option that points to a:\
You have me at a severe disadvantage. I've never run Linux on an
Alpha. So I'll have to try answering this blind.
When I load MILO I get the "MILO>" prompt without any problem. When I
do
show
or
boot ...
at the MILO I get the following result ...
SCSI controller gets identified as NCR810 on IRQ 28 ... test1 runs and
gets stuck "due to a lost interrupt" and the system hangs ...
In WinNTS4.0 the NCR810 appears on IRQ 29.
My first instinct is to ask if the autoprobe code in Linux (Alpha) is
broken. Can you use a set of command-line (MILO) parameters to
pass information about your SCSI controller to your kernel? You could
also see about getting someone else with an Alpha based system to
compile a kernel for you -- and make sure that it has values in its
scsi.h file that are appropriate to your system -- as well as ensuring
that the correct drivers are built in.
How can I make further progress here?
It's a tough question. Another thing I'd look at is to see if the
Alpha system allows booting from a CD-ROM. Then I'd check out Red
Hat's (or Craftworks') Linux for Alpha CD's -- asking each of them if
they support this sort of boot.
(I happened to discover that the Red Hat Linux 4.1 (Intel) CD-ROM was
bootable when I was working with one system that had an Adaptec 2940
controller where that was set as an option. This feature is also quite
common on other Unix platforms such as SPARC and PA-RISC systems -- so
it is a rather late addition to the PC world).
Thanks!
Alex.
_________________________________________________________________
Disk Support
From: Andrew Ng lulu@asiaonline.net
Dear Sir, I have a question to ask: Does Linux support disks with
density 2048bytes/sector?
Apparently not. This is a common size for CD-ROM's -- but it is not
at all normal for any other media.
I have bought a Fujitsu MO drive which support up to 640MB MO disks
with density 2048bytes/sector. The Slackware Linux system does not
support access to disks with this density. Windows 95 and NT support
this density and work very well. Is there any version of Linux which
support 2048bytes/sector? If not, is there any project working on
that?
I believe the drive ships with drivers for DOS, Windows, Windows '95
and NT. The OS' don't "support it" -- the manufacturer supports these
OS'.
Linux, on the other hand, does support most hardware (without
drivers being supplied by the hardware manufacturers). Granted we get
some co-operation from many manufacturers. Some even contribute code
to the main kernel development.
We prefer the model where the hardware manufacturer releases free code
to drive their hardware -- whether that code is written for Linux,
FreeBSD or any other OS. Release it once and all OS' can port and
benefit by it.
I hear a lot of praise about Linux. Is Linux superior to Windows NT in
all aspect?
That's a controversial question. Any statement like: Is "foo" superior
to "bar" in all aspects? ... is bound to cause endless (and probably
acrimonious) debate.
Currently NT has a couple of advantages: Microsoft is a large company
with lots of money to spend on marketing and packaging. They are very
aggressive in making "partnerships" and building "strategic
relationships" with the management of large companies.
Microsoft has slowly risen to dominance in the core applications
markets (word processors, spreadsheets, and databases). Many industry
"insiders" (myself included) view this as being the result of
"trust"-worthy business practices (a.k.a. "verging on monopolistic").
In other words many people believe that MS Word isn't the dominant word
processor because it is technically the superior product -- but
because MS was able to supply the OS features they needed when they
wanted (and perhaps able to slip the schedules of certain releases
during the critical development phases of their competitors).
The fact that the OS, and the principal programming tools, and the
major applications are all from the same source has generated an
amazing amount of market antagonism towards Microsoft. (Personally I
think it's a bit extreme -- but I can understand how many people feel
"trapped" and understand the frustration of thinking that there's "no
choice").
Linux doesn't have a single dominant applications suite. There are
several packages out there -- Applixware, StarOffice, Caldera's
Internet Office Suite. Hopefully Corel's Java Office will also be
useful to Linux, FreeBSD and other users (including Windows and NT).
In addition to these "suites" there are also several individual
applications like Wingz (a spreadsheet system), Mathematica, (the
premier symbolic mathematics package), LyX (the free word processor --
LaTeX front-end -- that's under development), Empress, /rdb (database
systems), Flagship and dbMan IV (xBase database development packages),
Postgres '95, mSQL, InfoFlex, Just Logic's SQL, MySQL (database
servers) and many more. (Browse through the Linux Journal
_Buyer's_Guide_ for a large list -- also waltz around the web a bit).
Microsoft's SQL Server for NT is getting to be pretty good. Also,
there are a lot of people who program for it -- more than you'll find
for InfoFlex, Postgres '95 etc. A major problem with SQL is that the
servers are all different enough to call for significant differences
in the front end applications -- which translates to lots of
programmer time (and money!) if you switch from one to another. MS has
been very successful getting companies to adopt NT Servers for their
"small" SQL projects (which has been hurting the big three -- Oracle,
Sybase and Informix). Unfortunately for Linux -- database programmers
and administrators are very conservative -- they are a "hard sell."
So Linux -- despite the excellent stability and performance -- is not
likely to make a significant impact as a database server for a couple
of years at least. Oracle, Sybase and Informix have "strategic
relationships" with SCO, Sun, and other Unix companies.
The established Unix companies viewed Linux as a threat until
recently. They now seem to see it as a mixed blessing. On the up side
Linux has just about doubled the number of systems running Unix-like
OS', attracted somewhere between two and eight million new converts
away from the "Wintel" paradigm, and even wedged a little bit of
"choice" into the minds of the industry media. On the down side SCO
can no longer charge thousands of dollars for the low end of their
systems. This doesn't really affect Sun, DEC, and HP so much -- since
they are primarily hardware vendors who only got into the OS business
to keep their iron moving out the door. SCO and BSDI have the tough
fight since the bulk of their business is OS sales.
(Note: BSDI is *not* to be confused with the FreeBSD, NetBSD, OpenBSD,
or 386BSD (Jolix) packages. They are a company that produces a
commercial Unix, BSDI/OS. The whole Free|Net|Open-BSD set of
programming projects evolved out of the work of Mr. and Mrs. Jolitz --
which was called 386BSD -- and I call "Jolix" -- a name which I also
spotted in the _Using_C-Kermit_ book from Digital Press).
So there don't seem to be any Oracle, SyBase, or Informix servers
available for Linux. The small guys like JustLogic and InfoFlex have
an opportunity here -- but it's a small crack in a heavy door and some
of them are likely to get their toes broken in the process.
Meanwhile NT will keep getting market share -- because their entry
level is still a tiny fraction of the price of any of the "big guys."
I've just barely scratched the tip of the iceberg (to thoroughly blend
those metaphors). There are so many other aspects of comparison it's
hard to even list them -- let alone talk about how Linux and NT
measure up to them.
It's also important to realize that it's not just NT vs. Linux. There
are many forms of Unix -- most of them are quite similar to Linux from
a user and even from an administrators point of view. There are many
operating systems that are vastly different from either NT (which is
supposed to be fundamentally based on VMS) or the various Unix
variants.
There are things like Sprite (a Berkeley research project), Amoeba and
Chorus (distributed network operating systems), EROS, and many others.
Here's a link where you can find out more about operating systems in
general: Yahoo! Computers and Internet: Operating Systems: Research
-- Jim
_________________________________________________________________
Legibility
From: Robert E Glacken glacken@bc.seflin.org
I use a 256 shade monochrome monitor. The QUESTIONS are invisible.
What questions? What OS? What GUI? (I presume that the normal text is
visible in text mode so you must be using a GUI of some sort)?
I wouldn't expect much from a monochrome monitor set to show 256 (or
even 127) shades of grey. Almost no one in the PC/Linux world uses
those -- so there's almost no one that tunes their color tables and
applications to support them.
Suggestions -- get a color screen -- or drop the GUI and use text
mode.
-- Jim
_________________________________________________________________
MetroX Problems
From: Allen Atamer atamer@ecf.toronto.edu
I am having trouble setting up my X server. Whether I use MetroX or
XFree86 to set it up, it's still not working.
When I originally chose MetroX to install, I got to the setup screen,
chose my card and resolution, saved and exited. Then I started up
X windows, and my screen loaded the X server, but the graphics were all
messed up. I exited, then changed some settings, and now I can't even
load the X server. The Xerrors file says it had problems loading the
'core'.
Hmm. You don't mention what sort of video card you're using or what
was "messed up." As I've said many times in my column -- I'm not much
of an "Xpert" (or much of a "TeXpert" for that matter).
MetroX and XFree86 each have their own support pages on the web -- and
there are several X specific newsgroups where you'd find people who
are much better with X than I.
Before you go there to post I'd suggest that you write up the exact
video card and monitor you have in excruciating detail -- and make
sure you go through the X HOWTO's and the Red Hat manual. Also be sure
to check the errata page at Red Hat
(http://www.redhat.com/errata.html) -- this will let you know about
any problems that were discovered after the release of 4.1.
One other thing you might try is getting the new version (4.2 --
Biltmore) -- and check its errata sheet. You can buy a new set of
CD's (http://www.cheapbytes.com is one inexpensive source) or you can
use up a bunch of bandwidth by downloading it all. The middle road is
to download just the parts you need.
I notice (looking at the errata sheets as I type this) that XFree86 is
up to version 3.3.1 (at least). This upgrade is apparently primarily
to fix some buffer overflow (security) problems in the X libraries.
By the way, how do I mount what's on the second cd and read it?
(vanderbilt 4.1)
First umount the first CD with a command like: umount /cdrom Remove
it. Then 'mount' the other one with a command like: mount -t iso9660
-o ro /dev/scd0 /cdrom ... where /cdrom is some (arbitrary but existent)
mount point and /dev/scd0 is the device node that points to your CD
drive (that would be the first SCSI CD-ROM on your system -- IDE and
various other CD's have different device names).
To find out the device name for your CD use the mount command BEFORE
you unmount the other CD. It will show each mounted device and the
current mount point.
Personally I use /mnt/cd as my mount point for most CD's. I recommend
adding an entry to your /etc/fstab file (the "filesystems table" for
Unix/Linux) that looks something like this:
# /etc/fstab
/dev/scd0 /mnt/cd iso9660 noauto,ro,user,nodev,nosuid 0 0
This will allow you to use the mount and umount commands as a normal
user (without the need to su to 'root').
I also recommend changing the permissions of the mount command to
something like:
-rwsr-x--- 1 root console 26116 Jun 3 1996 /bin/mount
(chgrp console `which mount` && chmod 4550 `which mount`)
... so that only members of the group "console" can use the mount
command. Then add your normal user account to that group.
The idea of all this is to strike a balance between the convenience
and reduced "fumblefingers" exposure of running the privileged command
as a normal user -- and the potential for (as yet undiscovered) buffer
overflows to compromise the system by "guest" users.
(I recommend similar procedures for ALL SUID binaries -- but this is
an advanced issue that goes *WAY* beyond the scope of this question).
Allen, You really need to get a copy of the "Getting Started" guide
from the Linux Documentation Project. This can be downloaded and
printed (there's probably a copy on your CD's) or you can buy the
professionally bound editions from any of several publishers -- my
favorite being O'Reilly & Associates (http://www.ora.com).
Remember that the Linux Gazette "Answer Guy" is no substitute for
reading the manuals and participating in Linux newsgroups and mailing
lists.
-- Jim
_________________________________________________________________
Installing Linux
From: Aryeh Goretsky aryeh@tribal.com
[ Aryeh, I'm copying my Linux Gazette editor on this since I've put in
enough explanation to be worth publishing it ]
..... why ... don't they just call it a disk boot sector . .... Okay,
I've just got to figure out what the problem is, then. Are there any
utilities like NDD for Linux I can run that will point out any errors
I made when entering the superblock info?
Nothing with a simple, colorful interface. 'fsck' is at least as good
with ext2 filesystems as NDD is with FAT (MS-DOS) partitions. However
'fsck' (or, more specifically, e2fsck) has a major advantage since the
ext2fs was designed to be robust. The FAT filesystem was designed to
be simple enough that the driver code and the rest of the OS could fit
on a 48K (yes, forty-eight kilobytes) PC (not XT, not AT, and not even
close to a 386). So, I'm not knocking NDD when I say that fsck works
"at least" as well.
However, fsck doesn't touch your MBR -- it will check your superblock
and recommend a command to restore the superblock from one of the
backups if yours is damaged. Normally newfs (like MS-DOS' FORMAT)
or mke2fs (basically the same thing) will scatter extra copies of the
superblock every 8192 blocks across the filesystem (or so). So there
are usually plenty of backups.
So, usually, you'd just run fdisk to check your partitions and
/sbin/lilo to write a new MBR (or other boot sector). /sbin/lilo will
also update its own "map" file -- and may (optionally) make a backup
of your original boot sector or MBR.
(Note: There was an amusing incident on one of the mailing lists or
newsgroups -- in which a user complained that Red Hat had "infected
his system with a virus." It turns out that lilo had moved the
existing (PC/MBR) virus from his MBR to a backup file -- where it was
finally discovered. So, lilo had actually *cured* his system of the
virus).
Actually when you run /sbin/lilo you're "compiling" the information in
the /etc/lilo.conf file and writing that to the "boot" location --
which you specify in the .conf file.
You can actually call your lilo.conf anything you like -- and you can
put it anywhere you like -- you'd just have to call /sbin/lilo with a
-C switch and a path/file name. /etc/lilo.conf is just the built-in
default which the -C option over-rides.
Here's a copy of my lilo.conf (which I don't actually use -- since I
use LOADLIN.EXE on this system). As with many (most?) Unix
configuration files the comments start with hash (#) signs.
boot=/dev/hda
# write the resulting boot block to my first IDE hard drive's MBR.
# if this was /dev/hdb4 (for example) /sbin/lilo would write the
# resulting block to the logical boot record on the fourth partition
# of my second IDE hard drive. /dev/sdc would mean to write it to
# the MBR of the third SCSI disk.
# /sbin/lilo will print a warning if the boot location is likely to
# be inaccessible to most BIOS' (i.e. would require a software driver
# for DOS to access it).
## NOTE: Throughout this discussion I use /sbin/lilo to refer to the
## Linux executable binary program and LILO to refer to the resulting
## boot code that's "compiled" and written by /sbin/lilo to whatever
## boot sector your lilo.conf calls for. I hope this will minimize the
## confusion -- though I've liberally re-iterated this with parenthetical
## comments as well.
# The common case is to put boot=/dev/fd0H1440 to specify that the
# resulting boot code should be written to a floppy in the 1.44Mb
# "A:" drive when /sbin/lilo is run. Naturally this would require
# that you use this diskette to boot any of the images and "other"
# stanzas listed in the rest of this file. Note that the floppy
# could be completely blank -- no kernel or files are copied to it
# -- just the boot sector!
map=/boot/map
# This is where /sbin/lilo will store a copy of the map file --
# which contains the cylinder/sector/side address of the images
# and message files (see below)
# It's important to re-run /sbin/lilo to regenerate the map
# file any time you've done anything that might move any of
# these image or message files (like defragging the disk,
# restoring any of these images from a backup -- that sort
# of thing!).
install=/boot/boot.b
# This file contains code for LILO (the boot loader) -- this is
# an optional directive -- and necessary in this case since it
# simply specifies the default location.
prompt
# This instructs the LILO boot code to prompt the user for
# input. Without this directive LILO would just wait
# up to "delay" time (default 0 tenths of a second -- none)
# and boot using the default stanza.
# if you leave this and the "timeout" directives out --
# but you put in a delay=X directive -- then LILO won't
# prompt the user -- but will wait for X tenths of a second
# (600 is 10 seconds). During that delay the user can hit a
# shift key, or any of the NumLock, Scroll Lock type keys to
# request a LILO prompt.
timeout=50
# This sets the amount of time LILO (the boot code) will
# wait at the prompt before proceeding to the default
# 0 means 'wait forever'
message=/etc/lilo.message
# this directive tells /sbin/lilo (the conf. "compiler") to
# include the contents of this message in the prompt which LILO
# (the boot code) displays at boot time. It is a handy place to
# put some site specific help/reminder messages about what
# you call your kernels and where you put your alternative bootable
# partitions and what you're going to do to people who reboot your
# Linux server without a very good reason.
other=/dev/hda1
label=dos
table=/dev/hda
# This is a "stanza"
# the keyword "other" means that this is referring to a non-Linux
# OS -- the location tells LILO (boot code) where to find the
# "other" OS' boot code (in the first partition of the first IDE --
# that's a DOS limitation rather than a Linux constraint).
# The label directive is an arbitrary but unique name for this stanza
# to allow one to select this as a boot option from the LILO
# (boot code) prompt.
# Because it is the first stanza it is the default OS --
# LILO will boot this partition if it reaches timeout or is
# told not to prompt. You could also over-ride that using a
# default=$labelname$ directive up in the "global" section of the
# file.
image=/vmlinuz
label=linux
root=/dev/sda5
read-only
# This is my "normal" boot partition and kernel.
# the "root" directive is a parameter that is passed to the
# kernel as it loads -- to tell the kernel where its root filesystem
# is located. The "read-only" is a message to the kernel to initially
# mount the root filesystem read-only -- so the rc (AUTOEXEC.BAT)
# scripts can fsck (do filesystem checks -- like CHKDSK) on it.
# Those rc scripts will then normally remount the fs in "read/write"
# mode.
image=/vmlinuz.old
label=old
root=/dev/sda5
append= single
read-only
# This example is the same except that it loads a different kernel
# (presumably an older one -- duh!). The append= directive allows
# me to pass arbitrary directives on to the kernel -- I could use this
# to tell the kernel where to find my Ethernet card in I/O, IRQ, and
# DMA space -- here I'm using it to tell the kernel that I want to come
# up in "single-user" (fix a problem, don't start all those networking
# gizmos) mode.
image=/mnt/tmp/vmlinuz
label=alt
root=/dev/sdb1
read-only
# This last example is the most confusing. My image is on some other
# filesystem (at the time that I run /sbin/lilo to "compile" this
# stanza). The root fs is on the first partition of the 2nd SCSI drive.
# It is likely that /dev/sdb1 would be the filesystem mounted under
# /mnt/tmp when I would run /sbin/lilo. However it's not "required"
# My kernel image file could be on any filesystem that was mounted
# /sbin/lilo will warn me if the image is likely to be inaccessible
# by the BIOS -- it can't say for sure since there are a lot of
# BIOS' out there -- some of the newer SCSI BIOS' will boot off of a
# CD-ROM!
I hope that helps. The lilo.conf man page (in section 5) gives *lots*
more options -- like the one I just saw while writing this that allows
you to have a password for each of your images -- or for the whole
set. Also there are a number of kernel options described in the
BootPrompt-HOWTO. One of the intriguing ones is panic= -- which allows
you to tell the Linux kernel how long to sit there displaying a kernel
panic. The default is "forever" -- but you can use the append= line in
your lilo.conf to pass a panic= parameter to your kernel -- telling it
how many seconds to wait before attempting to reboot.
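Here's a hypothetical variant of the "linux" stanza above that does
just that -- telling the kernel to reboot 60 seconds after a panic
instead of hanging forever:
image=/vmlinuz
label=panicky
root=/dev/sda5
append="panic=60"
read-only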
In the years that I've used Linux I've only seen a couple (like two or
three) kernel panics (that could be identified as such). Perhaps a
dozen times I've had a Linux system freeze or go comatose enough that
I hard reset it. (Most of those involve very bad hardware IRQ
conflicts). Once I've even tricked my kernel into scribbling garbage
all over one of my filesystems (don't play with linear and membase in
your XConfig file -- and, in particular don't specify a video memory
base address that's inside of your system's RAM address space).
So I'm not sure if setting a panic= switch would help much. I'd be
much more inclined to get a hardware watchdog timer card and enable
the existing support for that in the kernel. Linux is the only PC OS
that I know of that comes with this support "built-in."
For those that aren't familiar with them a watchdog timer card is a
card (typically taking an ISA slot) that implements a simple
count-down and reset (strobing the reset line on the system bus)
feature. This is activated by a driver (which could be a DOS device
driver, a Netware Loadable Module, or a little chunk of code in the
Linux kernel). Once started the card must be updated periodically (the
period is set as part of the activation/update). So -- if the software
hangs -- the card *will* strobe the reset line.
(Note: this isn't completely fool-proof. Some hardware states might
require a complete power cycle and some sorts of critical server
failures will render the systems services unavailable without killing
the timer driver software. However it is a good sight better than just
hanging).
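On the software side the kernel's interface is just a device node
that must be written to periodically -- a minimal sketch, assuming
the watchdog (or "softdog") driver is compiled in or loaded:
while : ; do
    echo alive > /dev/watchdog    # any write "pats the dog"
    sleep 30                      # must recur before the timeout expires
done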
These cards cost about $100 (U.S.) -- which is a pity since there's
only about $5 worth of hardware there. I think most Sun workstations
have this feature designed into the motherboard -- which is what PC
manufacturers should scramble to do.
_________________________________________________________________
AG
At 11:43 AM 6/10/97 -0700, you wrote: Subject: Once again, I try to
install Linux... ...and fail miserably. This is getting depressing.
Someone wanna explain this whole superblock concept to me? Use small
words....
Aryeh, Remember master boot records (MBR's)? Remember "logical" boot
records -- or volume boot records?
A superblock is the Unix term for a logical boot record. Linux uses
normal partitions that are compatible with the DOS, OS/2, NT (et al)
hard disk partitioning scheme.
To boot Linux you can use LILO (the Linux loader) which can be written
to your MBR (most common), to your "superblock" or to the "superblock"
of a floppy. This little chunk of code contains a reference (or "map")
to the device and logical sector of one or more Linux kernels or DOS
(or OS/2) bootable partitions.
There is a program called "lilo" which "compiles" a lilo.conf
(configuration file) into this LILO "boot block" and puts it onto the
MBR, superblock or floppy boot block for you. This is the source of
most of the confusion about LILO. I can create a boot floppy with
nothing but this boot block on it -- no kernel, no filesystems,
nothing. LILO doesn't care where I put any of my linux kernels -- so
long as it can get to it using BIOS calls (which usually limits you to
putting the kernel on one of the first two drives connected to the
first drive controller on your system).
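A minimal sketch of such a "boot block only" floppy, assuming a
variant config file (the name /etc/lilo.floppy.conf is hypothetical)
that is identical to your usual one except that its boot= line reads
boot=/dev/fd0:
/sbin/lilo -C /etc/lilo.floppy.conf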
Another approach is to use LOADLIN.EXE -- this is a DOS program that
loads a Linux (or FreeBSD) kernel. The advantage of this is that you
can have as many kernel files as you like, and they can be located on
any DOS accessible device (even if you had to load various weird
device drivers to be able to see that device).
LOADLIN.EXE is used by some CD-ROM based installation packages --
avoiding the necessity of using a boot floppy.
The disadvantages of LOADLIN include the fact that you may have loaded
some device drivers and memory managers that have re-mapped (hooked
into) critical BIOS interrupt vectors. LOADLIN often needs a "boot
time hardware vector table" (which it usually writes as
C:\REALBIOS.INT -- a small hidden/system file). Creating this file
involves booting from a "stub" floppy (which saves the table) and
rebooting/restarting the LOADLIN configuration to tell it to copy the
table from the floppy to your HD. This must be done whenever you
change video cards, add any controller with a BIOS extension (a ROM)
or otherwise play with the innards of your machine.
Call me and we can go over your configuration to narrow down the
discussion. If you like you can point your web browser at
www.ssc.com/lg and look for articles by "The Answer Guy" there. I've
described this a greater length in some of my articles there.
-- Jim
_________________________________________________________________
Adding Programs to the Pull Down Menus
From: Ronald B. Simon rbsimon@anet.bna.boeing.com
Thank you for responding to my request. By the way I am using RedHat
release 4 and I think TheNextLevel window manager. I did find a
.fvwm2rc.programs tucked away in...
Ronald, TheNextLevel is an fvwm derivative.
/etc/X11/TheNextLevel/. I added a define ProgramCM(Title,,,program
name) and under the start/applications menu I saw Title. When I put
the cursor over it and pressed the mouse button, everything froze. I
came to the conclusion that I am in way over my head and that I
probably need to open a window within the program that I am trying to
execute. Any way I will search for some 'C' code that shows me how to
do that. Thanks again!
I forgot to mention that any non-X program should be run through an
xterm. This is normally done with a line in your rc file like:
Exec "Your Shell App" exec xterm -e /path/to/your/app &
... (I'm using fvwm syntax here -- I'll trust you to translate to TNL
format). Try that -- it should fix you right up.
Also -- when you think your X session is locked up -- try the
Ctrl-Alt-Fx key (where Fx is the function key that corresponds to one
of your virtual consoles). This should switch you out of GUI mode and
into your normal console environment. You might also try Alt-SysReq
(Print-Screen on most keyboards) followed by a digit from the
alphanumeric portion of your keyboard (i.e. NOT from the numeric
keypad). This is an alternative binding for VC switching that might be
enabled on a few systems. If all of that fails you can try
Ctrl-Alt-Backspace. This should (normally) signal the X server to
shutdown.
Mostly I doubt that your server actually hung. I suspect that you
confused it a bit by running a non-X program not "backgrounded" (you
DO need those trailing ampersands) and failing to supply it with a
communications channel back to X (an xterm).
Please remember that my knowledge of X is very weak. I hardly ever use
it and almost never administer/customize it. So you'll want to look at
the L.U.S.T. mailing list, or the comp.windows.x or (maybe) the
comp.os.linux.x (although there is nothing to these questions which is
Linux specific). I looked extensively for information about
TheNextLevel on the web (in Yahoo! and Alta Vista). Unfortunately the
one page that almost all of the references pointed to was down.
The FVWM home page is at:
http://www3.hmc.edu/~tkelly/docs/proj/fvwm.html
-- Jim
_________________________________________________________________
Linux Skip
From: Jesse Montrose jesse@spine.com
Time warp: This message was lost in my drafts folder while I was
looking up some of the information. As it turns out the wait was to
our advantage. Read on.
Date: Sun, 16 Mar 1997 13:54:34 -0800
Greetings, this question is intended for the Answer Guy associated
with the Linux Gazette..
I've recently discovered and enjoyed your column in the Linux Gazette.
I'm hoping you might have news about a Linux port of Sun's SKIP IP
encryption protocol.
Here's the blurb from skip.incog.com: SKIP secures the network at the
IP packet level. Any networked application gains the benefits of
encryption, without requiring modification. SKIP is unique in that an
Internet host can send an encrypted packet to another host without
requiring a prior message exchange to set up a secure channel. SKIP is
particularly well-suited to IP networks, as both are stateless
protocols. Some of the advantages of SKIP include:
* No connection setup overhead
* High availability - encryption gateways that fail can reboot and
resume decrypting packets instantly, without having to renegotiate
(potentially thousands) of existing connections
* Allows uni-directional IP (for example, IP broadcast via satellite
or cable)
* Scalable multicast key distribution
* SKIP gateways can be configured in parallel to perform
instant-failover
I heard a bit about SKIP while I was at a recent IETF conference.
However I must admit that it got lost in the crowd of other security
protocols and issues.
So far I've paid a bit more attention to the Free S/WAN project that's
being promoted by John Gilmore of the EFF. I finally got ahold of a
friend of mine (Hugh Daniel -- one of the architects of Sun's NeWS
project -- and well-known cypherpunk and computer security
professional).
He explained that SKIP is "Simple Key-management for Internet
Protocols" -- that it is a key management protocol (incorporated in
ISAKMP/Oakley).
For secure communications you need:
* Key management (which -- between strangers requires some sort of
RSA (Public Key) or Diffie-Hellman key exchange or even some
variant of elliptic curve -- from what I've heard).
* Encrypted Link (which will be built into IPv6 and will be
available as IPSec extensions to IPv4 -- using tunnelled
interfaces from what I gather).
* Secure-DNS (this is related to the key management problem -- we
need a trustworthy source of public key information -- Verisign
and Thawte offer commercial "Certification Authority" services --
but the 'net needs something a bit more open and standards based).
My employer is primarily an NT shop (with sun servers), but since I
develop in Java, I'm able to do my work in linux. I am one of about a
dozen telecommuters in our organization, and we use on-demand ISDN to
dial in directly to the office modem bank, in many cases a long
distance call.
I'm finally working on configuring my dial-on-demand ISDN line here at
my place. I've had diald (dial-on-demand over a 28.8 modem) running
for about a month now. I just want to cut down on that dial time.
We're considering switching to public Internet connections, using skip
to maintain security. Skip binaries are available for a few platforms
(windows, freebsd, sunos), but not linux. Fortunately the source is
available (http://skip.incog.com/source.html) but it's freebsd, and I
don't know nearly enough deep linux to get it compiled (I tried making
source modifications).
If I understand it correctly SKIP is only a small part of the
solution.
Hopefully FreeS/WAN will be available soon. You can do quite a bit
with ssh (and I've heard of people who are experimenting with routing
through some custom made tunnelled interface). FreeBSD and Linux both
support IP tunneling now.
For information on using ssh and IP tunnels to build a custom VPN
(virtual private network) look in this month's issue of Sys Admin
Magazine (July '97). (Shameless plug: I have an article about C-Kermit
appearing in the same issue).
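As a taste of the ssh approach -- a minimal sketch, assuming sshd
runs on the office gateway (the host names here are hypothetical);
this forwards local port 2110 through an encrypted channel to an
internal mail server's POP3 port:
ssh -L 2110:mailhost:110 yourlogin@gateway.example.com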
Another method might be to get NetCrypto. Currently the package isn't
available for Linux -- however McAfee is working on a port. Look at
http://www.mcafee.com
After much time with several search engines, the best I could come up
with was another fellow also looking for a linux version of skip :)
Thanks! jesse montrose
Jesse, Sorry I took so long to answer this question. However, as I
say, this stuff has changed considerably -- even in the two months
between the time I started this draft message and now.
-- Jim
_________________________________________________________________
ActiveX for Linux
From: Gerald Hewes hewes@OpenMarket.com
Jim, I read your response on ActiveX in the Linux Gazette. At
http://www.ssc.com/lg/issue18/lg_answer18.html#active
Software AG is porting the non-GUI portions of ActiveX, called DCOM, to
Linux. Their US site where it should be hosted appears down as I write
this e-mail message but there is a link on their home page to a Linux
DCOM beta: http://www.softwareag.com
I believe the link ought to be
http://www.sagus.com/prod-i~1/net-comp/dcom/index.htm
As for DCOM, its main value for the Linux community is in making
Microsoft Distributed Object Technology available to the Linux
community. Microsoft is trying to push DCOM over CORBA.
I know that MS is "trying to push DCOM over CORBA" (and OpenDOC, and
now, JavaBeans). I'm also aware that DCOM stands for "distributed
component object model" and CORBA is the "common object request
broker" and SOM is IBM's "system object model" (OS/2).
The media "newshounds" have dragged these little bones around and
gnawed on them until we've all seen them. Nonetheless I don't see its
"main value to the Linux community."
These "components" or "reusable objects" will not make any difference
so long as significant portions of their functionality are tied to
specific OS (GUI) semantics. However, this coupling between specific
OS' has been a key feature of each of these technologies.
It's Apple's OpenDoc, IBM's DSOM, and Microsoft's DCOM!
While I'm sure that each as their merits from the programmer's point
of view (and I'm in no position to comment on their relative technical
pros or cons) -- I have yet to see any *benefit* from a user or
administrative point of view.
So I suppose the question here becomes:
Is there any ActiveX (DCOM) control (component) that delivers any real
benefit to any Linux user? Do any of the ActiveX controls not have a
GUI component to them? What does it mean to make the "non-GUI
portions" of DCOM available? Is there any new network protocol that
this gives us? If so, what is that protocol good for?
For more information, checkout http://www.microsoft.com/oledev
While I encourage people to browse around -- I think I'll wait until
someone can point out one DCOM component, one JavaBean, one CORBA
object, or one whatever-buzzword-you-want-to-call-it-today and can
explain in simple "Duh! I'm a user!" terms what the *benefit* is.
Some time ago -- in another venue -- I provided the net with an
extensive commentary on the difference between "benefits" and
"features." The short form is this:
A benefit is relevant to your customer. To offer a benefit requires
that you understand your customer. "Features" bear no relation to a
customer's needs. However mass marketing necessitates the promotion of
features -- since the *mass* marketer can't address individual and
niche needs.
Example: Microsoft operating systems offer an "easy to use graphical
interface" -- first "easy to use" is highly subjective. In this case
it means that there are options listed on menus and buttons and the
user can guess at which ones apply to their need and experiment until
something works. That is a feature -- one I personally loathe. To me
"easy to use" means having documentation that includes examples that
are close to what I'm trying to do -- so I can "fill in the blanks."
Next there is the ubiquitously touted "GUI." That's another *feature*.
To me it's of no benefit -- I spend 8 to 16 hours a day looking at my
screen. Text mode screens are far easier on the eyes than any monitor
in graphical mode.
To some people, such as the blind, GUI's are a giant step backward in
accessibility. The GUI literally threatens to cut these people off
from vital employment resources.
I'm not saying that the majority of the world should abandon GUI's
just because of a small minority of people who can't use them and a
smaller, crotchety contingent of people like me that just don't like
them. I'm merely trying to point out the difference between a
"feature" and a "benefit."
The "writing wizards" offered by MS Word are another feature that I
eschew. My writing isn't perfect and I make my share of typos, as well
as spelling and grammatical errors. However, most of what I write goes
straight from my fingers to the recipient -- no proofreading and no
editing. When I've experimented with spell checkers and "fog indexes"
I've consistently found that my discourse is beyond their capabilities
-- much too specialized and involving far too much technical
terminology. So I have to over-ride more than 90% of the
"recommendations" of these tools.
Although my examples have highlighted Microsoft products we can turn
this around and talk about Linux' famed "32-bit power" and "robust
stability." These, too are *features*. Stability is a benefit to
someone who manages a server -- particularly a co-located server at a
remote location. However the average desktop applications user could
care less about stability. So long as their application manages to
autosave the last three versions of his/her documents, the occasional
reboot is just a good excuse to go get a cup of coffee.
Multi-user is a feature. Most users don't consider this to be a
benefit -- and the idea of sharing "their" system with others is
thoroughly repugnant to most modern computer users. On top of that the
network services features which implement multi-user access to Linux
(and other Unix systems) and NT are gaping security problems so far as
most IS users are concerned. So having a multi-user system is not a
benefit to most of us. This is particularly true of the shell access
that most people identify as *the* multi-user feature of Unix (as
opposed to the file sharing and multiple user profiles, accounts and
passwords that passes for "multi-user" under Windows for Workgroups
and NT).
So, getting back to ActiveX/DCOM -- I've heard of all sorts of
features. I'd like to hear about some benefits. Keep in mind that any
feature may be a benefit to someone -- so benefits generally have to
be expressed in terms of *who* is the beneficiary.
Allegedly programmers are the beneficiary of all these competing
component and object schema. "Use our model and you'll be able to
impress your boss with glitzy results in a fraction of the time it
would take to do any programming" (that seems to be the siren song to
seduce people to any of these).
So, who else benefits?
-- Jim
_________________________________________________________________
Bash String Manipulations
From: Niles Mills nmills@dnsppp.net
Oddly enough -- while it is easy to redirect the standard error of
processes under bash -- there doesn't seem to be an easy portable way
to explicitly generate message or redirect output to stderr. The best
method I've come up with is to use the /proc/ filesystem (process
table) like so:
function error { echo "$*" > /proc/self/fd/2 ; }
Hmmmm...how about good old
>&2
?
$ cat example
#!/bin/bash
echo normal
echo error >&2
$ ./example
normal
error
$ ./example > file
error
$ cat ./file
normal
$ bash -version
GNU bash, version 1.14.4(1)
Best Regards, Niles Mills
I guess that works. I don't know why I couldn't come up with that on
my own. But my comment worked -- a couple of people piped right up
with the answer.
Amigo, that little item dates back to day zero of Unix and works on
all known flavors. Best of luck in your ventures.
Niles Mills
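For the record, the same redirection makes the original function
portable -- a minimal sketch:
error () { echo "$*" >&2 ; }
error "something went wrong"    # lands on stderr, not stdout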
_________________________________________________________________
Blinking Underline Cursor
From: Joseph Hartmann joeh@arakis.sugar-river.net
I know an IBM compatible PC is "capable" of having a blinking
underline cursor, or a blinking block cursor.
My linux system "came" with a blinking underline, which is very
difficult to see. But I have not been able (for the past several
hours) to make *any* headway about finding out how to change the
cursor to a blinking block.
You got me there. I used to know about five lines of x86 assembly
language to call the BIOS routine that sets the size of your cursor.
Of course that wouldn't work under Linux since the BIOS is mapped out
of existence during the trip into protected mode.
I had a friend who worked with me back at Peter Norton Computing -- he
wrote a toy program that provided an animated cursor -- and had
several neat animated sequences to show with it -- a "steaming coffee
cup," a "running man," and a "spinning galaxy" are the ones I
remember.
If you wanted to do some kernel hacking it looks like you'd change the
value of the "currcons" structure in one of the
/usr/src/linux/drivers/char/ files -- maybe it would be "vga.c"
On the assumption that you are not interested in that approach (I
don't blame you) I've copied the author of SVGATextMode (a utility for
providing text console mode access to the advanced features of most
VGA video cards).
Hopefully Koen doesn't mind the imposition. Perhaps he can help.
I've also copied Eugene Crosser and Andries Brouwer the authors of the
'setfont' and 'mapscrn' programs (which don't seem to do cursors --
but do some cool console VGA stuff). 'setfont' lets you pick your text
mode console font.
Finally I've copied Thomas Koenig who maintains the Kernel "WishList"
in the hopes that he'll add this as a possible entry to that.
Any hints? Best Regards,
Joe, As you can see I don't feel stumped very often -- and now that I
think about it -- I think this would be a neat feature for the Linux
console. This is especially true since the people who are most likely
to stay away from X Windows are laptop users -- and those are
precisely the people who are most likely to need this feature.
-- Jim
_________________________________________________________________
File Permissions
From: John Gotschall johng@frugal.com
Hi! I was wondering if anyone there knew how I might actually change
the file permissions on one of my Linux box's DOS partitions.
I have Netscape running on one box on our local network, but it can't
write to another linux box's MSDOS filesystem, when that filesystem is
NFS mounted. It can write to various Linux directories that have
proper permissions, but the MSDOS directory won't keep a permissions
setting, it keeps it stuck as owned by, read by and execute by root.
What you're bumping into is two different issues. The first is the
default permissions under which a DOS FAT filesystem is mounted (which
is "root.root 755" -- that is: owned by user root, group root, rwx for
owner, r-x for group and other).
You can change that with options to the mount (8) command.
Specifically you want to use something like:
mount -t msdos -o uid=??,gid=??,umask=002
... where you pick suitable values for the UID and GID from your
/etc/passwd and /etc/group files (respectively). (Note that umask here
works like the shell's umask -- it names the permission bits to take
*away* -- so umask=002 yields 775-style permissions.)
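To make the setting stick across reboots, here's a hypothetical
/etc/fstab entry (assuming UID 500 and GID 100 for the user and group
that should own the files, and a FAT partition on /dev/hda1):
/dev/hda1 /mnt/dos msdos uid=500,gid=100,umask=002 0 0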
The other culprit in this is the default behavior of NFS. For your own
protection NFS defaults to using a feature called "root squash" (which
is not a part of a vegetable). This prevents someone who has root
access to some other system (as allowed by your /etc/exports file)
from accessing your files with the same permissions as your own
local root account.
If you pick a better set of mount options (and put them in your
/etc/fstab in the fourth field) then you won't have to worry about
this feature. I DO NOT recommend that you over-ride that setting with
the NFS no_root_squash option in the /etc/exports file (see 'man 5
exports' for details). I personally would *never* use that option with
any export that was mounted read/write -- not even in my own home
between two systems that have no live connection to the net! (I do use
the no_root_squash option with the read-only option -- but that's a
minor risk in my case).
Is there a way to change the MS-DOS permissions somehow?
Yes. See the mount(8) options for uid=, gid=, and umask=. I think you
can also use the umsdos filesystem type and effectively change the
permissions on your FAT based filesystem mount points.
This was a source of some confusion for me and I've never really
gotten it straight to my satisfaction. Luckily I find that I hardly
ever use my DOS partitions any more.
_________________________________________________________________
Copyright © 1997, James T. Dennis
Published in Issue 19 of the Linux Gazette July 1997
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Adventures in Linux: A Redhat Newbie Boldly Treks Onto the Internet Frontier
By A. Cliff Seruntine, cliff@micronet.net
_________________________________________________________________
Ever tried using chat to dial out with your modem? If you have, then
after a few hours of mind-numbing unproductivity you may have found
yourself developing an odd, convulsive twitch and banging your head
against your monitor. Another dozen hours of typing in reworded chat
scripts and you will find yourself wishing the program was a living,
tangible entity so you could delete it once and for all out of the
known universe, and thus gain a measure of relief knowing that you
have spared others the terrible ordeal of sitting in front of their
monitors for perhaps days on end coding pleas for chat to just dial
the #!%$ telephone. Truthfully, I have found that few programs under
any of the operating systems I am familiar with give me the jitters
the way chat does.
I recall one frosty summer morning (I live in Alaska, so I can
honestly describe some summer mornings as being frosty) when I boldly
set off where no Microsoft hacker has gone before -- Linux, the final
frontier. Well, that's a bit extreme. Many Microsoft hackers have seen
the light and made the transition. Anyway, I had decided I was going
to resist Bill Gatus of Borg, and not be assimilated, so I put a new
hard drive in my computer, downloaded Redhat Linux 4.1 from Redhat's
ftp server (a two day ordeal with a 33.6 modem, I might add) and read
enough of the install documentation to get started.
Now friends already familiar with the Linux OS offered to come by and
help me set it up. But I'd have none of it. After all, I owned a
computer and electronics service center. I was the expert. And I was
firmly convinced that the best way to truly learn something is to plow
through it yourself. So I sat down in front of my PC with a cup of
tea, made the two required floppy disks for a hard drive install, and
began my voyage into Linux gurudom.
About 45 minutes later I was surprised to discover that I was done.
Linux had been installed on my system and little fishies were swimming
around my monitor in X windows. Well, I was impressed with myself.
"Hah!" I said to the walls. "They said it couldn't be done. Not
without background. Not without experience. But I've showed them. I've
showed them all! Hah! Hah! Hah!" And then, being the compulsive hacker
that I am, I began to do what comes naturally. I hacked. And being the
Net buff that I am, the first thing I decided to do was get on the
Internet through Linux. And all the stuff I'd read about in my printed
copy of the works of the Linux Documentation Project said that the way
to dial out with Linux was through chat.
Four days later I found myself on my knees in front of my computer,
wearily typing in yet another reworded script for chat, half plea,
half incantation, hoping beyond reason that this time chat would
perform the miracle I had so long sought and just dial the $#%! phone.
Yes, I was by that time a broken man. Worse, a broken hacker. My
spirit was crushed. My unique identity was in peril. I could hear Bill
Gatus in the distance, but getting closer, closer, saying, "Resistance
is futile. You will be assimilated." Resigned to my fate, I wrung my
hands, achy and sore from writing enough script variants to fill a
novel the size of War and Peace, and prepared to type halt and reboot
into Windows 95.
Then a voice said, "Luke. Luke! Use the X, Luke!" I don't know why the
voice was calling me "Luke" since my name is Cliff, but somehow I knew
to trust that voice. I moved the cursor onto the background, clicked,
and opened up the applications menu. There I found a nifty little
program called Minicom. I clicked on Minicom, it opened, initialized
the modem, and a press of [CTRL-a, d] brought up the dial out options.
I selected the edit option with the arrow keys, and at the top entered
the name and number of my server. Then I selected the dial option with
the arrow keys, and pressed [RETURN]. The X was with me, the modem
dialed out, logged into my server, and with a beep announced that I
should press any button. Minicom then asked me to enter my login name
and password. I breathed a sigh of relief, opened up Arena, typed in
an address, and . . . nothing happened. Worse, after about a minute,
the modem hung up.
"What?" I wondered aloud, squinting into my monitor, certain that
behind the phosphorescent glow I could see little Bill Gatuses
frantically chewing away the inner workings of my computer. "Join me,
Cliff," they were saying. "It is your destiny."
"I'll never join you," I cried out and whipped out my Linux
Encyclopedia. I couldn't find anything in the index on how to avoid
assimilation, but I did find out that I needed to activate the ppp
daemon and give control of the connection from Minicom to the daemon.
The command line that worked best was:
pppd /dev/cua2 115200 -detach crtscts modem defaultroute
-detach is the most important option to include here. It causes the
daemon to take over control of the modem from Minicom. pppd activates
the Point to Point Protocol daemon. /dev/cua* should be given whatever
number corresponds to the serial port your modem is attached to,
assuming you have a serial modem. 115200 is the max speed of my modem
with compression; you should set this to the max speed of your own
modem. crtscts turns on hardware (RTS/CTS) flow control between the
computer and the modem, which you want for high speed transmissions.
modem simply indicates the daemon should use the modem as its means of
networking. It is a default setting, but I like to set it anyway to
remind me what's going on. And defaultroute makes the PPP link the
default route for incoming and outgoing data.
The trick is to enter all this before the Minicom connection times
out. You could go through the trouble of typing it out every time you
log on, but a better way is to set up an alias in .bashrc. Go to the
/root directory and type emacs .bashrc (or whatever your preferred
editor is) and add a line like the one below, substituting your own
serial port and modem speed:
alias daemon='pppd /dev/cua2 115200 -detach crtscts modem defaultroute'
(Do not forget the quotes, and do not put spaces around the equal
sign, or your alias will not function.)
Finally, go into the control panel, double click on the networking
icon, and select the last tab that appears. There you will find near
the top the option to set your default gateway and your device. Set
your default gateway to whatever your Internet server specifies.
Specify your device as /dev/cua (whatever serial port your modem is
attached to). Sometimes simply /dev/modem will work if it has been
symbolically linked in your system. (By the way, if you haven't
already done it, in X you also need to double click the modem icon in
the control panel and set your modem to the correct /dev/cua(serial
port number) there too). And if you have a SLIP account (rare these
days) add the pertinent info while setting up your gateway.
Reboot your system. Now your new alias and settings will all be in
effect. Just invoke Minicom and dial out, then at an xterm prompt type
daemon. Minicom will beep at you for taking away its control of the
modem. To be on the safe side, I like to kill Minicom to make sure it
stops fighting with the daemon for control of the modem; occasionally
it will succeed and weird things will happen. Then invoke your browser
and you are on the World Wide Web.
As a final note, Arena's HTML support is kind of weak, and you may
find it locking up with newer, more powerful web code. It is a good
idea to download a more capable browser such as Netscape 3.01, which
makes a fine Linux browser, and install and use that as soon as
possible.
And that's all there is to taking your Linux webship onto the
Information frontier. Well, I'm enjoying my time on the web. I think
I'll build a new site dedicated to stopping the assimilation.
_________________________________________________________________
Copyright © 1997, Cliff Seruntine
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Atlanta Showcase Report
By Phil Hughes, phil@ssc.com, and Todd M. Shrider, todds@ontko.com
_________________________________________________________________
The Atlanta Linux Showcase is over, and everyone is beginning to
recover. Recover, that is, from being awake too long, being on a plane
too long and stuffing more Linux than will fit into one weekend.
ALS was put together by the Atlanta Linux Enthusiasts, the local Linux
user's group in Atlanta, Georgia. The show began on Friday evening,
June 6 and ran through Sunday afternoon. More than 500 people
attended. The report following this one by Todd Shrider covers much of
the show, including the talks.
I want to thank Amy Ayers and Karen Bushaw for making their photos
available to us with a special thank you to Amy for getting them
scanned and uploaded to the SSC ftp site.
I spent most of my time in the Linux Journal booth giving away
magazines and talking to show attendees. One aspect that made this
show special for me is that I didn't spend most of my time explaining
that Linux is a Unix-like operating system to the attendees. Instead,
I got to discuss Linux with experienced people with thoughtful
questions, letting them know in the process how LJ could help them.
Each attendee was truly interested in Linux and stopped at each booth
in the show. I expect attendees appreciated the low signal-to-noise
ratio in the booths; that is, conversations were solely about Linux.
The Roast
On Saturday night there was a roast--no, I didn't change from a
vegetarian into a meat eater overnight--we were roasting Linus. That
is, a group of people presented interesting stories about Linus,
intended to only slightly embarrass him. At the end of the evening, I
felt that the roast had been successful in every way.
In front of a crowd of about 115 people, Eric Raymond, David Miller,
Jon "maddog" Hall and I got to pick on this Linus character. Topics
varied from Linus almost being hit by a car in Boston because he was
so engrossed in talking about a particular aspect of kernel code, to
the evolution of the top-half/bottom-half concept in interrupt
handlers and to why Linus was apparently moving from geekdom to
becoming a "hunk" sportswear model. (See the cover of the San Jose
Metro, May 8-14, 1997.)
Maddog finished the roasting by telling a few Helsinki stories and
showing a video that included Tove's parents talking about Linus. A
good time was had by the roasters and the audience and, as Linus'
closing comment was "I love you all," we assume he had a good time too
and wasn't offended by our gentle ribbing.
The Future
The show came off very well. I consider this success an amazing feat
for an all-volunteer effort. The ALE members plan to write an article
for Linux Gazette about how they made this happen. We'll also make
this information available on the GLUE web site. I would like to see
more shows put on by user groups. The local involvement, the
enthusiasm of the attendees and the all Linux flavor of the show made
this weekend a great experience. We are already thinking about a
Seattle or Portland show and would like to help others make regional
shows a reality.
_________________________________________________________________
Take a look at the ALS Photo Album.
More on ALS
by Todd M. Shrider, todds@ontko.com
_________________________________________________________________
I first started writing this article in my hotel room late Sunday
evening (or early Monday morning) planning to get just enough sleep
that I would wake up in time to catch my plane. The plan didn't
work--I missed my 6:00 AM flight out of Atlanta. I did the second
draft while waiting for my new 9:45 AM flight. The third draft came
(yes, you guessed it) while waiting for my 1:30 PM connection from
Detroit to Dayton, also having missed the previous connection because
of my first flight's late arrival. Suffice it to say, I'm now back
home in Indiana and still enjoying the high received from the Atlanta
Linux Showcase.
Thanks to all the sponsors and to our host, the Atlanta Linux
Enthusiasts user group, the conference started with a bang and went
off without a hitch. The conference was a three day event, starting
with registration Friday and ending Sunday with a kernel hacking
session led by none other than Linus himself. In between there were
numerous sessions in both a business and a technical track,
several birds of a feather (BoF) sessions and a floor show. These
events were broken up with frequent trips to local pubs and very
little sleep.
This was my first (but not last) Linux conference, and I found that an
added benefit of ALS was meeting all the people who use Linux as a
viable business platform/tool. (These same people tend to be doing
very cool things with Linux on the side). From companies such as Red
Hat to Caldera to others such as MessageNet, Cyclades and DCG
Computers, it was obvious that many people have very creative ways to
make money with Linux. This wasn't limited, by any means, to the
vendors. Many of the conference speakers talked of ways to make money
with Linux or of their experiences with Linux in a professional
environment.
All of these efforts seemed to complement the keynote address, World
Domination 101, where Linus Torvalds called for applications,
applications, applications. Did I say he thought Linux needed a few
more useful applications? Anyway, he pointed out the more or less
obvious fact that, if Linux is going to be a success in a world of
commercial operating systems, we need every application type you find
in other commercial operating systems. In other words, if you're
thinking about doing--don't think--just do it. Another thing that
Linus pointed out, and that I was glad to hear echoed throughout the
conference, was that Linux needs to be easy to use. It needs to be so
easy that a secretary or corporate executive could sit and be as
productive as they would be with Windows 95. We need to make people
realize that Linux has gotten rid of the high learning curve usually
associated with Unix.
Something pointed out by Don Rosenberg, while speaking on the "how-to"
and "what's needed next" of commercial Linux, was that we are now in a
stage where the innovators (that's us) and the early adopters (that's
us as well as the people using Linux in the business world today) must
continue to push forward so that we can get the early majority (the
old DOS users) to take us seriously. In Maddog's closing remarks
he urged us all to find two DOS users, convert them to Linux and then
tell them to do the same. As a step in this direction, today I
introduced a local computer corporate sales firm to Linux; whether
they take my advice and run is left to be seen, but believe me, I'm
pushing.
The rest of the conference was filled with business and technical
talks. The business talks included things such as Eric Raymond's "The
Cathedral and the Bazaar", talks on OpenLinux by both Jeff Farnsworth
and Steve Webb and "Linux Connectivity for Humans" by none other than
Phil Hughes. Lloyd Brodsky was on hand to talk about Intranet Support
of Collaborative Planning while Lester Hightower brought us the story
of PCC and their efforts to bring Linux to the business world. Mark
Bolzern spoke of the significance of Linux and Bob Young talked of the
"process" not the "product" of Linux.
The technical discussion track started with Richard Henderson's
discussion of the shared libraries and their function across several
architectures. Michael Maher gave a HOWTO of Red Hat's RPM package
management system and Jim Paradis discussed EM86 and what remains to
be done, so that one can run Intel/Linux binaries under Alpha Linux.
David Miller then followed giving a boost of enthusiasm with his
discussion of the tasks involved in porting Linux to SPARC and Miguel
de Icaza took us on a trip to the world of RAID and Linux. We convened
the next day to hear David Mandelstam discuss what is involved with
wide-area networks and Mike Warfield's anatomy of a cracker's
intrusion.
All in all, the conference was a huge success. What I might suggest as
an improvement for next year is more involvement from the vendors (or
maybe just more vendors), a possible sale from the vendors of their
special Linux wares to the conference attendees and a possible
tutorial session like the ones seen at Uselinux (Anaheim, California,
January 1997). Other than that, a few virtual beers (I owe you Maddog)
and lots of great geek conversation made for one wild weekend.
_________________________________________________________________
Copyright © 1997, Phil Hughes and Todd M. Shrider
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
SSC is expanding Matt Welsh's Linux Installation & Getting Started by
adding chapters about each of the major distributions. Each chapter is
being written by a different author in the Linux community. Here's a
sneak preview--the Caldera chapter by Evan Leibovitch.--editor
_________________________________________________________________
Caldera OpenLinux
By Evan Leibovitch, evan@teely.on.ca
_________________________________________________________________
This section deals with issues specific to the Caldera releases of
Linux, how to install the current release (Caldera OpenLinux) and
prepare for the steps outlined in the following chapters. It is
intended to be a complement to, not a replacement for, the "Getting
Started Guides" Caldera ships with all of its Linux-based products.
References to the Getting Started Guide for Caldera Open Linux Base
will be indicated throughout this chapter simply as "the Guide".
What is Caldera?
The beginnings of Caldera the company come from an internal Novell
project called "Corsair". While Novell had owned Unix System V in the
early 1990s, Corsair was formed to see if there were things Novell
could learn from Linux.
Corsair was a casualty of the changing of the guard at Novell that
also caused it to sell off Unix to SCO and WordPerfect to Corel.
Novell founder Ray Noorda gave startup capital to this group with the
intention of making Linux available in a manner that would be as
acceptable to business users and corporate MIS as commercial versions
of Unix. Caldera is a privately-held company based in Orem, Utah.
The implementation of this goal has resulted in a series of
Linux-based products that have "broken the mold" in a number of ways.
Caldera was the first Linux distribution to bundle-in commercial
software such as premium X servers, GUI desktops, backup software and
web browsers; at the time of writing, Caldera is the only Linux
distribution officially supported by Netscape.
The Caldera Network Desktop
Caldera's first product, the Caldera Network Desktop (CND), was
released to the public in early 1995 in a $29 "preview" form (a rather
unusual manner to run a beta test), and in final release version in
early 1996. The CND was based on the 1.2.13 Linux kernel, and included
Netscape Navigator, Accelerated-X, CrispLite, and the Looking Glass
GUI desktop. It also was the first Linux release to offer NetWare
client capabilities, being able to share servers and printers on
existing Novell networks. Production and sale of CND ceased in March
1997.
Caldera OpenLinux
In late 1996, Caldera announced its releases based on the Linux 2.0.25
kernel would be named Caldera Open Linux (COL) and would be made
available at three levels:
* COL Base, which includes Navigator, CrispLite, and the Metro-X
server;
* COL Standard, which would add the Netscape FastTrack secure web
server, the StarOffice desktop productivity suite, and NetWare
connectivity;
* COL Deluxe, which incorporates all the features of Standard and
also offers NetWare server capabilities.
As this is written, only the COL Base release is shipping, and feature
sets of the other packages are still being determined. For specific
and up-to-date lists of the comparative features of the three levels,
check the Caldera web site http://www.caldera.com.
Because all three levels of COL build on the Base release, all three
are installed the same way. The only difference is in the different
auxiliary packages available; their installation and configuration
issues are beyond the scope of this guide. Most of COL's add-on
packages contain their own documentation; check the /doc directory of
the Caldera CD-ROM for more details.
Obtaining Caldera
Unlike most other Linux distributions, COL is not available for
downloading from the Internet, nor can it be distributed freely or
passed around. This is because of the commercial packages which are
part of COL; while most of the components of COL are under the GNU
General Public License, the commercial components, such as Looking
Glass and
Metro-X, are not. In the list of packages included on the COL media
starting on page 196 of the Guide, the commercial packages are noted
by an asterisk.
COL is available directly from Caldera, or through a network of
Partners around the world who have committed to supporting Caldera
products. These Partners can usually provide professional assistance,
configuration and training for Caldera users. For a current list of
Partners, check the Caldera web site.
Preparing to Install Caldera Open Linux
Caldera supports the same hardware as any other release based on Linux
2.0 kernels. Appendix A of the Guide (p145) lists most of the
supported SCSI host adapters and the configuration parameters necessary for many
hardware combinations.
Taking a page out of the Novell manual style, Caldera's Guide provides
an installation worksheet (page 2) that assists you in having at hand
all the details of your system that you'll need for installation. It
is highly recommended you complete this before starting installation;
while some parameters, such as setting up your network, are not
required for installation, doing it all at one time is usually far
easier than having to come back to it. Sometimes this can't be
avoided, but do as much at installation time as possible.
Creating boot/modules floppies
The COL distribution does not come with the floppy disks required for
installation. There are two floppies involved; one is used for
booting, the other is a "modules" disk which contains many hardware
drivers.
While the Guide recommends you create the floppies by copying them
from the CD-ROM, it is better to get newer versions of the disks from
the Caldera web site. The floppy images on some CD-ROMs have errors
that cause problems, especially with installations using SCSI disks
and large partitions.
To get newer versions of the floppy images, download them from
Caldera's FTP site. In directory pub/col-1.0/updates/Helsinki, you'll
find a bunch of numbered directories. Check out the directories in
descending order -- that will make sure you get the latest versions.
If you find one of these directories has a subdirectory called
bootdisk, the contents of that directory are what you want.
You should find two files:
install-2.0.25-XXX.img
modules-2.0.25-XXX.img
The XXX is replaced by the version number of the disk images. At the
time of writing, the current images are 034 and located in the 001
directory. Once you have these images, transfer them onto two floppies
using the methods described on page 4 of the Guide, using RAWRITE from
the Caldera CD-ROM if copying from a DOS/Windows system or dd from a
Linux system.
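Under Linux, for example, writing the images to two blank floppies in
the first drive might look like this (using the 034 file names current
at the time of writing):
dd if=install-2.0.25-034.img of=/dev/fd0    # the boot disk
dd if=modules-2.0.25-034.img of=/dev/fd0    # the modules disk, on a second floppy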
While Caldera's CD-ROM is bootable (if your system's BIOS allows it),
if possible use the downloaded floppies anyway, since they are newer
and will contain bug-fixes that won't be in the CD versions.
Preparing the hard disks
This procedure is no different from that of other Linux distributions.
You must use fdisk on your booted hard disk to allocate at least two
Linux partitions, one for the swap area and one for the root file
system. If you are planning to make your system dual-boot COL with
another operating system such as MS Windows or DOS or even OS/2, it's
usually preferable to install COL last; its "fdisk" recognizes
"foreign" OS types better than the disk partitioning tools of most
other operating systems.
To run the Linux fdisk, you'll need to start your system using the
boot (and maybe the modules) floppy mentioned above. That's because
you need to tell COL what kind of disk and disk controller you have;
you can't even get as far as entering fdisk if Linux doesn't recognize
your hard disk!
To do this, follow the bootup instructions in the Guide, from step 2
on page 33 to the end of page 36. Don't bother going through the
installation or detection of CDROMs or network cards at this time; all
that matters at this point is Linux sees the booting hard disk so you
can partition it using fdisk. A brief description of the use of the
Linux fdisk is provided on page 28 of the Guide.
Remember that when running fdisk, you need to set up both your root
file system (type 83) and your swap space (type 82) as new partitions.
A brief discussion of how much swap space to allocate is offered on
page 10 of the Guide.
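As a rough sketch, the fdisk dialogue (assuming the first IDE disk,
/dev/hda, with free space for both partitions) goes something like:
fdisk /dev/hda
   n    (make a new partition -- once for the root, once for the swap)
   t    (set the partition type: 83 for the root, 82 for the swap)
   w    (write the table to disk and quit)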
As soon as you have completed this and written the partition table
information to make it permanent, you will need to reboot.
_________________________________________________________________
Copyright © 1997, Evan Leibovitch
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
CLUELESS at the Prompt: A new column for new users
by Mike List, troll@net-link.net
_________________________________________________________________
Welcome to installment 6 of Clueless at the Prompt: a new column for new
users.
_________________________________________________________________
This time let's take a quick look at the XF86Setup utility, at X
window managers (concentrating on FVWM), and at adding popup menus,
adding and subtracting apps from existing popups and other relatively
easy ways to get a custom appearance and feel.
_________________________________________________________________
Using XF86Setup to configure X
Judging from the posts I've seen on Usenet, a lot of people aren't
aware that there's an easier way to get X up and running than
configuring it the old confusing way (at least I found it to be that
way): a tcl/tk script called XF86Setup. While it doesn't totally
eliminate the need to manually edit your XConfig, it does provide a
method of getting a usable configuration for most common video cards
and monitors. XF86Setup first appeared in the XFree86 3.2
distribution. It uses the lowest common denominator VGA 16 color mode
server and a tcl/tk (corrections welcome) script to start the config
process in X, and thanks to the graphical nature of this utility
script you can be almost certain to have X running in a couple of
tries; if worst comes to worst you can have it running in 16 color
mode until you can get the details to optimize it for your video
hardware. Current downloads of XFree86 all seem to have this included,
and if your CD-ROM distribution has X 3.2 or better you already have
it available to install to your HD. If you download it from xf86.org,
be sure to read the Relnotes for the component files necessary to
ensure a successful install. You'll need:
* preinst.sh Pre-installation script
* postinst.sh Post-installation script
* X3?bin.tgz Clients, run-time libs, and app-defaults files
* X3?doc.tgz Documentation
* X3?fnts.tgz 75dpi, misc and PEX fonts
* X3?lib.tgz Data files required at run-time
* X3?man.tgz Manual pages
* X3?set.tgz XF86Setup utility
* X3?VG16.tgz 16 colour VGA server (XF86Setup needs this server)
where ? is the level of the distribution you're using, i.e. 3.2, 3.3,
etc. For all installations, read the Relnotes for any other files your
specific hardware might need. Since the 3.3 version just came out, if
you are just getting around to setting up X you will most likely want
to get that distribution, since every successive version has support
for more hardware and often better support for hardware already
supported.
OK, you have the files you need -- the ones listed above, plus the
server for your particular video card (in my case the SVGA server;
you may need to do a little detective work to determine which server
to use). If you are using the X version that comes on your CD-ROM, you
can probably install all the servers (assuming there's space on your
HD) and let the XF86Setup prog make the choice. To install, first copy
the preinst.sh and postinst.sh scripts to /var/tmp, then go to
/usr/X11R6 and type:
cd /usr/X11R6
sh /var/tmp/preinst.sh
The script will remove some symbolic links and check to see that all
the files you need are available; it may output a message asking for
any files that are needed but not present. But assuming that you have
followed the above, everything should be in place, and you should get
a generally encouraging message on exit from the script.
Now for the installation itself, type:
tar -zxvf /wherever/you/have/X3?files.tgz
You'll have to repeat this step with each of the required files. Note
that a wildcard such as
tar -zxvf /wherever/youhavethem/*.tgz
will not do what you want: tar treats the extra file names as members
to extract from the first archive. So unpack each tgz file separately,
or use a short shell loop like the one sketched below.
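A minimal loop, assuming the archives sit by themselves in one
directory:
for f in /wherever/youhavethem/*.tgz
do
    tar -zxvf "$f"    # unpack each archive in turn
done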
Next you run the postinst.sh script in the same manner as the
preinst.sh above; this will make sure that you have all the X
components in the correct places. Then run ldconfig, something like:
ldconfig -m /usr/X11R6/lib
or reboot to run ldconfig automatically. This links the libraries
necessary to run X. At this point you should be able to start the
actual setup by typing, naturally:
XF86Setup
which will present a dialog box asking if you want to start in
graphical mode, or tell you it will start momentarily. At this point
you'll be in X, using the 16 color VGA server. Read all the
instructions and follow the routine, which I found to be pretty
self-explanatory. You will probably have the most trouble finding the
right mouse device and protocol, but try each one in turn if you
aren't sure. You'll probably also want to change the keyboard to the
102-key US International keyboard. Specify the video card and monitor
info; don't worry if you don't know the salient monitor details, you
can start at the top of the list and work your way down until you
reach a good setting. It's much easier if you have your monitor manual
available, so have it on hand if you can. Finish the routine when you
think it's right and that should do it. Congratulations on your
(hopefully valid) X configuration. If you muff it, just try again
using slightly different settings until you do get it right.
_________________________________________________________________
Window Managers
Most Linux distributions that I'm familiar with use the FVWM window
manager as the default, and the rest of them should have it present --
unless you downloaded the files directly from xf86.org, in which case
the default is TWM.
FVWM is highly configurable by editing the
/var/X11R6/lib/fvwm/system.fvwmrc file. You can use the file as it is,
since it has the most common installed features already configured,
but you can comment out those programs that you don't have installed
by adding a "#" at the beginning of the lines you wish to drop, change
colors, add popup menus, and more, just by following the examples.
Just be sure to save the system.fvwmrc first by typing:
cp /var/X11R6/lib/fvwm/system.fvwmrc \
   /var/X11R6/lib/fvwm/system.fvwmrc.old
or something similar, so if you do mess up on your customization you
can always start from scratch by cp'ing .old back to the original
system.fvwmrc. A couple of months ago the Weekend Mechanic column had
some very cool ideas on wallpapering the root window, so you might
want to check them out.
I made "Internet" and "PPP" popup menus to include lynx, Netscape and
a couple of telnet sites, as well as an IRC client, and to use the
chat script from X. you may have other ideas more to your liking,
don't be afraid to try, you can always start over again if you don't
like the results.
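As a minimal sketch, an FVWM 1.x popup in system.fvwmrc might look
like this (the program names are only examples; substitute whatever
you actually run):
Popup "Internet"
        Title   "Internet"
        Exec    "Netscape"      exec netscape &
        Exec    "Lynx"          exec xterm -e lynx &
        Exec    "IRC"           exec xterm -e irc &
EndPopup
You then hook it into an existing menu with a line like
Popup "Internet" Internet
inside that menu's own Popup...EndPopup block.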
Take a look at my system.fvwmrc, nothing too sophisticated, but if you
compare it to the original you should get the idea. I commented the
changes that I made so you can see some of the ways in which you can
customize yours.
_________________________________________________________________
Copyright © 1997, Mike List
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Welcome to the Graphics Muse
© 1997 by mjh
_________________________________________________________________
muse:
1. v; to become absorbed in thought
2. n; [ fr. Any of the nine sister goddesses of learning and the arts
in Greek Mythology ]: a source of inspiration
Welcome to the Graphics Muse! Why a "muse"? Well, except for the
sisters aspect, the above definitions are pretty much the way I'd
describe my own interest in computer graphics: it keeps me deep in
thought and it is a daily source of inspiration.
[Graphics Mews] [Musings] [Resources]
This column is dedicated to the use, creation, distribution, and
discussion of computer graphics tools for Linux systems.
This month has been even more hectic than most. I finished the
first pass of an article on the 1.0 release of the GIMP and submitted
it to the Linux Journal editors. That will be out in the November
Graphics issue. I'll probably have to do some updates after I get back
the marked up version. I'm also working on the cover art for that
issue, using the developers release (currently at 0.99.10) of the
GIMP. I've also had quite a bit of regular work (the kind that pays
the rent) since I'm getting very close to my code freeze date. This
weekend I'll be writing up documentation for it so I can give an
introductory class to testers, other developers, Tech Pubs, Tech
Support, and Marketing on Monday. I think I picked a bad time to start
lifting weights again.
In this month's column I'll be covering ...
* More experiences with printing using the Epson Stylus Color 500
* A brief discussion about DPI, LPI, and Halftoning
* An even briefer discussion about 3:2 pulldown - transferring film
to video.
Next month may not be much better. I don't know exactly what I'll be
writing about, although I do have a wide list from which to choose.
Mostly I'm looking forward to my trip to SIGGRAPH in August. Anyone
else going? I should have plenty to talk about after that. I plan on
going to at least two of the OpenGL courses being taught at the
Conference. I haven't completely decided which courses I'm going to
take, however.
I'm also looking forward to a trip to DC in August as well. A
real vacation. No computers. Just museums and monuments. I may need to
take some sort of anti-depressant. Nah. I need the break.
Graphics Mews
Disclaimer: Before I get too far into this I should note that
any of the news items I post in this section are just that - news.
Either I happened to run across them via some mailing list I was on,
via some Usenet newsgroup, or via email from someone. I'm not
necessarily endorsing these products (some of which may be
commercial), I'm just letting you know I'd heard about them in the
past month.
Announcing bttv version 0.4.0
BTTV is a device driver for Brooktree Bt848 based frame grabber
cards like the Hauppauge Win/TV pci, Miro PCTV, STB TV PCI, Diamond
DTV2000, and AverMedia. Major new features in version 0.4.0 are
rudimentary support for grabbing into user memory and for decoding VBI
data like teletext, VPS, etc. in software.
The Motif application xtvscreen now has better support for selecting
channels and also works in the dual visual modes (255+24 mil. colors)
of Xi Graphics AcceleratedX 3.1 X server.
Author:
Ralph Metzler rjkm@thp.uni-koeln.de
Marcus Metzler mocm@thp.uni-koeln.de
Web Site:
http://www.thp.uni-koeln.de/~rjkm/linux/bttv.html
OpenGL4Java 0.3
This is an initial developer's release of an (unofficial) port of
OpenGL(tm) for Java. Leo Chan's original package has been ported to
both WindowsNT/95 and to Linux. Several features have been added, the
main one being that OpenGL now draws into a Java Frame. What advantage
does this provide? Well, you can now add menus to the OpenGL widget as
well as receive all normal events such as MouseMotion and Window
events. You could very simply have a user rotate an OpenGL object by
moving the mouse around in the Frame (the demo for the next release
will have this feature).
You can grab it from the developers web page at
http://www.magma.ca/~aking/java.
WebMagick Image Web Generator - Version 1.29
WebMagick is a package which makes putting images on the Web as easy
as magick. You want WebMagick if you:
* Have access to a Unix system
* Have a collection of images you want to put on the Web
* Are tired of editing page after page of HTML by hand
* Want to generate sophisticated pages to showcase your images
* Want to be in control
* Are not afraid of installing sophisticated software packages
* Want to use well-documented software (33 page manual!)
* Support free software
After nine months of development, WebMagick is chock-full of features.
WebMagick recurses through directory trees, building HTML pages,
imagemap files, and client-side/server-side maps to allow the user to
navigate through collections of thumbnail images (somewhat similar to
xv's Visual Schnauzer) and select the image to view with a mouse
click. In fact, WebMagick supports xv's thumbnail cache format so it
can be used in conjunction with xv.
The primary focus of WebMagick is performance. Image thumbnails are
reduced and composed into a single image to reduce client accesses,
reducing server load and improving client performance. Everything is
pre-computed. During operation WebMagick employs innovative caching
and work-avoidance techniques to make successive executions much
faster. WebMagick has been successfully executed on directory trees
containing many tens of directories and thousands of images ranging
from tiny icons to large JPEGs or PDF files.
Here is a small sampling of the image formats that WebMagick supports:
* Windows Bitmap image (BMP)
* Postscript (PS)
* Encapsulated Postscript (EPS)
* Acrobat (PDF)
* JPEG
* GIF (including animations)
* PNG
* MPEG
* TIFF
* Photo CD
WebMagick is written in PERL and requires the ImageMagick (3.8.4 or
later) and PerlMagick (1.0.3 or later) packages as well as a recent
version of PERL 5 (5.002 or later). Installation instructions are
provided in the WebMagick distribution.
Obtain WebMagick from the WebMagick page at
http://www.cyberramp.net/~bfriesen/webmagick/dist/. WebMagick can also
be obtained from the ImageMagick distribution site at
ftp://ftp.wizards.dupont.com/pub/ImageMagick/perl.
EasternGraphics announces public release of `opengl' widget
EasternGraphics announces the public release of the `opengl' widget,
which allows windows with three-dimensional graphics output, produced
by OpenGL, to be integrated into Tk applications. The widget is
available for Unix and MS-Windows platforms.
You can download the package from ftp://ftp.EasternGraphics.com/
pub/egr/tkopengl/tkopengl1.0.tar.gz
Email: wicht@EasternGraphics.com
WWW: http://www.EasternGraphics.com/
ELECTROGIG's GIG 3DGO 3.2 for Linux for $99.
There is a free demo package for Linux. It's roughly 36M tarred
and compressed. A 9M demo file is also available for download. I had
placed a notice about this package in May's Muse column, but I guess
ELECTROGIG had missed that, so they sent me another announcement (I
got the first one from comp.os.linux.announce). Anyway, one thing I
didn't mention in May was the price for the full Linux product: $99.
This is the complete product, although I'm not sure if this includes
any documentation or not (it doesn't appear to). The Linux version
does not come with any product support, however. You need a 2.0 Linux
kernel to run GIG 3DGO.
I also gave a URL that takes you to an FTP site for downloading the
demo. A slightly more informative page for downloading the demo and
its associated files is at http://www.gig.nl/support/indexftp.html
Type1Inst updated
James Macnicol uploaded version 0.5b of his type1inst font
installation utility to sunsite.unc.edu. If it's not already there, it
will end up in /pub/Linux/X11/xutils.
Type1inst is a small perl script which generates the "fonts.scale"
file required by an X11 server to use any Type 1 PostScript fonts
which exist in a particular directory. It gathers this information
from the font files themselves, a task which previously was done by
hand. The script is also capable of generating the similar "Fontmap"
file used by ghostscript. It can also generate sample sheets for the
fonts.
FTP: ftp://sunsite.unc.edu/pub/Linux/X11/xutils/type1inst-0.5b.tar.gz
Editor's note: I highly recommend this little utility if you are
intent on doing any graphics arts style work, such as with the GIMP.
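Typical use is simply to run it in a font directory; a minimal session
might look like this (the directory path is just an example; use
wherever your Type 1 fonts actually live):
cd /usr/X11R6/lib/X11/fonts/Type1    # your Type 1 font directory
type1inst                            # writes fonts.scale (and a ghostscript Fontmap)
mkfontdir                            # rebuild fonts.dir from fonts.scale
xset fp rehash                       # make a running X server re-read its font path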
libgr-2.0.13 has been updated to png-0.96
It seems the interface to png-0.96 is not binary compatible with
png-0.89, so the major version of the shared library was bumped to
libpng.so.2.0.96 (last version was libpng.so.1.0.89).
WHAT IS LIBGR?
Libgr is a collection of graphics libraries, based on libgr-1.3, by
Rob Hooft (hooft@EMBL-Heidelberg.DE), that includes:
* fbm
* jpeg
* pbm
* pgm
* png
* pnm
* ppm
* rle
* tiff
* zlib, for compression
These are configured to build ELF static and shared libraries. This
collection (libgr2) is being maintained by Neal Becker
<neal@ctd.comsat.com>
FTP: ftp.ctd.comsat.com:/pub/linux/ELF
Did You Know?
...there is a site devoted to setting up Wacom tablets under XFree86?
http://www.dorsai.org/~stasic/wacomx.htm The page's maintainer, Edward,
says:
So far, nobody has told me that he or she couldn't follow the
instructions.
Fred Lepied is the man who actually created the support for the
Wacom tablets under XFree86. He gave me instructions on setting my
ArtPad II up and I repeated this, periodically, on Usenet. When the
requests for help there turned into a steady stream, I decided to
put up a web page (mainly to show that I can make one, not to use
it for a lame ego trip).
Adam D. Moss <adam@uunet.pipex.com> has said he's also gotten this to
work and offered to help others who might need assistance getting
things set up.
...there is rumored work being done on 3Dfx support for Linux? Tige
writes:
I was looking around for info about the 3Dfx based cards and came
across a guy's page that said he is working on a full OpenGL driver
for 3Dfx boards for NT. What does this have to do with Linux? Well,
he says that after the NT driver is done, he is going to start work
on 3Dfx drivers for Linux and an OpenGL driver for XFree86/3Dfx.
The guy's name is Zanshin and the address of his site is:
http://www.planetquake.com/gldojo/
Most of this stuff is in the News Archives section under 4/18/97. Oh
yeah, he also mentions hacking SGIQuake to work with Linux, so we
may get to see a hardware accelerated version of Quake for Linux.
...the MindsEye Developers mailing list has moved to mindseye@luna.nl?
You can unsubscribe from the lists by sending a body of
unsubscribe
to mindseye-request@luna.nl, or a body of
unsubscribe mindseye@luna.nl
to majordomo@luna.nl. Other majordomo commands should be sent to
majordomo@luna.nl; a body of 'help' gives an overview. Users who are
subscribed to the old mindseye@ronix.ptf.hro.nl address do not need to
unsubscribe; that list will be removed shortly. Until then they will
get each message twice: once from mindseye@luna.nl and once from
mindseye@ronix.ptf.hro.nl. An HTML interface using hypermail is under
construction.
Q and A
Q: Forgive what might be a dumb question, but what exactly is meant by
"overlays"?
A: Imagine a 24bpp image plane that can be addressed by 24bpp
visuals. Imagine an 8bpp plane in front of the 24bpp image plane,
addressed by 8bpp visuals.
One or more of the 8bpp visuals, preferably the default visual, should
offer a 'transparent pixel' index. When the 8bpp image plane is
painted with the transparent pixel, you can see through to the 24bpp
plane. You can call an arrangement like this a 24bpp underlay, or
refer to the 8bpp visuals as an overlay.
Strictly, we call this "multiple concurrent visuals with different
color depths", but that's rather a mouthful. Hence, as shorthand we
refer to it as "24+8" or "overlays", with "24+8" as the preferred
description.
From Jeremy Chatfield @ Xi Graphics, Inc.
Musings
MicroStation update
After last month's 3D Modeller update I received email from Mark
Hamstra at Bentley Systems, Inc. Mark is the man responsible for the
ports of Bentley's MicroStation and Masterpiece products that are
available for Linux. I've included his response below. The stuff in
italics is what I had originally written:
Thanks for the mention in Gazette #18 -- it's kinda fun watching
where MicroStation/Linux info pops up. Being the guy that actually
did the ports of MicroStation and Masterpiece, I'll lay claim to
knowing the most about these products. Unfortunately, you've got a
few errors in Gazette #18; allow me to correct them:
Includes programming support with a BASIC language and linkages to
various commercial databases such as Oracle and Informix.
Programming support in the current product includes the
MicroStation Development Language (C syntax code that compiles to
platform-independent byte-code), BASIC, and support for linking MDL
with both MDL shared libraries and native code shared libraries
(i.e., Linux .so ELF libraries). For a look at the future direction
of Bentley and MicroStation, take a look on our web site at the
recent announcement by Keith Bentley at the AEC Systems tradeshow
of MicroStation/J and our licensing agreement with Javasoft.
Because of the lack of commercial database support for Linux, there
are no database linkage facilities in the current Linux port of
MicroStation.
This looks like the place to go for a commercial modeller, although
I'm not certain if they'll sell their educational products to the
general public or not.
Nope, academic-only at this time; although we're collecting
requests for commercial licensing (at our normal commercial prices)
at http://www.bentley.com/products/change-request.html. The only
thing preventing MicroStation from being available commercially for
Linux is a lack of adequate expressed interest.
Note that the Linux ports have not been released (to my knowledge -
I'm going by what's on the web pages).
The first two of our new Engineering Academic Suites that contain
the Linux ports, the Building Engineering and GeoEngineering
Suites, have been available in North America since the middle of
February. European and worldwide distribution should be underway
now too, although it took a little longer. Incidentally, the web
pages you list are for our Europe, Middle East, and Africa (ema)
division; you probably actually want
http://www.bentley.com/academic.
[output formats] Unknown
We output a wide range of formats (and import a wider range than
you give us credit for). I always forget just which ones are
actually in the product and which are only in my current builds
from the most recent source, so I'll just refer you to
http://www.bentley.com/products/microstation95 and
http://www.bentley.com/products/masterpiece, and note that my copy
of MicroStation/Linux currently lists DGN, DWG, DXF, IGES, CGM,
SVF, GRD, RIB, VRML, Postscript, HPGL, PCL, TIFF, TGA, BMP, and a
couple other raster and animation formats as output options -- and
I know I haven't currently got some of our soon-to-be-released
translators compiled. Like I said, probably not all of these are in
the current Linux port, but it's a simple matter to add whatever's
not there to future versions of the Linux products, provided
there's enough demand to keep the project going.
I wasn't sure what a few of these formats were, so I wrote Mark back
to ask about them. He informed me on the following (which were the
ones I had asked specifically about):
* DGN is MicroStation-native design file format and has its ancestry
in the Intergraph IGDS file format.
* SVF is the Simple Vector Format (see http://www.softsource.com),
which works pretty good for web browser plug-ins.
* GRD is used by our MicroStation Field product.
* CGM is the Computer Graphics Metafile format, a vendor-independent
standard supported in various software packages, browser plug-ins,
printers/plotters, etc.
I want to thank Mark for offering updated information so quickly. My
information is only as good as what I can find or am fed, and it helps
when vendors, developers or end users provide me with useful info like
this. Many thanks Mark.
If you've used this product on MS platforms feel free to drop me a
line and let me know what you thought of it. I'm always out to support
commercial ports of graphics-related products to Linux.
Printing with an Epson Stylus Color 500
I bought an Epson Stylus Color 500 printer back in December of
last year so I could print in color. I had done some research into
what printers would be best, based in part on reviews in online PC
magazines and also on support available in the Ghostscript 4.03
package. The Epson Stylus Color 500 was rated very high by the reviews
and I found a web page which provided information on how to configure
Ghostscript for use with the printer. I bought the printer and got
Ghostscript working in a very marginal way (that is to say, it printed
straight text in black and white). But that's as far as it went. I had
gotten some minor printing in color done, but nothing very impressive
and most of it was downright bad.
Earlier this month I was given the opportunity to work on the
cover art for an issue of the Linux Journal. A few trial runs were
given the preliminary ok, but they were too small - the size of the
image needed to be more than twice as big as the original I had
created. Also, because the conversion of an image from the monitor's
display to printed paper is not a straightforward one (see the
discussion on LPI/DPI elsewhere in this month's column), it became
apparent I needed to try printing my artwork to sample how it would
really look on paper. I had to get my printer configuration working
properly.
Well, it turned out to be easier than I thought. The hardest
part is to get Ghostscript compiled properly. The first thing to do is
to be sure to read the text files that accompany the source code.
There are 3 files to read:
* make.txt - general compiling and installation instructions
* devices.txt - configuration information for support of the various
devices you'll need for your system.
* unix-lpr.txt - help on setting up a print spooler for Unix
systems.
The first two are the ones that made the most difference to me. I
didn't really use the latter, but my solution isn't very elegant.
However, what it lacks in grace it makes up for in simplicity.
Building the drivers was fairly simple for me - I took most of
the defaults, except I added support for the Epson Stylus Color
printers. There is a section in make.txt devoted specifically to
compiling on Unix systems (search for "How to build Ghostscript from
source (Unix version)" in that file). In most cases you'll just be able
to type "make" after linking the correct compiler-specific makefile to
makefile. However, I needed to configure in the Epson printers first.
What I did was to edit the unix-gcc.mak file to change one line.
The line that begins
DEVICE_DEVS=
was modified to add
stcolor.dev
right after the equal sign. I also didn't need support for any of the
HP DeskJet (DEVICE_DEVS3 and DEVICE_DEVS4) or Bubble Jet
(DEVICE_DEVS6) devices so I commented out those lines. Now, once this
file had been linked to makefile I could just run
make
make install
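Putting those steps together, the whole build looks roughly like this
(the DEVICE_DEVS line is only a sketch; your unix-gcc.mak will list
other devices as well):
# in the Ghostscript 4.03 source directory, after editing unix-gcc.mak
# so the device list begins: DEVICE_DEVS=stcolor.dev ...
ln -s unix-gcc.mak makefile    # select the gcc makefile
make
make install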
At this point the Ghostscript package was ready for use. Note that many
of the current distributions already include Ghostscript, but may not
have the 4.03 release. Run
gs -v
to find out if you have Ghostscript 4.03. You'll need it to work with
the Epson Stylus Color 500.
Now I needed to set up my print spooler. This turned out to be
rather easy. First, you need to know that the stcolor driver (which is
the name of the driver Ghostscript uses to talk to Epson Stylus
printers) has a pre-built Postscript file that is used to prepare the
printer for printing. This file, called stcolor.ps, is included with
the 4.03 distribution. The file contains special setup commands;
however, it does not actually cause anything to be printed.
More Musings...
* DPI, LPI, Halftoning and other strange things - A short discussion
on printing computer images.
* How many frames makes a movie? - a discussion with Larry Gritz
about how video animations are transferred from film.
When you want to print something you need to first print this file
followed by the file or files you want to print. Don't worry about how
to do this just yet - I have a set of scripts to make this easier.
There were a number of options I could use with Ghostscript for
my printer, but I found I only needed to work with one: display
resolution or Dots Per Inch (DPI). In order to handle the two
resolutions I simply created two scripts which could be used as input
filters for lpr (the print spooler). The scripts are almost exactly
the same, except one is called stcolor and one is called stcolor-high,
the latter being for the higher resolution. Both of these were
installed under /var/spool/lpd/lp and given execute permissions.
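I won't reproduce my exact scripts here, but a minimal sketch of such
an input filter might look like the following (the path to stcolor.ps
and the resolution are assumptions you would adjust):
#!/bin/sh
# hypothetical lpr input filter: render the PostScript job arriving on
# stdin for the Stylus Color 500 at 360dpi, sending the stcolor.ps
# setup file through Ghostscript first and raw printer data to stdout
exec gs -q -dNOPAUSE -dSAFER -r360 -sDEVICE=stcolor \
        -sOutputFile=- /usr/local/lib/ghostscript/stcolor.ps -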
Next came the configuration for lpr. I needed to edit the
/etc/printcap file to create entries for the new printer filters. I
decided to give the printers different names than the standard,
non-filtered printer name. In this way I could print ordinary text
files (which I do more than anything else) using the default printer
and use the other printer names for various draft or final prints of
images, like the cover art.
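For reference, a hypothetical /etc/printcap entry for the high
resolution filter might look like this (the printer name, device and
spool directory are assumptions):
# Epson Stylus Color 500 at 360dpi via the stcolor-high filter
lpps-high|Epson Stylus Color 500, high resolution:\
        :lp=/dev/lp1:\
        :sd=/var/spool/lpd/lp:\
        :if=/var/spool/lpd/lp/stcolor-high:\
        :mx#0:sh: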
Now the system was ready to print my images, but I still needed
to do a couple more things. First, I wanted to write a script for
handling printing of my images in the most common formats I created. I
wrote a script to do this which I named print-tga.sh. I made symbolic
links from this file to variations on the name. The script uses the
name used to invoke it to determine which type of conversions to run
before printing the file (the trick is sketched below). The script
converts the various formats, using the tools in the NetPBM kit, to
Postscript files and then prints them to the high resolution printer
set up in the previously mentioned printcap file.
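My script does more than this, but the name-dispatch idea looks
roughly like so (the converter choices and printer name are
assumptions):
#!/bin/sh
# hypothetical sketch: choose a NetPBM converter based on the name
# this script was invoked by (print-tga.sh, print-gif.sh, ...)
case `basename "$0"` in
    print-tga.sh) tgatoppm "$1" ;;
    print-gif.sh) giftopnm "$1" ;;
esac | pnmtops | lpr -Plpps-high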
Once I got all this done I was able to print full page images on
high-gloss paper. They come out beautifully. The images I created for
the cover art are far bigger than the paper, so Ghostscript resizes
them to fit. It wasn't until I got this working that I realized just
how good Ghostscript is. Or just how good the Epson Stylus Color 500
is.
As a side bonus, I also discovered that I could now print pages
from my Netscape browser to my printer. I configured the print command
to be lpr -Plpps (using the lower resolution printer from the
/etc/printcap file) in the Print dialog. Since Netscape passes the
page as a Postscript file to the filter, there is no need to do any
conversions like I do with my images. I now get full color prints of
the pages I wish to save (like SIGGRAPH's registration forms). I also
can print directly from Applixware using the same printer
configurations. I just had to set up the print options to output as
Postscript, which was simple enough to do.
There are a number of other settings that can be set using the
filters. If you are interested in using these you should consult the
devices.txt file for information on the stcolor driver. There are
probably some better settings than what I'm using for other types of
printing needs.
Well, that's about it. I hope this was of some use to you. I was
really thankful when I got it working. My setup is probably not
exactly like anyone else's, but if you have the Epson Stylus Color 500
you should be able to get similar results. Don't forget: if you plan
on printing high resolution images at 360 DPI (as opposed to the 180
DPI also supported by the printer) then you'll probably want to print
on high-gloss paper. This paper can be rather expensive; the
high-gloss paper Epson sells specifically for this printer is about
$36US for 15 sheets. Also, I should note that I recently heard Epson
now has a model 600 that is to replace the model 500 as their entry
level color printer. I haven't heard if the 600 will work with the
stcolor driver in Ghostscript, so you may want to contact the driver's
author (who is listed in the devices.txt file, along with a web site
for more info) if you plan on getting the model 600.
Resources
The following links are just starting points for finding more
information about computer graphics and multimedia in general for
Linux systems. If you have some application specific information for
me, I'll add it to my other pages, or you can contact the maintainer
of some other web site. I'll consider adding other general references
here, but application or site specific information needs to go into
one of the following general references rather than be listed here.
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page
Some of the Mailing Lists and Newsgroups I keep an eye on and where I
get a lot of the information in this column:
The Gimp User and Gimp Developer Mailing Lists.
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce
Future Directions
Next month:
I have no idea. I have a ton of things that need doing, but I just
haven't had time to figure out what I *should* do. I still have
part 3 of the BMRT series to do, which I plan on doing as part of
the process of creating an animation. The animation is another
topic I'd like to do. I've also had requests for a number of other
topics. One good one was to cover the various Image Libraries that
are available (libgr or its individual components, for example). I
have a review of Image Alchemy to do (long ago promised and still
not done *sigh*). Well, at least I'll never be short a topic.
Let me know what you'd like to hear about!
_________________________________________________________________
Copyright © 1997, Michael J. Hammel
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Intranet Hallways Systems Based on Linux
By Justin Seiferth, seifertj@af.pentagon.mil
_________________________________________________________________
Using Linux: An Intranet Hallways System
Like many of you, I like to use Unix, especially Linux, whenever and
wherever it seems to be the best fit for the job. This means I have to
work fast and be creative, making opportunities whenever and wherever
I can. I had just such an opportunity recently when I put together a
system which allows my workplace to publish the common file sharing
areas of its Microsoft Windows NT based desktops. I thought others
might be interested in this system, so I created a distribution that
lets you build your own Intranet Hallways system or, as the popular
press would put it, an "enterprise information warehouse". Don't let
on how easy it is and you'll be able to make a bundle reselling the
system. Here's what you need to do to make it happen:
Kernel Options
Support Utilities
HTML pages and scripts
Some Configuration Changes
A Quick installation
Other things you might do with it
Once you've retrieved the distribution, it shouldn't take more than an
hour to get things running; let me know what you think about the
system when you do.
The Opportunity
Microsoft's Windows NT suffers from a file system inherited from its
MS-DOS lineage. For those of you who haven't had the displeasure this
means file systems are cryptically named A-Z, can't automount and the
process of manually mounting them is much more complicated and error
prone than the more user friendly tools like Red Hat's fstool. These
problems have been worked around somewhat at my agency through a
series of .bat files which mount server drives in standard places so
users can say "Just look at the T: drive" or something similar. This
still left users with problems searching tens of thousands of files
spread thousands of directories located on servers across the world.
The Microsoft Windows NT operators were trying to figure out a way to
present an efficient, agency-wide view of these servers so that users
could easily find and retrieve things. We used Linux to integrate and
publish these file sharing areas on our intranet.
Before the Show
Key to the system is the ability of the Linux kernel (later 2.0 and
2.1 versions) to mount local NTFS and remote SMB volumes. There's
nothing esoteric about enabling or using this option, just check it
off when you're compiling the kernel. (Don't run away at the thought
of compiling a kernel! Most distributions include these options in
their default kernel, so you probably don't have to do anything; just
try smbmount and see if it works.) If any of your network shares
are coming from Windows 95 machines, make sure to also select the
patch for Windows 95 machines and long file names. If you are just
serving Microsoft Windows NT or Samba shares, don't use the Windows 95
option as I've found it has a noticeable impact on the speed of the
SMB mount and file operations.
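A quick way to confirm that your kernel can speak SMB is a mount by
hand. Here's a minimal sketch; the host ntserver and the share public
are hypothetical names you'd replace with your own:
______________________________________________________________________
# mount a share by hand (ntserver and public are placeholder names)
/usr/sbin/smbmount //ntserver/public /mnt -n
ls /mnt        # you should see the files on the share
/usr/sbin/umount /mnt
______________________________________________________________________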
Once you've got an SMB-capable kernel installed you're almost ready to
go. The other critical components are the smbfs utilities, the wu-ftpd
suite, a web server, a search engine and a JavaScript-capable
browser. Your distribution has probably installed an operational FTP
and HTTP server, and most people nowadays have a Netscape browser
installed, so all you really need to do is compile the smbfs utilities
and set up a search engine. If most of the documents on your SMB shares
are in text or HTML format, there are a number of search engines you
can choose from; htdig and glimpse come to mind. If you want to be
able to search non-HTML documents, you might need one of the
commercial search engines. We use Netscape's Catalog Server for
Solaris.
The system will work without a JavaScript browser; it just won't be as
easy to use. Hit the links above to grab the software tools you need
and set them up. If you run into problems, be sure to check out the
Linux HOWTOs and mailing list documentation on the sites offering the
software. If you have Red Hat's RPM or Debian's package tools,
somebody else has probably already made a binary available; just
check your local archive.
Set and Stage
I'm assuming you've tested your kernel to make sure you can indeed
mount SMB shares and that your ftp server is up and alive. Before we
can start serving your "enterprise information warehouse" there are a
few files which need to be added to or modified on your system in
addition to the HTML files we'll discuss later. The first addition is
a new init.d file for automatically mounting SMB shares when you boot
your system. Then we'll enable a few features of your FTP server.
First, let's contend with mounting shares automatically. I do this
with a standard run-level 3/5 initscript; here's an excerpt with the
most critical lines:
______________________________________________________________________
# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0
# See how we were called.
case "$1" in
start)
echo -n "Mounting SMB shares..."
echo "Mounting share1"
/usr/sbin/smbmount //hostname/share /home/ftp/mountpoint -n -uftp \
-gftp -f755 -Ihostnames_IP_address
# mount your other shares
echo ""
;;
stop)
echo -n "Umounting SMB shares..."
/usr/sbin/umount /home/ftp/mountpoint
# insert other umount commands here ....
echo ""
;;
*)
echo "Usage: hallways {start|stop}"
exit 1
esac
______________________________________________________________________
The smbmount(8) and umount(8) man pages have more details on what all
those flags are about. Basically, we are mounting the shares into a
directory accessible via anonymous FTP. The permissions and groups are
"fake" in the sense that they don't map to anything sensible in the NT
file system; they are only for the convenience and protection of the
Unix system. Our common shares are read/write for everyone; if your
site is more cautious you may want to review the implications of the
file permissions and ownership or perhaps impose access controls using
your file system and web server's security mechanisms.
Now, let's take a look at the scripts used to start up your FTP server.
You have to make sure you're taking advantage of wu-ftpd's
ftpaccess(5) configuration capabilities. If you start your FTP daemon
with the -a option, the /etc/ftpaccess file will allow you to
customize many aspects of the FTP server's behavior and
capabilities. Normally, you enable the -a option of your FTP server in
your /etc/inetd.conf file; some people run their FTP daemon full time,
in which case check the startup files in your /etc/rc.d/rc3.d or rc5.d
directory and add the option where the daemon is started. Among the
benefits of using ftpaccess is the ability to specify header and
trailer messages in the directory listings generated by your FTP
server. These directives, message and readme, are key to our system's
capabilities.
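For example, on many systems the wu-ftpd line in /etc/inetd.conf looks
something like the following once the -a flag is added (the exact path
and extra flags will vary with your distribution):
______________________________________________________________________
# wu-ftpd with -l (log sessions) and -a (use /etc/ftpaccess)
ftp stream tcp nowait root /usr/sbin/tcpd wu.ftpd -l -a
______________________________________________________________________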
We created an HTML file within the directory structure accessible to
the FTP daemon; in our case it is called 'welcome.html', this file is
placed in the root directory of the FTP daemon's file area and the
entry in ftpaccess looks like:
...
message /welcome.html login
...
Now the contents of welcome.html will be displayed at the beginning of
directory listings. The contents of welcome.html are a little tricky
if you're not familiar with JavaScript. They are designed to
dynamically tailor the HTML based on the position of the page within
the browser, so that the help message matches the context of the
display.
______________________________________________________________________
<HTML>
<HEAD>
<SCRIPT LANGUAGE="JavaScript">
function OpenNewWindow()
{
alert("To Upload a file go to file...Upload File on the browser's
button bar")
parent.frames[2].location.protocol = "ftp:"
window.open(parent.frames[2].location.href)
}
</SCRIPT>
</HEAD>
<BODY bgcolor="#FFFFFF">
<FORM>
<SCRIPT LANGUAGE="JavaScript">
if (self != top) {
document.write('<i><B>Hi!</b></i>' +
" You can preview, download files or search for information here.<p>" +
"You can also upload a file<br>" +
'<FORM>' + '<CENTER>' +
'<INPUT TYPE="button" Value="Upload File" onClick="OpenNewWindow()">' +
'</CENTER>' + '</FORM>');
}
else
{
document.write('<i><B>Hi!</b></i>' +
' This is a special screen for adding information to hallways.<p>' +
' To upload a file, go to FILE | Upload, like ' +
'<a href="http://webserver/access_directory/file_upload.gif">this</a>' +
'<p>');
}
</SCRIPT>
</FORM>
</BODY>
</HTML>
______________________________________________________________________
This interface is not the first one we tried. I really wanted to make
the system intuitive; then we'd have to spend less time answering
questions and could spend more time working on new ideas. The tests we
conducted showed most people knew how to download files but were not
aware you could upload files or view the contents of non-HTML files.
We tried HTTP uploads and downloads but settled on the combination of
FTP and HTTP generated screens. We needed a design which allowed easy
navigation around a complicated system and kept at least minimal help
hints in front of the users all the time. The final HTTP based frame
design allowed us to put together an attractive interface.
Encapsulating the FTP file display simplified uploads and downloads.
Unlike a web server, our FTP server labels all files as a single MIME
type, allowing us to use a single helper application to display all
files. Getting this preview function to work will require editing the
association of MIME types with an application on the user's computer.
We use a universal viewer; you can use one of these if your network
already has one installed, or you might investigate one of the many
plug-ins which allow viewing files within the browser itself.
The Curtain Rises
Now the majority of the work and trickery is done; all that remains
is a frame-based user interface, a few snazzy graphics and some help
files. In a nutshell, if the FTP listing is contained within a frame,
then the if branch of the JavaScript conditional is presented. This
HTML allows the
user to press an "upload" button which will pop open another browser
instance with the FTP directory within the root window. When
welcome.html is displayed within this root window, it contains
instructions on how to upload a file using the FTP capabilities of the
browser. The best way to understand how the code works is of course to
just load it up and experiment.
This isn't a tutorial on HTML so I'll just let you know you can
download this whole package (minus a few of the graphics we used in
our user interface) from
ftp://www.disa.mil/pub/linux_gazette_stuff.tgz. We can't redistribute
some of the graphics we use but you can just draw up your own and
stick them into HTML code.
During your review of the code you may notice that our frame
definition document distributes this system across several machines;
for us this is an important feature. We make use of local proxy
servers for FTP and HTTP traffic. These proxy servers keep down the
loading of our backbone. Our system is distributed such that the web
server documents and graphics will be served from a local departmental
web server while the FTP server distributes information from another,
centralized location. Since the proxy and web server are local to our
subnet, documents stored on the SMB hallways area are served from the
proxy (cache), speeding up file transfer times dramatically and reducing
our wide area network traffic. We are also using the Solaris version
of the Netscape Catalog Server to allow users to expediently find any
document or content in a wide variety of popular Unix, Macintosh and
Windows application formats. This feature provides some much-needed
help to users who must retrieve one of several hundred thousand
documents stored on servers spread across the globe; such searches
were absolutely infeasible using the Microsoft Windows NT file manager
search feature previously recommended by the Microsoft Windows NT
operators.
Applause and Royalties
You can provide many other enhancements: browser access to multiple
file system types (NFS, AppleShare, SMB, AFS, etc.) and
internet/intranet FTP areas are easily added. We are also working on a
management add-on using PHP/FI and Postgres to present users with a
fully graphical file upload facility which will also store metadata
about documents, such as the originator of the information, the
originator's e-mail address, etc. In fact, I think that with a little
more work this system is a pretty good replacement for some of the
proprietary commercial document management applications that cost tens
of thousands of dollars.
I hope these ideas and this system will help you and your workplace
out. If you have other creative examples of simple systems that help
bring people working around the world together, I'd like to hear about
them. Thanks for listening...
_________________________________________________________________
Copyright © 1997, Justin Seiferth
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
Linux and Artificial Intelligence
By John Eikenberry, jae@ai.uga.edu
_________________________________________________________________
Three years ago, when I was starting the last year of my master's
degree in philosophy, I found myself asking that eternal question,
"OK, now what in the hell am I going to do?" Not wanting to continue
on in philosophy, what could a philosopher (and computer enthusiast)
do that would be both fun and profitable? Artificial Intelligence, of
course (but you saw that coming, didn't you?)
I had fallen in love with Linux in late 1993 and after seeing all the
Suns scattered about the AI Dept, it seemed like the perfect OS for AI
research. Guess what, I was right. I have found so many resources
available for doing AI research on Linux that I had to write them all
down (warning: blatant plug follows), thus my Linux AI/Alife
mini-HOWTO came into being.
Ok, enough of this drivel, now on to the meat of the article.
Modern AI is a many-faceted field of research, dealing with everything
from 'traditional' logic-based systems to connectionism, evolutionary
computing, artificial life, and autonomous agents. With Unix being the
main platform for AI, there are many excellent resources available for
Linux in each of these areas. In the rest of this article I'll give a
brief description of each of these areas along with one of the more
interesting resources available to the Linux user.
_________________________________________________________________
PROGRAMMING LANGUAGES
I know I didn't mention this above, but there are many
programming languages that have been specifically designed with
AI applications in mind.
DFKI OZ
Web page: www.ps.uni-sb.de/oz/
FTP site: ps-ftp.dfki.uni-sb.de/pub/oz2/
Oz is a high-level programming language designed for concurrent
symbolic computation. It is based on a new computation model
providing a uniform and simple foundation for several
programming paradigms, including higher-order functional,
constraint logic, and concurrent object-oriented programming.
Oz is designed as a successor to languages such as Lisp, Prolog
and Smalltalk, which fail to support applications that require
concurrency, reactivity, and real-time control.
DFKI Oz is an interactive implementation of Oz featuring a
programming interface based on GNU Emacs, a concurrent browser,
an object-oriented interface to Tcl/Tk, powerful
interoperability features (sockets, C, C++), an incremental
compiler, a garbage collector, and support for stand-alone
applications. Performance is competitive with commercial Prolog
and Lisp systems.
_________________________________________________________________
TRADITIONAL ARTIFICIAL INTELLIGENCE
Traditional AI is based around the ideas of logic, rule
systems, linguistics, and the concept of rationality. At its
roots are programming languages such as Lisp and Prolog. Expert
systems are the largest successful example of this paradigm. An
expert system consists of a detailed knowledge base and a
complex rule system to utilize it. Such systems have been used
for such things as medical diagnosis support and credit
checking systems.
SNePS
Web site: www.cs.buffalo.edu/pub/sneps/WWW/
FTP site: ftp.cs.buffalo.edu/pub/sneps/
The long-term goal of The SNePS Research Group is the design
and construction of a natural-language-using computerized
cognitive agent, and carrying out the research in artificial
intelligence, computational linguistics, and cognitive science
necessary for that endeavor. The three-part focus of the group
is on knowledge representation, reasoning, and natural-language
understanding and generation. The group is widely known for its
development of the SNePS knowledge representation/reasoning
system, and Cassie, its computerized cognitive agent.
_________________________________________________________________
CONNECTIONISM
Connectionism is a technical term for a group of related
techniques. These techniques include areas such as Artificial
Neural Networks, Semantic Networks and a few other similar
ideas. My present focus is on neural networks (though I am
looking for resources on the other techniques). Neural networks
are programs designed to simulate the workings of the brain.
They consist of a network of small, mathematically based nodes,
which work together to form patterns of information. They have
tremendous potential and currently seem to be having a great
deal of success with image processing and robot control.
PDP++
Web site: www.cnbc.cmu.edu/PDP++/
FTP site (US): cnbc.cmu.edu/pub/pdp++/
FTP site (Europe): unix.hensa.ac.uk/mirrors/pdp++/
As the field of connectionist modeling has grown, so has the
need for a comprehensive simulation environment for the
development and testing of connectionist models. Our goal in
developing PDP++ has been to integrate several powerful
software development and user interface tools into a general
purpose simulation environment that is both user friendly and
user extensible. The simulator is built in the C++ programming
language, and incorporates a state of the art script
interpreter with the full expressive power of C++. The
graphical user interface is built with the Interviews toolkit,
and allows full access to the data structures and processing
modules out of which the simulator is built. We have
constructed several useful graphical modules for easy
interaction with the structure and the contents of neural
networks, and we've made it possible to change and adapt many
things. At the programming level, we have set things up in such
a way as to make user extensions as painless as possible. The
programmer creates new C++ objects, which might be new kinds of
units or new kinds of processes; once compiled and linked into
the simulator, these new objects can then be accessed and used
like any other.
_________________________________________________________________
EVOLUTIONARY COMPUTING [EC]
Evolutionary computing is actually a broad term for a vast
array of programming techniques, including genetic algorithms,
complex adaptive systems, evolutionary programming, etc. The
main thrust of all these techniques is the idea of evolution.
The idea that a program can be written that will evolve toward
a certain goal. This goal can be anything from solving some
engineering problem to winning a game.
GAGS
Web site: kal-el.ugr.es/gags.html
FTP site: kal-el.ugr.es/GAGS/
Genetic Algorithm
application generator and class library written mainly in C++.
As a class library, and among other things, GAGS includes:
* A chromosome hierarchy with variable length chromosomes. Genetic
operators: 2-point crossover, uniform crossover, bit-flip
mutation, transposition (gene interchange between 2 parts of the
chromosome), and variable-length operators: duplication,
elimination, and random addition.
* Population level operators include steady state, roulette wheel
and tournament selection.
* Gnuplot wrapper: turns gnuplot into an iostreams-like class.
* Easy sample file loading and configuration file parsing.
As an application generator (written in Perl), you only need to supply
it with an ANSI-C or C++ fitness function, and it creates a C++
program that uses the above library to 90% capacity, compiles it, and
runs it, saving results and presenting fitness through gnuplot.
_________________________________________________________________
ALIFE
Alife takes yet another approach to exploring the mysteries of
intelligence. It has many aspects similar to EC and
connectionism, but takes these ideas and gives them a
meta-level twist. Alife emphasizes the development of
intelligence through emergent behavior of complex adaptive
systems. Alife stresses the social or group based aspects of
intelligence. It seeks to understand life and survival. By
studying the behaviors of groups of 'beings' Alife seeks to
discover the way intelligence or higher order activity emerges
from seemingly simple individuals. Cellular Automata and
Conway's Game of Life are probably the most commonly known
applications of this field.
Tierra
Web site: www.hip.atr.co.jp/~ray/tierra/tierra.html
FTP site: alife.santafe.edu/pub/SOFTWARE/Tierra/
Alternate FTP site:
ftp.cc.gatech.edu/ac121/linux/science/biology/
Tierra is written in the C programming language. This source
code creates a virtual computer and its operating system, whose
architecture has been designed in such a way that the
executable machine codes are evolvable. This means that the
machine code can be mutated (by flipping bits at random) or
recombined (by swapping segments of code between algorithms),
and the resulting code remains functional enough of the time
for natural (or presumably artificial) selection to be able to
improve the code over time.
_________________________________________________________________
AUTONOMOUS AGENTS
Also known as intelligent software agents or just agents, this
area of AI research deals with simple applications of small
programs that aid the user in his/her work. They can be mobile
(able to stop their execution on one machine and resume it on
another) or static (live in one machine). They are usually
specific to the task (and therefore fairly simple) and meant to
help the user much as an assistant would. The most popular (i.e.,
most widely known) use of this type of application to date is the
web robots that many of the indexing engines (e.g., WebCrawler)
use.
Ara
Web site: www.uni-kl.de/AG-Nehmer/Ara/
Ara is a platform for the portable and secure execution of
mobile agents in heterogeneous networks. Mobile agents in this
sense are programs with the ability to change their host
machine during execution while preserving their internal state.
This enables them to handle interactions locally which
otherwise have to be performed remotely. Ara's specific aim in
comparison to similar platforms is to provide full mobile agent
functionality while retaining as much as possible of
established programming models and languages.
_________________________________________________________________
Copyright © 1997, John Eikenberry
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
Linux: For Programmers Only--NOT!
By Mike List, troll@net-line.net
_________________________________________________________________
A couple of weeks ago, I was in a computer repair shop, trying to get
a deal on some hardware. The owner was trying to sell me on how cool
Win95 is. I told him I run Linux, then gave him the same hard/soft
sell I give to everyone that I think might have use for Linux. I'm
just a glutton for punishment that way. He looked at me blankly, and
said Unix is a programmer's OS and it's not good for the average user.
My turn to look blankly, "Apparently that means that MS is an
illiterate's OS, and not good for the educated user". I didn't say
that but I thought it very loudly, and the conversation was over....
I should have been more understanding of his attitude. Part of the
reason that Linux hasn't become more mainstream is the belief that you
must be a highly trained programmer to make it run. That simply isn't
the case.
I hope to dispel some of this notion by pointing out my personal
experience with Linux. I am not a programmer; I can barely write a
good shell script, but I am happy as a clam with my Slackware 3.2 beta
installation and only very infrequently boot to the DOS/WFWG 3.11
partition.
Programming consists of writing code and compiling it, and very little
of this is required to effectively use Linux. Although many
applications are distributed as source code, the source code in most
cases requires very little modification. Compiling source code,
moreover, is not as complicated as it might seem. One command, "make",
can usually accomplish this compilation, and the advice to inspect
Makefiles can largely be ignored (I probably should be horsewhipped
for the previous statement, but in my experience it's nonetheless
true).
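For a typical source package, the whole dance looks something like
this (the package name here is hypothetical):
______________________________________________________________________
# unpack the source archive
tar xzvf someprogram-1.0.tar.gz
cd someprogram-1.0
# skim the instructions first
less README INSTALL
# build it, then install it (the install step usually needs root)
make
make install
______________________________________________________________________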
There is no doubt that the Linux experience is enhanced by programming
ability. Linux does lend itself to source code modification, which is
part of the reason that its development and bug fixes have been so
rapid, and continuous improvement has been the hallmark of Linux, as
well as the whole of the GNU organization.
It might be closer to the truth to consider Linux a hacker's medium,
simply because "hacker" means different things to different people.I
do not consider myself a hacker, although several MS Windows users
have described me that way. "Hacker", "cracker" and "programmer" are,
in my opinion often erroneously used as synonyms, by people who
haven't acquired computer skills beyond user level.
This myth is probably furthered by manufacturers of the better-known
OSes, although not necessarily deliberately. Salesmanship requires
manipulation of certain facts, and in the case of OS software this is
even more likely to be the case. FACT: There is no perfect OS. FACT:
Proponents of any OS tend to lose sight of that fact, even Linux
advocates.
In my own family there exists a conflict of opinion regarding WFWG
3.11 vs. Linux, which in time is growing weaker, with Linux becoming
more acceptable to my wife and kids (I have admittedly used subversive
techniques to accomplish this goal, such as leaving the computer on
all the time, in X). In addition, I made sure to download programs
that were similar to ones used by my kids in WFWG, such as xpaint, and
Netscape, as well as several games, both SVGALIB, and X. Koules is a
big favorite, as is SASTEROIDS, and some while ago I had a flight sim,
FLY8111, that was a litle too challenging so it quietly disappeared. I
have put the BSD text based games on as an inducement to get my 15
year old foster son to read with some enthusiasm, with moderate
success. All I have to do now is find a word processing application
that my wife will accept readily, and I'll experience little
resistance, hopefully, to commandeering the drive that's currently
loaded with DOS and WFWG. When I recompiled the kernel, I added sound
support, and even though I've had a little trouble installing a
sound-playing program, the kids and I still make use of a pair of
extremely basic scripts based on the sound driver's README that allow
us to record and play back music. My sound card is an old eight-bit
SoundBlaster, so the sound quality isn't great, but I used it to
rehearse the song I sang at my oldest daughter's wedding, to good
effect.
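Those scripts boil down to little more than this sketch (the file name
is hypothetical, and your sound device may differ):
______________________________________________________________________
#!/bin/sh
# record: grab 8-bit sound from the card until interrupted
dd if=/dev/audio of=song.au bs=8k

#!/bin/sh
# playback: send it back out through the card
cat song.au > /dev/audio
______________________________________________________________________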
Earlier, I stated that I'm not really capable of writing a decent
shell script, but very simple scripts similar to DOS batch files can
be written by nearly anyone, and examples of scripts abound on many
sites, so keystroke-saving measures are available to anyone who cares
to try their hand at it. The Linux Gazette, in particular, has
provided me with plenty of template-like scripts from which I have
learned what little I know about more advanced scripting.
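A script like the following two-liner is about all it takes to get
started (the paths are made up, but the idea is real):
______________________________________________________________________
#!/bin/sh
# backup: copy my documents onto the DOS partition with one short command
cp -r /home/mike/docs /dosc/backup
______________________________________________________________________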
Linux advocates need, in my opinion to show patience with new users to
a greater degree than is currently the fashion. Banter among the
initiated has camaraderie value, but often puts off the prospective
Linux convert. When I was investigating Linux, I was told by one
respondent to my usenet posting "Do not meddle in the ways of wizards
for their ways are subtle and quick to anger." Hardly an encouraging
statement, but with my temperament it served to strengthen my resolve
to show the SOB. I daresay most casual computer users would not
respond as I did, however.
For the advancement of Linux I would recommend that you (Linux gurus)
choke on RTFM, unless you're sure that the person you are talking to
has acquired the skills needed to effectively read those FMs. My
experience has shown me that Linux distributions are almost as plug
and play as anything MS, IBM, Apple or anyone else has to offer. This
provides a jumping-off point that will motivate users to learn skills
that they previously thought to be beyond them. By drawing them into
Linux operation slowly, they may become capable programmers, at which
point they will have made it their OS. A programmer's OS.
_________________________________________________________________
Copyright © 1997, Mike List
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
QPS, A New Qt-Based Monitor
by Larry Ayers
_________________________________________________________________
Introduction
The Qt C++ software development toolkit, by Norway's Troll Tech, has
been available long enough now that applications are beginning to
appear which use Qt rather than Motif, Xlib or Tcl/Tk. Programs
developed with Qt have an identifiable and distinctive appearance,
with some resemblance to both Windows and Motif. There has been some
resistance in the Linux community to widespread adoption of the
toolkit due to its hybrid licensing arrangement. The toolkit is freely
available in the Linux version, and its use in the development of
free, GNU-ish software is free and unrestricted, but for other
platforms and for commercial Linux software Qt is a commercial
product.
Remember when Tcl/Tk began to become widely used a couple of years
ago? Applications and utilities written with the toolkit began to
proliferate, one reason being that the learning curve is relatively
gentle and a quick X interface to a command-line utility could be
contrived in a short time. C programmers found that the guts of a
program could be written in C, while the tricky X-windows interface
could be quickly put together with Tcl/Tk. This benefited the Linux
community as a whole, making it easier for new users and developers to
gain a foothold on the sometimes forbiddingly steep unix terrain.
Qt is an entirely different sort of toolkit than Tk, since it is based
on C++ and doesn't have the interpreted script layer of Tk. (It more
closely resembles Bruce Wampler's V package, described in the Dec.
1996 issue of Linux Journal.) In order to run Qt applications the
libqt shared library must be available; to compile them from scratch
you also need a small executable, moc, and the Qt header (include)
files. The Qt source package is available from the Troll Tech FTP
site. Many small sample applications and demos, as well as tutorials
and ample documentation, are included in the package.
QPS
Mattias Engdegård has recently written and released a process monitor
similar to top, the classic interface to ps. Top, though a
character-mode application, is commonly run in an xterm or rxvt window
in an X session. There is one problem with top in a window; scrolling
down to the bottom of the process list doesn't work, so the entries at
the bottom are inaccessible without resizing the window. There may be
a way to do this, but I haven't been able to find one. A minor issue,
I suppose, since the ordering of the entries can be easily toggled so
that either the most memory-intensive or the most CPU-intensive
processes appear at the top.
Qps is a more X-friendly application than top, with scrollbars and a
mouse-oriented interface. Clicking on any of the header categories,
such as %CPU, SIZE, or %MEM, will sort the processes in descending
order. Alt-k will kill a highlighted process. A series of bar-graphs
along with an xload-like meter form a status bar at the top of the
window. This can be toggled on and off from the menu-bar. When Qps is
iconified the icon is the small xload-like pane from the status-bar,
which is a nice touch.
Here's a screenshot:
Qps screenshot
_________________________________________________________________
Qt applications don't use the X resources typical of most X programs;
one result of this is that Qps seems to be confined to the default
white, gray, and black color scheme. It can generate a resource file
in your home directory which specifies which fields you'd like to see
and whether the status-bar should be visible or not.
Qps could be thought of as a sort of second-generation Linux utility,
written for users who rarely work from the console and boot directly
into an X session. It should fit in well with the KDE suite of
applications, which are also being developed with Qt. Though it uses
more memory than top in an rxvt window, I find myself using it often
while running X. I think this is a solid, dependable application
which deserves attention from the Linux community.
Availability
Currently the Qps-1.1 source is in the Sunsite Incoming directory, but
will most likely end up in the status directory. An alternate Swedish
site is here.
_________________________________________________________________
Copyright © 1997, Larry Ayers
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
The UnRpm-Install Package
by Larry Ayers
_________________________________________________________________
Introduction
No matter what distribution of Linux you have installed, there will
come a time when you would like to install a package in one of the
other distributions' formats. No one distribution has every possible
package available, and the updates to packages often depend on a
volunteer's inclination and time constraints.
On a reasonably current and well-maintained Linux system, most of the
quality source-code packages will compile without much effort beyond
perusal of the README and INSTALL files. In other words, *.rpm and
*.deb packages aren't vitally necessary, though the ease of upgrading
or removal possible with these packages makes them a time-saving
convenience.
But few people have both the time and the inclination to compile every
new program from source. It does take more time than using a
precompiled package, and often a package maintainer will have access
to patches which haven't yet been incorporated into an official
release. One of these patches might be just what it takes to ensure a
successful installation on your system! Therefore it stands to reason
that the more different genera of precompiled packages you have
available, the wider the pool of available software.
A year and a half ago I was running a Slackware 3.0 system, but had
used Redhat just long enough to appreciate the value of an rpm
package. As I remember, there were a few pieces of software which I
was unable, no matter what tweaking I did, to successfully compile.
The rpm's available for those packages were tempting, but I didn't
want to start from scratch and reinstall the Redhat Linux distribution
just for a few packages. Poking around the Redhat FTP site, I saw that
the source for the then-current version of rpm was available, and
after various trials and tribulations I managed to successfully
compile and install it. The crucial factor which made it all work was
downloading and installing a newer version of cpio, which was right
there in the Redhat rpm directory. It wasn't the easiest installation
I've ever done, but I don't blame the folks at Redhat for not making
it a no-brainer. After all, they evidently worked long and hard
developing the rpm package system and they surely wanted to leverage
its value in influencing users to buy their distribution. Redhat is to
be commended for resisting purely commercial urges and making rpm
freely available.
Two distributions later, I never have gotten around to reinstalling
rpm, partly because the Debian distribution has a utility called
alien, which will convert an *.rpm file into a *.deb file. This is a
nice utility, but sometimes I'd just like to poke around inside a
package and see what's there without actually installing it. Both rpm
and Debian's dpkg utility have command-line switches for just listing
the contents, or extracting individual files from a package. These
aren't the sort of switches I would use often enough to memorize, and
it's a pain to read the man page each time. So I gradually meander my
way to the point of this article...
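Before I get there: for the curious, those incantations done by hand
look roughly like this, using the stock tool names (the copies bundled
elsewhere may be named slightly differently; the package names are
hypothetical):
______________________________________________________________________
# list an rpm's contents without installing it
rpm2cpio foo-1.0.rpm | cpio -t
# extract the files into the current directory
rpm2cpio foo-1.0.rpm | cpio -id
# list a Debian archive's contents
dpkg-deb --contents bar_1.0.deb
______________________________________________________________________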
UnRpm-Install
Recently, in nearly daily updates, Kent Robotti has been releasing to
the Sunsite archive site a package of programs and scripts which
simplify working with these various package formats. UnRpm is most
useful when used in conjunction with the Midnight Commander file
manager, as one component of the package is a set of entries meant to
be appended to the mc user menu.
This is what the package includes:
* the version of cpio which works well with rpm
* two shell scripts from the Slackware distribution, installpkg and
removepkg
* rpm2cpio, a program from Redhat which converts an rpm archive to a
cpio archive
* dpkgdeb, a program from the Debian distribution which unpacks,
packs, or provides information about a Debian archive file
* unrpm and undeb, two shell scripts which can either be used as is
or be called by the Midnight Commander.
* Update.mc, a shell script which will append entries for the above
scripts and programs to the /usr/lib/mc/mc.menu file
* Install, a shell script which installs the above binaries and
shell scripts, and also thoughtfully renames any pre-existing
equivalents in case you may want to back out any of the installed
files
The earlier versions of UnRpm-Install included statically-linked
binaries, no doubt to make them usable by a wider variety of users,
but with the disadvantage of large file sizes. Since most systems have
compatible libc versions installed, which is the only library linked
with the binaries, recent versions have included the smaller
dynamically-linked versions.
The Midnight Commander in its recent incarnations has excellent
support built-in for treating these various archive formats as virtual
file-systems, allowing the user to browse through their contents
without actually expanding them. The menu entries provided by UnRpm
expand upon these capabilities, making it easier than ever to convert
one format to another and to see just what an archive will install on
your system.
There's nothing in UnRpm-Install which you couldn't gather up
yourself, from various FTP sites or distribution CDs. What makes the
package valuable is that Kent Robotti has done this for you, and
presented these disparate binaries and scripts as a coherent whole,
bound together by the Midnight Commander used as archive manager.
Availability
Various versions of UnRpm-Install are still in the /pub/Linux/Incoming
directory of the Sunsite FTP archive, but the most recent version will
eventually make its way into the archive utility directory.
_________________________________________________________________
Copyright © 1997, Larry Ayers
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
Single-User Booting Under Linux
By John Gatewood Ham, zappaman@alphabox.compsci.buu.ac.th
_________________________________________________________________
I was trained as a system administrator on HP, IBM, and Sun
workstations while working as a DRT consultant assigned to Informix as
an alpha-tester. There I learned the need for a true single-user
operating mode in Unix. When I tried to use the single user mode with
Linux, it did not work in the way that I expected. After many, many
reboots I worked out the right configuration to support a true
single-user mode on the distribution I was using, Slackware 3.2, by
modifying the boot process.
This article will now explain how to set up the boot process for
Linux so that single-user mode really works if you are using the
Slackware 3.2 distribution (or a derivative). I will begin by assuming
that your kernel is correctly configured and that the init program
starts successfully. See the Installation-HOWTO at
ftp://sunsite.unc.edu/pub/Linux/docs/HOWTO/Installation-HOWTO for help
to get this far. Once you have a system that boots, however, you have
only begun. Why? Most distributions will give you a generic set of
initialization scripts that are designed to work for an average
installation. You will want to customize this in order to run extra
things you want and/or to prevent running things you do not want. With
the dozen or so standard startup scripts things can seem confusing,
but after you read this article you should be able to understand
enough to create a custom environment when you boot that exactly suits
you.
As I stated earlier, I will begin by assuming that init has started
successfully. It will examine the file /etc/inittab to determine what
to do. In that file are located the lines to activate your login
devices such as terminals, modems, and your virtual consoles. Leave
that stuff alone. What we are interested in are the lines which call
the startup/shutdown scripts. These lines will look something like
this:
# Default runlevel.
id:3:initdefault:
# System initialization (runs when system boots).
si::sysinit:/etc/rc.d/rc.S
# Script to run when going single user (runlevel 1).
l1:1:wait:/etc/rc.d/rc.K
# Script to run when going single user (runlevel S or s)
mm:S:wait:/etc/rc.d/rc.K2
# Script to run when going multi user.
rc:23456:wait:/etc/rc.d/rc.M
# Runlevel 0 halts the system.
l0:0:wait:/etc/rc.d/rc.0
# Runlevel 6 reboots the system.
l6:6:wait:/etc/rc.d/rc.6
The comments are present and are very helpful. First you need to
determine your default runlevel. In this case it is 3. The format of
the /etc/inittab file section we are looking at is simple. Blank lines
are ignored. Lines with '#' as the first character are comments and
are ignored. Other lines have 4 parts separated by the colon
character. These parts are 1. symbolic label, 2. runlevel, 3. action
and 4.command to run. These are documented in the section 5 manual
page for /etc/inittab (man 5 inittab). First we must find a line with
an action of initdefault, and then see what runlevel it has. That will
be the default runlevel. Obviously you should not have 2 lines that
have initdefault as the action in an /etc/inittab file. Once you know
the default runlevel, you will know what /etc/inittab
entries will be processed by init. The 1 runlevel is considered
single-user maintenance mode, but it supported multiple simultaneous
logins in virtual terminals with the default /etc/inittab on my
systems. You can prevent this by removing the 1 from the getty lines
for tty2, tty3, tty4, etc. The 3 runlevel is considered the normal
multi-user mode with full networking support. The S runlevel is
supposed to be true single-user, and you can theoretically enter that
runlevel using the lilo parameter single. However, for the Slackware
3.2 distribution, that does not put you in a single-user mode as you
would expect, but instead you wind up in runlevel 3. The /etc/inittab
file I show here does not have that problem however. Once you have
read this article you can change the system to behave in the expected
manner. So we know we will go to runlevel 3. That means init will
perform every command in the /etc/inittab file that has a sysinit,
then boot, or bootwait, and finally any entries for our runlevel 3.
When you want to run a script when entering a runlevel, it doesn't
make sense to have more than one script line in the /etc/inittab file
for that level. Instead, you should put everything in 1 script, or
call scripts from within the script mentioned in the /etc/inittab file
using the dot method. One thing to note is that field 2, the runlevel
field, can have more than 1 runlevel specified. The init program will
first run the si entry (and we will wait for it to finish running
/etc/rc.d/rc.S) since it has sysinit (which implies wait) in the third
field. Then it will run everything with 3 specified. So in our example
file we will run the si target, then the rc target (and we will wait
for it to finish running the /etc/rc.d/rc.M script since the third
field is wait), and finally it will do the c1 through c6 targets
which set up the virtual ttys during a normal boot.
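To make the four fields concrete, here is one line from the example
above pulled apart (inittab allows # comments, so notes like these
could even live in the file):
______________________________________________________________________
# label:runlevels:action:command
rc:23456:wait:/etc/rc.d/rc.M
# "rc"             - symbolic label, unique within the file
# "23456"          - this entry applies in runlevels 2,3,4,5 and 6
# "wait"           - init waits for the command to finish
# "/etc/rc.d/rc.M" - the command (here, a script) to run
______________________________________________________________________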
If we boot (via lilo) and add the single parameter, we will still run
the si target (/etc/rc.d/rc.S) and wait for it to complete, but then
we will run the mm target (/etc/rc.d/rc.K2). Keep in mind that
runlevel 1 and runlevel S are essentially the same when you enter
them, but how you get there is very different. Runlevel 1 can be
entered by using the command /sbin/telinit 1, but /sbin/telinit s will
send you to runlevel 5 often for some reason (some kind of bug).
Runlevel 1 will give you a normal log in, and allows 1 user (any 1
user) to log in at the console. With this setup, runlevel S will give
you a special root-only login that allows only root to use the
console. Since only root can log in, a special password prompt is
displayed. If you press enter or ctl-D, the system will return to
runlevel 3. This root-only login is accomplished by using the
/bin/sulogin program. Runlevel S is probably what you want when you
think single-user, but you have to reboot the machine and use lilo and
have the single parameter to make it work. You can use runlevel 1 to
accomplish the same things, but remember you will have to manually
return to runlevel 3 when you are done with another call to
/sbin/telinit 3 or a reboot, and you must insure that nobody else can
get to the console but the root user. WARNING: The true single-user
mode entered with the single parameter to lilo with my /etc/inittab
and /etc/rc.d/rc.K2 will support only 1 console and no other virtual
terminals. Do not run anything that locks up the terminal!
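For reference, here is how you'd ask for each mode; the image name
linux is whatever your lilo.conf defines:
______________________________________________________________________
LILO: linux single     # true single-user (runlevel S) via rc.K2
# or, from a running system:
/sbin/telinit 1        # maintenance mode (runlevel 1) via rc.K
/sbin/telinit 3        # return to multi-user when done
______________________________________________________________________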
Ok, so what do we know now? We know what scripts init will call and
when they will be called. But what can be in those scripts? The
scripts should be written for bash unless you are a real guru and KNOW
the other shell you wrote scripts for will be available during boot.
There is nothing preventing you from using perl or tcsh or whatever,
but traditionally most everyone uses bash scripts (ok, ok, Bourne
shell scripts) for unix boot scripts. The /etc/rc.d/rc.S script which
is called at system boot time should take care of things like fsck'ing
your file systems, mounting them, and starting up swapping and other
essential daemons. These are things that you need independent of
runlevel. The /etc/rc.d/rc.M script which is called when you enter
runlevel 3 should start all the processes that remain that you usually
need during normal system operation EXCEPT things like getty.
Processes that must be restarted whenever they stop running like getty
should be placed in the /etc/inittab file instead of being started by
a boot script. So what is in a typical /etc/rc.d/rc.M script? Usually
configuring the network, starting web servers, sendmail, and anything
else you want to always run, like database servers, quota programs,
etc.
The only startup script I mention in my /etc/inittab that is not
included in the Slackware 3.2 distribution is /etc/rc.d/rc.K2, and it
is merely a modified version of /etc/rc.d/rc.K set up for single user
mode. Remember this is the startup script that will be used if you
choose to enter the single parameter to lilo. At the end of this file
you will see a line:
exec /bin/sulogin /dev/console
This will replace the current process which is running the script with
the /bin/sulogin program. This means, of course, that this has to be
the last line in your script, since nothing after this line will be
processed by bash. After that program starts, it displays a message to
either enter the root password or press ctl-D. If you enter the
correct root password, you will be logged in as root in a true
single-user mode. Be careful, though, because when you exit that shell
the machine will go into runlevel 3. If you want to reboot before
entering runlevel 3 you must remember to do it (via shutdown) instead
of just exiting the shell. If you press ctl-D instead of the root
password, the system will enter runlevel 3. I have changed the
incorrect calls to kill to use the killall5 program, since the lines
with kill caused init to be killed and a runlevel change was happening
incorrectly.
Well, I hope that this description of how I enabled my Linux machine
to have a single-user mode similar to that of the big-name
workstations proves helpful to you. Customizing your boot process is
not too hard, once you understand something about how the /etc/inittab
and /etc/rc.d/* scripts work. Be sure you 1. back up your entire
system, 2. have a boot floppy, and 3. have a rescue floppy that can
restore the backup (or any individual files) you made in step 1, using
the boot floppy from step 2 to boot the machine. If you make a
one-character typo
you can prevent the machine from booting, so the backup steps, while
tedious, are really necessary to protect yourself before you
experiment.
The Files
Here are the files I used. Use at your own risk. They work for me, but
may need to be modified to work for you.
_________________________________________________________________
/etc/inittab
#
# inittab This file describes how the INIT process should set up
# the system in a certain run-level.
#
# Version: @(#)inittab 2.04 17/05/93 MvS
# 2.10 02/10/95 PV
#
# Author: Miquel van Smoorenburg, miquel@drinkel.nl.jugnet.org
# Modified by: Patrick J. Volkerding, volkerdi@ftp.cdrom.com
# Modified by: John Gatewood Ham, zappaman@alphabox.compsci.buu.ac.th
#
# Default runlevel.
id:3:initdefault:
# System initialization (runs when system boots).
si::sysinit:/etc/rc.d/rc.S
# Script to run when going maintenance mode (runlevel 1).
l1:1:wait:/etc/rc.d/rc.K
# Script to run when going single user (runlevel s)
mm:S:wait:/etc/rc.d/rc.K2
# Script to run when going multi user.
rc:23456:wait:/etc/rc.d/rc.M
# What to do at the "Three Finger Salute".
# make the machine halt on ctl-alt-del
ca::ctrlaltdel:/sbin/shutdown -h now "going down on ctl-alt-del"
# Runlevel 0 halts the system.
l0:0:wait:/etc/rc.d/rc.0
# Runlevel 6 reboots the system.
l6:6:wait:/etc/rc.d/rc.6
# What to do when power fails (shutdown to single user).
pf::powerfail:/sbin/shutdown -f +5 "THE POWER IS FAILING"
# If power is back before shutdown, cancel the running shutdown.
pg:0123456:powerokwait:/sbin/shutdown -c "THE POWER IS BACK"
# If power comes back in single user mode, return to multi user mode.
ps:S:powerokwait:/sbin/init 5
# The getties in multi user mode on consoles and serial lines.
#
# NOTE NOTE NOTE adjust this to your getty or you will not be
# able to login !!
#
# Note: for 'agetty' you use linespeed, line.
# for 'getty_ps' you use line, linespeed and also use 'gettydefs'
# we really don't want multiple logins in single user mode...
c1:12345:respawn:/sbin/agetty 38400 tty1 linux
c2:235:respawn:/sbin/agetty 38400 tty2 linux
c3:235:respawn:/sbin/agetty 38400 tty3 linux
c4:235:respawn:/sbin/agetty 38400 tty4 linux
c5:235:respawn:/sbin/agetty 38400 tty5 linux
c6:235:respawn:/sbin/agetty 38400 tty6 linux
# Serial lines
#s1:12345:respawn:/sbin/agetty 19200 ttyS0 vt100
#s2:12345:respawn:/sbin/agetty 19200 ttyS1 vt100
# Dialup lines
#d1:12345:respawn:/sbin/agetty -mt60 38400,19200,9600,2400,1200 ttyS0 vt100
#d2:12345:respawn:/sbin/agetty -mt60 38400,19200,9600,2400,1200 ttyS1 vt100
# Runlevel 4 used to be for an X-window only system, until we discovered
# that it throws init into a loop that keeps your load avg at least 1 all
# the time. Thus, there is now one getty opened on tty1. Hopefully no one
# will notice. ;^)
# It might not be bad to have one text console anyway, in case something
# happens to X.
x1:4:wait:/etc/rc.d/rc.4
# End of /etc/inittab
_________________________________________________________________
/etc/rc.d/rc.K
#!/bin/sh
#
# rc.K This file is executed by init when it goes into runlevel
# 1, which is the administrative state. It kills all
# daemons and then puts the system into single user mode.
# Note that the file systems are kept mounted.
#
# Version: @(#)/etc/rc.d/rc.K 1.50 1994-01-18
# Version: @(#)/etc/rc.d/rc.K 1.60 1995-10-02 (PV)
#
# Author: Miquel van Smoorenburg miquels@drinkel.nl.mugnet.org
# Modified by: Patrick J. Volkerding volkerdi@ftp.cdrom.com
# Modified by: John Gatewood Ham zappaman@alphabox.compsci.buu.ac.th
#
# Set the path.
PATH=/sbin:/etc:/bin:/usr/bin
# Kill all processes.
echo
echo "Sending all processes the TERM signal."
killall5 -15
echo -n "Waiting for processes to terminate"
for loop in 0 1 2 3 4 5 6 7 ; do
sleep 1
echo -n "."
done
echo
echo "Sending all processes the KILL signal."
killall5 -9
# Try to turn off quota and accounting.
if [ -x /usr/sbin/quotaoff ]
then
echo "Turning off quota.."
/usr/sbin/quotaoff -a
fi
if [ -x /sbin/accton ]
then
echo "Turning off accounting.."
/sbin/accton
fi
_________________________________________________________________
/etc/rc.d/rc.K2
#!/bin/sh
#
# rc.K2 This file is executed by init when it goes into runlevel
# S, the true single-user state. It kills all
# daemons and then gives root a single-user login on the console.
# Note that the file systems are kept mounted.
#
# Version: @(#)/etc/rc.d/rc.K 1.50 1994-01-18
# Version: @(#)/etc/rc.d/rc.K 1.60 1995-10-02 (PV)
#
# Author: Miquel van Smoorenburg miquels@drinkel.nl.mugnet.org
# Modified by: Patrick J. Volkerding volkerdi@ftp.cdrom.com
# Modified by: John Gatewood Ham zappaman@alphabox.compsci.buu.ac.th
#
# Set the path.
PATH=/sbin:/etc:/bin:/usr/bin
# Kill all processes.
echo
echo "Sending all processes the TERM signal."
killall5 -15
echo -n "Waiting for processes to terminate"
for loop in 0 1 2 3 4 5 6 7 ; do
sleep 1
echo -n "."
done
echo
echo "Sending all processes the KILL signal."
killall5 -9
# Try to turn off quota and accounting.
if [ -x /usr/sbin/quotaoff ]
then
echo "Turning off quota.."
/usr/sbin/quotaoff -a
fi
if [ -x /sbin/accton ]
then
echo "Turning off accounting.."
/sbin/accton
fi
# Now go to the single user level
exec /bin/sulogin /dev/console
_________________________________________________________________
zappaman@alphabox.compsci.buu.ac.th
Information about me.
_________________________________________________________________
Copyright © 1997, John Gatewood Ham
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
User Groups and Trade Shows
Lessons from the Atlanta Linux Showcase
by Andrew Newton
_________________________________________________________________
Trade shows and expos are not at all uncommon in the computer
industry. But not since the early days of microcomputers, when CP/M
was King and toggle switches were the user interface, have user groups
been heavily involved. So in the era of powerful non-commercial
software, couldn't the trade shows also be non-commercial?
We, the members of the Atlanta Linux Enthusiasts (ALE), found out the
answer is yes. The idea grew out of correspondence with Linux
International about local help for Linux vendors at COMDEX, and out of
our own Linux demo fest (called the "Geek-Off") a year earlier. From
that we put together a non-commercial, user-group-organized trade
show. On June 7, 1997, we put on the largest Linux vendor showcase to
date.
Get Started With The Essentials
So let's say you, being the Linux activist of your community, want to
do your bit to spread the word. Where would you start?
Although we didn't necessarily do this, we learned there are two
essential things to get a Linux trade show off the ground: 1) a time
and place, and 2) a checking account. And in the words of our own Marc
Torres, once you have those two items, "the rest grows from there."
It was a given that we would hold the Atlanta Linux Showcase as close
to COMDEX as possible. After all, this whole idea came from helping
out the Linux vendors at COMDEX. Plus the idea of getting the COMDEX
crowd was good. We theorized that many people flying in for COMDEX
would stay over the following weekend to save on air fare. And they
could easily justify it if they were attending another computer show.
Picking the place was a little more troublesome, but not impossible.
We finally decided on The Inforum because it was located only blocks
away from the venue for COMDEX, was in downtown Atlanta, and well
known to many.
Finally, the checking account is very important. As it turns out we
didn't do this immediately and paid the price in countless hours of
meetings discussing logistics. A checking account is important because
it gives you a place from which to send money and, more importantly, a
place to receive money. People like it better when they can write
checks to "Big Time Linux Event" instead of "Bob Smurd."
One of the major inhibitors behind our acquisition of a checking
account was our incredible lack of knowledge when it comes to the law.
After all, we are a bunch of computer jocks, not attorneys. We had
many seemingly endless discussions on issues such as incorporation,
non-profit status, tax codes, the right to bear arms, etc. In the end,
David Hamm, one of our most active members, just ended up going to a
bank and getting a new checking account under his control.
Incidentally, David became the treasurer.
David Miller eyes a bottle of ALS Ale. - Photo by Amy Ayers
Put Time On Your Side
Unfortunately, we didn't. Of course, we had the COMDEX target date to
shoot for, which gave us little time between our mobilization and the
event. If you can pick a date over six months out, do so. There are
multiple reasons for this, most of which have to do with reserving
space.
First, you must reserve space somewhere to hold your event. We lucked
out in our case, but many venues will require booking many months
ahead of time, especially the ones that don't cost much money, such
as college campuses and state buildings.
Second, you must reserve space in print media for advertising and
publicity. While we were able to get ads in our local computer
magazines and the event listed in some calendars, we did miss
deadlines elsewhere. You may have noticed there were no advertisements
for the Atlanta Linux Showcase in Linux Journal. We missed the
deadline. In addition, it takes time to grease the wheels for free
publicity.
A brief word about FREE PUBLICITY - There is no such thing; you'll
work for every last bit of it. Free publicity means getting listed in
upcoming events calendars and maybe an article or two about Linux in
the local paper with a small plug at the end for the event. If you do
take the time to pay for advertising, use the advertising
representative as a way of getting your event some extra publicity in
that publication. Many publications put on the appearance that their
articles are completely disjoint from their advertising on the basis
of journalistic ethics, but with the exception of SSC that isn't true.
A brief word about paid advertising - It is like buying a used car.
What an ad rep puts on a rate card isn't necessarily the price you
have to pay. Try talking them down. Again, this doesn't apply to SSC.
Organize Your Volunteers
We divided our group into two major camps, organizers and volunteers.
The first were the people that planned the event out for months and
did a lot of the leg work. The second were the people that showed up
the day of the show and manned the registration desk, checked badges,
etc.
You don't want to have too many organizers as it becomes difficult to
manage a large group of people over a large span of time. We divided
up our group into teams of 3 or 4, with many people being on 2 or more
teams. This gave us what philosophical management types like to call
cross-functional teams. By having more than one person on a team, it
helped ensure that no one person was the only source of information or
action.
We had the following teams:
* Vendor - contacted vendors
* Talks - organized the speakers and presenters
* Publicity - handled advertising and publicity
* Finance - dealt with our mounds of gold
* Logistics - managed booth layouts and site coordination
In hindsight, we should also have created a "Registration" team to
handle all the registrations for both walk-in and pre-paid
registrations. Our answer to this was to make the Talks team and the
Logistics team work together, which worked but not as smoothly as we
would have liked. It is better to have a group of people who are
solely in charge of registration and aren't distracted with other
problems.
We didn't solicit for volunteers until a month before our show. In
retrospect, it probably should have been two months. We gave our
volunteers a briefing the day before the Showcase and had a work
schedule already printed when they arrived. We also required them to
work 2 three-hour shifts for manageability purposes and to keep the
number of volunteers to a minimum but in the end solicited for some
more at the last minute (thanks Ben and Vicki).
A brief note about Volunteers - Treat them well, because they are
working for free. And if you do that, most will go the extra mile
treating the attendees well and pulling those extra shifts or duties
you didn't anticipate (thanks James). Also put your "people-person"
types at the registration desk where they will likely be needed the
most. More personable people will be able to sell t-shirts and so
forth much more easily (thanks Karen). Finally, thank your volunteers.
Everybody likes to be told they've done a good job (thanks everyone
else).
Another seemingly weird thing we did was to make our volunteers pay
for the honor of working our event. It seems odd, but it worked. The
idea was to make them show some commitment, so they wouldn't
volunteer and then back out on us at the last minute. In exchange for
their $30, they got a
polo shirt and were able to see all the presentations at a lower price
than anyone else. And the cash flow didn't hurt either. While we
didn't mandate this for the organizers, it wouldn't be a bad idea.
However, all the organizers did have to pay for their own shirts and
many loaned hundreds of dollars to the effort.
Linus, Tove and Patricia meet Zeph Hull, a showcase volunteer. - Photo
by Amy Ayers
Coordinate Vendors
Organize your vendor team so everybody knows which person is calling
what company but only one person is calling each. It is nice for
everyone to know the status of a potential vendor. It is not nice to
have 3 separate people make 3 separate cold calls to the same
potential vendor.
When contacting vendors, use the phone as your primary means of
communications and not e-mail. While it seems e-mail would work, it is
human nature to give it a much lower priority than a phone call. We
found many companies that ignored our e-mails responded quite
positively to our phone calls.
One of the things we should have done sooner was bill the vendors. We
charged each vendor $400 for a booth and sent them an invoice. While
we were expecting one-week turnarounds on payment, the business
world doesn't work that way. In many cases, paperwork and payments can
take up to 30 days to get through the accounting offices of some
companies.
Get People To Talk
We solicited for speakers and presenters over the
comp.os.linux.announce news group. This had to be done multiple times,
but eventually the offers started rolling in. We also drew upon some
local talent. And in many cases, the vendors also wanted to give
presentations.
Getting speakers to volunteer was the easy part. Getting them to the
Showcase was the difficult part. We had to solicit money from
sponsoring companies and the vendors in order to pay for the travel
and lodging expenses for Linus Torvalds, Eric Raymond, David Miller,
Richard Henderson, Miguel de Icaza, and the rest of the crew (and we
still owe a debt of gratitude to Digital and Caldera for all they did
in this department).
Once the money was appropriated, travel plans and hotel accommodations
were made - at least that's how it works in the ideal world. A lot of
the travel costs were floated on the credit cards of organizers until
they could be reimbursed. Do make plane reservations and travel
arrangements as far ahead of time as possible; you can save on air
fare that way.
Work The System
This involves taking advantage of perks and getting the extras out of
the people with whom you are doing business. For example, we decided
to also rent some conference rooms at the Days Inn, which was just
next door to The Inforum. These conference rooms were used for Birds Of A
Feather sessions and impromptu meetings by our attendees. In order to
secure a good price on the room rental, we made an agreement with the
hotel management that we would guarantee they got a certain number of
room bookings based on our event. We then set up that hotel as our
"Official" hotel and asked most of our out-of-town guests to try the
Days Inn first. In addition, our attendees were able to get a
reasonable rate at a downtown hotel. Our guest speakers were also
booked there.
Genie Travel also became our "Official" travel agent. Genie gave us a
certain percentage on every flight booked through them, and their air
fares were very reasonable. Although we didn't take advantage of this
until very late in the game, it would be well worth doing again.
Genie Travel uses Linux in their day-to-day operations
and probably would be very happy to repeat this arrangement with
another Linux event.
Another good idea would be to solicit the help of other Linux users
groups. Often other users groups that are geographically close by may
be able to help. Be sure to get them in on it early in the planning
stage. For instance, we solicited SALUG (Southern Alabama) and
CHUGALUG (Athens, GA), albeit at the last minute. Coordinating with
other users groups also ensures that the Linux community isn't
throwing a trade show every month in towns only 100 miles apart (this
stretches the resources of the Linux vendors and the enthusiasm these
shows generate).
One last thing that can help is to have a Sugar-Daddy. In our case it
was Linux International. Jon "Maddog" Hall of Linux International
helped get us credit, through which we were able to secure our lease on
the rooms at The Inforum. If you ask him nicely, he may do the same
for you.
Have Fun
Keep in mind that organizing such an event is very hard work and
requires a lot of time. We estimate over 2000 man-hours were spent by
ALE members putting together our show. But with any luck, your user
group will be able to pull off a grand Linux event. And remember, have
fun. Don't hold your trade show to make money. Do it to spread the
word of Linux and to cavort with other like-minded Linux hobbyists.
Finally, we'd like to thank all our volunteers, vendors, speakers, and
organizers for helping out with the Atlanta Linux Showcase. If you are
interested in any videos of the presentations at the 1997 Atlanta
Linux Showcase or t-shirts and polo shirts, please visit our web site
at http://www.ale.org/showcase/. And if you have any questions, please
feel free to send us e-mail at ale-expo@cc.gatech.edu.
Greg Hankins hangs out with Maddog and David Hamm. - Photo by Amy
Ayers
_________________________________________________________________
Copyright © 1997, Andrew Newton
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Using Python to Generate HTML Pages
By Richie Bielak, richieb@netlabs.net
_________________________________________________________________
Introduction
I have waited for a long time to set up my own Web site, mostly
because I didn't know what to put there that others may want to see.
Then I got an idea. Since I'm an avid reader and an aviation
enthusiast, I decided to create pages with a list of aviation books I
have read. My initial intention was to write reviews for each book.
Setting up the pages was easy to start with, but as I added more books
the maintenance became tedious. I had to update a couple of indices with
the same data and I had to sort them by hand, and alphabetizing was
never my strong suit. I needed to find a better way.
Around the same time I became interested in the programming language
Python and it seemed that Python would be a good tool to automatically
generate the various HTML pages from a simple text file. This would
greatly simplify the updates of my book pages, as I would only add one
entry to one file and then create complete pages by running a Python
script.
I was attracted to Python for two main reasons: it's very good at
processing strings and it's object-oriented. Of course, the fact that
the Python interpreter is free and runs on many different systems
helped. At first I installed Python on my Win95 machine, but I just
couldn't force myself to do any programming in the Windows
environment, even in Python. Instead I installed Linux and moved all
my Web projects there.
The Problem
The main goal of the program is to generate three different book
indices, by author, by title and by subject, from a single input file.
I started by defining the format of this file. Here is what a typical
entry describing one book looks like:
title: Zero Three Bravo
author: Gosnell, Mariana
subject: General Aviation
url: 3zb.htm
# this is a comment
Each line starts with a keyword (e.g. "title:" or "author:") and is
followed by a value that will be shown in the final HTML page. The
description of each book must start with the "title:" line, there
must be at least one "author:" tag, and the "url:" entry points to a
review of the book, if there is one.
Since Python is object-oriented we begin program design by looking for
"objects". In a nutshell, object oriented (OO) programming is a way to
structure your code around the things, that is "objects", that the
program is working with. This rather simple idea of organizing
software around what it works with (objects), rather than what it does
(functions), turns out to be surprisingly powerful.
Within an OO program similar objects are grouped into "classes" and
the code we write describes each class. Objects that belong to a given
class are called "instances of the class".
I hope it is pretty obvious to you that since the program will
manipulate "book" objects, we need a Python class that will represent
a single book. Just knowing this is enough to let us suspend design
and write some code.
The Book Class
Before we start looking at the code we need to consider briefly how
Python programs are organized. Each program consists of a number of
modules, each module is contained in a file (usually named with the
extension ".py") and the name of the file (without the ".py") serves
as the module name. A module can contain any number of routines or
classes. Typically things that are related are kept in one module. For
example, there is a string module that contains functions that operate
on strings. To access functions or classes from another module we use
the import statement. For example the first line of the Book module
is:
from string import split, strip
which says that the routines split and strip are obtained from the
string module.
Next, I have to point out a few syntactic features of Python that are
not immediately obvious from the code. The most important is the fact that
in Python indentation is part of the syntax. To see which statements
will be executed following an "if", all you need to look at is
indentation - there is no need for curly braces, BEGIN/END pairs or
"fi" statements.
Here is a typical "if" statement extracted from the set_author routine
in the Book class:
if new_author:
    names = split (new_author, ",")
    self.last_name.append (strip (names[0]))
    self.first_name.append (strip (names[1]))
else:
    self.last_name = []
    self.first_name = []
The three statements following the "if" are executed if the
"new_author" variable contains a non-null value. The amount of
indentation is not
important, but it must be consistent. Also note the colon (":") which
is used to terminate the header of each compound statement.
The Book class turns out to be very simple. It consists of routines
that set the values for author, title, subject and the URL for each
book. For example, here is the set_title routine:
def set_title (self, new_title):
    self.title = new_title
The first argument to the "set_title" method (that is, a routine which
belongs to a class) is "self". This argument always refers to the
instance to which the method is applied. Furthermore, the attributes
(i.e. the data contained in each object) must be qualified with "self"
when referenced within the body of a method. In the example above the
attribute "title" of a "Book" object is set to value of "new_title".
If in another part of a program we have variable "b" that references
an instance of a "Book" class this call would set the book's title:
b.set_title ("Fate is the Hunter")
Note that the "self" argument is not present in the call, instead the
object to which the method is applied (i.e. the object before the ".",
"b" above) becomes the "self" argument.
At this point a reasonable question to ask is "Where do the objects
come from?" Each object is created by a special call that uses the
class name as the name of a function. In addition a class can define a
method with the name __init__ which will automatically be called to
initialize the new object's attributes (in C++ such a routine is
called a constructor).
Here is the __init__ routine for the Book class:
def __init__ (self, t="", a="", s="", u=""):
    #
    # Create an instance of Book
    #
    self.title = t
    self.last_name = []
    self.first_name = []
    self.set_author (a)
    self.subject = s
    self.url = u
The main purpose of the above routine is to create all the attributes
of the new "Book" object. Note that the arguments to "__init__" are
specified with default values, so that the caller needs only to pass
the arguments that differ from the default.
Here are some examples of calls to create "Book" objects:
a = Book()
b = Book ("Fate is the Hunter")
c = Book ("Some book", "First, Author")
There is one small complication in the "Book" class. It is possible
for a book to have more than one author. That's why the attributes
"first_name" and "last_name" are actually lists. We'll look more at
lists in the next section.
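Before moving on, here is a quick sketch of the two-author case (my
own example, assuming the set_author routine shown earlier):
b = Book ("Some book", "Smith, Jane")
b.set_author ("Jones, Tom")
print b.last_name        # prints ['Smith', 'Jones']
print b.first_name       # prints ['Jane', 'Tom']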
The complete Book class is shown in Listing #1. To test the class we
add a little piece of code at the end of the file that checks whether
the module is running as "__main__", that is, whether execution
started in this file. If so, the code to test the Book class will run.
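The standard idiom looks something like this (a sketch of the idea;
the actual test code is in Listing #1):
if __name__ == '__main__':
    # Execution started in this file, so run a quick test.
    b = Book ("Fate is the Hunter", "Gann, Ernest")
    print b.title
    print b.last_name, b.first_name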
The Book_List Class
Once the Book is tested we can go back to designing. The next obvious
object is a list which will contain all the "book" objects. For the
purposes of our program we have to be able to create the book list
from the input file and we have to sort the books in the list by
author, title or subject. The sorted list will then be used as input
to the code that actually generates HTML pages.
As it turns out one of Python's built-in data structures is a list.
Here is a snippet of code showing creation of a list and addition of
some items (this example was produced by running Python
interactively):
Python 1.4 (Dec 18 1996) [GCC 2.7.2.1]
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> s = []
>>> s.append ("a")
>>> s.append ("hello")
>>> s.append (1)
>>> print s
['a', 'hello', 1]
Above we create a list called "s" and add three items to it. Lists
allow "slicing" operations, which let you pull out pieces of a list by
specifying element numbers. These examples illustrate the idea:
>>> print s[1]
hello
>>> print s[1:]
['hello', 1]
>>> print s[:2]
['a', 'hello']
>>> print s[0]
a
s[1] denotes the second element of the list (indexing starts at zero),
s[1:] is the slice from the second element to the end of the list,
s[:2] takes everything from the start up to (but not including) the
third element, and s[0] is the first item.
Finally, lists have a "sort" operator which sorts the elements
according to a user-supplied comparison function.
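For instance, a comparison function can be handed straight to "sort".
Here is a small interactive sketch of my own that sorts in descending
order:
>>> def backwards (x, y):
...     return cmp (y, x)
...
>>> s = [3, 1, 2]
>>> s.sort (backwards)
>>> print s
[3, 2, 1]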
Armed with the knowledge of Python lists, writing the Book_List class
is easy. The class will have a single attribute, "contents", which
will be a list of books.
The constructor for the Book_List class simply creates a "contents"
attribute and initializes it to be an empty list. The routine that
parses the input file and creates list elements is called
"make_from_file" and it begins with the code:
def make_from_file (self, file):
    #
    # Read the file and create a book list
    #
    lines = file.readlines ()
    self.contents = []
The "file" argument is a handle to an open text file that contains the
descriptions of the books. The first step this routine performs is to
read the entire file into a list of strings, each string representing
one line of text. Next, using Python's "for" loop we step through this
list and examine each line of text:
#
# Parse each line and create a list of Book objects
#
for one_line in lines:
    # It's not a comment or empty line
    if (len(one_line) > 0) and (one_line[0] != "#"):
        # Split into tokens
        tokens = string.split (one_line)
If the line is not empty and is not a comment (that is, the first
character is not a "#"), then we split the line into words, a word
being a sequence of characters without spaces. The call "tokens =
string.split (one_line)" uses the "split" routine from the "string"
module. "split" returns the words it found in a list.
        if len (tokens) > 0:
            if (tokens[0] == "title:"):
                current_book = book.Book (string.join (tokens[1:]))
                self.contents.append (current_book)
            elif (tokens[0] == "author:"):
                current_book.set_author (string.join (tokens[1:]))
            elif (tokens[0] == "subject:"):
                current_book.set_subject (string.join (tokens[1:]))
            elif (tokens[0] == "url:"):
                current_book.set_url (string.join (tokens[1:]))
The first token (i.e. word) on the line is the keyword that tells us
what to do. If it is "title:" then we create a new Book object and
append it to the list of books, otherwise we just set the proper
attributes. Note that the remaining tokens found on each line are
joined together into a string (using the "string.join" routine). There is
probably a more efficient way to code this, but for my purposes this
code works fast enough.
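An interactive session shows the split-and-join round trip on one of
the sample input lines (my own example):
>>> import string
>>> tokens = string.split ("author: Gosnell, Mariana")
>>> print tokens
['author:', 'Gosnell,', 'Mariana']
>>> print string.join (tokens[1:])
Gosnell, Mariana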
The other interesting parts of the Book_List class are the sort
routines. Here is how the list is sorted by title:
def sort_by_title (self):
    #
    # Sort book list by title
    #
    self.contents.sort (lambda x, y: cmp (x.title, y.title))
We simply call the "sort" routine on the list. To get proper ordering we
need to supply a function that compares two Book objects. For sorting
by title we have to supply an anonymous function, which is introduced
with the keyword "lambda" (those of you familiar with Lisp, or other
functional languages should recognize this construct). The definition:
lambda x, y: cmp (x.title, y.title)
simply says that this is a function of two arguments and function
result comes from calling the Python built-in function "cmp" (i.e.
compare) on the "title" attribute of the two objects.
The other sort routines are similar, except that in "sort_by_author" I
used a local function instead of a "lambda", because the comparison
was a little more complicated - I wanted to have all the books with the
same author appear alphabetically by title.
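Such a local comparison function might look roughly like this (a
sketch of the idea only; the function name is my own invention, and
the real routine is in Listing #2):
def by_author (x, y):
    # Compare last names first; fall back to the title
    # when the authors are the same.
    result = cmp (x.last_name, y.last_name)
    if result == 0:
        result = cmp (x.title, y.title)
    return result
self.contents.sort (by_author)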
You can see the complete listing of the Book_List class in Listing #2.
Generating Pages
Now that we have constructed a list of books, the next step is to
create the HTML pages. We begin by creating a class, called Html_Page,
that generates the basic outline of a page, and then we extend that class
to create the titles, authors and subjects pages.
The idea that existing code can be extended yet not changed is the
second most important idea of OO programming. The mechanism for doing
this is called "inheritance" and it allows the programmer to create a
new class by adding new properties to an old class and the old class
does not have to change. A way to think about inheritance is as
"programming by differences". In our program we will create three
classes that inherit from Html_Page.
Html_Page is quite simple. It consists of routines that generate the
header and the trailer tags for an HTML page. It also contains an
empty routine for generating the body of the page. This routine will
be defined in descendant classes. The __init__ routine lets the user
of this class specify a title and a top-level heading for the page.
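In skeleton form the class might look something like this (an
abbreviated sketch -- the generate_header name is my guess, the
method bodies are omitted, and the complete class is in Listing #3):
class Html_Page:
    def __init__ (self, new_title, new_heading):
        self.title = new_title
        self.heading = new_heading
    def generate_header (self):
        # writes the <html>, <head> and <title> tags
        pass
    def generate_body (self):
        # empty; redefined in descendant classes
        pass
    def generate_trailer (self):
        # writes the closing </body> and </html> tags
        pass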
When I first tested the output of the HTML generators I simply printed
it to the screen and manually saved it into a file, so I could see the
page in a browser. But once I was happy with the appearance, I had to
change the code to save the data into a file. That's why in Html_Page
you will see code like this:
self.f.write ("<html>\n")
self.f.write ("<head>\n")
for writing the output to a file referenced by the attribute "f".
However, since the actual output file will be different for each
page, opening the file is deferred to a descendant class.
You can see the complete code for Html_Page in Listing #3. The three
classes Authors_Page, Titles_Page and Subjects_Page are used to create
the final HTML pages. Since these classes belong together I put them
in one module, called books_pages. Because the code for these classes
is very similar, we will look only at the first one.
Here is how Authors_Page begins:
class Authors_Page (Html_Page):
    def __init__ (self):
        Html_Page.__init__ (self, "Aviation Books: by Author",
                            "<i>Aviation Books: indexed by Author</i>")
        self.f = open ("books_by_author.html", "w")
        print "Authors page in--> " + self.f.name
To start with, note that the class heading lists the name of the class
from which Authors_Page inherits, namely Html_Page. Next notice that the
constructor invokes the constructor from the parent class, by calling
the __init__ routine qualified by the class name. Finally, the
constructor names and opens the output file. I decided not to make the
file name a parameter for my own convenience to keep things simple.
Since the book list is needed to generate the body of each page, I
added a book_list attribute to each page class. This attribute is set
before HTML generation starts.
The generate_body routine redefines the empty routine from the parent
class. Although fairly long, the code is pretty easy to understand
once you know that the book list is represented as an HTML table and
the "+" is the concatenation operator for strings.
In addition to replacing the generate_body routine we also redefine
generate_trailer routine in order to put a back link to the book index
at the bottom of each page:
def generate_trailer (self):
    self.f.write ("<hr>\n")
    self.f.write ("<center><a href=books.html>Back to Aviation Books Top Page</a></center>\n")
    self.f.write ("<hr>\n")
    Html_Page.generate_trailer (self)
Notice how right after we generate the back link, we include a call to
the parent's generate_trailer routine to finish off the page with the correct
terminating tags.
The complete listings for the three page-generating classes are found
in Listing #4.
The main line of the entire program is shown in Listing #5. By now the
code there should be self-explanatory.
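In outline it does little more than this (a rough sketch; the input
file name and the exact order of calls are my guesses, and the real
thing is in Listing #5):
import book_list, books_pages

f = open ("books.txt", "r")            # input file name is a guess
books = book_list.Book_List ()
books.make_from_file (f)
books.sort_by_author ()

page = books_pages.Authors_Page ()
page.book_list = books                 # set before generation starts
page.generate_header ()
page.generate_body ()
page.generate_trailer ()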
Summary
As you can see this particular program was not hard to write. Python
is well suited for these types of tasks; you can quickly put together
a useful program with minimal fuss.
After I got the program to work, I realized that its design is not
the best. For example, the HTML generating code could be more
general; perhaps the Book class should generate its own HTML table
entries. For now the program fits my purposes, but I will modify it
if I need to create other HTML generating applications.
If you would like to see the results of this script, visit my book page.
To learn more about Python you should start with the Python Home Page
which will point you to many Python resources on the net. I also found
the O'Reilly book Programming Python by Mark Lutz extremely
helpful.
Finally, any mistakes in the description of Python features are my own
fault, as I'm still a Python novice.
_________________________________________________________________
Copyright © 1997, Richie Bielak
Published in Issue 19 of the Linux Gazette, July 1997
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Using SAMBA to Mount Windows 95
By Jonathon Stroud, jgstroud@eos.ncsu.edu
_________________________________________________________________
Many major universities are now offering network connections to
students in their rooms. This is really a wonderful thing for the
Linux community. While the majority of student-owned computers on
these networks are still running Windows 95, many students are making
the switch to Linux. One thing that newcomers to Linux are constantly
asking is, "Can I access a directory shared by a Windows 95 computer
in the 'Network Neighborhood', and can I share files to Windows 95
users?" The answer, of course, is YES. I keep trying to tell them that
there is nothing that Linux can not do, yet they continue to come to
me and ask if they can do this in Linux, or if they can do that. I
have never once answered no.
Samba
To mount a Windows 95 share, we use a program called Samba. Samba is a
program that allows Linux to talk to computers running Windows for
Workgroups, Windows 95, Windows NT, Mac OS, and Novell NetWare. Samba
even allows you to share a printer between computers using these
different operating systems. Samba comes with most distributions of
Linux, but if you do not have it installed, you can obtain a copy from
the Samba home page at http://lake.canberra.edu.au/pub/samba/.
Mounting Windows 95 Shares
The first thing you will probably want to do is check to see what
directories are shared on the computer you are trying to mount off of.
To do this type smbclient -L computername. This will list all the
directories shared by the machine. To mount the directory, we use the
command smbmount. Smbmount can be a little tricky though. I have
created a script, named smb, that allows users to mount drives using
smbmount, with relative ease.
#!/bin/sh
# usage: smb computername sharename
if [ $UID = 0 ]; then
    if [ ! -d /mnt/$1 ]; then
        mkdir /mnt/$1
    fi
    # You may want to add the -u option here also if you need to
    # specify a login id (ie: mounting drives on Windows NT)
    /usr/sbin/smbmount //$1/$2 /mnt/$1 -I $1
else
    if [ ! -d ~/mnt/ ]; then
        mkdir ~/mnt/
    fi
    if [ ! -d ~/mnt/$1 ]; then
        mkdir ~/mnt/$1
    fi
    # You may want to add the -u option here also if you need to
    # specify a login id (ie: mounting drives on Windows NT)
    /usr/sbin/smbmount //$1/$2 ~/mnt/$1 -I $1
fi
To execute this script you simply type smb followed by the name of the
computer you are mounting off of, and then the directory you wish to
mount (ex. smb workstation files). If you are root, the script creates
a directory in /mnt by the same name as the computer, and mounts the
directory there. For any other user, the script makes a directory in
the user's home directory named mnt. In that directory it makes another
directory by the same name as the computer and mounts the share there.
Sharing files with Windows 95
Now to share a file. This also is not too difficult. To share a
directory you need to edit /etc/smb.conf. By default, Samba shares
users' home directories, but they are only visible (and accessible) to
the owner. This means that the person accessing the share should be
logged into Windows 95 with the same login id as they use to log into
your Linux box.
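For reference, the [homes] section in a stock smb.conf looks roughly
like the following (your distribution's defaults may differ slightly):
[homes]
   comment = Home Directories
   browseable = no
   writable = yes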
Let's say you want to let 'bob' access the directory '/shares/files',
and you do not want anyone else to access it. To do this, add these
lines to your /etc/smb.conf file.
[bobsfiles]
comment = files for bob
path = /shares/files
valid users = bob
public = no
writable = yes
printable = no
1. [bobsfiles] indicates the name the directory will be shared under.
2. comment is a comment that can be displayed in the Windows 95
Network Neighborhood.
3. path lists the directory on your computer that will be shared.
4. valid users lists the users who are allowed to access the share.
5. public, when set to yes, allows users to access the directory
with guest privileges.
6. writable indicates whether or not the user has write permission
in the shared directory.
7. printable, when set to yes, allows users to spool print jobs from
that directory.
More examples on sharing files can be found in the default smb.conf
file. For more help on setting up this file, see the Samba web page,
or type man smb.conf.
More cool Samba stuff
If a Windows 95 user on your network is running winpopup (an instant
messaging program), you can send them a winpopup message using Samba.
To do this, just type
smbclient -M computername
then enter your message text, and press Ctrl-D to send it.
_________________________________________________________________
Copyright © 1997, Jonathon Stroud
Published in Issue 19 of the Linux Gazette, July 1997
Linux Gazette Back Page
Copyright © 1997 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
Copying License.
_________________________________________________________________
Contents:
* About This Month's Authors
* Not Linux
_________________________________________________________________
About This Month's Authors
_________________________________________________________________
Larry Ayers
Larry Ayers lives on a small farm in northern Missouri, where he is
currently engaged in building a timber-frame house for his family. He
operates a portable band-saw mill, does general woodworking, plays the
fiddle and searches for rare prairie plants, as well as growing
shiitake mushrooms. He is also struggling with configuring a Usenet
news server for his local ISP.
Jim Dennis
Jim Dennis is the proprietor of Starshine Technical Services. His
professional experience includes work in the technical support,
quality assurance, and information services (MIS) departments of
software companies like Quarterdeck, Symantec/ Peter Norton Group, and
McAfee Associates -- as well as positions (field service rep) with
smaller VAR's. He's been using Linux since version 0.99p10 and is an
active participant on an ever-changing list of mailing lists and
newsgroups. He's just started collaborating on the 2nd Edition for a
book on Unix systems administration. Jim is an avid science fiction
fan -- and was married at the World Science Fiction Convention in
Anaheim.
John Eikenberry
John currently lives in Athens, GA where he is both a student and an
employee of the University of Georgia. He is working on his masters
thesis in artificial intelligence while working full time as a system
administrator and programmer for the College of Education. Prior to
his coming to Athens, John studied psychology and philosophy ending
with a Masters of Philosophy from the University of Toledo. He has
been using Linux since 1994 and maintains the Linux Ai/Alife
mini-Howto.
John Gatewood Ham
John Ham was born June 10, 1964, in Florence, Alabama. He has a B.S.,
Mathematics, from The University of the South, Sewanee, TN and an
M.S., Computer Science, from The University of Missouri-Rolla, Rolla,
MO. He is currently working as an Instructor in the Computer Science
Department at Burapha University, Bang Saen, Cholburi, Thailand. He
teaches in English -- he does not speak Thai. He lives in Thailand,
because his wife is Thai and did not wish to live in the United
States. His Home Page
Michael J. Hammel
Michael J. Hammel is a transient software engineer with a background
in everything from data communications to GUI development to
Interactive Cable systems--all based in Unix. His interests outside of
computers include 5K/10K races, skiing, Thai food and gardening. He
suggests if you have any serious interest in finding out more about
him, you visit his home pages at http://www.csn.net/~mjhammel. You'll
find out more there than you really wanted to know.
Evan Leibovitch
Evan is a Senior Analyst for Sound Software of Brampton, Ontario,
Canada. He's installed almost every kind of Unix available for Intel
systems over the past dozen years, and this year his company became
Canada's first Caldera Channel Partner.
Mike List
Mike List is a father of four teenagers, musician, printer (not
laserjet), and recently reformed technophobe, who has been into
computers since April 1996, and Linux since July.
Andy Newton
Andy Newton is a Java programmer for Automated Logic Corporation and
has been an active member of the Atlanta Linux Enthusiasts for two
years. When not playing with computers, he enjoys running,
backpacking, political banter and spending time with his fiancee,
Karen. His Home Page
Justin Seiferth
When Justin's not busy improving our nation's information boreen, he's
at home hacking various projects. If you are cut off by a silver
coupe with New Mexico plates on the roads around our nation's capital,
feel free to wave hello! Justin and his family will be making their
annual sojourn to relatives in Ireland during July -- he'd like to hear
from fellow Linux users over there.
Cliff Seruntine
Cliff Seruntine is a writer and an electronics and computer
technician, web designer, and all-around hacker. He lives in Alaska
with his family of four where they fight a never-ending battle against
the evil computer assimilators and spend their weekends salmon
fishing. He'd love to have you over to visit. Meet him at
http://www.micronet.net
_________________________________________________________________
Not Linux
_________________________________________________________________
Thanks to all our authors, not just the ones above, but also those who
wrote giving us their tips and tricks and making suggestions. Thanks
also to our new mirror sites.
My assistant, Amy Kukuk, did all the work again this month. She's so
good to me. Thank you, Amy.
I'm going on vacation from July 3 to July 13, and I am truly looking
forward to it. I've been working much too hard since taking over as
Editor of Linux Journal, and a week or so with no work in my thoughts
is going to be a much needed break. Riley and I are flying to Southern
California to visit his dad--the esteemed UCLA Professor Emeritus, Dr.
Ralph Richardson. We also will be visiting my daughter Lara and her
children. Pictures of all my grandchildren are on my home page--they
are, of course, the most beautiful and most intelligent grandkids in
the world. I am very proud of them, as you can see.
Have fun!
_________________________________________________________________
Marjorie L. Richardson
Editor, Linux Gazette gazette@ssc.com
_________________________________________________________________
Linux Gazette Issue 19, July 1997, http://www.ssc.com/lg/
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com