Linux Gazette... making Linux just a little more fun!
Copyright © 1996-98 Specialized Systems Consultants, Inc.
_________________________________________________________________
Welcome to Linux Gazette! (tm)
_________________________________________________________________
Published by:
Linux Journal
_________________________________________________________________
Sponsored by:
InfoMagic
S.u.S.E.
Red Hat
Our sponsors make financial contributions toward the costs of
publishing Linux Gazette. If you would like to become a sponsor of LG,
e-mail us at sponsor@ssc.com.
_________________________________________________________________
Table of Contents
February 1998 Issue #25
_________________________________________________________________
* The Front Page
* The MailBag
+ Help Wanted
+ General Mail
* More 2 Cent Tips
+ Linux - 2 Cents about vim for pico users
+ My 1/50th of a Dollar
+ sound problems
+ Filtering output of binary files
+ Easter Eggs in Netscape
+ RE: Perl and HTML
+ Update locate
+ Doing spaces in file names
+ Mailing binary files to Microsoft clients
+ Linux and Routing
+ Linux and Routing 2
+ Netscape's Abouts
+ Netscape on the Desktop
+ Re: Printing Problems
+ Re: Using a 386 Computer
* News Bytes
+ News in General
+ Software Announcements
* The Answer Guy, by James T. Dennis
+ Removing LILO, Reinstalling MS-DOS
+ Running as root on Standalone Systems -- DON'T
+ More on Netscape Mail Crashes
* Book Review: A Practical Guide to Linux, by Bernard Doyle
* Bourne/Bash: Shell Programming Introduction, by Rick Dearman
* Clueless at the Prompt, by Mike List
* Confessions of a Former VMS Junkie, by Russell C. Pavlicek
* EMACSulation, by Eric Marsden
* Gathering Usage Stats, by Randy Appleton
* The Graphics Muse, by Michael J. Hammel
* Hylafax, by Dani Pardo
* Linux Compared to Other Operating Systems, by Elof Soerensen
* Linux Ports, by Ross Linder
* Linux and Windows95, by Leonardo Lopes
* New Release Reviews, by Larry Ayers
+ GCC News
+ Gmemusage: A Distinctive Memory Monitor
+ Xephem
* A Simple Internet Dialer for Linux, by Martin Vermeer
* Secure Public Access Internet Workstations, by Steven Singer
* The Software World--It's a Changin', by Phil Hughes
* The Back Page
+ About This Month's Authors
+ Not Linux
The Answer Guy
The Weekend Mechanic will return.
_________________________________________________________________
TWDT 1 (text)
TWDT2 (HTML)
are files containing the entire issue: one in text format, one in
HTML. They are provided strictly as a way to save the contents as one
file for later printing in the format of your choice; there is no
guarantee of working links in the HTML version.
_________________________________________________________________
Got any great ideas for improvements? Send us your comments, criticisms,
suggestions and ideas.
_________________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
The Mailbag!
Write the Gazette at gazette@ssc.com
Contents:
* Help Wanted
* General Mail
_________________________________________________________________
Help Wanted
_________________________________________________________________
Date: Tue, 06 Jan 1998 17:09:31 +1000
From: Peter Scott webguru@planet-sex.com
Subject: Help - Adding third hard drive
I've been using Slackware Linux 2.0.29 for quite some time. I've
managed with 2 drives with partitions for Win95 and Linux, but now I
need to add another drive. It is recognised in the BIOS and can be
found in Windoze, but I get no joy from Linux. I expected to be able
to mount the drive straight away. Do I need to do some insmod or
mke2fs or something?
# mount /dev/hdc1 /mnt
mount: the kernel does not recognize /dev/hdc1 as a block device
(maybe `insmod driver'?)
# fdisk /dev/hdc
Unable to open /dev/hdc
I've got a feeling that I need to reconfigure LILO or something? I
know that I've forgotten something obvious, but I've wasted hours
without any joy.
thanx,
pete
_________________________________________________________________
Date: Tue, 06 Jan 1998 19:55:08 +0000
From: George Russell george.russell@clara.net
Subject: Installing StarOffice 3.1 on Redhat 4.2
I'm having difficulties installing StarOffice onto my system. I've
installed the rpms for static binaries, common files, english demos
and english docs. I've run the setup script and done a user install,
after updating libc and ld.so, and would like to now run the package.
It says to install two daemons, svdaemon and svportmap . These both
need something called rpc and the portmap daemon, which I can find no
reference to on my system. How can I install these so that StarOffice
will run? All help gratefully received.
--
George Richard Russell
_________________________________________________________________
Date: Sat, 10 Jan 1998 05:55:26 -0600
From: ReXsOn RuLeZ abernal@theonramp.net
Subject: Question....
Hello there... sorry to bother you with probably one of the stupidest
questions in the world, but I want to install Linux on my computer.
The problem is that I share the computer with my family, and of course
they don't have a clue what I do, so they don't care; they just want
to be able to use Office to do their work. I've looked around various
Linux sites trying to find an answer but I've been unsuccessful. My
question is this: as I told you, I have a Windows 95 box, and all the
documents out there focus on installing Linux from a DOS environment,
which in my case (and for a lot of people) is history. I have a 3 GB
hard drive that I think is already partitioned in two, because I have
a C: and a D:; I'm not sure if they're separate hard drives or one
drive that is partitioned. What I would like to know is what I should
do here. I'm not sure if my second drive, which is 1 GB, is already
formatted for Windows, which I think it is because it has a Recycle
Bin. I don't know what to do, because I'm afraid that if I delete or
erase the partition, the whole thing will become one hard drive and I
will have to erase everything to partition that one drive. I was kind
of hoping there was a way of partitioning my second hard drive (D:)
and leaving a part for Windows and another for Linux.
As I told you earlier, I'm sorry to ask this kind of question, but if
I damage something on this computer my dad would deprive me of even
looking at it, and I don't want that to happen. I would really
appreciate your help, since I'm really eager to use Linux.
Thanx in advance for your help.
Rexson
_________________________________________________________________
Date: Sat, 17 Jan 1998 20:39:14 -0400
From: Frank Nazario webmaster@prplaza.net
Subject: HI...
I've just finished browsing your Gazette and it is very cool... As the
web administrator at http://www.prplaza.net I was fed up with the
performance and slowness of an NT environment and decided, somewhat
reluctantly, to migrate to Linux (right now I'm a green thumb at it)...
But, and a big BUT, after seeing a single-processor Pentium Pro 200
server running Red Hat Linux 5.0 and the Apache web server blow the
doors off a dual Pentium Pro 200 running NT and IIS 3.0... I was sold
on the spot... never to touch Microsoft NT again... and feeling good
about it.
My problem is this: I've gone bananas trying to find a document that
explains how to install, in a step-by-step fashion, the Apache SSL
"extensions" on one of my Apache web servers (the performance increase
is awesome). Can you, or anyone who reads this, help?
Thanks beforehand for your response.
Frank Nazario San Juan, Puerto Rico
_________________________________________________________________
Date: Sun, 18 Jan 1998 11:36:50 -0500
From: Michael Vore mvore@digex.net
Subject: Problems with CD-ROM
Admittedly I'm not sure where the problem lies. When using
NT 4/Netscape Communicator 4 to view the CD-ROM, all links look like
"file:///El/lj/", which of course will not be found. I have looked at
the source to try to find where the double '//' and the '/El' come from.
Any ideas? (And any workarounds?)
At the moment I don't have X running on my Linux machine -- it's a new
install of Red Hat, and during the upgrade I forgot to save the XConfig
files.
mike
_________________________________________________________________
Date: Mon, 19 Jan 1998 09:52:28 -0500 (EST)
From: Michael Stutz stutz@dsl.org
Subject: Help Wanted: SVGALIB Screenshots?
Is there any way to make a screenshot of a graphical program that runs
on the console (_not_ in X)?
_________________________________________________________________
Date: 21 Jan 98 11:28:04 -0500
From: Jonathan Smith SMITHJL@detroitedison.com
Subject: netcfg
I am using X to connect to my ISP via the netcfg command. I have it
starting up at boot time. This works great, but I was wondering where
the chat script and pppd command are hiding for the ppp interface that
you can create via netcfg.
I was also wondering if there was a way to prevent my ISP from
dropping me after a given time of inactivity. I am using a cron job to
ping my ISP, but that does not seem to prevent this from happening.
Should I try pinging a server other than my ISP?
Thanks, Jonathan Smith.
_________________________________________________________________
Date: Thu, 22 Jan 1998 02:38:11 -0800 (PST)
From: Jaume Vicent jvicent@yahoo.com
Subject: Sound Card MED3931
I'm a new Linux user (kernel 2.0.29). My sound card is a MED3931, with
a chip OPTi 82c931.
As it is a PnP card, I use the isapnptools-1.9 package, loading the
sound support as a module. I've tried configure it as a MAD16 or MSS
(Microsoft Sound System) but it hasn't worked in any way.
I don't know if the problem is with the IRQ/DMA/IO settings (I use the
same ones as with Windows 95) that I set in the /etc/isapnp.conf file,
or it is that the sound driver (OSS/Free) just doesn't support this
card.
Can you help? Thank you.
_________________________________________________________________
Date: Thu, 22 Jan 1998 22:51:09 +0200
From: Asaf Wiener wasaf@writeme.com
Subject: Where can I download Linux (for free)?
I have an Intel Pentium, and I would like to install Linux on my
computer. But I don't know where I can download the installation
files (for free). I heard that Linux is free software, but I can't
find a site where I can download Linux for free. Please help me to
find such a site (and also some install instructions).
Thank you,
Asaf wiener
Look on the Linux Resources page to find pointers to everything you
need, http://www.linuxresources.com/ --Editor
_________________________________________________________________
Date: Fri, 23 Jan 1998 14:59:00 +1100
From: Peter Lee peterl@localgov.wacher.com.au
Subject: POP3d Problem
I am having a problem connecting to the POP3 server in Linux. This
problem only arises when a new email arrives or there is mail in my
mailbox. If I delete all my mail I can connect to the POP3 server.
The error message I get is:
ERR - being read already /usr/spool/mail/
I have removed the account and recreated it. The problem still
occurs, and I know I am the only one logged on. The only way I can
read my mail is by manually telnetting to the Linux box and using
either pine or mail from the Unix command line.
Any Suggestions ?
PETER LEE
_________________________________________________________________
Date: Fri, 23 Jan 1998 13:02:25 -0003
From: RAFAGUI@if.ufrgs.br
Subject: Printing PostScript ...
I really want to print some files in PostScript form but I can't... I
have read a lot of docs, and unfortunately when I try to print I just
get the raw PostScript source.
I tried configuring the 'printcap' and so on, but no luck. My printer
is a Canon 4200 BJ and I am running the Slackware 96 version of Linux.
Oh: I just took a look at the Red Hat 5.0 comments and it seems to
make this easier. I am thinking about purchasing it.
Thanks a lot. I would appreciate a reply.
Regards,
Rafael.
_________________________________________________________________
Date: Sat, 24 Jan 1998 08:09:51 -0500 (EST)
From: Casimer P. Zakrzewski zak@acadia.net
Subject: IBM 8514 monitor
Hi. I'm new to Linux, and my problem is installing my monitor, an IBM
8514, for use with X-Window. I use an S3 Virge 86C325 accelerator
card.
After installing Red Hat 4.2, the monitor works fine for command line,
but when I try using it for X-Window, the screen shrivels up to
something less than a 3x5 recipe card! I've tried reconfiguring
different combinations of color depths and screen resolutions, and
have come up with everything from a blank screen to 'your worst
nightmare'.
I'm stumped. I've tried the different FAQ sites, but can't find one
that can give me a hint of how to configure this monitor for use with
X-Window. I'd appreciate any help anyone can give me. I had to resort
to installing Win95 and IE (which work for this monitor/card) just to
send this out. Thanks in advance for any help I can get.
Zak
_________________________________________________________________
Date: Sun, 25 Jan 1998 11:06:17 -0800
From: Apple Annie annel@cdsnet.net
Subject: Re: Remote address on Chat sites
I am not a very literate person on the Internet, so many of the
vernaculars used are over my head. However, I am having a problem with
people "killing" me whenever they wish to, in a Java chat room. I would
like to be able to keep my remote number from coming up with my user
name. Is there a simple way of doing this? I do not want these
people to locate my server and other information. Thanks for your
response, if you can give me help. If not, then thank you anyhow.
Sincerely yours,
Anne L
_________________________________________________________________
Date: 27 Jan 98 13:18:56
From: dennis.j.smith dennis.j.smith@ArthurAndersen.com
Subject: Linux and VAX 3400 and 3300
I have just purchased a MicroVAX 3400 and 3300. I would like to put
Linux on these two systems. Can you provide any help in this area?
Dennis
_________________________________________________________________
Date: Wed, 28 Jan 98 15:54:55 MST
From: Antony Ware aware@acs.ucalgary.ca
Subject: Linux and Children
I've been hunting around for linux software for toddlers. My eldest
(3) has had fun with xpaint, and he likes to "type", but there's a lot
more going on at his level on my DOS partition. (:-(
So far, my searches in the linux world have turned up...nothing. Does
anyone know of anything out there?
Cheers,
Tony Ware
_________________________________________________________________
General Mail
_________________________________________________________________
Date: Mon, 5 Jan 1998 12:14:09 PST
From: Marty Leisner leisner@sdsp.mc.xerox.com
Subject: I don't like long articles
Glen Fowler's article comparing NT and Linux processes is too long for
the Gazette (IMHO) and didn't print out well from Netscape...
HTML is not the best medium for everything; maybe the Gazette should
have an abstract and a URL for a PostScript or TeX master.
marty
_________________________________________________________________
Date: Tue, 06 Jan 1998 11:07:06 +0100
From: Trond Eivind Glomsrød teg@pvv.ntnu.no
Subject: Linux and routing
In the January issue, you ask for readers to write an article on how
to connect a LAN via just one IP address...
That is rather unnecessary -- there is a mini-HOWTO on it, called "IP
Masquerade mini-HOWTO", available from your favorite LDP mirror.
Trond Eivind Glomsrød
_________________________________________________________________
Date: Tue, 13 Jan 1998 22:26:57 -0600
From: chris rennert lavithan@execpc.com
Subject: Rookie
Hello, I am a newbie to Linux and I am very excited that I stumbled
across the Linux Gazette. I have been wanting to put up a home LAN
with two PCs for some time, and the article on SAMBA has put me on
the right track. I am a computer science student here in Wisconsin and
I love using Linux. I will keep reading; I just hope that you keep
printing. Thanks again, and I hope I can contribute in the future to
this great mag.
Chris Rennert
_________________________________________________________________
Date: Sat, 17 Jan 1998 01:32:06 -0700
From: Sean Horan sean@olam.ed.asu.edu
Subject: Server uptime
I'm sending this maybe as news, perhaps Linux stability and advocacy.
I heard that the record for keeping a Linux server up continuously
without a reboot is six months. Here's the output from executing 'w' on
our Linux 1.2.8 system:
1:48am up 274 days, 17:05, 1 user, load average: 1.09, 1.02, 1.00
User tty from login@ idle JCPU PCPU what
sean ttyp1 sss2-01.inre.as 1:46am w
274 days, 24 hours a day. Never restarted. How common is this?
Let us know
--Sean
_________________________________________________________________
Date: Sun, 25 Jan 1998 15:04:35 -0800
From: Sean Russell ser@javalab.uoregon.edu
Subject: Linux security
I'm not intending to kick off a debate about the merits of PAM, but I
have a couple of comments and a question.
The question is, has anybody, commercial or freeware, started coding
an MVS-like security system for Linux? Specifically, I'm interested in
the fine granularity of access controls, the ability to deal with more
than just file accesses, user configurable ACLs, and most importantly,
security at the kernel level. One thing I find most distasteful about
PAM is the fact that applications have to be PAM aware to make use of
PAM's abilities. MVS security, on the other hand, is soft-linked into
the IO layer of the kernel, and /all/ applications use that security
model without knowing anything about it.
Anyone who has any comments on this, information, or leads, please
email me.
Thanks!
--- SER
_________________________________________________________________
Published in Linux Gazette Issue 25, February 1998
_________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Next
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
More 2¢ Tips!
Send Linux Tips and Tricks to gazette@ssc.com
_________________________________________________________________
Contents:
* Linux - 2 Cents about vim for pico users
* My 1/50th of a Dollar
* sound problems
* Filtering output of binary files
* Easter Eggs in Netscape
* RE: Perl and HTML
* Update locate
* Doing spaces in file names
* Mailing binary files to Microsoft clients
* Linux and Routing
* Linux and Routing 2
* Netscape's Abouts
* Netscape on the Desktop
* Re: Printing Problems
* Re: Using a 386 Computer
_________________________________________________________________
Linux - 2 Cents about vim for pico users
Date: Mon, 5 Jan 1998 23:07:20 +0100
From: Sven Guckes guckes@math.fu-berlin.de
I just read the "2 cent tips" again and I thought you might enjoy this
tip:
Several people enjoy the editor "pico" but do not feel comfortable
with an editor like "vim" for several reasons - one of these being
that it is so easy to reformat the current paragraph with ^J
(control-j) within pico while it is so "difficult" within Vim. Well,
all it takes is two mappings for Vim:
nmap <C-J> vipgq
vmap <C-J> gq
Put these mappings into your setup file (on Unix and especially Linux this is
~/.vimrc) and you can use ^J to reformat the current paragraph or the
currently highlighted text (use 'V' and some movement commands to do
that, for example).
More tips can be obtained from these Pages:
http://www.vim.org/ Vim Home Page
http://www.vim.org/faq/ Vim FAQ
http://www.vim.org/answ.html Vim Answers Page
(for everything not yet in the VIM FAQ)
http://www.vim.org/rc Sven's Huge Setup File with comments
And for those people who use "some vi" but never got the hang of it -
here is a page about "why" you would want to use a vi clone such as
Vim:
http://www.vim.org/why.html
Enjoy!
Sven
_________________________________________________________________
My 1/50th of a Dollar
Date: Wed, 07 Jan 1998 01:27:09 +0000
From: Michael Katz-Hyman mkatshym@erols.com
Here is a small shell script I wrote to blink the Scroll Lock LED on my
keyboard when new mail arrives.
--------------------------------------------------------------
#!/bin/bash
#
# Keyboard blinky thingy when you have new mail; sleeps 5 minutes if you
# don't.
#
# Michael Katz-Hyman (mkatshym@erols.com) running Linux 2.0.33, Red Hat 4.0
Mail_File="/var/spool/mail/mkatshym"
# The static file is used to make the script a daemon (I just test to
# see if /bin/bash is present :-)
Static_File="/bin/bash"
LED_SET_COMMAND_ON="/usr/bin/setleds +scroll"
LED_SET_COMMAND_OFF="/usr/bin/setleds -scroll"
Sleep_Command="/bin/sleep 2m"
# O.k. let's get started
while [ -e $Static_File ]; do
    while [ -s $Mail_File ]; do
        $LED_SET_COMMAND_ON
        $Sleep_Command        # pause between toggles so the blink is visible
        $LED_SET_COMMAND_OFF
        $Sleep_Command
    done
    if [ ! -s $Mail_File ]; then
        /bin/sleep 5m
    fi
done
------------------------------------------------------------------
Michael Katz-Hyman
_________________________________________________________________
sound problems
Date: Wed, 7 Jan 1998 09:48:10 -0600 (CST)
From: Mike Hammel mhammel@stassw10
> Have installed RedHat 5.0 and configured the sound card using sndconfig.
> All went well and I heard the demo sound bite of Linus. However, I
> have never heard another sound since. When browsing web sites with sound,
> no audio is played. Anyone have any ideas?
First, cat an audio file to the audio device: cat file > /dev/audio.
If you get sound out then the device is fine. The problem is probably
that you haven't configured your browser to play the audio. With
Netscape you would use the Preferences->Navigator->Applications
option. You'll need to configure the various audio types to be played
using whatever tool you choose (I don't play much audio, so don't have
anything configured in my browser to do so). The cat command will work
with .au files, and maybe .wav (I think), but possibly not with
others. You might want to look at the Linux Application and Utilities
Page or the Linux Midi and Sound Page for hints on getting
applications for playing sound files. Both of these have links on the
Software Resources page at the Linux Journal:
http://www.linuxresources.com/apps.html.
Hope this helps a little.
Michael J. Hammel
_________________________________________________________________
Filtering output of binary files
Date: Wed, 7 Jan 1998 14:56:05 -0500
From: Sylvain Falardeau sfalardeau@clic.net
When you do a cat/grep/etc. of binary files on a tty, the terminal may
become unusable because of some control characters.
Guido Socher (eedgus@aken104.eed.ericsson.se) suggests a
sed -e 's/[^ -~][^ -~]*/ /g'
to filter unprintable characters. You can simply use a
cat -v
and all the control characters are escaped so they print. It's very
useful when you are "cat-ing" files and don't know whether they contain
control characters.
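For example, either filter can sit at the end of a pipeline before the
output reaches your terminal (/bin/ls here is just a convenient binary
to experiment on):
cat -v /bin/ls | less
sed -e 's/[^ -~][^ -~]*/ /g' /bin/ls | less
The first shows control characters in ^X / M-X notation; the second
simply squeezes them out.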
_________________________________________________________________
Easter Eggs in Netscape
Date: Thu, 8 Jan 1998 11:53:51 +0000 (GMT)
From: Caolan McNamara caolan@skynet.csn.ul.ie
* From: Ivan Griffin ivan.griffin@ul.ie
*
* These special URLs do interesting things in Netscape Navigator and Communicator.
*
* about:cache gives details on your cache
* about:global gives details about global history
* about:memory-cache
* about:image-cache
* about:document
* about:hype
* about:plugins
* about:editfilenew
*
* view-source:URL opens source window of the URL
*
* Ctrl-Alt-F takes you to an interesting site :-)
At least some of the Netscape developers have an about: for themselves,
e.g. about:kahern.
C.
_________________________________________________________________
RE: Perl and HTML
Date: Thu, 08 Jan 1998 16:58:44 +0000
From: Carl Mark Windsor mbdtscw@cerberus.mcc.ac.uk
In reply to Gabriele Giansante (gvgsoft@madnet.it), whose return mail
address does not seem to work.
--------------------------------------------------------------
Gabriele,
The #!/usr/local/bin/perl line is what indicates that this is a Perl
script, but Netscape is not clever enough to know this; it has to be
told.
Go to Options / General Preferences / Helpers and edit (if it exists)
or create (if it doesn't) the following configuration
Description: Perl Script
Type: application/perl
Suffix: pl
Tick the Application box and put the path
Application: /usr/sbin/perl
Sorry if you have heard this all before!
Carl
__________________________________________________________________________
Update locate
Date: Sat, 10 Jan 1998 19:16:31 +0000
From: Joaquim Baptista px@helios.si.fct.unl.pt
Both Redhat and Slackware (not sure about Debian) install the package
updatedb. This package has two programs:
- "updatedb" scans the filesystem and generates a database of existing files.
This is run every night as root.
- "locate" is run by users to quickly locate files on the filesystem,
using the database generated by updatedb.
My problem is that "updatedb" runs at 4:40 in the morning, and my machine
is rarely running at 4:40. Thus the database is never updated and "locate"
never finds any recent file.
The solution is not very simple: updating the database hits the disk hard
and takes some time; it is hardly a task to be performed every hour.
My solution is to run a script every hour that updates the database only if
it is more than 24 hours old. I (ab)used find to do the task.
Here is the script "run-updatedb":
#!/bin/sh
/usr/bin/find /var/spool/locate/locatedb -mtime +1 -exec \
/usr/bin/updatedb \
--prunepaths='/tmp /usr/tmp /var/tmp /mnt /cdrom /floppy /var/spool' \;
I also had to change the crontab for root: I commented the old line that
runs updatedb at 4:40, and added a line that runs my script every hour:
0 * * * * /usr/local/sbin/run-updatedb 1> /dev/null 2> /dev/null
One final note: I believe that both Redhat and Debian have
"super-crontabs." That means that you must fish around in /etc
(/etc/cron?) for extra crontab files (long live Slackware!).
Best regards,
Joaquim Baptista, alias pxQuim
__________________________________________________________________________
Doing spaces in file names
Date: Tue, 13 Jan 1998 18:16:48 -0800 (PST)
From: Mark Lundeberg ae885@pgfn.bc.ca
If you think Win95/NT filenames are better than Linux ones, think again.
In bash (this may work in csh, but I never use it), use quotes to enclose
the filename when passing it to a program:
echo "test" > "spaced name"
and do an ls, and you see a space in the middle of the filename!
This can be used for confusing people, by going:
echo "Hi" > "test "
(notice the space at the end of "test ").
Then, someone tries to open the file "test" as it looks from ls, but all it
does is open a new file.
PS: The ext2 filesystem allows names up to 255 characters long, just like
Loseows 95.
Go Linux!
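If you end up with such a file by accident, quoting (or tab completion
in bash) will let you get at it again; a couple of illustrative
commands:
ls | cat -A                      # cat -A marks line ends with '$', exposing "test $"
rm "test "                       # the quotes preserve the trailing space
mv "spaced name" spaced_name     # or rename it to something friendlier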
__________________________________________________________________________
Mailing binary files to Microsoft clients
Date: Fri, 16 Jan 1998 12:37:22 +0000 (GMT)
From: Ivan Griffin ivan.griffin@ul.ie
Mailing binary files to Microsoft clients...
Quite often I receive a mail with an attachment in that weird Microsoft
format which is not quite MIME. It's easy for a Unix client to decode such
attachments -- save the message as a file, and run uudecode or the
excellent freeware uudeview on it.
However, sending a mail message to such a Microsoft mail user is a little
different -- you cannot send them a standard MIME message (unless they are
using Exchange I believe). I have found the following script useful in
such situations.
Say, for example, I wanted to send a file foo.gif to user mike. I would
run my script as follows:
msmail_encode foo.gif > mail_message
Then I would read the mail message into the body of the message I wanted
to send. This script could easily be improved to include automatic
mailing, and editing of the mail message proper.
#!/bin/sh
echo "[[ $1 : 2628 in $1 ]]"
echo ""
echo " Microsoft document attached. "
echo ""
echo " Regards, "
echo " Ivan."
echo ""
echo "The following binary file has been uuencoded to ensure successful"
echo "transmission. Use UUDECODE to extract."
echo
cat $1 | uuencode $1
By the way, I have no idea what the 2628 above refers to. It is
a number generated somehow by Microsoft mail clients, but they don't seem
to need it, so the 2628 is a value I received once in a mail message.
Regards,
Ivan.
__________________________________________________________________________
Linux and Routing
Date: Sat, 17 Jan 1998 11:02:43 -0800
From: James C. Carr jccarr@nwlink.com
I am not sure if you have already received a reply regarding your
question on routing a LAN to the 'net, so I thought I'd go ahead and
give it a shot. The CC to Linux Gazette is just in case no one else
has sent in a more elaborate reply. ;) Also, this is something that was
mentioned back in Linux Journal number 43 ( November 1997 ), so most of
this stems from that particular article, "IP Masquerading Code
Follow-UP". To avoid re-hashing someone else's wonderful article, I'll
just skim over what I use here at my own home.
======================================================================
Linux and Routing with ipfwadm
======================================================================
Getting Linux to route information between a LAN and the 'net will
require you to re-compile the kernel with IP Masquerading support. Of
course, one could also use firewalls and disable the routing, but I
don't have experience with that just yet. If your kernel version is <
2.0.30, you'll need to enable the "Code Maturity Level" option at
re-compilation -- this gives you access to the other Network Options in
the kernel, such as IP Masquerading support.
After installing the new kernel, obtain and install the ipfwadm
program; this usually comes installed on a base Debian 1.3.1 system, and
is easily obtainable for Red Hat. Executing ipfwadm from my end
includes the following commands:
/sbin/ipfwadm -F -p deny
This portion breaks down as follows:
-F -- Notify ipfwadm that you're modifying the IP forwarding rules.
-p -- Tell ipfwadm that you want to deny the forwarding of incoming
      packets (the default policy). I've experienced certain web pages
      that will not open with this option set; it's probably some
      Microsoftian plot, you know. ;)
/sbin/ipfwadm -F -a m -S 192.168.0.0/24 -D 0.0.0.0/0
-F -- Same as above.
-a -- Append the following rule to the list; in this case, we're (m)
      masquerading the matching packets.
-S -- We're going to masquerade the computers in the 192.168.0.*
      address range. Since this is a "local" set of IP numbers, it'll
      work with all computers on the LAN with these IP addresses.
-D -- The forwarding destination will be 0.0.0.0, the equivalent of the
      gateway address on a PPP defaultroute.
/sbin/ipfwadm -F -l -n
Let's make sure this thing is up and running.
-l -- List all IP forwarding rules.
-n -- Convert the information to numeric format.
Of course, you'll need to have assigned your computers IP addresses
within the 192.168.0.* range to use the exact commands above.
On my own setup, the primary computer gets 192.168.0.1, and the others
fall in succession. Be sure to have all the computers that are being
masqueraded set their gateway address to the primary, e.g.
secondary.my.com (192.168.0.2) uses primary.my.com (192.168.0.1) as
its gateway to the 'net.
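Taken together, the commands could go into a boot script; a minimal
sketch (using the same 192.168.0.* addressing as above, and a file such
as /etc/rc.d/rc.local as the place to run it from) might look like
this:
#!/bin/sh
# Minimal masquerading setup -- assumes a kernel built with IP
# forwarding/masquerading support and the ipfwadm utility installed.
# Default policy: do not forward packets that match no rule.
/sbin/ipfwadm -F -p deny
# Masquerade everything coming from the 192.168.0.* LAN, bound for anywhere.
/sbin/ipfwadm -F -a m -S 192.168.0.0/24 -D 0.0.0.0/0
# List the forwarding rules (numeric output) to confirm the setup.
/sbin/ipfwadm -F -l -n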
For a far more in-depth article regarding this type of set-up, I do
suggest reading Chris Kostick's article "IP Masquerading Code Follow-up"
in the November 1997 issue of Linux Journal. Not only does it cover the
basics, but the author also explains a few more subtle aspects to
ipfwadm. Besides, without the help of this article, I wouldn't even
know the small amount about ipfwadm that I do. :)
======================================================================
I hope this helped at least a little,
-- James
__________________________________________________________________________
Linux and Routing 2
Date: Tue, 6 Jan 1998 13:25:57 -0500 (EST)
From: Paul Lussier, plussier@LanCity.COM
> I plan on getting a cable modem soon, so the bandwidth would be pretty
> high, so that is why I have decided to try to make this connection
> provide for my whole house via a LAN connection in my home. What I
> have read is that you could use the private IPs, meaning the 10.x.x.x
> or so, 192.168.x.x and some others for the IP of the LAN and have
> these connect to some box (the LINUX box?) that would provide its
> connection to the internet to the inside LAN connected to the box. Is
> the problem that you would have to route the assigned address to the
> private IPs for the LAN use. I have also read that this would slow
> down the connection a bit or something, but that is a price I am
> willing to pay. So, the summary of the question is how would I be able
> to connect many computers to the internet via just 1 assigned IP
> address? I would like to be able to do it using my LINUX box connected
> to the internet via cable modem, and to my LAN via an Ethernet
> link. Any help is much appreciated, thanks.
This caught my attention, especially since I'm the Unix admin for
Baynetworks Broadband Technology Division (formerly LANcity) and we
pretty much invented this technology, along with being the leader in
the cable modem industry :) Now that I've got the plug in for the
company, I'll get down to your problem :)
I first must admit that 1.) I don't own a cable modem (I can't get
cable, long story :( and 2.) I don't do any routing of this nature.
But I have read a lot about it, and I do work with cable modems, so I
think I can help a little :)
The first thing to understand is that with Linux, you don't want to be
routing, and definitely do not want to run routed to do what you want
to accomplish. Rather, you want to be doing IP forwarding/IP
masquerading which you would enable in the kernel by
re-configuring/re-compiling a new kernel. You'll definitely want to
scour the HOWTOs; I believe there is one on this subject. In
addition, you may want to check out the Linux Network and/or Systems
Administrator's Guides, as they, too, probably have some good
information in them. Other good references may be:
* The NET-2/3 HOWTO
* The Ethernet HOWTO
* The Multiple Ethernet Mini HOWTO
* Networking with Linux
The Firewalling and Proxy Server HOWTO is probably the best bet, now
that I look, since what you really want to do is set up a firewall to
prevent people from coming in, and a proxy server to allow your
internal LAN to get out.
Some words of caution. DO NOT HAVE YOUR LAN CONNECTED AT THE TIME OF
THE CABLE MODEM INSTALLATION!!!! MediaOne, Cablevision, Time Warner,
and most of the other cable companies (we deal with them all here)
will refuse to connect a LAN to their broadband network. Simply
remove your hub or coax cable from view, and let them do what they
need to do, then connect everything else up after they leave.
You will need two Ethernet NICs in the system that will be connected to
the broadband: one for the cable modem and one for the internal LAN.
Most cable companies will gladly provide and install one for you
(MediaOne charges $120 for a 3C509 + labor). I recommend telling them
you have a NIC, and going out and buying one and installing it yourself.
The cable modem, in reality, is NOT a modem. It's an Ethernet Bridge.
When the modem^H^H^H^H^Hbridge boots/powers up, it does a bootp request
to a server at the cable company's central office to obtain an IP
address. The NIC is also assigned an IP address, which (at least with
MediaOne) is registered to the MAC address on the NIC (MediaOne
doesn't want you to move the modem to another computer after they
leave. They apparently check the modems from time to time to see what
MAC they're connected to). Therefore, you want your proxy
server/firewall configured so that it prevents all incoming
connections from the cable modem and allows only outgoing connections.
You want the IP forwarding/masquerading set up to allow other systems
on your private LAN to use that machine as their proxy server (I'm not
sure if using the term gateway here is correct).
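On each client machine on the private LAN, the rest is just ordinary
IP configuration with the Linux box as the default gateway. A minimal
sketch (the 192.168.1.* addresses and the eth0 interface name are only
examples) for a Linux client might be:
# On a client on the internal LAN (here 192.168.1.2):
/sbin/ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up
/sbin/route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0
# All non-local traffic goes to the masquerading Linux box:
/sbin/route add default gw 192.168.1.1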
Some other interesting tidbits of information about cable modems and
cable companies:
1. Do not expect support for running a LAN over the cable modem from
the cable company. They don't want you to do it, and they won't
help you do it.
2. Do not expect to put up a web server to be accessed from the
Internet. You are a client, not a server. This technology, though
fully capable of performing in this manner, is not being deployed
for use this way. Cable companies WILL shut you down for running a
server of any kind on your end of the network, and it can be
*forever* :(
3. Spammers love cable/broadband networks. There have been several
cases where a broadband network customer has been used by spammers
and was subsequently shut down for life by the cable company. What
happens is the person decides to connect their private LAN to the
cable modem but sets the firewall up incorrectly. Spammers search
cable/broadband networks for proxy servers/firewalls (usually
Win95/NT) that allow incoming connections and then use that system
to spam the entire cable/broadband network, making the spam appear
as if you sent it. Usually you will be given one warning by the
cable company, but there have been cases where none was given and
the customer was completely shut down.
4. The current BayNetworks LANcity modems (the LCp product) being
deployed in homes are limited to one MAC address connection (which
means you can't plug the modem into a repeater/mini-hub in order
to connect it to multiple systems). They are software upgradable to
16 MACs, but you'll pay a fortune for it to the cable company.
However, an Ethernet switch works wonders :)
5. Current modems are capable of transmitting at 10 Mbps in both
directions, but are usually deployed throttled back to a transmit
speed of 300 Kbps and a receive speed of 1.5 Mbps. You want more
bandwidth, they'll be happy to charge you more money :)
I hope this helps a little bit. Feel free to e-mail me if you have
any questions.
Seeya,
Paul
__________________________________________________________________________
Netscape's Abouts
Date: Tue, 20 Jan 1998 16:00:46 +0100
From: "Stefan K." kampi@physik3.gwdg.de
I've read the article about Netscape's about: URLs...
Here are some more (some of them may not work or simply do nothing):
about:montulli
about:nihongo
about:francais
about:plugins
about:document
about:license
about:cache
about:global
about:image-cache
about:memory-cache
about:security
about:hype
about:blank
about:mozilla
about:security?subject-logo=
about:security?
about:security?banner-mixed
about:security?banner-insecure
about:security?banner-secure
about:security?banner-payment
mocha:
javascript:
livescript:
view-source:
about:FeCoNtExT=123
PEACE!
kampi
__________________________________________________________________________
Netscape on the Desktop
Date: Sat, 24 Jan 1998 06:46:22 -0500
From: Tim Hawes tim@donet.com
I do a lot of my web development work at home on my Linux box. Netscape
for Linux does not automatically check for an existing Netscape session.
As a result, if you try to run two different Netscape sessions, you will
get an error message box with something like the following:
Netscape has detected a /home/thawes/.netscape/lock
file.
This may indicate that another user is running
Netscape using your /home/thawes/.netscape files.
It appears to be running on host localhost under process-ID 316.
You may continue to use Netscape, but you will
be unable to use the disk cache, global history,
or your personal certificates.
Blah, blah, blah.
If you are like me, and like to have links to URL's using Netscape on
your menus, FVWM GoodStuff or desktop icons, this can be a real
nuisance, having to completely start a new Netscape session each time.
Or you can have them link with this:
netscape -remote 'openURL(your.url)'
But then none of your links will work if Netscape is not currently
running. This shell script will look for the lock file that Netscape
creates when it is started. If it does not find the lock file, it will
start a fresh Netscape session. If it does find it, it will send a
netscape -remote command to your current session with the URL you
provide in the argument. If you do not provide a URL, netscape will
simply give you a popup message indicating that you did not specify a
URL. If you do not want Netscape to start up a new window for the URL,
just get rid of the
"new-window"
in the argument in the shell script.
#!/bin/sh
if [ -L $HOME/.netscape/lock ]
then exec /usr/local/netscape/netscape -remote 'openURL('$*',new-window)'
else exec /usr/local/netscape/netscape $*
fi
exit 0
There are limitations with this script. First of all, if Netscape did
not exit cleanly after the last session, then the lock file will still
be present in your ~/.netscape directory. The script will then try to
execute a netscape -remote command and will error out with the console
message that Netscape is not running on :0.0. If you are not redirecting
your console messages anywhere, then you will not see anything except
that Netscape fails to start. To recover:
1. Do a ps to see if there are any zombie processes left
over from your last netscape session.
2. Kill all zombie processes
3. $ rm ~/.netscape/lock
4. retry
I am sure there is a way to automate this through a shell script as
well, but I have not yet had the time or motivation to write it.
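Something along these lines might do it -- a sketch only, which assumes
(as the error box above suggests) that the lock is a symbolic link
whose target ends in the process ID of the session that created it:
#!/bin/sh
# Remove a stale ~/.netscape/lock left behind by a crashed session.
LOCK=$HOME/.netscape/lock
if [ -L $LOCK ]; then
    # The link target looks like "host-or-address:PID"; keep the PID.
    PID=`ls -l $LOCK | sed -e 's/.*://'`
    # If no process with that PID is still alive, the lock is stale.
    kill -0 $PID 2>/dev/null || rm -f $LOCK
fi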
Some other shortcomings include trying to start Netscape composer with
the -remote argument for a currently running netscape session. But then
this is probably why you should never name a shell script after the
actual binary it attempts to start.
All in all, if you envy the functionality of Netscape on Windows 95,
automatically checking for an existing netscape session to send the
browser surfing, and starting a new session if it does not find it,
well, here is a simple solution for Linux users, using the power of the
shell.
Tim Hawes
__________________________________________________________________________
Re: Printing Problems
Date: Sat, 31 Jan 1998 20:09:18 +0100 (MET)
From: Roland Smith, rsmith06@ibm.net
>Anyone that can help me. I'd love to hear it. I try running
>lpr, but everytime I get no name for local machine.
>How do I set this and/or what is the problem.
>Manish Oberoi
It sounds like you're using LPRng. This is a new version of lpr that's
more suitable for networks. It is included in the newer Slackware releases
and maybe others.
My solution was to grab the bsdlpr.tgz package from ftp.cdrom.com and use
that (This is meant for Slackware). Otherwise you can search the Net for
"bsdlpr".
-- Roland
__________________________________________________________________________
Re: Using a 386 Computer
Date: Sat, 31 Jan 1998 20:13:09 +0100 (MET)
From: Roland Smith, rsmith06@ibm.net
>I used to have a 386 25 MHz computer. Not long ago I bought a
>Pentium 200 MHz computer. Since then I have not played with the 386.
>Is there any easy and economical way to connect the 386 to the
>Pentium computer, where I will install Release 5.0? If so,
>what can I do with it, or at least what can I learn from it?
If you connect both machines with a parallel cable, and configure PLIP
into the kernel on both machines, you can have your own little network. A
386 should at least work nicely as a terminal, even if it might not run X
:-)
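For what it's worth, the interface setup on such a PLIP link might look
roughly like this (a sketch only; the 192.168.2.* addresses are made up,
and the interface may be plip1 rather than plip0 depending on which
parallel port the kernel finds):
# On the Pentium (192.168.2.1), pointing at the 386 (192.168.2.2):
/sbin/ifconfig plip0 192.168.2.1 pointopoint 192.168.2.2 up
/sbin/route add -host 192.168.2.2 dev plip0
# On the 386, the mirror image:
/sbin/ifconfig plip0 192.168.2.2 pointopoint 192.168.2.1 up
/sbin/route add -host 192.168.2.1 dev plip0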
-- Roland
__________________________________________________________________________
Published in Linux Gazette Issue 25, February 1998
__________________________________________________________________________
[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back
Next
__________________________________________________________________________
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
News Bytes
Contents:
* News in General
* Software Announcements
__________________________________________________________________________
News in General
__________________________________________________________________________
March Linux Journal
The March issue of Linux
Journal will be hitting the newsstands
this week. The focus of this issue is Graphical User Interfaces with
articles on XView, GTK+, X-Designer and CDE. Check out the
Table of Contents.
__________________________________________________________________________
Netscape Announces Plans To Make Source Code Free
January 22, 1998
Netscape Communications Corporation today announced bold plans
to make the source code for the next generation of its highly popular
Netscape Communicator client software available for free licensing on
the Internet. The company plans to post the source code beginning
with the first Netscape Communicator 5.0 developer release, expected
by the end of the first quarter of 1998.
Netscape is releasing its currently available Netscape Navigator and
Communicator Standard Edition 4.0 software products immediately
free for all users.
In addition, the company separately announced the launch of an
aggressive new software distribution program called "Unlimited
Distribution" to broadly distribute its market-leading Internet client
software for free. Unlimited Distribution enables Original Equipment
Manufacturers (OEMs), Internet Service Providers (ISPs),
telecommunications companies, Web content providers, publishers
and software developers to download and redistribute Netscape
Communicator and Netscape Navigator easily with "no strings
attached."
To read and post reactions about this latest announcement, Linux
Journal has added a discussion group to our pages.
__________________________________________________________________________
Linux in the News
Eric Raymond's article
"The Cathedral and the Bazaar"
evidently made the rounds at Netscape and helped convince them that giving
away Navigator source code was a good idea. If you've never read it, now is
a good time.
Check out the article by Barton Crockett on msnbc: A Titanic Challenge to
Microsoft
The February issue of Dr. Dobb's Journal has an interview with Larry Wall,
the creator of Perl.
__________________________________________________________________________
The SEUL Project
The SEUL (Simple End-User Linux) Project is an organization dedicated to
developing a free
Linux distribution that presents a viable alternative to commercial
PC operating systems. Currently based on Red Hat Linux, the SEUL
distribution will cover many different aspects of Linux.
For more information:
Roger Dingledine, seul@seul.org
http://www.seul.org/
__________________________________________________________________________
The Linux Clothing Project
Check out http://genocide.adept.co.za/lcp/
to have your questions answered.
We're planning another t-shirt, with ordering opening on the 1st of
February,
1998. All the info is on the page.
For more information:
Albert Strasheim, UUNET Internet Africa,
fullung@ilink.nis.za
__________________________________________________________________________
Stampede Linux Logo Contest!
Along with the highly anticipated release of Stampede Linux 0.55 (heber),
the developers felt it time to have an official logo. The developers
also felt that they should look elsewhere for development of said logo.
This contest is a result of the looking elsewhere bit. (Yes, prizes are
part of this contest =]).
For more Information:
Matt Wood, Stampede Linux Head Developer, skibum@beer.stampede.org
http://www.stampede.org/
__________________________________________________________________________
12th SYSTEMS ADMINISTRATION CONFERENCE (LISA '98)
December 6-11, 1998
Boston, Massachusetts
The LISA '98 program is put together by a volunteer committee of
experienced systems administrators. The Program Committee welcomes
your submission. The Call for Participation is now available at
http://www.usenix.org/events/lisa98/
Sponsored by USENIX, The Advanced Computing Systems Association
Co-Sponsored by SAGE, the System Administrators Guild
__________________________________________________________________________
Japanese Word Processor
Perhaps you'd like to work on another exciting project? There is
a Windows application, called JWP -- a Japanese Word Processor. This
package was written by Stephen Chung, and as a GNU product it is freely
distributable. I've used it extensively over the past few years, and
it is a *great* package.
Unfortunately, JWP is only available for Windows right now, which is
locking out a lot of people under other platforms who might benefit from
it. As Stephen is quite busy with full-time work and maintaining the
Windows versions (he's developing version 2.00 now), Steve Frampton has
decided to go ahead with a port to X-Windows.
The JWP-Port Project home page contains more information on the JWP
package as well as the JWP-Port project itself. If you are interested,
please visit the page at
http://qlink.queensu.ca/~3srf/jwp-port/.
For more information:
Steve Frampton, 3srf@qlink.queensu.ca
__________________________________________________________________________
A 3D CAD Application for Linux Project
FreeDesigner is intended to be a fully extendable
Computer Aided Design and Drafting (CAD) application for Linux and other
Unix type operating systems. Initially K Desktop
Environment and GNOME/GTK frontends will be investigated, although it will
be written as "toolkit inspecific" as is possible, by
utilizing a GUI abstraction layer in FreeDesigner Core.
For more information:
Fleming, Petersen & Associates,
http://www.fpa-engineers.com/OD/
__________________________________________________________________________
Artificial Intelligence
Interested in Artificial Intelligence, Evolutionary Computing,
Connectionism, Artificial Life, and/or Software Agents? Want to find
out what software is available for Linux in these areas? Or are you
just curious?
If so, check out my Linux AI/Alife mini-HOWTO at:
http://www.ai.uga.edu/~jae/ai.html
For more information:
John A. Eikenberry, jae@bob.coe.uga.edu
__________________________________________________________________________
Digital Domain and Red Hat Linux
Digital Domain used Red Hat Linux not only for special effects in the movie
Titanic but also in commercials that debuted during this year's Super Bowl.
Here's Red Hat's press release.
__________________________________________________________________________
Software Announcements
__________________________________________________________________________
eVote 2.2
Date: Wed, 28 Jan 1998
eVote image
eVote is a freely available add-on to email list-servers that gives
the members of the list the ability to poll each other. After
installation of the software, the administrator is not involved. All
participants have the power to open polls, vote, change their votes and
view each other's votes if the particular poll was so configured.
The underlying specialized data-server, The Clerk, is also freely
available for Linux systems only. eVote 2.2 is available in both English
and French.
For more information:
Marilyn Davis, mdavis@deliberate.com,
http://www.Deliberate.com/
__________________________________________________________________________
FunktrackerGOLD 1.1
FunktrackerGOLD 1.1 has been released. FunktrackerGOLD is a module editor for
Linux that allows you to compose digital music (similar to Fasttracker,
Impulsetracker, etc., for those who are familiar with them).
For more information:
Jason Nunn, jsno@dayworld.net.au
http://www.downunder.net.au/~jsno/proj/unix_projects/
__________________________________________________________________________
Quikscript
Quikscript is a PostScript text formatting and typesetting program.
It enables documents to be prepared on any type of hardware, using
visible layout marks to control the appearance of the output, and
produce output on a PostScript printer by despatching Qs and the
document file to the device. No processing is performed by the
host hardware; all processing is done within the printer.
The advantage that Quikscript provides, other than portability,
is precision of control over output. Because it is written in
PostScript, it is interpreted at run-time within the printer.
It is possible to create documents that modify the Quikscript
program during execution. It is very easy to include other
PostScript programs or fragments with Quikscript. It is possible
to use special PostScript fonts, such as hand-generated ones.
Graphics generated from a variety of sources can be easily included,
as can text output from computer programs. It is possible to embed
Quikscript within a document, such as an advertisement or a telephone
bill.
The Quikscript distribution is available by anonymous ftp from
"ftp.adfa.oz.au" in the directory "pub/postscript". It may also be
accessed through the World Wide Web at URL
http://www.cs.adfa.oz.au/~gfreeman/
For more information:
Graham Freeman, g-freeman@adfa.oz.au
__________________________________________________________________________
YP-Tools & YP-Server
Version 1.4 of the YP (NIS version 2) tools for Linux has been released.
This package contains ypcat, ypmatch, ypset, ypwhich and yppasswd.
You need this package for GNU C Library 2.x and Linux libc 5.4.21,
but you should use libc 5.4.36 or later due to some NIS bugs in libc.
It replaces the old yp-clients 2.2 on these systems.
You can get the latest version from:
http://www-vt.uni-paderborn.de/~kukuk/linux/nis.html
------------------------
Version 1.2.7 of a YP (NIS version 2) server for Linux has been released.
It also runs under SunOS 4.1.x, Solaris 2.4 - 2.6, AIX, HP-UX, IRIX,
Ultrix and OSF1 (alpha).
The programs are needed to turn your workstation into a NIS server.
The package contains ypserv, ypxfr, rpc.ypxfrd, rpc.yppasswdd, yppush,
ypinit, revnetgroup, makedbm and /var/yp/Makefile.
This is NOT an NIS+ (NIS version 3) server!
ypserv 1.2.7 is available under the GNU General Public License.
You can get the latest version from:
http://www-vt.uni-paderborn.de/~kukuk/linux/nis.html
For more information:
Thorsten Kukuk, kukuk@vt.uni-paderborn.de
__________________________________________________________________________
Motif 2.1 for Linux
The latest and best release of Motif (version 2.1) is now available for
the best operating system!
Linked against both glibc (yes, it DOES work with Red Hat 5) and libc
(i.e., it works with Debian, Caldera, Red Hat 4.0).
For more information:
LSL, http://www.lsl.com/,
motif@lsl.com
NC Laboratories, http://www.nc-labs.com,
sales@nc-labs.com
__________________________________________________________________________
NetTracker
NetTracker is one of the most powerful, yet easy to use Internet and
Intranet usage tracking programs on the market today. NetTracker allows
marketing professionals, webmasters and ISPs to get the essential
information they need to make informed decisions regarding their web
sites.
A demonstration of NetTracker can be seen at [http://www.sane.com/demo/],
and a free 30 day evaluation copy can be downloaded from
[http://www.sane.com/eval/].
For more information:
Sane Solutions, info@sane.com
__________________________________________________________________________
SCEPTRE-90
SCEPTRE-90, a program for the analysis and simulation of electrical
nonlinear networks and dynamic systems, is now available for Linux
users (free of charge).
The ftp site where the program can be found is:
novilux.fh-friedberg.de/pub/sceptre_linux.
Detailed documentation in English and German, as well as many samples,
is included in the archive file.
For more information:
Prof. Dr. Wolf-Rainer Novender, novender@novilux.fh-friedberg.de
__________________________________________________________________________
BANAL 0.04 (free bookkeeping software)
BANAL is a bookkeeping system that allows you to track invoices,
clients, projects, TODOs, bank accounts and expenses. BANAL is a
client/server application so you can keep one set of books on your
system while allowing everyone access.
For this release, BANAL can store your information, list (and allow
searching of) information and generate invoices, income and expense
statements. You can also make and use recurring and memorized
transactions to ease the burden of creating them manually. Check
the TODO file, that is included with the distribution, for an idea
of what is coming in the next release.
If you want to obtain BANAL and try it out, ftp to:
ftp://sunsite.unc.edu/pub/Linux/apps/financial/accounting.
For more information:
Matthew Rice, Matthew.Rice@ftlsol.com
__________________________________________________________________________
Aegis 3.1 - Software Configuration Management System
Aegis is a transaction-based software configuration management system.
It provides a framework within which a team of developers may work
on many changes to a program independently, and Aegis coordinates
integrating these changes back into the master source of the program,
with as little disruption as possible.
http://www.canb.auug.org.au/~millerp/aegis.html
For more information:
Peter Miller, millerp@canb.auug.org.au
__________________________________________________________________________
Free CORBA 2 ORB - omniORB 2.4.0
The Olivetti and Oracle Research Laboratory has made available the second
public release of omniORB (version 2.4.0). We also refer to this version
as omniORB2.
omniORB2 is copyright Olivetti & Oracle Research Laboratory. It is free
software. The programs in omniORB2 are distributed under the GNU General
Public Licence as published by the Free Software Foundation. The libraries
in omniORB2 are distributed under the GNU Library General Public
Licence.
Source code and binary distributions are available from our Web pages:
http://www.orl.co.uk/omniORB/omniORB.html
For more information:
Dr. Sai-Lai Lo, S.Lo@orl.co.uk
http://www.orl.co.uk/omniORB/omniORB_240/
__________________________________________________________________________
New Linux STREAMS Release
Linux STREAMS (LiS) version 1.12 is now available. This version
supports kernels 2.0.24 through 2.0.33. By mutual consent of the
authors, the licensing terms have been changed to the GNU Library General
Public License. This allows linking of proprietary STREAMS drivers with the LiS
code.
This version contains an install script which automates the
installation.
It can be downloaded from ftp.gcom.com from the directory
/pub/linux/src/streams-1-15-98.
For more information:
Mikel L. Matthews, mikel@gcom.com
__________________________________________________________________________
Speech Enhancement by Kalman Filtering Package
If you are interested in speech enhancement, signal processing
in general, or applications of Kalman filtering, read on. Mr. Kybic has just
finished his diploma work, entitled "Kalman Filtering and Speech
Enhancement" which includes, among other things, an implementation of
a Kalman smoothing based speech enhancement algorithm, working on
speech signal corrupted by slowly changing coloured additive noise.
Tested on Linux and HP-UX. Parallel version using PVM.
It is not perfect but might be inspiring anyway. Free for
non-commercial use.
http://cmp.felk.cvut.cz/~kybic/dipl
For more information:
Jan Kybic, xkybic@sun.felk.cvut.cz
__________________________________________________________________________
Published in Linux Gazette Issue 25, February 1998
__________________________________________________________________________
__________________________________________________________________________
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
The Answer Guy
By James T. Dennis, linux-questions-only@ssc.com
Starshine Technical Services, http://www.starshine.org/
__________________________________________________________________________
Contents:
* Removing LILO, Reinstalling MS-DOS
* Running as root on Standalone Systems -- DON'T
* More on Netscape Mail Crashes
__________________________________________________________________________
Removing LILO, Reinstalling MS-DOS
From: Stephen Britton, sbritton@westnet.com
My parents just told me that I have to
give our extra machine (a 486 running Red Hat 4.1)
to my younger brother, who only knows Windows.
I have formated the drive with MS-DOS, but I
can't seem to figure out how to remove LILO. I
recall reading somewhere that it can be done by
c:\fdisk /mbr But that doesn't seem to be working.
Please help, he is returning to College next week!!
That should do it. However -- which version
of MS-DOS are we talking about? This option
was introduced in MS-DOS 5.0. Although it
wasn't documented at the time it is widely
used to recover from a variety of boot
viruses.
If that doesn't work -- boot from a Linux
floppy -- and zero out the whole partition table
and MBR (dd if=/dev/zero of=/dev/hda for
a primary IDE drive, or of=/dev/sda for the primary
SCSI drive, with count=1 (or 2 or so)).
Then you can boot from a DOS installation floppy
and it will insist that you run fdisk and will
treat the drive as though it was brand new and
previously unformatted/partitioned.
(Technically you only have to zero out or
put anything other than 0x55AA as the last two
bytes of the MBR -- that's the signature that
tells FDISK that this drive has been previously
partitioned. However, it's just easier to zero
out the whole mess.)
Naturally this will make all of the data on the
drive inaccessible -- but I suspect you already
knew that was going to happen anyway.
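Spelled out, the zero-out approach from a Linux rescue floppy
looks something like this (a sketch for the primary IDE case; use
/dev/sda for SCSI, and be very sure you have the right drive
before pressing Enter):
    # wipe the MBR and partition table of the first IDE drive
    dd if=/dev/zero of=/dev/hda bs=512 count=1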
Alternatively -- if fdisk /mbr doesn't work --
you should find out *why*. If this is an early
version of DOS -- you should probably try to
get a copy of 5.0 or later (or consider Caldera's
OpenDOS). I suppose you could also consider
installing Win '95, considering the likelihood
that your brother will need access to TCP/IP
utilities like web browsers and some e-mail
package.
On the one hand I hate to push someone further down
the throat of the snake -- on the other hand we
should always do our best to act in the best
interests of our customers -- even when they're
our pesky brothers.
P.S. I tried talking him into taking Linux, but he's
locked into the Windows mindset.
Trying to convince someone of something is
usually a losing proposition. Try to understand
his real requirements -- and offer the best
advice you can.
It may be that Windows is the best environment
for him. It may also be that there are over-riding
constraints that force him to choose a Windows
compatible platform.
I think that many organizations are now "chained" to the
Microsoft agenda by their current investment in their
existing data files (all their spreadsheets, documents,
and many of their small, departmental mailing lists, and
databases are locked into various versions of the proprietary
.DOC, .XLS, and other data formats).
Microsoft clearly intends to maintain this state. I
guess that it has been the core of their strategy for the
last five years (since about the release of Win 3.0 or 3.1).
(It is also not unique to them -- most major commercial
hardware and software vendors have tried to "lock" their
customer into upgrade paths. Companies like DEC, IBM,
and HP have each had their VMS, MVS, and MPE OSes with this
agenda. Consequently their efforts at Unix have often
been "skunkworks" -- and have been highly politicized for
over a quarter of a century).
I ask people to consider this tidbit in their long range
planning. Truly optimizing for the present requires
looking to the future as well.
-- Jim
__________________________________________________________________________
Running as root on Standalone Systems -- DON'T
From: griffin@ameritech.net
What advantages are there, if any, to running your single-user
system as a normal user and not root?
If you're absolutely perfect, you never make a typing mistake or
issue a wrong command, or a right command from a wrong directory
with the wrong arguments, *and* you only run perfect software,
with no bugs in it at all, *and* you are totally disconnected
from the world (you don't get any e-mail, never use netnews, or
IRC etc) -- then you *might* be sort of safe running as root on
your system.
If you simply don't care about your data and you like the idea
of rebuilding your system configuration from scratch then throw
all caution to the wind and go for it.
However, for the vast majority of us, it's the most minimal bow
to prudence to log in as an unprivileged user for the vast
majority of work you do at your system.
The advantages are:
* Your normal user account can't accidentally damage vital system
files with any normal command. The most common cause of data loss
and downtime is operator failure. When I worked on the tech
support lines at Norton Computing (the largest publisher of DOS
and Mac data recovery tools) the accidental deletion calls were
more common than all other causes combined. Even on Unix and other
multi-user system the system administrators (or "operators") are
the primary cause of downtime and data loss. It simply makes sense
to minimize these risks.
* Programs you are running (buggy, or even trojan horses and
viruses) can't readily damage system files. Software bugs are the
second most common cause of data loss. Trojan horses and viruses
are a rarity in the Unix world -- precisely because the prevailing
custom is to run software with minimal privileges. When it comes
to software that legitimately needs privileged access (like the
Red Hat rpm system when it's used to update or install new
packages), many sysadmins run new software on a "sacrificial"
system or in a "chroot jail."
* Even programs that are reasonably O.K. may be vulnerable to
deliberate attacks. If someone uses 'write' to ANSI-bomb you
(re-writing the keybindings in your terminal/console driver for
malicious purposes) or exploits some 'feature' of IRC or your mail
reader to execute code on your behalf, you'd like to limit the
damage they can do.
The disadvantages mostly relate to convenience. A typical
microcomputer user from a DOS, Windows, OS/2, MacOS, AmigaDOS,
CP/M or similar background is used to being able to edit any
file and change any setting directly and quickly.
By maintaining the discipline of only doing administrative tasks
from a 'root' login -- and all of your other work from one or
more 'user' accounts you are forced to pause and consider the
implications of what you're doing.
It's also nice that you can partition your work into distinct
domains -- you can always play games from your 'player' account
-- and none of those games can damage your thesis project, or
financial records, or whatever.
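Setting up such a separate account takes only a couple of commands
(a sketch, run as root; your distribution may call the tool
'adduser' instead, and the account name is just an example):
    useradd -m player    # create a separate, unprivileged account
    passwd player        # give it its own password
After that, you do day-to-day work from an ordinary account and
switch identities only when a task really needs it -- 'su - player'
to play, or 'su -' when there is genuine administration to do.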
Personally I think this could use some improvement. I'd like to
see a system whereby each user is implicitly the manager of a
group of "roles." For single-user home systems this would be
basically the same as using your root account to create new
pseudo users for yourself. On multi-user systems it would
delegate the task of creating new roles and rolegroups to the
user -- so that each user's "base account" in effect becomes the
administrator of his own roles.
The problem I see with that is that there's no support in Unix
for it. I think it would take a lot of work to build a set of
tools to support it (and many of these tools would have to be
SUID 'root' in traditional Unix systems -- or would require some
totally different lower-level support such as a variant of a
"capabilities" system). In any event these tools would be very
security sensitive -- and early versions would probably be the
cause of numerous exploits.
However, none of that matters to the home user with root access
to his own box.
-- Jim
__________________________________________________________________________
More on Netscape Mail Crashes
From: Chris, colohan@cs.cmu.edu
In http://www.linuxgazette.com/issue24/lg_answer24.html, you suggest
removing the ~/.netscape tree to stop Netscape Mail from crashing.
I have had the same problem several times, and it does not appear to be
anything in that directory -- it is the mail files themselves. It
appears as though Netscape will occasionally put a wee bit of corruption
in your ~/nsmail/[Inbox, Trash, etc.] files, which prevents it from
reading them. And it crashes when it encounters any corruption in these
files. It also seems to crash if your trash gets too large. (Anything
over 1MB seems hopeless).
So one solution is to back up your mail elsewhere, and erase your mail
directory. Then Netscape will create new, valid, empty mail folders,
and stop crashing for a while. Another solution is to open the files
yourself (they are just text files), and erase any messages that look
suspect.
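In practice the first approach amounts to something like this (a
sketch, assuming the default ~/nsmail location; quit Netscape first):
    cd ~
    mv nsmail nsmail.bak    # keep the old folders around for salvage
    mkdir nsmail            # Netscape will create fresh, empty folders here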
These sound like excellent troubleshooting suggestions,
recovery procedures and workarounds.
I believe I also mentioned that my e-mail is far too important
to me to entrust to Netscape (or any "new" product). For
years I used 'elm' and before that it was 'mush' (mail user's
shell). The switch from 'elm' to MH (using emacs' mh-e and Gnus
interfaces) was nerve-wracking. (I deal with over a hundred
messages a day -- and it's at the core of my business that I
"keep up" on administration and security issues for my customers).
My biggest customer (another consultant in a different specialty)
has also made this switch, after over a decade of using emacs'
RMAIL. As you can imagine there have to be some pretty extensive
advantages to a package to warrant changing from one client to
another. (Merely having a "prettier" interface and a few bells
and whistles isn't nearly enough).
Consequently I will probably stay in a poor position to answer
questions about NS's mail and news readers.
As for the fact that NS crashes when encountering corruptions
in folders and messages -- that's just poor quality control and
poor coding. As usual the issues of "time-to-market" and
"pretty interface" dominate the development of commercial products.
The nature of the computer software industry practically guarantees
that the most widely used commercial products will have bugs of
this sort. This is the result of a set of corporate priorities
that don't match typical customer priorities -- and is a byproduct
of the selection process by which most software is purchased.
I could go on about this for many pages. Since I worked in the
software industry for a long time -- I had a lot of time to
observe the process first hand. (Since I was doing tech support
I also had an abundance of free neural cycles to think about the
issues, as well). Here's a few observations that will help explain
my conclusion:
* Software companies sell features. They only make money on product
sales and upgrades -- and the margins are much better in upgrades
than in initial sales (since many, possibly most, upgrades are
direct revenue -- and no "cut" goes to the channel distributors
and retailers).
* Most software marketing is directed to channel distributors,
retailers, and fortune 1000 corporate purchasing agents. Most of
it is not directed to end users and home customers. These
intermediaries largely determine the pricing and availability of
most commercial software, and the advertising that goes to the
end-user. The priorities of these intermediaries are: high sales,
low product return rates (RMA's). The purchasing agents at Merisel
and Egghead don't do detailed requirements analysis on behalf of
their customers.
* Product returns are most tightly correlated with how long the
customer has had the product before becoming dissatisfied with it.
This is why "ease of use" and "ease of installation" are so
important in commercial software. If the vendors can keep the
majority of failures from occurring for 60 to 90 days -- very few
customers will return the product even if the publisher's policies
allow it.
* There is much more focus on corporate sales than on retail for
most shrinkwrapped software. This is due to high rates of piracy
among home users and the obvious observation that every "customer"
contact costs money (sales and tech support time). So one
successful sale at TransAmerica costs much less than 10,000
individual sales to home users and SOHO markets.
* Most corporate software users have little say and relatively
little interest in what software they use. They are told what to
use -- and usually don't question it. Corporate purchasing agents
get plenty of political pressure from managers and executives but
usually neither the purchasing agent nor the manager spends much
time "in the trenches" with the software that's being used.
* Managers are far more worried about being "wrong" than being
"right." An excellent product from an unknown source is considered
a much higher risk than a mediocre product that gets good press
and comes from a large, well-known source.
* The computer industry press can't sell much copy by talking about
"old" products. They also can't depend on any significant amount
of advertising unless they maintain close, positive,
relationships with their major advertisers. Most of their
advertisers are hardware and software companies.
* Because the writers in most of these magazines are working with
new (usually pre-release or "beta") software or versions they have
no opportunity to discover the bugs that take two or three months
to show up in typical use. In addition most of these writers
either don't use the products they review extensively, or tend to
rely on earlier versions for their production and critical work.
Almost no one is a full-time professional journalist in the
computer industry -- and those that are in this position are in a
rather poor position to do in depth evaluation of anything other
than word processors.
* Despite these limitations -- which almost guarantee that we should
take software reviews with a large block of salt -- these reviews
in major magazines become the focal point of most discussion on
the topic. By the time a given customer has purchased, installed,
configured, and learned a given product it's usually too costly
(emotionally and in time) to "start all over."
* The fact that a large number of commercial packages store some or
all of "their" data (not "yours" -- but "theirs") in proprietary
formats also increases the risks and costs associated with
switching.
* Finally there is a strong possibility that the next product a
given customer tries to switch to will be as bad or worse.
When you go through all of this -- even if you don't agree
with half of the observations -- it's easy to see why so
many people live in quiet desperation, hating their most
important software.
Sadly it takes *really* bad software to fail as a result of its
bugs. dBase IV comes to mind. It doesn't take much for really
high quality software to fail as a result of poor marketing
(or the superior marketing and industry dominance of competitors).
DESQview comes to mind.
By contrast almost all free software is chosen by end-users
based on recommendations from other end-users. It is produced
by people whose only rewards are: access to their own tool
to solve their own problems, the satisfaction of having lots
of users, and some chance for fame and sincere admiration.
They gain nothing by claiming more than they deliver (except
more e-mail with more support questions).
Luckily we, Linux and free software users, are blessed with
alternatives. These systemic problems are what I think we are
really "free" of.
-- Jim
__________________________________________________________________________
Copyright © 1998, James T. Dennis
Published in Issue 25 of the Linux Gazette February 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Review of "A Practical Guide to Linux" by Mark Sobell
By Bernard Doyle
__________________________________________________________________________
Several months ago, with some trepidation and the assistance of a friend who is
somewhat more knowledgeable than myself about computer hardware, I took the
plunge and installed Linux on my Pentium PC.
Soon after, I downloaded a pile of assorted How-Tos, FAQs and tutorials from
the Internet to start doing something useful with Linux. The downloaded
documentation was handy, but I frequently had trouble finding answers to
important questions. After a month I purchased two books: Running Linux by
Welsh & Kaufman and A Practical Guide to Linux by Mark Sobell. Welsh &
Kaufman's book is a well-known, highly regarded, authoritative book on Linux.
It is fundamentally about how to set up the major systems and hardware and
how they interact.
Sobell's book, by way of contrast, approaches Linux from a software
perspective. There is little, if any, overlap between the two books, even
when they are talking about the same thing. The two books effectively work
opposite sides of the Linux street. There is also a contrast in the styles
of the two books. Welsh and Kaufman are somewhat "chatty" while Sobell
basically tells it like it is with little or no opinion thrown in.
Although there is a chapter on System Administration, Sobell's book
concentrates on showing how to use the Linux variants of the standard Unix
software packages. There are chapters on X-Windows, vi, emacs, Linux
Internet and Networking Software, bash (2 chapters on this important
subject), the TC Shell, the Z Shell and Programming Tools.
Learning the bash Shell by Cameron Newham and Bill Rosenblatt (published by
O'Reilly) covers the use of bash in more detail than Sobell's book, but I
suspect it is a little advanced for the beginner. Sobell's chapters on bash
were the most informative and useful information that I have come across so
far. Being something of a scripting/batch-file aficionado, I found the two
chapters on bash provided just the information I needed to produce a host
of useful custom scripts.
The Command Summary takes up about a third of the book and maintains the
high standards of the rest of the text. Sobell uses internal page references
quite freely. This often results in a lot of page turning. I assume this was
done to avoid repetition of material, and given the vast amount of material
that could be included in a book on Linux/Unix software this is a reasonable
compromise, as it leaves more room for additional material.
This is not a book for solving Linux hardware or installation problems. If
you are looking for that sort of information then get Welsh and Kaufman's
book, or download the relevant "How-Tos" (or both). This is the book to use
if you want to learn how to do useful things with the software. The book
manages to cover almost all the major software topics, and it covers them
well.
I do have some quibbles with the book. The Table of Contents uses a
typeface that is much too large. As a result it runs from page xvii to page
xlvii. (That's 31 pages for the Roman-numerally challenged.) Hopefully, the
next edition will address this issue.
One notable Linux/Unix utility not mentioned at all is Perl. A short 5-6
page reference to it in the Linux Utility Program section or an appendix
would have been nice. Summarising Perl in 5-6 pages is possibly a tall
order, but I would have liked some mention or reference to it.
Although the book gives a good rundown on accessing Linux documentation and
software from the Internet, a bibliography of Linux/Unix books would have
been good. "Running Linux" does have a bibliography, so if you have that
book as well then I guess you have the information anyway (although it's a
little out of date).
The book is an adaptation of Sobell's other Practical Guides to the Unix
System and this shows, which is not necessarily a bad thing. However, given
the nature of the Linux community, I doubt whether photographs of a mouse
and keyboard are necessary. On the positive side, the book is professionally
organized, indexed and referenced. It is also substantially larger than the
other Practical Guides to Unix by the same author.
In the light of the high quality of the book overall, all of the above
criticisms are minor and easily overlooked. The book is far and away the
best I have seen on the market for quickly and effectively using Linux
software. If you have a copy of A Practical Guide to Linux and Running Linux
along with a few appropriate "How-Tos", you should be able to get solutions
to most of your Linux questions as well as productively use your system.
__________________________________________________________________________
Copyright © 1998, Bernard Doyle
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Bourne/Bash:
Shell Programming Introduction
By Rick Dearman
__________________________________________________________________________
Sooner or later every UNIX user has a use for a shell script. You may
just want to do a repetitive task easier, or you may want to add a bit
more kick to an existing program. An easy way to accomplish this is to
use a shell script. One of the first shell scripts I wanted was something
that would change a directory full of files which were all in capital letters
to lowercase. I did it with this script:
LCem.sh
1  #!/bin/sh
2
3  DIR=$1
4
5  for a in `ls $DIR`
6  do
7      fname=`echo $a | tr A-Z a-z`
8      mv $DIR/$a $DIR/$fname
9  done;
10 exit 0
11 # this script will output an error if the file is already lowercase, and assumes the argument is a directory
Line one tells the computer which shell to use; in this case it is "sh",
the Bourne shell (or this may be a link to the bash shell). The combination
of the two symbols #! is special to the shell and indicates which shell
will run this script. It IS NOT IGNORED like other comment lines. Line
3 sets a variable called DIR to equal the first argument of the input.
(Arguments start at $0, which is the name of the shell script, or in this
case LCem.sh.)
In line 5 we enter a control loop. In this case it is a for loop. Translated
into English, this line means: for every entry "a" that I get back from the
command `ls $DIR`, I want to do something. The shell will replace the variable
name $DIR with whatever was typed on the command line for you. Line 6 starts
the loop.
Now in line seven we make use of the UNIX utilities available, `echo`
and `tr`. So what we are doing is echoing whatever the current value
of $a is and piping it into tr, which is short for translate. In this case
we are translating uppercase to lowercase, and setting a new variable called
fname to the result.
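You can see the translation by itself by trying it at the prompt:
    $ echo README.TXT | tr A-Z a-z
    readme.txt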
In line eight we move the file $DIR/$a, whatever it may be to $DIR/$fname.
Line nine tells the shell to go back and do all the other $a variables
until it is done. And finally, on line 10 we exit the script with an exit
code of zero (success). Line eleven is a comment.
This script wouldn't have been needed to change one or two file
names, but because I needed to change a couple of hundred it saved me lots
of typing. To get this to run on your machine you would have to chmod
the file to be executable, like this: `chmod +x LCem.sh`. Or you
could invoke the shell directly and give it the name of your script,
like this: `sh LCem.sh`. Using the comment and exclamation mark combination
tells the kernel which shell to invoke and is the normal way to do things.
But remember if you use the #! then the file itself needs to have execution
permissions.
It is only eleven lines but it shows us a lot about shell scripting.
We have learned how to get the computer to run the script using the #!
combination. This combination of a comment mark and a bang operator, or
as some people call it an exclamation mark, is used to start a shell script
without having to invoke the shell first. We learned that a # is how
we can write comments into our script and have them ignored when the script
is processed. We learned how to pass arguments to the script to get input
from the user, and we know how to set a variable. We have glanced
at one of the many control structures we can use to control the functionality
of a script.
Don't worry if you didn't really get all of that. We shall now move
on to explaining some of the most common decision making / control structures.
The first one we want to look at is the `if` statement. In every programming
language we want to be able to change the flow of the program based on
various conditions. For example if a file is in this directory do one thing.
If it isn't do something else. The syntax for the if command is:
if expression
then
    commands
fi
So if the expression is true, the statements inside the if block are
executed. Let's look at a simple example of the if statement.
WhoMe.sh
1  #!/bin/sh
2
3  # set the variable ME to the first argument after the command
4  ME=$1
5
6  # grep through the passwd file, discarding the output, and see if $ME is in the file
7  if grep $ME /etc/passwd > /dev/null
8  then
9      # if $ME is in the file, output the following line
10     echo "You are a user"
11 fi
Notice the extensive use of comments on lines 3, 6, and 9. You
should try to comment your scripts as much as possible because someone else
may need to look at them later. In six months you may not remember what you
were doing, so you might need the comments as well.
Using the if statement we can now correct some of the errors which would
occur in the lowercasing script. In LCem.sh the script will hang if the
user doesn't input a directory as an argument. To check for an empty string,
we would use the following syntax:
if [ ! $1 ]
This means "if not $1". The two new things here are the square brackets,
which are shorthand for the test command, and the bang operator, or
exclamation mark, as the symbol for NOT. So let's add
this new knowledge to our program.
#!/bin/sh
1  if [ ! $1 ]
2  then
3      echo "Usage: `basename $0` directory_name"
4      exit 1
5  fi
6
7  DIR=$1
8
9  for a in `ls $DIR`
10 do
11     fname=`echo $a | tr A-Z a-z`
12     mv $DIR/$a $DIR/$fname
13 done;
Now if the user types in the command but not the directory then the
script will exit with a message about the proper way to use it, and an
error code of one.
But what if we really did want to change the name of a single
file? We have already got this command; wouldn't it be nice if it could
cope? If we want to do that then we need to be able to test whether the
argument is a file or a directory. Here is a list of the file test operators.
Parameter    Test
-b file      True if file is a block device
-c file      True if file is a character special file
-d file      True if the file is a directory
-f file      True if file is an ordinary file
-r file      True if file is readable by the process
-w file      True if file is writeable by the process
-x file      True if file is executable
There are more operators but these are the most commonly used ones.
Now we can test to see if the user of our script has input a directory
or a file, so let's modify the program a bit more.
1  #!/bin/sh
2
3  if [ ! $1 ]
4  then
5      echo "Usage: `basename $0` directory_name"
6      exit 1
7  fi
8
9  if [ -d $1 ]
10 then
11     DIR="/$1"
12 fi
13
14 if [ -f $1 ]
15 then
16     DIR=""
17 fi
18
19 for a in `ls $DIR`
20 do
21     fname=`echo $a | tr A-Z a-z`
22     mv $DIR$a $DIR$fname
23 done;
We inserted lines nine through seventeen to do our file/directory checks.
If it is a directory we set DIR to equal "/$1"; if not, we set it blank.
Notice we now put the directory slash in with the DIR variable and we've
modified line 22 so that there is no slash between $DIR and $a. This way
the paths are correct.
We still have a few problems with our script. One of them is that if
the file which is getting moved already exists then the scripts outputs
an error. What we want to do is check the file name before we attempt to
move it. Another thing is what if someone puts in more than two arguments?
We'll modify our script to accept more than one path or filename.
The first problem is easily corrected by using a simple string test
and an if statement like we have used earlier. The second problem is slightly
more difficult in that we need to know how many arguments the user has
input. To discover this we'll use a special shell variable which is already
supplied for us: the $# variable, which holds the number of arguments
present on the command line. Now what we want to do is loop through the
arguments until we reach the end. This time we'll use the while loop to
do our work. Finally we shall need to know how to compare integer values,
because we want to compare the number of times we have gone through
the loop to the number of arguments. There are special test options
for evaluating integers; they are as follows:
Test            Action
int1 -eq int2   True if integer one is equal to integer two
int1 -ge int2   True if integer one is greater than or equal to integer two
int1 -gt int2   True if integer one is greater than integer two
int1 -le int2   True if integer one is less than or equal to integer two
int1 -lt int2   True if integer one is less than integer two
int1 -ne int2   True if integer one is not equal to integer two
Using this new knowledge we'll modify our program.
1  #!/bin/sh
2
3  if [ ! $1 ]
4  then
5      echo "Usage: `basename $0` directory_name"
6      exit 1
7  fi
8
9  while [ $# -ne 0 ]
10 do
11     if [ -d $1 ]
12     then
13         DIR="/$1"
14     fi
15
16     if [ -f $1 ]
17     then
18         DIR=""
19     fi
20
21     for a in `ls $DIR`
22     do
23         fname=`echo $a | tr A-Z a-z`
24         if [ $fname != $a ]
25         then
26             mv $DIR$a $DIR$fname
27         fi
28     done;
29
30     shift
31 done
What we've done here is to insert a while loop on line 9 which keeps
looping as long as the number of arguments is not equal to zero. This may
seem like we just created an infinite loop, but the command on line 30, the
shift, saves us. You see, the shift command basically discards the argument
nearest the command name (LCem.sh) and replaces it with the one to its
right. This loop will eventually succeed in discarding all the arguments;
$# will then equal zero and we will exit our loop.
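You can watch shift at work with a quick experiment at the prompt, using
the shell's built-in set command to fake some positional arguments:
    $ set one two three
    $ echo $# $1
    3 one
    $ shift
    $ echo $# $1
    2 two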
And finally, note the if statement on line 24; this checks to see whether
the file name is already lowercase and, if so, ignores it.
I hope you have enjoyed this brief introduction to Bourne / Bash programming.
I would encourage you to try some of these examples for yourself. In fact
if you want you could make this script much better by using a switch like
-l to lowercase and -u to uppercase and modifying the script to handle
it.
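One possible starting point for that exercise (just a sketch of one
approach, using the same kind of string test as line 24) is to set the tr
arguments from the switch before the main loop:
    # default to lowercasing, as the original script does
    FROM=A-Z
    TO=a-z
    # if the first argument is -u, uppercase instead and discard the switch
    if [ "$1" = "-u" ]
    then
        FROM=a-z
        TO=A-Z
        shift
    fi
    # if the first argument is -l, just discard the switch
    if [ "$1" = "-l" ]
    then
        shift
    fi
and then use `tr $FROM $TO` wherever the script above used `tr A-Z a-z`.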
I take full responsibility for any errors or mistakes in the above
documentation.
Please send any comments or questions to rick@ricken.demon.co.uk
REFERENCES:
The UNIX programming environment
by Brian W. Kernighan & Rob Pike
Published by Prentice Hall
Inside UNIX
Published by New Riders
__________________________________________________________________________
Copyright © 1998, Rick Dearman
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Clueless at the Prompt
By Mike List, troll@net-link.net
__________________________________________________________________________
Welcome to installment 5 of Clueless at the Prompt:
Here's this month's account of the triumphs, trials and
tribulations that I caused myself or encountered since the last
time, and a couple tips that may come in handy and increase your
understanding of linux.
__________________________________________________________________________
*Changing Disks:
If you make partitions the same size as your
previous disk's, you can simply hook up your new disk as a
slave (see the documentation that comes with your new drive, or
sometimes there's a diagram on the top of the disk that shows the
jumper settings to configure the disk as master, slave, or only
disk) and use the "dd" command. You'll have to prepare the new
disk first, using fdisk to set the partitions to the desired sizes,
then copy each partition separately. If you handle your
partitions one at a time, you'll avoid having the whole old disk's
contents try to settle on your new disk.
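The general idea looks something like this (just a sketch -- it assumes
the old disk is /dev/hda, the new slave disk is /dev/hdb, the partition
sizes match, and neither partition is mounted while you copy; double-check
the device names before running anything like this):
    dd if=/dev/hda1 of=/dev/hdb1
    dd if=/dev/hda2 of=/dev/hdb2
and so on for each partition.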
__________________________________________________________________________
*Backups:
If you have any serious need of any of
the information on your old disk, I can't stress the value of
periodic backups enough. Even if you just backup the
configuration files you worked so hard to tweak to your liking,
and maybe your checking account balance, anything that you don't
have to remember or reinvent is a Good Thing(tm).
If you adopt the strategy of selective backups, you can easily
fit them on a floppy or three, rather than using a whole tape or
zipdisk to back up what you already have on your installation
media. I think that, especially if you installed from a CD, a
plain vanilla install like the one you did the first time can put you
back on your feet when combined with a backup of only those files
you wrote or modified, and any special software that wasn't
included in the distribution.
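For example, a selective backup of a few hand-tweaked files to a floppy
might look something like this (a sketch only -- the file list is just an
illustration, so substitute your own):
    tar czvf /dev/fd0 /etc/fstab /etc/lilo.conf /etc/inittab /root/.bashrc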
To find out what files and libraries are required to run an app,
you can use
ldd filename
Another command that you can use to find out more about files is,
strangely enough, file. File can be used as
file filename
which will give information about other files, as well as
executables.
Yet one more helpful command is which, used like
which executable
where executable is the command used to start the application
as in
which makewhatis
to find out where the executable is located, pretty handy if you
are modifying your path statement.
__________________________________________________________________________
*Oh did I mention backups?
I stress this because I know from experience that failing to back up your data
is an extremely stupid and easy thing to do; but since, apart from the cardiac
care unit and the nuclear reactor, I don't have anything mission critical on my
box right now, I'm still too lazy to back it up. Please exercise
a little cautious computing if anyone's data needs to be secure.
__________________________________________________________________________
*A little bit about FVWM configuration files (fvwm-1.x):
With a little text editing, you can configure your X desktop to your
liking. FVWM-2.x uses m4 macros, which I haven't even tried to
acquaint myself with yet. FVWM is configurable in either
system.fvwmrc or a .fvwmrc in your home directory, so you can set
a consistent set of applications system-wide or change the
defaults to your idea of a convenient desktop. Most of the
possible modifications are explained in comments preceding the
line to be edited or uncommented, and if you have X applications
that aren't included in the default popups, all you have to do is
follow the examples of those already there, usually something like
Exec "PROGNAME" exec progname -options &
the "&" causes the program to execute in the background, which
keeps it from monopolizing X. Note that some apps, such as
ImageMagick, don't seem to want to share, and those will have to be
exec'ed without the "&". Also non-X apps can usually be run by
invoking an xterm or rxvt, in which case the titlebar can be
changed to reflect the program name, as in
Exec "Top" exec color_xterm -font 7x14 -T Top -n Top -e top &
which starts a color_xterm running top. Top, in case you aren't
familiar, basically lists the amount of resources each process is
using. For more info type
man top
or better yet just type top
__________________________________________________________________________
*Some stuff you may not hear anywhere else (so basic they forgot to tell
you):
Redirecting output: you obviously can print a file
to your monitor screen, and with a little luck even to a piece of
paper via your printer, but did you know you can print a file to
another VT or serial terminal, or even to another file? You do this by
using the ">" (redirect) or ">>" (append) operators.
Some examples:
cat filenamehere>>anotherfile
This one will add the contents of one file to another file, as
in chapters 1 and 2 could be added together for reasons of
continuity to make a fluid read that would otherwise be broken
up by having to cat the successive chapters
cat hellaracket.au >/dev/audio or /dev/dsp
is another example of redirecting the output of a command or
file to somewhere other than standard output which is another
way of saying your monitor.
Another feature is command line batching of commands. If you
type several commands separated by semicolons, each command will
execute when the previous one exits. A good example is:
make config; make dep; make clean; make zImage
which will perform each of the steps necessary to compile a
kernel. As soon as the first command exits or is closed, the
next one starts. Any group of commands that you would like to run in
succession can be done in this manner.
Another device you can use to your advantage with a little
imagination is the pipe, signified by the "|" symbol. Pipe is a
pretty good description of what it does, which is to "pipe" the
output of one command into another command for further
processing. One example that springs to mind is
cat filename | pr -l56>/dev/lp0
which come to think of it, is another example of redirection as
well. The above command takes the results of the cat command
pipes it to a filter "pr", and redirects the output to /dev/lp0
to print a file in a reasonably attractive manner. For some of
the options available to "pr", try
man pr
This filter is particularly useful if you find lpr to be beyond
your present capability, as I do :(. You should be aware
however, that this will only work as root, or with a lot of
permission hacking, which is probably best left undone, as it can
cause security problems if /dev/lp0 is made available to regular
users.
__________________________________________________________________________
*That terminal finally works!! What worked:
If you have been
reading this column for a while, you might recall I mentioned a
vt 220 that I couldn't get working. I got impatient and got rid
of it, but sometime later I ran into a Wyse 150 and decided to
try it again. This time I hit paydirt, thanks to a member of the
Kalamazoo Linux Users Group, Scott Yellig. The magic bullet was the
-L flag, which was unreported in the Serial HOWTO, but Scott is
pretty sharp at that stuff. It is used in /etc/inittab (Slackware)
in a line like this,
s2:12345:respawn:/sbin/agetty -L 9600 ttyS1 vt100
modified to reflect the serial port used -- in this case ttyS1,
which is COM port 2 in DOS lingo. This line can also be used with an 8086 or
above to emulate a serial terminal, if used with the proper cable.
The proper cable, usually called a null modem, is often sold as
a serial printer cable.
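One thing to remember if you try this (and it is easy to forget): init
only re-reads /etc/inittab when asked, so after editing the file run, as
root:
    telinit q
and init will spawn the new agetty on the serial line without a reboot.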
A serial terminal is a very good option when used with a Linux box
as it allows more than one user on the system at a reasonable cost
compared to buying another computer. The local university surplus
disposal has them for about $25US, and you may find them for free.
8086, 8088, and 286 boxes, which will also serve the purpose can
be gotten just as cheaply, depending on what hardware is attached.
The other thing you need is a comm program; Minicom and Kermit
are two that spring to mind, or perhaps Seyon if you're in X. I've
never used any of these programs to connect directly to another computer
as a terminal without a modem, so I don't know much about connecting with
Minicom in this manner, but Kermit seems to be pretty simple in
this capacity.
Another use is to kill frozen X applications. I had a Netscape bus
error problem before I got Andreas Theofilu's nets,
and a terminal can be used to kill out-of-control
processes quite easily: log in and use kill or a
similar (remember die?) command to wax the process, and you can regain
your X session. Nearly any non-graphical task you can do on the
console can be done on a serial terminal. One exception, virtual
terminals, can be worked around to a degree by using splitvt,
which cuts your screen into two parts; by using
CTRL-W
you can switch between the upper and lower displays, and work
alternately between the two, with the added advantage of seeing
both screens at once. You can even be root on one while using a
different account on the other screen, easily cut and paste from
one editing session to another, or check top or ps
or many administrative tasks that require monitoring. It ain't
X but it's pretty good for a text-only environment.
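A typical rescue from the serial terminal might look something like this
(a sketch; the process name and PID are only examples):
    ps aux | grep netscape    # find the runaway process and note its PID
    kill 1234                 # ask it to exit politely
    kill -9 1234              # if it ignores you, force it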
troll@net-link.net
See you next month!
__________________________________________________________________________
Copyright © 1998, Mike List
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Confessions of a Former VMS Junkie
One Techie's Journey to Linux
By Russell C. Pavlicek
__________________________________________________________________________
Once upon a time, in a land far, far away...
Someone once told me that phrase was the perfect way to begin a story with a
happy ending. If so, then I am inclined to employ it here.
It has been over 20 years since my first programming experience. An
ASR-33 Teletype with a paper tape punch attached to an acoustic coupler (do
they even tell today's Computer Science students about the joys of a 110 baud
acoustic coupler?) would whir, clunk, chunk, and ding as it magically made my
dry, clinical code come to life and perform wonderful tasks! Amazing! And, I
was told, the wondrous machine miles away on the other end of the telephone
could not only breathe life into my coded creations, but it could simultaneously
do likewise for dozens of other aspiring Dr. Frankensteins who, like me, wanted
to see dry, dead algorithms transformed into living, breathing computer
creations.
That's how it all started. In retrospect, it involved a dreary little teletype
in a bleak little room connected to a slow little coupler (for you recent CS
grads, that's a modem that connected to a phone using an acoustic cradle rather
than today's direct modular phone wire) connected over a telephone line to
a computer that probably didn't have the computational power of a modern
programmable pocket calculator. By today's standards, it was a trivial
computing experience. But it shaped my perspective on computing forever,
because that ancient assembly of antique parts could not only perform
computations, but it could support multiple concurrent users. It did something
that those of us with grey in our hair used to refer to as "timesharing".
When I went to college, I was exposed to and learned the internals of a DEC
PDP-11/34 running the RSTS/E operating system. Another fine timesharing
operating system, RSTS/E happily supported an entire campus population with
a mere 124K words -- just 248K bytes! -- of usable memory and 12.5M bytes of
hard disk storage! But this Resource Sharing Time Sharing / Extended system
made each user feel like they had a whole computer at their beck and call.
It was a marvelously reliable workhorse that ran for days without crashing,
even while hordes of unthankful students stretched it to its very limits on
a daily basis.
Soon after I entered the business world, I met another highly impressive
operating system. It was called DEC VAX/VMS. It was an iron horse of an
operating system that was seemingly massive in its internal complexity, yet
uniform in its appearance. When properly tuned, a VAX/VMS system could
satisfy the needs of dozens or even hundreds of concurrent users for months
on end. Even now, Digital's OpenVMS (the current incarnation of VAX/VMS)
can run for years between reboots faithfully servicing the needs of its
users.
It was here that I settled down. It was here I dug in. Nestled safely in
the FABs and RABs and QIOs of OpenVMS internals, I settled in for a long,
comfortable stay. Where else would a programmer rather go? Here was
reliability. Here were strong multiuser capabilities. Here were
documented system calls... uniform presentation... true upgradability...
all found in a system that just wouldn't quit!
I was home!
Yes, I knew there was more out there. There were all those mainframes.
But who the heck wanted to work with IBM? They were on top. They were
the Big Corporate standard. They were the "safe choice". What fun
was that?
Then, there was Unix. Or, shall I say, the plethora of Unix-like systems.
Each different. Each ugly. Commands that made no sense. Non words like
"grep". What's a "grep"? Editors named after people's initials. Uck.
Phewy! Give me commands like SEARCH and EDIT any day.
Then, of course, came the ground swell which was dubbed the "PC revolution".
Here, at last, was computing for the common man. You could have your own
system with your own software to do your own work. Magnificent concept, but
the tools... yow! The popular PC operating systems were so anemic. Remember,
these operating systems were responsible for the word "reboot"
entering common speech. They were lucky if they could accomplish one thing
at a time, let alone serve the needs of hundreds of people simultaneously.
Yes, color and sound became standard through the PC influence, but so did
the notion that an operating system could have a nervous breakdown whenever
it pleased. With the introduction of these systems into the business realm,
the bar of technical excellence for operating systems plummeted to
previously unimagined lows. Amidst the growing cry for open standards, the
PC's proprietary operating system with undocumented system calls inexplicably
soared in popularity. Suddenly, interface was everything. Reliability was
nothing.
Yet, though I tinkered with the PC at home, I was happy to continue my work
with solid, feature-rich OpenVMS. Then, one day, it happened. I was attending
training on migrating software from OpenVMS to Unix (ugly though it was, at
least Unix was a product of people who
knew what it meant to have a reliable operating system). I picked up a
mail order catalog and there was an ad for an inexpensive PC-based Unix called
Linux. I passed it around during class and by the end of the training
session, there were several people intending to purchase this product as a
means of brushing up on Unix skills.
That's how I came to use Linux. After the class was over, I ordered a copy
of Yggdrasil
Plug-and-Play Linux (Nov 1994; kernel version 1.1). At first, I created an
80 MB partition on my 386SX/40 and ran most of the operating system off of
the CD. The few people I found who knew of the operating system said it was
"still a bit buggy, but cool". I quickly found out that a "buggy" Linux was
still more stable than the more "mature" PC operating systems I had been
fiddling with.
One of my first practical uses for Linux presented itself during a 2 week
intensive training course I needed to attend. As I wanted to touch base
with my wife daily, but knew that the schedule could make it difficult for
us to connect on the phone, I decided to set up my little Linux box as a
mail server during the training. I created a turnkey account and menu for
my non-technical wife to create and read mail messages on the box at home,
while I would compose my mail messages on my laptop and dial in to my home
system to upload and download my mail. Much to my amazement, my limited
little 386 turned out to be a marvelous little mail hub. This lowly little
box, which many would dismiss as having insufficient resources to perform
any serious computing, was suddenly transformed into a true multiuser system
which easily handled the task of being a miniature mail hub!
I soon discovered that there were familiar friends available to help me get
acclimated to my new O/S. On the Web, I found Anker Berg-Sonne's
SEDT
editor to give the EDT emulator I desired. I also found source code for an
implementation of the TECO editor which compiled nicely under Linux. Suddenly,
I was ready to give programming a try in this "new world" I had discovered.
The robust GNU C compiler proved to be a rich engine for developing software.
Coupled with the XFree86 software that provides the standard X windows
interface, I soon found that the Linux environment was a splendid development
platform for producing some 3D object rotation software that was requested
by one of my clients. Even though the target system was an OpenVMS
workstation, I found that I could port the software I developed under Linux
by simply
changing a couple of #include directives. Wow! I now had the ability to
create and run workstation software on a low-end PC!
But that was only the beginning. Soon, I upgraded my system
and made the strategic decision to allocate a large portion of my new disk
drive to Linux. That is one decision I have never regretted. The operational
advantages of my new platform were becoming more and more significant.
Like any PC, my Linux box enjoyed numerous inexpensive hardware options.
Yet, unlike
most PCs, this operating system could really perform multiple tasks
simultaneously. And, unlike most PCs, I didn't have an operating system that
needed constant rebooting. I could develop and run software based on open
standards without having to focus on proprietary system calls. I could employ
a TCP/IP stack that was sure and solid. And, I had
the power of a true multiuser, multitasking operating system.
Then came the 1997 Atlanta Linux Showcase.
I talked my manager into letting
me attend it as a training event. Suddenly, I was surrounded by hundreds of
people who were even more enthusiastic than I. Amidst the technology and
training, there was passion and conviction. I discovered that Linux wasn't
merely the pleasant pastime of a few hackers; it was the growing wave that was
beginning to wash over the beaches of corporations worldwide. Listening to
the impassioned appeals of people like maddog Hall, Eric Raymond, and
Robert Young, I was affected. The software paradigm was changing, and I had
to find my place in this new world.
At work, I liberated an old 486 languishing in a corner and turned it into
an intranet web server. It had been considered too weak for most "serious" PC
applications, yet it has plenty of horsepower to serve as my personal
workstation, intranet ftp server, and intranet web server. Its intranet web
pages are dedicated to Linux advocacy, attempting to convey, convince, and
convict folks within the corporation that Linux is a new market that will
not be ignored. In its first
6 months of operation, the server has processed requests for over
3300 HTML pages. In all that time, the system has never crashed due
to software (we had a power outage once), and at one point the system
exceeded 10 weeks between reboots (I have had to shut it down for
hardware upgrades and environmental reasons).
I have used Linux to develop software for US government customers, both on
site and off. It has proved to be an extremely capable development platform
for software destined for OpenVMS, Digital UNIX, and even Windows NT. Linux's
adherence to industry standards makes it an excellent base for designing
portable software. Plus, the addition of exciting technologies like
KDE and GNOME bring the concept of a user-friendly desktop to a
POSIX-compliant system. Who could imagine the day of a sharp looking Unix
desktop that even the most hesitant end-user could conquer?
Today, Linux is my preferred platform, both at work and at home. I still
have a deep fondness for the robustness of OpenVMS, but I relish the
possibilities of an operating system that can scale from a lowly
386 to a networked army of thundering Alphas.
I do not know all that is ahead for Linux, but I'm tempted to invoke the
normal conclusion for all good stories:
... and they lived happily ever after!
__________________________________________________________________________
Copyright © 1998, Russell C. Pavlicek
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
EMACSulation
By Eric Marsden
__________________________________________________________________________
This column is devoted to making the best use of Emacs, text editor
extraordinaire. Each issue I plan to present an Emacs extension
which can improve your productivity, make the sun shine more
brightly and the grass greener.
__________________________________________________________________________
Jka-compr is a package written by Jay K. Adams which allows
Emacs to handle compressed files transparently. When you open a
compressed file, Emacs will automatically decompress it before
displaying it. If you make changes and save the file, it will be
compressed transparently before being written to the disk. To
enable jka-compr, just add the following line to your emacs
configuration file (normally called ~/.emacs):
(require 'jka-compr)
jka-compr works by looking at the filename extension; in its
default configuration it recognizes .gz (gzip) and
.Z (compress) files. It also recognizes the extension
.tgz and decompresses tarballs before passing them to tar-mode,
which lets you look inside tar files. If you use other compression
programs you can tell Emacs about them too. For example, to use
Julian Seward's bzip2
(slower than gzip, but with noticeably better compression; released
under the GPL) you could add the following to your .emacs (before
loading jka-compr):
(setq jka-compr-compression-info-list
'(["\\.Z\\(~\\|\\.~[0-9]+~\\)?\\'"
"compressing" "compress" ("-c")
"uncompressing" "uncompress" ("-c")
nil t]
["\\.tgz\\'"
"zipping" "gzip" ("-c" "-q")
"unzipping" "gzip" ("-c" "-q" "-d")
t nil]
["\\.gz\\(~\\|\\.~[0-9]+~\\)?\\'"
"zipping" "gzip" ("-c" "-q")
"unzipping" "gzip" ("-c" "-q" "-d")
t t]
["\\.bz2\\(~\\|\\.~[0-9]+~\\)?\\'"
"bzipping" "bzip2" ()
"bunzipping" "bzip2" ("-d")
nil t]))
How does it work?
Packages like jka-compr are written in Emacs Lisp; you can read
the source code in the directory
/usr/local/lib/emacs/${VERSION}/lisp/jka-compr.el for GNU
Emacs, or
/usr/local/lib/xemacs-${VERSION}/lisp/packages/jka-compr.el
for XEmacs users (if you are using a Red Hat Linux distribution,
you need to install the emacs-el package to see the source
files). How can they change the behaviour of Emacs at such a low
level as reading and writing files? The answer comes from the
concept of hooks.
Most of Emacs' low-level functions (which are written in C) have
an associated hook, to which user-level functions (written in
Emacs Lisp) can be attached. Hooks are fundamental to the
customizability of Emacs, allowing users to override default
behaviour in ways that its developers could not have imagined.
Hooks are explained in the Emacs and Elisp manuals, which are
available online from within Emacs by typing C-h i
(or from the Help menubar or (blech!) the XEmacs toolbar).
As an example of using a hook, the after-init-hook is
run right after Emacs is launched and has loaded your
initialization file. Let's say you want Emacs to tell your fortune
each time you start it. Just add the following lines to your
.emacs :
(add-hook 'after-init-hook
(function
(lambda ()
(pop-to-buffer (get-buffer-create " *Fortune*"))
(shell-command "fortune -a" t))))
Next time ...
In the next issue I'll discuss ange-ftp, which lets Emacs
see the Internet as a huge virtual filesystem. Please contact me
at <emarsden@mail.dotcom.fr> with comments,
corrections or suggestions. C-u 1000 M-x hail-emacs !
PS : Emacs isn't in any way limited to Linux, since
implementations exist for many other operating systems. However,
as one of the leading bits of free software, one of the most
powerful, complex and customizable, I feel it has its place in the
Linux Gazette. Don't forget, Emacs makes
all computing simple :-)
__________________________________________________________________________
Copyright © 1998, Eric Marsden
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Gathering Usage Stats
By Randy Appleton
__________________________________________________________________________
Intro
Here in the Linux Laboratory at Northern Michigan University, we have quite
a few users and quite a few computers for them to use. It is important
for laboratories like ours to quantify usage. This
data can be used to justify expansion of a computer laboratory, describe
who is actually using the machines, which machines are being used,
or just satisfy simple curiosity.
Being the curious type, I sat down to write a program that would gather
usage information. The information I wanted includes:
* How much time each user spends online.
* How much time each computer spends being used.
* How often the computer is up.
* User total usage time divided by weeks (to see long term trends).
* User total usage time divided by day for the last couple of days
(to see current trends).
Methodology
My first thought was to just stick my head in at odd times and count users.
But for such a strategy to work, I would have to count users at various
times in the day, including times I might not otherwise be inclined to
visit the lab (like early mornings). Further, I would miss users
using the lab remotely, over the internet.
My second thought was to use the "w" command. This command reads
a log file (normally /var/log/wtmp) and produces a line of output for every
logon event in the past, describing who was logged on and for how long.
My hope was that a summary of this information would provide the usage
statistics I was looking for. Unfortunately, this command does not
produce foolproof output. If the machine crashes while someone is
logged on, then "w" will sometimes produce the wrong total time online.
Even worse, if a person is logged on but idle, this idle time still counts
as usage as computed by "w".
Counting idle time was unacceptable to me. We have several
users with computers in their offices, and they are essentially logged
on 24 hours per day 7 days per week. Their usage is nowhere near
this level (yes, even college professors go to sleep!)
Luckily, there was an alternative to "w". The easiest way to
find out who is currently logged onto a computer is to use finger, a program
designed for just this purpose. The command "finger @hostname"
will describe who is logged on to "hostname", and how long since they actually
typed a command (i.e. finger knows their idle time).
Finger produces a header line, and then one line for every person logged
on. Eliminating the users with a high idle time will provide a list
of users who are using the computer at any given moment. A log file
of such lists, gathered at regular intervals, will describe usage over
the time the log file was gathered.
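To make the idea concrete, here is a minimal sketch (in Perl, like the real
program, but NOT the actual fingersummarize source) of what one probe looks
like: run finger against each host, skip the header line, and keep only the
users whose idle field looks small. The host names, the column positions,
and the ten-minute idle cutoff are all illustrative; real finger output
formats vary from system to system.
#!/usr/bin/perl -w
# Minimal probe sketch: finger each host and record non-idle users.
use strict;
my @hosts = qw(ogimaa ogaa euclid);          # example host list
my %active_users;                            # user name => 1 if seen non-idle
foreach my $host (@hosts) {
    open(FINGER, "finger \@$host 2>/dev/null |") or next;
    <FINGER>;                                # throw away finger's header line
    while (my $line = <FINGER>) {
        my @field = split ' ', $line;
        next unless @field;
        my ($user, $idle) = ($field[0], $field[3]);   # column positions are a guess
        # An empty or small numeric idle field counts as "active".
        $active_users{$user} = 1
            if !defined($idle) or ($idle =~ /^\d+$/ and $idle < 10);
    }
    close FINGER;
}
printf "Total %d users\n", scalar keys %active_users;
Because %active_users is a hash keyed on the login name, a person logged in
on several machines is still counted only once -- the same trick the real
program uses to avoid the double counting discussed below.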
There is an important statistical assumption here. We assume that
a set of entries will accurately describe usage over the whole time period,
not just the precise moments when those entries occur. For
this assumption to be valid the entries should be gathered at regular
intervals.
Defining Usage
The other complicated issue is to define usage. Often a single computer
will have several users logged on simultaneously, and often a single user
will be logged on to multiple computers at once (as I am now). It
becomes important to carefully define usage in these cases. I adopted
the following definitions.
* A computer is in use if and only if there is at least one user
using that computer.
* A user is logged on if and only if the user is logged onto at
least one computer.
* A computer is up if and only if it responds to the finger command
at all, and is otherwise down. Note that a computer that is
currently running Windows will NOT respond, and will therefore be
counted as down (which makes sense to me!).
Given these definitions, it becomes important not to double-count users when
they are logged in more than once, and not to double-count computers when
they have more than one user. Correct programming eliminates this
double counting (see the source code below).
The Log file
The log file contains a series of records, each one of which is a description
of the results of running finger on the set of hosts. The size of
each entry is minimized, since many entries will be gathered yet the log
file should remain modest in size. The top of each entry contains the date
and time the entry was gathered, which is important for gathering time
and date based statistics. The log file entry below shows that it
is 11:45 in the evening on 10/11/97, and that I am the only one logged
in besides root. Root and I are using the computers ogaa and ogimaa.
It also shows that the computer nigig is down, since it is not listed at
all.
Date 97 10 11 23 45
Host ogimaa 1
Host bine 0
Host gaag 0
Host makwa 0
Host mooz 0
Host zagime 0
Host ogaa 1
Host euclid 0
Host euler 0
Host fermat 0
User randy
User root
Total 2 users
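Reading the log back is then mostly a matter of pattern matching. The
fragment below is a simplified sketch (again, not the real fingersummarize)
that tallies how many records each host was busy in and how many records
each user appears in, using the record format shown above.
#!/usr/bin/perl -w
# Simplified summarizing sketch for the log format shown above.
use strict;
my (%host_busy, %user_obs, $records);
$records = 0;
while (<>) {
    if (/^Date /)                  { $records++;                 }
    elsif (/^Host (\S+) (\d+)/)    { $host_busy{$1}++ if $2 > 0; }
    elsif (/^User (\S+)/)          { $user_obs{$1}++;            }
    # "Total" lines just repeat what the User lines already say.
}
print "Records read: $records\n";
print "Busy observations per host:\n";
printf "  %-10s %4d\n", $_, $host_busy{$_} for sort keys %host_busy;
print "Observations per user:\n";
printf "  %-10s %4d\n", $_, $user_obs{$_} for sort keys %user_obs;
Each observation represents one probe interval (fifteen minutes with the
crontab entry shown below), so the counts convert directly into hours of
use, and a user's share of all user observations gives the Percent column
in the sample output further down.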
The Program
The program is named fingersummarize, since its job is to summarize a set
of results from the finger command. It is written in Perl, since
Perl offers wonderful support for associative arrays (where the usage stats
are stored) and working with strings (from the log file and the output
of finger).
There are two basic tasks of fingersummarize. These functions could
easily be done with two separate programs, but I find it easier to have
one program with options rather than two executables.
* It should gather finger results, and store them in a log file.
(fingersummarize -probe)
* It should read the log file and produce the usage statistics.
(fingersummarize -print)
Fingersummarize can be installed easily. Just follow the instructions
below.
1. Copy the executable to someplace on your system, such as
/usr/local/bin.
cp /tmp/fingersummarize /usr/local/bin; chmod 755
/usr/local/bin/fingersummarize
2. Edit the top of the executable so that fingersummarize will probe
your machines instead of mine. This should be very easy to do.
vi /usr/local/bin/fingersummarize
3. Make a blank log file and put that log file somewhere. Often
/var/log/fingersummarizelog is a reasonable place (this is the file
name the cron entry and the -print command below expect).
echo > /var/log/fingersummarizelog; chmod 600
/var/log/fingersummarizelog
4. Install a line in cron so that fingersummarize will run in probe
mode at regular intervals. Below is the line I use, which runs
fingersummarize every fifteen minutes for every hour.
0,15,30,45 * * * * /usr/local/bin/fingersummarize
-probe >> /var/log/fingersummarizelog
That's it. Now, whenever you want to see a current summary of the
usage data, just run
fingersummarize -print < /var/log/fingersummarizelog
Example Output
Here is some sample output. A current example for my lab can be had
at http://euclid.nmu.edu/fingerprobe.txt. The executable itself can be had
at http://euclid.nmu.edu/~randy/Papers/fingerprobe.
Note that the total number of hours computers were in use (12.8
hours/day) exceeds the total number of hours that people were using computers
(10.8 hours/day). This just means there were times that some person
was using more than one computer at a time. Also, note that the usage
spikes at 10am, since a particular class sometimes meets in the lab at
10am.
Stats by user
User        Total     Usage    Hours
Name        Observ.   Percent  /Day
abasosh        47       4       0.42
agdgdfg        54       4.6     0.49
arnelso         7       0.6     0.06
bparton         2       0.1     0.01
bob            28       2.4     0.25
brandk        101       8.7     0.92
btsumda        37       3.2     0.33
chgijs          1       0       0
clntudp         1       0       0
daepke          2       0.1     0.01
dan            93       8       0.84
dfliter        17       1.4     0.15
gclas          43       3.7     0.39
goofy          15       1.3     0.13
gypsy           2       0.1     0.01
jadsjhf         2       0.1     0.01
jbsdjh          2       0.1     0.01
jdefgg          2       0.1     0.01
jeffpat         6       0.5     0.05
jpaulin         7       0.6     0.06
jstyle          4       0.3     0.03
jstamo         17       1.4     0.15
jwilpin        37       3.2     0.33
jwilpou        79       6.8     0.72
kangol         39       3.3     0.35
matt           58       5       0.52
mhgihjj         8       0.6     0.07
randy         187      16.2     1.7
rbush           2       0.1     0.01
root           22       1.9     0.2
rpijj           2       0.1     0.01
sbeyne         17       1.4     0.15
sdajani         1       0       0
sdalma         28       2.4     0.25
ship            1       0       0
skinny         48       4.1     0.43
stacey          2       0.1     0.01
tbutler        35       3       0.31
tmarsha         5       0.4     0.04
tpauls         34       2.9     0.31
vladami        30       2.6     0.27
xetroni        26       2.2     0.23
---------------------------------
Overall      1151              10.24
Stats by Host
Host        Total     Percent   Percent   Hours
Name        Observ.   Up        Busy      /Day
bine          131     100%       4.9%     1.194
euclid        152     100%       5.7%     1.386
euler           7      89.3%     0.2%     0.068
fermat         52     100%       2.1%     0.506
gaag          202      36.5%     7.6%     1.842
maang         118     100%       4.4%     1.076
makwa          77     100%       2.9%     0.702
mooz           92     100%       3.4%     0.839
nigig          81     100%       3%       0.738
ogaa           48     100%       1.8%     0.437
ogimaa        374     100%      14.2%     3.411
waabooz        28     100%       1%       0.255
zagime         38     100%       1.4%     0.346
------------------------------------------------
Overall      2551      94.2%     4.1%    12.807
Stats by the Week
Week         User
Starting     Hours
97 10 04 74.5705816481128
97 09 28 55.9130434782609
97 09 21 64.7
97 09 14 113.023956442831
Last Two Weeks
Day          User
             Hours
97 10 11 7.05882352941176
97 10 10 16.75
97 10 09 4.25
97 10 08 1.5
97 10 07 5.25
97 10 06 8.25
97 10 05 13.8947368421053
97 10 04 17.6170212765957
97 10 03 9.91304347826087
97 10 02 0.75
97 10 01 1
97 09 31 12
97 09 30 9.75
97 09 29 12.75
Stats by the Hour
Hour Avg Users
00 0.151
01 0.163
02 0.151
03 0.053
04 0.036
06 0.027
07 0.055
08 0.175
09 0.75
10 1.398
11 1.171
12 0.972
13 0.814
14 0.775
15 0.778
16 0.607
17 0.526
18 0.459
19 0.455
20 0.232
21 0.321
22 0.339
23 0.196
__________________________________________________________________________
Copyright © 1998, Randy Appleton
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Welcome to the Graphics Muse
Set your browser as wide as you'd like now. I've fixed the Muse to
expand to fill the available space!
© 1998 by mjh
_______________________________________________________________________________
muse:
1. v; to become absorbed in thought
2. n; [ fr. Any of the nine sister goddesses of learning and the arts
in Greek Mythology ]: a source of inspiration
Welcome
to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect,
the above definitions are pretty much the way I'd describe my own interest
in computer graphics: it keeps me deep in thought and it is a daily source
of inspiration.
[Graphics Mews][WebWonderings][Musings] [Resources]
This column
is dedicated to the use, creation, distribution, and discussion of computer
graphics tools for Linux systems.
The past two months have been quite busy for me. First, I moved
from Denver to Dallas. Yes - on purpose. I grew up in
Texas and have many friends here. I loved Colorado - it's a beautiful
state - but I wasn't much of a cold weather fan and winters there could
get chilly. More importantly, I missed my friends. Hey, geeks
need friends too.
So I'm back in Dallas now. The move went well up
until I started to set my computers back up. First, and before
I got the other systems unpacked, I blew the monitor on my laptop (aka
"kepler"). I have no idea what happened. Its just dead.
Sigh. Thats now an $1800 doorstop unless I can get NEC to fix
it for a reasonable price. Suprisingly, I wasn't put off by this.
I started to get my main systems unpacked. The first thing I did
was to bring up my primary system - "feynman", the one I do all my real
work on. I plugged it in, turned it on. It sprang to life
just as always. Then, 15 minutes later - power spike. You see,
this is a brand new apartment complex. No one had ever lived here
before. Apparently no one had ever plugged anything in here either.
That burnt plastic smell you've noticed was my Cyrix CPU and PCI chipset
waving bye bye. $400 more. I really need a cheaper hobby.
Anyway, things are finally back up and running. More importantly,
it's all stable. Through it all my Linux OS has performed fine.
It's the hardware that keeps kicking up dirt. So much for commodity
items.
Once life settled back to normal I got back down to business.
I had spent about a month away from serious nerd time during the move
and was feeling pretty refreshed. Translated that means I should
have gotten my writing responsibilities done with immediately. Instead
I started playing around with the PalmPilot my brother gave me for
Christmas. It wasn't a new one - I think he had it for about a year
- but it's in perfect condition. He knew I'd found some info on using
it with Linux previously and had mentioned that if I were to get a PDA
(Personal Digital Assistant), it would be the Pilot. Well, I got one.
And it's cool (no, not "kewl" - cool, as in "I'm over 30 now"). And
the tools available for Unix systems and the Pilot work great. So
great I wrote an article about it. Keep an eye out in a future LJ
for it. It's cool.
I also took on another programming task. I decided,
for no particular reason I can think of, to begin scanning the bowels of
Gtk and to port my XPostitPlus (aka computer sticky notes for the
3M impaired) to a new widget set. I really enjoyed it, mostly
because the port was very straightforward. Gtk is quite easy to
use. More so than Motif, although Gtk still has a way to go to be
as feature rich (mostly it's missing simple convenience tools - or perhaps
they are there and I just missed them). Anyway, I spent way
too much time on that. Planning new features, testing some neat ideas.
Way too long.
Which leads me to this month's column. It's nearly
midnight on January 29th. I promised I would upload this issue by
tonight. And I still wanted to do a section on XeoMenu, a Java-based
menuing system from JavaSoft. Guess that's not going to happen.
On the bright side - I know what I can do for the Web Wonderings section
next month.
In this month's column I'll be covering that nifty
logo machine, Font3D, along with its sidekick XFont3D. Both are
terrific tools. XFont3D is a fairly decent front end to Font3D which
you'll want to look at if you get seriously involved with creating 3D logos.
For this month, you'll want to view the Muse in something wider than 640
pixels. Sorry, but to get the images in required a little extra width.
Hopefully your holidays (if you had any) were good
and you're ready to get back into the fun stuff again. I know
I am. Hey, I even got approached about possibly being a
series editor for a set of Linux-related books. Gee, I wonder what
topic I should emphasize....
Graphics Mews
Disclaimer: Before I get too far into this
I should note that any of the news items I post in this section are just
that - news. Either I happened to run across them via some mailing list
I was on, via some Usenet newsgroup, or via email from someone. I'm not
necessarily endorsing these products (some of which may be commercial),
I'm just letting you know I'd heard about them in the past month.
Play Video CDs with MpegTV Player
MpegTV is happy to announce that it
is now possible to play Video-CDs (VCDs) on Linux-x86 systems with MpegTV
Player 1.0 and xreadvcd.
MpegTV Player 1.0 is shareware (US$ 10)
for personal and non-profit use only. Commercial licenses are required
for commercial or governmental use. xreadvcd
is a free utility developed by Ales Makarov (source code available).
For information and to download MpegTV Player and xreadvcd:
http://www.mpegtv.com/download.html
To receive announcements of new MpegTV product releases you can subscribe
to our mailing list:
http://www.mpegtv.com/mailing.html
Contact information: mailto:info@mpegtv.com
MpegTV website: http://www.mpegtv.com
Xi Graphics announces Virge GX/2 support
Xi Graphics, Inc. announces support
for the Virge GX/2 in their Accelerated-X
Display Server v4.1 for Linux, FreeBSD, BSD/OS, Sun Solaris/86,
Interactive, Unixware, and SCO OpenServer V. XiG has full 2D acceleration
in all color depths and resolutions. XiG also supports hardware gamma
correction.
For current users of Accelerated-X Display Server v4.1 there is now
an update_4100.016 on their FTP site which contains new support for the
Virge GX/2 (AGP & PCI) video cards, this update includes specific support
for the Number9 Reality 334 video card. The update also contains
enhanced support for the previous Virge GX and DX video cards.
For a demo of the Accelerated-X Display Server v4.1 download the demo
and these updates:
ftp://ftp.xig.com/pub/update/update_4100.016.tar.gz
and
ftp://ftp.xig.com/pub/update/update_4100.016.txt
are the two files required to get this support. The update_4100.016.txt
file has installation details.
If you have a graphic card with troubled support contact XiG. They may
have a server that fixes your problems.
Xi Graphics, Inc. 800.946.7433
303.298.7478
TrueType to Postscript font converter
Andrew Weeks has written a program to convert True Type fonts to Postscript,
so Linux users can use the TT fonts that come with Windows.
See http://www.bath.ac.uk/~ccsaw/fonts/
Comments/Problems to:
Andrew Weeks
Bath Information & Data Services
University of Bath
email: A.Weeks@bath.ac.uk
OpenGL Widget for Gtk
gtkGL version 0.2 is a function/object/widget
set for using OpenGL easily with GTK. gtkGL includes gdkGL, a GLX wrapper.
The list of current archives appears to be at
http://www.sakuranet.or.jp/~aozasa/shige/doc/comp/gtk/gtkGL/files-en.html.
The current version appears to be
http://www.sakuranet.or.jp/~aozasa/shige/dist/gtkGL-0.3.tar.gz
_______________________________________________________________________________
MindsEye mailing list archives
http://mailarchive.luna.nl/mindseye/
Freedom VR 2, a Quicktime VR viewer
Paul A. Houle announces the release of Freedom
VR 2, a Java applet that works like a Quicktime
VR object movie. Freedom VR 2 is a solution for photographic VR that
can be viewed on any platform with a Java-enabled web browser,
including Linux as well as other forms of Unix, Mac OS, OS/2,
Windows and more. Because it's based on open standards such
as .gif and .jpg, you can create Freedom VR content on any platform
as well. Freedom VR 2 is released under the GNU General Public License, so
it's free and source code is available.
Freedom VR 2 adds many features to Freedom VR 1 -- it's now possible
to embed hyperlinks in your VR scenes as well as to make scenes with two
dimensional navigation -- where you can drag the object up and down as
well as left and right. Users can now navigate via the keyboard,
and Freedom VR 2 can now be controlled by Javascript. In addition,
Freedom VR 2 has some improvements in cross-platform performance.
Freedom VR 2 is easy to use; many people have already made great
content with Freedom VR 1 -- to encourage people to use Freedom VR 2,
we're sponsoring a contest. We're giving away a free virtual pet
to the person who submits the best VR model before December 15, 1997.
Take a look at http://www.honeylocust.com/vr/
Editor's Note: Ok, so I didn't get this out in time for the contest.
My apologies.
Brother HL 720 Laser Printer driver for Ghostscript
P.O. Gaillard wrote a Ghostscript driver for the Brother
HL 720 laser printer. He submitted it to Aladdin Enterprises
and it should be included in upcoming versions of Ghostscript (i.e. the
ones coming AFTER 5.10).
This driver is completely free from copyrights by Brother or Microsoft
(the printer is not a true WPS printer, which is why he could obtain
documentation).
You should note that such documentation is not available for Oki and Canon
(LBP 660) printers, which prevents writing drivers for them.
Some facts about the driver and the printer:
= The printer is a 600dpi, 6 ppm, $300 printer
= With ghostscript you can print at approximately 5 ppm
= It took less than 50 hours to develop the driver
People (especially maintainers of Ghostscript packages for commercial
distributions) who want to use the driver with gs3.33 can contact Mr. Gaillard
and he will send them a patch. (The patch was posted in fr.comp.os.linux
a few months ago.) Maybe normal users can wait for Debian and Red Hat packages.
P.O. Gaillard
Ed. Note: this was an old announcement from comp.os.linux.announce.
I don't have any other contact information except for the email address.
_______________________________________________________________________________
VARKON V1.15C
VARKON is a high level development
tool for parametric CAD and engineering applications developed by Microform,
Sweden. Version 1.15C of the free version for Linux is now available for
download at:
http://www.microform.se
For details on what's new in 1.15C check:
http://www.microform.se/userinfo.htm
Johan Kjellander, Microform AB
http://www.microform.se (VARKON/English)
Awethor - Java Based authoring tool
CandleWeb AS is proud to announce a new Java based authoring tool called
Awethor. Awethor strives
to meet the needs of web authors when it comes to designing and creating
graphics for the Web. As the Awethor system uses vector graphics rather
than bitmaps, users can create and publish large scale drawings and animations
in small files, thereby avoiding the large download times traditionally
associated with large web graphics and animations.
The output of Awethor can be run in any browser that supports the Java
language. Awethor typically outputs two files :
1. A file containing the presentation in the QDV (Quick and Dirty
Vector graphics) format. QDV is optimized for the Web, and
graphics in this format have a fraction of the size compared to
similar graphics in GIF or JPEG.
2. An HTML-file example with the correct parameters for incorporating
the QDV graphics into regular HTML-files. In addition, a standard
Java applet driver for QDV is used. The size of the applet is
about 13K, so it is loaded quickly (and automatically) and you may
reuse the same applet on multiple QDV files.
Here is a short summary of the features of Awethor :
* Creates animations and vector graphics that scale for use on the
web.
* Drawing of rectangles, arcs, lines, polygons, splines, images and
text is supported.
* Full featured WYSIWYG vector based drawing tool.
* Integrated HTML based help system.
Awethor may be downloaded from the CandleWeb web site:
http://www.candleweb.no/
FREEdraft - 2D drafting system for Linux/Unix/X.
FREEdraft is under development.
It is not yet in any sense ready for production work. It may be useful
if you are interested in constraint syntax modeling, or are just the curious
type. Currently FREEdraft consists of a viewer, a dynamically loadable
grammar/menu/command system, some geometry types and a library of 2D plane
and CAD mathematics.
FREEdraft is licensed under the GPL. Feedback is appreciated.
The source code and a screen shot are available from
http://www2.netcom.com/~iamcliff/techno.html
Announcing The WebMagick Image Web Generator Version 1.39
New in this release: a 100% JavaScript interface!
WebMagick is a package which makes putting images on the Web as easy
as magick. You want WebMagick if you:
1. Have access to a Unix system
2. Have a large collection of images you want to put on the Web
3. Are tired of editing page after page of HTML by hand
4. Want to generate sophisticated pages to showcase your images
5. Like its interactive JavaScript based interface
6. Are not afraid of installing sophisticated software packages
7. Want to use well-documented software (40 page manual!)
8. Support free software
After 12 months of development, WebMagick is chock-full of features. WebMagick
recurses through directory trees, building HTML pages, imagemap files,
and client-side/server-side maps to allow the user to navigate through
collections of thumbnail images (somewhat similar to xv's Visual Schnauzer)
and select the image to view with a mouse click. In fact, WebMagick supports
xv's thumbnail cache format so it can be used in conjunction with xv.
The primary focus of WebMagick is performance. Image thumbnails are
reduced and composed into a single image to reduce client accesses, reducing
server load and improving client performance. Everything is either pre-computed
or computed in the browser.
Users with JavaScript-capable browsers (Netscape 3 or 4 & Internet
Explorer 4) enjoy an interface that minimizes accesses to the server. Since
HTML generation is done in the browser, navigation is much faster and more
interactive.
During operation WebMagick employs innovative caching and work-avoidance
techniques to make successive executions much faster. WebMagick has been
successfully executed on directory trees containing hundreds of directories
and thousands of images ranging from tiny icons to large JPEGs or PDF files.
Here is a small sampling of the many image formats that WebMagick supports
(48 in all):
* Acrobat (PDF)
* Encapsulated Postscript (EPS)
* Fig (Xfig format)
* GIF (including animations)
* JPEG
* MPEG
* PNG
* Photo CD
* Postscript (PS)
* TIFF
* Windows Bitmap image (BMP)
WebMagick is written in PERL and requires the ImageMagick (3.8.4 or later)
and PerlMagick (1.0.3 or later) packages as well as a recent version of
PERL 5 (5.003 or later). Installation instructions are provided in the
WebMagick distribution.
Obtain WebMagick from the WebMagick page at
http://www.cyberramp.net/~bfriesen/webmagick/dist/.
WebMagick
can also be obtained from the ImageMagick distribution site (or one
of its mirrors) at ftp://ftp.wizards.dupont.com/pub/ImageMagick/perl/.
Did You Know?
...the POV-Ray Texture Library 3.0 has its own domain now? Check
out http://texlib.povray.org/.
Q and A
Q: Is the Gimp licensed under the GPL or the LGPL?
Does it make a difference?
A: Actually, I'm not completely sure about the legal differences,
but I'll tell you what I know and how I interpret it. First, the
Gimp core program is licensed under GPL. The Plug-Ins (as of the
0.99.18 release) are licensed via the Gimp API library they use which is
called libgimp. This library is licensed under the LGPL.
GPL - the GNU General Public License - provides that the program may be
modified and distributed by anyone as long as the changes are distributed
with the source. This means, I believe, that you can sell the Gimp
if you want but that you need to distribute it with the source code, including
any changes you may have made to the program. It also means that
the code in the Gimp's core cannot be incorporated into proprietary programs
- those programs would have to fall under the GPL if they used any of the
Gimp's source code directly.
The Plug-Ins differ from this in that they can be commercial applications,
distributable without source code. They link against libgimp (and
the Gtk libraries, which are also LGPL'd) but do not use any of the core
Gimp code directly. The LGPL appears to cover the library's
distribution rights, but allows proprietary programs to link against the
library with certain restrictions.
At least that's how I interpreted it.
Reader Mail
hixson@frozenwave.com wrote
(way back in November):
I've recently written 3 Perl scripts which help to distribute the task
of rendering with povray between several cpu's. One script is for
SMP (multiple processor) machines. It will break an image into
halves and start a separate process for each. This utilizes both
CPU's in a dual proc machine, and nearly halves the rendering
time. The other two scripts work together to utilize multiple
machines on a network. The server script tells each client script
how much of an image to render (also sending the .pov file and any
necessary files to each client).
These scripts were created using Perl 5.004, Linux 2.0.32, and
POVRay 3.0. I'd be honored if you would like to include a link
from your excellent graphics site to my page at
http://www.frozenwave.com/~hixson/projects.html.
'Muse: Not quite on my
LGH pages, but it's a start. I'll get it added to my LGH pages next
time I do an update (whenever I get a chance to do that).
In going through some old email, I found the following discussion which
took place in early November 1997 regarding the use of RIB shaders with
BMRT. Being a little short on real subject matter this month, I thought
I'd share it with you.
Ed Holzwarth (eholzwar@MIT.EDU)
initially wrote:
I'm trying to render some hypertextures using BMRT... To do this I
need to be able to sample lights with illuminance() at an
arbitrary point inside an object's volume. Seems like the best
thing to do that with would be an Interior volume shader, but I
can't get it to work. Here is some code that I wrote just to test
out volume shaders. From the debugging printf(...)'s, I can tell
that the Interior shader is being called, but it seems to have no
effect on the image. Any ideas would be greatly appreciated!
Would love to see topics like this covered in Graphics Muse!
Partial RIB code
AttributeBegin
Attribute "identifier" "name" [ "ball" ]
Interior "shaders/hsin"
Surface "shaders/trans"
Translate 2 0 6
Sphere 3 -3 3 360
AttributeEnd
.sl code
volume hsin ()
{
if (sin (xcomp(P)) > 0)
{
Oi = .5;
Ci = color (0,.8,0);
printf(".");
}
else
{
Oi = 0.8;
Ci = color (.5,0,0);
printf("!");
}
}
/* transparent shader */
surface
trans ()
{
Oi = .2;
trace(P,normalize(I));
printf("After : Oi = %c, Ci = %c\n",Oi,Ci);
}
'Muse: (Note - I'd love
to get back to BMRT. I just have to learn to stop taking on so many
projects at once.)
Hmmm.
I haven't been playing with BMRT for some time now and was no expert to
begin with; however, I think the problem might be fairly straightforward.
I played with what you sent me by shoving it in a standard RIB that I use
to test objects and shaders. I played with lots of settings in the
RIB for colors and opacity. No real help there. Then I tried
mucking with the two shaders. Not much luck there.
So I thought about what the volume shader really does. A volume
shader does not have a geometric primitive associated with it. It
is bound to a surface. So thinking about this and looking at
how the surface was defined via the RIB and the surface shader I thought
"Gee, maybe the surface isn't of a type that can allow light to pass through
it very well, even if we've set the opacity low". So I swapped your
surface shader with the BMGlass shader I got from a web site (or maybe
it was from Larry Gritz's pages, I've forgotten now - the shader was written
by Larry).
Success. The effects of the volume shader are properly displayed
using the glass surface shader. Or let's say the colors you'd expect
from the volume shader's impact are obvious and distinct. The old
way, all I got was various forms of reflection from the surface.
Now I get the surface mixed with the volume shader effects.
I don't know if this is the correct solution to your problem, but I
think it's a start. The volume shader's effects are tightly bound to
how the light enters that volume, and that is determined by the characteristics
of the surface through which the light must travel. Muck with the
surface characteristics (or use a clear glass shader if you don't want
the surface to play a role in the overall effect) first, then fiddle with
the volume shader.
Ed wrote back:
Hmm. That is interesting. Actually, yesterday I got the code to work
by changing the order of things in the .rib file. Also,
although the volume shader doesn't know about Os and Cs, Oi and Ci
are already set to what the Surface shader has calculated for the
surface hit points. Also, the surface shader gets called twice,
and then the Interior shader is called, and the length of I in the
volume shader is the length of the ray inside the volume. So
anyway, here is a revised version of what I sent you previously;
it now works as expected, but if you change the order of things in
the .rib file it seems not to work. In the shader below, the
color and opacity are based on the length of I, so the sphere
looks 3D. If you replace the interior shader below with, for
example, the noisysmoke shader which comes with BMRT, you get a
smoky sphere. Pretty neat!
Partial RIB code
AttributeBegin
Attribute "identifier" "name" [ "ball" ]
Surface "shaders/trans"
Interior "shaders/hsin"
Opacity [0 0 0]
Translate 1.9 0 6
SolidBegin "primitive"
Sphere 3 -3 3 360
SolidEnd
AttributeEnd
Shader code
/* transparent shader */
surface
trans ()
{
Ci = trace(P,I);
}
volume hsin ()
{
color Cv, Ov;
if (sin (2*xcomp(P)) > 0)
Cv = color (0,length(I)/8,0);
else
Cv = 0;
Ov = length(I)/8;
/* Ci & Oi are the color (premultiplied by opacity) and opacity
of
*the background element.
* Now Cv is the light contributed by the volume itself, and
Ov is the
* opacity of the volume, i.e. (1-Ov)*Ci is the light from the
background
* which makes it through the volume.
*/
Ci = Cv + (1-Ov)*Ci;
Oi = Ov + (1-Ov)*Oi;
'Muse: Neat indeed!
And another from the really old email category:
Rob Hartley <rhartley@aei.ca or
robert.hartley@pwc.ca> wrote:
Bonjour from Montreal!
'Muse: ...and howdy from
Texas!
We are expecting a foot or more of snow today, so I decided to snuggle
up to LG this morning until the roads are cleared.
'Muse: Snow measured in
anything but millimeters is why I left Colorado. Beautiful state,
but I lack the requisite tolerance for frigid winters.
I wrote to you a while ago mentioning the availability of OpenInventor
(OIV) for Linux from Template Graphics Software
(http://www.tgs.com). So far, it seems alright, but there are
still a few things that I cannot get working at home that work
just fine on my SGI at the office. I have the book "The Inventor
Mentor" which took a week for special order, but it was worth the
wait.
The problem with OIV is that it costs nearly a thousand dollars
U.S.! A bit much when I consider that I can get a whole new Linux
box for that much, or for the price of a new souped up PC and OIV,
we can get a second-hand SGI workstation which comes with Inventor
pre-installed.
So I scrounged the 'net a bit and found Links to the 'Apprentice
Project' and 'Pryan' which runs under the QT GUI library. Both
of these packages, available in source form, will read Inventor
files, which is really nice, because Inventor files are/were the
basis for the VRML 1.0 file definition. This I find particularly
handy for developing applications at work and at home. At work we
have a mix of SGI, AIX, HP, and Sun workstations pumped up and
running Catia for our design group (we build gas turbine engines
for jets, helicopters and commuter aircraft.)
Which brings me to why I am writing: In the Linux Gazette I
noticed a query about: "...PC software product -- an interactive
educational system -- what PC graphics package is "state of the
art" for Linux or Windows?" If I were tasked with developing an
interactive 3D system that had to be run on Linux, Win'95/NT and a
large variety of Unixen (Unixes, Unicses?), I would be tempted to
look further into the following:
Open Inventor
Solid, easy to use, multiplatform, but costly ( developer ~$1000,
runtime starts at ~$75 (I think), and decreases with volume)
http://www.tgs.com
The Apprentice project (Inventor clone)
Source is available from this link:
http://users.deltanet.com/~powerg/Apprentice/
Pryan (Inventor clone, requires Qt GUI listed below)
Free software, source code distribution,
http://www.troll.no/opengl/
Qt
Free software (commercial license also, but same code), source
code distribution - http://www.troll.no
Also note that most of the Addison Wesley OpenGL programming books,
including:
The Inventor Mentor
Open GL Programming for the X Window System (which covers
GLUT)
Open GL Programming Guide
Open GL Reference Guide
(and all the 'X' books, including Motif) are good references to
have around, but they are also available in electronic format, in
postscript PDF and hypertext format. I would guess we have heard
little of them because they are so big. I know they exist because
I have and use them online and on-paper. If needed, they would
probably all fit onto a Zip disk.
'Muse: I'm not certain
it's legal to redistribute those texts, but it is nice to know they are
available in electronic format if desired.
I would love to help out in any way I can. Keep up the great work,
'Muse: You already have!
Thanks for all this wonderful information!
PS: I can see a diversification of the realms of computer graphics
between 2D and 3D. Have you ever considered a 3D Graphics Muse?
It is an exciting area that is really growing and I would enjoy
seeing more attention paid to it.
'Muse: It's not a bad idea
and there certainly is enough material to keep it going. The only
problem is that I don't have the time to split between the two subject
areas (and a job, and other writing duties, and ...). Of course,
if any readers would like to do a write up on either and have it included
with the Muse feel free to contact
me. You will, of course, get full credit for your work.
The Muse is just another place for graphics fans to gather.
_______________________________________________________________________________
XeoMenu 1.1 from JavaSoft should have been here.
I just procrastinated. If you want to get a head start on it,
take a look at http://java.sun.com:81/share/classes/menu/source/source.html.
Happy wonderings!
_______________________________________________________________________________
Musings
Font3D and XFont3D
One of the problems with using 3D graphics for logos is the
lack of good model data for the fonts. A quick scan of the various
model banks, such as Viewpoint Datalabs Avalon
archives or 3DSite, finds very few
canned models of fonts. Besides, do you really want to hang on to
a complete set of letters in a given font as model data? After all,
how often will you be using X, Q or Z? (Of course, cyberworld artists
probably use these all the time, but that's another story).
Fortunately, this problem is easily solved using Todd Prater's
Font3D utility. Font3D
is a tool for converting text strings using a given font into model data
which can be read by a variety of modelling programs and rendering utilities.
Output formats include support for POV-Ray (both 2.x and 3.x formats),
Radiance, Vivid, AutoCad DXF, Renderman RIB, and RAW Triangles. The
model data can be generated using a healthy set of Font3D command options.
Features such as face textures, beveling of both front and back faces,
length of face and side cuts for beveling, and object positioning are provided.
Font3D supports both Macintosh and MSWindows TrueType font files.
Font3D is, I believe, shareware. The register.txt
file states it runs for $10US, although it doesn't state explicitly that
you need to register. Since the files in the latest version, 1.60,
are dated with a January 1996 date, I suspect that either no new work has
been done on Font3D in some time or only registered users are getting updates.
Then again, once you've seen the breadth of command options available,
you might wonder what new features could be added.
You can fetch the C++ source for Font3D from its primary
archives at http://www-personal.ksu.edu/~squid/font3d.html.
You can also fetch a slightly older version from
the POV-Ray archives at ftp://ftp.povray.org/pub/povray/utilities.
This latter version is the 1.51 version. I'm not certain why, after
all this time, the 1.60 version has not been added to the POV-Ray archives.
Also note that the 1.51 release includes large DOS and OS/2 binaries
in the zip file, along with the C++ source. The 1.60 release broke
out the DOS and OS/2 binaries and includes only the source.
The source for 1.60 comes in a zip file. If, like
me, you are unfamiliar with C++, don't worry. The Makefile provided
builds the source without modification. There really isn't all that
much to the source, which makes dealing with the build all that much simpler.
The Makefile assumes you have GCC/G++ installed and in your path.
For Linux users this is pretty much a given, especially if you've installed
from one of the well known Linux distributions (Red Hat, Debian, SuSE,
Slackware, etc.). Basically, just follow the installation instructions
for Unix systems that can be found in the font3d.txt file, or if you prefer,
in the font3d.ps document.
The code appears quite stable, producing usable code for both POV and
RIB (via BMRT) as well as DXF and RAW files that were parsable by the latest
version of the AC3D modeller.
Font3D processes a specified string using a specified font
by parsing a set of commands. These commands can be specified either
on the command line or in a configuration file. Command options fall
into 8 basic categories:
Category       Commands
Fonts          font, font-path, map
Visibility     faces, sides, bevels, front-face, back-face, front-bevel,
               back-bevel
Texturing      texture, face-texture, side-texture, bevel-texture,
               front-face-texture, back-face-texture, front-bevel-texture,
               back-bevel-texture
Beveling       bevel-type, cut, face-cut, side-cut, front-face-cut,
               front-side-cut, back-face-cut, back-side-cut
Object         char, code, depth, resolution, string, triangle type
Output         coordinate-system, constants, format, name, output,
               output-path, precision
Positioning    xpos, ypos, zpos
Miscellaneous  config, verbose
A config file can be used to specify commands. The config
command can be used to specify the name of the config file or you can set
the FONT3D_DEFAULT_CONFIG environment variable:
For bash/ksh/sh users:
FONT3D_DEFAULT_CONFIG=<path>/<config_file_name>
export FONT3D_DEFAULT_CONFIG
For csh users:
setenv FONT3D_DEFAULT_CONFIG <path>/<config_file_name>
If a path is not specified, the default config file (font3d.def) will be
searched for in the same directory from which you started Font3D.
Note that the FONT3D_DEFAULT_CONFIG variable specifies the path and file
name, not just the path, to the config file.
Commands are formed as "name=value" pairs, whether they
are in the config file or on the command line. If the "value" portion
of the command includes spaces it must be enclosed in double quotes.
This is probably only applicable to the string command, which is
used to specify the text for which the objects will be generated.
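Purely as an illustration, a config file built from the commands in the
table above might look something like the following. The command names are
Font3D's own, but the values here are invented for this example; the exact
value syntax for each command is spelled out in font3d.txt.
string="Graphics Muse"
font=times.ttf
depth=0.2
cut=0.05
output=musetitle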
By default Font3D uses POV-Ray as its preview renderer,
which means the default output file will be a POV-Ray include file.
Object naming is supported for POV objects, although no other output formats
allow for naming of objects. Font3D also uses a right-handed coordinate
system by default. This can be changed with the coordinate-system
command line option. Note that POV-Ray, for example, uses a left-handed
coordinate system. I would think it would make more sense to make
the default left-handed since the default output is POV-Ray. Strings
are generated by default, but you can specify a single character using
the char command. You can also specify a character code of
a single glyph using the code command.
Texturing is only supported for POV output formats.
The texture is referenced by name only, by applying the named texture to
the object. Font3D cannot be used to generate a texture directly.
The visibility commands only determine if a component (front
face, a bevel, etc) will be displayed in the rendered image. If the
visibility for a component is turned off, the component is still generated
as part of the object in the output file. This means turning the
visibility off for various components will reduce the polygon count for
your objects. It does not turn off the actual beveling, however.
If the cut for a face or side is non-zero, then the bevel will still be
there except with the visibility turned off the object has a gap where
the bevel would have been.
More Musings...
None this month!
Bevels, sides and faces are better understood with a simple
diagram (not reproduced in this text version).
As you can see, it is possible to set quite a few characteristics of
the objects generated. You can't use the rounded beveling features of Font3D
to create completely rounded lettering, however. The beveling (whether
using rounded or flat bevels) works best as a subtle effect on the lettering.
This is because the rounded beveling is done using smooth triangles on
a flat bevel, which only fake the rounded appearance by altering the normals
at the points of the triangles. I covered this type of problem when discussing
BMRT's support for True Displacements in the May
1997 Graphics Muse article titled BMRT Part II: Renderman
Shaders. Also, not all formats support the smooth triangles.
Despite this, smooth triangles are the default (POV-Ray does support them)
and are recommended for final renderings. Previews can be run without them,
of course, to decrease rendering time.
The output from Font3D is prefixed with comments, as shown in font3d-1.txt.
This makes it easy to determine how to reproduce the objects should the
need arise. You can view the actual object code by viewing the example
POV-Ray 3.x and RIB
files. These are abbreviated, sample files, since the complete files
were over 700k. Notice that the RIB file is in a format where it
can be included using the ReadArchive command. The samples generated
produced the following images:
[Image: sample rendering of the POV-Ray output]
[Image: sample rendering of the RIB output]
As you can see the generated objects come out very similar. The rendering
options were not optimized so the quality of the renderings shouldn't be
compared.
Font3D comes complete with very good documentation in both regular text
and a postscript version which prints out to 30 pages. The document
includes a very thorough description of all command line options.
Although Font3D offers many wonderful features, it can be cumbersome
to remember how to use them all. Thankfully, Robert
S. Mallozzi has added an X-based front end to Font3D which he calls
XFont3D. XFont3D
is an XForms based front end that includes a POV preview capability.
That means it understands how to run POV, but not any of the other supported
formats supported by Font3D.
Aimed at POV users, it (apparently, I didn't verify this) will
still run all the command line options allowed by Font3D.
[Screenshot: the XFont3D XForms interface]
Using this interface is pretty straightforward as long as you understand
the Font3D command structure. Clicking a button under the options
header on the right of the window causes the framed area to the left of
that to be populated with relevant buttons and input fields. Many
of these options can be reset to their default values using the small,
square buttons with the black dot in them (just click on it once).
In general, you'll want to choose a font first (using the font button to
access a file selection window), then specify the string to generate and an
output file name. After this you can specify configuration options
and an output file format (RIB, POV, etc). Changing the map type
(MS, which should really be PC to avoid annoying Unix traditionalists like
myself, or MAC) or the coordinate handedness probably won't be necessary
that often, but that depends on your own needs.
Resources
The following links are just starting points for finding more information
about computer graphics and multimedia in general for Linux systems. If
you have some application specific information for me, I'll add them to
my other pages or you can contact the maintainer of some other web site.
I'll consider adding other general references here, but application or
site specific information needs to go into one of the following general
references and not listed here.
Linux
Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page
Some of the Mailing Lists and Newsgroups I keep an eye on and where
I get much of the information in this column:
The Gimp User and Gimp Developer Mailing
Lists.
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce
Future Directions
Next month:
XeoMenu, for one.
libgr might be another, or maybe
IPAD or VRWave,
if I can get either of them running in time.
Let me know what you'd like to
hear about!
_______________________________________________________________________________
© 1998 Michael J. Hammel
__________________________________________________________________________
Copyright © 1998, Michael J. Hammel
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Hylafax
By Dani Pardo
__________________________________________________________________________
Our company network is based on some Sparcs and Linux servers and Windows
(3.11 and 95) clients, which telnet to the server to use a Cobol-written
accounting program. After passing from NFS to Samba (imagine the users' fun
when they first discovered winpopup), we decided to try some fax software.
We had some alternatives: commercial (and really expensive) Un*x software, the
NT alternative, and the free software solution.
NT was an unfriendly and inflexible solution, based on licenses
per user, and we didn't want to spend thousands of dollars buying a Unix fax
server. Having had the good Samba experience, we decided to give HylaFax
a try.
HylaFax (originally flexfax) is made by Silicon Graphics, and
distributed with source code, available at http://www.vix.com/hylafax/.
The latest version is 4.0pl1. Also get the TIFF library. If you get the source,
you must first compile and install the TIFF library (in order to convert
TIFF files to .g3 fax format). You must also have Ghostscript up and running
to convert from PostScript to g3. From experience: use the latest Ghostscript
you can get (unless you would like to see your customers receiving Ghostscript
error messages by fax).
Once the TIFF library is installed, HylaFax compiles right away under
a standard Linux distribution, placing binaries under /usr/local and
the jobs under /var/spool/fax.
Once installed, configure the system by running /usr/local/sbin/faxsetup.
It will add the fax user and modify /etc/services and /etc/inetd.conf (HylaFax
listens to ports 4559 and 444). After some other configuration, faxsetup will
run faxaddmodem, in order to configure which modem(s) to use. Faxaddmodem
will talk to the modems you've specified, getting their parameters, and let you
configure other stuff.
HylaFax consists of two daemons: hfaxd (the server) and faxq (the
prioritized round-robin scheduler). You should run faxmodem to tell the
scheduler which modem(s) it
can use. If you've also planned to receive calls, you'll have to set up
faxgetty, which will place incoming facsimiles into /var/spool/fax/incoming,
while still respecting data calls (passing control to getty/mgetty).
You should also add these daemons in /etc/rc.d. Now you can check that the
server works by telnetting to port 4559.
Some useful programs you will use are sendfax (to submit files), faxstat (to
check the queue), and faxrm (to remove jobs). Sendfax calls faxq, sendpage,
etc. It also
invokes Ghostscript for the image format translation, so you'll normally
send PostScript or ASCII text. If you want to send other formats, check out
/usr/local/lib/fax/typerules. Other interesting configuration files reside
at /var/spool/fax/etc: if you run into trouble with your modem, you'll
probably want to check them.
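As a quick illustration (the destination number is fictitious, and option
details can vary between HylaFax versions, so check the man pages), a short
session from the server's shell might look like this:
sendfax -d 5551234 report.ps    (queue a PostScript file for faxing)
faxstat -s                      (list the jobs waiting in the send queue)
faxrm 42                        (remove job number 42 from the queue)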
And to finish with the server side, it's not a bad idea to modify
crontab to invoke faxqclean, in order to remove sent faxes.
The Client Side
Once you have HylaFax up and running, it's time to configure the
clients: there's MacFlex for Macintosh users, and WinFlex for Windows users.
With WinFlex (and MacFlex too), you'll install a generic PostScript printer (I
usually use an Apple LaserWriter Pro 600 Windows driver). So, when something
is sent to that printer, a window appears, asking for the phone number. You
can also check the queue, remove jobs, etc. Once the fax is sent, the user
will receive an e-mail confirming the job has been done, along with some other
useful information. HylaFax's creators claim that "you'll never
lose a fax",
and I must say that a great deal of effort has gone into this aspect.
WinFlex, although a good solution, is not perfect (the interface
with the printer driver is a bit poor), and doesn't use all HylaFax
features yet (any volunteers?).
Another feature yet to be perfected, this time a server feature,
is the automatic cover page generation: I had real pains creating the
cover page, since much PostScript knowledge is needed. In our company, we
finally wrote the cover page as a normal document with our word processor
and copied it to a Samba share.
Let's Have Fun
The real party began when I was told about the accounting department's
special need. They needed to send their invoices automatically by fax. The
invoices were generated by the Cobol program as an ASCII file of up to 20
pages, but ALL pages had to include the company logo at the top and some text
on the left side. That seemed to be a harder issue than the cover page one,
but after some scripting and some C, and thanks to HylaFax's flexibility,
I could write a printer filter (sketched below, after the list) that:
* gets the phone number from the ASCII file.
* divides the ASCII file into pages (pages were separated by an
EOP marker)
* converts each page to pbm (portable bitmap) with the pbmplus package
* mixes the pbm logo with each page
* converts all mixed pages into PostScript (with Ghostscript)
* joins all PostScript pages into one, and finally calls sendfax.
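The sketch below shows the general shape of such a filter. It is NOT the
actual filter used at our company; it assumes the phone number sits on the
first line of the ASCII file and that pages are separated by form feeds, and
it shells out to the pbmplus tools (pbmtext, pnmpaste, pnmtops) and sendfax.
The tool options and temporary file handling are illustrative only.
#!/usr/bin/perl -w
# Rough sketch of an "ASCII invoice -> logo-stamped fax" filter.
use strict;
my ($input, $logo) = @ARGV;                # e.g. invoice.txt company-logo.pbm
open(IN, $input) or die "cannot read $input: $!";
chomp(my $phone = <IN>);                   # assumption: phone number on line 1
my @pages = split /\f/, join('', <IN>);    # assumption: form feed ends a page
close IN;
my @ps_pages;
for my $i (0 .. $#pages) {
    my $base = "/tmp/faxpage$i";
    open(PAGE, ">$base.txt") or die "cannot write $base.txt: $!";
    print PAGE $pages[$i];
    close PAGE;
    # Render the text page as a bitmap, paste the logo at the top left,
    # then convert the combined bitmap to PostScript.
    system("pbmtext < $base.txt > $base.pbm");
    system("pnmpaste $logo 0 0 $base.pbm | pnmtops > $base.ps");
    push @ps_pages, "$base.ps";
}
# Concatenate the per-page PostScript and hand the result to HylaFax.
system("cat @ps_pages > /tmp/faxjob.ps");
system("sendfax -d $phone /tmp/faxjob.ps");
Real code would of course check the exit status of each external command and
clean up its temporary files.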
Now, I don't even want to think how I'd solve this problem with a
Windows server.
Conclusions
HylaFax is versatile, powerful and flexible fax software, although it is
missing some features. It's highly configurable, provides a good amount of
debugging information, it's secure, and it's free.
There's also a mailing list, where you can get patches and solve some
problems. Once again, free software has proven its
strength to me.
__________________________________________________________________________
Copyright © 1998, Dani Pardo
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Linux Compared to Other Operating Systems
By Kristian Elof Soerensen
__________________________________________________________________________
You might have the feeling that Linux is a really good OS.
In this article I will pit some of Linux's features against those of
some competing *nixes, and thus identify some of Linux's relative strengths
and weaknesses.
Linux and its competitors
Not so long ago a frequent Linux question was "Is it really useful, or
is it just another geeks-only OS?". Now most insightful people consider
Linux to be on par with the best, and the interesting question is "when
is it best to use Linux, and when should some other *nix be preferred?".
To help people identify Linux's place in the market, I've made a comparison
of ten different OSes, eight of them *nixes, where each OS's capabilities
in a number of specific areas are pitted against each other.
The comparison is available as an interactive chart at:
http://www.falconweb.com/~linuxrx/WS_Linux/OS_comparison.html
It's part of a bigger Linux page called "The Linux Resource Exchange"
that holds a lot of other Linux info, such as a searchable HOWTO mirror,
guides to both unofficial and official patches to the 2.0.* and 2.1.* kernels,
pointers on running Linux on workstation hardware, and much more. Take a look
at it at http://www.falconweb.com/~linuxrx
It will be noted that the emphasis of the Comparison Chart, as well as of
this article, is on usability and suitability for real-world usage rather
than on the more technical features of the kernels.
In this article I will present and discuss a summary of the information for
Linux 2.0, Solaris 2.6, SGI Irix 6.2/6.4 and Digital Unix 4.0.
The web site has more info, and holds information for BSDI 3.0, FreeBSD
2.2, MacOS 8, OS/2 4, UnixWare 2.1 and OpenServer 5.0 as well. While this
article is fixed in time, I intend to keep the web site up to date for a
long time to come.
A small extract of the OS Comparison Chart
Columns (left to right): Linux 2.0 | SGI Irix 6.2/6.4 | SUN Solaris 2.6 |
DIGITAL Unix 4.0

OS interoperability
  Runnable foreign binaries:     DOS, Windows 3.1, Macintosh, some SysV |
                                 DOS and Windows 3.1 | Macintosh, Windows 3.1
  Mountable foreign filesystems: FAT, VFAT, UFS ro, SysV, HPFS ro, MAC |
                                 MAC, FAT
  Java:                          yes | yes | yes | yes

OS standards
  Posix.1:       Designed to comply, but only a hacked version has been
                 certified. | yes | yes | yes
  XPG4 base 95:  no | yes | yes | yes
  Unix 95:       no | no | yes | yes
  Unix 98:       no | no | no | no

Policy issues
  Pricing:       Free | Pay per release | Pay per release or 2-year
                 subscription | Pay per release
See the complete chart at http://www.falconweb.com/~linuxrx
Linux and the OS standards
The days of the great Unix wars are more or less gone. It has always been
part of the Unix philosophy that a program written for Unix should not
need anything more than a recompile to work on any vendor's *nix. In reality
there have always been many minor and major differences, making the task
of writing applications runnable on a wide selection of *nixes a challenging
one.
During the '90s the vendors agreed to write down and follow a set
of common standards for *nix behavior. The first one to gain a big following
was the Posix.1 standard. In the last couple of years this standard has
been extended by standards such as Unix 95 and Unix 98, the newer standards
including up-to-date versions of the older standards as well as standardizing
additional areas of Unix. It seems that after a quarter of a century Unix
can finally live up to the "Unix box" metaphor, i.e. a generic square box
with some flavor of *nix capable of running every random Unix program you
care to use.
It's as if OSes are becoming less important from now on. People want
a box with 100% standard Unix behavior so they can run all their applications,
and buy equipment and OS from whichever vendor has the best offer on the
day of purchase.
The versions of *nix made before Linux consisted of many layers of
reworked code that stemmed back to the earliest versions. This was
necessary in order for a *nix version to behave towards applications like its
counterparts, so applications could run everywhere.
When Linus turned his Linux development into a quest for a complete
OS, the Posix.1 standard was his guideline. Having the OS <-> application
interface ready allowed him and the other developers to build all the
internal parts of the OS without using any old code. Ideas fostered and
experience gained since the original Unix could be freely used in the
development of Linux, since none of the code from older *nixes had to be
used.
This is one of the main factors that allowed Linux to be so much better
than the competition. All the innards are brand new modern OS code, taking
full advantage of modern hardware.
As can be seen in the chart above, Linux hasn't got the official "I
am Posix.1 compliant" stamp. A German company named Unifix
has hacked on Linux and gotten their versions of both 1.2 and 2.0 certified.
Their work has more or less been included in the main Linux code. This
doesn't make Linux Posix.1 certified, but it ensures that it's very close,
probably as close as its certified counterparts' non-certified patchlevels
and minor releases.
It's important that work is done to keep Linux in sync with recent standards,
or it will turn into a non-standard *nix only suited for certain niche
purposes, as we are currently seeing various BSD-derived *nixes do.
Linux only has a cost of zero if your time is worthless
The fact that Linux's price tag says zero is not as interesting as it
might seem.
Most of the cost of owning and using a computer system is the cost of
time spent on learning how to use the system, time spent on installing
and maintaining it over its lifetime, and the initial purchase cost
of the computer, applications and OS.
If Linux is a cheap OS, then it's because it can do more with less hardware
than many of its competitors, or because it comes preinstalled with many
hundreds of applications, saving installation time, or because it gives its
users the ability to work smarter, rather than because the OS itself can be
obtained without expense.
Linux has better documentation than most OSes, and all of it is on-line,
so it keeps itself current and is searchable, unlike shelves full of expensive
vendor-supplied paper manuals. The newsgroups and mailing lists provide
a rapid help and support forum that beats every phone-support system I
have ever used. This ensures more rapid problem fixing than on most other
OSes even when the local gurus are out of luck, and it can be used as a
learning tool, thus helping all Linux users work smarter than people using
some other *nix.
Linux can make a PC do most of the tricks an ordinary workstation user
makes his workstation do. A workgroup with workstations can be renewed
into a few high-end workstations acting as shared CPU servers and a Linux PC
on every table. This costs less, and the really speedy CPU servers ensure
that the users get more power than before.
What makes Linux an economical OS isn't so much its own cost of zero,
but all the related savings and improvements it gives its users.
Linux speaks many tongues
One of the first business support purposes Linux was widely put to was
to act as a multipurpose network device and server. It's capable of handling
most of the tasks needed to keep a modern LAN or WAN running. It can
be router, firewall, bridge, gateway, modem and ISDN dial-up server,
nameserver and just about any other network device imaginable. It's also
really good at server jobs like mail, FTP and web.
Having the same OS with the same tools doing all these very different
jobs, instead of having to use a different device for every task, saves
people a lot of time, gives more flexibility, and ties up a lot less money
in equipment purchases or leases.
Other *nixes have somewhat similar abilities, but most require expensive
workstations and really expensive network peripherals, and those that do
run on PCs don't support as huge a range of cheap peripherals
and software as Linux does.
__________________________________________________________________________
Copyright © 1998, Kristian Elof Soerensen
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Linux Ports
By Ross Linder
__________________________________________________________________________
I am writing in response to Dave Blondell's letter,
where he says "The sad truth of the matter is that Bently, and for that
matter most other software companies don't get enough requests for Linux
ports to justify the production costs."
Well perhaps it's true for ports from non-Unix environments, but it
surely is not true otherwise. A look at page 84 "Linux Makes The Big
Leagues" and "A place for Linux" is exactly how I persuaded our
company to start using Linux. For only $250 we could have Linux with
Metrolink Motif; what's more, we could use a cheap PC clone that put
our HP715 to shame in the performance stakes.
As we started to use Linux seriously, we bought more tools like
Insure++, CodeWizard, and INT Edittable Widgets. Soon the HP was
gathering dust, used only for porting to HP-UX and testing.
Ironically, the HP715 was just paid off this year; it's still a
nice machine, but it's no match for a high-end Linux PC.
Since we associate closely with some of our clients, they often visit
and get to see some of the new enhancements that are under development.
Often they noticed how fast Linux was compared to other platforms, so
natural evolution took place, and a lot of our clients have switched
to Linux.
And the best part of all is that I never need to change a line of code
when compiling across platforms; I use simple shell scripts that are
used as CC and LN. An example:
------------------------------------- mcc --------------------------------
#! /bin/sh
# Wrapper used in place of CC: pick the compiler and flags according to the
# machine type reported by uname, so the same makefiles work everywhere.
name=`uname -m`
if [ "$name" = "i386" ]
then
    # the SCO machine (reports i386)
    cc -DSCO $*
elif [ "$name" = "i486" ] || [ "$name" = "i586" ] || [ "$name" = "i686" ]
then
    # a Linux box: optimize for the 486/Pentium-class CPU
    cc -O2 -m486 -fomit-frame-pointer -malign-loops=2 -malign-jumps=2 \
       -malign-functions=2 -DLinux $*
else
    # anything else is assumed to be the HP-UX workstation
    c89 +w2 -z +FPD -DHPUX -D_HPUX_SOURCE -I/usr/include/X11R5 \
        -I/usr/include/Motif1.2 -I/mnt/INT -I/mnt/700_LIBS/xpm-3.4e $*
fi
--------------------------------------------------------------------------
The combination of Linux [Intel] with its LITTLE ENDIAN architecture and HP
PA-RISC with its nice BIG ENDIAN (the same as network byte order) provides a
really nice combination of test beds to ensure that both byte swapping and
64/32-bit compatibility are tested.
At the end of the day it is no extra effort to provide a Linux solution.
Probably the biggest deterrent is the chorus of _loud_ anti-commercial voices. Some
folk who don't mind paying for software should be more vocal.
Recently a really nice guy called Jay explained to me why the GNU
philosophy is so good: he said someone pays you once to do the work,
then the rest of the community should be able to get the benefit
of your work for free, as you have already been paid.
When I pointed out that most commercial applications take many man-years
to write, so we have two options (get one poor soul to pay
millions of dollars, or try to market our product to ten thousand
people who would each pay only $100), I got no response.
And while not everyone may appreciate or use any of the free software
that I have contributed to the Linux community, some of the credit must
go to my employer (who does not provide free software as a rule), for
the skills and resources I used to create my free software were gained
from them; in return, they use some of my free software.
__________________________________________________________________________
Copyright © 1998, Ross Linder
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Linux and Windows 95
The Best Bang for Your Buck
By Leonardo Lopes
__________________________________________________________________________
Many Linux users tend to think of
Windows95 as a competitor to Linux. In
mailing lists and in Usenet it is common
to encounter comments that portray
Windows95 as the materialization of
evil and Linux as the savior of all
cybernetic souls. While it is my belief
that only a small portion of the Linux
community believes the source of all
darkness is Redmond, it is easy to get
caught up in passion and forget to
analyze this situation in a more
technical light, which would definitely be
more productive in promoting the
growth of Linux through its own merits.
Of course Microsoft has thrown more
than its share of low blows over the
years. But it is hard for me to believe
that any other company in the position
Microsoft was in would act much
differently. And in any case, the Linux
community has nothing to gain by
confronting the Goliaths of the software
business in any field except the
technical one. The media attention we
have received lately is founded entirely on
the quality of Linux, which by the way
separates us clearly from the pack.
This attention will only grow in the
future, especially if we present
ourselves as mature albeit idealistic
developers, which most of us are.
We know all too well that Windows95
and its applications are not as stable
as we would like, that support is very
poor and expensive, that it is inflexible
and insecure, and all the other perils
that plague it. People in charge of
supporting it are familiar with error
messages like: "Consult an Expert" and
"Reinstall Windows95".
But if you can put up with that, what
you have is an extraordinary operating
system: it is very easy to use, install
and configure; it is inexpensive; it has
impressive internationalization support;
it has excellent development tools; it is
supported by nearly every major
hardware manufacturer; not to mention
the tremendous amount of high quality
software available in almost every
category for the platform.
Linux, on the other hand, has a
different set of advantages. It is rock
solid, has excellent support, is
extremely flexible and secure, is free, is
open, and so on. From a technical point
of view, it is incomparably superior to
Windows95.
The problem is that companies have
invested billions of dollars in software
and training for the Windows platform.
And Linux does not run Powerpoint, or
MS Word, or Delphi. Also, most end
users will not take advantage of the
extra flexibility and security offered by
Linux. It is not that they have no use for
it, it is just that they are so used to
working with what they have, and so
wary of changes, that they don't really
care about the advantages they may
get. It is sad but true: they would
rather not save sensitive information
than learn about permissions; they are
so used to rebooting their machine all
the time that it has become as frivolous
as clicking a mouse button.
Most end-users spend the whole day
performing parametric transactions on
their machines. In many cases, even
management will prefer to wait days or
weeks for their IS department to
prepare a GUI interface to a query than
to learn SQL and get the information
immediately. Of course, I and many other
people use Linux for most of our
personal computing needs. When I use
Windows95, I really miss the things we
take for granted in Linux, like powerful
command line tools, permissions,
stability, etc... But unfortunately most
users are not like that, nor are they
likely to be.
Linux is best exactly where Windows is
lacking. It is strong in support for
different software platforms. It is
designed to be sturdy and take heavy
workloads day in day out. It has
marvelous internet tools, and picks up
the security buck where Windows
passes it. Nobody wants a web server
or for that matter any server in which
you can't have 100% confidence.
For all these reasons, looking at Linux
as an alternative to Windows95 is in my
opinion a mistake. Its greatest potential
will be achieved as a server and
manager for Windows, complementing
Windows' weaknesses and
guaranteeing a high level of service to
the enterprises who select it. If at all
possible, it's generally a good idea that
end users don't even know, or need
to know, that it's Linux that is offering
the advanced services they're using.
That having been said, the natural
competitors to Linux become Windows
NT and other unices. So let's see why it
is by far the natural choice for this role.
In every step of the initial cost equation
you will be saving money with Linux. To
begin with it is free, or almost free if
you want to take into account the cost
of a distribution. Then it requires far
fewer computer resources than its
competitors, and you'll also save money
there. Also it will often eliminate or
reduce the need for additional
equipment, especially when compared
to NT. Then it is portable to several
platforms. So instead of supporting NT,
Solaris, Ultrix and AIX, each with its
own expenses in training,
documentation, etc..., now you only
have to support Linux. That aspect
alone can save thousands of dollars
every month for an organization.
With regard to software, not only will
you find almost every type of software
you may need for free or very
inexpensively, but bugs are corrected
and new features are added with
incredible agility. No more of that "it will
be fixed in the next release" talk. And
since almost everything comes with
source code, if your organization needs
a feature with great urgency, it is much
easier to add it than with a closed box
OS. That is not to mention the speed
with which Linux itself is updated.
Security holes and bugs are quickly
tracked and fixed, frequently in a matter
of hours. Nobody can put a price tag on
that.
Probably the biggest difference
between Linux and it's competitors is in
support and documentation. No, it is
not commonplace yet to have your
Linux vendor put you on hold for half
an hour to charge you big bucks for
online support like the other guys. And
yes, there are situations in which online
support is indispensable. But there are
already options for online support for
Linux, a business which has every reason
to grow considerably as Linux invades
the corporate market. And in an
emergency, putting a Linux server up
and running can be done much faster
than with any of its competitors. In fact,
in many cases you can have a spare hard
disk lying around for an eventuality. If
you need it, pop open just about any
PC, stick the disk in there, turn the
machine on and go. Also, if you want to
really do things right, the low setup and
maintenance cost makes redundant
solutions using Linux much more
interesting than with any other OS. And
that is not to mention that a lot of
people, including probably the people
who will be in charge of maintaining
Linux at work, use or will use Linux at
home. How many people do you know
who use Ultrix or even NT at home?
If your business is connected to the
internet, you will get an infinite
knowledge base, always willing to help,
generally for free. Antagonists will say:
"Other OSs have their own mailing lists
and Usenet groups too." But the fact is
that no other internet support group is
even close to as effective as Linux's.
Linux is unique in that it offers many
more tools to fix your problems. It
doesn't matter how big a guru you may
be, if the software you use is not
traceable by a debugger and doesn't
come with source, you will not be able
to get answers as fast and as easily.
And there is a "positive spiral", as Bill
Gates would like to define it, with Linux
support: A lot of people learned a lot of
what they know through the Linux
internet support channels. Now they
feel in many ways obliged to help lots
of other people, who will learn a lot of
what they will know through these
channels. And so on.
Linux documentation is incomparable to
that of any other OS: not only in quantity,
quality and price, but also in that it is so
frequently updated. From novice users
to accomplished network
administrators, it is more than likely that
you will find most of the answers you
need from the documentation that
comes with your distribution or with the
CDs that accompany it. If you don't find
it there, it is almost always somewhere
in the internet, reachable by any search
engine. More and more books are
published every month about Linux.
There are monthly publications like
Linux Journal and Linux Gazette
available. There are tutorials, HOWTOs,
FAQs and other documents describing
every single detail of the operating
system, and most of the software that
comes with it. And that is not to
mention the inheritance of over 20
years of UNIX expertise and
information. In total, the amount saved
on support and documentation
expenses every day with Linux can add
up considerably.
Administrative costs are also much
lower with Linux, and administration is
much easier on Linux than on any other
OS. An argument many people use in
favor of NT is that it is so easy to
administrate. A lot of UNIX people were
at first fearful of losing their jobs when
NT came out. Now, how many NT sites
do you know that don't have a dedicated
administrator? The fallacy of Microsoft's
argument is that administrative
costs are not affected by creating new
users in a GUI instead of using a shell
script, or even editing a file. They are
not affected by day to day operations
when things go right, and they are not
affected by performing ordinary
maintenance. What really skyrockets
your administrative costs is when things
go wrong. And anyone supporting
networks knows that they do. With any
system. And when that happens, you
need clear error messages. You need
trace and debug capabilities. And you
need documentation. And Linux offers
all these items with great generosity,
much more than NT and more than
most other unices.
Another factor that increases your
administration costs is when you have
to do anything that is out of the
ordinary. When that happens, you want
flexibility. And while NT may be
acceptable for cooking pasta, finer
dishes will require tools and flexibility
you can only get from UNIX. Because
Linux is so flexible, you can frequently
eliminate routers, bridges, and other
equipment which not only add to
additional hardware cost, but also
contribute to make your network more
complex, introduce new environments
to be learned, and become yet another
failure point. With Linux, the cost
involved in maintaining this
equipment can often be eliminated,
and at other times greatly reduced.
Getting to the Point: Integrating Linux and Windows95
Using Linux with Windows95 is not a
very complicated task. Most of the work
is handled by the Samba suite, a host
of programs designed to work with the
SMB protocol, capable of most services
you expect from a network server:
Handling logins, sharing hard drives,
printers, etc... Samba is especially
useful when you have a mixed
UNIX/Windows95 environment, like we
did at the Mathematics Department at
UFC. When people logged on to any
Windows machine, they would have
access to their home directories as the
H: drive. This brought up an
administrative problem, as people
quickly took up all the hard drive space
available installing Windows programs
in their H: drives. Nothing that a quota
system won't fix.
Samba fools your Windows95 machine
into thinking that it is talking to an NT
server. You can have network profiles,
unified registries for all your machines,
run login scripts, and generally have
most of the bells and whistles available
with NT. [See earlier issue of Linux
Journal]. It is one of the best supported
and documented programs available.
The only problem I had is that logins
take a little longer to complete, when
compared to NT. It is generally a little
slower than NT, but perfectly usable.
The configuration files have a format
similar to the Windows .ini files. You
can use it to share printers, hard disks,
CD-ROMs, etc. According to the
documentation, there is no real reason
why other mass storage peripherals
shouldn't work, although I haven't tried
any.
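To give an idea, a stripped-down smb.conf for this kind of setup contains
sections like the ones below. This is an illustrative sketch rather than the
actual file used at the department; the workgroup name is invented.

    [global]
       workgroup = MATHDEPT
       security = user

    [homes]
       comment = Home directory (mapped to H: on the clients)
       browseable = no
       writable = yes

    [printers]
       comment = All printers from /etc/printcap
       path = /var/spool/samba
       printable = yes

The special [homes] section is what gives every user a personal share under
their own login name.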
At PCC Informática, a computer retailer
at which I installed an intranet based on
a single Linux server, I also installed
HylaFax, an excellent fax server. It was
not as simple to install, mainly because
it asks so many questions that it can
scare you. If you take your time to
answer them, especially with the aid of
your Modem's manual, it should be no
big deal. Also it searches for some
programs which you will not find in
most distributions. For instance, it
asked me for mawk, which I
symbolically linked to gawk, and never
had any problems. The Windows95
Hylafax client, whfc, works reasonably
well, although it is not quite stable
enough for everyday use, and lacks
important features, like job scheduling. I
contacted the author, but he was busy
with other projects, and told me that he
could not release the source code
because of limitations by his employer.
HylaFax is so richly documented that I
decided to implement my own client,
tailored to the specific needs of my
organization. As soon as I get a couple
of machines, I will start doing that. Any
volunteers?
Mail came mostly configured. Not only
was sendmail configured correctly
almost right out of the box, but a POP
server also came preinstalled. All I
had to do in Windows95 was install a
major browser.
Information about products is created
on regular Windows95 programs, then
converted to HTML and made available
for the intranet at the Linux server.
Tutorials and documentation for
installed programs available in HTML
are also available from the server.
The Linux machine at PCC Informática
also has the responsibility of doing IP
Masquerading for the whole network of
22 machines and counting. I had to get
the newest stable kernel at the time
(2.0.29), and a patch for it to work with
FTP. Even in this kernel, the help
message on the configure script will say
that masquerading is experimental
code. I never had any problems
running the machine under the
conditions above. Once the kernel was
recompiled, all I had to do was add two
calls to ipfwadm and I was all set. I had
invaluable help from people on the
internet for this task. The Brazilian Linux
mailing list linux-br, an issue of Linux
Journal, the kernel documentation and web
documentation were all useful tools for
me to get this job done.
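For reference, a typical masquerading setup with ipfwadm looks something
like the two lines below; the internal network address is just an example,
not necessarily the one used at PCC.

    # deny forwarding by default, then masquerade everything from the local net
    ipfwadm -F -p deny
    ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0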
Telecommunications in Brazil is very
expensive. At the time we were
planning this network, our first thought
was on getting a 64k leased line from
the company to our service provider.
That would cost us around US$1050 a
month, only on telephone company
charges. So we decided to build a new
machine, install it at our service
provider and put in it our web content,
ftp server and mail server. The
company would then access the
internet via a dial-up account, which
would cost us only US$210 a month.
Since dial-up calls tend to fail a lot, I
made a simple script which would
check if the line was ok, calling the
service provider again in case the line
had dropped. Also this script mailed my
outside account the current IP number
for the machine, in case I needed to
access it from somewhere else in the
internet. Then I put the script to run
every 5 minutes with crontab. Simple
and agile. In other words, low
administration costs. If the bandwidth
required increased sufficiently, it would
be easy to add a second modem and
use EQL (serial line load balancing) to get a higher
throughput.
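A minimal version of such a watchdog script could look like the sketch
below; the hostnames, addresses and the name of the ppp start-up script are
placeholders, not the ones actually used.

    #!/bin/sh
    # check the dial-up link and redial if it has dropped; after a redial,
    # mail the (new) address of ppp0 to an outside account
    if ! ping -c 3 router.myisp.com.br > /dev/null 2>&1
    then
        /usr/sbin/ppp-on                  # whatever script brings the link up
        sleep 60                          # give the modem time to connect
        /sbin/ifconfig ppp0 | grep 'inet addr' | \
            mail -s "new IP address" myaccount@somewhere.else
    fi

With Vixie cron, a crontab line such as

    */5 * * * * /usr/local/bin/checkline

runs it every five minutes.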
Another use for crontab is making
automatic backups of the company's
database, which runs on Access.
Every day at noon and at 6 PM a copy of
the whole database is made to the
server using a script based on smbtar,
part of the Samba suite, and at 8PM a
copy of the database is made to tape.
The home directories, which users use
at their Windows95 clients mainly to
store business proposals, are also
saved to tape every week. Most users
don't even know there is a Linux
machine in the network.
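As an illustration, the crontab entries driving these backups look roughly
as follows; the machine name, share name and tape device are invented for
the example, and the smbtar user/password options are left out.

    # noon and 6 PM: pull the Access database from the Windows share
    0 12,18 * * *   smbtar -s winbox -x database -t /backup/database.tar
    # 8 PM: write the day's copy to tape
    0 20 * * *      tar cf /dev/st0 /backup/database.tar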
Bottom Line: Savings
Savings with Linux start with the OS
itself, grow through setup on lower
powered equipment (all of the above
works smoothly on a Pentium 133) and
by making networking hardware such as
a router dispensable, continue through
easy software setup, flexible settings
and easy administration and training,
and add up every month through low
equipment maintenance costs, agile
software updates and inexpensive
support. Linux also protects your
investment by allowing you to easily
upgrade to other platforms, or even to
other OSs, if for some strange reason
you would ever want to do that.
How much you will actually save
depends on many factors, but there are
just so many ways to save with Linux,
from support fees to documentation to
affordable redundancy (which means less
downtime) to sheer flexibility, that one
thing is for certain: it will be a
bundle. At PCC, Linux saved more than
US$3000 in initial setup costs and
another US$1000 every month in software,
communications and maintenance
costs. It also has increased the safety
of the data on the network, provided
the employees with the convenience of
private disk space and access to the
wealth of information offered by the
internet and made internal
communications more agile and
inexpensive. If you thought your
company or office was too small to
afford a high-quality intranet or a
company-wide internet connection, think
again. With Linux, Now You Can!!!
__________________________________________________________________________
Copyright © 1998, Leonardo Lopes
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Compiler News
By Larry Ayers
__________________________________________________________________________
Introduction
The GNU gcc compiler is one of the most highly-regarded applications made
available by the Free Software Foundation; it has become an integral part of
Linux distributions. The existence of gcc and its corollary utilities (make,
autoconf, etc.) makes it possible to distribute source code for everything
from the Linux kernel itself to a wide variety of free software applications
and utilities. This fact is crucial to the survival and health of Linux;
different people run different kernels and distributions, and it would be
expecting too much to ask volunteer developers to create binary distributions
for all of the different flavors and variations of Linux and other
Unix-derived operating systems. Richard Stallman does have a valid point when
he emphasizes Linux's dependence upon the many GNU utilities.
Lately there has been a flurry of activity in the GNU gcc compiler world,
resulting in new releases and giving Linux users expanded choices in their
development environments.
The GNU people operate in a relatively closed environment; the average user
doesn't have access to news or progress reports; a new release is usually the
first indication that development is actually progressing. Since GNU software
is released under the GNU licence, there is nothing to stop other developers
from modifying the code and making available variant releases. There exists
another approach to free software development, in which patches and
"snapshot" releases are freely available for interested developers and users.
The Linux kernel (with both stable and unstable development releases
available) is an obvious and influential example. XEmacs, KDE, and GNOME are
others. Since the advent of egcs it seems the GNU developers might be moving
toward this development model, judging by some new material at the
GNU web-site.
Eric Raymond's online
article,
The Cathedral and the Bazaar, is an insightful and interesting
interpretation of these two different models of free software development.
This piece was one of the inspirations for the first gcc variant (that I know
of) to become available: egcs.
egcs
Gcc 2.7.2 has been the standard GNU compiler for some time now. The Cygnus
Corporation is a company which offers
commercial support for the GNU utilities and has ported many of them to the
Windows environment. A group of programmers there decided to try an
experiment. Beginning with the stock GNU sources, they adapted the patches
which would eventually become gcc-2.8.0 (I assume from the GNU gcc development
source tree) and added experimental features which the GNU developers either
weren't interested in or were delaying for a future release. The idea was to
make periodic snapshot releases freely available with the hope of attracting
more developers. This approach seems to be working; I don't know how many new
programmers are contributing to the project, but the two releases they have
made to date (1.00 and 1.01) are being used by quite a few people without
many problems. Any fruitful changes in gcc/egcs will be available to the GNU
gcc developers for possible inclusion in future releases. This benefits
end-users as well as the GNU programmers, as users get to try these new
features and functions (and hopefully bugs will be reported and dealt with),
while the sources may be of use to the GNU gcc people in their efforts.
pgcc
Both the gcc and egcs compilers are intended to be built and used on
systems based on a variety of processors. Yet another group of developers has
hacked the egcs code to support operations peculiar to the Intel Pentium
processors. Pgcc consists of a set of patches which can be applied to the egcs
source, which will allow code to be compiled with various pentium
optimizations. These developers claim execution speed (of binaries
compiled with pgcc) can be five to thirty percent faster than stock gcc. A
new Linux distribution called Stampede
is using pgcc to compile the binaries of the kernel and
applications which they plan to distribute. Interestingly enough, the
original patches which the pgcc team used as a starting point came from a
programming team at Intel.
My Experiences
Though the GNU gcc compiler has always worked well for me, the appeal of
novelty led me to tentatively experiment with egcs when the first real release
appeared on the egcs web-site late
last year. The first thing I noticed was that the compiler tends to generate
more warnings than the -Wall switch did with gcc 2.7.2. These
don't seem to have deleterious effects, and I've heard gcc 2.8.0 exhibits the
same tendency. Everything I tried seemed to compile successfully except for
the KDE source; I've been told that this will be fixed for KDE beta 3. If
you've never built a version of gcc from source, be prepared for a long,
disk-space-intensive compilation. It happens in several stages; during the
last of these the compiler actually compiles itself, a process known as
bootstrapping.
Some time after installing egcs I happened upon the
pgcc web-site. I downloaded
a large set of patches and patched the egcs source, ending up with another
compiler. Along with the claimed execution speed increase (which in most
cases probably isn't large enough to be noticeable) optimization can be
increased to -O6, and a pentium-specific flag
(-mpentium) can be used. The binaries generated tend to be
substantially larger than gcc's due to the default inclusion of
exception-handling. This can be disabled with the switch
-fno-exceptions.
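As a concrete (if hypothetical) example, a compile line using these options
might look like the following; the file names are arbitrary and the exact
flag set depends on the egcs/pgcc snapshot installed:

    g++ -O6 -mpentium -fno-exceptions -o hello hello.cc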
So far I've compiled several Linux kernels, XEmacs, mutt, slrn, the Gimp,
gzip, bzip2, and others without any problems. I wish I was a systematic type
of person and had timed the execution of these programs before and after using
egcs/pgcc, but I'm not. As an example, I'm running XEmacs 20.5 beta-22 using
the Linux kernel 2.1.84, and the editor seems snappier and more responsive
than before. But is this due to the compiler, the kernel, the XEmacs version,
or (most probably) all three? Too many variables and not enough time!
I wouldn't recommend installing any of these gcc variants unless you are
willing to monitor newsgroups, web-sites, and possibly the mailing-lists.
Luckily problems and work-arounds are reported quickly, and of course the
invaluable safety-net of gcc distribution packages is always there if your
set-up gets badly hosed. It will be interesting to see what comes of this
non-adversarial fork in the evolution of gcc.
Last modified: Sat 31 Jan 1998
__________________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Gmemusage: A Distinctive Memory Monitor
By Larry Ayers
__________________________________________________________________________
Introduction
Linux may not have many office-suites available, but it sure does have a
wide variety of system, process, and memory monitors! I use ProcMeter quite a
bit, mainly for the incoming and outgoing TCP/IP packet display, but recently I
happened upon an unusual memory monitor which displays the relative
proportions of memory in use by running processes. Gmemusage is a small X
application written by Raju Mathur. He has been attempting to emulate a
monitor (also called gmemusage) which is used on Silicon Graphics
workstations.
Features
Here is a screenshot, which will save me several paragraphs of description:
Gmemusage window
As with many other such monitors, the information shown is essentially the
same as what is shown in the memory fields produced by ps
-aux, which derives its information from pseudo-files in the /proc
directory. These files, such as meminfo and loadavg, are
generated dynamically by the kernel. You can read them directly, by running a
command such as cat /proc/meminfo.
Although a plethora of information is presented by the output of
ps -aux or top, more detail is shown than is
needed for a quick overview and comparison, and tabular data doesn't easily
lend itself to comparative analysis. You won't see precise differences
between memory usage while contemplating a gmemusage display, but the
proportions are shown in a graphical and easily interpreted format. In most
cases the relative proportions are more useful than the decimally exact detail
shown by ps or top.
Raju Mathur has plans to enhance gmemusage. One possibility (mentioned in
his TODO file) is to add a pop-up window which would give
additional information about a process when its name in the main display is
selected with the mouse.
I like this small utility, partly because it diverges from the usual
dynamic bar-chart display found in many memory monitors, and also because it
is small and specialized. You don't have to spend time configuring it either;
it works well "out of the box". If you would like to try it, the source
archive is available at the gmemusage home WWW
site.
Last modified: Sat 31 Jan 1998
__________________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Xephem
By Larry Ayers
__________________________________________________________________________
Introduction
ephemeris n., pl. ephemerides 1. A table giving the coordinates of
one or a number of celestial bodies at a number of specific times
during a given period. 2. A publication that presents a collection
of such tables; an astronomical almanac.
The above definition came to mind when, some time ago, I happened upon a
Debian package called xephem while browsing the contents of a
distribution CD. At the time I dismissed any thought of installing it; I
could visualize (falsely, as I later learned) a simple X application
displaying scrollable lists of sun, moon, and planet rising and setting times
for various latitudes. This sort of information is easily available from
printed ephemerides and hardly justified installing a probably old package.
A salient aspect of free software is that it's not advertised, and
word-of-mouth has its limitations. News of an application with wide appeal,
such as an editor or file-manager, will eventually be spread via the internet,
but a program which occupies a specialized niche might not receive the
attention it deserves.
Some time later I saw a brief description of xephem in a usenet posting
which was enough to spark my curiosity. After trying it out, I was impressed,
and thought the word should be spread.
Description and Features
Xephem is a Motif-based X application which goes far beyond the name's
implication. It's a multi-purpose astronomical program which can present
detailed, zoom-able star-charts as well as views of the earth, moon, planets
and the entire solar-system. These views can be from any location on Earth,
at any time in the past or future.
This application can be effective on several levels. The casual star-gazer
can consult xephem just to see what planets and constellations are visible on
a certain night, and perhaps print out a star-chart. As a teaching aid
xephem's graphical and animated displays could spark a student's interest.
The serious amateur astronomer can set up a link between a telescope and the
program, so that the sky-view displays whichever spot the telescope is also
seeing.
This review will be more comprehensible if a screenshot is presented first.
The first window which appears when xephem is started is the control
window:
xephem controls
In this window various parameters, such as location, date, and time, can be
set. From the menubar the view windows can be summoned, as well as which of
the various astronomical databases (included in the distribution) should be
loaded into memory. These databases are quite a useful resource to have
available. They include the Messier and NGC databases of deep-sky objects,
along with databases of asteroids, comets, and satellites. Updated versions of
the latter two are available on the xephem web-pages.
Here is a screenshot of a skyview window:
Skyview window
This window is much more than a simple star-chart of a certain date, time,
and location. Right-mouse-button clicking on a star or other astronomical
object summons a small window showing various facts about the object. Zooming
in can also be done with the mouse, and a zoomed view can be panned using the
scroll-bars. A variety of viewing options can be set from the menubar. The
constellation names and outlines can be shown, and if any of the xephem
databases are loaded the objects in them will be visible, if desired.
One view window which I find particularly interesting is the earth view. A
representation of the earth from an orbital viewpoint is shown, with the sun's
illumination and current zenith-point highlighted. This is updated in
real-time, and equivalent views displaying the zenith location and area
illumination of either the moon or the other planets are menu options.
Earth View
Another view-window displays the solar-system in schematic form. This and
the earth-view windows can be animated, a sort of cartoon-movie which shows
the relative movements of the various celestial objects.
Availability
The xephem web-site
is the place to visit if you'd like to investigate this application. Source
for current development versions is available there; I've had good luck
compiling and running these. Users lacking the Motif libraries can obtain
statically-linked binary releases from this site, and updated databases are
available as well. Elwood Downey is the author of xephem. If you install
it, I'm sure he would be glad to hear any comments you might have.
Last modified: Sat 31 Jan 1998
__________________________________________________________________________
Copyright © 1998, Larry Ayers
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
A Simple Internet Dialer for Linux
By Martin Vermeer
__________________________________________________________________________
Those of us who have used Netscape (or other Web browsers) under Windows
may have felt some envy at the sight of the Dialer, a little box in one
corner of the screen showing that you are on-line and how much time you
have already spent on-line, so your phone bill doesn't go overboard.
In Linux, on the other hand, setting up a dial-up connection and making
it work is often a rather painful process, a "challenge", if you like:
not only are there no handy auto-install packages available from your internet
service provider -- you have to figure out everything for yourself, and know
what questions to ask -- but establishing the connection every time also
requires you to go through a sequence of operations.
Open an xterm or a virtual console, log in as root, and run the ppp
startup script (unless of course you use the diald
package for dial-on-demand, an alternative approach; I personally found that
it had too much of a mind of its own :-).
Closing the connection similarly requires you to run
a disconnect script.
One of the first things I did therefore when I decided to learn tcl/tk
was to write a Dialer look-alike. It (tkdial)
is attached to this text; it is the first tcl/tk program I ever
wrote -- just under 150 lines -- and that may show. But tcl/tk
is ideal for this kind of job, "gluing" existing command line facilities
together into a beautiful Motif-look, mouseable package. Just have a look
at the pictures!
_______________________________________________________________________________
[link down] [link up]
______________________________________________________________________
You can put a call to this script somewhere in your X startup; in the case
of Red Hat 5.0, in the file /etc/X11/AnotherLevel/fvwm2rc.init.
Then you will always have it on your desktop (Linux lives on connectivity!).
It gives precise, interactive, manual control of your ppp link.
There are some things about a dial-up connection which appear not to be
generally known (I'm not talking to you, geeks and gurus :-).
I'll give a quick run-down of my experiences as I understood them (but
note that I am no professional):
* In order to be able to run tkdial (which calls pppd) as an
ordinary user, you should have pppd set suid root. Additionally,
you should be able to read the scripts in the /etc/ppp directory,
so they should either be world readable or readable by a group to
which you belong. (A nice exercise in basic system administration.
But if you give world reading rights to your pap-secrets file, you
will deservedly fail your exam!)
* The standard Red Hat sendmail setup uses sendmail -bd -q1h; in
other words, the mail queue is run once an hour. That's not
much. In a dial-up environment you want to send out mail while the
line is up, so change the -q1h to -q2m, for example, for every two
minutes. And check with the mailq command whether your mail really has
left your machine before closing down the ppp link. (If you
forget, not to worry: the queue will continue to try for five
days, every time ppp comes up.)
* There is an option to pppd called lcp-echo-interval, which can be
used to keep the line alive. LCP means Link Control Protocol, and
by putting an option lcp-echo-interval 60 into either your
/etc/ppp/options file or on the pppd command line when starting it
up (i.e. inside the tkdial script file), you can keep your line
alive even when not actively browsing.
This is important because, with the ubiquity of crashy operating
systems, internet service providers have taken to the habit of cutting
the line when nothing has arrived over it for a couple of minutes.
Imagine starting a five hour download, going shopping, and
returning only to find that three minutes after you closed the
door, the machine crashed and the phone line is still open,
burning up your money for nothing! (This could even happen in
principle with Linux, if the power goes down and you don't have an
UPS, or your dog gnaws off the phone wire. Well, the modem has a
time-out also). So Windows dialers send an empty packet once
every minute or so to the ISP, telling "don't worry, I'm still
alive!" And when the system crashes, the line cuts promptly.
With the option given above, Linux too will send an empty packet
every 60 seconds (see the options-file sketch after this list).
* If you have a POP3 mail service, the best program (transport
agent) undoubtedly is fetchmail, which transports the mail to your
"system maildrop", typically /var/spool/mail/<userid>. Also
fetchmail can be run as a daemon. You can use xbiff or xmailbox to
inform you of arrived mail, and read it with pine, exmh
(recommended, another one of those tcl/tk miracle programs!) or
whatever. If you use Netscape mail, forget about all this, you
just have to configure it on its own terms, which involves
learning pretty much the same concepts anyway.
* A trick (I don't really know if this is wise or intended, but it
sure is effective!):
If you use very much the same search engine all the time, e.g. Alta
Vista, put it in the file /etc/hosts. Find out Alta Vista's IP
address with ping. The details are left as an exercise for the
reader, as well as the explanation for the speedup achieved (hint:
DNS...)
* Make sure that your machine name (as given in the network setup
procedure) is the same as that which your ISP gave to your
mailbox. So, if you are john.smith@isp-international.com, call
your machine isp-international.com. Not very romantic, but you
avoid problems with an anti-spam feature in some sendmail
installations, which bounces mail coming from a "sender" not
existing (i.e. not found by the domain name service) on the
internet. (I expect that this feature can be circumvented by
reconfiguring and regenerating sendmail.cf. I guess the sendmail
folks just bet that such a feat is way beyond your average
spammer, and I bet they're right...)
Alternatively, make yourself exist; but that requires the
co-operation of your ISP. E.g. EUnet would give you a mailbox name
of donald.duck@john-smith.pp.fi, which provides you with a
slightly more personalized name for your own machine...
And make sure that the localhost name also stays valid. Some
programs depend on it.
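To make the lcp-echo-interval and /etc/hosts points concrete: the relevant
part of /etc/ppp/options could read

    lock
    defaultroute
    lcp-echo-interval 60

(lock and defaultroute are just commonly used options; only the last line is
the keep-alive discussed above), and the search-engine trick amounts to one
added line in /etc/hosts of the form

    192.0.2.10   altavista.digital.com

where the address shown here is made up -- look up the real one with ping.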
_______________________________________________________________________________
Acknowledgement: I am indebted to Jaakko Hyvätti
of EUnet Finland, who provided me with working ppp scripts and plenty of good
advice.
_______________________________________________________________________________
Enjoy!
______________________________________________________________________
(a piece of my desktop:)
-
[my desktop]
__________________________________________________________________________
Copyright © 1998, Martin Vermeer
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
Secure Public Access Internet Workstations
By Steven Singer
__________________________________________________________________________
Introduction.
Linux is the perfect operating system to deploy in a hostile environment; the
built-in security features combined with the customization most window managers
allow make Linux ideally suited to this task. Recently a local career planning
agency wanted to deploy a dozen public access Internet workstations at various
locations in the community, including libraries and hospitals. Linux was chosen
as the operating system for the task. This article provides details about how to
set up Linux so that it can safely be deployed as a public access workstation.
Why Linux
When it came time to decide on how to set up the workstations, various
solutions were considered. It was decided beforehand that the hardware would be
Intel-based PCs. That essentially left us to consider the various Microsoft
offerings or Unix. The licensing costs for NT would have added significantly to
the project's budget. Even after buying NT licenses for each workstation, an
experienced administrator would have had to spend time configuring and securing
each NT machine. It was determined that NT was an option, but an expensive one.
Win95 is significantly cheaper than NT, but lacks the built-in security features
of a more advanced operating system. Our biggest fear with Win95 was that we
would frequently have people walking in and messing up the systems' setup.
Linux offered us a solution to all of these problems. The flexibility of
X Windows, combined with Linux's basic security features, allowed us to set up
the workstations such that we did not have to fear hostile users. The licensing
costs were essentially non-existent, and setting up each workstation became a
matter of following a simple routine.
The Installation Procedure.
When you have to set up a bunch of Linux workstations with essentially the same
configuration, there are two approaches you can take. The first one consists of
setting up and testing the first machine, then duplicating the entire hard disk
onto each workstation. (If you are doing this, remember that you will most
likely have to re-run LILO on each workstation.) The second method is to
manually set up each workstation by following a standard check-list. We opted
for the second method for logistical reasons. However, the installation
procedure was automated by re-using config files and running scripts where
possible.
We used RedHat 4.3 as our distribution, installing from the RedHat PowerCD
set. I suspect any decent Linux distribution would have worked equally well.
By the time we had finished the installation of the first machine, I had
established a step-by-step checklist of things to do during the install. As we
went along, we occasionally revised the check-lists, which required us to go
back to the original few machines and make some changes after the fact.
OS Installation & Networking.
The installation started out as a standard Red Hat install; the machines had
plenty of hard disk space, so we were quite liberal in what packages we
installed. This included any networking stuff we felt was relevant, and
X Windows. We had to manually install ipfwadm since there wasn't an explicit
option for it.
Dial-out-on demand.
The machines were to be connected to the Internet via a modem; we used the
dial-out-on-demand PPP support that is built into the 2.x series of kernels. We
placed a chat script containing the pertinent information in /etc/ppp and
ensured that only root had any sort of access to it (mode 700). For more
details on setting up dial-out-on-demand networking see the kernel 2.0
documentation and the PPP FAQ. The Networking HOWTO should also contain some
useful information. We then tested the network connection to ensure it worked.
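As a rough sketch, a chat script of this kind (the phone number, login and
password are placeholders, not the agency's real ones) lives in
/etc/ppp/chat-isp:

    ABORT BUSY
    ABORT "NO CARRIER"
    "" ATZ
    OK ATDT5551234
    ogin: guestppp
    word: secret

and is then locked away from ordinary users with

    chmod 700 /etc/ppp/chat-isp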
X-Windows.
XF86Config
The XF86Config file is the configuration file for the XFree86 X server. We
created this file as we would have for a normal Linux workstation running X,
except that we added the lines "DontZap" and "DontZoom". DontZap prevents a
user from killing the X server with a break key sequence. DontZoom prevents
dynamic changing of resolutions. Both of these options prevent a hostile user
from making the machine look somehow different for the next person that comes
along. Further details about this file can be found in the XF86Config man
page.
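In XFree86 3.x syntax the corresponding part of the file looks something like
this; the rest of XF86Config is built as usual:

    Section "ServerFlags"
        # disable the Ctrl-Alt-Backspace server-kill sequence
        DontZap
        # disable Ctrl-Alt-Keypad-Plus/Minus resolution switching
        DontZoom
    EndSection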
Xdm.
Xdm is a login manager for X Windows. Instead of the standard text-based
login prompt you normally get at the Linux console, xdm is an X-based program
that asks the user for a user name and password. The user is then logged in
with X Windows running.
The following xdm files go in /usr/X11R6/lib/X11/xdm.
Xsession
We used a standard Xsession file; however, we made sure that it loaded
fvwm as our window manager. (Other window managers will also work; we
simply decided to use fvwm.)
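A minimal Xsession along these lines might look like the sketch below
(the path is an assumption; a stock Xsession usually also checks for a
user's own .xsession first):
     #!/bin/sh
     # Minimal Xsession sketch: start fvwm as the window manager.
     exec /usr/X11R6/bin/fvwm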
XResources
The XResources file controls settings for xdm's login process. We used the
standard XResources file but added/changed the following lines. They all
affect the appearance of the login window, with the exception of the last
line, which allows our guest account to work without a password.
     xlogin*greeting: Welcome, please log in as 'guest', with no password.
     xlogin*namePrompt: login:\ 
     xlogin*fail: Login incorrect, please use the username 'guest' with no password
     xlogin*allowNullPasswd: true
and removed the following from the translations section to prevent a user
from getting around XDM.
Ctrl<Key>R: abort-display()\n\
XSetup
The XSetup file is run (as root) before the login window is displayed;
programs you want running behind the login prompt can be started from this
file. This is where we would place an xsetroot command or something
similar. The default version of XSetup might start xconsole (a program
that displays the text output of the X server in a small window); we did
not want this information to be visible, so we commented that line out.
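An Xsetup file along those lines might look like this sketch (the xsetroot
colour is only an example):
     #!/bin/sh
     # Hypothetical Xsetup_0: give the login screen a plain root window
     # and leave xconsole disabled so server messages stay hidden.
     /usr/X11R6/bin/xsetroot -solid steelblue
     # /usr/X11R6/bin/xconsole -geometry 480x130-0-0 -notify -verbose &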
FVWM setup.
We chose fvwm as our window manager as a matter of personal preference and
familiarity; most other window managers will require similar changes. All
configuration information for a user's fvwm setup is stored in a file
named .fvwmrc located in their home directory. A system-default version of
the config file is often located in
/usr/X11R6/lib/X11/fvwm/system.fvwmrc. We will use this file as our base,
and outline the important things you will have to check for. Since there
is no "standard" base fvwm configuration, I will only outline the changes
to make, and will assume familiarity with the format of an fvwmrc file.
The Popup Menus.
The config file you use as your base will most likely start off with some
pop-up menus predefined. You will want to remove many of the predefined
menu items. I would recommend only leaving two items, "netscape" and
"exit".
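With the fvwm 1.x-style syntax used by system.fvwmrc, the trimmed-down
popup might look like the following sketch (the menu name is arbitrary;
the Netscape path is the one used later in the GoodStuff section):
     Popup "Utilities"
             Title   "Utilities"
             Exec    "Netscape"  exec /usr/local/netscape/netscape &
             Nop     ""
             Quit    "Exit"
     EndPopup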
Paging.
It is a good idea to disable paging; this will avoid some unnecessary user
confusion. This can be done with a line saying:
PagingDefault 0
It is also a good idea to remove the "Pager" line if one exists.
GoodStuff
The GoodStuff program that comes with Fvwm places a "button bar" at a
predetermined location on the screen. This button bar allows for easy
launching of applications. GoodStuff is a flexible program that can be
tailored to your taste. I chose a button bar consisting of a single row
located at the top-left of the desktop. The following are the relevant
lines:
     *GoodStuffRows 1
     *GoodStuff Netscape netscape.xpm Exec "Netscape" /usr/local/netscape/netscape
     *GoodStuff Logout mini.exit.xpm Quit-Verify
I created a pixmap file named netscape.xpm containing the netscape logo to
be used as my icon. Pixmaps are usually stored in
/usr/X11R6/include/pixmaps.
Startup commands.
Fvwm allows you to execute certain programs upon start-up. Since any guest
users logging onto the machine would be using the Internet, we decided to
ensure that the modem starts to dial as soon as possible. We added an
InitFunction section to the end of the fvwmrc file. If the PPP link
already happens to be up, the ping will be successful; otherwise the
kernel should start the connection process. Replace router.myisp.ca with
the hostname of a machine located at your ISP.
Function "InitFunction" Exec "I" /bin/ping -c 1 router.myisp.ca &
EndFunction
Security Considerations.
BIOS Setup.
In a situation where the console is publicly accessible, the BIOS is your
first line of defense against hostile intent. Most modern BIOSes support
password protection of some sort. It is recommended that a boot-up
password be set. In our setup, we decided that we only wanted to allow
"trusted" people to be able to boot the machine. Otherwise someone could
boot the machine using a floppy disk as the root file system (thus gaining
root privileges), or alternatively boot into DOS and format the hard disk.
In addition to the boot-up password, we also installed a password to
protect the BIOS setup, and disabled booting from the floppy drive.
Inittab
/etc/inittab is the configuration file for the "init" process. Since we
wanted our workstations to work only in X-Windows, we changed the initial
runlevel to 5 with the following line. This means that when the machine
boots, the X server and Xdm are started automatically:
id:5:initdefault:
"Init" is also responsible for handling the "getty"'s or terminal monitors whic
h handle text-based logins from the console or other terminals physically conne
cted to the machine. The
default inittab file should have a section that looks similar to this.
1:12345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
You should remove the "5" from the second section of each line. The result shou
ld look
something like this.
1:1234:respawn:/sbin/mingetty tty1
2:234:respawn:/sbin/mingetty tty2
3:234:respawn:/sbin/mingetty tty3
This means that when the system is in runlevel 5 (the runlevel where
X-windows starts up to handle logins), a user is unable to log in from the
text-based console. Before making this change it is a good idea to ensure
that X-windows and XDM are working properly. Disabling text-based logins
is not essential to security, but we felt that a machine left logged in at
a text console would confuse users who walked up to it. If for some reason
X-windows stops working after you disable text-based logins, you will have
to boot the machine into single-user mode in order to log in. This can be
done by passing an option to the kernel from the lilo command prompt.
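For example, at the boot prompt you would type something like the
following, assuming your kernel image is labeled 'linux' in lilo.conf:
     LILO boot: linux single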
S90Console.
RedHat uses SVR4-style init scripts to manage the boot-up process. The
basic idea is that there is a directory for each runlevel under
/etc/rc.d. When init switches runlevels, it goes into the appropriate
directory and executes each file that starts with an 'S', in ascending
order. For example, on my RedHat system, when it enters runlevel 3
(multi-user), /etc/rc.d/rc3.d/S10network is executed first and
/etc/rc.d/rc3.d/S99local last.
Even though we disabled the gettys for the console, a user could still
press CTRL-ALT-F1 (or another function key) to switch to another virtual
console from X-windows. I am unaware of a way of preventing this (short of
kernel modifications). So, in the event that a user accidentally ended up
switching virtual consoles, we decided to leave the user instructions on
how to get back into X-windows. We created the file S90Console, placed it
in /etc/rc.d/rc5.d and gave root execute permissions to it. The file looks
as follows:
     #!/bin/sh
     D="Press Ctrl+Alt+F2 to use this computer"
echo $D>/dev/tty1
echo $D>/dev/tty3
echo $D>/dev/tty4
echo $D>/dev/tty5
echo $D>/dev/tty6
echo $D>/dev/tty7
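To give the file root-only execute permission, a command along these lines
will do (mode 700 -- read, write and execute for root only -- is one
reasonable choice):
     chmod 700 /etc/rc.d/rc5.d/S90Console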
Since getty does not run on any virtual console, the X server uses the
second virtual console by default.
inetd.conf
The file /etc/inetd.conf is the configuration file for the inetd daemon.
This daemon is responsible for starting daemons that provide network
services when needed. Not all daemons are started by inetd; many, such as
sendmail and httpd, can either run in standalone mode or under inetd. If
your machine is only being used as a workstation, and is not providing
network services to anyone, then you should disable all unnecessary
daemons. To disable a daemon that is currently being started by inetd,
just add a '#' sign at the beginning of the relevant line to comment it
out. I would recommend disabling finger, pop, ntalk, talk, and any other
daemons that are not being used. We decided to leave telnet and ftp
enabled to allow for remote administration. However, if you do this,
remember to keep an eye out for security advisories that deal with
problems in these packages (and any other program that is running on your
system). Usually fixing a bug is just a question of upgrading to the
newest version of the program in question.
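As an illustration only, a trimmed-down fragment of /etc/inetd.conf might
look something like this; the daemon paths shown are typical Red Hat
defaults and should be treated as assumptions rather than our exact file:
     # Left enabled for remote administration:
     ftp     stream  tcp     nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
     telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd
     # Disabled (commented out) on a public workstation:
     #finger stream  tcp     nowait  root    /usr/sbin/tcpd  in.fingerd
     #pop-3  stream  tcp     nowait  root    /usr/sbin/tcpd  ipop3d
     #talk   dgram   udp     wait    root    /usr/sbin/tcpd  in.talkd
     #ntalk  dgram   udp     wait    root    /usr/sbin/tcpd  in.ntalkd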
Firewalling Issues.
The Linux kernel can be configured to support IP firewalling. This allows
you to specify which packets the kernel should ignore; for example, you
can instruct the kernel to refuse to route any packets from the local
machine destined for TCP port 25 (of any machine). You must enable IP
firewalling when compiling your kernel if you want to use this feature.
You control the firewall parameters with the "ipfwadm" command, usually
located in /sbin. We added the following lines to
/etc/rc.d/rc5.d/S99local:
/sbin/ipfwadm -I -f
/sbin/ipfwadm -O -f
/sbin/ipfwadm -O -a deny -P tcp -D 0.0.0.0/0.0.0.0 25
/sbin/ipfwadm -O -a deny -P tcp -D 0.0.0.0/0.0.0.0 119
This restricts all outgoing traffic to port 25 (the mail port) so users
cannot send mail. Since anyone could walk up and use our workstations, we
felt that it would be a bad idea to allow them to send mail. Likewise, we
restricted port 119 (the news port) so Usenet access is not allowed.
Ideally we would have liked to allow read-only Usenet access from
Netscape; however, I could not figure out how to do this, so I decided to
be safe and restrict all Usenet access.
Permissions.
In order to ensure that your setup stays put, you will want to change the
permissions on various files located inside the guest user's home
directory. By this point you should have already created a guest user. You
should also run netscape for the first time as the guest user before
making these changes.
     chown root /home/guest
     chmod 555 /home/guest
     chown root /home/guest/.fvwmrc /home/guest/.bash_profile
     chown root /home/guest/.Xdefaults /home/guest/.bashrc /home/guest/.bash_logout
     chmod 555 /home/guest/.fvwmrc /home/guest/.bash_profile /home/guest/.Xdefaults
     chmod 555 /home/guest/.bashrc /home/guest/.bash_logout
     chmod 444 /home/guest/.netscape/preferences /home/guest/.netscape/bookmarks.html
     chown root /home/guest/.netscape/preferences /home/guest/.netscape/bookmarks.html
In the commands above, we first gave root ownership of the user's home
directory. Changing ownership prevents the user from changing the
permissions back. Then we removed write access to the home directory. Next
we changed ownership of, and removed write access to, the .fvwmrc file,
.bash_profile, .Xdefaults, .bash_logout, and .bashrc. This prevents a user
from changing aspects of his environment. Finally, we secured the Netscape
preferences file and the bookmarks file. A user can still change the
settings in Netscape; however, they will not be saved, so the next person
to log in will be presented with the default settings.
__________________________________________________________________________
Copyright © 1998, Steven Singer
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
[ TABLE OF CONTENTS ]
[ FRONT PAGE ]
Back
Next
__________________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
__________________________________________________________________________
The Software World--It's a Changin'
By Phil Hughes
__________________________________________________________________________
First, let me set the scene: today is January 22.
The important events of this day are:
* Bill Clinton is once again accused of a sexual impropriety.
* The Pope is in Cuba.
* Netscape has announced that their browser is now free and that
they will freely distribute the source code for it.
* Microsoft has somewhat folded in its browser battle with the U.S.
Justice Department.
The first two items are just for context--it has been an exciting day.
I am really here to discuss the last two items.
Let's get the Microsoft information out of the way first.
My understanding is that a compromise has been reached between Microsoft
and the Justice Department--the Internet Explorer icon
will not appear on the desktop, but the browser itself will still be
included.
As the easiest way to get a new browser is to download it off the Internet
and 90% of all personal computers today come with Microsoft Windows, it
seems that all we have done is make it a little harder for Internet Explorer to
be on 90% of the desktops.
Hopefully, there will be further developments in the Microsoft vs. the
U.S. Justice Department game.
The Netscape item has two parts.
The first, making the browser available for free, really is a
necessity;
90% of new personal computers come with Windows and, thus, Internet
Explorer.
Whether IE is better than anything Netscape offers or not isn't the
issue if one comes with your computer and you have to go buy and
install the other one.
Numbers back up this statement:
Netscape used to account for about 90% of the browser market, while
today the figure is probably closer to 60%.
The good news for Netscape is that they have managed to shift their
revenue stream away from stand-alone client software.
Their own numbers show that in the fourth quarter of 1997 these
revenues were only 13% of total, down from 45% a year earlier.
By far the most interesting part of Netscape's announcement for the
Linux community is the fact they will
release the source code for Communicator starting with 5.0.
Sure, this will also make a change for them in the Windows arena and
may force Microsoft to make some brave decision as well, but let's look
at what this does for the Linux community.
The first thing I see is talk on the Gnome mailing list about a version of
Navigator using Gnome.
Call it Gnomescape, it is potentially a full-featured browser with a
look and feel that is likely to become the Linux standard. [For more on
Gnome see the "KDE and Gnome" article by Larry Ayers in issue
24 of Linux Gazette January 1998.]
Netscape claims they are releasing the code to allow the Internet
community to contribute to the development.
(I expect Linux helped them realize that is possible.)
For us, this can mean that instead of complaining about Netscape bugs, we
can fix them.
I expect, based on Linux history, the best, most bug-free version of
Netscape will appear on Linux systems first.
Free Communicator and free source code means that Linux systems become
a much cheaper choice for "Web Appliances".
It also means inexpensive kiosks at shopping malls, car dealers, etc.
While I am sure Netscape made this decision to help their competitive
position with Microsoft, I think we will see a huge impact on the
Linux scene.
Of course, if Linux replaces Windows as the operating system installed
on 90% of the PCs sold today, Netscape will be as happy as the Linux
community.
What's still up in the air is what sort of license the source code
will fall under.
GPL is one choice; a license more like that of BSD is another.
Check out "Linux News" and the discussion groups on our
web site to get up to the
minute information on what is happening.
__________________________________________________________________________
Copyright © 1998, Phil Hughes
Published in Issue 25 of Linux Gazette, February 1998
__________________________________________________________________________
[ TABLE OF CONTENTS ]
[ FRONT PAGE ]
Back
Next
__________________________________________________________________________
Linux Gazette Back Page
Copyright © 1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
Copying License.
__________________________________________________________________________
Contents:
* About This Month's Authors
* Not Linux
__________________________________________________________________________
About This Month's Authors
__________________________________________________________________________
Randy Appleton
Randy Appleton is a professor of Computer Science at Northern Michigan
University. Randy got his Ph.D. at the University of Kentucky. He has
been involved with Linux since before version 0.9. Current research
includes high performance pre-fetching file systems, with a coming port to
the 2.X version of Linux. Other interests include airplanes, especially
home-built ones.
Larry Ayers
Larry Ayers lives on a small farm
in northern Missouri, where he is currently engaged in building a
timber-frame house for his family. He operates a portable band-saw mill,
does general woodworking, plays the fiddle and searches for rare
prairie plants, as well as growing shiitake mushrooms. He is also
struggling with configuring a Usenet news server for his local ISP.
Jim Dennis
Jim Dennis
is the proprietor of
Starshine Technical Services.
His professional experience includes work in the technical
support, quality assurance, and information services (MIS)
departments of software companies like
Quarterdeck,
Symantec/
Peter Norton Group, and
McAfee Associates -- as well as
positions (field service rep) with smaller VAR's.
He's been using Linux since version 0.99p10 and is an active
participant on an ever-changing list of mailing lists and
newsgroups. He's just started collaborating on the 2nd Edition
for a book on Unix systems administration.
Jim is an avid science fiction fan -- and was
married at the World Science Fiction Convention in Anaheim.
Rick Dearman
Rick is an American living and working in the United Kingdom as
a computer programming consultant. He is currently attempting to wean
himself off late nights, coffee, and computers, on to early morning jogs
and fresh orange juice. Unfortunately it isn't working that well.
Bernard Doyle
Bernard is a self-employed programmer/analyst in Sydney, Australia. He
mainly works on developing software for handheld pen PCs. His web page is at
http://www.moreinfo.com.au/bjd/. He hopes to set up a Web Server running
Linux at some time in the future. Comments, etc. can be sent to
bernardd@wr.com.au
Michael J. Hammel
Michael J. Hammel
is a transient software engineer with a background in
everything from data communications to GUI development to Interactive Cable
systems--all based in Unix. His interests outside of computers
include 5K/10K races, skiing, Thai food and gardening. He suggests if you
have any serious interest in finding out more about him, you visit his home
pages at http://www.csn.net/~mjhammel. You'll find out more
there than you really wanted to know.
Phil Hughes
Phil Hughes is the publisher of Linux Journal, and thereby Linux
Gazette. He dreams of permanently tele-commuting from his home on the
Pacific coast of the Olympic Peninsula.
As an employer, he is "Vicious, Evil,
Mean, & Nasty, but kind of mellow" as a boss should be.
Mike List
Mike List is a father of four teenagers, musician,
and recently reformed technophobe, who has been into computers
since April 1996, and Linux since July.
Leonardo Lopes
Leonardo is originally from Brazil. He has a degree in CS and is currently a
Ph.D. candidate in Industrial Engineering at Northwestern University. He
also enjoys computers, playing soccer and guitar, and fast cars.
Eric Marsden
Eric is studying computer
science in Toulouse, France, and is a member of the local Linux Users
Group. He enjoys programming, cycling and Led Zeppelin. He admits to
once having owned a Macintosh, but denies any connection with the
Eric Conspiracy Secret
Labs.
Russell C. Pavlicek
Russell is employed by Digital Equipment Corporation as a software
consultant serving US Federal Government customers in the Washington D.C.
area. He is constantly looking for opportunities to employ Linux on the job.
He lives with his lovely wife and wonderful children in rural Maryland
where they serve Yeshua and surround themselves with a variety of furry
creatures. His opinions are entirely his own (but he will allow you to adopt
one or two if you ask nicely).
Kristian Elof Sørensen
Kristian
lives in Copenhagen, Denmark where he makes database enabled web-sites,
builds intranets, programs, trains users and does other forms of
Inter/intra-net contracting work.
He has made some of the information on
The Linux Resource Exchange,
but apart from that hasn't contributed to Linux.
When not working, he likes to study Nordic and British 19th century
philosophy and literature.
Martin Vermeer
Martin is a European citizen born in The Netherlands in 1953
and living with his wife in Helsinki, Finland, since 1981, where he is
employed as a research professor at the Finnish Geodetic Institute.
His first UNIX experience was in 1984 with OS-9, running on a Dragon
MC6809E home computer (64k memory, 720k disk!). He is a relative newcomer
to Linux, installing RH4.0 in February 1997 on his home PC and, encouraged,
only a week later on his job PC. Now he runs 5.0 at home, job soon to
follow.
Special Linux interests: LyX, Pascal (p2c), tcl/tk.
__________________________________________________________________________
Not Linux
__________________________________________________________________________
Thanks to all our authors, not just the ones above, but also those who wrote
giving us their tips and tricks and making suggestions. Thanks also to our
new mirror sites.
Thanks to my wonderful husband, Riley, for the hard work he has done
the last three months to help get LG out. He has, however, decided that
he no longer wishes to take
full responsibility for Linux Gazette so I am once more in the
driver's seat. Whether or not I decide to outsource it again remains to be
decided.
In the meantime, I'm having fun and enjoying all the mail and good articles
that you guys have been sending in. Lots of you have subscribed to our
announcement service and it seems to be working well -- no complaints!
Linux Journal has redesigned its Linux Resources Page.
Check it out and give us your comments and suggestions. We'd like these
pages to be a community effort and will provide space for discussion groups
and Linux projects. Just get in touch with our webmaster.
Have fun!
Marjorie L. Richardson
Editor, Linux Gazette, gazette@ssc.com
__________________________________________________________________________
[ TABLE OF CONTENTS ]
[ FRONT PAGE ]
Back
__________________________________________________________________________
Linux Gazette Issue 25, February 1998,
http://www.linuxgazette.com/
This page written and maintained by the Editor of Linux Gazette,
gazette@ssc.com