NFS-Root mini-HOWTO
not maintained
V9, 20 September 2002
This mini-HOWTO tries to explain how to set up a ``diskless'' Linux
workstation which mounts its root filesystem via NFS. The newest
version of this mini-HOWTO can always be found at
http://www.tldp.org/HOWTO/mini/NFS-Root.html or a Linux Documentation
Project mirror NEAR YOU.
______________________________________________________________________
Table of Contents
1. Copyright
1.1 Contributors
2. General Overview
3. Setup on the server
3.1 Compiling the kernels
3.2 Creation of the root filesystem
3.2.1 Copying the filesystem
3.2.2 Changes to the root filesystem
3.2.3 Exporting the filesystem
3.2.4 RARP setup
3.2.5 BOOTP setup
3.2.6 DHCP setup
3.2.7 Finding out hardware addresses
4. Booting the workstation
4.1 Using a boot ROM
4.2 Using a raw kernel disk
4.3 Using a bootloader & RARP
4.4 Using a bootloader without RARP
4.5 Sample kernel command lines
5. Known problems
5.1 /sbin/init doesn't start.
5.2 /dev troubles.
6. Other resources
______________________________________________________________________
1. Copyright
(c) 1996 Andreas Kostyrka (e9207884@student.tuwien.ac.at or
andreas@ag.or.at)
Unless otherwise stated, Linux HOWTO documents are copyrighted by
their respective authors. Linux HOWTO documents may be reproduced and
distributed in whole or in part, in any medium physical or electronic,
as long as this copyright notice is retained on all copies. Commercial
redistribution is allowed and encouraged; however, the author would
like to be notified of any such distributions.
All translations, derivative works, or aggregate works incorporating
any Linux HOWTO documents must be covered under this copyright notice.
That is, you may not produce a derivative work from a HOWTO and impose
additional restrictions on its distribution. Exceptions to these rules
may be granted under certain conditions; please contact the Linux
HOWTO coordinator at the address given below.
In short, we wish to promote dissemination of this information through
as many channels as possible. However, we do wish to retain copyright
on the HOWTO documents, and would like to be notified of any plans to
redistribute the HOWTOs.
If you have questions, please contact Andreas Kostyrka
<mailto:andreas@ag.or.at>, the author of this mini-HOWTO, or the Linux
HOWTO coordinator, at <mailto:linux-howto@sunsite.unc.edu> via email.
1.1. Contributors
· Avery Pennarun <apenwarr @ foxnet.net> (how to boot without LILO)
· Ofer Maor <ofer @ hadar.co.il> (a better mini-HOWTO about setting
up diskless workstations)
· Christian Leutloff <leutloff @ sundancer.tng.oche.de> (info about
netboot)
· Greg Roelofs <newt @ pobox.com> (2.2/2.4 updates, DHCP info, NFS-
export info)
2. General Overview
An NFS-mounted root filesystem is typically most useful in two
situations:
· A system administrator may wish to aggregate storage for multiple
workstations in order to simplify maintenance, improve security and
reliability, and/or make more economical use of limited storage
capacity. In this scenario, a single, large server may host a
dozen or more workstations; all of the systems can be regularly
backed up from a central location, and individual clients are less
prone to damage by unsophisticated users or attack by malicious
parties with physical access. (Of course, if the server itself is
compromised, then so are all of the clients.)
· An embedded system may not have a disk, an IDE interface, or even a
PCI bus. Even if it does, during development it may be too
unstable to use the disk, and a ramdisk may be too small to include
all of the necessary utilities or too large (as a part of the
kernel image) to allow for rapid turnaround during testing and
development. An NFS root allows quick kernel downloads, helps
ensure filesystem integrity (since the server is basically
impervious to crashes by the client), and provides virtually
infinite storage.
(In this document we'll use the terms client and workstation
interchangeably.)
However, there are two small problems from the client's perspective:
· It must find out its own IP address and possibly also the rest of
the ethernet configuration (gateway, netmask, name servers, etc.).
· It must know or discover both the IP address of the NFS server and
the mount path (on the server) to the exported root filesystem.
The current implementation of NFSROOT in the Linux kernel (as of
2.4.x) allows for several approaches, including:
· The complete ethernet configuration, including the NFS-path to be
mounted, may be passed as parameters to the kernel via LILO,
LOADLIN, or a hard-coded string within
linux/arch/i386/kernel/setup.c (or its equivalent for other
architectures).
· The IP address may be discovered by RARP and the NFS-path passed
via kernel parameters.
· The IP address may be discovered by RARP, with the NFS-path derived
from the RARP server and the just-granted IP address (loosely
speaking, ``mount -t nfs <RARP-server>:/tftpboot/<IP-address-of-
client> /dev/nfs'').
· The client configuration may be discovered by BOOTP.
· The client configuration may be discovered by DHCP.
Since the most common dynamic-address protocol these days is DHCP, its
addition as an option in kernels 2.2.19 and 2.4.x (3 < x <= 14) is
particularly welcome.
Before starting to set up a diskless environment, you should decide if
you will be booting via LILO, LOADLIN, or a custom, embedded
bootloader. The advantage of using something like LILO is flexibility;
the disadvantage is speed--booting a Linux kernel without LILO is
faster. This may or may not be a consideration.
3. Setup on the server
3.1. Compiling the kernels
On the server side, if you don't plan to use the old, user-mode NFS
daemon, you'll need to compile NFS server support into the kernel
(``NFS server support,'' a.k.a. knfsd or CONFIG_NFSD). If you plan to
use the older RARP protocol to assign the client an IP address, RARP
support in the kernel of the server is probably a good idea. (You must
have it if you will boot via RARP without kernel parameters.) On the
other hand, it doesn't help you if the client isn't on the same subnet
as the server.
The kernel for the workstation needs the following settings, as a
minimum:
· NFS filesystem support (CONFIG_NFS_FS). Note that there is no need
for ext2 support.
· Root file system on NFS (CONFIG_ROOT_NFS).
· Ethernet (10 or 100Mbit) (CONFIG_NET_ETHERNET).
· The ethernet driver for the workstation's network card (or onboard
ethernet chip, if it's built into the motherboard or chipset).
Where there is an option to compile something in as a module, do
not do so; modules only work after the kernel is booted, and these
things are needed during boot.
For dynamically assigned IP numbers, you'll also need to select one or
more of these kernel options:
· IP: kernel level autoconfiguration (CONFIG_IP_PNP)
· RARP support (CONFIG_IP_PNP_RARP)
· BOOTP support (CONFIG_IP_PNP_BOOTP)
· DHCP support (CONFIG_IP_PNP_DHCP)
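For a client that gets its address via DHCP, the relevant part of the
client kernel configuration might end up looking roughly like this (a
sketch only, for a 2.2/2.4 kernel; your network card driver must of
course be enabled as well):
CONFIG_NET_ETHERNET=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y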
If the workstation will be booted without kernel parameters, you need
also to set the root device to 0:255. Do this by creating a dummy
device file with mknod /dev/nfsroot b 0 255. After having created such
a device file, you can set root device of the kernel image with rdev
<kernel-image> /dev/nfsroot. [NOTE: Modern kernels recognize
root=/dev/nfs as a command-line argument; for consistency and/or
compatibility, it may be better to use /dev/nfs as the device name
instead of /dev/nfsroot.]
3.2. Creation of the root filesystem
3.2.1. Copying the filesystem
Warning: while these instructions might work for you, they are by no
means sensible in a production environment. For a better way to set
up a root filesystem for the clients, see the NFS-Root-Client mini-
HOWTO by Ofer Maor <ofer@hadar.co.il>.
After having decided where to place the root tree, create it with
(e.g.) mkdir -p <directory> and tar cClf / - | tar xpCf <directory> -.
If you boot your kernel without LILO, then the root directory has to
be /tftpboot/<IP-address>. If you don't like this, you can change it
in the top-level Makefile of the kernel sources; look for a line like:
NFS_ROOT = -DNFS_ROOT="\"/tftpboot/%s\""
If you change this, you have to recompile the kernel.
3.2.2. Changes to the root filesystem
Now trim the unneeded files, and check the /etc/rc.d scripts. Some
important points:
· One important thing is eth0 setup. The workstation comes up with
eth0 set up, at least partially. Setting the IP address of the
workstation to the IP address of the server is not considered a
clever thing to do. (This happened to the original author on one
of his early attempts.)
· Another point is the /etc/fstab of the workstation. It should be
set up for NFS filesystems. <NOTE: this is not true in 2.4
kernels; the NFS mount is implicit and may actually cause mount(1)
error messages if it's explicitly listed in /etc/fstab. It is not
clear when this changed.>
· WARNING: Don't confuse the server root filesystem and the
workstation root filesystem. (I've already patched up a rc.inet1 on
the server, and wondered why the workstation still didn't work.)
3.2.3. Exporting the filesystem
Export the root dir to the workstation. The basic idea is to edit
/etc/exports to include a line similar to one of the following:
· /path/on/server/to/nfs_root <client-IP-
number>(rw,no_root_squash,no_all_squash) <2nd-client-IP-
number>(rw,no_root_squash,no_all_squash)
· /path/on/server/to/nfs_root <client-IP-network>/<client-IP-
netmask>(rw,no_root_squash,no_all_squash)
For example, a DHCP client receiving an IP address on a class C subnet
would need an exports entry similar to this:
· /path/on/server/to/nfs_root
192.168.1.0/255.255.255.0(rw,no_root_squash,no_all_squash)
The no_root_squash parameter allows the superuser (root) to be treated
as such by the NFS server; otherwise root will be remapped to nobody
and will generally be unable to do anything useful with the
filesystem. The no_all_squash parameter is similar but applies to
non-root users. See the exports(5) man page for details.
You will have to notify the NFS server after making any changes to the
exports file. Under Red Hat this can easily be done by typing
/etc/rc.d/init.d/nfs stop; /etc/rc.d/init.d/nfs start. On other
systems, a simple /etc/rc.d/init.d/nfs restart or even exportfs -a may
suffice, while on older machines running the user-mode NFS daemon you
may actually need to killall -HUP rpc.mountd; killall -HUP rpc.nfsd.
(Do not killall -HUP rpc.portmap, however!)
You may also need to edit /etc/hosts.allow and/or /etc/hosts.deny if
tcp_wrappers are installed. In particular, if the remote system
(client) gets RPC: connection refused errors, /etc/hosts.deny probably
contains portmap: ALL or ALL: ALL. To enable the client to use the
server's portmapper, add a corresponding line to /etc/hosts.allow:
portmap: <client-IP-number>
portmap: <2nd-client-IP-number>
portmap: <client-IP-network>/<client-IP-netmask>
There is no need to restart anything in this case. You can check by
running rpcinfo -p on the NFS server and rpcinfo -p NFS-server on a
Linux client within the allowed range; the RPC services listed by both
should match.
In case of problems, check /var/log/messages and /var/log/syslog for
errors (for example, run tail -f /var/log/messages /var/log/syslog and
then try booting the client), and check your man pages (exports,
exportfs, portmap, etc.). As a last resort, a reboot of the NFS
server may help, but that's a borderline Microsoftism...
3.2.4. RARP setup
Set up a RARP server somewhere on the net. If you boot without an
nfsroot parameter, the RARP server has to be the NFS server; usually
it will be the NFS server anyway. The machine will need to run a
kernel with RARP support.
To do this, execute (and install it somewhere in /etc/rc.d of the
server!):
/sbin/rarp -s <ip-addr> <hardware-addr>
where
ip-addr
is the IP address of the workstation, and
hardware-addr
is the ethernet address of the network card of the workstation.
example: /sbin/rarp -s 131.131.90.200 00:00:c0:47:10:12
You can also use a symbolic name instead of the IP address, as long as
the server is able to find out the IP address (via /etc/hosts or DNS
lookups).
3.2.5. BOOTP setup
For BOOTP setup you need to edit /etc/bootptab. Please consult the
bootpd(8) and bootptab(5) man pages.
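As a rough illustration (a sketch only; the addresses, path and
hardware address are placeholders, and the tag names are those of
bootptab(5)), an entry for a diskless client might look something
like:
ws1:ht=ethernet:ha=0000c0471012:\
        :ip=192.168.1.50:sm=255.255.255.0:gw=192.168.1.1:\
        :rp=/tftpboot/192.168.1.50: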
3.2.6. DHCP setup
There is no need for the DHCP server to be the same as the NFS server,
and in most cases, a DHCP server will already be set up. If one is
not, however, consult the DHCP mini-HOWTO for further help.
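If you do have to set one up, a host entry for a diskless client in
the ISC dhcpd configuration might look roughly like this (a sketch
only; the subnet, addresses and root path are placeholders):
subnet 192.168.1.0 netmask 255.255.255.0 {
    host ws1 {
        hardware ethernet 00:00:c0:47:10:12;
        fixed-address 192.168.1.50;
        option root-path "/path/on/server/to/nfs_root";
    }
}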
3.2.7. Finding out hardware addresses
I don't know the hardware address! How can I find it out?
· Boot the kernel disk you made, and watch for the line where the
network card is recognized. It usually contains 6 hex bytes, that
should be the address of the card.
· Boot the workstation with some OS with TCP/IP networking enabled.
Then ping the workstation from the server. Look in the ARP-cache by
executing: /sbin/arp -a
4. Booting the workstation
4.1. Using a boot ROM
As I have not used such a beast myself yet, I can give you only the
following tips (courtesy of Christian Leutloff
<leutloff@sundancer.tng.oche.de>):
· You can't use ``normal'' boot ROMs.
· There is a netboot package by Gero Kuhlmann that provides boot
ROM images for Linux, plus further information. netboot is available
from the local Linux mirror, or as a Debian package (netboot-0.4).
· Read the documentation coming with your boot ROM carefully.
· You probably will have to enable the tftpd on the server, but this
depends upon your boot ROM's way of loading the kernel.
· Any information on vendors of boot ROMs of the Linux variety
mentioned above is welcome, as not everybody has access to a PROM
burner :( (especially in Europe, where I'm located); I'll include
it here.
4.2. Using a raw kernel disk
If you have exported the root filesystem with the correct name for
the default naming, and your NFS server is also the RARP server (which
implies that the boxes are on the same subnet), then you can just boot
the kernel by cat'ing it to a disk. (You have to set the root device
in the kernel to 0:255.) This assumes that the root directory on the
server is /tftpboot/<IP-address> (this value can be changed when
compiling the kernel.)
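Concretely, with a floppy in the first drive, this might be (assuming
the kernel image is called zImage):
cat zImage > /dev/fd0
or, equivalently:
dd if=zImage of=/dev/fd0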
4.3. Using a bootloader & RARP
Give the kernel all needed parameters when booting, and add
nfsroot=<server-ip-addr>:</path/to/mount> where server-ip-addr is the
IP address of your NFS-server, and /path/to/mount is the path to the
root directory.
Tips:
· When using LILO, consider using the ``lock'' feature: type in all
the correct parameters once and add ``lock''; on subsequent boots,
simply let LILO time out.
· When generating a workstation specific boot disk, you can also use
the append= feature in lilo.conf.
4.4. Using a bootloader without RARP
The ip and nfsroot kernel parameters (which can be hardcoded into the
kernel, interactively entered at some bootloader prompts, or included
in lilo.conf via the append= parameter; see the next subsection)
provide all of the information necessary for the client to set up its
ethernet interface and to contact the NFS server, respectively. The
parameters are fully documented in Documentation/nfsroot.txt, which is
included in the kernel sources (usually found under /usr/src/linux).
Here's the format for a machine with a static (pre-assigned) IP
address:
· nfsroot=<NFS-server-IP-number>:/path/on/server/to/nfs_root
ip=<client-IP-number>::<gateway-IP-number>:<netmask>:<client-
hostname>:eth0:off
DHCP is much simpler:
· nfsroot=<NFS-server-IP-number>:/path/on/server/to/nfs_root ip=dhcp
4.5. Sample kernel command lines
Here's an example of a complete kernel command line such as you might
include in lilo.conf or equivalent; only the IP numbers and NFS path
are bogus:
· root=/dev/nfs rw nfsroot=12.345.67.89:/path/on/server/to/nfs_root
ip=dhcp console=ttyS1
That uses DHCP to assign an IP address to the machine and puts its
boot messages (console) on the second serial port. The following is
the corresponding example using a static IP address; it also
explicitly specifies Busybox's (non-standard) location for init:
· root=/dev/nfs rw nfsroot=12.345.67.89:/path/on/server/to/nfs_root
ip=12.345.67.88::12.345.67.1:255.255.255.0:embed-o-matic:eth0:off
console=ttyS1 init=/bin/init
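As a concrete illustration, the DHCP example above might appear in
lilo.conf roughly as follows (a sketch only; the image path and label
are placeholders):
image=/boot/zImage-nfs
    label=nfsroot
    append="root=/dev/nfs rw nfsroot=12.345.67.89:/path/on/server/to/nfs_root ip=dhcp console=ttyS1"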
5. Known problems
5.1. /sbin/init doesn't start.
A common problem with /sbin/init is that some distributions (e.g., Red
Hat Linux) come with /sbin/init dynamically linked. So you have to
provide a correct /lib setup to the client. An easy thing one could
try is replacing /sbin/init (for the client) with a statically linked
``Hello World'' program. This way you know if it is something more
basic, or ``just'' a problem with dynamic linking.
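A minimal stand-in for init might be built like this (a sketch only;
the client root /clients/hostname is an assumption borrowed from the
NFS-Root-Client mini-HOWTO, and you should back up the real init
first):
cat > fake-init.c << 'EOF'
/* loops forever so the kernel does not panic with "attempted to kill init" */
#include <unistd.h>
int main(void)
{
    for (;;) {
        write(1, "hello from fake init\n", 21);
        sleep(10);
    }
    return 0;
}
EOF
gcc -static -o fake-init fake-init.c
cp fake-init /clients/hostname/sbin/init
If the message shows up on the client's console, the NFS root and the
kernel side are fine, and the real problem is the dynamic linking.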
Also note that Busybox by default installs its init symlink in /bin
rather than /sbin. You may need to move it or pass an explicit init=
parameter on the kernel command line, as shown in the final example of
the previous section.
5.2. /dev troubles.
If you get some garbled messages about ttys when booting, then you
should run a MAKEDEV from the client in the /dev directory. There are
rumors that this doesn't work with certain server OSes that use 64-bit
device numbers; should you run into this, please consider updating
this section! A potential solution would be to create a small /dev
ram disk early in the boot process and reinstall the device nodes each
time, or simply embed directly into the kernel a suitably initialized
ramdisk.
6. Other resources
· In the Documentation directory of kernel source there is a file
documenting NFS-Root systems (Documentation/nfsroot.txt).
· There are quite a few related HOWTOs:
· Diskless-HOWTO (specifically, the Network Booting section)
· Diskless-root-NFS-HOWTO
· Diskless-root-NFS-other-HOWTO
· Network-boot-HOWTO
· PXE-Server-HOWTO ("Pre-boot eXecution Environment") < coming >
· There is a BOOTP client:
http://ibiblio.org/pub/Linux/system/network/admin/bootpc-0.64.tar.gz
<http://ibiblio.org/pub/Linux/system/network/admin/bootpc-0.64.tar.gz>
With initrd (which is included in Linux 2.0), it could be made to
work for diskless stations quite nicely. initrd remains an advanced
option for more customized setups.
· For plain bootpd-based boots this is actually probably not needed
as Linux 2.0 contains also the option to use BOOTP instead of RARP.
(To be more precise, you can compile both in the kernel, and the
faster response wins.)
· There is a patch floating around that allows for swapping over NFS.
It was sent to me (during a private high workload phase), but I
somehow managed to lose the mail.
You can probably get it from http://www.linuxhq.com/
<http://www.linuxhq.com/> in the unofficial-patches section.
NFS-Root-Client Mini-HOWTO
Ofer Maor
v4.1, 02 Feb, 1999
Revision History
Revision 4.1 Feb 02, 1999 Revised by: mo
The purpose of this Mini-Howto is to explain how to create client root
directories on a server that is using NFS Root mounted clients.
_________________________________________________________________
Table of Contents
1. [1]Copyright
1.1. [2]Thanks
2. [3]Preface
2.1. [4]General Overview
3. [5]Creating the client's root directory
3.1. [6]Creating the directory tree
3.2. [7]Creating the minimal file system needed for boot
3.3. [8]Building the etc directory and configuring the clients
3.4. [9]Booting Up
4. [10]Creating more clients
1. Copyright
(c) 1996 Ofer Maor (<[11]oferm@hcs.co.il>)
Unless otherwise stated, Linux HOWTO documents are copyrighted by
their respective authors. Linux HOWTO documents may be reproduced and
distributed in whole or in part, in any medium physical or electronic,
as long as this copyright notice is retained on all copies. Commercial
redistribution is allowed and encouraged; however, the author would
like to be notified of any such distributions.
All translations, derivative works, or aggregate works incorporating
any Linux HOWTO documents must be covered under this copyright notice.
That is, you may not produce a derivative work from a HOWTO and impose
additional restrictions on its distribution. Exceptions to these rules
may be granted under certain conditions; please contact the Linux
HOWTO coordinator at the address given below.
In short, we wish to promote dissemination of this information through
as many channels as possible. However, we do wish to retain copyright
on the HOWTO documents, and would like to be notified of any plans to
redistribute the HOWTOs.
If you have questions, please contact Ofer Maor
(<[12]oferm@hcs.co.il>), the author of this mini-HOWTO, or Greg
Hankins, the Linux HOWTO coordinator, at <[13]gregh@sunsite.unc.edu>
via email, or at +1 404 853 9989.
If you have anything to add to this Mini-Howto, please mail the author
(Ofer Maor, <[14]oferm@hcs.co.il>), with the information. Any new
relevant information would be appreciated.
_________________________________________________________________
1.1. Thanks
I would like to express my thanks to the author of the NFS-Root Howto,
Andreas Kostyrka (<[15]andreas@medman.ag.or.at>). His Mini-Howto
helped me with the first steps in creating an NFS Root mounted client.
My Mini-Howto does not, in any way, try to replace his work, but to
enhance it using my experiences in this process.
I would also like to thank Mark Kushinsky (<[16]mark026@ibm.net>) for
polishing the English and spelling of this Howto, thus making it much
more readable.
_________________________________________________________________
2. Preface
This Mini-Howto was written in order to help people who want to use
NFS Root mounting to create their client's directories. Please note
that there are many ways to accomplish this, depending on your needs
and intent. If the clients are individual, and each client has its own
users and administrator, it will be necessary to make significant
parts of the client dirs not shared with other clients. On the other
hand, if the clients are intended for multiple users and are all
administered by the same person (for instance, a computer class), make
as many files as possible shareable in order to make administration
more manageable. This Howto will focus on the second case.
_________________________________________________________________
2.1. General Overview
When building a client's root directory, and trying to limit ourselves
to the minimum client size, we mainly focus on which files we can
share, or mount from the server. In this Howto I will recommend the
configuration of a client based on my experience. But before we begin,
please note:
* This Mini-Howto does not explain how to do the actual NFS Root
mounting. Refer to the NFS-Root Mini-Howto if you need more
information about that issue.
* I based most of my client's configuration on mounts and symbolic
links. A lot of those symbolic links can be replaced by hardlinks;
choose according to your personal preference. A hardlink has its
advantages over a mount or a symbolic link, but might cause
confusion: a file will not be erased until all its hardlinks are
removed. Thus, in order to prevent a case in which you upgrade a
certain file and the hardlinks still refer to the older version,
you will have to be very careful and keep track of every link you
put.
* While mounting the information from the server, two concepts can
be used. The first (most common) concept, is to mount the whole
server root directory under a local directory, and then just
change the path or link the relevant directories there. I
personally dislike mounting root partitions of a server on
clients. Thus, this Howto suggests a way to mount the relevant
directories of the server to the relevant places on the system.
* This Howto is based on my experience building client directories
on a Slackware 3.1 based distribution. Things may be different
(especially on the rc.* files), for other users, however the
concepts should still remain the same.
_________________________________________________________________
3. Creating the client's root directory
3.1. Creating the directory tree
First of all, you need to create the directory structure itself. I
created all the clients under /clients/hostname and I will use it for
my examples listed below. This, however, can be changed to anything
else. The first stage, then, is to create the relevant directories in
the root directory. You should create the following directories:
bin , dev , etc , home , lib , mnt , proc , sbin , server , tmp , usr
, var
and any other directories you might want to have on your system.
The local, proc, and dev directories will be used separately on each
machine while the rest of the directories will be either partly or
completely shared with the rest of the clients.
_________________________________________________________________
3.2. Creating the minimal file system needed for boot
3.2.1. Creating the dev dir.
Although the dev dir can be shared, it is better to create a separate
one for each client. You can create your client's dev directory with
the appropriate MAKEDEV scripts, however in most cases it is simpler
just to copy it from the server:
bash# cp -a /dev /clients/hostname
You should keep in mind that /dev/mouse, /dev/cdrom and /dev/modem are
symbolic links to actual devices, and therefore you should make sure
that they are linked correctly to fit the client's hardware.
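For example (a sketch only; the device names are hypothetical and
depend on the client's hardware):
bash# cd /clients/hostname/dev
bash# ln -sf psaux mouse
bash# ln -sf ttyS1 modem
bash# ln -sf hdc cdrom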
_________________________________________________________________
3.2.2. Copying the necessary binaries.
Although we mount everything from the server, there is a minimum that
we need to copy to each client. First of all, we need "init": our
system will not be able to run anything before init'ing (as the author
found out the hard way ;-). So first, you should copy /sbin/init to
your client's sbin dir; then, so that rc.S will run, you should copy
/bin/sh to the client's bin directory. Also, in order to mount
everything, you need to copy /sbin/mount to the client's sbin
directory. This is the minimum, assuming the first line in your rc.S
is mount -av. However, I recommend copying a few more files: update,
ls, rm, cp and umount, so that you will have the basic tools in case
the client has problems mounting. If your swapon line comes before the
mount, you should also copy the swapon binary.
Since most of these binaries are by default dynamically linked, you
will also need to copy a fair part of /lib:
bash# cp -a /lib/ld* /lib/libc.* /lib/libcurses.* /clients/hostname/lib
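If you are not sure which libraries a given binary needs, ldd on the
server will list them, for example:
bash# ldd /sbin/init /bin/sh /sbin/mount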
Hardlinking the binaries themselves, instead of copying them, should
be considered. Please read my comments on this in [17]Section 2.1 of
this Howto.
Please notice, all of the information above assumes that the kernel
has been given the network parameters while booting up. If you plan to
use rarp or bootp, you will probably need the relevant binaries for
these as well.
Generally, you will need the minimum of files that will enable you to
configure the network and run rc.S up to the point where it mounts the
rest of the file system. Look into your /etc/inittab and rc.S files
and make sure there are no "surprises" in either of them that require
other files to be accessed before the first mount takes place. If you
do find such files, you can either copy them as well, or remove the
relevant parts from your inittab and rc.S files.
_________________________________________________________________
3.2.3. The var directory
The var directory, in most cases, should be separate for each client.
However, a lot of the data can be shared. Create under the server
directory a directory called var. We will mount the server's var
directory there. To create the local var directory, simply type:
bash# cp -a /var /clients/hostname/
Now, you have a choice as to what you want to separate, and what you
want to share. Any directory/file that you want to share, simply
remove it from the client's var dir, and symlink it to the
/server/var/ directory. However please note that you should either
symlink it to /server/var or to ../server/var but NOT to
/clients/hostname/server/var as this will not work when the root
changes.
Generally, I would recommend separating /var/run, /var/lock,
/var/spool, and /var/log.
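For instance, to share the man page cache while keeping the logs
local, the commands might look like this (a sketch only; run inside
the client tree, and the shared directory is just an example):
bash# cd /clients/hostname/var
bash# rm -rf catman
bash# ln -s ../server/var/catman catman
Directories such as log, run, lock and spool are simply left in place
as real, per-client directories.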
_________________________________________________________________
3.2.4. The rest of the directories
* etc is explained thoroughly in the next section.
* mnt and proc are for local purposes.
* usr and home are merely mount points.
* tmp is up to you. You can create a different tmp directory for
each client, or create some /clients/tmp directories, and mount it
for each client under /tmp. I would recommend that you provide
each client with a separate tmp directory.
_________________________________________________________________
3.3. Building the etc directory and configuring the clients
Please note: this section refers to building the etc directory which
is mostly shared among the clients. If your diskless clients have
separate system administrators, it's best to set up a separate etc
directory for each client.
_________________________________________________________________
3.3.1. Building a clients-wide etc directory
Although we separate the etc directories of the clients, we still want
to share a large portion of the files there. Generally, I think
sharing the etc files with the server's /etc is a bad idea, and
therefore I recommend creating a /clients/etc directory, which will
hold the information needed for the clients. To start with, simply
copy the contents of the server's etc to the /clients/etc directory.
You should add to this directory all of the non-machine-specific
configuration files (for instance motd, issue, etc.) but not the
client-specific ones (e.g. inittab or fstab).
The most important changes will be in your rc.d directory. First, you
should change rc.inet1 to be suitable for your local setup. I pass all
my network parameters to the kernel through LILO/Loadlin, and
therefore I remove almost everything from the rc.inet1 file. The only
things I leave there are the ifconfig and route for the localhost. If
you use rarp or bootp, you will have to adjust it accordingly.
Secondly, you should edit your rc.S. First, remove all the parts that
are responsible for the fsck check as fsck will occur when the server
boots up. Then, you should find the line that mounts your fstab. This
should look something like:
mount -avt nonfs
The -t nonfs is there because a normal system runs rc.S first and
only later uses rc.inet1 to configure the Ethernet, so at that point
no NFS partitions could be mounted anyway. On a diskless client this
restriction would prevent the root-related mounts, so change the line
to mount -av. If you need to run rarp/bootp to configure your network,
do it in rc.S (or call the appropriate script from rc.S) before the
mount, and make sure your physical bin and sbin directories have the
necessary files available.
After the mount -av is performed, you will have a working file system.
Build a general fstab, so that you can later copy it to each client.
Your fstab should look something like this:
Table 1. fstab
server:/clients/hostname / nfs default 1 1
server:/bin /bin nfs default 1 1
server:/usr /usr nfs default 1 1
server:/sbin /sbin nfs default 1 1
server:/home /home nfs default 1 1
server:/lib /lib nfs default 1 1
server:/clients/etc /server/etc nfs default 1 1
server:/clients/var /server/var nfs default 1 1
none /proc proc default 1 1
Please notice, that the keyword default might not work on all versions
of mount. You might change it to rw or ro or remove all of the default
1 1 part.
Also, make sure your server's /etc/exports looks like this:
Table 2. /etc/exports
/clients/hostname hostname.domainname(rw,no_root_squash)
/clients/etc hostname.domainname(ro,no_root_squash)
/clients/var hostname.domainname(ro,no_root_squash)
/usr hostname.domainname(ro,no_root_squash)
/sbin hostname.domainname(ro,no_root_squash)
/bin hostname.domainname(ro,no_root_squash)
/lib hostname.domainname(ro,no_root_squash)
/home hostname.domainname(rw,no_root_squash)
Other than the first line, which should be separate for each host, the
rest of the lines can be replaced with a hostmask to fit all your
hosts (like pc*.domain - keep in mind, though, that * will substitute
only strings without a dot in them). I suggest that you make most of
the directories read only, but this is up to you. The no_root_squash
will make sure root users on the clients have actual root permissions
on the nfsd as well. Check out man exports(5). If you want users to be
able to run passwd from the clients also, make sure the /etc has rw
and not ro permissions. However, this is not advisable.
Please note another thing concerning the rc.S file. In Slackware, by
default, it creates a new /etc/issue and /etc/motd every time it runs.
This function MUST be disabled if these files are mounted ro from the
server, and I would recommend that they should be disabled in any
case.
Lastly, if you want to have the same userbase on the server as on the
clients, you have two choices: 1) use NIS (Yellow Pages - check the
NIS howto), in which case each client keeps a separate /etc/passwd and
/etc/group and receives the account data from the NIS server; or 2) in
most cases, a simple link will suffice: either hardlink
/clients/etc/passwd to /etc/passwd, or, if you prefer a symlink, link
/etc/passwd to /clients/etc/passwd (and not the other way around,
since the clients do not mount the server's etc directory). Do the
same for /etc/group.
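For the second option, the commands on the server might look like
this (a sketch only; back up /etc/passwd and /etc/group first, and be
careful, as this replaces the server's own copies):
bash# ln -f /clients/etc/passwd /etc/passwd
or, for the symlink variant:
bash# ln -sf /clients/etc/passwd /etc/passwd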
_________________________________________________________________
3.3.2. Creating a client's etc directory
Generally, most of the files in the client's etc should be symlinked
to the /server/etc directory. However, some files are different for
each machine, and some just have to be there when the kernel loads.
The minimum you need from the etc dir is as follows:
resolv.conf
hosts
inittab
rc.d/rc.S
fstab
Since these 5 files can be identical on all clients, you can simply
hardlink them or copy them again. However, with the rc.S and fstab
file it is advised to keep a separate copy for each client. You will
also need a separate etc/HOSTNAME for each client. I personally
recommend having all of the rc.d files separate for each client, as
configuration and hardware might vary from one to another.
For each client, add to the fstab the proper swap line:
Table 3. fstab
/dev/swap_partition swap swap default 1 1
The rest of the /etc files of the client, you can either hardlink to
the /clients/etc/* files, or symlink them to the /server/etc (which is
the mount point of /clients/etc/).
Make sure your machine can resolve hostnames properly, either through
a name server or through etc/hosts. It is not a bad idea to keep the
server's IP in etc/hosts instead of counting on name resolution: if
you rely only on the name server, a problem with it will prevent your
clients from booting up.
_________________________________________________________________
3.4. Booting Up
Now, all you have to do is to boot up your machine, cross your fingers
and hope everything works as it should :-).
_________________________________________________________________
4. Creating more clients
If you have followed my instructions so far this should be simple - cd
to /clients/ and type:
bash# cp -a hostname1 hostname2
and then make sure you check these points:
* the rc.d/* files match the hardware and wanted software
configuration
* etc/HOSTNAME is correct
* fstab's swap line is correct
* the symbolic links of dev/mouse, dev/modem and dev/cdrom are right.
Good Luck....
References
1. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#COPYRIGHT
2. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#AEN28
3. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#PREFACE
4. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#OVERVIEW
5. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#CLIENTROOT
6. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#AEN52
7. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#AEN74
8. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#AEN158
9. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#AEN371
10. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#MORECLIENTS
11. mailto:oferm@hcs.co.il
12. mailto:oferm@hcs.co.il
13. mailto:gregh@sunsite.unc.edu
14. mailto:oferm@hcs.co.il
15. mailto:andreas@medman.ag.or.at
16. mailto:mark026@ibm.net
17. file://localhost/export/sunsite/users/gferg/howto/00_NFS-Root-Client-mini-HOWTO.html#OVERVIEW
Root over nfs clients & server Howto.
Hans de Goede hans@highrise.nl
v1.0 30 March 1999
How to set up a server and configure clients for diskless operation
from a network.
______________________________________________________________________
Table of Contents
1. Introduction
1.1 Copyright
1.2 Changelog
2. Basic principle
2.1 Things can't be that simple
2.1.1 Each ws needs its own writable copy of a number of dirs
2.1.2 Write access to /home might be needed
2.1.3 How does a ws find out its IP so that it can communicate with the server?
2.1.4 What about ws specific configuration
2.1.5 Miscellaneous problems
3. Preparing the server
3.1 Building a kernel
3.2 Creating and populating /tftpboot, making symlinks for /tmp etc.
3.2.1 The automagic part
3.2.2 Manual adjustments to some files
3.3 Exporting the appropriate file systems and setting up bootp
3.3.1 Exporting the appropriate file systems
3.3.2 Setting up bootp
4. Adding workstations
4.1 Creating a boot disk or bootrom
4.1.1 Creating a bootdisk
4.1.2 Creating a bootrom
4.2 Creating a ws dir
4.3 Add entries to /etc/bootptab and /etc/hosts
4.4 Booting the ws for the first time
4.5 Set the ws specific configuration.
5. Added bonus: booting from cdrom
5.1 Basic Principle
5.1.1 Things can't be that simple
5.2 Creating a test setup.
5.3 Creating the cd
5.3.1 Creating a boot image
5.3.2 Creating the iso image
5.3.3 Verifying the iso image
5.3.4 Writing the actual cd
5.4 Boot the cd and test it
6. Thanks
7. Comments
______________________________________________________________________
1. Introduction
This howto is also available at - <http://xmame.retrogames.com/hans>.
This document describes a setup for root over nfs. This document
differs from the other root over nfs howtos in 2 ways:
1. It describes both the server and the client side, offering a
complete solution. It doesn't describe the generic principles of
root over nfs, although they will become clear. Instead it offers a
working setup for root over nfs - one of the many possible setups, I
might add.
2. This solution is unique in that it shares the root of the server
with the workstations (ws), instead of having a mini-root per ws.
This has a number of advantages:
· low diskspace usage
· any changes on the server side are automagically made on the
client side as well; all configuration only has to be done once!
· Very easy adding of new clients
· only one system to maintain
This document is heavily based on a RedHat-5.2 system. Quite a bit of
prior Linux sysadmin experience is assumed in this howto; if you have
that, it shouldn't be a problem to adapt this solution to other
distributions.
1.1. Copyright
Well here's the standard howto legal stuff:
This manual may be reproduced and distributed in whole or in part,
without fee, subject to the following conditions:
· The copyright notice above and this permission notice must be
preserved complete on all complete or partial copies.
· Any translation or derived work must be approved by the author in
writing before distribution.
· If you distribute this work in part, instructions for obtaining the
complete version of this manual must be included, and a means for
obtaining a complete version provided.
· Small portions may be reproduced as illustrations for reviews or
quotes in other works without this permission notice if proper
citation is given.
Exceptions to these rules may be granted for academic purposes: Write
to the author and ask. These restrictions are here to protect us as
authors, not to restrict you as learners and educators.
1.2. Changelog
· v0.1, 20 January 1999: First draft written at the HHS, where the
setup was originally developed.
· v1.0, 30 March 1999: First released version partially written in
time of ISM
2. Basic principle
As already said, with this setup the clients share basically the
entire root fs with the server, but the clients of course only get
read access to it. This is basically how things work.
2.1. Things can't be that simple
Unfortunately things aren't that simple; there are a couple of
problems to overcome with this simple setup.
2.1.1. Each ws needs its own writable copy of a number of dirs
A normal linux setup needs to have write access to the following dirs:
1. /dev
2. /var
3. /tmp
There are 3 solutions for this, of which one will only work for /dev:
1. mount a ramdisk and populate it by untarring a tarball, or by
copying a template dir.
· Advantages:
a. It's cleaned up every reboot, which removes tmp files and logs.
Thus it needs no maintenance, unlike server-side dirs.
b. It doesn't take up any space on the server, and it doesn't
generate any network traffic. A ramdisk takes less server and
network resources, and is faster.
· Disadvantages:
a. It takes memory.
b. The logs aren't kept after a reboot, if you really want logging
of all your clients tell syslog to redirect the logging to your
server.
2. create a dir for each ws on the server and mount it rw over nfs.
· Advantages & disadvantages:
a. The above arguments apply in reverse for server-side dirs.
3. With kernel 2.2, devfs can be used for /dev; this is a virtual
filesystem a la /proc, but for /dev.
· Advantages:
a. Devfs takes very little memory compared to a ramdisk, takes no
diskspace on the server, and is very fast. A normal /dev takes at
least 1.5 MB, since the minimal size for a file (and thus for a
device) is 1 kB and there are somewhere around 1200 devices. You
can of course use a template of a stripped /dev with only the
entries you need to save some space, but 1.5 MB is a lot for a
ramdisk and also isn't nice on a server.
b. Devfs automagically creates entries for newly added & detected
devices, so no maintenance is needed.
· Disadvantages:
a. Any changes to /dev, like creating symlinks for the mouse and
cdrom, are lost. Devfs comes with a script called rc.devfs to
save these changes. The scripts provided in this howto will
automagically restore these symlink settings by calling rc.devfs.
If you make any changes to /dev you need to call rc.devfs
yourself to save them by typing:
/etc/rc.d/rc.devfs save /etc/sysconfig
As you can see, there are a number of ways to solve this problem. For
the rest of this Howto the following choices are assumed:
· For /dev we'll use Devfs
· For /var and /tmp we'll use a shared ramdisk of 1 MB. It's shared to
use the space as efficiently as possible; /tmp is replaced by a
symlink to /var/tmp to make the sharing possible.
· Populating the ramdisk with tarballs or template dirs works
equally well, but with template dirs it's much easier to make
changes, so we'll use template dirs.
2.1.2. Write access to /home might be needed
Not really a problem: in every unix client/server setup /home is
mounted rw from the server, so we'll just do that ;)
2.1.3. How does a ws find out its IP so that it can communicate with
the server?
Luckily for us, this problem has already been solved, and the Linux
kernel supports 2 ways of autoconfiguring the IP address:
1. RARP
2. Bootp
RARP is the easiest to set up; bootp is the most flexible. Since most
bootroms only support bootp, that's what we'll use.
2.1.4. What about ws specific configuration
On RedHat most system dependent config files are already in
/etc/sysconfig. We'll just move those which aren't there and add
symlinks; then we mount a separate /etc/sysconfig on a per-ws basis.
This is really the only distribution-dependent part: on other
distributions you can just create a sysconfig dir, move all config
files which can't be shared there, and create symlinks. Also
/etc/rc.d/rc3.d, or similar on other dists, might need to be different
for the server and the workstations respectively. Assuming that all ws
run the same services in runlevel 3, we'll just create a separate
third runlevel for the workstations and the server:
1. Create both a /etc/rc.d/rc3.ws and a /etc/rc.d/rc3.server
2. make /etc/rc.d/rc3.d a symlink to /etc/sysconfig/rc3.d
3. make /etc/sysconfig/rc3.d a symlink to the appropriate
/etc/rc.d/rc3.xxx
4. replace S99local in rc3.ws by a link to /etc/sysconfig/rc.local so
that each ws can have its own rc.local
2.1.5. Miscellaneous problems
There are a few problems left:
1. /etc/rc.d/rc.sysinit needs /var, so /var needs to be mounted or
created before /etc/rc.d/rc.sysinit is run. It would also be nice
if the ws-specific /etc/sysconfig is mounted before any initscripts
are run.
· We'll just source a bootup script for the ws at the very top of
/etc/rc.d/rc.sysinit. Note that this script will then of course also
be sourced by the server itself on boot, so the script has to detect
this and do nothing on the server.
2. /etc/mtab needs to be writable:
· This is a tricky one: just create a link to /proc/mounts, and create
an empty file mounts in /proc so that fsck and mount don't complain
during the initscripts when /proc isn't mounted yet. One note:
smb(u)mount doesn't respect mtab being a link and overwrites it.
Thus if you want to use smb(u)mount, create wrapper scripts that
restore the symlink (a sketch of such a wrapper follows below).
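Such a wrapper might look like this (a sketch only; the name
smbmount.real for the renamed original binary is an assumption):
______________________________________________________________________
#!/bin/sh
# run the real smbmount, then put the mtab symlink back
/usr/bin/smbmount.real "$@"
STATUS=$?
rm -f /etc/mtab
ln -s /proc/mounts /etc/mtab
exit $STATUS
______________________________________________________________________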
3. Preparing the server
Now it's time to prepare the server to serve diskless clients.
3.1. Building a kernel
The first thing to do is build a kernel with the necessary stuff in it
to support root over nfs. Take the following steps to build your kernel:
1. Since we'll be using redhat-5.2 with kernel-2.2 you should assure
yourself that your redhat-5.2 is kernel-2.2 ready. RedHat has got
an excellent howto on this.
2. I use the same kernel for both server and ws, to avoid module
conflicts since they share the same /lib/modules. If this is not
possible in your situation, fake different kernel versions by
editing the version number in the kernel's top makefile. These
different version numbers will avoid any conflicts.
3. Besides the usual stuff the kernel should have the following:
· ext2 compiled in (if used on server, or for both)
· nfs and root-over-nfs compiled in (if used on client or both); to
get the root over nfs option in 2.2, enable IP autoconfiguration in
the networking options. We'll use bootp as the configuration method.
· ws network card support compiled in (if used on client or both)
· compile devfs in (required for client, also nice for server)
· anything else you normally use, modules for all other devices used
on either the server or all / some ws etc.
4. The kernel-src needs to be edited to make the default root-over-nfs
mount: /tftpboot/<ip>/root instead of just /tftpboot/<ip>. This is
to get a clean tree in /tftpboot with one dir per ws containing
both the root for it (a link to the actual server root) and any ws
specific dirs.
· For 2.0 this is a define called "NFS_ROOT" in
"include/linux/nfs_fs.h".
· For 2.2 this is a define in "fs/nfs/nfsroot.c".
5. Now just compile the kernel as usual, see the kernel-howto.
6. If you don't have /dev/nfsroot yet, create it by typing:
mknod /dev/nfsroot b 0 255.
7. After compiling the kernel set the root to nfsroot by typing:
rdev <path-to-zImage>/zImage /dev/nfsroot
8. Before booting with devfs you need to make a few changes to
/etc/conf.modules, append the contents of the conf.modules in the
devfs documentation to it.
9. Since this new kernel is compiled for IP autoconfiguration, it
will try to autoconfigure the IP of the server during bootup, which
of course will fail since the server is the machine handing out the
IPs. To avoid a long timeout, add append="ip=off" to the linux
section of /etc/lilo.conf (see the sketch after this list).
10. Run lilo and boot the new kernel.
11. Due to devfs you'll have lost all symlinks on the server. With
redhat these are usually /dev/mouse and /dev/cdrom. Recreate them.
If you also used to use special ownerships, chown the appropriate
files in /dev. Now save the /dev settings (in /etc/sysconfig, since
they might be ws specific):
· Copy rc.devfs from the devfs documentation in the kernel source to
/etc/rc.d/rc.devfs and make it executable
· Save the settings by typing:
/etc/rc.d/rc.devfs save /etc/sysconfig
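For reference, the relevant section of the server's /etc/lilo.conf
might then look roughly like this (a sketch only; the image path,
label and root device are placeholders):
______________________________________________________________________
image=/boot/zImage-2.2
        label=linux
        root=/dev/hda1
        read-only
        append="ip=off"
______________________________________________________________________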
3.2. Creating and populating /tftpboot, making symlinks for /tmp etc.
The next step is to create and populate /tftpboot
3.2.1. The automagic part
This is all handled by a big script, since putting a long list of
commands into this howto seemed pretty useless to me. If you want to
do this manually, just read the script and type it in as you go ;)
This setup script does some nasty things like nuking /tmp, temporarily
killing syslog, and umounting /proc. So make sure that no one is using
the machine during this, and that X isn't running. Just making sure
you're the only one logged in, on a text console, is enough; no need
to change runlevels.
DISCLAIMER: this script has been tested, but nevertheless, if it
messes up your server you're on your own. I can take no responsibility
whatsoever. Let me repeat: this howto is only for experienced Linux
sysadmins. Also, this script is designed to be run once, and I really
mean once. Running it twice will nuke: /etc/fstab,
/etc/X11/XF86Config, /etc/X11/X and /etc/conf.modules.
Now with that said, just cut and paste the script, make it executable,
execute it, and pray to the holy penguin that it works ;)
______________________________________________________________________
#!/bin/sh
SERVER_NAME=`hostname -s`
###
echo creating /etc/rc.d/rc.ws
#this basicly just echos the entire script ;)
echo "#root on nfs stuff
SERVER=$SERVER_NAME
#we need proc for mtab, route etc
mount -t proc /proc /proc
IP=\`ifconfig eth0|grep inet|cut --field 2 -d ':'|cut --field 1 -d ' '\`
#if the first mount fails we're probably the server, or atleast something is
#pretty wrong, so only do the other stuff if the first mount succeeds
mount \$SERVER:/tftpboot/\$IP/sysconfig /etc/sysconfig -o nolock &&
{
#other mounts
mount \$SERVER:/home /home -o nolock
mount \$SERVER:/ /\$SERVER -o ro,nolock
#/var
echo Creating /var ...
mke2fs -q -i 1024 /dev/ram1 1024
mount /dev/ram1 /var -o defaults,rw
cp -a /tftpboot/var /
#network stuff
. /etc/sysconfig/network
HOSTNAME=\`cat /etc/hosts|grep \$IP|cut --field 2\`
route add default gw \$GATEWAY
ifup lo
}
#restore devfs settings
/etc/rc.d/rc.devfs restore /etc/sysconfig
umount /proc" > /etc/rc.d/rc.ws
###
echo splitting runlevel 3 for the client and server
mv /etc/rc.d/rc3.d /etc/rc.d/rc3.server
cp -a /etc/rc.d/rc3.server /etc/rc.d/rc3.ws
rm /etc/rc.d/rc3.ws/*network
rm /etc/rc.d/rc3.ws/*nfs
rm /etc/rc.d/rc3.ws/*nfsfs
rm /etc/rc.d/rc3.ws/S99local
ln -s /etc/sysconfig/rc.local /etc/rc.d/rc3.ws/S99local
ln -s /etc/rc.d/rc3.server /etc/sysconfig/rc3.d
ln -s /etc/sysconfig/rc3.d /etc/rc.d/rc3.d
###
echo making tmp a link to /var/tmp
rm -fR /tmp
ln -s var/tmp /tmp
###
echo moving various files around and create symlinks for them
echo mtab
/etc/rc.d/init.d/syslog stop
umount /proc
touch /proc/mounts
mount /proc
/etc/rc.d/init.d/syslog start
rm /etc/mtab
ln -s /proc/mounts /etc/mtab
echo fstab
mv /etc/fstab /etc/sysconfig
ln -s sysconfig/fstab /etc/fstab
echo X-config files
mkdir /etc/sysconfig/X11
mv /etc/X11/X /etc/sysconfig/X11
ln -s ../sysconfig/X11/X /etc/X11/X
mv /etc/X11/XF86Config /etc/sysconfig/X11
ln -s ../sysconfig/X11/XF86Config /etc/X11/XF86Config
echo conf.modules
mv /etc/conf.modules /etc/sysconfig
ln -s sysconfig/conf.modules /etc/conf.modules
echo isapnp.conf
mv /etc/isapnp.conf /etc/sysconfig
ln -s sysconfig/isapnp.conf /etc/isapnp.conf
###
echo creating a template dir for the ws directories
echo /tftpboot/template
mkdir /home/tftpboot
ln -s home/tftpboot /tftpboot
mkdir /tftpboot/template
mkdir /$SERVER_NAME
echo root
ln -s / /tftpboot/template/root
echo sysconfig
cp -a /etc/sysconfig /tftpboot/template/sysconfig
rm -fR /tftpboot/template/sysconfig/network-scripts
ln -s /$SERVER_NAME/etc/sysconfig/network-scripts \
/tftpboot/template/sysconfig/network-scripts
echo NETWORKING=yes > /tftpboot/template/sysconfig/network
echo `grep "GATEWAY=" /etc/sysconfig/network` >> /tftpboot/template/sysconfig/network
echo "/dev/nfsroot / nfs defaults 1 1" > /tftpboot/template/sysconfig/fstab
echo "none /proc proc defaults 0 0" >> /tftpboot/template/sysconfig/fstab
echo "#!/bin/sh" > /tftpboot/template/sysconfig/rc.local
chmod 755 /tftpboot/template/sysconfig/rc.local
rm /tftpboot/template/sysconfig/rc3.d
ln -s /etc/rc.d/rc3.ws /tftpboot/template/sysconfig/rc3.d
rm /tftpboot/template/sysconfig/isapnp.conf
echo var
cp -a /var /tftpboot/var
rm -fR /tftpboot/var/lib
ln -s /$SERVER_NAME/var/lib /tftpboot/var/lib
rm -fR /tftpboot/var/catman
ln -s /$SERVER_NAME/var/catman /tftpboot/var/catman
rm -fR /tftpboot/var/log/httpd
rm -f /tftpboot/var/log/samba/*
for i in `find /tftpboot/var/log -type f`; do cat /dev/null > $i; done
rm `find /tftpboot/var/lock -type f`
rm `find /tftpboot/var/run -type f`
echo /sbin/fsck.nfs
echo "#!/bin/sh
exit 0" > /sbin/fsck.nfs
chmod 755 /sbin/fsck.nfs
echo all done
______________________________________________________________________
3.2.2. Manual adjustments to some files
Now we need to make a few manual adjustments to the server:
1. The ws setup script has to be sourced at the very beginning of
rc.sysinit, so add the following lines directly after setting the
PATH:
___________________________________________________________________
#for root over nfs workstations
. /etc/rc.d/rc.ws
___________________________________________________________________
2. Strip /etc/rc.d/rc3.ws to a bare minimum. It might be useful to
create something like rc.local.ws, but I'll leave that up to you.
Network and nfsfs are already set up. The following have already been
removed / updated by the automagic script:
· network
· nfsfs
· nfs
· rc.local
3.3. Exporting the appropriate file systems and setting up bootp
The server must of course export the appropriate filesystems and
assign the IP addresses to the clients.
3.3.1. Exporting the appropriate file systems
We need to export some dirs for the workstations, so for the situation
here at the university I would add the following to /etc/exports:
______________________________________________________________________
/ *.st.hhs.nl(ro,no_root_squash)
/home *.st.hhs.nl(rw,no_root_squash)
______________________________________________________________________
Of course use the appropriate domain ;) and restart nfs by typing:
/etc/rc.d/init.d/nfs restart
Note for knfsd users: knfsd doesn't allow you to have multiple exports
on one partition with different permissions. Also, knfsd doesn't allow
clients to cross partition boundaries: for example, if a client mounts
/ and /usr is a different partition, it won't have access to /usr.
Thus if you use knfsd, at least /home should be on a different
partition; the server prepare script already puts /tftpboot in /home,
so that doesn't need a separate partition. If you've got any other
partitions your clients should have access to, export them separately
and add mount commands for them to /etc/rc.d/rc.ws.
3.3.2. Setting up bootp
1. If bootp isn't installed yet install it. It comes with RedHat.
2. Edit /etc/inetd.conf and uncomment the line beginning with bootps;
if you want to use a bootrom, uncomment tftp while you're at it.
3. Restart inetd by typing:
/etc/rc.d/init.d/inetd restart
4. Adding workstations
Now that the server is all done, we can start adding workstations.
4.1. Creating a boot disk or bootrom
You'll need to create a bootrom and / or a bootdisk to boot your
workstation.
4.1.1. Creating a bootdisk
Even if you wish to use a bootrom, it's wise to test with a bootdisk
first. To create a boot disk just type:
dd if=/<path-to-zImage>/zImage of=/dev/fd0
4.1.2. Creating a bootrom
There are a few free packages out there to create bootroms:
1. netboot: this is IMHO the most complete free package out there. It
uses standard DOS packet drivers, so almost all cards are
supported. One very useful hint I got on their mailing list was to
pklite the packet drivers, since some commercial drivers are too big
to fit into the bootrom. Netboot's documentation is complete
enough, so I won't waste any time reproducing it here; it should be
more than sufficient to create a bootrom and boot a ws with it.
Netboot's webpage is: http://www.han.de/~gero/netboot/
2. etherboot: this is the other free package out there. It has a
few nice features like DHCP support, but more limited driver support,
as it uses its own driver format. I haven't used this, so I really
can't give any more useful info. Etherboot's webpage is:
http://www.slug.org.au/etherboot/
About the roms themselves. Most cards take ordinary eproms with an 28
pins dip housing. These eproms come in size upto 64kB. For most cards
you'll need 32kB eproms with netboot. Some cards drivers will fit into
16kB but the price difference of the eproms is minimal. These eproms
can be burned with any ordinairy eprom burner.
4.2. Creating a ws dir
Just copy over the template by typing:
cd /tftpboot
cp -a template <ip>
You could of course also copy over the dir of a workstation with an
identical mouse, graphics card and monitor, and omit the configuration
in step 5.4.
4.3. Add entries to /etc/bootptab and /etc/hosts
Edit /etc/bootptab and add an entry for your test ws, an example entry
is:
______________________________________________________________________
nfsroot1:hd=/tftpboot:vm=auto:ip=10.0.0.237:\
:ht=ethernet:ha=00201889EE78:\
:bf=bootImage:rp=/tftpboot/10.0.0.237/root
______________________________________________________________________
Replace nfsroot1 with the hostname you want your ws to have, replace
10.0.0.237 with the IP you want your ws to have (do this twice), and
replace 00201889EE78 with the MAC address of your ws. If you don't know
the MAC address of the ws, just boot it with the boot disk you just
created and look for the MAC address in the boot messages. There's a
chance bootpd is already running, so just to make sure, try to restart
it by typing:
killall -HUP bootpd
Don't worry if it fails; that just means it wasn't running. inetd will
start it when asked to.
4.4. Booting the ws for the first time
Just boot the ws from the bootdisk. This should get you a working ws
in text mode, with the exact same setup as your server except for the
IP number and the running services. Even if you want to use a bootprom,
it's wise to first test with the bootdisk; if that works you can try to
boot with the bootrom. See the bootrom's documentation for more info.
4.5. Set the ws specific configuration.
Now it's time to configure any ws specific settings:
1. First of all, to get the mouse working, just run mouseconfig. To
apply the changes and check that the mouse works, type:
/etc/rc.d/init.d/gpm restart
2. Run Xconfigurator, when Xconfigurator has probed the card and you
can press ok don't! Since we have moved the symlink for the Xserver
from /etc/X11/X to /etc/sysconfig/X11/X Xconfigurator will fail to
create the proper link. Thus to make sure the rest of Xconfigurator
goes well, switch to another console and create the link in
/etc/sysconfig/X11 to the advised server. Now just finish
Xconfigurator and test X.
3. Configure anything else which is different from the server /
template:
· sound: You probably need to modify isapnp.conf and conf.modules;
both have already been made links into /etc/sysconfig by the server
setup script.
· cdrom: Link in /dev, entry in /etc/fstab? etc.
· rc.local: Make any necessary changes.
4. To save the links and any other changes to /dev, type:
/etc/rc.d/rc.devfs save /etc/sysconfig
5. All done.
5. Added bonus: booting from cdrom
Much of the above also applies to booting from cdrom. Since I wanted to
document how to boot from cdrom anyway, I document it here to avoid
typing a lot of the same twice.
Why would one want to boot a machine from cdrom? Booting from cdrom
is interesting wherever one wants to run a very specific
application, like a kiosk, a library database program or an internet
cafe, and one doesn't have a network or a server to use a root over
nfs setup.
5.1. Basic Principle
The basic principle is once again simple: boot with a cdrom as root.
To make this possible we'll use the Rock Ridge extension to put a
Unix-like filesystem on a cd and the El Torito extension to make cd's
bootable.
5.1.1. Things can't be that simple
Of course this setup also has a few problems; most are the same as
above:
1. We'll need write access to: /dev, /var & /tmp.
· We'll just use the same solutions as with root over nfs (see
above):
· For /dev we'll use Devfs
· For /var and /tmp we'll use a shared ramdisk of 1 MB. It's shared to
use the space as efficiently as possible. /tmp is replaced by a
symlink to /var/tmp to make the sharing possible.
· Populating the ramdisk with tarballs or template dirs works
equally well, but with template dirs it's much easier to make
changes, thus we'll use template dirs.
2. Some apps need write access to /home.
· Put the home dir of the user who will be running the application
in /var, and populate it with the rest of /var on every boot.
3. /etc/mtab needs to be writable:
· Create a link to /proc/mounts and create an empty file mounts in
/proc, see above.
5.2. Creating a test setup.
Now that we know what we want to do and how, it's time to create a test
setup:
1. For starters just take one of the machines which you want to use
and put in a big disk and a cd-burner.
2. Install your linux of choice on this machine, and leave a 650 MB
partition free for the test setup. This install will be used to
make the iso image and to burn the cd's from, so install the
necessary tools. It will also be used to repair any mistakes
which leave the test setup unbootable.
3. On the 650 MB partition install your linux of choice with the setup
you want to have on the cd; this will be the test setup.
4. Boot the test setup.
5. Compile a kernel as described in Section 3.1 and follow all the
steps; the changes needed for devfs are still needed! At step 3 of
Section 3.1 put in the following:
· isofs compiled in
· devfs compiled in
· cdrom support compiled in
· everything else you need either compiled in or as module.
6. Configure the test setup:
· Create the user which will be running the application.
· Put its home dir in /var.
· Install the application if needed.
· Configure the application if needed.
· Configure the user so that the application is automagically run after
login.
· Configure linux so that it automagically logs in the user.
· Configure anything else which needs configuring.
7. Test that the test setup automagically boots into the application and
everything works.
8. Boot the main install and mount the 650 mb partition on /test of
the main install.
9. Put the following in a file called /test/etc/rc.d/rc.iso; this file
will be sourced at the beginning of rc.sysinit to create /var:
___________________________________________________________________
#/var
echo Creating /var ...
mke2fs -q -i 1024 /dev/ram1 1024
mount /dev/ram1 /var -o defaults,rw
cp -a /lib/var /
#restore devfs settings, needs proc
mount -t proc /proc /proc
/etc/rc.d/rc.devfs restore /etc/sysconfig
umount /proc
___________________________________________________________________
10.
Edit /test/etc/rc.sysinit, comment out the lines where the root is
remounted rw, and add the following 2 lines directly after setting
the PATH:
___________________________________________________________________
#to boot from cdrom
. /etc/rc.d/rc.iso
___________________________________________________________________
11.
Copy the following to a script and execute it; this will create
a template for /var and make /tmp and /etc/mtab links.
___________________________________________________________________
#!/bin/sh
echo tmp
rm -fR /test/tmp
ln -s var/tmp /test/tmp
###
echo mtab
touch /test/proc/mounts
rm /test/etc/mtab
ln -s /proc/mounts /test/etc/mtab
###
echo var
mv /test/var/lib /test/lib/var-lib
mv /test/var /test/lib
mkdir /test/var
ln -s /lib/var-lib /test/lib/var/lib
rm -fR /test/lib/var/catman
rm -fR /test/lib/var/log/httpd
rm -f /test/lib/var/log/samba/*
for i in `find /test/lib/var/log -type f`; do cat /dev/null > $i; done
rm `find /test/lib/var/lock -type f`
rm `find /test/lib/var/run -type f`
___________________________________________________________________
12.
Remove the creation of /etc/issue* from /test/etc/rc.local; it will
only fail.
13.
Now boot the test partition again; it will be read-only, just like a
cdrom. If something doesn't work, reboot to the working partition,
fix it, try again, etc. Or you could remount / rw, fix it, then
reboot straight into the test partition again. To remount / rw, type:
mount -o remount,rw /
5.3. Creating the cd
5.3.1. Creating a boot image
First of all boot into the working partition. To create a bootable cd
we'll need an image of a bootable floppy. Just dd-ing a zImage doesn't
work, since the loader at the beginning of the zImage doesn't seem to
like the fake floppy drive a bootable cd creates. So we'll use syslinux
instead.
1. Get boot.img from a redhat cd
2. Mount boot.img somewhere through loopback by typing:
mount boot.img somewhere -o loop -t vfat
3. Remove everything from boot.img except for:
· ldlinux.sys
· syslinux.cfg
4. Cp the kernel-image from the test partition to boot.img.
5. Edit syslinux.cfg so that it contains the following, of course
replacing zImage with the appropriate image name:
___________________________________________________________________
default linux
label linux
kernel zImage
append root=/dev/<insert your cdrom device here>
___________________________________________________________________
6. Umount boot.img:
umount somewhere
7. If your /etc/mtab is a link to /proc/mounts, umount won't
automagically free /dev/loop0, so free it by typing:
losetup -d /dev/loop0
5.3.2. Creating the iso image
Now that we have the boot image and an install that can boot from a
readonly mount it's time to create an iso image of the cd:
1. Copy boot.img to /test
2. cd to the directory where you want to store the image; make sure
it's on a partition with enough free space.
3. Now generate the image by typing:
mkisofs -R -b boot.img -c boot.catalog -o boot.iso /test
5.3.3. Verifying the iso image
1. Mount the image through the loopback device by typing:
mount boot.iso somewhere -o loop -t iso9660
2. Now verify that the contents are ok.
3. Umount boot.iso:
umount somewhere
4. If your /etc/mtab is a link to /proc/mounts, umount won't
automagically free /dev/loop0, so free it by typing:
losetup -d /dev/loop0
5.3.4. Writing the actual cd
Assuming that you've got cdrecord installed and configured for your
cd-writer, type:
cdrecord -v speed=<desired writing speed> dev=<path to your
writer's generic scsi device> boot.iso
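A hypothetical example, assuming a 4x writer at SCSI bus 0, target 6,
lun 0 (check cdrecord -scanbus for the values on your own system):
cdrecord -v speed=4 dev=0,6,0 boot.iso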
5.4. Boot the cd and test it
Well the title of this paragraph says it all ;)
6. Thanks
· The HHS (Haagse Hoge School), a Dutch college where I first
developed and tested this setup for use in a couple of labs, and
where the initial version of this HOWTO was written.
· ISM, a Dutch company where I'm doing my final project. Part of the
project involves diskless machines, so I got to develop this setup
further and had the time to update this HOWTO.
· All the users who will give me useful input once this first
version is out ;)
7. Comments
Comments, suggestions and such are welcome. They can be sent to Hans de
Goede at: j.w.r.degoede@et.tudelft.nl
Root over NFS - Another Approach
George Gousios, cs98011@icsd.aegean.gr
v1.0, 2001-09-12
This HOWTO does not intend to replace the existing Root over NFS
Howto's. It is just another approach, particularly useful in large
system installations. It is the result of many days of trying to
set up a system for the University of the Aegean computer labs. The
installation method described here is up and running. The HOWTO is
dedicated to all of the people who programmed this exceptionally good
OS and these tools. Also dedicated to all the people who encouraged me
to write it.
______________________________________________________________________
Table of Contents
1. Introduction
1.1 The setting
1.2 The alternatives
1.3 General Principles
2. Setting up the server
2.1 Setting up the NFS server
2.2 Setting up the DHCP/BOOTP server
2.3 Preparing the base system
3. Setting up the clients
3.1 Errata
3.2 Fiddling with scripts and files!
3.2.1 How to setup a swap partition
3.2.2 Modifying /etc/fstab
3.2.3 Copying password files
3.3 Booting the base system
3.4 Configuring the system
3.4.1 Configuring the language
3.4.2 The X window system
3.4.3 Configuring network access for KDE2
4. Preparing the boot disk
4.1 Building a kernel
4.2 Creating the boot disk
4.3 The kernel command line
5. The magic time
6. Other Stuff
6.1 Contributors
6.2 Copyrights
6.3 Contacting the author
6.4 Changelog
7. Appendix
7.1 Appendix A - A script for creating host directories
7.2 Appendix B - A script to create the dhcpd.conf file using arpwatch
7.2.1 The arp.dat2dhcpd.conf script
7.3 Appendix C - A sample XF86Config file
______________________________________________________________________
1. Introduction
This document does not resemble a common HOWTO, in the sense of
referring only to general principles; it is rather a hands-on approach
to an inherently complex matter. It borrows the structure of the current
Root over NFS HOWTO, but differs from it in the following points:
· It provides a working solution for the distribution used. The
distribution-specific points should be applicable to all major
distributions (RedHat, SuSE, Debian).
· It uses more up-to-date tools, e.g. NFS v3.0, kernel 2.4.0, dhcp
instead of bootparamd.
· All steps are described in detail, letting the reader adapt them
to his own system. No scripts!
This HOWTO expects that you have a general knowledge of what you are
up to, so first read the Diskless Nodes HOW-TO.
1.1. The setting
It is a common case for a university computer lab to have a lot of PCs
running Windows 98 and/or NT and a powerful UNIX server to satisfy the
need for an alternative operating environment. This UNIX server is
most of the time idle, or merely accessed by telnet and running trivial
tasks. On the other hand, students, especially those attending a
computer science department, feel like taking full advantage of it,
just for fun or for "educational purposes" (breaking in, hacking
it...). The restrictive environment of telnet does not allow us to
enjoy the use of a powerful server. There are 2 alternatives to that:
· Try to persuade the department's headmaster to approve of the
purchase of a bunch of new Unix workstations.
· Try to persuade the same guy to approve of transforming the server
to a diskless node server.
The network at the computer lab consists of the following.
· UNIX server: SUN Enterprise 3500 with two 64-bit SPARC processors at
366 MHz and 512 MB of memory. A real monster, isn't it?
· "Dumb" target workstations: 60-70 PC's with variable
configurations, ranging from PII 266 to PIII 450 with 64-128 MB
RAM.
The task I had to accomplish was the following: Provide a complete
working solution without new expenses and without modifying anything
but the necessary on the server.
1.2. The alternatives
Being responsible for the project, I had to choose among a
variety of solutions. I chose the following, for the
reasons illustrated:
· The new 2.4 kernel: It provides a robust and fast solution, using
less memory than the old 2.2 series. If it is important for your
users to attach devices to their PC's then it is the only solution.
Also provides NFS v3, and more efficient memory management.
· The KDE 2.1.1 desktop environment: VERY stable, easy to use,
Internet enabled, makes the transition from Windows to Linux
desktop almost effortless. GNOME + Afterstep is another option, but
not as mature as a solution as KDE.
· SuSE 7.0 distribution: My favorite one, IMHO the most balanced
between ease of use and understanding of a Linux system structure.
1.3. General Principles
To be able to boot a Linux system, you have to provide it with the
following:
· The /sbin directory. There resides the init program, which is
responsible for starting other programs and startup scripts
during the boot process. The /sbin directory also contains the
startup scripts in the case of SuSE, some useful programs like the
portmap program and many other programs that are needed before
you mount the /usr directory.
· The /lib directory. It contains the libc libraries that are
absolutely necessary if your init is dynamically linked.
· The /bin directory. It contains file commands and shells for
running startup scripts.
· The /etc directory. It contains configuration files for most
programs and the rc.d directories, the default location for startup
scripts.
· The /var directory. It is a spool area for programs that want to
write somewhere. It is divided into many subdirectories with
different purposes.
· The /dev directory. It contains character and block special devices
that allow programs to communicate with the computer's devices via
the kernel.
You should notice that after a clean install, the total size of
these directories is not that big, ranging from 30 to 40 MB. The
main load of files exists in the /usr and /opt directories. So, it
is possible to create a directory for every diskless client
containing the above listed directories and mount points for
directories like /usr that will be exported by the server. The
boot process, as assumed by this document, is the following:
1. The user reboots the computer, and using a diskette boots the Linux
kernel.
2. The kernel takes control of the system, identifies the system
devices, and uses BOOTP to obtain the IP address matching the NIC's
hardware address.
3. The init program is started. Before switching to a run level, it
calls a script described in the /etc/inittab file. This script is
responsible for building the library cache, initialising and mounting
a swap file, loading some system specific kernel modules and setting
the hostname.
4. The boot script finishes and the init program switches to the
specified runlevel. It starts to execute the scripts located in
the /etc/rc.d/rcX directory, where 'X' is the name of the runlevel.
These scripts are responsible for starting the portmapper and
mounting the NFS exported /usr, /home and /opt directories.
5. The user is able to login.
To sum up, the system administrator has to do the following tasks:
· Prepare a clean install of the system to be exported to the
diskless hosts.
· Create the host specific directories
· Control what is going to be started during the diskless clients'
boot process
· Prepare the server to export some directories and start a bootp
service.
2. Setting up the server
The first, and less tricky, thing to do is to setup the server. The
server must be prepared to run these services:
· NFS, preferably version 3, for exporting the following directories:
/usr, /lib/modules, /opt (at least at SuSE) and /home (unless you
have a dedicated file server).
· DHCP server (in bootp mode), for matching the clients' MAC
addresses to IP addresses.
Also, the administrator has to create directories for each client,
containing the necessary startup files and programs. The directory
scheme created for the installation described was like this one:
______________________________________________________________________
/usr/local/linux-
|-/base-
| |-/bin
| |-/sbin
| |-/etc
|
|-/workstations-
| |
| |-195.251.160.100
| | |-/bin
| | |-/sbin
| | |-/etc
| |
| |-195.251.160.101
| |-195.251.160.102
| |-base(symbolic link to ../base)
______________________________________________________________________
The /base directory contains the whole file system you want to export
to your clients. The per IP directories contain files that are needed
before mounting the /usr or /lib/modules directories, like the /etc
folder. This is a convenient directory structure for 2 purposes: i)
You can easily create a basic system in the base directory and copy
the per-workstation files to the workstation directories easily, with
an entry-level bash script; ii) You can easily add, delete or update
workstations by modifying the directories under /workstations. A
script for copying the appropriate files (which will be discussed
later) can be found in Appendix A.
2.1. Setting up the NFS server
An NFS server can be set up in two ways:
· Using the /etc/exports file on BSD-compliant Unices like Linux or
FreeBSD.
· Using the /etc/dfs/dfstab at SysV Unices like Solaris.
/etc/exports: The /etc/exports file controls the directories to be
exported and the export options per workstation. It has a
structure like the following (Linux):
______________________________________________________________________
/path/to/dir1 ws1(options) ws2(options)....
/path/to/dir2 ws3(options) ws1(options)....
______________________________________________________________________
Options include ro or rw, root_squash, wsize, tcp, version.
Have a look at the nfs or the exports man page and the NFS Howto for a
more detailed description of what these options mean.
/etc/dfs/dfstab:A typical dfstab file on Solaris should look like the
following:
______________________________________________________________________
share -F nfs -o rw=193.250.160@,ro=193.250.161@ /export/home
share -F nfs -o ro=193.250.160@,root=193.250.161.132 /export/engineering
______________________________________________________________________
Of course, these options are discussed in detail in the dfstab man
page.
The directories we want to export are /usr/local/linux/base/usr,
/usr/local/linux/base/opt, /usr/local/linux/base/lib/modules and
/home, assuming that you 've followed the suggested structure.
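With the suggested directory structure, the corresponding /etc/exports
entries on a Linux server might look like the following sketch; the
client network 195.251.160.0/255.255.255.0 and the read-only choices
are only assumptions to adapt to your own site:
______________________________________________________________________
/usr/local/linux/base/usr          195.251.160.0/255.255.255.0(ro)
/usr/local/linux/base/opt          195.251.160.0/255.255.255.0(ro)
/usr/local/linux/base/lib/modules  195.251.160.0/255.255.255.0(ro)
/home                              195.251.160.0/255.255.255.0(rw)
______________________________________________________________________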
Optimising NFS
Of course, this is somewhat outside our scope, but here are some
general principles:
· Reduce the NFS block size (the wsize mount parameter on Linux) to
whatever is closest to the MTU of your network type. For Ethernet, a
good value of wsize is 2048 bytes as long as the MTU is 1536 bytes.
This is generally a good idea because the main traffic load between
the clients and the server consists of little packets, and only in
the case of starting large programs like X or StarOffice is there
a big number of fragmented packets. Of course this may vary in your
case, according to the needs of your users.
· If you plan to have a large installation, spread the space for your
workstations over 2 or more SCSI disks. This will allow concurrent
writes and reads on both disks, increasing response and reducing
latency before a request completes.
· Always use NFS v3 over TCP. The main reason for migrating from v2
to v3 is the write-back caching it offers on both the workstation and
the server. Also, mounting NFS over TCP lets you use the first
recommendation. For further optimising, use a packet
analyzer like Ethereal or tcpdump and decide on your needs. By the way,
Sun has written an excellent guide to optimizing NFS performance
which, although focused on Solaris, is applicable to every
modern Unix and is accessible online at http://docs.sun.com
<http://docs.sun.com>.
2.2. Setting up the DHCP/BOOTP server
Although there are many DHCP or BOOTP servers 'out there', some of
which are proprietary, the best option is to use the reference IETF
DHCP server. It is the least vulnerable and the most extensible DHCP
available. The main server configuration is done through the
/etc/dhcpd.conf file. This file is divided into two sections, the
general server configuration and the host specific configuration. A
typical dhcpd.conf file looks like this, in case that the DHCP/BOOTP
server is used in BOOTP mode:
subnet 193.250.160.0 netmask 255.255.255.0 {
range 193.250.160.10 193.250.160.12;
}
host george{
hardware ethernet 00:60:08:2C:22:20;
fixed-address 193.250.160.10;
}
host earth{
hardware ethernet 00:A0:24:A5:FD:E0;
fixed-address 193.250.160.12;
}
This structure is fairly easy for everyone to understand. For every
diskless client we have to supply the program with a 'host'
declaration providing a pair of hardware and IP addresses. The host
name provided in the 'host' statement can be anything, but there is
a convention to use the real host name of the client having the
specific IP. The range statement in the subnet declaration does not
need to be the range that you want your clients to have. In fact,
if these clients are normal workstations with an operating system that
uses DHCP during its boot to obtain an IP address, it is not
recommended to have the same IP for their operation as diskless
clients. If you have specific needs, have a look at the dhcpd.conf man
page.
Another difficulty is how to obtain the IP - MAC address pairs for a
large network. The solution is a nice little program called arpwatch.
This program runs in the background and keeps track of the IP - MAC
address pairs of the computers that your computer has contacted, in a
file that you have specified. The only thing you have to do is to ping
the computers you want. In Appendix B there is a script that starts
arpwatch, pings a range of subsequent IPs and creates the dhcpd.conf
file. If you want to do it manually, start arpwatch when your network
is at its peak of usage and wait for some time. On a shared medium
network (Ethernet, Token Ring) arpwatch will track down all different
IPs and hardware addresses.
2.3. Preparing the base system
To prepare the base system, just install your favorite distribution to
a mountable partition on a hard disk with a Unix-like operating system
already installed. Install all the programs you want to be available
to your users. Then you have to transfer the whole partition,
preserving the links and the character or block devices. This is best
done using the tar program. Boot the previously installed system and
execute the following command, assuming that you have mounted the new
partition at /mnt:
tar cpvf system.tar -C /mnt .
This command will create a tar archive in the current directory with
the whole system to be served to the diskless clients. Then just copy
the tar archive to the server using a CDROM or through the network and
extract it into the base directory. The commands to do this are:
cd /usr/local/linux/base && tar xpvf /path/to/system.tar
3. Setting up the clients
3.1. Errata
In order to set up the clients, we have to work on the base system.
First, we will make some modifications to the startup scripts by hand,
and second we will boot a workstation with the base system to make
sure it works and to polish some details. Note that this part is very
distribution specific and perhaps some of the things described here are
not applicable to your case. I can only guarantee that this works for
SuSE 7.0. Please feel free to send me distribution specific copies of
this page!
3.2. Fiddling with scripts and files!
After init is started, it executes a script described in /etc/inittab.
This script has a very specific job to do: bring the system into a state
in which other programs can be started. In most distributions I can
think of, this script does the following:
1. Mounts the /proc, /dev/pts and swap filesystems.
2. Activates raid arrays and fscks the root filesystem.
3. Adjusts the clock.
4. Starts the kernel daemon for autoloading of modules.
5. Executes user defined client scripts.
6. Sets some kernel parameters.
On most distributions I have checked, this script is very well
commented and it is possible for an experienced user to remove some
lines that are not wanted or not applicable during a network boot.
I've also noticed that none of the programs started require the
/usr directory to be mounted. If you are trying to netboot a host,
you must make the following modifications to this script:
· Remove all entries that do fsck or initialise raid arrays, and add
to the top of the script this command : mount -o remount,rw /
because the client has to have rw access to the root directory when
it boots.
· Do not let the kernel daemon start until all partitions are mounted.
· Mount a swap partition. This is described later.
· Start the portmapper. If your system has a specific directory for
starting bootup scripts, place the portmapper startup script there
giving it the highest priority possible, for example: ln -s
/etc/rc.d/portmap /etc/rc.d/boot/S01portmap if you are using SuSE.
· Place the NFS filesystem mounting script in the system specific
directory for boot scripts with priority lower than the portmapper,
for example ln -s /etc/rc.d/nfs /etc/rc.d/boot/S02nfs for SuSE.
· Remove all entries that automount local partitions, and all entries
that start an automounter daemon for RedHat.
3.2.1. How to setup a swap partition
This is tricky business! Swapping over NFS is not allowed by the
kernel and does not work either. You cannot use swapon on files that
are on an NFS mounted filesystem. We have to do some tricks to enable
it:
1. Create the swap file. Its size can be variable but for a machine
with 128 MB of RAM a swap size of 40-50 MB seems reasonable. The
command to create the swap file is: dd if=/dev/zero of=/var/swap
bs=1k count=Xk where X stands for the number of MB your swap should
be. It is also a necessity to put the swap file under /var as long
as it is mounted at boot.
2. Format the swap file using the mkswap command.
3. Initialise a loopback device using the swap file. The command is
losetup /dev/loop0 /var/swap.
4. Activate swapping on the loopback device with the command swapon
/dev/loop0.
You have to initialise the swap at the very beginning of
the boot process, so place commands 2-4 somewhere near the top
of the startup script (see the sketch below). The first command is very
time consuming, especially in the case of a loaded network, so just
put a swap file in the base system and do not delete it when you create
the directories for each host.
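A minimal sketch of the corresponding lines near the top of the startup
script, assuming the swap file was already created as /var/swap in the
base system:
______________________________________________________________________
# enable swapping over NFS through a loopback device
mkswap /var/swap              # step 2: format the swap file
losetup /dev/loop0 /var/swap  # step 3: attach it to a loop device
swapon /dev/loop0             # step 4: activate swapping on the device
______________________________________________________________________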
3.2.2. Modifying /etc/fstab
The /etc/fstab file contains entries for automounting file systems at
boot. In our case, we have to place the following lines at the end of
it:
server_IP:/usr/local/linux/base/usr /usr nfs nfsvers=3,wsize=2048,tcp 0 0
server_IP:/usr/local/linux/base/opt /opt nfs nfsvers=3,wsize=2048,tcp 0 0
server_IP:/usr/local/linux/base/lib/modules /lib/modules nfs nfsvers=3,wsize=2048,tcp 0 0
fileserver_IP:/home /home nfs nfsvers=3,wsize=2048,tcp 0 0
Also, do not forget to comment out lines that mount local partitions.
Save this file as /etc/fstab.new because it should not be activated
yet, since we have to boot the base system first.
3.2.3. Copying password files
You must provide the system with two files to let the users perform a
login. To do this, just copy the files /etc/passwd and /etc/shadow from
your file server to the base system. Notice that you have to do this
every time you add a user to the system or a user changes his/her
password, so it is best done by creating a cron job (a sketch follows).
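A sketch of such a cron job on the file server, assuming the base
system lives under /usr/local/linux/base as suggested above; add it to
root's crontab with crontab -e:
______________________________________________________________________
# copy the account files to the exported base system every night at 02:00
0 2 * * * cp -p /etc/passwd /etc/shadow /usr/local/linux/base/etc/
______________________________________________________________________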
3.3. Booting the base system
To boot the base system we have to create a boot disk first. Go to
the next section and create a boot disk as recommended. Please, change
the 'append' line to this one:
append init=/sbin/init root=/dev/nfs
ip=X:Y:195.251.160.254:255.255.255.0:::'off'
nfsroot=Y:/usr/local/linux/base vga=0x318
(Of course, on a single line)
where X stands for an unused IP address in your network and Y for the
IP address of the NFS server. Of course, you have to export the
/usr/local/linux/base directory from the NFS server with the
rw,no_root_squash options. Now boot the base system. Everything
should work OK, but I don't think it is likely that you succeeded
on the first boot! There are many obscure points
that you have forgotten to edit or I have forgotten to mention.
This is the standard method to boot the base system and to add
programs or a new kernel to your installation, so back up the files
you have edited as well as the boot disk image.
After succeeding in booting the system, you are in a complete Linux
environment. Log in as root and enjoy a first ride in your newly
created system! Now comes the hard time... You have to disable some
services that start up automatically and remove some programs not
needed by the users.
3.4. Configuring the system
Nearly all distributions start these services:
· inetd, the Internet superdaemon responsible for starting other
daemons like telnet, ftp etc.
· syslogd, the logging daemon. Not needed on a diskless client,
because all the modifications are done to files that are easily
replaceable.
· httpd, the apache webserver. Not needed for obvious reasons.
· dhcpclient. Needed for automatic acquisition of an IP address. In
our case, this is done by the kernel.
· lpd, the line printer daemon. This is needed only when you have a
printer connected to a host. In most cases, this is not needed.
Also, depending on your installation, sshd, nscd, cupsd and other
network services not needed on clients may be started. To
disable these services, remove their entries from the runlevel
directory under /etc/rc.d/rcX (see the example below). There is a more
elegant way to do this under SuSE or RedHat, using Yast or Linuxconf.
For Yast, go to System administration ---> Change configuration file
and, using search, locate the entries for every service you want to
stop. Then, uninstall all these services from the base system. The only
service that seems reasonable to me to leave running is the
name server caching daemon, which is able to reduce network traffic a
lot.
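For example, to disable the Apache web server in runlevel 3 by hand you
might do something like the following; the exact names of the start
links differ between distributions and versions, so list the directory
first:
______________________________________________________________________
ls /etc/rc.d/rc3.d/
rm /etc/rc.d/rc3.d/S*apache
______________________________________________________________________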
Now, you have to edit some files:
· /etc/resolv.conf Used to provide a nameserver. Add these entries:
nameserver xxx.xxx.xxx.xxx and domain xxxxx , replacing x with the
correct values.
· /etc/hosts Used to match IP addresses to host names locally. Provide
the basic servers' names of your network.
· /etc/nntpserver Used to provide a news server. Just put the news
server's hostname in it.
· /etc/fstab Restore the fstab.new file we have created earlier.
3.4.1. Configuring the language
Perhaps, you do not leave in the US or the UK, like me, so you have to
configure the language. This is simply done through the .profile
file. Just add the following: export LANG="X"where X is your natural
language. Then, download a console font which supports your codepage
and set, with the help of Yast, the keyboard keymap. Copy .profile to
/etc/skel of the file server or to all the users' home directories.
3.4.2. The X window system
If you want to provide a working X environment for clients with
different graphics hardware, you have to use the XF86_FBDev server. If
you followed the instructions on how to create a boot disk, you should
now be in framebuffer mode at 1024x768@16M colors, which is sufficient
for use with X windows. Now, you have to configure the X server to load
the framebuffer driver. SuSE provides an excellent tool for configuring
X, whether it is version 3 or 4. It is called sax for X 3.3.x and
sax2 for X 4.x. To use the XF86_FBDev driver, start sax with the -s
XF86_FBDev option and configure the server according to your hardware.
In case you do not use SuSE, most of the work must be done by hand.
Create a basic /etc/X11/XF86Config file using xf86config4. Please
choose entries that are as close as possible to your needs. Then edit
/etc/X11/XF86Config. This file is divided into sections that
start with the keyword 'Section' and end with 'EndSection'. Make the
following modifications:
· Section "Files": Add the path to the direcory where you 've put
your fonts.
· Section "Module": Load the GLX module if you want REALLY SLOW Open
GL graphics (Load glx)!
· Section "InputDevice, Driver="mouse"": Add the following lines if
you want to use a wheel mouse:
Option "Buttons" "5"
Option "ZAxisMapping" "4 5"
· Section "Device": Replace everything with the following:
BoardName "AutoDetected"
Driver "fb"
Identifier "Device[0]"
VendorName "AutoDetected
· Section "Modes": Replace everything with the following:
Identifier "Modes[0]"
Modeline "1024x768" 71.39 1024 1040 1216 1 400 768 768 776 802
· Section "Screen": Replace everything with the following
DefaultDepth 16
SubSection "Display"
Depth 16
Modes "1024x768"
EndSubSection
Device "Device[0]"
Identifier "Screen[0]"
Monitor "Monitor[0]"
· Section "ServerLayout": Replace everything with the following:
Identifier "Layout[all]"
InputDevice "Keyboard[0]" "CoreKeyboard"
InputDevice "Mouse[1]" "CorePointer"
Screen "Screen[0]"
and then replace the first argument of the InputDevice directives with
the identifiers which can be found earlier in the file.
I think this should be a working configuration for framebuffer
systems. For further reference take a look at the XF86Config and the
xf86cfg4 man pages. You will find a working XF86Config file in
Appendix C.
3.4.3. Configuring network access for KDE2
KDE is the most extensible, configurable and internet enabled window
manager available, even if we count some commercial ones that are
proud of it! To download KDE, ftp to ftp.kde.org and get the rpms for
your distribution. There, you can also find vanilla sources and other
related projects.
The main configuration of KDE is done through the K Control Center.
There you can find options for configuring the fonts, colors,
backgrounds etc. The most important thing you can configure is the
LAN browsing daemon that KDE incorporates, lisa. There is also a
readme file under \$KDE2ROOT/share/apps/lisa. After you configure
lisa, you have to make it (or her?) start in the background every time
the computer is started. Find lisa's configuration file under
/root. Copy it to /etc. Afterwards, place the command lisa -c
/etc/lisa.conf in the /etc/rc.d/boot.local file, or the equivalent for
your installation. Now tell me, on which is it easier to browse a
network, Windows or Linux?
If your users are coming from the Windows world, they are used to
finding programs in the damned 'Start' menu. To make their transition
easy, edit the KDE menu with the Menu Editor program and add or
remove applications there. Then, copy the .kde2 directory from your
home directory to the /etc/skel directory of your file server. Every
new account you create will have access to the menu (and the settings)
you have created.
4. Preparing the boot disk
To prepare a boot disk we just need a kernel, syslinux and a 1.44 MB
diskette. Syslinux is a tiny boot loader, designed specifically to boot
a kernel and pass some arguments through its command line, using a
diskette. As we will see, it is very easy to configure, too.
4.1. Building a kernel
Always choose the newest kernel to build. At the time of writing
(Wed Sep 12 17:28:22 2001) the newest kernel is 2.4.9. Building an
older kernel can only save you the time of updating the necessary
programs. Also, be sure you have the program versions described in
/usr/src/linux/Documentation/Changes. It is a good idea to compile
the kernel using the base system to be served. The kernel can be built
according to your driver needs, but it must contain the following
options:
· Build in support for the client's network card (Network device
support ---> Select your card driver).
· Build in support for the BOOTP protocol (Networking options --->
IP: kernel level autoconfiguration ---> IP: BOOTP support).
· Build in support for NFS and root over NFS (File systems --->
Network File Systems ---> NFS file system support and File systems
---> Network File Systems ---> NFS file system support ---> Root
over NFS).
· Build in support for loopback devices (Block devices ---> Loopback
device support).
Do not forget to compile in the VESA framebuffer driver. Then go on
with the familiar kernel compilation routine (sketched below). Unless
you have built the kernel using the base system, copy all the modules
created to the base/lib/modules directory of the exported directory
structure. The new kernel resides at
/usr/src/linux/arch/i386/boot.
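For a 2.4 kernel that routine typically looks like this, run from
/usr/src/linux (adjust the make targets to your own habits):
______________________________________________________________________
make menuconfig       # select the options listed above
make dep
make bzImage          # the kernel ends up in arch/i386/boot
make modules
make modules_install  # or copy the modules to base/lib/modules by hand
______________________________________________________________________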
You also have to set the root device of your kernel. For this you use
the rdev program. Execute the following commands:
mknod /dev/boot255 c 0 255
rdev /path/to/kernel/file /dev/boot255
4.2. Creating the boot disk
Now, we have to use the syslinux programm. Insert a disk into the
first floppy drive and run:
syslinux -s /dev/fd0
Mount the floppy and notice that syslinux has written the boot loader
executable, ldlinux.sys, to it. Next to it you need a file called
syslinux.cfg, the program's configuration file. A
typical structure for that file is the following:
default linux
append init=/sbin/init root=/dev/nfs
ip=:195.251.160.10:195.251.160.254:255.255.255.0:::'bootp'
nfsroot=195.251.160.10:/usr/local/linux/ws/\%s vga=0x318
prompt 1
timeout 30
readinfo 2
The default statement is the kernel name to be booted and the append
line is the command line to be passed to the kernel. Now, you have to
copy the kernel you have created to the floppy and rename it to 'linux'.
4.3. The kernel command line
To boot a diskless client, its kernel must have the following command
line options:
· init=/sbin/init: If your init programm is elsewhere just change the
path.
· root=/dev/nfs: An alias telling the kernel that it has to mount its
root directory over NFS.
· ip: This command line option tells the kernel how to get its IP
address and which is the NFS server's address.
· nfsroot: Tells the kernel to mount this directory as its root. The
%s is an alias for the host's IP address.
· vga: If you want to be able to start X windows in framebuffer mode,
switch to a framebuffer mode. The one given stands for 1024x768@16M
colors.
All these options are discussed in detail in
/usr/src/linux/Documentation/nfsroot.txt. Read it and adjust the
given command line to your needs.
Now that you have created the boot disk, you are ready to test the
system you have built. Start the NFS and BOOTP services and boot a
client with the boot disk. No one has ever got it right the first time,
so go on to the next section!
5. The magic time
In this section, the problems you have and the changes you propose
to the installation will be discussed. Please feel free to
email me and ask about any difficult or unmentioned points in this
document. My email is cs98011@icsd.aegean.gr
Q: A DHCP server is already running. How do I configure BOOTP so that
there is no interaction with the DHCP server?
A: This was the main problem I faced when I installed the system on a
running network. DHCP and BOOTP use the same port. When a Windows
client boots, it issues a DHCP/BOOTP request to locate its IP (of
course in the case of dynamic IP). When the DHCP server responds, it
also returns the IPs of DNS servers, print servers and Domain
Controllers. My BOOTP server was responding faster than the Microsoft
DHCP server, and so Windows clients were unable to locate their Domain
Controller. This resulted in users not being able to log in! The
solution described here was donated by D. Spinellis.
Open the file /usr/src/linux/net/ipv4/ipconfig.c. This is where all
BOOTP autoconfiguration is done. Search for the udph.source and
udph.dest variables. You will see that they are set to the standard
67/68 request/response ports. Change BOTH values so they use an unused
UDP port pair on your network. A good port pair that no application
uses is 967/968. Now, start your DHCPd with the -p 967 option.
Everything should be working OK!
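On the server side, the corresponding dhcpd invocation might then look
like this, assuming the ISC dhcpd serves the diskless clients on eth0:
/usr/sbin/dhcpd -p 967 eth0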
6. Other Stuff
6.1. Contributors
· Diomidis Spinellis: Structure and typographical corrections, the
DHCP/BOOTP conflict resolution.
6.2. Copyrights
This document is GNU copylefted by Georgios Gousios
<mailto:cs98011@icsd.aegean.gr>.
It is covered by the GNU documentation licence.
Permission to use, copy, distribute this document for any purpose is
hereby granted, provided that the author's / editor's name and this
notice appear in all copies and/or supporting documents; and that an
unmodified version of this document is made freely available. This
document is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY, either expressed or implied. While every effort
has been taken to ensure the accuracy of the information documented
herein, the author / editor / maintainer assumes NO RESPONSIBILITY for
any errors, or for any damages, direct or consequential, as a result
of the use of the information documented herein
6.3. Contacting the author
The author may be contacted via e-mail. For any change, question,
error that must be corrected please feel free to contact me. For every
contribution you make for this document, your name will be mentioned
in the contributors section.
6.4. Changelog
· v0.8, Thu May 24 17:37:13 2001 : First draft written.
· v1.0, Fri May 25 01:36:25 2001 : The first version is complete (in
HTML).
· v1.05, Thu Jul 19 19:09:58 2001: Structure and typo corrections.
Also, transferred to LaTeX.
· v1.1, Wed Sep 12 18:23:29 2001: Transferred to LinuxDoc SGML,
donated to the LDP.
7. Appendix
7.1. Appendix A - A script for creating host directories
#!/usr/bin/bash
#This is a script for creating host directories using the
#directory scheme illustrated before in this document.
#It is written on Solaris and I did not test it on Linux.
#Execute it at the ws directory.
#Needs as input a file containing space separated IP
#addresses named addr, for example bash# ./script addr
#This file must be like this: 195.251.160.10 195.251.160.11 195.251.160.13 ....
echo "Creating the tar archive"; echo
cd base
tar cpf linux.tar ./bin ./dev ./etc ./lib ./sbin ./var
mv linux.tar /usr/local/linux/ws/linux.tar
cd ..
echo "Creating host directories"; echo
for addr in $(cat addr)
do
echo "Working on host $addr"
mkdir $addr
cd $addr
echo " ---Creating nessesary directores"
mkdir boot
mkdir cdrom
mkdir floppy
mkdir home
mkdir mnt
mkdir opt
mkdir proc
mkdir root
mkdir tmp
mkdir usr
echo " ---Extracting tar archive"
ln -s ../linux.tar ./linux.tar
tar xf linux.tar
rm linux.tar
echo " ---Removing unnessesary files"
rm -R ./lib/modules/*
rm -R ./var/yp
rm -R ./var/X11R6/sax
rm -R ./var/tmp
rm -R ./var/state/dhcp
rm -R ./var/squid
rm -R ./var/run/*
rm -R ./var/opt
rm -R ./var/named
rm -R ./var/mysql
rm -R ./var/lib/amanda
rm -R ./var/lib/codadmin
rm -R ./var/lib/firewall
rm -R ./var/lib/apsfilter
rm -R ./var/lib/gdm
rm -R ./var/lib/misc
rm -R ./var/lib/nobody
rm -R ./var/lib/pcmcia
rm -R ./var/lib/pgsql
rm -R ./var/lib/rpm/*
rm -R ./var/lib/setup
rm -R ./var/lib/wvdial
rm -R ./var/lib/wwwrun
rm -R ./var/lib/xdm
rm -R ./var/lib/xkb
rm -R ./var/lib/YaST/*
rm -R ./var/lib/zope
rm -R ./var/log/*
rm -R ./var/cache/*
rm -R ./var/games
rm -R ./var/adm/*
echo " ---Deciding the hostname"
nslookup $addr |sed -n "s/^Name: *//p" >etc/HOSTNAME
cd ..
i=$(($i+1))
echo
done
echo "Removing the tar archive"
rm linux.tar
echo
exit 0
7.2. Appendix B - A script to create the dhcpd.conf file using arpwatch
#!/bin/bash
#A script that starts arpwatch, pings a range of addresses and creates an
#/etc/dhcpd.conf file from the output of arpwatch.
#The arp.dat2dhcpd.conf programm is described later.
#Do not forget to edit the i variable and the while statement to specify
#the range of the addresses you want to ping
i=128;
echo "Starting arpwatch";echo
arpwatch
while [ "$i" -lt 253 ]
do
addr=195.251.160.$i
echo "Now pinging $addr"
ping -c 5 $addr >/dev/null
i=$(($i+1))
done
echo
killproc arpwatch
echo "Creating /etc/dhcpd.conf"
cat /var/lib/arpwatch/arp.dat |arp.dat2dhcpd.conf >/etc/dhcpd.conf
7.2.1. The arp.dat2dhcpd.conf script
#!/usr/bin/perl -n
($ether, $ip,$stup1,$name) = split;
if ($name eq "") {
print "
host host$i {
hardware ethernet $ether;
fixed-address $ip;
}
";
$i++;}
else{
print "
host $name {
hardware ethernet $ether;
fixed-address $ip;
}
"}
7.3. Appendix C - A sample XF86Config file
#This file should let X 4.0.1 work in 1024x768@16M colors
#with the fbdev driver using the linux's framebuffer
Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/local"
FontPath "/usr/X11R6/lib/X11/fonts/misc:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/100dpi:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/Type1"
FontPath "/usr/X11R6/lib/X11/fonts/URW"
FontPath "/usr/X11R6/lib/X11/fonts/Speedo"
FontPath "/usr/X11R6/lib/X11/fonts/misc"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi"
FontPath "/usr/X11R6/lib/X11/fonts/100dpi"
FontPath "/usr/X11R6/lib/X11/fonts/PEX"
FontPath "/usr/X11R6/lib/X11/fonts/cyrillic"
FontPath "/usr/X11R6/lib/X11/fonts/latin2/misc"
FontPath "/usr/X11R6/lib/X11/fonts/latin2/75dpi"
FontPath "/usr/X11R6/lib/X11/fonts/latin2/100dpi"
FontPath "/usr/X11R6/lib/X11/fonts/latin7/75dpi"
FontPath "/usr/X11R6/lib/X11/fonts/kwintv"
FontPath "/usr/X11R6/lib/X11/fonts/truetype"
FontPath "/usr/X11R6/lib/X11/fonts/uni"
FontPath "/usr/X11R6/lib/X11/fonts/ucs/misc"
FontPath "/usr/X11R6/lib/X11/fonts/ucs/75dpi"
FontPath "/usr/X11R6/lib/X11/fonts/ucs/100dpi"
FontPath "/usr/X11R6/lib/X11/fonts/xtest"
EndSection
Section "ServerFlags"
AllowMouseOpenFail
EndSection
Section "Module"
EndSection
# This section is no longer supported
# See a template below
# Section "XInput"
# EndSection
Section "Keyboard"
Protocol "Standard"
XkbRules "xfree86"
XkbModel "microsoft"
XkbLayout "us"
EndSection
Section "Pointer"
Protocol "PS/2"
Device "/dev/psaux"
SampleRate 60
BaudRate 1200
Buttons 5
EndSection
Section "Monitor"
Identifier "Primary-Monitor"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 29-64
VertRefresh 47-90
Modeline "1400x1050" 59.93 1400 1416 1704 1816 1050 1050 1055 1097
Modeline "1280x960" 59.90 1280 1296 1552 1664 960 960 965 1003
Modeline "1600x1000" 59.90 1600 1616 1968 2080 1000 1000 1004 1044
Modeline "1024x864" 59.89 1024 1040 1216 1328 864 864 870 902
Modeline "800x600" 58.55 800 816 928 1040 600 600 608 626
Modeline "1152x864" 59.99 1152 1168 1384 1496 864 864 870 902
Modeline "1280x1024" 59.90 1280 1296 1552 1664 1024 1024 1029 1070
Modeline "640x480" 37.44 640 656 720 832 480 480 486 501
Modeline "1024x768" 59.89 1024 1040 1216 1328 768 768 774 802
Modeline "1600x1200" 59.90 1600 1616 1968 2080 1200 1200 1204 1253
EndSection
Section "Device"
Identifier "Primary-Card"
VendorName "---AUTO DETECTED---"
BoardName "---AUTO DETECTED---"
EndSection
Section "Screen"
Driver "fbdev"
Device "Primary-Card"
Monitor "Primary-Monitor"
DefaultColorDepth 16
SubSection "Display"
Depth 32
Modes "default"
EndSubSection
SubSection "Display"
Depth 24
Modes "default"
EndSubSection
SubSection "Display"
Depth 16
Modes "default"
Virtual 1024 768
EndSubSection
SubSection "Display"
Depth 8
Modes "default"
EndSubSection
EndSection
Section "Screen"
Driver "fbdev"
Device "Primary-Card"
Monitor "Primary-Monitor"
DefaultColorDepth 16
SubSection "Display"
Depth 32
Modes "default"
EndSubSection
SubSection "Display"
Depth 24
Modes "default"
EndSubSection
SubSection "Display"
Depth 16
Modes "default"
Virtual 1024 768
EndSubSection
SubSection "Display"
Depth 8
Modes "default"
EndSubSection
EndSection
Network Boot and Exotic Root HOWTO
Brieuc Jeunhomme
frtest
          bbp@via.ecp.fr
        
Logilab S.A.
Revision History
Revision 0.3 2002-04-28 Revised by: bej
Many feedback inclusions, added links to several projects
Revision 0.2.2 2001-12-08 Revised by: dcm
Licensed GFDL
Revision 0.2.1 2001-05-21 Revised by: logilab
Fixed bibliography and artheader
Revision 0.2 2001-05-19 Revised by: bej
Many improvements and included Ken Yap's feedback.
Revision 0.1.1 2001-04-09 Revised by: logilab
First public draft.
Revision 0.1 2000-12-09 Revised by: bej
Initial draft.
This document explains how to quickly set up a Linux server to provide what
diskless Linux clients require to get up and running, using an IP network. It
includes data and partly rewritten text from the Diskless-HOWTO, the
Diskless-root-NFS-HOWTO, the Linux kernel documentation, the etherboot
project's documentation, the Linux Terminal Server Project's homepage, and
the author's personal experience, acquired when working for Logilab.
Eventually this document may end up deprecating the Diskless-HOWTO and
Diskless-root-NFS-HOWTO. Please note that you'll also find useful information
in the From-PowerUp-to-bash-prompt-HOWTO and the Thin-Client-HOWTO, and
Claus-Justus Heine's page about NFS swapping.
-----------------------------------------------------------------------------
Table of Contents
1. Introduction
1.1. What is this all about?
1.2. Thanks
1.3. Diskless booting advocacy
1.4. Requirements
1.5. Acknowledgements and related documentation
1.6. Feedback
1.7. Copyright Information
2. Diskless booting operation overview
2.1. Obtaining IP parameters
2.2. Loading the kernel
2.3. Mounting the root filesystem
2.4. Terminating the boot process
3. Building the kernel
3.1. When the root filesystem is on a ramdisk
4. Daemons setup
4.1. NFS daemon
4.2. BOOTP daemon
4.3. TFTP
5. Clients setup, creation of the root filesystem
5.1. Creating the first files and directories
5.2. The /var and /etc directories
5.3. Last details
5.4. Trial...
5.5. And Error!
6. Several ways of obtaining the kernel
6.1. BOOTP or DHCP capable NICs
6.2. Kernel on a local floppy or hard drive
6.3. Bootloader without kernel on a local floppy or hard drive
6.4. Creating ROMs for the clients
6.5. Local CDROM
7. How to create diskless MS-Windows stations?
8. Troubleshooting, tips, tricks, and useful links
8.1. Transparently handling workstations'specific files
8.2. Reducing diskless workstations'memory usage
8.3. Swapping over NFS
8.4. Swapping over network block devices
8.5. Getting rid of error messages about /etc/mtab or unmounted
directories on shutdown
8.6. Installing new packages on workstations
A. Non-Volatile Memory chips
B. Determining the size and speed of EPROMs to plug in a NIC
C. Companies selling diskless computers
References
1. Introduction
1.1. What is this all about?
Recent Linux kernels offer the possibility to boot a Linux box entirely from
the network, by loading its kernel and root filesystem from a server. In that
case, the client may use several ways to get the first instructions it has to
execute when booting: home-made EPROMs, special network cards implementing
the RARP, BOOTP or DHCP protocols, cdroms, or bootloaders loaded from a boot
floppy or a local hard drive.
-----------------------------------------------------------------------------
1.2. Thanks
Logilab sponsored this HOWTO. Check their [http://www.logilab.org] website
for new versions of this document. I also thank the etherboot, netboot, plume
and Linux Terminal Server Project developers and webmasters, who made it
really possible to boot a Linux workstation over a network.
Very special thanks go to Ken Yap, member of the etherboot project, whose
comments greatly helped to improve the quality of this document.
I also thank Jerome Warnier, main developer of the plume project, Pierre
Mondié, Kyle Bateman, Peter T. Breuer, Charles Howes, and Thomas Marteau for
their comments and contributions.
-----------------------------------------------------------------------------
1.3. Diskless booting advocacy
1.3.1. Buying is cheaper than building
Sometimes, buying a diskless Linux computer will be cheaper than building one!
Check out the list of commercial sites given in the appendix, which sell
diskless Linux network cards and diskless computers. These companies mass
produce diskless Linux computers, selling millions of units and thereby
reducing the cost per unit.
-----------------------------------------------------------------------------
1.3.2. Advantages of diskless computers
Diskless computers will become more and more popular in the next years. They
will be very successful because of the availability of very high-speed
network cards at very low prices. Today 100 Megabit per second (12.5 MB per
sec transfer rate) network cards are common and in about 1 to 2 years
1000 MBit (125 MB per sec transfer rate) network cards will become very cheap
and will be the standard.
In the near future, monitor manufacturers will place the CPU, NIC and RAM
right inside the monitor to form a diskless computer. This eliminates the
diskless computer box and saves space. The monitor will have outlets for
mouse, keyboard, network RJ45 and power supply.
The following are benefits of using diskless computers:
  * Total cost of ownership is very low in case of diskless computers. Total
cost of ownership is cost of initial purchasing + cost of maintenance.
The cost of maintenance is usually 3 to 5 times the cost of initial
computer purchase and this cost is recurring year after year. In case of
diskless computers, the cost of maintenance is completely eliminated.
  * All the backups are centralized at one single main server.
  * No need of UPS battery, air-conditioning, dust proof environment for
diskless clients, only server needs UPS battery, A/C and dust proof
environment.
  * Better protection from virus attacks - Some computer viruses cannot attack
diskless computers as they do not have any hard disk. This kind of virus
cannot do any damage to diskless computers. Only one single server box
needs to be protected against virus attack. This saves millions of
dollars for the company by avoiding installation of vaccines and cleaning
the hard disks.
  * Servers can have large powerful/high performance hard disks, can optimize
the usage of disk space via sharing by many diskless computer users.
Fault tolerance of hard disk failure is possible by using RAID on main
server.
  * On some installations: sharing of central server RAM memory by many
diskless computer users. For example, if many users are running a web
browser remotely on a server, then there will be only one copy of this
web browser in its RAM.
  * Very few system administrators are required to maintain the central
server.
  * Zero administration at the diskless client side. Diskless computers are
practically maintenance-free and trouble-free.
  * Long life of diskless clients.
  * Eliminates install/upgrade of hardware, software on diskless client side.
  * Eliminates cost of cdrom, floppy, tape drive, modem, UPS battery, printer
parallel ports, serial ports etc...
  * Can operate in places like factory floor where a hard disk might be too
fragile.
-----------------------------------------------------------------------------
1.4. Requirements
1.4.1. Hardware requirements
The most important requirement for booting from the network is equipment that
enables the stations to execute a bootloader, which will fetch the kernel from
the server and launch it. Another solution is to use a device which will load
a local kernel that then mounts its root filesystem on the server. There are
several options: home-made EPROMs containing the first instructions to execute
when booting the station, network adapters with built-in BOOTP/DHCP support,
or a local floppy, a tiny hard drive, or a cdrom to load the kernel. Note that
some vendors also sell stations capable of network booting: for instance, some
Sun stations implement the BOOTP protocol.
Other hardware requirements depend on the configuration you plan to use: on
some sites, every application run by the stations is executed remotely on the
server. This implies that a very high-performance server is required, but only
light stations are needed: depending on what they will have to do, 80486 CPUs
with 16 MB of RAM may be enough. On the other hand, if application programs
are really executed locally on the stations, the requirements for the stations
depend completely on these applications, and only a small server is required:
an 80486 CPU with 32 MB of RAM will be sufficient for a small number of
stations, but more memory will be necessary in very large installations with
hundreds or thousands of machines. Note that the server's CPU does not really
matter for such an installation.
-----------------------------------------------------------------------------
1.4.2. Software requirements
Linux kernel version 2.0 or above sources are required. All tools required to
build a linux kernel are also necessary (see the linux kernel documentation
for more information on this).
A BOOTP daemon (a DHCP daemon may also do fine, but I won't explain how to
configure this) and an NFS daemon (if you want to mount the root filesystem on
a remote server) are also required. We will also need a TFTP daemon if you
plan to load the kernel remotely. Finally, you need the mknbi utility provided
with the [http://etherboot.sourceforge.net] etherboot distribution, and, if
you use LanWorks EPROMs, like those included in the 3c905 3com ethernet
adapter, you will also need the imggen utility, available at [http://
www.ltsp.org/contrib/] http://www.ltsp.org/contrib/.
-----------------------------------------------------------------------------
1.5. Acknowledgements and related documentation
This documentation has been written for experienced system administrators who
are already familiar with Linux fundamentals, such as the use of grep, sed,
and awk, basic shell programming, the init process and the boot scripts,
kernel compilation, and NFS server configuration. Experience with passing
kernel arguments will also help. Information on these subjects can be found
respectively in the grep, sed, awk, and bash man/info pages, in the
Bootdisk-HOWTO, the From-PowerUp-To-Bash-Prompt-HOWTO, the Kernel-HOWTO, the
BootPrompt-HOWTO, the bootparam man page, the rdev man page, the NFS-HOWTO,
and the exports manual page.
There are many sources of information on network booting but, and this is why
I wrote this HOWTO, none describes all the existing ways of booting over a
network, and many of them are specific to one way of operating. The most
useful to me has been the documentation provided by the [http://www.ltsp.org]
linux terminal server project, although I did not use the packages they
recommend; I have chosen to describe here how to proceed without these
packages, because they configure things so that every application program is
executed remotely on a server. Useful information can also be found on the
[http://etherboot.sourceforge.net] etherboot project's homepage.
Finally, you can also find useful but succinct information in the kernel's
source tree, in /usr/src/linux/Documentation, assuming your kernel source
tree resides in /usr/src/linux.
-----------------------------------------------------------------------------
1.6. Feedback
I will highly appreciate any feedback about this document. Please feel free
to mail me at <bbp@via.ecp.fr> if you have any comment, correction, or
suggestion. You may also use <contact@logilab.fr>.
-----------------------------------------------------------------------------
1.7. Copyright Information
This document is copyrighted (c) 2001 and is distributed under the terms of
the GNU Free Documentation License. You should have received a copy along
with it. If not, it is available from [http://www.fsf.org/licenses/fdl.html]
http://www.fsf.org/licenses/fdl.html.
-----------------------------------------------------------------------------
2. Diskless booting operation overview
Hey, you think it's time to start with the real stuff, right? Here we go.
-----------------------------------------------------------------------------
2.1. Obtaining IP parameters
One could wonder how a station may boot over an IP network if it doesn't even
know its own IP address. In fact, three protocols enable the client to obtain
this information and some additional configuration parameters:
  * RARP: this is the simplest of these protocols. However, I guess it does
not enable the server to specify how the client should download the kernel, so
we won't use it. (In fact, there is a convention that uses the IP address of
the workstation as the filename, e.g. a client getting the address
192.168.42.12 by RARP might ask for /tftpboot/192.168.42.12 by TFTP, as the
linux kernel does. The filename might also be the hexadecimal form of the IP
address; this is implementation dependent and not mandatory. A small example
of computing the hexadecimal form is given after this list.)
  * BOOTP: this protocol allows a server to provide the client (identified by
its hardware MAC address) with much information, in particular its IP
address, subnet mask, broadcast address, network address, gateway
address, host name, and kernel loading path. This is the one we will use.
  * DHCP: this is an extension of BOOTP.
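As an example of the hexadecimal filename convention mentioned in the RARP
item above, the hexadecimal form of 192.168.42.12 can be computed with a shell
one-liner like the following (just an illustration; the exact filename a given
implementation asks for may differ):
# printf "%02X%02X%02X%02X\n" 192 168 42 12
C0A82A0C
A TFTP client following this convention would then ask for /tftpboot/C0A82A0C.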
-----------------------------------------------------------------------------
2.2. Loading the kernel
When the client has got its IP parameters, if the kernel is not on a local
medium (like a floppy, a cdrom, or a hard drive), the client will start to
download it via TFTP. Its location is given by the BOOTP/DHCP server. A server
(not necessarily the BOOTP/DHCP server) will also have to run a TFTP daemon
for non-local kernels. The kernel one obtains after compilation cannot be used
"as is" for BOOTP/DHCP operation; its binary image has to be modified with the
mknbi utility (and then modified again with the imggen utility if you use
LanWorks EPROMs). The mknbi utility should also be used to modify kernels that
will be written to a ROM.
-----------------------------------------------------------------------------
2.3. Mounting the root filesystem
After the kernel has started, it will try to mount its root filesystem. The
location of this filesystem is also obtained through BOOTP/DHCP, and it is
mounted via NFS. It means a client may use BOOTP twice for booting: the first
time to get its kernel, and the second time to learn the location of the root
filesystem (which may be on a third server).
Another solution is to use a ramdisk as root filesystem. In this case, the
ramdisk image is obtained with the kernel via TFTP.
-----------------------------------------------------------------------------
2.4. Terminating the boot process
When the root filesystem is mounted, you can start breathing: you can at
least use your swiss army knife with its sh, sed, and awk blades. In fact,
you will have to customize the initialization scripts of the client's
filesystem: for instance, you will have to remove all hard drive, floppy, or
cdrom related stuff from /etc/fstab (when your stations are not equipped with
these devices); you may also have to inhibit swap partition activation (note
there is a way to swap over NFS or network block devices). You will also have
to automagically generate all network configuration files at boot time if
several clients use the same remote root filesystem.
-----------------------------------------------------------------------------
3. Building the kernel
First of all, build a kernel for the clients. I suggest you build it on the
server, this will be useful later for modules installation. Use a zImage to
reduce its size. Include everything you need, but try to use as many modules
as possible, because many BOOTP client implementations are unable to load
very large kernels (at least on intel x86 architectures). Also include
ramdisk support, NFS protocol support, root filesystem on NFS support,
support for your NIC, kernel level IP autoconfiguration via BOOTP; do not use
modules for these! Then, if you plan to use the same remote root filesystem
for several clients, add support for ext2fs or some other filesystem and
ramdisks (16 megabyte ramdisks will do fine on most systems). You can then
modify the kernel arguments as usual (see the BootPrompt-HOWTO for
information on this topic), but you will have another opportunity to modify
kernel arguments later.
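For reference, on 2.2/2.4 series kernels the configuration symbols involved
look roughly like the following excerpt of a .config file (a sketch only; the
exact option names may vary slightly between kernel versions):
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_BOOTP=y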
Then, if you plan to use BOOTP, copy the kernel zImage on the server. We will
assume it resides in /tftpboot, its name is zImage, the name of the image you
want to create from this zImage for BOOTP operation is kernel, and the nfs
root filesystem will reside in /nfsroot.
Issue the following commands on the server (the mknbi package should be
installed):
# cd /tftpboot
# chmod 0555 zImage
# chown root:root zImage
# mknbi-linux zImage --output=kernel --rootdir=/nfsroot
If you are using LanWorks EPROMs, also issue the following commands (you need
the imggen utility):
# mv -f kernel tmpkernel
# imggen -a tmpkernel kernel
# rm -f tmpkernel
Your kernel is ready for BOOTP/DHCP/ROM operation. You of course don't need
to do this if you plan to use a local drive.
-----------------------------------------------------------------------------
3.1. When the root filesystem is on a ramdisk
It is possible to use a ramdisk for the root filesystem. In this case, the
command used to modify the kernel's binary image is slightly different. If
you choose to do so, you have to enable support for initial ramdisk (initrd),
and you probably don't need NFS support, or you probably can compile it as a
module.
It's time to give an overview of what happens when you use initrd. The full
documentation for this is in your kernel source tree, in the Documentation/
initrd.txt file. I have to warn you that I never tried this myself :).
When initrd is enabled, the boot loader first loads the kernel and the initial
ramdisk into memory. Then, the ramdisk is mounted read-write as the root
filesystem. The kernel looks for a /linuxrc file (a binary executable or a
script beginning with #!). When /linuxrc terminates, the traditional root
filesystem is mounted as /, and the usual boot sequence is performed. So, if
you want to run your box entirely from ramdisk, you just have to create a link
from /linuxrc to /sbin/init, or to write there a shell script performing any
action you like, and then shut down the computer.
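For illustration, a minimal /linuxrc for running entirely from the ramdisk
could look like the following sketch (purely hypothetical; adapt it to your
needs):
#!/bin/sh
# example /linuxrc: do some early setup, then hand over to init,
# which keeps running from the ramdisk
mount -t proc proc /proc
# ... any early setup you need goes here ...
umount /proc
exec /sbin/init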
After the kernel has been compiled, you have to build a root filesystem for
your installation. This is explained in the "Clients setup, creation of the
root filesystem" section. I will assume here that this is already done and
that the root filesystem for your clients temporarily resides in /tmp/rootfs.
You now have to create a ramdisk image. A simple way to do so is the
following:
  * Make sure the computer you are working on has support for ramdisks and
has such a device (/dev/ram0).
  * Create an empty filesystem with the appropriate size on this ramdisk:
# mke2fs -m0 /dev/ram0 300
  * Mount it somewhere:
# mount -t ext2 /dev/ram0 /mnt
  * Copy what you need for your new root filesystem, and create your future /
linuxrc if you did not create it in /tmp/rootfs/linuxrc:
# cp -a /tmp/rootfs/* /mnt
  * Unmount the ramdisk:
# umount /mnt
  * Save the ramdisk image to some file and free it:
# dd if=/dev/ram0 of=initrd bs=1024 count=300
# freeramdisk /dev/ram0
What was said above about LanWorks PROMs is also true if you use initrd.
Then, you have to modify the kernel image, as described above, with the
mknbi-linux utility. Its invocation will differ slightly from the above,
though (I will assume your freshly compiled zImage resides in /tftpboot/zImage
and your initial ramdisk image resides in /tmp/initrd):
# cd /tftpboot
# chmod 0555 zImage
# chown root:root zImage
# rdev zImage /dev/ram0
# mknbi-linux zImage --output=kernel --rootdir=/dev/ram0 /tmp/initrd
-----------------------------------------------------------------------------
4. Daemons setup
4.1. NFS daemon
Just export the directory in which the client's root filesystem will reside
(see the exports manpage for more information about this topic). The simplest
is to export it no_root_squash and rw, but a perfect setup would export most
of the root filesystem root_squash and ro, and have separate lines in the /
etc/exports for directories which really require no_root_squash and/or rw.
Just start with everything rw and no_root_squash, the fine tuning will be
done later.
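As a starting point, the relevant part of /etc/exports could look like this (a
sketch assuming the root filesystem lives in /nfsroot and the clients are on
the 192.168.0.0/255.255.0.0 network; adapt paths and addresses to your site):
/nfsroot        192.168.0.0/255.255.0.0(rw,no_root_squash)
/home           192.168.0.0/255.255.0.0(rw)
/var/spool/mail 192.168.0.0/255.255.0.0(rw)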
Of course, you don't need any NFS server at all if you plan to run your
clients entirely from ramdisk.
-----------------------------------------------------------------------------
4.2. BOOTP daemon
I assume you have installed the bootpd package. The default configuration
file is /etc/bootptab, and its syntax is detailed in the bootptab manpage.
Let's create it.
First, open your favourite text editor as root. It is vim. Yes, it is. If it
is not, it has to become. Now, enter the following lines (they are the default
attributes; all the attributes you give here and do not override in a
machine's specific attribute list will be given to every client):
.default\
:sm=your subnet mask\
:ds=the IP address of your DNS server\
:ht=ethernet\
:dn=your domain name\
:gw=the IP address of your gateway\
:sa=the IP address of the TFTP server\
:bf=path to find the kernel image\
:rp=path of the root filesystem\
:hn
Of course, not all these parameters are required, this depends on your
network configuration and BOOTP implementations, but these will work in most
cases.
Then, add an entry per client in your network. An entry should look like
this:
dns of the client\
:ha=MAC address of the client\
:ip=IP address of the client
The MAC address above is the hexadecimal hardware address of the client
without the ':' characters.
Here is a sample /etc/bootptab file:
.default\
:sm=255.255.0.0\
:ds=192.168.0.2\
:ht=ethernet\
:dn=frtest.org\
:gw=192.168.0.1\
:sa=192.168.0.2\
:bf=/tftpboot/kernel\
:rp=/nfsroot\
:hn
foo\
:ha=001122334455\
:ip=192.168.2.12
bar\
:ha=00FFEEDDCCBB\
:ip=192.168.12.42\
:ds=192.168.2.42
Then, run the bootpd daemon with the bootpd -s command (it is also a good
idea to add it to your startup scripts), or add the following line to your /
etc/inetd.conf:
bootps dgram udp wait root /usr/sbin/tcpd bootpd -i -t 120
If you want to test the BOOTP server, add an entry to your /etc/bootptab and
use the bootptest program.
-----------------------------------------------------------------------------
4.3. TFTP
Setting up the TFTP daemon is not the hard part: just install the tftpd
package if you have one, and add the following line to your /etc/inetd.conf
(again, I assume /tftpboot is the directory where the kernel image resides):
tftp dgram udp wait root /usr/sbin/tcpd in.tftpd /tftpboot
Don't forget to chmod 555 the /tftpboot directory, as most TFTP servers won't
send the files if they are not world readable.
You should be aware of the limitations implied by running the TFTP daemon
from inetd. Most inetds will shut down a service if it is spawned too
frequently. So if you have many clients, you should look for another inetd
like xinetd, or run a standalone TFTP daemon (see the sketch below for a
possible xinetd configuration).
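For instance, an xinetd service entry for TFTP might look roughly like this
(assuming in.tftpd lives in /usr/sbin and the kernel images are under
/tftpboot; check your xinetd and tftpd documentation for the exact options):
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = /tftpboot
        disable         = no
}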
Now that you have properly set up all the daemons, you can restart inetd and
have a coffee. Don't forget to tell everyone the server setup is over, so you
can feel like a hero before you start building the root filesystem for the
clients.
-----------------------------------------------------------------------------
5. Clients setup, creation of the root filesystem
Tired? No you're not. Remember you're a hero. Here comes the tricky part. We
will (err... you will) build the client's root filesystem. This shouldn't be
very hard, but you probably will have to use trial and error.
The simplest way to create a root filesystem is to use an already working
filesystem and customize it for the needs of diskless operation. Of course,
you can also build one by hand (like in the good old times) if you like :=),
but I won't explain this here.
-----------------------------------------------------------------------------
5.1. Creating the first files and directories
First, cd to your future station's root directory. You can safely create the
future /home directory with the mkdir command, or by copying it from anywhere
you want (you can use cp -a to do a recursive copy preserving owners, groups,
symlinks, and permissions). Same thing for the future /mnt, /root, /tmp
(don't forget to chmod 0 it, this is only a mount point for the actual /tmp
we will use, because each workstation needs to have its own /tmp). Then, copy
some existing /bin, /sbin, /boot, and /usr into this future root directory
(use cp -a). You can create the /proc directory with mkdir, and chmod 0 it.
Note some applications need write access to their user's home directory.
The /lib directory can be safely copied from somewhere else, but you will
have to put the proper modules in it. To do so, use the following commands
(assuming you have compiled the kernel for your clients on the server in /usr
/src/linux, and the root filesystem will reside in /nfsroot):
# cd /usr/src/linux
# make modules_install INSTALL_MOD_PATH=/nfsroot
Don't forget to put the System.map file in /nfsroot/boot. A first problem we
will have to fix is that, depending on your configuration, your system may
try to run fsck on the root filesystem at boot time. It shouldn't if there is
no hard drive in the box. Most distributions will also skip this fsck if they
find a fastboot file in the root directory. So, issue the following commands
if you do not plan to mount any hard drive:
# cd /nfsroot
# touch fastboot
# chmod 0 fastboot
Another method is to tell fsck that checking an NFS filesystem always succeeds:
# cd /nfsroot/sbin
# ln -s ../bin/true fsck.nfs
The /dev directory can also be safely copied from another place into /
nfsroot. But permissions and symlinks have to be preserved, so use cp -a.
Another solution is to use the kernel 2.2.x devfs feature, which will reduce
memory consumption and improve performance, but the drawback of this method is
that all symlinks created in /dev will be lost. The point to remember is that
each workstation needs to have its own /dev, so you will have to copy it to a
ramdisk if you plan to use several clients and do not use devfs.
-----------------------------------------------------------------------------
5.2. The /var and /etc directories
We will use ramdisks for these directories, because each client needs to have
its own one. But we still need them at the beginning to create their standard
structure. Note you are not required to do so if you use a single client. So
copy these directories (cp -a) from another place into /nfsroot. Then you can
make some cleanup in /var: you can remove everything in /nfsroot/var/log and
/nfsroot/var/run. You also probably can remove everything in /nfsroot/var/
spool/mail, if you plan to export it via NFS. You also will have to remove
the files containing host specific information in /nfsroot/etc to build them
on the fly during the boot process.
The startup scripts will have to be customized in order to mount some parts
of the filesystem: the /dev directory, if you don't use devfs, the /tmp, the
/var, and the /etc directories. Here is some code which will achieve this:
# this part only if you don't use devfs: put /dev on its own ramdisk
mke2fs -q -i 1024 /dev/ram0 16384
mount -n -t ext2 -o rw,suid,dev,exec,async,nocheck /dev/ram0 /dev
# this part for everyone: give each client its own /tmp, /etc, and /var
mke2fs -q -i 1024 /dev/ram1 16384
mount -n -t ext2 -o rw,suid,dev,exec,async,nocheck /dev/ram1 /tmp
chmod 1777 /tmp
# copy the NFS /etc aside, then mount a ramdisk over /etc and copy it back
cp -a /etc /tmp
mke2fs -q -i 1024 /dev/ram2 16384
mount -n -t ext2 -o rw,suid,dev,exec,async,nocheck /dev/ram2 /etc
find /tmp/etc -maxdepth 1 -exec cp -a '{}' /etc ';'
mount -f -t ext2 -o rw,suid,dev,exec,async,nocheck,remount /dev/ram2 /etc
mount -f -o remount /
# same game for /var
cp -a /var /tmp
mke2fs -q -i 1024 /dev/ram3 16384
mount -t ext2 -o rw,suid,dev,exec,async,nocheck /dev/ram3 /var
find /tmp/var -maxdepth 1 -exec cp -a '{}' /var ';'
If you plan to use more than a single client, you will also have to change
files dynamically at boot time in /etc: the files which contain the IP and
hostname of the client. These files depend on your distribution, but you will
easily find them with a few greps. Just remove client-specific information
from them, and add code into your startup files to generate this information
again at boot time but only once the new /etc has been mounted on the
ramdisk! A way to obtain your IP address and hostname at bootup is the
following (if you have the bootpc package installed on the
workstations' filesystem):
IPADDR="$(bootpc | awk '/IPADDR/ \
{
match($0,"[A-Za-z]+")
s=substr($0,RSTART+RLENGTH)
match(s,"[0-9.]+")
print substr(s,RSTART,RLENGTH)
}
')"
HOST="$(bootpc | awk '/HOSTNAME/ \
{
match($0,"[A-Za-z]+")
s=substr($0,RSTART+RLENGTH)
match(s,"[A-Za-z0-9-]+")
print substr(s,RSTART,RLENGTH)
}')"
DOMAIN="$(bootpc | awk '/DOMAIN/ \
{
match($0,"[A-Za-z]+")
s=substr($0,RSTART+RLENGTH)
match(s,"[A-Za-z0-9-.]+")
print substr(s,RSTART,RLENGTH)
}')"
This is a complicated solution, but I guess it should work on most sites. The
IP address can alternatively be obtained from the output of ifconfig, and the
hostname from the output of the host command, but this is not portable,
because these outputs differ from system to system depending on the
distribution you are using and the locale settings.
Then, the hostname should be set with the hostname $HOST command. When this is
done, it is time to generate on the fly the configuration files which contain
the IP address or the hostname of the client.
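As an illustration, once $IPADDR, $HOST, and $DOMAIN are set, regenerating the
host-specific files could look something like the following sketch (which
files you actually have to rewrite depends entirely on your distribution):
hostname $HOST
echo "$HOST" > /etc/hostname
cat > /etc/hosts <<EOF
127.0.0.1       localhost
$IPADDR         $HOST.$DOMAIN $HOST
EOF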
-----------------------------------------------------------------------------
5.3. Last details
Now, it's time to do the fine tuning of the client. As /var will be mounted on
a ramdisk (unless you have a single client), you will have to send the logs to
a log server if you want to keep them. One way to do that is to delete the
/nfsroot/etc/syslog.conf file and replace it with the following file (see man
syslog.conf for details):
*.* /dev/tty12
*.* @dns or IP of the logs server
If you do so, the logs server will have to run syslogd with the -r option
(see the syslogd manual page).
If you use logrotate and you have done the preceding operation, you should
replace the logrotate configuration file (/etc/logrotate.conf on most boxes)
by an empty file:
# rm -f /etc/logrotate.conf
# touch /etc/logrotate.conf
If you don't use it, just remove the log rotation scripts from the crontab,
or, as you no longer have log files in /var/log, put an exit 0 at the
beginning of your log rotation scripts.
In the /nfsroot/etc/fstab file, remove anything related to the hard drive,
floppy drive, or cdrom if you don't have such devices on your workstations.
Add an entry for the /var/spool/mail directory, which should be exported by
the server through NFS or some other network filesystem. You probably also
want to put an entry for the /home directory in this file; an example is
sketched below.
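For instance, assuming the server is called nfsserver and exports /home and
/var/spool/mail (hypothetical names), the corresponding entries in
/nfsroot/etc/fstab could look like this:
nfsserver:/home           /home            nfs  rw,hard,intr  0 0
nfsserver:/var/spool/mail /var/spool/mail  nfs  rw,hard,intr  0 0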
You can also comment the lines running newaliases, activating swap, and
running depmod -a, and remove the /nfsroot/etc/mtab file. Comment out the
line(s) removing /fastboot, /fsckoptions, and /forcefsck in your startup
scripts.
Also remove or comment any line in the startup scripts that would try to
write on the root filesystem except for really necessary writes, which should
all be redirected to some ramdisk location if you use several clients.
-----------------------------------------------------------------------------
5.4. Trial...
Time has come for a small trial. MAKE A BACKUP OF YOUR NEWLY CREATED /
nfsroot. tar -cvvIf should do fine. Take a minute to verify we didn't forget
anything. Try to boot a client.
-----------------------------------------------------------------------------
5.5. And Error!
Look carefully at the client's screen during the boot process. Oh, I didn't
tell you to connect a screen... Run, Forrest! Run and get one. You will
probably see some error messages. Fix the problems, and make frequent backups
of your /nfsroot. One day, the client will boot properly. That day, you will
have to fix errors occurring during shutdown ;=P.
-----------------------------------------------------------------------------
6. Several ways of obtaining the kernel
We have spoken so far about the client and server's configuration for
operation after the BOOTP request has been issued by the client, but the
first problem is that most computers are not able to behave as BOOTP clients
by default. We will see in this section how to fix this.
-----------------------------------------------------------------------------
6.1. BOOTP or DHCP capable NICs
This is the simplest case: some network cards provide a supplement to the
BIOS containing a BOOTP or DHCP client, so just set them up for BOOTP or DHCP
operation in the BIOS, and you're done.
-----------------------------------------------------------------------------
6.2. Kernel on a local floppy or hard drive
These cases are also quite simple: the kernel is loaded from a local drive,
and all the kernel has to do is to obtain its network parameters from BOOTP,
and mount its root filesystem over NFS; this should not cause any problem. By
the way, a local hard drive is a good place to leave a /var, /tmp, and a /
dev...
If you have a local hard drive, all you have to do is to use lilo or your
favourite boot loader as usual. If you use a floppy, you can use a bootloader
or simply write the kernel to the floppy: a kernel is directly bootable. This
enables you to use a command like the following:
# dd if=zImage of=/dev/fd0 bs=8192
However, Alan Cox said in a linux-kernel thread that this feature of the
linux kernel will be removed sooner or later, so you will eventually have to
use a bootloader even on floppies. I know this still works with 2.4.11
kernels, but support seems to have been removed in the 2.4.13 version. See
the sixth chapter of the [http://www.tldp.org/HOWTO/Bootdisk-HOWTO/
index.html] boot-disk-HOWTO for this topic.
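If you use LILO on a local hard drive, a hedged sketch of the relevant
lilo.conf entry might look like this (the server address, paths, and the use
of BOOTP for kernel-level IP autoconfiguration are assumptions to adapt to
your site):
image=/boot/zImage
        label=diskless
        append="root=/dev/nfs nfsroot=192.168.0.2:/nfsroot ip=bootp"
        read-only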
-----------------------------------------------------------------------------
6.3. Bootloader without kernel on a local floppy or hard drive
Certain bootloaders are network-aware, so you can use them to download the
kernel image from the network. Some of them are listed below (a rough GRUB
example follows the list):
  * [http://netboot.sourceforge.net] netboot, a bootloader dedicated to
network boot.
  * [http://www.gnu.org/software/grub/] GRUB, the GNU project's GRand Unified
Bootloader, which is a very general purpose bootloader.
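For instance, with a GRUB legacy build that includes the network driver for
your card, an interactive boot session could look roughly like this (the (nd)
device syntax and the kernel arguments are assumptions; check the GRUB manual
for your version):
grub> bootp
grub> root (nd)
grub> kernel /tftpboot/zImage root=/dev/nfs nfsroot=192.168.0.2:/nfsroot ip=bootp
grub> boot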
-----------------------------------------------------------------------------
6.4. Creating ROMs for the clients
Many network cards include a slot in which one can insert an EPROM with
additional BIOS code. This enables one to add, for instance, BOOTP
capabilities to the BIOS. To do so, you will first have to find out how to
enable the EPROM socket. You may need a jumper or special software to do so.
Some cards like the 3Com 905B have slots for EEPROMs, which enable one to
change the software in the EEPROM in place. In the appendix, you'll find
information about EPROMs and the various types of memory chips.
For a list of EPROM burner manufacturers visit the Yahoo site and go to
[http://dir.yahoo.com/Business_and_Economy/Companies/Computers/Hardware/
Peripherals/Device_Programmers/] economy->company->Hardware->Peripherals->
Device programmers or check out the old Diskless-HOWTO List of EPROM burner
manufacturers section.
If you choose to create your own ROMs, you will have to load BOOTP or DHCP
capable software into the ROM, and you will then be in the situation of the
BOOTP or DHCP capable NICs described above.
You will also need to find the proper EPROM size and speed for your NIC. Some
methods to do so are provided in the appendix, because NIC manufacturers often
do not provide this information.
-----------------------------------------------------------------------------
6.4.1. LanWorks BootWare PROMs
This information may save you time. In order to make LanWorks BootWare(tm)
PROMs correctly start up a linux kernel image, the "bootsector" part of the
image must be modified so that the boot PROM can jump right to the image start
address. The net-bootable image format created by netboot/etherboot's
`mknbi-linux' tool differs and will not run if used with BootWare PROMs.
A modified bootsector together with a Makefile to create a BootWare-bootable
image after kernel compilation can be found at:
  * Bwimage package: [ftp://ftp.ipp.mpg.de/pub/ipp/wls/linux/bwimage-0.1.tgz]
ftp://ftp.ipp.mpg.de/pub/ipp/wls/linux/bwimage-0.1.tgz
  * See also [http://www.patoche.org/LTT/net/00000096.html] http://
www.patoche.org/LTT/net/00000096.html
  * LanWorks BootWare Boot ROMs: [http://www.3com.com/lanworks] http://
www.3com.com/lanworks
Refer to the README file for installation details. Currently, only
"zImage"-type kernels are supported. Unfortunately, kernel parameters are
ignored.
This section was initially written by Jochen Kmietsch for the Diskless-HOWTO,
email to: <jochen.kmietsch@tu-clausthal.de> for any questions.
-----------------------------------------------------------------------------
6.5. Local CDROM
This section was originally written by Hans de Goede <
j.w.r.degoede@et.tudelft.nl> for the Diskless-root-NFS-HOWTO. I modified it
slightly in order to reflect some differences between this document and the
Diskless-root-NFS-HOWTO.
Much of the above also goes for booting from cdrom. Why would one want to
boot a machine from cdrom? Booting from cdrom is interesting wherever one
wants to run a very specific application, like a kiosk, a library database
program, or an internet cafe, and one doesn't have a network or a server to
use a root-over-NFS setup.
-----------------------------------------------------------------------------
6.5.1. Creating a test setup
Now that we know what we want to do and how, it's time to create a test
setup:
  * For starters just take one of the machines which you want to use and put
in a big disk and a cd burner.
  * Install your linux of choice on this machine, and leave a 650 MB
partition free for the test setup. This install will be used to make the
iso image and to burn the cd's from, so install the necessary tools. It
will also be used to restore any booboo's which leave the test setup
unbootable.
  * On the 650 MB partition install your linux of choice with the setup you
want to have on the cd, this will be the test setup.
  * Boot the test setup.
  * Compile a kernel with isofs and cdrom support compiled in.
  * Configure the test setup as described above with the root filesystem
mounted read only.
  * Verify that the test setup automagically boots and everything works.
  * Boot the main install and mount the 650 MB partition on /test of the main
install.
  * Put the following in a file called /test/etc/rc.d/rc.iso; this file will
be sourced at the beginning of rc.sysinit to create /var:
#/var
echo Creating /var ...
mke2fs -q -i 1024 /dev/ram1 16384
mount /dev/ram1 /var -o defaults,rw
cp -a /lib/var /
  * Edit /test/etc/rc.sysinit, comment the lines where the root is remounted
rw, and add the following 2 lines directly after setting the PATH:
#to boot from cdrom
. /etc/rc.d/rc.iso
  * Copy the following to a script and execute it to make a template for /var
and create /tmp and /etc/mtab links.
#!/bin/sh
echo tmp
rm -fR /test/tmp
ln -s var/tmp /test/tmp
###
echo mtab
touch /test/proc/mounts
rm /test/etc/mtab
ln -s /proc/mounts /test/etc/mtab
###
echo var
mv /test/var/lib /test/lib/var-lib
mv /test/var /test/lib
mkdir /test/var
ln -s /lib/var-lib /test/lib/var/lib
rm -fR /test/lib/var/catman
rm -fR /test/lib/var/log/httpd
rm -f /test/lib/var/log/samba/*
for i in `find /test/lib/var/log -type f`; do
cat /dev/null > $i;
done
rm `find /test/lib/var/lock -type f`
rm `find /test/lib/var/run -type f`
  * Remove the creation of /etc/issue* from /test/etc/rc.local: it will only
fail.
  * Now boot the test partition again; it will be read-only, just like a
cdrom. If something doesn't work, reboot to the working partition, fix it, and
try again. Or you could remount / read-write, fix it, then reboot straight
into the test partition again. To remount / read-write, type:
# mount -o remount,rw /
-----------------------------------------------------------------------------
6.5.2. Creating the CD
If you need more information than you can find below, please refer to the
CD-Writing-HOWTO.
-----------------------------------------------------------------------------
6.5.2.1. Creating a boot image
First of all, boot into the working partition. To create a bootable cd we'll
need an image of a bootable floppy. Just dd-ing a zImage doesn't work, since
the loader at the beginning of the zImage doesn't seem to like the fake floppy
drive a bootable cd creates. So we'll use syslinux instead.
  * Get boot.img from a redhat cd.
  * Mount boot.img somewhere through loopback by typing:
# mount boot.img somewhere -o loop -t vfat
  * Remove everything from boot.img except for ldlinux.sys and syslinux.cfg.
  * Cp the kernel-image from the test partition to boot.img.
  * Edit syslinux.cfg so that it contains the following, of course replace
zImage by the appropriate image name:
default linux
label linux
kernel zImage
append root=/dev/<insert your cdrom device here>
  * Umount boot.img:
# umount somewhere
  * If your /etc/mtab is a link to /proc/mounts, umount won't automagically
free /dev/loop0 so free it by typing:
# losetup -d /dev/loop0
-----------------------------------------------------------------------------
6.5.2.2. Creating the iso image
Now that we have the boot image and an install that can boot from a readonly
mount it's time to create an iso image of the cd:
  * Copy boot.img to /test
  * Cd to the directory where you want to store the image and make sure it's
on a partition with enough free space.
  * Now generate the image by typing:
# mkisofs -R -b boot.img -c boot.catalog -o boot.iso /test
-----------------------------------------------------------------------------
6.5.2.3. Verifying the iso image
  * Mount the image through the loopback device by typing:
# mount boot.iso somewhere -o loop -t iso9660
  * Umount boot.iso:
# umount somewhere
  * If your /etc/mtab is a link to /proc/mounts umount won't automagically
free /dev/loop0 so free it by typing:
# losetup -d /dev/loop0
-----------------------------------------------------------------------------
6.5.2.4. Writing the actual CD
Assuming that you've got cdrecord installed and configured for your cd-writer
type:
# cdrecord -v speed=<desired writing speed> dev=<path to your writers generic scsi device> boot.iso
-----------------------------------------------------------------------------
6.5.3. Boot the cd and test it
Well, the title of this paragraph says it all ;)
-----------------------------------------------------------------------------
7. How to create diskless MS-Windows stations?
Since MS-Windows does not support diskless booting, a simple workaround is
presented here: the solution is to use software like [http://www.vmware.com]
VMWare or its free alternative, [http://www.plex86.org] plex86. Although
plex86 seems to have been abandoned, one can still boot certain versions of
MS-Windows using this software. These programs enable MS-Windows to be
executed transparently on the linux box.
-----------------------------------------------------------------------------
8. Troubleshooting, tips, tricks, and useful links
8.1. Transparently handling workstations' specific files
The previous sections discussed a simple way to handle workstations' specific
files and directories like /var. Most of them are simply built on the fly and
put on ramdisks, but you may prefer to deal with this problem on the NFS
server. The clusternfs project provides a network filesystem server that can
serve different files based on several criteria, including the client's IP
address or host name. The basic idea is that if the client whose IP address is
10.2.12.42 requests a file named, for instance, myfile, the server will look
for a file named myfile$$IP=10.2.12.42$$ and serve this file instead of myfile
if it is available.
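For example, per-client hostname files could be prepared on the server along
these lines (hypothetical names and addresses; the $$...$$ suffixes follow the
clusternfs convention described above):
# echo foo > '/nfsroot/etc/hostname$$IP=10.2.12.42$$'
# echo bar > '/nfsroot/etc/hostname$$IP=10.2.12.43$$'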
-----------------------------------------------------------------------------
8.2. Reducing diskless workstations' memory usage
One simple way to reduce memory consumption is to put several dynamically
created directories on the same ramdisk. For instance, let's say the first
ramdisk will contain the /tmp directory. Then, one may move the /var/tmp
directory on that ramdisk with the following commands issued on the server:
# mkdir /nfsroot/tmp/var
# chmod 0 /nfsroot/tmp/var
# ln -s /tmp/var /nfsroot/var/tmp
Another good way to reduce memory consumption if you don't have local hard
drives and do not swap over a network block device is to disable the Swapping
to block devices option during kernel compilation.
-----------------------------------------------------------------------------
8.3. Swapping over NFS
If your stations do not have enough memory and do not have local drives, you
may want to swap over NFS. You have to be warned that the code to do so is
still under development, and that this method is generally quite slow. The full
documentation for this can be found at [http://www.instmath.rwth-aachen.de/
~heine/nfs-swap/] http://www.instmath.rwth-aachen.de/~heine/nfs-swap/.
The first thing to do if you want to apply this solution is to patch your
kernel (you need a kernel version 2.2 or above). First download the patch at
the above url, and cd to /usr/src/linux. I assume the patch is in /usr/src/
patch. Then issue the following command:
# cat ../patch | patch -p1 -l -s
Then, compile your kernel normally and enable the Swapping via network
sockets (EXPERIMENTAL) and Swapping via NFS (EXPERIMENTAL) options.
Then export a directory read-write and no_root_squash from the NFS server.
Set up the clients so that they will mount it somewhere (say on /mnt/swap). It
should be mounted with an rsize and wsize smaller than the page size used by
the kernel (i.e. 4 kilobytes on Intel architectures), otherwise your machine
may run out of memory due to memory fragmentation; see the nfs manual page
for details about rsize and wsize. Now, to create a 20 MB swap file, issue
the following commands (which should be placed in the clients'initialization
scripts):
# dd if=/dev/zero of=/mnt/swap/swapfile bs=1k count=20480
# mkswap /mnt/swap/swapfile
# swapon /mnt/swap/swapfile
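For completeness, the NFS mount that has to happen before these commands could
look roughly like this (hypothetical server name and export path; note the
reduced rsize and wsize):
# mount -t nfs -o rw,rsize=2048,wsize=2048 nfsserver:/export/swap /mnt/swap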
Of course, this was just for an example, because if you have several
workstations, you will have to change the swap file name or directory, or all
your workstations will use the same swap file for their swap...
Let's say a word about the drawbacks of NFS swapping: the first drawback is
that it is generally slow, unless you have especially fast network cards.
Also, this possibility has not been very well tested yet. Finally, it is not
secure at all: anyone on the network is able to read the swapped data.
-----------------------------------------------------------------------------
8.4. Swapping over network block devices
Although I have never tried it personally, I have received reports that the
trick described below works, at least with recent kernels.
The general principle for swapping over network block devices is the same as
for swapping over NFS. The good point is that you won't have to patch the
kernel, but most of the same drawbacks as with the NFS method apply.
To create a 20 MB swap file, you will have to first create it on the server,
export it to the client, and do an mkswap on the file. Note that the mkswap
must be done on the server, because mkswap uses system calls which are not
handled by NBD. Moreover, this command must be issued after the server starts
exporting the file, because the data on the file may be destroyed when the
server starts exporting it. If we assume the server's name is NBDserver, the
client's name is NBDclient, and the TCP port used for the export is 1024, the
commands to issue on the server are the following:
# dd if=/dev/zero of=/swap/swapfile bs=1k count=20480
# nbd-server NBDclient 1024 /swap/swapfile
# mkswap /swap/swapfile
Now, the client should use the following command:
# swapon /dev/nd0
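Note that the client must first attach /dev/nd0 to the server before swapon
can work; with the classic nbd userland tools, this is done with a command
along these lines (the exact client tool name and syntax depend on your nbd
version):
# nbd-client NBDserver 1024 /dev/nd0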
Again, this was just to show the general principle. The files' names should
also depend on the workstations' names or IP addresses.
Another solution to swap over a network block device is to create an ext2
filesystem on the NBD, then create a regular file on this filesystem, and at
last, use mkswap and swapon to start swapping on this file. This second
method is closer to the swap-over-NFS method than the first solution.
-----------------------------------------------------------------------------
8.5. Getting rid of error messages about /etc/mtab or unmounted directories
on shutdown
The following commands, issued on the server, may solve the problem:
# ln -s /proc/mounts /nfsroot/etc/mtab
# touch /nfsroot/proc/mounts
-----------------------------------------------------------------------------
8.6. Installing new packages on workstations
A simple way to do so is to use, on the server, a chroot and then execute
your favourite installation commands normally. To chroot to the appropriate
place, use the following command:
# chroot /nfsroot
Debian users will be particularly interested in the --root option of dpkg,
which simply tells dpkg where the root of the target system is.
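For instance, installing a package into the clients' root filesystem directly
from the server might look like this (the package name is just an example):
# dpkg --root=/nfsroot -i somepackage.deb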
-----------------------------------------------------------------------------
A. Non-Volatile Memory chips
Here are brief descriptions of memory chips and their types:
  * PROM: Pronounced prom, an acronym for programmable read-only memory. A
PROM is a memory chip on which data can be written only once. Once a
program has been written onto a PROM, it remains there forever. Unlike
RAM, PROMs retain their contents when the computer is turned off. The
difference between a PROM and a ROM (read-only memory) is that a PROM is
manufactured as blank memory, whereas a ROM is programmed during the
manufacturing process. To write data onto a PROM chip, you need a special
device called a PROM programmer or PROM burner. The process of
programming a PROM is sometimes called burning the PROM. An EPROM
(erasable programmable read-only memory) is a special type of PROM that
can be erased by exposing it to ultraviolet light. Once it is erased, it
can be reprogrammed. An EEPROM is similar to a PROM, but requires only
electricity to be erased.
  * EPROM: Acronym for erasable programmable read-only memory, and pronounced
e-prom, EPROM is a special type of memory that retains its contents until
it is exposed to ultraviolet light. The ultraviolet light clears its
contents, making it possible to reprogram the memory. To write to and
erase an EPROM, you need a special device called a PROM programmer or
PROM burner. An EPROM differs from a PROM in that a PROM can be written
to only once and cannot be erased. EPROMs are used widely in personal
computers because they enable the manufacturer to change the contents of
the PROM before the computer is actually shipped. This means that bugs
can be removed and new versions installed shortly before delivery. A note
on EPROM technology: The bits of an EPROM are programmed by injecting
electrons with an elevated voltage into the floating gate of a
field-effect transistor where a 0 bit is desired. The electrons trapped
there cause that transistor to conduct, reading as 0. To erase the EPROM,
the trapped electrons are given enough energy to escape the floating gate
by bombarding the chip with ultraviolet radiation through the quartz
window. To prevent slow erasure over a period of years from sunlight and
fluorescent lights, this quartz window is covered with an opaque label in
normal use.
  * EEPROM: Acronym for electrically erasable programmable read-only memory.
Pronounced double-e-prom or e-e-prom, an EEPROM is a special type of PROM
that can be erased by exposing it to an electrical charge. Like other
types of PROM, EEPROM retains its contents even when the power is turned
off. Also like other types of ROM, EEPROM is not as fast as RAM. EEPROM
is similar to flash memory (sometimes called flash EEPROM). The principal
difference is that EEPROM requires data to be written or erased one byte
at a time whereas flash memory allows data to be written or erased in
blocks. This makes flash memory faster.
  * FRAM: Short for Ferroelectric Random Access Memory, a type of
non-volatile memory developed by Ramtron International Corporation. FRAM
combines the access speed of DRAM and SRAM with the non-volatility of
ROM. Because of its high speed, it is replacing EEPROM in many devices.
The term FRAM itself is a trademark of Ramtron.
  * NVRAM: Abbreviation of Non-Volatile Random Access Memory, a type of
memory that retains its contents when power is turned off. One type of
NVRAM is SRAM that is made non-volatile by connecting it to a constant
power source such as a battery. Another type of NVRAM uses EEPROM chips
to save its contents when power is turned off. In this case, NVRAM is
composed of a combination of SRAM and EEPROM chips.
  * Bubble Memory: A type of non-volatile memory composed of a thin layer of
material that can be easily magnetized in only one direction. When a
magnetic field is applied to circular area of this substance that is not
magnetized in the same direction, the area is reduced to a smaller
circle, or bubble. It was once widely believed that bubble memory would
become one of the leading memory technologies, but these promises have
not been fulfilled. Other non-volatile memory types, such as EEPROM, are
both faster and less expensive than bubble memory.
  * Flash Memory: A special type of EEPROM that can be erased and
reprogrammed in blocks instead of one byte at a time. Many modern PCs
have their BIOS stored on a flash memory chip so that it can easily be
updated if necessary. Such a BIOS is sometimes called a flash BIOS. Flash
memory is also popular in modems because it enables the modem
manufacturer to support new protocols as they become standardized.
-----------------------------------------------------------------------------
B. Determining the size and speed of EPROMs to plug in a NIC
This section comes from the etherboot project's documentation version 5.0. It
provides tips to determine the size and speed of EPROMs to use with a
particular NIC.
The smallest EPROM that is accepted by network cards is an 8k EPROM (2764).
16 kB (27128) or 32 kB (27256) are the norm. Some cards will even go up to
64 kB EPROMs (27512). (You will often see a C after the 27, e.g. 27C256. This
indicates a CMOS EPROM, which is equivalent to the non-C version and is a
good thing because of lower power consumption.) You want to use the smallest
EPROM you can so that you don't take up more of the upper memory area than
needed, as other extension BIOSes may need the space. However you also want
to get a good price for the EPROM. Currently the 32 kB and 64 kB EPROMs
(27256 and 27512) seem to be the cheapest per unit. Smaller EPROMs appear to
be more expensive because they are out of mainstream production.
If you cannot find out from the documentation what capacity of EPROM your
card takes, for ISA NICs only, you could do it by trial and error. (PCI NICs
do not enable the EPROM until the BIOS tells the NIC to.) Take a ROM with
some data on it (say a character generator ROM) and plug it into the socket.
Be careful not to use an extension BIOS for this test because it may be
detected and activated and prevent you from booting your computer. Using the
debug program under DOS, dump various regions of the memory space. Say you
discover that you can see the data in a memory window from CC00:0 to CC00:
3FFF (= 4000 hex = 16384 decimal locations). This indicates that a 16 kB
EPROM is needed. However if you see an alias in parts of the memory space,
say the region from CC00:0 to CC00:1FFF is duplicated in CC00:2000 to CC00:
3FFF, then you have put an 8 kB EPROM into a 16 kB slot and you need to try a
larger EPROM.
Note that because pinouts for 28 pin EPROMs are upward compatible after a
fashion, you can probably use a larger capacity EPROM in a slot intended for
a smaller one. The higher address lines will probably be held high so you
will need to program the image in the upper half or upper quarter of the
larger EPROM, as the case may be. However you should double check the
voltages on the pins armed with data sheet and a meter because CMOS EPROMs
don't like floating pins.
If the ROM is larger than the size of the image, for example, a 32 kB ROM
containing a 16 kB image, then you can put the image in either half of the
ROM. You will sometimes see advice to put two copies of the image in the ROM.
This will work but is not recommended because the ROM will be activated twice
if it's a legacy ROM and may not work at all if it's a PCI/PnP ROM. It is
tolerated by Etherboot because the code checks to see if it's been activated
already and the second activation will do nothing. The recommended method is
to fill the unused half with blank data. All ones data is recommended because
it is the natural state of the EPROM and involves less work for the PROM
programmer. Here is a Unix command line that will generate 16384 bytes of
0xFF and combine it with a 16 kB ROM into a 32 kB image for your PROM
programmer.
# (perl -e 'print "\xFF" x 16384'; cat bin32/3c509.lzrom) > 32kbimage
The speed of the EPROM needed depends on how it is connected to the computer
bus. If the EPROM is directly connected to the computer bus, as in the case
of many cheap NE2000 clones, then you will probably have to get an EPROM that
is at least as fast as the ROMs used for the main BIOS. This is typically
120-150 ns. Some network cards mediate access to the EPROM via circuitry and
this may insert wait states so that slower EPROMs can be used. Incidentally
the slowness of the EPROM doesn't affect Etherboot execution speed much
because Etherboot copies itself to RAM before executing. I'm told Netboot
does the same thing.
If you have your own EPROM programming hardware, there is a nice collection
of EPROM file format conversion utilities at [http://www.canb.auug.org.au/
~millerp/srecord.html] http://www.canb.auug.org.au/~millerp/srecord.html. The
files produced by the Etherboot build process are plain binary. A simple
binary to Intel hex format converter can be found at the Etherboot web site
at [http://etherboot.sourceforge.net/bin2intelhex.c] http://
etherboot.sourceforge.net/bin2intelhex.c. You may alternatively use the
objcopy utility, included in the binutils package:
# objcopy --input-target binary --output-target ihex binary.file intelhex.file
# objcopy --input-target ihex --output-target binary intelhex.file binary.file
Etherboot is believed to make PnP compliant ROMs for PCI NICs. A
long-standing bug in the headers has been tracked down. However some faulty
old BIOSes are out there so I have written a Perl script swapdevids.pl to
switch the header around if necessary. You'll have to experiment with it both
ways to find out which works. Or you could dump a ROM image that works (e.g.
RPL, PXE ROM) using the Perl script disrom.pl. The fields to look at are
Device (base, sub, interface) Type. It should be 02 00 00, but some BIOSes
want 00 00 02 due to ambiguity in the original specification.
-----------------------------------------------------------------------------
C. Companies selling diskless computers
The original Diskless-HOWTO mentions the names of the following vendors of
diskless computers:
  * Linux Systems Labs Inc., USA [http://www.lsl.com] http://www.lsl.com.
Click on "Shop On-line" and then click on "HardWare" where all the
diskless computers will be listed. Phone 1-888-LINUX-88.
  * Diskless Workstations Corporation, USA [http://
www.disklessworkstations.com] http://www.disklessworkstations.com.
  * Unique Systems of Holland Inc., Ohio, USA [http://www.uniqsys.com]
http://www.uniqsys.com
-----------------------------------------------------------------------------
References
[Diskless-HOWTO] Diskless-HOWTO.
http://www.linuxdoc.org/HOWTO/Diskless-HOWTO.html
[Diskless-root-NFS-HOWTO] Diskless-root-NFS-HOWTO.
http://www.linuxdoc.org/HOWTO/Diskless-root-NFS-HOWTO.html
[Bootdisk-HOWTO] Boot-disk-HOWTO.
http://www.tldp.org/HOWTO/Bootdisk-HOWTO/index.html
[ltsp] linux terminal server project.
A set of utilities and documentation for diskless stations, based on the red
hat distribution.
http://www.ltsp.org
[plume] plume.
A beginning project whose goal is to provide a set of utilities for diskless
stations and associated servers, based on the debian distribution.
http://plume.sourceforge.net
[logilab] Logilab.org web site.
http://www.logilab.org
[PowerUp2Bash] From-PowerUp-to-bash-prompt-HOWTO.
http://www.linuxdoc.org/HOWTO/From-PowerUp-to-bash-prompt-HOWTO.html
[ThinClient] Thin-Client-HOWTO.
http://www.linuxdoc.org/HOWTO/Thin-Client-HOWTO.html
[cdwriting] CD-Writing-HOWTO.
http://www.linuxdoc.org/HOWTO/CD-Writing-HOWTO.html
[etb] etherboot project.
http://etherboot.sourceforge.net
[VMWare] VMWare.
A non free virtual machine software.
http://www.vmware.com
[plex86] plex86.
A free virtual machine software.
http://www.plex86.org
The Mock Mainframe Mini-HOWTO
Scot W. Stevenson
<scot@possum.EXSPAM.in-berlin.de>
2003-10-07
Revision History
Revision 1.0 2003-10-07 Revised by: sws
First finished release
Revision BETA 1.0 2003-09-28 Revised by: sws
Last public draft
Revision ALPHA 0.3 2003-09-23 Revised by: sws
Last internal draft
A brief description of a standard way to set up and work with a computer
network for a small group of people that is inexpensive to build, easy to
administer, and relatively safe. It is written for users who might not be
completely familiar with all of the concepts involved.
-----------------------------------------------------------------------------
Table of Contents
1. Introduction
1.1. Copyright and License
1.2. Disclaimer
1.3. Credits / Contributors
1.4. Feedback
1.5. Translations
2. Background
2.1. Why This Text?
2.2. Reasoning and Overview
2.3. What You Should Be Aware Of
2.4. How This Text Is Organized
3. The Individual Pieces
3.1. The Mock Mainframe
3.2. The Terminals
3.3. The Support Machines
4. Putting the Pieces together
4.1. Security
4.2. Network Hardware
4.3. Network Geography
5. Life With Multiple Users
5.1. Shared Resources
5.2. Screen Savers and Other Gimmicks
5.3. Idle Terminals
6. Going Hardcore: Non-GUI Systems
6.1. Why the Command Line Is Cool
6.2. Setting Up Text Terminals
6.3. Useful Shell Commands
7. Odds and Ends
7.1. Mock Mainframe Case Studies
7.2. And Finally...
1. Introduction
1.1. Copyright and License
This document, The Mock Mainframe Mini-HOWTO, is copyrighted (c) 2003 by
Scot W. Stevenson. Permission is granted to copy, distribute and/or modify
this document under the terms of the GNU Free Documentation License, Version
1.1 or any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts.
A copy of the license is available at [http://www.gnu.org/copyleft/fdl.html]
http://www.gnu.org/copyleft/fdl.html.
-----------------------------------------------------------------------------
1.2. Disclaimer
No liability for the contents of this document can be accepted. Use the
concepts, examples and information at your own risk. There may be errors and
inaccuracies that could be damaging to your system. Although this is highly
unlikely, the author does not take any responsibility. Proceed with caution.
All copyrights are held by their respective owners, unless
specifically noted otherwise. Use of a term in this document should not be
regarded as affecting the validity of any trademark or service mark. Naming
of particular products or brands should not be seen as endorsements.
-----------------------------------------------------------------------------
1.3. Credits / Contributors
This document has benefitted greatly from the feedback, commentary, and
corrections provided by the following people:
Gareth Anderson, Doug Jensen, Jim McQuillan, Volker Meyer, Binh Nguyen,
Douglas K. Stevenson.
-----------------------------------------------------------------------------
1.4. Feedback
Feedback is most certainly welcome for this document. Please send your
additions, comments and criticisms to the following email address: <
scot@possum.DESPAM.in-berlin.de>. Feedback can be in English or German.
-----------------------------------------------------------------------------
1.5. Translations
There are currently no translations.
-----------------------------------------------------------------------------
2. Background
2.1. Why This Text?
In the last decade of the past millennium, I moved out of my parents' house
and into a small apartment with my girlfriend. I left behind not only the
comfort of a magically refilling refrigerator, but also a computer network
that suddenly had to survive daily and sometimes creative usage by my mom,
dad, and kid sister for months without me. After some gentle persuasion, my
girlfriend not only switched from Windows to Linux, but also became my
fiancee. I left grad school and got a real job, which left me with even less
time to fool around with my -- er, our -- network, let alone my parents'
computers. My fiancee became my wife, we left the apartment for a small
house, and then I found myself spending more time changing diapers than
floppies.
In other words, somewhere along the way, I turned into an adult.
It happens to the best of us, I'm told, and there are benefits that go
beyond a de facto unlimited budget for ice cream. Having all the time in the
world to keep computers running, however, is not one of them. I needed some
sort of setup for the systems I am responsible for that is
  *  Easy to administer. I don't have the time to do the same thing on three
different machines, or figure out which machine needs which patch.
Ideally, I only have to take care of one single computer in each network,
and that infrequently. Some of the computers should not require any
maintenance at all for months at a time.
  *  Easy to afford. My hardware budget now competes with house payments,
food bills, and the cost of clothes that my daughter seems to grow out of
faster than we can buy them. Getting more done with less is not just an
intellectual challenge, but a pressing necessity.
  *  Easy to secure. The network's very structure should make it harder for
outsiders to do evil things, and, more important, make it easy for me to
create a safe "lock-down" state where threats are minimal until I find
the time to patch holes.
After a few years of trial and error and a lot of time spent thinking about
setting up computers while rocking screaming babies in the middle of the
night, I created a "standard" setup. It is not a terribly clever or ingenious
way of doing things, and there are probably thousands of systems out there
organized along exactly the same lines. The aim of this text is to present
this setup in a coherent form so that other people don't have to invent the
wheel all over again when faced with the same problem.
-----------------------------------------------------------------------------
2.2. Reasoning and Overview
Most desktop computers nowadays are insanely overpowered for what they are
doing most of the time: Email, surfing, and text processing, while maybe
listening to music. Unless you are still using a 486DX at 66 MHz, your
processor is probably bored out of its registers even if it is doing all of
this at once. Check any program that shows the system load -- such as xload,
top, or uptime -- and you'll see just how much of your expensive hardware is
busy doing nothing.
With all of those resources left over, there is no technical reason why
more than one person can't use the computer at the same time. This concept
seems strange and downright alien to most home users today, thanks in no
small part to Microsoft's philosophy of "a computer on every desktop" and the
hardware companies' ad campaigns that imply that you are, among other things,
sexually inadequate if you don't have your very own super-charged computer
under your desk.
There are good commercial reasons for hard- and software companies not to
like multiuser setups. Even if you have to upgrade the central machine, you
are going to need less high-quality hardware than if everybody has their own
computer; and if four people could use one Windows machine at the same time,
that would be three fewer copies for Microsoft to make money on. You obviously
don't save any license money if you install Linux on one machine instead of four,
but your hardware costs and administration time will drop.
Of course there are other reasons than big company ad pressure why few
people have multiuser setups. One is computer games: Many of them suck up so
much hardware that a multiuser system is usually not the best idea. Also,
until a short time ago, there was no easy way to actually have more than one
person log on, since most desktop computers come with only one keyboard, one
mouse, and one monitor. This has changed: You can now create inexpensive and
reliable graphic terminals (also known as thin clients) with very little
hassle and expense. This allows us to get away with one big machine and a
couple of little ones. Last but not least, sharing a machine means you have
to behave and get along with other users.
In a nutshell, this text is about centralizing small computer systems to
save time and money. The mainframe vendor IBM wants us to believe that this
is just what the big boys are doing, too. Now that the age of server mania is
over, they say, companies are moving stuff back onto those mainframes. Since
more and more of those mainframes are running roughly the same Linux you have
at home, the only difference between a real mainframe and your computer is a
bit of hardware. A few hundred thousand dollars worth of hardware at least,
granted, but that doesn't mean that you can't use the same design principle
and enjoy the benefits of a "little" mainframe -- a "mock" mainframe, if you
will.
The basic setup has three parts:
  * The Mock Mainframe. The one and only core machine. All users access this
computer, either by sitting in front of it or (more likely) from a
terminal, and they can do so at the same time. In the simplest setup,
this machine is home to all users, holds all files, and runs all
programs.
  * The Terminals. What the user actually touches. Cheap, easy to maintain,
and expendable, they can be dual-boot machines, Linux Terminals, thin
clients, or even X Window server programs for other operating systems.
  *  Support Machines. Optional computers that perform a special task that
for reasons of security or performance you'd rather not have on the mock
mainframe. The most common support machine is a "guardian" that handles
Internet connections.
Parts of this text will deal with installing software that is covered in
far greater detail in other Linux HOWTOs. Caught between the extremes of just
referring to those texts and copying everything, I have decided to give a
very brief description of the installation procedure on a standard system.
You'll get a general idea of what needs to be done, but for the details,
you'll need the specialized text. This does mean that there are a lot of
references, but that just goes to show how much I am standing on the
shoulders of others here.
-----------------------------------------------------------------------------
2.3. What You Should Be Aware Of
A mock mainframe setup is not for everybody. It is based on the following
assumptions:
  *  A small group of users. Though it should scale well from a family setup
to at least a classroom full of people (depending on the hardware and
programs used), this is not something you want to run a university or
Fortune-500-company with. If you are alone, it doesn't make much sense
either. Go find somebody to move in with, then read on.
  *  A sane system load. Unless you can really, really fork out a lot of
money for serious hardware (in which case, you should probably not be
looking for a mock mainframe), this is not a setup where you should have
your kids playing "Quake 3" while you are encoding Ogg Vorbis files and
your partner is watching a DVD, all at the same time. It is designed
primarily for pedestrian workloads like email, browsing, chatting, and
text processing.
  *  Some downtime tolerance. We will be using standard, off-the-shelf,
home-user-grade hardware. These parts are not built for enterprise
strength work, and sooner or later, something is going to break or fail.
If whatever you are doing urgently requires anything even close to 24/7
uptime, you'll have to go out and buy industrial strength hardware -- and
remember to get somebody to guarantee that uptime in writing while you
are at it.
Some examples of when a mock mainframe might make sense:
  *  You have a family of email, surfing and chat freaks who all want to be
online at the same time but don't use serious resources when they are.
  *  You have a small, closed teaching system that can't be expensive or
take too much time to administer.
  *  You and your dorm buddies each have those high-powered computers to
blow each other away with computer games, but don't want to go through
the hassle of installing a serious Linux system on every one to do
something as trivial as your actual course work.
  *  Your organization has absolutely no money and the only hardware you can
get is stuff so old, it doesn't even have scrap value anymore, but you
still have to give your people computer access.
(If you have found other situations where this setup works, please let me
know.)
-----------------------------------------------------------------------------
2.4. How This Text Is Organized
First, we will take a look at the individual parts of the setup -- the mock
mainframe, the terminals, the support computers. Then we'll discuss ways of
putting these elements together. This is also where we will talk about
security. We'll also discuss life with more than one user and setups for very
weak hardware.
-----------------------------------------------------------------------------
3. The Individual Pieces
3.1. The Mock Mainframe
3.1.1. The Hardware
Examining your needs. If the load that is going to be placed on the mock
mainframe is more or less constant and won't change too much over time, you
are in the wonderful position of being able to tailor your hardware to your
needs. This probably will let you get away with second-hand hardware, which
leaves you with more money for, say, a new surround sound system (or more
realistically, a new dish washer).
The simple way to find out just what you need is to throw together a
machine, just about any machine, and then see how it performs under the load
it will actually be asked to bear. Then experiment a bit: Will the computer
start to swap if you take out half of the RAM, will it speed up if you put in
double the amount? See if you can get away with a slower processor or a
smaller hard disk. If you can, get feedback from your users.
These trial runs can take time and may seem like a lot of work. The idea
here is to fit the mock mainframe's hardware as exactly as possible to the
task at hand so you can use the rest of the hardware for other stuff. Also,
these trial runs can have surprising results. Most people have little
experience in building a system for more than one user, and tend to
overestimate the processor strength required while underestimating the amount
of memory they need.
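During such a trial run, a handful of standard commands will tell you most of
what you need to know (output formats vary a little between versions):
uptime
free -m
vmstat 5
uptime shows the load averages, free -m shows memory and swap usage in
megabytes, and vmstat prints fresh numbers every five seconds -- if the si and
so columns stay at zero, the machine is not swapping.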
For example, for our current setup at home in 2003 -- two people running
SuSE 8.2 and KDE 3.1 with a regular load of email clients, multiple browser
windows, chatting and music playback -- an AMD Duron 1.0 GHz processor turned
out to be overkill. We ended up with a secondhand SMP mainboard with two used
Intel Pentium II Xeon 450 MHz CPUs (yes, Pentium "two"). Further experiments
showed that 512 MByte RAM was slightly too much RAM: 384 MByte is fine, if
you can live with the system going into swap once in a blue moon.
Multiple vs. single processors. With more and more people working on one
computer at the same time, you'll start having moments when a single
processor machine seems to stall. Also, if somebody's process goes berserk
and starts hogging the CPU, it can freeze the whole system. This is bad.
Decades of hardware marketing have produced computer users who reflexively
go out and buy a faster processor when things slow down. But even the fastest
CPU can't do more than one thing at once (we're ignoring tricks like
hyperthreading), it is just somewhat better at faking it. To really do two
things at the same time, you need more than one processor. Such systems are
usually referred to as "SMP" computers, from symmetric multiprocessing. You
can get them with eight processors or more (Intel Pentium II Xeon, AMD
Opteron, Intel Xeon) but in our price range, two CPU (dual-processor) systems
are the most common.
More than one processor will go a long way towards keeping the system
responsive even when there are lots of processes running. What it will not do
is make a single process run twice as fast as on a system with a single
processor of the same speed. One way to visualize this is to imagine you are
doing your laundry: Two washing machines will get the whole job done in about
half the time, but that does not mean that they now each spin twice as fast;
each load still takes just as long as before. Things are actually more
complicated, but this is a good rule of thumb.
Although processor speed might be important for gamers on the bleeding edge
or people who want to simulate nuclear explosions on their desktop machine,
the current clock speeds are simply perverse for normal use. You can usually
get away with far slower processors than the market is trying to force down
your throat, especially if you have more than one CPU. This is a good thing
because SMP-mainboards are more expensive than normal, single-processor
boards, and then you still have to buy that second processor. Keep in mind
that more recent (AMD Opteron / Intel Xeon) SMP systems can have expensive
needs such as a special power supply and extra large cases.
A multi-processor mainboard is not a must for a mock mainframe. But if you
find your system groaning under multiple users, adding processors might give
you a better deal than adding MHz.
(At the time of writing, there was also the problem of latency in the Linux
kernel. In the 2.4.* series, the kernel is not pre-emptable, so occasionally
a one-processor system will stall while something is happening in the bowels
of the operating system. The 2.6.* kernels are supposed to be far more
responsive, which would be the end of that problem and of this
paragraph, too.)
Storage: SCSI vs. IDE, RAID. You might want to take a look at using SCSI
instead of IDE for hard disks and other drives. One advantage of SCSI is that
you can connect more drives to one computer than the four you are usually
limited to with IDE. SCSI drives are also better at moving data back and
forth amongst themselves without help of the processor. They are, however,
more expensive and can be louder. On smaller systems with few users and low
loads, you should be able to use IDE drives with no problem.
If you are going to build a system where it is important you don't lose
your data even between those regular backups you perform every night right
after you floss your teeth, you might want to consider a RAID (Redundant
Array of Inexpensive Disks) setup. Very roughly speaking, a RAID setup
duplicates the data on more than one hard disk, so that if one drive crashes,
the others still have copies.
Sane graphics. Most graphics cards cater to the game freak who has an
unlimited hunger for speed, speed, and more speed and the pocket depth to
match. An AGP graphics card with 128 MByte of RAM and dazzling 3D functions
is not necessarily a bad thing in a mock mainframe, but be sure that you
actually need it. A good used PCI card will usually do just fine for email
and surfing.
Heat and Lightning. Beyond the normal hardware considerations mentioned here,
give some thought to the parts that protect your machine from threats such as
power surges or brown outs, or makes sure that everything stays cool, or
shields your drive bays from inquisitive little children with popsicle
sticks. A good modern mainboard has temperature alarms and all sorts of other
features to help you monitor your system's health.
In summary:
  *  Think RAM before processor speed. With more than one user, you'll be
using more memory and less CPU time than you expect.
  *  Two slower processors can be better than one fast one. A faster
processor can switch between more than one task faster than a slow one,
but two processors don't have to switch at all. This means you can use
older hardware, which will almost always be less expensive even though
you will need more of it.
  *  Consider SCSI and RAID. SCSI instead of IDE gives you more drives on
one machine, and they are able to play amongst themselves without
processor supervision. However, SCSI drives are more expensive and make
more noise. RAID helps protect your data from hard disk failure. Both are
for more ambitious setups.
When buying hardware for a mock mainframe, online auctioneers are your
friends. Whereas your local computer store will try to sell you the newest
fad, there is no shortage of previous-generation hardware at affordable
prices online.
-----------------------------------------------------------------------------
3.1.2. The Software
Some background on X. The X Window System (X Windows or just X for short) is
the graphics layer that most Linux systems use. Almost all current window
managers -- KDE, Gnome, Blackbox -- sit on top of X, and almost all variants
of Unix use X.
X Windows has one important aspect that we make extended use of with the
mock mainframe: It is network transparent. The software responsible for
controlling the input/output devices -- screen(s), keyboard, and mouse -- can
be on a different computer than the programs you are actually running. With
X, it is possible to sit in Beijing, China, with a 486DX and run your
programs on a supercomputer in Langley, Virginia.
This has a whole number of advantages. Graphics are hard work for a
computer; having them processed on a different machine than the program they
belong to takes a big load off of the central computer. They are not so hard,
however, that they can't be handled by an older processor. In the distant
past of computer technology, there were special machines called X Terminals
that did nothing but display graphics. Today, a spare computer with an Intel
PentiumPro or an AMD K6 with 300 MHz is enough. This lets you have one big,
fat machine running the actual programs and a whole host of cheap, small
machines doing all the graphics. Which is exactly what we are looking for.
X Windows does have some drawbacks. It gobbles up a lot of bandwidth, so
you will want a fast network. Also, some of the terminology is strange. The
computer (or rather the software) that controls screen, mouse, and keyboard
is called the "X server", because it "serves" the actual program, which in
turn is called the "X client". In this text, we'll stick to "host" and
"terminals" to avoid confusion.
There are all kinds of good Linux HOWTOs about X Windows, so again we'll
just go through the basic steps and let you consult the special texts. I'm
assuming that you already have X set up on the mock mainframe; your
distribution should handle that part for you.
First, we have to start the program that handles remote X logins. This is
xdm (X Display Manager). Depending on your system and taste, you might want
to use the KDE version kdm or Gnome version gdm instead; both have nicer
graphics and more features. Check the XDMCP Mini-HOWTO by Thomas Chao for
more details. Normally, you'll want xdm (or whatever) to start up in the run
level that you usually use for graphics (for example, run level 5 for SuSE
8.2).
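How the display manager gets started differs from distribution to
distribution; the following is only a sketch of the classic SysV-init way, in
which /etc/inittab sets the default runlevel and (on some systems) respawns
the display manager directly:
id:5:initdefault:
x:5:respawn:/usr/X11R6/bin/xdm -nodaemon
If your distribution starts xdm from an init script instead, simply make sure
that script is enabled in your graphical runlevel.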
Even when xdm is running, the mock mainframe should not let you connect
from the outside, which is good security. Your distribution might let you
change this with a simple entry in one of its configuration files (for
example, SuSE 8.2 uses /etc/sysconfig/displaymanager). If you have to do it
the hard way, you will want to change /etc/X11/xdm/xdm-config and /opt/kde3/
share/config/kdm/kdmrc if you are using kdm.
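As a rough sketch of "the hard way" (file locations and details may differ
slightly on your system): in /etc/X11/xdm/xdm-config, disable the line that
switches XDMCP off by putting an exclamation mark -- the comment character for
X resource files -- in front of it:
! DisplayManager.requestPort: 0
Also check that /etc/X11/xdm/Xaccess contains an uncommented "*" line, so that
your terminals are allowed to request a login window. If you are using kdm,
the equivalent switch in kdmrc is:
[Xdmcp]
Enable=true
Restart the display manager after making these changes.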
After all of this is done, you are ready to test the link. Get a computer
you know has a functioning X system, boot it in console mode -- not in
graphics mode (runlevel 3 instead of 5 on SuSE systems, use init 3 as root
from a shell). Log in and type
/usr/X11/bin/X -terminate -query <host>
where "<host>" is the name or IP-address of the mock mainframe. You should
get the same X login prompt as if you were sitting at the host machine.
-----------------------------------------------------------------------------
3.2. The Terminals
The machines you use to connect to the mock mainframe should be
inexpensive, easy to maintain and, from a security point of view, expendable.
-----------------------------------------------------------------------------
3.2.1. Dual Boot Machines
Some people -- those without a time consuming job, a spouse, or children,
for example -- will want to be able to spend lots of time playing hardware
intensive computer games. Although more and more games are coming out for
Linux, this usually means running a machine that has a closed source
operating system such as Microsoft Windows. The solution to this problem is
to set up the game computers as dual boot machines. The messy details are
usually handled automatically by whatever distribution you are using; if not,
check out the Linux Installation Strategies mini-HOWTO by Tobby Banerjee.
The mock mainframe setup lets you keep the size and complexity of the Linux
partition on a dual boot machine to a minimum: All it has to do is to get X
running and connected. There are various ways to do this; I usually just do
the following:
1. Go to /etc/X11/xdm/. In the file Xservers, comment out the line that is
either
:0 local /usr/X11R6/bin/X :0 vt07
or something similar by putting a hash mark ("#") at the beginning. This
will stop the computer from starting up X locally during boot time.
2. In /etc/inittab, insert a new line such as (for SuSE 8.2)
xx:5:respawn:/usr/X11R6/bin/X -query <host>
where "<host>" again is the name of the mock mainframe. The "5" is the
runlevel that boots with X; "xx" is just a label I picked; you might have
to adapt both to your system (please be careful: Playing around with
inittab can cause big trouble). This will start X with a call to the mock
mainframe, and you should get the login window when you are on the dual
boot computer.
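The change to /etc/inittab takes effect at the next boot; if you want init to
reread the file right away, you can (as root) tell it to do so:
telinit q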
Dual boot machines are nice if you don't have to switch between operating
systems too often. All of the rebooting can quickly become a bore, though,
and a dual boot machine cannot be considered truly expendable, given the
price of closed source operating systems.
-----------------------------------------------------------------------------
3.2.2. Linux Terminals
The Linux Terminal Server Project [http://www.ltsp.org] (LTSP) lets you use
old hardware to put together bare-bones computers without hard disks that run
as thin clients. These machines are cheap, quiet, quick to set up, and once
they are running, require just about zero maintenance (unless, say, a fan
breaks). At LinuxWorld 2003 in San Francisco, the LTSP project deservedly
took the award for "Best Open Source Project". If you are going to have
terminals that are in use constantly, it is hard to see how this would not be
the best solution.
Required hardware. More likely than not, somewhere in your cellar or garage
(or wherever you keep the stuff your partner lovingly calls "all that crap"),
you probably have a hopelessly outdated mainboard and processor that you've
been saving because you never know. Well, guess what.
If you are using a 100 Mbit ("Fast") Ethernet network, stay above a 486DX;
a Pentium II should be fine. See if you can scrape together about 32 MByte of
RAM. You'll need a floppy drive for the initial phase. You'll also need a
decent graphics card and a monitor -- "decent" doesn't necessarily mean an AGP
graphics card with 128 MByte RAM, it means a clear, crisp picture.
The only thing you have to pay slightly more attention to is the network
card. Find one that has a socket to plug ROM chips in: a "bootable" network
card. You can get away with one that doesn't have the socket, but then you
have to keep booting from the floppy. We'll also need the unique number (
Media Access Control or MAC number) of the network card. On good cards, it is
included on a little sticker on the board and looks something like this:
00:50:56:81:00:01
If you can't find it on the card, try booting the system with a Linux rescue
floppy or any other kernel. The number should be displayed during boot when
the card is detected.
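If the machine will boot some flavor of Linux and the card comes up as eth0
(an assumption -- adjust the interface name if necessary), either of these
usually reveals the MAC:
dmesg | grep -i eth
/sbin/ifconfig eth0
In the ifconfig output, the MAC is the HWaddr field.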
Add a keyboard and a case and that's it. Notice we don't have a hard disk,
let alone a CD-ROM. With the right kind of fans for the power supply and the
processor, you have a very quiet machine.
How they work. The LTSP home page has an in-depth technical discussion of
what happens when the system powers up. In brief, human terms:
When turned on, the Linux Terminal, like any other computer, looks around
to see what it has been given in way of hardware. It finds a network card
with a MAC and notices that it has a floppy with a boot disk (or a boot ROM
in the network card) and starts the boot program. This in effect tells the
Linux Terminal:
Got your MAC? Good. Now scream for help as loud as you can.
The terminal's call goes through the whole (local) network. On the mock
mainframe, a program called dhcpd (Dynamic Host Configuration Protocol Server
Daemon) is listening. It compares the MAC the terminal sent to a list of
machines it has been told to take care of, and then sends the terminal an
answer that includes an IP address and a location where the terminal can get
a kernel. The terminal then configures itself with its new name.
Using some more code from the boot program, the terminal starts a program
called tftp (Trivial File Transfer Protocol), a stripped-down version of the
venerable ftp. This downloads the kernel from the host machine. The terminal
then boots this kernel.
Like every other Linux system, the terminal needs a root filesystem.
Instead of getting it from a harddisk, it imports it from the mock mainframe
via NFS (Network File System). If the terminal has very little memory, it can
also mount a swap partition this way. The terminal then starts X, connects to
the mock mainframe via xdm, and throws up the login screen.
This all happens amazingly fast. If you turn off all of the various BIOS
boot checks on the terminal and boot off of an EPROM in the network card
instead of a floppy, it happens even faster.
Running dhcpd, tftpd, and nfsd on the mock mainframe is a security risk you
might not be willing to take. In the chapter on Support Machines, we'll show
a way of getting around this.
Setting up the software. On the server (mock mainframe) side, you need to
install nfsd, tftpd, and dhcpd, which your distribution should include as
standard packages.
Leave their configuration files untouched for now. The LTSP configuration
and installation programs will do most of the work for you. There are some
things you still need to do by hand (at time of writing, the LTSP people were
working on a more automatic installation system). Edit the files:
/etc/dhcpd.conf
Provide the IP address of the terminal, the hostname, the IP address of
the mock mainframe, the MAC of the terminal, and the default gateway.
Also, check to see if the kernel pathname is correct.
/opt/ltsp/i386/etc/lts.conf
These options control the terminal itself.
/etc/hosts
The names of the Linux Terminals and their IP addresses must be listed
here. Further down, while describing the network, we'll introduce a
systematic naming convention to make this easier.
/etc/hosts.allow
Though not mentioned in the current LTSP documentation, you probably
will have to add the following lines to this file:
rpc.mountd : <terminal> : ALLOW
rpc.mountd : ALL : DENY
where "<terminal>" is the terminal's IP address. This tells the host to
allow the terminal to mount the NFS file system.
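To make this more concrete, here is a hedged sketch of the relevant entries
for a single terminal. Every name, address, and path below is only an example
(the kernel filename in particular depends on what your LTSP package actually
installed), and the host names follow the naming scheme suggested further
down:
# /etc/dhcpd.conf (fragment)
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers    192.168.1.1;
    next-server       192.168.1.1;
}
host kitten01 {
    hardware ethernet 00:50:56:81:00:01;
    fixed-address     192.168.1.101;
    option root-path  "192.168.1.1:/opt/ltsp/i386";
    filename          "/lts/vmlinuz.ltsp";
}
# /etc/hosts
192.168.1.1     fatcat
192.168.1.101   kitten01
# /etc/exports -- the terminal mounts its root filesystem read-only
/opt/ltsp/i386  192.168.1.101(ro,no_root_squash)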
Creating a boot floppy for the Linux Terminal is usually trivial. Armed
with your type of Ethernet card, go to the website mentioned in the LTSP
documentation (currently Marty Connor's ROM-O-Matic Website [http://
www.rom-o-matic.net/]), and follow the instructions for a boot floppy. This
should produce a file of a few dozen kilobytes that you can then put on a
floppy and boot from. Later, when you are sure that your hardware
configuration is not going to change and your setup works, replace the floppy
by an EPROM that you plug into your Ethernet card.
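Writing the downloaded image to a floppy takes one command; the filename here
is made up, use whatever ROM-O-Matic actually gave you:
dd if=eb-rtl8139.lzdsk of=/dev/fd0
Then boot the terminal from that floppy.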
Using the terminals. Just how many Linux Terminals can one mock mainframe
support? The LTSP documentation gives the following example:
It's not unusual to have 40 workstations [Linux Terminals], all running
Netscape and StarOffice from a Dual PIII-650 with 1GB of ram. We know
this works. In fact, the load-average is rarely above 1.0!
(This part of the documentation was written in March 2002, hence the
reference to Netscape, an ancestor of Mozilla Firebird. StarOffice is a
commercial variant of OpenOffice.)
Linux Terminals will probably require some user education. People who have
only ever used Windows tend to have trouble visualizing a system where the
graphics layer is not only independent from the rest of the operating system,
but can also be accessed from multiple screens. The best way to explain this
is with examples. One trick that people new to X just love is when programs
start on one terminal and then appear on a different one. To enable this (but
only in a safe environment!), sit down at a terminal and type
xhost +<host>
where "<host>" is the name of the mock mainframe. Then, move to a different
terminal and start a program such as xeyes or xroach:
xeyes -display <terminal>:0 &
The eyes should appear on the first terminal's monitor, providing endless
amusement for all. When you are done explaining what happened, remember to
retract the privileges again on the first terminal with
xhost -<host>
You can also use this example to point out why it is dangerous to use the
xhost command.
Another question that usually comes up is the speed of Linux Terminals. One
nice way to demonstrate this is to run a series of screen savers from the
xlock suite. For example
xlock -inwindow -mode kumppa
or more generally
xlock -inwindow -mode random
Though the results will depend on your hardware, this usually takes care of
any doubts.
If you are using a desktop such as KDE that allows you to shut down the
computer when you log off, make sure that this function is disabled.
Otherwise, your users will shut down the mock mainframe when trying to get
out of the terminal. Tell them to just turn off the power once they have
logged out. Older users will feel a sense of nostalgia, and younger users
will stare at you as if you have gone mad. Such is progress.
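If kdm is your display manager, one place to take the shutdown option away
from ordinary users is the AllowShutdown setting in kdmrc (the file lives in
different places on different distributions; the section name below is from
KDE 3 and may differ on your version):
[X-*-Core]
AllowShutdown=Root
With this, only root is offered the shutdown button.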
-----------------------------------------------------------------------------
3.2.3. Real X Terminals
If fortune smiles on you or you are rich, you might find yourself with a
real thin client. Installing one is usually not much different than setting
up a Linux Terminal, except that you will need the software from the vendor,
you will probably have to pay for support, and when something goes wrong, you
won't be able to fix it yourself.
The Linux Documentation Project has a number of general and special HOWTOs
on how to set up X Terminals, for example the Connecting X Terminals to Linux
Mini-HOWTO by Salvador J. Peralta or the NCD-X-Terminal Mini-HOWTO by Ian
Hodge.
-----------------------------------------------------------------------------
3.2.4. X Server Programs
As a final way of connecting to the mock mainframe, there are "X server"
programs that run under different operating systems (remember, the X server
powers the terminal side of things). These let you log onto Linux machines
with an operating system that does not natively run X.
Most X servers for Windows cost money, in some cases a lot of money. The
single exception I am aware of is [http://cygwin.com/xfree/] Cygwin [http://
cygwin.com/xfree/], which ports X (and GNU tools) to Windows machines.
If you have an Apple computer with OS X, you are in better shape. Check the
[http://www.xdarwin.com/] XDarwin [http://www.xdarwin.com/] project. XDarwin
is an Apple version of the X Window System that sits on the Darwin operating
system -- a variant of BSD -- that is the core of OS X.
(There is one GPL X Server written in Java you might try: [http://
www.jcraft.com/weirdx/] WeirdX [http://www.jcraft.com/weirdx/], though the
author points out it is not made for heavy loads.)
In this chapter, we have examined terminals that will give you a GUI (
graphical user interface). If you are tough enough, you can also hook up a
text terminal to your mock mainframe and access the system via a CLI (command
line interface). This option is covered further down.
-----------------------------------------------------------------------------
3.3. The Support Machines
In theory, you should need no other computers than the mock mainframe and
whatever you use as terminals. In practice, you'll probably want additional
machines for specific tasks. Usually this will be because of security, not
performance.
For example, let's assume you have a network with a dial-up connection to
the Internet for email and browsing. Of course you could put all the hard-
and software required on the mock mainframe and not see much of a performance
hit (in fact, if your network is slow, it might even be faster). But that
puts your most valuable computer right where everybody who is on the Internet
-- which increasingly means anybody on the planet -- can attack it.
For better security, put a machine between the mock mainframe and the
outside world. Make sure this Guardian machine is not only heavily fortified,
but also expendable, so if it is taken over by the forces of evil or
compromised in any other way, you won't lose anything valuable. To lock down
the network in an emergency, all you have to do now is to physically turn off
the power of the guardian machine (assuming this is the only entry point to
your local net). This can be very useful if you can't sit down and go though
security measures the moment you see a problem, because, say, your boss at
the burger grill just does not realize how important that dorm network is and
unfeelingly insists you show up on time to flip the meat.
Other functions you might want to isolate on different machines are web- or
other servers on your net that people from the Internet can access. You can
also have a support machine tend your Linux Terminals (a Terminal Mother) or
to burn CDs (a Burner).
-----------------------------------------------------------------------------
4. Putting the Pieces together
So after reading this far, you know what you want, know where to get it,
how to set it up, and want to get going. There are few things you should
think about, however, before you start editing configuration files and
stringing cables.
-----------------------------------------------------------------------------
4.1. Security
There is only a limited amount I can tell you about your security needs:
Everybody faces different threats. All I can do here is give some basic
background on how to secure a mock mainframe setup. If you are looking for a
good general introduction to security, try the book Secrets & Lies by Bruce
Schneier.
-----------------------------------------------------------------------------
4.1.1. Needs revisited
In most books on securing computer systems, there comes a point where the
author tells you to sit down and "formulate a security policy". This sounds
like such a bureaucratic nightmare that most people skip the whole chapter.
I'm not going to suggest you formulate anything. But the next time you're
taking a shower, ask yourself what kind of defenses you need.
What are you trying to protect? Are you worried about somebody hacking into
the mock mainframe and stealing your data, the classic Hollywood threat to
computers? Or that your hardware could be destroyed by lightning? Or that
somebody will sit down in front of a terminal when the user is off to the
bathroom and write emails in his name? Or that people will open the computer
cases and steal the processors? Another way to look at this is to figure out
what parts of the system would be hard or even impossible for you to do
without. For example, the digital photos and films of my daughter when she
was a baby are simply irreplaceable.
Who or what are the forces of evil? Once you know what you are trying to
protect, think about whom you are protecting it against, maybe while you are
brushing your teeth. Are you worried about crackers from the Internet, or
that the flaky power company you are stuck with will zap your computers with
a power surge? Remember those little kids with popsicle sticks?
If your system is connected to the Internet 24/7, you need to worry about
worms and crackers. If you are only online for as long as it takes to pick up
those three emails from your mother, your risk in this area is drastically
reduced. This shows how the probability of an attack figures in. How likely
is it for somebody to hit your system during those 20 seconds? If an attack
is highly improbable, you won't want to go to the effort of protecting
yourself against it. Some things you will probably dismiss without even
thinking: Just how were you going to defend your system against attacks by
rust monsters?
Once you know what you are afraid of and how probable an attack is, you
should have a feeling for the risks you are facing. There are three ways of
handling risk: You can take it, minimize it, or insure against it. The first
option is not as negligent as you might think: Given our budget, most of us
are simply taking the risk of meteor strikes. The third option usually costs
money, which we don't have, so we will ignore it here.
The second option touches on the three major parts of any security process:
prevention, detection, and response. Most computer security deals with
prevention: Making sure the cases are locked so nobody can steal the CPUs,
for example. Detection is usually skimped -- when is the last time you looked
at one of the files in /var/log/? -- and usually little thought is given to
the response, because people figure none of this is going to happen anyway.
Unfortunately, you need all three, always, at least to some extent.
Even if you decide that detection systems like tripwire are too much of a
hassle to install and you don't have the time to read log files every day,
give some thought to how you could tell that your system has been
compromised. In some cases, it will be hard to miss, say, when men with
badges knock at your door and take you away because your computer has been
sending spam related to an improbable sexual act with squirrels to all of
South Korea. Other intrusions might be more subtle. Would you know if
somebody copied the files from your letter folder?
Think about how you would respond to at least the most likely attacks and
failures. What would you do if your hard disk crashed? If you logged in as
root and the system told you that your last log in was on Friday -- except
that you were still in London, England on Friday, singing drinking songs as
you happily stumbled from one pub to the next? With a normal home system and
good backups, you might be able to get away with "reinstall from scratch" as
the standard response to all problems great or small (but make sure that your
backups are not compromised).
By the time you are putting on your socks, you'll have probably found out
that your greatest risks are quite different from those the press talks about
all of the time. If you have no Microsoft products on your network, you don't
have to worry too much about anti-virus software or Active X vulnerabilities.
However, Linux does not enjoy any special bonuses when it comes to power
surges, flooding, or broken fans.
-----------------------------------------------------------------------------
4.1.2. Security Principles
Back to prevention: When you design your system, keep these security
principles in mind:
Building better baskets. Putting all of your files on one computer might seem
like putting all of your eggs in one basket, which proverbial grandmothers
say is a stupid thing to do. In fact, from a security point of view, this can
actually be a good strategy: Since it is almost always easier to defend one
thing than it is to defend many, one basket may be fine as long as you make
sure that it is a very, very good basket.
Avoiding complexity. A centralized system is usually less complex to set up
and to administer: If you have all of your users on one machine, you don't
have to worry about network file systems, network logins, network printers,
and all other kinds of clever but complicated ways to connect computers.
Keeping things simple keeps things safe. This is true as well for the support
machines: They should do one job and one job only.
Encapsulation. This is the process of isolating a part of the system so that
if it is compromised, the whole of the system doesn't go down with it. The
guardian is an example of encapsulation: The dangerous work of connecting to
the Internet is handled by a cheap, expendable machine that gives attackers
few tools to work with. Another example is taking those parts of the system
that the user can actually touch with his grubby little hands -- monitor,
keyboard, and mouse -- and putting them on a Linux Terminal. The mock
mainframe setup, however, is obviously not that good at encapsulation: The
whole idea of doing everything on one machine runs contrary to this concept.
Defense in Depth. Preventative security measures are only ways of buying time
until your response kicks in -- given enough uninterrupted time, the attacker
will always win. To increase the time you have to respond, deploy your
defenses in depth: After the attacker has trekked through kilometers of dense
jungle, he reaches the moat which surrounds a twenty meter high outside wall,
which is followed by a mine field and poisoned bamboo spikes. And in the end,
the secret plans to your magical chocolate machine will not only be in code,
but also written in invisible ink. That's defense in depth.
The guardian is an extension of your defenses; installing a second firewall
on the mock mainframe is another one. It might sound trivial, but use
different passwords for the mock mainframe and the guardian. If you have
other support machines, putting them on a different network also means more
room between them and the attacker. If you have data that you have to keep
confidential at all costs (wink-wink nudge-nudge), encrypt it, or at least
those backup CDs. After a few years of backups, you won't know where they
have all ended up.
But keep in mind that even the deepest defenses only buy you more time. As
Indiana Jones and Lara Croft will tell you, getting by the preventative
measures is the easy part: All you need is a whip or a few well-timed jumps.
The problems start when the locals start shouting and the guys with the guns
arrive.
Choke Points. If there is only one way to get into the system and you can
control that way completely, that system will be easier to secure in time of
danger. We turn to the guardian one more time for an example of a choke
point: Turn off the machine, and you are safe from Internet villains,
provided it is really the only access point. The problem with many networks
is that somewhere, somebody has a connection to the outside that the system
administrator doesn't know about. Think of all the laptops that now come with
a modem or, even worse, a wireless LAN card built in. Connect those laptops
to your net, and you have an instant back door. Remember your history: Your
main gate can be high and strong and crawling with orcs, but miss one single
little spider hole, and two hobbits can ruin your whole day.
-----------------------------------------------------------------------------
4.2. Network Hardware
If you are setting up a network from scratch, go with Fast Ethernet. The
cables and network cards are not that much more expensive than the older, 10
MBit/sec Ethernet. X Windows is bandwidth-hungry, and needs always grow
before they shrink.
(I have no experience of running X terminals over a wireless LAN. If
anybody does, I would be glad to hear from them, because we could put in
another paragraph right here.)
-----------------------------------------------------------------------------
4.3. Network Geography
You can make life a little easier for yourself by picking a sane and
systematic way to name your computers: Pick a set of addresses for your
system based on what each machine does. Internally, use the IPv4 address
space of 192.168.*.* that is reserved for networks without direct contact to
the Internet. For example, let's take 192.168.1.*. The mock mainframe could
be 192.168.1.1, the support machines 192.168.1.10 to 192.168.1.19, and the
terminals 192.168.1.100 to 192.168.1.199. This way, you can immediately see
the type of computer based on the IPv4 number, and the less trusted a machine
is, the larger the last number will be.
Combine this with a naming system that is easy to use. For example, you can
name the mock mainframe fatcat and the terminals kitten00 to kitten99 (with
IPv4 numbers from 192.168.1.100 to 192.168.1.199). Giving the support machines
names based on their function is probably easier than something systematic.
In the feline theme, try claws for a guardian machine or mamacat for a
terminal mother.
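Translated into /etc/hosts on the mock mainframe, such a scheme might look
like this (all names and addresses are, of course, just examples):
127.0.0.1      localhost
192.168.1.1    fatcat
192.168.1.10   claws
192.168.1.11   mamacat
192.168.1.100  kitten00
192.168.1.101  kitten01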
-----------------------------------------------------------------------------
5. Life With Multiple Users
Having everybody using a common machine gives your users far more
opportunity to get in each other's hair. Even though Unix (and therefore
Linux) was designed from the bottom up as a multi-user system, there are only
so many resources available, and having one user hogging them will annoy
everybody. And they will all come to you and say it is your fault.
-----------------------------------------------------------------------------
5.1. Shared Resources
One of your biggest headaches will probably be CD-ROMs and CD-R/Ws. In
theory, they belong on the mock mainframe like everything else, but this
creates lots of problems. Your users need to be able to physically get to the
machine to put the CDs in, which might not be a good idea from the security
point of view. Once the CD is in the drive, you can get various people trying
to mount, unmount or eject it, and getting upset if they can't. Reading a CD
(for example with cdparanoia) can interfere with multimedia programs and
cause sound tracks to skip. Writing CDs is even worse because it requires the
system to pay attention for a certain uninterrupted amount of time. If you
only have one processor on the machine and other users decide to do something
intensive while the burn is going on, the write might fail, and somebody is
going to be really upset, because he just lost a blank CD.
One thing you can do is to move the CD-R/W onto a support machine (the
Burner). Then you can either export the user's home directory by NFS -- which
is, however, exactly the sort of thing we are trying to avoid -- or have the
user create an image of the CD as an ISO file and send it to the support
machine via ftp or scp. The user can then walk over to the machine and burn
it by hand.
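As a rough sketch of that second route (the host name burner, the directory,
and the dev= address are invented -- cdrecord -scanbus will tell you the right
device numbers on your burner machine):
mkisofs -r -o backup.iso ~/photos/
scp backup.iso burner:/tmp/
ssh burner cdrecord -v speed=4 dev=0,0,0 /tmp/backup.iso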
In a family setting, none of this might be a problem. For a larger
configuration, with untrusted users, it could be a big problem. You might end
up telling everybody that they can't burn CDs on this system, period.
Other resources are less of a problem. Traditionally, you used a quota
setting to limit the amount of disk space any single user could use. With
hard disks becoming less expensive by the month, this is not much of a
problem anymore, but depending on your user base, you might consider
installing very large quotas just to be safe. Users, however, are easily
upset by the very idea of quotas, especially if they hit their limit while
most of the harddisk is still free.
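If you do go down the quota road, the classic tools are edquota and repquota
(this assumes quota support in the kernel and a filesystem mounted with the
usrquota option):
edquota -u some_user
repquota -a
The first command opens an editor with the soft and hard limits for that user;
the second summarizes usage and limits for all users.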
-----------------------------------------------------------------------------
5.2. Screen Savers and Other Gimmicks
The original aim of screen savers was to keep whatever was being displayed
on the screen from burning itself into the monitor's phosphorous coating
while you were off to the water cooler. Soon, however, clever, cute, and
intricate screen savers became an end in themselves. Today's screen savers
have become so resource-hungry that some actually require you to have certain
types of hardware (like OpenGL support) before they will run.
If you have a mock mainframe with X Windows, you can be sure that every
single one of your users will have a screen saver setup that will test the
system to its limits (just for fun, log into every terminal attached to the
mainframe once you have set everything up, and let each one run a different
screen saver. Watch the system load while you do this. Try not to whimper too
loudly). To make matters worse, some desktops like KDE let the user set the
screen saver's priority. The idea is that the user can set a low priority,
but in reality, they increase the priority until their jumping OpenGL balls
don't jerk anymore.
Users consider playing around with their screen savers one of their basic
computer rights, so just blocking everything except the "blank screen" mode
can lead to people showing up in your office with pitchforks and torches. One
way around this is to put a wrapper around the screen saver that makes sure
the priority is set low. For example, if your setup uses the xlock command as
a screen saver, you can move it to xlock.real and then create a shell script
named xlock:
#!/bin/bash
# hand everything through to the real xlock, but at the lowest priority
exec nice -n 19 xlock.real "$@"
This is a very crude script, but you get the point. This lets your users keep
their beloved screen savers but makes sure that the performance hit won't be
deadly to the whole system.
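Putting the wrapper in place might look like this; the path is only an
example, and which xlock will tell you where the binary really lives:
mv /usr/X11R6/bin/xlock /usr/X11R6/bin/xlock.real
cp xlock /usr/X11R6/bin/xlock
chmod 755 /usr/X11R6/bin/xlock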
-----------------------------------------------------------------------------
5.3. Idle Terminals
Another annoying habit users have is to walk away from their terminals
while they are still logged in. KDE and Gnome have a "Lock Screen" button
right next to their "Logout" button, but you might have problems getting your
users to use it, at least until the first person finds that somebody has had
fun with his email account.
One way to deal with this is to have the system shut down abandoned
terminals with the idle daemon, which should be included in your
distribution. Use this with care: If you force a user off the system when he
still has some half-written letter on his screen, he isn't going to like it.
-----------------------------------------------------------------------------
6. Going Hardcore: Non-GUI Systems
As nice as KDE and Gnome are, they use system resources like popcorn. If
you are mainly using the desktop as a way to start applications, try a more
lightweight window manager such as Blackbox. Though your distribution should set up the basics for you,
you will probably have to edit the configuration files (in this case, the
blackbox menu file that is specified in ~/.blackbox) for each user. Also,
make sure your users know how to work the environment. At the very least,
teach them that CTRL-ALT-BACKSPACE kills the X server.
But real men and women don't need a graphical user interface (GUI) at all:
They use a command shell such as bash. Before the XFree86 group gave us
graphics, the Free Software Foundation (FSF) had created the GNU tools that
are as rock steady as any piece of software on the planet. They are the heart
of every distribution, and without them, there would be no "Linux" system
(which is why "GNU/Linux" is the more percise term). If you have no choice
but to get by with really weak hardware -- we're talking anything down to a
386SX here -- you can dump X Windows altogether and get along just fine. Even
if you stick to GUIs, some basic knowledge of the shell can help you get far
more out of your system.
-----------------------------------------------------------------------------
6.1. Why the Command Line Is Cool
Think of Linux on the command line as the Willow Rosenberg approach to
computers: Whereas GUIs are as spectacular as a punch on the nose by vampire
slayer Buffy Summers, even a little knowledge of the shell will let you work
nuanced magic of nearly unlimited power with little effort. True fans of the
TV series will realize that there is a warning implied here: The power of the
shell can become habit forming, if not downright addictive, and you can
destroy your whole system with no chance of recovery if you mess things up.
Using bash takes you as close to the raw energies of your machine as you can
get without using a C compiler, and the danger rises accordingly.
It took Willow six years to become a witch powerful enough to end the
world, but it should take you a few weeks at most to become familiar with the
command line. Here are four paragraphs to help you decide if you want to make
the effort:
The power of the command line environment is rooted in its design
philosophy: Each tool is designed to do one job and one job only, but to do
that job superbly. Also, almost every tool can be connected to every other
tool to create processing chains with just a few commands. Since these tools
are (almost) all general purpose, you can solve just about any problem with
the right combination. With these same commands, you can write little
programs (shell scripts) for everyday tasks. If you look closely at the
programs your distributor includes, you will see that a lot of them are in
bash. Other script languages such as Python or Perl might be more powerful,
but the command line is always included and has far less overhead.
It is learning the individual tools of the CLI that is somewhat daunting. A
lot of commands have strange names that don't even pretend to be mnemonic
(the pattern scanning tool awk is named for its creators Aho, Kernighan, and
Weinberger), only make sense in a historical context (the tape archiving
utility tar is now used to distribute compressed files), or look like they
are typos (umount instead of "unmount", passwd instead of "password"). There
can be dozens of options for each command, and they can be just as cryptic.
Because the system was written by hackers in the true sense of the word who
wanted the computer to get the job done and not talk about it, the shell
normally will not ask you for confirmation, even if you tell it to delete
every single file on your hard disk. This is where the end of the world
scenario from Buffy comes in.
Once you have mastered the basics of the shell, however, you will get stuff
done a lot faster, you will understand jokes such as rm -rf /bin/laden, and
you will have a spring in your step and a glint in your eye. This is why even
people who are young enough to have been born after the invention of the
mouse develop a tendency to use X Windows merely as a comfortable way to open
a lot of terminal windows (either xterm or the less resource-hungry rxvt).
The CLI has just about every tool you'll need: mutt or pine for email (real
hard-core basket cases use mail), w3m or lynx for surfing, and of course the
legendary editors vi (more commonly vim these days) or emacs. The obvious
exception to this rule are programs that let you view pictures. But then you
probably aren't interested in that sort of thing anyway, are you.
-----------------------------------------------------------------------------
6.2. Setting Up Text Terminals
Basically, you have the same options for text terminals as you do with X
terminals. Everything is just a bit easier.
For example, you don't have to reboot if you are forced to use a different
operating system: Any program that lets you log in via telnet (on secure,
closed networks) or ssh (everywhere else) will do. Microsoft Windows includes
a telnet client that is best described as rudimentary; for serious work, try
a free Win32 implementation such as Simon Tatham's [http://
www.chiark.greenend.org.uk/~sgtatham/putty/] PuTTY [http://
www.chiark.greenend.org.uk/~sgtatham/putty/]. Apple users with Mac OS X
should have no problems with their clients.
The Linux Terminal Server Project also has a package for text terminals.
The hardware can be as basic as it gets: Go find yourself a 386DX (for those
of you who don't remember the Soviet Union or the first Star Trek series:
This is the original Pentium's grandaddy). The mainboard will probably not
have a PCI slot, so you'll need an ISA graphics card and an ISA network card.
These are so low down the hardware chain you might have problems finding
them, because they are being junked, not sold second hand.
There is no reason, though, why your computer has to be advanced enough to
understand the TCP/IP protocol and be part of your local network at all. You
can connect just about any computer to the serial port(s) of the mock
mainframe: For example, there is a Linux HOWTO for older Macs by Robert
Kiesling (The MacTerminal MINI-HOWTO); in an article in The Linux Gazette
[http://www.linuxgazette.com/issue70/arndt.html], Matthias Arndt shows how to
convert an Atari ST into a terminal; Nicholas Petreley explains in IT
World.com [http://www.itworld.com/Comp/2384/LWD010511penguin2/] how to use
your Palm Pilot. If you can get it connected to the serial port, chances are
you can get it running on Linux. There are special cards with multiple serial
ports for larger setups. Of course, there is a HOWTO for that as well: The
Serial HOWTO by David S. Lawyer.
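If you go the serial route, the usual trick is to have init keep a getty
program running on each serial port so that a login prompt appears there. As a
minimal sketch for a classic SysV-style /etc/inittab, assuming the first
serial port (ttyS0), 9600 baud, and a VT100-compatible terminal, the line
would look something like this (adjust port, speed, and terminal type for your
hardware):
S0:2345:respawn:/sbin/agetty -L 9600 ttyS0 vt100
After adding the line, telinit q tells init to reread its configuration; the
HOWTOs mentioned here cover the details and other getty flavors.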
You can also get special text terminals as individual machines. David S.
Lawyer has written an extensive Linux HOWTO on the subject (
Text-Terminal-HOWTO) that explains how they work, how you set them up, and
why you would want one.
-----------------------------------------------------------------------------
6.3. Useful Shell Commands
To get you started on the shell, here are a few commands that are
especially useful if you are sharing a system. These very basic examples were
chosen to be useful to normal users.
Play nice. The nice command is one of those things that would make the world
a better place if everybody used it more often, but nobody does. It allows
you to lower the scheduling priority of a process so that less important
programs don't get in the way of the important ones.
For example, assume you have a WAV recording of your own voice as you sing
a song under the shower, and you want to convert it to the Ogg Vorbis format
to distribute to your fans on the Internet, all three of them. A simple
command to do this is
oggenc -o shower.ogg shower.wav
Encoding music formats is a CPU intensive process, so performance will drop.
Now, if a few minutes more or less don't matter, just start the line off with
nice:
nice oggenc -o shower.ogg shower.wav
Now the encoding will be run with a lower priority, but you will still have
to wait for it to finish before you can use the shell again. To have the
computer execute a command in the background, add an ampersand ("&") to the
end of the line:
nice oggenc -o shower.ogg shower.wav &
The shell will respond by giving you a job number and a process id (PID), and
then will ask you for the next command.
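To give a rough idea of what this looks like (the job number and PID shown
here are of course made up):
me@mycomputer:~> nice oggenc -o shower.ogg shower.wav &
[1] 4711
me@mycomputer:~>
The jobs command lists what is still running in the background, and fg brings
a job back into the foreground if you change your mind.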
The nice command is a good example of the power that was lost when
graphical interfaces became the default: There is no simple way to adjust the
priority of a process with a mouse-driven interface.
Do it later. Another way to spread the load is to have an intensive process
start at a time when the system is not being used much. Depending on who is
on the system with you, this could be three o'clock in the morning or any
time until two o'clock in the afternoon.
The at command lets you set a time to start a program or any other process
that can be run from the command line. To have our shower song encoded at
eight in the evening when you are out watching meaningful French love films,
you enter the command "at" followed by the time you want execution to start,
and then hit the ENTER-key. Then you type in the command itself, followed by
another ENTER, and finally a CTRL-d to finish the sequence:
me@mycomputer:~> at 20:00
warning: commands will be executed using /bin/sh
at> nice oggenc -o shower.ogg shower.wav
at> <CTRL-d>
job 1 at 2003-09-28 20:00
The at command accepts just about any time format: Americans get to use their
quaint "08:00pm" notation instead of "20:00", and there is a whole set of
shortcuts like midnight, noon, or even teatime. at sends the output of the
command to your mailbox.
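If typing at an at> prompt feels clumsy, you can also pipe the command in and
use one of those shortcuts as the time:
echo "nice oggenc -o shower.ogg shower.wav" | at teatime
atq lists the jobs you have queued up, and atrm followed by the job number
removes one again.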
Do it when you are bored. A close relative of at uses system load, not time
of day, to determine when a command should be run: batch saves the execution
for a time when the system load has fallen below a certain value (to see what
your system load currently is, run uptime from a shell or xload under X
Windows). The documentation gives this value as 0.8. The syntax for batch is
basically the same as for at, except that the time field is optional.
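Apart from that, a batch session looks just like the at session above; the
job number and date below are only an illustration:
me@mycomputer:~> batch
warning: commands will be executed using /bin/sh
at> nice oggenc -o shower.ogg shower.wav
at> <CTRL-d>
job 2 at 2003-09-28 14:02
The job then sits in the queue until the load average has dropped below the
threshold.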
-----------------------------------------------------------------------------
7. Odds and Ends
7.1. Mock Mainframe Case Studies
Two People Home Setup (Sep 2003). Mainframe: Dual Intel Pentium II Xeon 450
MHz 512 KByte cache CPU, 384 MByte PC-100 RAM, PCI graphics card. Terminal:
AMD K6-2 300 MHz, 64 MByte SDRAM. Guardian: PentiumPro 200 MHz, 64 MByte RAM.
Other: AMD Duron 1.0 GHz, 512 MByte DDR RAM (for games).
Guardian and Terminals are on two different networks. Regular load: Two
people with KDE 3.1 with kmail, konqueror and/or Mozilla Firebird under SuSE
8.2.
-----------------------------------------------------------------------------
7.2. And Finally...
This text is dedicated to my uncle Gary W. Marklund, who gave me the Unix
bug.