docbook rev of NFS-HOWTO; initial entry

This commit is contained in:
gferg 2001-01-19 15:06:05 +00:00
parent 2d89579c35
commit f14a7bd604
10 changed files with 2361 additions and 0 deletions

View File

@ -0,0 +1,61 @@
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V3.1//EN" [
<!ENTITY preamble SYSTEM "preamble.sgml">
<!ENTITY intro SYSTEM "intro.sgml">
<!ENTITY server SYSTEM "server.sgml">
<!ENTITY client SYSTEM "client.sgml">
<!ENTITY performance SYSTEM "performance.sgml">
<!ENTITY security SYSTEM "security.sgml">
<!ENTITY troubleshooting SYSTEM "troubleshooting.sgml">
<!ENTITY interoperability SYSTEM "interop.sgml">
]>
<!-- make with the following cmd
mkdir html
cd html
jade -t sgml -i html -d /usr/lib/sgml/stylesheets/ldp.dsl\#html ../nfs-howto.sgml
-->
<article>
<artheader>
<title>Linux NFS-HOWTO</title>
<author>
<firstname>Tavis</firstname>
<surname>Barr</surname>
<affiliation>
<address>
<email>tavis@mahler.econ.columbia.edu</email>
</address>
</affiliation>
</author>
<author>
<firstname>Nicolai</firstname>
<surname>Langfeldt</surname>
<affiliation>
<address>
<email>janl@linpro.no</email>
</address>
</affiliation>
</author>
<author>
<firstname>Seth</firstname>
<surname>Vidal</surname>
<affiliation>
<address>
<email>skvidal@phy.duke.edu</email>
</address>
</affiliation>
</author>
<edition>Draft</edition>
<pubdate>2000-12-28</pubdate>
</artheader>
&preamble;
&intro;
&server;
&client;
&performance;
&security;
&troubleshooting;
&interoperability;
</article>

View File

@ -0,0 +1,153 @@
<sect1 id="client">
<title>Setting up an NFS Client</title>
<sect2 id="remotemount">
<title>Mounting remote directories</title>
<para>
Before beginning, you should double-check to make sure your mount
program is new enough (version 2.10m if you want to use Version 3
NFS), and that the client machine supports NFS mounting, though most
standard distributions do. If you are using a 2.2 or later kernel
with the <filename>/proc</filename> filesystem you can check the latter by reading the
file <filename>/proc/filesystems</filename> and making sure there is a line containing
nfs. If not, you will need to build (or download) a kernel that has
NFS support built in.
</para>
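<para>
For example, on a kernel with NFS support compiled in, the check might
look like this (the exact output will vary):
<screen>
# grep nfs /proc/filesystems
nodev   nfs
</screen>
If the command prints nothing, NFS support is missing from your kernel.
</para>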
<para>
To begin using a machine as an NFS client, you will need the portmapper
running on that machine, and to use NFS file locking, you will
also need <filename>rpc.statd</filename> and <filename>rpc.lockd</filename>
running on both the client and the server. Most recent distributions
start those services by default at boot time; if yours doesn't, see
<xref linkend="config"> for information on how to start them up.
</para>
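<para>
As a quick sanity check, you can ask the portmapper what services it knows
about with <command>rpcinfo -p</command>. A sketch of what you might hope
to see (version, protocol, and port numbers will differ on your machine):
<screen>
# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   1025  status
    100021    1   udp   1026  nlockmgr
</screen>
If portmapper, status (rpc.statd), and nlockmgr (lockd) are missing from
the list, they are not running yet.
</para>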
<para>
With portmapper, lockd, and statd running, you should now be able to
mount the remote directory from your server just the way you mount
a local hard drive, with the mount command. Continuing our example
from the previous section, suppose our server above is called
<emphasis>master.foo.com</emphasis>, and we want to mount the <filename>/home</filename> directory on
<emphasis>slave1.foo.com</emphasis>. Then, all we have to do, from the root prompt on
<emphasis>slave1.foo.com</emphasis>, is type:
<screen>
# mount master.foo.com:/home /mnt/home
</screen>
and the directory <filename>/home</filename> on master will appear as the directory
<filename>/mnt/home</filename> on <emphasis>slave1</emphasis>.
</para>
<para>
If this does not work, see the Troubleshooting section (<xref linkend="troubleshooting">).
</para>
<para>
You can unmount the file system by typing
<screen>
# umount /mnt/home
</screen>
just like you would for a local file system.
</para>
</sect2>
<sect2 id="boot-time-nfs">
<title>Getting NFS File Systems to Be Mounted at Boot Time</title>
<para>
NFS file systems can be added to your <filename>/etc/fstab</filename> file the same way
local file systems can, so that they mount when your system starts
up. The only difference is that the file system type will be
set to <userinput>nfs</userinput> and the dump and fsck order (the last two entries) will
have to be set to zero. So for our example above, the entry in
<filename>/etc/fstab</filename> would look like:
<programlisting>
# device mountpoint fs-type options dump fsckorder
...
   master.foo.com:/home  /mnt/home  nfs      rw           0    0
...
</programlisting>
</para>
<para>
See the man pages for <emphasis>fstab(5)</emphasis> if you are unfamiliar
with the syntax of this file. If you are using an automounter such as
amd or autofs, the options in the corresponding fields of your mount
listings should look very similar if not identical.
</para>
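<para>
As an illustration only (map file names and exact syntax vary between
automounter implementations and distributions), an autofs setup for the
same mount might consist of a line like this in <filename>/etc/auto.master</filename>:
<programlisting>
/mnt    /etc/auto.mnt
</programlisting>
together with an entry like the following in the hypothetical map file
<filename>/etc/auto.mnt</filename>:
<programlisting>
home    -rw    master.foo.com:/home
</programlisting>
See the <emphasis>autofs(5)</emphasis> man page for the details of your version.
</para>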
<para>
At this point you should have NFS working, though a few tweaks
may still be necessary to get it to work well. You should also
read <xref linkend="security"> to be sure your setup is reasonably secure.
</para>
</sect2>
<sect2 id="Mountoptions">
<title>Mount options</title>
<sect3 id="soft-vs-hard">
<title>Soft vs. Hard Mounting</title>
<para>
There are some options you should consider adding at once. They
govern the way the NFS client handles a server crash or network
outage. One of the cool things about NFS is that it can handle this
gracefully, provided you set up the clients correctly. There are two
distinct failure modes:
</para>
<para>
<glosslist>
<glossentry>
<glossterm>soft</glossterm>
<glossdef>
<para>
If a file request fails, the NFS client will report an
error to the process on the client machine requesting the file
access. Some programs can handle this with composure, but most
won't. We do not recommend using this setting; it is a recipe
for corrupted files and lost data. You should especially not
use this for mail disks --- if you value your mail, that is.
</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>hard</glossterm>
<glossdef>
<para>
The program accessing a file on an NFS-mounted file system
will hang when the server crashes. The process cannot be
interrupted or killed (except by a "sure kill") unless you also
specify intr. When the NFS server is back online the program will
continue undisturbed from where it was. We recommend using hard,
intr on all NFS-mounted file systems.
</para>
</glossdef>
</glossentry>
</glosslist>
</para>
<para>
Picking up from the previous example, the fstab entry would now
look like:
<programlisting>
# device       mountpoint     fs-type    options        dump   fsckorder
...
master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0
...
</programlisting>
</para>
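<para>
The same options can of course be given on the command line for a one-off
mount; for example, using the mount point from the earlier example:
<screen>
# mount -o rw,hard,intr master.foo.com:/home /mnt/home
</screen>
</para>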
</sect3>
<sect3 id="blocksize">
<title>Setting Block Size to Optimize Transfer Speeds</title>
<para>
The <userinput>rsize</userinput> and <userinput>wsize</userinput> mount
options specify the size of the chunks of data that the client and
server pass back and forth to each other.
</para>
<para>
The defaults may be too big or too small; there is no single size that
works well on all or most setups. On the one hand, some combinations of
Linux kernels and network cards (largely on older machines) cannot
handle large blocks. On the other hand, if they can handle
larger blocks, a bigger size might be faster.
</para>
<para>
Getting the block size right is an important factor in performance and
is a must if you are planning to use the NFS server in a production
environment. See <xref linkend="performance"> for details.
</para>
</sect3>
</sect2>
</sect1>

View File

@ -0,0 +1,268 @@
<sect1 id="interop">
<title>Using Linux NFS with Other OSes</title>
<para>
Every operating system, Linux included, has quirks and deviations
in the behavior of its NFS implementation -- sometimes because
the protocols are vague, sometimes because they leave gaping
security holes. Linux will work properly with all major vendors'
NFS implementations, as far as we know. However, there may be
extra steps involved to make sure the two OSes are communicating
clearly with one another. This section details those steps.
</para>
<para>
In general, it is highly ill-advised to attempt to use a Linux
machine with a kernel before 2.2.18 as an NFS server for non-Linux
clients. Implementations with older kernels may work fine as
clients; however, if you are using one of these kernels and get
stuck, the first piece of advice we would give is to upgrade
your kernel and see if the problems go away. The user-space NFS
implementations also do not work well with non-Linux clients.
</para>
<para>
Following is a list of known issues for using Linux together with
major operating systems.
</para>
<sect2 id="aix">
<title>AIX</title>
<sect3 id="aixserver">
<title>Linux Clients and AIX Servers</title>
<para>
The format for the <filename>/etc/exports</filename> file for our example in <xref linkend="server"> is:
<programlisting>
/usr slave1.foo.com:slave2.foo.com,access=slave1.foo.com:slave2.foo.com
/home slave1.foo.com:slave2.foo.com,rw=slave1.foo.com:slave2.foo.com
</programlisting>
</para>
</sect3>
<sect3 id="aixclients">
<title>AIX Clients and Linux Servers</title>
<para>
AIX uses the file <filename>/etc/filesystems</filename> instead of <filename>/etc/fstab</filename>.
A sample entry, based on the example in <xref linkend="client">, looks like this:
<programlisting>
/mnt/home:
dev = "/home"
vfs = nfs
nodename = master.foo.com
mount = true
options = bg,hard,intr,rsize=1024,wsize=1024,vers=2,proto=udp
account = false
</programlisting>
</para>
<para>
<orderedlist numeration="lowerroman">
<listitem>
<para>
Version 4.3.2 of AIX requires that file systems be exported with
the insecure option, which causes NFS to listen to requests from
insecure ports (i.e., ports above 1024, to which non-root users can
bind). Older versions of AIX do not seem to require this.
</para>
</listitem>
<listitem>
<para>
AIX clients will default to mounting version 3 NFS over TCP.
If your Linux server does not support this, then you may need
to specify vers=2 and/or proto=udp in your mount options.
</para>
</listitem>
<listitem>
<para>
Using netmasks in <filename>/etc/exports</filename> seems to sometimes cause clients
to lose mounts when another client is reset. This can be fixed
by listing out hosts explicitly.
</para>
</listitem>
<listitem>
<para>
Apparently automount in AIX 4.3.2 is rather broken.
</para>
</listitem>
</orderedlist>
</para>
</sect3>
</sect2>
<sect2 id="bsd">
<title>BSD</title>
<sect3 id="bsdserver">
<title>BSD servers and Linux clients</title>
<para>
BSD kernels tend to work better with larger block sizes.
</para>
</sect3>
<sect3 id="bsdclient">
<title>Linux servers and BSD clients</title>
<para>
Some versions of BSD may make requests to the server from insecure ports,
in which case you will need to export your volumes with the insecure
option. See the man page for <emphasis>exports(5)</emphasis> for more details.
</para>
</sect3>
</sect2>
<sect2 id="tru64">
<title>Compaq Tru64 Unix</title>
<sect3 id="tru64server">
<title>Tru64 Unix Servers and Linux Clients</title>
<para>
In general, Tru64 Unix servers work quite smoothly with Linux clients.
The format for the <filename>/etc/exports</filename> file for our example in <xref linkend="server"> is:
<programlisting>
/usr slave1.foo.com:slave2.foo.com \
-access=slave1.foo.com:slave2.foo.com \
/home slave1.foo.com:slave2.foo.com \
-rw=slave1.foo.com:slave2.foo.com \
-root=slave1.foo.com:slave2.foo.com
</programlisting>
</para>
<para>
Tru64 checks the <filename>/etc/exports</filename> file every time there is a mount request
so you do not need to run the <command>exportfs</command> command; in fact on many
versions of Tru64 Unix the command does not exist.
</para>
</sect3>
<sect3 id="tru64client">
<title>Linux Servers and Tru64 Unix Clients</title>
<para>
There are two issues to watch out for here. First, Tru64 Unix mounts
using Version 3 NFS by default. You will see mount errors if your
Linux server does not support Version 3 NFS. Second, in Tru64 Unix
4.x, NFS locking requests are made as the user <emphasis>daemon</emphasis>. You will therefore
need to specify the <userinput>insecure_locks</userinput> option on all volumes you export
to a Tru64 Unix 4.x client; see the <command>exports</command> man pages for details.
</para>
</sect3>
</sect2>
<sect2 id="hpux">
<title>HP-UX</title>
<sect3 id="hpuxserver">
<title>HP-UX Servers and Linux Clients</title>
<para>
A sample <filename>/etc/exports</filename> entry on HP-UX looks like this:
<programlisting>
/usr -ro,access=slave1.foo.com:slave2.foo.com
/home -rw=slave1.foo.com:slave2.foo.com:root=slave1.foo.com:slave2.foo.com
</programlisting>
(The <userinput>root</userinput> option is listed in the last entry for informational
purposes only; its use is not recommended unless necessary.)
</para>
</sect3>
<sect3 id="hpuxclient">
<title>Linux Servers and HP-UX Clients</title>
<para>
HP-UX diskless clients will require at least a kernel version 2.2.19
(or patched 2.2.18) for device files to export correctly.
</para>
</sect3>
</sect2>
<sect2 id="irix">
<title>IRIX</title>
<sect3 id="irixserver">
<title>IRIX Servers and Linux Clients</title>
<para>
A sample <filename>/etc/exports</filename> entry on IRIX looks like this:
<programlisting>
/usr -ro,access=slave1.foo.com:slave2.foo.com
/home -rw=slave1.foo.com:slave2.foo.com:root=slave1.foo.com:slave2.foo.com
</programlisting>
(The <userinput>root</userinput> option is listed in the last entry for informational
purposes only; its use is not recommended unless necessary.)
</para>
<para>
There are reportedly problems when using the <userinput>nohide</userinput> option on
exports to Linux 2.2-based systems. This problem is fixed in the
2.4 kernel. As a workaround, you can export and mount lower-down
file systems separately.
</para>
</sect3>
<sect3 id="irixclient">
<title>IRIX clients and Linux servers</title>
<para>
There are no known interoperability issues.
</para>
</sect3>
</sect2>
<sect2 id="solaris">
<title>Solaris</title>
<sect3 id="solarisserver">
<title>Solaris Servers</title>
<para>
Solaris has a slightly different format on the server end from
other operating systems. Instead of <filename>/etc/exports</filename>, the configuration
file is <filename>/etc/dfs/dfstab</filename>. Entries are of the form of a "share"
command, where the syntax for the example in <xref linkend="server"> would
look like
<programlisting>
share -o rw=slave1,slave2 -d "Master Usr" /usr
</programlisting>
and instead of running <command>exportfs</command> after editing, you run <command>shareall</command>.
</para>
<para>
Solaris servers are especially sensitive to packet size. If you
are using a Linux client with a Solaris server, be sure to set
<userinput>rsize</userinput> and <userinput>wsize</userinput> to 32768 at mount time.
</para>
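<para>
For example, a one-off mount from a Solaris server might look something
like this (the server name and paths are just placeholders):
<screen>
# mount -o rsize=32768,wsize=32768 sol-server.foo.com:/export/home /mnt/home
</screen>
</para>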
<para>
Finally, there is an issue with root squashing on Solaris: root gets
mapped to the user <emphasis>noone</emphasis>, which is not the same as the user <emphasis>nobody</emphasis>.
If you are having trouble with file permissions as root on the client
machine, be sure to check that the mapping works as you expect.
</para>
</sect3>
<sect3 id="solarisclient">
<title>Solaris Clients</title>
<para>
Solaris clients will regularly produce the following message:
</para>
<screen>
svc: unknown program 100227 (me 100003)
</screen>
<para>
This happens because Solaris clients, when they mount, try to obtain
ACL information, which Linux does not support. The messages
can safely be ignored.
</para>
<para>
There are two known issues with diskless Solaris clients: First, a kernel
version of at least 2.2.19 is needed to get <filename>/dev/null</filename> to export
correctly. Second, the packet size may need to be set extremely
small (i.e., 1024) on diskless sparc clients because the clients
do not know how to assemble packets in reverse order. This can be
done from <filename>/etc/bootparams</filename> on the clients.
</para>
</sect3>
</sect2>
<sect2 id="sunos">
<title>SunOS</title>
<para>
SunOS only has NFS Version 2 over UDP.
</para>
<sect3 id="sunosserver">
<title>SunOS Servers</title>
<para>
On the server end, SunOS uses the most traditional format for its
<filename>/etc/exports</filename> file. The example in <xref linkend="server"> would look like:
<programlisting>
/usr -access=slave1.foo.com,slave2.foo.com
/home -rw=slave1.foo.com,slave2.foo.com, root=slave1.foo.com,slave2.foo.com
</programlisting>
</para>
</sect3>
<sect3 id="sunosclient">
<title>SunOS Clients</title>
<para>
Be advised that SunOS makes all NFS locking requests as daemon, and
therefore you will need to add the <userinput>insecure_locks</userinput> option to any
volumes you export to a SunOS machine. See the <command>exports</command> man page
for details.
</para>
</sect3>
</sect2>
</sect1>

View File

@ -0,0 +1,148 @@
<sect1 id="intro">
<title>Introduction</title>
<sect2 id="what">
<title>What is NFS?</title>
<para>
The Network File System (NFS) was developed to allow machines
to mount a disk partition on a remote machine as if it were on
a local hard drive. This allows for fast, seamless sharing of
files across a network.
</para>
<para>
It also gives the potential for unwanted people to access your
hard drive over the network (and thereby possibly read your email
and delete all your files as well as break into your system) if
you set it up incorrectly. So please read the Security section of
this document carefully if you intend to implement an NFS setup.
</para>
<para>
There are other systems that provide similar functionality to NFS.
Samba provides file services to Windows clients. The Andrew File
System from IBM (<ulink url="http://www.transarc.com/Product/EFS/AFS/index.html">http://www.transarc.com/Product/EFS/AFS/index.html</ulink>),
recently open-sourced, provides a file sharing mechanism with some
additional security and performance features. The Coda File System
(<ulink url="http://www.coda.cs.cmu.edu/">http://www.coda.cs.cmu.edu/</ulink>) is still in development as of this writing
but is designed to work well with disconnected clients. Many of the
features of the Andrew and Coda file systems are slated for inclusion
in the next version of NFS (Version 4) (<ulink url="http://www.nfsv4.org">http://www.nfsv4.org</ulink>). The
advantage of NFS today is that it is mature, standard, well understood,
and supported robustly across a variety of platforms.
</para>
</sect2>
<sect2 id="scope">
<title>What is this HOWTO and what is it not?</title>
<para>
This HOWTO is intended as a complete, step-by-step guide to setting
up NFS correctly and effectively. Setting up NFS involves two steps,
namely configuring the server and then configuring the client. Each
of these steps is dealt with in order. The document then offers
some tips for people with particular needs and hardware setups, as
well as security and troubleshooting advice.
</para>
<para>
This HOWTO is not a description of the guts and
underlying structure of NFS. For that you may wish to read
<emphasis>Managing NFS and NIS</emphasis> by Hal Stern, published by O'Reilly &amp;
Associates, Inc. While that book is severely out of date, much
of the structure of NFS has not changed, and the book describes it
very articulately. A much more advanced and up-to-date technical
description of NFS is available in <emphasis>NFS Illustrated</emphasis> by Brent Callaghan.
</para>
<para>
This document is also not intended as a complete reference manual,
and does not contain an exhaustive list of the features of Linux
NFS. For that, you can look at the man pages for <emphasis>nfs(5)</emphasis>,
<emphasis>exports(5)</emphasis>, <emphasis>mount(8)</emphasis>, <emphasis>fstab(5)</emphasis>,
<emphasis>nfsd(8)</emphasis>, <emphasis>lockd(8)</emphasis>, <emphasis>statd(8)</emphasis>,
<emphasis>rquotad(8)</emphasis>, and <emphasis>mountd(8)</emphasis>.
</para>
<para>
It will also not cover PC-NFS, which is considered obsolete (users
are encouraged to use Samba to share files with PCs), or NFS
Version 4, which is still in development.
</para>
</sect2>
<sect2 id="knowprereq">
<title>Knowledge Pre-Requisites</title>
<para>
You should know some basic things about TCP/IP networking before
reading this HOWTO; if you are in doubt, read the
<ulink url="http://www.linuxdoc.org/HOWTO/Networking-Overview-HOWTO.html">Networking-
Overview-HOWTO</ulink>.
</para>
</sect2>
<sect2 id="swprereq">
<title>Software Pre-Requisites: Kernel Version and nfs-utils</title>
<para>
The difference between Version 2 NFS and Version 3 NFS will be
explained later on; for now, you might simply take the suggestion
that you will need NFS Version 3 if you are installing a dedicated
or high-volume file server. NFS Version 2 should be fine for
casual use.
</para>
<para>
NFS Version 2 has been around for quite some time now (at least
since the 1.2 kernel series); however, you will need a kernel version
of at least 2.2.18 if you wish to do any of the following:
<itemizedlist>
<listitem><para>Mix Linux NFS with other operating systems' NFS</para></listitem>
<listitem><para>Use file locking reliably over NFS</para></listitem>
<listitem><para>Use NFS Version 3.</para></listitem>
</itemizedlist>
</para>
<para>
There are also patches available for kernel versions above 2.2.14
that provide the above functionality. Some of them can be downloaded
from the Linux NFS homepage. If your kernel version is between 2.2.14
and 2.2.17 and you have the source code on hand, you can tell if these
patches have been added because NFS Version 3 server support will be
a configuration option. However, unless you have some particular
reason to use an older kernel, you should upgrade because many bugs
have been fixed along the way.
</para>
<para>
Version 3 functionality will also require the nfs-utils package of
at least version 0.1.6, and mount version 2.10m or newer. However
because nfs-utils and mount are fully backwards compatible, and because
newer versions have lots of security and bug fixes, there is no good
reason not to install the newest nfs-utils and mount packages if you
are beginning an NFS setup.
</para>
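<para>
How you check the installed versions depends on your distribution; as a
rough sketch, on an RPM-based system something like the following might
do (the package names are assumptions, check your distribution's
documentation):
<screen>
# uname -r
# rpm -q nfs-utils mount
</screen>
</para>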
<para>
All 2.4 and higher kernels have full NFS Version 3 functionality.
</para>
<para>
All kernels after 2.2.18 support NFS over TCP on the client side.
As of this writing, server-side NFS over TCP only exists in the
later 2.2 series (but not yet in the 2.4 kernels), is considered
experimental, and is somewhat buggy.
</para>
<para>
Because so many of the above functionalities were introduced in
kernel version 2.2.18, this document was written to be consistent
with kernels above this version (including 2.4.x). If you have an
older kernel, this document may not describe your NFS system
correctly.
</para>
<para>
As we write this document, NFS version 4 is still in development
as a protocol, and it will not be dealt with here.
</para>
</sect2>
<sect2 id="furtherhelp">
<title>Where to get help and further information</title>
<para>
As of November 2000, the Linux NFS homepage is at
<ulink url="http://nfs.sourceforge.net">http://nfs.sourceforge.net</ulink>. Please check there for NFS related
mailing lists as well as the latest version of nfs-utils, NFS
kernel patches, and other NFS related packages.
</para>
<para>
You may also wish to look at the man pages for <emphasis>nfs(5)</emphasis>,
<emphasis>exports(5)</emphasis>, <emphasis>mount(8)</emphasis>, <emphasis>fstab(5)</emphasis>,
<emphasis>nfsd(8)</emphasis>, <emphasis>lockd(8)</emphasis>, <emphasis>statd(8)</emphasis>,
<emphasis>rquotad(8)</emphasis>, and <emphasis>mountd(8)</emphasis>.
</para>
</sect2>
</sect1>

View File

@ -0,0 +1,47 @@
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V3.1//EN" [
<!ENTITY preamble SYSTEM "preamble.sgml">
<!ENTITY intro SYSTEM "intro.sgml">
<!ENTITY server SYSTEM "server.sgml">
<!ENTITY client SYSTEM "client.sgml">
]>
<article>
<artheader>
<title>Linux NFS-HOWTO</title>
<author>
<firstname>Tavis</firstname>
<surname>Barr</surname>
<affiliation>
<address>
<email>tavis@mahler.econ.columbia.edu</email>
</address>
</affiliation>
</author>
<author>
<firstname>Nicolai</firstname>
<surname>Langfeldt</surname>
<affiliation>
<address>
<email>janl@linpro.no</email>
</address>
</affiliation>
</author>
<author>
<firstname>Seth</firstname>
<surname>Vidal</surname>
<affiliation>
<address>
<email>skvidal@phy.duke.edu</email>
</address>
</affiliation>
</author>
<edition>Draft</edition>
<date>December 28, 2000</date>
</artheader>
&preamble;
&intro;
&server;
&client;
</article>

View File

@ -0,0 +1,252 @@
<sect1 id="performance">
<title>Optimizing NFS Performance</title>
<para>
Getting network settings right can improve NFS performance many times
over -- a tenfold increase in transfer speeds is not unheard of.
The most important things to get right are the <userinput>rsize</userinput>
and <userinput>wsize</userinput> <command>mount</command> options. Other factors listed below
may affect people with particular hardware setups.
</para>
<sect2 id="blocksizes">
<title>Setting Block Size to Optimize Transfer Speeds</title>
<para>
The <userinput>rsize</userinput> and <userinput>wsize</userinput>
<command>mount</command> options specify the size of the chunks of data
that the client and server pass back and forth to each other. If no
<userinput>rsize</userinput> and <userinput>wsize</userinput> options
are specified, the default varies by which version of NFS we are using.
4096 bytes is the most common default, although for TCP-based mounts
in 2.2 kernels, and for all mounts beginning with 2.4 kernels, the
server specifies the default block size.
</para>
<para>
The defaults may be too big or too small. On the one hand, some
combinations of Linux kernels and network cards (largely on older
machines) cannot handle blocks that large. On the other hand, if they
can handle larger blocks, a bigger size might be faster.
</para>
<para>
So we'll want to experiment and find an rsize and wsize that works
and is as fast as possible. You can test the speed of your options
with some simple commands.
</para>
<para>
The first of these commands transfers 16384 blocks of 16k each from
the special file <filename>/dev/zero</filename> (which if you read it
just spits out zeros <emphasis>really</emphasis> fast) to the mounted partition. We will
time it to see how long it takes. So, from the client machine, type:
<screen>
# time dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384
</screen>
</para>
<para>
This creates a 256MB file of zeroed bytes. In general, you should
create a file that's at least twice as large as the system RAM
on the server, but make sure you have enough disk space! Then read
back the file into the great black hole on the client machine
(<filename>/dev/null</filename>) by typing the following:
<screen>
# time dd if=/mnt/home/testfile of=/dev/null bs=16k
</screen>
</para>
<para>
Repeat this a few times and average how long it takes. Be sure to
unmount and remount the filesystem each time (both on the client and,
if you are zealous, locally on the server as well), which should clear
out any caches.
</para>
<para>
Then unmount, and mount again with a larger and smaller block size.
They should probably be multiples of 1024, and not larger than
8192 bytes since that's the maximum size in NFS version 2. (Though
if you are using Version 3 you might want to try up to 32768.)
Wisdom has it that the block size should be a power of two since most
of the parameters that would constrain it (such as file system block
sizes and network packet size) are also powers of two. However, some
users have reported better successes with block sizes that are not
powers of two but are still multiples of the file system block size
and the network packet size.
</para>
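<para>
For example, to remount the file system from our earlier example with 8k
blocks (assuming nothing is using the mount at the time):
<screen>
# umount /mnt/home
# mount -o rsize=8192,wsize=8192 master.foo.com:/home /mnt/home
</screen>
</para>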
<para>
Directly after mounting with a larger size, cd into the mounted
file system, do things like ls, and explore the file system a bit to make
sure everything is as it should be. If the rsize/wsize is too large
the symptoms are very odd and not 100% obvious. A typical symptom
is incomplete file lists when doing 'ls', with no error messages,
or reads of files failing mysteriously with no error messages. After
establishing that the given rsize/wsize works you can do the speed
tests again. Different server platforms are likely to have different
optimal sizes. SunOS and Solaris are reputedly a lot faster with 4096
byte blocks than with anything else.
</para>
<para>
<emphasis>Remember to edit <filename>/etc/fstab</filename> to reflect the rsize/wsize you found.</emphasis>
</para>
</sect2>
<sect2 id="packet-and-network">
<title>Packet Size and Network Drivers</title>
<para>
There are many shoddy network drivers available for Linux,
including for some fairly standard cards.
</para>
<para>
Try pinging back and forth between the two machines with large
packets using the <option>-f</option> and <option>-s</option>
options with <command>ping</command> (see <command>man ping</command>
for more details), and see if a lot of packets get dropped or if
replies take a long time. If so, you may have a problem
with the performance of your network card.
</para>
<para>
To correct such a problem, you may wish to reconfigure the packet
size that your network card uses. Very often there is a constraint
somewhere else in the network (such as a router) that causes a
smaller maximum packet size between two machines than what the
network cards on the machines are actually capable of. TCP should
autodiscover the appropriate packet size for a network, but UDP
will simply stay at a default value. So determining the appropriate
packet size is especially important if you are using NFS over UDP.
</para>
<para>
You can test for the network packet size using the tracepath command:
From the client machine, just type <command>tracepath [server] 2049</command>
and the path MTU should be reported at the bottom. You can then set the
MTU on your network card equal to the path MTU, by using the MTU option
to <command>ifconfig</command>, and see if fewer packets get dropped.
See the <command>ifconfig</command> man pages for details on how to reset the MTU.
</para>
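<para>
As a sketch, using the example hosts from earlier (the interface name and
MTU value below are only placeholders):
<screen>
# tracepath master.foo.com 2049
# ifconfig eth0 mtu 1492
</screen>
</para>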
</sect2>
<sect2 id="nfsd-instance">
<title>Number of Instances of NFSD</title>
<para>
Most startup scripts, Linux and otherwise, start 8 instances of nfsd.
In the early days of NFS, Sun decided on this number as a rule of thumb,
and everyone else copied. There are no good measures of how many
instances are optimal, but a more heavily-trafficked server may require
more. If you are using a 2.4 or higher kernel and you want to see how
heavily each nfsd thread is being used, you can look at the file
<filename>/proc/net/rpc/nfsd</filename>. The last ten numbers on the
<emphasis>th</emphasis> line in that file indicate the number of seconds
that the thread usage was at that percentage of the maximum allowable.
If you have a large number in the top three deciles, you may wish to
increase the number of <command>nfsd</command> instances. This is done
by passing the desired number of instances as a command-line option when
starting <command>nfsd</command>. See the <command>nfsd</command> man page for
more information.
</para>
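<para>
As a rough sketch, you might check the thread usage and then start a
larger number of instances by hand (on most distributions the instance
count is normally set in the init script instead; 16 below is just an
example):
<screen>
# grep ^th /proc/net/rpc/nfsd
# rpc.nfsd 16
</screen>
</para>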
</sect2>
<sect2 id="memlimits">
<title>Memory Limits on the Input Queue</title>
<para>
On 2.2 and 2.4 kernels, the socket input queue, where requests
sit while they are currently being processed, has a small default
size limit of 64k. This means that if you are running 8 instances of
<command>nfsd</command>, each will only have 8k to store requests while it processes
them.
</para>
<para>
You should consider increasing this number to at least 256k for <command>nfsd</command>.
This limit is set in the proc file system using the files
<filename>/proc/sys/net/core/rmem_default</filename> and <filename>/proc/sys/net/core/rmem_max</filename>.
It can be increased in three steps; the following method is a bit of
a hack but should work and should not cause any problems:
</para>
<para>
<orderedlist numeration="loweralpha">
<listitem>
<para>Increase the size listed in the file:
<programlisting>
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max
</programlisting>
</para>
</listitem>
<listitem>
<para>
Restart <command>nfsd</command>, e.g., type <command>/etc/rc.d/init.d/nfsd restart</command> on Red Hat
</para>
</listitem>
<listitem>
<para>
Return the size limits to their normal size in case other kernel systems depend on it:
<programlisting>
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max
</programlisting>
</para>
<para>
<emphasis>
Be sure to perform this last step because machines have been reported
to crash if these values are left changed for long periods of time.
</emphasis>
</para>
</listitem>
</orderedlist>
</para>
</sect2>
<sect2 id="frag-overflow">
<title>Overflow of Fragmented Packets</title>
<para>
The NFS protocol uses fragmented UDP packets. The kernel has
a limit of how many fragments of incomplete packets it can
buffer before it starts throwing away packets. With 2.2 kernels
that support the <filename>/proc</filename> filesystem, you can
specify how many by editing the files
<filename>/proc/sys/net/ipv4/ipfrag_high_thresh</filename> and
<filename>/proc/sys/net/ipv4/ipfrag_low_thresh</filename>.
</para>
<para>
Once the number of unprocessed, fragmented packets reaches the
number specified by <userinput>ipfrag_high_thresh</userinput> (in bytes), the kernel
will simply start throwing away fragmented packets until the number
of incomplete packets reaches the number specified
by <userinput>ipfrag_low_thresh</userinput>. (With 2.2 kernels, the default is usually 256K).
This will look like packet loss, and if the high threshold is
reached your server performance drops a lot.
</para>
<para>
One way to monitor this is to look at the field IP: ReasmFails in the
file <filename>/proc/net/snmp</filename>; if it goes up too quickly during heavy file
activity, you may have a problem. Good alternative values for
<userinput>ipfrag_high_thresh</userinput> and <userinput>ipfrag_low_thresh</userinput>
have not been reported; if you have a good experience with a
particular value, please let the maintainers and development team know.
</para>
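<para>
A minimal way to keep an eye on this counter (the exact field layout of
<filename>/proc/net/snmp</filename> may differ between kernel versions):
<screen>
# grep ^Ip: /proc/net/snmp
</screen>
The ReasmFails column in the second Ip: line is the counter in question.
</para>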
</sect2>
<sect2 id="autonegotiation">
<title>Turning Off Autonegotiation of NICs and Hubs</title>
<para>
Sometimes network cards will auto-negotiate badly with
hubs and switches and this can have strange effects.
Moreover, hubs may lose packets if they have different
ports running at different speeds. Try playing around
with the network speed and duplex settings.
</para>
</sect2>
<sect2 id="non-nfs-performance">
<title>Non-NFS-Related Means of Enhancing Server Performance</title>
<para>
Offering general guidelines for setting up a well-functioning
file server is outside the scope of this document, but a few
hints may be worth mentioning: First, RAID 5 gives you good
read speeds but lousy write speeds; consider RAID 1/0 if both
write speed and redundancy are important. Second, using a
journalling filesystem will drastically reduce your reboot
time in the event of a system crash; as of this writing, ext3
(<ulink url="ftp://ftp.uk.linux.org/pub/linux/sct/fs/jfs/">ftp://ftp.uk.linux.org/pub/linux/sct/fs/jfs/</ulink>) was the only
journalling filesystem that worked correctly with
NFS version 3, but no doubt that will change soon.
In particular, it looks like <ulink url="http://www.namesys.com">Reiserfs</ulink>
should work with NFS version 3 on 2.4 kernels, though not yet
on 2.2 kernels. Finally, using an automounter (such as autofs
or amd) may prevent hangs if you cross-mount files
on your machines (whether on purpose or by oversight) and one of those
machines goes down. See the
<ulink url="http://www.linuxdoc.org/HOWTO/mini/Automount.html">Automount Mini-HOWTO</ulink>
for details.
</para>
</sect2>
</sect1>

View File

@ -0,0 +1,64 @@
<sect1 id="Preamble">
<title>Preamble</title>
<sect2 id="legal">
<title>Legal stuff</title>
<para>
Copyright (c) 2001 by Tavis Barr, Nicolai Langfeldt, and Seth Vidal.
This material may be distributed only subject to the terms and conditions set
forth in the Open Publication License, v1.0 or later (the latest version
is presently available at <ulink url="http://www.opencontent.org/openpub/">http://www.opencontent.org/openpub/</ulink>).
</para>
</sect2>
<sect2 id="disclaimer">
<title>Disclaimer</title>
<para>This document is provided without any guarantees, including
merchantability or fitness for a particular use. The maintainers
cannot be responsible if following instructions in this document
leads to damaged equipment or data, angry neighbors, strange habits,
divorce, or any other calamity.
</para>
</sect2>
<sect2 id="feedback">
<title>Feedback</title>
<para>This will never be a finished document; we welcome feedback about
how it can be improved. As of October 2000, the Linux NFS home
page is being hosted at <ulink url="http://nfs.sourceforge.net">http://nfs.sourceforge.net</ulink>. Check there
for mailing lists, bug fixes, and updates, and also to verify
who currently maintains this document.
</para>
</sect2>
<sect2 id="translation">
<title>Translation</title>
<para>If you are able to translate this document into another language,
we would be grateful and we will also do our best to assist you.
Please notify the maintainers.</para>
</sect2>
<sect2 id="Dedication">
<title>Dedication</title>
<para>NFS on Linux was made possible by a collaborative effort of many
people, but a few stand out for special recognition. The original
version was developed by Olaf Kirch and Alan Cox. The version 3
server code was solidified by Neil Brown, based on work from
Saadia Khan, James Yarbrough, Allen Morris, H.J. Lu, and others
(including himself). The client code was written by Olaf Kirch and
updated by Trond Myklebust. The version 4 lock manager was developed
by Saadia Khan. Dave Higgen and H.J. Lu both have undertaken the
thankless job of extensive maintenance and bug fixes to get the
code to actually work the way it was supposed to. H.J. has also
done extensive development of the nfs-utils package. Of course this
dedication is leaving many people out.
</para>
<para>
The original version of this document was developed by Nicolai
Langfeldt. It was heavily rewritten in 2000 by Tavis Barr
and Seth Vidal to reflect substantial changes in the workings
of NFS for Linux developed between the 2.0 and 2.4 kernels.
Thomas Emmel, Neil Brown, Trond Myklebust, Erez Zadok, and Ion Badulescu
also provided valuable comments and contributions.
</para>
</sect2>
</sect1>

View File

@ -0,0 +1,441 @@
<sect1 id="security">
<title>Security and NFS</title>
<para>
This list of security tips and explanations will not make your site
completely secure. <emphasis>NOTHING</emphasis> will make your site completely secure. This
may help you get an idea of the security problems with NFS. This is not
a comprehensive guide and it will always be undergoing changes. If you
have any tips or hints to give us please send them to the HOWTO
maintainer.
</para>
<para>
If you're on a network with no access to the outside world (not even a
modem) and you trust all the internal machines and all your users then
this section will be of no use to you. However, it's our belief that
there are relatively few networks in this situation, so we suggest that
anyone setting up NFS read this section thoroughly.
</para>
<para>
There are two steps to file/mount access in NFS. The first step is mount
access. Mount access is achieved by the client machine attempting to
attach to the server. The security for this is provided by the
<filename>/etc/exports</filename> file. This file lists the names or IP addresses of machines
that are allowed to access a share point. If the client's IP address
matches one of the entries in the access list then it will be allowed to
mount. This is not terribly secure. If someone is capable of spoofing or
taking over a trusted address then they can access your mount points. To
give a real-world example of this type of "authentication": This is
equivalent to someone introducing themselves to you and you believe they
are who they claim to be because they are wearing a sticker that says
"Hello, My Name is ...."
</para>
<para>
The second step is file access. This is a function of normal file system
access controls and not a specialized function of NFS. Once the drive is
mounted the user and group permissions on the files take over access
control.
</para>
<para>
An example: bob on the server maps to the UserID 9999. Bob
makes a file on the server that is only accessible by that user (0600 in
octal). A client is allowed to mount the drive where the file is stored.
On the client, mary also maps to UserID 9999. This means that the client
user mary can access bob's file that is marked as only accessible by him.
It gets worse, if someone has root on the client machine they can
<command>su - [username]</command> and become ANY user. NFS will be none
the wiser.
</para>
<para>
It's not all terrible. There are a few measures you can take on the server
to offset the danger of the clients. We will cover those shortly.
</para>
<para>
If you don't think the security measures apply to you, you're probably
wrong. In <xref linkend="portmapper-security"> we'll cover securing the portmapper,
server and client security in <xref linkend="server.security"> and <xref linkend="client.security"> respectively.
Finally, in <xref linkend="firewalls"> we'll briefly talk about proper firewalling for
your nfs server.
</para>
<para>
It is also critical that all of your NFS daemons and client programs
are current. If you think that a flaw was announced too recently for it to
be a problem for you, then you've probably already been compromised.
</para>
<para>
A good way to keep up to date on security alerts is to subscribe to the
Bugtraq mailing lists. You can read up on how to subscribe and find various
other information about Bugtraq here:
<ulink url="http://www.securityfocus.com/forums/bugtraq/faq.html">http://www.securityfocus.com/forums/bugtraq/faq.html</ulink>
</para>
<para>
Additionally searching for <emphasis>NFS</emphasis> at
<ulink url="http://www.securityfocus.com">securityfocus.com's</ulink> search engine will
show you all security reports pertaining to NFS.
</para>
<para>
You should also regularly check CERT advisories. See the CERT web page
at <ulink url="http://www.cert.org">www.cert.org</ulink>.
</para>
<sect2 id="portmapper-security">
<title>The portmapper</title>
<para>
The portmapper keeps a list of what services are running on what ports.
This list is used by a connecting machine to see which ports it needs to
talk to in order to access certain services.
</para>
<para>
The portmapper is not in as bad a shape as a few years ago but it is
still a point of worry for many sys admins. The portmapper, like NFS and
NIS, should not really have connections made to it outside of a trusted
local area network. If you have to expose them to the outside world -
be careful and keep up diligent monitoring of those systems.
</para>
<para>
Not all Linux distributions were created equal. Some seemingly
up-to-date distributions do not include a securable portmapper.
The easy way to check if your portmapper is good or not is to run
<emphasis>strings(1)</emphasis> and see if it reads the relevant files, <filename>/etc/hosts.deny</filename> and
<filename>/etc/hosts.allow</filename>. Assuming your portmapper is <filename>/sbin/portmap</filename> you can
check it with this command:
<programlisting>
strings /sbin/portmap | grep hosts.
</programlisting>
</para>
<para>
On a securable machine it comes up something like this:
<screen>
/etc/hosts.allow
/etc/hosts.deny
@(#) hosts_ctl.c 1.4 94/12/28 17:42:27
@(#) hosts_access.c 1.21 97/02/12 02:13:22
</screen>
</para>
<para>
First we edit <filename>/etc/hosts.deny</filename>. It should contain the line
</para>
<para>
<screen>
portmap: ALL
</screen>
</para>
<para>
which will deny access to everyone. While it is closed, run:
<screen>
rpcinfo -p
</screen>
just to check that your portmapper really reads and obeys
this file. Rpcinfo should give no output, or possibly an error message.
The files <filename>/etc/hosts.allow</filename> and <filename>/etc/hosts.deny</filename>
take effect immediately after you save them. No daemon needs to be restarted.
</para>
<para>
Closing the portmapper for everyone is a bit drastic, so we open it
again by editing <filename>/etc/hosts.allow</filename>. But first
we need to figure out what to put in it. It should basically list
all machines that should have access to your portmapper. On a
run-of-the-mill Linux system there are very few machines that need any access
for any reason. The portmapper administers <command>nfsd</command>,
<command>mountd</command>, <command>ypbind</command>/<command>ypserv</command>,
<command>pcnfsd</command>, and 'r' services like <command>ruptime</command> and <command>rusers</command>.
Of these only <command>nfsd</command>, <command>mountd</command>,
<command>ypbind</command>/<command>ypserv</command> and perhaps
<command>pcnfsd</command> are of any consequence. All machines that need
to access services on your machine should be allowed to do that. Let's
say that your machine's address is <emphasis>192.168.0.254</emphasis> and
that it lives on the subnet <emphasis>192.168.0.0</emphasis>, and that all
machines on the subnet should have access to it (those are terms introduced
by the <ulink url="http://www.linuxdoc.org/HOWTO/Networking-Overview-HOWTO.html">Networking-Overview-HOWTO</ulink>,
go back and refresh your memory if you need to). Then we write:
<screen>
portmap: 192.168.0.0/255.255.255.0
</screen>
in <filename>/etc/hosts.allow</filename>. This is the same as the network
address you give to route and the subnet mask you give to <command>ifconfig</command>. For the
device eth0 on this machine <command>ifconfig</command> should show:
</para>
<para>
<screen>
...
eth0 Link encap:Ethernet HWaddr 00:60:8C:96:D5:56
inet addr:192.168.0.254 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:360315 errors:0 dropped:0 overruns:0
TX packets:179274 errors:0 dropped:0 overruns:0
Interrupt:10 Base address:0x320
...
</screen>
and <command>netstat -rn</command> should show:
<screen>
Kernel routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
...
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 174412 eth0
...
</screen>
(Network address in first column).
</para>
<para>
The <filename>/etc/hosts.deny</filename> and <filename>/etc/hosts.allow</filename> files are
described in the manual pages of the same names.
</para>
<para>
<emphasis>
IMPORTANT: Do not put anything but IP NUMBERS in the portmap lines of
these files. Host name lookups can indirectly cause portmap activity
which will trigger host name lookups which can indirectly cause
portmap activity which will trigger...
</emphasis>
</para>
<para>
Versions 0.2.0 and higher of the nfs-utils package also use the
<filename>hosts.allow</filename> and <filename>hosts.deny</filename>
files, so you should put in entries for <command>lockd</command>,
<command>statd</command>, <command>mountd</command>, and
<command>rquotad</command> in these files too.
</para>
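<para>
Continuing the example subnet above, the corresponding entries in
<filename>/etc/hosts.allow</filename> might look like this:
<programlisting>
lockd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0
</programlisting>
with matching entries for these daemons denying ALL in
<filename>/etc/hosts.deny</filename>.
</para>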
<para>
The above things should make your server tighter. The only remaining
problem (Yeah, right!) is someone breaking root (or booting MS-DOS) on a
trusted machine and using that privilege to send requests from a
secure port as any user they want to be.
</para>
</sect2>
<sect2 id="server.security">
<title>Server security: nfsd and mountd</title>
<para>
On the server we can decide that we don't want to trust the client's
root account. We can do that by using the <userinput>root_squash</userinput> option in
<filename>/etc/exports</filename>:
<programlisting>
/home slave1(rw,root_squash)
</programlisting>
</para>
<para>
This is, in fact, the default. It should always be turned on unless you
have a VERY good reason to turn it off. To turn it off use the
<userinput>no_root_squash</userinput> option.
</para>
<para>
Now, if a user with <emphasis>UID</emphasis> 0 (i.e., root's user ID number)
on the client attempts to access (read, write, delete) the file system,
the server substitutes the <emphasis>UID</emphasis> of the server's 'nobody'
account, which means that the root user on the client can't access or
change files that only root on the server can access or change. That's
good, and you should probably use <userinput>root_squash</userinput> on
all the file systems you export. "But the root user on the client can
still use <command>su</command> to become any other user and
access and change that user's files!" say you. To which the answer is:
yes, and that's the way it is, and has to be, with Unix and NFS. This
has one important implication: all important binaries and files should be
owned by root, and not by bin or another non-root account, since the only
account the client's root user cannot access is the server's root
account. In the <emphasis>exports(5)</emphasis> man page there are several other squash
options listed so that you can decide to mistrust whomever you (don't)
like on the clients.
</para>
<para>
TCP ports below 1024 are reserved for root's use (and therefore sometimes
referred to as "secure ports"); a non-root user cannot bind to these ports.
Adding the <userinput>secure</userinput> option to an <filename>/etc/exports</filename> entry requires requests
to come from a port below 1024, so that a malicious non-root user cannot come along and
open up a spoofed NFS dialogue on a non-reserved port. This option is set
by default.
</para>
</sect2>
<sect2 id="client.security">
<title>Client Security</title>
<sect3 id="nosuid">
<title>The nosuid mount option</title>
<para>
On the client we can decide that we don't want to trust the server too
much, and there are a couple of mount options that let us do this. For
example, we can forbid suid programs to work off the NFS file system with
the <userinput>nosuid</userinput>
option. Some Unix programs, such as passwd, are called "suid" programs:
they set the user ID of the person running them to whomever is the owner of
the file. If a file is owned by root and is suid, then the program will
execute as root, so that it can perform operations (such as writing to
the password file) that only root is allowed to do. Using the nosuid
option is a good idea and you should consider using this with all NFS
mounted disks. It means that the server's root user cannot make a suid-root
program on the file system, log in to the client as a normal user
and then use the suid-root program to become root on the client too.
One could also forbid execution of files on the mounted file system
altogether with the <userinput>noexec</userinput> option.
But this is more likely to be impractical than nosuid since a file
system is likely to at least contain some scripts or programs that need
to be executed.
</para>
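<para>
Continuing the fstab example from <xref linkend="client">, adding the
option might look like this:
<programlisting>
# device              mountpoint  fs-type  options              dump fsckorder
master.foo.com:/home  /mnt/home   nfs      rw,hard,intr,nosuid  0    0
</programlisting>
</para>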
</sect3>
<sect3 id="brokensuid">
<title>The broken_suid mount option</title>
<para>
Some older programs (<command>xterm</command> being one of them) used to rely on the idea
that root can write everywhere. This will break under new kernels on
NFS mounts. The security implication is that programs that do this
type of suid action can potentially be used to change your apparent uid
on NFS servers doing uid mapping. So the default has been to disable this
<userinput>broken_suid</userinput> behavior in the Linux kernel.
</para>
<para>
The long and short of it is this: if you're using an old Linux
distribution, some sort of old suid program, or an older Unix of some
type, you <emphasis>might</emphasis> have to mount from your clients with the
<userinput>broken_suid</userinput> option to <command>mount</command>.
However, most recent Unixes and Linux distros ship <command>xterm</command> and such programs
as normal executables with no suid status; they call separate programs to do their setuid work.
</para>
<para>
You enter the above options in the options column, with the <userinput>rsize</userinput> and
<userinput>wsize</userinput>, separated by commas.
</para>
</sect3>
<sect3 id="securing-daemons">
<title>Securing portmapper, rpc.statd, and rpc.lockd on the client</title>
<para>
In the current (2.2.18+) implementation of nfs, full file locking is
supported. This means that <command>rpc.statd</command> and <command>rpc.lockd</command>
must be running on the client in order for locks to function correctly.
These services require the portmapper to be running. So, most of the
problems you will find with nfs on the server you may also be plagued with
on the client. Read through the portmapper section above for information on
securing the portmapper.
</para>
</sect3>
</sect2>
<sect2 id="firewalls">
<title>NFS and firewalls (ipchains and netfilter)</title>
<para>
IPchains (under the 2.2.X kernels) and netfilter (under the 2.4.x
kernels) allow a good level of security - instead of relying on the
daemon (or in this case the tcp wrapper) to determine who can connect,
the connection attempt is allowed or disallowed at a lower level. In
this case you can stop the connection much earlier and more globally, which
can protect you from all sorts of attacks.
</para>
<para>
Describing how to set up a Linux firewall is well beyond the scope of
this document. Interested readers may wish to read the Firewall-HOWTO
or the <ulink url="http://www.linuxdoc.org/HOWTO/IPCHAINS-HOWTO.HTML">IPCHAINS-HOWTO</ulink>.
Users of kernel 2.4 and above might want to visit the netfilter webpage at
<ulink url="http://netfilter.filewatcher.org">http://netfilter.filewatcher.org</ulink>.
If you are already familiar with the workings of ipchains or netfilter,
this section will give you a few tips on how to better set up your
firewall to work with NFS.
</para>
<para>
A good rule to follow for your firewall configuration is to deny all, and
allow only some - this helps to keep you from accidentally allowing more
than you intended.
</para>
<para>
Ports to be concerned with:
<orderedlist numeration="loweralpha">
<listitem>
<para>The portmapper is on 111. (tcp and udp)</para>
</listitem>
<listitem>
<para>
nfsd is on 2049 and it can be TCP and UDP. Although NFS over TCP
is currently experimental on the server end and you will usually
just see UDP on the server, using TCP is quite stable on the
client end.
</para>
</listitem>
<listitem>
<para>
<command>mountd</command>, <command>lockd</command>, and <command>statd</command>
float around (which is why we need the portmapper to begin with) - this causes
problems. You basically have two options to deal with it:
<orderedlist numeration="lowerroman">
<listitem>
<para>
You can more or less deny all connections to these ports,
but explicitly allow access to most ports from certain IPs.
</para>
</listitem>
<listitem>
<para>
More recent versions of these utilities have a "-p" option
that allows you to assign them to a certain port. See the
man pages to be sure if your version supports this. You can
then allow access to the ports you have specified for your
NFS client machines, and seal off all other ports, even for
your local network; a sketch of this approach follows the list below.
</para>
</listitem>
</orderedlist>
</para>
</listitem>
</orderedlist>
</para>
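<para>
As a sketch of the second approach, assuming your versions of the daemons
accept <userinput>-p</userinput> (check the man pages) and that the port
numbers below are free on your system:
<screen>
# rpc.statd -p 32765
# rpc.mountd -p 32767
</screen>
You would then allow your NFS clients to reach ports 111, 2049, and the
ports you chose, and seal off everything else.
</para>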
<para>
Using IPCHAINS, a simple firewall using the first option would look
something like this:
<programlisting>
ipchains -A input -f -j ACCEPT
ipchains -A input -s trusted.net.here/trusted.netmask -d host.ip/255.255.255.255 -j ACCEPT
ipchains -A input -s 0/0 -d 0/0 -p 6 -j DENY -y -l
ipchains -A input -s 0/0 -d 0/0 -p 17 -j DENY -l
</programlisting>
</para>
<para>
The equivalent set of commands in netfilter (the firewalling tool in 2.4) is:
<programlisting>
iptables -A INPUT -f -j ACCEPT
iptables -A INPUT -s trusted.net.here/trusted.netmask -d \
host.ip/255.255.255.255 -j ACCEPT
iptables -A INPUT -s 0/0 -d 0/0 -p 6 -j DENY --syn --log-level 5
iptables -A INPUT -s 0/0 -d 0/0 -p 17 -j DENY --log-level 5
</programlisting>
</para>
<para>
The first line says to accept all packet fragments (except the first
packet fragment which will be treated as a normal packet). In theory
no packet will pass through until it is reassembled, and it won't be
reassembled unless the first packet fragment is passed. Of course
there are attacks that can be generated by overloading a machine
with packet fragments. But NFS won't work correctly unless you
let fragments through. See <xref linkend="troubleshooting"> for details.
</para>
<para>
The remaining lines say to trust your local networks and to log and drop
everything else. It's not great, and more specific rules pay off, but
more specific rules are outside the scope of this discussion.
</para>
<para>
Some pointers if you'd like to be more paranoid or strict about your
rules. If you choose to reset your firewall rules each time <command>statd</command>,
<command>rquotad</command>, <command>mountd</command> or <command>lockd</command>
move (which is possible) you'll want to make sure you allow fragments to
your NFS server FROM your NFS client(s). If you don't, you will get some very
interesting reports from the kernel regarding packets being denied. The messages
will say that a packet from port 65535 on the client to 65535 on the server
is being denied. Allowing fragments will solve this.
</para>
</sect2>
<sect2 id="summary">
<title>Summary</title>
<para>
If you use the <filename>hosts.allow</filename>, <filename>hosts.deny</filename>,
<userinput>root_squash</userinput>, <userinput>nosuid</userinput> and privileged
port features in the portmapper/NFS software, you avoid many of the
presently known bugs in NFS and can almost feel secure about that at
least. But still, after all that: when an intruder has access to your
network, s/he can make strange commands appear in your <filename>.forward</filename> or
read your mail when <filename>/home</filename> or <filename>/var/mail</filename> is
NFS exported. For the same reason, you should never access your PGP private key
over NFS. Or at least you should know the risk involved. And now you know a bit
of it.
</para>
<para>
     NFS and the portmapper make up a complex subsystem and therefore it's
not totally unlikely that new bugs will be discovered, either in the
basic design or the implementation we use. There might even be holes
known now, which someone is abusing. But that's life.
</para>
</sect2>
</sect1>

<sect1 id="server">
<title>Setting Up an NFS Server</title>
<sect2 id="serverintro">
<title>Introduction to the server setup</title>
<para>
It is assumed that you will be setting up both a server and a
client. If you are just setting up a client to work off of
somebody else's server (say in your department), you can skip
to <xref linkend="client">. However, every client that is set up requires
modifications on the server to authorize that client (unless
the server setup is done in a very insecure way), so even if you
are not setting up a server you may wish to read this section to
get an idea what kinds of authorization problems to look out for.
</para>
<para>
Setting up the server will be done in two steps: Setting up the
configuration files for NFS, and then starting the NFS services.
</para>
</sect2>
<sect2 id="config">
<title>Setting up the Configuration Files</title>
<para>
There are three main configuration files you will need to edit to
set up an NFS server: <filename>/etc/exports</filename>,
<filename>/etc/hosts.allow</filename>, and <filename>/etc/hosts.deny</filename>.
Strictly speaking, you only need to edit <filename>/etc/exports</filename> to get
NFS to work, but you would be left with an extremely insecure setup. You may also need
to edit your startup scripts; see <xref linkend="daemons"> for more on that.
</para>
<sect3 id="exports">
<title>/etc/exports</title>
<para>
This file contains a list of entries; each entry indicates a volume
that is shared and how it is shared. Check the man pages (<command>man
exports</command>) for a complete description of all the setup options for
     the file, although the description here will probably satisfy
most people's needs.
</para>
<para>
An entry in <filename>/etc/exports</filename> will typically look like this:
<programlisting>
directory machine1(option11,option12) machine2(option21,option22)
</programlisting>
</para>
<para>
where
<glosslist>
<glossentry><glossterm>directory</glossterm>
<glossdef>
<para>
the directory that you want to share. It may be an
entire volume though it need not be. If you share a directory,
then all directories under it within the same file system will
be shared as well.
</para>
</glossdef>
</glossentry>
<glossentry><glossterm>machine1 and machine2</glossterm>
<glossdef>
<para>
client machines that will have access to the directory. The machines
may be listed by their IP address or their DNS address
(e.g., <emphasis>machine.company.com</emphasis> or <emphasis>192.168.0.8</emphasis>).
Using IP addresses is more reliable and more secure.
</para>
</glossdef>
</glossentry>
<glossentry><glossterm>optionxx</glossterm>
<glossdef>
<para>
the option listing for each machine will describe what kind of
access that machine will have. Important options are:
<itemizedlist>
<listitem>
<para>
<userinput>ro</userinput>: The directory is shared read only; the client machine
will not be able to write to it. This is the default.
</para>
</listitem>
<listitem>
<para>
<userinput>rw</userinput>: The client machine will have read and write access to the
directory.
</para>
</listitem>
<listitem>
<para>
<userinput>no_root_squash</userinput>: By default, any file request made by user root
on the client machine is treated as if it is made by user
           nobody on the server. (Exactly which UID the request is
mapped to depends on the UID of user "nobody" on the server,
not the client.) If no_root_squash is selected, then
root on the client machine will have the same level of access
to the files on the system as root on the server. This
can have serious security implications, although it may be
necessary if you want to perform any administrative work on
the client machine that involves the exported directories.
You should not specify this option without a good reason.
</para>
</listitem>
<listitem>
<para>
<userinput>no_subtree_check</userinput>: If only part of a volume is exported, a
routine called subtree checking verifies that a file that is
requested from the client is in the appropriate part of the
volume. If the entire volume is exported, disabling this check
will speed up transfers.
</para>
</listitem>
<listitem>
<para>
<userinput>sync</userinput>: By default, a Version 2 NFS server will tell a client
machine that a file write is complete when NFS has finished
           handing the write over to the filesystem; however, the file
system may not sync it to the disk, even if the client makes
a sync() call on the file system. The default behavior may
therefore cause file corruption if the server reboots. This
option forces the filesystem to sync to disk every time NFS
completes a write operation. It slows down write times
substantially but may be necessary if you are running NFS
Version 2 in a production environment. Version 3 NFS has
a commit operation that the client can call that
actually will result in a disk sync on the server end.
</para>
</listitem>
</itemizedlist>
</para>
</glossdef>
</glossentry>
</glosslist>
</para>
<para>
Suppose we have two client machines, <emphasis>slave1</emphasis> and <emphasis>slave2</emphasis>, that have IP
addresses <emphasis>192.168.0.1</emphasis> and <emphasis>192.168.0.2</emphasis>, respectively. We wish to share
our software binaries and home directories with these machines.
A typical setup for <filename>/etc/exports</filename> might look like this:
<screen>
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)
</screen>
</para>
<para>
Here we are sharing <filename>/usr/local</filename> read-only to slave1 and slave2,
because it probably contains our software and there may not be
benefits to allowing slave1 and slave2 to write to it that outweigh
security concerns. On the other hand, home directories need to be
exported read-write if users are to save work on them.
</para>
<para>
If you have a large installation, you may find that you have a bunch
of computers all on the same local network that require access to
your server. There are a few ways of simplifying references
to large numbers of machines. First, you can give access to a range
of machines at once by specifying a network and a netmask. For
example, if you wanted to allow access to all the machines with IP
addresses between <emphasis>192.168.0.0</emphasis> and
<emphasis>192.168.0.255</emphasis> then you could have the entries:
<screen>
/usr/local 192.168.0.0/255.255.255.0(ro)
/home 192.168.0.0/255.255.255.0(rw)
</screen>
</para>
<para>
See the <ulink url="http://www.linuxdoc.org/HOWTO/Networking-Overview-HOWTO.html">Networking-Overview HOWTO</ulink>
for further information about how netmasks work, and you may also wish to
look at the man pages for <filename>init</filename> and <filename>hosts.allow</filename>.
</para>
<para>
Second, you can use NIS netgroups in your entry. To specify a
netgroup in your exports file, simply prepend the name of the
netgroup with an "@". See the <ulink url="http://www.linuxdoc.org/HOWTO/NIS-HOWTO.html">NIS HOWTO</ulink>
for details on how netgroups work.
</para>
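   <para>
     For example, assuming a hypothetical netgroup named
     <emphasis>trusted-hosts</emphasis> has been defined in NIS, an entry
     sharing <filename>/usr/local</filename> read-only to that netgroup
     might look like:
     <screen>
       /usr/local  @trusted-hosts(ro)
     </screen>
   </para>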
<para>
Third, you can use wildcards such as <emphasis>*.foo.com</emphasis> or
<emphasis>192.168.</emphasis> instead of hostnames.
</para>
<para>
However, you should keep in mind that any of these simplifications
could cause a security risk if there are machines in your netgroup
or local network that you do not trust completely.
</para>
<para>
A few cautions are in order about what cannot (or should not) be
exported. First, if a directory is exported, its parent and child
directories cannot be exported if they are in the same filesystem.
However, exporting both should not be necessary because listing the
parent directory in the <filename>/etc/exports</filename> file will cause all underlying
directories within that file system to be exported.
</para>
<para>
Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or
Windows 95/98) filesystem with NFS. FAT is not designed for use on a
multi-user machine, and as a result, operations that depend on
permissions will not work well. Moreover, some of the underlying
filesystem design is reported to work poorly with NFS's expectations.
</para>
<para>
Third, device or other special files may not export correctly to non-Linux
clients. See <xref linkend="interop"> for details on particular operating systems.
</para>
</sect3>
<sect3 id="hosts">
<title>/etc/hosts.allow and /etc/hosts.deny</title>
<para>
These two files specify which computers on the network can use
services on your machine. Each line of the file is an entry listing
a service and a set of machines. When the server gets a request
from a machine, it does the following:
<itemizedlist>
<listitem>
<para>
It first checks <filename>hosts.allow</filename> to see if the machine
matches a description listed in there. If it does, then the machine
is allowed access.
</para>
</listitem>
<listitem>
<para>
If the machine does not match an entry in <filename>hosts.allow</filename>, the
server then checks <filename>hosts.deny</filename> to see if the client matches a
listing in there. If it does then the machine is denied access.
</para>
</listitem>
<listitem>
<para>
If the client matches no listings in either file, then it
is allowed access.
</para>
</listitem>
</itemizedlist>
</para>
<para>
In addition to controlling access to services handled by inetd (such
as telnet and FTP), this file can also control access to NFS
by restricting connections to the daemons that provide NFS services.
Restrictions are done on a per-service basis.
</para>
<para>
The first daemon to restrict access to is the portmapper. This daemon
essentially just tells requesting clients how to find all the NFS
services on the system. Restricting access to the portmapper is the
best defense against someone breaking into your system through NFS
because completely unauthorized clients won't know where to find the
NFS daemons. However, there are two things to watch out for. First,
restricting portmapper isn't enough if the intruder already knows
for some reason how to find those daemons. And second, if you are
running NIS, restricting portmapper will also restrict requests to NIS.
    That should be harmless since you usually want
    to restrict NFS and NIS in a similar way, but just be cautioned.
(Running NIS is generally a good idea if you are running NFS, because
the client machines need a way of knowing who owns what files on the
exported volumes. Of course there are other ways of doing this such
as syncing password files. See the <ulink url="http://www.linuxdoc.org/HOWTO/NIS-HOWTO.html">NIS HOWTO</ulink> for information on
setting up NIS.)
</para>
<para>
In general it is a good idea with NFS (as with most internet services)
to explicitly deny access to hosts that you don't need to allow access
to.
</para>
<para>
    The first step in doing this is to add the following entry to
<filename>/etc/hosts.deny</filename>:
</para>
<para>
<screen>
portmap:ALL
</screen>
</para>
<para>
Starting with nfs-utils 0.2.0, you can be a bit more careful by
controlling access to individual daemons. It's a good precaution
since an intruder will often be able to weasel around the portmapper.
If you have a newer version of NFS-utils, add entries for each of the
NFS daemons (see the next section to find out what these daemons are;
for now just put entries for them in hosts.deny):
</para>
<para>
<screen>
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
</screen>
</para>
<para>
Even if you have an older version of <emphasis>nfs-utils</emphasis>, adding these entries
is at worst harmless (since they will just be ignored) and at best
will save you some trouble when you upgrade. Some sys admins choose
to put the entry <userinput>ALL:ALL</userinput> in the file <filename>/etc/hosts.deny</filename>,
which causes any service that looks at these files to deny access to all
hosts unless it is explicitly allowed. While this is more secure
    behavior, it may also get you in trouble later when you install new
    services, forget that the entry is there, and can't figure out for
    the life of you why they won't work.
</para>
<para>
Next, we need to add an entry to <filename>hosts.allow</filename> to give any hosts
access that we want to have access. (If we just leave the above
lines in <filename>hosts.deny</filename> then nobody will have access to NFS.) Entries
in <filename>hosts.allow</filename> follow the format
<informalexample>
<screen>
service: host [or network/netmask] , host [or network/netmask]
</screen>
</informalexample>
</para>
<para>
    Here, <emphasis>host</emphasis> is the IP address of a potential client; it may be possible
    in some versions to use the DNS name of the host, but it is strongly
    discouraged.
</para>
<para>
Suppose we have the setup above and we just want to allow access
to <emphasis>slave1.foo.com</emphasis> and <emphasis>slave2.foo.com</emphasis>,
and suppose that the IP addresses of these machines are <emphasis>192.168.0.1</emphasis>
and <emphasis>192.168.0.2</emphasis>, respectively. We could add the following entry to
<filename>/etc/hosts.allow</filename>:
<informalexample>
<screen>
portmap: 192.168.0.1 , 192.168.0.2
</screen>
</informalexample>
</para>
<para>
For recent nfs-utils versions, we would also add the following
(again, these entries are harmless even if they are not supported):
<informalexample>
<screen>
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
</screen>
</informalexample>
</para>
<para>
If you intend to run NFS on a large number of machines in a local
network, <filename>/etc/hosts.allow</filename> also allows for network/netmask style
entries in the same manner as <filename>/etc/exports</filename> above.
</para>
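   <para>
     For example, to let every machine on the <emphasis>192.168.0.0</emphasis>
     network reach the portmapper, an entry along these lines should work:
     <screen>
       portmap: 192.168.0.0/255.255.255.0
     </screen>
   </para>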
</sect3>
</sect2>
<sect2 id="servicestart">
<title>Getting the services started</title>
<sect3 id="prereq">
<title>Pre-requisites</title>
<para>
The NFS server should now be configured and we can start it running.
First, you will need to have the appropriate packages installed.
This consists mainly of a new enough kernel and a new enough version
of the nfs-utils package. See <xref linkend="swprereq"> if you are in doubt.
</para>
<para>
Next, before you can start NFS, you will need to have TCP/IP
networking functioning correctly on your machine. If you can use
telnet, FTP, and so on, then chances are your TCP networking is fine.
</para>
<para>
That said, with most recent Linux distributions you may be able to
get NFS up and running simply by rebooting your machine, and the
startup scripts should detect that you have set up your <filename>/etc/exports</filename>
    file and will start up NFS correctly. If you try this, see <xref linkend="verify">
    (Verifying that NFS is running). If this does not work, or if
you are not in a position to reboot your machine, then the following
section will tell you which daemons need to be started in order to
run NFS services. If for some reason nfsd was already running when
you edited your configuration files above, you will have to flush
your configuration; see <xref linkend="later"> for details.
</para>
</sect3>
<sect3 id="portmapper">
<title>Starting the Portmapper</title>
<para>
NFS depends on the portmapper daemon, either called <command>portmap</command> or
<command>rpc.portmap</command>. It will need to be started first. It should be
located in <filename>/sbin</filename> but is sometimes in <filename>/usr/sbin</filename>.
Most recent Linux distributions start this daemon in the boot scripts, but it is
worth making sure that it is running before you begin working with
NFS (just type <command>ps aux | grep portmap</command>).
</para>
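   <para>
     If it is not running, you can start it by hand or through your
     distribution's init script; the paths below are only a sketch and
     vary between distributions:
     <screen>
       # /sbin/portmap
     </screen>
     or, on a Red Hat style system,
     <screen>
       # /etc/rc.d/init.d/portmap start
     </screen>
   </para>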
</sect3>
<sect3 id="daemons">
<title>The Daemons</title>
<para>
NFS serving is taken care of by five daemons: rpc.nfsd, which does
most of the work; rpc.lockd and rpc.statd, which handle file locking;
    rpc.mountd, which handles the initial mount requests; and
rpc.rquotad, which handles user file quotas on exported volumes.
Starting with 2.2.18, lockd is called by nfsd upon demand, so you do
not need to worry about starting it yourself. statd will need to be
started separately. Most recent Linux distributions will
have startup scripts for these daemons.
</para>
<para>
The daemons are all part of the nfs-utils package, and may be either
in the <filename>/sbin</filename> directory or the <filename>/usr/sbin</filename> directory.
</para>
<para>
If your distribution does not include them in the startup scripts,
    then you should add them, configured to start in the following
order:
<simplelist>
<member>rpc.portmap</member>
<member>rpc.mountd, rpc.nfsd</member>
<member>rpc.statd, rpc.lockd (if necessary), rpc.rquotad</member>
</simplelist>
</para>
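   <para>
     If you need to bring the daemons up by hand in that order, a minimal
     sketch (assuming the binaries live in <filename>/sbin</filename> and
     <filename>/usr/sbin</filename> as described above) would be:
     <screen>
       # /sbin/portmap
       # /usr/sbin/rpc.mountd
       # /usr/sbin/rpc.nfsd 8         # 8 server threads
       # /usr/sbin/rpc.statd
       # /usr/sbin/rpc.lockd          # only if your kernel does not start it itself
       # /usr/sbin/rpc.rquotad
     </screen>
     Using your distribution's startup scripts is preferable where they exist.
   </para>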
<para>
The nfs-utils package has sample startup scripts for RedHat and
Debian. If you are using a different distribution, in general you
can just copy the RedHat script, but you will probably have to take
out the line that says:
<screen>
. ../init.d/functions
</screen>
to avoid getting error messages.
</para>
</sect3>
</sect2>
<sect2 id="verify">
<title>Verifying that NFS is running</title>
<para>
To do this, query the portmapper with the command <command>rpcinfo -p</command> to
find out what services it is providing. You should get something
like this:
<screen>
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100011 2 udp 749 rquotad
100005 1 udp 759 mountd
100005 1 tcp 761 mountd
100005 2 udp 764 mountd
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
100021 1 tcp 1629 nlockmgr
100021 3 tcp 1629 nlockmgr
100021 4 tcp 1629 nlockmgr
</screen>
</para>
<para>
This says that we have NFS versions 2 and 3, rpc.statd version 1,
network lock manager (the service name for rpc.lockd) versions 1, 3,
and 4. There are also different service listings depending on
whether NFS is travelling over TCP or UDP. Linux systems use UDP
by default unless TCP is explicitly requested; however other OSes
such as Solaris default to TCP.
</para>
<para>
If you do not at least see a line that says "portmapper", a line
that says "nfs", and a line that says "mountd" then you will need
to backtrack and try again to start up the daemons (see <xref linkend="troubleshooting">,
Troubleshooting, if this still doesn't work).
</para>
<para>
If you do see these services listed, then you should be ready to
set up NFS clients to access files from your server.
</para>
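   <para>
     As a further check, and assuming the <command>showmount</command>
     utility from the nfs-utils package is installed, you can ask mountd
     what it thinks it is exporting (the exact output format varies a
     little between versions):
     <screen>
       # showmount -e localhost
       Export list for localhost:
       /usr/local 192.168.0.1,192.168.0.2
       /home      192.168.0.1,192.168.0.2
     </screen>
   </para>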
</sect2>
<sect2 id="later">
<title>Making changes to /etc/exports later on</title>
<para>
If you come back and change your <filename>/etc/exports</filename> file, the changes you
make may not take effect immediately. You should run the command
    <command>exportfs -ra</command> to force nfsd to re-read the <filename>/etc/exports</filename>
    file. If you can't find the <command>exportfs</command> command, then you can kill nfsd with the
    <userinput>-HUP</userinput> flag (see the man pages for kill for details).
</para>
<para>
If that still doesn't work, don't forget to check <filename>hosts.allow</filename> to
make sure you haven't forgotten to list any new client machines
there. Also check the host listings on any firewalls you may have
set up (see <xref linkend="troubleshooting"> for more details on firewalls
and NFS).
</para>
</sect2>
</sect1>

<sect1 id="troubleshooting">
<title>Troubleshooting</title>
<abstract>
<para>
This is intended as a step-by-step guide to what to do when
things go wrong using NFS. Usually trouble first rears its
head on the client end, so this diagnostic will begin there.
</para>
</abstract>
<sect2 id="symptom1">
<title>Unable to See Files on a Mounted File System</title>
<titleabbrev id="sym1short">Symptom 1</titleabbrev>
<para>
First, check to see if the file system is actually mounted.
There are several ways of doing this. The most reliable
way is to look at the file <filename>/proc/mounts</filename>,
which will list all mounted filesystems and give details about them. If
     this doesn't work (for example if you don't have the <filename>/proc</filename>
     filesystem compiled into your kernel), you can type
     <command>mount</command> with no arguments, although you get less information.
</para>
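   <para>
     For example, an NFS entry in <filename>/proc/mounts</filename> will
     look something like the following (the exact options shown depend on
     your kernel version and mount options):
     <screen>
       # grep nfs /proc/mounts
       master.foo.com:/home /mnt/home nfs rw,rsize=8192,wsize=8192,addr=master.foo.com 0 0
     </screen>
   </para>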
<para>
If the file system appears to be mounted, then you may
have mounted another file system on top of it (in which
case you should unmount and remount both volumes), or you
may have exported the file system on the server before you
mounted it there, in which case NFS is exporting the underlying
mount point (if so then you need to restart NFS on the
server).
</para>
<para>
If the file system is not mounted, then attempt to mount it.
If this does not work, see <xref linkend="symptom3" endterm="sym3short">.
</para>
</sect2>
<sect2 id="symptom2">
<title>File requests hang or timeout waiting for access to the file.</title>
<titleabbrev id="sym2short">Symptom 2</titleabbrev>
<para>
This usually means that the client is unable to communicate with
the server. See <xref linkend="symptom3" endterm="sym3short"> letter b.
</para>
</sect2>
<sect2 id="symptom3">
<title>Unable to mount a file system</title>
<titleabbrev id="sym3short">Symptom 3</titleabbrev>
<para>
There are two common errors that mount produces when
it is unable to mount a volume. These are:
<orderedlist numeration="loweralpha">
<listitem>
<para>
failed, reason given by server: Permission denied
</para>
<para>
This means that the server does not recognize that you
have access to the volume.
</para>
<orderedlist numeration="lowerroman">
<listitem>
<para>
Check your <filename>/etc/exports</filename> file and make sure that the
volume is exported and that your client has the right
kind of access to it. For example, if a client only
has read access then you have to mount the volume
with the ro option rather than the rw option.
</para>
</listitem>
<listitem>
<para>
Make sure that you have told NFS to register any
changes you made to <filename>/etc/exports</filename> since starting nfsd
by running the exportfs command. Be sure to type
<command>exportfs -ra</command> to be extra certain that the exports are
being re-read.
</para>
</listitem>
<listitem>
<para>
Check the file <filename>/proc/fs/nfs/exports</filename> and make sure the
volume and client are listed correctly. (You can also
look at the file <filename>/var/lib/nfs/xtab</filename> for an unabridged
list of how all the active export options are set.) If they
are not, then you have not re-exported properly. If they
are listed, make sure the server recognizes your
client as being the machine you think it is. For
example, you may have an old listing for the client
in <filename>/etc/hosts</filename> that is throwing off the server, or
you may not have listed the client's complete address
and it may be resolving to a machine in a different
domain. Try to ping the client from the server, and try
to ping the server from the client. If this doesn't work,
or if there is packet loss, you may have lower-level network
problems.
</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para>RPC: Program Not Registered (or another "RPC" error):</para>
<para>
This means that the client does not detect NFS running
on the server. This could be for several reasons.
</para>
<orderedlist numeration="lowerroman">
<listitem>
<para>
First, check that NFS actually is running on the
server by typing <command>rpcinfo -p</command> on the server.
You should see something like this:
<screen>
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100011 2 udp 749 rquotad
100005 1 udp 759 mountd
100005 1 tcp 761 mountd
100005 2 udp 764 mountd
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
100021 1 tcp 1629 nlockmgr
100021 3 tcp 1629 nlockmgr
100021 4 tcp 1629 nlockmgr
</screen>
This says that we have NFS versions 2 and 3, rpc.statd
version 1, network lock manager (the service name for
rpc.lockd) versions 1, 3, and 4. There are also different
service listings depending on whether NFS is travelling over
TCP or UDP. UDP is usually (but not always) the default
unless TCP is explicitly requested.
</para>
<para>
If you do not see at least portmapper, nfs, and mountd, then
you need to restart NFS. If you are not able to restart
successfully, proceed to <xref linkend="symptom9" endterm="sym9short">.
</para>
</listitem>
<listitem>
<para>
Now check to make sure you can see it from the client.
On the client, type <command>rpcinfo -p [server]</command> where
<command>[server]</command> is the DNS name or IP address of your server.
</para>
<para>
If you get a listing, then make sure that the type
of mount you are trying to perform is supported. For
example, if you are trying to mount using Version 3
NFS, make sure Version 3 is listed; if you are trying
to mount using NFS over TCP, make sure that is
registered. (Some non-Linux clients default to TCP).
See man rpcinfo for more details on how
to read the output. If the type of mount you are
trying to perform is not listed, try a different
type of mount.
</para>
<para>
If you get the error No Remote Programs Registered,
then you need to check your <filename>/etc/hosts.allow</filename> and
<filename>/etc/hosts.deny</filename> files on the server and make sure
your client actually is allowed access. Again, if the
entries appear correct, check <filename>/etc/hosts</filename> (or your
DNS server) and make sure that the machine is listed
correctly, and make sure you can ping the server from
the client. Also check the error logs on the system
for helpful messages: Authentication errors from bad
<filename>/etc/hosts.allow</filename> entries will usually appear in
<filename>/var/log/messages</filename>, but may appear somewhere else depending
on how your system logs are set up. The man pages
for syslog can help you figure out how your logs are
set up. Finally, some older operating systems may
behave badly when routes between the two machines
are asymmetric. Try typing <command>tracepath [server]</command> from
the client and see if the word "asymmetric" shows up
anywhere in the output. If it does then this may
be causing packet loss. However asymmetric routes are
not usually a problem on recent linux distributions.
</para>
<para>
If you get the error Remote system error - No route
to host, but you can ping the server correctly,
then you are the victim of an overzealous
firewall. Check any firewalls that may be set up,
either on the server or on any routers in between
the client and the server. Look at the man pages
for <command>ipchains</command>, <command>netfilter</command>,
and <command>ipfwadm</command>, as well as the
<ulink url="http://www.linuxdoc.org/HOWTO/IPCHAINS-HOWTO.html">IPChains-HOWTO</ulink>
and the <ulink url="http://www.linuxdoc.org/HOWTO/Firewall-HOWTO.html">Firewall-HOWTO</ulink> for help.
</para>
</listitem>
</orderedlist>
</listitem>
</orderedlist>
</para>
</sect2>
<sect2 id="symptom4">
<title>I do not have permission to access files on the mounted volume.</title>
<titleabbrev id="sym4short">Symptom 4</titleabbrev>
<para>
This could be one of two problems.
</para>
<para>
If it is a write permission problem, check the export
options on the server by looking at <filename>/proc/fs/nfs/exports</filename>
and make sure the filesystem is not exported read-only.
If it is you will need to re-export it read/write
(don't forget to run <command>exportfs -ra</command> after editing
<filename>/etc/exports</filename>). Also, check
<filename>/proc/mounts</filename> and make sure the volume
is mounted read/write (although if it is mounted read-only
you ought to get a more specific error message). If not then
you need to re-mount with the rw option.
</para>
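   <para>
     A blunt but simple way to do that, reusing the example names from the
     server section, is to unmount and remount with the rw option:
     <screen>
       # umount /mnt/home
       # mount -o rw master.foo.com:/home /mnt/home
     </screen>
   </para>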
<para>
The second problem has to do with username mappings, and is
different depending on whether you are trying to do this
as root or as a non-root user.
</para>
<para>
If you are not root, then usernames may not be in sync on
the client and the server. Type <command>id [user]</command>
on both the client and the server and make sure they give the
same <emphasis>UID</emphasis> number. If they don't then
you are having problems with NIS, NIS+, rsync, or whatever
system you use to sync usernames. Check group names to make
sure that they match as well. Also, make sure you are not
exporting with the <userinput>all_squash</userinput> option.
If the user names match then the user has a more general
permissions problem unrelated to NFS.
</para>
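   <para>
     To illustrate the uid check above (the user name shown is purely
     hypothetical), <command>id</command> should report the same numbers
     on both the client and the server:
     <screen>
       # id jdoe
       uid=504(jdoe) gid=100(users) groups=100(users)
     </screen>
   </para>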
<para>
If you are root, then you are probably not exporting with
the <userinput>no_root_squash</userinput> option; check <filename>/proc/fs/nfs/exports</filename>
or <filename>/var/lib/nfs/xtab</filename> on the server and make sure the option
is listed. In general, being able to write to the NFS
server as root is a bad idea unless you have an urgent need --
which is why Linux NFS prevents it by default. See
<xref linkend="security"> for details.
</para>
<para>
If you have root squashing, you want to keep it, and you're only
trying to get root to have the same permissions on the file that
the user <emphasis>nobody</emphasis> should have, then remember that it is the server
that determines which uid root gets mapped to. By default, the
server uses the <emphasis>UID</emphasis> and <emphasis>GID</emphasis> of
<emphasis>nobody</emphasis> in the <filename>/etc/passwd</filename> file,
but this can also be overridden with the <userinput>anonuid</userinput> and
<userinput>anongid</userinput> options in the <filename>/etc/exports</filename>
file. Make sure that the client and the server agree about which
<emphasis>UID</emphasis> <emphasis>nobody</emphasis> gets mapped to.
</para>
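   <para>
     For example, an export that squashes root to a specific local account
     (the uid and gid values here are only illustrative) might look like:
     <screen>
       /home 192.168.0.1(rw,root_squash,anonuid=150,anongid=100)
     </screen>
   </para>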
</sect2>
<sect2 id="symptom5">
<title>When I transfer really big files, NFS takes over
all the CPU cycles on the server and it screeches to a halt.
</title>
<titleabbrev id="sym5short">Symptom 5</titleabbrev>
<para>
This is a problem with the <function>fsync()</function> function in 2.2 kernels that
causes all sync-to-disk requests to be cumulative, resulting
in a write time that is quadratic in the file size. If you
can, upgrading to a 2.4 kernel should solve the problem.
     Also, exporting with the <userinput>no_wdelay</userinput> option
     forces the writes to be committed with <userinput>O_SYNC</userinput> instead, which may prove faster.
</para>
</sect2>
<sect2 id="symptom6">
<title>Strange error or log messages</title>
<titleabbrev id="sym6short">Symptom 6</titleabbrev>
<para>
<orderedlist numeration="loweralpha">
<listitem>
<para>
Messages of the following format:
</para>
<para>
<screen>
Jan 7 09:15:29 server kernel: fh_verify: mail/guest permission failure, acc=4, error=13
Jan 7 09:23:51 server kernel: fh_verify: ekonomi/test permission failure, acc=4, error=13
</screen>
</para>
<para>
These happen when a NFS setattr operation is attempted on a
file you don't have write access to. The messages are
harmless.
</para>
</listitem>
<listitem>
<para>
The following messages frequently appear in the logs:
</para>
<para>
<screen>
kernel: nfs: server server.domain.name not responding, still trying
kernel: nfs: task 10754 can't get a request slot
kernel: nfs: server server.domain.name OK
</screen>
</para>
<para>
The "can't get a request slot" message means that the client-
side RPC code has detected a lot of timeouts (perhaps due to
network congestion, perhaps due to an overloaded server), and
is throttling back the number of concurrent outstanding
requests in an attempt to lighten the load. The cause of
these messages is basically sluggish performance. See
<xref linkend="performance"> for details.
</para>
</listitem>
<listitem>
<para>
After mounting, the following message appears on the client:
</para>
<para>
<screen>
nfs warning: mount version older than kernel
</screen>
</para>
<para>
It means what it says: You should upgrade your mount package and/or
am-utils. (If for some reason upgrading is a problem, you may be able
to get away with just recompiling them so that the newer kernel features
are recognized at compile time).
</para>
</listitem>
<listitem>
<para>
Errors in startup/shutdown log for lockd
</para>
<para>
You may see a message of the following kind in your boot log:
<screen>
nfslock: rpc.lockd startup failed
</screen>
</para>
<para>
They are harmless. Older versions of rpc.lockd needed to be
started up manually, but newer versions are started automatically
by knfsd. Many of the default startup scripts still try to start
up lockd by hand, in case it is necessary. You can alter your
startup scripts if you want the messages to go away.
</para>
</listitem>
<listitem>
<para>
The following message appears in the logs:
</para>
<para>
<screen>
kmem_create: forcing size word alignment - nfs_fh
</screen>
</para>
<para>
        This results from the file handle being 16 bits instead of a
        multiple of 32 bits, which makes the kernel grimace. It is
harmless.
</para>
</listitem>
</orderedlist>
</para>
</sect2>
<sect2 id="symptom7">
<title>
Real permissions don't match what's in <filename>/etc/exports</filename>.
</title>
<titleabbrev id="sym7short">Symptom 7</titleabbrev>
<para>
<emphasis>
<filename>/etc/exports</filename> is VERY sensitive to whitespace - so the
following statements are not the same:
</emphasis>
<programlisting>
/export/dir hostname(rw,no_root_squash)
/export/dir hostname (rw,no_root_squash)
</programlisting>
     The first will grant hostname read-write access to <filename>/export/dir</filename>
     without squashing root privileges. The second will grant
     hostname read-write access with root squashing, and it will grant
     everyone else read-write access without squashing root privileges.
     Nice, huh?
</para>
</sect2>
<sect2 id="symptom8">
<title>Flaky and unreliable behavior</title>
<titleabbrev id="sym8short">Symptom 8</titleabbrev>
<para>
Simple commands such as <command>ls</command> work, but anything that transfers
a large amount of information causes the mount point to lock.
</para>
<para>
This could be one of two problems:
</para>
<orderedlist numeration="lowerroman">
<listitem>
<para>
It will happen if you have ipchains on at the server and/or the
client and you are not allowing fragmented packets through the
chains. Allow fragments from the remote host and you'll be able
to function again. See <xref linkend="firewalls"> for details on how to do this.
</para>
</listitem>
<listitem>
<para>
       You may be using a larger rsize and wsize in your mount options
       than the server supports. Try reducing rsize and wsize to 1024 and
       see if the problem goes away (an example follows this list). If it
       does, then increase them slowly to a more reasonable value.
</para>
</listitem>
</orderedlist>
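  <para>
    For the second case, and continuing the example names from the server
    section, a mount with reduced rsize and wsize would look something
    like:
    <screen>
      # mount -o rsize=1024,wsize=1024 master.foo.com:/home /mnt/home
    </screen>
  </para>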
</sect2>
<sect2 id="symptom9">
<title>nfsd won't start</title>
<titleabbrev id="sym9short">Symptom 9</titleabbrev>
<para>
Check the file <filename>/etc/exports</filename> and make sure root has read permission.
Check the binaries and make sure they are executable. Make sure
your kernel was compiled with NFS server support. You may need
to reinstall your binaries if none of these ideas helps.
</para>
</sect2>
</sect1>