<CHAPTER ID="Tuning">
<TITLE>Tuning Mosix</TITLE>
<sect1><title>Introduction</title>
<para>
Some parts below are still taken from the old Mosix HOWTO; as time passes
they will be replaced by openMosix-specific material. Some things are
still the same, but your mileage may vary. </para>
</sect1>
<sect1><title>Creating a "Master" node</title>
<para>Although the openMosix architecture does not require a master node as
such, you might want a head node from which you launch processes; this might
be a multihomed node where users log in to your cluster.
In that case you want to configure the machine so that processes
migrate away from it.</para>
<para>
You have to trick the node into thinking it is the slowest node around, so
that it migrates all its processes to the faster nodes.
</para><para>
You can make it "slow" with:
<programlisting>
mosctl setspeed [n]
</programlisting>
where n should be much lower than the speed of the other nodes; processes
will then migrate away quickly.
You can get the speed of a node with:
<programlisting>
mosctl getspeed
</programlisting>
</para>
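<para>
As a concrete sketch (the value 1 below is an arbitrary illustration; pick
anything well below the speeds your compute nodes report):
<programlisting>
# on a compute node: see what speed it advertises
mosctl getspeed
# on the head node: advertise a much lower speed
mosctl setspeed 1
</programlisting>
</para>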
</sect1>
<SECT1><TITLE>Optimizing Mosix </TITLE>
<PARA>Editorial Comment: To be checked with openMosix versions
</PARA>
<PARA>
Log in at a normal terminal as root. Type
<PROGRAMLISTING>
setpe -r
</PROGRAMLISTING>
which, if everything went right, will give you a listing of your
/etc/mosix.map. If things did not go right, try
<PROGRAMLISTING>
setpe -w -f /etc/mosix.map
</PROGRAMLISTING>
to set up your node.
Then, type
<PROGRAMLISTING>
cat /proc/$$/lock
</PROGRAMLISTING>
to see if your child processes are locked in your node (1) or can
migrate (0). If for some reason you find your processes are locked,
you can change this with
<PROGRAMLISTING>
echo 0 > /proc/$$/lock
</PROGRAMLISTING>
until you fix the problem.
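The same interface works for any process, not just the current shell. A
minimal sketch, assuming a hypothetical running job with PID 1234:
<PROGRAMLISTING>
# check whether the process may migrate (1 = locked, 0 = free)
cat /proc/1234/lock
# allow it to migrate
echo 0 > /proc/1234/lock
</PROGRAMLISTING>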
Repeat the whole configuration scheme for a second computer.
The programs tune_kernel and prep_tune that Mosix uses to calibrate
the individual nodes do not work with the SuSE distribution.
However, you can fake it. First, bring the computer you want to
tune and another computer with Mosix installed down to single user
mode by typing
<PROGRAMLISTING>
init 1
</PROGRAMLISTING>
as root. All other computers on the network should be shut down if
possible.
On both machines, run the following commands:
<PROGRAMLISTING>
/etc/init.d/network start
/etc/init.d/mosix start
echo 1 > /proc/mosix/admin/quiet
</PROGRAMLISTING>
This fakes prep_tune and the first parts of tune_kernel. Note that
if you have a laptop with a pcmcia network card, you will have to
run
<PROGRAMLISTING>
/etc/init.d/pcmcia start
</PROGRAMLISTING>
instead of "/etc/init.d/network start".
On the computer you want to tune, run tune_kernel and follow the
instructions. Depending on your machines, this can take a while -
if you have a dog, this might be the time to go on that long, long
walk you've always promised him.
tune_kernel will create a program called "pg" in /root for testing
purposes. Ignore it.
After tuning is over, copy the contents of /tmp/overheads to the
file /etc/overheads (and/or recompile the kernel).
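For example:
<PROGRAMLISTING>
cp /tmp/overheads /etc/overheads
</PROGRAMLISTING>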
Repeat the tuning procedure for each computer. Reboot, enjoy Mosix,
and
don't forget to brag to your friends about your new cluster.
</PARA>
</SECT1>
<!--
<SECT1><TITLE>Where to place your files</TITLE>
<PARA>
</PARA>
</SECT1>
-->
<SECT1><TITLE>Channel Bonding Made Easy</TITLE>
<PARA>
Contributed by Evan Hisey
</PARA>
<PARA>
Channel bonding is actually horribly easy, which may explain the lack of
documentation on the subject. A bonded network appears as a normal network
to the applications. All machines on a subnet must be bonded the same way;
bonded and non-bonded machines really don't talk well to each other.
</PARA>
<PARA>
Channel bonding needs at least two physical sub-nets but can have more
(currently I have a tri-bonded cluster). To enable bonding you need the
Channel Bonding kernel code, either compiled into the kernel or built as a
module (bonding.o); as of 2.4.x it is a standard kernel option.
The NICs are set up as normal, except that you only use 'ifconfig' to
initialize the first card of the bond. 'ifenslave' is used to initialize
the remaining cards in the bonded connection. 'ifenslave' can be found in
the linux/Documentation/networking/ directory. It will need to be
compiled, as it is a .c file. The basic format for use is
<programlisting><![CDATA[
ifenslave <master> <slave1> <slave2> ...
]]></programlisting>
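As a concrete sketch (the interface names and the address below are
illustrative assumptions, not requirements):
<programlisting>
# load the bonding driver if built as a module
modprobe bonding
# bring up the bond master with ifconfig
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
# enslave the physical NICs to the bond (ifenslave compiled locally)
./ifenslave bond0 eth0 eth1
</programlisting>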
Channel bonded networks can connect to standard networks via a router or
bridge that supports channel bonding (I just use an extra NIC and
port-forwarding in the head node, as sketched below).
</PARA>
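<PARA>
For that port-forwarding approach, the head node also needs IP forwarding
enabled; this knob is standard Linux, not openMosix-specific:
<programlisting>
echo 1 > /proc/sys/net/ipv4/ip_forward
</programlisting>
</PARA>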
</SECT1>
<!--
<sect1><title>Updatedb</title>
<para>
Updatedb in combination with mfs can cause some issues; you might want to add
/mfs to the PRUNEPATHS or mfs to the PRUNEFS in your /etc/updatedb.conf
to stop updatedb from indexing these mount points.
</para>
</sect1>
-->
<sect1><title>openMosix and FireWire</title>
<para>
openMosix can gain performance by using another type of network device,
such as FireWire, as described in the paper <ulink
url="http://howto.ipng.be/FireWireClustering/"><citetitle>
openMosix and FireWire</citetitle></ulink>.
</para>
</sect1>
</CHAPTER>