<CHAPTER ID="Requirementsplanning">
<TITLE>Requirements and Planning</TITLE>
<SECT1> <TITLE>Hardware requirements</TITLE>
<PARA> Installing a basic cluster requires at least two
network-connected machines, either using a cross-over cable between
the two network cards or using a switch or hub (a switch performs much
better than a hub, though, and only costs a few bucks more). Of
course, the faster your network cards, the better the performance you
will get out of your cluster.
</PARA>
<PARA> These days Fast Ethernet (100 Mbps) is standard; putting
multiple ports in a machine isn't that difficult, but make sure to
connect them to separate physical networks in order to gain the speed
you want. Gigabit Ethernet is getting cheaper every day now, but I
suggest that you don't rush to the shop to spend your money before you
have actually tested your setup with multiple 100Mbit cards and
noticed that you really do need the extra network capacity. Besides
installing a Gigabit card, you might also want to try bonding several
100Mbit cards together; a minimal example follows below. An even
cheaper alternative can be found in FireWire, as discussed in <ulink
url="http://howto.ipng.be/FireWireClustering/"><citetitle>this
paper</citetitle></ulink>. </PARA>
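<para> A minimal bonding sketch, assuming a 2.4 kernel with the Linux
bonding driver built as a module and the ifenslave tool installed; the
interface names eth1 and eth2 and the address below are placeholders
for your own setup:
</para>
<screen>
# Load the Linux bonding driver (assumed to be built as a module)
modprobe bonding
# Bring up the bond device, then enslave two 100Mbit cards to it;
# the address and interface names are examples only
ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
ifenslave bond0 eth1
ifenslave bond0 eth2
</screen>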
</SECT1>
<SECT1>
<TITLE>Hardware Setup Guidelines</TITLE>
<para> Setting up a big cluster requires some thought: where are you
going to put the machines? Not under a table somewhere or in the
middle of your office, I hope! It's fine if you just want to do some
small tests, but if you are planning to deploy an N-node cluster you
will have to make sure that the environment that will hold these
machines is capable of doing so.
</para>
<para> I'm talking about preparing one or more 19" racks to host the
machines and configuring the appropriate network topology, be it a
straightforward single-connected network or even a one-to-one
cross-connected network between all your nodes. You will also need to
make sure that there is enough power to supply such a range of
machines, that your air-conditioning system can handle the load and
that, in case of power failure, your UPS can cleanly shut down all the
required systems. You might want to invest in a KVM (Keyboard, Video,
Mouse) switch in order to facilitate access to the machines' consoles.
</para>
<para> But even if you don't have the number of nodes that justifies
such an investment, make sure that you can always easily access the
different nodes: you never know when you will have to replace a CPU
fan or a hard disk in a machine in trouble. If that means that you
have to unload a stack of machines to reach the bottom one, hence
shutting down your cluster, you are in trouble.
</para>
</SECT1>
<SECT1>
<TITLE>Software requirements</TITLE>
<PARA> The systems we plan to use will need a basic Linux installation
of your choice: Red Hat, SuSE, Debian, Gentoo or any other
distribution; it doesn't really matter which one.
What does matter is that the kernel is at least at the 2.4 level and
that your network cards are configured correctly; besides that, you'll
need a healthy amount of swap space. </PARA>
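<para> As a quick sanity check before you start, you can verify the
kernel version, the network configuration and the available swap on
each node; the commands below are standard Linux tools, nothing
openMosix-specific (eth0 is a placeholder for your interface):
</para>
<screen>
# The kernel must be at least version 2.4
uname -r
# Check that the network card came up with the right address
ifconfig eth0
# List the configured swap areas and their sizes
swapon -s
</screen>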
</SECT1>
<SECT1>
<TITLE>Planning your Cluster</TITLE>
<PARA> When it comes to configuring openMosix clusters with a pool of
servers and a set of (personal) workstations, you have several
options, each with its own advantages and disadvantages.
<itemizedlist>
<listitem>
<para>
In a <emphasis>Single-pool</emphasis> configuration all the servers
and workstations are used as a single cluster: each machine is a part
of the cluster and can migrate processes to every other node. This,
of course, makes your workstation a part of the pool.
</para>
</listitem>
<listitem>
<para>
In an environment that is called a <emphasis>Server-pool</emphasis>,
servers are part of the cluster while workstations are not; they
don't even run an openMosix kernel. If you want to run applications
on the cluster you will need to log on to these servers explicitly.
On the other hand, your workstation stays clean and no remote
processes will migrate to it.
</para>
</listitem>
<listitem>
<para>
A third alternative is called an <emphasis>Adaptive-pool</emphasis>
configuration: here servers are shared while workstations join or
leave the cluster. Imagine your workstation being used during daytime
by yourself but, as soon as you log out in the evening, a script tells
the workstation to join the cluster and start crunching numbers. This
way your machine is being used while you don't need it. If you need
the resources of the machine again, just run the openMosix stop
script: your processes will stay off the cluster, and remote processes
will stay away from your machine.
</para>
<para>Practically this means that you will change the role of your
machine by using mosctl; a minimal sketch follows this list.
</para>
</listitem>
</itemizedlist>
</PARA>
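<para> As a rough illustration of the Adaptive-pool scenario, the
fragment below could be run from cron at fixed hours. It assumes the
standard openMosix userland tools and init script are installed; the
init script path may differ on your distribution, so treat this as a
sketch rather than a finished solution:
</para>
<screen>
# In the evening: join the cluster and accept remote processes
/etc/init.d/openmosix start

# In the morning: refuse new remote processes, send the current
# guest processes back to their home nodes, then leave the cluster
mosctl block
mosctl expel
/etc/init.d/openmosix stop
</screen>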
</SECT1>
<SECT1><TITLE>Classrooms</TITLE>
<PARA>Although it might seem a good idea to convert your classroom
into an openMosix cluster at night, you'll have to train your end
users not to pull the power switch of those machines when they want to
use them again. More recent machines support an automatic shutdown
when the power button is pressed, but with older machines you might
lose some data when this actually happens.</PARA> </SECT1>
</CHAPTER>