<chapter id="openMosixview">
<title>openMosixview</title>
<sect1><title>Introduction</title>
<para>
openMosixview is the successor to, and a complete rewrite of, Mosixview.
It is a cluster-management GUI for openMosix clusters and everybody is invited
to download and use it (at your own risk and responsibility).
The openMosixview suite contains 5 useful applications for monitoring and
administering openMosix clusters.
</para>
<para>
<simplelist>
<member>
<emphasis>openMosixview </emphasis> the main monitoring+administration
application</>
<member>
<emphasis>openMosixprocs </emphasis> a process-box for managing processes
</><member>
<emphasis>openMosixcollector </emphasis> a collecting daemon which logs
cluster and node information
</><member>
<emphasis>openMosixanalyzer </emphasis> for analyzing the data collected by
the openMosixcollector
</><member>
<emphasis>openMosixhistory </emphasis> a process-history for your cluster
</></simplelist>
</para>
<para>
All parts are accessible from the main application window. The most common
openMosix commands can be executed with a few mouse-clicks. An advanced execution
dialog helps to start applications on the cluster.
"Priority-sliders" for each node simplify manual and automatic load-balancing.
openMosixview is now adapted to the openMosix auto-discovery and gets all
configuration values from the openMosix /proc-interface.
</para>
</sect1>
<sect1>
<title>
openMosixview vs Mosixview </title>
<para>
openMosixview is designed for openMosix clusters only.
The Mosixview website (and all mirrors) will stay as they are, but all further development
will continue with openMosixview, located at the new domain <ulink url="http://www.openmosixview.com"><citetitle>www.openmosixview.com</citetitle></ulink>.
</para>
<para>
If you have questions, feature requests, installation problems,
comments, experiences to share etc., feel free to mail me, Matt Rechenburg,
or subscribe to the openMosix/Mosixview mailing-list
and post there.
</para>
<para>
<emphasis>changes (compared to Mosixview 1.1): </emphasis>
openMosixview is a complete rewrite of Mosixview "from scratch"!
It has the same functionality but there are fundamental changes in
ALL parts of the openMosixview source-code. It is tested with a
constantly changing cluster topography (required for the openMosix auto-discovery).
All "buggy" parts are removed or rewritten and it (should ;) run much more stably now.
</para>
<simplelist>
<member>
adapted to the openMosix-auto-discovery</>
<member>
not using /etc/mosix.map or any cluster-map file anymore
</><member>
removed the (buggy) map-file parser
</><member>
rewrote all parts/functions/methods to a cleaner C++ interface
</><member>fixed some smaller bugs in the display
</><member>
replaced MosixMem+Load with the openMosixanalyzer
</><member>
... many more changes
</></simplelist>
</sect1>
<sect1><title>Installation</title>
<para>
Requirements:
<simplelist>
<member>the QT library</>
<member>
root rights</>
<member>rlogin and rsh (or ssh) to all cluster-nodes without a password</>
<member>
the openMosix userland-tools mosctl, migrate, runon, iojob, cpujob ...
(download them from the www.openmosix.org website; a quick check follows this list)
</>
</simplelist>
On Red Hat 8.0 you will need at least the following RPMs:
qt-3.0.5-17, libmng-1.0.4, XFree86-Mesa-libGLU-4.2.0, glut-3.7
etc ...
</para>
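<para>
A quick way to check whether the userland-tools are installed (each of them
should be found in your PATH):
<programlisting>
which mosctl migrate runon iojob cpujob
</programlisting>
</para>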
<para>
Documentation about openMosixview:
a full HTML documentation of openMosixview is included
in every package. You will find the start page of the documentation in your
openMosixview installation directory:
openmosixview/openmosixview/docs/en/index.html
</para>
<para>
The RPM packages have their installation directory in:
/usr/local/openmosixview
</para>
<sect2><title>Installation of the RPM-distribution </title>
<para>
Download the latest version of the openMosixview RPM package.
Then just execute e.g.:
<programlisting>
rpm -i openmosixview-1.4.rpm
</programlisting>
This will install all binaries in /usr/bin.
To uninstall:
<programlisting>
rpm -e openmosixview
</programlisting>
</para>
</sect2>
<sect2>
<title>
Installation of the source-distribution </title>
<para>
Download the latest version of the openMosixview sources, copy
the tarball to e.g. /usr/local/ and unpack it there:
<programlisting>
gunzip openmosixview-1.4.tar.gz
tar -xvf openmosixview-1.4.tar
</programlisting>
</para>
</sect2>
<sect2><title>
Automatic setup-script
</title>
<para>
Just cd to the openmosixview-directory and execute
<programlisting>
./setup [your_qt_2.3.x_installation_directory]
</programlisting>
</para>
</sect2>
<sect2>
<title>
Manual compiling </title>
<para>
Set the QTDIR environment variable to your actual QT distribution, e.g.
<programlisting>
export QTDIR=/usr/lib/qt-2.3.0 (for bash)
or
setenv QTDIR /usr/lib/qt-2.3.0 (for csh)
</programlisting>
</para>
</sect2>
<sect2>
<title>
Hints </title>
<para>
(from the testers of openMosixview/Mosixview who compiled it on different
Linux distributions - thanks again!)
Create the link /usr/lib/qt pointing to your QT-2.3.x installation,
e.g. if QT-2.3.x is installed in /usr/local/qt-2.3.0:
<programlisting>
ln -s /usr/local/qt-2.3.0 /usr/lib/qt
</programlisting>
Then you have to set the QTDIR environment variable to
<programlisting>
export QTDIR=/usr/lib/qt (for bash)
or
setenv QTDIR /usr/lib/qt (for csh)
</programlisting>
After that the rest should work fine:
<programlisting>
./configure
make
</programlisting>
Then do the same in the subdirectories openmosixcollector, openmosixanalyzer,
openmosixhistory and openmosixviewprocs.
Copy all binaries to /usr/bin:
<programlisting>
cp openmosixview/openmosixview /usr/bin
cp openmosixviewproc/openmosixviewprocs/mosixviewprocs /usr/bin
cp openmosixcollector/openmosixcollector/openmosixcollector /usr/bin
cp openmosixanalyzer/openmosixanalyzer/openmosixanalyzer /usr/bin
cp openmosixhistory/openmosixhistory/openmosixhistory /usr/bin
</programlisting>
Copy the openmosixcollector init-script to your init directory, e.g.
<programlisting>
cp openmosixcollector/openmosixcollector.init /etc/init.d/openmosixcollector
or
cp openmosixcollector/openmosixcollector.init /etc/rc.d/init.d/openmosixcollector
</programlisting>
Now copy the openmosixprocs binary to /usr/bin/openmosixprocs on each of your cluster-nodes:
<programlisting>
rcp openmosixprocs/openmosixprocs your_node:/usr/bin/openmosixprocs
</programlisting>
You can now execute openMosixview:
<programlisting>
openmosixview
</programlisting>
</para>
</sect2></sect1>
<sect1><title>using openMosixview</title>
<sect2><title>main application</title>
<para>
Here is a picture of the main application window.
Its functionality is explained in the following sections.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview1.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview1.gif" format="gif">
</imageobject>
</mediaobject>
<para>
openMosixview displays a row with a lamp, a button, a slider, an lcd-number,
two progress-bars and some labels for each cluster-member.
The lights at the left display the openMosix-ID and the status
of each cluster-node: red if down, green if available.
</para>
<para>
If you click on the button displaying the IP address of a node, a configuration dialog
will pop up. It shows buttons to execute the most commonly used "mosctl" commands
(described later in this HOWTO).
With the "speed-sliders" you can set the openMosix-speed for each host. The current speed
is displayed by the lcd-number.
</para>
<para>
You can influence the load-balancing of the whole cluster by changing these values.
Processes in an openMosix cluster migrate more easily to a node with a higher openMosix-speed
than to nodes with a lower one. It is not the physical speed you set here, but the
speed openMosix "thinks" a node has.
For example, a CPU-intensive job on a cluster-node whose speed is set to the lowest value of the
whole cluster will search for a better processor to run on and will migrate
away easily.
</para>
<para>
The progress bars in the middle give an overview of the load on each cluster-member.
They display the load in percent, so they do not represent exactly the value written by
openMosix to the file /proc/hpc/nodes/x/load, but they should give a good overview.
</para>
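<para>
If you want to compare the bars with the raw values, you can read the load
directly from the openMosix /proc-interface; the node-ID "1" below is just an
example.
<programlisting>
cat /proc/hpc/nodes/1/load
</programlisting>
</para>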
<para>
The next progress bar shows the memory usage of the nodes.
It displays the currently used memory as a percentage of the available memory on each host
(the label to the right displays the available memory).
The number of CPUs in your cluster is written in the box to the right.
The first line of the main window contains a configuration button for "all-nodes".
With this option you can configure all nodes in your cluster in the same way.
</para>
<para>
How well the load-balancing works is displayed by the progress bar in the top left.
100% is very good and means that all nodes have nearly the same load.
</para>
<para>
Use the collector- and analyzer-menu to manage the openMosixcollector and
to open the openMosixanalyzer. These two parts of the openMosixview application suite
are useful for getting an overview of your cluster over a longer period.
</para>
</sect2>
<sect2><title>the configuration-window</title>
<para>
This dialog will pop up if a "cluster-node" button is clicked.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview2.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview2.gif" format="gif">
</imageobject>
</mediaobject>
<para>
The openMosix configuration of each host can now be changed easily.
All commands are executed via "rsh" or "ssh" on the remote hosts
(even on the local node), so "root" has to be able to "rsh" (or "ssh") to each host
in the cluster without being prompted for a password
(how to configure this is well described in the Beowulf documentation and in the
SSH section of this HOWTO).
</para>
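<para>
A minimal check of this password-less requirement, assuming your nodes are
called node1, node2 and node3 (replace these with your own hostnames): run the
following as root on the host where openMosixview is started; none of the
commands should prompt for a password.
<programlisting>
for host in node1 node2 node3
do
  rsh $host hostname      # use "ssh $host hostname" instead if you prefer ssh
done
</programlisting>
</para>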
<para>
The commands are:
<programlisting>
automigration on/off
quiet yes/no
bring/lstay yes/no
expel yes/no
openMosix start/stop
</programlisting>
If openMosixprocs is properly installed on the remote cluster-nodes,
click the "remote proc-box" button to open openMosixprocs (the proc-box) remotely.
"xhost +hostname" will be set and the display will point to your localhost.
The client is also executed on the remote node via "rsh" or "ssh"
(the openmosixprocs binary must be copied to e.g. /usr/bin on each host of the cluster).
openMosixprocs is a process-box for managing your programs.
It is useful for managing programs started and running locally on the remote nodes
and is described later in this HOWTO.
</para>
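<para>
The options listed above correspond to the "mosctl" userland-tool executed on
the node. If you prefer the command-line, the counterparts should look roughly
like the following sketch (this assumes the mosctl subcommands stay, lstay,
quiet, expel and bring and the usual openmosix init-script; check "man mosctl"
on your installation to be sure):
<programlisting>
mosctl stay        # automigration off ("mosctl nostay" switches it back on)
mosctl lstay       # lstay yes ("mosctl nolstay" undoes it)
mosctl quiet       # quiet yes ("mosctl noquiet" undoes it)
mosctl expel       # send guest processes away ("mosctl bring" brings migrated processes home)
/etc/init.d/openmosix stop    # stop openMosix on this node ("start" brings it back up)
</programlisting>
</para>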
<para>
If you are logged in to your cluster from a remote workstation, insert your local hostname
in the edit-box below the "remote proc-box" button. Then openMosixprocs will be displayed
on your workstation and not on the cluster-member you are logged in to
(you may have to run "xhost +clusternode" on your workstation).
The combo-box keeps a history, so you have to type the hostname only once.
</para>
</sect2>
<sect2>
<title>advanced-execution</title><para> If you want to start jobs on your
cluster, the "advanced execution" dialog may help you.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview3.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview3.gif" format="gif">
</imageobject>
</mediaobject>
<para>
Choose a program to start with the "run-prog" button (file-open icon); in this execution
dialog you can then specify how and where the job is started. There are several options, explained below.
</para>
</sect2>
<sect2>
<title>
the command-line </title>
<para>
You can specify additional command-line arguments in the line-edit widget at the top of the window.
The start options are listed in the table below; a command-line sketch follows the table.
</para>
<table frame=all><title>how to start</title>
<tgroup cols=2 align=left>
<tbody>
<row><entry>
-no migration</entry><entry> start a local job which won't migrate
</entry></row>
<row><entry>
-run home </entry><entry> start a local job
</entry></row>
<row><entry>
-run on</entry><entry> start a job on the node you can choose with the
"host-chooser"
</entry></row>
<row><entry>
-cpu job </entry><entry> start a computation-intensive job on a node
(host-chooser)
</entry></row>
<row><entry>
-io job </entry><entry> start an io-intensive job on a node
(host-chooser)
</entry></row>
<row><entry>
-no decay </entry><entry> start a job with no decay (host-chooser)
</entry></row>
<row><entry>
-slow decay </entry><entry> start a job with slow decay (host-chooser)
</entry></row>
<row><entry>
-fast decay </entry><entry> start a job with fast decay (host-chooser)
</entry></row>
<row><entry>
-parallel </entry><entry> start a job in parallel on some or all nodes
(special host-chooser)
</entry></row>
</tbody></tgroup></table>
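<para>
These options mirror some of the openMosix userland-tools named in the
requirements (runon, cpujob, iojob, ...). As a rough sketch of a command-line
counterpart, assuming the usual "runon node command" syntax (check the
userland-tools documentation for the exact invocation on your installation):
<programlisting>
runon 2 ./myprogram      # start ./myprogram on the node with openMosix-ID 2
</programlisting>
Here "./myprogram" and the node-ID 2 are only placeholders for your own
application and node.
</para>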
</sect2>
<sect2>
<title>the host-chooser </title>
<para>
For all jobs you start non-locally, simply choose a host with the dial-widget.
The openMosix-ID of the node is also displayed by an lcd-number. Then click "execute" to start the job.
</para>
</sect2>
<sect2>
<title>the parallel host-chooser</title>
<para>
You can set the first and the last node with two spinboxes.
The command will then be executed on all nodes from the first to the last node.
You can also invert this option.
</para>
</sect2>
</sect1>
<sect1>
<title>openMosixprocs</title>
<sect2><title>intro</title>
<para>
This process-box is really useful for managing the processes running on your cluster.
<mediaobject>
<imageobject>
<imagedata fileref="images/omview4.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview4.gif" format="gif">
</imageobject>
</mediaobject>
You should install it on every cluster-node!
</para>
<para>
The process list gives an overview of what is running where.
The second column displays the openMosix node-ID of each process:
0 means local, all other values are remote nodes. Migrated processes are marked with a
green icon and non-movable processes have a lock.
</para>
<para>
Double-clicking a process in the list pops up the migrator-window for managing it,
e.g. migrating the process.
There are also options to migrate remote processes away, to send SIGSTOP or
SIGCONT to a process, or to "renice" it.
</para>
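<para>
The same actions are also available from the command-line through the
"migrate" userland-tool and ordinary signals. A hedged sketch (1234 is a
hypothetical PID, 2 a hypothetical node-ID; the exact migrate syntax may
differ on your installation):
<programlisting>
migrate 1234 2       # ask openMosix to migrate process 1234 to node 2
kill -STOP 1234      # pause the process (SIGSTOP)
kill -CONT 1234      # continue the process (SIGCONT)
</programlisting>
</para>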
<para>
If you click on the "manage procs from remote" button a new window will come up
(the remote-procs window) displaying the processes currently migrated to this host.
</para>
</sect2>
<sect2>
<title>
the migrator-window</title>
<para>
This dialog will pop up if a process
in the process-box is double-clicked.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview5.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview5.gif" format="gif">
</imageobject>
</mediaobject>
<para>
The openMosixview-migrator window displays all nodes in your openMosix cluster.
This window is for managing one process (with additional status information).
By double-clicking on a host in the list the process will migrate to that host.
After a short moment the process-icon for the managed process turns green,
which means it is running remotely.
</para>
<para>
The "home"-button sends the process to its home node.
With the "best"-button the process is send to the best available node in your cluster.
This migration is influenced by the load, speed, CPU's and what openMosix "thinks"
of each node. It maybe will migrate to the host with the most CPU's and/or the best speed.
With the "kill"-button you can kill the process immediately.
</para>
<para>
To pause a program just click the "SIGSTOP" button; to continue it, click the "SIGCONT" button.
With the renice-slider below you can renice the currently managed process
(-20 means highest priority, 0 normal and 20 lowest priority).
</para>
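<para>
The renice-slider corresponds to the ordinary "renice" command; if you prefer
the command-line (1234 is a hypothetical PID):
<programlisting>
renice +5 -p 1234     # lower the priority of process 1234 a little
renice -10 -p 1234    # raise its priority (only root may set negative values)
</programlisting>
</para>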
</sect2>
<sect2>
<title>managing processes from remote</title>
<para>
This dialog will pop up if the "manage procs from remote" button
beneath the process-box is clicked.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview6.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview6.gif" format="gif">
</imageobject>
</mediaobject>
<para>
The TabView displays the processes that are migrated to the local host.
These processes come from other nodes in your cluster and are currently being computed
on the host openMosixview was started on.
Similar to the two buttons in the migrator-window, a process is sent home
with the "goto home node" button and sent to the best available node with the
"goto best node" button.
</para>
</sect2>
</sect1>
<sect1>
<title>openMosixcollector</title>
<para>
The openMosixcollector is a daemon which should/could be started on one cluster-member.
It logs the openMosix load of each node to the directory /tmp/openmosixcollector/*.
These history log-files, analyzed by the openMosixanalyzer (as described later),
give a nonstop overview of the load, memory and processes in your cluster.
There is one main log-file called /tmp/openmosixcollector/cluster;
in addition, there are further files in this directory to which the
data is written.
</para>
<para>
At startup the openMosixcollector writes its PID (process id) to
/var/run/openMosixcollector.pid
</para>
<para>
The openMosixcollector daemon restarts every 12 hours and saves the current history to
/tmp/openmosixcollector[date]/*.
These backups are done automatically, but you can also trigger them manually.
</para>
<para>
There is an option to write a checkpoint to the history.
These checkpoints are graphically marked as blue vertical lines if you analyze
the history log-files with the openMosixanalyzer.
For example, you can set a checkpoint when you start a job on your cluster
and another one at the end.
</para>
<para>
Here is an explanation of the possible command-line arguments:
<programlisting>
openmosixcollector -d //starts the collector as a daemon
openmosixcollector -k //stops the collector
openmosixcollector -n //writes a checkpoint to the history
openmosixcollector -r //saves the current history and starts a new one
openmosixcollector    //prints out a short help
</programlisting>
</para>
<para>
You can start this daemon with its init-script in /etc/init.d or /etc/rc.d/init.d.
You just have to create a symbolic link in one of the runlevel directories for automatic startup.
</para>
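<para>
For example, to start the collector automatically in runlevel 3 (the
runlevel-directory layout differs between distributions, so adjust the path
and the link name if necessary):
<programlisting>
ln -s /etc/init.d/openmosixcollector /etc/rc.d/rc3.d/S99openmosixcollector
</programlisting>
</para>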
<para>
How to analyze the created logfiles is described in the openMosixanalyzer-section.
</para>
</sect1>
<sect1>
<title>openMosixanalyzer</title>
<sect2><title>
the load-overview</title>
<para>
This picture shows the graphical load-overview in the openMosixanalyzer.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview7.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview7.gif" format="gif">
</imageobject>
</mediaobject>
<para>
With the openMosixanalyzer you get a non-stop openMosix history of your cluster.
The history log-files created by the openMosixcollector are displayed graphically
so that you have a long-term overview of what happened and is happening on your cluster.
The openMosixanalyzer can analyze the current "online" logfiles, but you can also open older backups
of your openMosixcollector history logs via the file menu.
The logfiles are placed in /tmp/openmosixcollector/* (the backups in /tmp/openmosixcollector[date]/*)
and you only have to open the main history file "cluster" to take a look at older load information
(the [date] in the backup directories is the date on which the history was saved).
The start time is displayed at the top and you have a full-day view (12 h) in the openMosixanalyzer.
</para>
<para>
If you are using the openMosixanalyzer to look at the "online" logfiles (the current history)
you can enable the "refresh" checkbox and the view will auto-refresh.
</para>
<para>
The load-lines are normally black. If the load increases to >75 the lines are drawn in red.
These values are openMosix information; the openMosixanalyzer reads them
from the files /proc/hpc/nodes/[openMosix ID]/*
</para>
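<para>
You can take a look at these files yourself; the node-ID "1" is only an
example:
<programlisting>
ls /proc/hpc/nodes/1/
</programlisting>
</para>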
<para>
The Find-out button for each node calculates several useful statistical values.
Clicking it will open a small new window in which you get the average load and memory values
and some more static and dynamic information about the specific node or the whole cluster.
</para>
</sect2>
<sect2>
<title>statistical informations about a cluster-node</title>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview9.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview9.gif" format="gif">
</imageobject>
</mediaobject>
<para>
If there are checkpoints written to the load-history by the openMosixcollector, they are
displayed as vertical blue lines. This makes it much easier to compare the load values
at a certain moment.
</para>
</sect2>
<sect2>
<title>the memory-overview</title>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview8.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview8.gif" format="gif">
</imageobject>
</mediaobject>
<para>
This picture shows the graphical
Memory-overview in the openMosixanalyzer
</para>
<para>
With the Memory-overview in the openMosixanalyzer you get a non-stop memory history,
similar to the Load-overview. The history log-files created by the openMosixcollector
are displayed graphically so that you have a long-term overview of what happened
and is happening on your cluster.
It analyzes the current "online" logfiles, but you can also open older backups of your
openMosixcollector history logs via the file menu.
</para>
<para>
The displayed values are openMosix information. The openMosixanalyzer gets this
information from the files
<programlisting>
/proc/hpc/nodes/[openMosix-ID]/mem
/proc/hpc/nodes/[openMosix-ID]/rmem
/proc/hpc/nodes/[openMosix-ID]/tmem
</programlisting>
</para>
<para>
If there are checkpoints written to the memory-history by the openMosixcollector,
they are displayed as vertical blue lines.
</para>
</sect2>
<sect2>
<title>openMosixhistory</title>
<mediaobject>
<imageobject>
<imagedata fileref="images/omview10.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/omview10.gif" format="gif">
</imageobject>
</mediaobject>
<para>
openMosixhistory displays the process list from the past.
</para><para>
openMosixhistory gives a detailed overview of which process was running on which node.
The openMosixcollector saves the process list of the host the collector was started
on, and you can browse this log data with openMosixhistory. You can easily change the
browsing time in openMosixhistory with the time-slider.
</para>
<para>
openMosixhistory can analyze the current "online" logfiles but you can also open older
backups of your openMosixcollector history logs by the filemenu.
</para>
<para>
The logfiles are placed in /tmp/openmosixcollector/*
(the backups in /tmp/openmosixcollector[date]/*) and you only have to open the main history
file "cluster" to take a look at older load information
(the [date] in the backup directories is the date on which the history was saved).
The start time is displayed at the top left and you have a 12-hour view in openMosixhistory.
</para>
</sect2>
</sect1>
<sect1>
<title>openMosixmigmon</title>
<sect2><title>General</title>
<mediaobject>
<imageobject>
<imagedata fileref="images/ommigmon2.eps" format="eps">
</imageobject>
<imageobject>
<imagedata fileref="images/ommigmon2.gif" format="gif">
</imageobject>
</mediaobject>
<para>
The openMosixmigmon is a monitor for migrations in your
openMosix-cluster.
It displays all your nodes as little penguins sitting in a circle.
</para>
<para><emphasis>
-> nodes-circle.</emphasis>
</para>
<para>
The main penguin is the node on which openMosixmigmon runs; around
this node it shows its processes, also in a circle, as small black squares.
</para>
<para>
<emphasis>-> main process-circle</emphasis>
</para>
<para>
If a process migrates to one of the nodes, that node gets its own
process-circle
and the process moves from the main process-circle to the remote
process-circle.
The process is then marked green and a line is drawn from its origin to its
remote location to visualize the migration.
</para>
</sect2>
<sect2><title>
Tooltips:</title>
<para>
If you hold your mouse over a process it will show you its PID and
command line
in a small tooltip-window.
</para>
</sect2>
<sect2><title>Drag'n Drop!</title>
<para>
The openMosixmigmon is fully Drag'n Drop enabled.
You can grab (drag) any process and drop it onto any of your nodes (those
penguins)
and the process will move there.
If you double-click a process on a remote node it will be sent home
immediately.
</para>
</sect2>
</sect1>
<sect1><title>openMosixview FAQ</title>
<qandaset>
<qandaentry><question><para>
I cannot compile openMosixview on my system. What is wrong?
</para></question>
<answer><para>
First of all, QT >= 2.3.x is required.
The QTDIR environment variable has to be set to your QT installation
directory, as described in the INSTALL file.
In versions < 0.6 you can do a "make clean" and delete the two files
/openmosixview/Makefile
/openmosixview/config.cache
and try to compile again, because I always left the binary
and object files in place in older versions.
If you have any other problems post them to the
openMosixview mailing-list (or directly to me).
</para></answer>
</qandaentry>
<qandaentry>
<question><para>
Can I use openMosixview with SSH? </para>
</question>
<answer>
<para>
Yes, as of version 0.7 there is built-in SSH support.
You have to be able to ssh to each node in your cluster without a password
(just as is required when using RSH).
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I started openMosixview but only the splash-screen appears. What is wrong?
</para>
</question>
<answer>
<para>
Do not fork openMosixview into the background with & (e.g. openMosixview &).
Maybe you cannot rsh/ssh (depending on which one you want to use) as user root
without a password to each node?
Try "rsh hostname" as root. You should not be prompted for a password
but should soon get a login shell.
(If you use SSH, try "ssh hostname" as root.)
You have to be root on the cluster because the
administrative commands executed by openMosixview require root privileges.
openMosixview uses "rsh" as the default!
If you only have "ssh" installed on your cluster,
edit (or create) the file /root/.openMosixview and put "1111" in it.
This is the main configuration
file for openMosixview and the last "1" stands for "use ssh instead of rsh".
This will cause openMosixview to use "ssh" even for the first start.
</para>
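<para>
A one-liner which creates this configuration-file is:
<programlisting>
echo "1111" > /root/.openMosixview
</programlisting>
</para>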
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
The openMosixviewprocs/mosixview_client is not working for me!
</para>
</question>
<answer>
<para>
The openMosixview-client is executed via rsh
(or ssh, which you can configure with a checkbox)
on the remote host. It has to be installed in /usr/bin/ on each node.
If you use RSH try:
"xhost +hostname"
"rsh hostname /usr/bin/openMosixview_client -display your_local_host_name:0.0"
or if you use SSH try:
"xhost +hostname"
"ssh hostname /usr/bin/openMosixview_client -display your_local_host_name:0.0"
If this works it will work in openMosixview too.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
openMosixview crashes with "segmentation fault"!
</para>
</question>
<answer>
<para>
Maybe you are still using an old version of openMosixview/Mosixview?
Older versions contained a bug in the mosix.map-parser (which is completely
removed in openMosixview!).
The versions openMosixview 1.2 and Mosixview > 1.0 are stable.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Why are the buttons in the openMosixview-configuration dialog not preselected?
</para>
</question>
<answer>
<para>
(automigration on/off, blocking on/off......)
I want them to be preselected too.
The problem is to get the information of node.
You have to login to each cluster-node because these information
are not cluster-wide (to my mind).
The status of each node is stored in the /proc/hpc/admin directory
of each node. Everybody who knows a good way to get these information
easy is invited to mail me.
</para>
</answer>
</qandaentry>
</qandaset>
</sect1>
<sect1><title>openMosixview + ssh:</title>
<para>
(this HOWTO is for SSH2)
You can read the reasons why you should use SSH instead of RSH every day
in the newspaper, whenever another script-kiddie has hacked into an insecure system/network.
So SSH is a good choice in any case.
<programlisting>
freedom x security = constant    (from a security newsgroup)
</programlisting>
That is why it is a bit tricky to configure SSH. SSH is secure even if you use
it to log in without being prompted for a password.
Here is a (one) way to configure it.
</para>
<para>
First of all, a running secure-shell daemon on the remote host is required.
If it is not already installed, install it!
(rpm -i [sshd_rpm_package_from_your_linux_distribution_cd])
If it is not already running, start it with:
<programlisting>
/etc/init.d/ssh start
</programlisting>
Now you have to generate a keypair for SSH on your local computer with ssh-keygen.
<programlisting>
ssh-keygen
</programlisting>
You will be prompted for a passphrase for that keypair.
The passphrase is normally longer than a password and may be a whole sentence.
The keypair is encrypted with that passphrase and saved in
<programlisting>
/root/.ssh/identity //your private key
and
/root/.ssh/identity.pub //your public key
</programlisting>
<emphasis>Do NOT give your private key to anybody!!! </emphasis>
Now copy the whole content of /root/.ssh/identity.pub
(your public key, which should be one long line) into /root/.ssh/authorized_keys
on the remote host.
(Also copy the content of /root/.ssh/identity.pub to
your local /root/.ssh/authorized_keys, like you did with the remote node,
because openMosixview needs password-less login to the local node, too!)
</para>
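<para>
A sketch of how to append the key (the hostname "node1" is only an example;
at this point you will still be asked for the remote root password):
<programlisting>
cat /root/.ssh/identity.pub | ssh node1 "cat >> /root/.ssh/authorized_keys"
cat /root/.ssh/identity.pub >> /root/.ssh/authorized_keys   # the local copy
</programlisting>
</para>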
<para>
If you ssh to this remote host now you will be prompted for the passphrase
of your key. Entering the right passphrase should give you a login.
</para>
<para>
What is the advantage right now? None yet -
the passphrase is normally even longer than a password!
</para>
<para>
The advantage comes from using the ssh-agent,
which manages the passphrase during ssh login.
<programlisting>
ssh-agent
</programlisting>
The ssh-agent is now started and gives you two environment variables you should set
(if they are not set already).
Type:
<programlisting>
echo $SSH_AUTH_SOCK
and
echo $SSH_AGENT_PID
</programlisting>
to see if they are exported to your shell right now.
If not, just cut and paste them from your terminal,
e.g. for the bash shell:
<programlisting>
SSH_AUTH_SOCK=/tmp/ssh-XXYqbMRe/agent.1065
export SSH_AUTH_SOCK
SSH_AGENT_PID=1066
export SSH_AGENT_PID
</programlisting>
example for the csh-shell:
<programlisting>
setenv SSH_AUTH_SOCK /tmp/ssh-XXYqbMRe/agent.1065
setenv SSH_AGENT_PID 1066
</programlisting>
With these variables the remote sshd-daemon can connect to your local ssh-agent
by using the socket-file in /tmp (in this example /tmp/ssh-XXYqbMRe/agent.1065).
The ssh-agent can now give the passphrase to the remote host through this socket
(the transfer is of course encrypted)!
</para>
<para>
You just have to add your key to the ssh-agent with the ssh-add command.
<programlisting>
ssh-add
</programlisting>
Now you should be able to log in to the remote host using ssh without
being prompted for a password!
</para>
<para>
You could (should) add the ssh-agent and ssh-add commands to your
login profile, e.g.
<programlisting>
eval `ssh-agent`
ssh-add
</programlisting>
Then they are started when you log in on your local workstation.
You have done it! I wish you secure logins from now on.
</para>
<para>
<emphasis>openMosixview </emphasis>
There is a menu entry which toggles between rsh and ssh in openMosixview.
Just enable this and you can use openMosixview even in insecure
network environments. You should also save this configuration
(the possibility of saving the current config in openMosixview
was added in version 0.7) because openMosixview gets its initial data from the slave nodes
using rsh or ssh (whichever you configured).
</para>
<para>
If you choose a service which is not installed properly, openMosixview will not work!
(e.g. if you cannot rsh to a slave without being prompted for a password
you cannot use openMosixview with RSH; if you cannot ssh to a slave
without being prompted for a password you cannot use openMosixview with SSH)
</para>
</sect1>
</chapter>