diff --git a/LDP/howto/docbook/openMosix-HOWTO/ClumpOS.sgml b/LDP/howto/docbook/openMosix-HOWTO/ClumpOS.sgml index 6afaa6f7..7f45b1b3 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/ClumpOS.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/ClumpOS.sgml @@ -36,13 +36,13 @@ author of Clump/OS. How does it work - At boot-time, clump/os will + At boot-time, clump/os will auto-probe for network cards, and, if any are detected, try to configure them via DHCP. If successful, it will create a mosix.map file based on the assumption that all nodes are on local CLASS C networks, and configure MOSIX using this information. - + clump/os Release 4 best supports machines with a single connected network adapter. The MOSIX map created in such cases will consist of a single entry for the CLASS-C network detected, with the node number assigned @@ -53,13 +53,13 @@ to the order in which network adapters are detected. (Future releases will support complex topologies and feature more intelligent MOSIX map creation.) - + clump/os will then display a simple SVGA monitor (clumpview) indicating whether the node is configured, and, if it is, showing the load on all active nodes on the network. When you've finished using this node, simply press [ESC] to exit the interface and -shutdown. +shutdown. Alternatively, or if auto-configuration doesn't work for you, then you can use clump/os in Expert mode. Please note that clump/os is not a @@ -196,7 +196,8 @@ information you need is somewhere on this page -- please read -The CD-ROM doesn't boot +The CD-ROM doesn't boot + @@ -205,7 +206,7 @@ to boot from the CD-ROM drive; also make sure that the CD-ROM is the first boot device. - The SVGA interface doesn't work, or the display is incorrect @@ -242,7 +243,8 @@ and then configure MOSIX via setpe. advise us. We'd like to solve this problem, if possible, or at least document which network cards auto-probe correctly. 
- + + Migrating processes generate errors ("Network Unreachable") @@ -254,7 +256,8 @@ configured kernels -- even if you are using the all your nodes, but migrating processes generate errors, then please compare your master node's kernel configuration file with the R4.x kernel .config. - + + Migrating processes generate errors ("Process migration failed: incompatible topology") @@ -300,7 +303,7 @@ recommend doing so at this point.) If you want to run clumpview, execute: open -s -w -- clumpview --drone --svgalib - + This will force the node into 'drone' mode (local processes will not diff --git a/LDP/howto/docbook/openMosix-HOWTO/autodiscovery.sgml b/LDP/howto/docbook/openMosix-HOWTO/autodiscovery.sgml index 8f3217a5..ab581ecb 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/autodiscovery.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/autodiscovery.sgml @@ -1,8 +1,7 @@ -Autodiscovery +2.4 Autodiscovery Easy Configuration - The auto-discovery daemon (omdiscd) provides a way to automatically configure an openMosix cluster hence diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix-HOWTO.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix-HOWTO.sgml index d15deda0..9d5845da 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix-HOWTO.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix-HOWTO.sgml @@ -11,8 +11,8 @@ - - + + @@ -28,7 +28,8 @@ - + + ]> @@ -54,6 +55,23 @@ The best way to become acquainted with a subject is to write a book about it. 
+ v.1.98p4 + 30 april 2005 + Preparing for 2.6 + + + + v1.0.5 + 3 march 2005 + Misc small fixes + + + + + v1.0.4 + 13 december 2004 + Added info about removing openMosixFS in 2.4.26-om1 + v1.0.3 18 june 2004 @@ -190,6 +208,11 @@ The best way to become acquainted with a subject is to write a book about it.openMosix at 2.6 +&twodotsix + + --> &PlumpOS-HOWTO &Installation diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_And_Distributions.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_And_Distributions.sgml index 2336a30a..822aa950 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_And_Distributions.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_And_Distributions.sgml @@ -1,7 +1,7 @@ -Distribution specific installations +Distribution specific installations (2.4) Installing openMosix - +This chapter deals with openMosix 2.4. This chapter deals with installing openMosix on different distributions. It won't be an exhaustive list of all the possible combinations. However throughout the chapter you should find enough @@ -313,6 +313,9 @@ Installation is finished now: the cluster is up and running :) oMFS +Note that oMFS has been removed from openMosix as of the 2.4.26-om1 release. + + First of all, the CONFIG_MOSIX_FS option in the kernel configuration has to be enabled. If the current kernel was compiled without this diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Features.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Features.sgml index 8b314b8b..b149e015 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Features.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Features.sgml @@ -19,13 +19,10 @@ DSM is being released soon (late march 2003). - Well integrated with openAFS. Port to IA-64 as well as AMD-64 is underway. - oMFS has been improved much since plain - MFS.
It is a clustering platform with more than 10 products based on it: openMosixView, openMosixWebView, openMosixApplet, diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Problems.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Problems.sgml index c4343b83..a28d7a38 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Problems.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Problems.sgml @@ -282,9 +282,30 @@ Also: Do you have two nic cards on a node? then you have to edit the non-cluster_ip cluster-hostname.cluster-domain cluster-hostname -You might also need to set up a routing table, which is a whole different subject. +Or you can supply a -p option to the setpe script to point to the openmosix id of the node it should recognise as "self". For example, if the master node's internal cluster name is "cnode1", you might have an openmosix.map file like + + +1 cnode1 1 +2 cnode2 1 + + +in which case the setpe script needs to be invoked with "setpe -p1 ...". +In the same case the /etc/hosts file might read + +127.0.0.1 localhost +123.456.7.89 usual.name.domain nickname +192.168.0.1 cnode1 +192.168.0.2 cnode2 + + + Then + +setpe -p1 -f /etc/openmosix.map +will give you what you want. You may wish to edit the openmosix init +script to do this properly. + + +You might also need to set up a routing table, which is a whole different subject. Maybe you used different kernel-parameters on each machine? Especially if @@ -318,6 +339,7 @@ openmosix, but a fix has been committed. DFSA ? MFS ? +As of 2.4.26-om1, oMFS has been removed from openMosix. People often get confused about what exactly MFS and DFSA are.
As discussed before in the howto, MFS is the feature of openMosix diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Testing.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Testing.sgml index 7cd4b798..328e084d 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Testing.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Testing.sgml @@ -298,12 +298,13 @@ The 'forkit' test is similar to the 'eatmem' test but uses fork to create multiple processes (3*[processors_in_your_openMosix_cluster]) except that it does not write to files. + kernel syscall test: diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Tuning.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Tuning.sgml index 6810936d..447861fd 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_Tuning.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_Tuning.sgml @@ -167,6 +167,7 @@ standard networks via a router or bridge that supports channel bonding( I just u + openMosix and FireWire diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosix_What_Is.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosix_What_Is.sgml index 7a925c86..71e7884b 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosix_What_Is.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosix_What_Is.sgml @@ -412,6 +412,10 @@ openMosix takes care of the communication between these 2 processes. The openMosix File System (oMFS) +Please note that oMFS has been removed from the openMosix patch as of 2.4.26-om1. + + + oMFS is a feature of openMosix which allows you to access remote filesystems in a cluster as if they were locally mounted.
The filesystems of your other nodes can be mounted on /mfs and you will, diff --git a/LDP/howto/docbook/openMosix-HOWTO/openMosixview.sgml b/LDP/howto/docbook/openMosix-HOWTO/openMosixview.sgml index f61ee6bb..e7333d59 100644 --- a/LDP/howto/docbook/openMosix-HOWTO/openMosixview.sgml +++ b/LDP/howto/docbook/openMosix-HOWTO/openMosixview.sgml @@ -280,10 +280,10 @@ The functionality is explained in the following. - + - + @@ -349,10 +349,10 @@ a "cluster-node"-button is clicked. - + - + @@ -406,10 +406,10 @@ cluster the "advanced execution"-dialog may help you. - + - + @@ -495,10 +495,10 @@ This process-box is really useful for managing the processes running on your clu - + - + @@ -536,10 +536,10 @@ from the process box is clicked. - + - + @@ -577,10 +577,10 @@ beneath the process-box is clicked - + - + @@ -689,10 +689,10 @@ the load-overview - + - + @@ -733,10 +733,10 @@ and some more static and dynamic information about the specific node or the statistical information about a cluster-node - + - + @@ -755,10 +755,10 @@ much easier. the memory-overview - + - + @@ -799,10 +799,10 @@ they are displayed as a vertical blue line. openMosixhistory - + - + @@ -845,10 +845,10 @@ The start time is displayed on the top/left and you have a 12 hour view in openM - + - +
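The openmosix.map entries added in the openMosix_Problems.sgml hunk above follow a simple three-field format: node number, host name (or IP), and the size of the consecutive node range starting at that number. As a quick illustration (not part of the patch; the parser and field interpretation are an editorial sketch based on the map shown above), the format can be read like this:

```python
# Hypothetical sketch: parse the simple openmosix.map format shown
# in the patch above. Each non-comment line holds three fields:
# node number, host, and the size of the consecutive node range.

def parse_map(text):
    """Return a list of (node_number, host, range_size) tuples."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        node, host, size = line.split()[:3]
        entries.append((int(node), host, int(size)))
    return entries

# The two-node map from the patch:
sample = """\
1 cnode1 1
2 cnode2 1
"""
print(parse_map(sample))  # [(1, 'cnode1', 1), (2, 'cnode2', 1)]
```

With a map like this, `setpe -p1 -f /etc/openmosix.map` (as the patch suggests) tells the node that entry 1, cnode1, is itself.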