<!--startcut ======================================================= -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<META NAME="generator" CONTENT="lgazmail v1.3E.b">
<TITLE>The Linux Gazette 56: The Answer Gang (TWDT)</TITLE></HEAD><BODY BGCOLOR="#FFFFFF" TEXT="#000000"
LINK="#3366FF" VLINK="#A000A0">
<!-- ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -->
<CENTER>
<!-- *** BEGIN navbar *** -->
<IMG ALT="" SRC="../gx/navbar/left.jpg" WIDTH="14" HEIGHT="45" BORDER="0" ALIGN="bottom"><A HREF="lg_mail56.html"><IMG ALT="[ Prev ]" SRC="../gx/navbar/prev.jpg" WIDTH="16" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="index.html"><IMG ALT="[ Table of Contents ]" SRC="../gx/navbar/toc.jpg" WIDTH="220" HEIGHT="45" BORDER="0" ALIGN="bottom" ></A><A HREF="../index.html"><IMG ALT="[ Front Page ]" SRC="../gx/navbar/frontpage.jpg" WIDTH="137" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="../faq/index.html"><IMG ALT="[ FAQ ]" SRC="./../gx/navbar/faq.jpg"WIDTH="62" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="lg_tips56.html"><IMG ALT="[ Next ]" SRC="../gx/navbar/next.jpg" WIDTH="15" HEIGHT="45" BORDER="0" ALIGN="bottom" ></A><IMG ALT="" SRC="../gx/navbar/right.jpg" WIDTH="15" HEIGHT="45" ALIGN="bottom">
<!-- *** END navbar *** -->
</CENTER>
</p>
<P> <hr> <P>
<!-- ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -->
<center>
<H1><A NAME="answer">
<img src="../gx/dennis/qbubble.gif" alt="(?)"
border="0" align="middle">
<font color="#B03060">The Answer Gang</font>
<img src="../gx/dennis/bbubble.gif" alt="(!)"
border="0" align="middle">
<img src="../gx/dennis/bbubble.gif" alt="(!)"
border="0" align="middle">
<img src="../gx/dennis/bbubble.gif" alt="(!)"
border="0" align="middle">
</A></H1>
<BR>
<H4>By James T. Dennis,
Ben Okopnik, Michael "Alex" Williams, the staff of the <i>Linux
Gazette</i>, and you!</h4>
<H5>
Send submissions of technical questions about Linux to:
<a href="mailto:linux-questions-only@ssc.com">linux-questions-only@ssc.com</a>
</H5>
</center>
<p><hr><p>
<!-- endcut ======================================================= -->
<H3>Contents:</H3>
<dl>
<dt><a href="#tag/greeting"
><strong>&para;: Greetings From Heather Stern</strong></A></dl>
<DL>
<!-- index_text begins -->
<dt><A HREF="#tag/0"
><img src="../gx/dennis/bbub.gif" height="28" width="50"
alt="(!)" border="0"
><strong>Danish Translated: Overclocking.</strong></a>
<dt><A HREF="#tag/1"
><img src="../gx/dennis/bbub.gif" height="28" width="50"
alt="(!)" border="0"
><strong>Regarding #36: Plug and Pray Problems.</strong></a>
<dt><A HREF="#tag/2"
><img src="../gx/dennis/bbub.gif" height="28" width="50"
alt="(!)" border="0"
><strong>Regarding #55: "Simple Shell and Cron Question"</strong></a>
<dt><A HREF="#tag/3"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Comparing files locally to those on an FTP server</strong></a>
<dt><A HREF="#tag/4"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>linux using nt server data --or--
<dd><A HREF="#tag/4"
><strong>Accessing an NT Fileserver</strong></a>
<dt><A HREF="#tag/5"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>booting larger than 8.4gb --or--
<dd><A HREF="#tag/5"
><strong>FIPS</strong></a>
<dt><A HREF="#tag/6"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>LI boot problems --or--
<dd><A HREF="#tag/6"
><strong>Removing Linux Partitions</strong></a>
<dt><A HREF="#tag/7"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>dumping filesystems --or--
<dd><A HREF="#tag/7"
><strong>Looking for a 'dump'</strong></a>
<dt><A HREF="#tag/8"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>Ever ran into this? --or--
<dd><A HREF="#tag/8"
><strong>MMDF Anti-Relaying?</strong></a>
<dt><A HREF="#tag/9"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>Creating an .ios file --or--
<dd><A HREF="#tag/9"
><strong>Making CDs</strong></a>
<dt><A HREF="#tag/10"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>HELP --or--
<dd><A HREF="#tag/9"
><strong>Modem setup</strong></a>
<dt><A HREF="#tag/11"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Binfmt/Exec Format Errors in <TT>/linuxrc</TT> on initrd</strong></a>
<dt><A HREF="#tag/12"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>Linux Modem Problems.... --or--
<dd><A HREF="#tag/12"
><strong>Mandrake and the Missing Modem</strong></a>
<dt><A HREF="#tag/13"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>Linux, Laptops, and Cooling Fans --or--
<dd><A HREF="#tag/13"
><strong>Making the Laptop's Fan Run</strong></a>
<dt><A HREF="#tag/14"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>MX Records and Precedence Values</strong></a>
<dt><A HREF="#tag/15"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>unable to open a initial console --or--
<dd><A HREF="#tag/15"
><strong>Re: unable to open a initial console</strong></a>
<br>Also: A Short Guide on How to do Backups and Recovery
<dt><A HREF="#tag/16"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>RE: uninstall</strong></a>
<dt><A HREF="#tag/17"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Basic Fascist SysAdmin's Laundry List</strong></a>
<dt><A HREF="#tag/18"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>More on TCP Wrappers and telnet Connection Delays</strong></a>
<dt><A HREF="#tag/19"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>connecting red hat workstation to nt server --or--
<dd><A HREF="#tag/19"
><strong>Linux in a Windows NT Domain (under a PDC)</strong></a>
<dt><A HREF="#tag/20"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>windows telnet/linux --or--
<dd><A HREF="#tag/20"
><strong>automating windows telnet to linux</strong></a>
<dt><A HREF="#tag/21"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Telnet Clients for Windows and Linux</strong></a>
<dt><A HREF="#tag/22"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Port 80 Telnet</strong></a>
<dt><A HREF="#tag/23"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>Telnet to linux box from NT workstation in NT LAN --or--
<dd><A HREF="#tag/23"
><strong>Connection Refused</strong></a>
<dt><A HREF="#tag/24"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
><strong>Loadlin trouble</strong></a>
<dt><A HREF="#tag/25"
><img src="../gx/dennis/qbub.gif" height="28" width="50"
alt="(?)" border="0"
></a>linux mail server to an MS Exchange? --or--
<dd><A HREF="#tag/25"
><strong>Linux vs. MS Exchange for Mail Server</strong></a>
<!-- index_text ends -->
</DL>
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/greeting"><HR WIDTH="75%" ALIGN="center"></A>
<H3 align="left"><img src="../gx/dennis/hbubble.gif"
height="50" width="60" alt="(&para;) " border="0"
>Greetings from Heather Stern</H3>
<!-- begin hgreeting -->
<p>
Hi everybody! Wow, it was a lot more work than I expected... but then,
I also handled the Mailbag and Tips this month. It's all part of our
new team effort to <i>make Linux a little more fun!</i>
<p>
We got a fair handful of questions about statistics, none of which got
answered. I'm the statistician among us, and I've been hacking web pages
and perl scripts all month. I didn't even manage time to whip up a cool
new logo for The Answer Gang yet.
<p>
But I'll say this, and you can all percolate on what you think of it:
statistics developed by someone else aren't terribly useful to you -
the situation they studied will be different, every difference is a statistical
skew, and it doesn't take much variance to make it not only <EM>not</EM>
useful, but actually a waste of time and effort.
<p>
As contrasted with benchmarking done in-house, in your own controlled
environment... where you know that the situation being tested is something
you really can apply and show to your boss. But you have to have a
"control" - at least one case that is not part of the experiment, but
allowed to run "naturally", whatever that means. The larger the sample,
the less likely you are to have a big bad skew, like an observer's
opinion swaying their observations, or hardware problems corrupting a
software test, or something like that.
<p>
By the way, the Benchmarking HOWTO over at the LDP homepage may be dusty,
but it's actually still very readable. I recommend that people who care
about serious comparison of systems, distributions, and OSes check it out,
and apply its methodology when making their comparisons.
<p>
The smaller the sample the sillier it is. If we used the methodology of
"letters that came to <i>LG</i> this month" why, MS Windows is still popular,
but Linux outsells it by at least 4 to 1 (dual boots and crossover issues
counted in favor of Windows), maybe more ... and there were almost as many
people who submitted questions that did not involve Linux. (pant stains?
car CD players? Where do these people come from?) Oh yeah, and there's
my final note. Look out for subjectifying words like "almost", "nearly",
"overwhelming" and other such vague quantifiers. If they aren't numbers,
they're not useful. If they are numbers, they're only as useful as the
correlation between how they were gotten, and your particular real life
use for them.
<p>
9 out of 10 of my donuts are gone, with a 60% chance of the rest disappearing
within the next 15 minutes. See you next month!
<P>
<!-- end hgreeting -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/0"><HR></A>
<!-- begin 0 -->
<H3 align="left"><img src="../gx/dennis/bbubble.gif"
height="50" width="60" alt="(!) " border="0"
>Danish Translated: Overclocking.</H3>
<p align="right">Translator: Aron Felix Gurski</p>
<p>Well, we got about a dozen people who came forward with our
solution. Not that we here at the <i>Gazette</i> have any better
answer for the original querent. So, if you know some useful sites
that Linux folk might enjoy for overclocking and other hardware hackery,
submit them to <a href="mailto:linux-questions-only@ssc.com"
>linux-questions-only@ssc.com</a> and they will be published next month to
finish this thread.</p>
<p>
And a big hand for Aron, who sent in a very early reply
that also helped me learn something, plus an offer of future help!
</p>
<blockquote><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>Hi!
</blockquote>
<blockquote>
I just began looking over the July issue and found that you needed some help in
translating a question from Danish. Please do not call the user "hilsen kaspar"; "hilsen" is just a friendly way of ending a letter (literally it means
"greetings") -- the user's name is Kaspar, a male first name. Kaspar really
*does* repeat himself at the end of the message. (He also has made not a few
typos...)
</blockquote>
<blockquote>
Good luck at answering him. (For future reference, I can translate Danish,
Norwegian and Swedish for you [<em>email address elided</em>])
</blockquote>
<STRONG>
<p><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Dav jeg syntes at det er en gode side du har med en masse gode brugbare råd.
Men det er ikke det jeg vil, jeg har et problem som du måske kan hjælpe mig med.
Jeg har en 450 mhz p3 cpu som jeg gerne vil have overclocket. Jeg har et asus
bundkort model: p2b/f1440bx agp atx. Jeg ved ikke om at jeg skal have noget
extra køling på når det kun er til 500 mhz da mit bundkort ikke kan tage mere.
En anden ting er at jeg ikke ved hvordan jeg gør, så jeg håber at du vil hjælpe mig. Jeg håber at du vil hjælpe mig med mine spørgsmål.
</p>
<p>hilsen kasper</p>
</strong>
<p>
Dav [Hi], I think that you have a good page with a lot of good, useful advice.
But that's not what I want, I have a problem with which you may be able to help.
I have a 450 MHz P3 CPU that I would like to overclock. I have an ASUS
P2B/F1440BX AGP ATX motherboard. I don't know if I need extra cooling for 500
MHz (my motherboard cannot go any higher). Another thing is that I don't know
what to do, so I hope that you will help me. I hope that you will help me with
my questions.
</p>
<p>
Best wishes,
<br>Kaspar
</p>
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/1"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 1 -->
<H3 align="left"><img src="../gx/dennis/bbubble.gif"
height="50" width="60" alt="(!) " border="0"
>Regarding #36: Plug and Pray Problems.</H3>
<p align="right">AnswerGang: RazorBuzz, Jim Dennis</p>
<p><strong>From RazorBuzz on Fri, 07 Jul 2000
</strong></p>
<BLOCKQUOTE>
Here's a comment on a question from a while back. I don't remember
that question (but it was about a year and a half ago). I see that
this was the same month that I wrote a 26-page guide to "routing and
subnetting" and answered about a hundred other questions. No
wonder some of them weren't complete!
</BLOCKQUOTE>
<strong>
<p><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
<p>
Answer Dude,
</p>
<p>
The problem in your response to Tony Grant about Plug and Play boards (#36)
can be overcome in Linux itself. You can manually set rc.S to run a config
for IRQ 5 (which, if memory serves, is Com3). If you add this line:
</p>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
setserial /dev/ttyS3 uart 16550A port 0x2e8 irq 10
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<p>
to the <TT>/etc/rc.d/rc.S</TT> file it'll be run on every boot (duh) and correct the
problem. Of course the IRQ and IO need to be changed. The chipset of
16550A is pretty much standard and most likely won't need to be changed... but if
it does, you can always grab it easily. All that command line does is
force the box to accept the com port and recognize that it can in fact be
used. Damned defaults tend to only recognize Com1-Com3... Hopefully the next
RH, <A HREF="http://www.caldera.com/">Caldera</A> OL, or <A HREF="http://www.debian.org/">Debian</A> should have that fix (since <A HREF="http://www.slackware.org/">Slackware</A> is
just... well... lacking... nobody has hopes for that to ever get itself in
gear.)
</p>
<p>
- <TT>-=Razor=-</TT>
<br>- <TT>-=Buzz=-</TT>
</p>
</strong>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
    HEIGHT="28" WIDTH="50" BORDER="0"
    >
Then again, looking at Tony's original question
(<A HREF="http://www.linuxgazette.com/issue36/45.html"
>http://www.linuxgazette.com/issue36/45.html</A>) I see that
it wasn't clear that setserial would be the right tool for the
job. It was a question about a conflict between an ISDN TA
(terminal adapter) and an Ethernet card. I have no idea how
the setserial command would change the IRQ on the actual device.
As far as I know all it does is configure the kernel's serial
driver <TT>---</TT> to inform it of what IRQ the hardware is using.
</BLOCKQUOTE>
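<BLOCKQUOTE>
For readers who want to experiment with this anyway, here is a minimal
sketch of the kind of setserial usage under discussion. The device name,
I/O port and IRQ are placeholders rather than settings taken from Tony's
system, and (as noted above) setserial only tells the kernel's serial
driver about the hardware; it does not reprogram the card itself.
</BLOCKQUOTE>
<blockquote><pre> # Show what the serial driver currently believes about each port:
 setserial -g /dev/ttyS0 /dev/ttyS1 /dev/ttyS2 /dev/ttyS3

 # Tell the driver where an (already configured) card lives.
 # Example values only -- substitute your hardware's real settings:
 setserial /dev/ttyS2 uart 16550A port 0x3e8 irq 5
</pre></blockquote>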
<BLOCKQUOTE>
So I stand by my original answer (in this case).
</BLOCKQUOTE>
<BLOCKQUOTE>
(I understand that the ISDN TA was probably acting like a
modem, and thus probably had a UART of some sort <TT>---</TT> probably
a 16550A since a 16450 or an 82xx series would be WAY too
old and obsolete for any sort of ISDN equipment. I don't
see any evidence in the message that the user had any way to
manually set hardware jumpers to specify non-conflicting IRQs
for these devices).
</BLOCKQUOTE>
<BLOCKQUOTE>
I wonder whatever happened to this correspondent? Have they
long since switched to DSL? Is that old ISDN TA a doorstop
somewhere?
</BLOCKQUOTE>
<!-- end 1 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/2"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 2 -->
<H3 align="left"><img src="../gx/dennis/bbubble.gif"
height="50" width="60" alt="(!) " border="0"
>Regarding #55: "Simple Shell and Cron Question"</H3>
<p align="right">AnswerGang: DUDU, Jim Dennis</p>
<p><strong>From dudu on Fri, 07 Jul 2000
</strong></p>
<!-- ::
More on "Simple Shell and Cron Question"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
You answered in LG55 the following question:
</STRONG></P>
<FONT COLOR="#000066"><EM>
<P><STRONG>
Simple Shell and Cron Question
<br>From Amir Shakib Manesh on Thu, 08 Jun 2000
</STRONG></P>
<P><STRONG>
Dear Answer Guy, I want to write a shell script in which every 15 minutes it runs a simple command, let's say 'top
<TT>-b</TT>'. Would you help me?
</STRONG></P>
<BLOCKQuote>
Well one way would be to make a cron entry like:
</BLOCKQuote>
<BLOCKQuote>
<pre> */15 * * * * top -b
</pre>
</BLOCKQuote>
<BLOCKQuote>
... which you'd do by just issuing the command: '<tt>crontab -e</TT>' from your shell prompt. That should put you in an
editor from which you can type this command.
</BLOCKQuote>
</EM></FONT>
<P><STRONG>
But when the cron job runs, it has its own default environment variables, like PATH.
So shouldn't one include the full path to the top binary in order to run it
properly?
</STRONG></P>
<P><STRONG>
Rgds.
DUDU
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Of course cron runs it in its own environment with its own PATH and
other settings. However, on most Linux systems 'top' is going to
be located in <TT>/usr/bin</TT> --- which really should be in cron's PATH.
</BLOCKQUOTE>
<BLOCKQUOTE>
So I think the example I gave was good enough for the common case
and I think I did go into more detail later in that response.
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course I have a tendency to refer to programs and scripts
by their full path in my configuration files and scripts, but
by shorter names in examples and on the command line.
</BLOCKQUOTE>
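<BLOCKQUOTE>
As a small sketch of both points (the log file name here is just an
example), a crontab edited with '<TT>crontab -e</TT>' can set its own PATH and/or
spell out the full path to the binary, and redirect the output somewhere
useful:
</BLOCKQUOTE>
<blockquote><pre> # m   h  dom mon dow   command
 PATH=/bin:/usr/bin
 */15 *   *   *   *    /usr/bin/top -b -n 1 &gt;&gt; /tmp/top.log 2&gt;&amp;1
</pre></blockquote>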
<!-- sig -->
<!-- end 2 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/3"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 3 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Checksum Script</H3>
<p align="right">AnswerGang: Mike Orr, Jim Dennis</p>
<p><strong>From Mick Faber on Fri, 07 Jul 2000
</strong></p>
<!-- ::
Checksum Script
~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Hi
</STRONG></P>
<P><STRONG>
I have written a script that automatically connects my machine to an FTP
server and downloads a set of files that I need nightly.
The client downloads a file which is my indicator of any changes. In effect,
if this downloaded txt file has changed, then I need to download the other
files.
</STRONG></P>
<P><STRONG>
That part is ok. I can automatically download the check file, so I have two
files (current and new dir) called the same but in different directories.
</STRONG></P>
<P><STRONG>
I have written a script that says
</STRONG></P>
<pre><strong>&gt; Set a=cksum file1
&gt; Set b=cksum file2
&gt; If a=b
&gt; Then ...
&gt; Else ...
</strong></pre>
<P><STRONG>
My problem seems to be that even though the CKSUM results are different when
done manually, in the script they are ALWAYS equal. Is SET the wrong way
to set a variable? Is there another way to do this altogether?
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike]
We need to know what language this script is written in. From the "set"
statement, I'd assume it's csh or tcsh, although what you wrote appears
to violate the rules for (t)csh syntax. (Capital letters, no &quot; around
"chksum file1", etc.)
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, if the language is similar to C, the "a=b" expression should be
"a==b" to test for equality. "a=b" means set a to the value of b.
</BLOCKQUOTE>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
The code fragment you've included doesn't specify what scripting
language you're using. It isn't a valid fragment of bash, PERL, or
even csh. For one thing, the common UNIX scripting languages
are case sensitive. Thus your capitalization of "If" and "Then"
are enough to cause this fragment to fail under most interpreters.
</BLOCKQUOTE>
<BLOCKQUOTE>
Other than that, there isn't enough context or code here to guess what
scripting language you're trying to use. However, the 'set'
command isn't used in most Linux scripting languages (at least not
for "setting values to variables"). csh, TCL (and
'expect', a TCL derivative) and the MS-DOS batch language use the
"set" command for variable assignments.
</BLOCKQUOTE>
<BLOCKQUOTE>
This leads me to suspect that your code sample is in "MS-DOS
batch" or some sort of pseudo-syntax.
</BLOCKQUOTE>
<BLOCKQUOTE>
To do this with bash (or Korn shell or any similar interpreter)
you'd use something like:
</BLOCKQUOTE>
<blockquote><pre> #!/bin/sh
a=$(cksum $1)
b=$(cksum $2)
if [ "$a" = "$b" ] ; then
...
else
...
fi
</pre></blockquote>
<BLOCKQUOTE>
...assuming that you were calling the script with two parameters,
the names of the two files. Note: the $( ... ) expressions are
the key here. They "capture" the output from the enclosed
command(s) and substitute those results into the expression in
which the $(...) expressions appear. This is called
"command substitution" (traditionally rendered as `...`
using backticks). This "command substitution" feature is
one of the shell's most powerful and useful scripting mechanisms
and it allows us to seamlessly assign the output from any normal
command (internal, or external) to shell variables.
</BLOCKQUOTE>
<BLOCKQUOTE>
(Note: Some very old Bourne shells might not recognize the
$(...) form and thus may require the backtick form. However,
all UNIX shells should be able to do command substitution.
I've never heard of one that didn't. csh/tcsh also requires
the backticks, and can't use the more legible $(...) form).
</BLOCKQUOTE>
<BLOCKQUOTE>
Actually this is an oversimplification. The GNU 'cksum' command
prints output of the form:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote><code>
2839321845 1516 /path/file.name
</code></BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
Obviously if I take the output of two of these commands, with
DIFFERENT FILENAMES the full text of each output will be different
even if the checksums are the same. I need to extract just the
checksums, or at least filter out the differences in the filenames.
</BLOCKQUOTE>
<BLOCKQUOTE>
My first thought was that the cksum command might have some
switches or options to suppress the extraneous output. It seems
like the need to get <EM>just</EM> the numeric checksum value would be
pretty common. However, it appears that the FSF maintainer for
this utility doesn't agree with me. So we have to isolate it
ourselves. That's only a minor nuisance (taking far less time
for me to do than to explain).
</BLOCKQUOTE>
<BLOCKQUOTE>
There are a couple of ways I can do that. Here's the first that
comes to mind. Just insert the following at the top of the script.
</BLOCKQUOTE>
<blockquote><pre> function cksum () {
command cksum $1 | {
read a b x
echo $a $b
}
}
</pre></blockquote>
<BLOCKQUOTE>
This creates a local shell function which overrides the output of
the external cksum command. The "command" command forces the shell
to execute the external command (bypassing shell functions and aliases
<TT>---</TT> and preventing a recursion loop).
</BLOCKQUOTE>
<BLOCKQUOTE>
All I do here is pipe the output into a command that reads the
first and second fields (the part I want to keep). I read the rest
of the output into a "throwaway" variable (which I expediently call
"x"). Then I just echo out the two pits of info I cared about (the
checksum and the size) leaving off the "rest." This trick of using
the read command to filter out fields that I want from lines of
input is pretty handy. It's a reasonable advantage over using the
external 'cut' command because read and echo are internal commands.
Also 'cut' defaults to using tabs as delimiters while I usually
want to "cut" on <EM>any</EM> whitespace (any number of tabs or spaces).
</BLOCKQUOTE>
<BLOCKQUOTE>
The advantage of writing this little shell function into our
script is that I can leave the rest of the script alone. I don't
have to re-write it. Of course it's better to avoid the name
collision. I could name my function "checksum" (and avoid having
to use the "command" command). Even if I do rename the shell
function I can leave my "command" command as is. It doesn't
hurt anything.
</BLOCKQUOTE>
<BLOCKQUOTE>
Naturally I could have also just piped the output of each of
these cksum command through cut like so:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
a=$(cksum $1 | cut -d" " -f 1-2 )
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
... which works fine. Of course it is a little less
maintainable. Even though I'm only calling this expression
twice <TT>---</TT> it's still better to consolidate it into a
shell function so it really works the same way in both
invocations. Otherwise a slight difference to one of the
invocations could silently cause the later comparison to
always and erroneously fail.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that we don't have to use "if... then ... else .... fi"
in most shell scripts. We can shorten this script to:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
[ "$(checksum $1)" = "$(checksum $2)" ] &amp;&amp; .... || ....
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
(assuming I made my checksum shell function as before).
</BLOCKQUOTE>
<BLOCKQUOTE>
... where the command after the &amp;&amp; is the same as you'd put after
the "then" token in the earlier script. The command after the ||
operator is similar to the "else" block, but it would be executed if
the checksums didn't match <EM>or</EM> if the command in the &amp;&amp;
clause returned a non-zero value (an error). This is frequently
what you actually want in shell programming; though the differences
can be subtle and important.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note: the &amp;&amp; and || operators take a single command. If you want
to perform a block of commands under those conditionals you'll want
to use command grouping or possibly a subshell <TT>---</TT> using the
{...} (braces/grouping) or (...) (subshell) syntax.
</BLOCKQUOTE>
<BLOCKQUOTE>
One "gotchya" that crops up in bash 2.x when using "grouping" is
this:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote><code>
{ foo; bar }
</code></BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
... was accepted under bash 1.x and is an error under bash 2.x
<TT>---</TT> it's because the closing brace is being taken as an argument
to the bar command. This is technically correct for the parser
(it was a bug in bash 1.x that allowed the command to work).
</BLOCKQUOTE>
<BLOCKQUOTE>
So, good shell scripting requires that we use this syntax:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote><code>
{ foo; bar; }
</code></BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
(or simply put the braces, particularly the closing brace, after
a line end, perhaps on its own line).
</BLOCKQUOTE>
<BLOCKQUOTE>
That's basic shell scripting.
</BLOCKQUOTE></strong>
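<BLOCKQUOTE>
Putting those last few pieces together, here's a small sketch using the
"checksum" shell function suggested above; the echo commands are just
stand-ins for whatever you'd really do in each branch. Note the
semicolon before each closing brace, as just described:
</BLOCKQUOTE>
<blockquote><pre> [ "$(checksum $1)" = "$(checksum $2)" ] \
     &amp;&amp; { echo "files match";  } \
     || { echo "files differ"; }
</pre></blockquote>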
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Any assistance appreciated. Email preferred, but will keep checking this
here to check for any legendary solutions...
</STRONG></P>
<P><STRONG>
Mick
</STRONG></P>
<BLOCKQUOTE><strong><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
I don't know that my answers are "legendary" but I hope they
help anyway.
</strong></BLOCKQUOTE>
<blockquote><em><p>[ Maybe most aren't but some are.
The length of this particular thread
is about to rival some of your own longer missives, but I
think it will still be shorter than your legendary
"Routing and Subnetting 101" (issue 36, plus it had a
followup. Some people are teaching classes based on it.
Rah Rah Rah, Go LDP!) Of course it's an unfair comparison;
there are two of you ganging up on the question this time
so your relative portion is even shorter. --Heather
]</p></em></blockquote>
<BLOCKQUOTE><strong>
BTW: When posting questions about scripting <TT>---</TT> include a
syntactically complete and semantically relevant portion of the
code. Try to keep that under 25 lines. Often the process of
isolating and testing a chunk of code that clearly illustrates the
problem leads you to an understanding and a solution or
workaround.
</strong></BLOCKQUOTE>
<em><p>... he replied ...</p></em>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Thanks so much for the reply, I have written this using VI on Redhat6.1 -
I don't know if that is the answer you need - I'm only a 2 week novice with
Linux and programming of this level for that matter ... Does this answer
your question?
<br>The actual command line I want to use is
</strong><p>
<pre>if cksum /usr/local/c_drive/batm/video/current/pod001.avc = cksum
/usr/local/c_drive/batm/video/new/pod001.avc; then
</pre>
<p><strong>
I also want to verify that the downloads are successful and not corrupted. I
figured CKSUM is the best for that as well - that problem is to get tackled
yet ....
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike]
Vi is the editor you're building the file with. What we need to know
is the program that's running the file. From the "actual command line
below", it looks like a shell script, so I assume it's running under the
default Linux shell, bash. Do you have a "#!" line at the top of the
file? If so, what does it say?
</BLOCKQUOTE>
<BLOCKQUOTE>
The following script works when I try it comparing one file with itself,
then comparing it with a different file.
</BLOCKQUOTE>
<BLOCKQUOTE><pre>
if [ "$(cksum /usr/local/c_drive/batm/video/current/pod001.avc)" = \
"$(cksum /usr/local/c_drive/batm/video/new/pod001.avc)" ] ;then
echo "They're the same."
else
echo "They're different."
fi
</pre></blockquote>
<BLOCKQUOTE>
"if" takes a single command. If the command's exit status is 0, the
"then" part is run. If the command's exit status is non-zero, the
"else" part is run. The brackets "[ ... ]" imply the "test" command,
which runs a test (in this case, a string comparision) and exits 0
if the answer is true.
</BLOCKQUOTE>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Actually the [ .... ] doesn't "imply" the test command. [ is
really a built-in alias for 'test' (and it generally also exists as
a symbolic link to the <TT>/usr/bin/test</TT> command, for those shells
which don't implement it as a built-in).
</BLOCKQUOTE>
<BLOCKQUOTE>
When the command 'test' is called under the name '[' then it
requires the ']' as a delimiter. That's actually a bit silly,
since the shell is still doing its own parsing, and the shell
"knows" when the command ends quite independently of this "]"
marker (which the shell ignores, as it's just another argument
to the '[' command).
</BLOCKQUOTE>
<BLOCKQUOTE>
However, these are just syntactic anomalies. It's really better
for beginning shell scripters to use the 'test' command, so that
they really internalize that it is just a command like any
other Unix command. It is not a "feature of the language" <TT>---</TT> it's
just a command that processes a list of command line arguments and
returns an exit value. (This is just as true of '[', but it's less
obvious to people who've been exposed to other programming
languages.)
</BLOCKQUOTE></strong>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike]
"$( command arg1 arg2 )" returns the output of the specified command--
what it would have printed on the screen. This is different from its
exit status. The double quotes keep the output together even if it
contains spaces; otherwise the output would be misinterpreted.
</BLOCKQUOTE>
<BLOCKQUOTE>
Bash allows either "=" or "==" for string comparisons. Another
operator "<TT>-eq</TT>" does numeric comparisons, but we don't want that here
since "cksum" returns more than just a simple number. Some other
languages would require "==" instead of "=", as I said yesterday,
but bash isn't one of them.
</BLOCKQUOTE>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Although bash allows this, the external 'test' command requires
that we use the = and will give an error if we try to use ==
</BLOCKQUOTE>
<BLOCKQUOTE>
So, depending on bash's permissiveness is less portable.
</BLOCKQUOTE>
<BLOCKQUOTE>
Incidentally, another approach we could have used (given the
original problem) is to do something like:
</BLOCKQUOTE>
<BLOCKQUOTE><pre>
pushd $(dirname $1)
a=$(cksum $(basename $1 ))
cd $(dirname $2)
b=$(cksum $(basename $2 ))
popd
....
</pre></BLOCKQUOTE>
<BLOCKQUOTE>
... this relies on the fact that the files being compared have the
same names but reside in different directories. However, it seems
really bad to impose that constraint on our shell script even
though this particular application/situation allows it. It would
make the resulting script useless for most other situations.
However, the approach I recommended (filtering out the filename
with a read/echo pair or a 'cut' command) gives us a more general
script that we can re-use for similar purposes.
</BLOCKQUOTE>
<BLOCKQUOTE>
This example does show the use of the very handy 'basename' and
'dirname' commands. It also shows that the $(...) form of
command substitution can be nested (which overcomes a limitation
of the older `...` backtick form).
</BLOCKQUOTE></strong>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike]
Please cc: <A HREF="mailto:linux-questions-only@ssc.com"
>linux-questions-only@ssc.com</A> on subsequent e-mails about this issue. This
is a mailing list which is used to build the Answer Gang/Answer Guy
column in <i>Linux Gazette</i>, and several people who may be able to
help read it.
</BLOCKQUOTE>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Once you have local copies of the files, why not just use the
'cmp' command? The cksum command is already going to read the
whole file, and you've already burned up the bandwidth (transferring
the whole files to the local machine).
</BLOCKQUOTE>
<BLOCKQUOTE>
So what's wrong with:
</BLOCKQUOTE>
<blockquote><pre> if cmp -s /old/path/file1 /new/path/file1
then
...
else
...
fi
</pre></blockquote>
<BLOCKQUOTE>
That seems quite a bit simpler.
</BLOCKQUOTE>
<BLOCKQUOTE>
Also, let's assume that you have two directories. A script
to compare corresponding files in them would look something like:
</BLOCKQUOTE>
<blockquote><pre> for i in $1/*; do
    cmp -s $i $2/$(basename $i) \
       &amp;&amp; echo "$i: O.K." \
       || echo "$i: Ooops: corrupt file"   # ... or whatever recovery you need
 done
</pre></blockquote>
<BLOCKQUOTE>
(This assumes that you're calling it with just two
parameters, the names of the old and new directories).
</BLOCKQUOTE>
<BLOCKQUOTE>
Alternatively you can have a script take a directory name
(the "new" directory for argument's sake) and a list of
files as probably provided by a "wildcard" (globbing)
pattern.
</BLOCKQUOTE>
<BLOCKQUOTE>
That would look something like:
</BLOCKQUOTE>
<blockquote><pre> d=$1
[ -d "$d" ] || exit 1
shift
for i; do
if cmp $i $d/$( basename $i )
then
....
else
....
fi
done
</pre></blockquote>
<BLOCKQUOTE>
... Here again I'm using the basename command. I could also use
the "parameter substitution" feature of the shell instead of
basename: ${i##*/} However, I find that form to be almost
unreadable. If performance were an issue I might hide the
${1##*/} in a shell function that I'd name "basename" (and I'd toss
in ${1%/*} as "dirname"). That would be a bit quicker for large
directories since basename and dirname are external commands. So
using them entails quite a bit of<TT> fork()</TT>'ing and<TT> exec()</TT>'ing.
Naturally the ${...} parameter substitution features are always
internal if they are supported at all.
</BLOCKQUOTE></strong>
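<BLOCKQUOTE>
A minimal sketch of that last idea: shell functions that shadow the
external commands with parameter substitution. (This simplified
"dirname" doesn't handle a bare filename with no "/" the way the real
command does, which would print "." instead.)
</BLOCKQUOTE>
<blockquote><pre> basename () { echo "${1##*/}" ; }
 dirname  () { echo "${1%/*}" ; }

 # e.g.:  basename /usr/local/c_drive/batm/video/new/pod001.avc
 # prints: pod001.avc
</pre></blockquote>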
<em><p>... he replied ...</p></em>
<P><strong><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Hi, I am using the default program bash (have also tried sh, as other
information I downloaded had this in it - are they significantly different?)
</STRONG></P>
<P><STRONG>
I ran this command:
</STRONG></P>
<Pre><STRONG>
if [ "$(cksum /usr/local/c_drive/batm/video/current/pod001.avc)" = \
"$(cksum /usr/local/c_drive/batm/video/new/pod001.avc)" ] ;then
echo "They're the same."
else
echo "They're different."
fi
</STRONG></Pre>
<P><STRONG>
and found the following results:
</STRONG></P>
<P><STRONG>
When the file is compared to itself, it works.
When compared to a file of the SAME NAME in another folder, it doesn't work.
It's almost as if the folder is taken into account, but when I run cksum
filename on the two files they give me the same CRC, byte count and file name
as they should. I would expect then that this command should work.
</STRONG></P>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Of course the "folder" (directory name) is part of what's being
compared. The "$(.....)" are expressions that evaluate to text
strings. The contents of those strings are set to the output of
the commands that are included in the parentheses. The [ (test)
command takes a list of arguments and operators. In this case the
arguments are two strings (substitutes by the $(...) expressions)
and the = operator. Note that the "=" sign here is just an
argument to the test command <TT>---</TT> which is also know as the '['
command. The closing ']' is just an argument that the 'test'
command requires when it is called under the '[' name.
</BLOCKQUOTE>
<BLOCKQUOTE>
Now, if you think about it you'll see that the '[' command has no
reasonable way of "knowing" that you only care about the checksum
values of the two strings. It was given a couple of strings and an
argument (the "=" sign). So it (the test command) will return a
value (exit code, errorlevel) based on whether the two strings are
identical.
</BLOCKQUOTE></strong>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
I am interested only in the CRC value <TT>-</TT> perhaps we could use the <TT>-eq</TT> if we
can only extract the CRC value as a result instead of the other info CKSUM
gives us....?
</STRONG></P>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
I don't recommend that. The 'test' command will probably emit an
error about the format of the operands to the <TT>-eq</TT> option/operator.
</BLOCKQUOTE></strong>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Feeling so close now.... Thanks again for your patience....
</STRONG></P>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
See my long response of a few minutes ago. The answer is simple,
we include a bit in the $(....) expressions that filters out the
irrelevant text. I do this by over-riding the cksum (external)
command with my own shell function, but the concept is the same.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note: I dove into that message and my earlier response before
seeing that others had tried to help you with your question.
</BLOCKQUOTE></strong>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Regards,
Mick Faber
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike]
<pre>
~% cksum ksc.txt /tmp/ksc.txt
3082533539 2180 ksc.txt
3082533539 2180 /tmp/ksc.txt
</pre></BLOCKQUOTE>
<BLOCKQUOTE>
It looks like the difference is only in the path and not in the checksum.
I tried it both with the two filenames being hard links to the same file,
and with them being copies of each other.
To get the checksum only, run:
</BLOCKQUOTE>
<BLOCKQUOTE><pre>
~% cksum ksc.txt |cut -f 1 -d ' '
3082533539
</pre></BLOCKQUOTE>
<BLOCKQUOTE>
Or to be verbose:
<br><tt>cksum ksc.txt | cut --fields=1 --delimiter=' '
<br>3082533539
</tt></BLOCKQUOTE>
<BLOCKQUOTE>
Here's a script:
</BLOCKQUOTE>
<BLOCKQUOTE><pre>
---------cut here----------
#! /bin/bash
FILE1=that
FILE2=/tmp/that
cksum $FILE1 $FILE2
if [ "$(cksum $FILE1 | cut -f 1 -d ' ')" -eq \
"$(cksum $FILE2 | cut -f 1 -d ' ')" ] ;then
echo "They're the same."
else
echo "They're different."
fi
---------cut here----------
</pre></BLOCKQUOTE>
<BLOCKQUOTE><pre>
$ /tmp/checkit
3558380555 93104 that
3558380555 93104 /tmp/that
They're the same.
</pre></BLOCKQUOTE>
<BLOCKQUOTE>
Out of curiosity, what do you think of the difference between cksum and
md5sum?
</BLOCKQUOTE>
<BLOCKQUOTE>
Bash has more features than sh and is larger. Exactly what the differences
are, you'd have to consult the manuals. I use zsh for my interactive shell,
and zsh or bash for scripting.
</BLOCKQUOTE>
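<BLOCKQUOTE>
For what it's worth, a common way to use md5sum for this kind of
download check looks like the sketch below (the filenames are made up;
the checksum list is generated on the source machine and verified after
the transfer):
</BLOCKQUOTE>
<blockquote><pre> # on the source machine:
 md5sum *.avc &gt; MD5SUMS

 # after downloading the files and MD5SUMS:
 md5sum -c MD5SUMS
</pre></blockquote>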
<em><p>... he replied ...</p></em>
<P><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Thanks heaps for your help. I have resolved the issue.
</p>
<p>
FYI: I am using the command "<tt>if cmp -e file1 file2</tt>"
and not using the cksum at all anymore.
</p>
<p>
Thanks again - you guys are lifesavers!!!
</p>
<p>
Mick
</p>
<!-- end 3 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/4"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 4 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Accessing an NT Fileserver</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Stephen Richard Levine on Fri, 07 Jul 2000
</strong></p>
<!-- ::
Accessing an NT Fileserver
~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
I cannot find a reference which would show me how to access data sitting on
an NT server (version 4.0) in multiple directories. I want to use Linux as
the OS, Apache as a web server, but the content all resides on NT as PDFs
in separate subdirectories. Each user has their own NT subdirectory. Any
assistance would be appreciated.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You could use the Linux SMBFS. You'd have to compile support for
that into your kernel and use the 'smbmount' command.
</BLOCKQUOTE>
<BLOCKQUOTE>
SMBFS is similar to Samba (and based on the same free sources and
work). However, it is the client side (Linux accessing SMB
filesystems) rather than the server. (Samba is an SMB server).
</BLOCKQUOTE>
<BLOCKQUOTE>
When you're accessing files via an MS-Win '95 "share" it's using
the SMB (server message block) protocol. Likewise for NT, Windows
for Workgroups, the old OS/2 Lan Manager, and for printing and some
of the MS Windows "popup" messages. Samba is a free package
written by Andrew Tridgell (and others). It runs on most forms of
UNIX, where it allows any UNIX or Linux system to emulate an NT
server. This allows all those MS Win '9x and NT workstation
clients to access files on Linux and UNIX systems using their
"native" protocols. No special software has to be installed on the
clients. (That's a big win for two reasons: MS Windows clients
don't offer very robust remote administration facilities, so
installing software on them is expensive and time consuming; and MS
Windows systems are frequently plagued with DLL and other software
conflicts which makes manually installing software on them
difficult, frustrating and time-consuming).
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, you're trying to do the opposite of what Samba offers.
You're trying to use your Linux system as a "client" to your
NT fileserver. Personally I think that this is a backwards way
to do things. I'd suggest installing Samba on the Linux system
(along with <A HREF="http://www.apache.org/">Apache</A> and any other requisite tools) and let the
clients post their files directly to the Samba shares on the
Linux host. It's possible to configure Samba to listen on
a specific interface and to limit the IP address ranges with
which Samba will interact. Thus you can configure a system
so that only local users can access the Samba shares while
it's still publicly accessible as a web server.
</BLOCKQUOTE>
<BLOCKQUOTE>
(In the "belts <EM>and</EM> suspenders" philosophy it's also possible
to use ipchains to block SMB traffic from even reaching the
public interfaces on your Linux box. And of course you do that
blocking on the host itself <EM>and</EM> on a separate border router).
</BLOCKQUOTE>
<BLOCKQUOTE>
Another approach would be to house primary copies of these
files on the NT server, and write some sort of replication
script that would periodically be executed (task scheduler?)
to create an archive of the user files and push them over
to the Linux box. Probably that would be most easily done
using the 'rsync' command (another UNIX/Linux tool, written
by Andrew Tridgell). You can run many freeware UNIX tools
under Interix (formerly called "OpenNT" by a company
formerly called Softway Systems, now owned by Microsoft) or
under Cygwin32 (Cygnus' package for supporting UNIX
APIs and libraries on Win32 systems).
</BLOCKQUOTE>
<BLOCKQUOTE>
rsync is very efficient (sending only the "diffs" of large
files that have changed, rather than whole copies). It is
the most popular replication tool on Linux these days.
</BLOCKQUOTE>
<BLOCKQUOTE>
However, if you have some other constraint that really
mandates the use of NT for the fileserver, then I suppose
you can use Linux' smbfs. You can read more about it at
the Samba web site (<A HREF="http://www.samba.org/samba/smbfs"
>http://www.samba.org/samba/smbfs</A>).
</BLOCKQUOTE>
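<BLOCKQUOTE>
If you do go the smbfs route, the commands end up looking roughly like
the sketch below. The server, share, mountpoint and user names are
made-up examples, and the exact smbmount syntax has varied between Samba
releases, so check the documentation for your version:
</BLOCKQUOTE>
<blockquote><pre> # mount an NT share onto the Linux/Apache box:
 smbmount //ntserver/userdocs /home/httpd/userdocs -o username=webuser

 # ... or, with kernel smbfs support and the smbfs mount helper installed:
 mount -t smbfs -o username=webuser //ntserver/userdocs /home/httpd/userdocs

 # the replication alternative mentioned above, pushed from the NT box
 # (running rsync under Cygwin or similar) to the Linux host:
 rsync -av /users/docs/ webuser@linuxbox:/home/httpd/userdocs/
</pre></blockquote>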
<!-- sig -->
<em><p>... he replied ...</p></em>
<P><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Many thanks for the assistance and setting me straight on which part of the
client/server I should access.
</p>
<p>
Steve
</p>
<!-- end 4 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/5"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 5 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>FIPS</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From ajshields on Tue, 04 Jul 2000
</strong></p>
<!-- ::
FIPS
~~~~
:: -->
<P><STRONG>
gday
</STRONG></P>
<P><STRONG>
How are you? I am new to Linux and am trying to install it as dual
boot on my new 10GB Seagate disk drive; I have already got Windoze
installed. My BIOS doesn't support a 10GB drive so I downloaded
Seagate's boot manager that allows me to use the drive's full potential.
When I tried to run FIPS it said that the last bit of it has files on it
(it doesn't), and it doesn't want to run any more than that.
</STRONG></P>
<P><STRONG>
Can you help
<br>Andrew
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Did you read the FIPS.DOC file that comes with the FIPS package?
(FIPS is the "First nondestructive Interactive Partition Splitting"
program). It discusses this in the doc file, in the FAQ and in the
ERRORS.TXT file:
</BLOCKQUOTE>
<blockquote><pre>Last cylinder is not free
Since the new partition is created at the end of the old one and
contains at least one cylinder, the partition can not be split if
not at least the very last cylinder is completely free.
Probably there is a hidden file like 'image.idx' or 'mirorsav.fil'
in the last cylinder - see the doc.
</pre></blockquote>
<BLOCKQUOTE>
(That's from ERRORS.TXT). In the doc and in the FAQ it describes
what you should do about this:
</BLOCKQUOTE>
<blockquote><pre>But before starting FIPS you _must_ now defragment your
Harddisk. All of the space that will be used for the new partition
must be free. Be aware that the Windows Swapfile will not be moved
by most defragmentation programs. You must uninstall it (in the
386enhanced part of the Windows Control Panel) and rein- stall it
after using FIPS. If you use IMAGE or MIRROR, the last sector of
the hard disk contains a hidden system file with a pointer to your
mirror files. You _must_ delete this file before using FIPS (it will
be recreated the next time you run mirror). Do 'attrib -r -s -h
image.idx' or 'attrib -r -s -h mirorsav.fil' in the root directory,
then delete the file. If FIPS does not offer as much disk space for
creation of the new partition as you would expect it to have, this
may mean that
a. You still have too much data in the remaining partition. Consider
making the new partition smaller or deleting some of the data.
b. There are hidden files in the space of the new partition that
have not been moved by the defragmentation program. You can find the
hidden files on the disk by typing the command 'dir /a:h /s' (and
'dir /a:s /s' for the system files). Make sure to which program
they belong. If a file is a swap file of some program (e.g. NDOS)
it is possible that it can be safely deleted (and will be
recreated automatically later when the need arises). See your
manual for details.
If the file belongs to some sort of copy protection, you must
uninstall the program to which it belongs and reinstall it after
repartitioning.
I can't give you more aid in this - if you really can't figure
out what to do, contact me directly.
</pre></blockquote>
<BLOCKQUOTE>
Also Arno Schaefer, the author/maintainer of FIPS, suggests that
you create a debugging report with the <TT>-d</TT> switch and that you
include the resulting FIPSINFO.TXT file with any questions that you
mail to him.
</BLOCKQUOTE>
<BLOCKQUOTE>
The other approach would be to backup your data, check your backups
(restore the critical data to another drive, another system, or at
least a different subdirectory) and then do an old-fashioned
re-partition, re-install (of MS Windows) and then do your Linux
installation.
</BLOCKQUOTE>
<BLOCKQUOTE>
I realize that this sounds dull, tedious, time consuming, etc.
However, think of the advantages. First, you'll have a backup!
Also, your new installation of MS Windows may be much cleaner than
the existing one (since their OS seems to gather cruft at a
frightening rate).
</BLOCKQUOTE>
<BLOCKQUOTE>
I've only used FIPS a couple of times (on other people's systems,
at their insistence). I prefer the old-fashioned approach.
Actually I prefer to wipe out the old OS and give Linux the whole
system. Failing that I prefer to add an extra hard disk and use
<TT>LOADLIN.EXE</TT> to run Linux off of that (non-primary) drive. So
repartitioning is third on my list of preferences; and using FIPS
is fourth. That would be followed quite distantly by using
Partition Magic (which I've never tried).
</BLOCKQUOTE>
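<BLOCKQUOTE>
For reference, a <TT>LOADLIN.EXE</TT> invocation of the sort described above is
just a line in a DOS batch file, something like this sketch (the kernel
location and root partition are examples only):
</BLOCKQUOTE>
<blockquote><pre> REM LINUX.BAT -- boot Linux with the root filesystem on the second IDE drive
 C:\LOADLIN\LOADLIN.EXE C:\LOADLIN\VMLINUZ root=/dev/hdb1 ro
</pre></blockquote>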
<BLOCKQUOTE>
Of course I have no idea what files FIPS is complaining about.
It might be some sort of hidden/system driver that was installed
by that Seagate boot manager you mentioned.
</BLOCKQUOTE>
<BLOCKQUOTE>
Incidentally I have no idea if Seagate's boot manager (software
disk driver?) is compatible with LILO. The LILO technical
documentation describes their success in operating with a variety
of partitioning drivers (like Ontrack's Disk Mangler^H^H^Hager, and
Maxtor's (??) EZ-Drive). However, I don't have the time to hunt
down information about Seagate's software (particularly since you
give no details about it <TT>---</TT> not even the name of the package).
</BLOCKQUOTE>
<BLOCKQUOTE>
As I said: my preference is to give Linux a whole hard drive. If
you can get a cheap little 1 or 2 Gb drive that your BIOS <EM>does</EM>
support <TT>---</TT> make that the master, install the MS-Windows "C" drive on
it; and give Linux the other drive (or most of it). Of course you
could also look at upgrading your BIOS, replacing your motherboard
(getting a new BIOS along with that, of course), or installing a
smarter IDE controller (with its own BIOS).
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course you can just try to do the installation. It might
just work with no fuss. However, when novices try to install
Linux, and they include these little constraints (wants dual
boot on a big drive, on a system that doesn't support big
drives, and wants to non-destructively resize and repartition
that drive) they naturally complicate their initial experiences.
</BLOCKQUOTE>
<BLOCKQUOTE>
You're likely to get an unduly dim view of Linux "ease of
installation" by trying an installation with all of these
constraints. (That isn't to say it can't be done just as you
want <TT>---</TT> it's just to point out that the process is often
more complicated than it needs to be).
</BLOCKQUOTE>
<BLOCKQUOTE>
So, consider alternatives as I've suggested. Ultimately
some hardware upgrades might save you enough time to offset
the cost.
</BLOCKQUOTE>
<p><em>... he replied ...</em><p>
<P><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
gday again
</p>
<p>
All that i can say is welll sooooooooorrrrrrrrryyyyyyy
</p>
<p>
it came up with 54h as it can't recognize this operating system
</p>
<!-- end 5 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/6"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 6 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Removing Linux Partitions</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Rajan Karwal on Mon, 03 Jul 2000
</strong></p>
<!-- ::
Removing Linux Partitions
~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
I recently read your comments about LI on a web newsgroup. My
problem is this. I was running Linux on my machine but didn't like
it, so I want to go back to Windows. I deleted the several
partitions that Linux created and formatted the drive. Now all I
get if I start my machine is "LI". (Note: at this point I have
installed MS-DOS on the machine.) The only way I can get to a C:\
prompt is to use a boot disk. Can you shed any light on this?
</STRONG></P>
<P><STRONG>
Thanks for your time
<br>Raj
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Boot from an MS-DOS floppy and run FDISK <TT>/MBR</TT>
</BLOCKQUOTE>
<BLOCKQUOTE>
One component of LILO is a "boot loader" (a bit of code that is
stored on your primary hard drive in the "master boot record" (MBR)
along with your partition table). The LILO boot loader code stores
some additional code beyond the 446 bytes that are available in the
MBR (the other 66 bytes are the primary partition table and a
"signature" that marks the drive as "formatted"). Usually that
additional code is stored on one of your Linux filesystems (<TT>/boot</TT>,
or the <TT>/</TT>, root filesystem, depending on how you've laid out your
systems).
</BLOCKQUOTE>
<BLOCKQUOTE>
When you removed your Linux filesystems, you also removed the
additional boot loader code (the "secondary boot loader").
The reason that the boot process stops at "LI"
is that Werner Almesberger used a clever bit of programming
to fit some diagnostics into those 446 bytes of code. The letters
L, I, L, O are printed at different points of the boot process.
</BLOCKQUOTE>
<BLOCKQUOTE>
So, if the boot loader hangs part way through the process, you
have some idea of how far it got. There are many reasons why a
system might stop at LI and not get to the second L in LILO.
All of them amount to "I couldn't load the second stage boot
loader." (Which makes sense in your case since you DELETED THEM).
</BLOCKQUOTE>
<BLOCKQUOTE>
Note: I've heard of cases where people have removed partitions
and/or kernels and were still able to boot from them. That's
because LILO stores the raw disk addresses of these files (this
refers to the data in a way that is "below" the filesystem
level). Removing things from the partition tables or from
a filesystem marks space as "unallocated" --- but it doesn't
generally actually overwrite or affect the data. It just
changes the way that the space is accounted for and makes it
available to be used by other partitions/files. So it makes
sense that LILO can still be used to boot the system from
an out-of-date mapping, until the data blocks that those
files and partitions occupied are actually used by something else.
</BLOCKQUOTE>
<BLOCKQUOTE>
Running the <TT>/sbin/lilo</TT> command updates those mappings, of
course. The <TT>/sbin/lilo</TT> command is a program that uses
the <TT>/etc/lilo.conf</TT> file to build a set of boot blocks and
maps. I like to think of <TT>/sbin/lilo</TT> as a "compiler" for
the "<TT>/etc/lilo.conf</TT>" program; that makes the boot records
and maps analogous to the "program" and "libraries" that
a compiler generates from your source code. This analogy
makes perfect sense to programmers --- but it seems to
sink in for quite a few non-technical users as well.
</BLOCKQUOTE>
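<BLOCKQUOTE>
(To make the analogy concrete, a typical edit-and-"recompile" cycle looks something like this sketch, assuming a working Linux system with LILO installed; the <TT>-t</TT> "test" and <TT>-v</TT> "verbose" switches are standard LILO options:)
</BLOCKQUOTE>
<blockquote><pre> vi /etc/lilo.conf     # edit the "source"
 /sbin/lilo -t -v      # test run: report what would be written, change nothing
 /sbin/lilo            # really write the boot record and maps
</pre></blockquote>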
<!-- sig -->
<!-- end 6 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/7"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 7 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Looking for a 'dump'</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Gillian Bennett on Sun, 02 Jul 2000
</strong></p>
<!-- ::
Looking for a 'dump'
~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Hi James,
</STRONG></P>
<P><STRONG>
I guess that in all likelihood this is the wrong forum for this question,
but there are so many mailing lists for linux that I wasn't sure which one
to post to. I am reasonalbly new to linux after being an admin for sun, dec
etc for a few years.
</STRONG></P>
<P><STRONG>
I was wondering if there is a tool that will dump filesystems (similar to
ufsdump or some other dump tool from other unix flavours) on RH linux 6.X.
The filesystems are ext2 type filesystems and are currently backed up using
cpio (<EM>SHUDDER</EM>).
</STRONG></P>
<P><STRONG>
I appologise for the inconvenience,
Regards, Gillian
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
What have you got against cpio?
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway there is a Linux 'dump' (and 'restore') package. You should
find it on your installation CD or on any good archive site.
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course its version number is only 0.4b16 or so. In a rational
world that would suggest that the author thinks it is roughly
40% "feature complete" toward version 1.0. However, some programmers
in the Linux world don't like simple, rational versioning schemes
so I have no idea what that version number is supposed to imply.
</BLOCKQUOTE>
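<BLOCKQUOTE>
(For what it's worth, a minimal sketch of using that package, assuming a SCSI tape drive on <TT>/dev/st0</TT>; the invocation is deliberately close to ufsdump's:)
</BLOCKQUOTE>
<blockquote><pre> dump 0uf /dev/st0 /home     # level 0 (full) dump of /home; "u" updates /etc/dumpdates
 restore if /dev/st0         # interactive restore: browse the dump, mark files, extract
</pre></blockquote>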
<!-- sig -->
<!-- end 7 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/8"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 8 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>MMDF Anti-Relaying?</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Jaris Visscher on Thu, 06 Jul 2000
</strong></p>
<!-- ::
MMDF Anti-Relaying?
~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
mars.ncn.net is a Linux server which is having problems emailing us.
We are having trouble with mars.ncn.net emailing us at mtc1.mtcnet.net.
They seem to think it is our MMDF mail server.
</STRONG></P>
<P><STRONG>
We have checked all of their reverse DNS info and it is correct.
They are gettting the error
<code>
<br>Connections reset by mtc1.mtcnet.net
<br>Message could not be delivered for 5 days
<br>Message will be deleted from queue
</code>
</STRONG></P>
<P><STRONG>
This has been going on for 2 months.
Here is the exact message as it comes to our MMDF server in our log file.
/usr/mmdf/log/chan.log
As you will see we get a fetch of mars.ncn.net failed
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
I'm not at all familiar with the MMDF mail transport system. So
I don't know what sort of "fetch" is going on here. However,
it looks like:
</BLOCKQUOTE>
<blockquote><pre> 6/23 10:16:02 smtpsr8272: h2chan ('mars.ncn.net', 1)
6/23 10:16:02 smtpsr8272: h2chan table 'local'
6/23 10:16:02 smtpsr8272: tb_fetch: dbminit
6/23 10:16:02 smtpsr8272: fetch (mars.ncn.net)
6/23 10:16:02 smtpsr8272: fetch of 'mars.ncn.net' failed
6/23 10:16:02 smtpsr8272: h2chan table 'list'
6/23 10:16:02 smtpsr8272: h2chan table 'smtpchn'
6/23 10:16:02 smtpsr8272: ns_fetch (21, mars.ncn.net, 1)
6/23 10:16:02 smtpsr8272: ns_fetch: timeout (0), rep (0), servers (0)
6/23 10:16:02 smtpsr8272: ns: key mars.ncn.net -&gt; 38
6/23 10:16:02 smtpsr8272: ns_getmx(mars.ncn.net, 805db9c, 8068b58, 10)
6/23 10:16:02 smtpsr8272: ns_getmx: sending ns query (30 bytes)
6/23 10:16:02 smtpsr8272: ns_getmx: bad return from res_send, n=-1, errno=114, h_errno=0
6/23 10:16:02 smtpsr8272: nameserver query timed out
</pre></blockquote>
<BLOCKQUOTE>
... you're getting a name resolution failure while looking for MX
records?
</BLOCKQUOTE>
<BLOCKQUOTE>
Does mars.ncn.net have a valid MX record? It doesn't look like it
(from my own 'dig' commands).
</BLOCKQUOTE>
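<BLOCKQUOTE>
(The checks I'm referring to are just ordinary DNS queries; you can repeat them from any box with the BIND utilities installed:)
</BLOCKQUOTE>
<blockquote><pre> dig mars.ncn.net mx     # look for MX (mail exchanger) records
 dig mars.ncn.net a      # compare with the plain A (address) record
</pre></blockquote>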
<BLOCKQUOTE>
It sounds like ncn.net hasn't created MX records for mars.ncn.net. I don't
know whether your MMDF installation has been configured for
anti-relaying. It may be that the anti-relaying (anti-spam)
configuration that you used requires that the sender/relayer
have an MX (mail exchanger) record rather than just an A (address)
record.
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, I'm sure that you know more about MMDF than I do. However,
it occurs to me that it may be best to point you at the
canonical MMDF resources page (<A HREF="http://www.ivine.com/~mmdf"
>http://www.ivine.com/~mmdf</A>) and let
you read through the FAQ (<A HREF="http://www.ivine.com/~mmdf/mmdf.html"
>http://www.ivine.com/~mmdf/mmdf.html</A>)
</BLOCKQUOTE>
<BLOCKQUOTE>
Hopefully that will make more sense to you, since you've configured
some of these programs and channels. There's also a searchable
archive of the mailing list. I saw one message there that seemed to
assert that MMDF won't fall back to A records when MX lookups have
failed (I found it by searching on "MX"). I would expect that to apply to SENDING
mail, which is why I'm wondering if your MMDF is trying to use a
similar mechanism as an anti-spam measure while it's receiving
messages.
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, that should help. Having your postmaster subscribe to
that list and post MMDF questions there will also probably be
much better than posting them to more general fora. MMDF is a
bit of a niche, so you really want to talk to its specialists.
</BLOCKQUOTE>
<!-- sig -->
<!-- end 8 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/9"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 9 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Making CDs</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Henry White on Thu, 29 Jun 2000
</strong></p>
<!-- ::
Making CDs
~~~~~~~~~~
:: -->
<P><STRONG>
Please point me to a place I can read on how to create an .ios file. I
want to make a CD from this file.
</STRONG></P>
<P><STRONG>
Thanks
Henry White
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
My guess is that you mean an ".iso" (as in the International
Organization for Standardization), which is a filename extension commonly used with ISO 9660
(the formal specification for the formatting of data CD-ROMs).
</BLOCKQUOTE>
<BLOCKQUOTE>
Assuming that this is the case you want to get the mkisofs and the
cdwrite and/or the cdrecord utilities. The mkisofs man page will
help a bit. However, you should also look at the CD-Writing HOWTO
at <A HREF="http://www.linuxdoc.org/HOWTO/CD-Writing-HOWTO.html"
>http://www.linuxdoc.org/HOWTO/CD-Writing-HOWTO.html</A>
</BLOCKQUOTE>
<BLOCKQUOTE>
That is quite detailed.
</BLOCKQUOTE>
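<BLOCKQUOTE>
(For the impatient, a minimal sketch; the directory and the dev= numbers here are only examples -- use 'cdrecord -scanbus' to find your own burner's SCSI triple:)
</BLOCKQUOTE>
<blockquote><pre> mkisofs -r -J -o /tmp/backup.iso /home/henry/stuff   # build the ISO 9660 image
 cdrecord -scanbus                                    # find the burner's bus,id,lun numbers
 cdrecord -v speed=2 dev=0,6,0 /tmp/backup.iso        # burn the image
</pre></blockquote>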
<p><em>... he replied ...</em></p>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You are right I was asking about iso.
Thanks for your help. I am on my way now.
</p>
<p>
Henry C. White
</p>
<!-- end 9 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/10"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 10 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>HELP</H3>
<p align="right">AnswerGang: Michael Williams, Heather Stern</p>
<p><strong>From WwSHADOWMASTERwW on Thu, 29 Jun 2000
</strong></p>
<p><strong>
Listen. I just installed RedHat Linux 6.2 and I cannot get my modem to
work. I did the test and modem test on the set up manu and is does detect it
but stays at the initializing Modem prompt.. What do I Do I can t find anyone
who can answer this for me HELP.....I am using the <A HREF="http://www.kde.org/">KDE</A> work station
setup..please tell em Step by Step on how to do this I would appreciate it
very much
<br>PS I am not using Gnome!
</strong></p>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Michael]
Is your modem internal? If it is, then there's a fair chance it's a
'WinModem'. These are modems designed to work within MSWindows. Since they
use drivers written for MSWindows to work, it is very difficult [currently
impossible] to get them working under Linux. If this is the case, then your
best bet is to buy a new external modem. They're reasonably priced, and will
work with all OS's.
</BLOCKQUOTE>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Heather]
While it is very much in the curmudgeonly spirit of the Answer Guy to tell
someone that their "lose"modem is not a big winner, it is no longer quite
accurate to say that they just don't work.
</BLOCKQUOTE>
<BLOCKQUOTE>
PCTel models work, because a different corporate entity is maintaining their
binary driver. How <EM>well</EM> they work, I wouldn't know
<IMG SRC="../gx/dennis/smily.gif" ALT=":)"
height="24" width="20" align="middle"> They aren't the most
common softmodem variety.
</BLOCKQUOTE>
<BLOCKQUOTE>
Lucent "56kFlex" modems work, because they (somewhat quietly) released a
binary driver (it's been updated once, even though the party line is "we
don't have a Linux, some outsiders did that, ask your modem manufacturer, we
just design the controllerless cores". Sure. The drivers have to be modem
specific, that's why Lucent has only one "Windows" driver posted on your i
website. I have to laugh). Their corporate confusion aside, Lucent's have
a fairly fine chance of becoming something much better than a modem as well,
since some folks are working on different aspects of <EM>real</EM> software for it
to be used as a phone line diagnosis tool and sampler. Depending on your
needs for that, it might already be better than a modem ... but it's not
usable <EM>as</EM> a modem that way; the open source software can't do PPP yet.
Whereas the binary driver is flawed as regards unloading, and often requires
shoe-horning into place.
</BLOCKQUOTE>
<BLOCKQUOTE>
We can hope that these binary maintainers are paying attention to roll out new
binaries as the 2.4 kernel ships, because it has a waaaaay different modules
interface.
</BLOCKQUOTE>
<BLOCKQUOTE>
But the other softmodems (Conexant, 3com, some others) are useless hunks of
incomplete hardware in a Linux, or *BSD box. Haven't checked regarding BeOS
or OS/2 but if those don't work either, don't say we didn't warn you. If you
bought or received a removable internal softmodem and it's among those that
don't work, vote with your wallet - send it back!
</BLOCKQUOTE>
<BLOCKQUOTE>
In the end, check out <a href="http://www.linmodems.org/">linmodems.org</a>
for your driver, if it exists. There is
also a link there to someone's big list of software-driven
modems. Expect your softmodem to flake out at high speeds as the CPU load
grows (whether you're under MSwin or Linux won't matter, it will merely affect
how much overall load it will take to flake out). In short: if you are a
serious modem user, you want a serious modem.
</BLOCKQUOTE>
</strong>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Michael]
What distribution are you using? I'm guessing it's
<A HREF="http://www.caldera.com/">Caldera</A>, since that
attempts to set up the modem at installation.
</BLOCKQUOTE>
<blockquote><em><p>[ No, he said RH 6.2, but that's an interesting factoid,
so it stays. --Heather ]</p></em></blockquote>
<BLOCKQUOTE>
You don't actually have to
'install' the modem as you would have to do in Win98. To use a modem,
first find out its comm port. It'll probably be on COM1 or COM2. Under
Linux, these appear as <TT>/dev/cua0</TT> and <TT>/dev/cua1.</TT>
You'll also need to know the
modem's speed. If it's a new 56k modem, a port speed of 57600 bits per
second is a reasonable setting. Now, to
use this go to kppp under the Internet selection of the
<A HREF="http://www.kde.org/">KDE</A> 'start' menu.
</BLOCKQUOTE>
<BLOCKQUOTE>
It's pretty self-explanatory from here onwards. Enter your comm port
- try from 1 - 4 ( cua0 - cua3 ), until you find which port
your modem uses. Enter your modem's speed, and then your ISP's details.
Unless you have other problems, that should allow you to use the internet.
</BLOCKQUOTE>
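<BLOCKQUOTE>
(A quick way to see which serial ports the kernel actually found -- a sketch, assuming the stock setserial utility is installed; ttyS0 through ttyS3 correspond to cua0 through cua3:)
</BLOCKQUOTE>
<blockquote><pre> dmesg | grep ttyS        # ports detected at boot time
 setserial -g /dev/ttyS0 /dev/ttyS1 /dev/ttyS2 /dev/ttyS3
 # a port that reports "UART: unknown" has nothing usable attached to it
</pre></blockquote>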
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Heather]
A Lucent controllerless modem, if you happen to have one and
force the driver (module ltmodem.o) to load, becomes /dev/ttyS14.
It is known to have problems interacting with the current ppp module
though; a patched ppp.o with features reduced back to 2.2.14 is
available for 2.2.15 and 2.2.16.
</BLOCKQUOTE>
<BLOCKQUOTE>
On systems without a ps/2 mouse, serial 0 is usually the mouse,
and serial 1 (com2) the modem. On laptops, the external serial
is usually serial 0, and the infrared (when turned on) serial 1,
leaving PC cards to be on serial 2 (com3).
</BLOCKQUOTE></strong>
<!-- end 10 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/11"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 11 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Binfmt/Exec Format Errors in <TT>/linuxrc</TT> on initrd</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<blockquote><em><p>[ Folks, while our Answer Gang does read technese
as well as English, it helps if you use some connective grammar...
little things like "when I used 'cat whateverfile' it said &lt;gibberish here&gt;"
or "with kerneloptthingy=nnn I can see syscalls blah() blah() blabla()".
This one had to be translated, and my wildest guess is Fuchangdong uses
some sort of kernel debugging that he didn't describe to us.
--Heather ]</p></em></blockquote>
<p><strong>From fuchangdong on Mon, 17 Jul 2000
</strong></p>
<!-- ::
Binfmt/Exec Format Errors in <TT>/linuxrc</TT> on initrd
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
please give me some help,i didn't know how to explain at my
implementing embeded os. fuchangdong
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You're trying to use Linux for an embedded system?
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
<A HREF="http://www.sohu.com/sas/temp/twoyear/2year.html"
>http://www.sohu.com/sas/temp/twoyear/2year.html</A>
<A HREF="http://www.sohu.com"
>http://www.sohu.com</A>
</STRONG></P>
<P><STRONG>
hi :
</STRONG></P>
<P><STRONG><BLOCKQuote>
i now have a question,please give me help, i use initrd and
ramdisk to complete embedded linux on my hardware.
first ,i create a initrd.img from command mkinitrd.and a bigger
root fs:ram.img.gz ,to lilo it,and reboot it
</BLOCKQuote></STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You're using the Linux initrd (initial RAM disk) feature.
You use the mkinitrd command to create your RAM disk image
install that and your kernel onto the target hardware (which
I presume is x86 because...) you then run <TT>/sbin/lilo</TT> on that
and try to boot it.
</BLOCKQUOTE>
<P><STRONG><CODE>
at init process,do_basic_setup,this line :
</CODE></STRONG></P>
<P><STRONG>
kernel_thread(do_linuxrc,"<TT>/linuxrc</TT>",0);
</STRONG></P>
<P><STRONG><CODE>
at this function: do_linuxrc()</TT>
</CODE></STRONG></P>
<P><STRONG>
execve(shell,argv,envp_init);
it return <TT>-1</TT> ,and errno is 8,this tell that it is "exec format error"
</STRONG></P>
<P><STRONG>
so i can't to exec linuxrc script file.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
According to the kernel sources it is calling the
kernel_thread(do_linuxrc,...) function and the do_linuxrc
function returns a failure on the<TT> execve()</TT>, with the errno
global set to 8, which translates to "exec format error"
according to the strerror()/perror() function.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
linuxrc's content is :
</STRONG></P>
<P><STRONG><CODE>
#!/bin/sh
<BR>ls -l
<BR>and chmod 0777 linuxrc
</CODE></STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
The <TT>/linuxrc</TT> is a trivial (test) shell script. You've tried
marking that as executable with the chmod 0777 command.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
so i can't know what wrong with me? why initrd.img cant't be load right?
but i find :
</STRONG></P>
<P><STRONG><BLOCKQuote>
ret = open("<TT>/linuxrc</TT>",O_RDONLY,0);
ret = success.
</BLOCKQuote></STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
If you (patch the kernel?) to simply open the file you
don't see any error.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
and infomation have :
mount root filesystem (ext2);
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You think you have an ext2 filesystem mounted on root at
this point? (It's not clear how you are getting this
info).
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
so i can't get reason ,please give me help?
linux is redhat 6.2
linux kernel is 2.2.12-20
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
The development environment is a <A HREF="http://www.redhat.com/">Red Hat</A> 6.2 system and you're
using a 2.2.12-20 kernel.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
after, i test this ,give me these information:
i add modprobe/insmod command in initrd.img, reboot it,
this system give me information:
" kmod:failed to load <TT>/sbin/modprobe</TT> <TT>-s</TT> <TT>-k</TT> binfmt-0000"
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
When you try to run a modprobe command in the initrd.img
you get a kmod binfmt error.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
<TT>execve()</TT> call<TT> do_execve()</TT><TT>,do_execve()</TT> call<TT> request_mode()</TT>
<TT>,request_mod()</TT> call<TT> exec_modprobe()</TT>,so it's path is right.
but i can see this inforamtion ,at boot ,system load script
,aout,elf binfmt. so i can't know greater!!! please give me help
!!!
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
This last bit of typing is utter gibberish. Actually your
whole message is basically incomprehensible. However, I've
echoed a guess after each fragment of what you've said to see
if I could understand the question.
</BLOCKQUOTE>
<BLOCKQUOTE>
It sounds to me like you are somehow missing some of the necessary
binfmt loaders from your kernel. Now there are a couple of options
in the 'make config' scripts that allow you to enable or disable
a couple of different types of executable (binfmt) loaders. You
generally need at least one of them compiled directly into the
kernel (so that it can execute a linuxrc and/or an init(8)
process).
</BLOCKQUOTE>
<BLOCKQUOTE>
I don't think it's possible to build a kernel without statically
linking in one of the a.out or ELF loaders. If 'make menuconfig' somehow
let you pull that off, it's a bug in the Makefiles and
dependencies.
</BLOCKQUOTE>
<BLOCKQUOTE>
You need one of those.
</BLOCKQUOTE>
<BLOCKQUOTE>
In addition I've never seen an option to leave out the text/script
binfmt loader. That is the loader that handles text files and
uses the #!/.../ line to execute most scripts.
</BLOCKQUOTE>
<BLOCKQUOTE>
However, it would seem that you have somehow managed to do this.
I could see it if you had been applying your own patches to the
kernel code, or if you were hand editing or bypassing the Makefiles
with some of your own.
</BLOCKQUOTE>
<BLOCKQUOTE>
I suppose English is not your native language (given the
distressing incompetence of your message). I suppose you should
look for a (Chinese?) users group, newsgroup, mailing list or
other forum where you can have someone translate your question into
English.
</BLOCKQUOTE>
<BLOCKQUOTE>
Other than that try recompiling your kernel and ensuring that
the ELF executable support (under "General Setup") is set to
"Y" (NOT "M" and definitely NOT "N").
</BLOCKQUOTE>
<BLOCKQUOTE>
To quote the help text that is associated with that menu
config option:
</BLOCKQUOTE>
<blockquote><pre> Saying M or N here is dangerous because some
programs on your system might be in ELF format.
</pre></blockquote>
<BLOCKQUOTE>
It is highly unlikely that you are somehow managing
to compile your core shell and other software in a.out
format. That actually might be quite useful for
embedded systems work <TT>---</TT> but the older format and the
tools to generate them haven't been used by any general
purpose distribution in a few years. The only remaining
a.out distribution that I know of is David Parsons'
Mastodon (<A HREF="http://www.pell.portland.or.us/~orc/Mastodon"
>http://www.pell.portland.or.us/~orc/Mastodon</A>).
</BLOCKQUOTE>
<BLOCKQUOTE>
So, I think you can safely leave out the other binfmt loaders.
</BLOCKQUOTE>
<BLOCKQUOTE>
BTW: You also MUST have one of the filesystem types statically
linked into the kernel. You can't just go through and blindly
mark EVERYTHING as modular. It won't work. The initial RAMdisk
will have to be in some filesystem format (minix, ext2, something).
Of course it would be possible to use the ROMfs. This is much
different than initrd <TT>---</TT> it's readonly and you have to make the
filesystem using a genromfs utility AND you'd have to link your
ROMFS into your kernel. I don't know of anyone that actually uses
ROMFS.
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, I suspect that the reason your shell script isn't
working is that the kernel can't load the shell interpreter.
The reason it can't load the shell interpreter is because your
shell is probably in ELF (executable linking format) and you
left the ELF loader out or put it in as a module. Of course
the insmod/modprobe programs are also in ELF format <TT>---</TT> and
kmod (the kernel module loader) requires access to those
in order to actually load any modules. (kmod doesn't
load modules itself; it spawns a kernel thread, which runs modprobe
to do the actual work. You can read <TT>/usr/src/linux/kernel/kmod.c</TT>
to see that.)
</BLOCKQUOTE>
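<BLOCKQUOTE>
(Two quick sanity checks along those lines -- a sketch, assuming the kernel tree you're building from lives in <TT>/usr/src/linux</TT> and has been configured:)
</BLOCKQUOTE>
<blockquote><pre> file /bin/sh /sbin/modprobe          # both should report "ELF 32-bit LSB executable ..."
 grep BINFMT /usr/src/linux/.config   # expect CONFIG_BINFMT_ELF=y, not =m or unset
</pre></blockquote>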
<BLOCKQUOTE>
I hope that helps.
</BLOCKQUOTE>
<!-- sig -->
<!-- end 11 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/12"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 12 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Mandrake and the Missing Modem</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Michael Hudson on Tue, 04 Jul 2000
</strong></p>
<!-- ::
Mandrake and the Missing Modem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Hi yall,
</STRONG></P>
<P><STRONG>
First off let me tell you that I am completely new to the Linux world! I
have been &lt;Stuck&gt; with Windoze most of my computing life.. I have only
recently discoverd this whole new world! So please make you answers as
simple as possible to understand.. Thanx in advance!
</STRONG></P>
<P><STRONG>
I have recently installed Linux Mandrake on my K6 Machine. I am running it
Dual Boot with Windoze.. I am having some reall problems setting up my modem
to actually work..
</STRONG></P>
<P><STRONG>
I think this is solely down to my lack of knowledge towards Linux...
Could NE1 give me some advice?
</STRONG></P>
<P><STRONG>
Yours,
Michael Hudson.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You're also having "some reall" [sic] problems describing your
problem. Read back through your message. Try to pretend that you
were getting this from some stranger. Do you really think there is
enough detail provided for any mere mortal to divine what your
problem could be?
</BLOCKQUOTE>
<BLOCKQUOTE>
I understand that you're a novice at Linux. However, you could put
a little energy into the questions you're going to ask.
</BLOCKQUOTE>
<BLOCKQUOTE>
What did you try to do? Did you run some program to try to "set
up" your modem? What do you mean by "set up"? What kind of modem
is it? If you ran some program or command to try to "set up" your
modem, WHAT DID IT DO? Did you get an error message? What were you
expecting the modem to do? What did it do?
</BLOCKQUOTE>
<BLOCKQUOTE>
Did you read any manuals or do searches through any Internet web
search engines?
</BLOCKQUOTE>
<BLOCKQUOTE>
Anyway, the problem is probably that you have a "winmodem"
or a "softmodem" or some other useless piece of junk that isn't
really a modem. If you go back to the <i>Linux Gazette</i> (which you
should have read in order to get this e-mail address) and you
peruse the FAQ and maybe search on the word "modem" you'll find
about 100 other messages where I've talked about modems, Linux,
using modems under Linux, testing to see if your modem is supported
by Linux, and especially about why "winmodems" are such losers.
</BLOCKQUOTE>
<!-- sig -->
<!-- end 12 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/13"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 13 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Making the Laptop's Fan Run</H3>
<p align="right">AnswerGang: Jim Dennis, Heather Stern</p>
<p><strong>From Allen Tate on Thu, 27 Jul 2000
</strong></p>
<!-- ::<BLOCKQuote>
Making the Laptop's Fan Run
~~~~~~~~~~~~~~~~~~~~~~~~~~~
</BLOCKQuote>:: -->
<P><STRONG>
Anyone out there know anything about making the cooling fan run on
a laptop running Linux? Seems I read something somewhere about
running a module that made the fan run. Any advice is appreciated.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
What makes you think you need a special module or driver to
control your system's fan?
</BlockQuote>
<BlockQuote>
On any reasonable equipment the fan should run when it is needed
without any software support required. The hardware should include
its own thermostat which should operate completely independently of
the OS.
</BlockQuote>
<BlockQuote>
(Actually there's a good argument that we should be producing
better hardware that runs cooler, with lower power consumption,
so that fans would be unnecessary for most laptops and general
purpose computing devices. That's what Transmeta <TT>---</TT> the company for
which Linus works <TT>---</TT> has recently introduced to the PC market.)
</BlockQuote>
<BlockQuote>
Anyway, I don't know of any module that "makes the fan run" or
anything like that. The closest I can think of would be the
ACPI kernel features (ACPI is an advanced and somewhat complicated
alternative to APM <TT>---</TT> advanced power management). That would
require that you get a daemon to call those kernel functions from
user space. Under
<A HREF="http://www.debian.org/">Debian</A> you'd just use the command
'apt-get install acpid' to fetch and install that daemon, under other Linux
distributions you'd have to hunt for it on your CDs, and/or look for it on their
FTP contrib sites, etc.
</BlockQuote>
<BlockQuote>
There is also a package called "LM_Sensors" which allows one to
monitor some values such as CPU temperature, fan speed,
power supply voltage, etc. There are a number of motherboards
which use an LM78 or similar chip and sensor set to allow
software access to these sorts of metrics. Under Debian you
could get the sources to this package using 'apt-get source
lm-sensors' which will fetch the original package sources and
the Debian maintainer's patches and unpack them under your current
directory. I usually do that sort of thing from my <TT>/usr/src/debian</TT>
directory.
</BlockQuote>
<BlockQuote>
LM_Sensors consists of a kernel patch (you must recompile your
kernel to add these features) and some user space utilities for
querying the kernel driver.
</BlockQuote>
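<BlockQuote>
(Once the patched kernel is running, the user space side is simple; a sketch, assuming the package's standard utilities are on your path:)
</BlockQuote>
<blockquote><pre> sensors-detect     # probe for LM78-style chips and suggest which modules to load
 sensors            # print the current temperatures, fan speeds and voltages
</pre></blockquote>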
<BlockQuote>
I highly recommend LM_Sensors to sysadmins who are maintaining
servers at co-located facilities and in server closets. Those are
places where having this information available via software can
save a great deal of downtime and damage. (The audible alarms that
might be in your case to warn of fan failures and overheating aren't
very useful when there's no one there to hear them. Also the
typical machine room has too much fan and air conditioning noise for
anyone to hear the failure of one system).
</BlockQuote>
<BlockQuote>
However, I don't know of any laptops that support the
LM78 or similar sensor features. So that's probably not useful
to you.
</BlockQuote>
<!-- sig -->
<p><em>... he replied ...</em></p>
<p><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Thanks for the advice. I look into it.
</p>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Heather]
As someone who works a <em>lot</em> with laptops (imagine that, since
I work for a linux laptops company ... though the relation was really
the other way around) I'd like to add a couple of brief points:
<ul>
<li>There really are some special utilities for <em>some</em> laptops
out there. At minimum Thinkpads and Toshibas, two major brands
famous for being very nice systems, but somewhat weird. A colleague
of mine recently released source for a certain style of hibernation
partitions. Most of these sorts of tools are not useful to machines
with a different BIOS.
<li>If the fan comes on, it's because the system thinks it's too hot and
needs it. If you're personally feeling a bit toasty and it's
looking like it's 112 in the shade outside, do you
turn OFF the air conditioning in your house? nope, bad idea.
Some poor woman in the southwest turned her fans off in such heat
because she feared it would push up her electric bill; she died.
Basically, if a system that is getting cooked <em>doesn't</em>
turn its fan on, the thermal sensor or the motor may be broken
and it should be looked at by a technician before you get a
thermal failure. Now if your BIOS has a feature to spin the fan
<em>faster</em> than it really requires when it's overheating if
AC power is on... that'd be kinda cool
<IMG SRC="../gx/dennis/smily.gif" ALT=":)"
height="24" width="20" align="top">
</ul>
</BLOCKQUOTE></strong>
<!-- end 13 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/14"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 14 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>MX Records and Precedence Values</H3>
<p align="right">AnswerGang: Jim Dennis, Mike Orr</p>
<p><strong>From Todd Tredeau on Sat, 01 Jul 2000
</strong></p>
<!-- ::
MX Records and Precedence Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
I am trying to understand mx records, and the role the play in
relationship to a backup queue server. I have two mail servers
mx1.wisernet.com and mx2.wisernet.com, I also have a third emergency
back server, to be manually added if I need it.
</STRONG></P>
<P><STRONG>
If the primary mail store is on mx1 then should the priority be higher
or lower?
</STRONG></P>
<P><STRONG>
like mx1.wisernet.com 10 (primary)
mx2.wisernet.com 20 (backup)....
</STRONG></P>
<P><STRONG>
your help would be greatly appreciated, I have all sorts of mail
problems....Actually my antispam software was working so well at one
point, I couldn't send messages from mx1 to mx2 and so on... got that
straightened out though. Nice web site.....
</STRONG></P>
<blockquote><em>[ Thanks! -- Heather ]</em></blockquote>
<P><STRONG>
Todd
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
The MX record with the lowest value will have the highest
priority. Think of it as the "distance to user's mailboxes"
and consider that the various MTAs (mail transport agents) which
are relaying a piece of mail are each seeking to get the mail
closer to its final destination.
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course the host with the lowest MX value will either have
to accept the mail or there will have to be an accessible route
to an A record of the host. (Note: CNAMES are never supposed to be
used for mail exchanges). Normally we have MX <EM>and</EM> A (address)
records for any host that is supposed to receive mail.
</BLOCKQUOTE>
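<BLOCKQUOTE>
(In zone file terms that looks something like the following sketch; the addresses are placeholders, of course:)
</BLOCKQUOTE>
<blockquote><pre> wisernet.com.      IN  MX  10  mx1.wisernet.com.
 wisernet.com.      IN  MX  20  mx2.wisernet.com.
 mx1.wisernet.com.  IN  A       192.0.2.1    ; placeholder address
 mx2.wisernet.com.  IN  A       192.0.2.2    ; placeholder address
</pre></blockquote>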
<BLOCKQUOTE>
In general there is nothing special to setting up backup MX
relationships. It used to be that you could simply add the
appropriate MX records to your domain zones. These days there
is one extra step.
</BLOCKQUOTE>
<BLOCKQUOTE>
In recent years it has become almost mandatory for sites to limit
their mail relaying. Before the advent of widespread spamming it
was common to allow "promiscous relaying." That basically meant
that my mail servers would attempt to forward/relay/deliver any
piece of e-mail that landed on them, regardless of where it was
from and regardless of who it was to. That was basically a fault
tolerance feature. If a bit of e-mail got mis-routed and landed on
my server <TT>---</TT> the server would just try to get it delivered anyway.
That was common courtesy in a co-operative Internet.
</BLOCKQUOTE>
<BLOCKQUOTE>
However, the spammers ruined all of that forever. They would
dump one item of e-mail, generally with a couple thousand recipient
addresses, onto any open relay. This allows the spammer to use a
small bit of their own bandwidth (as provided by a 14.4 or 28.8
modem) while leeching much more bandwidth (a few thousand times
their "investment") off of the rest of the Internet and the
host of the open relay in particular.
</BLOCKQUOTE>
<BLOCKQUOTE>
So now we also have to configure the MTA on our backup MX hosts
to accept mail for our domain. (Obviously that's no problem if
we're talking about additional hosts within our domain <TT>---</TT> they
presumably are already configured to accept/relay mail for us. It
is also true of cases where we want to set up mutual backup MX
services for and with other domains.) Thus if the connection(s)
into our domain is/are down, or if some regional outages prevent
some customers from reaching us directly, but still allow
connections to one of our MX partners, then the mail works its
way towards us. The correspondents feed their mail up to any
available MX server, so the mail doesn't languish on their
systems.
</BLOCKQUOTE>
<BLOCKQUOTE>
That's the idea, anyway. I've had some people question whether
configuring backup MX services is still appropriate in the
modern Internet. Personally I think it is. However, there are
valid arguments on both sides of this issue.
</BLOCKQUOTE>
<BLOCKQUOTE> <BLOCKQUOTE><EM>
[The way I heard it, if the primary mail server is down,
a secondary server's job is to accept the message and keep trying to
forward it to the primary server, with a longer-than-usual
retry timeout. This prevents the mail from bouncing needlessly
if the primary server is down for a while. Note that the secondary
server cannot deliver the message itself, since the recipient is not
a local user on that machine. --Mike]
</EM></BLOCKQUOTE> </BLOCKQUOTE>
<!-- sig -->
<!-- end 14 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/15"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 15 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Re: unable to open a initial console</H3>
<H4 ALIGN="center">Also: A Short Guide on How to do Backups and Recovery:</H4>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Asghar Nafarieh on Tue, 25 Jul 2000
</strong></p>
<!-- ::
Re: unable to open a initial console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Also: A Short Guide on How to do Backups and Recovery:
:: -->
<P><STRONG>
Hi,
</STRONG></P>
<P><STRONG>
I hope you can help me on this problem.
After booting my linux server (RedHat6.0) It goos through
booting and comes back with the above prompts and hangs
there. I have hat this machine running for 6 months and this
is the first time this is happenning. I have a lot of data
in there. I tried to use the resuce disk but I don't know
how to get to the hard disk to check the problems. I appreciate
your help.
</STRONG></P>
<P><STRONG>
Thanks,
-Asghar
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
This error message basically means that the kernel was
unable to find a console on which it could run init.
</BLOCKQUOTE>
<BLOCKQUOTE>
That suggests that it can't find your <TT>/dev</TT> directory
(on the root filesystem) or that it can't find the appropriate
<TT>/dev/tty*</TT> and <TT>/dev/console</TT> device nodes thereunder.
</BLOCKQUOTE>
<BLOCKQUOTE>
This is most commonly caused by one of two problems:
</BLOCKQUOTE>
<BLOCKQUOTE><ol>
<li> Perhaps you removed or damaged the <TT>/dev/*</TT> nodes
that the kernel needs.
<li> Perhaps the kernel is mounting the wrong filesystem
on the root directory (a filesystem which doesn't
HAVE a <TT>/dev</TT> directory).
</ol></BLOCKQUOTE>
<BLOCKQUOTE>
So, here's how you use a rescue diskette to troubleshoot
this sort of problem:
</BLOCKQUOTE>
<BLOCKQUOTE><ol>
<li> Boot from the rescue diskette.
<li> Mount your root filesystem. Use a command like:
<BLOCKQUOTE><code>
mount /dev/hda3 /mnt
</code></BLOCKQUOTE>
<li> Look for a <TT>.../dev/console</TT> device thereunder. Use
a command like:
<BLOCKQUOTE><code>
ls -l /mnt/dev/console
</code></BLOCKQUOTE>
<BLOCKQUOTE>
It should look something like:
</BLOCKQUOTE>
<blockquote><pre>crw-r--r-- 1 root root 5, 1 Jul 21 14:50 /dev/console
</pre></blockquote>
</ol></BLOCKQUOTE>
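<BLOCKQUOTE>
(If that node turns out to be missing, a minimal sketch for recreating it from the rescue system -- the console device is character major 5, minor 1, as the listing above shows:)
</BLOCKQUOTE>
<blockquote><pre> mknod /mnt/dev/console c 5 1
 chmod 644 /mnt/dev/console
</pre></blockquote>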
<BLOCKQUOTE>
If it's there then you want to try booting from your
hard drive again. This time, at the LILO prompt you'd
interrupt the boot process and pass the kernel some options.
</BLOCKQUOTE>
<BLOCKQUOTE>
When you see LILO press the [CapsLock] or the [ScrollLock] key.
Then hit the [Tab] key. That should give you a list of available
boot labels ("linux" and "dos" for example). You'd type something
like '<tt>linux root=/dev/hda3 init=/bin/sh</tt>' (Be sure to refer to the
same device, hda3, or whatever, as you did when mounting your root
fs under the rescue diskette).
</BLOCKQUOTE>
<BLOCKQUOTE>
In this case I've specified the kernel option "<tt>init=/bin/sh</tt>" just
for further troubleshooting. If that comes up O.K. you can then
type '<tt>exec /sbin/init 6</tt>' to force the system to shutdown and reboot
under the normal init.
</BLOCKQUOTE>
<BLOCKQUOTE>
I realize, from the tone of your question, that this may all be a
bit confusing to you. You don't mention what you've done to the
system between the time that it was working and the time that this
error started occurring. I can guess at a few possibilities, but
I'd only be guessing.
</BLOCKQUOTE>
<BLOCKQUOTE>
For example: if you or someone else with administrative access to
that system had built a new kernel, it might be that you built it
with a faulty "rootfs" flag. A Linux kernel has a pointer to the
default root filesystem device and partition compiled into it. If
it isn't passed a root= parameter, then this compiled-in
pointer specifies which device the kernel will try to find and
which partition it will try to mount as root. Normally the
LILO boot loader has a root= directive in it. That is usually
in the "global" section and is used for any "stanza" which
doesn't over-ride it. When we are typing in root= directives
at the LILO prompt we are over-riding both the kernel's default
and LILO's stored option.
</BLOCKQUOTE>
<BLOCKQUOTE>
As you can infer from the foregoing the Linux kernel mounts a
root filesystem and then it opens a console device. That done,
it prints a lot of messages to the screen, and runs the init
program. It looks in several places, like <TT>/sbin</TT>, <TT>/etc</TT>, and
<TT>/bin</TT>, for a program named 'init', then it looks for <TT>/bin/sh</TT> as a
failsafe. Failing all of those, the kernel will print an error
message like: "No init found. Try passing init= option to kernel."
</BLOCKQUOTE>
<BLOCKQUOTE>
(You can read the kernel source code for these actions in
<TT>/usr/src/linux/init/main.c</TT>).
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that I haven't addressed the issue of whether there is a
Linux filesystem, recognized by your kernel, available. If
you had no Linux filesystem there, you'd be getting an error
more like: "VFS Kernel Panic: Unable to mount root" or
"VFS: Cannot open root device" (depending on whether the
filesystem/partition was nonexistent or corrupt, or whether
the device couldn't even be found).
</BLOCKQUOTE>
<BLOCKQUOTE>
I've also left out any discussion of the initrd (initial
RAM disk). <A HREF="http://www.redhat.com/">Red Hat</A> does tend to use these, though they are
not necessary for most systems. Here's a little bit about
how those work:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote>
If you are using an initrd, then the loader (LILO) must load
the kernel, and the initrd into memory. It then passes the
kernel an option. The kernel (with initrd support enabled) will
then allocate memory for a RAM disk, and decompress the initrd
image into that memory. Normally the initrd will contain a
compressed filesystem image. (It's actually possible for it
to contain other sorts of data, but that's not a feature that I've
ever heard of anyone using).
</BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
Once the initrd (RAMdisk) has been initialized and populated,
the kernel temporarily mounts that as the root filesystem and
attempts to execute a command called <TT>/linuxrc.</TT> After that
command exits, then the regular root filesystem is mounted,
and the normal init process is run.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that this is basically a hook between the kernel's
initialization and the normal root fileystem mount and
init process. Often the initrd will have no effect on
the regular boot process. However the most common case is
for the initrd to contain some modular device drivers, and
for the <TT>/linuxrc</TT> to load them. This is intended to allow
the kernel to access devices for which it only has modular
(rather than compiled in) drivers.
</BLOCKQUOTE>
<BLOCKQUOTE>
(Usually I suggest that users learn how to compile their
own kernel, statically including their main disk interface
and network adapter drivers. That obviates the need for an
initrd, making the whole system a tiny bit easier to maintain
and troubleshoot).
</BLOCKQUOTE>
<BLOCKQUOTE>
I mention all of this in your case because it's possible that
your kernel is fine and your root filesystem is fine, but that your
initrd has been corrupted and is setting the rootfs flag to
some bogus value.
</BLOCKQUOTE>
<BLOCKQUOTE>
For more details about this initrd subsystem you can read
/usr/src/linux/Documentation/initrd.txt
</BLOCKQUOTE>
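<BLOCKQUOTE>
(If you want to rule the initrd out, you can inspect it by hand; a sketch, assuming loopback mount support and a typical Red Hat image name -- adjust the filename to whatever your <TT>/etc/lilo.conf</TT> actually points at:)
</BLOCKQUOTE>
<blockquote><pre> zcat /boot/initrd-2.2.5-15.img &gt; /tmp/initrd.ext2   # the image is a gzipped filesystem
 mkdir -p /mnt/initrd
 mount -o loop /tmp/initrd.ext2 /mnt/initrd
 cat /mnt/initrd/linuxrc                             # see which modules it tries to load
 umount /mnt/initrd
</pre></blockquote>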
<BLOCKQUOTE>
Of course I should also take this opportunity to give the standard
parental lecture about the need to make and test backups. However,
I don't have a really good resource to which I can refer you. I
don't know of a well-written "System Recovery HOWTO" and I should
take it upon myself to write one. (The third chapter of my
book on system administration is a start --- but it doesn't
go down to step-by-step details).
</BLOCKQUOTE>
<BLOCKQUOTE>
Let's just say this for now:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote>
If you end up re-installing here are some tips to make recovery
from these sorts of disasters much easier:
</BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
First, during installation, create at least three or four
partitions. I like using lots of partitions. You want to
have partitions for root (<TT>/</TT>), system (<TT>/usr</TT>), and data (<TT>/home</TT>)
at least.
</BLOCKQUOTE>
<BLOCKQUOTE>
I like to have an alternative root filesystem (<TT>/mnt/altroot</TT>) (which
is normally not mounted) and a <TT>/var</TT> partition. Then I may add
other partitions based on the needs of a specific machine. I
usually create <TT>/tmp</TT> and <TT>/usr/local</TT> partitions, and sometimes I add
<TT>/var/spool</TT> and/or <TT>/var/spool/news</TT> partitions for some mail and news
servers.
</BLOCKQUOTE>
<BLOCKQUOTE>
One of the reasons for this partitioning is to facilitate
system and data recovery. Most problems will only affect
one of your filesystems. For example, if your root filesystem
is damaged (as it appears has happened in your case) then you can
just reformat and restore that without worrying about your data
(which should mostly be stored on <TT>/home</TT> and/or <TT>/usr/local</TT>).
</BLOCKQUOTE>
<BLOCKQUOTE>
If you have a separate <TT>/boot</TT> partition it can be mounted read-only
most of the time (just remounted in read-write mode when you are
installing a new kernel). That can also work around limitations
of older BIOS' and versions of LILO with regards to the infamous
1024 cylinder limit. If you keep an extra "alternative root"
filesystem you can maintain a "mirror" (replication of) the root
filesystem on that, with copies of all the system configuration
data (from under <TT>/etc</TT>). Then when your root fs is damaged you
can simply boot from the altroot using the root= kernel/LILO
option while booting. (You could also use the root= directive
when booting from a floppy disk or bootable rescue CD).
</BLOCKQUOTE>
<BLOCKQUOTE>
You can copy all of your root fs to the alternative root with
a sequence of commands something like:
</BLOCKQUOTE>
<blockquote><pre> mount /dev/hdc8 /mnt/altroot
cp -ax / /mnt/altroot
umount /mnt/altroot
</pre></blockquote>
<BLOCKQUOTE>
... assuming that you have already created a <TT>/mnt/altroot</TT>
mountpoint (using mkdir) and that you have a partition like
<TT>/dev/hdc8</TT>, the fourth logical partition on the primary (master) drive
of the secondary IDE controller, with a valid filesystem thereon. Once
you create an altroot partition, repeat that copy whenever you make
significant changes to your root filesystem so the mirror stays current.
</BLOCKQUOTE>
<BLOCKQUOTE>
I suggest keeping <TT>/usr</TT> as a separate filesystem for two reasons.
You can keep it mounted read-only most of the time (remounting it
in read-write mode during major system upgrades and while
installing new packages). That makes it more difficult for it to
get damaged and might even protect your system from some of the
sloppier "script kiddy" exploits (it's not a real security feature,
a better exploit will simply remount the filesystems read-write before
installing a rootkit).
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course keeping <TT>/home</TT> as a separate partition should be fairly
obvious. If you're using your system in a sane fashion, most of
your data should be under <TT>/home.</TT> That means that you can focus on
backing that system up. The other filesystems should change
somewhat less often, and you can be assured that the programs,
libraries and other files stored on them are recoverable (from
your installation CDs, and the Internet at large) or are expendable
(temporary files, caches, logs, etc).
</BLOCKQUOTE>
<BLOCKQUOTE>
Under Linux there are many different ways to perform a backup. In
general you can use 'tar', 'cpio' and/or the 'dump' commands for
individual systems, or you can use the free AMANDA package for
setting up a networked client/server backup infrastructure.
</BLOCKQUOTE>
<BLOCKQUOTE>
Each has its advantages and disadvantages. You could also get BRU
(the backup and recovery utility) which is probably the most
popular among several commercial Linux backup packages.
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course you need more than software to do backups. You need to
have places to store these backups (media) and a device to handle
the media. Some of your choices are tape drives, CD-R or CDRW,
magneto optical or any of various types of removable storage
ranging from floppies through LS120, Zip, Jaz, etc.
</BLOCKQUOTE>
<BLOCKQUOTE>
Most systems sold these days don't include any backup devices.
With common disk drive capacities of several gigabytes, we can't
count 1.44Mb floppies as a reasonable backup device. (Even in the
days of 100 and 200 Mb hard drives, no one was using floppies to do
full system backups). Managing a thousand or more floppies per
hard drive is absurd.
</BLOCKQUOTE>
<BLOCKQUOTE>
Even the systems that sell with LS120 or Zip(tm) drives aren't
really meeting the backup/recovery needs of an average user. It
wasn't too bad for one and two gigabyte systems (10 to 20 disks)
but it's not reasonable for the 6 to 18 gigabyte hard drives we're
seeing now (60 to 200 disks). Even CD-R or CDRW are barely
adequate for backing up individual systems (at 650Mb each you need
about a dozen discs for a typical drive, and I'd need almost 30 of
them to backup my laptop).
</BLOCKQUOTE>
<BLOCKQUOTE>
So the only reasonable way to do full system backups on most
modern PCs is to use tape drives. A 4mm DAT3 tape can store 12 Gb
uncompressed. DLT tape drive capacities range from 20 to 70 Gb.
There are other drives ranging from 250Mb (FT) through over 100 Gb
and most are supported by Linux drivers.
</BLOCKQUOTE>
<BLOCKQUOTE>
The biggest problems with tape drives is that they are expensive.
A good tape drive costs as much as a cheap PC.
</BLOCKQUOTE>
<BLOCKQUOTE>
Let's say you bought a 4mm DAT drive (and a SCSI controller to go
with it). You could do a backup of your whole system with a
command like:
</BLOCKQUOTE>
<blockquote><pre> tar cSlvf /dev/st0 / /usr /home ...
</pre></blockquote>
<BLOCKQUOTE>
... Note: here I'm not using compression, and I am using the "S"
(<TT>--sparse:</TT> note that's a capital "S") and "l" (<TT>--one-file-system</TT> a
lower case "ell") options to 'tar'. I'm assuming the first
(usually the only) tape drive which is called <TT>/dev/st0</TT> (or
<TT>/dev/nst0</TT> if you want to prevent the system from rewinding the tape
after each access). I'm listing the top level directory of each
locally mounted filesystem (the mount points). Using this
technique avoids inadvertently backing up <TT>/proc</TT> (a virtual
filesystem) and any network mounted or other unusual filesystems.
Obviously you'd only list those filesystems that made sense for
your system (read your <TT>/etc/fstab</TT> for a list).
</BLOCKQUOTE>
<BLOCKQUOTE>
I could add a "z" flag to force 'tar' to compress the data,
however that usually causes latency issues (the data doesn't
"stream" or flow smoothly to the tape drive). Since the tape
must be moving under the read-write head at a constant velocity,
if the data doesn't stream you'll get "shoeshining." The most
common causes of this are compression and networking. So, in
those cases you'd use a command more like:
</BLOCKQUOTE>
<blockquote><pre> tar cSlvf - / /usr /home ... | buffer -o /dev/st0
</pre></blockquote>
<BLOCKQUOTE>
(Here, I've changed 'tar' to write its output into the pipe
<TT>---</TT> to stdout technically <TT>---</TT> and added the buffer command,
which uses a bunch of shared memory and a pair of read/write
processes to "smooth out" the data flow).
</BLOCKQUOTE>
<BLOCKQUOTE>
Hint: You should write down the exact command you used to write
your data on any tapes that you've created. This allows any good
sysadmin to figure out what command is required to restore the
data.
</BLOCKQUOTE>
<BLOCKQUOTE>
To restore a system using such a tape you'd use the following
procedure:
</BLOCKQUOTE>
<BLOCKQUOTE><ol>
<li> Boot from a rescue diskette or CD (or onto your
altroot)
<li> Mount up a temporary filesystem using a command
like: mount <TT>/dev/hda5</TT> <TT>/tmp</TT> (or make sure your
RAM disk has a few meg of free space).
<li> Restore a table of contents (index) of your tar
file to <TT>/tmp/files</TT> using a command like:
tar tf <TT>/dev/st0</TT> &gt; <TT>/tmp/files</TT>
<li> Restore your <TT>/etc/passwd</TT> and <TT>/etc/group</TT>
files from the tape. Overwrite those in your
rescue system's (RAM disk based) <TT>/etc</TT> directory.
<BLOCKQUOTE>
NOTE: This must be done in order to ensure that
all the OTHER files that you restore will have
their proper ownership and permissions. Otherwise
you are quite likely to end up with all the files
on the system owned by the root user (depends on
the version of 'tar'). Trust me, you need to do
this. This may be a bit time consuming, since the
tar command will go through the entire tape to find
those two files. (It does make more sense in practice
to do different backups to your tapes, one of
just the root filesystem, or even just the <TT>/etc</TT>
directory, and the other containing the rest. However,
it is more complicated to understand and explain,
as you're dealing with "multi-member" tapes and have
to know how to use the 'mt' command with the nst0
device node to skip tape "members" (files). This
method will work, albeit slowly).
</BLOCKQUOTE>
To do this selective restore use a command like:
<blockquote><pre> tar xf /dev/st0 ./etc/passwd ./etc/group
</pre></blockquote>
<BLOCKQUOTE>
Note: when you did the backup as I described above
the GNU tar command will have prepended each filename
with "<TT>./</TT>"; if you weren't using GNU tar you should
modify the command I listed to create the backup by
inserting a cd <TT>/</TT> command before it, and changing each
directory/mountpoint reference to <TT>./</TT> <TT>./usr</TT>, etc. Of
course, if you weren't using GNU tar then the S and l
options might not work anyway. Those are GNU
extensions.
</BLOCKQUOTE>
<li> For each corrupted/damaged filesystem:
<ol>
<li> backup/copy any accessible files that are newer
than your last backup.
<li> reformat using the 'mkfs' command. Use the <TT>-c</TT> option
to check for bad blocks.
<li> mount that filesystem under <TT>/mnt</TT> in the same
(relative) place where it would go under normal
operations. For example a filesystem that would
normally be located under <TT>/</TT> would be under <TT>/mnt</TT>, and
one that was usually under <TT>/usr</TT> would go under
<TT>/mnt/usr</TT>, and one that was under <TT>/usr/local</TT> would
now be mounted under <TT>/mnt/usr/local/</TT> (see your
old <TT>/etc/fstab</TT> for details, restore that to <TT>/tmp</TT>
if necessary).
<BLOCKQUOTE>
Note: It may make sense to mount any undamaged
filesystems read-only as part of this process
... so that the whole directory tree will appear
more like you expect as you're working, while
helping you avoid accidentally over-writing or
damaging your (previously) undamaged filesystems.
Obviously this is simpler if you're restoring to
a whole new disk or system <TT>---</TT> and are thus restoring
EVERYTHING.
</BLOCKQUOTE>
<li> restore the files that were on that filesystem.
</ol>
</ol></BLOCKQUOTE>
<BLOCKQUOTE>
If you are restoring a whole system (there were
no undamaged filesystems) then you can simply
use a command sequence like:
</BLOCKQUOTE>
<blockquote><pre> cd /mnt &amp;&amp; tar xpvf /dev/st0
</pre></blockquote>
<BLOCKQUOTE>
(after you've mounted up all the filesystems under
<TT>/mnt</TT> in the correct relationship).
</BLOCKQUOTE>
<BLOCKQUOTE>
If you need to restore individual filesystems
you'd still cd to <TT>/mnt</TT>, then you'd issue a command
like:
</BLOCKQUOTE>
<blockquote><pre> tar xpvf /dev/st0 ./home ./var ...
</pre></blockquote>
<BLOCKQUOTE>
where <TT>./home</TT> <TT>./var</TT> ... are the list of top level
directories below which you want to restore your
files.
</BLOCKQUOTE>
<BLOCKQUOTE>
If you just want to restore a small list of files
(you can't use "*.txt" or other wildcard patterns
on the 'tar' command line) then the best method is
to use a "take list." Take the "index" (table of
contents file) that you generated back in step 3
and either edit or "grep" it for the list files
that you want. Filter out or delete the names of
all the files that you don't want. Then
use a command like:
</BLOCKQUOTE>
<blockquote><pre> tar xpvTf /tmp/takelist /dev/st0 ./home ./var ...
</pre></blockquote>
<BLOCKQUOTE>
... assuming that you stored the list of files
you want in <TT>/tmp/takelist.</TT>
</BLOCKQUOTE>
<BLOCKQUOTE>
If you know of a regular expression that
uniquely describes the files you want to restore
you can use a command like:
</BLOCKQUOTE>
<blockquote><pre> grep "^\./home/docs/.*\.txt" /tmp/files |
tar xpvTf - /dev/st0 ./home ./var ...
</pre></blockquote>
<BLOCKQUOTE>
... to get them without having to create a
"takelist" file. Here we are forcing 'tar' to
"take" its list of files from "stdin" (the
command pipeline in this case).
</BLOCKQUOTE>
<BLOCKQUOTE>
I realize that all of this seems complicated. However, that's
about as easy as I can make it for people using the stock Linux
tools. If that's too complicated, then you might want to consider
trying something like BRU (which has menu and GUI screens in
addition to its command line utilities). Personally I think those
are really just as complicated, but some of that complication is hidden
from the common cases and only comes out to bite you during moments
of extreme stress <TT>---</TT> like when your system is unusable while
you're trying to restore your root filesystem.
</BLOCKQUOTE>
<BLOCKQUOTE>
BTW: you don't have to buy a tape drive for every computer on
your network. Linux and other UNIX systems can easily share tape
drives using their standard tools. For example you can use 'ssh'
(or 'rsh' if you have NO security requirements) and the 'buffer'
program to redirect any 'tar', 'cpio' or 'dump' backup (or restore)
to a tape drive on a remote system.
</BLOCKQUOTE>
<BLOCKQUOTE>
Then you can use commands like:
</BLOCKQUOTE>
<blockquote><pre> tar cSlvf - / /usr /home ... | ssh -l bakoper tapehost buffer -o /dev/st0
</pre></blockquote>
<BLOCKQUOTE>
... to do your backups. (In this case I'm using ssh to
access a "backup operator" account (bakoper) on the host
named "tapehost", and I'm directing my tar output to a
'buffer' process on that remote system). Obviously there's
more to it than that. You have to co-ordinate all the access
to those tapes <TT>---</TT> since it wouldn't do to have each machine
over-writing one tape. But that's what professional sysadmins
are for. They can write the scripts and handle all the
scheduling, tape changing etc.
</BLOCKQUOTE>
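<BLOCKQUOTE>
The restore direction works the same way. A sketch, re-using the
hypothetical "bakoper" account and "tapehost" host from the example
above (check the 'buffer' man page for its exact options on your
system):
</BLOCKQUOTE>
<blockquote><pre> cd /mnt &amp;&amp; ssh -l bakoper tapehost "buffer -i /dev/st0" | tar xpvf -
</pre></blockquote>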
<!-- sig -->
<!-- end 15 -->
<!-- . . . . . . . . . . . . . . . . . . . -->
<HR WIDTH="40%" ALIGN="center">
<!-- begin 15 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>unable to open an initial console</H3>
<p><em>... he replied ...</em></p>
<p><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Jim,
</p>
<p>
The file <TT>/dev/console</TT> was missing as well as <TT>/var/log/*.</TT>
I think my server was compromised by a DNS attack. I was running an old version of
bind. I noticed there is a directory ADMROCKS in <TT>/var/named</TT> which
implies a bind overflow. I upgraded my OS and things are back to normal.
</p>
<p>
Thanks for the tips,
<br>-Asghar
</p>
<!-- end 15 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/16"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 16 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>RE: uninstall</H3>
<p align="right">AnswerGang: Ben Okopnik, Jim Dennis</p>
<p><strong>From erwin on Fri, 30 Jun 2000
</strong></p>
<P><STRONG>
If I want to install a package from source, I put the command tar
<TT>-XXX</TT> foo.tar.gz, "make", and then "make install" ....
What do I have to do if I want to uninstall that package?
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Ben]
First, be a bit careful about syntax when using "tar"; for historic reasons, the '<TT>-</TT>' is not just a syntax "preceder" but a part of the syntax itself, signifying piped input. "tar xzf foo.tar.gz" would be the correct way to "untar and defeather" the package; "tar xvzf foo.tar.gz" would print some useful info while doing so.
</BLOCKQUOTE>
<BLOCKQUOTE>
As to uninstalling the package <TT>-</TT> this is where one of the disadvantages of *.tar.gz packages shows up: since most of them do not follow any kind of a filesystem standard or a set of install/uninstall rules (unless you're talking about packages from a standard Linux distrib), the process can range from "simple" to "I'd rather have a root canal".
</BLOCKQUOTE>
<BLOCKQUOTE>
Since you didn't say that you're using a package from, e.g., <A HREF="http://www.slackware.org/">Slackware</A>, which I believe has a specific uninstall procedure, I'm going to assume the worst case <TT>-</TT> that you're talking about a random tarball pulled off the Net somewhere, meaning that it could be anything at all. So, here we go...
</BLOCKQUOTE>
<BLOCKQUOTE>
Easy version: type "make uninstall". Some software authors have enough mercy in their hearts on people like me and you to include an uninstall routine in their makefile. If it works, burn a Windows CD as an offering and be happy.
</BLOCKQUOTE>
<BLOCKQUOTE>
More complex version: If the above process comes back with an error ("No rule to make target `uninstall'. Stop."), the next step is to examine the makefile itself. This can be an ugly, confusing, frustrating process if you're not used to reading makefiles <TT>-</TT> but since we're only looking for 'targets' (things like "all:", "install:", "clean:", and "uninstall:"), here's a shortcut -
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
grep : makefile
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
This will print all the target names contained in the makefile, possibly along with a bit of unrelated junk. The line you're looking for may be named something like "remove:", "purge:", "expunge:", or a number of other things <TT>-</TT> but what that target should have, as the listed action (run "make <TT>-n</TT> &lt;target_name&gt;" to see what commands would be executed by that option), is the deletion of everything done by the "install:" target. If you find one that fits, rerun "make" with that switch.
</BLOCKQUOTE>
<BLOCKQUOTE>
"Crawling on broken glass" version: if you can't find anything like that, then you have to remove everything manually. In a number of cases, I've found that the least painful way to do it is by 1) running "make <TT>-n</TT> install &gt; uninstall" and examining the created file to see exactly what is done by that target, 2) deleting all the compilation statements ("gcc [...]" or "g++ [...]" and the like) and <EM>reversing</EM> the action of all the "mkdir", "cp", and "install" statements (i.e., "rm <TT>-rf</TT>" the created directories and "rm" the individual files that fall outside that hierarchy), and 3) running what remains as a shell script to execute those actions (". uninstall").
</BLOCKQUOTE>
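<BLOCKQUOTE>
Here's a compressed sketch of that "broken glass" sequence; treat it
as an outline rather than a recipe, since every makefile is
different:
</BLOCKQUOTE>
<blockquote><pre> make -n install &gt; uninstall   # capture what "make install" would run
 vi uninstall                  # delete compile lines; reverse cp/mkdir/install
 . uninstall                   # run the remaining removal commands
</pre></blockquote>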
<BLOCKQUOTE>
Of course, if the "install" target is simple enough <TT>-</TT> say, copying one or two files into <TT>/usr/bin</TT> <TT>-</TT> just delete those.
</BLOCKQUOTE>
<BLOCKQUOTE>
On a more general note, you should _always_ examine any makefile that you're about to run (with at least a cursory glance to see if an "uninstall" target exists): since some programs require installation by the root user, a stray "rm <TT>-rf</TT>" could cause you a lot of grief. This requires learning to read makefiles <TT>-</TT> but, in my opinion, this is a rather useful skill anyway. Using Midnight Commander to view the makefiles can be very helpful in this, since it highlights the syntax, which visually breaks up the file into more easily readable units.
</BLOCKQUOTE>
<!-- end 16 -->
<p><em>... he replied ...</em></p>
<!-- begin 16 -->
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Thank you for the information and the correction; I had misinterpreted the
"tar" syntax with the preceder ("<TT>-</TT>") and without the preceder. Could you
explain what is the main difference between the command "tar <TT>-zxvf</TT>"
and "tar zxvf"? In many Linux articles (Linux HOWTOs...) and other unix
clone articles I found the "tar" command with the preceder and sometimes
without the preceder; which one is correct?
</STRONG></P>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Either case is fine (with GNU tar). The <TT>-</TT> flag is more
portable.
</BLOCKQUOTE></strong>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Ben]
If you examine the stated syntax carefully, you will find that
<em>both</em> are correct: as is usual with Linux, There's More Than One
Way To Do It. The dash ('<TT>-</TT>') in "tar" syntax (as in a number of
other utilities) indicates "piped" input. Here are two versions of
a command line that performs the same operation:
</BLOCKQUOTE>
<BLOCKquote><pre>
tar xvzf foo.tgz
gzip -dc foo.tgz | tar xvf -
</pre></blockquote>
<blockquote>
The differences are the following:
</blockquote>
<blockquote><ol>
<li> In the first case, "gzip" is invoked by "tar", via the "z"
switch; in the second case, it is used explicitly. As I understand
it, "tar" did not originally have this capability <TT>-</TT> this may
explain why some folks would use the second version (i.e., a habit
from previous usage). As well, I believe that a number of users are
unaware of this "built-in decompression" in "tar" <TT>-</TT> and a name like
"foo.tar.gz" seems to just beg for <EM>two</EM> tools to process
it...&lt;grin&gt;
<li> The 'f' switch precedes the name of the file that "tar" should
process. In the second case, since the input to "tar" is piped from
the output of "gzip", '<TT>-</TT>' is given as the filename argument to 'f'
to indicate this. The 'z' switch is also eliminated, since the
decompression is done explicitly by "gzip".
</ol></blockquote>
<blockquote>
For LOTS of further info (prepare to spend an entire evening or
so), read the "tar" man page.
</blockquote>
<strong><BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Jim]
Ben, I think Erwin was asking about the difference between
'tar <TT>-xzf</TT>' and 'tar xzf' (with and without the conventional
"<TT>-</TT>" options prefix). Erwin has repeatedly referred to a
"preceder."
</BLOCKQUOTE>
<BLOCKQUOTE>
Ben's answer is correct so far as it goes. If the "<TT>-</TT>" is
used as a filename (in a place where tar's argument parser
requires a filename) it can refer to the "standard input" and/or
the "standard output" file descriptors.
</BLOCKQUOTE>
<BLOCKQUOTE>
However, this doesn't seem to be what the question was about.
</BLOCKQUOTE>
<BLOCKQUOTE>
Traditionally UNIX has used the "<TT>-</TT>" prefix to indicate that an
argument was a set of "switches" or "options." If you think of
an analogy between the UNIX command line and natural English
sentences the usual syntax of a UNIX command is:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote>
verb <TT>-adverbs</TT> objects
</BLOCKQuote></BLOCKQUOTE>
<BLOCKQUOTE>
... The "options" affect HOW the command operates. All other
arguments are taken as "nouns" (usually filenames) ON WHICH the
command operates.
</BLOCKQUOTE>
<BLOCKQUOTE>
However, this is only a convention. For example the dd command
doesn't normally take "options" with a dash prefix. Thus we
see commands like:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
dd if=/dev/zero of=/dev/null bs=12k
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
In the case of 'tar' the options were traditionally prefixed
with a dash. However the 'tar' command required that the options
appear prior to any other arguments. Thus the prefix is redundant
on the first argument. So:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
tar xvf ...
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
... is not ambiguous.
</BLOCKQUOTE>
<BLOCKQUOTE>
Actually it should be noted that many versions of the tar command
require that the first option be one of: c, x, t, r, or (GNU) d
<TT>---</TT> that it specifies the mode in which tar is operating: (c)reating,
e(x)tracting, listing a (t)able of contents, (r)e-doing (appending),
or (d)iffing (comparing contents of an archive to corresponding
files). Thus you might find that the command 'tar vxf foo.tar'
gives an error message with some versions of 'tar'.
</BLOCKQUOTE>
<BLOCKQUOTE>
Many versions of 'tar' still require the <TT>-</TT> prefix. However, the
GNU version of 'tar' (which is used by all mainstream
general-purpose Linux distributions) is reasonably permissive.
It will allow the dash but not require it (for the first
argument) and it will parse all of its command line to find the
command mode.
</BLOCKQUOTE>
<BLOCKQUOTE>
Thus we can use a command line like:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
tar vf foo.tar * -c
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
... under GNU tar. Even though the <TT>-c</TT> is at the end of
the command line. (Note that after the first argument any
other options must be prefixed with a "dash" to disambiguate
them from file names).
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course this raises the question: what if you want to use
a filename of "<TT>-</TT>" or one that starts with a "dash."
</BLOCKQUOTE>
<BLOCKQUOTE>
This is a classic UNIX FAQ. Usually it shows up on mailing lists
and in the comp.unix.questions and/or comp.unix.admin newsgroups
as: "How do a remove a file named <TT>-fr?</TT>"
</BLOCKQUOTE>
<BLOCKQUOTE>
The answer, of course, is to use the "<TT>./</TT>" prefix. Since any
filename with no explicit path is "in the current directory"
and the current directory is also known as "." then ANY
file in the current directory can also be referred to
with a preceding "<TT>./</TT>"
</BLOCKQUOTE>
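<BLOCKQUOTE>
For example, a quick demonstration you can try in a scratch
directory:
</BLOCKQUOTE>
<blockquote><pre> touch ./-fr     # create a file literally named "-fr"
 rm ./-fr        # the "./" prefix keeps rm from parsing it as options
</pre></blockquote>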
<BLOCKQUOTE>
Personally I recommend that users avoid starting any file
specification with a globbing wild card (* or ?). Any time
you want to use "*.c" you should probably use: "<TT>./*.c</TT>"
That will be safer since any filenames that do start with
a "<TT>-</TT>" character will not be misinterpreted as command
options (switches).
</BLOCKQUOTE>
<BLOCKQUOTE>
I've frequently seen people suggest "<TT>--</TT>" as an answer to
this classic FAQ. My objection to this approach is that
it won't always work. The GNU 'rm' command, and many of the
other GNU commands, and some other implementations of some
other commands will recognize the "<TT>--</TT>" option as a terminator
for all "options" (switches). However, some versions of
'rm' and other commands might not.
</BLOCKQUOTE>
<BLOCKQUOTE>
It is generally safer to use <TT>./</TT> to prefix files in the current
directory. That MUST work because it relies on the way that all
versions of UNIX have handled directory and file names throughout
UNIX's 30-year history.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that there are a number of commands which take a file
name of "<TT>-</TT>" as a reference to the "standard input" and/or
"standard output" file descriptors. It is also possible to
use <TT>/dev/fd/1</TT> (<TT>/proc/self/fd/1</TT>) or <TT>/dev/fd/0</TT> (<TT>/proc/self/fd/0</TT>)
to access these. (On most Linux systems <TT>/dev/fd</TT> is a symlink
to <TT>/proc/self/fd/;</TT> on many other UNIX systems <TT>/dev/fd</TT> is a
directory containing a set of special device nodes which act
in a way that is similar to <TT>/dev/tty</TT>).
</BLOCKQUOTE>
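<BLOCKQUOTE>
For instance, on a typical Linux system these two commands should
behave identically (the directory and archive names are just
placeholders):
</BLOCKQUOTE>
<blockquote><pre> tar -cf - ./mydir | gzip &gt; mydir.tar.gz
 tar -cf /dev/fd/1 ./mydir | gzip &gt; mydir.tar.gz
</pre></blockquote>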
<BLOCKQUOTE>
Getting back to tar, here's an example where we use dashes
for BOTH input and output file descriptors:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQUOTE><CODE>
find . -not -type d .... | tar -czTf - - | ssh somehost buffer -o /dev/nst0
</CODE></BLOCKQUOTE></BLOCKQUOTE>
<BLOCKQUOTE>
... Here we use a find command to find files (not directories)
and we feed that list of filenames into a tar process. The
T (capital T) option on GNU tar takes a filename with a list of
files in it. Here we use our first dash, so the list of files
is read from standard input. We also specify the <TT>-f</TT> option
which forces tar to write to a file as named by the corresponding
argument. In this case we have used "dash" <TT>-</TT> as the argument
for the <TT>-f</TT> option, so the tar file is written to standard output,
which we are piping into a command that feeds it into
Lee McLoughlin's 'buffer' filter, which does buffering and
sends a nice steady stream of data to our SCSI tape drive
(in non-rewinding mode).
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that most modern versions of GNU tar are compiled to
use stdout by default. It used to be that most versions of
tar would write to the default system tape drive if you
didn't specify any <TT>-f</TT> option. That seemed reasonable
(tar was originally written to be the "(t)ape (ar)chiver",
after all). However it caused problems, particularly on
occasions when novice users ran the command on systems with
no tape drive.
</BLOCKQUOTE>
<BLOCKQUOTE>
One of the "in" jokes among sysadmins is to ask how many
100Mb <TT>/dev/rmt0</TT> files you've removed. If you are interviewing
a sysadmin, ask them that question. If they "get it" you're
probably not dealing with a novice. I've seen a few full
root filesystems result from this sort of mistake.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note that the "<TT>-z</TT>" (and the newer <TT>-I</TT>) option requires that you
have the 'gzip' program (or bzip2, for <TT>-I</TT>) on your path. The
compression and decompression are done by a separate process
which is transparently started <TT>(fork()</TT>'d then<TT> exec()</TT>'d) by
GNU tar. These options are unique to GNU tar as far as I know.
</BLOCKQUOTE>
<BLOCKQUOTE>
So, if there is any chance that your command will run on a
non-Linux system (i.e. you are writing a script and require
some portability) then you should always use the <TT>-</TT> prefix
for all 'tar' options, start the tar options list with
c, t, x, or r and avoid the GNU enhancements (z, I, d, T etc).
</BLOCKQUOTE>
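<BLOCKQUOTE>
For example, a reasonably portable equivalent of the GNU-only
'tar czvf mydir.tar.gz ./mydir' would be (the names here are
placeholders):
</BLOCKQUOTE>
<blockquote><pre> tar -cvf - ./mydir | gzip &gt; mydir.tar.gz     # create
 gzip -dc mydir.tar.gz | tar -xvf -           # extract
</pre></blockquote>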
</strong>
<!-- sig -->
<!-- end 16 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/17"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 17 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Basic Fascist SysAdmin's Laundry List</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Edwin Ferguson on Tue, 04 Jul 2000
</strong></p>
<!-- ::
Basic Fascist SysAdmin's Laundry List
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Hello, I am hoping that you can help me, even with your busy
schedule. Can you tell me how I can stop my network users from
running chat room programs and instant messaging programs like
ICQ, Yahoo and MSN? I use a linux box as a firewall and proxy
server. I am running <A HREF="http://www.redhat.com/">Red Hat</A> 6.1. Is there a way to also prevent
them from running Real Player and other such applications that
take up plenty of bandwidth? Then finally, how can I actually see what
sites they are visiting and in turn block out porn sites etc.? Your
assistance is very much appreciated.
</STRONG></P>
<P><STRONG>
Edwin Ferguson
Technical Support
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
What you've presented here is the basic laundry list of the
"fascist sysadmin?" You're trying to enforce an acceptable
use policy based on the assumption that your users are trying to
waste your bandwidth and your company's time and other resources.
</BLOCKQUOTE>
<BLOCKQUOTE>
You could spend a considerable amount of time tightening your
packet filters, eliminating routing and IP masquerading in favor
of application layer proxies, monitoring your proxy logs,
installing and/or writing filtering software etc.
</BLOCKQUOTE>
<BLOCKQUOTE>
If your users are motivated to break the rules and violate these
policies then you'll probably find yourself in an escalating
"cybercombat" with some of the more "hacker" oriented among them.
</BLOCKQUOTE>
<BLOCKQUOTE>
Ultimately this is a recipe for disaster.
</BLOCKQUOTE>
<BLOCKQUOTE>
Now, back to your questions:
</BLOCKQUOTE>
<BLOCKQUOTE>
Instead of making a list of all the things that you "don't want
them doing" try turning it around to ask: "What services should
my users be able to access?"
</BLOCKQUOTE>
<BLOCKQUOTE>
If all they need is e-mail, then you can block all IP routing,
masquerading and proxying for all the client systems. You then run
a local mail server that is allowed to relay mail from the
Internet. That's that! If they need access to a selected dozen or
hundred external web sites, consider installing Squid
(<A HREF="http://www.squid-cache.org"
>http://www.squid-cache.org</A>) (an Internet caching daemon) and
SquidGuard (<A HREF="http://www.nbs.at/linux/Squidguard/installation.html"
>http://www.nbs.at/linux/Squidguard/installation.html</A>)
(a filtering module for Squid) and define your acceptable list
accordingly.
</BLOCKQUOTE>
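<BLOCKQUOTE>
A minimal squid.conf sketch of that "allow only a listed set of
sites" policy might look something like this (the subnet and the
domain names are placeholders; SquidGuard adds much finer-grained
control on top of this):
</BLOCKQUOTE>
<blockquote><pre> acl all src 0.0.0.0/0.0.0.0
 acl lan src 192.168.1.0/255.255.255.0
 acl allowed_sites dstdomain .supplier.example.com .partner.example.com
 http_access allow lan allowed_sites
 http_access deny all
</pre></blockquote>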
<BLOCKQUOTE>
If you remain vague about what your policies are then you'll
just end up with an ever-growing laundry list. It's obvious
that the list you gave here isn't comprehensive; you tossed in "and
block porn sites etc" as an afterthought. That approach will grow
to consume all of your time and creative energy. Be sure to
explain this to your management, assuming that they are pushing on
you to pursue this tack.
</BLOCKQUOTE>
<BLOCKQUOTE>
The bottom line is that there are some policies that are best
enforced by human means (specifically by the HR department).
Otherwise it may well be that your best recommendation will
read something like:
</BLOCKQUOTE>
<BLOCKQUOTE><BLOCKQuote>
"For each user we hire one full-time armed guard.
Each guard is assigned a user, stands over his or
her shoulder with weapon locked, loaded and aimed
at the victim's temple...."
</BLOCKQUOTE></BlockQuote>
<BLOCKQUOTE>
(Of course your management might try doing some MANAGEMENT. If the
users are busy with their work, and if the management has
reasonable productivity metrics and sane methods for monitoring
behaviour <TT>---</TT> then abuses of your precious bandwidth should be
relatively limited ... unless management is spending all ITS time
in IRC on the porno channels!).
</BLOCKQUOTE>
<!-- sig -->
<!-- end 17 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/18"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 18 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>More on TCP Wrappers and telnet Connection Delays</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Hari P Kolasani on Wed, 26 Jul 2000
</strong></p>
<!-- ::
More on TCP Wrappers and telnet Connection Delays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Hi,
</STRONG></P>
<P><STRONG>
I was looking at this issue:-
<A HREF="http://tech.buffalostate.edu/LDP/LDP/LG/issue38/tag/32.html"
>http://tech.buffalostate.edu/LDP/LDP/LG/issue38/tag/32.html</A>, and I
did not understand your solution correctly.
</STRONG></P>
<P><STRONG>
Can you please let me know what I need to do in order for telnet to work
without any pause?
</STRONG></P>
<P><STRONG>
I happen to see similar problem for FTP also.
</STRONG></P>
<P><STRONG>
Thanks
Hari Koalsani
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
If you look at some of the other back issues (search on the string
"tcpd") you can see that I've tried to explain the issue a few times and
at great length.
</BLOCKQUOTE>
<BLOCKQUOTE>
Basically there are three ways to approach this:
</BLOCKQUOTE>
<BLOCKQUOTE><ol>
<li> Abandon telnet; use ssh instead.
<li> Fix your reverse DNS zones. Make the PTR records consistent
with the A (address/host) records.
<li> Remove TCP Wrappers protection from the telnet service on
this host. Change the line in the <TT>/etc/inetd.conf</TT> file
that reads something like:
</ol></BLOCKQUOTE>
<blockquote><pre>telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
</pre></blockquote>
<BLOCKQUOTE>
to look more like:
</BLOCKQUOTE>
<blockquote><pre>telnet stream tcp nowait telnetd.telnetd /usr/sbin/in.telnetd in.telnetd
</pre></blockquote>
<BLOCKQUOTE>
Personally I suggest that you use both methods 1 and 2. Use
ssh, which USUALLY doesn't use tcpd or libwrap, the library
which implements tcpd access controls, AND fix your DNS zones
so that your hosts have proper PTR records.
</BLOCKQUOTE>
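<BLOCKQUOTE>
A quick way to see whether the PTR records really are the problem
(the address and host name here are placeholders):
</BLOCKQUOTE>
<blockquote><pre> host 192.168.1.10          # should print the client's name (PTR record)
 host client.example.com    # should resolve back to 192.168.1.10 (A record)
</pre></blockquote>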
<BLOCKQUOTE>
As I said, I've written many pages on this topic. I'm not going
to re-hash it again. Hopefully this summary will get you on the
right track. If you still can't understand what is going on and
how to do this you should consider calling a tech support service
(<A HREF="http://www.linuxcare.com/">Linuxcare</A> does offer single-incident tech support calls, though
they are a bit expensive; there may be other companies still doing
this), or hire a Linux consultant in your area (look in the Linux
Consultants HOWTO <A HREF="http://www.linuxdoc.org/HOWTO/Consultants-HOWTO.html"
>http://www.linuxdoc.org/HOWTO/Consultants-HOWTO.html</A>
for one list of them).
</BLOCKQUOTE>
<BLOCKQUOTE>
They can provide hand holding services. A good consultant can
and will show you how to handle these sorts of things for yourself,
and will ask some questions regarding your needs, and recommend
comprehensive solutions.
</BLOCKQUOTE>
<BLOCKQUOTE>
I would ask about why you are using telnet, who needs access to the
system, what level and form of access they need, etc. I can simply
answer questions, but a good consultant will ask more questions
than he or she answers <TT>---</TT> to make sure that you're getting the
right answers. Given my constraints here, I don't have the luxury
of doing in-depth requirements analysis for this column. (Also note
that I'm not currently available for consulting contracts, Starshine
Technical Services is currently in hiatus).
</BLOCKQUOTE>
<!-- sig -->
<!-- end 18 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/19"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 19 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Linux in a Windows NT Domain (under a PDC)</H3>
<p align="right">AnswerGang: Jim Dennis</p>
<p><strong>From Maenard Martinez on Tue, 25 Jul 2000
</strong></p>
<!-- ::
Linux in a Windows NT Domain (under a PDC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<P><STRONG>
Is it possible to connect the Linux <A HREF="http://www.redhat.com/">Red Hat</A> 6.0 (custom installed) to the
network wherein the PDC is a Windows NT 4.0 Server? Do I need additional
tools to connect it? Is it similar to UNIX X-windows?
</STRONG></P>
<P><STRONG>
Thanks,
Maenard
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Basically all interoperation between Linux (and other forms of UNIX)
and the Microsoft Windows family of network protocols (SMB used by
OS/2 LANManager and LANServer, WfW, Win '9x, NT, and W2K) is done
through the free Samba package.
</BLOCKQUOTE>
<BLOCKQUOTE>
Normally Samba allows a Linux or other UNIX system to act as an
SMB file and print server. There are various ways of getting Linux
to act as an SMB client (including the smbclient program, which is
basically like using "FTP" to an SMB server, and the smbfs kernel
option that allows one to mount SMB shares basically as though they
were NFS exports).
</BLOCKQUOTE>
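<BLOCKQUOTE>
For example (the server, share and user names are placeholders, and
the exact smbmount syntax has changed between Samba releases, so
check the smbmount man page on your system):
</BLOCKQUOTE>
<blockquote><pre> smbclient -L ntserver -U myuser          # list the shares the NT box offers
 smbclient //ntserver/public -U myuser    # FTP-like access to one share
 smbmount //ntserver/public /mnt/nt -o username=myuser   # mount via smbfs
</pre></blockquote>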
<BLOCKQUOTE>
Now, when it comes to having Linux act as a client in an MS Windows
"domain" (under a PDC, or primary domain controller) it takes a
bit of extra work. Recently Andrew Tridgell and his Samba team
have been working on a package called "winbind." Tridge demonstrated
it to me last time he was in San Francisco.
</BLOCKQUOTE>
<BLOCKQUOTE>
Basically you configure and run the winbind daemon, point it at
your PDC (and BDCs?) and it can do host and user lookups, (and
user authentication?) for you. I guess there is also a libnss
(name service switch) module included, so you
could edit your Linux system's <TT>/etc/nsswitch.conf</TT> to add this,
just as you might to force glibc linked programs to query NIS,
NIS+, LDAP or other directory services.
</BLOCKQUOTE>
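<BLOCKQUOTE>
As a sketch only (I haven't verified this against the current
winbind code; check the Samba documentation for the exact module
name and the databases it supports), the <TT>/etc/nsswitch.conf</TT>
entries would look something like:
</BLOCKQUOTE>
<blockquote><pre> passwd:  files winbind
 group:   files winbind
</pre></blockquote>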
<BLOCKQUOTE>
Now I should point out two things about what Tridge showed me.
First, it was under development at the time. It probably still
is. You'd want to look at the Samba web pages and read about the
current state of the code <TT>---</TT> but it may not be ready for use
on production systems. (I hear that some sites are already
using it in production, but basically that's because it's their
only choice). The other thing I should mention is that I got the
basic "salesman's" demo. That's not any fault of Tridge's (he wasn't
trying to "sell" it to me and he certainly can get into the technical
nitty gritty to any level that I could understand). It's just that
we didn't have much time to spend together. As usual we were both
pressed for time.
</BLOCKQUOTE>
<BLOCKQUOTE>
(I'm writing this on a train, which is why I can't look for
more details at the Samba site for you. So, point your
browser at: <A HREF="http://www.samba.org"
>http://www.samba.org</A> for more details.)
</BLOCKQUOTE>
<!-- sig -->
<!-- end 19 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/20"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 20 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>automating windows telnet to linux</H3>
<p align="right">AnswerGang: Ben Okopnik</p>
<p><strong>From Mike Miller on Sun, 23 Jul 2000
</strong></p>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Hi, I'm having a bit of trouble trying to figure out a way to automate
my dial up process. Say I'm sitting here at my hewlett packard and I want to
get on the internet.... I have to open a telnet window, logon as root on my
linuxbox, and type ppp-go. I already have a script for my isp login name and
password. Is there a program out there that would possibly open a telnet
window, type root and password, and enter ppp-go, sort of a dial on demand?
Also, is there a way to disconnect from my isp from my hewlett packard
without opening telnet and using ppp-off?
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Short answer: IP-Masq and "diald". The man page and the HOWTO were on the NY Times Bestseller list for 16 weeks straight. &lt;grin&gt;
</BLOCKQUOTE>
<BLOCKQUOTE>
OK, let's see <TT>-</TT> you don't say what your setup is; for that matter, we have no information whatsoever, other than reasonable guesses. From these clues, I gather the following: you have a Windows box (3.1? 95/98? NT?) connected to a Linux machine on a local network. The Linux box is the one with the connection (ISDN? Dial-up? Telepathic?) to your ISP. If this is correct, then the explanation to follow may be of use; my main reason for answering this is that it's a relatively common setup, and other people may find it useful as well.
</BLOCKQUOTE>
<BLOCKQUOTE>
The first thing that's needed is IP-Masq and SLIP compiled into your kernel; depending on your distro and version, it may already be done. IP-Masq is a NAT (Network Address Translation) program; what it does, in effect, is make your LAN look like a single IP address to the "outside world", i.e., no matter which machine you use to surf, telnet, etc., all requests will come from (and all replies will be sent to) your IP-Masq router, which will then route the traffic inside the LAN.
</BLOCKQUOTE>
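<BLOCKQUOTE>
The IP-Masquerading HOWTO is the authoritative reference, but the
core of it on a 2.2-series kernel (ipchains) is only a few lines;
here 192.168.1.0/24 is a hypothetical internal LAN:
</BLOCKQUOTE>
<blockquote><pre> echo 1 &gt; /proc/sys/net/ipv4/ip_forward          # enable forwarding
 ipchains -P forward DENY                        # deny forwarding by default
 ipchains -A forward -s 192.168.1.0/24 -j MASQ   # masquerade the LAN
</pre></blockquote>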
<BLOCKQUOTE>
"Diald" is a 'dial-on-demand' daemon (that requires SLIP) that will establish a connection to your ISP whenever you request an "outside" IP <TT>-</TT> i.e., if you fire up Netscape and ask for www.slashdot.com, "diald" will see that the address is non-local and establish a connection by dialing up. It will also, if you want it to, disconnect automatically after a period of inactivity.
</BLOCKQUOTE>
<BLOCKQUOTE>
What does this mean in practical terms? You never have to think about dialing from either of your machines again <TT>-</TT> just open your browser and start surfing, or telnet to anywhere, or ping at will. The first response will take 30 seconds or so (the period required for the dial-up connection), but that's it. As automatic as it gets.
</BLOCKQUOTE>
<BLOCKQUOTE>
The IP-Masquerading HOWTO (sorry, no URL <TT>-</TT> I'm writing this at sea, and don't have access to the Net) takes you step-by-step through the process of setting up IP-Masq, and the "diald" man page and documentation are very detailed, with lots of examples for various situations.
</BLOCKQUOTE>
<!-- end 20 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/21"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 21 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Telnet Clients for Windows and Linux</H3>
<p><strong>From Roberto Urban - IHQ on Fri, 14 Jul 2000
</strong></p>
<p align="right">AnswerGang: Heather Stern</p>
<P><STRONG>
Hello Answer Guy, or Gang perhaps,
</STRONG></P>
<P><STRONG>
I would like to ask your help on something that's been bugging me for some time.
</STRONG></P>
<P><STRONG>
I work in a company where Windows and Microsoft in general are the standard
for the desktop and I more or less manage to survive the daily routine
(Windows 98 only crashes a few times a week, which is a big improvement
over Windows 95). However, for my technical support activity I use two
Linux boxes, old 486s recycled because no Windows 9x would run on them,
at least not without reducing productivity to 10%. I'm very happy with
them and I just couldn't do without them. One runs <A HREF="http://www.slackware.org/">Slackware</A> 3.4, the
other <A HREF="http://www.debian.org/">Debian</A> 2.2.
</STRONG></P>
<P><STRONG>
The only problem I have is to telnet into them from my Windows machines
(as this is an internal network I don't need to use SSH and similar). That
is, any telnet client works fine but whenever I need to use applications
like Midnight Commander (wonderful tool) or even VI, some keys, namely
function and navigation keys, do not work. The test I normally do when I
try a new client is to run MC and try all the function keys. I have tried
the standard Windows client, Netmanage, and several others. The only client
that somehow achieves some success is the new CRT 3.1, from Van Dyke,
www.vandyke.com. It has a Linux terminal and keyboard type and with it I
can use F6 to F10 with no problems but F1 to F5 seem not to be working at all.
I have tried all the different combinations, like VT100 terminal and Linux
keyboard, and so on (for some obscure reason F5 does not work at all, with
any client).
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Teraterm is trainable. (As a side note it also has an ssh add-in available.)
You might also try whatever Hummingbird offers for telnet services, they have
been doing terminal emulators for a l...o...n...g time, and of all the possible
results you should be able to pick one on your side, and a matching TERM
variable on the Linux side.
</BLOCKQUOTE>
<BLOCKQUOTE>
But it's worth noting that there are big stacks of vt-something terminal types.
When I was playing with a Solaris box at one point (!) the "standard" Windows
telnet behaved best if I set the term variable to "vt100-nav" (no advanced
video, has some sort of effect on the way it handles the last screen column).
You probably want to try a bunch of the TERM variables anyway, because lame
little telnet announces itself as "ansi" but isn't close enough to that spec
either. For that matter, the telnet that comes with it also offers vt52
emulation, and you can try <EM>that</EM> ...
</BLOCKQUOTE>
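<BLOCKQUOTE>
A quick way to experiment once you're logged in, without changing
any configuration files (one terminal type per try):
</BLOCKQUOTE>
<blockquote><pre> TERM=vt100-nav mc     # run one mc session with that terminal type
 TERM=vt52 mc
 TERM=linux mc
</pre></blockquote>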
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
The Keyboard HowTo does not say anything on this issue, so I wonder whether
you had any information you may be willing to share.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
There's no reason why remapping the Linux console driver's idea
of keys would have any effect whatsoever on a remote connection (whether ssh
or telnet).
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
I know you have not used Windows for several years, but maybe you have come
across this problem in the past.
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Best of luck with it; if you need to keep looking for a configurable enough
client, try winfiles.com or Tucows.
</BLOCKQUOTE>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Anyway, thanks a lot for your help and should you need any additional
information, please feel free to contact me at any time.
</STRONG></P>
<P><STRONG>
Best regards.
</STRONG></P>
<P><STRONG>
ROBERTO URBAN
</STRONG></P>
<p><em>... he replied ...</em></p>
<P><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Heather,
</P>
<P>
Thanks for your quick response. I'll act on your information right away. Thanks again.
</P>
<P>
Best regards.
<br>ROBERTO URBAN
</P>
<!-- end 21 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/22"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 22 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Port 80 Telnet</H3>
<p align="right">AnswerGang: Srinivasa Shikaripura, Mike Orr</p>
<p><strong>From Nick Adams on Tue, 11 Jul 2000
</strong></p>
<P><STRONG><FONT COLOR="#000066"><EM>
Hello,
Quick question.
I want to change my port to accept telnet connections to port
80. This enables me to connect from behind
my proxy at work. How do I do this?
Thanks,
</EM></FONT></STRONG></P>
<P><STRONG><FONT COLOR="#000066"><EM>
Nick Adams
</EM></FONT></STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0">
[Sas]
hi,
</BLOCKQUOTE>
<BLOCKQUOTE>
If I understand your problem, "you want to telnet to your
personal machine which is behind a http proxy, from outside
the proxy network".
</BLOCKQUOTE>
<BLOCKQUOTE>
My quick answer would be it is not possible.
</BLOCKQUOTE>
<BLOCKQUOTE>
If you are behind an http proxy, then you can't connect
to your machine using telnet from outside.
Since the proxy talks only the HTTP protocol, your telnet client
from outside wouldn't be able to talk to your machine through
it.
</BLOCKQUOTE>
<BLOCKQUOTE>
Coming to the other part of the question, on how to make telnetd
accept telnet connections on port 80, you may need to modify
your '<TT>/etc/services</TT>' and '<TT>/etc/inetd.conf</TT>'.
</BLOCKQUOTE>
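<BLOCKQUOTE>
A sketch of those edits (the paths and user field follow the Debian
example used elsewhere in this issue; Red Hat's differ slightly, and
this obviously conflicts with running a web server on port 80 of the
same box):
</BLOCKQUOTE>
<blockquote><pre> # /etc/services -- give port 80 a service name that inetd can look up
 telnet80   80/tcp
 # /etc/inetd.conf -- then tell inetd to run telnetd on that service
 telnet80 stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
 # ...and have inetd re-read its configuration:
 killall -HUP inetd
</pre></blockquote>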
<BLOCKQUOTE>
Hope that helps.
</BLOCKQUOTE>
<BLOCKQUOTE>
Cheers,
-Sas
</BLOCKQUOTE>
<BLOCKQUOTE><STRONG><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Mike Orr]
There exist telnet-via-web applications, but they have to be installed
on the host (i.e., proxy) machine. I've never used them, so I don't
know anything more about them.
</BLOCKQuote></STRONG>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>[Sas] Thanks for the info.
</BLOCKQUOTE>
<BLOCKQUOTE>
I agree with you that with custom programs to handle a Telnet proxy
we could telnet over the proxy. But with a standard Apache/Netscape/IIS
proxy web server it is not possible. Also, the proxy admin needs to
install it and open the corresponding telnet port to the outside world, which
may be risky.
</BLOCKQUOTE>
<BLOCKQUOTE><DL><DT>
Here is one server which does telnet proxy:
<DD><A HREF="http://www.nabe-intl.co.jp/faqs/telfaqs.html#tel001"
>http://www.nabe-intl.co.jp/faqs/telfaqs.html#tel001</A>
</DL></BLOCKQUOTE>
<BLOCKQUOTE>
Just FYI.
</BLOCKQUOTE>
<BLOCKQUOTE>
-Sas
</BLOCKQUOTE>
<!-- end 22 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/23"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 23 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Connection Refused</H3>
<p><strong>From Yu-Kang Tsao on Wed, 26 Jul 2000
</strong></p>
<p align="right">AnswerGang: Jim Dennis</p>
<!-- ::<BLOCKQuote>
Connection Refused
~~~~~~~~~~~~~~~~~~
</BLOCKQuote>:: -->
<P><STRONG>
Hi James:
</STRONG></P>
<P><STRONG>
Now I am setting up a Linux Red Hat 6.2
server box in our NT LAN and I am trying to
connect via telnet to that box from one of the NT workstations in
our NT LAN. But it gives me a "connection refused"
message. Would you help me telnet to the Linux
box? Thank you very much.
</STRONG></P>
<P><STRONG>
Sincerely
<br>Nathan
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
You probably don't have DNS, specifically your reverse DNS
zones (PTR records) properly configured.
</BLOCKQUOTE>
<BLOCKQUOTE>
Linux includes a package called TCP Wrappers (tcpd) which
allows you to control which systems can connect to which
services. This control is based on the contents of two
configuration files (<TT>/etc/hosts.allow</TT> and <TT>/etc/hosts.deny</TT>)
which can contain host/domain name and IP address patterns
that "allow" or "deny" access to specific services.
</BLOCKQUOTE>
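<BLOCKQUOTE>
For reference, a minimal sketch of the usual "deny everything, then
allow your own LAN" pattern in those two files (the subnet is a
placeholder):
</BLOCKQUOTE>
<blockquote><pre> # /etc/hosts.deny
 ALL: ALL
 # /etc/hosts.allow
 in.telnetd: 192.168.1.
 sshd: 192.168.1.
</pre></blockquote>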
<BLOCKQUOTE>
You could disable this feature by editing your <TT>/etc/inetd.conf</TT>
file and changing a line that reads something like:
</BLOCKQUOTE>
<blockquote><pre>telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
</pre></blockquote>
<BLOCKQUOTE>
to something that looks more like:
</BLOCKQUOTE>
<blockquote><pre>telnet stream tcp nowait telnetd.telnetd /usr/sbin/in.telnetd /usr/sbin/in.telnetd
</pre></blockquote>
<BLOCKQUOTE>
(Note: THESE ARE EACH JUST ONE LINE! Any trailing backslash
is only there for e-mail/browser legibility.)
Some of the details might differ a bit. (This example
is from my <A HREF="http://www.debian.org/">Debian</A> laptop; <A HREF="http://www.redhat.com/">Red Hat</A> has slightly different
paths and permissions in some cases.)
</BLOCKQUOTE>
<BLOCKQUOTE>
You should search the back issues of LG for hosts.allow and
tcpd for other (more detailed) discussions of this issue. It is
an FAQ. Of course you can also read the man pages for
hosts_access(5), hosts_options(5) and tcpd(8) for more details
on how to use this package.
</BLOCKQUOTE>
<BLOCKQUOTE>
Note: You should also consider banning telnet from your networks.
I highly recommend that you search the LG back issues for
references to 'ssh' for discussions that relate to that. Basically,
the telnet protocol leaves your systems susceptible to sniffing
(and session hijacking, among other problems) and therefore greatly
increases your chances of getting cracked, and greatly increases the
amount of damage that an intruder or disgruntled local user can
do to your systems. 'ssh' and its alternatives are MUCH safer.
</BLOCKQUOTE>
<!-- sig -->
<p><em>... he replied ...</em></p>
<P><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Hi Jim:
</P>
<P>
I also want to thank you for advising me to ban telnet from my networks. I will ban telnet from my networks. Thanks a lot.
</P>
<P>
Sincerely
</P>
<P>
Nathan
</P>
<!-- end 23 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/24"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 24 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Loadlin trouble</H3>
<p><strong>From sarnold on Fri, 07 Jul 2000
on the L.U.S.T List </strong></p>
<p align="right">AnswerGang: Jim Dennis</p>
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
On 3 Jul 00, at 18:07, Number 4 wrote:
</STRONG></P>
<P><STRONG><FONT COLOR="#000066"><EM>
I've just installed Loadlin on my Win98 partition and can't get it
to boot my kernel (bzImage type). When I try to load the kernel,
with all of the proper parameters set, it gives an "invalid
compressed format" error message and the system is halted. I think
the problem is that when I copy the kernel onto the windoze
</EM></FONT></STRONG></P>
<P><STRONG><FONT COLOR="#000066"><EM>
partition, it is automatically converted from Linux binary format
(two-digit hex numbers in brackets) to DOS binary format (many weird
ASCII characters). Does anyone know how to remedy this? Thanks.
</EM></FONT></STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
I have no idea what you think you're saying in this last
statement. Binary is binary. If a file is copied as a stream of
binary octets (bytes) then it will be the same file on any
platform that supports 8-bit bytes. There is no "Linux
binary" vs. "DOS binary" (in terms of <EM>file</EM> formats). Of
course the executable binaries have much different formats
(in fact Linux supports a.out, ELF and some others, while
MS-DOS supports <TT>.COM</TT> and <TT>.EXE</TT>).
</BLOCKQUOTE>
<BLOCKQUOTE>
However, the Linux kernel is not "executed" by DOS. It is loaded by
<TT>LOADLIN.EXE</TT> (which is obviously an MS-DOS <TT>.EXE</TT> binary executable
file). However, the kernel image is generally a compressed kernel
image in ELF format with a small executable stub/header. It is
formatted so that it could be dropped onto a floppy and directly
booted (so the first sector of a Linux kernel image is basically
just like a floppy boot sector). Other loaders (like LILO,
SYSLINUX and <TT>LOADLIN.EXE</TT>) copy the kernel into memory and jump into
a different entry point (past the "boot record" and onto the part
that allocates extended memory and decompresses the kernel into it).
</BLOCKQUOTE>
<BLOCKQUOTE>
I hope you can see that your characterization of "hex digits"
vs. "weird ASCII characters" is hopeless confused. Those are both
different ways of viewing or representing the same binary data.
The fact that they appeared to be different is probably an artifact
of the tool you were using to view them. To actually tell if the
file was modified as it was copied, use the cmp (or at least
the diff) command and check its return value.
</BLOCKQUOTE>
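<BLOCKQUOTE>
For example (hypothetical paths: the freshly built kernel versus
the copy on the mounted DOS partition):
</BLOCKQUOTE>
<blockquote><pre> cmp /usr/src/linux/arch/i386/boot/bzImage /dosc/loadlin/bzimage \
     &amp;&amp; echo "the copies are identical"
</pre></blockquote>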
<BLOCKQUOTE>
If the files are different then look to see if you have your
FAT/MSDOS filesystem mounted with the "convert" options enabled.
This was a feature in earlier Linux kernels that applied to some of
the FAT, VFAT, and UMSDOS filesystems. I think it has been dropped
from more recent kernels (or is at least deprecated). It was
intended to automatically convert TEXT files as they were copied to
or from MS-DOS compatible filesystems. However, it is known to
have caused many problems and the consensus in the Linux kernel
community seems to be that kernel filesystem drivers should NOT
modify the contents of files as they are stored or retrieved. (I'm
inclined to agree <TT>---</TT> let the applications be modified to handle
the format differences gracefully).
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course the TEXT file formats differ among UNIX/Linux, MS-DOS,
and MacOS systems. It all depends on the line termination
conventions. Linux/UNIX use just "newlines" (LF, linefeed, a
single character: ASCII hex 0x0A, '\n' in C strings) while MacOS
uses just the carriage return (CR, ASCII hex 0x0D, '\r' in C) and
MS-DOS uses the highly irritating CRLF (2 characters: carriage
return, line feed, ASCII hex 0D0A sequences, or "\r\n" in C). I've
seen some MS-DOS editors freak out when presented with text files
that had LFCR line boundaries (reversed CR and LF sequences).
However, most of them could handle that and some/most could handle
UNIX and Mac style text files.
</BLOCKQUOTE>
<BLOCKQUOTE>
(Of course most GNU and free text editors and tools can handle
any of these formats and there are many little scripts and
tools to convert a text file into any of the appropriate formats.
Some day, someone ought to write a really top notch "text file"
library that automatically detects the line feed convention
on open and defaults to preserving that throughout the rest
of the operation <TT>---</TT> with options to coerce a conversion as
necessary).
</BLOCKQUOTE>
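<BLOCKQUOTE>
For example, stripping the MS-DOS carriage returns out of a text
file is a one-liner (the filenames are placeholders):
</BLOCKQUOTE>
<blockquote><pre> tr -d '\r' &lt; dosfile.txt &gt; unixfile.txt
</pre></blockquote>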
<BLOCKQUOTE>
(The reason I say the MS-DOS form is so irritating is that it
messes with the sizes of the file. Having two-character
line boundaries then breaks quite a few other assumptions about
the text of the file.)
<P><STRONG><IMG SRC="../gx/dennis/qbub.gif" ALT="(?)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
I don't use loadlin, but I think the 2.2.x kernels need to be
bzipped. 2.0.x kernels use the older compression; you could try an
older kernel, or maybe boot your kernel off a floppy disk. Is
there some reason why you can't install to an e2fs partition and
use lilo?
</STRONG></P>
<P><STRONG>
Sorry, that's about all I can think of (on the last morning of a
holiday weekend
<IMG SRC="../gx/dennis/smily.gif" ALT=":)"
height="24" width="20" align="middle">
</STRONG></P>
<P><STRONG>
Steve
</STRONG></P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
Older versions of any loader (LILO, <TT>LOADLIN.EXE</TT>, SYSLINUX)
may not be able to handle bzipped kernels. However recent
versions (as in the last two or three YEARS) should be able
to cope with them.
</BLOCKQUOTE>
<BLOCKQUOTE>
I suspect that it is more likely that he's corrupting the
kernel image as he's copying it.
</BLOCKQUOTE>
<!-- sig -->
<!-- end 24 -->
<!-- .~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~.~~. -->
<A NAME="tag/25"><HR WIDTH="75%" ALIGN="center"></A>
<!-- begin 25 -->
<H3 align="left"><img src="../gx/dennis/qbubble.gif"
height="50" width="60" alt="(?) " border="0"
>Linux vs. MS Exchange for Mail Server</H3>
<p><strong>From sas on Sun, 16 Jul 2000
</strong></p>
<p align="right">AnswerGang: Jim Dennis</p>
<!-- ::
Linux vs. MS Exchange for Mail Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:: -->
<p>Christine Rancapero's question was published in the Mailbag:</p>
<P><STRONG><FONT COLOR="#000066"><EM>
Do you have an issue regarding the advantages and disadvantages
of migrating linux mail server to an MS
exchange? Your help is gratefully appreciated....thank you very
much
<IMG SRC="../gx/dennis/smily.gif" ALT="=)"
height="24" width="20" align="middle">
</EM></FONT></STRONG></P>
<p>One of our more active readers this month replied -
</P>
<P>
Advantages of moving from Linux mail server to MS Exchange:
</P>
<ol>
<li> Improves MS revenue, thereby improving its financial status
(very crucial after the DOJ battle)
<li> Whenever there is a "Melissa" or "I LOVE YOU" virus,
     MS Exchange gets clogged for 3 days and you could enjoy
     vacations, long weekends, frequently. (Anyway there will be
     MS to point the finger at!)
<li> You could have the pleasure of raising invoices for
Pentium IV (V, VI, whichever is latest), 1 GB main memory,
Windows 2000 systems and I tell you it is a good
administrative experience...
<IMG SRC="../gx/dennis/smily.gif" ALT=":-)"
height="24" width="20" align="middle">))
</ol>
<P>
Disadvantages of not moving to MS Exchange:
</P>
<ol>
<li> I have been on Netscape, IMAP, *nix mail for 2 years in my
company and have accessed it from all sorts of environments
and locations (dial-up, international) and had no problem
with it. Bad luck, I couldn't tell my manager why I couldn't
complete my assignments (if only it were MS Exchange!)
</ol>
<P>
[Disclaimer: No hard feelings please. It is not flame bait.
</P>
<P>
Just my experience with *nix mail
and my colleagues' experience with MS Exchange]
</P>
<P>
cheers
-Sas
</P>
<BLOCKQUOTE><IMG SRC="../gx/dennis/bbub.gif" ALT="(!)"
HEIGHT="28" WIDTH="50" BORDER="0"
>
All humor aside, this would not be so much of a "special
issue" (of LG) as a white paper. Here are some thoughts:
</blockquote>
<h4 align="center">
Linux (and free software) vs. MS NT + MS Exchange for E-mail
</h4>
<blockquote>
The first observation to make is that we are comparing
apples to fish heads. Linux is an operating system kernel.
There are many packages that can supply standard mail services
under Linux. Basically the UNIX/Linux e-mail model involves
MTA (mail transport agents), MSA (mail storage/access agents)
and MUAs (mail user agents). There are also a variety of
utilities that don't really quite fit in any of these categories.
</blockquote>
<BLOCKQUOTE><EM><P>[
Our LG Editor, Mike, thought Jim's next part describing an overview of
Linux mail services was so good, he split it into a separate article:
<A HREF="dennis.html"
>http://www.linuxgazette.com/issue56/dennis.html</A>
</P>
<P>
Summarized: there are several MTAs, a number of ways to apply administrative
policy -- more complicated policy takes much more planning. You can also
get the LDA (local delivery agent) involved, and apply rules or filters at
the email client level. This certainly includes responders such as the
common 'vacation/out of office' note. With shell scripts invoking small
utilities, certain kinds of recovery are easier on the sysadmin; small
utilities for the user (like 'biff' to spot new mail) exist too. Goodness
knows what mail client the user may have - he has so many choices.
</p>
<P>-- Heather. ]</P></EM></BLOCKQUOTE>
<BLOCKQUOTE>
This is all in contrast to Microsoft's approach. With Microsoft
you are almost forced to use the MS Outlook client, and the MS
Exchange server. They referred to that as "integrated." They
also basically require that you use their "Back Office"
and "SMS" products for some management features, and their
WINS (or the newer ActiveDirectory?) for directory services.
</BLOCKQUOTE>
<BLOCKQUOTE>
One of the costs of all this integration is CONTROL. You must
set up your network, your routers, and your servers in one of the
approved Microsoft ways in order for any of it to work. You can't
have one "farm" (cluster) of servers (say outside your firewall,
possibly with some geographic dispersion) receiving and relaying
mail with another cluster of servers (say inside your firewall, at
specific regional and departmental offices). You can't make your
e-mail address names follow one convention (abstraction) such as
"<A HREF="mailto:user_domain@department.yourdomain.com"
>user_domain@department.yourdomain.com</A>" while the actual underlying
routing and storage architecture follows a different model
(such as <A HREF="mailto:user@region.yourdomain.com"
>user@region.yourdomain.com</A>).
</BLOCKQUOTE>
<BLOCKQUOTE>
The UNIX/Linux model is scalable. That's proven by the fact that
it's used by well over 80% of the Internet (obviously the largest
interconnecting set of computer e-mail networks in history).
</BLOCKQUOTE>
<BLOCKQUOTE>
As usual if the Microsoft package doesn't do what you want you'll
have to do without. There is very little option for administrators
and users to customize the operations. Even if you do try to
customize your Microsoft installation their internal complexity,
tight coupling (integration) and overall fragility result in steep
learning curves, and high risks (the packages you add in are more
likely to conflict with other, seemingly unrelated, parts of the
system or with other subsystems).
</BLOCKQUOTE>
<BLOCKQUOTE>
Obviously with the Linux tools there are no arbitrary limits placed
on number of users, number of accounts, number of sent or received
messages, sizes of messages, etc. While some specific tools may
bump into limits, more often the default configuration, or the wise
administrator, will impose constraints based on their own capacity
planning needs and their own policies. (Like when I modified my
sendmail.cf to set limits after the incident I described above).
</BLOCKQUOTE>
<BLOCKQUOTE>
With the Microsoft approach you're required to pay for every user;
and those costs will probably become ANNUAL expenses (as Microsoft
foists their ASP software "subscription" model on their customers).
</BLOCKQUOTE>
<BLOCKQUOTE>
In addition, of course, the Microsoft approach emphasizes the
convenience for their programmers and the needs of their marketing
people over the security of your users. That's why we are
regularly treated to the perennial debacle of the e-mail macro
virus epidemics (Melissa, ILOVEU, LoveBug, etc). These macro
viruses are basically caused by the very same programming flaws
that gave us the WinWord and Excel Macro viruses (and they are
written in basically the same language). Similar bugs seem to have
been found in Explorer.
</BLOCKQUOTE>
<BLOCKQUOTE>
Microsoft thrives on shallow whizzy "features" and one of the
easiest ways to implement those is through poorly designed obscure
"dynamic content" hooks which treat "special" data as programs.
Those are precisely the kinds of "features" that are most
attractive to cybervandals and most easily exploited. Once they've
been put into a system and used by other components on that system
then they can't be removed or disabled (all in the name of
backwards compatibility).
</BLOCKQUOTE>
<BLOCKQUOTE>
Of course that hallowed "backward compatibility" will only be
honored to the degree that suits Microsoft's whims. They will
deliberately or neglectfully break their APIs in order to
force users and ISVs (independent software vendors) to upgrade
existing products as a requirement to upgrading other (seemingly
unrelated) subsystems.
</BLOCKQUOTE>
<BLOCKQUOTE>
Thus an upgrade to the latest Powerpoint may entail an upgrade
to the rest of MS Office, which may require upgrades to the OS
and thus to the mail client (Outlook or Express) and thence
possibly right up to the mail server (Exchange) and the server's
OS (NT to W2K). Microsoft generally benefits from such domino
effects; though they do have to exhibit some restraint. That's
particularly true since they have enough trouble getting any
single product to ship on schedule and they can't try to sync
them all for really massive coups.
</BLOCKQUOTE>
<BLOCKQUOTE>
This is another cost of integration. The "integrated" systems
become rigid and hard to maintain, harder to upgrade or enhance,
impossible to troubleshoot or repair.
</BLOCKQUOTE>
<BLOCKQUOTE>
Open systems are characterized by modularity <TT>---</TT> separate
components interacting through common APIs (sometimes via shared
libraries), and communicating via published protocols. Open
systems generally have multiple combinations of clients and
servers. Of course that has its cost. Some of these components
will fail to implement their protocols in interoperable ways
some of the time. Sometimes this will require revisions to the
protocols, more often to the components. Some combinations of
components will not work, or will be a bad idea for other reasons.
Often the same functions will be implemented at multiple different
points (duplication of feature sets).
</BLOCKQUOTE>
<BLOCKQUOTE>
Overall these systems will be more robust, more resilient, and more
flexible. It will be possible for an organization to tailor their
system to meet their needs.
</BLOCKQUOTE>
<BLOCKQUOTE>
Such systems do require skilled, professional administrators (or at
least consultants for the initial deployments, and for follow-up
support). However, the "easy to use" MS Windows based systems,
and even the famed "intuitive" MacOS networks also require trained
professionals for most non-trivial networks.
</BLOCKQUOTE>
<BLOCKQUOTE>
Ultimately you should consider the availability of expertise
in your IT decisions. Hire people with broad experience and a
willingness to learn. Then ask them what systems they prefer
to manage.
</BLOCKQUOTE>
<!-- sig -->
<!-- end 25 -->
<!--startcut ======================================================= -->
<P> <hr> </p>
<H5 align="center"><a href="http://www.linuxgazette.com/copying.html"
>Copyright &copy;</a> 2000, the respective authors
<H5 align="center">Collection <a href="http://www.linuxgazette.com/copying.html"
>Copyright &copy;</a> 2000, <i>Linux Gazette</i>
<BR>Published in <I>Linux Gazette</I> Issue 56 August 2000</H5>
<H6 ALIGN="center">HTML transformation by
<A HREF="mailto:star@tuxtops.com">Heather Stern</a> of
Tuxtops, Inc.,
<A HREF="http://www.tuxtops.com/">http://www.tuxtops.com/</A>
</H6>
<!-- ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: -->
<CENTER>
<!-- *** BEGIN navbar *** -->
<IMG ALT="" SRC="../gx/navbar/left.jpg" WIDTH="14" HEIGHT="45" BORDER="0" ALIGN="bottom"><A HREF="lg_mail56.html"><IMG ALT="[ Prev ]" SRC="../gx/navbar/prev.jpg" WIDTH="16" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="index.html"><IMG ALT="[ Table of Contents ]" SRC="../gx/navbar/toc.jpg" WIDTH="220" HEIGHT="45" BORDER="0" ALIGN="bottom" ></A><A HREF="../index.html"><IMG ALT="[ Front Page ]" SRC="../gx/navbar/frontpage.jpg" WIDTH="137" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="../faq/index.html"><IMG ALT="[ FAQ ]" SRC="./../gx/navbar/faq.jpg"WIDTH="62" HEIGHT="45" BORDER="0" ALIGN="bottom"></A><A HREF="lg_tips56.html"><IMG ALT="[ Next ]" SRC="../gx/navbar/next.jpg" WIDTH="15" HEIGHT="45" BORDER="0" ALIGN="bottom" ></A><IMG ALT="" SRC="../gx/navbar/right.jpg" WIDTH="15" HEIGHT="45" ALIGN="bottom">
<!-- *** END navbar *** -->
</CENTER>
</BODY></HTML>
<!--endcut ========================================================= -->