<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="SGML-Tools 1.0.9">
<TITLE> Linux DPT Hardware RAID HOWTO : Usage</TITLE>
<LINK HREF="DPT-Hardware-RAID-HOWTO-6.html" REL=next>
<LINK HREF="DPT-Hardware-RAID-HOWTO-4.html" REL=previous>
<LINK HREF="DPT-Hardware-RAID-HOWTO.html#toc5" REL=contents>
</HEAD>
<BODY>
<A HREF="DPT-Hardware-RAID-HOWTO-6.html">Next</A>
<A HREF="DPT-Hardware-RAID-HOWTO-4.html">Previous</A>
<A HREF="DPT-Hardware-RAID-HOWTO.html#toc5">Contents</A>
<HR>
<H2><A NAME="s5">5. Usage</A></H2>
<H2><A NAME="ss5.1">5.1 fdisk, mke2fs, mount, etc.</A>
</H2>
<P> You can now treat the RAID as a regular SCSI disk. The first
thing you'll need to do is partition it using fdisk. You'll then
need to create an ext2 filesystem on each partition, which can be
done by running:
<P>
<BLOCKQUOTE><CODE>
<PRE>
% mkfs -t ext2 /dev/sdxN
</PRE>
</CODE></BLOCKQUOTE>
<P>where /dev/sdxN is the name of the SCSI partition. Once you do this,
you'll be able to mount the partitions and use them as you would any
other disk (including adding entries in /etc/fstab).
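<P>Putting the whole sequence together, a minimal sketch would look
something like the following (the device /dev/sdc, the partition
/dev/sdc1 and the mount point /raid are only examples; substitute
whatever names your system actually assigns):
<P>
<BLOCKQUOTE><CODE>
<PRE>
% fdisk /dev/sdc                  # create /dev/sdc1 interactively
% mkfs -t ext2 /dev/sdc1          # build the ext2 filesystem
% mkdir /raid
% mount -t ext2 /dev/sdc1 /raid   # mount it by hand
</PRE>
</CODE></BLOCKQUOTE>
<P>and a matching /etc/fstab entry, so the partition is mounted at
boot, would be along the lines of:
<P>
<BLOCKQUOTE><CODE>
<PRE>
/dev/sdc1   /raid   ext2   defaults   1   2
</PRE>
</CODE></BLOCKQUOTE>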
<H2><A NAME="ss5.2">5.2 Hot swapping </A>
</H2>
<P> We first tried to test hot swapping by removing a drive from the
DPT-supplied enclosure/tower (available at additional cost) and
putting it back in. Before we could carry this out to completion, one
of the disks failed (as I write this, the beeping is driving me
crazy). Even so, all the data on the RAID drive remained accessible.
<P> Instead of replacing the drive, we simply went through the motions
of hot swapping and put the same drive back in. The drive rebuilt
itself and everything turned out okay. Both while the disk was failed
and during the rebuilding process, all the data remained accessible,
though it should be noted that if a second disk had failed during that
window, we'd have been in serious trouble.
<H2><A NAME="ss5.3">5.3 Performance </A>
</H2>
<P> Here's the output of the Bonnie program on a 2144 UW with a
3 x 9 GB RAID 5 setup (roughly 17 GB usable), using the EATA DMA
driver. The RAID is attached to a dual-processor Pentium Pro machine
running Linux 2.0.33. For comparison, the Bonnie results for the IDE
drive on the same machine are also given.
<P>
<BLOCKQUOTE><CODE>
<PRE>
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
RAID 100 9210 96.8 1613 5.9 717 5.8 3797 36.1 90931 96.8 4648.2 159.2
IDE 100 3277 32.0 6325 23.5 2627 18.3 4818 44.8 59697 88.0 575.9 16.3
</PRE>
</CODE></BLOCKQUOTE>
<P> Some people have disputed the above timings, and rightly so: the
size of the file used may have allowed it to be cached, resulting in
an unusually good performance report. (I've been unable to re-test
this myself, since our machines are now completely loaded.) Here are
some timings with a 3344 UW controller:
<P>
<BLOCKQUOTE><CODE>
<PRE>
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
1000 1714 17.2 1689 6.0 1200 5.7 5263 40.2 7023 12.1 51.3 2.2
</PRE>
</CODE></BLOCKQUOTE>
<P>And here are some timings on a SCSI-to-SCSI RAID system:
<P>
<BLOCKQUOTE><CODE>
<PRE>
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
64 7465 100.0 70287 98.7 37012 97.7 8074 99.2 *****100.3 ***** 196.6
128 7289 99.3 67595 98.5 35294 98.6 7792 97.6 *****100.3 ***** 195.8
256 7222 98.8 44844 69.6 16096 51.8 5787 72.7 ***** 99.8 ***** 85.2
512 7138 98.4 13871 23.2 7888 29.3 7183 89.3 16488 27.2 1585. 11.5
1024 6908 95.8 12270 21.5 7161 25.4 7373 90.4 16527 28.2 123.8 1.8
2047 6081 84.1 12664 22.6 7191 25.6 7289 89.5 16573 28.5 75.0 1.2
***** results exceed column width (> 100 MB/sec, > 10000 seeks/sec)
host: Dual PII 400 MHz, 2 x U2W, 512 MB RAM, no internal disks
RAID: IFT 3102 UA 128 MB Cache, RAID-5, 6 x 9 GB
OS: SuSE Linux 6.0 with Kernel 2.2.3
</PRE>
</CODE></BLOCKQUOTE>
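<P>If you'd like to run the same benchmark on your own array, Bonnie
is invoked roughly as follows (the scratch directory /raid and the
1000 MB file size are only examples; as noted above, pick a file size
well beyond your RAM size so that caching doesn't inflate the
numbers):
<P>
<BLOCKQUOTE><CODE>
<PRE>
% Bonnie -d /raid -s 1000
</PRE>
</CODE></BLOCKQUOTE>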
<HR>
<A HREF="DPT-Hardware-RAID-HOWTO-6.html">Next</A>
<A HREF="DPT-Hardware-RAID-HOWTO-4.html">Previous</A>
<A HREF="DPT-Hardware-RAID-HOWTO.html#toc5">Contents</A>
</BODY>
</HTML>