<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<TITLE>Linux Benchmarking - Concepts</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#0000FF" VLINK="#0020F0"
ALINK="#FF0000">
<H4>
&quot;Linux Gazette...<I>making Linux just a little more fun!</I>&quot;
</H4>
<P> <HR> <P>
<!--===================================================================-->
<h1 align="center">Linux Benchmarking - Concepts</H1>
<H4 align="center">by Andr&eacute; D. Balsa <A
HREF="mailto:andrewbalsa@usa.net">andrewbalsa@usa.net</A></h4>
<h4 align="center">
With corrections and contributions by Uwe F. Mayer
<A HREF="mailto:mayer@math.vanderbilt.edu">mayer@math.vanderbilt.edu</A>
and David C. Niemi <A HREF="mailto:bench@wauug.erols.com">bench@wauug.erols.com</A>
</h4>
<P>
<HR><I>This is the first article in a series of 4 articles on Linux Benchmarking,
to be published by the Linux Gazette. This article deals with the fundamental
concepts in computer benchmarking, as they apply to the Linux OS. An example
of a classic benchmark, &quot;Whetstone&quot;, is analyzed in more detail.</I>
<HR>
<H2><A NAME="toc1"></A>1. <A HREF="bench.html#ss1">Basic concepts and
definitions</A></H2>
<UL>
<LI><A HREF="bench.html#ss1.1">1.1 Benchmark</A> </LI>
<LI><A HREF="bench.html#ss1.2">1.2 Benchmark results</A> </LI>
<LI><A HREF="bench.html#ss1.3">1.3 Index figures</A> </LI>
<LI><A HREF="bench.html#ss1.4">1.4 Performance metrics</A> </LI>
<LI><A HREF="bench.html#ss1.5">1.5 Elapsed wall-clock time vs. CPU
time</A> </LI>
<LI><A HREF="bench.html#ss1.6">1.6 Resolution and precision</A> </LI>
<LI><A HREF="bench.html#ss1.7">1.7 Synthetic benchmark</A> </LI>
<LI><A HREF="bench.html#ss1.8">1.8 Application benchmark</A> </LI>
<LI><A HREF="bench.html#ss1.9">1.9 Relevance</A> </LI>
</UL>
<H2><A NAME="toc2"></A>2. <A HREF="bench.html">A variety of benchmarks</A></H2>
<H2><A NAME="toc3"></A>3. <A HREF="bench.html">FPU tests: Whetstone
and Sons, Ltd.</A></H2>
<UL>
<LI><A HREF="bench.html#ss3.1">3.1 Whetstone history and general
features</A> </LI>
<LI><A HREF="bench.html#ss3.2">3.2 Getting the source and compiling
it</A> </LI>
<LI><A HREF="bench.html#ss3.3">3.3 Running Whetstone and gathering
results</A> </LI>
<LI><A HREF="bench.html#ss3.4">3.4 Examining the source code, the
object code and interpreting the results</A> </LI>
</UL>
<H2><A NAME="toc4"></A>4. <A HREF="bench.html">References</A></H2>
<P><hr><p>
<H2><A NAME="s1">1. Basic concepts and definitions</A></H2>
<H2><A NAME="ss1.1">1.1 Benchmark</A></H2>
<P>A benchmark is a documented procedure that will <B>measure</B> the <B>time</B> needed by a computer system to execute a well-defined computing task. It is assumed that this time is related to the <B>performance</B> of the computer system and that somehow the same procedure can be applied to other systems, so that <B>comparisons</B> can be made between different hardware/software configurations.</P>
<H2><A NAME="ss1.2">1.2 Benchmark results</A></H2>
<P>From the definition of a benchmark, one can easily deduce that there are two basic procedures for benchmarking:
<OL>
<LI>Measuring the time it takes for the system being examined to loop through a <B>fixed number of iterations</B> of a specific piece of code.</LI>
<LI>Measuring the number of iterations of a specific piece of code executed by the system under examination in a <B>fixed amount of time</B>.</LI>
</OL>
<P>If a single iteration of our test code takes a long time to execute, procedure 1 will be preferred. On the other hand, if the system being tested is able to execute thousands of iterations of our test code per second, procedure 2 should be chosen.</P>
<P>Both procedures 1 and 2 will yield final results in the form "seconds/iteration" or "iterations/second" (these two forms are interchangeable). One could imagine other algorithms, e.g. self-modifying code or measuring the time needed to reach a steady state of some sort, but this would increase the complexity of the code and produce results that would probably be next to impossible to analyze and compare.</P>
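<P>As a rough illustration (this is only a sketch, not code taken from any actual benchmark; the <TT>test_code()</TT> routine, the iteration count and the time budget are hypothetical), the two procedures could be coded in C along these lines:</P>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;time.h&gt;

/* Hypothetical piece of code whose execution speed we want to measure. */
static void test_code(void)
{
    volatile double x = 1.000001;
    x = x * x;                      /* volatile keeps the optimizer from removing the work */
}

int main(void)
{
    long iterations = 10000000L;    /* procedure 1: fixed number of iterations */
    double budget = 5.0;            /* procedure 2: fixed amount of time (s)   */
    long i, count;
    time_t start;
    double seconds;

    /* Procedure 1: time a fixed number of iterations. */
    start = time(NULL);
    for (i = 0; i &lt; iterations; i++)
        test_code();
    seconds = difftime(time(NULL), start);
    printf("procedure 1: %.0f seconds for %ld iterations\n", seconds, iterations);

    /* Procedure 2: count iterations completed in a fixed amount of time. */
    count = 0;
    start = time(NULL);
    while (difftime(time(NULL), start) &lt; budget) {
        test_code();
        count++;
    }
    printf("procedure 2: %ld iterations in about %.0f seconds\n", count, budget);
    return 0;
}
</PRE>
<P>Note that <TT>time()</TT> only has a resolution of one second; resolution and
finer-grained timers are discussed in section 1.6 below.</P>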
<H2><A NAME="ss1.3">1.3 Index figures</A></H2>
<P>Sometimes, figures obtained from standard benchmarks on a system being tested are compared with the results obtained on a <B>reference</B> machine. The reference machine's results are called the <B>baseline</B> results. If we divide the results of the system under examination by the baseline results, we obtain a performance <B>index</B>. Obviously, the performance index for the reference machine is 1.0. An index has no units; it is just a relative measurement.</P>
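<P>For example (purely hypothetical figures): if the reference machine scores 20 MWIPS on a given test and the machine under examination scores 30 MWIPS on the same test, the latter gets a performance index of 30/20 = 1.5.</P>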
<H2><A NAME="ss1.4">1.4 Performance metrics</A></H2>
<P>The final result of any benchmarking procedure is always a set of numerical results which we can call <B>speed</B> or <B>performance</B> (for that particular aspect of our system effectively tested by the piece of code).</P>
<P>Under certain conditions we can combine results from similar tests or various indices into a single figure, and the term <B>metric</B> will be used to describe the "units" of performance for this benchmarking mix.</P>
<H2><A NAME="ss1.5">1.5 Elapsed wall-clock time vs. CPU time</A></H2>
<P>Time measurements for benchmarking purposes are usually taken by defining a starting time and an ending time, the difference between the two being the elapsed wall-clock time. <B>Wall-clock</B> means we are not considering just CPU time, but the "real" time usually provided by an internal asynchronous real-time clock source in the computer or an external clock source (your wrist-watch for example). Some tests, however, make use of CPU time: the time effectively spent by the CPU of the system being tested in running the specific benchmark, and not other OS routines.</P>
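<P>A minimal sketch of the difference (again, not taken from any particular benchmark): <TT>gettimeofday()</TT> returns wall-clock time, while <TT>clock()</TT> returns the CPU time consumed by the process itself.</P>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;time.h&gt;          /* clock(), CLOCKS_PER_SEC */
#include &lt;sys/time.h&gt;      /* gettimeofday()          */

int main(void)
{
    struct timeval w0, w1;
    clock_t c0, c1;
    volatile double x = 1.0;
    long i;

    gettimeofday(&amp;w0, NULL);            /* wall-clock start */
    c0 = clock();                       /* CPU time start   */

    for (i = 0; i &lt; 10000000L; i++)     /* arbitrary busy loop */
        x = x * 1.0000001;

    c1 = clock();
    gettimeofday(&amp;w1, NULL);

    printf("wall-clock time: %.2f s\n",
           (w1.tv_sec - w0.tv_sec) + (w1.tv_usec - w0.tv_usec) / 1e6);
    printf("CPU time       : %.2f s\n",
           (double)(c1 - c0) / CLOCKS_PER_SEC);
    return 0;
}
</PRE>
<P>On an otherwise idle machine the two figures will be nearly identical; with other tasks competing for the CPU, the wall-clock figure grows while the CPU time figure stays roughly constant.</P>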
<H2><A NAME="ss1.6">1.6 Resolution and precision</A></H2>
<P>Resolution and precision both measure the information provided by a data point, but should not be confused.</P>
<P>Resolution is the minimum time interval that can be (easily) measured on a given system. In Linux running on i386 architectures I believe this is 1/100 of a second, provided by the GNU C system library function <CODE>times</CODE> (see /usr/include/time.h - not very clear, BTW). Another term used with the same meaning is "granularity". David C. Niemi has developed an interesting technique to bring granularity down to sub-millisecond levels on Linux systems; I hope he will contribute an explanation of his algorithm in the next article.</P>
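<P>A small sketch (assuming a POSIX-style system such as Linux) showing where the 1/100 s figure comes from: <TT>sysconf(_SC_CLK_TCK)</TT> reports the number of clock ticks per second used by <TT>times()</TT>.</P>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;        /* sysconf(), _SC_CLK_TCK */
#include &lt;sys/times.h&gt;     /* times(), struct tms    */

int main(void)
{
    long ticks = sysconf(_SC_CLK_TCK);  /* typically 100 on i386 Linux kernels of this era */
    struct tms t;
    clock_t now = times(&amp;t);            /* tick count since some arbitrary point */

    printf("clock ticks per second: %ld\n", ticks);
    printf("timer resolution      : %.4f s\n", 1.0 / (double)ticks);
    printf("current tick count    : %ld\n", (long)now);
    return 0;
}
</PRE>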
<P>Precision is a measure of the total variability in the results for any given benchmark. Computers are deterministic systems and should always provide the same, identical benchmark results if running under identical conditions. However, since Linux is a multi-tasking, multi-user system, some tasks will be running in the background and may influence the benchmark results. </P>
<P>This "random" error can be expressed as a time measurement (e.g. 20 seconds + or - 0.2 s) or as a percentage of the figure obtained by the benchmark considered (e.g. 20 seconds + or - 1%). Other terms sometimes used to describe variations in results ar
e "variance", "noise", or "jitter".</P>
<P>Note that whereas resolution is system dependent, precision is a characteristic of each benchmark. Ideally, a well-designed benchmark will have a precision smaller than or equal to the resolution of the system being tested. It is very important to identify the sources of noise for any particular benchmark, since this provides an indication of possibly erroneous results.</P>
<H2><A NAME="ss1.7">1.7 Synthetic benchmark</A></H2>
<P>A program or program suite specifically designed to measure the performance of a subsystem (hardware, software, or a combination of both). Whetstone is an example of a synthetic benchmark.</P>
<H2><A NAME="ss1.8">1.8 Application benchmark</A></H2>
<P>A commonly executed application is chosen and the time to execute a given task with this application is used as a benchmark. Application benchmarks try to measure the performance of computer systems for some category of real-world computing task. Measuring the time your Linux box takes to compile the kernel can be considered as a sort of application benchmark.</P>
<H2><A NAME="ss1.9">1.9 Relevance</A></H2>
<P>A benchmark or its results are said to be irrelevant when they fail to effectively measure the performance characteristic the benchmark was designed for. Conversely, benchmark results are said to be relevant when they allow an accurate prediction of real-life performance or meaningful comparisons between different systems.</P>
<HR>
<HR>
<H2><A NAME="s2">2. A variety of benchmarks</A></H2>
<P>The performance of a Linux system may be measured by all sorts of different benchmarks:
<OL>
<LI>Kernel compilation performance.</LI>
<LI>FPU performance.</LI>
<LI>Integer math performance.</LI>
<LI>Memory access performance.</LI>
<LI>Disk I/O performance.</LI>
<LI>Ethernet I/O performance.</LI>
<LI>File I/O performance.</LI>
<LI>Web server performance.</LI>
<LI>Doom performance.</LI>
<LI>Quake performance.</LI>
<LI>X graphics performance.</LI>
<LI>3D rendering performance.</LI>
<LI>SQL server performance.</LI>
<LI>Real-time performance.</LI>
<LI>Matrix performance.</LI>
<LI>Vector performance.</LI>
<LI>File server (NFS) performance.</LI>
</OL>
<P>Etc...
<UL>
<LI>Conclusion I: it's obvious that no single benchmark can provide results for all the above items.</LI>
<LI>Conclusion II: you must first decide what you are trying to measure, then choose an appropriate benchmark (or write your own). </LI>
<LI>Conclusion III: it's impossible to come up with a single figure (called Single Figure of Merit in benchmarking terminology) that will summarize the performance of a Linux system. Hence, no "Lhinuxstone" metric exists.</LI>
<LI>Conclusion IV: benchmarking always takes more time than you thought it would.</LI>
</UL>
<HR>
<H2><A NAME="s3"></A>3. FPU tests: Whetstone and Sons, Ltd.</H2>
<P>Floating-point (FP) instructions are among the least used while running
Linux. They probably represent &lt; 0.001% of the instructions executed
on an average Linux box, unless one deals with scientific computations.
Besides, if you really want to know how well designed the FPU in your processor
is, it's easier to have a look at its data sheet and check how many clock
cycles it takes to execute a given FPU instruction. But there are more
benchmarks that measure FPU performance than anything else. Why?</P>
<OL>
<LI>RISC, pipelining, simultaneous issuing of instructions, speculative
execution and various other CPU design tricks make CPU performance,
especially FPU performance, difficult to measure directly and simply. The
execution time of an FPU instruction varies depending on the data, and
a continuous stream of FPU instructions will execute under special circumstances
that make direct predictions of performance impossible in most cases. Simulations
(synthetic benchmarks) are needed.</LI>
<LI>FPU tests are easier to write than other benchmarks. Just put a bunch
of FP instructions together and make a loop: voil&agrave; !</LI>
<LI>The Whetstone benchmark is widely (and freely) available in Basic,
C and Fortran versions, in case you don't want to write your own FPU test.</LI>
<LI>FPU figures look good for marketing purposes. Here is what Dave Sill,
the author of the comp.benchmarks FAQ, has to say about MFLOPS: &quot;<I>Millions
of Floating Point Operations Per Second. Supposedly the rate at which the
system can execute floating point instructions. Varies widely between different
benchmarks and different configurations of the same benchmarks. Popular
with marketing types because it sounds like a &quot;hard&quot; value
like miles per hour, and represents a simple concept.</I>&quot;</LI>
<LI>If you are going to buy a Cray, you'd better have an excuse for it.</LI>
<LI>You can't get a data sheet for the Cray (or don't believe the numbers),
but still want to know its FP performance.</LI>
<LI>You want to keep your CPU busy doing all sorts of useless FP calculations,
and want to check that the chip gets very hot.</LI>
<LI>You want to discover the next big bug in the FPU of your processor,
and get rich speculating with the manufacturer's shares.</LI>
</OL>
<P>Etc...</P>
<H2><A NAME="ss3.1"></A>3.1 Whetstone history and general features</H2>
<P>The original Whetstone benchmark was designed in the 60's by Brian Wichmann
at the National Physical Laboratory, in England, as a test for an ALGOL
60 compiler for a hypothetical machine. The compilation system was named
after the small town of Whetstone, where it was designed, and the name
seems to have stuck to the benchmark itself.</P>
<P>The first practical implementation of the Whetstone benchmark was written
by Harold Curnow in FORTRAN in 1972 (Curnow and Wichmann together published
a paper on the Whetstone benchmark in 1976 for <I>The Computer Journal</I>).
Historically it is the first major synthetic benchmark. It is designed
to measure the execution speed of a variety of FP instructions (+, *, sin,
cos, atan, sqrt, log, exp) on scalar and vector data, but also contains
some integer code. Results are provided in MWIPS (Millions of Whetstone
Instructions Per Second). The meaning of the expression &quot;Whetstone
Instructions&quot; is not clear, though, at least after close examination
of the C source code.</P>
<P>During the late 80's and early 90's it was recognized that Whetstone
would not adequately measure the FP performance of parallel multiprocessor
supercomputers (e.g. Cray and other mainframes dedicated to scientific
computations). This spawned the development of various modern benchmarks,
many of them with names like Fhoostone, as a humorous reference to Whetstone.
Whetstone however is still widely used, because it provides a very reasonable
metric as a measure of uniprocessor FP performance.</P>
<P>Whetstone has other interesting qualities for Linux users: </P>
<UL>
<LI>Its source code is short and relatively easy to understand, with a
clean, self-explanatory structure.</LI>
<LI>The C version compiles cleanly on Linux boxes with gcc.</LI>
<LI>Execution time is short: 100 seconds (by design).</LI>
<LI>It is very precise (small variations in the results).</LI>
<LI>CPU architecture digression: for the Whetstone benchmark, the object
code that gets looped through is very small, fitting entirely in the L1
cache of most modern processors, hence keeping the FPU pipeline filled
and the FPU permanently busy. This is desirable because Whetstone is doing
exactly what we want it to do: measuring FPU performance, not CPU/L2 cache/main
memory coupling, integer performance or any other feature of the system
under test. Note however that <A HREF="mailto:bench@wauug.erols.com">David
C. Niemi</A> has provided some conclusive evidence that at least some interaction
with the L2 cache or main memory is taking place on Pentium (R) systems
(Pentium CPUs have a sophisticated FPU instruction pipeline and can dispatch
two FPU instructions on a single clock cycle. One pipe can execute all
integer and FP instructions, while the other pipe can execute simple integer
instructions and the FXCH FP instruction. This is quoted from Intel's
datasheet on the Pentium processor, available at <A HREF="http://developer.intel.com/design/pentium/datashts/241997.htm">Intel's
developers site</A>). I wish somebody with Pentium ICE equipment could
investigate this a little further...</LI>
</UL>
<H2><A NAME="ss3.2"></A>3.2 Getting the source and compiling it</H2>
<H3>Getting the standard C version by Roy Longbottom.</H3>
<P>The version of the Whetstone benchmark that we are going to use for
this example was slightly modified by Al Aburto and can be downloaded from
his excellent <A HREF="ftp://ftp.nosc.mil/pub/aburto">FTP site dedicated
to benchmarks</A>. After downloading the file whets.c, you will have to
edit the source slightly: a) uncomment the &quot;<TT>#define POSIX1</TT>&quot;
directive (this enables the Linux-compatible timer routine); b) uncomment
the &quot;<TT>#define DP</TT>&quot; directive (since we are only interested
in the Double Precision results).</P>
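<P>After editing, the two directives near the top of whets.c should simply read as follows (a sketch of the intended result; the exact comments and their position in the file vary between versions):</P>
<PRE>
#define POSIX1    /* enables the Linux/POSIX compatible timer routine */
#define DP        /* report Double Precision results only             */
</PRE>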
<H3>Compiling</H3>
<P>This benchmark is extremely sensitive to compiler optimization options.
Here is the line I used to compile it: <TT>cc whets.c -o whets -O2 -fomit-frame-pointer
-ffast-math -fforce-addr -fforce-mem -m486 -lm</TT>.</P>
<P>Note that some compiler options of some versions of gcc are buggy; most
notably, combining one of -O, -O2, -O3, ... with -funroll-loops can cause
gcc to emit incorrect code on a Linux box. You can test your gcc with a
short test program available at <A HREF="http://www.math.vanderbilt.edu/~mayer/linux/gcc.html">Uwe
Mayer's site</A>. Of course, if your compiler is buggy, then any test results
are not written in stone, to say the least (pun intended). In short, don't
use -funroll-loops to compile this benchmark, and try to stick to the optimization
options listed above.</P>
<H2><A NAME="ss3.3"></A>3.3 Running Whetstone and gathering results</H2>
<H3>First runs</H3>
<P>Just execute whets. Whetstone will display its results on standard output
and also write a whets.res file if you give it the information it requests.
Run it a few times to confirm that variations in the results are very small.</P>
<H3>With L1, L2 or both L1 and L2 caches disabled</H3>
<P>Some motherboards allow you to disable the L1 (internal) or L2 (external)
caches through the BIOS configuration menus (take a look at the motherboard's
manual; the ASUS P55T2P4 motherboard, for example, allows disabling both
caches separately or together). You may want to experiment with these settings
and/or main memory (DRAM) timing settings.</P>
<H3>Without optimization</H3>
<P>You can try to compile whets.c without any special optimization options,
just to verify that compiler quality and compiler optimization options
do influence benchmark results.</P>
<H2><A NAME="ss3.4"></A>3.4 Examining the source code, the object code
and interpreting the results</H2>
<H3>General program structure</H3>
<P>The Whetstone benchmark main loop executes in a few milliseconds on
an average modern machine, so its designers decided to provide a calibration
procedure that will first execute 1 pass, then 5, then 25 passes, etc...
until the calibration takes more than 2 seconds, and then guess a number
of passes <TT>xtra</TT> that will result in an approximate running time
of 100 seconds. It will then execute <TT>xtra</TT> passes of each one of
the 8 sections of the main loop, measure the running time for each (for
a total running time very near to 100 seconds) and calculate a rating in
MWIPS, the Whetstone metric. This is an interesting variation on the two
basic procedures described in Section 1.</P>
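<P>The calibration logic can be sketched as follows (a simplified illustration of the idea described above, not the actual code from whets.c; <TT>one_whetstone_pass()</TT> is a hypothetical stand-in for one pass through the 8 sections of the main loop):</P>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;time.h&gt;

/* Hypothetical stand-in workload for one pass through the main loop. */
static void one_whetstone_pass(void)
{
    volatile double x = 0.75;
    int j;
    for (j = 0; j &lt; 100000; j++)
        x = x * 1.0000001 + 0.0000001;
}

int main(void)
{
    long passes = 1, xtra, i;
    double elapsed;
    time_t start;

    /* Run 1, 5, 25, 125, ... passes until calibration takes more than 2 seconds. */
    for (;;) {
        start = time(NULL);
        for (i = 0; i &lt; passes; i++)
            one_whetstone_pass();
        elapsed = difftime(time(NULL), start);
        if (elapsed &gt; 2.0)
            break;
        passes *= 5;
    }

    /* Extrapolate to the number of passes that should take about 100 seconds. */
    xtra = (long)((double)passes * 100.0 / elapsed);
    printf("calibration: %ld passes took %.0f s; will run %ld passes per section\n",
           passes, elapsed, xtra);
    return 0;
}
</PRE>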
<H3>Main loop</H3>
<P>The main loop consists of 8 sections each containing a mix of various
instructions representative of some type of computational task. Each section
is itself a very short, very small loop, and has its own timing calculation.
The code that gets looped through for section 8 for example is a single
line of C code:</P>
<P><TT>x = sqrt(exp(log(x)/t1));</TT> where x = 0.75 and t1 = 0.50000025,
both defined as doubles.</P>
<H3>Executable code size, library calls</H3>
<P>Compiled as specified above with gcc 2.7.2.1, the resulting ELF executable
<TT>whets</TT> is 13 096 bytes long on my system. It calls libc and of
course libm for the trigonometric and transcendental math functions, but
these should get compiled to very short executable code sequences since
all modern CPUs have FPUs with these functions wired-in.</P>
<H3>General comments</H3>
<P>Now that we have an FPU performance figure for our machine, the next
step is comparing it to other CPUs. Have you noticed all the information that
whets asked you for after you ran it for the first time? Well, Al Aburto
has collected Whetstone results for your convenience at his site; you may
want to download the <A HREF="ftp://ftp.nosc.mil/pub/aburto">data file</A>
and have a look at it. This kind of benchmarking data repository is very
important, because it allows comparisons between various different machines.
More on this topic in one of my next articles.</P>
<P>Whetstone is not a Linux-specific test; it's not even an OS-specific
test, but it certainly is a good test for the FPU in your Linux box, and
also gives an indication of compiler efficiency for specific kinds of applications
that involve FP calculations.</P>
<P>I hope this gave you a taste of what benchmarking is all about.</P>
<P>
<HR>
<H2><A NAME="s4"></A>4. References</H2>
<P>Other references for benchmarking terminology: </P>
<UL>
<LI>The <A HREF="http://sacam.oren.ortn.edu/~dave/benchmark-faq.html">comp.benchmarks
FAQ</A> by Dave Sill.</LI>
<LI>The <A HREF="http://wombat.doc.ic.ac.uk/foldoc/index.html">On-Line
Computing Dictionary</A>.</LI>
<LI>The <A HREF="http://sunsite.unc.edu/LDP/HOWTO/Benchmarking-HOWTO.html">Linux
Benchmarking HOWTO</A> available from the LDP site and mirrors.</LI>
</UL>
<P>
<HR>
<H5 align="center">Copyright &copy; 1997, Andr&eacute; D. Balsa<BR>
Published in Issue 22 of the Linux Gazette, October 1997</H5>
<!--===================================================================-->
<P> <hr> <P>
<A HREF="./index.html"><IMG ALIGN=BOTTOM SRC="../gx/indexnew.gif"
ALT="[ TABLE OF CONTENTS ]"></A>
<A HREF="../index.html"><IMG ALIGN=BOTTOM SRC="../gx/homenew.gif"
ALT="[ FRONT PAGE ]"></A>
<A HREF="./gm.html"><IMG SRC="../gx/back2.gif"
ALT=" Back "></A>
<A HREF="./words.html"><IMG SRC="../gx/fwd.gif" ALT=" Next "></A>
<P> <hr> <P>
</body>
</html>