Fixed Martin's comments

Federico Bolelli 2016-01-27 12:34:11 +01:00 committed by Natale Patriciello
parent 3e267b0adc
commit 35b2dce9ec
4 changed files with 12 additions and 15 deletions


@@ -940,7 +940,7 @@ $ tc qdisc ... dev dev ( parent classid | root) [ handle major: ] prio [bands b
</section>
<section id="qc-wrr">
<title>WRR, Wheighted Round Robin</title>
<title>WRR, Weighted Round Robin</title>
<para>
This qdisc is not included in the standard kernels.
</para>


@@ -255,7 +255,7 @@ $ tc -s qdisc ls dev eth0
</mediaobject>
<para>
Lots of numbers. The second column contains the value of the relevant four TOS bits, followed by their translated meaning. For example, 15 stands for a packet wanting Minimal Monetary Cost, Maximum Reliability, Maximum Throughput AND Minimum Delay. I would call this a 'Dutch Packet'.
Lots of numbers. The second column contains the value of the relevant four TOS bits, followed by their translated meaning. For example, 15 stands for a packet wanting Minimal Monetary Cost, Maximum Reliability, Maximum Throughput AND Minimum Delay.
</para>
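<para>
As a quick check of the arithmetic, assuming the usual weights of the four TOS bits (Minimize Delay = 8, Maximize Throughput = 4, Maximize Reliability = 2, Minimize Monetary Cost = 1), the value 15 is simply 8 + 4 + 2 + 1, i.e. all four bits set.
</para>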
<para>
The fourth column lists the way the Linux kernel interprets the TOS bits, by showing to which Priority they are mapped.
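That mapping can be observed on the default pfifo_fast qdisc, whose priomap lists the band for each of the 16 possible TOS values; a sketch of what the output may look like (eth0 and the exact priomap will vary by system):
<programlisting>
$ tc qdisc show dev eth0
qdisc pfifo_fast 0: root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
</programlisting>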


@@ -357,16 +357,14 @@
$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:18:F3:51:44:10
inet addr:69.41.199.58 Bcast:69.41.199.63
Mask:255.255.255.248
inet addr:69.41.199.58 Bcast:69.41.199.63 Mask:255.255.255.248
inet6 addr: fe80::218:f3ff:fe51:4410/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:435033 errors:0 dropped:0 overruns:0 frame:0
TX packets:429919 errors:0 dropped:0 overruns:0
carrier:0
TX packets:429919 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 <command>txqueuelen:1000</command>
RX bytes:65651219 (62.6 MiB) TX bytes:132143593 (126.0 MiB)
Interrupt:23
</programlisting>
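<para>
On newer systems the same information, including txqueuelen (reported as qlen), is available from the ip tool. A sketch of what the output may look like, assuming the same eth0 interface:
</para>
<programlisting>
$ ip link show dev eth0
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:18:f3:51:44:10 brd ff:ff:ff:ff:ff:ff
</programlisting>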
<programlisting>
@@ -470,7 +468,7 @@ ip link set txqueuelen 500 dev eth0
<section id="c-bql">
<title>Byte Queue Limits (BQL)</title>
<para>
Byte Queue Limits (BQL) is a new feature in recent Linux kernels (> 3.3.0) which attempts to solve the problem of driver queue sizing automatically. This is accomplished by adding a layer which enables and disables queuing to the driver queue based on calculating the minimum buffer size required to avoid starvation under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum latency experienced by queued packets.
Byte Queue Limits (BQL) is a new feature in recent Linux kernels (> 3.3.0) which attempts to solve the problem of driver queue sizing automatically. This is accomplished by adding a layer which enables and disables queuing to the driver queue based on calculating the minimum buffer size required to avoid <link linkend="o-starv-lat">starvation</link> under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum <link linkend="o-starv-lat">latency</link> experienced by queued packets.
</para>
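<para>
BQL keeps its state in sysfs, one directory per transmit queue. As a minimal sketch (eth0 and tx-0 are example names; the value shown is illustrative), the current limit can be read with:
</para>
<programlisting>
$ cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
3012
</programlisting>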
<para>
It is key to understand that the actual size of the driver queue is not changed by BQL. Rather, BQL calculates a limit of how much data (in bytes) can be queued at the current time. Any bytes over this limit must be held or dropped by the layers above the driver queue.
@@ -487,7 +485,7 @@ if the number of queued bytes is over the current LIMIT value then
disable the queueing of more data to the driver queue
</programlisting>
<para>
Notice that the amount of queued data can exceed LIMIT because data is queued before the LIMIT check occurs. Since a large number of bytes can be queued in a single operation when TSO, UFO or GSO (see chapter 2.9.1 for details; link to be added) are enabled, these throughput optimizations have the side effect of allowing a higher than desirable amount of data to be queued. If you care about latency you probably want to disable these features.
Notice that the amount of queued data can exceed LIMIT because data is queued before the LIMIT check occurs. Since a large number of bytes can be queued in a single operation when TSO, UFO or GSO (see chapter 2.9.1 for details; link to be added) are enabled, these throughput optimizations have the side effect of allowing a higher than desirable amount of data to be queued. If you care about <link linkend="o-starv-lat">latency</link> you probably want to disable these features.
</para>
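<para>
If you do want to trade some throughput for latency, these offloads can usually be switched off at runtime with ethtool. A sketch, assuming an eth0 interface whose driver exposes all three features:
</para>
<programlisting>
$ ethtool -K eth0 tso off gso off ufo off
$ ethtool -k eth0 | grep -E 'segmentation|fragmentation'    # verify the new settings
</programlisting>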
<para>
The second stage of BQL is executed after the hardware has completed a transmission (simplified pseudo-code):
@@ -508,7 +506,7 @@ if the number of queued bytes is less than LIMIT
enable the queueing of more data to the buffer
</programlisting>
<para>
As you can see, BQL is based on testing whether the device was starved. If it was starved, then LIMIT is increased, allowing more data to be queued, which reduces the chance of starvation. If the device was busy for the entire interval and there are still bytes to be transferred in the queue, then the queue is bigger than necessary for the system under the current conditions, and LIMIT is decreased to constrain the latency.
As you can see, BQL is based on testing whether the device was starved. If it was starved, then LIMIT is increased, allowing more data to be queued, which reduces the chance of <link linkend="o-starv-lat">starvation</link>. If the device was busy for the entire interval and there are still bytes to be transferred in the queue, then the queue is bigger than necessary for the system under the current conditions, and LIMIT is decreased to constrain the <link linkend="o-starv-lat">latency</link>.
</para>
<para>
A real world example may help provide a sense of how much BQL affects the amount of data which can be queued. On one of my servers the driver queue size defaults to 256 descriptors. Since the Ethernet MTU is 1,500 bytes this means up to 256 * 1,500 = 384,000 bytes can be queued to the driver queue (TSO, GSO, etc. are disabled, or this would be much higher). However, the LIMIT value calculated by BQL is 3,012 bytes. As you can see, BQL greatly constrains the amount of data which can be queued.
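The ring size from that example can be checked with ethtool; a sketch, assuming eth0 (output abbreviated; values are device specific):
<programlisting>
$ ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:   4096
TX:   4096
Current hardware settings:
RX:   256
TX:   256
</programlisting>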
@@ -517,7 +515,7 @@ if the number of queued bytes is less than LIMIT
An interesting aspect of BQL can be inferred from the first word in its name: byte. Unlike the size of the driver queue and most other packet queues, BQL operates on bytes. This is because the number of bytes has a more direct relationship with the time required to transmit to the physical medium than the number of packets or descriptors, since the latter are variably sized.
</para>
<para>
BQL reduces network latency by limiting the amount of queued data to the minimum required to avoid starvation. It also has the very important side effect of moving the point where most packets are queued from the driver queue, which is a simple FIFO, to the queueing discipline (QDisc) layer, which is capable of implementing much more complicated queueing strategies. The next section introduces the Linux QDisc layer.
BQL reduces network <link linkend="o-starv-lat">latency</link> by limiting the amount of queued data to the minimum required to avoid <link linkend="o-starv-lat">starvation</link>. It also has the very important side effect of moving the point where most packets are queued from the driver queue, which is a simple FIFO, to the queueing discipline (QDisc) layer, which is capable of implementing much more complicated queueing strategies. The next section introduces the Linux QDisc layer.
</para>
<section>
@@ -549,7 +547,7 @@ if the number of queued bytes is less than LIMIT
</listitem>
<listitem>
<para>
<emphasis>limit_max:</emphasis> A configurable maximum value for LIMIT. Set this value lower to optimize for latency.
<emphasis>limit_max:</emphasis> A configurable maximum value for LIMIT. Set this value lower to optimize for <link linkend="o-starv-lat">latency</link>.
</para>
</listitem>
<listitem>


@@ -175,8 +175,7 @@
</itemizedlist>
<para>
Remember, too, that sometimes it is simply better to purchase more
bandwidth. Traffic control does not solve all problems!But, keep attention:
A 100 Gigabit network is always faster than a 1 megabit network, isnt it? More bandwidth is always better! I want a faster network! No, such a network can easily be much slower. Bandwidth is a measure of capacity, not a measure of how fast the network can respond. You pick up the phone to send a message to Shanghai immediately, but dispatching a cargo ship full of blu-ray disks will be amazingly slower than the telephone call, even though the bandwidth of the ship is billions and billions of times larger than the telephone line. So more bandwidth is better only if its latency (speed) meets your needs. More of what you dont need is useless. Bufferbloat destroys the speed we really need. (<ulink url="https://gettys.wordpress.com/bufferbloat-faq/">Jim Gettys jg's Ramblings</ulink>)
bandwidth. Traffic control does not solve all problems!
</para>
<para>
</para>
@@ -484,7 +483,7 @@
</section>
<section id="o-starv-lat">
<title>Starvation and </title>
<title>Starvation and Latency</title>
<para>
The queue between the IP stack and the hardware (see <link linkend="c-driver-queue">chapter 4.2</link> for details about the <link linkend="c-driver-queue">driver queue</link>, or see <link linkend="s-ethtool">chapter 5.5</link> for how to manage it) introduces two problems: starvation and latency.
</para>