From 9b265eb6e21c3cabf87bd2063794cc360583f244 Mon Sep 17 00:00:00 2001
From: "Martin A. Brown"
Date: Sat, 13 Feb 2016 00:13:46 -0800
Subject: [PATCH] added section on bufferbloat; reworked language

reworked some of the language for smoothness

added a section on bufferbloat (which still needs some editorial love)
---
 .../Traffic-Control-HOWTO/overview.xml        | 229 +++++++++++++++---
 1 file changed, 199 insertions(+), 30 deletions(-)

diff --git a/LDP/howto/docbook/Traffic-Control-HOWTO/overview.xml b/LDP/howto/docbook/Traffic-Control-HOWTO/overview.xml
index fa49889b..d436b351 100644
--- a/LDP/howto/docbook/Traffic-Control-HOWTO/overview.xml
+++ b/LDP/howto/docbook/Traffic-Control-HOWTO/overview.xml
@@ -11,9 +11,6 @@
   Documentation License (GFDL) through The Linux Documentation
   Project (TLDP).

-  This was initially authored while Martin A. Brown worked for
-  SecurePipe, Inc.
-
   This HOWTO is likely available at the following address:
   http://tldp.org/HOWTO/Traffic-Control-HOWTO/

@@ -44,15 +41,22 @@
   Traffic control is the name given to the sets of queuing systems and
   mechanisms by which packets are received and transmitted on a router.
-  This includes deciding which (and whether) packets to accept at what
+  This includes deciding whether (and which) packets to accept at what
   rate on the input of an interface and determining which packets to
   transmit in what order at what rate on the output of an interface.

-  In the overwhelming majority of situations, traffic control consists of
+  In the simplest possible model, traffic control consists of
   a single queue which collects entering packets and dequeues them as
   quickly as the hardware (or underlying device) can accept them.  This
-  sort of queue is a &sch_fifo;.
+  sort of queue is a &sch_fifo;.  This is like a single toll booth for
+  entering a highway.  Every car must stop and pay the toll.  Other cars
+  wait their turn.
+
+  Linux provides this simplest traffic control tool (&sch_fifo;), and
+  in addition offers a wide variety of other tools that allow all sorts
+  of control over packet handling.

@@ -80,35 +84,76 @@
   of the network resource between the two applications.

-  Traffic control is the set of tools which allows the user to have
-  granular control over these queues and the queuing mechanisms of a
+  Traffic control is a set of tools allowing an administrator granular
+  control over these queues and the queuing mechanisms of a
   networked device.  The power to rearrange traffic flows and packets
   with these tools is tremendous and can be complicated, but is no
   substitute for adequate bandwidth.

   The term Quality of Service (QoS) is often used as a synonym for traffic
-  control.
+  control at the IP layer.
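+
+  As a minimal illustration of this simplest queue (a sketch only; it
+  assumes a Linux system with the iproute2 tools installed and an
+  interface named eth0, neither of which is specified by this
+  document), the queue on an interface can be inspected, and an
+  explicit &sch_fifo; installed, with the tc command:
+
+<programlisting>
+# display the qdisc (queuing discipline) currently attached to eth0
+tc qdisc show dev eth0
+
+# attach a plain packet FIFO, holding at most 100 packets
+tc qdisc add dev eth0 root pfifo limit 100
+</programlisting>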
  Why use it?

-  Packet-switched networks differ from circuit based networks in one very
-  important regard.  A packet-switched network itself is stateless.  A
-  circuit-based network (such as a telephone network) must hold state
-  within the network.  IP networks are stateless and packet-switched
-  networks by design; in fact, this statelessness is one of the
-  fundamental strengths of IP.
+  Traffic control tools allow the implementer to apply preferences,
+  organizational or business policies to packets or network flows
+  transmitted into a network.  This control allows stewardship over
+  network resources such as throughput or latency.

-  The weakness of this statelessness is the lack of differentiation
-  between types of flows.  In simplest terms, traffic control allows an
-  administrator to queue packets differently based on attributes of the
-  packet.  It can even be used to simulate the behaviour of a
-  circuit-based network.  This introduces statefulness into the stateless
-  network.
+  Fundamentally, traffic control becomes a necessity because of packet
+  switching in networks.
+
+  For a brief digression, to explain the novelty and cleverness of
+  packet switching, think about the circuit-switched telephone networks
+  that were built over the entire 20th century.  In order to set up a
+  call, the network gear knew rules about call establishment; when a
+  caller tried to connect, the network employed those rules to reserve
+  a circuit for the entire duration of the call or connection.  While
+  one call was engaged, using that resource, no other call or caller
+  could use that resource.  This meant many individual pieces of
+  equipment could block call setup because of resource unavailability.
+
+  Let's return to packet-switched networks, a mid-20th century
+  invention, later in wide use, and nearly ubiquitous in the 21st
+  century.  Packet-switched networks differ from circuit-based networks
+  in one very important regard.  The unit of data handled by the
+  network gear is not a circuit, but rather a small chunk of data
+  called a packet.  Inexactly speaking, the packet is a letter in an
+  envelope with a destination address.  The packet-switched network has
+  only a very small amount of work to do: reading the destination
+  identifier and transmitting the packet.
+
+  Sometimes, packet-switched networks are described as stateless
+  because they do not need to track all of the flows (the analogy to a
+  circuit) that are active in the network.  In order to function, the
+  packet-handling machines need only know how to reach the destination
+  addresses.  One analogy is a package-handling service like your
+  postal service, UPS or DHL.
+
+  If there's a sudden influx of packets into a packet-switched network
+  (or, by analogy, the increase of cards and packages sent by mail and
+  other carriers at Christmas), the network can become slow or
+  unresponsive.  The lack of differentiation between the importance of
+  specific packets or network flows is, therefore, a weakness of such
+  packet-switched networks.  The network can be overloaded with data
+  packets, all competing for the same resources.
+
+  In simplest terms, the traffic control tools allow an administrator
+  to enqueue packets into the network differently based on attributes
+  of the packet.  The various tools each solve a different problem, and
+  many can be combined to implement complex rules to meet a preference
+  or business goal.
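+
+  To make that last point concrete, here is a small sketch (it assumes
+  an interface named eth0 and the iproute2 tc tool; the device and
+  port number are illustrative, not prescribed by this document) that
+  enqueues packets differently based on one attribute, the TCP
+  destination port:
+
+<programlisting>
+# attach a three-band priority qdisc at the root of eth0
+tc qdisc add dev eth0 root handle 1: prio
+
+# steer interactive ssh traffic (TCP port 22) into the first,
+# highest-priority band; other traffic follows the default bands
+tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
+    match ip dport 22 0xffff flowid 1:1
+</programlisting>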
  There are many practical reasons to consider traffic control, and many

@@ -118,9 +163,9 @@
   The list below is not an exhaustive list of the sorts of solutions
-  available to users of traffic control, but introduces the
-  types of problems that can be solved by using traffic control to
-  maximize the usability of a network connection.
+  available to users of traffic control, but shows the
+  types of common problems that can be solved by using traffic control
+  tools to maximize the usability of the network.

   Common traffic control solutions

@@ -134,7 +179,7 @@
       Limit the bandwidth of a particular user, service or client;
       &link-sch_htb; classes and &elements-classifying; with a
-      &linux-filter;. traffic.
+      &linux-filter; (see the sketch following this list).

@@ -174,11 +219,9 @@

-  Remember, too that sometimes, it is simply better to purchase more
+  Remember that sometimes it is simply better to purchase more
   bandwidth.  Traffic control does not solve all problems!
-
-
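+
+  For the bandwidth-limiting item above, a sketch of the usual
+  approach follows (it assumes eth0, the iproute2 tc tool, and an
+  illustrative client address 192.0.2.50; all three are assumptions,
+  not values taken from this document):
+
+<programlisting>
+# HTB root qdisc; unclassified traffic falls into class 1:20
+tc qdisc add dev eth0 root handle 1: htb default 20
+
+# a 1mbit class for the limited client, a larger class for the rest
+tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
+tc class add dev eth0 parent 1: classid 1:20 htb rate 8mbit
+
+# classify traffic toward the client into the limited class
+tc filter add dev eth0 parent 1: protocol ip u32 \
+    match ip dst 192.0.2.50/32 flowid 1:10
+</programlisting>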
@@ -447,12 +490,31 @@
  NIC, Network Interface Controller

-  A network interface controller is a computer hardware component, differently from previous ones thar are software components, that connects a computer to a computer network. The network controller implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet, Fibre Channel, Wi-Fi or Token Ring. Traffic contol must deal with the characteristic of NIC interface.
+  A network interface controller is a computer hardware component
+  (unlike the previously described components, which are software)
+  that connects a computer to a computer network.  The network
+  controller implements the electronic circuitry required to
+  communicate using a specific data link layer and physical layer
+  standard such as Ethernet, Fibre Channel, Wi-Fi or Token Ring.
+  Traffic control must deal with the physical constraints and
+  characteristics of the NIC interface.
  Huge Packets from the Stack

-  Most NICs have a fixed maximum transmission unit (MTU) which is the biggest frame which can be transmitted by the physical media. For Ethernet the default MTU is 1,500 bytes but some Ethernet networks support Jumbo Frames of up to 9,000 bytes. Inside IP network stack, the MTU can manifest as a limit on the size of the packets which are sent to the device for transmission. For example, if an application writes 2,000 bytes to a TCP socket then the IP stack needs to create two IP packets to keep the packet size less than or equal to a 1,500 MTU. For large data transfers the comparably small MTU causes a large number of small packets to be created and transferred through the driver queue.
+  Most NICs have a fixed maximum transmission unit
+  (MTU), which is the largest frame that can be
+  transmitted by the physical medium.  For Ethernet the default MTU
+  is 1500 bytes, but some Ethernet networks support Jumbo Frames
+  of up to 9000 bytes.  Inside the IP network stack, the MTU can
+  manifest as a limit on the size of the packets which are sent to
+  the device for transmission.  For example, if an application
+  writes 2000 bytes to a TCP socket, then the IP stack needs to
+  create two IP packets to keep the packet size less than or equal
+  to a 1500 byte MTU.  For large data transfers the comparably small
+  MTU causes a large number of small packets to be created and
+  transferred through the driver queue.

   In order to avoid the overhead associated with a large number of packets on the transmit path, the Linux kernel implements several optimizations: TCP segmentation offload (TSO), UDP fragmentation offload (UFO) and generic segmentation offload (GSO). All of these optimizations allow the IP stack to create packets which are larger than the MTU of the outgoing NIC. For IPv4, packets as large as the IPv4 maximum of 65,535 bytes can be created and queued to the driver queue. In the case of TSO and UFO, the NIC hardware takes responsibility for breaking the single large packet into packets small enough to be transmitted on the physical interface. For NICs without hardware support, GSO performs the same operation in software immediately before queueing to the driver queue.
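+
+  These MTU and offload settings can be observed and changed from
+  userspace.  A brief sketch (assuming an interface named eth0 and
+  the standard ip and ethtool utilities; the device name is
+  illustrative):
+
+<programlisting>
+# show the configured MTU of eth0
+ip link show dev eth0
+
+# list offload settings; TSO, UFO and GSO appear among them
+ethtool -k eth0 | grep -i offload
+
+# disable TSO, so the stack itself emits MTU-sized packets
+ethtool -K eth0 tso off
+</programlisting>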
@@ -553,6 +615,113 @@
+
+  Relationship between throughput and latency
+
+  In all traffic control systems, there is a relationship between
+  throughput and latency.  The maximum information rate of a network
+  link is termed bandwidth, but for the user of a network the actually
+  achieved bandwidth has a dedicated term, throughput.
+
+    latency
+
+      the delay in time between a sender's transmission and the
+      recipient's decoding or receiving of the data; always
+      non-negative and non-zero (time does not move backwards)
+
+      in principle, latency is unidirectional; however, almost the
+      entire Internet networking community talks about bidirectional
+      delay, the delay in time between a sender's transmission of data
+      and some sort of acknowledgement of receipt of that data; cf.
+      ping
+
+      measured in milliseconds (ms); on Ethernet, latencies are
+      typically between 0.3 and 1.0 ms, and on wide-area networks (i.e.
+      to your ISP, across a large campus or to a remote server) between
+      5 and 300 ms
+
+    throughput
+
+      a measure of the total amount of data that can be transmitted
+      successfully between a sender and receiver
+
+      measured in bits per second; the measurement most often quoted by
+      complaining users after buying a 10Mbit/s package from their
+      provider and receiving 8.2Mbit/s
+
+  Latency and throughput are general computing terms.  For example,
+  application developers speak of user-perceived latency when trying
+  to build responsive tools.  Database and filesystem people speak
+  about disk throughput.  And, above the network layer, the latency of
+  a website name lookup in DNS is a major contributor to the perceived
+  performance of a website.  The remainder of this document concerns
+  latency in the network domain, specifically the IP network layer.
+
+  Around the turn of the millennium, many network service providers in
+  the developed world had learned that users were interested in the
+  highest possible download throughput (the above-mentioned 10Mbit/s
+  figure).
+
+  In order to maximize this download throughput, gear vendors and
+  providers commonly tuned their equipment to hold a large number of
+  data packets.  When the network was ready to accept another packet,
+  the network gear was certain to have one in its queue and could
+  simply send another packet.  In practice, this meant that the user,
+  who was measuring download throughput, would receive a high number
+  and was likely to be happy.  This was desirable for the provider
+  because the delivered throughput could more likely meet the
+  advertised number.
+
+  This technique effectively maximized throughput, at the cost of
+  latency.  Imagine that a high-priority packet is waiting at the end
+  of the big queue of packets mentioned above.  Perhaps the theoretical
+  latency of the packet on this network is 100ms, but because it must
+  wait its turn in the very long queue before being transmitted, its
+  actual latency is far higher.  While the decision to maximize
+  throughput has been wildly successful, the effect on latency is
+  significant.
+
+  Despite a general warning from Stuart Cheshire in the mid-1990s
+  called It's the Latency, Stupid, it took a novel term, bufferbloat,
+  widely publicized about 15 years later by Jim Gettys in an ACM Queue
+  article Bufferbloat: Dark Buffers in the Internet and a Bufferbloat
+  FAQ in his blog, to bring some focus onto the choice to maximize
+  throughput that both gear vendors and providers preferred.
+
+  The relationship (tension) between latency and throughput in
+  packet-switched networks has long been well known in the academic,
+  networking and Linux development communities.  Linux traffic control
+  core data structures date back to the 1990s and have been
+  continuously developed and extended with new schedulers and features.
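+
+  As a brief, hedged illustration (it assumes an interface named eth0,
+  a reachable remote host, and the iproute2 tc tool; none of these are
+  specified by this document), bufferbloat can be observed, and one of
+  the schedulers developed in response to it installed, as follows:
+
+<programlisting>
+# watch round-trip latency while the link is busy (start a large
+# upload in another terminal); steadily climbing round-trip times
+# as the transfer proceeds suggest an oversized queue
+ping -c 20 192.0.2.1
+
+# replace the root qdisc with fq_codel, which is designed to keep
+# queuing delay low while preserving throughput
+tc qdisc replace dev eth0 root fq_codel
+</programlisting>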
+ +