This commit is contained in:
gferg 2002-05-15 21:21:06 +00:00
parent 4ab00525d5
commit 7bbd2075c6
4 changed files with 106 additions and 89 deletions

View File

@ -1123,7 +1123,7 @@ Netware, NT and Windows together. </Para>
IO-Perf-HOWTO</ULink>,
<CiteTitle>I/O Performance HOWTO</CiteTitle>
</Para><Para>
<CiteTitle>Updated: April 2002</CiteTitle>.
<CiteTitle>Updated: May 2002</CiteTitle>.
Covers information on available patches for the 2.4 kernel
that will improve the I/O performance of your Linux operating system. </Para>
</ListItem>

View File

@ -1270,7 +1270,7 @@ How to set and keep your computer's clock on time. </Para>
IO-Perf-HOWTO</ULink>,
<CiteTitle>I/O Performance HOWTO</CiteTitle>
</Para><Para>
<CiteTitle>Updated: April 2002</CiteTitle>.
<CiteTitle>Updated: May 2002</CiteTitle>.
Covers information on available patches for the 2.4 kernel
that will improve the I/O performance of your Linux operating system. </Para>
</ListItem>

View File

@ -463,7 +463,7 @@ device parameters. </Para>
IO-Perf-HOWTO</ULink>,
<CiteTitle>I/O Performance HOWTO</CiteTitle>
</Para><Para>
<CiteTitle>Updated: April 2002</CiteTitle>.
<CiteTitle>Updated: May 2002</CiteTitle>.
Covers information on available patches for the 2.4 kernel
that will improve the I/O performance of your Linux operating system. </Para>
</ListItem>

View File

@ -1,7 +1,11 @@
<?xml version='1.0' encoding='ISO-8859-1'?>
<!DOCTYPE article PUBLIC '-//OASIS//DTD DocBook XML V4.1.2//EN' >
<?xml version="1.0"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://docbook.org/xml/4.1.2/docbookx.dtd" [
<!ENTITY ver3 "18b.bz2">
<!ENTITY ver4 "2.4.17.patch">
]>
<article>
<articleinfo>
<title>I/O Performance HOWTO</title>
@ -12,14 +16,22 @@
<authorblurb><para><ulink url="mailto:snidersd@us.ibm.com">snidersd@us.ibm.com</ulink></para></authorblurb>
</author>
<pubdate>v1.0, 2002-04-05</pubdate>
<legalnotice><para>Linux is a trademark of Linus Torvalds. Other company, products, and service names may be trademarks or service marks of others.</para></legalnotice>
<pubdate>v1.1, 2002-05-01</pubdate>
<abstract><para>This HOWTO covers information on available patches for the 2.4 kernel that will improve the I/O performance of your Linux operating system. </para></abstract>
<abstract><para>This HOWTO covers information on available patches for the 2.4 kernel that can improve the I/O performance of your Linux&trade; operating system. </para></abstract>
<revhistory>
<revision>
<revnumber>v1.1</revnumber>
<date>2002-05-01</date>
<authorinitials>sds</authorinitials>
<revremark>Updated technical information and links.</revremark>
</revision>
<revision>
<revnumber>v1.0</revnumber>
<date>2002-04-05</date>
<date>2002-04-01</date>
<authorinitials>sds</authorinitials>
<revremark>Wrote and converted to DocBook XML.</revremark>
</revision>
@ -30,9 +42,10 @@
<sect1>
<title>Distribution Policy</title>
<para>The I/O Performance-HOWTO is copyrighted &copy; 2002, by IBM Corporation </para>
<para>The I/O Performance-HOWTO may be distributed, at your choice, under either the terms of the GNU Public License version 2 or later or the standard Linux Documentation Project (LDP) terms. These licenses should be available from the LDP Web site <ulink url="http://www.linuxdoc.org/docs.html"></ulink>. Please note that since the LDP terms do not allow modification (other than translation), modified versions can be assumed to be distributed under the GPL.</para>
<para>Permission is granted to copy, distribute, and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation with no Invariant Sections, no Front-Cover text, and no Back-Cover text. A copy of the license can be found at <ulink url="http://www.gnu.org/licenses/fdl.txt"></ulink>.</para>
</sect1>
<sect1 id="INTRODUCTION">
@ -46,30 +59,31 @@
<sect1 id="OVERVIEW">
<title>Avoiding Bounce Buffers</title>
<para>This section provides information on applying and using the bounce buffer patch on the Linux 2.4 kernel. The bounce buffer patch, written by Jens Axboe, enables device drivers that support Direct Memory Access (DMA) I/O to high-address physical memory to avoid bounce buffers.</para>
<para>This section provides information on applying and using the bounce buffer patch on the Linux 2.4 kernel. The bounce buffer patch, written by Jens Axboe, enables device drivers that support direct memory access (DMA) I/O to high-address physical memory to avoid bounce buffers.</para>
<para>This document provides a brief overview on memory and addressing in the Linux kernel, followed by information on why and how to make use of the bounce buffer patch.</para>
<sect2>
<title>Memory and Addressing in the Linux 2.4 Kernel</title>
<para>The Linux 2.4 kernel includes configuration options for specifying the amount of physical memory in the target computer. By default, the configuration is limited to the amount of memory that can be directly mapped into the kernel's virtual address space. The mapping starts at PAGE_OFFSET (normally 0xC0000000). On i386 systems the default mapping scheme limits the kernel-mode addressability to the first gigabyte (GB) of physical memory, also known as low memory. High-address physical memory is normally the memory above 1 GB. This memory is not directly accessible or permanently mapped by the kernel. Support for high-address physical memory is an option that is enabled during <link linkend="config">configuration of the Linux kernel</link>.</para>
<para>The Linux 2.4 kernel includes configuration options for specifying the amount of physical memory in the target computer. By default, the configuration is limited to the amount of memory that can be directly mapped into the kernel's virtual address space starting at PAGE_OFFSET. On i386 systems the default mapping scheme limits kernel-mode addressability to the first gigabyte (GB) of physical memory, also known as low memory. Conversely, high memory is normally the memory above 1 GB. High memory is not directly accessible or permanently mapped by the kernel. Support for high memory is an option that is enabled during <link linkend="config">configuration of the Linux kernel</link>.</para>
</sect2>
<sect2>
<title>The Problem with Bounce Buffers</title>
<para>When DMA I/O is performed to or from high-address physical memory, an area is allocated in memory known as a bounce buffer. When data travels between a device and high-address physical memory, it is first copied through the bounce buffer.</para>
<para>When DMA I/O is performed to or from high memory, an area is allocated in low memory known as a bounce buffer. When data travels between a device and high memory, it is first copied through the bounce buffer.</para>
<para>Systems with a large amount of high-address physical memory and intense I/O activity can create a large number of bounce buffer data copies. The excessive number of data copies can lead to a shortage of memory and performance degradation.</para>
<para>Systems with a large amount of high memory and intense I/O activity can create a large number of bounce buffers that can cause memory shortage problems. In addition, the excessive number of bounce buffer data copies can lead to performance degradation.</para>
<para>Peripheral component interface (PCI) devices normally address up to 4 GB of physical memory. When a bounce buffer is used for high-address physical memory that is below 4 GB, time and memory are wasted because the peripheral has the ability to address that memory directly. Using the bounce buffer patch can decrease, and possibly eliminate, the use of bounce buffers.</para>
<para>Peripheral component interface (PCI) devices normally address up to 4 GB of physical memory. When a bounce buffer is used for high memory that is below 4 GB, time and memory are wasted because the peripheral has the ability to address that memory directly. Using the bounce buffer patch can decrease, and possibly eliminate, the use of bounce buffers.</para>
</sect2>
<sect2 id="config">
<title>Locating the Patch</title>
<para> The latest version of the bounce buffer patch is <emphasis>block-highmem-all-&lt;version&gt;.gz </emphasis>, and it is available from Andrea Arcangeli's -aa series kernels at <ulink url="http://kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/"></ulink>.</para>
<para> The latest version of the bounce buffer patch is <emphasis>block-highmem-all-&ver3;</emphasis>, and it is available from Andrea Arcangeli's -aa series kernels at
<ulink url="http://kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/"></ulink>.</para>
<sect3>
<title>Configuring the Linux Kernel to Avoid Bounce Buffers</title>
@ -78,22 +92,22 @@
<para>The following kernel configuration options are required to enable the bounce buffer patch:</para>
<itemizedlist>
<listitem><para>Development Code - To enable the configurator to display the <guimenuitem>High I/O Support</guimenuitem> option, select <guimenuitem>Code Maturity Level Options</guimenuitem> category and specify "y" to <menuchoice><guibutton>prompt for development and/or incomplete code/drivers</guibutton></menuchoice>.</para></listitem>
<listitem><para>High-Address Physical Memory Support - To enable high memory support for physical memory that is greater than 1 GB, select <guimenuitem>Processor type and feature</guimenuitem> category, and enter the actual amount of physical memory under the <menuchoice><guilabel>High Memory Support</guilabel></menuchoice> option.</para></listitem>
<para><emphasis role="bold">Development Code</emphasis> - To enable the configurator to display the <option>High I/O Support</option> option, select the <option>Code maturity level options</option> category and specify "y" for <option>Prompt for development and/or incomplete code/drivers</option>.</para>
<para><emphasis role="bold">High Memory Support</emphasis> - To enable support for physical memory that is greater than 1 GB, select the <option>Processor type and features</option> category, and select a value from the <option>High Memory Support</option> option.</para>
<listitem><para>High-Address Physical Memory I/O Support - To enable high DMA I/O to physical addresses greater than 1 GB, select <guimenuitem>Processor type and feature</guimenuitem> category, and enter "y" to <menuchoice><guibutton>HIGHMEM I/O support</guibutton></menuchoice> option. This configuration option is a new option introduced by the bounce buffer patch.</para></listitem>
</itemizedlist>
<para><emphasis role="bold">High Memory I/O Support</emphasis> - To enable DMA I/O to physical addresses greater than 1 GB, select the <option>Processor type and features</option> category, and enter "y" for the <option>HIGHMEM I/O support</option> option. This option is introduced by the bounce buffer patch.</para>
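The three options above correspond to a kernel configuration fragment along these lines (a sketch; the option names below match common 2.4 -aa patched trees, but verify them against your own patched source):

```
# .config fragment for the bounce buffer patch (assumed option names)
CONFIG_EXPERIMENTAL=y   # "Prompt for development and/or incomplete code/drivers"
CONFIG_HIGHMEM4G=y      # High Memory Support, 1-4 GB (use CONFIG_HIGHMEM64G above 4 GB)
CONFIG_HIGHMEM=y
CONFIG_HIGHIO=y         # "HIGHMEM I/O support", added by the bounce buffer patch
```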
</sect3>
<sect3 id="enabled">
<title>Enabled Device Drivers</title>
<para>The bounce buffer patch provides the kernel infrastructure, small computer system interface (SCSI), and IDE mid-level driver modifications to support DMA I/O to high-address physical memory. Updates for several device drivers to make use of the added support are included with the patch.</para>
<para>The bounce buffer patch provides the kernel infrastructure, as well as the SCSI and IDE mid-level driver modifications to support DMA I/O to high memory. Updates for several device drivers to make use of the added support are also included with the patch.</para>
<para>You will need to apply the bounce buffer patch and configure the kernel to support high-address physical memory I/O. Many IDE configurations and the peripheral device drivers listed below perform DMA I/O without the use of bounce buffers:</para>
<para>If the bounce buffer patch is applied and you configure the kernel to support high memory I/O, many IDE configurations and the device drivers listed below perform DMA I/O without the use of bounce buffers:</para>
<para><simplelist columns="1" type="vert">
<member>aic7xxx_drv.o</member>
@ -111,59 +125,51 @@
<sect2>
<title>Modifying Your Device Driver to Avoid Bounce Buffers</title>
<para>The entire process of rebuilding a Linux device driver is beyond the scope of this document. However, additional information is available at
<ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.</para>
<note><para>Modifications are required for all device drivers that are not listed above in the
<link linkend="enabled">Enabled Device Drivers</link> section.</para></note>
<para>If your device driver is capable of high-address physical memory DMA I/O, you can modify your device driver to make use of the bounce buffer patch by making the following modifications:</para>
<para>For SCSI Device Drivers: set the <structfield>highmem_io</structfield> bit in the <structname>Scsi_Host_Template</structname> structure, then call <structfield>scsi_register ( )</structfield>.</para>
<para>For IDE Drivers: set the <structfield>highmem</structfield> in the <structname>ide_hwif_t</structname> structure, then call <structfield>ide_dmaproc ( )</structfield>.</para>
<para>If your device drivers are not listed above in the
<link linkend="enabled">Enabled Device Drivers</link> section, and the device is capable of high-memory DMA I/O, you can modify your device driver to make use of the bounce buffer patch as follows. More information on rebuilding a Linux device driver is available at <ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.
</para>
<orderedlist>
<listitem>
<para>A.) For SCSI Adapter Drivers: set the <structfield>highmem_io</structfield> bit in the <structfield>Scsi_Host_Template</structfield> structure. </para>
<listitem><para>Call <structfield>pci_set_dma_mask ( )</structfield> to specify the address bits that the device can successfully use on DMA operations. Modify the code as follows:</para>
<para><structfield>int pci_set_dma_mask (struct pci_dev *pdev, dma_addr_t mask);</structfield></para>
<para>If DMA I/O can be supported with the specified mask, <structfield>pci_set_dma_mask ( )</structfield> will set <structfield>pdev->dma_mask</structfield> and return 0. For SCSI or IDE, the mask value will also be passed by the mid-level drivers to <structfield>blk_queue_bounce_limit ( )</structfield> so that bounce buffers are not created for memory directly addressable by the device. Drivers other than SCSI or IDE must call <structfield>blk_queue_bounce_limit ( )</structfield> directly. Modify the code as follows:</para>
<para><structfield>void blk_queue_bounce_limit (request_queue_t *q, u64 dma_addr);</structfield></para> </listitem>
<listitem><para>Use <structfield>pci_map_page (dev, page, offset, size, direction)</structfield> to map a memory region so that it is accessible by the peripheral device, instead of <structfield>pci_map_single (dev, address, size, direction)</structfield>.</para>
<para>The address parameter for <structfield>pci_map_single ( )</structfield> correlates to the page and offset parameters of <structfield>pci_map_page ( )</structfield>. <structfield>pci_map_page ( )</structfield> supports both the high and low physical memory.</para>
<para>Use the <structfield>virt_to_page ( )</structfield> macro to convert an address to a page/offset pair. The macro is defined by including pc.h. For example:</para>
<simplelist columns="1" type="vert">
<member><structfield> void *address;</structfield></member>
<member><structfield> struct page *page;</structfield></member>
<member><structfield> unsigned long offset;</structfield></member>
</simplelist>
<simplelist columns="1" type="vert">
<member><structfield> page = virt_to_page (address);</structfield></member>
<member><structfield> offset = (unsigned long) address &amp; ~PAGE_MASK;</structfield></member>
</simplelist>
<para>Call <structfield>pci_unmap_page ( )</structfield> after the DMA I/O transfer is complete, the mapping established by <structfield> pci_map_page ( )</structfield> should be removed by calling <structfield>pci_unmap_page ( )</structfield>.</para>
<important><title>Important:</title><para><structfield>pci_map_single ( )</structfield> is implemented using <structfield>virt_to_bus ( ) </structfield>. This function call handles low memory addresses only. Drivers supporting high-address physical memory should no longer call <structfield>virt_to_bus ( )</structfield> or <structfield>bus_to_virt ( )</structfield>.</para></important></listitem>
<listitem><para>Set your driver to map a scatter-gather DMA operation using <structfield>pci_map_sg ( )</structfield>. The driver should set the page and offset fields instead of the address field of the scatterlist structure. Refer to step 3 for converting an address to a page/offset pair.</para>
<note><para>If your driver is already using the PCI DMA API, continue to use <structfield>pci_map_page ( ) </structfield> or <structfield>pci_map_sg ( )</structfield> as appropriate. However, do not use the address field of the scatterlist structure.</para></note>
<para>B.) For IDE Adapter Drivers: set the <structfield>highmem</structfield> bit in the <structfield>ide_hwif_t</structfield> structure.</para>
</listitem>
<listitem>
<para>Call <structfield>pci_set_dma_mask(struct pci_dev *pdev, dma_addr_t mask)</structfield> to specify the address bits that the device can successfully use on DMA operations. </para>
<para>If DMA I/O can be supported with the specified mask, <structfield>pci_set_dma_mask()</structfield> will set <structfield>pdev->dma_mask</structfield> and return 0. For SCSI or IDE, the mask value will also be passed by the mid-level drivers to <structfield>blk_queue_bounce_limit(request_queue_t *q, u64 dma_addr)</structfield> so that bounce buffers are not created for memory directly addressable by the device. Drivers other than SCSI or IDE must call <structfield>blk_queue_bounce_limit()</structfield> directly. </para>
</listitem>
<listitem>
<para>Use <structfield>pci_map_page(dev, page, offset, size, direction)</structfield>, instead of <structfield>pci_map_single(dev, address, size, direction)</structfield>, to map a memory region so that it is accessible by the peripheral device. <structfield>pci_map_page()</structfield> supports both high and low memory.</para>
<para>The <structfield>address</structfield> parameter for <structfield>pci_map_single()</structfield> correlates to the <structfield>page</structfield> and <structfield>offset</structfield> parameters for <structfield>pci_map_page()</structfield>. Use the <structfield>virt_to_page()</structfield> macro to convert an <structfield>address</structfield> to a <structfield>page</structfield> and <structfield>offset</structfield>. The <structfield>virt_to_page()</structfield> macro is defined by including pci.h. For example:</para>
<para><screen><structfield>void *address;</structfield></screen>
<screen><structfield>struct page *page;</structfield></screen>
<screen><structfield>unsigned long offset;</structfield></screen>
<screen><structfield>page = virt_to_page(address);</structfield></screen>
<screen><structfield>offset = (unsigned long) address &amp; ~PAGE_MASK;</structfield></screen></para>
</orderedlist>
</sect2>
<para>Call <structfield>pci_unmap_page()</structfield> after the DMA I/O transfer is complete to remove the mapping established by <structfield>pci_map_page()</structfield>.</para>
<note>
<para><structfield>pci_map_single()</structfield> is implemented using <structfield>virt_to_bus()</structfield>. <structfield>virt_to_bus()</structfield> handles low memory addresses only. Drivers supporting high memory should no longer call <structfield>virt_to_bus()</structfield> or <structfield>bus_to_virt()</structfield>.</para></note></listitem>
<listitem><para>If your driver calls <structfield>pci_map_sg()</structfield> to map a scatter-gather DMA operation, your driver should set the <structfield>page</structfield> and <structfield>offset</structfield> fields instead of the <structfield>address</structfield> field of the <structfield>scatterlist</structfield> structure. Refer to step 3 for converting an <structfield>address</structfield> to a <structfield>page</structfield> and <structfield>offset</structfield>.</para>
<note><para>If your driver is already using the PCI DMA API, continue to use <structfield>pci_map_page() </structfield> or <structfield>pci_map_sg()</structfield> as appropriate. However, do not use the <structfield>address</structfield> field of the <structfield>scatterlist</structfield> structure.</para></note>
</listitem>
</orderedlist></sect2>
</sect1>
<sect1>
@ -171,7 +177,7 @@
<para>This section provides information on the raw I/O variable-size optimization patch for the Linux 2.4 kernel written by Badari Pulavarty. This patch is also known as the RAW VARY or PAGESIZE_io patch. </para>
<para>The raw I/O variable-size patch changes the block size used for raw I/O from hardsect_size (normally 512 bytes) to 4 kilobytes (K). The patch improves I/O throughput and central processing unit (CPU) utilization by reducing the number of buffer heads needed for raw I/O operations.</para>
<para>The raw I/O variable-size patch changes the block size used for raw I/O from <structfield>hardsect_size</structfield> (normally 512 bytes) to 4 kilobytes (K). The patch improves I/O throughput and CPU utilization by reducing the number of buffer heads needed for raw I/O operations.</para>
<sect2>
<title>Locating the Patch</title>
@ -185,7 +191,7 @@ The name of the file is <emphasis>10_rawio-vary-io-1</emphasis>.</para></listite
<listitem><para>Alan Cox has included the patch in the <emphasis>2.4.18pre9-ac2</emphasis> kernel patch. The patch is available at <ulink url="http://www.kernel.org/pub/linux/kernel/people/alan/linux-2.4/2.4.18/"></ulink>. </para></listitem>
<listitem><para>The patch can be found as part of the IO Scalability Package at <ulink url="http://sourceforge.net/projects/lse/io"></ulink>. The name of the patch is <emphasis>PAGESIZE_io-&lt;version&gt;</emphasis> listed under the <emphasis>Raw I/O Enhancements </emphasis> release.</para> </listitem>
<listitem><para>The patch is available from SourceForge at <ulink url="http://sourceforge.net/projects/lse/io"></ulink>. The latest version is <emphasis>PAGESIZE_io-&ver4;</emphasis>.</para> </listitem>
</itemizedlist>
</sect2>
@ -193,16 +199,12 @@ The name of the file is <emphasis>10_rawio-vary-io-1</emphasis>.</para></listite
<sect2>
<title>Modifying Your Driver for the Raw I/O Variable-Size Optimization Patch</title>
<para>Modifications are required for all device drivers using version 2.4.17 patch. However, rebuilding device drivers is beyond the scope of this document.
Additional information is available at <ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.</para>
<para>In previous versions of this patch, changes were enabled for all drivers. However, the 2.4.17 and later versions of the patch enable the changes only for the Adaptec, Qlogic ISP1020, and IBM ServerRAID drivers. All other drivers for version 2.4.17 and later must be modified to make use of the patch by setting the <structfield>can_do_varyio</structfield> bit in the <structfield>Scsi_Host_Template</structfield> structure.</para>
<para>In previous versions of this patch, changes were enabled for all drivers. However, the 2.4.17 and later versions of the patch enable the changes only for the Adaptec aic7xxx and the Qlogic ISP1020 SCSI drivers. All other drivers for version 2.4.17 and later must be modified to make use of the patch.</para>
<para>You will need to modify the code as follows:</para>
<para>Set the <structfield>can_do_varyio</structfield> bit in the <structname>Scsi_Host_Template</structname> structure before calling <structfield>scsi_register ( ).</structfield></para>
<important><para>Drivers that have the raw I/O patch enabled must support buffer heads of variable sizes (b_size) in a single I/O request because <structfield>hardsect_size</structfield> is used until the data buffer is aligned to the 4 K boundary.</para></important>
<note><para>Drivers that have the raw I/O patch enabled must support buffer heads of variable sizes (<structfield>b_size</structfield>) in a single I/O request because <structfield>hardsect_size</structfield> is used until the data buffer is aligned on a 4 K boundary.</para>
<para>Additional information is available on rebuilding Linux device drivers at <ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.</para></note>
</sect2>
</sect1>
@ -211,28 +213,27 @@ Additional information is available at <ulink url="http://www.xml.com/ldd/chapte
<para>This section provides information on the I/O request lock patch, also known as the scsi concurrent queuing patch (sior1), written by Johnathan Lahr. </para>
<para>The I/O request lock patch improves scsi I/O performance on Linux 2.4 multi-processor systems by providing concurrent I/O request queuing. There are significant I/O preformance and CPU utilization improvements possible by enabling multi-processors to concurrently drive multiple block devices.</para>
<para>The I/O request lock patch improves SCSI I/O performance on Linux 2.4 multi-processor systems by providing concurrent I/O request queuing. There are significant I/O performance and CPU utilization improvements possible by enabling multi-processors to concurrently drive multiple block devices.</para>
<para>Initially block I/O requests are queued one at a time holding the global spin lock, <structfield> io_request_lock</structfield>. Once the patch is applied, SCSI requests are queued which holds the specific queue lock targeted by the request. Requests that are made to different devices are queued concurrently, and requests that are made to the same device are queued serially.</para>
<para>Before the patch is applied, block I/O requests are queued one at a time while holding the global spin lock, <structfield>io_request_lock</structfield>. Once the patch is applied, SCSI requests are queued while holding the lock specific to the queue associated with the request. Requests made to different devices are queued concurrently, and requests made to the same device are queued serially.</para>
<sect2>
<title>Locating the Patch</title>
<para>You can download the I/O request patch from Sourceforge at <ulink url="http://sourceforge.net/projects/lse/io"></ulink>. The latest version is <emphasis>sior1-v1.2416</emphasis>.</para>
<para>You can download the I/O request lock patch from SourceForge at <ulink url="http://sourceforge.net/projects/lse/io"></ulink>. The latest version is <emphasis>sior1-v1.2416</emphasis>. Patches that enable concurrent queuing for specific drivers are also available at SourceForge. The patch for the Emulex SCSI/FC driver is <emphasis>lpfc_sior1-v0.249</emphasis>, and the patch for the Adaptec SCSI driver is <emphasis>aic_sior1-v0.249</emphasis>.</para>
<para>Additional patches that enable concurrent queuing can be downloaded from Sourceforge. The patch for the Emulex SCSI/FC is <emphasis>lpfc_sior1-v0.249</emphasis> and the patch for Adaptec SCSI is <emphasis>aic_sior1-v0.249</emphasis> .</para>
</sect2>
<sect2>
<title>Modifying Your Driver for the I/O Request Lock Patch</title>
<para>Modifications are required for all device drivers. However, rebuilding device drivers is beyond the scope of this document.
Additional information is available at <ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.</para>
<para>The I/O request lock patch installs concurrent queuing capability into the SCSI midlayer. Concurrent queuing is
activated for each SCSI adapter device driver. To activate the device, the <structfield>concurrent_queue</structfield> field in the <structfield>Scsi_Host_Template</structfield> must be set when the system registers the driver.</para>
activated for each SCSI adapter device driver. To activate the driver, the <structfield>concurrent_queue</structfield> field in the <structfield>Scsi_Host_Template</structfield> structure must be set when the driver is registered.</para>
<important><para>You activate concurrent queuing when you apply the patch. Concurrent queuing ensures access to the drivers <structfield>request_queue</structfield>. by This access is protected by the <structfield>request_queue.queue_lock</structfield> acquisition.</para></important>
<note><para>Drivers that activate concurrent queuing must ensure that any access of the <structfield>request_queue</structfield> by the driver is protected by the <structfield>request_queue.queue_lock</structfield>.</para>
<para>Additional information is available on rebuilding device drivers at <ulink url="http://www.xml.com/ldd/chapter/book/index.html"></ulink>.</para></note>
</sect2>
</sect1>
@ -254,3 +255,19 @@ activated for each SCSI adapter device driver. To activate the device, the <stru
</sect1>
</article>