An I_T nexus is a pairing of an initiator device and a target device. The devices that request input/output (I/O) operations are referred to as initiators and the devices that perform these operations are referred to as targets. For example, a host computer may be an initiator, and a storage array may be a target. The target may include one or more separate storage devices.
A Host Bus Adapter (HBA) is a hardware device that “connects” the operating system of a host computer and a communication path (e.g., a Small Computer System Interface (SCSI) (American National Standards Institute (ANSI) SCSI Controller Commands-2 (SCC-2) NCITS.318:1998) bus). The HBA manages the transfer of data between the host computer and the communication path.
Internet Small Computer Systems Interface (iSCSI) is a protocol (IETF RFC 3347, published February 2003; IETF Draft, published Jan. 19, 2003) that defines a technique for transporting SCSI commands/data to and from I/O devices across TCP (“Transmission Control Protocol”)-enabled networks (Internet Engineering Task Force (IETF) Request for Comments (RFC) 793, published September 1981). As such, iSCSI acts as a bridge between two independently designed protocols that have significantly different tolerances for, and facilities for, detecting and recovering from network congestion and from errors. iSCSI permits the existence of multiple parallel data paths to the same storage target.
HBA teaming refers to grouping together several HBAs to form a “team,” where each HBA in a team is connected to a particular target and may route data to that target. HBA teams may be built on the iSCSI portal group concept. A portal group may be described as a collection of network portals within an iSCSI Network Entity that collectively support the capability of coordinating a session with connections spanning these portals. HBA teaming may be used with SCSI initiators running a Linux® operating system.
Notwithstanding conventional techniques for load balancing and failover, there is a need in the art for improved failover and load balancing across several HBAs, each of which may have one or more connections to the same target.
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments. It is understood that other embodiments may be utilized and structural and operational changes may be made.
The network adapter component 126 includes network adapter specific commands to communicate with each network adapter 132a, 132b and interfaces between the operating system 110 and each network adapter 132a, 132b. The network adapters 132a, 132b and the network adapter component 126 implement logic to process iSCSI packets, where a SCSI command is wrapped in the iSCSI packet, and the iSCSI packet is wrapped in a TCP packet. The transport protocol layer unpacks the payload from a received Transmission Control Protocol (TCP) (Internet Engineering Task Force (IETF) Request for Comments (RFC) 793, published September 1981) packet and transfers the data to the network adapter component 126 to return to, for example, an application program 108. Further, an application program 108 transmitting data sends the data to the network adapter component 126, which then sends the data to the transport protocol layer to be packaged in a TCP/IP (Internet Protocol (IP) is described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 791, published September 1981) packet before transmission over the network 170.
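Merely for ease of understanding, the nesting of a SCSI command inside an iSCSI packet, which is in turn carried inside a TCP packet, may be pictured with the following simplified C structures; the structures and field names are illustrative assumptions and not the actual iSCSI or TCP wire formats:

```c
/* Simplified illustration of the packet nesting described above; real iSCSI
 * PDUs and TCP segments carry many more fields and strict wire encodings. */
struct scsi_command {
    unsigned char cdb[16];        /* SCSI command descriptor block */
    unsigned int  data_length;    /* expected data transfer length */
};

struct iscsi_pdu {
    struct scsi_command cmd;      /* the SCSI command wrapped in the iSCSI packet */
    unsigned char      *data;     /* optional data carried with the command */
};

struct tcp_packet {
    unsigned short   src_port;    /* TCP source port */
    unsigned short   dst_port;    /* TCP destination port */
    struct iscsi_pdu payload;     /* the iSCSI packet wrapped in the TCP packet */
};
```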
Each network adapter 132a, 132b includes various components implemented in the hardware of the network adapter 132a, 132b. Each network adapter 132a, 132b is capable of transmitting and receiving packets of data over network 170, which may comprise a Local Area Network (LAN), the Internet, a Wide Area Network (WAN), Storage Area Network (SAN), WiFi (Institute of Electrical and Electronics Engineers (IEEE) 802.11a, published 1999; IEEE 802.11b, published Sep. 16, 1999; IEEE 802.11g, published Jun. 27, 2003), Wireless LAN (IEEE 802.11a, published 1999; IEEE 802.11b, published Sep. 16, 1999; IEEE 802.11g, published Jun. 27, 2003), etc.
In particular, network adapter 132a includes bus controller 134a and physical communications layer 136a. Network adapter 132b includes bus controller 134b and physical communications layer 136b. A bus controller 134a, 134b enables each network adapter 132a, 132b to communicate on a computer bus 130, which may comprise any bus interface known in the art, such as a Peripheral Component Interconnect (PCI) bus or PCI express bus (PCI Special Interest Group, PCI Local Bus Specification, Rev 2.3, published March 2002), etc. The physical communications layer 136a, 136b implements Media Access Control (MAC) functionality to send and receive network packets to and from remote data storages over a network 170. In certain embodiments, the network adapters 132a, 132b may implement the Ethernet protocol (IEEE std. 802.3, published Mar. 8, 2002), Fibre Channel (ANSI X3.269-199X, Revision 012, published Dec. 4, 1995; FC Arbitrated Loop, ANSI X3.272-199X, Draft, published Jun. 1, 1995; FC Fabric Generic Requirements, ANSI X3.289-199X, Draft, published Aug. 7, 1996; Fibre Channel Framing and Signaling Interface, ANSI/INCITS 373, Draft, published Apr. 9, 2003; FC Generic Services, ANSI X3.288-199X, Draft, published Aug. 7, 1996), or any other network communication protocol known in the art.
The computer 102 may comprise a computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld computer, etc. Any CPU 104 and operating system 110 may be used. Programs and data in memory 106 may be swapped into and out of storage 154 as part of memory management operations. The storage 154 may comprise an internal storage device or an attached or network accessible storage. Programs in the storage 154 are loaded into the memory 106 and executed by the CPU 104. An input device 150 and an output device 152 are connected to the computer 102. The input device 150 is used to provide user input to the CPU 104 and may be a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art. The output device 152 is capable of rendering information transferred from the CPU 104, or other component, at a display monitor, printer, storage or any other output mechanism known in the art.
In certain embodiments, in addition to the storage drivers 120, the computer 102 may include other drivers. The network adapters 132a, 132b may include additional hardware logic to perform additional operations to process received packets from the computer 102 or the network 170. Further, the network adapters 132a, 132b may implement a transport layer offload engine (TOE) to implement the transport protocol layer in the network adapter as opposed to the computer storage drivers 120 to further reduce computer 102 processing burdens. Alternatively, the transport layer may be implemented in the storage drivers 120 or other drivers (for example, provided by an operating system).
The data storage 140 is connected to network 170 and includes network adapters 142a, 142b. Network adapter 142a includes a bus controller 144a and a physical communications layer 146a. Network adapter 142b includes a bus controller 144b and a physical communications layer 146b. The data storage 140 includes one or more logical units (i.e., “n” logical units, where “n” may be any positive integer value, which in certain embodiments, is less than 128). Merely for ease of understanding, logical unit 0, logical unit 1, and logical unit “n” are illustrated. Each logical unit may be described as a separate storage device. Additionally, a logical unit number (LUN) is associated with each logical unit. In certain embodiments, a network adapter team is organized based on the target and LUN (i.e., each network adapter that can route data to a particular LUN of a target is grouped into one network adapter team), and one network adapter may belong to different network adapter teams.
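Merely for ease of understanding, a network adapter team keyed by target and LUN may be sketched with the following C structure; the type and field names are illustrative assumptions rather than the driver's actual data structures, and the same adapter pointer may appear in more than one team:

```c
#define MAX_TEAM_MEMBERS 8

struct net_adapter;                     /* opaque handle to a network adapter (e.g., an HBA) */

/* Hypothetical team descriptor: one team per (target, LUN) pair. */
struct adapter_team {
    char                target_name[224];             /* iSCSI target name (e.g., an IQN) */
    unsigned int        lun;                           /* logical unit number on that target */
    struct net_adapter *members[MAX_TEAM_MEMBERS];     /* adapters that can route data to this LUN */
    unsigned int        num_members;                   /* number of adapters currently in the team */
};
```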
Various structures and/or buffers (not shown) may reside in memory 106 or may be located in a storage unit separate from the memory 106 in certain embodiments.
An iSCSI SAN configuration provides the ability to balance I/O workload across multiple HBA data paths. Such a configuration uses multiple HBAs to distribute SCSI request flow across the data paths.
In certain embodiments, the load balancing component 122 acts as a functional SCSI wrapper for the iSCSI Host Bus Adapter component 346. Requests to the load balancing component 122 are normal SCSI requests, and results returned are normal SCSI results.
In certain embodiments, the failover component 124 acts as a functional SCSI wrapper for the iSCSI Host Bus Adapter component 346. Requests to the failover component 124 are normal SCSI requests, and results returned are normal SCSI results.
Embodiments provide a load balancing component 122 that improves system throughput by utilizing the parallel data paths to the same storage target simultaneously. The load balancing component 122 distributes the I/O from a computing system (e.g., a server computer) among network adapters in a configured team of network adapters (e.g., Host Bus Adapters). This results in faster I/O operations and reduces data path congestion in shared storage (e.g., multiple-server) configurations. Additionally, the load balancing component 122 maximizes the available bandwidth for the shared storage configurations. Embodiments also provide both static load balancing and dynamic load balancing.
In block 402, the load balancing component 122 computes a load balancing value for each data path in a team. The load balancing value is computed by dividing the total number of bytes by the number of bytes transferred on the data path to generate a first value and multiplying the first value by the load balancing share of the data path. In block 404, the load balancing component 122 determines a maximum value of all the load balancing values that have been computed for the data paths in a team. If multiple data paths have the same maximum value, one of the data paths may be selected using any one of various techniques (e.g., the first data path found with that value is selected or the data path that has not recently been selected is selected). In block 406, the load balancing component 122 selects a data path with the maximum value on which to route data.
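Merely for ease of understanding, the computation of block 402 may be expressed as a formula, with the symbols below chosen for illustration:

```latex
% V_i : load balancing value for data path i (block 402)
% B   : total number of bytes transferred by the team
% b_i : number of bytes transferred on data path i
% s_i : load balancing share of data path i
\[
  V_i = \frac{B}{b_i} \times s_i
\]
% Block 406 routes data on the path with the maximum V_i.
```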
With static load balancing, when a new SCSI command (read/write) comes from the SCSI mid level 314, the load balancing component 122 implements a static load balancing technique to determine a data path on which to send the command. The load balancing component 122 attempts to maintain a specified read or write load balancing share for each of the data paths. The load balancing share may be specified by, for example, a system administrator or other user.
In certain embodiments, the load balancing share for each data path may be different for reads and writes of data. That is, a data path acting as a read data path may have a different load balancing share than when the data path is acting as a write data path.
The following is sample pseudocode for static load balancing and is calculated separately for read and write data paths in accordance with certain embodiments:
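Merely for ease of understanding, the static selection of blocks 402-406 may also be sketched in C as follows; all identifiers are illustrative assumptions rather than the actual driver pseudocode, and a driver would keep separate byte counts and shares for read data paths and write data paths:

```c
/* Illustrative static load balancing selection (blocks 402-406); field and
 * function names are assumptions. The caller maintains per-path byte counts
 * (separately for reads and writes) and the configured load balancing shares. */
struct data_path {
    unsigned long long bytes_xferred;  /* bytes transferred on this data path */
    double             lb_share;       /* configured load balancing share (percent) */
};

/* Returns the index of the data path with the maximum load balancing value. */
static int select_static_path(const struct data_path *paths, int num_paths,
                              unsigned long long total_bytes)
{
    int    best = 0;
    double best_value = -1.0;

    for (int i = 0; i < num_paths; i++) {
        double value;

        if (paths[i].bytes_xferred == 0)
            return i;                  /* a path with no traffic yet is favored immediately */

        /* Block 402: (total bytes / bytes on this path) * path's share. */
        value = ((double)total_bytes / (double)paths[i].bytes_xferred)
                * paths[i].lb_share;

        /* Blocks 404-406: keep the maximum; ties keep the first path found. */
        if (value > best_value) {
            best_value = value;
            best = i;
        }
    }
    return best;
}
```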
In block 502, the load balancing component 122 receives input parameters for dynamic load balancing. The input parameters may include, for example, a list of the data paths in the team, a total number of bytes transferred by the team in a last time frame, a load balancing share of each data path in the last time frame, and a number of bytes transferred on each data path in the last time frame.
In block 504, the load balancing component 122 determines that the timer has fired. The load balancing component 122 implements dynamic load balancing each time the timer fires, which occurs at every “TimerInterval”. When the timer fires, the load balancing shares are recomputed. In certain embodiments, during the length of the interval, a SCSI command is directed on a data path based on the recomputed values using static load balancing.
In block 506, the load balancing component 122 computes an actual load balancing share (also referred to as “ActualLBShare”) and a difference load balancing value (also referred to as “DifferenceLB”) for each data path in the team. The actual load balancing share is computed by dividing the number of bytes transferred on the data path by the total number of bytes to generate a first value and multiplying the first value by 100. The difference load balancing value is computed by subtracting the load balancing share from the actual load balancing share.
In block 508, the load balancing component 122 selects the next data path in a team, starting with the first data path. In block 510, the load balancing component 122 determines whether all data paths have been selected. If so, processing is done, otherwise, processing continues to block 512.
In block 512, the load balancing component 122 determines whether the load balancing share is less than the actual load balancing share for the selected data path. If so, processing continues to block 514.
In block 514, the load balancing component 122 determines whether the difference between the load balancing share and the actual load balancing share is less than a change threshold. If so, processing continues to block 516; otherwise, processing loops back to block 508.
Note that the load balancing share of one data path is decreased (block 516); therefore, in order for the load balancing shares of all data paths to total 100 percent, the load balancing share of another data path is increased (block 518).
The following is sample pseudocode for dynamic load balancing in accordance with certain embodiments:
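Merely for ease of understanding, the recomputation of blocks 506-518 may also be sketched in C as follows; all identifiers, the change threshold, and the adjustment step are illustrative assumptions rather than the actual driver pseudocode:

```c
/* Illustrative dynamic share recomputation performed at each TimerInterval;
 * the threshold, adjustment step, and names are assumptions. */
#define CHANGE_THRESHOLD 10.0          /* hypothetical change threshold (percentage points) */
#define ADJUST_STEP       5.0          /* hypothetical share adjustment (percentage points) */

struct lb_path {
    unsigned long long bytes_xferred;  /* bytes transferred on this path in the last time frame */
    double             lb_share;       /* load balancing share (percent); team shares total 100 */
};

static void recompute_shares(struct lb_path *paths, int num_paths,
                             unsigned long long total_bytes)
{
    if (total_bytes == 0 || num_paths < 2)
        return;                        /* nothing to rebalance */

    for (int i = 0; i < num_paths; i++) {
        /* Block 506: actual share and difference for this path. */
        double actual_share = ((double)paths[i].bytes_xferred /
                               (double)total_bytes) * 100.0;
        double difference   = actual_share - paths[i].lb_share;

        /* Blocks 512-514: adjust a path carrying more than its share while the
         * deviation is below the change threshold and the path has share to give up. */
        if (paths[i].lb_share < actual_share &&
            difference < CHANGE_THRESHOLD &&
            paths[i].lb_share >= ADJUST_STEP) {
            int other = (i + 1) % num_paths;       /* some other member of the team */

            paths[i].lb_share     -= ADJUST_STEP;  /* block 516: decrease this path's share */
            paths[other].lb_share += ADJUST_STEP;  /* block 518: increase another's so shares total 100 */
        }
    }
}
```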
With dynamic load balancing, the static load balancing technique may be applied once the LoadBalancingShares have been recomputed at each “TimerInterval”. In certain embodiments, switching between static and dynamic load balancing is performed by a user. If a switch occurs, the LoadBalancingShares may be recomputed.
Thus, the load balancing component 122 reduces data path congestion in shared storage networks, maximizes available bandwidth for the entire system, and leads to faster I/O operations.
In certain embodiments, the static and dynamic load balancing techniques may be deployed in a Linux® driver for an iSCSI Host Bus Adapter. The addition of this technology to any iSCSI Linux® driver increases the overall throughput in an iSCSI SAN and reduces data path congestion in shared storage networks. In certain embodiments, the load balancing component 122 is a load balancing driver, and both the load balancing driver and the iSCSI driver may exist as a single driver within a Linux® SCSI subsystem.
Embodiments also provide a failover component 124 that implements a high-availability failover technique that utilizes the parallel data paths to the same storage target to provide an iSCSI networking environment with reliable connectivity between storage servers during mission-critical tasks and in the face of possible component or network connection failures.
There are a number of possible cases of failure in a SAN configuration, including, for example: 1) connection failure in the path from the initiator to a specific iSCSI target; 2) failure of a network adapter (e.g., HBA) on the initiator side; and, 3) failure of a network adapter (e.g., HBA) or a disk on the target side. Embodiments are able to handle these failure cases. In particular, even failure of a network adapter (e.g., HBA) or a disk on the target side may be treated as a connection failure.
Another possible case of failure is a connection failure (e.g., the connection is lost between computer 102 and Ethernet switches 210, 212). However, the failover component 124, in combination with other technologies (e.g., Redundant Array of Independent Disks (RAID) and clustering software provided by the host operating system) and a high-availability target storage system, covers this case of failure.
Embodiments allow for both failover only mode and failover and load balancing mode. For example, an iSCSI Linux® driver may run in either failover only mode or failover and load balancing mode. Network adapter teaming is used for both failover and load balancing. Also, multiple network adapter teams may share a network adapter, and all network adapter data paths belong to a network adapter team, including a session with a single data path.
In failover mode, the primary data path to target1 760 flows over HBA1 750, while HBA2 752 is designated as the secondary failover data path. An initiator in a portal group logs into target1 760 by using a pre-assigned Initiator Session IDentifier (ISID) defining the session and receives a Target Session IDentifier (TSID). One or more connections are established for the session for the secondary data path. Thus, both HBAs are logged into target1 760, with HBA1 750 configured as primary and actively accessing target1 760. HBA2 752, marked as secondary, remains quiescent with respect to target1 760. However, some activity occurs on the quiescent HBA2 752, such as TCP or iSCSI traffic that keeps the connection alive. If the data flow to the target1 760 port attached through the primary HBA1 750 fails, the failover component 124 instantly redirects data flow through the secondary HBA2 752. Thus, as long as any data path exists that maintains connectivity between an iSCSI HBA and a remote target, the data flow to the remote target continues in an uninterrupted fashion.
In particular, commands from the host operating system (e.g., SCSI_Host1 command 722, SCSI_Host2 command 724) are stored in a command data structure 730 for processing by the failover component 124. The failover component 124 maintains a context for each host (e.g., SCSI_Host1 context 732, SCSI_Host2 context 734). The failover component 124 retrieves a command from the command data structure 730, determines an HBA to be used for routing the command to target1 760, and invokes the routing command 740, which routes the command to the hardware (i.e., to the determined HBA 750 or 752).
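Merely for ease of understanding, the routing step just described may be sketched in C as follows; the structures, field names, and the route_to_hba() call are illustrative assumptions rather than the actual driver interfaces:

```c
struct hba;                              /* opaque handle to a host bus adapter */
struct scsi_cmd;                         /* opaque SCSI command retrieved from the command data structure */

/* Hypothetical per-host context kept by the failover component. */
struct host_context {
    struct hba *primary;                 /* e.g., HBA1, actively accessing the target */
    struct hba *secondary;               /* e.g., HBA2, the failover data path */
    int         primary_failed;          /* set when the primary data path is lost */
};

extern void route_to_hba(struct hba *h, struct scsi_cmd *cmd);  /* assumed routing call to the hardware */

/* Failover-mode routing: use the primary HBA unless it has failed, in which
 * case the data flow is redirected through the secondary HBA. */
static void route_command(struct host_context *ctx, struct scsi_cmd *cmd)
{
    struct hba *chosen = ctx->primary_failed ? ctx->secondary : ctx->primary;

    route_to_hba(chosen, cmd);
}
```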
In failover and load balancing mode, a secondary network adapter (e.g., HBA2 752) actively participates in maintaining iSCSI traffic and does not remain quiescent with respect to the target (e.g., target1 760). The failover and load balancing components 124, 122 work together to distribute SCSI command flow among the network adapters (e.g., HBAs 750, 752) according to configurable load balancing shares for each team member (e.g., in the case of static load balancing). In case only one team member remains functional in a team, the failover and load balancing components 124, 122 enter failover mode and direct all iSCSI traffic over the healthy network adapter. Load balancing shares may be used to control switching between failover mode and failover and load balancing mode. In particular, if one data path has a 100 percent load balancing share, then failover mode is used.
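Merely for ease of understanding, the share-based mode check may be sketched in C as follows; the enumeration and function name are illustrative assumptions:

```c
/* Hypothetical mode check: a single data path holding a 100 percent load
 * balancing share implies failover mode; otherwise SCSI commands are
 * distributed according to the configured shares. */
enum team_mode { FAILOVER_MODE, FAILOVER_AND_LOAD_BALANCING_MODE };

static enum team_mode team_mode_for(const double *lb_shares, int num_paths)
{
    for (int i = 0; i < num_paths; i++) {
        if (lb_shares[i] >= 100.0)
            return FAILOVER_MODE;
    }
    return FAILOVER_AND_LOAD_BALANCING_MODE;
}
```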
Failover may be connection based or session based. In connection based failover, the connections in a team belong to one session. In session based failover, a number of different sessions belong to one team.
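Merely for ease of understanding, the two failover styles may be captured in a team descriptor along the following lines; the names are illustrative assumptions:

```c
/* Hypothetical descriptor distinguishing connection based and session based failover. */
enum failover_style {
    CONNECTION_BASED,    /* all connections in the team belong to one session */
    SESSION_BASED        /* a number of different sessions belong to one team */
};

struct failover_team {
    enum failover_style style;
    unsigned int        num_sessions;     /* 1 for connection based failover */
    unsigned int        num_connections;  /* total connections across the team's sessions */
};
```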
The iSCSI target1 806 returns a status indicator of “good” to the SCSI low level driver 804, which forwards the status indicator of “good” to the failover/load balancing components 802, which, in turn, forward the status indicator of “good” to the SCSI mid layer 800.
In this example, the failover/load balancing components 802 receive a notification from the SCSI low level driver 804 that HBA1 has failed, for example, due to a lost link between the SCSI low level driver 804 and HBA1 (block 820). The SCSI mid layer 800 issues another SCSI_Host command to a SCSI low level driver 804 for iSCSI target1 806 (block 830). The failover/load balancing components 802 intercept the command and send the command on secondary HBA2 (block 832). A SCSI low level driver 812 receives the command and sends the command on HBA2 (block 834). The command is sent by HBA2 to iSCSI target1 806.
The iSCSI target1 806 returns a status indicator of “good” to the SCSI low level driver 804, which forwards the status indicator of “good” to the failover/load balancing components 802, which, in turn, forward the status indicator of “good” to the SCSI mid layer 800.
Thus, embodiments increase the reliability of the storage network and provide an easy mechanism to integrate load balancing and failover into the Linux® iSCSI driver. Moreover, one SCSI low-level driver may be used. In particular, the failover and load balancing features may be integrated with the SCSI low-level driver without requiring two separate drivers. In certain embodiments, the failover driver and the iSCSI driver may exist as a single driver within the Linux® SCSI subsystem. This solution also enables integrating other features, such as load balancing, into the iSCSI driver.
Linux is a registered trademark of Linus Torvalds in the United States and/or other countries.
The described embodiments may be implemented as a method, apparatus or article of manufacture using programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The terms “article of manufacture” and “circuitry” as used herein refer to a state machine, code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as magnetic storage media (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), and volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. When the code or logic is executed by a processor, the circuitry may include the medium including the code or logic as well as the processor that executes the code loaded from the medium. Thus, the “article of manufacture” may comprise the medium in which the code is embodied. Additionally, the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. Of course, those skilled in the art will recognize that many modifications may be made to this configuration. Additionally, the devices, adapters, etc., may be implemented in one or more integrated circuits on the adapter or on the motherboard.
The illustrated operations of
The foregoing description of various embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or limiting. Many modifications and variations are possible in light of the above teachings.