This invention is related to application Ser. No. 10/128,656, filed Apr. 22, 2002, now U.S. Pat. No. 7,165,258, issued Jan. 16, 2007, entitled “SCSI-BASED STORAGE AREA NETWORK”; application Ser. No. 10/131,793, filed Apr. 22, 2002, entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”; provisional application Ser. No. 60/374,921, filed Apr. 22, 2002, entitled “INTERNET PROTOCOL CONNECTED STORAGE AREA NETWORK”; application Ser. No. 10/356,073, filed Jan. 31, 2003, entitled “STORAGE ROUTER WITH INTEGRATED SCSI SWITCH”; and application Ser. No. 10/128,657, filed Apr. 22, 2002, entitled “METHOD AND APPARATUS FOR EXCHANGING CONFIGURATION INFORMATION BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”, all of which are hereby incorporated by reference.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings hereto: Copyright © 2003, Cisco Systems, Inc. All Rights Reserved.
This invention relates generally to network addressing, and more particularly to providing address failover capability for network interfaces on an application gateway device.
Many devices capable of being attached to a network, such as personal computers, servers, routers and switches, have more than one network interface. Typically, multiple network interfaces are used by a network device to provide connectivity to differing networks or systems, to provide a redundant path to a network, or to provide increased network throughput (i.e., increased bandwidth).
Occasionally a network interface may fail. When this happens, software applications using the network interface are no longer able to use the network interface to send and receive data, possibly resulting in the failure of the software application.
In some systems, when a network interface fails, the system attempts to migrate the software application to another network device on the same network as the device experiencing the network interface failure. The application then runs on the new network device, often in a manner that is transparent to the users on the system. The ability to migrate an application to a new device is sometimes referred to as “failover.”
Failover capability is useful in providing fault-tolerant applications; however, there are problems associated with failing over to a second network device. It often takes a substantial amount of time to accomplish the failover, because application configuration and data must be transferred to the second network device. A user will often notice a delay in the response of the system while the failover takes place. In addition, network connections between the failed-over application and other hosts and applications may need to be reestablished, because the new application will reside on a network device having a different network address than the original network device. This also can take a substantial amount of time and may result in the loss of data.
In view of the above problems and issues, there is a need in the art for the present invention.
The above-mentioned shortcomings, disadvantages and problems are addressed by the present invention, which will be understood by reading and studying the following specification.
Systems and methods provide network address failover capability within an application gateway device. In one aspect, a system has a first network interface and a second network interface. The system receives a set of configuration data, which may include a first network address for the first network interface and a second network address for the second network interface. At startup or during later operation, the system may detect the failure of the first network interface. The configuration data may be analyzed to determine whether the first network address can be used on the second network interface. If so, the first network address is moved from the first network interface to the second network interface.
The present invention describes systems, methods, and computer-readable media of varying scope. In addition to the aspects and advantages of the present invention described in this summary, further aspects and advantages of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.
In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the present invention.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
In the Figures, the same reference number is used throughout to refer to an identical component which appears in multiple Figures. Signals and connections may be referred to by the same reference number or label, and the actual meaning will be clear from its use in the context of the description.
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
Some embodiments of the invention operate in an environment of systems and methods that provide a means for Fibre Channel based Storage Area Networks (SANs) to be accessed from TCP/IP network hosts.
In one embodiment, storage router 110 provides IPv4 router functionality between a Gigabit Ethernet and a Fibre Channel interface. In one such embodiment, static routes are supported. In addition, storage router 110 supports a configurable MTU size for each interface, and has the ability to reassemble and refragment IP packets based on the MTU of the destination interface.
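By way of illustration only, the refragmentation decision mentioned above can be pictured with the following minimal sketch. The function and its arguments are hypothetical and not the storage router's implementation; only the 8-byte fragment-offset granularity is taken from standard IP fragmentation.

```python
# Illustrative sketch only: planning how a reassembled IP packet would be
# refragmented to fit the MTU of the destination interface. The function and
# its arguments are hypothetical.

def plan_fragments(payload_len, ip_header_len, dest_mtu):
    """Return the payload sizes of the fragments needed to fit dest_mtu."""
    max_payload = dest_mtu - ip_header_len
    # Fragment offsets are expressed in 8-byte units, so every fragment's
    # payload except the last must be a multiple of 8 bytes.
    max_payload -= max_payload % 8
    if payload_len <= max_payload:
        return [payload_len]          # fits as-is; no refragmentation needed
    sizes, remaining = [], payload_len
    while remaining > 0:
        chunk = min(max_payload, remaining)
        sizes.append(chunk)
        remaining -= chunk
    return sizes

# Example: a reassembled 4000-byte payload leaving through a 1500-byte-MTU interface.
print(plan_fragments(4000, 20, 1500))   # -> [1480, 1480, 1040]
```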
In one embodiment, storage router 110 acts as a gateway, converting SCSI protocol between Fibre Channel and TCP/IP. Storage router 110 is configured in such an embodiment to present Fibre Channel devices as iSCSI targets, providing the ability for clients on the IP network to directly access storage devices.
In one embodiment, SCSI routing occurs in storage router 110 through the mapping of physical storage devices to iSCSI targets. An iSCSI target (also called a logical target) is an arbitrary name for a group of physical storage devices. Mappings between an iSCSI target and multiple physical devices can be established using configuration programs on storage router 110. An iSCSI target always contains at least one Logical Unit Number (LUN). Each LUN on an iSCSI target is mapped to a single LUN on a physical storage target.
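By way of illustration only, the mapping described above can be pictured as a table from logical (target, LUN) pairs to physical (device, LUN) pairs. The sketch below uses hypothetical target and device names and is not the storage router's configuration syntax.

```python
# Illustrative mapping of iSCSI (logical) targets to physical storage: each
# logical target groups one or more LUNs, and each logical LUN maps to exactly
# one LUN on one physical storage target. All names here are hypothetical.

iscsi_targets = {
    "iqn.example.storage.target0": {                     # logical target (arbitrary name)
        0: ("fc-disk-21:00:00:e0:8b:00:00:01", 0),       # logical LUN 0 -> physical LUN 0
        1: ("fc-disk-21:00:00:e0:8b:00:00:02", 3),       # logical LUN 1 -> physical LUN 3
    },
}

def resolve(target, lun):
    """Resolve a logical (target, LUN) pair to its single physical (device, LUN) pair."""
    return iscsi_targets[target][lun]

print(resolve("iqn.example.storage.target0", 1))   # -> ('fc-disk-21:00:00:e0:8b:00:00:02', 3)
```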
In operation, if a network interface on storage router 110 fails, the SCSI router instances using the interface may have their respective IP network addresses failed over to a secondary network interface. For example, assume that the network interface being used by SCSI router 105.2 fails. The IP network address associated with SCSI router 105.2 may be moved (i.e., failed over) to the same network interface as SCSI router 105.1. The movement is generally transparent both to the SCSI router instance 105.2 and to hosts and applications that are communicating via the network with SCSI router instance 105.2. Further details on the failover of the IP network address are provided below.
Although the exemplary environment illustrates two members 110.1 and 110.2 of cluster 112, the invention is not limited to any particular number of members of a cluster.
Further details on the operation of the above can be found in U.S. patent application Ser. No. 10/131,793, entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”, and in U.S. patent application Ser. No. 10/356,073, entitled “STORAGE ROUTER WITH INTEGRATED SCSI SWITCH”, both of which have been previously incorporated by reference.
Router portion 210, which in the exemplary embodiment complies with draft 08 and later versions of the iSCSI protocol and incorporates commercially available router technology, such as the 5420 and 5428 Storage Routers from Cisco Systems, Inc. of San Jose, Calif., includes Gigabit Ethernet (GE) ports 211.1 and 211.2, console port 212, management port 213, high-availability (HA) port 214, bridge-and-buffer module 215, interface software 216, router processor 217, and router-to-switch interface 218.
GE ports 211.1 and 211.2 couple the storage router to an IP network for access by one or more servers or other computers, such as servers or iSCSI hosts (in
Console port 212 couples to a local control console (not shown). In the exemplary embodiment, this port takes the form of an RS-232 interface.
Management port 213 provides a connection for managing and/or configuring storage router 110. In the exemplary embodiment, this port takes the form of a 10/100 Ethernet port and may be assigned the base MAC address for the router-switch.
HA port 214 provides a physical connection for high-availability communication with another router-switch, such as storage router 110 in
Bridge-and-buffer module 215, which is coupled to GE ports 211.1 and 211.2, provides router services that are compliant with draft 08 and later versions of the iSCSI protocol. In the exemplary embodiment, module 215 incorporates a Peripheral Component Interface (PCI) bridge, such as the GT64260 from Marvell Technology Group, Ltd. of Sunnyvale, Calif. Module 215 also includes a 64-megabyte flash file system, a 1-megabyte boot flash, and a 256-megabyte non-volatile flash memory (not shown separately). Configuration memory 230 may be part of the flash file system, the boot flash, or the non-volatile flash memory, or it may be a separate non-volatile flash memory. In addition, in alternative embodiments, configuration memory 230 may be part of a hard disk, CD-ROM, DVD-ROM or other persistent memory (not shown). The invention is not limited to any particular type of memory for configuration memory 230.
In addition to data and other software used for conventional router operations, module 215 includes router-switch interface software 216. Router-switch software 216 performs iSCSI routing between servers and the storage devices. The software includes an integrated router-switch command-line interface module CLI and a web-based graphical-user-interface module (GUI) for operation, configuration, administration, maintenance, and support of the router-switch 110. Both the command-line interface and the graphical user interface are accessible from a terminal via one or both of the ports 213 and 214. Additionally, to facilitate management activities, interface software 216 includes an SNMP router-management agent AGT and an MIB router handler HD. (SNMP denotes the Simple Network Management Protocol, and MIB denotes Management Information Base.) The agent and handler cooperate with counterparts in switch portion 220 (as detailed below) to provide integrated management and control of router and switching functions in router-switch 200.
Router processor 217, in the exemplary embodiment, is implemented as a 533-MHz MPC7410 PowerPC from Motorola, Inc. of Schaumburg, Ill. This processor includes a 1-megabyte local L2 cache (not shown separately). In the exemplary embodiment, router processor 217 runs a version of the VxWorks operating system from Wind River Systems, Inc. of Alameda, Calif. To support this operating system, the exemplary embodiment also provides means for isolating file allocation tables from other high-use memory areas (such as areas where log and configuration files are written).
Coupled to router processor 217, as well as to bridge-and-buffer module 215, is router-to-switch (RTS) interface 218. RTS interface 218 includes N/NL switch-interface ports 218.1 and 218.2 and management-interface port 218.3, where the port type of N or NL is determined by negotiation. An N-type port acts as a Fibre Channel point-to-point port, while an NL-type port negotiates as a loop.
Switch-interface ports 218.1 and 218.2 are internal Fibre Channel (FC) interfaces through which the router portion conducts I/O operations with the switch portion. When a mapping to a FC storage device is created, the router-switch software automatically selects one of the switch-interface ports to use when accessing the target device. The internal interfaces are selected at random and evenly on a per-LUN (logical unit number) basis, allowing the router-switch to load-balance between the two FC interfaces. The operational status of these internal FC interfaces is monitored by each active SCSI Router application running on the switch-router. The failure of either of these two interfaces is considered a unit failure, and if the switch-router is part of a cluster, all active SCSI Router applications will fail over to another switch-router in the cluster. Other embodiments allow operations to continue with the remaining switch-interface port. Still other embodiments include more than two switch-interface ports.
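By way of illustration only, the per-LUN selection policy described above might be sketched as follows. The port labels 218.1 and 218.2 come from the description; the data structures and function are hypothetical.

```python
# Sketch of the per-LUN selection policy described above: each mapped LUN is
# assigned one of the two internal switch-interface ports at random, so that
# over many LUNs the load spreads roughly evenly across both FC interfaces.
import random

SWITCH_PORTS = ["218.1", "218.2"]
lun_to_port = {}

def assign_port(target, lun):
    """Pick (and remember) an internal FC port for this LUN mapping."""
    key = (target, lun)
    if key not in lun_to_port:
        lun_to_port[key] = random.choice(SWITCH_PORTS)
    return lun_to_port[key]

for lun in range(4):
    print(lun, assign_port("iqn.example.storage.target0", lun))
```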
In the exemplary embodiment, the N/NL switch-interface ports can each use up to 32 World Wide Port Names (WWPNs). The WWPNs for port 218.1 are computed as 28+virtual port+base MAC address, and the WWPNs for port 218.2 are computed as 29+virtual port+base MAC address. Additionally, switch-interface ports 218.1 and 218.2 are hidden from the user; the one exception is the WWPN of each internal port. The internal WWPNs are called “initiator” WWPNs. Users who set up access control by WWPN on their FC devices configure those devices to allow access to both initiator WWPNs.
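By way of illustration only, the WWPN derivation might be sketched as below. The offsets 28 and 29 and the “virtual port + base MAC address” terms are taken verbatim from the description; treating the sum as the low-order bytes of a 64-bit WWPN is an assumption made purely for illustration.

```python
# Hedged sketch of the WWPN derivation described above. The offsets and the
# formula are taken from the text as written; the 64-bit encoding is assumed.

PORT_OFFSETS = {"218.1": 28, "218.2": 29}

def wwpn(port, virtual_port, base_mac):
    """Compute an illustrative WWPN for one of the internal switch-interface ports."""
    mac_int = int(base_mac.replace(":", ""), 16)           # base MAC as a 48-bit integer
    value = PORT_OFFSETS[port] + virtual_port + mac_int    # formula as stated in the text
    return ":".join(f"{b:02x}" for b in value.to_bytes(8, "big"))

# Each internal port can use up to 32 WWPNs (virtual ports 0 through 31).
print(wwpn("218.1", 0, "00:05:30:00:12:34"))
```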
Switch-interface port 218.3 is used to exchange configuration data and get operational information from switch portion 220 through its management-interface port 224. In the exemplary embodiment, switch-interface port 218.3 is a 10/100 Ethernet port. In the exemplary embodiment, this exchange occurs under the control of a Switch Management Language (SML) Application Program Interface (API) that is part of interface software 216. One example of a suitable API is available from QLogic Corporation of Aliso Viejo, Calif. Ports 218.1, 218.2, and 218.3 are coupled respectively to FC interface ports 221.1 and 221.2 and interface port 224 of switch portion 220.
Switch portion 220, which in the exemplary embodiment incorporates commercially available technology and supports multiple protocols including IP and SCSI, additionally includes internal FC interface ports 221.1 and 221.2, an FC switch 222, external FC ports (or interfaces) 223.1-223.8, a management interface port 224, and a switch processor module 225.
FC interface ports 221.1 and 221.2 are coupled respectively to ports 218.1 and 218.2 of the router-to-switch interface via internal optical fiber links, thereby forming internal FC links. In the exemplary embodiment, each FC interface supports auto-negotiation as either an F or FL port.
FC switch 222, in the exemplary embodiment, incorporates a SANbox2-16 FC switch from QLogic Corporation. This SANbox2 switch includes QLogic's Itasca switch ASIC (application-specific integrated circuit.) Among other things, this switch supports Extended Link Service (ELS) frames that contain manufacturer information.
FC ports 223.1-223.8, which adhere to one or more FC standards or other desirable communications protocols, can be connected as point-to-point links, in a loop or to a switch. For flow control, the exemplary embodiment implements a Fibre Channel standard that uses a look-ahead, sliding-window scheme, which provides a guaranteed delivery capability. In this scheme, the ports output data in frames that are limited to 2148 bytes in length, with each frame having a header and a checksum. A set of related frames for one operation is called a sequence.
Moreover, the FC ports are auto-discovering and self-configuring and provide 2-Gbps full-duplex, auto-detection for compatibility with 1-Gbps devices. For each external FC port, the exemplary embodiment also supports: Arbitrated Loop (AL) Fairness; Interface enable/disable; Linkspeed settable to 1 Gbps, 2 Gbps, or Auto; Multi-Frame Sequence bundling; Private (Translated) Loop mode.
Switch processor module 225 operates the FC switch and includes a switch processor (or controller) 225.1 and associated memory that includes a switch management agent 225.2 and a switch MIB handler 225.3. In the exemplary embodiment, switch processor 225.1 includes an Intel Pentium processor and runs a Linux operating system. Additionally, switch processor module 225 has its own software image, initialization process, configuration commands, command-line interface, and graphical user interface (not shown). (In the exemplary embodiment, this command-line interface and graphical user interface are not exposed to the end user.) A copy of the switch software image for the switch portion is maintained as a tar file 226 in bridge-and-buffer module 215 of router portion 210.
Further details on the operation of the above-described system, including high-availability embodiments, can be found in application Ser. No. 10/128,656, entitled “SCSI-BASED STORAGE AREA NETWORK”, application Ser. No. 10/131,793, entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”, and provisional application Ser. No. 60/374,921, entitled “INTERNET PROTOCOL CONNECTED STORAGE AREA NETWORK”, all of which have been previously incorporated by reference.
The method begins when a system executing the method receives configuration data (block 305). In some embodiments, the configuration data includes the network addresses for applications running on the application gateway device, and may also specify the primary and secondary network interfaces to which each network address is to be assigned. In some embodiments, the network address is an IP network address.
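By way of illustration only, the configuration data described above might take a shape similar to the following sketch; all field names and values are hypothetical.

```python
# Hypothetical shape of the configuration data described above: one record per
# application network address, naming the primary interface it normally uses
# and a secondary interface eligible for failover. Illustrative only.

config = [
    {
        "application": "scsi-router-105.2",   # hypothetical application instance
        "ip_address": "10.1.1.42/24",         # IP network address used by the instance
        "primary_interface": "ge0",           # interface the address starts on
        "secondary_interface": "ge1",         # interface eligible to receive the address
        "vlan": None,                         # optional VLAN membership
    },
]
```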
At some point during the operation of the system, the system may detect the failure of a network interface (block 310). The failure may be detected either at startup time, in which case the secondary network interface may be used, or the failure may be detected after startup. In some embodiments of the invention, the failing network interface must be down for two seconds in order for a failure to be determined.
If the failure occurs after startup, the configuration data is analyzed to determine if the network address assigned to the first (failing) network interface can be failed over to the second network interface (block 315). Various embodiments of the invention may use various factors in determining if the network address may be failed over from a first network interface to a second network interface. For example, one factor that may be analyzed is whether or not the network interfaces are connected to the same network. If not, the network address may not be failed over. Additionally, some embodiments of the invention analyze the configuration data to determine if the first and second network interfaces are on the same subnet. If not, the network address may not be failed over.
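By way of illustration only, the same-network and same-subnet factors might be checked as in the following sketch, which uses Python's standard ipaddress module; the addresses shown are hypothetical.

```python
# Sketch of the same-subnet factor described above: the address is eligible
# for failover only if the failing address and the second interface share a
# subnet (and therefore a network). Addresses are hypothetical.
import ipaddress

def same_subnet(failing_addr, second_if_addr):
    """True if the failing address and the second interface's address share a subnet."""
    return (ipaddress.ip_interface(failing_addr).network
            == ipaddress.ip_interface(second_if_addr).network)

print(same_subnet("10.1.1.42/24", "10.1.1.7/24"))   # True  -> may be failed over
print(same_subnet("10.1.1.42/24", "10.1.2.7/24"))   # False -> may not be failed over
```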
Additionally, some embodiments of the invention support VLANs (Virtual Local Area Networks). In these embodiments, if the first network address and network interface are on a VLAN, the configuration data is analyzed to determine whether the second network interface can support the same VLAN. If not, the network address may not be failed over to the second network interface. In some embodiments executing the VTP protocol, a switch participating in the VLAN will inform the network interfaces which VLANs are acceptable. In alternative embodiments, the acceptable VLANs are configured.
Furthermore, in clustered environments, such as those described in
A further check performed by some embodiments of the invention is to determine if the second network interface can support an additional network address. In some embodiments, each network interface can support up to fifteen network addresses. If the second network interface is at the maximum, the network address may not be failed over.
Similarly, the system may check to determine if the second network interface can support an additional MAC address. If not, the network address may not be failed over.
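By way of illustration only, the two capacity checks above might be sketched as follows. The limit of fifteen network addresses per interface comes from the description; the MAC address limit is left as a parameter because no specific number is given, and the interface record is hypothetical.

```python
# Sketch of the capacity checks described in the preceding two paragraphs.

MAX_ADDRESSES_PER_INTERFACE = 15    # per-interface address limit from the description

def can_accept_failover(second_if, mac_limit):
    """True if the second interface can take on one more IP address and one more MAC."""
    return (len(second_if["addresses"]) < MAX_ADDRESSES_PER_INTERFACE
            and len(second_if["macs"]) < mac_limit)

ge1 = {"addresses": ["10.1.1.7/24"], "macs": ["00:05:30:00:12:35"]}
print(can_accept_failover(ge1, mac_limit=4))   # True in this illustration
```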
After analyzing the configuration data as described above, the system will determine if the network address can be failed over from a failed first network interface to a second network interface (block 320). If so, the network address is moved to the second network interface (block 325) and applications using the first network interface continue to operate as if the failure did not occur (note that some data may need to be retransmitted, however this is typically handled by the network protocol layers and is typically transparent to the application). If not, the network address remains associated with the first network interface and the application may no longer be able to send or receive data to and from the network.
Additionally, any static routes associated with the network address are removed from routing tables on the system (block 344).
In some embodiments of the invention, ARP (Address Resolution Protocol) entries associated with the first network address are removed from the system (block 346).
Finally, in some embodiments, any cached routes associated with the network address are flushed (i.e. removed) from the system (block 348). In some embodiments, cached routes associated with TCP, UDP and IP protocols are flushed.
The system then proceeds to prepare to associate the network address with the second network interface. The network address is assigned to the second network interface (block 350). In some embodiments, the MAC address that was associated with the network address on the first interface is moved to the second interface (block 352).
In some embodiments, the static routes that were removed at block 344 above are reinstalled on the system and associated with the second network interface (block 354).
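By way of illustration only, the sequence described in blocks 344 through 354 might be sketched as follows against a toy in-memory model of the gateway's network state. All structures and helper names are hypothetical stand-ins for the routing-table, ARP-table, and interface facilities an actual device would provide.

```python
# Sketch of the teardown-and-reassign sequence of blocks 344-354, operating on
# a toy in-memory model of the gateway's network state. Hypothetical only.

state = {
    "interfaces": {"ge0": {"addresses": [], "macs": []},
                   "ge1": {"addresses": [], "macs": []}},
    "static_routes": [],     # (destination, gateway, interface, owning address)
    "arp_cache": {},         # ip -> mac
    "route_cache": {},       # ip -> cached next hop
}

def fail_over_address(ip_addr, mac_addr, first_if, second_if):
    # Block 344: remove static routes tied to the address, remembering them.
    saved = [r for r in state["static_routes"] if r[3] == ip_addr]
    state["static_routes"] = [r for r in state["static_routes"] if r[3] != ip_addr]
    # Block 346: drop ARP entries associated with the address.
    state["arp_cache"].pop(ip_addr, None)
    # Block 348: flush cached routes associated with the address.
    state["route_cache"].pop(ip_addr, None)
    # Block 350: assign the address to the second interface.
    state["interfaces"][first_if]["addresses"].remove(ip_addr)
    state["interfaces"][second_if]["addresses"].append(ip_addr)
    # Block 352: carry the MAC address over to the second interface as well.
    state["interfaces"][first_if]["macs"].remove(mac_addr)
    state["interfaces"][second_if]["macs"].append(mac_addr)
    # Block 354: reinstall the saved static routes against the second interface.
    state["static_routes"] += [(dst, gw, second_if, ip_addr) for dst, gw, _, _ in saved]

state["interfaces"]["ge0"]["addresses"].append("10.1.1.42")
state["interfaces"]["ge0"]["macs"].append("00:05:30:00:12:34")
fail_over_address("10.1.1.42", "00:05:30:00:12:34", "ge0", "ge1")
print(state["interfaces"]["ge1"])
```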
In those embodiments supporting VLANs, if the first network interface was participating in a VLAN, then the VLAN logical interfaces are deleted from the first network interface and, if necessary, established on the second network interface.
Finally, in some embodiments of the invention, a gratuitous ARP packet is issued by the second network interface (block 356). The packet is gratuitous in that it is not issued in response to an ARP request. The gratuitous ARP is desirable, because it causes other network elements in the network such as switches and routers to update their respective ARP tables more quickly than they would through normal address resolution mechanisms that rely on timeouts.
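By way of illustration only, a gratuitous ARP announcement might be emitted as in the following sketch. Scapy is not part of the described system and is used here only to show the shape of the frame: an unsolicited ARP reply that advertises the failed-over IP address at the second interface's MAC address. The interface name, address, and MAC are hypothetical, and sending raw frames generally requires elevated privileges.

```python
# Hedged sketch of emitting a gratuitous ARP after the address has moved,
# using scapy (not part of the described system) to build the frame.
from scapy.all import ARP, Ether, sendp

def announce_address(ip_addr, mac_addr, iface):
    frame = (
        Ether(src=mac_addr, dst="ff:ff:ff:ff:ff:ff") /
        ARP(op=2,                 # "is-at" reply, sent without any request
            hwsrc=mac_addr, psrc=ip_addr,
            hwdst="ff:ff:ff:ff:ff:ff", pdst=ip_addr)
    )
    sendp(frame, iface=iface, verbose=False)

# Example (hypothetical interface name):
# announce_address("10.1.1.42", "00:05:30:00:12:34", "ge1")
```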
It should be noted that the tasks performed above need not be performed in the order indicated in the flowchart. Additionally, various embodiments of the invention need not perform each and every task noted above.
Systems and methods for failing over a network address from a first network interface to a second network interface have been described. The embodiments of the invention provide advantages over previous systems. For example, by transferring the network address from one network interface to another, the failover may be transparent to the applications and hosts communicating with the applications, thus resulting in less disruption on the network.
While the embodiments of the invention have been described as operating in a storage router environment, the systems and methods may be applied to a variety of application gateway devices, including switches, routers, personal computers, laptop computers, server computers, and the like that have more than one network interface. This application is intended to cover any adaptations or variations of the present invention. The terminology used in this application is meant to include all of these environments. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Therefore, it is manifestly intended that this invention be limited only by the following claims and equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---|
4495617 | Ampulski et al. | Jan 1985 | A |
5390326 | Shah | Feb 1995 | A |
5461608 | Yoshiyama | Oct 1995 | A |
5473599 | Li et al. | Dec 1995 | A |
5535395 | Tipley et al. | Jul 1996 | A |
5544077 | Hershey | Aug 1996 | A |
5579491 | Jeffries et al. | Nov 1996 | A |
5600828 | Johnson et al. | Feb 1997 | A |
5666486 | Alfieri et al. | Sep 1997 | A |
5732206 | Mendel | Mar 1998 | A |
5812821 | Sugi et al. | Sep 1998 | A |
5832299 | Wooten | Nov 1998 | A |
5850573 | Wada | Dec 1998 | A |
5870571 | Duburcq et al. | Feb 1999 | A |
5909544 | Anderson et al. | Jun 1999 | A |
5935215 | Bell et al. | Aug 1999 | A |
5941972 | Hoese et al. | Aug 1999 | A |
5951683 | Yuuki et al. | Sep 1999 | A |
5991813 | Zarrow | Nov 1999 | A |
5996024 | Blumenau | Nov 1999 | A |
5996027 | Volk et al. | Nov 1999 | A |
6006259 | Adelman et al. | Dec 1999 | A |
6009476 | Flory et al. | Dec 1999 | A |
6009480 | Pleso | Dec 1999 | A |
6018765 | Durana et al. | Jan 2000 | A |
6041381 | Hoese | Mar 2000 | A |
6078957 | Adelman et al. | Jun 2000 | A |
6108300 | Coile et al. | Aug 2000 | A |
6108699 | Moiin | Aug 2000 | A |
6131119 | Fukui | Oct 2000 | A |
6134673 | Chrabaszcz | Oct 2000 | A |
6145019 | Firooz et al. | Nov 2000 | A |
6151297 | Congdon et al. | Nov 2000 | A |
6163855 | Shrivastava et al. | Dec 2000 | A |
6178445 | Dawkins et al. | Jan 2001 | B1 |
6185620 | Weber et al. | Feb 2001 | B1 |
6195687 | Greaves et al. | Feb 2001 | B1 |
6195760 | Chung et al. | Feb 2001 | B1 |
6209023 | Dimitroff et al. | Mar 2001 | B1 |
6219771 | Kikuchi et al. | Apr 2001 | B1 |
6268924 | Koppolu et al. | Jul 2001 | B1 |
6269396 | Shah et al. | Jul 2001 | B1 |
6314526 | Arendt et al. | Nov 2001 | B1 |
6343320 | Fairchild et al. | Jan 2002 | B1 |
6353612 | Zhu et al. | Mar 2002 | B1 |
6363416 | Naeimi et al. | Mar 2002 | B1 |
6378025 | Getty | Apr 2002 | B1 |
6392990 | Tosey et al. | May 2002 | B1 |
6393583 | Meth et al. | May 2002 | B1 |
6400730 | Latif et al. | Jun 2002 | B1 |
6421753 | Hoese et al. | Jul 2002 | B1 |
6425035 | Hoese et al. | Jul 2002 | B2 |
6425036 | Hoese et al. | Jul 2002 | B2 |
6449652 | Blumenau et al. | Sep 2002 | B1 |
6470382 | Wang et al. | Oct 2002 | B1 |
6470397 | Shah et al. | Oct 2002 | B1 |
6473803 | Stern et al. | Oct 2002 | B1 |
6480901 | Weber et al. | Nov 2002 | B1 |
6484245 | Sanada et al. | Nov 2002 | B1 |
6553408 | Merrell et al. | Apr 2003 | B1 |
6560630 | Vepa et al. | May 2003 | B1 |
6574755 | Seon | Jun 2003 | B1 |
6591310 | Johnson | Jul 2003 | B1 |
6597956 | Aziz et al. | Jul 2003 | B1 |
6606690 | Padovano | Aug 2003 | B2 |
6640278 | Nolan et al. | Oct 2003 | B1 |
6654830 | Taylor et al. | Nov 2003 | B1 |
6658459 | Kwan et al. | Dec 2003 | B1 |
6665702 | Zisapel et al. | Dec 2003 | B1 |
6678721 | Bell | Jan 2004 | B1 |
6683883 | Czeiger et al. | Jan 2004 | B1 |
6691244 | Kampe et al. | Feb 2004 | B1 |
6697924 | Swank | Feb 2004 | B2 |
6701449 | Davis et al. | Mar 2004 | B1 |
6718361 | Basani et al. | Apr 2004 | B1 |
6718383 | Hebert | Apr 2004 | B1 |
6721907 | Earl | Apr 2004 | B2 |
6724757 | Zadikian et al. | Apr 2004 | B1 |
6728780 | Hebert | Apr 2004 | B1 |
6738854 | Hoese et al. | May 2004 | B2 |
6748550 | McBrearty et al. | Jun 2004 | B2 |
6757291 | Hu | Jun 2004 | B1 |
6760783 | Berry | Jul 2004 | B1 |
6763195 | Willebrand et al. | Jul 2004 | B1 |
6763419 | Hoese et al. | Jul 2004 | B2 |
6766520 | Rieschl et al. | Jul 2004 | B1 |
6771663 | Jha | Aug 2004 | B1 |
6771673 | Baum et al. | Aug 2004 | B1 |
6779016 | Aziz et al. | Aug 2004 | B1 |
6785742 | Teow et al. | Aug 2004 | B1 |
6789152 | Hoese et al. | Sep 2004 | B2 |
6799316 | Aguilar et al. | Sep 2004 | B1 |
6807581 | Starr et al. | Oct 2004 | B1 |
6823418 | Langendorf et al. | Nov 2004 | B2 |
6839752 | Miller et al. | Jan 2005 | B1 |
6845403 | Chadalapaka | Jan 2005 | B2 |
6848007 | Reynolds et al. | Jan 2005 | B1 |
6856591 | Ma et al. | Feb 2005 | B1 |
6859462 | Mahoney et al. | Feb 2005 | B1 |
6874147 | Diamant | Mar 2005 | B1 |
6877044 | Lo et al. | Apr 2005 | B2 |
6885633 | Mikkonen | Apr 2005 | B1 |
6886171 | MacLeod | Apr 2005 | B2 |
6895461 | Thompson | May 2005 | B1 |
6920491 | Kim | Jul 2005 | B2 |
6938092 | Burns | Aug 2005 | B2 |
6941396 | Thorpe et al. | Sep 2005 | B1 |
6944785 | Gadir et al. | Sep 2005 | B2 |
6976134 | Lolayekar et al. | Dec 2005 | B1 |
6985490 | Czeiger et al. | Jan 2006 | B2 |
7043727 | Bennett et al. | May 2006 | B2 |
7089293 | Grosner et al. | Aug 2006 | B2 |
7120837 | Ferris | Oct 2006 | B1 |
7146233 | Aziz et al. | Dec 2006 | B2 |
7165258 | Kuik et al. | Jan 2007 | B1 |
20010020254 | Blumenau et al. | Sep 2001 | A1 |
20020010750 | Baretzki | Jan 2002 | A1 |
20020010812 | Hoese et al. | Jan 2002 | A1 |
20020023150 | Osafune et al. | Feb 2002 | A1 |
20020042693 | Kampe et al. | Apr 2002 | A1 |
20020049845 | Sreenivasan et al. | Apr 2002 | A1 |
20020052986 | Hoese et al. | May 2002 | A1 |
20020055978 | Joon-Bo et al. | May 2002 | A1 |
20020059392 | Ellis | May 2002 | A1 |
20020065872 | Genske et al. | May 2002 | A1 |
20020103943 | Lo et al. | Aug 2002 | A1 |
20020116460 | Treister et al. | Aug 2002 | A1 |
20020126680 | Inagaki et al. | Sep 2002 | A1 |
20020156612 | Schulter et al. | Oct 2002 | A1 |
20020161950 | Hoese et al. | Oct 2002 | A1 |
20020176434 | Yu et al. | Nov 2002 | A1 |
20020188657 | Traversat et al. | Dec 2002 | A1 |
20020188711 | Meyer et al. | Dec 2002 | A1 |
20020194428 | Green | Dec 2002 | A1 |
20030005068 | Nickel et al. | Jan 2003 | A1 |
20030014462 | Bennett et al. | Jan 2003 | A1 |
20030018813 | Antes et al. | Jan 2003 | A1 |
20030018927 | Gadir et al. | Jan 2003 | A1 |
20030058870 | Mizrachi et al. | Mar 2003 | A1 |
20030084209 | Chadalapaka | May 2003 | A1 |
20030093541 | Lolayekar et al. | May 2003 | A1 |
20030093567 | Lolayekar et al. | May 2003 | A1 |
20030097607 | Bessire | May 2003 | A1 |
20030131157 | Hoese et al. | Jul 2003 | A1 |
20030145108 | Joseph et al. | Jul 2003 | A1 |
20030145116 | Moroney et al. | Jul 2003 | A1 |
20030149829 | Basham et al. | Aug 2003 | A1 |
20030163682 | Kleinsteiber et al. | Aug 2003 | A1 |
20030182422 | Bradshaw et al. | Sep 2003 | A1 |
20030182455 | Hetzler et al. | Sep 2003 | A1 |
20030208579 | Brady et al. | Nov 2003 | A1 |
20030210686 | Terrell et al. | Nov 2003 | A1 |
20030212898 | Steele et al. | Nov 2003 | A1 |
20030229690 | Kitani et al. | Dec 2003 | A1 |
20030233427 | Taguchi | Dec 2003 | A1 |
20030236988 | Snead | Dec 2003 | A1 |
20040022256 | Green | Feb 2004 | A1 |
20040024778 | Cheo | Feb 2004 | A1 |
20040064553 | Kjellberg | Apr 2004 | A1 |
20040085893 | Wang et al. | May 2004 | A1 |
20040117438 | Considine et al. | Jun 2004 | A1 |
20040141468 | Christensen | Jul 2004 | A1 |
20040148376 | Rangan et al. | Jul 2004 | A1 |
20040233910 | Chen et al. | Nov 2004 | A1 |
20050055418 | Blanc et al. | Mar 2005 | A1 |
20050063313 | Nanavati et al. | Mar 2005 | A1 |
20050268151 | Hunt et al. | Dec 2005 | A1 |
20060265529 | Kuik et al. | Nov 2006 | A1 |