This application is related to U.S. patent application Ser. No. 09/738,960 filed on Dec. 14, 2000, now U.S. Pat. No. 6,792,507; U.S. patent application Ser. No. 10/015,047 filed on Oct. 26, 2001; U.S. patent application Ser. No. 10/039,190 filed on Dec. 31, 2001, now abandoned; U.S. patent application Ser. No. 10/039,189 filed on Dec. 31, 2001; and U.S. patent application Ser. No. 10/039,184 filed on Dec. 31, 2001, all of which are incorporated herein by reference in their entirety for all purposes. This application is also related to the following four U.S. patent applications, which are being filed concurrently herewith: U.S. patent application Ser. No. 10/117,040 entitled “System and Method for Expansion of Computer Network Switching System Without Disruption Thereof”; U.S. patent application Ser. No. 10/117,266 entitled “System and Method for Guaranteed Link Layer Flow Control”; U.S. patent application Ser. No. 10/117,638 entitled “Fibre Channel Implementation Using Network Processors”; and U.S. patent application Ser. No. 10/117,290 entitled “Method and System for Reduced Distributed Event Handling in a Network Environment”, each of which is incorporated herein by reference in its entirety for all purposes.
The present invention is related to computer networks. More specifically, the present application is related to a system and method for linking a plurality of network switches.
Current Storage Area Networks (“SAN”s) are designed to carry block storage traffic predominantly over Fibre Channel standard media and protocols using fabric networks comprising local area networks (“LAN”s). Expansion of SAN fabric networks is limited in that conventional SAN fabric channels cannot be implemented over geographically distant locations. Conventional Fibre Channel architecture is not suitable for wide area network (“WAN”)/LAN applications. While SCSI and Ethernet may be used to implement a WAN/LAN, these two protocols are not efficient for storage applications. Accordingly, current SAN fabric networks are limited to a single geographic location.
There exist several proposals for moving block storage traffic over SANs built on other networking media and protocol technologies such as Gigabit Ethernet, ATM/SONET, InfiniBand, and the like. Presently, bridging or interconnecting storage data traffic from a SAN using one medium/protocol type to another SAN using an incompatible protocol/medium type requires devices and software that perform the necessary protocol/medium translations. These translation devices, hereinafter referred to as “translation bridges,” make the necessary translations between incompatible protocols/media in order to serve the host computers/servers and storage target devices (the “clients”). Interconnecting heterogeneous SANs so that they may be easily scaled upward using these translation bridges is very difficult because the translation bridges usually become the bottleneck in data transfer speed as the clients (servers and/or storage devices) grow in number. In addition, in a mixed protocol environment, as the number of different protocols increases, the complexity of the software installed on the translation bridges increases, which further impacts performance.
The size of SAN fabric networks, in terms of storage capacity, is limited by cost and manpower. In order to expand the storage capacity of a SAN fabric network, storage devices such as disk drives, controllers, Fibre Channel switches and hubs, and other hardware must be purchased, interconnected, and made functionally operable together. Another major, if not primary, expense is the cost of managing a SAN. SAN management requires a great deal of manpower for maintenance and planning. For example, as storage capacity grows, issues such as determining server access to storage devices, backup strategy, data replication, data recovery, and other considerations become more complex.
It is desirable that next-generation storage network switch systems have ingress and egress ports that support different protocols and network media so that different types of host computers/servers and storage target devices may be attached directly to the switch system and start communicating with each other without translation overhead. In order to communicate between any two ports, the source and destination ports must be identifiable in both the source and destination protocol. For example, to send a message or frame from a Fibre Channel port to a Gigabit Ethernet port, the destination port needs to appear as a Fibre Channel port to the connected Fibre Channel source, and the source port needs to appear as a Gigabit Ethernet port to the destination port.
SAN and networking products are usually used in mission-critical applications and housed in chassis or racks. When a customer wants to expand such a system, one or more chassis are added to the existing domain. However, the user has to power down the existing system, connect the new chassis to it, and, once the new configuration or topology is complete, power the system back on. Unfortunately, this upgrade causes system downtime and potential loss of revenue.
Switches have a limited resource: the switch fabric or routing core. A non-blocking switch must have enough bandwidth to receive traffic at full speed from all ingress ports and direct the traffic to the egress ports without dropping traffic, assuming that the traffic is spread equally across all egress ports and does not congest any one of them. Therefore, if all ports connected to the switch have the same data rate, the switch fabric must have bandwidth greater than the number of ports multiplied by the port speed in order to be a non-blocking switch that does not drop traffic.
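As a simple illustration of this sizing rule, the following short calculation determines the minimum fabric bandwidth for an example configuration; the port count and port speed used here are assumed values chosen for illustration only and are not parameters of this disclosure.

#include <stdio.h>

int main(void)
{
    /* Assumed example values for illustration only. */
    const unsigned number_of_ports = 16;   /* ports attached to the switch */
    const double port_speed_gbps = 2.0;    /* per-port data rate in Gbit/s */

    /* A non-blocking switch must accept full-rate traffic on every ingress
       port at once, so the fabric bandwidth must exceed the number of ports
       multiplied by the port speed. */
    const double minimum_fabric_gbps = number_of_ports * port_speed_gbps;
    printf("Fabric bandwidth must exceed %.1f Gbit/s\n", minimum_fabric_gbps);
    return 0;
}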
The problem with existing switches is that the internal switch fabric is fixed in size. If large scalability is desired, one has to pay for a large switch fabric that is not initially needed. In present systems, a smaller switch has to be replaced by a larger switch when more capacity is needed. This is a disruptive upgrade that causes all nodes connected to the switch to lose connectivity while the upgrade is occurring. In another scenario, multiple smaller switches can be interconnected using lower-bandwidth interconnects. However, these interconnects can become congested and limit the throughput of the network.
The majority of SAN switches are not expandable and typically have a limited number of ports, for example, 16 ports. When a customer needs more than 16 ports, two or more of the 16-port switches must be connected together. Unfortunately, to achieve a non-blocking switch in a typical configuration, half of the ports on the switch are then used for interconnect purposes. Because every switch system is internally limited to a specific number of devices that can be coupled with each switch, such an interconnection is not desirable. Expansion of an internally fully expanded system can further be achieved through specially designed trunk couplings. These couplings are, however, limited in their flexibility because their assignment to the plurality of channels of each switch is static and, thus, not very flexible. It is, therefore, highly likely that traffic within the switch system will be blocked due to an already in-use trunk coupling.
Thus, there is a demand for a more user-friendly system that reduces the downtime and overall cost of a network switch fabric system.
The invention overcomes the above-identified problems as well as other shortcomings and deficiencies of existing technologies by providing a storage network device that performs a multiplicity of functions and has a multiplicity of port types to allow it to connect to a variety of network types (e.g., Fibre Channel, Gigabit Ethernet, etc.) and is easily maintainable and/or can be easily upgraded with either minimal or no downtime.
A primary function of the invention is to act as a storage network switch where frames are switched from port to port. However, because of its architecture, the present invention has the ability to perform many additional functions that take advantage of its high performance, highly scalable, and highly programmable infrastructure. The switch architecture of the present invention is comprised of: 1) a Switch Fabric Subsystem, 2) I/O Subsystems, 3) Application Subsystems, and 4) System Control Subsystems.
The Switch Fabric Subsystem is a protocol agnostic cell or packet switching infrastructure that provides the high performance and highly scalable interconnections between the I/O Subsystems and Application Subsystems. It provides primary data paths for network traffic being moved by the switch. The I/O Subsystems provide the actual port connectivity to the external network devices that use the switch to communicate with other external network devices. The I/O Subsystems are part of the data path and are responsible for making the high performance, low level decoding of ingress frames from the external ports; and switching/routing, identifying the destination I/O subsystem for the frame, and queuing the frame for transmission through the Switching Fabric. The I/O Subsystems process packets at the very lowest protocol levels (Data Link and Network Layer of the OSI Model) where fast switching and routing decisions can be made. The Application Subsystems provide the platforms for higher level processing of frames and data streams in the switch system. The Application Subsystems have more advanced programmability and functionality than the I/O Subsystems, but rely on the control and data information provided by the I/O Subsystems to maintain high performance packet throughput. Typical applications that can run on the Application Subsystems are caching, storage virtualization, file serving, and high level protocol conversion. The System Control Subsystems provide the overall management of the storage network switch. Most of the low level switching and routing protocol functions are executed on the System Control Subsystems. In addition, management access functions such as the SNMP agent, web server, telnet server, and the direct command line interface reside on the System Control Subsystems. The hardware and software executing on the System Control Subsystems are responsible for managing the other subsystems in the network storage switch.
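The ingress data path described above can be summarized in a brief, purely illustrative sketch. All type and function names below are assumptions introduced here, and the lookup and queuing stubs merely stand in for the low-level decoding and fabric hand-off performed by the I/O Subsystems.

#include <stdio.h>

/* Hypothetical frame descriptor; field names are assumptions. */
typedef struct {
    unsigned destination_id;   /* low-level destination address parsed from the frame */
    unsigned length;           /* frame length in bytes */
} frame;

/* Stub standing in for the I/O Subsystem's fast lookup of the destination
   I/O Subsystem for an ingress frame. */
static unsigned lookup_destination_io(const frame *f)
{
    return f->destination_id % 16;   /* placeholder mapping for illustration */
}

/* Stub standing in for queuing the frame for transmission through the
   Switch Fabric Subsystem toward the egress I/O Subsystem. */
static void enqueue_to_fabric(unsigned dest_io, const frame *f)
{
    printf("frame of %u bytes queued toward I/O subsystem %u\n", f->length, dest_io);
}

/* Ingress handling as described: decode at a low protocol level, identify
   the destination I/O Subsystem, and hand the frame to the switch fabric. */
static void ingress(const frame *f)
{
    enqueue_to_fabric(lookup_destination_io(f), f);
}

int main(void)
{
    frame f = { 42, 2112 };
    ingress(&f);
    return 0;
}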
The present invention is directed to a method of linking at least two network switches, wherein each network switch switches data traffic of a plurality of devices, through a plurality of couplings, wherein the method comprises the steps of:
The present invention is also directed to a network switch for coupling a plurality of devices and for switching data traffic between the devices comprising:
The present invention is furthermore directed to a network switch system comprising at least a first and a second network switch coupled through a plurality of couplings wherein each network switch is coupling a plurality of devices for switching data traffic between the devices and wherein each network switch further comprises:
A more complete understanding of the present disclosure and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein:
The present invention may be susceptible to various modifications and alternative forms. Specific embodiments of the present invention are shown by way of example in the drawings and are described herein in detail. It should be understood, however, that the description set forth herein of specific embodiments is not intended to limit the present invention to the particular forms disclosed. Rather, all modifications, alternatives, and equivalents falling within the spirit and scope of the invention as defined by the appended claims are intended to be covered.
The present invention is directed to a storage network device that performs a multiplicity of functions and has a multiplicity of port types to allow it to connect to a variety of network types (e.g., Fibre Channel, Gigabit Ethernet, etc.). A primary function of the invention is to act as a storage network switch wherein frames are switched from port to port. However, because of its architecture, the present invention has the ability to perform many additional functions that take advantage of its high performance, highly scalable, and highly programmable infrastructure.
The following description of the exemplary embodiments of the present invention contains a number of technical terms using abbreviations and/or acronyms which are defined herein and used hereinafter:
Referring now to the drawings, the details of an exemplary specific embodiment of the invention are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix.
The Switch Fabric Subsystem (“SFS”) 102 is responsible for routing the plurality of data channels from and to the respective ingress and egress ports of each line card. Each line card comprises a plurality of links for coupling with the switches included in the switch fabric cards. These links can be optical or electrical. Each switch in a switch fabric card is linked with all line cards through one of these links, which are hereinafter called ports. Each link usually consists of a separate receiving and transmitting line. Thus, if a system comprises, for example, 16 line cards, each switch must be able to receive 16 links. Such a coupling is hereinafter called a 16-port link. To provide sufficient bandwidth, each line card provides a plurality of links. Thus, for example, 12 high-speed links per line card can be provided. To provide the maximum bandwidth, 12 switches must be implemented. In one embodiment of the present invention, for example, 3 switch fabric cards are provided wherein each switch fabric card comprises 4 independent switches, thus providing 12 independent switches. However, if the necessary bandwidth is less than the maximum bandwidth, for example, when only a subset of line cards is installed, only one or two switch fabric cards need be implemented.
Each switch fabric card provides two switch fabric chips 301 and 302 which can be configured in different modes. For example, the switch fabric chip 301, 302 can be configured in a first mode to comprise a single 64-port switch. In a second mode, the chip provides two independent 32-port switches and in a third mode it provides four independent 16-port switches. Each port comprises separate transmit and receive lines per link. In the embodiment shown in
The present invention takes advantage of the switching and rerouting capabilities of the system. Thus, if only a certain number of line cards 310 are implemented, only a corresponding number of switch fabric cards 300 are needed. In case the system has to be expanded, an additional switch fabric card 300 and more line cards 310 can be inserted into the chassis. In this embodiment, one chassis can receive up to three switch fabric cards 300 and up to 16 line cards 310. Thus, the system can be easily expanded; for example, using one chassis it is possible to install 16 line cards and, thus, establish a maximum of 160 ports.
However, each system has a limit and can, thus, only be expanded by linking multiple network switches to form a larger network. When interconnecting multiple network switches to form a larger network, trunking, or link aggregation, is often implemented to distribute traffic across multiple links when those links form equal-cost paths. For example, in a Fibre Channel network, the interconnecting switches employ the Fabric Shortest Path First algorithm as defined in the Fibre Channel Standard to determine path cost. When multiple equal-cost paths are found, the interswitch links that form those paths become candidates for link aggregation.
For example, a first switch system 200 and a second switch system 210 are linked by a special coupling 240a and 240b, and 250a and 250b as shown in
Known device-ID based trunking algorithms are all coarse-grained: the unit of information is all the data sent from or to a given device. The shortcoming of this scheme is that, once the links of a trunk are assigned to a set of devices, there is no attempt to distribute traffic for different devices to a different link of the trunk.
For example, if link 240a in
The present invention overcomes this disadvantage. Examining the Fibre Channel Protocol more closely, the most basic unit of information in a data stream for which ordering needs to be preserved is an exchange. The Fibre Channel Standard defines a data stream as a compilation of exchanges. An exchange is a compilation of sequences, and a sequence is a compilation of frames. Data frames from different exchanges are independent of one another, and there is no ordering restriction between frames of different exchanges. Fibre Channel exchanges are identified by an exchange ID. This exchange ID, for example a 16-bit value, increments sequentially for every exchange until it wraps around to 0 after incrementing past 65535. Therefore, a better distribution of data traffic can be achieved with a distribution scheme based on this exchange ID. A hashing function is devised that, for an incoming frame, determines the specific link in a trunk that the frame will take. This scheme operates as follows.
When an incoming frame arrives at Switch 200, the exchange-ID of that frame is passed to a hashing function. The hashing function returns a value that falls inclusively between the smallest and largest identifier that denotes the links in the trunk. The frame routing component of the switch then passes this frame to the link with this identifier.
A better distribution can be achieved with exchange-level trunking than with a device-ID based algorithm because the links are not bound to any device. Exchanges from multiple devices are distributed to the links in a trunk irrespective of the device from which the exchanges arrive or the device to which the exchanges are destined. The received data stream is distributed within the respective receiving switch by means of the portion of the data stream that identifies the recipient.
While a large number of hashing functions will work with this scheme, a simple and straightforward hash function that will be used to illustrate the exchange-level trunking mechanism is the ceiling adjusted modulo function. This function is used instead of a straight modulo function for several reasons. The routing decision often needs to be made in the shortest time possible to minimize switching latency. The general modulo function requires a division operation, which requires a long computing time. In a switch, a hardware-accelerated divider is often not available to the routing component, making the division operation infeasible, since division implemented in software consumes a large portion of code space and aggravates the long computing time.
The ceiling adjusted modulo function computes the remainder of the exchange ID divided by the next power of two higher than the number of links in a trunk, then adjusts the result to fall within the number of links in the trunk. This can be shown by the following pseudo-code, assuming a zero-based link ID that goes from 0 to number_of_links−1:
BitPosition:=(the highest position of a binary 1 bit in the Number_of_links)+1;
Next2Power:=1 shifted left by BitPosition;
Link:=Exchange_ID AND (Next2Power−1);
If Link is greater than or equal to the Number_of_links, then Link:=Link−Number_of_links;
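For concreteness, the pseudo-code above may be rendered as a small C routine. This is only an illustrative sketch: the function name trunk_link is introduced here, and it assumes that the final adjustment simply subtracts Number_of_links from Link, which is consistent with the wrap-around behavior and the power-of-two observation discussed below.

#include <stdio.h>

/* Ceiling adjusted modulo: map a Fibre Channel exchange ID onto a
   zero-based trunk link number without using a division operation. */
static unsigned trunk_link(unsigned exchange_id, unsigned number_of_links)
{
    /* Find the next power of two higher than number_of_links:
       (highest 1-bit position in number_of_links) + 1. */
    unsigned bit_position = 0;
    for (unsigned n = number_of_links; n != 0; n >>= 1)
        bit_position++;
    unsigned next2power = 1u << bit_position;

    /* Mask the exchange ID with (next2power - 1) instead of dividing. */
    unsigned link = exchange_id & (next2power - 1);

    /* Assumed adjustment: fold overflowing values back into range. */
    if (link >= number_of_links)
        link -= number_of_links;
    return link;
}

int main(void)
{
    /* Example: a trunk of 21 links, a few consecutive exchange IDs. */
    for (unsigned eid = 18; eid <= 24; eid++)
        printf("exchange %u -> link %u\n", eid, trunk_link(eid, 21));
    return 0;
}

Under this assumed adjustment, a trunk of 21 links maps the 32 possible masked values so that links 0 through 10 each receive two of them while links 11 through 20 each receive one, which illustrates why a power-of-two link count yields the most even spread.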
The following Table 1 shows the operands and the results obtained when the ceiling adjusted modulo function is applied:
As another example showing the use of the ceiling adjusted modulo function for trunk link determination, the following Table 2 gives a few values for a trunk configuration consisting of 21 links.
With the ceiling adjusted modulo function, optimal traffic distribution is achieved when the number of links in a trunk is exactly a power of 2. When the number of links is not a power of 2, a traffic distribution better than that of device-based trunking can still be achieved.
Whenever a new transfer from the switch to another switch is requested, an exchange_ID is fed to trunk channel number generator 410. Trunk channel number generator 410 then generates the respective trunk channel number, for example, by the above described ceiling adjusted modulo function. Trunk select control unit 420 then activates the respective multiplexer and selects the respective input 435a, 435b, 435c, . . . 435n.
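The interaction of trunk channel number generator 410 and trunk select control unit 420 can be modeled loosely in software. The sketch below is an analogy only: the array of per-link counters stands in for the multiplexer inputs 435a through 435n, the function names are assumptions introduced here, and the generator reuses the ceiling adjusted modulo sketch above, including its assumed subtraction step.

#include <stdio.h>

/* Generator: derive a trunk channel number from the exchange ID. */
static unsigned trunk_channel_number(unsigned exchange_id, unsigned number_of_links)
{
    unsigned bit_position = 0;
    for (unsigned n = number_of_links; n != 0; n >>= 1)
        bit_position++;
    unsigned channel = exchange_id & ((1u << bit_position) - 1);
    if (channel >= number_of_links)
        channel -= number_of_links;    /* assumed adjustment, as above */
    return channel;
}

int main(void)
{
    enum { LINKS_IN_TRUNK = 4 };
    unsigned frames_per_link[LINKS_IN_TRUNK] = { 0 };

    /* Select control: steer 1000 consecutive exchanges to the chosen link
       and tally how the traffic spreads across the trunk. */
    for (unsigned exchange_id = 0; exchange_id < 1000; exchange_id++)
        frames_per_link[trunk_channel_number(exchange_id, LINKS_IN_TRUNK)]++;

    for (unsigned link = 0; link < LINKS_IN_TRUNK; link++)
        printf("link %u carried %u exchanges\n", link, frames_per_link[link]);
    return 0;
}

Running this sketch spreads 1,000 consecutive exchange IDs evenly, 250 per link, across the four-link trunk, since four is a power of two.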
The above embodiments use the Fibre Channel Protocol for communication. Thus, usually a Fibre Channel Architecture is used within the system. However, the system will also operate with the Fibre Channel Protocol regardless of the actual transport or physical medium. This includes Fibre Channel Protocol encapsulation through other network protocols. The encapsulation is accomplished by storing a Fibre Channel frame into a frame/packet or some other equivalent network protocol construct. Such encapsulation allows a Fibre Channel frame to be routed to another Fibre Channel capable network node using Ethernet, ATM, or other network protocol. The encapsulation is used in, but not limited to, IP packets, ATM packets, and TCP connections.
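As an illustration of such encapsulation, the following sketch copies a Fibre Channel frame into a buffer behind a small envelope header so that the result can be carried as the payload of another protocol. The envelope layout, field names, and frame sizes are assumptions made for this sketch and do not correspond to any standardized encapsulation format; in practice the enclosing construct would be an IP packet, an ATM packet, or a TCP connection as noted above.

#include <stdio.h>
#include <string.h>

/* Hypothetical Fibre Channel frame descriptor (sizes chosen for illustration). */
typedef struct {
    unsigned char header[24];      /* FC frame header */
    unsigned char payload[2112];   /* FC data field */
    unsigned      payload_len;
} fc_frame;

/* Hypothetical envelope prepended before handing the bytes to the carrier
   protocol; not an actual wire format. */
typedef struct {
    unsigned short total_len;      /* length of the encapsulated FC frame */
    unsigned char  version;        /* illustrative field */
} envelope_header;

/* Copy the envelope and the FC frame into a contiguous buffer that can be
   carried as the payload of another network protocol. Returns bytes used,
   or 0 if the buffer is too small. */
static size_t encapsulate(const fc_frame *f, unsigned char *buf, size_t buflen)
{
    envelope_header eh = { (unsigned short)(sizeof f->header + f->payload_len), 1 };
    size_t need = sizeof eh + sizeof f->header + f->payload_len;
    if (need > buflen)
        return 0;
    memcpy(buf, &eh, sizeof eh);
    memcpy(buf + sizeof eh, f->header, sizeof f->header);
    memcpy(buf + sizeof eh + sizeof f->header, f->payload, f->payload_len);
    return need;
}

int main(void)
{
    fc_frame f = { {0}, {0}, 64 };
    unsigned char wire[4096];
    printf("encapsulated %zu bytes\n", encapsulate(&f, wire, sizeof wire));
    return 0;
}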
The invention, therefore, is well adapted to carry out the objects and attain the ends and advantages mentioned, as well as others inherent therein. While the invention has been depicted, described, and is defined by reference to exemplary embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alternation, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts and having the benefit of this disclosure. The depicted and described embodiments of the invention are exemplary only, and are not exhaustive of the scope of the invention. Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
Number | Name | Date | Kind |
---|---|---|---|
4442504 | Dummermuth et al. | Apr 1984 | A |
4598404 | Perry et al. | Jul 1986 | A |
4692073 | Martindell | Sep 1987 | A |
4755930 | Wilson, Jr. et al. | Jul 1988 | A |
4903259 | Hayano | Feb 1990 | A |
5140682 | Okura et al. | Aug 1992 | A |
5247649 | Bandoh | Sep 1993 | A |
5289460 | Drake et al. | Feb 1994 | A |
5377180 | Laurent | Dec 1994 | A |
5394556 | Oprescu | Feb 1995 | A |
5515376 | Murthy et al. | May 1996 | A |
5530832 | So et al. | Jun 1996 | A |
5586847 | Mattern, Jr. et al. | Dec 1996 | A |
5602841 | Lebizay et al. | Feb 1997 | A |
5606669 | Bertin et al. | Feb 1997 | A |
5611049 | Pitts | Mar 1997 | A |
5699548 | Choudhury et al. | Dec 1997 | A |
5778429 | Sukegawa et al. | Jul 1998 | A |
5805785 | Dias et al. | Sep 1998 | A |
5835756 | Caccavale | Nov 1998 | A |
5835943 | Yohe et al. | Nov 1998 | A |
5844887 | Oren et al. | Dec 1998 | A |
5845280 | Treadwell, III et al. | Dec 1998 | A |
5845324 | White et al. | Dec 1998 | A |
5852717 | Bhide et al. | Dec 1998 | A |
5864854 | Boyle | Jan 1999 | A |
5873100 | Adams et al. | Feb 1999 | A |
5878218 | Maddalozzo, Jr. et al. | Mar 1999 | A |
5881229 | Singh et al. | Mar 1999 | A |
5889775 | Sawicz et al. | Mar 1999 | A |
5918244 | Percival | Jun 1999 | A |
5924864 | Loge et al. | Jul 1999 | A |
5930253 | Brueckheimer et al. | Jul 1999 | A |
5933607 | Tate et al. | Aug 1999 | A |
5933849 | Srbljic et al. | Aug 1999 | A |
5944780 | Chase et al. | Aug 1999 | A |
5944789 | Tzelnic et al. | Aug 1999 | A |
5978841 | Berger | Nov 1999 | A |
5978951 | Lawler et al. | Nov 1999 | A |
5987223 | Narukawa et al. | Nov 1999 | A |
5991810 | Shapiro et al. | Nov 1999 | A |
6041058 | Flanders et al. | Mar 2000 | A |
6044406 | Barkey et al. | Mar 2000 | A |
6081883 | Popelka et al. | Jun 2000 | A |
6085234 | Pitts et al. | Jul 2000 | A |
6098096 | Tsirigotis et al. | Aug 2000 | A |
6105062 | Andrews et al. | Aug 2000 | A |
6128306 | Simpson et al. | Oct 2000 | A |
6138209 | Krolak et al. | Oct 2000 | A |
6147976 | Shand et al. | Nov 2000 | A |
6243358 | Monin | Jun 2001 | B1 |
6252514 | Nolan et al. | Jun 2001 | B1 |
6289386 | Vangemert | Sep 2001 | B1 |
6361343 | Daskalakis et al. | Mar 2002 | B1 |
6400730 | Latif et al. | Jun 2002 | B1 |
6424657 | Voit et al. | Jul 2002 | B1 |
6438705 | Chao et al. | Aug 2002 | B1 |
6457048 | Sondur et al. | Sep 2002 | B2 |
6470013 | Barach et al. | Oct 2002 | B1 |
6484209 | Momirov | Nov 2002 | B1 |
6499064 | Carlson et al. | Dec 2002 | B1 |
6532501 | McCracken | Mar 2003 | B1 |
6584101 | Hagglund et al. | Jun 2003 | B2 |
6594701 | Forin | Jul 2003 | B1 |
6597689 | Chiu et al. | Jul 2003 | B1 |
6597699 | Ayres | Jul 2003 | B1 |
6601186 | Fox et al. | Jul 2003 | B1 |
6615271 | Lauck et al. | Sep 2003 | B1 |
6654895 | Henkhaus et al. | Nov 2003 | B1 |
6657962 | Barri et al. | Dec 2003 | B1 |
6662219 | Nishanov et al. | Dec 2003 | B1 |
6674756 | Rao et al. | Jan 2004 | B1 |
6687247 | Wilford et al. | Feb 2004 | B1 |
6704318 | Stuart et al. | Mar 2004 | B1 |
6721818 | Nakamura | Apr 2004 | B1 |
6731644 | Epps et al. | May 2004 | B1 |
6731832 | Alvarez et al. | May 2004 | B2 |
6735174 | Hefty et al. | May 2004 | B1 |
6747949 | Futral | Jun 2004 | B1 |
6754206 | Nattkemper et al. | Jun 2004 | B1 |
6757791 | O'Grady et al. | Jun 2004 | B1 |
6758241 | Pfund et al. | Jul 2004 | B1 |
6762995 | Drummond-Murray et al. | Jul 2004 | B1 |
6765871 | Knoebel et al. | Jul 2004 | B1 |
6765919 | Banks et al. | Jul 2004 | B1 |
6792507 | Chiou et al. | Sep 2004 | B2 |
6822957 | Schuster et al. | Nov 2004 | B1 |
6839750 | Bauer et al. | Jan 2005 | B1 |
6845431 | Camble et al. | Jan 2005 | B2 |
6847647 | Wrenn | Jan 2005 | B1 |
6850531 | Rao et al. | Feb 2005 | B1 |
6865602 | Nijemcevic et al. | Mar 2005 | B1 |
6876663 | Johnson et al. | Apr 2005 | B2 |
6876668 | Chawla et al. | Apr 2005 | B1 |
6879559 | Blackmon et al. | Apr 2005 | B1 |
6889245 | Taylor et al. | May 2005 | B2 |
6938084 | Gamache et al. | Aug 2005 | B2 |
6944829 | Dando | Sep 2005 | B2 |
6954463 | Ma et al. | Oct 2005 | B1 |
6973229 | Tzathas et al. | Dec 2005 | B1 |
6980515 | Schunk et al. | Dec 2005 | B1 |
6983303 | Pellegrino et al. | Jan 2006 | B2 |
6985490 | Czeiger et al. | Jan 2006 | B2 |
6988149 | Odenwald | Jan 2006 | B2 |
7006438 | West et al. | Feb 2006 | B2 |
7010715 | Barbas et al. | Mar 2006 | B2 |
7013084 | Battou et al. | Mar 2006 | B2 |
7035212 | Mittal et al. | Apr 2006 | B1 |
7079485 | Lau et al. | Jul 2006 | B1 |
7190695 | Schaub et al. | Mar 2007 | B2 |
20010023443 | Fichou et al. | Sep 2001 | A1 |
20010037435 | Van Doren | Nov 2001 | A1 |
20010043564 | Bloch et al. | Nov 2001 | A1 |
20020004842 | Ghose et al. | Jan 2002 | A1 |
20020010790 | Ellis et al. | Jan 2002 | A1 |
20020012344 | Johnson et al. | Jan 2002 | A1 |
20020024953 | Davis et al. | Feb 2002 | A1 |
20020034178 | Schmidt et al. | Mar 2002 | A1 |
20020071439 | Reeves et al. | Jun 2002 | A1 |
20020078299 | Chiou et al. | Jun 2002 | A1 |
20020103921 | Nair et al. | Aug 2002 | A1 |
20020118682 | Choe | Aug 2002 | A1 |
20020165962 | Alvarez et al. | Nov 2002 | A1 |
20020176131 | Walters et al. | Nov 2002 | A1 |
20020186703 | West et al. | Dec 2002 | A1 |
20020188786 | Barrow et al. | Dec 2002 | A1 |
20030002506 | Moriwaki et al. | Jan 2003 | A1 |
20030012204 | Czeiger et al. | Jan 2003 | A1 |
20030014540 | Sultan et al. | Jan 2003 | A1 |
20030026267 | Oberman et al. | Feb 2003 | A1 |
20030033346 | Carlson et al. | Feb 2003 | A1 |
20030037022 | Adya et al. | Feb 2003 | A1 |
20030037177 | Sutton et al. | Feb 2003 | A1 |
20030048792 | Xu et al. | Mar 2003 | A1 |
20030063348 | Posey, Jr. | Apr 2003 | A1 |
20030074449 | Smith et al. | Apr 2003 | A1 |
20030084219 | Yao et al. | May 2003 | A1 |
20030091267 | Alvarez et al. | May 2003 | A1 |
20030093541 | Lolayekar et al. | May 2003 | A1 |
20030093567 | Lolayekar et al. | May 2003 | A1 |
20030097439 | Strayer et al. | May 2003 | A1 |
20030097445 | Todd et al. | May 2003 | A1 |
20030123274 | Cambie et al. | Jul 2003 | A1 |
20030126223 | Jenne et al. | Jul 2003 | A1 |
20030126280 | Hawkins et al. | Jul 2003 | A1 |
20030126297 | Olarig et al. | Jul 2003 | A1 |
20030128703 | Zhao et al. | Jul 2003 | A1 |
20030152182 | Pai et al. | Aug 2003 | A1 |
20030154301 | McEachern et al. | Aug 2003 | A1 |
20030163555 | Battou et al. | Aug 2003 | A1 |
20030163592 | Odenwald | Aug 2003 | A1 |
20030195956 | Bramhall et al. | Oct 2003 | A1 |
20030198231 | Kalkunte et al. | Oct 2003 | A1 |
20030202520 | Witkowski et al. | Oct 2003 | A1 |
20050018619 | Banks et al. | Jan 2005 | A1 |
20050018709 | Barrow et al. | Jan 2005 | A1 |
20050044354 | Hagerman | Feb 2005 | A1 |
20050243734 | Nemirovsky et al. | Nov 2005 | A1 |