Method and system for detecting congestion and over subscription in a fibre channel network

Information

  • Patent Grant
  • Patent Number
    7,522,529
  • Date Filed
    Tuesday, July 20, 2004
  • Date Issued
    Tuesday, April 21, 2009
Abstract
A method and system for detecting congestion and over-subscription in a fibre channel switch element is provided. A counter is updated if a frame cannot be transmitted due to lack of credit; the counter value is then compared to a threshold value; and an event is triggered if the counter value varies from the threshold value. Also provided are a first register that maintains information regarding a rate at which a source port can transfer data; a counter that counts entries corresponding to a number of frames to be transmitted at a given time; and a second register that determines an over-subscription rate.
Description
BACKGROUND

1. Field of the Invention


The present invention relates to fibre channel systems, and more particularly, to detecting congestion and oversubscription in fibre channel switches.


2. Background of the Invention


Fibre channel is a set of American National Standards Institute (ANSI) standards, which provide a serial transmission protocol for storage and network protocols such as HIPPI, SCSI, IP, ATM and others. Fibre channel provides an input/output interface to meet the requirements of both channel and network users.


Fibre channel supports three different topologies: point-to-point, arbitrated loop and fibre channel fabric. The point-to-point topology attaches two devices directly. The arbitrated loop topology attaches devices in a loop. The fibre channel fabric topology attaches host systems directly to a fabric, which is then connected to multiple devices. The fibre channel fabric topology allows several media types to be interconnected.


Fibre channel is a closed system that relies on multiple ports to exchange information on attributes and characteristics to determine if the ports can operate together. If the ports can work together, they define the criteria under which they communicate.


In fibre channel, a path is established between two nodes where the path's primary task is to transport data from one point to another at high speed with low latency, performing only simple error detection in hardware.


Fibre channel fabric devices include a node port or “N_Port” that manages fabric connections. The N_Port establishes a connection to a fabric element (e.g., a switch) having a fabric port or F_Port. Fabric elements include the intelligence to handle routing, error detection, recovery, and similar management functions.


A fibre channel switch is a multi-port device where each port manages a simple point-to-point connection between itself and its attached system. Each port can be attached to a server, peripheral, I/O (input/output) subsystem, bridge, hub, router, or even another switch. A switch receives messages from one port and automatically routes them to another port. Multiple calls or data transfers happen concurrently through the multi-port fibre channel switch.


Fibre channel switches use memory buffers to hold frames received and sent across a network. Associated with these buffers are credits, which are the number of frames a Fibre Channel port can transmit without overflowing the receive buffers at the other end of the link. Receiving an R_RDY primitive signal increases the credit, and sending a frame decreases the credit. The initial amount of credit is negotiated by two ends of the link during login. Credit counts can be implemented on a transmit port by starting at zero and counting up to the maximum, or by starting at the maximum and counting down to zero.
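
A minimal software sketch of the credit convention described above may help illustrate it (the class and method names are illustrative, not taken from the patent). The counter starts at the login-negotiated credit, drops by one per transmitted frame, and is replenished by each received R_RDY:

```python
# Sketch of buffer-to-buffer credit accounting on a transmit port, using the
# "start at the maximum and count down" convention described above.
class BBCreditCounter:
    def __init__(self, negotiated_credit: int):
        self.max_credit = negotiated_credit   # value agreed during login
        self.credit = negotiated_credit       # frames that may still be sent

    def can_transmit(self) -> bool:
        return self.credit > 0

    def frame_sent(self) -> None:
        if self.credit == 0:
            raise RuntimeError("frame sent without credit")
        self.credit -= 1                      # sending a frame consumes a credit

    def r_rdy_received(self) -> None:
        # The link partner freed a receive buffer, restoring one credit.
        self.credit = min(self.max_credit, self.credit + 1)
```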


When using large networks, bottlenecks may occur that could reduce the performance of a network. Fibre Channel networks use flow control to make sure that for every transmitted frame there is a receive buffer at the other end of the link.


Congestion on a Fibre Channel network will prevent ports from transmitting frames while waiting for flow control signals (the R_RDY primitive signal in Fibre Channel).


In a Fabric with multiple switches, congestion may occur if more traffic is being routed through an E-port than it can handle. The use of frame counts or byte counts is not sufficient to detect congestion.


Often a fibre channel switch is coupled between devices that use varying data rates to transfer data. The mismatch in the data transfer rates can result in inefficient use of the overall bandwidth. An illustration of this problem is shown in FIG. 2. FIG. 2 shows switches 207 and 209 coupled by a 10 G (gigabit) link 208. Host systems 203 and 202 are coupled to switch 207 by 2 G links 204 and 205, respectively. Host system 201 is coupled by a 1 G link 206. A target 213 is coupled to switch 209 by a 1 G link 210, while targets 214 and 215 are coupled by 2 G links 211 and 212, respectively. A host system may be any computing device and a target may be any device with which a host or another target can communicate.


Host 203 can send data at 2 G to target 213, which can only receive data at 1 G. Because target 213 receives data at a lower rate, the receive buffers in switch 209 can overfill, resulting in congestion.


As data rates increase (for example, from 1 G to 10 G), Fibre Channel networks will need efficient congestion and over subscription detection techniques. Therefore, what is required is a process and system that efficiently detects congestion and over subscription.


SUMMARY OF THE INVENTION

In one aspect of the present invention, a method for detecting congestion in a transmit side of a fibre channel switch element is provided. The method includes, updating a counter if a frame cannot be transmitted from a transmit side of a switch due to lack of credit; comparing the counter value to a threshold value; and triggering a threshold event if the counter value varies from the threshold value.


In another aspect, a method for detecting congestion on a receive segment of a fibre channel switch element is provided. The method includes, comparing a counter value to a threshold value, if a receive buffer is full; and triggering a threshold event if the counter value varies from the threshold value.


In yet another aspect of the present invention, a method for detecting congestion in a transmit segment of a fibre channel switch element is provided. The method includes, determining if credit is available for transmitting a frame; triggering an event based on a duration that the frame waits for transmission; and notifying a processor based on such event. A first counter value is compared to a threshold value to trigger the event.


In yet another aspect of the present invention, a method for detecting congestion at a receive segment of a fibre channel switch element is provided. The method includes, determining if a receive buffer has been full for a certain duration; and triggering an event if the duration varies from a threshold value.


In yet another aspect, a system for detecting congestion in a fibre channel switch element is provided. The system includes, a first counter that counts a duration for which a frame waits for transmission, and the duration is compared to a threshold value to detect congestion. The threshold value may be programmed by firmware used by the fibre channel switch element and if the first counter value is greater than the threshold value, an event is triggered.


In yet another aspect of the present invention, a system for detecting congestion at a receive segment of a fibre channel switch element is provided. The system includes, a receive buffer log that indicates how quickly frames are moving through the receive segment. The system also includes, a first counter that is incremented when a receive buffer is full and if the counter value varies from a threshold value, an event is generated; and a register that maintains count for frames that are routed to another switch element.


In yet another aspect of the present invention, a system for determining over-subscription in a transmit segment of a fibre channel switch element is provided. The system includes a first register that maintains information regarding a rate at which a source port can transfer data; a first counter that counts entries corresponding to a number of frames to be transmitted at a given time; and a second register that determines an over-subscription rate.


In yet another aspect of the present invention, a method for determining over-subscription in a transmit port of a fibre channel switch element is provided. The method includes, determining an over-subscription value based on a source port's data rate, a transmit port's data rate and an entry corresponding to a number of frames that are to be transmitted from the transmit port at a given time; and notifying a processor of the over-subscription rate if the over-subscription value is different from a threshold value.


This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof concerning the attached drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features and other features of the present invention will now be described with reference to the drawings of a preferred embodiment. In the drawings, the same components have the same reference numerals. The illustrated embodiment is intended to illustrate, but not to limit the invention. The drawings include the following Figures:



FIG. 1A shows an example of a Fibre Channel network system;



FIG. 1B shows an example of a Fibre Channel switch element, according to one aspect of the present invention;



FIG. 1C shows a block diagram of a 20-channel switch chassis, according to one aspect of the present invention;



FIG. 1D shows a block diagram of a Fibre Channel switch element with sixteen GL_Ports and four 10 G ports, according to one aspect of the present invention;


FIGS. 1E-1/1E-2 (jointly referred to as FIG. 1E) show another block diagram of a Fibre Channel switch element with sixteen GL_Ports and four 10 G ports, according to one aspect of the present invention;



FIG. 2 shows a topology highlighting congestion and oversubscription in Fibre Channel networks;


FIGS. 3A/3B (jointly referred to as FIG. 3) show a block diagram of a GL_Port, according to one aspect of the present invention;


FIGS. 4A/4B (jointly referred to as FIG. 4) show a block diagram of an XG_Port (10 G port), according to one aspect of the present invention;



FIG. 5 shows a block diagram of the plural counters and registers at a transmit port, according to one aspect of the present invention;



FIG. 6 shows a process flow diagram for detecting congestion on the transmit side, according to one aspect of the present invention;



FIG. 7 is a block diagram of a system with the registers/counters used according to one aspect of the present invention to detect congestion;



FIG. 8 shows a process flow diagram for detecting congestion at a receive port, according to one aspect of the present invention;



FIGS. 9A-9B show examples of how the adaptive aspects of the present invention are used to minimize congestion;



FIG. 10 shows how a counter adjustment is used, according to one aspect of the present invention;



FIG. 11 is a block diagram of an over subscription detection system/logic, according to one aspect of the present invention;



FIG. 12 shows a flow diagram for determining over subscription, according to one aspect of the present invention; and



FIG. 13 provides a graphical illustration of how the adaptive aspects of the present invention assist in improving congestion management in Fibre Channel networks.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Definitions:


The following definitions are provided as they are typically (but not exclusively) used in the fibre channel environment, implementing the various adaptive aspects of the present invention.


“E-Port”: A fabric expansion port that attaches to another Interconnect port to create an Inter-Switch Link.


“F_Port”: A fabric port to which non-loop N_Ports are attached; it does not include FL_Ports.


“Fibre channel ANSI Standard”: The standard (incorporated herein by reference in its entirety) describes the physical interface, transmission and signaling protocol of a high performance serial link for support of other high level protocols associated with IPI, SCSI, IP, ATM and others.


“FC-1”: Fibre channel transmission protocol, which includes serial encoding, decoding and error control.


“FC-2”: Fibre channel signaling protocol that includes frame structure and byte sequences.


“FC-3”: Defines a set of fibre channel services that are common across plural ports of a node.


“FC-4”: Provides mapping between lower levels of fibre channel, IPI and SCSI command sets, HIPPI data framing, IP and other upper level protocols.


“Fabric”: A system which interconnects various ports attached to it and is capable of routing fibre channel frames by using destination identifiers provided in FC-2 frame headers.


“Fabric Topology”: This is a topology where a device is directly attached to a fibre channel fabric that uses destination identifiers embedded in frame headers to route frames through a fibre channel fabric to a desired destination.


“FL_Port”: An L_Port that is able to perform the function of an F_Port, attached via a link to one or more NL_Ports in an Arbitrated Loop topology.


“Inter-Switch Link”: A Link directly connecting the E_port of one switch to the E_port of another switch.


“Port”: A general reference to an N_Port or F_Port.


“L_Port”: A port that contains Arbitrated Loop functions associated with the Arbitrated Loop topology.


“N_Port”: A direct fabric attached port.


“NL_Port”: An L_Port that can perform the function of an N_Port.


“Over subscription”: Defined herein as data arriving at a Fibre Channel transmit port faster than the port can transmit it. It is noteworthy that the over subscribed transmit port itself may not be congested and may be sending at its full data rate. However, an over subscribed transmit port will cause congestion at the ports that are sending frames routed to the oversubscribed port.


“Switch”: A fabric element conforming to the Fibre Channel Switch standards.


“VL”: Virtual Lane: A portion of the data path between a source and destination port.


Fibre Channel System:


To facilitate an understanding of the preferred embodiment, the general architecture and operation of a fibre channel system will be described. The specific architecture and operation of the preferred embodiment will then be described with reference to the general architecture of the fibre channel system.



FIG. 1A is a block diagram of a fibre channel system 100 implementing the methods and systems in accordance with the adaptive aspects of the present invention. System 100 includes plural devices that are interconnected. Each device includes one or more ports, classified as node ports (N_Ports), fabric ports (F_Ports), and expansion ports (E_Ports). Node ports may be located in a node device, e.g. server 103, disk array 105 and storage device 104. Fabric ports are located in fabric devices such as switches 101 and 102. Arbitrated loop 106 may be operationally coupled to switch 101 using arbitrated loop ports (FL_Ports).


The devices of FIG. 1A are operationally coupled via “links” or “paths”. A path may be established between two N_ports, e.g. between server 103 and storage 104. A packet-switched path may be established using multiple links, e.g. an N-Port in server 103 may establish a path with disk array 105 through switch 102.


Fabric Switch Element


FIG. 1B is a block diagram of a 20-port ASIC fabric element according to one aspect of the present invention. FIG. 1B provides the general architecture of a 20-channel switch chassis using the 20-port fabric element. The fabric element includes ASIC 20 with non-blocking fibre channel class 2 (connectionless, acknowledged) and class 3 (connectionless, unacknowledged) service between any ports. It is noteworthy that ASIC 20 may also be designed for class 1 (connection-oriented) service, within the scope and operation of the present invention as described herein.


The fabric element of the present invention is presently implemented as a single CMOS ASIC, and for this reason the term “fabric element” and ASIC are used interchangeably to refer to the preferred embodiments in this specification. Although FIG. 1B shows 20 ports, the present invention is not limited to any particular number of ports.


ASIC 20 has 20 ports numbered in FIG. 1B as GL0 through GL19. These ports are generic to common Fibre Channel port types, for example, F_Port, FL_Port and E-Port. In other words, depending upon what it is attached to, each GL port can function as any type of port. Also, the GL port may function as a special port useful in fabric element linking, as described below.


For illustration purposes only, all GL ports are drawn on the same side of ASIC 20 in FIG. 1B. However, the ports may be located on both sides of ASIC 20 as shown in other figures. This does not imply any difference in port or ASIC design. Actual physical layout of the ports will depend on the physical layout of the ASIC.


Each port GL0-GL19 has transmit and receive connections to switch crossbar 50. One connection is through receive buffer 52, which functions to receive and temporarily hold a frame during a routing operation. The other connection is through a transmit buffer 54.


Switch crossbar 50 includes a number of switch crossbars for handling specific types of data and data flow control information. For illustration purposes only, switch crossbar 50 is shown as a single crossbar. Switch crossbar 50 is a connectionless crossbar (packet switch) of known conventional design, sized to connect 21×21 paths. This is to accommodate 20 GL ports plus a port for connection to a fabric controller, which may be external to ASIC 20.


In the preferred embodiments of switch chassis described herein, the fabric controller is a firmware-programmed microprocessor, also referred to as the input/output processor (“IOP”). IOP 66 is shown in FIG. 1C as a part of a switch chassis utilizing one or more of ASIC 20. As seen in FIG. 1B, bi-directional connection to IOP 66 is routed through port 67, which connects internally to a control bus 60. Transmit buffer 56, receive buffer 58, control register 62 and status register 64 connect to bus 60. Transmit buffer 56 and receive buffer 58 connect the internal connectionless switch crossbar 50 to IOP 66 so that it can source or sink frames.


Control register 62 receives and holds control information from IOP 66, so that IOP 66 can change characteristics or operating configuration of ASIC 20 by placing certain control words in register 62. IOP 66 can read status of ASIC 20 by monitoring various codes that are placed in status register 64 by monitoring circuits (not shown).



FIG. 1C shows a 20-channel switch chassis S2 using ASIC 20 and IOP 66. S2 will also include other elements, for example, a power supply (not shown). The 20 GL ports correspond to channels C0-C19. Each GL port has a serializer/deserializer (SERDES) designated as S0-S19. Ideally, the SERDES functions are implemented on ASIC 20 for efficiency, but may alternatively be external to each GL port.


Each GL port has an optical-electric converter, designated as OE0-OE19, connected with its SERDES through serial lines, for providing fibre optic input/output connections, as is well known in high performance switch design. The converters connect to switch channels C0-C19. It is noteworthy that the ports can connect through copper paths or other means instead of optical-electric converters.



FIG. 1D shows a block diagram of ASIC 20 with sixteen GL ports and four 10 G (gigabit) port control modules designated as XG0-XG3 for four 10 G ports designated as XGP0-XGP3. ASIC 20 includes a control port 62A that is coupled to IOP 66 through a PCI connection 66A.


FIGS. 1E-1/1E-2 (jointly referred to as FIG. 1E) show yet another block diagram of ASIC 20 with sixteen GL and four XG port control modules. Each GL port control module has a receive port (RPORT) 69 with a receive buffer (RBUF) 69A and a transmit port (TPORT) 70 with a transmit buffer (TBUF) 70A, as described below in detail. GL and XG port control modules are coupled to physical media devices (“PMD”) 76 and 75, respectively.


Control port module 62A includes control buffers 62B and 62D for transmit and receive sides, respectively. Module 62A also includes a PCI interface module 62C that allows interface with IOP 66 via a PCI bus 66A.


XG_Port (for example 74B) includes RPORT 72 with RBUF 71 similar to RPORT 69 and RBUF 69A and a TBUF and TPORT similar to TBUF 70A and TPORT 70. Protocol module 73 interfaces with SERDES to handle protocol based functionality.


GL Port:



FIGS. 3A-3B (referred to as FIG. 3) show a detailed block diagram of a GL port as used in ASIC 20. GL port 300 is shown in three segments, namely, receive segment (RPORT) 310, transmit segment (TPORT) 312 and common segment 311.


Receive Segment of GL Port:


Frames enter through link 301 and SERDES 302 converts the incoming serial data into 10-bit parallel fibre channel characters, which are then sent to receive pipe (“Rpipe” or “Rpipe1” or “Rpipe2”) 303A via a de-multiplexer (DEMUX) 303. Rpipe 303A includes parity module 305 and decoder 304. Decoder 304 decodes 10B data to 8B and parity module 305 adds a parity bit. Rpipe 303A also performs various Fibre Channel standard functions, such as detecting a start-of-frame (SOF), end-of-frame (EOF), Idles, R_RDYs (fibre channel standard primitive) and the like, which are not described since they are standard functions.


Rpipe 303A connects to smoothing FIFO (SMF) module 306 that performs smoothing functions to accommodate clock frequency variations between remote transmitting and local receiving devices.


Frames received by RPORT 310 are stored in receive buffer (RBUF) 69A, (except for certain Fibre Channel Arbitrated Loop (AL) frames). Path 309 shows the frame entry path, and all frames entering path 309 are written to RBUF 69A as opposed to the AL path 308.


Cyclic redundancy code (CRC) module 313 further processes frames that enter GL port 300 by checking CRC and processing errors according to FC_PH rules. The frames are subsequently passed to RBUF 69A where they are steered to an appropriate output link. RBUF 69A is a link receive buffer and can hold multiple frames.


Reading from and writing to RBUF 69A are controlled by RBUF read control logic (“RRD”) 319 and RBUF write control logic (“RWT”) 307, respectively. RWT 307 specifies which empty RBUF 69A slot will be written into when a frame arrives through the data link via multiplexer 313B, CRC generate module 313A and EF (external proprietary format) module 314. EF module 314 encodes proprietary (i.e. non-standard) format frames to standard Fibre Channel 8B codes. Mux 313B receives input from Rx Spoof module 314A, which encodes frames to a proprietary format (if enabled). RWT 307 controls RBUF 69A write addresses and provides the slot number to tag writer (“TWT”) 317.


RRD 319 processes frame transfer requests from RBUF 69A. Frames may be read out in any order and multiple destinations may get copies of the frames.


Steering state machine (SSM) 316 receives frames and determines the destination for forwarding the frame. SSM 316 produces a destination mask, where there is one bit for each destination. Any bit set to a certain value, for example, 1, specifies a legal destination, and there can be multiple bits set, if there are multiple destinations for the same frame (multicast or broadcast).


SSM 316 makes this determination using information from alias cache 315, steering registers 316A, control register 326 values and frame contents. IOP 66 writes all tables so that the correct exit path is selected for the intended destination port addresses.


The destination mask from SSM 316 is sent to TWT 317 and a RBUF tag register (RTAG) 318. TWT 317 writes tags to all destinations specified in the destination mask from SSM 316. Each tag identifies its corresponding frame by containing an RBUF 69A slot number where the frame resides, and an indication that the tag is valid.


Each slot in RBUF 69A has an associated set of tags, which are used to control the availability of the slot. The primary tags are a copy of the destination mask generated by SSM 316. As each destination receives a copy of the frame, the destination mask in RTAG 318 is cleared. When all the mask bits are cleared, it indicates that all destinations have received a copy of the frame and that the corresponding frame slot in RBUF 69A is empty and available for a new frame.
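
The tag bookkeeping described above can be pictured with a short illustrative sketch (not the ASIC implementation; the names and example mask are hypothetical): each RBUF slot carries a destination mask with one bit per destination, the bit is cleared as each destination takes its copy, and the slot is free once the mask reaches zero.

```python
# Illustrative model of per-slot RTAG destination-mask bookkeeping.
class RbufSlotTag:
    def __init__(self, destination_mask: int):
        self.mask = destination_mask          # bit set = destination still owed a copy

    def copy_delivered(self, destination: int) -> None:
        self.mask &= ~(1 << destination)      # clear the bit for this destination

    def slot_empty(self) -> bool:
        return self.mask == 0                 # all destinations have received the frame

tag = RbufSlotTag(destination_mask=0b0101)    # frame destined for ports 0 and 2
tag.copy_delivered(0)
tag.copy_delivered(2)
assert tag.slot_empty()                       # slot can now accept a new frame
```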


RTAG 318 also has frame content information that is passed to a requesting destination to pre-condition the destination for the frame transfer. These tags are transferred to the destination via a read multiplexer (RMUX) (not shown).


Transmit Segment of GL Port:


Transmit segment (“TPORT”) 312 performs various transmit functions. Transmit tag register (TTAG) 330 provides a list of all frames that are to be transmitted. Tag writer (TWT) 317 or common segment 311 writes TTAG 330 information. The frames are provided to arbitration module (“transmit arbiter” (“TARB”)) 331, which is then free to choose which source to process and which frame from that source to process next.


TTAG 330 includes a collection of buffers (for example, buffers based on a first-in first out (“FIFO”) scheme) for each frame source. TTAG 330 writes a tag for a source and TARB 331 then reads the tag. For any given source, there are as many entries in TTAG 330 as there are credits in RBUF 69A.


TARB 331 is activated anytime there are one or more valid frame tags in TTAG 330. TARB 331 preconditions its controls for a frame and then waits for the frame to be written into TBUF 70A. After the transfer is complete, TARB 331 may request another frame from the same source or choose to service another source.


TBUF 70A is the path to the link transmitter. Typically, frames don't land in TBUF 70A in their entirety. Mostly, frames simply pass through TBUF 70A to reach output pins, if there is a clear path.


Switch Mux 332 is also provided to receive output from crossbar 50. Switch Mux 332 receives input from plural RBUFs (shown as RBUF 00 to RBUF 19), and input from CPORT 62A shown as CBUF 1 frame/status. TARB 331 determines the frame source that is selected and the selected source provides the appropriate slot number. The output from Switch Mux 332 is sent to ALUT 323 for S_ID spoofing and the result is fed into TBUF Tags 333.


TMUX (“TxMux”) 339 chooses which data path to connect to the transmitter. The sources are: primitive sequences specified by IOP 66 via control registers 326 (shown as primitive 339A) and signals specified by transmit state machine (“TSM”) 346; frames following the loop path; or steered frames exiting the fabric via TBUF 70A.


TSM 346 chooses the data to be sent to the link transmitter, and enforces all Fibre Channel rules for transmission. TSM 346 receives requests to transmit from loop state machine 320, TBUF 70A (shown as TARB request 346A) and from various other IOP 66 functions via control registers 326 (shown as IBUF Request 345A). TSM 346 also handles all credit management functions, so that Fibre Channel connectionless frames are transmitted only when there is link credit to do so.


Loop state machine (“LPSM”) 320 controls transmit and receive functions when GL_Port is in a loop mode. LPSM 320 operates to support loop functions as specified by FC-AL-2.


IOP buffer (“IBUF”) 345 provides IOP 66 the means for transmitting frames for special purposes.


Frame multiplexor (“Frame Mux” or “Mux”) 336 chooses the frame source, while logic (TX spoof 334) converts D_ID and S_ID from public to private addresses. Frame Mux 336 receives input from Tx Spoof module 334, TBUF tags 333, and Mux 335 to select a frame source for transmission.


EF module 338 encodes proprietary (i.e. non-standard) format frames to standard Fibre Channel 8B codes and CRC module 337 generates CRC data for the outgoing frames.


Modules 340-343 put a selected transmission source into proper format for transmission on an output link 344. Parity module 340 checks for parity errors when frames are encoded from 8B to 10B by encoder 341, marking frames “invalid”, according to Fibre Channel rules, if there was a parity error. Phase FIFO 342A receives frames from encode module 341 and the frame is selected by Mux 342 and passed to SERDES 343. SERDES 343 converts parallel transmission data to serial before passing the data to the link media. SERDES 343 may be internal or external to ASIC 20.


Common Segment of GL Port:


As discussed above, ASIC 20 includes common segment 311, which comprises various modules. LPSM 320 has been described above and controls the general behavior of TPORT 312 and RPORT 310.


A loop look up table (“LLUT”) 322 and an address look up table (“ALUT”) 323 are used for private loop proxy addressing and hard zoning managed by firmware.


Common segment 311 also includes control register 326 that controls bits associated with a GL_Port, status register 324 that contains status bits that can be used to trigger interrupts, and interrupt mask register 325 that contains masks to determine the status bits that will generate an interrupt to IOP 66. Common segment 311 also includes AL control and status register 328 and statistics register 327 that provide accounting information for FC management information base (“MIB”).


Output from status register 324 may be used to generate a Fp Peek function. This allows a status register 324 bit to be viewed and sent to the CPORT.


Output from control register 326, statistics register 327 and register 328 (as well as 328A for an X_Port, shown in FIG. 4) is sent to Mux 329 that generates an output signal (FP Port Reg Out).


Output from Interrupt register 325 and status register 324 is sent to logic 335 to generate a port interrupt signal (FP Port Interrupt).


BIST module 321 is used for conducting embedded memory testing.


XG Port



FIGS. 4A-4B (referred to as FIG. 4) show a block diagram of a 10 G Fibre Channel port control module (XG FPORT) 400 used in ASIC 20. Various components of XG FPORT 400 are similar to GL port control module 300 that are described above. For example, RPORT 310 and 310A, Common Port 311 and 311A, and TPORT 312 and 312A have common modules as shown in FIGS. 3 and 4 with similar functionality.


RPORT 310A can receive frames from links (or lanes) 301A-301D and transmit frames to lanes 344A-344D. Each link has a SERDES (302A-302D), a de-skew module, a decode module (303B-303E) and parity module (304A-304D). Each lane also has a smoothing FIFO (SMF) module 305A-305D that performs smoothing functions to accommodate clock frequency variations. Parity errors are checked by module 403, while CRC errors are checked by module 404.


RPORT 310A uses a virtual lane (“VL”) cache 402 that stores plural vector values that are used for virtual lane assignment. In one aspect of the present invention, VL Cache 402 may have 32 entries and two vectors per entry. IOP 66 is able to read or write VL cache 402 entries during frame traffic. State machine 401 controls credit that is received. On the transmit side, credit state machine 347 controls frame transmission based on credit availability. State machine 347 interfaces with credit counters 328A.


Also on the transmit side, modules 340-343 are used for each lane 344A-344D, i.e., each lane can have its own module 340-343. Parity module 340 checks for parity errors and encode module 341 encodes 8-bit data to 10 bit data. Mux 342B sends the 10-bit data to a smoothing (“TxSMF”) module 342 that handles clock variation on the transmit side. SERDES 343 then sends the data out to the link.


Congestion Detection:


In one aspect of the present invention, the following set of counters and status registers can be used to detect congestion, both at the transmit and receive side.


TPORT Congestion:


The following describes various registers/counters that are used to detect congestion at TPORT 312A:


“Transmit Wait Count Register”: This register increments during each time interval in which a frame is available for transmission but cannot be transmitted due to lack of credit. The time interval may be, for example, the time needed to transmit one word (32 bits).


“Transmit Wait Count Rollover Event”: This status event is set when the transmit wait count register rolls over from its maximum value to zero. This can be set to cause an interrupt to IOP 66.


“Transmit Wait Count Threshold Register” (FIG. 5, 508): This register contains a count that is compared to the transmit wait count threshold counter value. IOP 66 can program the register.


“Transmit Wait Count Threshold Counter” (FIG. 5, 507): This counter increments each time a frame is ready to be transmitted but cannot be due to lack of credit. It decrements each time the above condition is not true. If the counter is at its maximum value, then it does not increment. If the counter is at zero, then it does not decrement.


“Transmit Wait Count Threshold Event Status”: This event occurs when the transmit wait count threshold counter value exceeds a threshold value programmed in the transmit wait count threshold register (508). This denotes that frames have been waiting to transmit based on a threshold value. The event can be used to trigger an interrupt to IOP 66.


“Congestion count adjustment” (FIG. 5, modules 513 and 514, & FIG. 10): Logic modules 513 and 514 allow the rate of counting up or down to be adjusted with a programmed value. Module 513 adjusts the rate of counting up, while module 514 adjusts the rate of counting down.



FIG. 5 shows a block diagram of the plural counters and registers at TPORT 312A that have been described above. FIG. 5 shows signal 501 to transfer frames and a “no credit” signal 502. Signals 501 and 502 are sent to logic 503. A count up signal 504 (from logic 513) and count down signal 506 (from inverter 505) are sent to transmit wait threshold counter 507. Counter 507 is incremented for each period a frame is ready to be transmitted (signal 501) but cannot be transmitted due to lack of credit (signal 502). This period could be set to the amount of time required to transmit one word of the frame.


Register 508 includes a threshold value that can be programmed by IOP 66 using the firmware (or hard coded). Register 508 output 512 and counter 507 output 511 are compared (by logic 509), and if the counter value (511) is greater than the threshold value (512), then the threshold wait count event is set, which results in an interrupt to IOP 66 (510).


To extend the range of values that can be compared without increasing the number of bits used for the threshold count in register 508 and compare module 509, counter 507 includes more bits than the threshold count, and counter output 511 is shifted down by a programmable number of bits before the comparison. For instance, if counter 507 is 2 bits longer than threshold count 508, shifting counter output 511 down by 1 or 2 bits divides the counter output by 2 or 4, making the range covered by the threshold count larger by a factor of 2 or 4, at the cost of precision in the lowest 1 or 2 bits of the counter.
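
A simplified software model of this logic (assumed names and bit widths, not the hardware) combines the saturating up/down counter of FIG. 5 with the programmable down-shift described above before the threshold comparison:

```python
# Software model of the transmit wait count threshold logic of FIG. 5.
class TransmitWaitThreshold:
    def __init__(self, counter_bits: int, threshold: int, shift: int = 0):
        self.max_count = (1 << counter_bits) - 1
        self.count = 0
        self.threshold = threshold   # firmware-programmed value (register 508)
        self.shift = shift           # programmable down-shift of the counter output

    def tick(self, frame_ready: bool, credit_available: bool) -> bool:
        if frame_ready and not credit_available:
            self.count = min(self.max_count, self.count + 1)   # count up, saturate
        else:
            self.count = max(0, self.count - 1)                # count down, floor at 0
        # Threshold event: shifted counter value exceeds the programmed threshold.
        return (self.count >> self.shift) > self.threshold
```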



FIG. 10 shows how counter adjustment is used to change the rate at which the wait count goes up or down. Adjust level module 1001 is programmed by firmware with a certain adjustment level value. Adjust counter 1002 is incremented whenever a count up signal (if adjusting count up, from FIG. 5, 503) or count down signal (if adjusting count down, from FIG. 5, 505) is set. The values in modules 1001 and 1002 are compared by module 1003, with the output set if the value in 1002 is greater than or equal to the value in 1001.


The output of module 1003 is “ANDed” with the original signal by gate 1004 to provide the “adjusted count up” or “adjusted count down” output. The adjust counter rolls over when incremented past its maximum (depending on the number of bits in the count). The result is to change the rate of count up or count down, depending on the adjust level value and the number of bits in the counter. If there are n bits in the counter, the rate of count signals is modified as follows:

C=r*(1−(a/2**n))


Where C is the effective count rate (rate of signals in FIG. 5, 504 or 506), r is the raw count rate (rate of signals in FIG. 5 from 503 or 505), and “a” is the programmed adjust level from module 1001, which is less than 2**n. In one aspect of the present invention, a 4 bit counter is used for most cases, although the invention is not limited to any particular bit size or counter value.
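
A small sketch can confirm the relationship (the function name and 4-bit width are assumptions used for illustration): pulses are suppressed while the adjust counter is below the programmed level, so with a = 4 and n = 4 only 12 of every 16 raw pulses pass, i.e. C = 0.75*r.

```python
# Sketch of the FIG. 10 rate adjustment: the n-bit adjust counter increments on
# every raw count pulse, and the pulse passes the AND gate only when the
# counter value is greater than or equal to the programmed adjust level.
def adjusted_pulses(raw_pulses: int, adjust_level: int, n_bits: int = 4) -> int:
    passed, adjust_counter = 0, 0
    for _ in range(raw_pulses):
        adjust_counter = (adjust_counter + 1) % (1 << n_bits)   # rolls over past max
        if adjust_counter >= adjust_level:
            passed += 1                                         # pulse is not suppressed
    return passed

# C = r * (1 - a/2**n): with a = 4 and n = 4, the effective rate is 0.75 * r.
assert adjusted_pulses(raw_pulses=16, adjust_level=4) == 12
```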



FIG. 6 is a flow diagram of executable steps for detecting congestion on the transmit side (TPORT 312A), according to one aspect of the present invention.


In step S600, frames (or a signal to transmit frames) are received for transmission. In step S601, the process determines if credit is available to transmit the frame. If credit is available, then in step S603, the frame is sent and counter 507 is decremented or cleared.


If no credit is available, then in step S602, counter 507 is incremented.


In step S604, the process compares counter 507 value 511 to a threshold value 512 that can be programmed by firmware in register 508. If the counter value 511 is greater than threshold value 512, then in step S605, a wait count event is triggered. This can be an interrupt to IOP 66 and denotes congestion.


If counter value 511 is less than threshold value 512, then the process goes back to step S601.


RPORT Congestion:


The following describes various registers/counters that are used to detect congestion at RPORT 310A:


“Receive Buffer Full Status”: This status is set when all buffers (RBUF 69A) for a port are full.


If the credit mechanism per Fibre Channel standards is operative then TPORT 312A cannot transmit because of lack of credit. This status can be programmed by firmware to cause an interrupt for IOP 66.


“Receive Buffer Full Threshold Register” (FIG. 7, 706): This register maintains a count that is compared to the “Receive Buffer Full Threshold Counter” value.


“Receive Buffer Full Threshold Counter” (FIG. 7, 705): This counter is incremented every time the receive buffers (69A) are full. The counter decrements when the buffers are not full. If the counter is at its maximum value, it stops incrementing. If the counter is at zero, it stops decrementing.


“Receive Buffer Full Threshold Event Status” (709): This event happens if the receive buffer full threshold counter value exceeds the programmed (or hard coded) receive buffer full threshold register value. This occurs if received frames cannot be moved to their destination for a certain period. This event can be used to generate an interrupt for IOP 66.


“Receive Buffer Log”: A buffer log can be kept in RBUF 69A. The log includes the upper 16 bits of the source and destination addresses (S_ID and D_ID) of the frames that are received in RBUF 69A, and the status indicating if data is valid. If the frames are forwarded rapidly, the log values will change quickly. However, due to congestion, if frames do not move quickly, then these values do not change rapidly. Sampling the log values provides a statistical sample of frame sources and destinations at a port. The log allows identifying the destination(s) that are congested. The log can be sent upstream to a device so that the upstream device can alter routing based on congestion.
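
As an illustration of how such a log might be consumed, the following sketch (a hypothetical firmware-side helper; the data layout is assumed) tallies sampled log entries to estimate which destinations dominate the stalled traffic:

```python
from collections import Counter

# log_samples: (s_id_upper16, d_id_upper16, valid) tuples read from the
# receive buffer log over time. Destinations that dominate the samples are
# the likely congestion points.
def congested_destinations(log_samples, min_share=0.5):
    counts = Counter(d_id for _, d_id, valid in log_samples if valid)
    total = sum(counts.values())
    return [d_id for d_id, n in counts.items() if total and n / total >= min_share]
```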


“E-Port Frame In Count Register”: This register, located in CPORT 311A, counts received frames that are routed to an E_Port to go to another switch. By comparing this register count to the overall received frame count at a port, the percentage of frames going to other switches versus local destinations can be determined.



FIG. 7 is a block diagram of system 708 showing the registers/counters used, according to one aspect of the present invention, to detect congestion. A receive buffer full signal 701 is received and, based on it (count up signal 704), counter 705 is incremented. Counter 705 is decremented (signal 703, received via inverter 702) when a frame leaves the receive buffer.


Register 706 can be programmed with a threshold value by firmware. Counter 705 generates a value 710 that is compared with register 706 threshold value 711. If counter value 710 is greater than threshold value 711, then a “receive buffer full” event is triggered (709). This can be used to generate an interrupt for IOP 66.



FIG. 8 shows a process flow diagram for detecting congestion at RPORT 310A, according to one aspect of the present invention. In step S801, the process determines if the receive buffer is full. If the buffer is not full, then in step S802, counter 705 is decremented.


If the buffer is full, then in step S803, counter 705 is incremented.


In step S804, counter 705 value 710 is compared with threshold value 711. If the counter value 710 is greater than threshold value 711, then a threshold event is set in step S805, otherwise, the process goes back to step S801.
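
A simplified software analogue of FIGS. 7 and 8 (assumed names and an assumed 8-bit counter width) shows one iteration of the receive-side check: the counter saturates upward while the receive buffers are full, decrements otherwise, and a threshold event is raised when its value exceeds the firmware-programmed threshold.

```python
# One polling step of the receive-side congestion check (FIGS. 7 and 8).
def rport_congestion_step(count: int, buffer_full: bool,
                          threshold: int, max_count: int = 255):
    if buffer_full:
        count = min(max_count, count + 1)     # step S803: buffers full, count up
    else:
        count = max(0, count - 1)             # step S802: buffers not full, count down
    event = count > threshold                 # steps S804/S805: trigger threshold event
    return count, event
```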



FIGS. 9A-9B show examples of how the adaptive aspects of the present invention can be used. In FIG. 9A, some local ports in switches A and B send a large amount of data to switch C, and most of the traffic uses link 1 between A and C, passing through switch B. Link 2 does not have enough bandwidth for this traffic. In this scenario, the E_Port on the switch B side of link 1 and the local ports on switch B sending to switch C will get receive buffer full threshold events. The E_Port on the switch A side of link 1 will get transmit wait count threshold events.


Based on the foregoing adaptive aspects of the present invention, one possible improvement would be to route traffic from A to C over link 3 or to add another link between switches B and C. These improvements are possible because the various counters and registers above can detect congestion in the links.



FIG. 9B shows that local ports on Switch A get receive buffer full threshold events. The E_Port “frame in count” for those local ports can be sampled and compared to the total received frame count. If most frames are going from switch A to switch B, congestion can be relieved by adding links between switches A and B. If most of the frames are going to local destinations, then performance is not limited by the switch fabric, but by the number of devices being used.


Over Subscription Detection:


The following describes various registers/counters that are used to detect over subscription at TPORT 312A. In one aspect, the register/counters are implemented in TTAG 330:


“Port Rate” register: This register includes the receive speed of the source port associated with that TTAG FIFO.


“Port TTAG Entry Count” counter: This counter provides the number of TTAG FIFO entries representing frames to be transmitted, currently in the TTAG FIFO for a source port.


“Calculate Over Subscription” Register: This register calculates the amount of over subscription by multiplying the port TTAG entry count by the source port rate, adding the results for all ports, and then dividing the total by the transmit port's speed rate. If there are n source ports, and if Rx is the rate of source port x, Fx is the number of frames in the TTAG FIFO for source port x, and T is the transmit rate for the transmit port, then over subscription is provided by:

((R0*F0)+(R1*F1)+ . . . +(R(n−1)*F(n−1)))/T


“Threshold” Value: This value is programmed by firmware and is compared to the calculated over subscription value. If the calculated over subscription value is greater than or equal to the threshold value, then the over subscription status is set. The status is used by firmware and may cause an interrupt for IOP 66.
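
As an illustrative numeric example (the port speeds and queue depths are chosen here for clarity, not taken from the patent): suppose a transmit port running at T = 2 G is fed by two 2 G source ports with three queued TTAG entries each and one 1 G source port with two queued entries. The calculated value is ((2*3)+(2*3)+(1*2))/2 = 7, and this result is compared against the firmware-programmed threshold; if it is greater than or equal to the threshold, the over subscription status is set.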



FIG. 11 is a block diagram of the over subscription detection system/logic 1100. System 1100 may be located in TTAG 330. Each TTAG FIFO 1106 includes entries representing frames from a particular source port ready for transmission. Port rate 1101 includes the rate corresponding to a particular source port. The port TTAG entry count 1102 contains the number of TTAG FIFO entries for a particular source port. To calculate over subscription, module 1103 calculates the sum of the products of each port's TTAG count and rate, and divides the sum by the transmit port speed rate. Compare module 1105 compares the result from module 1103 with the programmed threshold value in module (or register) 1104. If module 1103 output is greater than the threshold value in module 1104, a status signal 1107 is set.


If integer arithmetic is used, any result of the over subscription calculation between 1 and 2 may be rounded down to 1. To increase precision, the sum of the products of the port TTAG counts and rates can be shifted up by 2 or 3 bits (multiplying by 4 or 8) before the division by the transmit rate. Over subscription is determined by:

(((R0*F0)+(R1*F1)+ . . . +(R(n−1)*F(n−1)))*4)/T


The value selected from module 1104 takes the foregoing into account.



FIG. 12 shows a flow diagram for determining over subscription. Step 1201 initializes the calculation. Step 1202 calculates the product of the TTAG FIFO count and the rate for a source port, and is repeated for each port by going through steps 1203 and 1204 until all ports have been added. Step 1205 finishes the calculation by dividing the sum by the transmit port rate. The compare in step 1206 causes the over subscription status to be set in step 1207 if the calculated number is greater than the programmed threshold.
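
The calculation of FIG. 12, including the optional scaling described above for integer precision, can be summarized in a short sketch (the function and parameter names are assumptions; the rates must be integers in consistent units):

```python
# Sketch of the FIG. 12 over subscription calculation.
def over_subscription(port_rates, ttag_counts, transmit_rate,
                      threshold, scale_shift=2):
    # Steps 1202-1204: sum each source port's (rate * queued TTAG entries).
    total = sum(r * f for r, f in zip(port_rates, ttag_counts))
    # Step 1205: scale the sum before the integer division to keep fractional
    # precision; the programmed threshold is chosen with this scaling in mind.
    value = (total << scale_shift) // transmit_rate
    status = value >= threshold      # steps 1206-1207: set status at/over threshold
    return value, status
```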


The raw values, i.e., (R0*F0) . . . (R(n−1)*F(n−1)), are available to IOP 66 as status and are used to determine which ports have how much over subscription.


It is noteworthy that the term “signal” as used in the foregoing description includes firmware/software commands.


In one aspect of the present invention, congestion can be detected in fibre channel switches and routing changes can be made to improve the overall performance of the networks.



FIG. 13 provides a graphical illustration of how the foregoing adaptive aspects of the present invention assist in improving congestion management.


Although the present invention has been described with reference to specific embodiments, these embodiments are illustrative only and not limiting. Many other applications and embodiments of the present invention will be apparent in light of this disclosure and the following claims.

Claims
  • 1. A system for detecting congestion at a receive segment of a port of a fibre channel switch element, comprising: a counter at the receive segment that is incremented when an indicator is set indicating that a receive buffer at the receive segment is full; wherein the receive buffer is used for temporarily storing fibre channel frames at the receive segment; a threshold register for storing a threshold value for detecting congestion at the receive segment; wherein an output value from the counter is compared with the threshold value and if the output value is greater than the threshold value, then congestion is detected at the receive segment; and a receive buffer log that stores a destination identifier value and a source identifier value for frames received at the receive segment; and the rate at which the receive buffer log changes indicates how quickly frames are moving through the receive segment to a transmit segment of the port.
  • 2. The system of claim 1, wherein a threshold event is triggered if the output from the counter is greater than the threshold value; and the threshold event generates an interrupt for a processor of the fibre channel switch element, notifying the processor of congestion at the receive segment of the port.
  • 3. The system of claim 1, further comprising: a register that maintains count for frames that are routed to another switch element and by comparing the register count with an overall received frame count, a percentage of frames that are routed within the fibre channel switch element is determined.
  • 4. A system for determining over-subscription in a transmit segment of a port of a fibre channel switch element, comprising: an over-subscription module that receives information regarding a rate at which a plurality of source ports transmit frames and a number of frames that are waiting to be transmitted by the plurality of source ports, at any given time; wherein the over-subscription rate is determined by the following: ((R0*F0)+(R1*F1)+ . . . (R(n−1)*F(n−1)))/T; where “n” is a number of the plurality of source ports, “R” is a rate at which the plurality of source ports operate, “F” is a number of frames that are waiting to be transmitted at any given time, and “T” is a transmit rate for the transmit segment; wherein the transmit segment is over-subscribed if frames arrive faster than a rate at which the transmit segment transmits the frames.
  • 5. The system of claim 4, further comprising: a register that stores information regarding a rate at which the plurality of source ports transfer data; and a counter that counts entries indicating a number of frames waiting to be transmitted at each of the plurality of source ports, at any given time; wherein values from the register and the counter are input into the over-subscription module for determining the over-subscription rate.
  • 6. The system of claim 4, wherein the determined over-subscription value is compared to a stored threshold value and if the determined over-subscription value is greater than the threshold value, then an over-subscription status is set for the port.
  • 7. A method for determining over-subscription in a transmit segment of a port for a fibre channel switch element, comprising: determining an over-subscription value based on the following: ((R0*F0)+(R1*F1)+ . . . (R(n−1)*F(n−1)))/T; where “n” is a number of a plurality of source ports sending frames to the port, “R” is a rate at which the plurality of source ports operate, “F” is a number of frames that are waiting to be transmitted at any given time, and “T” is a transmit rate for the transmit segment; wherein the transmit segment is over-subscribed if frames arrive faster than a rate at which the transmit segment transmits the frames; and notifying a processor for the fibre channel switch element of the over-subscription, if the determined over-subscription value is different from a stored threshold value.
  • 8. The method of claim 7, wherein the threshold value is programmable.
  • 9. The method of claim 7, wherein a register stores information regarding a rate at which the plurality of source ports transfer data; and a counter counts entries indicating a number of frames waiting to be transmitted at each of the plurality of source ports, at any given time; wherein values from the register and the counter are input into an over-subscription module for determining the over-subscription value.
  • 10. The method of claim 7, wherein if the determined over-subscription value is greater than the threshold value, then an over-subscription status is set for the port.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. § 119(e)(1) to the following provisional patent applications: Filed on Sep. 19, 2003, Ser. No. 60/503,812, entitled “Method and System for Fibre Channel Switches”; Filed on Jan. 21, 2004, Ser. No. 60/537,933, entitled “Method And System For Routing And Filtering Network Data Packets In Fibre Channel Systems”; Filed on Jul. 21, 2003, Ser. No. 60/488,757, entitled “Method and System for Selecting Virtual Lanes in Fibre Channel Switches”; Filed on Dec. 29, 2003, Ser. No. 60/532,965, entitled “Programmable Pseudo Virtual Lanes for Fibre Channel Systems”; Filed on Sep. 19, 2003, Ser. No. 60/504,038, entitled “Method and System for Reducing Latency and Congestion in Fibre Channel Switches”; Filed on Aug. 14, 2003, Ser. No. 60/495,212, entitled “Method and System for Detecting Congestion and Over Subscription in a Fibre channel Network”; Filed on Aug. 14, 2003, Ser. No. 60/495,165, entitled “LUN Based Hard Zoning in Fibre Channel Switches”; Filed on Sep. 19, 2003, Ser. No. 60/503,809, entitled “Multi Speed Cut Through Operation in Fibre Channel Switches”; Filed on Sep. 23, 2003, Ser. No. 60/505,381, entitled “Method and System for Improving bandwidth and reducing Idles in Fibre Channel Switches”; Filed on Sep. 23, 2003, Ser. No. 60/505,195, entitled “Method and System for Keeping a Fibre Channel Arbitrated Loop Open During Frame Gaps”; Filed on Mar. 30, 2004, Ser. No. 60/557,613, entitled “Method and System for Congestion Control based on Optimum Bandwidth Allocation in a Fibre Channel Switch”; Filed on Sep. 23, 2003, Ser. No. 60/505,075, entitled “Method and System for Programmable Data Dependent Network Routing”; Filed on Sep. 19, 2003, Ser. No. 60/504,950, entitled “Method and System for Power Control of Fibre Channel Switches”; Filed on Dec. 29, 2003, Ser. No. 60/532,967, entitled “Method and System for Buffer to Buffer Credit recovery in Fibre Channel Systems Using Virtual and/or Pseudo Virtual Lane”; Filed on Dec. 29, 2003, Ser. No. 60/532,966, entitled “Method And System For Using Extended Fabric Features With Fibre Channel Switch Elements”; Filed on Mar. 4, 2004, Ser. No. 60/550,250, entitled “Method And System for Programmable Data Dependent Network Routing”; Filed on May 7, 2004, Ser. No. 60/569,436, entitled “Method And System For Congestion Control In A Fibre Channel Switch”; Filed on May 18, 2004, Ser. No. 60/572,197, entitled “Method and System for Configuring Fibre Channel Ports”; and Filed on Dec. 29, 2003, Ser. No. 60/532,963, entitled “Method and System for Managing Traffic in Fibre Channel Switches”. The disclosures of the foregoing applications are incorporated herein by reference in their entirety.

Related Publications (1)
Number Date Country
20050030893 A1 Feb 2005 US
Provisional Applications (19)
Number Date Country
60503812 Sep 2003 US
60537933 Jan 2004 US
60488757 Jul 2003 US
60532965 Dec 2003 US
60504038 Sep 2003 US
60495212 Aug 2003 US
60495165 Aug 2003 US
60503809 Sep 2003 US
60505381 Sep 2003 US
60505195 Sep 2003 US
60557613 Mar 2004 US
60505075 Sep 2003 US
60504950 Sep 2003 US
60532967 Dec 2003 US
60532966 Dec 2003 US
60550250 Mar 2004 US
60569436 May 2004 US
60572197 May 2004 US
60532963 Dec 2003 US