Method and system for controlling packet flow in networks

Information

  • Patent Grant
  • Patent Number
    7,319,669
  • Date Filed
    Friday, November 22, 2002
  • Date Issued
    Tuesday, January 15, 2008
Abstract
A system and method for transmitting and bundling network packets is provided. The incoming network packet size is determined, and if the remote buffer space is sufficient to hold the network packet, it is transmitted to the destination port. If the remote buffer space is not enough to hold the network packet, it is discarded. The system includes an arbitration module that receives remote buffer space information and transmits the network packet if the remote buffer space has enough space to hold the packet. The arbitration module also determines if a second network packet is from a same source port having a same source virtual lane, and has the same destination virtual lane (bundling conditions). If the second network packet meets the bundling conditions, then it is transmitted after the first network packet, even if other packets were received before the second network packet.
Description
BACKGROUND

1. Field of the Invention


The present invention relates to networks, and more particularly to routing network packets in switches.


2. Background of the Invention


Switches are commonly used in networks. A typical switch routes network packets (which may also be referred to herein as “packets”) from one port to another port. Various industry standards and protocols are used to monitor and manage data packet transmission through networks and switches. One such standard is Infiniband, published by the Infiniband Trade Association and incorporated herein by reference in its entirety. Infiniband provides an architecture that allows a single unified input/output (I/O) fabric. The standard provides a virtual lane (VL) mechanism that allows network data packets from different sources, bound for different destinations, to flow through a single channel. The standard also describes how data packets are processed for delivery within each virtual lane.


Switches in networks typically use a central arbitration unit that selects input/output ports for initiating packet transfer. Various algorithms are used by arbitration units for selecting (or not selecting) a particular port for packet transfer.


Typically, arbitration units arbitrate port “requests” without any knowledge of the size or available space of the remote port buffer where packets are stored after delivery. Hence, in conventional switches, the arbitration unit transfers data packets without knowing how much remote buffer space is available for storing the transferred packet(s). This can result in packet dropping, port stalling and remote buffer overrun.


In addition, most arbitration units select packets for transmission based on a first in-first out (FIFO) model. This is inefficient when packets are received from the same source and are meant for the same destination using the same virtual lane, because such packets should be sent in close proximity to each other rather than waiting to satisfy the FIFO ordering.


Another drawback in conventional switch fabrics is that packets are accumulated, bundled and then transmitted. This increases latency based on the number of packets.


Therefore, what is needed is a process and system that allow switches to efficiently transmit packets, and to arbitrate and select packets for transmission, with minimal packet dropping, port stalling and remote buffer overrun.


SUMMARY OF THE INVENTION

In one aspect of the present invention, a method for transmitting network packets is provided. The method includes, determining a network packet size; determining a remote buffer space size at a destination port where the network packet is to be sent; determining if the remote buffer space size is sufficient to hold the network packet; and then transmitting the network packet to the destination port. If there is not enough remote buffer space to hold the network packet, it may be discarded.
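
A minimal sketch (not the claimed implementation) of the gating decision described above is shown below; the function name and parameters are hypothetical, and sizes are treated as plain byte counts.

    def admit_packet(packet_size: int, remote_buffer_free: int) -> bool:
        """Transmit only if the remote buffer at the destination can hold the packet.

        Returns True when the packet may be transmitted; False means the packet
        is discarded (or, in other aspects, another candidate is selected).
        """
        return packet_size <= remote_buffer_free

    # Hypothetical example: a 512-byte packet needs at least 512 free bytes.
    assert admit_packet(512, 1024) is True
    assert admit_packet(512, 256) is False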


In another aspect of the present invention, a system for transmitting network packets is provided. The system includes a receiving port that receives the network packet and generates a packet descriptor and packet size descriptor; a transmit port that receives packet tags with packet descriptor and packet size information; and an arbitration module that receives remote buffer space size information and transmits the network packet if the remote buffer space has enough space to hold the packet.


The system also includes an arbitration priority table that provides priority information regarding virtual lane access in the transmit port to the arbitration module; and a first in first out (FIFO) storage at the transmit port for storing network packet tags.


In another aspect of the present invention, a system for bundling packets for network transmission is provided. The arbitration module determines if a second network packet is from a same source port having a same source virtual lane and will use the same destination virtual lane. If these conditions are met, then the second network packet is transmitted after the first network packet, even if other network packets were received before the second network packet.


In another aspect of the present invention, a method for bundling network packets is provided. The method includes, determining if a second network packet has a same source as a first network packet; determining if the second network packet has a same source virtual lane as the first network packet; determining if the second network packet has a same destination virtual lane as the first network packet; and transmitting the second network packet after the first network packet even if other network packets received before the second network packet are waiting for transmission.


In one aspect of the present invention, while a first packet is currently being transmitted, the arbitration unit determines whether to bundle the second packet with the first. This “on-the-fly” packet bundling reduces latency.


In one aspect of the present invention, remote buffer space is efficiently used and is not overrun because only those packets that can fit the remote buffer space are transmitted. This also prevents port stalling and packet dropping.


In another aspect of the present invention, network packets are efficiently transmitted because they are bundled together based on certain conditions (e.g. same source, same source virtual lane and same destination virtual lane). This prevents packets to the same destination from waiting in the network queue and reduces latency.


This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof concerning the attached drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features and other features of the present invention will now be described with reference to the drawings of a preferred embodiment. In the drawings, the same components have the same reference numerals. The illustrated embodiment is intended to illustrate, but not to limit the invention. The drawings include the following Figures:



FIG. 1A shows a block diagram of a network using the INFINIBAND standard, according to one aspect of the present invention;



FIG. 1B shows a block diagram of a switch using the system, according to one aspect of the present invention;



FIG. 2A shows a block diagram of a network packet structure used according to one aspect of the present invention;



FIG. 2B shows a block diagram of a local route header in the packet structure of FIG. 2A, used according to one aspect of the present invention;



FIG. 3 shows another block diagram of a switch with a switch fabric, according to one aspect of the present invention;



FIG. 4A shows a block diagram of a transmit port, according to one aspect of the present invention;



FIG. 4B is a block diagram of a virtual lane priority table, according to one aspect of the present invention;



FIG. 5 shows a block diagram showing an arbitration module, according to one aspect of the present invention;



FIG. 6 shows a flow diagram for transmitting network packets to avoid remote buffer overrun, according to one aspect of the present invention; and



FIG. 7 shows a flow diagram for bundling network packets, according to one aspect of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

To facilitate an understanding of the preferred embodiment, the general architecture and operation of a network system will be described. The specific architecture and operation of the preferred embodiments will then be described with reference to the general architecture of the network system.



FIG. 1A shows a block diagram of plural computing devices operationally coupled using the Infiniband architecture as described in the Infiniband standard specification, published by the Infiniband Trade Association.



FIG. 1A shows system 104 with a fabric 117. Fabric 117 includes plural switches 106, 107, 111 and 112. Switches 106, 107, 111 and 112 may have common or separate functionalities, and can include the adaptive aspects of the present invention, described below. Fabric 117 also includes a router 108 that is coupled to a wide area network 109 and local area network 110.


Switch 106 is operationally coupled to a RAID storage system 105 and system 102, while systems 101 and 103 may be operationally coupled to switch 107.


Switch 112 may be coupled to a small computer system interface (“SCSI”) port 113 that is coupled to SCSI based devices. Switch 112 may also be coupled to Ethernet 114, fiber channel device(s) 115 and other device(s) 116.


It is noteworthy that systems 101-103 may be any computing system with a microprocessor that can be operationally coupled to a network, including one that is based on the Infiniband standard.



FIG. 1B shows a block diagram of switch 112 that includes a processor 120 which is operationally coupled to plural ports 122, 123, 124 and 125 via a control port 121 and cross-bar 119. In one aspect of the present invention, processor 120 may be a reduced instruction set computer (RISC) type microprocessor. Ports 122-125 may be similar to ports 113-116, respectively.


Switch 112 may be coupled to a processor 129 that is coupled to Ethernet 127 and serial port 128. In one aspect of the present invention, processor 129 may be included in computing systems 101-103.



FIG. 2A provides an example of a packet structure that may be used in the various adaptive aspects of the present invention. It is noteworthy that this structure is shown only to illustrate the adaptive aspects of the present invention; other packet structures may be used with the methods and systems described below.


Packet 200 includes a local route header 200A, a base transport header (BTH) 200B, packet payload 200C, invariant cyclic redundancy code (CRC) 200D, and variant CRC 200E. Packet structure 200 is also described in Infiniband Architecture Specification, Volume 1, Chapter 6, titled “Data Packet Format”, incorporated herein by reference in its entirety.



FIG. 2B shows a block diagram of local route header (LRH) 200A, which contains the fields used for local routing by switches within an InfiniBand subnet. (The LRH in InfiniBand subnet routing is analogous to FC-2 in Fibre Channel and to the MAC layer in Ethernet LAN routing; in all three cases it carries Layer 2 routing/switching information.) LRH 200A includes a virtual lane (VL) field 201 that identifies which receive buffer and flow control credits should be used for processing a received packet; a link version (Lver) field 202 that specifies the version of LRH 200A; a service level (SL) field 203 that is used by switch 112 to determine a transmit VL for a packet; and a link next header (LNH) field 205 that specifies which header follows LRH 200A. Field 209 is a reserved field.


LRH 200A also includes a destination local identifier (DLID) field 206 that specifies the port to which switch 112 delivers the packet, and a source local identifier (SLID) field 207 that indicates the source of the packet. Packet length field 208 specifies the number of words contained in a packet.
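
For orientation only, the LRH fields described above can be modeled as a simple record, as in the sketch below; the class name is hypothetical and the bit widths in the comments follow the standard InfiniBand LRH layout.

    from dataclasses import dataclass

    @dataclass
    class LocalRouteHeader:
        """Sketch of LRH 200A fields (widths per the InfiniBand LRH layout)."""
        vl: int        # field 201: virtual lane (4 bits), selects receive buffer/credits
        lver: int      # field 202: link version (4 bits)
        sl: int        # field 203: service level (4 bits), mapped to a transmit VL
        lnh: int       # field 205: link next header (2 bits), identifies the next header
        dlid: int      # field 206: destination local identifier (16 bits)
        slid: int      # field 207: source local identifier (16 bits)
        pkt_len: int   # field 208: packet length in words (11 bits)
        # field 209 is reserved and omitted here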



FIG. 3 shows a block diagram of switch 112 with a switch fabric 300 and associated components. Switch fabric 300 is operationally coupled to CPORT 121 and plural ports 305 and 309. It is noteworthy that ports 305 and 309 are similar to ports 122-125.


Switch fabric 300 includes a packet data crossbar 302, a packet request crossbar 303, a packet tag crossbar 304 and a control bus 301.


Packet data crossbar 302 connects receive ports (306, 310) and transmit ports (307, 311), and can concurrently transmit plural packets via plural VLs.


Packet tag crossbar 304 is used to move plural packet tags from receive ports (306, 310) to transmit ports (307, 311), as described below.


Packet request crossbar 303 is used by transmit port (307, 311) to request a particular packet from a receive buffer.


Routing table (RTABLE) 313 is used to map a DLID from an LRH 200A to one or more output ports. A forwarding table 314 includes look up tables (LUTs) that service ports 305 and 309. Ports 305 and 309 (also referred to as XPORT) are a part of switch 112.


Interfaces (I/F) 308 and 312 provide an input/output interface to switch 112.



FIG. 4A shows a block diagram of transmit port 307 in switch 112 with an arbitration module 402. It is noteworthy that each transmit port (e.g. 307, 311) may have its own arbitration module (also referred to as “arbitration unit”) for transmitting packets.


Incoming packets are received by receive ports (306, 310). Each incoming packet includes its data packet size; this information may be included in packet header 200. Packets are stored in receive buffer(s) at port 305 (not shown).


Transmit tag fetch module 403 receives packet tags using packet tag crossbar 304. The tags are sent to a VL mapping module 404, which is used to map a packet's service level (SL) to a given output VL. This allows packets to enter via one VL and leave via another VL. Tags that are mapped by VL mapping module 404 are stored in a tag FIFO 406. Each packet tag represents a data packet residing in a receive port (e.g. 306) buffer (not shown), waiting to be fetched. The tags include information on packet size, receive port (e.g. 306, 310) and receive port VL. Arbitration module 402 selects a packet tag from one of the VL tag FIFOs. The tag is used to generate a packet request and results in a subsequent packet transfer.
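
The tag path just described might be sketched as follows; the names (PacketTag, enqueue_tag, sl_to_vl) are hypothetical, and the per-VL tag FIFOs stand in for FIFO 406.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class PacketTag:
        size: int       # packet size in bytes
        recv_port: int  # receive port where the packet waits to be fetched (e.g. 306, 310)
        recv_vl: int    # VL on which the packet arrived
        sl: int         # service level carried in the packet header

    def enqueue_tag(tag: PacketTag, sl_to_vl: dict, tag_fifos: dict) -> int:
        """Map the tag's service level to an output VL and queue it (per-VL tag FIFO)."""
        out_vl = sl_to_vl[tag.sl]
        tag_fifos.setdefault(out_vl, deque()).append(tag)
        return out_vl

    # Hypothetical usage: a packet arriving with SL 3 is queued on output VL 1.
    fifos = {}
    enqueue_tag(PacketTag(size=256, recv_port=306, recv_vl=0, sl=3), {3: 1}, fifos)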


Arbitration module 402 selects a packet from a set of candidate packets available for transmission from a port. As mentioned earlier, each port has its own arbitration module. VL priority table 405 provides packet ordering priority, which may be as shown below in Table I:


                TABLE I

    PACKET TYPE          PRECEDENCE ORDER
    MANAGEMENT PACKET    HIGHEST
    LINK PACKET          2ND HIGHEST
    DROP PACKET          3RD HIGHEST
    OTHERS               LOWEST

Packets at a higher precedence level are sent before packets at a lower precedence level.
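
Purely as an illustration, the precedence order of Table I can be expressed as an ordered ranking used when choosing among candidate packets of different types; the enum below is hypothetical.

    from enum import IntEnum

    class Precedence(IntEnum):
        """Lower value means higher precedence (Table I)."""
        MANAGEMENT = 0  # highest
        LINK = 1        # 2nd highest
        DROP = 2        # 3rd highest
        OTHER = 3       # lowest

    # Candidates are considered in precedence order.
    candidates = [Precedence.OTHER, Precedence.LINK, Precedence.MANAGEMENT]
    assert min(candidates) is Precedence.MANAGEMENT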


A block diagram of VL arbitration priority table 405 is shown in FIG. 4B. VL arbitration priority table 405 includes three components, High Priority 405A, Low Priority 405D and Limit of High Priority 405G.


High Priority 405A and Low Priority 405D include plural entries, each with a VL number (for example, 405B and 405E, respectively) and a weightage value (for example, 405C and 405F, respectively). The weightage values indicate the number of byte units during which packet transmission may occur from a VL. The unit includes a header, payload, CRC and packet padding.


Limit of High priority value 405G indicates the number of high priority packets that can be transmitted without sending a low priority packet. VL arbitration values may be adjusted dynamically, as discussed below, while a port is active.


High Priority values 405A and Low Priority values 405D form a two-level priority scheme. If there is a packet available for transmission from any VL listed as a high priority component, then it is transmitted first. Transmission of packets with low priority values 405D occurs when:


No packets with high priority values 405A are available; or


The high priority transmit period, defined by the Limit of High Priority value 405G, has been exceeded.


Weighted round robin arbitration is used within each high or low priority component, where the order of entries for each component specifies the order of VL scheduling and the weightage value specifies the bandwidth allocated to the specific entry.


When an entry is completed, arbitration module 402 proceeds to the next entry, and the available weight is set for the new entry. As a packet is sent to an output port for transmission (via Buffer 401), the available weight value is decreased for the current entry. This allows a transmit port (e.g., 307, 311) to send packets that are from the same source with the same destination efficiently without delay or port stalling.


In another aspect of the present invention, arbitration module 402 is provided with information regarding the remote buffer (or memory) where a packet is being sent. The remote buffer space size information together with the packet size informs arbitration module 402 whether the remote buffer can receive a packet. If a remote buffer cannot accommodate a given packet, the next VL (as shown by 405B, FIG. 4B)/WT (as shown by 405F, FIG. 4B) pair is tested against the available packet candidates. This process continues until either a packet is selected and transmitted, or the port stalls. If the port stalls, it is an indication that the remote device does not have enough buffer space for any packet. This prevents packet dropping and port stalling.
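
The selection loop described above, combining the two-level priority table, the weighted round robin allowance, and the remote buffer check, might look roughly like the sketch below; all names are hypothetical, sizes are in bytes, and the handling of the Limit of High Priority is simplified.

    from collections import deque
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VLEntry:
        vl: int      # VL number (405B / 405E)
        weight: int  # weightage value in byte units (405C / 405F)

    def select_packet(high, low, high_sent, limit_of_high_priority,
                      tag_fifos, remote_free) -> Optional[int]:
        """Return the size of the selected packet, or None if the port stalls.

        high/low: lists of VLEntry in table order; tag_fifos maps VL -> deque of
        waiting packet sizes; remote_free maps VL -> free remote buffer bytes.
        """
        components = [high, low]
        if high_sent >= limit_of_high_priority:
            components = [low, high]   # give a waiting low-priority packet a turn
        for entries in components:
            for entry in entries:      # table order fixes the VL scheduling order
                fifo = tag_fifos.get(entry.vl)
                if not fifo:
                    continue           # no candidate waiting on this VL
                size = fifo[0]
                if size > remote_free.get(entry.vl, 0):
                    continue           # would overrun the remote buffer; try the next VL/WT pair
                if size > entry.weight:
                    continue           # this entry's byte allowance is exhausted
                entry.weight -= size   # charge the weighted round robin allowance
                remote_free[entry.vl] -= size
                return fifo.popleft()  # candidate selected for transmission
        return None                    # nothing fits anywhere: the port stalls

    # Hypothetical usage: VL 0 has a 128-byte packet and 512 bytes of remote credit.
    hp = [VLEntry(vl=0, weight=4096)]
    assert select_packet(hp, [], 0, 8, {0: deque([128])}, {0: 512}) == 128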



FIG. 5 shows another top-level block diagram of switch 112 with arbitration unit 402. Receive port 306 receives packets from a device (not shown), and a packet request descriptor 501, along with the packet size descriptor 502, is sent to arbitration module 402. Remote buffer space 503 information is also provided to arbitration module 402 in real time.


Arbitration module 402 uses remote buffer space 503 information with packet size data 502 to generate a packet request descriptor 504 (sent to a receive port, e.g. 306, to fetch a packet) such that remote buffer space 503 is not overrun or underutilized. Arbitration module 402 uses this information with the priority table 405 and the VL mapping module 404 information that is stored in FIFO 406, as described above.



FIG. 6 shows a flow diagram for transmitting packets based on remote buffer space availability.


In step S600, arbitration module 402 receives packet size and descriptor information. This also provides arbitration module 402 with the location of the remote buffer.


In step S601, arbitration module 402 obtains information (503) regarding the remote buffer space (via link layer or layer 1 VL flow control packets (not shown)). Such information will include the amount of buffer space available at any given time. This information allows arbitration module 402 to send packets that can be held in the remote buffer space.
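
A rough sketch of step S601 is given below: the arbitration module keeps a per-VL view of free remote buffer space that is refreshed by link-level flow-control updates. The class and method names are hypothetical, and credits are simplified to plain byte counts (InfiniBand link flow control actually advertises per-VL credits in 64-byte blocks).

    class RemoteBufferTracker:
        """Per-VL view of free remote buffer space (503), updated in real time."""

        def __init__(self) -> None:
            self.free_bytes = {}   # VL -> bytes currently free at the remote buffer

        def on_flow_control_update(self, vl: int, advertised_free: int) -> None:
            # A link-layer VL flow-control packet reports how much space the
            # remote buffer currently has available for this VL (step S601).
            self.free_bytes[vl] = advertised_free

        def can_hold(self, vl: int, packet_size: int) -> bool:
            # Step S602: the packet is sent only if it fits (step S603);
            # otherwise another candidate is selected (step S604).
            return packet_size <= self.free_bytes.get(vl, 0)

        def on_transmit(self, vl: int, packet_size: int) -> None:
            # Conservatively deduct the packet until the next update arrives.
            self.free_bytes[vl] = max(0, self.free_bytes.get(vl, 0) - packet_size)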


In step S602, arbitration module 402 determines if the remote buffer can hold the packets based on packet size and available buffer space at a given time.


In step S603, arbitration module 402 sends the packet if it can fit in the available remote buffer space.


If the remote buffer cannot hold the packet, then in step S604, arbitration module 402 selects another packet for transmission. Steps S600 to S603 are repeated until a selected packet fits in the remote buffer.


In one aspect of the present invention, packets that are from the same source port and received in consecutive order, have the same source VL and have the same destination VL (“bundling conditions”) are bundled together. Arbitration module 402 sends these packets sequentially. While sending a packet, arbitration module 402 searches for packets that meet the bundling conditions. An incoming packet that meets the bundling conditions is moved to the head of the transmit queue. This allows switch 112 to efficiently transmit packets when the bundling conditions are met.



FIG. 7 shows a process flow diagram for “packet bundling” such that packets from the same source and destination are sent in order.


In step S700, based on the packet tag information (or descriptor 501) arbitration module 402 determines if the packet is from the same source as a previous packet that has been transmitted.


If the packet is from the same source, then in step S701, arbitration module 402 determines if the packet received is in a consecutive order with respect to the previous packet received from the same source. If the packet is in consecutive order, then in step S703, arbitration module 402 determines if the packet has the same destination VL.


If the packet has the same destination VL, then it is sent after the previously transmitted packet, instead of waiting in a FIFO based system. Hence, the incoming packet is transmitted using a Last In-First Out (LIFO) system, instead of the first in first out (FIFO). This allows the packets to be efficiently transmitted.
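
For illustration only, the bundling decision of steps S700 through S703 can be sketched as a predicate that promotes a qualifying tag to the head of the transmit queue; the names and the seq field (used to approximate "consecutive order") are hypothetical.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class TagInfo:
        source_port: int
        source_vl: int
        dest_vl: int
        seq: int   # arrival order from the source port

    def should_bundle(prev_sent: TagInfo, candidate: TagInfo) -> bool:
        """Same source port, consecutive arrival, same source VL and destination VL."""
        return (candidate.source_port == prev_sent.source_port   # step S700
                and candidate.seq == prev_sent.seq + 1           # step S701
                and candidate.source_vl == prev_sent.source_vl   # bundling condition
                and candidate.dest_vl == prev_sent.dest_vl)      # step S703

    def enqueue(queue: deque, prev_sent: TagInfo, tag: TagInfo) -> None:
        # A qualifying packet jumps ahead of earlier arrivals instead of
        # waiting at the FIFO tail (the LIFO-style promotion described above).
        if should_bundle(prev_sent, tag):
            queue.appendleft(tag)
        else:
            queue.append(tag)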


In one aspect of the present invention, while a first packet is currently being transmitted, the arbitration unit determines whether to bundle the second packet with the first. This “on-the-fly” packet bundling reduces latency.


In one aspect of the present invention, remote buffer space is efficiently used and is not overrun because only those packets that can fit the remote buffer space are transmitted. This also prevents port stalling and packet dropping.


In another aspect of the present invention, network packets are efficiently transmitted because they are bundled together based on certain conditions (e.g. same source, same source virtual lane and same destination virtual lane). This prevents packets to the same destination from waiting in the network queue and reduces latency.


Although the present invention has been described with reference to specific embodiments, these embodiments are illustrative only and not limiting. Many other applications and embodiments of the present invention will be apparent in light of this disclosure and the following claims.

Claims
  • 1. A method for transmitting a plurality of network packets, comprising: (a) receiving network packet information for transmitting a network packet from one of the plurality of network packets that are waiting to be transmitted at a given time; wherein an arbitration module of a network switch receives the network packet information that includes information regarding network packet size; (b) obtaining information regarding a remote buffer at a destination port where the network packet is to be sent; wherein the arbitration module obtains the information regarding the remote buffer; (c) determining if available space in the remote buffer is sufficient to hold the network packet; and (d) transmitting the network packet via a virtual lane to the destination port, if the network packet can be stored in available space in the remote buffer.
  • 2. The method of claim 1, further comprising: selecting another network packet from the plurality of network packets and repeating steps (a) to (d), if available space in the remote buffer is not enough to hold the network packet.
  • 3. The method of claim 1, wherein the network switch is an Infiniband switch.
  • 4. The method of claim 1, wherein a network packet tag includes information regarding network packet size and information regarding a receive port where the network packet is waiting to be fetched for transmission and the arbitration module uses the network tag information and generates a packet request to transmit the network packet.
  • 5. The method of claim 1, wherein the arbitration module selects a network packet from among the plurality of network packets based on a priority table that stores information to categorize a network packet as a high priority network packet or a low priority packet; and the priority table stores a value for limiting transmission of high priority network packets before transmitting a low priority network packet that is waiting for transmission at any given time.
  • 6. The method of claim 5, wherein a low priority network packet is transmitted if there is no high priority network packet waiting for transmission; or if a period to transmit a high priority network packet has exceeded the value in the priority table to limit transmission of high priority network packets.
  • 7. The method of claim 5, wherein the arbitration module transmits a second network packet after a first network packet even if other network packets that were received before the second network packet are waiting for transmission, if the second network packet is received from a same source as the first network packet; the second network packet is destined for a same destination as the first network packet and the second network packet is received in a consecutive order after the first network packet.
  • 8. A system for transmitting a plurality of network packets, comprising: a receive port of a network port in a network switch that receives a network packet and generates a packet descriptor and packet size information; a transmit port of the network port of the network switch that receives a packet tag with the packet descriptor and packet size information; and an arbitration module that selects a network packet from among the plurality of network packets that are waiting to be transmitted at a given time, obtains information regarding available space in a remote buffer and compares the packet size information with the available space in the remote buffer and transmits the network packet via a virtual lane if the available space in the remote buffer is large enough to hold the network packet.
  • 9. The system of claim 8, wherein the network port stores an arbitration priority table that stores information to categorize a network packet as a high priority network packet or a low priority packet; and the priority table stores a value for limiting transmission of high priority network packets before transmitting a low priority network packet that is waiting for transmission at any given time via a virtual lane.
  • 10. The system of claim 9, wherein a low priority network packet is transmitted via a virtual lane if there is no high priority network packet waiting for transmission; or if a high priority transmit period has exceeded the value in the priority table that limits transmission of high priority network packets.
  • 11. The system of claim 3, wherein the network switch is an Infiniband switch.
  • 12. The system of claim 8, wherein the arbitration module transmits a second network packet after a first network packet even if other network packets that were received before the second network packets are waiting for transmission, if the second network packet is received from a same source as the first network packet; the second network packet is destined for a same destination as the first network packet and the second network packet is received in a consecutive order after the first network packet.
  • 13. A system for network transmission, comprising: a first network switch communicating with a second network switch; wherein the first network switch comprises a receive port of a network port in the first network switch that receives a network packet and generates a packet descriptor and packet size information; a transmit port of the network port of the first network switch that receives a packet tag with the packet descriptor and packet size information; and an arbitration module for the network port of the first network switch that determines if a second network packet is from a same source port having a same source virtual lane and destination virtual lane and was received in a consecutive order, and transmits the second network packet after the first network packet, even if other packets were received before the second network packet and are waiting to be sent, wherein the network port for the first network switch stores an arbitration priority table that stores information to categorize a network packet as a high priority network packet or a low priority packet; and the priority table stores a value for limiting transmission of high priority network packets before transmitting a low priority network packet that is waiting for transmission at any given time.
  • 14. The system of claim 13, wherein a low priority network packet is transmitted if there is no high priority network packet waiting for transmission; or if a high priority transmit period has exceeded the value in the priority table that limits transmission of high priority network packets.
  • 15. The system of claim 13, wherein the network switch is an Infiniband switch.