Protocol offload transmit traffic management

Information

  • Patent Grant
  • Patent Number
    8,339,952
  • Date Filed
    Tuesday, March 6, 2012
  • Date Issued
    Tuesday, December 25, 2012
Abstract
Transfer of data is facilitated between at least one application and a peer via a network. Data destined for the peer is provided from the at least one application for transmission to the peer via the network. Modulation event tokens are managed, and protocol processing of the data with the peer is based in part on a result of the modulation event tokens managing such that protocol processed data is caused to be transmitted to the peer via the network nominally with desired data transmission rate characteristics. A result of the protocol processing step is fed back to the modulation event tokens managing. The desired data transmission rate characteristics may include, for example, shaping and pacing.
Description
TECHNICAL FIELD

The present invention is in the field of protocol offload processing and, in particular, relates to transmit traffic management in correspondence with protocol offload processing.


BACKGROUND

Protocol offload processing is known. For example, an interface device may be provided to operate in coordination with a host, for protocol offload processing with a peer device across a network. For example, the protocol offload processing may be according to a Transmission Control Protocol (TCP) whereas communication across the network may be via high-speed Ethernet, such as 10 Gbps Ethernet.


SUMMARY

Transfer of data is facilitated between at least one application and a peer via a network. Data destined for the peer is provided from the at least one application for transmission to the peer via the network. Modulation event tokens are managed, and protocol processing of the data with the peer is based in part on a result of the modulation event tokens managing such that protocol processed data is caused to be transmitted to the peer via the network nominally with desired data transmission rate characteristics. A result of the protocol processing step is fed back to the modulation event tokens managing. The desired data transmission rate characteristics may include, for example, shaping and pacing.





BRIEF DESCRIPTION OF FIGURES


FIG. 1-1 broadly illustrates modulating data transmission in protocol offload processing.



FIG. 1 illustrates an architecture of a flow processor to handle protocol offload processing and including data transmission modulation (traffic management) capability.



FIG. 2 illustrates a more detailed example of the traffic management portion of the FIG. 1 architecture.





DETAILED DESCRIPTION

It is desirable that the protocol offload processing be capable of modulating the transmission of data across the network to have particular desired data rate characteristics. As a result, for example, the data transmission can be modulated based on characteristics of the network itself, such as round trip time (RTT). As another example, data transmission may also be modulated based on a desired peak transmission rate to, for example, operate according to defined quality of service transmission characteristics for particular customers, smooth out (i.e., not propagate) jitter from a data source, and/or attempt to match the receive capabilities of receiving peer devices.



FIG. 1-1 broadly illustrates modulating data transmission from protocol offload processing. A data source 50 is a source of data to be transmitted. For example, the data source 50 may be a host computer. A protocol offload processing device 52 (such as a network interface controller, or NIC) handles transmission of data, according to the protocol (such as, for example, TCP), to a peer 54 over a network. A data transmission modulator 56 controls the protocol offload processing (traffic management) according to desired data transmission characteristics and based on feedback 58 (e.g., modulation event tokens) from the protocol offload processing device 52 to the data transmission modulator 56.


Broadly speaking, the traffic management controls the delivery of data across the network to nominally have desired characteristics, and a transmission traffic management capability may be provided for protocol offload processing accomplished using various architectures. Typically, the desired characteristics for data delivery are provided from a host computer. In some cases, processing more closely associated with the protocol processing determines the desired characteristics, typically based at least partly on characteristics of the network.


We now describe a specific example of protocol offload processing and modulating the transmission of data across the network. In the specific example, a flow processor architecture for protocol offload processing is employed, and a traffic management capability manages the operation of the flow processor (or, at least, portions of the flow processor) to control the flow of data communication via the network between the protocol offload processing and peer devices. While the processor architecture in the described example is a flow processor architecture, other architectures (perhaps not even processors) may be employed.


Turning now to FIG. 1, the flow processor architecture of the interface device 100, having transmission traffic management capability, is described. An arbiter 102 arbitrates among various signals such as headers of control messages from a host (104a), data packets from the physical wire of the network (104b) and transmission modulation event tokens (104c). Before proceeding to describe the remainder of the FIG. 1 flow processor architecture, it is noted by way of introduction that the transmission modulation event tokens 104c, provided to the arbiter 102 via a transmission event modulator 106, are employed to modulate the transmission of data across the network from the protocol offload interface device. It is noted that the arbiter 102 is a feature of the particular flow processor architecture of the FIG. 1 device and would typically have only an indirect effect on the transmission traffic management capability.


When the arbiter 102 operates to allow a transmission modulation event through (the source of the transmission modulation event tokens, including the transmission event modulator 106, is discussed in detail later), the protocol header in the transmission modulation event is provided to a protocol processing block 107.


The protocol processing block includes a lookup block 108. A protocol header (including, for the TCP protocol, a 4-tuple) uniquely identifies a connection according to the protocol. The lookup block 108 operates to match the protocol header to an internal identification (“tid,” used by the interface device and the host) corresponding to particular protocol connection states. In the FIG. 1 example, the lookup block 108 provides a TCP 4-tuple, which uniquely identifies the TCP connection, to a TCAM 110, and the TCAM 110 returns the tid for the unique TCP connection.
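For illustration only, the lookup step can be sketched in software as a map from the protocol 4-tuple to the tid. In the minimal C sketch below, the names (conn_tuple, conn_entry, lookup_tid) and the table size are hypothetical, and a simple table scan stands in for the TCAM match; it is not a description of the device's actual interfaces.

    #include <stdint.h>

    /* Illustrative stand-in for the TCP 4-tuple that the lookup block 108
     * presents to the TCAM 110; field names are assumptions. */
    struct conn_tuple {
        uint32_t local_ip;
        uint32_t peer_ip;
        uint16_t local_port;
        uint16_t peer_port;
    };

    struct conn_entry {
        struct conn_tuple tuple;
        int tid;      /* internal identifier used by the interface device and the host */
        int in_use;
    };

    #define MAX_CONNS 1024
    static struct conn_entry conn_table[MAX_CONNS];

    /* Software stand-in for the TCAM match: return the tid of the connection
     * whose 4-tuple matches exactly, or -1 if there is no matching connection. */
    int lookup_tid(const struct conn_tuple *t)
    {
        for (int i = 0; i < MAX_CONNS; i++) {
            const struct conn_entry *e = &conn_table[i];
            if (e->in_use &&
                e->tuple.local_ip == t->local_ip &&
                e->tuple.peer_ip == t->peer_ip &&
                e->tuple.local_port == t->local_port &&
                e->tuple.peer_port == t->peer_port)
                return e->tid;
        }
        return -1;
    }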


The lookup block 108 then provides the tid, received from the TCAM 110, to connection manager circuitry 112 that manages the connection state and attributes. In the FIG. 1 example, the connection state and attributes are in a TCP Control Block (TCB) 114. The connection manager 112 operates in concert with the payload command manager 116 to generate and provide payload commands to a payload manager block 118.


In particular, the connection manager provides the tid to the TCB 114, and the TCB 114 provides the current connection state and attributes for the connection (i.e., the connection to which the tid corresponds) to the connection manager 112. Based on the current connection state and attributes provided from the TCB 114, the connection manager 112 determines how to appropriately modify the connection state and provides, to the payload command manager 116, an indication of the modification to the connection state. Based on the indication of the modification, the payload command manager 116 issues one or more appropriate payload commands to the payload manager block 118. Furthermore, as appropriate based on the modified connection state and the availability of additional data to send for the connection, the payload command manager 116 provides transmission modulation event tokens to the transmission event modulator 106.
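The read-modify-write cycle just described can be illustrated with a hedged C sketch. The structures and the one-segment-per-event policy below are assumptions made for clarity (the names tcb_state, payload_cmd, mod_event_token, and handle_tx_event do not come from the patent); the sketch only shows the general flow: read the connection state, decide what may be sent, issue a payload command, update the state, and re-arm a modulation event token if more data remains.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative connection state (a stand-in for the TCB fields the
     * connection manager 112 reads and writes); names are assumptions. */
    struct tcb_state {
        uint32_t snd_nxt;       /* next sequence number to send */
        uint32_t snd_una;       /* oldest unacknowledged sequence number */
        uint32_t snd_wnd;       /* peer-advertised send window */
        uint32_t bytes_queued;  /* application data waiting for this connection */
        uint32_t mss;           /* maximum segment size */
    };

    /* Payload command handed to the payload manager block 118 (assumed shape). */
    struct payload_cmd {
        int      tid;
        uint32_t offset;
        uint32_t length;        /* zero if no payload command was issued */
    };

    /* Modulation event token fed back to the transmission event modulator 106. */
    struct mod_event_token {
        int      tid;
        uint32_t estimated_bytes;  /* estimate of data still to be sent */
    };

    /* One pass for a transmission modulation event on connection `tid`:
     * returns true if a new token should be enqueued with the modulator. */
    bool handle_tx_event(int tid, struct tcb_state *tcb,
                         struct payload_cmd *cmd,
                         struct mod_event_token *next_token)
    {
        cmd->tid = tid;
        cmd->length = 0;

        uint32_t window_left = tcb->snd_una + tcb->snd_wnd - tcb->snd_nxt;
        uint32_t sendable = tcb->bytes_queued < window_left
                                ? tcb->bytes_queued : window_left;
        if (sendable > tcb->mss)
            sendable = tcb->mss;              /* assume one segment per event */
        if (sendable == 0)
            return false;                     /* nothing to transmit right now */

        cmd->offset = tcb->snd_nxt;
        cmd->length = sendable;

        /* Modify the state; conceptually this is written back to the TCB
         * as part of the atomic read-modify-write described in the text. */
        tcb->snd_nxt += sendable;
        tcb->bytes_queued -= sendable;

        /* Re-arm: if more data remains for this connection, feed a token
         * (carrying an estimate of the remaining amount) back to the modulator. */
        if (tcb->bytes_queued > 0) {
            next_token->tid = tid;
            next_token->estimated_bytes = tcb->bytes_queued;
            return true;
        }
        return false;
    }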


The connection manager 112 writes the modified connection state and attributes back into the TCB 114. The read, modify and write of the connection state and attributes is done in an atomic operation.


The connection manager 112 provides an appropriate packet header for data transmission to a form packet block 120. Meanwhile, the payload manager block 118 provides the corresponding payload to the form packet block 120 (as discussed above, based on payload commands from the payload command manager 116). The form packet block 120 combines the packet header and corresponding payload into a packet for transmission across the network. A network protocol block 122 forms appropriate units of data for transmission across the network. In the FIG. 1 example, packet data is transmitted across the network in an Ethernet-encapsulated manner, so the network protocol block 122 issues Ethernet frames for transmission across the network to a peer device.
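A minimal C sketch of the combining step follows; the function form_frame and its parameters are hypothetical and omit details such as the frame check sequence and VLAN tags, but show how a prepared protocol header and its payload are concatenated behind an Ethernet header.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define ETH_HDR_LEN 14
    #define MAX_FRAME   1514   /* untagged Ethernet MTU plus header */

    /* Combine a prepared protocol header and its payload into a single
     * Ethernet-encapsulated frame.  Returns the frame length, or 0 if the
     * pieces would not fit. */
    size_t form_frame(uint8_t frame[MAX_FRAME],
                      const uint8_t dst_mac[6], const uint8_t src_mac[6],
                      uint16_t ethertype,
                      const uint8_t *pkt_hdr, size_t hdr_len,
                      const uint8_t *payload, size_t payload_len)
    {
        if (ETH_HDR_LEN + hdr_len + payload_len > MAX_FRAME)
            return 0;
        memcpy(frame, dst_mac, 6);
        memcpy(frame + 6, src_mac, 6);
        frame[12] = (uint8_t)(ethertype >> 8);     /* big-endian EtherType */
        frame[13] = (uint8_t)(ethertype & 0xff);
        memcpy(frame + ETH_HDR_LEN, pkt_hdr, hdr_len);
        memcpy(frame + ETH_HDR_LEN + hdr_len, payload, payload_len);
        return ETH_HDR_LEN + hdr_len + payload_len;
    }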


As discussed above, the transmission modulation event tokens originate in the payload command manager 116 and are provided to the transmission event modulator 106. In the example discussed above, a transmission modulation event is provided to the transmission event modulator 106 as the ultimate result of the arbiter 102 operating to allow a transmission modulation event through. As another example, a transmission modulation event may be provided to the transmission event modulator 106 as the ultimate result of the arbiter 102 operating to allow through information received off the wire. For example, the information received off the wire may be a header of an ingress Ethernet packet that comprises, for example, an acknowledgement from a peer device indicating that data sent to the peer (such as peer 54 in FIGS. 1-1) has been successfully received (e.g., according to the TCP protocol).


More generally, the information received off the wire is information to indicate that, but for the transmission traffic management capability, data may be transmitted to the peer device. In some examples, the transmission of particular data is not to be managed (i.e., is not to be deferred). For example, if “duplicate ACKs” are received, this could indicate that data was received out-of-order. It would then be inferred that the gap is due to a drop, and a “fast retransmit” of a segment would be performed. It may be preferable that transmission of this segment not be deferred. On the other hand, a received ACK that indicates successful receipt of data by the peer would generally not result in a modulation event token being provided to the transmission event modulator 106, since a modulation event token would typically already be outstanding if there was more data to be sent when the data corresponding to the received ACK was originally transmitted. Thus, in accordance with this aspect of shaping, an ACK does not cause transmission to be triggered.
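As a hedged illustration of this distinction, the C sketch below tracks duplicate ACKs and flags a fast-retransmit segment as one whose transmission should not be deferred. The ack_tracker structure, the function name, and the threshold of three duplicate ACKs are assumptions drawn from common TCP practice, not from the patent itself.

    #include <stdbool.h>
    #include <stdint.h>

    /* Minimal per-connection view of the ACK stream; fields and the
     * three-duplicate-ACK threshold follow common TCP practice and are
     * assumptions about, not a description of, the device's TCB. */
    struct ack_tracker {
        uint32_t last_ack;   /* highest cumulative ACK seen so far */
        int      dup_count;  /* consecutive duplicate ACKs for last_ack */
    };

    /* Returns true if the segment prompted by this ACK is a fast retransmit
     * and therefore should NOT be deferred by the traffic manager; a plain
     * ACK that advances the window returns false and, per the text, does
     * not itself trigger a transmission. */
    bool transmit_without_deferral(struct ack_tracker *t, uint32_t ack_seq)
    {
        if (ack_seq == t->last_ack) {
            t->dup_count++;
            return t->dup_count >= 3;   /* infer a drop: fast retransmit */
        }
        t->last_ack = ack_seq;
        t->dup_count = 0;
        return false;
    }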


We now discuss operation of a detailed example of the transmission event modulator 106, with specific reference to the FIG. 2 transmission event modulator 201 (an example of the transmission event modulator 106 in FIG. 1) and also with reference to FIG. 1. Before describing FIG. 2 in detail, however, we first discuss some general aspects of data transmission modulation. In general, the data transmission modulation discussed here relates to scheduling packet transmissions according to one or more desired data rate characteristics.


For example, “pacing” refers to the sender spacing out packet transmissions when the RTT (Round Trip Time) is large. Thus, for example, pacing can be used to minimize burstiness. For example, high-speed, long-delay links may require very large send and receive windows. A default sending pattern is typically bursty. That is, some packets may be closely spaced on the wire, which can result in overflows at intermediate switches and routers. This can be particularly problematic for satellite links, where the buffering resources in the satellites may be extremely limited, even though the RTT is large. With TCP pacing, in general, the transmissions of the packets in a window are distributed across the RTT.


In contrast to pacing, shaping limits the peak rate at which data is transmitted over the network for a particular connection or class of connections. This capability has potentially many uses. For example, shaping can be used to provide different quality of service for different customers, based on an amount the customers pay, for example.


Shaping can also be useful when data coming from a source is inherently jittery. For example, an application reading data from disk storage may provide data with jitter (e.g., there may be bursts in the data when a read head has been moved over the data to be read). As another example, when a server is connected to a very high speed link (e.g., 10 Gbps) serving clients connected to 10/100 Mbps or even 1 Gbps links, data may be sent from the server to the clients up to 1,000 times faster than the client links can handle. In such a case, congestion and packet loss can result.


Yet another example area where shaping can be useful is when a server is connected to a high performance networked striping file system, where the server generates a multi-gigabit I/O stream to be striped over multiple disks, and each disk can only handle a 1-3 Gbps stream depending on the type of the disk. If the data transmission rate exceeds the rate the disk can handle, packet loss will probably result.


Thus, in general, shaping can be used to limit the maximum data transmission rate to accommodate characteristics of the link (including endpoints) or to impose characteristics on the transmissions, even if not to accommodate characteristics of the link.


We now discuss FIG. 2 in some detail. Referring to FIG. 2, a transmission event modulator 201 (as discussed above, a specific example of the FIG. 1 transmission event modulator 106) includes a data structure 202 provided to hold transmission modulation event tokens sent by the payload command manager 116 to the transmission event modulator 201.


In the FIG. 2 example, the data structure 202 includes a heap 203 usable for providing a pacing function, whereas the FIFO's 204a through 204h (generally, 204) are usable for providing a shaping function. In general, then, modulation event tokens are stored into the appropriate portion of the data structure based on desired data transmission characteristics for the connection to which the modulation event token corresponds. For example, each FIFO 204 may correspond to a different Quality of Service promise.
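A software analogue of the data structure 202 might pair a time-ordered heap (for paced connections) with a set of FIFO queues, one per shaping class. The C sketch below is illustrative only: the capacities, the token fields, and the insertion rule (paced tokens to the heap, shaped tokens to their class's FIFO) are assumptions about one plausible realization.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SHAPING_FIFOS 8      /* mirrors FIFO's 204a through 204h in FIG. 2 */
    #define HEAP_CAP          256
    #define FIFO_CAP          64

    struct token {
        int      tid;
        uint32_t estimated_bytes;
        uint64_t graduation_time;    /* earliest time the token may leave */
    };

    /* Min-heap ordered on graduation_time, used for pacing (heap 203). */
    struct pacing_heap {
        struct token entries[HEAP_CAP];
        int          count;
    };

    /* Simple ring buffer, one per shaping class (FIFO's 204). */
    struct shaping_fifo {
        struct token entries[FIFO_CAP];
        int          head, count;
    };

    struct modulator_store {
        struct pacing_heap  heap;
        struct shaping_fifo fifos[NUM_SHAPING_FIFOS];
    };

    /* Insert a token for a paced connection: sift up on graduation_time. */
    bool heap_push(struct pacing_heap *h, struct token t)
    {
        if (h->count == HEAP_CAP)
            return false;
        int i = h->count++;
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (h->entries[parent].graduation_time <= t.graduation_time)
                break;
            h->entries[i] = h->entries[parent];
            i = parent;
        }
        h->entries[i] = t;
        return true;
    }

    /* Insert a token for a shaped connection into its class's FIFO. */
    bool fifo_push(struct shaping_fifo *f, struct token t)
    {
        if (f->count == FIFO_CAP)
            return false;
        f->entries[(f->head + f->count) % FIFO_CAP] = t;
        f->count++;
        return true;
    }

    /* Store a token according to the connection's configured policy:
     * paced connections go to the heap, shaped ones to their class FIFO. */
    bool store_token(struct modulator_store *s, struct token t,
                     bool paced, int shaping_class)
    {
        if (paced)
            return heap_push(&s->heap, t);
        return fifo_push(&s->fifos[shaping_class % NUM_SHAPING_FIFOS], t);
    }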


Heap timer 213, in association with the heap 203, accomplishes the pacing function. Timers 214a through 214h, in association with FIFO's 204a through 204h, respectively, accomplish the shaping function. Notwithstanding the accomplishment of the pacing function and the shaping function as just discussed, the selector 216 is configured to enforce arbitration (e.g., priority-based or round robin) among those data modulation event tokens that are ready to “graduate” from the transmission event modulator 201. The characteristics of the pacing and shaping are configurable using messages passed between the host and the interface device. We now discuss pacing and shaping in greater detail.


With regard to pacing, when the transmission protocol is TCP, one way to spread the packet transmissions over an RTT, when the send window (snd_wnd) is larger than 2 MSS (Maximum TCP Segment Size), is to configure the heap timer 213 to schedule the transmission of the next packet (i.e., schedule the release of the next transmission modulation event from the heap 203) according to:

    • RTT/(snd_wnd/MSS) [in TCP ticks]


In one example, the value of the heap timer 213 is maintained as eleven bits and, therefore, the maximum inter-packet spacing is 2047 TCP ticks. (In terms of core clock cycles, one TCP tick is 2 to the power of the timer resolution setting.) Thus, the timer tick may be chosen to allow a range typically sufficient for pacing, while keeping the maximum sampled RTT value (which, in this example, is limited to 2^16 times the timer tick) greater than the expected maximum network RTT (10 times that value, in this example).
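The following short C sketch applies the delay formula above, clamping the result to an assumed 11-bit timer; the function name and the example numbers are hypothetical and are only meant to show the arithmetic.

    #include <stdint.h>
    #include <stdio.h>

    /* An assumed 11-bit heap timer, as in the example in the text. */
    #define TIMER_MAX_TICKS 2047u

    /* Inter-packet delay, in TCP ticks, for pacing a connection:
     * RTT / (snd_wnd / MSS), clamped to the timer's range.  The inputs are
     * assumed to be in consistent units (RTT in TCP ticks, snd_wnd and mss
     * in bytes). */
    uint32_t pacing_delay_ticks(uint32_t rtt_ticks, uint32_t snd_wnd, uint32_t mss)
    {
        if (snd_wnd <= 2 * mss)
            return 0;                      /* pacing applied only above 2 MSS */
        uint32_t pkts_per_rtt = snd_wnd / mss;
        uint32_t delay = rtt_ticks / pkts_per_rtt;
        return delay > TIMER_MAX_TICKS ? TIMER_MAX_TICKS : delay;
    }

    int main(void)
    {
        /* Hypothetical numbers: a 100-tick RTT with a 20-segment window
         * yields one segment every 5 ticks; as the window opens further,
         * the delay shrinks toward zero and pacing effectively turns off. */
        printf("%u\n", pacing_delay_ticks(100, 20 * 1460, 1460));   /* prints 5 */
        printf("%u\n", pacing_delay_ticks(100, 200 * 1460, 1460));  /* prints 0 */
        return 0;
    }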


In one example, the heap timer 213 operates to use a delta time value to trigger a timer event. Each transmission modulation event includes an indication of the connection for which the heap timer 213 is triggered, e.g., the tid (for the TCP 4-tuple, when the connection is a TCP connection). As discussed above, the transmission modulation event is used to fetch the TCB state, which in turn dispatches a TCP packet, and schedules another transmission modulation event (an entry in the heap 203, for paced connections) if there is more data to send for the particular connection.


If the traffic management were absent, the protocol processing step would determine an amount of data to be subsequently transmitted. With traffic management, this determination becomes an estimate that is provided with the token back to the transmission event modulator. The modulation may occur based in part on this estimate. When transmission actually occurs, the actual amount of data transmitted may be different from the estimate, since the estimate was based on the state at the time it was determined, and that state may change before the transmission actually occurs.


One useful effect of pacing is to reduce “burstiness” during fast ramp up (e.g., in the slow start phase of TCP flow-control). As the receive window of the connection opens up, the computed inter-packet delay decreases and may eventually reach zero, and the pacing function is effectively disabled.


We now discuss shaping. In one example, there are two types of shaping FIFO's. One type of shaping FIFO provides control over the inter-packet delay for a group (class) of connections, while the second type provides control over the inter-packet delay within a single connection. In one example, all event tokens for a particular FIFO cause the same inter-packet delay (i.e., out of that FIFO), so only one inter-packet delay is supported by each FIFO.


The mapping of connections to FIFO's determines the shaping type (per-class or per-connection). The first type of shaping (per class) may be accomplished by having a single FIFO (modulation queue) being configured to hold modulation event tokens for connections in a single group. The second type of shaping (per connection) may be accomplished by having a single FIFO configured to hold modulation event tokens for a single connection. Each token has associated with it an indication of a time to graduate the token out of the FIFO. For example, the time indications may be a graduation time, such that the delay associated with each event in the FIFO (where each event in the FIFO corresponds to a different connection, each connection being in the same class) from the immediately previous event in the FIFO, is substantially the same. The overall effect is that data for each connection is transmitted at the same fixed rate, whereas the first type of shaping realizes a fixed rate on a per-class basis.
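One way to realize the fixed inter-packet delay just described is sketched below in C; shaping_config and assign_graduation_time are hypothetical names, and the rule (each token graduates no earlier than one configured delay after the previous token from the same FIFO) is an assumption consistent with, but not quoted from, the text.

    #include <stdint.h>

    /* Per-FIFO shaping configuration: every token leaving this FIFO is
     * spaced from the previous one by the same fixed delay.  Field names
     * are assumptions for illustration. */
    struct shaping_config {
        uint64_t inter_packet_delay;  /* in timer ticks */
        uint64_t last_graduation;     /* graduation time of the previous token */
    };

    /* Assign a graduation time to a newly enqueued token: never earlier
     * than "now", and at least one configured delay after the token that
     * was scheduled before it in the same FIFO. */
    uint64_t assign_graduation_time(struct shaping_config *cfg, uint64_t now)
    {
        uint64_t earliest = cfg->last_graduation + cfg->inter_packet_delay;
        uint64_t when = earliest > now ? earliest : now;
        cfg->last_graduation = when;
        return when;
    }

Whether this realizes per-class or per-connection shaping then follows solely from how connections are mapped to FIFOs, as described above.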


In some examples, triggering of a timer associated with the heap 203 or a FIFO 204 means only that a modulation event in the heap 203 or the FIFO 204 is ready to graduate out of the heap 203 or the FIFO 204 and into the arbiter 102 (FIG. 1), not that the modulation event actually does so graduate. That is, as mentioned above, and as discussed in greater detail below, in some examples, an arbiter/selector 216 is provided at the output of the heap 203 and the FIFO's 204 to arbitrate among those modulation event tokens that are ready to graduate. The arbiter 216 may be configured according to, for example, a priority scheme, round robin scheme, or other arbitration scheme.


For example, a weighted round robin scheme may be employed, where the weight for a modulation event varies according to how much data there is to send for the connection or group of connections corresponding to that modulation event.
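As a hedged sketch of such an arbitration policy, the C code below implements a deficit-style weighted round robin over the sources feeding the selector (for example, the pacing heap and the shaping FIFO's). The structure, the function wrr_select, and the use of a deficit counter are assumptions illustrating one common way to weight sources by the amount of data pending, not a description of the device's actual selector.

    #include <stdint.h>

    #define NUM_SOURCES 9   /* e.g., the pacing heap plus eight shaping FIFO's */

    /* Per-source arbitration state for a deficit-style weighted round robin. */
    struct rr_source {
        uint32_t weight;        /* e.g., proportional to data pending for the source */
        uint32_t deficit;       /* accumulated credit */
        uint32_t ready_bytes;   /* size of the head-of-line ready token, 0 if none */
    };

    /* Pick the next source to graduate a token from, or -1 if none qualifies.
     * Each visit to a ready source adds `weight` to its credit; a source is
     * chosen once its credit covers its head-of-line token. */
    int wrr_select(struct rr_source src[NUM_SOURCES], int *cursor)
    {
        for (int scanned = 0; scanned < 2 * NUM_SOURCES; scanned++) {
            int i = *cursor;
            *cursor = (*cursor + 1) % NUM_SOURCES;
            if (src[i].ready_bytes == 0)
                continue;
            src[i].deficit += src[i].weight;
            if (src[i].deficit >= src[i].ready_bytes) {
                src[i].deficit -= src[i].ready_bytes;
                return i;
            }
        }
        return -1;  /* nothing ready, or no source has earned enough credit yet */
    }

In this sketch, sources with larger weights (more data pending) accumulate credit faster and so graduate tokens more often, which matches the weighting idea described above.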


It is noted that the described traffic management scheme has applications broader than just offloaded protocol processing. Thus, for example, while FIG. 1-1 shows protocol offload block 52, the modulation block 56 may be implemented in conjunction with protocol processing more generally. For example, the protocol processing may be implemented as part of a host. Thus, for example, the host may be executing applications corresponding to connections maintained by a protocol processing stack (or other protocol processing implementation) on the host. As another example, protocol processing may be distributed among the host and an offload mechanism. As yet another example, only the traffic management may be offloaded, with the protocol processing occurring on the host.


Furthermore, while the TCP protocol has been provided as an example, the traffic management scheme may be employed with other protocols, whether existing now or in the future. Examples of such other protocols are UDP (e.g., with video on demand applications) or SCTP, which is currently being advanced as a replacement for the TCP protocol.


It should also be noted that the traffic management scheme does not require that the modulation events directly result in or from, or be part of protocol processing. For example, it is possible to space out the transmissions of already protocol processed packets to achieve a desired data rate by scheduling modulation events which are “graduated” in relation to the size of each packet to be sent.

Claims
  • 1. A method of operating a network interface device to facilitate a transfer of data between at least one application, operating on a host, and a peer, the transfer of data according to a particular communication protocol, the network interface device communicatively coupled to the host, the method comprising: receiving, from the application, data destined for transmission to the peer via the network; protocol processing with the peer, by a protocol mechanism operating according to the particular communication protocol and based at least in part on an indication of a desired transmission rate characteristic, to provide packets including the data received from the host such that the packets are caused to be provided from the network interface device to the network with the desired transmission rate characteristics; wherein the protocol processing with the peer is controlled by a transmission rate regulating mechanism operating based on an amount of data to be transmitted and on the desired transmission rate characteristics, to regulate a rate at which the protocol processing mechanism operates with respect to the data, to thus cause the packets to be output from the network interface device to the network with the desired transmission rate characteristic.
  • 2. The method of claim 1, wherein: the peer is a particular peer; and a protocol processing mechanism is configured to operate a plurality of connections between the host and a plurality of peers including the particular peer; the connections are grouped into a plurality of categories; and the protocol processing step includes protocol processing for each connection with at least one of the plurality of peers such that, for each connection, the protocol processing for that connection by the protocol processing mechanism is controlled by the transmission rate regulating mechanism based at least in part on a desired transmission rate characteristic for the category to which that connection belongs, to thus cause the packets of each connection to be output from the network interface device to the network with the desired transmission rate characteristic for the category to which that connection belongs.
  • 3. The method of claim 2, wherein: at least some of the indications of desired transmission rate characteristics indicate at least one of a group consisting of at least one desired shaping characteristic and at least one desired pacing characteristic.
  • 4. The method of claim 3, wherein: the transmission rate regulation mechanism operates, for each connection, based on an estimate of an amount of data to be transmitted for that connection and the desired transmission rate for the category to which that connection belongs, to regulate a rate at which the protocol processing mechanism operates with respect to data of that connection to thus cause the data for that connection to be output to the network with the desired transmission rate characteristic for the category to which that connection belongs.
  • 5. The method of claim 4, wherein: the transmission rate regulation mechanism further operates, for each connection, according to a priority associated with the category to which that connection belongs.
  • 6. The method of claim 5, wherein: the priorities associated with the categories are based at least in part on an amount of data to be transmitted for the separate categories.
  • 7. The method of claim 5 wherein: the priorities associated with the categories are based at least in part on the urgency of the data to be transmitted.
  • 8. The method of claim 5 wherein: the priorities associated with the categories are based at least in part on the customer transmitting the data.
  • 9. The method of claim 1, wherein: the protocol processing step includes processing information received from the network and, based in part thereon, controlling the protocol processing step.
  • 10. The method of claim 1, wherein: the protocol processing step includes processing information received from the network, and, the method further comprises determining whether transmission of particular data to the peer should not be deferred, based on a result of processing the information received from the network.
  • 11. The method of claim 10, wherein: the protocol processing step includes TCP, and determining whether transmission of particular data to the peer should not be deferred includes processing the information received from the network to determine if at least part of the data to be transmitted to the peer has previously been transmitted.
  • 12. The method of claim 11, wherein: if it is determined that at least part of the data to be transmitted to the peer has previously been transmitted, determining that the transmission of the particular data to the peer should not be deferred.
  • 13. A network interface device configured to facilitate a transfer of data between at least one application, operating on a host, and a peer, the transfer of data according to a particular communication protocol, the network interface device communicatively coupled to the host, the method comprising: means for receiving, from the application, data destined for transmission to the peer via the network; and means for protocol processing with the peer, operating according to the particular communication protocol and based at least in part on an indication of a desired transmission rate characteristic, by providing packets including the data received from the host such that the packets are caused to be provided from the network interface device to the network with the desired transmission rate characteristics; wherein the means for protocol processing with the peer is controlled by a transmission rate regulating mechanism operating based on an amount of data to be transmitted and on the desired transmission rate characteristics, to regulate a rate at which the protocol processing mechanism operates with respect to the data, to thus cause the packets to be output from the network interface device to the network with the desired transmission rate characteristic.
  • 14. A network interface controller configured to facilitate a transfer of data between at least one application and a peer via a network using a protocol processing mechanism, wherein data destined for the peer is provided from the at least one application, operating on a host, for transmission to the peer via the network, to a protocol processing mechanism, according to a particular transmission protocol, to cause the data to be provided from the network interface device to the network, the controller comprising: means for managing a data structure of tokens, each token including an estimate of an amount of data in the data to be transmitted, managing the data structure including retrieving tokens out of the data structure based on the included estimate of an amount of data in the data to be transmitted and based on desired data transmission rate characteristics; protocol processing means for protocol processing with the peer, for each token retrieved out of the data structure by the protocol processing mechanism according to the particular transmission protocol, to cause data packets including the data provided from the at least one application to be transmitted to the peer via the network, such that data packet transmission to the peer via the network is modulated to nominally have desired data transmission rate characteristics; and means for feeding back a result of the protocol processing step to cause a token to be stored into the data structure of tokens.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of pending U.S. patent application Ser. No. 12/752,719, filed Apr. 1, 2010, and entitled “PROTOCOL OFFLOAD TRANSMIT TRAFFIC MANAGEMENT,” which is a continuation of U.S. patent application Ser. No. 11/217,661 (now U.S. Pat. No. 7,724,658), filed Aug. 31, 2005, and entitled “PROTOCOL OFFLOAD TRANSMIT TRAFFIC MANAGEMENT,” both of which are incorporated herein by reference in their entirety for all purposes.

Continuations (2)
Number Date Country
Parent 12752719 Apr 2010 US
Child 13413196 US
Parent 11217661 Aug 2005 US
Child 12752719 US