Method and apparatus for aggregating input data streams

Information

  • Patent Grant
  • Patent Number
    8,493,988
  • Date Filed
    Monday, September 13, 2010
  • Date Issued
    Tuesday, July 23, 2013
Abstract
A method and apparatus aggregate a plurality of input data streams from first processors into one data stream for a second processor, the circuit and the first and second processors being provided on an electronic circuit substrate. The aggregation circuit includes (a) a plurality of ingress data ports, each ingress data port adapted to receive an input data stream from a corresponding first processor, each input data stream formed of ingress data packets, each ingress data packet including priority factors coded therein, (b) an aggregation module coupled to the ingress data ports, adapted to analyze and combine the plurality of input data streams into one aggregated data stream in response to the priority factors, (c) a memory coupled to the aggregation module, adapted to store analyzed data packets, and (d) an output data port coupled to the aggregation module, adapted to output the aggregated data stream to the second processor.
Description
FIELD OF THE INVENTION

The present invention relates to network interface devices. More particularly, the present invention relates to a method and apparatus for aggregating input data streams from first processors into one data stream for a second processor.


BACKGROUND OF THE INVENTION

Switched Ethernet technology has continued evolving beyond the initial 10 Mbps (megabits per second). Gigabit Ethernet technology complying with the Institute of Electrical and Electronics Engineers (IEEE) 1000BASE-T standard (IEEE 802.3-2002) meets demands for greater speed and bandwidth of increasing network traffic. Gigabit over Copper technologies provide high performance in the Enterprise local area network (LAN) and accelerate the adoption of Gigabit Ethernet in various areas, such as server farms, cluster computing, distributed computing, bandwidth-intensive applications, and the like. Gigabit over Copper technologies can be integrated into the motherboard of a computer system, and many server makers are offering integrated Gigabit over Copper ports, an arrangement also referred to as LAN on Motherboard.


Gigabit Ethernet works seamlessly with existing Ethernet and Fast Ethernet networks, as well as Ethernet adapters and switches. The 1 Gbps (i.e., 1000 Mbps) speed of Gigabit Ethernet is 10 times faster than Fast Ethernet (IEEE 100BASE-T) and 100 times faster than standard Ethernet (IEEE 10BASE-T). 10 Gigabit Ethernet (10 GbE) enables Gigabit to be migrated into an Enterprise LAN by providing the appropriate backbone connectivity. For example, 10 GbE delivers the bandwidth required to support access to Gigabit over Copper attached server farms.


Switch fabrics and packet processors in high-performance broadband switches, such as Gigabit Ethernet switches or line cards, typically run at a fraction of their rated or maximum capacity. That is, typical processing loads do not require the full capacity of the switch fabrics and packet processors. Thus, it would be desirable to provide a scheme to allow such switch fabrics or packet processors to “oversubscribe” data to achieve more efficient usage of the processing capacity, where oversubscription means that the capacity of the data feed is larger than the capacity of data processing or switching.


BRIEF DESCRIPTION OF THE INVENTION

A method and apparatus aggregate a plurality of input data streams from first processors into one data stream for a second processor, the circuit and the first and second processors being provided on an electronic circuit substrate. The aggregation circuit includes (a) a plurality of ingress data ports, each ingress data port adapted to receive an input data stream from a corresponding first processor, each input data stream formed of ingress data packets, each ingress data packet including priority factors coded therein, (b) an aggregation module coupled to the ingress data ports, adapted to analyze and combine the plurality of input data streams into one aggregated data stream in response to the priority factors, (c) a memory coupled to the aggregation module, adapted to store analyzed data packets, and (d) an output data port coupled to the aggregation module, adapted to output the aggregated data stream to the second processor.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more embodiments of the present invention and, together with the detailed description, serve to explain the principles and implementations of the invention.


In the drawings:



FIG. 1 is a block diagram schematically illustrating a circuit for aggregating a plurality of input data streams from first processors into one data stream for a second processor in accordance with one embodiment of the present invention.



FIG. 2 is a block diagram schematically illustrating an example of implementation of the aggregation module of the circuit in accordance with one embodiment of the present invention.



FIG. 3 is a block diagram schematically illustrating a circuit for aggregating an input data stream from a first processor into an aggregated data stream for a second processor in accordance with one embodiment of the present invention.



FIG. 4 is a block diagram schematically illustrating a circuit for aggregating a plurality of input data streams from first processors into one data stream for a second processor in accordance with one embodiment of the present invention.



FIG. 5 is a system block diagram schematically illustrating an example in which two data streams from the switching processors are aggregated into one data stream for a packet processing processor by an aggregation circuit in accordance with one embodiment of the present invention.



FIG. 6 is a process flow diagram schematically illustrating a method for aggregating a plurality of input data streams from first processors into one data stream for a second processor in accordance with one embodiment of the present invention.



FIG. 7 is a data flow diagram schematically illustrating the method of aggregating a plurality of data streams along the receive (Rx) data path in accordance with one embodiment of the present invention.



FIG. 8 is a data flow diagram schematically illustrating the method of aggregating a plurality of data streams along the transmit (Tx) data path in accordance with one embodiment of the present invention.



FIG. 9 is a process flow diagram schematically illustrating a method for aggregating a plurality of input data streams from first processors into one data stream for a second processor, in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

Embodiments of the present invention are described herein in the context of a method and apparatus for aggregating input data streams. Those of ordinary skill in the art will realize that the following detailed description of the present invention is illustrative only and is not intended to be in any way limiting. Other embodiments of the present invention will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the present invention as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following detailed description to refer to the same or like parts.


In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.


In accordance with one embodiment of the present invention, the components, process steps, and/or data structures may be implemented using various types of operating systems (OS), computing platforms, firmware, computer programs, computer languages, and/or general-purpose machines. The method can be implemented as a programmed process running on processing circuitry. The processing circuitry can take the form of numerous combinations of processors and operating systems, or a stand-alone device. The process can be implemented as instructions executed by such hardware, hardware alone, or any combination thereof. The software may be stored on a program storage device readable by a machine.


In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable logic devices (FPLDs), including field programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein.


In the context of the present invention, the term “network” includes local area networks (LANs), wide area networks (WANs), the Internet, cable television systems, telephone systems, wireless telecommunications systems, fiber optic networks, ATM networks, frame relay networks, satellite communications systems, and the like. Such networks are well known in the art and consequently are not further described here.



FIG. 1 schematically illustrates a circuit 10 for aggregating a plurality of input data streams from first processors 12 (12a, 12b) into one data stream for a second processor 14 in accordance with one embodiment of the present invention. The circuit 10, the first processors 12, and the second processor 14 are provided on an electronic circuit substrate. For example, such an electronic circuit substrate may be a circuit board for a line card, network interface device, and the like.


As shown in FIG. 1, the circuit 10 includes a plurality of ingress data ports 16 (16a, 16b), an aggregation module 18 coupled to the plurality of ingress data ports 16, a memory 20 coupled to the aggregation module 18, and an output data port 22 coupled to the aggregation module 18. The aggregation module 18 may be implemented by a field programmable logic device (FPLD), field programmable gate array (FPGA), or the like. Each of the ingress data ports 16 (16a or 16b) receives an input data stream 24 (24a or 24b) from a corresponding first processor 12 (12a or 12b). Each of the input data streams 24 (24a, 24b) is formed of ingress data packets. The aggregation module 18 is adapted to analyze and combine the plurality of input data streams 24 (24a, 24b) into one aggregated data stream 26 in response to priority factors of the ingress data packets. The memory 20 is adapted to store analyzed data packets. The memory 20 may be an external buffer memory. The aggregated data stream 26 is output from the output data port 22 to the second processor 14. Although FIG. 1 shows two first processors 12, the number of the first processors and the corresponding data streams is not limited to two.


Each of the ingress data packets includes, typically in its header, certain information such as an indication of the packet type (ordinary data packet, protocol packet, control or management packet, and the like), port information, virtual LAN (VLAN) address, and the like. In accordance with one embodiment of the present invention, the information indicating that the data packet is a certain protocol packet is used as a priority factor. In addition, port information and VLAN information may also be used as priority factors.
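

By way of a simplified software illustration, the extraction of such priority factors from an ingress packet header may be sketched as follows. The field offsets, the function name extract_priority_factors, and the particular protocols treated as "protocol packets" (ARP and LLDP here) are assumptions made for this example only and are not defined by the embodiments.

    import struct

    def extract_priority_factors(packet: bytes, ingress_port: int) -> dict:
        """Collect header fields of an Ethernet frame that may serve as priority factors."""
        factors = {"port": ingress_port, "vlan_priority": None, "is_protocol_packet": False}
        ethertype = struct.unpack_from("!H", packet, 12)[0]
        if ethertype == 0x8100:                           # IEEE 802.1Q VLAN tag present
            tci = struct.unpack_from("!H", packet, 14)[0]
            factors["vlan_priority"] = tci >> 13          # 3-bit priority code point
            ethertype = struct.unpack_from("!H", packet, 16)[0]
        # Treat a few well-known control protocols as "protocol packets" (assumed set)
        factors["is_protocol_packet"] = ethertype in (0x0806, 0x88CC)   # ARP, LLDP
        return factors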


In accordance with one embodiment of the present invention, each of the first processors 12 and the second processor 14 includes a logical interface providing logical interconnection between a Media Access Control sublayer (MAC) and a Physical layer (PHY), such as the 10 Gigabit Media Independent Interface (XGMII), through which data streams are received and transmitted. For example, the first processors 12 may be Layer-2 switching processors implementing Ethernet Media Access Controllers and supporting the GMII, and the second processor 14 may be a data packet processor processing the aggregated packet data stream in the GMII format. Typically, the first processors 12 receive a receive (Rx) signal as the input data stream from transceivers, and the data flow from the first processors 12 to the second processor 14 through the aggregation module 18 forms a receive data path in the system. On the other hand, the data flow from the second processor 14 to the first processors 12 typically forms a transmit (Tx) data path.


Accordingly, in accordance with one embodiment of the present invention, as shown in FIG. 1, the circuit 10 further includes an egress data input port 28 adapted to receive a data stream 30 from the second processor 14, a forwarding module 32, and a plurality of egress data output ports 34 (34a, 34b) for outputting output data streams 36 (36a, 36b) to the corresponding first processors 12. The data stream 30 from the second processor 14 is formed of egress data packets. The forwarding module 32 is coupled between the egress data input port 28 and the egress data output ports 34, and forwards an egress data packet in the data stream 30 to one of the egress data output ports 34 in response to destination information associated with the egress data packet. The forwarding module 32 may be implemented using a field programmable logic device (FPLD), field programmable gate array (FPGA), and the like.



FIG. 2 schematically illustrates an example of implementation of the aggregation module 18 of the circuit 10 in accordance with one embodiment of the present invention. The same or corresponding elements in FIGS. 1 and 2 are denoted by the same reference numerals. In this implementation, the ingress data ports 16 include a first data port 16a for receiving a first input data stream 24a and a second data port 16b for receiving a second input data stream 24b. As shown in FIG. 2, the aggregation module 18 includes a first packet analyzer 40a, a second packet analyzer 40b, a queue module 42, a memory interface 44, and an output module 46. It should be noted that the number of the ports and the data streams is not limited to two.


The first packet analyzer 40a is coupled to the first data port 16a, and adapted to classify each of the ingress data packets in the first data stream 24a into one of predetermined priority classes based on the priority factors of the ingress data packets. Similarly, the second packet analyzer 40b is coupled to the second data port 16b, and adapted to classify each of the ingress data packets in the second data stream 24b into one of predetermined priority classes based on the priority factors. As described above, each of the ingress data packets includes, typically in the header, certain information such as an indication of the packet type (ordinary data packet, protocol packet, control or management packet, and the like), port information, virtual LAN (VLAN) address, and the like, which can be used as priority factors. The priority class of each data packet is determined using one or more priority factors.


The queue module 42 includes a plurality of priority queues 48 and selection logic 50. Each of the priority queues 48 is provided for the corresponding priority class, and the selection logic 50 implements a queue scheme. For example, four (4) priority queues may be provided. The first and second packet analyzers 40a and 40b analyze and classify each of the ingress data packets into one of the priority classes based on the priority factors, and also generate a packet descriptor for each of the analyzed ingress data packets. The analyzed data packet is stored in the memory 20. The packet descriptor contains a reference to a memory location of its analyzed data packet. The packet descriptor is placed in a priority queue 48 corresponding to the priority class of the data packet. The selection logic 50 arbitrates and selects a packet descriptor from among the priority queues 48 in accordance with the queue scheme. Such a queue scheme includes strict fair queuing, weighted fair queuing, and the like.
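

The analyze/store/describe/enqueue sequence described above may be modeled in software as sketched below. The names PacketAnalyzer and PacketDescriptor, the four-class split, and the use of a plain byte array (with each analyzer writing into its own region) to stand in for the buffer memory are illustrative assumptions, not requirements of the embodiment.

    from collections import deque
    from dataclasses import dataclass

    NUM_CLASSES = 4                                    # four priority classes, 3 = highest

    @dataclass
    class PacketDescriptor:
        address: int                                   # memory location of the analyzed packet
        length: int
        priority_class: int

    class PacketAnalyzer:
        """Stores each analyzed packet in buffer memory and queues its descriptor."""
        def __init__(self, memory: bytearray, base: int, priority_queues):
            self.memory = memory                       # models the external buffer memory
            self.priority_queues = priority_queues     # one deque per priority class (shared)
            self.write_ptr = base                      # each analyzer writes its own region

        def process(self, packet: bytes, priority_class: int) -> PacketDescriptor:
            addr = self.write_ptr
            self.memory[addr:addr + len(packet)] = packet            # store analyzed packet
            self.write_ptr += len(packet)
            desc = PacketDescriptor(addr, len(packet), priority_class)
            self.priority_queues[priority_class].append(desc)        # queue the descriptor
            return desc

    queues = [deque() for _ in range(NUM_CLASSES)]     # shared by both packet analyzers
    memory = bytearray(1 << 20)
    analyzer_a = PacketAnalyzer(memory, base=0, priority_queues=queues)
    analyzer_b = PacketAnalyzer(memory, base=len(memory) // 2, priority_queues=queues)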


The memory interface 44 provides access to the external buffer memory 20, and may include a first write interface 52a, a second write interface 52b, and a common read interface 54. The first write interface 52a is coupled to the first packet analyzer 40a and adapted to write the analyzed data packets into the memory 20 at the memory location indicated by the corresponding packet descriptor. Similarly, the second write interface 52b is coupled to the second packet analyzer 40b, and adapted to write the analyzed data packets into the memory 20 at the memory location indicated by the corresponding packet descriptor. The common read interface 54 is coupled to the queue module 42 (the queue selection logic 50) and adapted to read a data packet from a memory location of the memory 20 indicated by the selected packet descriptor. The data packet read from the memory 20 is provided to the output module 46, which sends the data packets to the output data port 22 as the aggregated data stream. Providing separate write interfaces (and the corresponding write ports) and a common read interface (and the corresponding common read port) reduces the number of input/output (I/O) pins required by the circuit 10.


In the above-discussed embodiments, two or more input data streams from different processors are aggregated into one data stream. The present invention is also applicable when data from one processor (first processor) is oversubscribed by another (second processor), for example, when the first processor's uplink bandwidth (capacity) is greater than the second processor's data processing bandwidth (capacity). The circuit in accordance with the present invention can “bridge” the two processors and provide an aggregation scheme for the oversubscribed data.



FIG. 3 schematically illustrates a circuit 11 for aggregating an input data stream from a first processor 13 into an aggregated data stream for a second processor 15, in accordance with one embodiment of the present invention. The circuit 11, the first processor 13, and the second processor 15 are provided on an electronic circuit substrate. Similarly to the circuit 10 described above, the circuit 11 includes an ingress data port 17, an aggregation module 19, a memory 21, and an output data port 23. The ingress data port 17 receives the input data stream 25 from the first processor 13 via a first data link having a first bandwidth. Similarly to the input data stream in the circuit 10 above, the input data stream 25 is formed of ingress data packets, and each ingress data packet includes priority factors coded therein. The aggregation module 19 is coupled to the ingress data port 17. The aggregation module 19 analyzes and selectively recombines the ingress data packets in response to the priority factors so as to generate an aggregated data stream 27 for a second data link which has a second bandwidth smaller than the first bandwidth. The memory 21 is coupled to the aggregation module 19, and is adapted to store analyzed data packets. The output data port 23 is coupled to the aggregation module 19, and outputs the aggregated data stream 27 to the second processor 15.


The implementation of the circuit 11 can be done in a manner similar to that of the circuit 10 shown in FIGS. 1 and 2 or of the circuits described in the following embodiments. One packet analyzer may be provided for the ingress data port 17, instead of two or more packet analyzers provided for respective ingress data ports as in FIG. 1 or 2, so long as the packet analyzer can handle the first bandwidth of the input data stream. Alternatively, the input data stream 25 may be divided to be handled by two or more packet analyzers. In this embodiment, the aggregation module 19 selectively recombines the stored data packets using the packet descriptors in the priority queues according to the implemented queue scheme. The above-described aggregation scheme classifying and prioritizing ingress data packets, as well as that in the following embodiments, is equally applicable to the circuit 11. The resulting output data stream is output within the second bandwidth (capacity) of the second data link.



FIG. 4 schematically illustrates a circuit 100 for aggregating a plurality of input data streams from first processors into one data stream for a second processor in accordance with one embodiment of the present invention. The circuit 100, the first processors, and the second processor are provided on an electronic circuit substrate. For example, such an electronic circuit substrate may be a circuit board for a line card, network interface device, and the like.


Similarly to the circuit 10 in FIGS. 1 and 2, the circuit 100 includes a plurality of ingress data ports 116 (116a, 116b), an aggregation module 118 coupled to the plurality of ingress data ports 116, a memory 120 coupled to the aggregation module 118, and an output data port 122 coupled to the aggregation module 118. Each of the ingress data ports 116 receives an input data stream 124 (124a or 124b) from a corresponding first processor (not shown). Each of the input data streams 124 (124a, 124b) is formed of ingress data packets, and each of the ingress data packets includes priority factors coded therein. The aggregation module 118 is adapted to analyze and combine the plurality of input data streams 124 (124a, 124b) into one aggregated data stream 126 in response to the priority factors. The memory 120 is adapted to store analyzed data packets. The memory 120 may be an external buffer memory. The aggregated data stream 126 is output from the output data port 122 to the second processor (not shown). Although the number of the input data streams is not limited to two, the following description uses an example where two input data streams 124 are aggregated into one data stream 126.


As shown in FIG. 4, the ingress data ports 116 (116a, 116b), the aggregation module 118, the memory 120, and the output data port 122 are in the receive signal (Rx) path. The circuit 100 further includes, in the transmit (Tx) data path, an egress data input port 128 for receiving a data stream 130 from the second processor (not shown), a forwarding module 132, and egress data output ports 134 (134a, 134b) for outputting output data streams 136 (136a, 136b) to the corresponding first processors (not shown). The data stream 130 is formed of egress data packets. The forwarding module 132 is coupled between the egress data input port 128 and the egress data output ports 134, and adapted to forward an egress data packet in the data stream 130 to one of the egress data output ports 134 (134a or 134b) in response to destination information associated with the egress data packet. The aggregation module 118 and the forwarding module 132 may be implemented by a field programmable logic device (FPLD), field programmable gate array (FPGA), and the like.


As described above, each of the first processors and the second processor may include a logical interface providing logical interconnection between a Media Access Control sublayer (MAC) and a Physical layer (PHY), such as the 10 Gigabit Media Independent Interface (XGMII), through which data streams are received and transmitted. For example, the first processors may be Layer-2 switching processors implementing Ethernet Media Access Controllers and supporting GMII, and the second processor may be a data packet processor processing the aggregated packet data stream. Typically, the first processors receive a receive signal (Rx) as the input data stream from transceivers. For example, each of the first processors may be a 10 GbE switching processor that supports various features used for switching and forwarding operation of data packets as well as interface standards such as IEEE 1000BASE-T. Typically, such a 10 GbE switching processor has ten or more Gigabit ports and a 10 Gigabit uplink. For example, BCM5632 processors, available from Broadcom Corporation, Irvine, Calif., may be used as such switching processors. However, any other MAC/PHY devices supporting the required features can be used in the embodiment of the present invention. The second processor is typically a proprietary packet processor implementing specific packet processing processes and switching fabrics.


As shown in FIG. 4, the aggregation module 118 includes a first packet analyzer 140a, a second packet analyzer 140b, a queue module 142, a memory interface 144 including a first memory interface 144a and a second memory interface 144b, and an output module 146. The first packet analyzer 140a is coupled to the first data port 116a, the first memory interface 144a, and the queue module 142. Similarly, the second packet analyzer 140b is coupled to the second data port 116b, the second memory interface 144b, and the queue module 142. The first and second packet analyzers 140a and 140b analyze and classify each of the ingress data packets into one of the priority classes based on the priority factors contained in the ingress data packet. The first and second packet analyzers 140a and 140b also generate a packet descriptor for each of the analyzed ingress data packets. The analyzed data packets are stored in the memory 120.


As shown in FIG. 4, the external memory 120 may include a first memory unit (memory bank) 120a and a second memory unit (memory bank) 120b for the first input data stream 124a and the second input data stream 124b, respectively. In addition, the memory interface 144 may also include a first memory interface 144a for the first input data stream 124a and a second memory interface 144b for the second input data stream 124b. Each of the memory units may include a set of quad data rate (QDR) random access memories (RAMs) as shown in FIG. 4. It should be noted that write ports for the memory units 120a and 120b may be provided separately for the first and second input data streams 124a and 124b, and a read port may be common to both the first and second input data streams 124a and 124b.


The packet descriptor contains a reference to a memory location of its analyzed data packet in the memory 120. The packet descriptor is placed in the queue module 142. The queue module 142 includes a plurality of priority queues 148 and selection logic 150. Each of the priority queues 148 is provided for the corresponding priority class, and the packet descriptor is placed in the priority queue 148 corresponding to the priority class of its data packet. That is, packet descriptors of the ingress data packets for both of the first and second input data streams 124a and 124b are placed in the same priority queue 148 if they belong to the same priority class. The selection logic 150 implements a queue scheme, and arbitrates and selects a packet descriptor from among the priority queues 148 in accordance with the queue scheme. Such a queue scheme includes strict fair queuing, weighted fair queuing, and the like.
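

One possible software model of such selection logic is sketched below. The weight values and the round-robin interpretation of weighted fair queuing are assumptions made for illustration; the embodiment only names the queue schemes, and a hardware arbiter could realize them differently.

    from collections import deque
    from itertools import cycle

    class SelectionLogic:
        """Arbitrates over the shared priority queues; class 3 is assumed highest."""
        def __init__(self, queues, weights=(1, 2, 4, 8)):
            self.queues = queues                        # one deque of descriptors per class
            self.weights = weights                      # service slots per class (assumed)
            pattern = [cls for cls in range(len(queues)) for _ in range(weights[cls])]
            self.schedule = cycle(pattern)              # weighted round-robin visit order

        def select_strict(self):
            """Strict priority: always serve the highest non-empty class."""
            for cls in reversed(range(len(self.queues))):
                if self.queues[cls]:
                    return self.queues[cls].popleft()
            return None

        def select_weighted(self):
            """Weighted service: higher classes get more slots, lower classes are not starved."""
            for _ in range(sum(self.weights)):          # one full period visits every class
                cls = next(self.schedule)
                if self.queues[cls]:
                    return self.queues[cls].popleft()
            return None

    queues = [deque() for _ in range(4)]
    selector = SelectionLogic(queues)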


The memory interface 144 provides access to the external memory 120. When the analyzed data packets are to be written into the memory 120 (memory unit 120a or 120b), the first or second packet analyzer 140a or 140b uses the corresponding memory interface 144a or 144b. When the stored data packet specified by a selected packet descriptor is to be read from the referenced memory location in the memory 120, one of the first and second memory interfaces is used as the common read interface (the first memory interface 144a in this example). The data packet read from the memory 120 is provided to the output module 146, which sends the data packets to the output data port 122 as the aggregated data stream.


As shown in FIG. 4, the first packet analyzer 140a may include a first data decoder 150a coupled to the first ingress data port 116a. The first data decoder 150a is adapted to decode each ingress data packet to extract the priority factors therefrom. Similarly, the second packet analyzer 140b may include a second data decoder 150b coupled to the second ingress data port 116b. The second data decoder 150b is adapted to decode each ingress data packet to extract the priority factors therefrom. For example, these data decoders are XGMII decoders suitable to decode and extract various information (typically contained in the headers) from the ingress data packets complying with the specified interface format.


As described above, the priority factors include information indicating the type of the packets (ordinary data packet, protocol packet, control or management packet, and the like), destination port information, virtual LAN (VLAN) address, and the like. In accordance with one embodiment of the present invention, the information indicating that the data packet is a certain protocol packet is used for protocol-filtering to classify certain protocols. Data packets that meet the protocol filter criterion may be given the highest priority such that protocol packets are less likely to be dropped or discarded. The port information and/or VLAN information may also be used as priority factors.


In accordance with one embodiment of the present invention, the priority of a data packet is assigned using per-port priority, VLAN priority, and a protocol filter. For example, assume that the ingress data packets are to be classified into four priority classes. Each priority factor of an ingress data packet may be assigned a number such as 3, 2, 1, or 0, indicating the priority class, with the number 3 indicating the highest priority. For example, each port number may be mapped onto one of the priority numbers. If the ingress data packet has been formatted under another priority queue scheme, such an external priority number, for example, a predefined VLAN priority number, may also be mapped onto one of the (internal) priority numbers 3, 2, 1, and 0. If the ingress data packet is a protocol packet, the priority factor associated with the protocol filter may be assigned the number 3. Then, the priority numbers assigned to the respective factors of the data packet are “merged,” i.e., compared to each other, and the highest priority number is determined as the ultimate priority number for that data packet. The data packet is classified according to the ultimate priority number. For example, if the ingress data packet is a protocol packet, it is classified into the highest priority class even if its other priority factors receive lower priority numbers.
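

This merging rule can be traced with a short example. The per-port and VLAN mapping tables below are made-up values chosen for the sketch, not values specified by the embodiment.

    PORT_PRIORITY = {0: 1, 1: 0, 2: 2, 3: 0}            # ingress port -> internal priority
    VLAN_PRIORITY_MAP = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}   # external -> internal

    def merge_priority(port, vlan_priority=None, is_protocol_packet=False):
        candidates = [PORT_PRIORITY.get(port, 0)]        # per-port priority
        if vlan_priority is not None:
            candidates.append(VLAN_PRIORITY_MAP.get(vlan_priority, 0))     # mapped VLAN priority
        if is_protocol_packet:
            candidates.append(3)                          # protocol filter: highest priority
        return max(candidates)                            # the "merged" (ultimate) priority number

    # A protocol packet arriving on a low-priority port still lands in the highest class:
    assert merge_priority(port=1, vlan_priority=0, is_protocol_packet=True) == 3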


Referring back to FIG. 4, the aggregation module 118 may further include a first write buffer 152a coupled between the first data decoder 150a and the first memory interface 144a, and a second write buffer 152b coupled between the second data decoder 150b and the second memory interface 144b. These write buffers 152a and 152b are typically first-in first-out (FIFO) buffers and are adapted to store the analyzed data packets until they are written into the memory 120. In accordance with one embodiment of the present invention, the aggregation module 118 may further include a flow control module 154. The flow control module 154 monitors the first write buffer 152a and the second write buffer 152b, and asserts a flow control signal if an amount of data stored in the first write buffer 152a or the second write buffer 152b exceeds a threshold. The flow control module 154 may also monitor the priority queues 148 in the queue module 142, and assert a flow control signal if an amount of data stored in a priority queue 148 exceeds a threshold. The flow control signal may be sent via the second processor (packet processor) to a module that controls transmit signals, and actual flow control may be done through the transmit signal path. For example, a pause control packet for the first processors is inserted in the data stream 130 such that the uplink data flow (input data streams 124) from the first processors is paused.
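

A minimal software model of the threshold check performed by the flow control module is given below. The threshold value and the pause_callback hook (standing in for the insertion of a pause control packet in the transmit path) are assumptions for the example.

    from collections import deque

    class FlowControlModule:
        """Asserts flow control when any monitored buffer or queue crosses its threshold."""
        def __init__(self, monitored, threshold, pause_callback=None):
            self.monitored = monitored                  # write buffers and priority queues
            self.threshold = threshold                  # maximum entries before asserting
            self.pause_callback = pause_callback        # e.g. inject a pause control packet

        def poll(self) -> bool:
            if any(len(buf) > self.threshold for buf in self.monitored):
                if self.pause_callback:
                    self.pause_callback()               # request that the uplink flow be paused
                return True                             # flow control signal asserted
            return False

    write_buffer_a, write_buffer_b = deque(), deque()
    flow_control = FlowControlModule([write_buffer_a, write_buffer_b], threshold=1024)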


The output module 146 may include a read buffer 156 coupled to a common read interface of the memory interface 144, and a data encoder 158 coupled to the read buffer 156. The data encoder 158 encodes the data packets into an interface format corresponding to that used by the first and second processors. For example, the data packets are encoded into the XGMII format to form an output data stream sent from the output data port 122.


As shown in FIG. 4, in the transmit signal (Tx) path, the circuit 100 includes the forwarding module 132 between the egress data input port 128 and the egress data output ports 134a and 134b. In accordance with one embodiment of the present invention, the forwarding module 132 includes a data decoder 160, a buffer 162, first and second forwarding logic 164a and 164b, and first and second data encoders 166a and 166b. The forwarding logic 164a and 164b forwards an egress data packet of the data stream 130 to one of the data encoders 166a or 166b in response to destination information associated with the egress data packet.



FIG. 5 schematically illustrates an example of a system 200 in which two data streams from the switching processors 202 are aggregated into one data stream for a packet processing processor (XPP) 204 by an aggregation circuit 206 in accordance with one embodiment of the present invention. For example, the system 200 may be a 60 Gigabit over Copper (60 GoC) line card, and the switching processors 202 may be Broadcom's BCM5632s explained above. The aggregation circuit 206 may be one of the circuits 10, 11, or 100 as described in the embodiments above. As shown in FIG. 5, the system 200 includes three sets (stacks) of aggregation data pipelines 208 (208a, 208b, and 208c). In each of the data pipelines 208, the aggregation circuit 206 bridges two of the switching processors 202 to one packet processing processor 204. The data coupling between the switching processors 202 and the aggregation circuit 206, and that between the aggregation circuit 206 and the packet processor 204, are supported by the XGMII. Each of the switching processors 202 receives ten (10) Gigabit data streams from Gigabit Ethernet transceivers 210, for example, BCM5464 Quad-Port Gigabit Copper Transceivers, available from Broadcom Corporation, Irvine, Calif. The data aggregation of the oversubscribed input data is performed in the lower layers (PHY/MAC), prior to actual packet processing in higher layers.



FIG. 6 schematically illustrates a method for aggregating a plurality of input data streams from first processors into one data stream for a second processor in accordance with one embodiment of the present invention. The first processors and the second processor are provided on an electronic circuit substrate. The method may be performed by the circuits 10, 11, 100, or 206 described above.


An input data stream is received from each of the first processors (300). Each input data stream is formed of ingress data packets, and each ingress data packet includes priority factors coded therein, as described above. Each of the ingress data packets is analyzed and classified into one of predetermined priority classes based on the priority factors (302). The analyzed ingress data packet is stored in a memory (304), and a packet descriptor is generated for the analyzed ingress data packet (306). The packet descriptor contains a reference to a memory location of its analyzed data packet stored in the memory. The packet descriptor is placed in a priority queue corresponding to the priority class of the data packet (308). The packet descriptors from each data stream that are of the same priority class are placed in the same priority queue for that priority class. A packet descriptor is selected from among the priority queues by arbitrating the packet descriptors in the priority queues using selection logic implementing a queue scheme (310). A data packet corresponding to the selected packet descriptor is read from the memory (312), an aggregated data stream is generated by combining the data packets read from the memory, and the aggregated data stream is sent to the second processor (314).
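

Steps 300 through 314 can be traced with the following simplified sketch, which assumes that ingress packets arrive already tagged with a priority class and uses strict priority arbitration as one example of a queue scheme; all names in the sketch are illustrative and not part of the claimed method.

    from collections import deque

    def aggregate(streams, num_classes=4):
        """streams: list of iterables of (packet_bytes, priority_class) pairs."""
        memory = []                                              # stands in for the buffer memory
        queues = [deque() for _ in range(num_classes)]           # one priority queue per class
        aggregated = []
        for stream in streams:                                   # 300: receive each input stream
            for packet, priority_class in stream:                # 302: analyze and classify
                memory.append(packet)                            # 304: store the analyzed packet
                descriptor = (len(memory) - 1, priority_class)   # 306: descriptor -> memory location
                queues[priority_class].append(descriptor)        # 308: place in its priority queue
        while any(queues):                                       # 310: arbitrate (strict priority here)
            for cls in reversed(range(num_classes)):
                if queues[cls]:
                    index, _ = queues[cls].popleft()
                    aggregated.append(memory[index])             # 312: read the packet from memory
                    break
        return aggregated                                        # 314: the aggregated data stream

    # Example: two streams, each packet pre-tagged with a priority class (3 = highest).
    out = aggregate([[(b"a", 0), (b"b", 3)], [(b"c", 3), (b"d", 1)]])
    assert out == [b"b", b"c", b"d", b"a"]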



FIG. 7 schematically illustrates the method of aggregating a plurality of data streams along the receive (Rx) data path in accordance with one embodiment of the present invention. The input data streams (two data streams in this example) from switching processors (first processors) are received at the respective receive signal (Rx) front ends (320a and 320b), and a header of each ingress data packet is decoded to extract the priority factors. The data format may be that of the XGMII. Ingress data packets are buffered in the corresponding write buffers (322a and 322b) during the packet analysis until they are stored in the memory. The write buffers may be QDR FIFOs. The ingress data packets are evaluated and classified into different priority classes in accordance with the priority factors (324a and 324b). The packet descriptors and analyzed ingress data packets are sent to the write interfaces (326a and 326b). Each packet descriptor is placed into the priority queue 328 corresponding to the priority class of its ingress data packet. For example, four (4) priority queues are provided. The analyzed ingress data packets are stored in the corresponding buffer memories (330a and 330b). The buffer memories may be external QDR RAMs. The packet descriptors in the priority queues are arbitrated by queue selection logic (332), and the selected packet descriptor is sent to the read interface (334). Since the packet descriptor includes a reference to the memory location of its data packet, the corresponding data packet is read from the memory through the read interface. The read-out data packets are buffered in a read FIFO (336), and then encoded into the specific data format (338), for example, that of the XGMII. The encoded data packets are sent as an output data stream to the second processor (packet processor).


As shown in FIG. 7, write-buffering, analyzing and classifying, and storing the data packets, and generating packet descriptors are performed separately for each data stream (320a through 326a, and 330a; 320b through 326b, and 330b). However, the packet descriptors for both data streams are stored in the common priority queues and commonly arbitrated (328, 332). The stored data packets specified by the selected packet descriptors are also read out using the common read interface, and the data packets thereafter are processed in a single data channel (334 through 338). As described above, in analyzing and evaluating the ingress data packets, protocol-filtering, per-port priority, VLAN priority, and the like may be used as priority factors.



FIG. 8 schematically illustrates the method of aggregating a plurality of data streams along the transmit (Tx) data path in accordance with one embodiment of the present invention. A data stream formed of egress data packets from a packet processor (second processor) is received at a transmit signal (Tx) front end (340) and decoded to extract the destination information of the egress data packets. The decoding may include decoding a specific interface data format, such as the XGMII, into a single data rate (SDR) format. The decoded data packets are buffered in a FIFO (342), and dispatched to the destination port by forwarding logic (344). Since one data stream is divided into two output data streams for different switching processors, an Idle Packet is inserted between End of Packet (EOP) and Start of Packet (SOP) in each data stream, such that the data for the other destination is replaced with the idle data (346a and 346b). Each of the output data streams is encoded for an interface format such as the XGMII (348a and 348b).
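

The transmit-path dispatch can be modeled as below. The IDLE placeholder stands in for the idle pattern inserted between End of Packet and Start of Packet, and the dest_port_of helper is a hypothetical stand-in for the destination lookup; neither is defined by the embodiment.

    IDLE = None   # illustrative stand-in for the idle data inserted between EOP and SOP

    def forward(egress_packets, dest_port_of):
        """Dispatch each egress packet to port 0 or 1; the other port gets an idle slot."""
        out = {0: [], 1: []}
        for packet in egress_packets:
            dest = dest_port_of(packet)                # hypothetical destination lookup
            out[dest].append(packet)                   # forward to the selected egress port
            out[1 - dest].append(IDLE)                 # replace the other port's slot with idle
        return out

    # Example: route by the first byte (an assumed convention for this sketch only).
    streams = forward([b"\x00abc", b"\x01def"], dest_port_of=lambda p: p[0])
    assert streams[0] == [b"\x00abc", IDLE] and streams[1] == [IDLE, b"\x01def"]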



FIG. 9 schematically illustrates a method for aggregating a plurality of input data streams from first processors into one data stream for a second processor, in accordance with one embodiment of the present invention. The first processors and the second processor are provided on an electronic circuit substrate. A field programmable logic device (FPLD) coupled between the first processors and the second processor is provided (350). An ingress data interface is provided between each of the first processors and the FPLD (352). Each ingress data interface is adapted to couple an input data stream from a corresponding first processor to the FPLD. For example, the ingress data interface may be the XGMII supported by the first processor. Each input data stream is formed of ingress data packets, and each ingress data packet includes priority factors coded therein, as described above. An output data interface is also provided between the FPLD and the second processor (354), which is adapted to couple the aggregated data stream to the second processor. For example, the output data interface may be an XGMII supported by the second processor. A memory coupled to the FPLD is also provided (356), which is adapted to store analyzed data packets. The FPLD is programmed such that the FPLD analyzes and combines the plurality of input data streams into one aggregated data stream in response to the priority factors (360). The programmed FPLD performs the aggregation function for the Rx data stream as described above in detail with respect to other embodiments. The FPLD may also be programmed such that it also performs forwarding functions for the Tx data stream as described above, by providing an input data interface for receiving the Tx data from the second processor, and output interfaces for outputting output data streams to the first processors.


The numbers of ports, processors, priority queues, memory banks, and the like are by way of example and are not intended to be exhaustive or limiting in any way. While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.

Claims
  • 1. A method for aggregating data packets received from a first processor for a second processor, said method comprising: receiving an input data stream from the first processor using a data link having a first bandwidth, the input data stream comprising ingress data packets, each ingress data packet comprising at least one priority factor coded therein; analyzing and classifying each of the ingress data packets into one of predetermined priority classes based on the at least one priority factor; storing an analyzed data packet in a memory; generating a packet descriptor for the analyzed ingress data packet, the packet descriptor containing a reference to a memory location of its analyzed data packet stored in the memory; placing the packet descriptor in a priority queue corresponding to the priority class of the data packet; arbitrating and selecting a packet descriptor from among the priority queues using selection logic implementing a queue scheme; reading a data packet corresponding to the selected packet descriptor from the memory; and sending the data packets read from the memory to the second processor using a second data link as an aggregated data stream, wherein the first bandwidth is greater than a second bandwidth of the second data link.
  • 2. The method of claim 1, wherein said analyzing comprises: decoding a header of each ingress data packet to extract the at least one priority factor.
  • 3. The method of claim 1, further comprising: buffering the analyzed data packet in a write buffer before storing in the memory.
  • 4. The method of claim 3, further comprising: asserting a flow control signal if an amount of data stored in the write buffer exceeds a threshold.
  • 5. The method of claim 1, further comprising: buffering the data packet read from the memory in a read buffer.
  • 6. The method of claim 1, further comprising: encoding the data packets into an interface format before sending to the second processor.
  • 7. The method of claim 1, further comprising: asserting a flow control signal if a length of a corresponding priority queue exceeds a threshold.
  • 8. The method of claim 1, wherein said memory is an external buffer memory.
  • 9. The method of claim 1, wherein the first processor and the second processor transmits and receives a data stream through a logical interface providing logical interconnection between a Media Access Control sublayer (MAC) and a Physical layer (PHY).
  • 10. The method of claim 1, wherein said arbitrating and selecting, said reading, and said sending are performed as a single data channel.
  • 11. The method of claim 1, wherein said analyzing and classifying comprises: protocol-filtering to determine if the ingress data packet is a certain protocol packet.
  • 12. The method of claim 11, wherein the at least one priority factor comprises: protocol filter priority; per-port priority; and virtual LAN priority.
  • 13. The method of claim 1, wherein the first processor is a Layer-2 switching processor.
  • 14. The method of claim 1, wherein the second processor is a data packet processor.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 10/810,208, filed Mar. 26, 2004, which is incorporated herein by reference in its entirety for all purposes.

20080095169 Chandra et al. Apr 2008 A1
20080117075 Seddigh et al. May 2008 A1
20080126652 Vembu et al. May 2008 A1
20080181103 Davies Jul 2008 A1
20080205407 Chang et al. Aug 2008 A1
20080307288 Ziesler et al. Dec 2008 A1
20090175178 Yoon et al. Jul 2009 A1
20090279423 Suresh et al. Nov 2009 A1
20090279440 Wong et al. Nov 2009 A1
20090279441 Wong et al. Nov 2009 A1
20090279541 Wong et al. Nov 2009 A1
20090279542 Wong et al. Nov 2009 A1
20090279546 Davis Nov 2009 A1
20090279549 Ramanathan et al. Nov 2009 A1
20090279558 Davis et al. Nov 2009 A1
20090279561 Chang et al. Nov 2009 A1
20090282148 Wong et al. Nov 2009 A1
20090282322 Wong et al. Nov 2009 A1
20090287952 Patel et al. Nov 2009 A1
20090290499 Patel et al. Nov 2009 A1
20100034215 Patel et al. Feb 2010 A1
20100046521 Wong Feb 2010 A1
20100061393 Wong Mar 2010 A1
20100100671 Singh Apr 2010 A1
20100135313 Davis Jun 2010 A1
20100161894 Singh Jun 2010 A1
20100246588 Davis Sep 2010 A1
20100293327 Lin et al. Nov 2010 A1
20110002340 Davis Jan 2011 A1
20110044340 Bansal et al. Feb 2011 A1
20110069711 Jha et al. Mar 2011 A1
20110110237 Wong et al. May 2011 A1
20120023309 Abraham et al. Jan 2012 A1
Foreign Referenced Citations (5)
Number Date Country
1380127 Jan 2004 EP
2003289359 Oct 2003 JP
2004-537871 Dec 2004 JP
WO 0184728 Nov 2001 WO
WO 0241544 May 2002 WO
Non-Patent Literature Citations (260)
Entry
Davis, et al., “Pipeline Method and System for Switching Packets,” U.S. Appl. No. 13/398,725, filed Feb. 16, 2012, 51 pages.
Hsu, et al., “Selection of Trunk Ports and Paths Using Rotation,” U.S. Appl. No. 13/407,397, filed Feb. 28, 2012, 30 pages.
Final Office Action for U.S. Appl. No. 11/831,950, mailed Feb. 28, 2012, 20 pages.
Notice of Allowance for U.S. Appl. No. 11/831,950, mailed May 16, 2012, 11 pages.
Notice of Allowance for U.S. Appl. No. 11/953,751, mailed Dec. 7, 2011, 12 pages.
Supplemental Notice of Allowance for U.S. Appl. No. 11/953,751, mailed Dec. 27, 2011, 6 pages.
Notice of Allowance for U.S. Appl. No. 11/668,322, mailed Feb. 10, 2012, 20 pages.
Notice of Allowance for U.S. Appl. No. 12/198,697, mailed Nov. 28, 2011, 12 pages.
Notice of Allowance for U.S. Appl. No. 12/198,697, mailed Jan. 5, 2012, 4 pages.
Non-Final Office Action for U.S. Appl. No. 13/083,481, mailed Dec. 1, 2011, 7 pages.
Notice of Allowance for U.S. Appl. No. 11/779,778, mailed on Jul. 28, 2011, 9 pages.
Final Office Action for U.S. Appl. No. 12/795,492, mailed on Jul. 20, 2011, 11 pages.
Requirement for Restriction/Election for U.S. Appl. No. 12/466,277, mailed on Aug. 9, 2011, 6 pages.
Non-Final Office Action for U.S. Appl. No. 11/646,845, mailed on Oct. 14, 2011, 19 pages.
Non-Final Office Action for U.S. Appl. No. 11/831,950, mailed Aug. 26, 2011, 45 pages.
Final Office Action for U.S. Appl. No. 11/953,742, mailed on Oct. 26, 2011, 19 pages.
Non-Final Office Action for U.S. Appl. No. 11/668,322, mailed on Aug. 30, 2011, 17 pages.
Non-Final Office Action for U.S. Appl. No. 11/745,008, mailed on Sep. 14, 2011, 26 pages.
Notice of Allowance for U.S. Appl. No. 12/795,492, mailed on Nov. 14, 2011, 10 pages.
Final Office Action for U.S. Appl. No. 12/198,710, mailed on Oct. 19, 2011, 58 pages.
Final Office Action for U.S. Appl. No. 12/070,893, mailed on Sep. 21, 2011, 12 pages.
Notice of Allowance for U.S. Appl. No. 12/466,277, mailed on Nov. 2, 2011, 47 pages.
10 Gigabit Ethernet—Technology Overview White Paper, Sep. 2001, 16 pages.
10 Gigabit Ethernet, Interconnection with Wide Area Networks, Version 1.0, Mar. 2002, 5 pages.
ANSI/IEEE Standard 802.1D, 1998, 373 pages.
Belhadj et al., “Feasibility of a 100GE MAC,” PowerPoint Presentation, IEEE Meeting, Nov. 13-15, 2006, 18 pages.
Braun et al., “Fast incremental CRC updates for IP over ATM networks,” IEEE Workshop on High Performance Switching and Routing, 2001, 6 pages.
Degermark, et al., “Small Forwarding Tables for Fast Routing Lookups,” ACM Computer Communications Review, Oct. 1997, pp. 3-14, vol. 27, No. 4.
Foundry Networks, “Biglron Architecture Technical Brief,” Oct. 1998, 15 pages, version 1.0.
Foundry Networks, “Biglron Architecture Technical Brief,” Oct. 1998, 15 pages, version 1.02.
Foundry Networks, “Biglron Architecture Technical Brief,” Dec. 1998, 14 pages, version 1.03.
Foundry Networks, “Biglron Architecture Technical Brief,” May 1999, 15 pages, version 2.0.
Foundry Networks, “Biglron Architecture Technical Brief,” May, 1999, 15 pages, Version 2.01.
Foundry Networks, “Biglron Architecture Technical Brief,” Jul. 2001, 16 pages, Version 2.02.
Foundry Networks, “Next Generation Terabit System Architecture—The High Performance Revolution for 10 Gigabit Networks,” Nov. 17, 2003, 27 pages.
Gigabit Ethernet Alliance—“Accelerating the Standard for Speed,” Copyright 1999, 19 pages.
Kichorowsky, et al., "Mindspeed™ Switch Fabric Offers the Most Comprehensive Solution for Multi-Protocol Networking Equipment," Apr. 30, 2001, 3 pages.
Matsumoto, et al., “Switch Fabrics Touted At Interconnects Conference,” Aug. 21, 2000, printed on Aug. 12, 2002, at URL: http://www.eetimes.com/story/OEG2000821S0011, 2 pages.
McAuley, et al., “Fast Routing Table Lookup Using CAMs,” Proceedings of INFOCOM, Mar.-Apr. 1993, pp. 1382-1391.
Foundry Networks, “JetCore™ Based Chassis Systems—An Architecture Brief on Netlron, Biglron, and Fastlron Systems,” Jan. 17, 2003, 27 pages.
Mier Communications, Inc., “Lab Testing Summary Report—Product Category: Layer-3 Switches, Vendor Tested:, Product Tested: Foundry Networks, Biglron 4000,” Report No. 231198, Oct. 1998, 6 pages.
Mier Communications, Inc.,“Lab Testing Summary Report—Product Category: Gigabit Backbone Switches, Vendor Tested: Foundry Networks, Product Tested: Biglron 4000,” Report No. 210998, Sep. 1998, 6 pages.
Mindspeed—A Conexant Business, “Switch Fabric Chipset—CX27300 iScale.TM.,” Apr. 30, 2001, 2 pages.
Mindspeed—A Conexant Business, “17×17 3.2 Gbps Crosspoint Switch with Input Equalization—M21110,” Feb. 1, 2001, 2 pages.
Newton, Newton's Telecom Dictionary, CMP Books, Mar. 2004, 20th Ed., p. 617.
Satran, et al., “Out of Order Incremental CRC Computation,” IEEE Transactions on Computers, Sep. 2005, vol. 54, Issue 9, 11 pages.
Spurgeon, “Ethernet, The Definitive Guide,” O'Reilly & Associates, Inc., Sebastapol, CA, Feb. 2000.
The Tolly Group, “Foundry Networks, Inc.—Biglron 4000, Layer 2 & Layer 3 Interoperability Evaluation,” No. 199133, Oct. 1999, 4 pages.
The Tolly Group, “Foundry Networks, Inc.—Biglron 8000 Gigabit Ethernet Switching Router, Layer 2 & Layer 3 Performance Evaluation,” No. 199111, May, 1999, 4 pages.
International Search Report for Application No. PCT/US2001/043113, mailed Dec. 13, 2002, 2 pages.
Written Opinion of the International Searching Authority for Application No. PCT/US2001/043113, mailed May 1, 2003, 6 pages.
International Preliminary Examination Report for Application No. PCT/US2001/043113, mailed Nov. 6, 2003, 6 pages.
International Search Report for Application No. PCT/US03/08719, mailed Jun. 17, 2003, 1 page.
U.S. Appl. No. 12/883,073, Flexible Method for Processing Data Packets in a Network Routing System for Enhanced Efficiency and Monitoring Capability, filed Sep. 15, 2010, Davis.
U.S. Appl. No. 12/684,022, Provisioning Single or Multistage Networks Using Ethernet Service Instances (ESIs), filed Jan. 7, 2010, Jha et al.
U.S. Appl. No. 12/417,913, Backplane Interface Adapter With Error Control and Redundant Fabric, filed Apr. 3, 2009, Patel et al.
U.S. Appl. No. 12/198,710, Techniques for Selecting Paths and/or Trunk Ports for Forwarding Traffic Flows, filed Aug. 26, 2008, Zhang et al.
U.S. Appl. No. 12/198,697, Selection of Trunk Ports and Paths Using Rotation, filed Aug. 26, 2008, Hsu et al.
U.S. Appl. No. 11/724,965, filed Mar. 15, 2007, Chang et al.
U.S. Appl. No. 11/586,991, Hitless Management Failover, filed Oct. 25, 2006, Ramanathan et al.
U.S. Appl. No. 10/832,086, System and Method for Optimizing Router Lookup, filed Apr. 26, 2004, Wong.
U.S. Appl. No. 10/141,223, Apparatus & Method of Processing Packets Under the Gigabit Ethernet Protocol in a Communications Network, filed May 7, 2002, Veerabadran et al.
U.S. Appl. No. 10/140,753, Integrated Adapter for a Network Routing System With Enhanced Efficiency and Monitoring Capability, filed May 6, 2002, Davis et al.
U.S. Appl. No. 10/140,751, Method and Apparatus for Efficiently Processing Data Packets in a Computer Network, filed May 6, 2002, Davis.
Non-Final Office Action for U.S. Appl. No. 09/855,024, mailed Jun. 4, 2002, 10 pages.
Final Office Action for U.S. Appl. No. 09/855,024, mailed Jan. 15, 2003, 20 pages.
Advisory Action for U.S. Appl. No. 09/855,024, mailed May 2, 2003.
Notice of Allowance for U.S. Appl. No. 09/855,024, mailed Nov. 3, 2003.
Notice of Allowance for U.S. Appl. No. 09/855,024, mailed Dec. 15, 2003, 6 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,301, mailed Mar. 17, 2005, 11 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,301, mailed Feb. 16, 2006, 12 pages.
Notice of Allowance for U.S. Appl. No. 10/810,301, mailed Jul. 28, 2006, 5 pages.
Notice of Allowance for U.S. Appl. No. 10/810,301, mailed Feb. 6, 2007, 9 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,025, mailed Nov. 23, 2004, 17 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,031, mailed May 22, 2002.
Non-Final Office Action for U.S. Appl. No. 09/855,031, mailed Dec. 10, 2002.
Final Office Action for U.S. Appl. No. 09/855,031, mailed Jul. 30, 2003.
Notice of Allowance for U.S. Appl. No. 09/855,031, mailed Nov. 4, 2003.
Non-Final Office Action for U.S. Appl. No. 10/736,680, mailed Feb. 16, 2006, 18 pages.
Final Office Action for U.S. Appl. No. 10/736,680, mailed Aug. 3, 2006, 10 pages.
Notice of Allowance for U.S. Appl. No. 10/736,680, mailed Feb. 22, 2007, 12 pages.
Non-Final Office Action for U.S. Appl. No. 10/210,041, mailed Sep. 10, 2003, 12 pages.
Final Office Action for U.S. Appl. No. 10/210,041, mailed Jan. 7, 2004, 14 pages.
Non-Final Office Action for U.S. Appl. No. 10/210,041, mailed Mar. 11, 2004, 12 pages.
Final Office Action for U.S. Appl. No. 10/210,041, mailed Jul. 7, 2004, 13 pages.
Non-Final Office Action for U.S. Appl. No. 10/210,041, mailed Feb. 9, 2005, 7 pages.
Final Office Action for U.S. Appl. No. 10/210,041, mailed Aug. 24, 2005, 7 pages.
Advisory Action for U.S. Appl. No. 10/210,041, mailed Dec. 13, 2005, 4 pages.
Non-Final Office Action for U.S. Appl. No. 10/210,108, mailed Jun. 12, 2003, 6 pages.
Notice of Allowance for U.S. Appl. No. 10/210,108, mailed Oct. 7, 2003.
Requirement for Restriction/Election for U.S. Appl. No. 10/438,545, mailed Oct. 31, 2003.
Non-Final Office Action for U.S. Appl. No. 10/438,545, mailed Dec. 12, 2003, 7 pages.
Notice of Allowance for U.S. Appl. No. 10/438,545, mailed Jun. 15, 2004, 4 pages.
Non-Final Office Action for U.S. Appl. No. 10/832,086, mailed Sep. 19, 2007, 12 pages.
Final Office Action for U.S. Appl. No. 10/832,086, mailed May 1, 2008, 31 pages.
Advisory Action for U.S. Appl. No. 10/832,086, mailed Jul. 21, 2008, 4 pages.
Non-Final Office Action for U.S. Appl. No. 10/832,086, mailed Sep. 18, 2008, 18 pages.
Non-Final Office Action for U.S. Appl. No. 10/832,086, mailed Apr. 1, 2009, 17 pages.
Final Office Action for U.S. Appl. No. 10/832,086, mailed Sep. 29, 2009, 26 pages.
Non-Final Office Action for U.S. Appl. No. 11/586,991, mailed Oct. 2, 2008, 23 pages.
Non-Final Office Action for U.S. Appl. No. 11/646,845, mailed Oct. 4, 2010, 48 pages.
Non-Final Office Action for U.S. Appl. No. 11/831,950, mailed Aug. 18, 2009, 49 pages.
Final Office Action for U.S. Appl. No. 11/831,950, mailed Jan. 6, 2010, 21 pages.
Advisory Action for U.S. Appl. No. 11/831,950, mailed Mar. 4, 2010, 4 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,742, mailed on Nov. 19, 2009, 51 pages.
Final Office Action for U.S. Appl. No. 11/953,742, mailed Jun. 14, 2010, 21 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,743, mailed Nov. 23, 2009, 47 pages.
Final Office Action for U.S. Appl. No. 11/953,743, mailed Jul. 15, 2010, 21 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,745, mailed Nov. 24, 2009, 48 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,745, mailed Jun. 14, 2010, 19 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,751, mailed Nov. 16, 2009, 55 pages.
Final Office Action for U.S. Appl. No. 11/953,751, mailed Jun. 25, 2010, 24 pages.
Non-Final Office Action for U.S. Appl. No. 11/779,778, mailed Feb. 2, 2011, 63 pages.
Non-Final Office Action for U.S. Appl. No. 11/779,714, mailed Sep. 1, 2009, 58 pages.
Non-Final Office Action for U.S. Appl. No. 11/779,714, mailed Mar. 31, 2010, 26 pages.
Final Office Action for U.S. Appl. No. 11/779,714, mailed Nov. 9, 2010, 24 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,208, mailed Jul. 16, 2007, 24 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,208, mailed Dec. 18, 2007, 40 pages.
Final Office Action for U.S. Appl. No. 10/810,208, mailed Jun. 11, 2008, 34 pages.
Advisory Action for U.S. Appl. No. 10/810,208, mailed Aug. 27, 2008, 4 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,208, mailed Feb. 13, 2009, 17 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,208, mailed Aug. 24, 2009, 38 pages.
Non-Final Office Action for U.S. Appl. No. 10/810,208, mailed Feb. 5, 2010, 13 pages.
Notice of Allowance for U.S. Appl. No. 10/810,208, mailed Jul. 15, 2010, 15 pages.
Requirement for Restriction/Election for U.S. Appl. No. 10/140,752, mailed May 18, 2006, 8 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,752, mailed Dec. 14, 2006, 17 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,752, mailed Apr. 23, 2007, 6 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,752, mailed Jan. 24, 2008, 8 pages.
Notice of Allowance of U.S. Appl. No. 10/140,752, mailed Jul. 24, 2008, 14 pages.
Notice of Allowance of U.S. Appl. No. 10/140,752, mailed Sep. 10, 2008, 4 pages.
Non-Final Office Action for U.S. Appl. No. 11/668,322, mailed Mar. 23, 2009, 19 pages.
Requirement for Restriction/Election for U.S. Appl. No. 11/668,322, mailed Oct. 29, 2009, 6 pages.
Final Office Action for U.S. Appl. No. 11/668,322, mailed Feb. 24, 2010, 33 pages.
Non-Final Office Action for U.S. Appl. No. 11/668,322, mailed Jun. 22, 2010, 16 pages.
Final Office Action for U.S. Appl. No. 11/668,322, mailed Feb. 1, 2011, 17 pages.
Non-Final Office Action for U.S. Appl. No. 11/854,486, mailed Jul. 20, 2009, 29 pages.
Non-Final Office Action for U.S. Appl. No. 11/854,486, mailed Jan. 12, 2010, 23 pages.
Notice of Allowance for U.S. Appl. No. 11/854,486, mailed Jul. 13, 2010, 12 pages.
Non-Final Office Action for U.S. Appl. No. 10/139,912, mailed Jan. 25, 2006, 14 pages.
Final Office Action for U.S. Appl. No. 10/139,912, mailed Aug. 11, 2006, 26 pages.
Non-Final Office Action for U.S. Appl. No. 10/139,912, mailed Apr. 20, 2007, 20 pages.
Final Office Action for U.S. Appl. No. 10/139,912, mailed Nov. 28, 2007, 20 pages.
Non-Final Office Action for U.S. Appl. No. 10/139,912, mailed Aug. 1, 2008, 21 pages.
Notice of Allowance for U.S. Appl. No. 10/139,912, mailed Feb. 5, 2009, 8 pages.
Notice of Allowance for U.S. Appl. No. 10/139,912, mailed Jun. 8, 2009, 8 pages.
Notice of Allowance for U.S. Appl. No. 10/139,912, mailed Oct. 19, 2009, 17 pages.
Supplemental Notice of Allowance for U.S. Appl. No. 10/139,912, mailed Nov. 23, 2009, 4 pages.
Requirement for Restriction/Election for U.S. Appl. No. 10/140,751, mailed Apr. 27, 2006, 5 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,751, mailed Aug. 10, 2006, 15 pages.
Final Office Action for U.S. Appl. No. 10/140,751, mailed Apr. 10, 2007, 16 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,751, mailed Oct. 30, 2007, 14 pages.
Final Office Action for U.S. Appl. No. 10/140,751, mailed May 28, 2008, 19 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,751, mailed Sep. 17, 2008, 15 pages.
Final Office Action for U.S. Appl. No. 10/140,751, mailed Mar. 17, 2009, 17 pages.
Advisory Action for U.S. Appl. No. 10/140,751, mailed Jun. 1, 2009, 3 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,751, mailed Sep. 28, 2009, 34 pages.
Final Office Action for U.S. Appl. No. 10/140,751, mailed Mar. 25, 2010, 29 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,751, mailed Dec. 20, 2010, 23 pages.
Non-Final Office Action for U.S. Appl. No. 11/745,008, mailed May 14, 2009, 27 pages.
Final Office Action for U.S. Appl. No. 11/745,008, mailed Dec. 30, 2009, 27 pages.
Advisory Action for U.S. Appl. No. 11/745,008, mailed Apr. 21, 2010, 8 pages.
Non-Final Office Action for U.S. Appl. No. 10/141,223, mailed Feb. 23, 2006, 25 pages.
Non-Final Office Action for U.S. Appl. No. 10/141,223, mailed Feb. 13, 2007, 29 pages.
Final Office Action for U.S. Appl. No. 10/141,223, mailed Aug. 21, 2007, 25 pages.
Non-Final Office Action for U.S. Appl. No. 10/141,223, mailed Dec. 28, 2007, 13 pages.
Non-Final Office Action for U.S. Appl. No. 10/141,223, mailed Sep. 3, 2008, 22 pages.
Non-Final Office Action for U.S. Appl. No. 10/139,831, mailed Oct. 17, 2005, 7 pages.
Notice of Allowance for U.S. Appl. No. 10/139,831, mailed Feb. 9, 2006, 7 pages.
Non-Final Office Action for U.S. Appl. No. 10/139,831, mailed Jun. 27, 2006, 9 pages.
Final Office Action for U.S. Appl. No. 10/139,831, mailed Nov. 28, 2006, 17 pages.
Notice of Allowance for U.S. Appl. No. 10/139,831, mailed Jun. 14, 2007, 26 pages.
Notice of Allowance for U.S. Appl. No. 10/139,831, mailed Jun. 26, 2007, 25 pages.
Non-Final Office Action for U.S. Appl. No. 11/828,246, mailed Jun. 15, 2009, 26 pages.
Notice of Allowance for U.S. Appl. No. 11/828,246, mailed Nov. 16, 2009, 20 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,088, mailed Apr. 27, 2006, 13 pages.
Notice of Allowance for U.S. Appl. No. 10/140,088, mailed Sep. 7, 2006, 13 pages.
Notice of Allowance for U.S. Appl. No. 10/140,088, mailed Oct. 24, 2006, 8 pages.
Notice of Allowance for U.S. Appl. No. 10/140,088, mailed Jan. 11, 2007, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/621,038, mailed Apr. 23, 2009, 44 pages.
Final Office Action for U.S. Appl. No. 11/621,038, mailed Dec. 23, 2009, 27 pages.
Notice of Allowance for U.S. Appl. No. 11/621,038, mailed Apr. 28, 2010, 15 pages.
Non-Final Office Action for U.S. Appl. No. 12/795,492, mailed Mar. 17, 2011, 51 pages.
Non-Final Office Action for U.S. Appl. No. 12/198,697, mailed Feb. 2, 2010, 50 pages.
Final Office Action for U.S. Appl. No. 12/198,697, mailed Aug. 2, 2010, 55 pages.
Non-Final Office Action for U.S. Appl. No. 12/198,697, mailed Oct. 25, 2010, 36 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,749, mailed Aug. 10, 2006, 22 pages.
Final Office Action for U.S. Appl. No. 10/140,749, mailed Jun. 27, 2007, 23 pages.
Final Office Action for U.S. Appl. No. 10/140,749, mailed Jan. 8, 2008, 23 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,749, mailed Jun. 6, 2008, 28 pages.
Final Office Action for U.S. Appl. No. 10/140,749, mailed Dec. 8, 2008, 30 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,749, mailed May 27, 2009, 38 pages.
Final Office Action for U.S. Appl. No. 10/140,749, mailed Jan. 13, 2010, 44 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,753, mailed Apr. 20, 2006, 11 pages.
Final Office Action for U.S. Appl. No. 10/140,753, mailed Jan. 10, 2007, 27 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,753, mailed Aug. 22, 2007, 14 pages.
Non-Final Office Action for U.S. Appl. No. 10/140,753, mailed Jan. 8, 2008, 14 pages.
Final Office Action for U.S. Appl. No. 10/140,753, mailed Aug. 25, 2008, 22 pages.
Non-Final Office Action for U.S. Appl. No. 12/198,710, mailed Sep. 28, 2010, 15 pages.
Requirement for Restriction/Election for U.S. Appl. No. 11/000,359, mailed Jun. 20, 2008, 7 pages.
Non-Final Office Action for U.S. Appl. No. 11/000,359, mailed Oct. 23, 2008, 28 pages.
Non-Final Office Action for U.S. Appl. No. 11/000,359, mailed May 29, 2009, 14 pages.
Notice of Allowance for U.S. Appl. No. 11/000,359, mailed Sep. 22, 2009, 17 pages.
Requirement for Restriction/Election for U.S. Appl. No. 11/118,697, mailed Jun. 2, 2009, 8 pages.
Notice of Allowance for U.S. Appl. No. 11/118,697, mailed Sep. 30, 2009, 41 pages.
Requirement for Restriction/Election for U.S. Appl. No. 12/639,749, mailed Dec. 7, 2010, 3 pages.
Notice of Allowance for U.S. Appl. No. 12/639,749, mailed Feb. 11, 2011, 51 pages.
Non-Final Office Action for U.S. Appl. No. 12/639,762, mailed Sep. 1, 2010, 40 pages.
Notice of Allowance for U.S. Appl. No. 12/639,762, mailed Mar. 4, 2011, 7 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,038, mailed Jun. 2, 2005, 14 pages.
Final Office Action for U.S. Appl. No. 09/855,038, mailed Feb. 7, 2006, 8 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,038, mailed Oct. 4, 2006, 14 pages.
Notice of Allowance for U.S. Appl. No. 09/855,038, mailed Apr. 26, 2007, 8 pages.
Requirement for Restriction/Election for U.S. Appl. No. 09/988,066, mailed Dec. 13, 2005, 7 pages.
Non-Final Office Action for U.S. Appl. No. 09/988,066, mailed Jul. 14, 2006, 17 pages.
Non-Final Office Action for U.S. Appl. No. 09/988,066, mailed Apr. 6, 2007, 22 pages.
Final Office Action for U.S. Appl. No. 09/988,066, mailed Oct. 31, 2007, 16 pages.
Advisory Action for U.S. Appl. No. 09/988,066, mailed May 28, 2008, 4 pages.
Notice of Allowance for U.S. Appl. No. 09/988,066, mailed Oct. 30, 2008, 16 pages.
Notice of Allowance for U.S. Appl. No. 09/988,066, mailed Jan. 9, 2009.
Non-Final Office Action for U.S. Appl. No. 11/804,977, mailed Jan. 14, 2008, 13 pages.
Notice of Allowance for U.S. Appl. No. 11/804,977, mailed Nov. 19, 2008, 17 pages.
Non-Final Office Action for U.S. Appl. No. 12/400,594, mailed May 14, 2010, 53 pages.
Final Office Action for U.S. Appl. No. 12/400,594, mailed Oct. 28, 2010, 13 pages.
Non-Final Office Action for U.S. Appl. No. 12/400,645, mailed Sep. 1, 2010, 45 pages.
Notice of Allowance for U.S. Appl. No. 12/400,645, mailed Jan. 26, 2011, 14 pages.
Non-Final Office Action for U.S. Appl. No. 12/372,390, mailed Apr. 22, 2010, 46 pages.
Non-Final Office Action for U.S. Appl. No. 12/372,390, mailed Sep. 13, 2010, 10 pages.
Notice of Allowance for U.S. Appl. No. 12/372,390, mailed Mar. 9, 2011, 8 pages.
Non-Final Office Action for U.S. Appl. No. 12/505,390, mailed Oct. 28, 2010, 51 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,015, mailed Oct. 28, 2004, 12 pages.
Non-Final Office Action for U.S. Appl. No. 09/855,015, mailed Jan. 12, 2006, 6 pages.
Notice of Allowance for U.S. Appl. No. 09/855,015, mailed Sep. 8, 2006, 3 pages.
Requirement for Restriction/Election for U.S. Appl. No. 09/855,015, mailed Nov. 3, 2006, 6 pages.
Notice of Allowance for U.S. Appl. No. 09/855,015, mailed Jan. 7, 2008, 4 pages.
Supplemental Notice of Allowance for U.S. Appl. No. 09/855,015, mailed Feb. 4, 2008, 3 pages.
Non-Final Office Action for U.S. Appl. No. 12/070,893, mailed Jun. 10, 2010, 44 pages.
Final Office Action for U.S. Appl. No. 12/070,893, mailed Nov. 24, 2010, 11 pages.
Non-Final Office Action for U.S. Appl. No. 12/070,893, mailed Mar. 18, 2011, 7 pages.
Non-Final Office Action for U.S. Appl. No. 11/611,067, mailed Feb. 20, 2009, 11 pages.
Final Office Action for U.S. Appl. No. 11/611,067, mailed Oct. 16, 2009, 35 pages.
Non-Final Office Action for U.S. Appl. No. 11/611,067, mailed Dec. 8, 2009, 11 pages.
Non-Final Office Action for U.S. Appl. No. 11/615,769, mailed Apr. 15, 2009, 11 pages.
Final Office Action for U.S. Appl. No. 11/615,769, mailed Jan. 22, 2010, 34 pages.
Advisory Action for U.S. Appl. No. 11/615,769, mailed May 25, 2010, 3 pages.
Notice of Allowance for U.S. Appl. No. 11/615,769, mailed Jul. 12, 2010, 14 pages.
Final Office Action for U.S. Appl. No. 11/646,845, mailed on Jun. 9, 2011, 22 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,742, mailed on Mar. 30, 2011, 23 pages.
Notice of Allowance for U.S. Appl. No. 11/953,743, mailed on Apr. 28, 2011, 19 pages.
Non-Final Office Action for U.S. Appl. No. 11/953,751, mailed on Mar. 29, 2011, 29 pages.
Final Office Action for U.S. Appl. No. 10/140,751, mailed on Jun. 28, 2011, 23 pages.
Non-Final Office Action for U.S. Appl. No. 12/702,031, mailed on Apr. 29, 2011, 5 pages.
Non-Final Office Action for U.S. Appl. No. 12/198,697, mailed on May 20, 2011, 43 pages.
Non-Final Office Action for U.S. Appl. No. 12/198,710, mailed on Mar. 24, 2011, 40 pages.
Notice of Allowance for U.S. Appl. No. 12/400,594, mailed on Mar. 23, 2011, 11 pages.
Notice of Allowance for U.S. Appl. No. 11/646,845 mailed on Jan. 8, 2013, 8 pages.
Non-Final Office Action for U.S. Appl. No. 13/083,481 mailed on Mar. 1, 2013, 14 pages.
Final Office Action for U.S. Appl. No. 12/198,710 mailed on Mar. 21, 2013, 17 pages.
Non-Final Office Action for U.S. Appl. No. 11/745,008 mailed on Mar. 7, 2013, 18 pages.
Non-Final Office Action for U.S. Appl. No. 13/548,116 mailed on Apr. 15, 2013, 8 pages.
Non-Final Office Action for U.S. Appl. No. 12/900,279 mailed on Apr. 11, 2013, 7 pages.
Related Publications (1)
Number Date Country
20110110237 A1 May 2011 US
Continuations (1)
Number Date Country
Parent 10810208 Mar 2004 US
Child 12880518 US