This application relates, in general, to routers, and more particularly to a crossbar apparatus for permitting read and write accesses to a forwarding table memory used within a router.
Packet-switched networks, such as the Internet, divide a message or a data stream transmitted by a source into discrete packets prior to transmission. Upon receipt of the packets by the recipient, the packets are recompiled to form the original message or data stream. As a packet-switched network, the Internet comprises various physical connections between computing devices, servers, routers, sub-networks, and other devices distributed throughout the network.
Routers connect networks, and each router has multiple inputs and multiple outputs coupled to independent network devices such as servers or other routers, the connections being made through communications links such as optical fibers or copper wires or the like.
Routers receive the packets being sent over the network and determine the next hop or segment of the network to which each packet should be sent through one of the ports of the router. When the router passes the packet to the next destination in the network, the packet is one step closer to its final destination. Each packet includes header information indicating the final destination address of the packet.
Conventionally, routers include memories and microprocessors therein for processing the packets received by the routers, as well as for performing other functions required of the router. Typically, routers contain one or more route processors, one or more forwarding engines, and a switch fabric. The route processor is a dedicated embedded subsystem which is responsible for communicating with the neighboring routers in the network to obtain current and ever-changing information about the network conditions. The route processor forms a routing table which is downloaded to the forwarding engine(s) and subsequently accessed by the forwarding engine(s) when forwarding packets.
The forwarding engine of the router is responsible for determining the destination address and the output port within the router to which to direct each received packet. Conventionally, this determination is made by accessing a routing table containing routing information for the entire network and performing a look-up operation.
One example of a conventional forwarding engine for a router is shown in
Conventionally, determining the destination port within the router to which to send the received packet is a computationally intensive process, particularly in view of the high data rates of the network (known as the “line rate”), such as 10 Gigabits/second. At this line rate, a forwarding engine within a router must make the destination port determination for approximately 30 million minimum-sized IP packets per second per port. Accordingly, as the router receives multiple packets, a conventional forwarding engine utilizes the large buffer memory 26 on its front end, as shown in
As such, conventional forwarding engines for routers can be susceptible to performance degradation if the network traffic directed at the router is high, particularly when the router receives a plurality of packets having short lengths, thereby requiring that the look-up operations be performed quickly. Further, the increasing demand for IP-centric services over the Internet, such as voice over IP, streaming video, and data transfers to wireless devices with unique IP addresses, has increased the demand for data handling by the forwarding engines, as well as the size of the forwarding table.
Also, in such a conventional arrangement as shown in
As recognized by the present inventors, what is needed is a crossbar apparatus or circuit for permitting access by various stages of a forwarding engine to the forwarding table memory so that look-up operations can occur efficiently. It is against this background that various embodiments of the present invention were developed.
In light of the above and according to one broad aspect of one embodiment of the present invention, disclosed herein is a crossbar apparatus which permits different stages of a forwarding engine to access an on-chip forwarding table memory. In one embodiment, the crossbar utilizes shared, differential low swing buses to provide high bandwidth for read operations.
According to another broad aspect of another embodiment of the present invention, disclosed herein is a programmable crossbar for dynamically coupling a plurality of stages of an execution unit to one or more portions of a memory. In one embodiment, the crossbar may include a set of address lines coupled with each stage of the plurality of stages for receiving an address signal from at least one stage, and logic for selectively coupling one of the plurality of stages to a portion of the memory. In one example, the logic receives the set of address lines from each stage and compares a portion of the address signal to one or more hardwired addresses associated with each portion of the memory. The logic may also receive a plurality of programmable enable signals corresponding to each stage of the plurality of stages. In one embodiment, when the portion of the address signal from one stage of the plurality of stages matches one of the hardwired addresses associated with one portion of the memory, then the one stage of the plurality of stages is coupled with the one portion of memory if the programmable enable signal associated with the one stage is active. Hence, under programmatic control (via control of the enable line) a particular stage of the execution unit can have its address lines for a read operation dynamically and selectively coupled with a particular portion of the memory.
In one example, the set of address lines may be implemented as sets of differential, low swing pairs of signal lines, each pair corresponding to a single address bit. In this way, high speed addressing and memory accesses can take place. For improved noise immunity at high clock frequencies, the plurality of differential pairs of address lines may be interleaved along their length.
The logic may include a multiplexer for receiving the address signals from each of the stages and selecting the address signals associated with one stage of the stages based on a plurality of select lines, and a comparator for comparing a portion of the address signal from one stage of the plurality of stages to a hardwired address associated with one portion of the memory. The logic may also include a logic gate, such as an AND gate or other combinatorial logic device or element, receiving an output from the comparator and receiving a programmable enable signal associated with the one stage, the logic gate activating a select line associated with a stage based on the output from the comparator and a state of the programmable enable signal, thereby causing the multiplexer to select the address signals of the one stage for connection with the memory.
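For illustration only, the following C sketch models the selection logic just described, assuming a 19-bit stage address whose upper five bits form the sector address; the bit positions, the NUM_STAGES value, and the names sector_port_select and stage_inputs_t are hypothetical and are not taken from the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_STAGES 12   /* number of execution unit stages (one example from the text) */

/* Hypothetical model: the sector-address portion of each stage's address is
 * compared against a hardwired sector address, the match is gated by that
 * stage's programmable enable, and the winning stage's address is what the
 * multiplexer forwards to the sector's read address port. */
typedef struct {
    uint32_t address[NUM_STAGES];  /* 19-bit address driven by each stage */
    bool     enable[NUM_STAGES];   /* programmable enable, one per stage  */
} stage_inputs_t;

/* Returns the index of the stage whose address lines are coupled to this
 * sector port this cycle, or -1 if no enabled stage addresses the sector. */
static int sector_port_select(const stage_inputs_t *in, uint32_t hardwired_sector)
{
    for (int s = 0; s < NUM_STAGES; s++) {
        uint32_t sector_bits = (in->address[s] >> 14) & 0x1F;  /* assumed bit positions */
        bool match = (sector_bits == hardwired_sector);        /* comparator            */
        if (match && in->enable[s])                            /* AND gate              */
            return s;                     /* activates the multiplexer's select line */
    }
    return -1;
}
```

In an actual circuit the comparison for every stage occurs in parallel; the loop above merely models a fixed priority among simultaneous matches.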
In another example, the crossbar may also include a set of data lines from the memory and logic for dynamically coupling the set of data lines to one stage of the plurality of stages. In this way, data can be selectively delivered to particular stages of the execution unit under programmatic control. The data lines may include a plurality of differential pair data lines, each differential pair data line representing a single data bit, and the plurality of differential pair data lines may be interleaved along their length.
According to another broad aspect of another embodiment of the present invention, disclosed herein is a router including a lookup execution unit including a plurality of stages, a forwarding table memory arranged in hierarchy including addressable sectors, blocks, and entries, and a crossbar having an address crossbar for selectively coupling one of the plurality of stages to a sector of the memory so that data from the sector can be read. In one example, any one of the stages of the plurality of stages may be selectively and dynamically coupled with any one of the sectors of the forwarding table memory for providing an address to a particular sector of the memory to read data therefrom.
In one embodiment, the address crossbar is dynamically controllable to selectively couple stages of the lookup execution unit to different sectors of the forwarding table memory. The address crossbar may be formed from a plurality of differential signal pairs.
According to another broad aspect of the present invention, disclosed herein is a crossbar apparatus for permitting multiple portions of a forwarding engine to read from a forwarding table memory. In one embodiment, the crossbar apparatus includes a plurality of differential low swing bus signal lines coupled with the multiple portions of the forwarding engine to control a selection of a sector of the forwarding table memory, and a plurality of differential low swing bus signal lines coupled with the multiple portions of the forwarding engine to control a selection of a block of the forwarding table memory.
The features, utilities and advantages of various embodiments of the invention will be apparent from the following more particular description of embodiments of the invention as illustrated in the accompanying drawings.
According to one broad aspect of one embodiment of the present invention, disclosed herein is a crossbar apparatus which permits different stages of a forwarding engine to access an on-chip forwarding table memory. In one embodiment, the crossbar utilizes shared, differential low swing buses to provide high bandwidth for read operations. The forwarding engine may have stages or execution units which access the forwarding table memory through the crossbar. In order to reduce the number of cycles needed to perform a look-up operation, the forwarding table memory may include a portion on-chip with the forwarding engine and an external portion outside of the forwarding engine. Additionally, the forwarding engine performs a lookup operation by accessing the route information contained in the forwarding table. Various embodiments of the present invention will now be described.
In one example, the lookup execution unit 38 includes 12 stages, and each stage 36 is coupled with the data crossbar 42 and the address crossbar 40. In one example, each stage 36 is coupled with the data crossbar 42 through a 34-line bus, and each stage 36 is coupled with the address crossbar 40 through a 19-line bus with an enable signal. The address crossbar 40 selectively couples the address signal lines from a particular execution stage 36 to the read address ports 32 of the memory, so that a stage 36 of the execution unit can launch a request that data from a particular address of the forwarding table 30 be returned. The data crossbar 42 selectively couples the read data ports 34 of the memory to an execution unit stage 36, so that data returned from a read request can be provided to the appropriate execution unit stage 36.
In one embodiment, the write ports 44 of the memory can be directly accessed through a 19-bit address bus and a 34-bit data bus, so that the forwarding table memory 30 can be populated with entries such as route information or other data used during lookup operations or forwarding operations in a router.
In one example, read ports 98, 100 are assigned to a single LXU pipeline stage 93 dynamically by external control, such as software. By using two read ports in each sector, a sector can be shared between a pipeline stage that needs only a single 4 KB block and a pipeline stage that requires a large number of blocks, thereby improving memory utilization.
Although only one sector 94 of the memory is illustrated in
As shown in
Assuming that the address lines of a particular stage 93 of an execution unit have been coupled with a particular port 98, 100 of a sector 94 based on the sector address and the Enable control 124, the block address is then decoded.
Referring to
The entry address 142 (10 bits in one example) from either port A0 or port A1 is selected through a multiplexer 144, where the select line 146 is coupled with a control signal that determines whether the block should be assigned to port A0 or port A1; this control/selection 146 may be controlled by software, in one example. Accordingly, the memory block 130 is selectively provided with the entry address portion of the address supplied by a stage of the execution unit, along with a Read Enable signal 134. In response, the memory block 130 decodes the entry address 142 and provides the data/contents of the particular entry to the read data port of the memory so that the data can be read by a stage of the execution unit.
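For illustration, the sketch below decomposes a 19-bit read address into sector, block, and entry fields in a way consistent with the hierarchy described herein (32 sectors, 16 blocks per sector, 1,024 entries per block); the exact field widths and bit positions, and the name ft_decode, are assumptions rather than the disclosed layout.

```c
#include <stdint.h>

/* Assumed split of the 19-bit read address: 5 bits select one of 32 sectors,
 * 4 bits select one of 16 blocks within the sector, and the low 10 bits form
 * the entry address (1,024 entries per block). */
typedef struct {
    uint32_t sector;  /* 5 bits  */
    uint32_t block;   /* 4 bits  */
    uint32_t entry;   /* 10 bits */
} ft_addr_t;

static ft_addr_t ft_decode(uint32_t addr19)
{
    ft_addr_t a;
    a.entry  =  addr19        & 0x3FF;   /* low 10 bits  */
    a.block  = (addr19 >> 10) & 0xF;     /* next 4 bits  */
    a.sector = (addr19 >> 14) & 0x1F;    /* top 5 bits   */
    return a;
}
```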
When a valid read operation (an address plus a read enable, where the address corresponds to a block that is mapped to that read address port) is presented to a sector read address port, that sector knows that valid data will be driven out of the corresponding read data port at a fixed time in the future (3 clock cycles later, in one example). Logically, each sector's read port drives its corresponding select signal based upon this information.
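A minimal sketch of this fixed-latency behavior is given below, assuming the 3-cycle example above and modeling the per-port bookkeeping as a small shift register of valid bits; the structure and names are illustrative only.

```c
#include <stdbool.h>

#define READ_LATENCY 3   /* clock cycles from read address to read data (one example) */

/* Per-read-port bookkeeping: a valid bit enters the shift register when a
 * valid read is accepted and emerges READ_LATENCY cycles later, at which
 * point the sector drives its read data port and select signal. */
typedef struct {
    bool valid_pipe[READ_LATENCY];
} sector_read_port_t;

/* Advance one clock; returns true when this port should drive valid data
 * (and its select signal) this cycle. */
static bool sector_port_clock(sector_read_port_t *p, bool valid_read_this_cycle)
{
    bool drive_now = p->valid_pipe[READ_LATENCY - 1];
    for (int i = READ_LATENCY - 1; i > 0; i--)
        p->valid_pipe[i] = p->valid_pipe[i - 1];
    p->valid_pipe[0] = valid_read_this_cycle;
    return drive_now;
}
```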
Accordingly, it can be seen from
As shown in
In
One example of a tristate driver 174 and pre-charge elements 176 is shown in
Further, the wires of differential pairs may be interleaved as shown in
In accordance with one embodiment of the present invention, a forwarding engine 180, such as shown in
The forwarding engine 180 may be, in one example, a network processing unit (NPU) for determining the destination of a packet, the NPU employing a systolic array pipeline architecture. As used herein, the term “network processing unit” includes any processor, microprocessor, or other integrated circuit (or collections thereof)—such as a forwarding engine—which determines the destination of a packet. As will be described herein in greater detail, the NPU of one embodiment of the present invention may employ one or more systolic arrays in various execution units of the NPU to perform various operations on a packet as the packet passes through the NPU. As used herein, the term “systolic array” or “systolic array pipeline” includes, but is not limited to, a series or collection of stages wherein each stage may contain a register file and one or more functional units. In one embodiment, the data or program context being processed by the stages—which may include items such as the state of the register files, the program counter, and/or the current state of the program—flows from a stage to a next stage. In one example, the stages of a systolic array are arranged in a generally linear or sequential order, wherein each stage is capable of performing an operation involved in processing a packet, and the data/program context processed in each stage is processed therein for one clock cycle after which the data/program context is passed to a next stage for processing therein. An example of an NPU and router is disclosed in co-pending, commonly assigned application Ser. No. 10/177,496 entitled “Packet Routing and Switching Device” filed Jun. 20, 2002, the disclosure of which is incorporated herein by reference in its entirety.
In one embodiment, some of the stages of the systolic array are programmable to perform a processing operation involved in processing the packet under program control, while other stages of the systolic array can perform a delay operation (as with “sleep stages,” discussed below) where the data passes through a stage with no processing therein. In general, on every clock cycle of the NPU, data/program context is moved from one stage of the systolic array to the next stage in the systolic array, without blocking the intake of new packets or the processing of existing packets. As will be described below, the systolic array of the NPU can receive new packets at a line rate of, for example, 40 Gbits/second, and can finish processing a packet at the line rate during steady state operation. The NPU is adapted for use in a router, where the router has multiple bi-directional ports for receiving and transmitting data into and out of the router, wherein each port is connected with different portions of the network. As mentioned above, in one embodiment, when the NPU receives a packet, the NPU operates to determine to which destination port of the router the packet should be sent out so that the packet gets closer to its final destination (i.e., the next hop in the network).
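As a rough illustration of this per-cycle movement, the sketch below advances hypothetical packet contexts through a linear array of stages on each clock tick; the pkt_context_t fields, the PIPE_DEPTH value, and the callback interface are assumptions for illustration, not the NPU's actual design.

```c
#define PIPE_DEPTH 12   /* illustrative number of stages in one systolic array */

/* Hypothetical packet context flowing from stage to stage: register file
 * state, program counter, and a valid flag. */
typedef struct {
    int      valid;
    unsigned pc;
    unsigned regs[16];
} pkt_context_t;

/* One clock tick of a systolic array: each stage processes the context it
 * holds for one cycle (a "sleep" stage would simply do nothing), then every
 * context advances to the next stage and a new context, if any, enters
 * stage 0. Returns the context leaving the last stage. */
static pkt_context_t systolic_tick(pkt_context_t stages[PIPE_DEPTH],
                                   void (*process)(int stage, pkt_context_t *ctx),
                                   const pkt_context_t *incoming)
{
    for (int s = 0; s < PIPE_DEPTH; s++)
        if (stages[s].valid)
            process(s, &stages[s]);

    pkt_context_t leaving = stages[PIPE_DEPTH - 1];
    for (int s = PIPE_DEPTH - 1; s > 0; s--)   /* shift contexts downstream */
        stages[s] = stages[s - 1];
    stages[0] = incoming ? *incoming : (pkt_context_t){ 0 };
    return leaving;
}
```

Intake is never blocked in this model: a new context can enter stage 0 on every tick regardless of what the other stages hold.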
Referring to
In one example, when a packet is received by the NPU, the header sequencer 186 of
The header sequencer 186 also passes the packet (in one example, the entire packet) to a packet buffer 188 where the packet is stored. As the LXU and QXU perform their operations using the packet context and as they modify the packet context, the packet remains in the packet buffer 188 until the QXU completes its operations. Generally, after the LXU has determined the destination port to which the packet should be sent and the QXU has modified the packet context to specify the destination port and the queue to which to send the packet, unification logic merges the packet context with the respective packet stored in the packet buffer. In one example, both the packet context and the packet are passed out of the NPU to other portions within the router where the switching functions of the router are performed and the packet is transmitted out of the router to the appropriate output port, using the appropriate data formatting and encapsulation associated with the appropriate output port.
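The following sketch illustrates one way such a packet buffer and unification step could be modeled, using a handle carried in the packet context to locate the parked packet; the handle-based scheme, field names, and sizes are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

#define PKT_BUF_SLOTS 256    /* illustrative packet buffer capacity */
#define MAX_PKT_BYTES 2048   /* illustrative maximum packet size    */

typedef struct {             /* hypothetical finished packet context */
    uint16_t buf_handle;     /* where the full packet is parked      */
    uint16_t dest_port;      /* destination port chosen by the LXU   */
    uint16_t dest_queue;     /* destination queue chosen by the QXU  */
} npu_context_t;

typedef struct {             /* one parked packet awaiting its context */
    uint8_t data[MAX_PKT_BYTES];
    size_t  len;
    int     occupied;
} pkt_slot_t;

/* Unification: look up the buffered packet by the handle in its finished
 * context, mark the slot free, and report the port/queue chosen for it.
 * Returns the slot holding the packet, or NULL if nothing is parked there. */
static pkt_slot_t *unify(pkt_slot_t buffer[PKT_BUF_SLOTS],
                         const npu_context_t *ctx,
                         uint16_t *port, uint16_t *queue)
{
    pkt_slot_t *slot = &buffer[ctx->buf_handle % PKT_BUF_SLOTS];
    if (!slot->occupied)
        return NULL;
    *port  = ctx->dest_port;
    *queue = ctx->dest_queue;
    slot->occupied = 0;      /* packet now leaves the packet buffer */
    return slot;
}
```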
Referring again to
Using the context of the packet, the LXU performs the necessary table lookup for forwarding the packet to the proper output port of the router, as well as any quality of service (QOS) or filtering functionality. It is understood that since the LXU is under program control, the operations performed by the LXU to determine the proper output port to which to send the packet, or to perform other functions within the LXU, are a matter of choice depending on the particular implementation chosen and how the software is written to process packets.
As will be described below with reference to
After determining the destination queue/port in the router to which to send the packet, the LXU attaches the forwarding information to the context for the packet, and passes the context of the packet to the QXU. Using the context, the QXU removes the corresponding packet from the packet buffer and passes the packet and the context to a portion of the router for writing to the appropriate output queue in the router so that the packet can be transmitted out of the router on the appropriate output port.
In the example of
The input packet buffers account for rate mismatches between the media adapters (10 Gbits/sec) and the input packet arbiter (40 Gbits/sec) by aggregating four 10 Gbits/sec packet streams to a 40 Gbits/sec packet stream. The input packet arbiter, being coupled with the input packet buffers and the header sequencer, selects an input packet buffer for obtaining a packet, and passes the packet to the header sequencer. The input packet arbiter cycles between the various input packet buffers to obtain packets therefrom, and in this manner, the input packet arbiter creates a 40 Gbits/sec stream of packet data which is passed to the header sequencer of the NPU.
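As a simple illustration of this aggregation, the sketch below cycles round-robin over four toy input buffers and returns the next available packet; the FIFO model and function names are assumptions, not the arbiter's actual implementation.

```c
#include <stddef.h>

#define NUM_INPUT_BUFFERS 4   /* four 10 Gbits/sec streams aggregated to 40 Gbits/sec */
#define FIFO_DEPTH        64  /* illustrative per-buffer depth */

/* Toy model of an input packet buffer: a circular FIFO of packet pointers. */
typedef struct {
    void  *pkts[FIFO_DEPTH];
    size_t head, tail;        /* head == tail means empty */
} input_buffer_t;

/* Round-robin arbitration: starting after the last buffer served, pick the
 * next buffer that holds a packet and hand that packet to the caller (the
 * header sequencer). Returns NULL when all four buffers are empty. */
static void *input_arbiter_next(input_buffer_t bufs[NUM_INPUT_BUFFERS], unsigned *cursor)
{
    for (unsigned i = 0; i < NUM_INPUT_BUFFERS; i++) {
        unsigned idx = (*cursor + i) % NUM_INPUT_BUFFERS;
        input_buffer_t *b = &bufs[idx];
        if (b->head != b->tail) {                     /* buffer has a packet  */
            void *pkt = b->pkts[b->head];
            b->head  = (b->head + 1) % FIFO_DEPTH;
            *cursor  = (idx + 1) % NUM_INPUT_BUFFERS; /* resume after this one */
            return pkt;
        }
    }
    return NULL;
}
```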
Further as shown in
In addition, the IPA counts the length of the incoming packet, and then in one example adds the length information to the packet header. In one embodiment, the IOD and the length are prepended to the packet, i.e., added to the beginning of the packet. The IPA also examines a checksum to determine if the packet was transmitted correctly from the media adapter.
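A minimal sketch of this prepend step is shown below, assuming a simple two-field header carrying the IOD and the counted length; the field widths, layout, and names are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed prepended header: the IOD from the IOD table plus the packet
 * length counted by the IPA, added to the beginning of the packet. */
typedef struct {
    uint16_t iod;      /* input/output descriptor for the arrival port */
    uint16_t length;   /* packet length counted by the IPA             */
} ipa_header_t;

/* Writes the prepended header followed by the original packet into 'out';
 * returns the new total length. 'out' must hold len + sizeof(ipa_header_t). */
static size_t ipa_prepend(uint8_t *out, const uint8_t *pkt, uint16_t len, uint16_t iod)
{
    ipa_header_t hdr = { .iod = iod, .length = len };
    memcpy(out, &hdr, sizeof hdr);
    memcpy(out + sizeof hdr, pkt, len);
    return len + sizeof hdr;
}
```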
The IPA may also receive, from the RP packet buffer, packets originating from the RP (these packets are referred to herein as “RP generated packets”). The RP generated packets are encoded to pass through the NPU with minimal processing, and bypass the IOD lookup because the routing protocol software (running in the route processor) adds the correct IOD to the packet before forwarding it to the RP packet buffer.
The IOD table is implemented using a static random access memory (SRAM) and stores information about each type of port that the router is servicing, e.g., 1 gigabit Ethernet, 10 gigabit Ethernet, etc. The route processor communicates with the media adapters via a system interface to determine which types of ports are presently configured in the router, and then assembles the IOD table to reflect the ports that are presently configured. Accordingly, the media adapters may be dynamically connected to or disconnected from the router to support various types of ports, and the router will be able to reconfigure itself to support the new media adapters.
In accordance with one embodiment of the present invention, the destination queue for the packet is assigned by the NPU before the packet is transmitted to the switching engine. Once the packet is provided to the switching engine of the router, in a process known as cellification, the switching engine breaks the packet into a set of cells and stores the cells in the queue specified by the NPU and associated with the proper output port of the router.
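The cellification step can be pictured with the short sketch below, which splits a packet into fixed-size cells for storage by the switching engine; the 64-byte cell size and the function name are assumptions, as the text does not specify a cell format.

```c
#include <stddef.h>
#include <string.h>

#define CELL_SIZE 64   /* assumed cell size; the text does not fix one */

/* Breaks 'pkt' of length 'len' into CELL_SIZE-byte cells, zero-padding the
 * last cell, and returns the number of cells produced. */
static size_t cellify(const unsigned char *pkt, size_t len,
                      unsigned char cells[][CELL_SIZE], size_t max_cells)
{
    size_t n = 0;
    for (size_t off = 0; off < len && n < max_cells; off += CELL_SIZE, n++) {
        size_t chunk = (len - off < CELL_SIZE) ? (len - off) : CELL_SIZE;
        memset(cells[n], 0, CELL_SIZE);
        memcpy(cells[n], pkt + off, chunk);
    }
    return n;
}
```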
As mentioned above, the NPU execution units—the PXU, LXU, and QXU—are implemented using systolic array pipeline architectures, in one embodiment, so that operations (such as the look-up operation and memory reads) can be performed at the line rate, which eliminates the need for input-striping as with conventional routers. The NPU thereby permits the packets to be stored in memory of the router as a function of the router's output port associated with the packet, which thereby permits the orderly and efficient storage and extraction of the packets to and from memory, such as by using round-robin output striping.
As shown in
Referring now to
In one example, each major stage (i.e., major stage 0 to 11 as shown in
The FT as illustrated in the example of
The 16 FT read ports communicate with sectors using a plurality of shared, differential, low swing buses. Collectively, the buses are called the crossbar, because they connect all sectors to all FT read ports. Read address ports drive onto shared crossbar buses terminating at sectors. Each FT read data port has its own dedicated crossbar bus that is shared by the sectors. The write address and data are transported with a full swing bus.
Each 64 KB sector includes two read ports and one write port, in one example. One FT read address crossbar bus is dedicated to each sector read address port. Within a sector, addresses and data are transported to blocks as full swing signals, and read output data is returned over shared, differential, low swing buses. Each 4 KB block contains 1024 34-bit entries (including 2 parity bits), in one example. The 4 KB granularity is a function of the trade-off between the maximum number of blocks that can access the sector's low swing bus and the amount of memory that is unused by blocks using only one of their entries. Each block is implemented as a standard SRAM block, and can perform one read and one write per cycle. In one example, when a read address and a write address select the same block, the read operation first reads out the old data and the write operation then writes the new data.
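The same-block read/write behavior described above can be modeled as follows; the ft_block_t type and the use of a 64-bit word to hold each 34-bit entry are illustrative assumptions.

```c
#include <stdint.h>

#define ENTRIES_PER_BLOCK 1024   /* 1024 34-bit entries (incl. 2 parity bits) per 4 KB block */

/* Toy model of one SRAM block that performs one read and one write per
 * cycle. When both target the same block, the read returns the old contents
 * and the write then stores the new data. */
typedef struct {
    uint64_t entries[ENTRIES_PER_BLOCK];   /* each 34-bit entry held in a wider word */
} ft_block_t;

static uint64_t block_cycle(ft_block_t *blk,
                            uint32_t read_entry,  int read_en,
                            uint32_t write_entry, int write_en, uint64_t write_data)
{
    uint64_t read_data = 0;
    if (read_en)                                        /* old data read out first */
        read_data = blk->entries[read_entry & (ENTRIES_PER_BLOCK - 1)];
    if (write_en)                                       /* then new data written   */
        blk->entries[write_entry & (ENTRIES_PER_BLOCK - 1)] = write_data & 0x3FFFFFFFFULL;
    return read_data;
}
```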
In one embodiment, each FT read port is controlled by 1 major LXU pipeline stage, and each of the 32×2=64 sector read ports is mapped to 1 of the FT's 16 read ports. Within a sector, each block is mapped to one of the sector's two read ports. All sector write ports are connected to the FT write port, and all block write ports are connected to their sector's write port in one example.
As illustrated in
In one embodiment, the FT communicates with the systolic array through the crossbar, which connects all FT read ports to sector read ports. The FT has an address crossbar and a data crossbar. A high-speed bus may be used to communicate between the systolic array and the FT, and in one example, the buses are mixed. A sector port may be assigned to one unique stage, but a single stage can have multiple sector ports assigned to it. In one example, the FT delivers 34 bits of data to each pipeline stage every cycle at 375 MHz. In one embodiment, the crossbar is implemented as a tristate, differential, low swing bus. Alternatively, the crossbar can be implemented using static combinational logic.
In one embodiment, particular stages of the systolic array are adapted to launch memory accesses to the forwarding table SRAM so that the results from the memory access will be available to stages downstream in the systolic array. These stages which may be dedicated to memory accesses can be spaced throughout the systolic array so that the intermediate stages can perform other operations while the memory access is in flight. The different stages may access the forwarding table SRAM through multiple ports to the FT SRAM.
At operation 202, for LXU stages 1-11, one block of memory is allocated dynamically to each of the LXU stages 1-11 that has a corresponding level in the trie for processing. For instance, for a radix trie whose nodes span 8 levels (a root node plus 7 additional levels), LXU stages 1-7 would each be allocated 1 block of memory in this example. Each block of memory permits the stage to compare against 1,024 nodes. If a particular level in the trie has more than 1,024 nodes, then the corresponding LXU stage may be allocated one or more additional blocks of memory. At operation 204, if the trie changes, such as due to changes in the network topology, so that a level of the trie has fewer nodes, then the corresponding LXU stage may have one or more blocks of memory de-allocated.
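The allocation rule of operations 202 and 204 amounts to rounding each trie level's node count up to a whole number of 1,024-node blocks, as sketched below; the example node counts and names are hypothetical.

```c
#include <stdio.h>

#define NODES_PER_BLOCK 1024   /* each 4 KB block holds 1,024 entries/nodes */

/* One block per 1,024 nodes at a trie level, rounded up; zero blocks when
 * the level is empty. Blocks would be released as the level shrinks. */
static int blocks_needed(int nodes_at_level)
{
    if (nodes_at_level <= 0)
        return 0;
    return (nodes_at_level + NODES_PER_BLOCK - 1) / NODES_PER_BLOCK;
}

int main(void)
{
    /* Hypothetical trie: root plus 7 levels, with made-up node counts. */
    int nodes_per_level[8] = { 1, 180, 950, 2600, 4100, 700, 90, 12 };
    for (int level = 1; level < 8; level++)
        printf("LXU stage %d: %d block(s)\n",
               level, blocks_needed(nodes_per_level[level]));
    return 0;
}
```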
While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.
While the invention has been particularly shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.
This application is a continuation of commonly assigned patent application entitled “CROSSBAR APPARATUS FOR A FORWARDING TABLE MEMORY IN A ROUTER”, filed on Apr. 17, 2003, application Ser. No. 10/418,634, now U.S. Pat. No. 7,450,438, issued on Nov. 11, 2008, which is a continuation-in-part of the commonly assigned patent application entitled “PACKET ROUTING AND SWITCHING DEVICE,” filed on Jun. 20, 2002, application Ser. No. 10/177,496, now U.S. Pat. No. 7,382,787, issued on Jun. 3, 2008, the disclosures of which are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4885744 | Lespagnol et al. | Dec 1989 | A |
5014262 | Harshavardhana | May 1991 | A |
5140417 | Tanaka et al. | Aug 1992 | A |
5412646 | Cyr et al. | May 1995 | A |
5471592 | Gove et al. | Nov 1995 | A |
5524258 | Corby et al. | Jun 1996 | A |
5677851 | Kingdon et al. | Oct 1997 | A |
5734649 | Carvey et al. | Mar 1998 | A |
5781772 | Wilkinson, III et al. | Jul 1998 | A |
5802278 | Isfeld et al. | Sep 1998 | A |
5838894 | Horst | Nov 1998 | A |
5878415 | Olds | Mar 1999 | A |
5905725 | Sindhu et al. | May 1999 | A |
5909440 | Ferguson et al. | Jun 1999 | A |
5920699 | Bare | Jul 1999 | A |
5923643 | Higgins et al. | Jul 1999 | A |
5930256 | Greene et al. | Jul 1999 | A |
6011795 | Varghese et al. | Jan 2000 | A |
6018524 | Turner et al. | Jan 2000 | A |
6078963 | Civanlar et al. | Jun 2000 | A |
6091725 | Cheriton et al. | Jul 2000 | A |
6097721 | Goody | Aug 2000 | A |
6101192 | Wakeland | Aug 2000 | A |
6161139 | Win et al. | Dec 2000 | A |
6192405 | Bunnell | Feb 2001 | B1 |
6308219 | Hughes | Oct 2001 | B1 |
6430181 | Tuckey | Aug 2002 | B1 |
6434148 | Park et al. | Aug 2002 | B1 |
6453413 | Chen et al. | Sep 2002 | B1 |
6526055 | Perlman et al. | Feb 2003 | B1 |
6584528 | Kurafuji et al. | Jun 2003 | B1 |
6631419 | Greene | Oct 2003 | B1 |
6636895 | Li et al. | Oct 2003 | B1 |
6658002 | Ross et al. | Dec 2003 | B1 |
6675187 | Greenberger | Jan 2004 | B1 |
6687781 | Wynne et al. | Feb 2004 | B2 |
6697875 | Wilson | Feb 2004 | B1 |
6721316 | Epps et al. | Apr 2004 | B1 |
6731633 | Sohor et al. | May 2004 | B1 |
6732203 | Kanapathippillai et al. | May 2004 | B2 |
6751191 | Kanekar et al. | Jun 2004 | B1 |
6778490 | Achilles et al. | Aug 2004 | B1 |
6785728 | Schneider et al. | Aug 2004 | B1 |
6795886 | Nguyen | Sep 2004 | B1 |
6801950 | O'Keeffe et al. | Oct 2004 | B1 |
6804815 | Kerr et al. | Oct 2004 | B1 |
6879559 | Blackmon et al. | Apr 2005 | B1 |
6920456 | Lee et al. | Jul 2005 | B2 |
6922724 | Freeman et al. | Jul 2005 | B1 |
6934281 | Kanehara | Aug 2005 | B2 |
6941487 | Balakrishnan et al. | Sep 2005 | B1 |
6944183 | Iyer et al. | Sep 2005 | B1 |
6944860 | Schmidt | Sep 2005 | B2 |
6954220 | Bowman-Amuah | Oct 2005 | B1 |
6954436 | Yip et al. | Oct 2005 | B1 |
6961783 | Cook et al. | Nov 2005 | B1 |
6965615 | Kerr et al. | Nov 2005 | B1 |
6973488 | Yavatkar et al. | Dec 2005 | B1 |
6990527 | Spicer et al. | Jan 2006 | B2 |
7006431 | Kanekar et al. | Feb 2006 | B1 |
7020718 | Brawn et al. | Mar 2006 | B2 |
7024693 | Byrne | Apr 2006 | B2 |
7028098 | Mate et al. | Apr 2006 | B2 |
7043494 | Joshi et al. | May 2006 | B1 |
7051039 | Murthy et al. | May 2006 | B1 |
7051078 | Cheriton | May 2006 | B1 |
7054315 | Liao | May 2006 | B2 |
7054944 | Tang et al. | May 2006 | B2 |
7069372 | Leung, Jr. et al. | Jun 2006 | B1 |
7069536 | Yaung | Jun 2006 | B2 |
7073196 | Dowd et al. | Jul 2006 | B1 |
7095713 | Willhite et al. | Aug 2006 | B2 |
7096499 | Munson | Aug 2006 | B2 |
7099341 | Lingafelt et al. | Aug 2006 | B2 |
7103708 | Eatherton et al. | Sep 2006 | B2 |
7111071 | Hooper | Sep 2006 | B1 |
7124203 | Joshi et al. | Oct 2006 | B2 |
7136383 | Wilson | Nov 2006 | B1 |
7139238 | Hwang | Nov 2006 | B2 |
7150015 | Pace et al. | Dec 2006 | B2 |
7155518 | Forslow | Dec 2006 | B2 |
7159125 | Beadles et al. | Jan 2007 | B2 |
7167918 | Byrne et al. | Jan 2007 | B2 |
7184440 | Sterne et al. | Feb 2007 | B1 |
7185192 | Kahn | Feb 2007 | B1 |
7185365 | Tang et al. | Feb 2007 | B2 |
7200144 | Terrell et al. | Apr 2007 | B2 |
7200865 | Roscoe et al. | Apr 2007 | B1 |
7203171 | Wright | Apr 2007 | B1 |
7225204 | Manley et al. | May 2007 | B2 |
7225256 | Villavicencio | May 2007 | B2 |
7225263 | Clymer et al. | May 2007 | B1 |
7227842 | Ji et al. | Jun 2007 | B1 |
7230912 | Ghosh et al. | Jun 2007 | B1 |
7231661 | Villavicencio et al. | Jun 2007 | B1 |
7239639 | Cox et al. | Jul 2007 | B2 |
7249374 | Lear et al. | Jul 2007 | B1 |
7257815 | Gbadegesin et al. | Aug 2007 | B2 |
7274702 | Toutant et al. | Sep 2007 | B2 |
7274703 | Weyman et al. | Sep 2007 | B2 |
7280975 | Donner | Oct 2007 | B1 |
7289517 | Shimonishi | Oct 2007 | B1 |
7302701 | Henry | Nov 2007 | B2 |
7355970 | Lor | Apr 2008 | B2 |
7382787 | Barnes et al. | Jun 2008 | B1 |
7403474 | Rorie | Jul 2008 | B2 |
7406038 | Oelke et al. | Jul 2008 | B1 |
7418536 | Leung, Jr. et al. | Aug 2008 | B2 |
7525904 | Li et al. | Apr 2009 | B1 |
7536476 | Alleyne | May 2009 | B1 |
7710991 | Li et al. | May 2010 | B1 |
7889712 | Talur et al. | Feb 2011 | B2 |
20020002650 | Christenson | Jan 2002 | A1 |
20020009095 | Van Doren et al. | Jan 2002 | A1 |
20020035639 | Xu | Mar 2002 | A1 |
20020124145 | Arimilli et al. | Sep 2002 | A1 |
20030005178 | Hemsath | Jan 2003 | A1 |
20030046507 | Swanson | Mar 2003 | A1 |
20030056001 | Mate et al. | Mar 2003 | A1 |
20030056134 | Kanapathippillai et al. | Mar 2003 | A1 |
20030091043 | Mehrotra et al. | May 2003 | A1 |
20030108056 | Sindhu et al. | Jun 2003 | A1 |
20030120888 | Huang | Jun 2003 | A1 |
20030163589 | Bunce et al. | Aug 2003 | A1 |
20030188192 | Tang et al. | Oct 2003 | A1 |
20030206528 | Lingafelt et al. | Nov 2003 | A1 |
20030208597 | Belgaied | Nov 2003 | A1 |
20030212806 | Mowers et al. | Nov 2003 | A1 |
20030212900 | Liu et al. | Nov 2003 | A1 |
20040024888 | Davis et al. | Feb 2004 | A1 |
20040139179 | Beyda | Jul 2004 | A1 |
20060117126 | Leung et al. | Jun 2006 | A1 |
20060159034 | Talur et al. | Jul 2006 | A1 |
Number | Date | Country
---|---|---
20090063702 A1 | Mar 2009 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10418634 | Apr 2003 | US
Child | 12260841 | | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10177496 | Jun 2002 | US
Child | 10418634 | | US