§ 1.1 Field of the Invention
The present invention concerns communications. In particular, the present invention concerns large scale switches used in communications networks.
§ 1.2 Background Information
To keep pace with Internet traffic growth, researchers continually explore new transmission and switching technologies. For instance, it has been demonstrated that hundreds of signals can be multiplexed onto a single fiber with a total transmission capacity of over 3 Tbps, and that an optical cross-connect system (OXC) can have a total switching capacity of over 2 Pbps. However, today's core Internet Protocol (IP) routers' capacity remains at a few hundred Gbps, or a couple of Tbps in the near future.
It remains a challenge to build a very large IP router with a capacity of tens of Tbps or more. The complexity and cost of building such a large-capacity router are much higher than those of building an OXC. This is because packet switching may require processing (e.g., classification and table lookup), storing, and scheduling packets, and performing buffer management. As the line rate increases, the processing and scheduling time available for each packet is proportionally reduced. Also, as the router capacity increases, the time for resolving output contention becomes more constrained.
Demands on memory and interconnection technologies are especially high when building a large-capacity packet switch. Memory technology very often becomes a bottleneck of a packet switch system. Interconnection technology significantly affects a system's power consumption and cost. As a result, designing a good switch architecture that is both cost-effective and scalable to have a very large capacity remains a challenge.
The numbers of switch elements and interconnections are often critical to the scalability and cost of a switch fabric. Since the number of switch elements of single-stage switch fabrics is proportional to the square of the number of switch ports, single-stage switch fabric architectures are not attractive for large switches. On the other hand, multi-stage switch architectures, such as a Clos network for example, are more scalable and require fewer switch elements and interconnections, and are therefore more cost-effective.
A line card 110,120 usually includes ingress and/or egress functions and may include one or more of a transponder (TP) 112,122, a framer (FR) 114,124, a network processor (NP) 116,126, and a traffic manager (TM) 118,128. A TP 112,122 may be used, for example, to perform optical-to-electrical signal conversion and serial-to-parallel conversion at the ingress side. At the egress side, it 112,122 may be used, for example, to perform parallel-to-serial conversion and electrical-to-optical signal conversion. An FR 114,124 may be used, for example, to perform synchronization, frame overhead processing, and cell or packet delineation. An NP 116,126 may be used, for example, to perform forwarding table lookup and packet classification. Finally, a TM 118,128 may be used, for example, to store packets and perform buffer management, packet scheduling, and any other functions performed by the router architecture (e.g., distribution of cells or packets in a switching fabric with multiple planes).
Switch fabric 130 may be used to deliver packets from an input port to a single output port for unicast traffic, and to multiple output ports for multicast traffic.
When a packet arrives at CR 100, it 100 determines an outgoing line to which the packet is to be transmitted. Variable length packets may be segmented into fixed-length data units, called “cells” without loss of generality, when entering CR 100. The cells may be reassembled into packets before they leave CR 100. Packet segmentation and reassembly is usually performed by NP 116,126 and/or TM 118,128.
Traffic enters the switch 200 via an ingress traffic manager (TMI) 210 and leaves the switch 200 via an egress traffic manager (TME) 220. The TMI 210 and TME 220 can be integrated on a single chip. Therefore, the number of TM chips may be the same as the number of ports (denoted as N) in the system 200. Cells passing through the switch 200 via different paths may experience different queuing delays. However, if packets belonging to the same flow traverse the switch via the same path (i.e., the same switch plane and the same CM) until they have all left the switch fabric, there should be no packet out-of-sequence problem.
In the embodiment 200 illustrated in FIG. 2, a cell traverses four internal links on its way from the TMI 210 to the TME 220: (i) a first link from a TMI 210 to an IM 242; (ii) a second link from the IM 242 to a CM 244; (iii) a third link from the CM 244 to an OM 246; and (iv) a fourth link from the OM 246 to a TME 220.
In such a switch 200, as well as other switches, a number of issues may need to be considered. Such issues may include packet reassembly and deadlock avoidance.
Section 1.2.1 introduces the need for packet reassembly, as well as known packet reassembly techniques and their limitations.
§ 1.2.1 Packet Reassembly
When building a packet switch, it is a common practice to segment each arriving packet into multiple fixed-length cells (e.g., 64 bytes) at the input port, pass them through the switch fabric, and reassemble them back into packets with reassembly queues (RAQs) at the output port.
Cells may be classified into four categories: Beginning of Packet (BOP) cells; End of Packet (EOP) cells; Continue of Packet (COP) cells; and Single Cell Packet (SCP) cells. A BOP cell is the first cell of a packet. An EOP cell is the last cell of a packet. A COP cell is a cell between a BOP cell and an EOP cell. An SCP cell carries an entire packet whose size is equal to or smaller than the cell payload size (e.g., 52 Bytes).
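For purposes of illustration only, the following Python sketch shows how an arriving packet might be segmented into cells and each cell typed as above, assuming a 52-byte cell payload; the names CellType and segment_packet are hypothetical, not part of the described embodiments:

```python
from enum import Enum

CELL_PAYLOAD = 52  # assumed cell payload size in bytes (see above)

class CellType(Enum):
    BOP = "beginning of packet"
    COP = "continue of packet"
    EOP = "end of packet"
    SCP = "single cell packet"

def segment_packet(packet: bytes) -> list[tuple[CellType, bytes]]:
    """Segment a packet into fixed-payload cells and tag each with its type."""
    if len(packet) <= CELL_PAYLOAD:
        return [(CellType.SCP, packet)]  # whole packet fits in one cell
    chunks = [packet[i:i + CELL_PAYLOAD]
              for i in range(0, len(packet), CELL_PAYLOAD)]
    # First chunk is BOP, last is EOP, everything in between is COP.
    types = [CellType.BOP] + [CellType.COP] * (len(chunks) - 2) + [CellType.EOP]
    return list(zip(types, chunks))
```

For example, a 53-byte packet yields a BOP cell followed by an EOP cell, while a 52-byte packet yields a single SCP cell.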
When cells are routed through the switch fabric, if more than one packet is contending for the same output link, and if output port contention arbitration is performed on a per cell basis rather than on a per packet basis, the cells can be interleaved in the switch fabric. Consequently, the output port may receive many partial packets and may need to store the partial packets until the last cell of the packet (i.e., EOP cell) arrives at the output port so that the packet can be reassembled from its constituent cells.
A cell is transferred over a link (such as one of the four internal links listed in § 1.2 above) from a queue at the upstream side to a queue at the downstream side. The term source queue (SQ) is used to denote the queue at the upstream side of a link, and the term destination queue (DQ) is used to denote the queue at the downstream side of a link.
Cells waiting at SQs attached to the same output link compete with each other. In the switch fabric described above, one link can send at most one cell in each cell time slot. If more than one cell is waiting at the SQs associated with the output link, an arbiter associated with the link should choose one of them for transmission in the next time slot and all the other cells have to wait at the SQs until they win the contention (assuming there are still other cells competing for the desired outgoing link).
This section explains scheduling algorithms from the perspective of the output link. Output links of TMI, IM, CM, and OM may have the same scheduling policy. One link has multiple SQs where cells are queued to be transmitted to multiple DQs in the next stage. The challenge is to deliver cells from the SQ to the DQ so that cell sequence integrity is maintained, while also providing high throughput and fairness.
§ 1.2.1.1 Previous Approaches and Their Limitations
One previously proposed approach, referred to as the complete cell interleaving (CCI) scheduling scheme, performs output contention arbitration on a per-cell basis, so that cells of different packets may be freely interleaved on the links of the switch fabric.
If there is only one path for an input port-output port pair, the required number of RAQs is equal to the number of input ports multiplied by the number of scheduling priorities. Therefore, a virtual input queue (VIQ) can be used to reassemble the packet. This VIQ approach is adopted in many multi-plane single-stage switch fabrics, where the cells of a packet can be striped among the multiple planes.
The CCI scheduling scheme has a major drawback in that the number of reassembly queues (RAQs) can become very large in certain switch fabrics. Since cells are interleaved without any consideration of packet boundaries, when they arrive at the TME, they must be separated per packet. To ensure proper packet reassembly, the TME must have as many RAQs as the number of TMIs (i.e., N=n*k) multiplied by the number of scheduling priorities (i.e., q) and the number of possible paths between TMI and TME (i.e., p*m). For a multi-plane multi-stage switch such as the one illustrated above, this amounts to p*q*m*n*k RAQs per TME, which quickly becomes impractical.
As can be appreciated by the foregoing, although the CCI scheduling scheme has the best load-balancing among the possible paths and minimum cell transmission delays through the switch fabric (i.e., IM, CM, and OM), it may require too many queues at TME to reassemble the packet in large multi-plane, multi-stage switch fabrics.
In view of the foregoing, better packet scheduling and reassembly schemes are needed, particularly for large scale devices with multiple-stage, multiple switch plane switch fabrics. In any such scheme, deadlock situations should be avoided.
The present invention may be used to make packet reassembly practical. It may do so by performing packet interleaving, instead of cell interleaving, throughout the stages (e.g., every stage) of the switch fabric. One such technique is referred to as the dynamic packet interleaving (DPI) scheduling scheme. If more than one packet is contending for the same output link in a switch module, the arbiter in the switch module gives priority to a partial packet (i.e., to a packet that has had at least one cell sent to the queue). The number of reassembly queues required to ensure reassembly is thereby dramatically reduced to the number of paths multiplied by the number of scheduling priorities, and is independent of the switch size.
The present invention may be used to prevent a deadlock. The present invention may do so by guaranteeing (e.g., reserving) at least one cell space for every partial packet. If the destination queue for a partial packet has a non-zero outstanding cell counter, the link does not need to reserve a cell space in the downstream cell memory. However, if the outstanding cell counter for a partial packet is equal to zero, at least one cell space should be reserved to prevent a deadlock situation. That is, whenever the outstanding cell counter for the partial packet is zero, the downstream cell memory reserves at least one cell space for the partial packet. This ensures that the EOP cell can always be forwarded to its destination.
FIGS. 13a, 13b, 14a and 14b illustrate examples of HOL cell eligibility determinations when made consistent with the present invention.
The present invention may involve novel methods, apparatus, message formats, and/or data structures for simplifying packet reassembly, while maintaining fairness and throughput and avoiding deadlock. The following description is presented to enable one skilled in the art to make and use the invention, and is provided in the context of particular applications and their requirements. Thus, the following description of embodiments consistent with the present invention provides illustration and description, but is not intended to be exhaustive or to limit the present invention to the precise form disclosed. Various modifications to the disclosed embodiments will be apparent to those skilled in the art, and the general principles set forth below may be applied to other embodiments and applications. For example, although a series of acts may be described with reference to a flow diagram, the order of acts may differ in other implementations when the performance of one act is not dependent on the completion of another act. Further, non-dependent acts may be performed in parallel. No element, act or instruction used in the description should be construed as critical or essential to the present invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Thus, the present invention is not intended to be limited to the embodiments shown and the inventors regard their invention as any patentable subject matter described.
The following list includes letter symbols that may be used in this application:
N: number of ports in the system (N=n*k)
n: number of ports per input (output) module
k: number of input (output) modules per switch plane
m: number of center modules per switch plane
p: number of switch planes
q: number of scheduling priorities
The following list includes acronyms that may be used in this application:
BOC: buffer outstanding cell counter
BOP: beginning of packet
BRC: buffer reserved cell counter
CCI: complete cell interleaving
CM: center module
COP: continue of packet
CPI: complete packet interleaving
CR: core router
DPI: dynamic packet interleaving
DQ: destination queue
DQF: destination queue flag
EOP: end of packet
FR: framer
HOL: head of line
IM: input module
MRC: maximum reserved cells
NP: network processor
OM: output module
OXC: optical cross-connect
PPI: partial packet interleaving
QOC: queue outstanding cell counter
QRC: queue reserved cell counter
RAQ: reassembly queue
RR: round robin
SCP: single cell packet
SM: switch module
SQ: source queue
TM: traffic manager
TME: egress traffic manager
TMI: ingress traffic manager
TP: transponder
VIQ: virtual input queue
§ 4.1 Packet Reassembly Using Packet Interleaving Such as Dynamic Packet Interleaving
To simplify reassembly in the multi-plane, multi-stage switch fabric, some form of packet interleaving (e.g., CPI, PPI or DPI) may be performed instead of cell interleaving (e.g., CCI) throughout stages (e.g., every stage) of the switch fabric. That is, if more than one packet is contending for the same output link in the switch module, the arbiter gives priority to a cell of a packet for which at least one cell has already been sent to the output link. As a result, the number of reassembly queues is dramatically reduced to the number of paths multiplied by the number of scheduling priorities, and is independent of the switch size. These schemes are referred to as the packet interleaving scheduling schemes. Three different packet interleaving scheduling schemes are now described.
One way to prevent the out-of-sequence problem is to schedule "complete packets" in a round-robin manner. In this scheme, referred to as complete packet interleaving (CPI), once a packet wins arbitration for an output link, the link is held by that packet until its EOP cell has been transmitted.
Another scheduling scheme, referred to as partial packet interleaving (PPI), is the same as the CPI scheme except that the arbiter may update its pointer when the SQ is empty or the DQ is full. The PPI scheme maintains a DQ flag for each DQ so that no more than one packet is sent to the DQ when the DQ holds a partial packet.
Dynamic packet interleaving (DPI) scheduling schemes may be used at one or more of the output links of the TMI, IM, CM, and OM.
In the DPI scheduling scheme, the arbiter chooses cells in round robin fashion among the non-empty SQs while maintaining DQ flags. A DQ flag for each DQ is used to ensure that no more than one packet can be sent to the same DQ while the DQ holds a partial packet. However, cells destined for different DQs whose flags are not set may be interleaved with each other.
Table 1 shows the number of reassembly queues at the TME required to ensure reassembly for the four scheduling schemes described in this application.

TABLE 1

Scheme | Required RAQs at TME | Example (p=8, q=2, n=m=k=64)
---|---|---
CCI | p*q*m*n*k | 4,194,304
CPI | p*q | 16
PPI | p*m*q | 1,024
DPI | p*m*q | 1,024

The required number of reassembly queues in the CCI scheme is p*q*m*n*k. In the CPI scheme, only one reassembly queue per plane per priority is necessary. For example, if p=8, q=2, and n=m=k=64, the CCI scheme requires about 4 million queues while the CPI scheme requires only 16. The PPI and DPI schemes require a reasonable number of reassembly queues (one per path per priority), though more than CPI. However, their throughputs are better than that of CPI.
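By way of example, the arithmetic behind Table 1 may be checked directly; the following Python sketch uses the example parameters above:

```python
# Required reassembly queues (RAQs) at one TME for each scheduling scheme,
# using the example parameters from the text.
p, q, m, n, k = 8, 2, 64, 64, 64   # planes, priorities, CMs, ports/module, modules

raq_cci = p * q * m * n * k        # per TMI (n*k), per priority (q), per path (p*m)
raq_cpi = p * q                    # one per plane-priority combination
raq_ppi = raq_dpi = p * m * q      # one per path (p*m) per priority

print(raq_cci)  # 4194304 (about 4 million)
print(raq_cpi)  # 16
print(raq_dpi)  # 1024
```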
§ 4.1.1 Implementation of Packet Interleaving Schemes Such as CPI, PPI and DPI
In the following examples, the counter counts the number of outstanding cells for each destination queue (DQ). The round robin (RR) pointer is an index indicating the starting source queue (SQ).
Referring back to block 710, if the link is not reserved (no partial packet was sent and stored at the DQ), an SQ is selected in a round-robin manner beginning from the SQ indicated by the RR pointer. (Blocks 740, 745, 750, 755, 760) More specifically, it is determined whether the SQ indicated by the updated RR pointer is empty. (Block 745) If the queue is not empty, and if the HOL cell is eligible (Block 750), the queue reserves the link (Block 755) and the method 700 continues at Block 725. As was the case above, if the cell type of the cell just transmitted is BOP or COP, the RR pointer is not changed. (Blocks 725, 730) If, on the other hand, the cell just transmitted is an EOP or SCP, the RR pointer is moved to the next SQ. (Blocks 725, 735)
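For purposes of illustration only, the per-link arbitration just described (link reservation, round-robin selection, and RR pointer updates) may be rendered as the following rough Python sketch. The function names, the data layout, and the placeholder eligibility test are assumptions made for illustration, not a definitive implementation of method 700:

```python
def hol_cell_eligible(cell_type, sq_index, state):
    """Placeholder for the eligibility test of blocks 720/750; see § 4.2.2.1."""
    return True

def arbitrate_one_slot(sqs, state):
    """Pick at most one SQ to send a cell over the link in this time slot.

    sqs   : list of source queues; each SQ is a list of (cell_type, cell) pairs
    state : dict holding 'rr' (the RR pointer) and 'reserved_sq' (index of the
            SQ whose partial packet currently holds the link, or None)
    """
    n = len(sqs)
    if state["reserved_sq"] is not None:
        candidates = [state["reserved_sq"]]        # link reserved: only that SQ may send
    else:
        # Blocks 740-760: scan in round-robin order starting at the RR pointer.
        candidates = [(state["rr"] + i) % n for i in range(n)]

    for i in candidates:
        if not sqs[i]:                             # Block 745: SQ is empty
            continue
        cell_type, cell = sqs[i][0]
        if not hol_cell_eligible(cell_type, i, state):  # Block 750
            continue
        sqs[i].pop(0)                              # Block 755: transmit the HOL cell
        if cell_type in ("BOP", "COP"):            # Blocks 725, 730: keep RR pointer;
            state["reserved_sq"] = i               # the packet keeps the link
        else:                                      # Blocks 725, 735: EOP or SCP
            state["reserved_sq"] = None
            state["rr"] = (i + 1) % n              # advance RR pointer to next SQ
        return cell
    return None                                    # no eligible HOL cell this slot
```

An output link of a TMI, IM, CM, or OM would invoke such a routine once per cell time slot.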
In the exemplary CPI scheme in which an instance of the method 700 is performed for each DQ, since cells coming from different input ports are not interleaved at all, cells belonging to the same packet arrive at the TME back-to-back, without any intervening cells from another packet, until the EOP cell is sent. Therefore, each TME needs only p*q reassembly queues, one per plane-priority combination. The CPI scheme is attractive from the perspective of reassembly but suffers performance degradation when there are large packets in the switch fabric. For example, if a jumbo packet (e.g., 9 KB) is in the switch fabric, all other packets sharing a link with the jumbo packet must wait until the jumbo packet finishes its transmission. The blocked HOL packets waiting for the link reserved by the jumbo packet may in turn block packets behind them, even if such blocked packets are destined for an idle link. Thus, the CPI scheme can degrade the throughput of the switch fabric. To summarize, the CCI scheme is attractive from the perspective of load-balancing and the CPI scheme is attractive from the perspective of reassembly.
The PPI scheduling scheme has the advantage of good load balancing like the CCI scheme and the advantage of reduced reassembly queues like the CPI scheme.
The PPI scheme performs well under non-uniform traffic. However, as with CPI, its throughput degrades as the packet size becomes large. This is because the fabric is a blocking network when n=m=k. If m=2*n=2*k, the throughput is improved, but the cost is also increased because the number of center modules is doubled.
The DPI scheduling scheme is similar to the PPI scheme except for RR pointer updates.
§ 4.2 Deadlock Avoidance by Memory Reservation for a Partial Packet
The problem of deadlock, which may occur in packet interleaving schemes such as CPI or DPI, is introduced in § 4.2.1 below. Then, ways of avoiding deadlock, consistent with the present invention, are described in § 4.2.2 below.
§ 4.2.1 Deadlock
Packet interleaving in a multi-stage switch fabric can cause a deadlock problem. The following example illustrates the problem. Assume that the BOP cell of a packet has been sent to a TME. However, the packet's EOP cell may remain at the TMI waiting to win arbitration, since the TMI sends cells in round robin fashion among the cells destined for different CMs. When the EOP cell of the packet eventually wins arbitration, the buffer at the IM (i.e., the DQ in this case) might be full (e.g., with other cells coming from different SQs). In such a case, the transmission of the EOP cell will be blocked to prevent buffer overflow. (Assuming that each of the IM, CM, and OM has a finite buffer size, a buffer overflow may happen. Buffer overflow can be avoided by implementing a credit-based flow control scheme across all links.) Moreover, the buffers at the IM can be full of fresh packet cells (if a buffer contains a BOP cell, it is considered to have a fresh packet) destined for the same TME as the one storing the BOP cell of the packet whose EOP cell is stuck at the TMI waiting for the IM buffer to make room for it. Such fresh packet cells at the IM cannot be sent because of the EOP cell at the TMI. In the worst case, all partial packets at the TMI can be blocked due to the full buffers (DQs) at the IM, while the fresh packet cells at the IM are blocked because of the partial packets. In such a scenario, no cells in the switch fabric can be sent. This is called a deadlock.
§ 4.2.2 Avoiding Deadlock
An eligibility test may be used to (i) avoid the interleaving of cells of different packets at a DQ, and (ii) avoid deadlock.
The problem of deadlock, introduced in § 4.2.1 above, can be avoided if the buffer in which the DQs are defined reserves free memory space for a partial packet. When the memory space is reserved for the partial packet, the SQ may be empty. The reserved memory space can be filled when the SQ receives a cell from its upstream source. To avoid a deadlock situation, at least one cell space should be reserved in the receiver's buffer for every partial packet. Note that a receiver's buffer may include multiple DQs. For example, one buffer may consist of 64 DQs. In this way, a cell from any partial packet can be forwarded to the next stage and a deadlock situation is avoided. Since there are multiple (e.g., 4) links from TMI to TME, there are multiple (e.g., 4) sets of SQs and DQs. On each link, a cell can be forwarded if the memory space is reserved for the partial packet. For this purpose, a Queue Reserved Cell (QRC) counter may be used. In embodiments that use a QRC, the QRC is set to the Maximum Reserved Cells (MRC) value as soon as the first cell of a packet is sent from its associated SQ. MRC is a constant that guarantees memory space in the DQ buffer for each partial packet; the QRC should always be equal to or less than the MRC. For example, if MRC=8, the QRC is set to 8 as soon as the BOP cell is granted and the QRC is set to 0 as soon as the EOP cell is granted. The QRC is decremented by one whenever the SQ sends a (COP) cell to the DQ. All counters, including the QRC, may be maintained at the upstream SM. Although the QRC is never incremented, it can jump from any value (e.g., 0, 1, 2, . . . 7) to the MRC (e.g., 8) when a new packet is granted.
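For purposes of illustration only, the counter updates just described may be sketched as follows in Python; the class and method names are hypothetical, and the sketch follows the literal update rules given above (set to MRC on a BOP grant, decrement per COP, zero on EOP or SCP):

```python
MRC = 8  # Maximum Reserved Cells per partial packet (example value from the text)

class QrcCounter:
    """Sketch of the QRC for one DQ, maintained at the upstream SM."""

    def __init__(self):
        self.qrc = 0

    def on_cell_granted(self, cell_type: str) -> None:
        """Update the QRC when a cell from the associated SQ wins arbitration."""
        if cell_type == "BOP":
            self.qrc = MRC                   # jumps to MRC when a packet starts
        elif cell_type == "COP":
            self.qrc = max(self.qrc - 1, 0)  # one reserved space consumed per cell
        elif cell_type in ("EOP", "SCP"):
            self.qrc = 0                     # packet done; release the reservation

    def cap_by_qoc(self, qoc: int) -> None:
        """If the DQ already has more than MRC outstanding cells, no reservation
        is needed; this prevents reserving memory beyond the MRC."""
        if qoc > MRC:
            self.qrc = 0
```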
The notion of a queue outstanding cell counter (QOC) is introduced in the '733 provisional and in U.S. patent application Ser. No. 10/776,575 (incorporated herein by reference), titled "SWITCH MODULE MEMORY STRUCTURE AND PER-DESTINATION QUEUE FLOW CONTROL FOR USE IN A SWITCH", filed on Feb. 11, 2004 and listing Hung-Hsiang Jonathan Chao and Jinsoo Park as inventors. Briefly stated, the QOC may be used to represent the sum of the cells left in the DQ and any cells on the link that are destined for the DQ. If the QOC of the DQ is larger than the MRC, the QRC may be set to 0; this prevents the reserved memory space for the DQ from exceeding the MRC. The QRC counts the number of cells allowed to be sent from the upstream SM to the DQ even when the downstream buffer has no space for any new fresh packet. Even after the QRC becomes 0, the SQ can send more cells to the DQ if the buffer to which the DQ belongs has free space. If the QOC is less than the MRC and the DQF is equal to 1 (i.e., if the DQ is taken), the sum of the QOC and QRC should always be equal to the MRC. The DQF bit indicates whether the DQ is taken. If the DQ is not taken (i.e., if DQF=0), there is no partial packet destined for the DQ and any BOP cell or SCP cell can be sent to the DQ. If, on the other hand, the DQ is taken (i.e., if DQF=1), there is an SQ that has a partial packet destined for that DQ and only that SQ can send a cell to the DQ. If the DQF is set to 1, a BOP cell or SCP cell should not be sent; otherwise, more than one packet would be interleaved and packet integrity would not be maintained in the DQ. To prevent this, no more than one packet is allowed to be interleaved for each DQ.
The Buffer Reserved Cell counter (BRC) is the sum of the QRCs of all DQs in the buffer. By accounting for both the buffer outstanding cell counter (BOC) and the BRC, memory space is reserved for partial packets. Consequently, when a cell of a partial packet arrives at the SQ, it is eligible for transmission if its QRC is greater than 0, even if the sum of the BOC and BRC is equal to the buffer size.
§ 4.2.2.1 Eligibility of the HOL Cell
Recall from blocks 720 and 750 of the method 700 described above that an HOL cell is checked for eligibility before it may be transmitted. Consistent with the counters and flags described above, an HOL cell is ineligible if its DQ is full. A COP or EOP cell is eligible only if the DQF of its DQ is set to 1 (i.e., the DQ holds the partial packet to which the cell belongs) and either its QRC is greater than 0 or the downstream buffer has free space. A BOP or SCP cell is eligible only if the DQF is 0 and the downstream buffer has free, unreserved space (including, for a BOP cell, enough room to reserve MRC cell spaces). Otherwise, the HOL cell is not eligible. (Block 1255)
If q=2, two buffers are associated with each link. That is, high priority cells can share one buffer and low priority cells can share another buffer. In at least some embodiments, cells with different priorities do not share the same memory space. The QOC can have a value between 0 and the SM queue size (Q_sm) (e.g., 15 cells). The BOC can have a value between 0 and the SM buffer size (B_sm) (e.g., 32 cells). The QRC can have a value between 0 and the MRC (e.g., 8 cells). For a DQ with a partial packet, only one SQ is eligible because the HOL cell of the SQ that sent a cell to the DQ should have a cell type of COP or EOP and the DQF should be set to 1. If the DQF is equal to 0, any BOP or SCP cell destined for the DQ is eligible. The sum of the BOC and BRC should be less than the buffer size (i.e., 32).
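For purposes of illustration only, the eligibility conditions discussed in this section may be collected into a single test. The following Python sketch assumes the exemplary parameters above (buffer size 32, DQ size 15, MRC=8); the function name and argument layout are assumptions made for illustration:

```python
def hol_cell_eligible(cell_type, dqf, qrc, qoc, boc, brc,
                      b_sm=32, q_sm=15, mrc=8):
    """Eligibility test for the HOL cell of an SQ destined for a given DQ.

    dqf : 1 if the DQ already holds a partial packet, else 0
    qrc : reserved cell spaces for this SQ/DQ pair
    qoc : cells outstanding for the DQ (queued plus in flight)
    boc : cells outstanding for the whole downstream buffer
    brc : total reserved cell spaces in the buffer (sum of the QRCs)
    """
    if qoc >= q_sm:
        return False                       # the DQ itself is full
    if cell_type in ("COP", "EOP"):
        # Only the SQ that owns the partial packet may continue it, and it
        # may use its reservation even if the shared buffer is otherwise full.
        return dqf == 1 and (qrc > 0 or boc + brc < b_sm)
    if cell_type == "SCP":
        return dqf == 0 and boc + brc < b_sm
    if cell_type == "BOP":
        # Starting a new packet must leave room to reserve MRC cell spaces.
        return dqf == 0 and boc + brc + mrc <= b_sm
    return False
```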
§ 4.2.2.1.1 Examples Illustrating HOL Cell Eligibility Determinations
FIGS. 13a, 13b, 14a and 14b illustrate examples of HOL cell eligibility determinations when made consistent with the present invention, in an exemplary embodiment in which the buffer size is 32 cells, the DQ size is 15 cells, and the MRC=8. The tables in these figures show the state of each DQ (e.g., its DQF, QOC, and QRC values).
FIGS. 13a and 13b show an example of the flow control mechanism. The buffer size is 32 cells and the DQ size is 15 cells. The reservation parameter (MRC) is 8 cells. The sum of the BOC (i.e., 19 cells) and the BRC (i.e., 12 cells) is less than the buffer size (i.e., 32 cells). If the HOL cell is destined for DQ(0), it is eligible only if its cell type is SCP. If the cell type is COP or EOP, the DQF should be equal to 1. If the cell type is BOP, the sum of the BOC and BRC should be equal to or smaller than 24. If the HOL cell is destined for DQ(1), DQ(2), or DQ(3), it is eligible if its cell type is COP or EOP because the DQF is equal to 1.
FIGS. 14a and 14b show another example of the flow control mechanism. If the HOL cell of the SQ is destined for DQ(0) and its cell type is BOP, it is eligible for transmission because there is enough space in the DQ. However, if it is destined for DQ(1), it cannot be transmitted because that DQ is full.
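By way of example, plugging the values from FIGS. 13a and 13b into the hol_cell_eligible sketch of § 4.2.2.1 (assumed to be in scope) reproduces these determinations; the specific QOC values below are illustrative assumptions, since only the buffer-level totals are given above:

```python
# State from FIGS. 13a/13b: buffer size 32, DQ size 15, MRC 8, BOC 19, BRC 12.
# Assumes hol_cell_eligible from the sketch in § 4.2.2.1 above.
boc, brc = 19, 12

# DQ(0) holds no partial packet (DQF=0): only an SCP cell is eligible.
assert hol_cell_eligible("SCP", dqf=0, qrc=0, qoc=0, boc=boc, brc=brc)
assert not hol_cell_eligible("BOP", dqf=0, qrc=0, qoc=0, boc=boc, brc=brc)  # 19+12 > 24

# DQ(1)-DQ(3) hold partial packets (DQF=1): COP or EOP cells are eligible.
assert hol_cell_eligible("COP", dqf=1, qrc=1, qoc=5, boc=boc, brc=brc)

# FIG. 14 cases: a BOP cell for a DQ with room, and one for a full DQ.
assert hol_cell_eligible("BOP", dqf=0, qrc=0, qoc=3, boc=10, brc=0)
assert not hol_cell_eligible("BOP", dqf=0, qrc=0, qoc=15, boc=10, brc=0)  # DQ full
```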
§ 4.3 Performance of DPI
The sizes of packets used in the Internet vary widely. (See, e.g., the Cooperative Association for Internet Data Analysis at www.CAIDA.org.) One of the most popular ways to model variable packet sizes in simulation is to draw them from a geometric distribution. In the simulations reported here, the average packet size is assumed to be 10 cells, with a maximum packet size of 192 cells. If the average packet size is smaller, the performance improves. In the Internet, the average packet size is about 280 bytes (i.e., 5 cells) and the maximum packet size is 9000 bytes (i.e., 161 cells).
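For purposes of illustration only, such a truncated geometric packet-size draw might be sketched as follows; the constants and function name are simulation assumptions, not part of the described embodiments:

```python
import random

AVG_CELLS = 10   # assumed average packet size in cells
MAX_CELLS = 192  # assumed maximum packet size in cells

def packet_size_cells(rng: random.Random) -> int:
    """Draw a packet size (in cells) from a truncated geometric distribution."""
    # A geometric distribution with success probability 1/AVG_CELLS has mean
    # AVG_CELLS; sizes beyond MAX_CELLS are clipped to the maximum.
    size = 1
    while rng.random() > 1.0 / AVG_CELLS and size < MAX_CELLS:
        size += 1
    return size

rng = random.Random(42)
samples = [packet_size_cells(rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 10 (slightly less due to truncation)
```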
This application claims benefit to U.S. Provisional Application Ser. No. 60/479,733, titled “A HIGHLY SCALABLE MULTI-PLANE MULTI-STAGE BUFFERED PACKET SWITCH,” filed on Jun. 19, 2003, and listing H. Jonathan Chao and Jinsoo Park as inventors (referred to as “the '733 provisional”). That application is incorporated herein by reference. The scope of the present invention is not limited to any requirements of the specific embodiments described in that application.
This invention was made with Government support under grant number ANI-9906673 awarded by the National Science Foundation. The Government may have certain rights in the invention.