The present invention relates generally to communications networks and, more particularly, to systems and methods for forwarding data in communications networks.
Communications networks have existed for decades. To route datagrams, such as packets, cells, etc., through such networks, each routing node in the networks may rely on forwarding tables. Forwarding tables provide routing nodes with instructions for forwarding a received datagram “one hop” further towards its destination. Specifically, a routing node can inspect one or more fields in a datagram, look up a corresponding entry in a forwarding table, and then put the datagram on the indicated queue for outbound transmission across one of a group of outbound links associated with the routing node.
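By way of illustration only, the one-hop lookup described above may be sketched in Python as follows. The table contents, queue names, and the use of a longest-prefix match are assumptions for the sketch, not part of any particular network described herein:

    import ipaddress

    # Hypothetical forwarding table: destination prefixes mapped to outbound queues.
    FORWARDING_TABLE = {
        "10.1.0.0/16": "queue_link_1",
        "10.2.0.0/16": "queue_link_2",
    }

    def next_hop_queue(destination: str, table: dict) -> str:
        """Return the outbound queue for a datagram via longest-prefix match."""
        addr = ipaddress.ip_address(destination)
        best = None
        for prefix, queue in table.items():
            net = ipaddress.ip_network(prefix)
            if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, queue)
        if best is None:
            raise KeyError("no forwarding entry for destination")
        return best[1]

    # Example: next_hop_queue("10.2.7.9", FORWARDING_TABLE) returns "queue_link_2"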
Many variants exist on how to build and use forwarding tables. Many existing routing nodes, for instance, create multiple forwarding tables. Then, when a datagram arrives, the routing node selects which forwarding table should be used for this particular datagram. The choice of forwarding table typically depends on the protocol used by the datagram, the datagram's relative priority, or the like.
When failures occur in the network (e.g., a link fails), the routing nodes may often need to be supplied with (or need to generate) new forwarding tables to allow the nodes to route datagrams around the failures, which can cause delays in the network. Therefore, there exists a need to improve data routing in communications networks.
Systems and methods consistent with the principles of the invention provide improved techniques for routing data in a communications network.
In accordance with an exemplary implementation consistent with the principles of the invention, a communications network includes at least one control station and a group of network nodes. The at least one control station generates batches of forwarding tables, where each batch of forwarding tables includes a primary forwarding table and a group of backup forwarding tables, and forwards the batches of forwarding tables. Each of the network nodes is associated with one or more outbound and inbound links and is configured to receive a batch of forwarding tables from the at least one control station and install the primary forwarding table from the batch as a current forwarding table. Each network node is further configured to detect that a quality of one of an outbound and inbound link has changed, generate a message instructing other nodes of the group of network nodes to switch to a backup forwarding table in response to detecting the quality change, and transmit the message to the other nodes.
In another implementation consistent with the principles of the invention, a control station is provided for a communications network that includes a group of nodes. The control station includes a processor and a memory configured to store topology information for the communications network. The processor is configured to generate a batch of forwarding tables for each of the group of nodes based on the topology information, where each batch of forwarding tables includes a primary forwarding table and a group of backup forwarding tables, and cause each batch of forwarding tables to be transmitted to the corresponding node of the group of nodes.
In yet another implementation consistent with the principles of the invention, a method for routing data in a communications network is provided. The method includes receiving a group of forwarding tables, including a primary forwarding table and a group of backup forwarding tables, from a remote device; using the primary forwarding table as a current forwarding table for routing data in the communications network; and storing the group of backup forwarding tables. The group of backup forwarding tables enables continued routing of data in the communications network when at least one event occurs in the communications network.
In still another implementation consistent with the principles of the invention, a node, associated with at least one outbound link and at least one inbound link, that transmits data in the communications network is provided. The node includes a processor and a memory configured to store a primary forwarding table and a group of backup forwarding tables. The processor detects a change in quality in one of the at least one outbound link and the at least one inbound link, generates a message that identifies the detected one outbound link or inbound link, and causes the message to be transmitted to one or more other nodes in the communications network. The message instructs the one or more other nodes to switch to a backup forwarding table associated with the identified one outbound link or inbound link.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention.
The following detailed description of implementations consistent with the present invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
Implementations consistent with the principles of the invention ensure continued operation of a communications network when one or more events (e.g., link failure or degradation) occur in the communications network. To ensure the continued operation of the network, each node in the network stores a group of backup forwarding tables. When an event occurs, the nodes may switch to one of the backup forwarding tables and continue to route data based on the backup forwarding table.
As shown in FIG. 1, an exemplary network 100 may include backbone satellites 110, user satellites 120, earth-based entities 130, ground stations 140, and control stations 160.
Backbone satellites 110 communicate with each other over inter-satellite links, labeled as links 111 in FIG. 1.
Inter-satellite links 111 may be high capacity links. For example, when implemented using RF technology, they may run at 100s of megabits/second. When implemented with optical transmitters and receivers, they may run at 10s of gigabits/second.
User satellites 120 may communicate with backbone satellites 110 through access links 112 (shown in FIG. 1).
Network 100, in addition to including backbone satellites 110 and user satellites 120, may also include earth-based entities 130.
Backbone satellites 110 may connect to one or more ground stations 140-1 through 140-3 (collectively ground stations 140) via up/down links 113. Up/down links 113 may include high capacity links designed for communication between a satellite and a ground terminal. Ground stations 140 may include fairly large and established ground terminals that have high capacity links designed for communications with satellites. Ground stations 140 may include, for example, large dish antennas that communicate through an RF connection with backbone satellites 110. The RF connection may run at, for example, 1 gigabit/second.
Ground stations 140 may connect to one another through standard terrestrial links, such as fiber optic links 114. One of ordinary skill in the art will appreciate that other types of terrestrial links, such as coaxial cable and free-space optical connections, are also possible.
Ground stations 140 may also act as network gateways to other private or public networks, such as network 150. In the case of a public network, network 150 may be the Internet. In the case of a private network, network 150 may be, for example, a proprietary military or corporate network. In some cases, network 150 may include a private portion and a public portion. In general, networks 100 and 150 allow any entity that can connect to network 150 the ability to communicate through the satellite portion of network 100.
Control stations 160-1 and 160-2 (collectively control stations 160) store network topology information (and other information) for controlling the forwarding of information throughout network 100. While two standalone control stations 160 are illustrated in FIG. 1, more or fewer control stations may be used in other implementations consistent with the principles of the invention.
Network 100 may transmit data using a packet-based transmission protocol, such as the well-known Internet Protocol (IP). Under the IP protocol, each device in network 100 is associated with an IP address that uniquely identifies it among all other devices. Data sent under the IP protocol is broken into data units called packets, each of which includes the IP address that identifies the destination for the packet. A packet “hops” from one device to another in network 100 until it is received by its destination device.
As illustrated in FIG. 2, backbone satellite 110 may include a redundant implementation to facilitate fault tolerance. The A side architecture may include a read-only-memory (ROM) 201, a processor (CPU) 202, random access memory (RAM) 203, and forwarding engine (FWD) 204. A cross-bar bus (X-BAR) 205 may connect RAM 203 and forwarding engine 204 to input/output components 221-224.
The B side architecture may be implemented in an essentially identical manner to the A side architecture and acts as a backup in case of a failure in the A side architecture. In particular, the B side architecture may include ROM 211, a CPU 212, RAM 213, and forwarding engine 214, which utilizes cross-bar 215.
ROM 201 and 211 may each contain all necessary read-only storage for backbone satellite 110. ROM 201 and 211 may, for example, store programming instructions for operation of the backbone satellite, geo-locations of some or all ground stations, system identifiers, configuration parameters, etc. Although shown as single monolithic ROM devices, ROM 201 and 211 may be implemented as a mix of different types of non-volatile memory, and may even include a certain amount of reprogrammable memory as well. For instance, ROM 201 or 211 may be implemented as ROM, EEPROM, flash memory, etc.
CPUs 202 and 212 may be embedded processors that execute computer instructions. CPUs 202 and 212 may generally manage the control and routing functions for backbone satellite 110.
Forwarding engines 204 and 214 may each include high-speed data forwarding paths that obtain header information from packets received by backbone satellite 110, and based on the header information, may retransmit the packets on a link that leads towards the final destination of the packet. To increase processing speed, forwarding engines 204 and 214 may be implemented as FPGAs (field programmable gate arrays), ASICs (application specific integrated circuits), or other high-speed circuitry. In general, forwarding engines 204 and 214 implement many of the core routing functions of backbone satellite 110, and thus, in conjunction with their CPUs and RAMs, function as routers in the satellite. The design and implementation of routers and routing techniques is generally known in the art and will thus not be described further herein.
Forwarding engines 204 and 214 may include one or more forwarding tables that store information relating to packet forwarding. The forwarding tables may alternatively be stored in RAM 203 and 213. In one implementation, forwarding engines 204 and 214 store a “batch” of forwarding tables. This batch contains a whole set of fallback (or backup) forwarding tables for any unexpected event or sequence of events that can occur in network 100 (e.g., a link failing). Forwarding engines 204 and 214 may store forwarding tables based on the protocol used by backbone satellite 110. For example, forwarding engines 204 and 214 may use forwarding tables for IP, asynchronous transfer mode (ATM), Multi-Protocol Label Switching (MPLS), fast packet switching, Ethernet, and the like for routing data in network 100.
As illustrated, batch 300 may include a primary forwarding table 305 and a group of backup forwarding tables 310. Forwarding engine 204/214 may rely on primary forwarding table 305 when no problems exist in network 100 (i.e., no unexpected events have occurred). Forwarding engines 204 and 214 may also store one or more backup forwarding tables 310, which may be used by forwarding engine 204/214 when one or more unexpected events occur in network 100. In one implementation consistent with the principles of the invention, the number of backup forwarding tables 310 for a given node may be ns × nl × nq, where ns represents the total number of nodes in network 100, nl represents the total number of outbound links (e.g., transmitters) associated with a given node, and nq represents the total number of different non-normative “qualities” that a given outbound link can take.
As an example, assume that network 100 includes 10 satellites 110/120 and 2 ground stations 140. In this situation, ns would equal 12. Moreover, assume, for example, that backbone satellite 110-2 (FIG. 1) is associated with 5 outbound links and that each outbound link can take 2 different non-normative qualities (e.g., a degraded mode and a failed mode). In this situation, each node in network 100 would store
1 + (12 × 5 × 2) = 121
different forwarding tables—one primary forwarding table 305 and 120 backup forwarding tables 310.
Batch 300 may thus contain backup forwarding tables 310 for each possible single unexpected event that may occur in network 100. If all links are operating normally, then forwarding engine 204/214 may use primary forwarding table 305. If, for example, link 2 (L2) on a satellite S1, which could correspond to backbone satellite 110-1, unexpectedly goes into a degraded or failed mode, designated as Q2, then forwarding engine 204/214 may use the backup forwarding table #S1-L2-Q2 instead of primary forwarding table 305.
As indicated above, each node in network 100 would contain its own unique batch 300 of forwarding tables. Thus, in a network containing 12 nodes, as per the example above, there would be 12 distinct batches 300 of forwarding tables, one for each node in the network.
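For illustration, a batch and its event-indexed backup tables may be sketched in Python as follows. The dictionary layout and the (node, link, quality) key are assumptions for the sketch; the table-naming scheme mirrors the #S1-L2-Q2 example above:

    from dataclasses import dataclass, field

    @dataclass
    class Batch:
        """Batch 300: one primary forwarding table plus event-indexed backups."""
        primary: dict                                # primary forwarding table 305
        backups: dict = field(default_factory=dict)  # backup tables 310, keyed by
                                                     # (node, link, quality) tuples

        def table_for(self, event=None) -> dict:
            """Return the primary table, or the backup for a single-link event."""
            if event is None:
                return self.primary          # all links operating normally
            return self.backups[event]       # e.g., event = ("S1", "L2", "Q2")

    # With ns = 12 nodes, nl = 5 outbound links per node, and nq = 2 qualities,
    # a batch holds 1 + 12 * 5 * 2 == 121 tables, matching the example above.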
While the above example described the situation in which a node in network 100 includes backup forwarding tables 310 to ensure continued traffic routing when any single event occurs in network 100, implementations consistent with the present invention are not so limited. For example, in another implementation, each node in network 100 may include backup forwarding tables 310 for handling only a subset of all events that could occur in network 100 (e.g., the most probable single event failures). Alternatively, batch 300 could include enough backup forwarding tables 310 to handle any two-event scenario. In such situations, each node may include enough backup forwarding tables 310 to ensure continued operation when any two links in network 100 experience unexpected events (e.g., degraded modes, failure, etc.). In yet other implementations, each node in network 100 may contain enough backup forwarding tables 310 to handle any possible combination of unexpected events.
Returning to FIG. 2, RAM 203 and 213 include volatile memory in which data packets and/or other data structures may be stored and manipulated. I/O devices 221-224 may access RAM 203 and 213. RAM 203 and 213 may store queues of packets that can be read and transmitted by I/O devices 221-224.
I/O devices 221-224 contain the hardware interfaces, transceivers, and antennas (or telescopes) that implement links 111-113. ACC I/O device 221 handles access links 112. ISL I/O devices 222 and 223 handle inter-satellite links 111. UPD I/O device 224 handles up/down links 113.
Although backbone satellite 110 is shown as having four I/O devices 221-224, one of ordinary skill in the art will recognize that backbone satellite 110 could have more or fewer I/O devices. Further, multiple I/O devices, such as ISL I/O devices 222 and 223, may be operated in unison to form a single high capacity link.
As illustrated in FIG. 4, ground station 140 may include a redundant implementation to facilitate fault tolerance.
The A side architecture may include ROM 401, a processor (CPU) 402, RAM 403, and forwarding engine (FWD) 404. A cross-bar bus (X-BAR) 405 may connect RAM 403 and forwarding engine 404 to input/output components 420-422.
The B side architecture may be implemented in an essentially identical manner to the A side architecture and acts as a backup in case of a failure in the A side architecture. In particular, the B side architecture may include ROM 411, a CPU 412, RAM 413, and forwarding engine 414, which utilizes cross-bar 415.
ROM 401 and 411 may each contain all necessary read-only storage for ground station 140. ROM 401 and 411 may, for example, store programming instructions for operation of the ground station, geo-locations of some or all backbone satellites 110, system identifiers, configuration parameters, etc. Although shown as single monolithic ROM devices, ROM 401 and 411 may be implemented as a mix of different types of non-volatile memory, and may even include a certain amount of reprogrammable memory as well. For instance, ROM 401 or 411 may be implemented as ROM, EEPROM, flash memory, etc.
CPUs 402 and 412 may be embedded processors that execute computer instructions. CPUs 402 and 412 may generally manage the control and routing functions for ground station 140.
Forwarding engines 404 and 414 may each include high-speed data forwarding paths that obtain header information from packets received by ground station 140, and based on the header information, may retransmit the packets on a link that leads towards the final destination of the packet. To increase processing speed, forwarding engines 404 and 414 may be implemented as FPGAs, ASICs, or other high-speed circuitry. In general, forwarding engines 404 and 414 implement many of the core routing functions of ground station 140, and thus, in conjunction with their CPUs and RAMs, function as routers in the ground station. The design and implementation of routers and routing techniques is generally known in the art and will thus not be described further herein.
Forwarding engines 404 and 414 may include one or more forwarding tables that store information relating to packet forwarding. The forwarding tables may alternatively be stored in RAM 403 and 413. In one implementation, forwarding engines 404 and 414 store a unique batch 300 of forwarding tables 305 and 310 in a manner similar to that described above with respect to FIG. 3.
RAM 403 and 413 include volatile memory in which data packets and/or other data structures may be stored and manipulated. I/O devices 420-422 may access RAM 403 and 413. RAM 403 and 413 may store queues of packets that can be read and transmitted by I/O devices 420-422.
I/O devices 420-422 contain the hardware interfaces, transceivers, and antennas (or telescopes) that implement links 113 and 114. Inter-ground station links (IGSL) I/O devices 420 and 421 handle links 114. UPD I/O device 422 handles up/down links 113.
Although ground station 140 is shown as having three I/O devices 420-422, one of ordinary skill in the art will recognize that ground station 140 could have more or fewer I/O devices. Further, multiple I/O devices, such as IGSL I/O devices 420 and 421, may be operated in unison to form a single high capacity link.
As illustrated, control station 160 may include a bus 510, a processor 520, a memory 530, an input device 540, an output device 550, and a communication interface 560. Bus 510 permits communication among the components of control station 160.
Processor 520 may include any type of conventional processor or microprocessor that interprets and executes instructions. Memory 530 may include a RAM or another type of dynamic storage device that stores information and instructions for execution by processor 520, a ROM device and/or another type of static storage device that stores static information and instructions for processor 520, and/or a magnetic disk or optical disk and its corresponding drive.
Input device 540 may include one or more conventional mechanisms that permit an operator to input information to control station 160, such as a keyboard, pointing device (e.g., a mouse, a pen, or the like), one or more biometric mechanisms, such as a voice recognition device, etc. Output device 550 may include one or more conventional mechanisms that output information to the operator, such as a display, a printer, a speaker, etc. Communication interface 560 may include any transceiver-like mechanism that enables control station 160 to communicate with other devices and/or systems. For example, communication interface 560 may include a modem or an Ethernet interface to a network. Alternatively, communication interface 560 may include other mechanisms for communicating via a data network, such as network 150.
Control station 160 may implement the functions described below in response to processor 520 executing software instructions contained in a computer-readable medium, such as memory 530. A computer-readable medium may be defined as one or more memory devices. In alternative embodiments, hardwired circuitry may be used in place of or in combination with software instructions to implement features consistent with the principles of the invention. Thus, implementations consistent with the principles of the invention are not limited to any specific combination of hardware circuitry and software.
To distribute forwarding tables, control station 160 may generate a batch message 700 for each node in network 100 based on the stored topology information. The batch message may include a primary forwarding table 305 and a group of backup forwarding tables 310. As described above, the number of backup forwarding tables 310 may be enough to handle any single unexpected event, a subgroup of possible unexpected events, or any combination of two or more unexpected events.
Protocol field 710 may include information identifying message 700. In one implementation, protocol field 710 may store information indicating that message 700 is a new batch message. Destination node ID field 720 may store information identifying the particular node to which batch message 700 is destined. Batch field 730 may store batch 300 of forwarding tables for the node identified in destination node ID field 720.
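A minimal sketch of batch message 700, assuming a simple Python dataclass layout (the field types and encoding are assumptions; the specification leaves them open):

    from dataclasses import dataclass

    @dataclass
    class BatchMessage:
        """Batch message 700 (fields 710-730)."""
        protocol: str          # protocol field 710: marks this as a new batch message
        destination_node: str  # destination node ID field 720
        batch: object          # batch field 730: batch 300 (see the Batch sketch above)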
Once generated, control station 160 may transmit batch message 700 to the destination node (act 610). Control station 160 may generate and transmit batch messages 700 in the above-described manner for all of the nodes in network 100. Control station 160 may implement the following acts, expressed as a high-level pseudo-code, to ensure that the requisite number of forwarding tables (primary 305 and backup 310) are generated for each node in network 100:
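One possible rendering of such pseudo-code, in Python-style form, is sketched below; the callable compute_table is an assumption standing in for any routine that derives a forwarding table from the stored topology information:

    def generate_batches(nodes, links_of, qualities_of, compute_table):
        """Generate a batch (primary 305 plus backups 310) for every node.

        compute_table(node, event) returns a forwarding table for `node` given
        a single link event, or the nominal table when event is None.
        """
        batches = {}
        for node in nodes:
            primary = compute_table(node, None)         # nominal topology
            backups = {}
            for other in nodes:                         # ns nodes
                for link in links_of(other):            # nl outbound links each
                    for quality in qualities_of(link):  # nq non-normative qualities
                        event = (other, link, quality)
                        backups[event] = compute_table(node, event)
            batches[node] = {"primary": primary, "backups": backups}
        return batches          # contents of one batch 300 (message 700) per node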
The network node may receive batch message 700 from control station 160 (act 615). In response, the network node may authenticate and validate the contents of batch message 700 (act 620). To authenticate and validate batch message 700, the network node may, for example, check destination node field 720 to ensure that batch message 700 is intended for this particular network node. The network node may also check that batch message 700 includes the correct number of primary and backup forwarding tables.
The network node may install primary forwarding table 305 from batch message 700 as the current forwarding table (act 625). Network node may also replace any existing backup forwarding tables with backup forwarding tables 310 from batch message 700 (act 625).
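A sketch of the receiving node's acts, assuming the BatchMessage layout above; the table-count validation shown checks the single-event case of 1 + ns × nl × nq tables:

    def install_batch(node_id: str, message, expected_tables: int):
        """Authenticate/validate batch message 700, then install its tables."""
        if message.destination_node != node_id:
            raise ValueError("batch message 700 not intended for this node")
        if 1 + len(message.batch.backups) != expected_tables:
            raise ValueError("incorrect number of forwarding tables")
        current_table = message.batch.primary        # install primary 305
        backup_tables = dict(message.batch.backups)  # replace stored backups 310
        return current_table, backup_tables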
If a given network node detects a local failure, the network node may generate a command telling other network nodes to switch to the appropriate backup forwarding table 310 instead of primary forwarding table 305. In general, the network node may detect a local failure in a number of different ways (e.g., the network node's transmitter equipment may be able to generate a signal that it has failed, a “loss of carrier” indication may be associated with the link, “hello packets” may cease flowing, etc.). Any of these techniques may be used to deduce that an outbound link has failed, and to trigger the transmission of a switch message.
As an example, assume that a network node, node #I, detects that the quality of one of its outbound links, link #K, has changed to a new quality #L, and generates a switch forwarding table message 900 in response. Protocol field 910 may include information identifying message 900. In one implementation, protocol field 910 may store information indicating that message 900 is a message instructing other network nodes to switch forwarding tables. Originating node ID field 920 may store information identifying the particular node from which switch forwarding table message 900 originated. In the example above, originating node ID field 920 may store information identifying the network node as node #I. Link ID field 930 may store information identifying the link associated with the network node identified in originating node ID field 920 that has experienced the event (e.g., failure, degradation, etc.). In the example above, link ID field 930 may store information identifying the link as #K. New quality for link field 940 may store information identifying the quality of the link identified in link ID field 930. In the example above, new quality for link field 940 may store information identifying the quality as #L.
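Message 900 may be sketched with the same dataclass conventions as above (the field types again being assumptions):

    from dataclasses import dataclass

    @dataclass
    class SwitchMessage:
        """Switch forwarding table message 900 (fields 910-940)."""
        protocol: str          # protocol field 910: marks a switch-tables instruction
        originating_node: str  # originating node ID field 920, e.g., "I"
        link_id: str           # link ID field 930, e.g., "K"
        new_quality: str       # new quality for link field 940, e.g., "L"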
Once switch forwarding table message 900 is generated, the network node may transmit switch forwarding table message 900 to the other network nodes in network 100 (act 815). The network node may also transmit switch forwarding table message 900 to all control stations 160 in network 100 (act 820).
The network node may determine if it is already using a backup forwarding table 310 from its stored batch 300 of forwarding tables (act 1015). If the network node is already using a backup forwarding table 310, processing may end. On the other hand, if the network node is using its primary forwarding table 305, the network node may switch to the backup forwarding table 310 identified in switch forwarding table message 900 (act 1020). For example, if switch forwarding table message 900 identifies the originating node as node #I, the link as link #K, and the new quality of link #K as #L, the network node may switch to backup forwarding table #I-K-L.
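These acts may be sketched as follows, reusing the SwitchMessage and Batch sketches above (the boolean flag tracking whether the primary table is in use is an assumption):

    def handle_switch_message(msg, using_primary: bool, current_table, backup_tables):
        """Switch to the backup table identified by message 900 (acts 1015-1020)."""
        if not using_primary:
            return current_table, False      # already on a backup table; done
        key = (msg.originating_node, msg.link_id, msg.new_quality)
        return backup_tables[key], False     # e.g., backup forwarding table #I-K-L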
The network node may disseminate received switch forwarding table message 900 to other nodes in network 100 to implement this change throughout network 100 (act 1025). In general, the dissemination can be to some subset of the nodes in network 100, or to all nodes. In one implementation, a reliable flooding technique may be used for disseminating switch forwarding table message 900 throughout network 100. In some implementations consistent with the principles of the invention, a network node may piggy-back switch forwarding table message 900 with datagrams (or other data units, such as packets, cells, etc.) scheduled to be forwarded to other nodes. In this way, other nodes in network 100 will learn of the change in forwarding tables just as quickly as the nodes receive the datagrams being forwarded along the new path, and can adjust their forwarding tables immediately before even processing the datagram. Thus, the network node may immediately start forwarding datagrams along the revised path, without any time lag whatsoever. In essence, the datagrams themselves can carry the information needed to find the new path after a fault has occurred.
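The piggy-backing described above may be sketched as follows; the datagram layout (a dict with a control slot) and the node methods are assumptions for illustration:

    def piggyback(switch_msg, datagram: dict) -> dict:
        """Attach switch forwarding table message 900 to an outbound datagram."""
        out = dict(datagram)
        out["control"] = list(datagram.get("control", [])) + [switch_msg]
        return out

    def on_receive(datagram: dict, node) -> None:
        """Apply piggy-backed switch messages before forwarding the datagram."""
        for msg in datagram.get("control", []):
            node.switch_tables(msg)   # adjust forwarding tables first (hypothetical)
        node.forward(datagram)        # then forward along the (possibly new) path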
Implementations consistent with the principles of the invention ensure continued operation of a communications network when one or more events (e.g., link failure or degradation) occur in the communications network. To ensure the continued operation of the network, each node in the network stores a group of backup forwarding tables. When an event occurs, the nodes may switch to one of the backup forwarding tables and continue to route data based on the backup forwarding table.
The foregoing description of exemplary embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while the above description focused on a satellite-based network, implementations consistent with the principles of the invention are not so limited. In fact, implementations consistent with the principles of the invention are equally applicable to terrestrial networks where continued routing of data upon the occurrence of events is desired.
While series of acts have been described with regard to FIGS. 6, 8, and 10, the order of the acts may be varied in other implementations consistent with the principles of the invention. Moreover, non-dependent acts may be performed in parallel.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used.
The scope of the invention is defined by the claims and their equivalents.