System and method to achieve sub-second routing performance

Abstract
A system and method for routing data, the system having a processor; an interface in communication with the processor and capable of being in communication with a second system for routing data; a memory unit in communication with the processor, the memory unit having a network routing table having a plurality of routes, the plurality of routes having a first route; and network failure route selection logic including instructions adapted to configure the processor to determine when the first route is inoperative, transmit a first data packet to the second system for routing data when the first route is inoperative, and utilize a second route selected from the plurality of routes, the second route being different from the first route.
Description
TECHNICAL FIELD

The present disclosure generally relates to methods and systems for routing data on a network.


BACKGROUND

Dynamic routing protocols are widely used in broadband networks to route data packets. One of the functions of these dynamic routing protocols, besides establishing the initial routing, is to reroute data packets around network failures. In the past, rerouting data has involved a procedure that may take several minutes from the time a network failure occurs to the time new routes are installed in the routing tables of the nodes, routers or switches that make up the network.


Time-sensitive network traffic, such as network traffic including voice over internet protocol (“VoIP”) telephone calls and real time audio and video transmissions, is sensitive to network failures. For example, if a VoIP telephone call is interrupted by a network failure, the VoIP telephone call will likely be dropped because rerouting around the network failure may take several minutes. This has the unfortunate drawback of requiring VoIP telephone users to redial to reestablish telephone calls affected by the network failure. In like manner, real time streaming of audio and video may be interrupted for several minutes while the network reroutes around the network failure. The time taken to reroute around the network failure may significantly reduce a user's ability to effectively utilize network services. Therefore, there exists a need for a system that can more efficiently reroute around network failures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an embodiment of a system for routing data;



FIG. 2 is a block diagram of a network within which the system of FIG. 1 may operate;



FIG. 3 is a flow chart illustrating one method of operating the system of FIG. 1;



FIG. 4 is a block diagram of a routing table;



FIG. 5 is a diagram of a TRIE structure incorporating the present invention;



FIG. 6 is a diagram of a network utilizing a fast rerouting scheme;



FIG. 7 is a block diagram of a general purpose computer suitable for use in implementing the system and method described herein.





DETAILED DESCRIPTION

In order to address the drawbacks of current data routing systems, a suitable system for routing data may include a processor; an interface in communication with the processor and capable of being in communication with a second system for routing data; a memory unit in communication with the processor, the memory unit having a network routing table having a plurality of routes, the plurality of routes having a first route; and network failure route selection logic including instructions adapted to configure the processor to determine when the first route is inoperative, transmit a first data packet to the second system for routing data when the first route is inoperative, and utilize a second route selected from the plurality of routes, the second route being different from the first route. These and other aspects and advantages are described in greater detail below.


Referring to FIG. 1, a system 10 for routing data is shown. The system 10 may be a router commonly used to route data on a computer network. The system 10 includes a processor 12 in communication with an interface 14, a storage device 16 and network failure route selection (“NFRS”) logic 18. The interface 14 may be any interface capable of being connected to a computer network, such as an Ethernet interface. This configuration is typical of a router for a computer network.


The storage device 16 may be a solid state storage device as found in a typical router. However, the storage device 16 may also be a magnetic storage device or an optical storage device. The storage device 16 may be a discrete device or integrated within the processor 12. Additionally, the NFRS logic 18 and the storage device 16 may be separate modules integrated into a common memory unit 13. The NFRS logic 18 may include instructions adapted to configure the processor 12 to implement a data routing method described later in this description.


The memory unit 13 may contain a network routing table. Routing is a fundamental concept of the Internet and many other networks; it provides the means of discovering paths along which information, such as data packets, can be sent. The network routing table contains information for numerous routes, which is used to determine which route to utilize so that a data packet may be transmitted to its destination.
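
By way of illustration only, such a routing table might be modeled as in the following minimal Python sketch; the names Route, RoutingTable, route_id and next_hop are hypothetical and chosen to mirror the structure described above, not taken from any particular router implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Route:
        route_id: int         # identifier used when reporting failures
        next_hop: str         # neighboring system to forward through
        failed: bool = False  # set when a network failure is reported

    @dataclass
    class RoutingTable:
        # Maps a destination prefix (e.g. "10.1.0.0/16") to candidate routes.
        routes: dict = field(default_factory=dict)

        def add_route(self, prefix: str, route: Route) -> None:
            self.routes.setdefault(prefix, []).append(route)

        def candidates(self, prefix: str) -> list:
            # Only routes not currently experiencing a failure are eligible.
            return [r for r in self.routes.get(prefix, []) if not r.failed]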


Referring to FIGS. 1 and 2, the system 10 may be implemented in a computer network 20. The computer network 20 may be a local area network or a wide area network. For exemplary purposes, the network 20 may include systems 10a, 10b, 10c, 10d and 10e, each of which may be similar to the system 10 described in FIG. 1. Systems 10a, 10b, 10c, 10d and 10e are connected to each other via their respective interfaces 14 (as best shown in FIG. 1) using a network cable or a wireless communication system such as IEEE 802.11g. In this example, system 10a is connected to system 10b and system 10e. System 10c is connected to system 10b and system 10d. System 10d is connected to system 10c and system 10e.
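
For reference in the sketches that follow, this example topology can be written as an adjacency map; the encoding is hypothetical, with labels mirroring the reference numerals above, and the five systems form a ring.

    # Adjacency of the example network of FIG. 2 (hypothetical encoding);
    # the systems form a ring: 10a-10b-10c-10d-10e-10a.
    TOPOLOGY = {
        "10a": ["10b", "10e"],
        "10b": ["10a", "10c"],
        "10c": ["10b", "10d"],
        "10d": ["10c", "10e"],
        "10e": ["10d", "10a"],
    }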


Referring to FIGS. 1 and 3, a fast reroute protocol (“FRRP”) method 30 for routing data is shown. The method 30 may be contained within the NFRS logic 18 and may be executed by the processor 12. Block 32 indicates the starting point of the method. As shown in block 34, the method 30 first determines if one of the routes in the routing table utilized by the system 10 has a network failure. This network failure may be a link layer failure or a protocol layer failure. These failures may be detected by a loss of signal for a period of time or by a link layer detection scheme.
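
As a rough illustration of loss-of-signal detection, consider the following sketch; the threshold value and the function name link_failed are assumptions chosen for illustration, not parameters taken from the disclosure.

    import time

    SIGNAL_LOSS_THRESHOLD = 0.05  # seconds; an assumed hold time

    def link_failed(last_signal_time: float) -> bool:
        # Declare a link layer failure when no signal has been observed
        # for longer than the configured threshold.
        return time.monotonic() - last_signal_time > SIGNAL_LOSS_THRESHOLD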


Moving to block 36, once a network failure has been detected, a data packet is sent to the other systems. This data packet contains the identity of the route where the network failure has occurred. In addition, the data packet may include further information such as the type of network failure, which may be a link failure or a nodal failure.
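
A minimal sketch of this notification step follows; FailureNotice, notify_peers and the send callable are hypothetical names, and the structure simply carries the route identity and failure type described above.

    from dataclasses import dataclass
    from enum import Enum

    class FailureType(Enum):
        LINK = "link"
        NODAL = "nodal"

    @dataclass
    class FailureNotice:
        failed_route_id: int       # identity of the route where the failure occurred
        failure_type: FailureType  # link or nodal failure

    def notify_peers(notice: FailureNotice, peers: list, send) -> None:
        # Send the failure notice to every other system in the network.
        for peer in peers:
            send(peer, notice)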


Next, as shown in block 38, once the other systems have been informed of the network failure, information identifying the route experiencing the network failure is stored. This may be accomplished by storing the network failure information in the storage device 16. Afterwards, a new route is selected from the routing table and utilized. The selection process will consider any of the routes contained within the routing table, with the exception of any routes experiencing a network failure. The selection process is based on route data structures. More specifically, the method 30 may allow certain routes to participate in the route calculation while disabling other routes from being used in the calculation; in essence, a function of this recalculation is to allow or disallow individual routes. By so doing, the system 10 will avoid using the route experiencing the network failure when transmitting data. Thereafter, this process repeats as new routes experiencing a network failure are discovered.
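
The selection step described above reduces to filtering the candidate routes against the stored failure information; a minimal sketch, assuming each route carries a route_id as in the earlier hypothetical sketch.

    def select_route(candidates: list, failed_route_ids: set):
        # Recalculate using only routes not experiencing a failure; here the
        # "best" surviving route is simply the first eligible candidate.
        eligible = [r for r in candidates if r.route_id not in failed_route_ids]
        return eligible[0] if eligible else None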


In block 42, if a network failure is not detected in block 34, a determination is made as to whether another system has sent a multicasted data packet indicating that a route is experiencing a network failure. If this occurs, the network failure information is stored and a new route is selected, as shown in blocks 40 and 42. Afterwards, the process repeats as new routes experiencing a network failure are discovered. It should be understood that information for more than one failed route may be stored; by so doing, all routes experiencing failures may be excluded from the selection process.


Each multicasted data packet may contain (1) the identity of the system detecting the network failure, (2) an indication of the type of network failure detected and (3) a sequence number that may be used to identify the instance of the fast reroute request from each system.
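
Consolidated as a data structure, the packet fields enumerated above might be sketched as follows; the field names are illustrative assumptions, and the route identity field restates the notice contents described earlier.

    from dataclasses import dataclass

    @dataclass
    class FastRerouteRequest:
        origin_system: str    # (1) identity of the system detecting the failure
        failure_type: str     # (2) type of network failure detected
        sequence_number: int  # (3) instance of the fast reroute request
        failed_route_id: int  # identity of the failed route, per the notice above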


Each system may keep a table to track the multicasted packets received from its peers. The table stores an entry for each system detecting a failure. When a multicasted packet is received, the system shall perform the following tasks: (1) examine the multicasted packet's process ID to ensure the packet is from a peer, (2) compare the sequence number of the received packet to the sequence number stored in the sequence tracking table, (3) if the sequence number is not newer than the stored sequence number, discard the received multicasted packet without forwarding it further, (4) if the sequence number of the received multicasted packet is newer than the stored sequence number, update the sequence tracking table with the new sequence number, (5) flood the multicasted packet to all of the other systems except the one from which the packet was received, and (6) update the NFRS logic and utilize a route indicated by the NFRS logic.
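
A sketch of this receive procedure follows, with the numbered comments keyed to the tasks above; the parameter names and the mark_failed callable are hypothetical stand-ins for the NFRS logic update.

    def handle_fast_reroute(packet, received_on, last_seen, peers, interfaces,
                            send, mark_failed):
        # (1) Accept the packet only if it originates from a known peer.
        if packet.origin_system not in peers:
            return
        # (2)/(3) Discard the packet without forwarding when its sequence
        # number is not newer than the one recorded for this peer.
        if packet.sequence_number <= last_seen.get(packet.origin_system, -1):
            return
        # (4) Record the newer sequence number in the tracking table.
        last_seen[packet.origin_system] = packet.sequence_number
        # (5) Flood the packet on every interface except the receiving one.
        for iface in interfaces:
            if iface != received_on:
                send(iface, packet)
        # (6) Update the NFRS logic so later lookups avoid the failed route.
        mark_failed(packet.failed_route_id)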


Referring to FIG. 2, an example of how the method 30 operates will be described. Assume a first host 15 connected to system 10b is attempting to transmit data to and/or from a second host 11, which is connected to system 10a. The first host 15, via system 10b, will transmit information to the second host 11 via the system 10a. However, if the communication link between system 10a and system 10b is broken, the system 10b will send a data packet notifying all of the other systems 10a, 10c, 10d and 10e of the broken link. Afterwards, the systems 10a, 10b, 10c, 10d and 10e will store this broken-link information and select a new route from the routing table. Thereafter, the system 10b will transmit data to the second host 11 via systems 10c, 10d, 10e and 10a.


Referring to FIG. 4, first, second and third routing tables 50, 52, 54 utilizing the method 30 of FIG. 3 are shown. The three routing tables 50, 52, 54 are shown for exemplary purposes; any number of routing tables may be utilized. The routing tables 50, 52, 54 include prefixes 56, 58, 60 and NFRS logics 62, 64, 66, respectively. To route a data packet during a failure condition, the NFRS logics 62, 64, 66 select one of routes 67, 68, 69, respectively.


Referring to FIG. 5, an NFRS logic 72 embedded in a TRIE structure 70 is shown. As stated previously, the NFRS logic 72 will select one of the routes 74. The search for the longest prefix match is combined with and controlled by the NFRS logic 72, which chooses the best route based on the network failure conditions. When network failures are detected and a fast rerouting request is issued, all of the routers save the network failure condition in their NFRS logic. The NFRS selection logic is implemented at each router independently and yields a consistent decision for selecting a route, which warrants network-wide routing integrity and completeness under the failures. A hardware or software failure indication may be used to directly trigger the FRRP; this skips the traditional failure confirmation, link state update, topology database convergence and route recalculation stages, significantly reducing rerouting latency. The method 30 may be implemented to promptly invoke rerouting by updating the NFRS logic 72 at each node of the protected domain where the network failures occurred. The FRRP ensures the consistency of the NFRS logic for all of the routers in a protected domain.
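
As a rough illustration of combining the longest prefix match with failure-aware selection, consider the following hypothetical binary-trie sketch; the class names and the bit-string representation are assumptions, not the structure of FIG. 5 itself.

    class TrieNode:
        def __init__(self):
            self.children = {}  # keyed by bit "0" or "1"
            self.routes = []    # candidate routes whose prefix ends here

    class FailureAwareTrie:
        def __init__(self):
            self.root = TrieNode()
            self.failed = set()  # route ids reported as failed (NFRS state)

        def insert(self, prefix_bits: str, route) -> None:
            node = self.root
            for bit in prefix_bits:
                node = node.children.setdefault(bit, TrieNode())
            node.routes.append(route)

        def lookup(self, address_bits: str):
            # Longest prefix match; at every matching depth the NFRS state
            # filters out routes experiencing a failure.
            node, best, i = self.root, None, 0
            while node is not None:
                live = [r for r in node.routes if r.route_id not in self.failed]
                if live:
                    best = live[0]  # deepest match found so far wins
                if i == len(address_bits):
                    break
                node = node.children.get(address_bits[i])
                i += 1
            return best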


Referring to FIG. 6, the FRRP method 30 may be used to protect an entire network or may be used to protect selected portions of the network for better scalability. More specifically, the FRRP method 30 may be utilized in nodes R1, R2, R3 and R4. The network shown in FIG. 6 can be partitioned into several rerouting domains, each rerouting domain covering a subset of the network. A system can have all of its ports utilized in the method, or it can have just a subset of its ports utilizing the method. Different rerouting domains may also overlap; for instance, a system can have one set of ports in one rerouting domain and another set of ports in another rerouting domain, and each node can belong to multiple rerouting domains. The purpose of partitioning the entire network into separate rerouting domains is to constrain the network complexity, simplifying the NFRS logic and speeding up the rerouting process. The rerouting domains can be structured to span the entire network or only selected portions of the network. The FRRP will be bounded by the rerouting domains for better scalability.


For example, nodes R1, R8 and R7 form a first rerouting domain. Nodes R3, R11 and R12 form a second rerouting domain. Nodes R4, R5 and R6 form a third rerouting domain. Nodes R2, R9 and R10 form a fourth rerouting domain. Finally, nodes R1, R2, R3 and R4 make up a fifth rerouting domain. As stated previously, partitioning the network into these different rerouting domains constrains the network complexity, speeding up the rerouting process.
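
Under the assumption that each node knows its domain memberships, bounding the FRRP flood then amounts to forwarding a reroute request only to neighbors sharing a rerouting domain with the reporting node; a hypothetical sketch of this example follows.

    # Rerouting domains of the example; a node may belong to several.
    DOMAINS = {
        "first":  {"R1", "R7", "R8"},
        "second": {"R3", "R11", "R12"},
        "third":  {"R4", "R5", "R6"},
        "fourth": {"R2", "R9", "R10"},
        "fifth":  {"R1", "R2", "R3", "R4"},
    }

    def flood_targets(origin: str, neighbors: list) -> list:
        # Bound the FRRP flood: forward a reroute request only to neighbors
        # sharing at least one rerouting domain with the reporting node.
        origin_domains = {name for name, members in DOMAINS.items()
                          if origin in members}
        return [n for n in neighbors
                if any(n in DOMAINS[name] for name in origin_domains)]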


Referring to FIG. 7, an illustrative embodiment of a general computer system is shown and is designated 80. The computer system 80 can include a set of instructions that can be executed to cause the computer system 80 to perform any one or more of the methods or computer based functions disclosed herein. The computer system 80 may operate as a standalone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.


In a networked deployment, the computer system may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 80 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In a particular embodiment, the computer system 80 can be implemented using electronic devices that provide voice, video or data communication. Further, while a single computer system 80 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.


As illustrated in FIG. 7, the computer system 80 may include a processor 82, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. Moreover, the computer system 80 can include a main memory 84 and a static memory 86 that can communicate with each other via a bus 88. As shown, the computer system 80 may further include a video display unit 90, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, or a cathode ray tube (CRT). Additionally, the computer system 80 may include an input device 92, such as a keyboard, and a cursor control device 94, such as a mouse. The computer system 80 can also include a disk drive unit 96, a signal generation device 98, such as a speaker or remote control, and a network interface device 100.


In a particular embodiment, as depicted in FIG. 7, the disk drive unit 96 may include a computer-readable medium 102 in which one or more sets of instructions 104, e.g. software, can be embedded. Further, the instructions 104 may embody one or more of the methods or logic as described herein. In a particular embodiment, the instructions 104 may reside completely, or at least partially, within the main memory 84, the static memory 86, and/or within the processor 82 during execution by the computer system 80. The main memory 84 and the processor 82 also may include computer-readable media.


In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.


In accordance with various embodiments of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.


The present disclosure contemplates a computer-readable medium that includes instructions 104 or receives and executes instructions 104 responsive to a propagated signal, so that a device connected to a network 106 can communicate voice, video or data over the network 106. Further, the instructions 104 may be transmitted or received over the network 106 via the network interface device 100.


While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein.


In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tape.


Although the present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.


The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.


One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.


The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.


The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the present invention. Thus, to the maximum extent allowed by law, the scope of the present invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.

Claims
  • 1. A system for routing data, the system comprising: a processor; an interface in communication with the processor and capable of being in communication with a second system for routing data; a memory in communication with the processor, the memory having a network routing table having a plurality of routes included within rerouting domains, each rerouting domain corresponding to a subset of a network, the plurality of routes including a first route included within a first rerouting domain; and network failure route selection logic comprising instructions to configure the processor to: receive a first message including network failure information identifying a cause of a network failure associated with the first route from the second system, the first message to include a sequence number incremented to indicate that the first message corresponds to a new reroute request sent from the second system, the processor to ignore the first message when the sequence number is not newer than a stored sequence number associated with the second system; and select a second route from the plurality of routes to route data in place of the failed first route when the processor determines that the sequence number included in the first message is newer than the stored sequence number, the second route being selected based on the cause of the network failure, the second route included within the first rerouting domain, and being different from the first route.
  • 2. The system of claim 1, wherein the cause of the network failure includes at least one of a nodal failure, a protocol layer failure, or a link failure.
  • 3. The system of claim 1, wherein a second network failure route selection logic is located in a third system, and the first route is identified in a second message sent to the third system.
  • 4. The system of claim 3, wherein the second message comprises information related to an operating condition of the first route.
  • 5. The system of claim 4, wherein the network failure route selection logic further comprises instructions to configure the processor to determine the operating condition of the first route by extracting the operating condition of the first route from the first message.
  • 6. The system of claim 1, wherein the network failure route selection logic further comprises instructions to configure the processor to determine a cause of a network failure of a third route by at least one of detecting a loss of a signal along the third route or using a link layer failure detection scheme.
  • 7. The system of claim 1, wherein the network failure route selection logic further comprises instructions to identify routes associated with the first route from the plurality of routes based on at least one of the cause of the network failure or an identifier associated with the first route, wherein the second route is different from the identified routes.
  • 8. A method for a first system, the method comprising: receiving, at a processor of the first system, a first message including network failure information identifying a cause of a failure of a first route from a second system when the first route has failed, the first message to include a sequence number incremented to indicate that the first message corresponds to a new reroute request sent from the second system, the sequence number to cause the first system to discard the first message when the sequence number is not newer than a stored sequence number associated with the second system; and selecting, using the processor, a second route from a plurality of routes to carry data in place of the first route when the sequence number included in the first message is determined to be newer than the stored sequence number but not when the sequence number included in the first message is determined to be older than the stored sequence number, the plurality of routes being included in rerouting domains, each rerouting domain corresponding to a subset of a network, the first route being within a first rerouting domain, the second route being selected based on the cause of the failure and being included within the first rerouting domain, the second route being different from the first route.
  • 9. The method of claim 8, further comprising determining a condition of the first route at the second system.
  • 10. The method of claim 8, wherein the first message comprises information related to a condition of the first route.
  • 11. The method of claim 8, wherein the cause of the failure includes at least one of a nodal failure, a protocol layer failure, or a link failure.
  • 12. The method of claim 8, further comprising determining a cause of a failure of a third route by at least one of detecting a loss of a signal along the third route or using a link layer failure detection scheme.
  • 13. The method of claim 8, further comprising sending a second message to a third system, the second message comprising information related to a condition of the first route.
  • 14. The method of claim 8, further comprising determining an operating condition of the first route by extracting the operating condition of the first route from the first message.
  • 15. The method of claim 8, wherein selecting the second route comprises identifying routes associated with the first route from the plurality of routes based on at least one of the cause of the network failure or an identifier associated with the first route, wherein the second route is different from the identified routes.
  • 16. A tangible computer readable medium, excluding propagating signals, and storing processor executable code which, when executed, causes a machine to perform a method comprising: receiving a first message including network failure information identifying a cause of a failure of a first route included within a first rerouting domain, the first route being identified in a routing table identifying a plurality of rerouting domains corresponding to subsets of a network, the first message including a sequence number incremented to indicate that the first message corresponds to a new reroute request sent from a sender of the first message, the sequence number to cause the machine to discard the first message when the sequence number is not newer than a stored sequence number associated with the sender of the first message; and selecting a second route from the plurality of routes to carry data in place of the first route when the sequence number included in the first message is determined to be newer than the stored sequence number but not when the sequence number included in the first message is determined to be older than the stored sequence number, the second route being selected based on the cause of the failure identified in the first message, the second route being included within the first rerouting domain, and the second route being different from the first route.
  • 17. The tangible computer readable medium of claim 16, wherein the first message comprises information related to a condition of the first route.
  • 18. The tangible computer readable medium of claim 16, wherein the cause of the failure includes at least one of a nodal failure, a protocol layer failure, or a link failure.
  • 19. The tangible computer readable medium of claim 16, wherein the instructions further cause the machine to send a second message to a third system identifying the first route.