System and method for automatic stream fail-over

Abstract
A method is described comprising: maintaining a plurality of data relating to client streaming connections across a plurality of servers; and assigning a particular client streaming connection to a first server upon detecting that a second server previously serving a streaming connection to the client has become inoperative.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


This invention relates generally to the field of network services. More particularly, the invention relates to an improved architecture for providing fault tolerant data communication.


2. Description of the Related Art


As is known in the art, streaming is a mechanism for playing back audio and/or video content over a network in real time, typically used in situations where network bandwidth is limited. The basic streaming concept is that the destination (e.g., a client) begins to play back the underlying streaming file from a buffer before the entire file has been received from its source.
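The buffered-playback concept can be sketched, purely for illustration, as follows (the class, threshold, and chunk names are editorial assumptions, not part of the described system):

```python
from collections import deque

# Toy illustration of buffered streaming playback: playback may begin
# once a minimum number of chunks has arrived, before the entire file
# has been received from its source.
class StreamBuffer:
    def __init__(self, playback_threshold=3):
        self.chunks = deque()
        self.playback_threshold = playback_threshold

    def receive(self, chunk):
        self.chunks.append(chunk)

    def can_play(self):
        # True once enough data is buffered to begin playback.
        return len(self.chunks) >= self.playback_threshold

    def play_next(self):
        return self.chunks.popleft() if self.chunks else None

buf = StreamBuffer()
for i in range(3):            # only part of the file has arrived so far
    buf.receive(f"chunk-{i}")
```

Playback can then begin (`buf.can_play()` is true) even though the remainder of the file is still in transit.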


A traditional network streaming system is illustrated in FIG. 1. As shown, one or more clients 150, 160, configured with streaming application software such as RealPlayer® from RealNetworks® or Windows Media® Player from Microsoft® Corporation, communicate with one or more streaming servers 110, 111, . . . N, over a network 100 (e.g., the Internet). The group of streaming servers 110, 111, . . . N, are located together at a point of presence (“POP”) site 130. Each of the streaming servers 110, 111, . . . N, may store a copy of the same streaming data or, alternatively, may store different streaming data, depending on the configuration at the POP site 130.


In operation, when a client 150 requests a particular streaming file from a server at the POP site 130, the request is received by a load balancer module 120, which routes the request to an appropriate streaming server 111. Which server is “appropriate” may depend on where the requested file is stored, the load on each server 110, 111, . . . N, and/or the type of streaming file requested by the client (e.g., Windows Media format or RealPlayer format). Once the file has been identified by the load balancer 120 on an appropriate server—server 111 in the illustrated example—it is streamed to the requesting client 150 (represented by stream 140) through the network 100.


One problem with current systems for streaming multimedia content, however, is that when delivery of a stream to a client/player 150 is interrupted, there is no automated mechanism for correcting the interruption (e.g., providing another source for the stream) without some type of manual intervention. For example, if server 111 in FIG. 1 were to crash while streaming content to client 150, the client's multimedia stream would be interrupted. The user at the client 150 would then be forced to manually attempt to reconnect to the server 111 through the load balancer 120.


The problem is even more serious if the client 150 is receiving a stream of a live event or a scheduled event (e.g., such as a “Web-cast”). In this case, by the time the client 150 reestablished a connection, a significant portion of the event would simply be unavailable to the client 150. In sum, current streaming applications do not provide any mechanism for automatically and seamlessly reestablishing a streaming session with a client, once the initial session has been interrupted.


Accordingly, what is needed is a system and method for providing fault tolerance and/or automatic fail-over techniques with respect to streaming multimedia content over a network.





BRIEF DESCRIPTION OF THE DRAWINGS

A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:



FIG. 1 illustrates a prior art system and method for streaming content over a network.



FIG. 2 illustrates an exemplary network architecture including elements of the invention.



FIG. 3 illustrates an exemplary computer architecture including elements of the invention.



FIGS. 4a and 4b illustrate embodiments of a system and method for streaming content over a network.



FIG. 5 illustrates client streaming information stored in accordance with one embodiment of the invention.



FIG. 6 illustrates coordination between various different point of presence sites according to one embodiment of the invention.





DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the invention.


An Exemplary Network Architecture

Elements of the present invention may be included within a multi-tiered networking architecture 200 such as that illustrated in FIG. 2, which includes one or more data centers 220-222, a plurality of “intermediate” Point of Presence (“POP”) nodes 230-234 (also referred to herein as “Private Network Access Points,” or “P-NAPs”), and a plurality of “edge” POP nodes 240-245 (also referred to herein as “Internet Service Provider Co-Location” sites or “ISP Co-Lo” sites).


According to the embodiment depicted in FIG. 2, each of the data centers 220-222, intermediate POPs 230-234 and/or edge POPs 240-245 comprises a group of network servers on which various types of network content may be stored and transmitted to end users 250, including, for example, Web pages, network news data, e-mail data, File Transfer Protocol (“FTP”) files, and live and on-demand multimedia streaming files. It should be noted, however, that the underlying principles of the invention may be practiced using a variety of different types of network content.


The servers located at the data centers 220-222 and POPs 230-234; 240-245 may communicate with one another and with end users 250 using a variety of communication channels, including, for example, Digital Signal (“DS”) channels (e.g., DS-3/T-3, DS-1/T-1), Synchronous Optical Network (“SONET”) channels (e.g., OC-3/STS-3), Integrated Services Digital Network (“ISDN”) channels, Digital Subscriber Line (“DSL”) channels, cable modem channels and a variety of wireless communication channels including satellite broadcast and cellular.


In addition, various networking protocols may be used to implement aspects of the system including, for example, the Asynchronous Transfer Mode (“ATM”), Ethernet, and Token Ring (at the data-link level); as well as Transmission Control Protocol/Internet Protocol (“TCP/IP”), Internetwork Packet Exchange (“IPX”), AppleTalk and DECnet (at the network/transport level). It should be noted, however, that the principles of the invention are not limited to any particular communication channel or protocol.


In one embodiment, a database for storing information relating to distributed network content is maintained on servers at the data centers 220-222 (and possibly also at the POP nodes 230-234; 240-245). The database in one embodiment is a distributed database (i.e., spread across multiple servers) and may run an instance of a Relational Database Management System (RDBMS), such as Microsoft™ SQL-Server, Oracle™ or the like.


An Exemplary Computer Architecture

Having briefly described an exemplary network architecture which employs various elements of the present invention, a computer system 300 representing exemplary clients and servers for implementing elements of the present invention will now be described with reference to FIG. 3.


One embodiment of computer system 300 comprises a system bus 320 for communicating information, and a processor 310 coupled to bus 320 for processing information. The computer system 300 further comprises a random access memory (RAM) or other dynamic storage device 325 (referred to herein as “main memory”), coupled to bus 320 for storing information and instructions to be executed by processor 310. Main memory 325 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 310. Computer system 300 also may include a read only memory (“ROM”) and/or other static storage device 326 coupled to bus 320 for storing static information and instructions used by processor 310.


A data storage device 327 such as a magnetic disk or optical disc and its corresponding drive may also be coupled to computer system 300 for storing information and instructions. The computer system 300 can also be coupled to a second I/O bus 350 via an I/O interface 330. A plurality of I/O devices may be coupled to I/O bus 350, including a display device 343, and/or an input device (e.g., an alphanumeric input device 342 and/or a cursor control device 341).


A communication device 340, also coupled to the I/O bus 350, is used for accessing other computers (servers or clients) via the network 100. The communication device 340 may comprise a modem, a network interface card, or another well-known interface device, such as those used for coupling to Ethernet, token ring, or other types of computer networks.


EMBODIMENTS OF THE INVENTION

One embodiment of the invention, illustrated in FIG. 4a, provides an automatic fail-over solution for content streaming. The cluster of streaming servers 400, 405, . . . N, in this embodiment actively share client data with one another relating to current client streaming connections. Accordingly, when a client-server streaming connection is lost (e.g., due to a server crash), a different server reestablishes a connection to the client at the point in the streaming file where the previous server left off.


In one embodiment, a cluster manager 410, 415 maintains up-to-date connection data for each client. The cluster manager 410, 415 may reside on one server at the POP site 430 or multiple servers, depending on the particular configuration (although illustrated in FIG. 4a in a multiple server configuration). Moreover, in one embodiment, the cluster manager 410, 415 may be distributed across multiple POP sites (see, e.g., FIG. 6 described below).


In one embodiment, the cluster manager 410, 415 receives client connection updates from cluster agents 425, 420 running on various streaming servers 400, 405 at the POP site 430. For example, when a client 150 initially connects to a particular streaming server 405 (e.g., through the load balancer 405), a cluster agent 425 running on that server 405 transmits connection data pertaining to that client to the cluster manager 410, 415. In addition, in one embodiment, the cluster agent 425 regularly transmits client connection updates to the cluster manager 410, 415 at predetermined intervals. Thus, the client data maintained by the cluster manager 410, 415 is kept up-to-date.
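The agent-to-manager update flow described above might be sketched as follows; `ClusterManager`, `ClusterAgent`, and all field names are illustrative assumptions rather than elements of the disclosed system:

```python
import time

# Minimal sketch: a cluster agent on each streaming server reports its
# client connection data to the cluster manager. In a real deployment,
# send_update() would fire on a timer at a predetermined interval so
# the manager's copy of the data stays up to date.
class ClusterManager:
    def __init__(self):
        self.connections = {}   # server_id -> (last update time, client data)

    def report(self, server_id, client_data):
        self.connections[server_id] = (time.time(), client_data)

class ClusterAgent:
    def __init__(self, server_id, manager):
        self.server_id = server_id
        self.manager = manager

    def send_update(self, client_data):
        self.manager.report(self.server_id, client_data)

mgr = ClusterManager()
agent = ClusterAgent("server-405", mgr)
agent.send_update({"client_ip": "10.0.0.7", "state": "active"})
```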


As illustrated in FIG. 5, various types of client connection data may be collected and transmitted to the cluster manager 410, 415, including, for example, an identification code assigned by the cluster agent 425, the client's Internet Protocol (“IP”) address, the current state of the streaming connection (e.g., active, paused, ended, . . . etc.), a media identification code indicating the particular streaming content requested by the client (including a network path), the type of streaming protocol used to stream the content (e.g., real time streaming protocol, active streaming format, . . . etc.), a media offset indicating the point in time to which the multimedia stream has progressed, and various networking data such as the Transmission Control Protocol (“TCP”) port, Network Address Translation data (if any), the maximum supported client bit-rate and/or the actual bit-rate used in the stream delivered to the client. In one embodiment, copies of the foregoing client data are maintained on multiple streaming servers by the cluster manager 410, 415.
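One possible in-memory shape for the per-client record of FIG. 5 is sketched below; the field names are editorial assumptions chosen to mirror the items listed above:

```python
from dataclasses import dataclass

# Hypothetical per-client connection record, mirroring the data items
# described for FIG. 5 (identification code, IP address, state, media
# identifier, protocol, media offset, and networking data).
@dataclass
class ClientConnectionRecord:
    connection_id: str        # identification code assigned by the cluster agent
    client_ip: str            # client's IP address
    state: str                # "active", "paused", "ended", ...
    media_id: str             # requested streaming content, including network path
    protocol: str             # e.g., "rtsp" or "asf"
    media_offset: float       # point in time to which the stream has progressed
    tcp_port: int
    max_bitrate_kbps: int     # maximum supported client bit-rate
    actual_bitrate_kbps: int  # actual bit-rate used in the delivered stream

rec = ClientConnectionRecord("c-001", "10.0.0.7", "active",
                             "/media/event.rm", "rtsp",
                             127.5, 554, 300, 128)
```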


As described above, in one embodiment, cluster agents 420, 425 running on the streaming servers 400, 405 send client connection updates to the cluster manager 410, 415 at predetermined intervals. Accordingly, in one embodiment, the cluster manager 410, 415 uses this periodic update as a “heartbeat” to detect when a particular streaming server has become inoperative (or, alternatively, to detect other problems with the server such as an unmanageable server load or a server communications/network problem). If the cluster manager 410, 415 does not receive an update from a server after one or more update periods have elapsed, it may conclude that the server is inoperative. Alternatively, before arriving at this conclusion, the cluster manager 410, 415 may first attempt to communicate with the server to verify that the server is, in fact, inoperative.
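The missed-heartbeat test described above reduces to a simple time comparison; the interval and grace values here are illustrative only:

```python
# A server is presumed inoperative once more than the allowed number of
# update intervals elapse without a client-connection update from it.
# (In the text, the manager may additionally probe the server directly
# before reaching this conclusion.)
def is_inoperative(last_update_time, now, update_interval, missed_allowed=1):
    return (now - last_update_time) > update_interval * (missed_allowed + 1)
```

For example, with 5-second updates and one missed update allowed, a server is presumed dead only after more than 10 seconds of silence.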


Regardless of how the cluster manager 410, 415 determines that a server is inoperative, once it does, it attempts to reassign each of the client connections supported by the inoperative server to an operative server. Thus, as illustrated in FIG. 4b, if server 405 crashes while serving a stream 440 to a client 150, the cluster manager 410 will reestablish a new stream 450 with the client 150 from a different streaming server 400. Moreover, because client streaming data (such as the data illustrated in FIG. 5) is continually updated at the cluster manager 410, the new streaming server 400 will begin serving the client 150 at the same point in the stream (i.e., at the same media offset) at which the original server 405 became inoperative. In a live stream embodiment, there may be a slight loss of stream data while the connection is re-established. In addition, the new server 400 will know the exact streaming configuration (e.g., bit-rate, streaming format, . . . etc.) required to stream data to the client 150.
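The resume step might look like the following sketch, in which the replacement server reuses the recorded media offset and streaming configuration; all names are hypothetical:

```python
# Illustrative resume logic: the replacement server opens a new stream
# to the client starting at the media offset recorded by the cluster
# manager, using the same protocol and bit-rate as the original stream.
def resume_stream(record, new_server):
    return {
        "server": new_server,
        "client_ip": record["client_ip"],
        "media_id": record["media_id"],
        "start_offset": record["media_offset"],  # pick up where the old server left off
        "protocol": record["protocol"],
        "bitrate_kbps": record["bitrate_kbps"],
    }

record = {"client_ip": "10.0.0.7", "media_id": "/media/event.rm",
          "media_offset": 127.5, "protocol": "rtsp", "bitrate_kbps": 128}
new_stream = resume_stream(record, "server-400")
```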


The foregoing reallocation mechanism may be employed regardless of whether the original stream was of a live/scheduled event 460 (e.g., a Webcast) or a previously-stored audio/video file. In one embodiment, each of the streaming servers 400, 405, may be configured to buffer a predetermined portion of the streaming data for the live event 460. Accordingly, if a server 405 crashes, the new server 400 assigned by the cluster manager 410 can begin streaming the event at the exact point (stored in the buffer) where the original server 405 became inoperative.


The cluster manager 410 may assign servers to handle the streams of the inoperative server based on a variety of factors. For example, in one embodiment, the assignment may be based on the relative load on each of the servers at a given point in time. The assignment may also be based on the type of content supported by the servers. For example, some servers may be configured to support only real time streaming protocol (“RTSP”) requests, while others may be configured to support only active streaming format (“ASF”) requests. Various additional factors may be evaluated for the purpose of making an assignment while still complying with the underlying principles of the invention.
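A replacement-server choice based on the two factors mentioned above (protocol support and relative load) could be sketched as follows; the server records and threshold are illustrative assumptions:

```python
# Filter candidate servers by supported protocol, then choose the
# least-loaded one. Other factors could be folded into the key function.
def pick_replacement(servers, required_protocol):
    candidates = [s for s in servers if required_protocol in s["protocols"]]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s["load"])["name"]

servers = [
    {"name": "server-400", "protocols": {"rtsp"}, "load": 0.35},
    {"name": "server-401", "protocols": {"asf"},  "load": 0.10},
    {"name": "server-402", "protocols": {"rtsp"}, "load": 0.80},
]
```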


In one embodiment, after the various client streams are reassigned to new servers, the load balancer module 405 is notified of the new server assignments so that it can properly distribute new client requests based on server load. In one embodiment, the load balancer module 405 is a layer 4 switch, capable of directing client requests to a particular server based on the type of streaming service being requested by the client (e.g., based on a virtual IP address associated with the service) and/or the server.


As illustrated in FIG. 6, in one embodiment, cluster managers 610-612 and/or agents 620-622 from different POP sites 600-602 may communicate with one another. Accordingly, in this embodiment, if the server capacity at a particular POP site 602 has been reached when one of the servers at that site becomes inoperative, the cluster manager 612 may assign a server from a different site 600 to handle the client streams previously supported by the inoperative server. In this embodiment, cluster managers 610-612 and/or cluster agents 620-622 at different sites continually exchange data relating to client streaming connections. As such, a server from a completely different site 600 will begin streaming data to a client at the same point in the stream at which the previous server became inoperative, using the same set of streaming variables.
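The cross-site fallback can be sketched as a two-tier search, trying the local POP before remote POPs; the capacity threshold and server records are assumptions for illustration:

```python
# Prefer a server at the local POP site; fall back to a remote site
# only when every local server is at or above the capacity threshold.
def assign_server(local_servers, remote_servers, capacity=0.9):
    for pool in (local_servers, remote_servers):
        available = [s for s in pool if s["load"] < capacity]
        if available:
            return min(available, key=lambda s: s["load"])["name"]
    return None

local = [{"name": "pop602-a", "load": 0.95}]    # local site at capacity
remote = [{"name": "pop600-a", "load": 0.40}]
```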


Embodiments of the present invention include various steps, which have been described above. The steps may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.


Elements of the invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other types of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).


Throughout the foregoing description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow.

Claims
  • 1. A method comprising: streaming a first stream from a first server to a first client over a first streaming connection, said first server communicably connected to a first cluster manager, wherein said first server sends a data relating to said first streaming connection to said first cluster manager in periodic intervals;streaming a first portion of a second stream from a second server to a second client over a second streaming connection, said second server communicably connected to a second cluster manager, wherein said second server sends a data relating to said second streaming connection to said second cluster manager in periodic intervals;continually sending said data relating to said second streaming connection from said second cluster manager to said first cluster manager;continually sending said data relating to said first streaming connection from said first cluster manager to said second cluster manager;sending said data relating to said second streaming connection from said first cluster manager to said first server; sending said data relating to said first streaming connection from said second cluster manager to said second server;maintaining, at said first server, a said data relating to said second streaming connection; and determining from said data relating to said second streaming connection periodically sent to the second cluster manager that said second server is either inoperative, has an unmanageable server load, or is having communications/network problems;upon determining that said second server is either inoperative, has an unmanageable server load, or has communications/network problems, communicating to said first cluster manager by said second cluster manager that said second server has a problem;communicating to said first server by said first cluster manager that said second server has said problem;upon said first server receiving communication that said second server has said problem, assigning said first server to stream a 
second portion of said second stream to said second client over a third streaming connection, said first server using said data to comprehend:a starting point for said second portion, said second portion including content of said second stream that follows and is not included in the content of said second stream encompassed by said first portion; and an at least one handshake parameter for said second client;wherein said second client connection data includes a media offset value identifying a position within a particular streaming file streamed to said second client over said second streaming connections, and streaming said streaming file from said first server to second client over said third streaming connection starting at a point identified by said media offset value.
  • 2. The method as in claim 1 wherein said data includes a media offset identifying a position within said second stream that has already been streamed to said second client by said second server.
  • 3. The method as in claim 2 wherein: said media offset is said starting point.
  • 4. The method as in claim 1 wherein said data includes said second client's internet protocol (“IP”) address.
  • 5. The method as in claim 1 wherein said data includes a current state of said second streaming connection.
  • 6. The method as in claim 1 wherein said data includes a bit-rate between said second client and said second server.
  • 7. The method as in claim 1 wherein said data includes a bit-rate between said first client and said first server.
  • 8. The method as in claim 1 wherein assigning further comprises: said assigning said first server based upon said first server having a server load less than other servers in a plurality of servers.
  • 9. The method as in claim 1 wherein said second server is deemed inoperable because an update of said second client connection data was not received at a predetermined time interval.
  • 10. A method comprising: receiving a first client request from a first client to open a streaming connection;establishing a first streaming connection between said first client and a first server, said first server communicably connected to a first cluster manager, wherein said first server sends a first client connection data to said first cluster manager in periodic intervals;receiving a second client request from a second client to open a streaming connection;establishing a second streaming connection between said second client and a second server, said second server communicably connected to a second cluster manager, wherein said second server sends a second client connection data to said second cluster manager in periodic intervals;continually sending said second client connection data from said second cluster manager to said first cluster manager;continually sending said first client connection data from said first cluster manager to said second cluster manager; sending said second client connection data from said first cluster manager to said first server;sending said first client connection data from said second cluster manager to said second server;updating at said first server and at said second server a said first and a said second client connection data associated with said first streaming connection and said second streaming connection, respectively, wherein said second client connection data comprises an at least one handshake parameter for said second client;determining from said second client connection data periodically sent to the second cluster manager that said second server is either inoperative, has an unmanageable server load, or is having communications/network problems;upon determining that said second server is either inoperative, has an unmanageable server load, or has communications/network problems, communicating to said first cluster manager by said second cluster manager that said second server has a problem;communicating to said 
first server by said first cluster manager that said second server has said problem;upon said first server receiving communication that said second server has said problem, establishing a third streaming connection said third streaming connection to resume said second connection by not sending streaming content already transmitted to said second client and by sending streaming content not yet transmitted to said second client, said third streaming connection established between said first server and said second client; andwherein said second client connection data includes a media offset value identifying a position within a particular streaming file streamed to said second client over said second streaming connections, and streaming said streaming file from said first server to second client over said third streaming connection starting at a point identified by said media offset value.
  • 11. The method as in claim 10 further comprising: updating said first and second client connection data at predetermined time intervals.
  • 12. The method as in claim 11 wherein said first and second client connection data includes a media offset value identifying a position within a particular streaming file streamed to said first and second clients over said first and second streaming connections, respectively.
  • 13. A system comprising: means for storing a client connection data on a first streaming server and a second streaming server, said client connection data including values describing a plurality of client streaming connections that involve different streaming servers, wherein said values comprise an at least one handshake parameter for each of said plurality of client streaming connections, said first streaming server connected to a first cluster manager and said second streaming server connected to a second cluster manager, wherein the first cluster manager is communicably connected to the second cluster manager;means for transmitting portions of said client connection data from said first streaming server to said second streaming server as new clients establish client streaming connections with said first streaming server;means for detecting by a first cluster manager from a first client connection data periodically sent from said first streaming server to said first cluster manager that said first server is either inoperative, has an unmanageable server load, or is having communications/network problems;upon detecting that said first server is either inoperative, has an unmanageable server load, or is having communications/network problems, means for reassigning a client streaming connection from said first streaming server to said second streaming server through said first and second cluster managers responsive to detecting a problem with said first streaming server so as to resume said client streaming connection by not sending streaming content already sent to said client streaming connection's corresponding client and by sending streaming content not yet sent to said corresponding client; andwherein said second client connection data includes a media offset value identifying a position within a particular streaming file streamed to said second client over said second streaming connections, and streaming said streaming file from said first server to 
second client over said third streaming connection starting at a point identified by said media offset value.
  • 14. The system as in claim 13 wherein said means for transmitting further comprises: means for updating said client connection data at predetermined time intervals.
  • 15. The system as in claim 13 wherein said client connection data includes a media offset value identifying a position within a particular streaming file streamed to said corresponding client from said first server over said client streaming connection.
  • 16. The system as in claim 13 wherein said problem is that said second server is inoperative.
  • 17. The system as in claim 14 wherein detecting a problem with said second server comprises not receiving an update of said client connection data at a predetermined time interval.
  • 18. An article of manufacture including a sequence of instructions, which, when executed by a streaming server, cause said streaming server to: store a client connection data, said client connection data including values describing a plurality of client streaming connections that involve different streaming servers, wherein said values comprise an at least one handshake parameter for each of said plurality of client streaming connections, said plurality of streaming servers each connected to a unique cluster manager, wherein the plurality of cluster managers are communicably connected to one another;wherein said client connection data further includes a media offset value identifying a position within a particular streaming file streamed to said client over said streaming connections, said streaming file from said server to client starts at a point identified by said media offset value;transmit new portions of said client connection data as new clients establish new client streaming connections;receive from a cluster manager of said plurality of cluster managers information that a first cluster manager of the plurality of cluster managers has detected from a client connection data periodically sent from a first streaming server to a first cluster manager that said first server is either inoperative, has an unmanageable server load, or is having communications/network problems; andupon receiving information that said first server is either inoperative, has an unmanageable server load, or is having communications/network problems, reassign a client streaming connection from said first streaming server to a second streaming server responsive to detecting a problem with said first streaming server so as to resume said client streaming connection by not sending streaming content already sent to said client streaming connection's corresponding client and by sending streaming content not yet sent to said corresponding client.
  • 19. The article of manufacture as in claim 18 including additional instructions which, when executed by a processor, cause said processor to: update said client connection data at predetermined time intervals.
  • 20. The article of manufacture as in claim 18 wherein said client connection data includes a media offset value identifying a position within a particular streaming file streamed to said corresponding client.
  • 21. The article of manufacture as in claim 19 wherein said client connection data includes a media offset value identifying a position within a buffer that stores a live webcast.
  • 22. A method comprising: streaming a first stream from a first server to a first client over a first streaming connection, said first server communicably connected to a first cluster manager, wherein said first server sends a data relating to said first streaming connection to said first cluster manager in periodic intervals;streaming a first portion of a second stream from a second server to a second client over a second streaming connection, said second server communicably connected to a second cluster manager, wherein said second server sends a data relating to said second streaming connection to said second cluster manager in periodic intervals;continually sending said second client connection data from said second cluster manager to said first cluster manager;continually sending said first client connection data from said first cluster manager to said second cluster manager;sending said second client connection data from said first cluster manager to said first server;sending said first client connection data from said second cluster manager to said second server;maintaining, at said first and second servers, a said data relating to said first and second streaming connections, wherein said data comprises an at least one handshake parameter for said second streaming connection;determining from said data relating to said second streaming connection periodically sent to the second cluster manager that said second server is either inoperative, has an unmanageable server load, or is having communications/network problems;upon determining that said second server is either inoperative, has an unmanageable server load, or has communications/network problems, communicating to said first cluster manager by said second cluster manager that said second server has a problem;communicating to said first server by said first cluster manager that said second server has said problem;upon said first server receiving communication that said second server has said problem, 
assigning said first server to stream a second portion of said second stream to said second client over a third streaming connection, said first server using said data relating to said second streaming connection to comprehend a starting point for said second portion, said second portion including content of said second stream that follows and is not included in the content of said second stream encompassed by said first portion; andwherein said second client connection data includes a media offset value identifying a position within a particular streaming file streamed to said second client over said second streaming connections, and streaming said streaming file from said first server to second client over said third streaming connection starting at a point identified by said media offset value.
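The fail-over flow recited in claim 22 can be sketched in a few dozen lines: each server periodically reports per-client connection data (including the media offset) to its cluster manager, the managers continually replicate that data to each other, and on failure the surviving server resumes the stream at the replicated offset over a new connection. The sketch below is illustrative only; all class and method names (`ClusterManager`, `StreamingServer`, `fail_over`, the `"webcast.rm"` file name) are assumptions, not from the patent.

```python
# Minimal sketch of the claimed fail-over scheme. Hypothetical names throughout.

class ClusterManager:
    """Tracks per-client connection data and mirrors it to a peer manager."""
    def __init__(self):
        self.state = {}   # client_id -> connection data (e.g., media offset)
        self.peer = None  # the other server's cluster manager

    def report(self, client_id, data):
        """Called at periodic intervals by the local streaming server."""
        self.state[client_id] = data
        if self.peer is not None:
            # "Continually sending said client connection data" between managers.
            self.peer.state[client_id] = dict(data)

class StreamingServer:
    def __init__(self, name, manager):
        self.name = name
        self.manager = manager

    def stream(self, client_id, offset):
        # Deliver the next chunk, then report the advanced offset upstream.
        new_offset = offset + 1
        self.manager.report(client_id, {"file": "webcast.rm", "offset": new_offset})
        return new_offset

def fail_over(client_id, surviving_server):
    """Resume the client's stream on the surviving server over a third
    connection, starting at the media offset replicated by the managers."""
    data = surviving_server.manager.state[client_id]
    return surviving_server.stream(client_id, data["offset"])

# Two servers, each with its own cluster manager; the managers exchange state.
mgr_a, mgr_b = ClusterManager(), ClusterManager()
mgr_a.peer, mgr_b.peer = mgr_b, mgr_a
server_a = StreamingServer("A", mgr_a)
server_b = StreamingServer("B", mgr_b)

# Server B streams the first portion of the second stream to client "c2".
offset = 0
for _ in range(5):
    offset = server_b.stream("c2", offset)

# Server B becomes inoperative; server A picks up at the replicated offset,
# so the second portion follows the first without overlap.
resumed_offset = fail_over("c2", server_a)
print(resumed_offset)  # → 6
```

The key design point the claim turns on is that the offset is replicated *before* the failure, so the surviving server never has to query the failed one; it only consults its own cluster manager's copy of the client connection data.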
US Referenced Citations (144)
Number Name Date Kind
3363416 Naeimi et al. Jan 1968 A
4786130 Georgiou et al. Nov 1988 A
4845614 Hanawa et al. Jul 1989 A
4866712 Chao Sep 1989 A
4920432 Eggers et al. Apr 1990 A
4949187 Cohen Aug 1990 A
4949248 Caro Aug 1990 A
5172413 Bradley et al. Dec 1992 A
5253341 Rozmanith et al. Oct 1993 A
5291554 Morales Mar 1994 A
5371532 Gelman et al. Dec 1994 A
5410343 Coddington et al. Apr 1995 A
5414455 Hooper et al. May 1995 A
5440688 Nishida Aug 1995 A
5442389 Blahut et al. Aug 1995 A
5442390 Hooper et al. Aug 1995 A
5442749 Northcutt et al. Aug 1995 A
5448723 Rowett Sep 1995 A
5452448 Sakuraba et al. Sep 1995 A
5461415 Wolf et al. Oct 1995 A
5463768 Cuddihy et al. Oct 1995 A
5475615 Lin Dec 1995 A
5508732 Bottomley et al. Apr 1996 A
5515511 Nguyen et al. May 1996 A
5519435 Anderson May 1996 A
5528281 Grady et al. Jun 1996 A
5544313 Shachnai et al. Aug 1996 A
5544327 Dan et al. Aug 1996 A
5550577 Verbiest et al. Aug 1996 A
5550863 Yurt et al. Aug 1996 A
5550982 Long et al. Aug 1996 A
5557317 Nishio et al. Sep 1996 A
5568181 Greenwood et al. Oct 1996 A
5614940 Cobbley et al. Mar 1997 A
5675723 Ekrot et al. Oct 1997 A
5696895 Hemphill et al. Dec 1997 A
5703938 Lucas et al. Dec 1997 A
5704031 Mikami et al. Dec 1997 A
5712976 Falcon, Jr. et al. Jan 1998 A
5774660 Brendel et al. Jun 1998 A
5778187 Monteiro et al. Jul 1998 A
5805824 Kappe Sep 1998 A
5812751 Ekrot et al. Sep 1998 A
5813009 Johnson et al. Sep 1998 A
5832069 Waters et al. Nov 1998 A
5870553 Shaw et al. Feb 1999 A
5875300 Kamel et al. Feb 1999 A
5892913 Adiga et al. Apr 1999 A
5892915 Duso et al. Apr 1999 A
5923840 Desnoyers et al. Jul 1999 A
5933835 Adams et al. Aug 1999 A
5933837 Kung Aug 1999 A
5948062 Tzelnic et al. Sep 1999 A
5956716 Kenner et al. Sep 1999 A
5987621 Duso et al. Nov 1999 A
6003030 Kenner et al. Dec 1999 A
6011881 Moslehi et al. Jan 2000 A
6016509 Dedrick Jan 2000 A
6016512 Huitema Jan 2000 A
6023470 Lee et al. Feb 2000 A
6026452 Pitts Feb 2000 A
6035412 Tamer et al. Mar 2000 A
6061504 Tzelnic et al. May 2000 A
6067545 Wolff May 2000 A
6070191 Narendran et al. May 2000 A
6078960 Ballard Jun 2000 A
6085193 Malkin et al. Jul 2000 A
6098096 Tsirgotis et al. Aug 2000 A
6108703 Leighton et al. Aug 2000 A
6112239 Kenner et al. Aug 2000 A
6112249 Bader et al. Aug 2000 A
6115752 Chauhan Sep 2000 A
6122752 Farah Sep 2000 A
6134673 Chrabaszcz Oct 2000 A
6151632 Chaddha et al. Nov 2000 A
6161008 Lee et al. Dec 2000 A
6178445 Dawkins et al. Jan 2001 B1
6195680 Goldszmidt et al. Feb 2001 B1
6216051 Hager et al. Apr 2001 B1
6223206 Dan et al. Apr 2001 B1
6230200 Forecast et al. May 2001 B1
6233607 Taylor et al. May 2001 B1
6233618 Shannon May 2001 B1
6233623 Jeffords et al. May 2001 B1
6240462 Agraharam et al. May 2001 B1
6256634 Moshaiov et al. Jul 2001 B1
6266335 Bhaskaran Jul 2001 B1
6298376 Rosner et al. Oct 2001 B1
6298455 Knapman et al. Oct 2001 B1
6301711 Nusbickel Oct 2001 B1
6305019 Dyer et al. Oct 2001 B1
6317787 Boyd et al. Nov 2001 B1
6321252 Bhola et al. Nov 2001 B1
6324582 Sridhar et al. Nov 2001 B1
6330250 Curry et al. Dec 2001 B1
6332157 Mighdoll et al. Dec 2001 B1
6345294 O'Tool et al. Feb 2002 B1
6351775 Yu et al. Feb 2002 B1
6351776 O'Brien et al. Feb 2002 B1
6356916 Yamatari et al. Mar 2002 B1
6363497 Chrabaszcz Mar 2002 B1
6370571 Medin, Jr. Apr 2002 B1
6378129 Zetts Apr 2002 B1
6381627 Kwan et al. Apr 2002 B1
6385690 Iida et al. May 2002 B1
6389462 Cohen et al. May 2002 B1
6408407 Sadler Jun 2002 B1
6415368 Glance et al. Jul 2002 B1
6421714 Rai et al. Jul 2002 B1
6427163 Arendt et al. Jul 2002 B1
6438595 Blumenau et al. Aug 2002 B1
6442588 Clarke et al. Aug 2002 B1
6445784 Uppaluru et al. Sep 2002 B2
6446224 Chang et al. Sep 2002 B1
6457011 Brace et al. Sep 2002 B1
6466949 Yang et al. Oct 2002 B2
6466980 Lumelsky et al. Oct 2002 B1
6484143 Swildens et al. Nov 2002 B1
6487555 Bharat et al. Nov 2002 B1
6502125 Kenner et al. Dec 2002 B1
6502205 Yanai et al. Dec 2002 B1
6516350 Lumelsky et al. Feb 2003 B1
6538996 West et al. Mar 2003 B1
6539381 Prasad et al. Mar 2003 B1
6564380 Murphy May 2003 B1
6598071 Hayashi et al. Jul 2003 B1
6604117 Lim et al. Aug 2003 B2
6697806 Cook Feb 2004 B1
6732186 Hebert May 2004 B1
6735717 Rostowfske et al. May 2004 B1
6779034 Mundy et al. Aug 2004 B1
6799221 Kenner et al. Sep 2004 B1
6804703 Allen et al. Oct 2004 B1
6816904 Ludwig et al. Nov 2004 B1
6823348 Benayoun et al. Nov 2004 B2
6920454 Chan Jul 2005 B1
6959368 St. Pierre et al. Oct 2005 B1
6970913 Albert et al. Nov 2005 B1
6983317 Bishop et al. Jan 2006 B1
7013322 Lahr Mar 2006 B2
7111052 Cook Sep 2006 B1
20010044851 Rothman et al. Nov 2001 A1
20020007494 Hodge Jan 2002 A1
20030055967 Worley Mar 2003 A1
Foreign Referenced Citations (7)
Number Date Country
0 649 121 Oct 1994 EP
0 651 554 Oct 1994 EP
WO 9742582 Nov 1997 WO
WO9742582 Nov 1997 WO
WO 9859486 Dec 1998 WO
WO 9948246 Sep 1999 WO
WO 9948246 Sep 1999 WO