This application claims priority to U.S. patent application Ser. No. 09/895,872, filed Jun. 28, 2001, titled “Improved Startup Methods And Apparatuses For Use In Streaming Content”, and is related to co-pending U.S. patent application Ser. No. 09/636,004, filed Aug. 9, 2000, and titled “Fast Dynamic Measurement of Connection Bandwidth”, which is incorporated by reference herein.
The present invention relates generally to streaming media devices, and more particularly to methods and apparatuses that provide improved end-user startup times when streaming content.
Today, nearly every computer user is well accustomed to the broadcast television medium. When a new television channel is requested, the new channel is generally displayed (rendered) very quickly. The same can be said for conventional broadcast radio stations.
Unfortunately, the same cannot be said for conventional Internet-streamed content. Streaming media is typically provided from a server device to a client device over the Internet or other like network. For a variety of technical reasons, the end-user experience can be degraded, for example, by pauses in the rendering due to late-arriving or resent data. Such pauses tend to occur randomly and, in certain configurations, occur very rarely. There is, however, one pause that most end-users experience every time they select a streaming media program, namely, a slow startup time.
This poor startup experience tends to inhibit the adoption of streaming media in many markets. It also tends to reduce the amount of time end-users are willing to use the technology. As a result, channel “surfing” is largely impractical with conventional streaming techniques. Hence, there is a need for improved streaming media methods and apparatuses that can significantly reduce the startup time that the end-user experiences.
In accordance with certain aspects of the present invention, improved streaming media methods and apparatuses are provided that significantly reduce the startup time that the end-user experiences.
By way of example, the above-stated needs and others are met by a system in accordance with certain implementations of the present invention. The system includes a client device and a server device, which are operatively connected through a communication link. The client device is configured to send at least one startup request to the server device over the communication link. The startup request identifies a streamable media content that is to be provided to the client device, a communication link bandwidth associated with the communication link, and an amount of the desired streamable media content that is to be provided at a bitrate greater than the encoded bitrate, but no greater than about the communication link bandwidth. The server device is configured to buffer at least the amount of the streamable media content and transmit the amount of the buffered streamable media content at the higher bitrate. After transmitting the amount of the buffered streamable media content, the server device transmits subsequent portions of the streamable media content to the client device at a bitrate about equal to the encoded bitrate. The client device is configured to buffer received streamable media content, and subsequently render the buffered streamed media content.
In accordance with certain implementations, the server device locates a discrete rendering point in the amount of the buffered streamable media content and initiates transmission beginning with the discrete rendering point.
In accordance with certain further implementations, the client device determines the communication link bandwidth.
A more complete understanding of the various methods and apparatuses of the present invention may be had by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein:
Turning to the drawings, wherein like reference numerals refer to like elements, the invention is illustrated as being implemented in a suitable computing environment. Although not required, portions of the invention are described in the general context of computer-executable instructions, such as program modules, being executed by a computer or like device, which, for example, may take the form of a personal computer (PC), a workstation, a portable computer, a server, a plurality of processors, a mainframe computer, a wireless communications base station, a hand-held communications device, a streamed media player, a set-top box, etc.
Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The various exemplary implementations of the present invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
As provided herein, the term “logic” is meant to apply to any form of logic and requisite supporting elements, including, e.g., software, firmware, hardware, and/or any combination thereof.
Exemplary computing environment 120 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the improved methods and apparatuses described herein. Neither should computing environment 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in computing environment 120.
The improved methods and apparatuses herein are operational with numerous other general purpose and/or special purpose computing system environments or configurations.
As shown in
Bus 136 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as the Mezzanine bus.
Computer 130 typically includes a variety of computer readable media. Such media may be any available media that is accessible by computer 130, and it includes both volatile and non-volatile media, removable and non-removable media.
In
Computer 130 may further include other removable/non-removable, volatile/non-volatile computer storage media. For example,
The drives and associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for computer 130. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 148 and a removable optical disk 152, it should be appreciated by those skilled in the art that other types of computer-readable media that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read-only memories (ROMs), and the like, may also be used in the exemplary operating environment.
A number of program modules may be stored on the hard disk, magnetic disk 148, optical disk 152, ROM 138, or RAM 140, including, e.g., an operating system 158, one or more application programs 160, other program modules 162, and program data 164.
The improved methods and apparatuses described herein may be implemented within operating system 158, one or more application programs 160, other program modules 162, and/or program data 164.
A user may enter commands and information into computer 130 through input devices such as keyboard 166 and pointing device 168 (such as a “mouse”). Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, camera, etc. These and other input devices are connected to the processing unit 132 through a user input interface 170 that is coupled to bus 136, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).
A monitor 172 or other type of display device is also connected to bus 136 via an interface, such as a video adapter 174. In addition to monitor 172, personal computers typically include other peripheral output devices (not shown), such as speakers and printers, which may be connected through output peripheral interface 175.
Computer 130 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 182. Remote computer 182 may include many or all of the elements and features described herein relative to computer 130.
Logical connections shown in
When used in a LAN networking environment, computer 130 is connected to LAN 177 via network interface or adapter 186. When used in a WAN networking environment, the computer typically includes a modem 178 or other means for establishing communications over WAN 179. Modem 178, which may be internal or external, may be connected to system bus 136 via the user input interface 170 or other appropriate mechanism.
Depicted in
In a networked environment, program modules depicted relative to computer 130, or portions thereof, may be stored in a remote memory storage device. Thus, e.g., as depicted in
Reference is now made to
Streaming media system 200 is arranged in a client-centric configuration in which client device 206 selects a streaming media program on server device 202, specifies that a particular fast startup is desired, and provides information to server device 202 about the communication link over which the streaming media will be carried. In this manner, client device 206 is able to control the streaming media process and server device 202. As described in greater detail below, in controlling the streaming media the client device 206 causes server device 202 to stream media during an initial period of time at a data bitrate that is greater than the media's encoded bitrate. This allows client device 206 to quickly receive data and begin the rendering process sooner.
This is unlike previous server-centric solutions used to provide video on-demand, such as, for example, the system and method presented in U.S. Pat. No. 5,963,202, issued to Nathaniel Polish. In such server-centric systems, the server device, rather than the client device, has control over a video data transfer. Thus, for example, a server needs to determine how much video data can be transferred over the communication link and when to transfer it during the progressive download. One of the drawbacks to a server-centric system is that the server is required to monitor, for every client device, the status of the communications link and data buffers in the client device. While a progressive video download technique may be efficient for an in-home or hotel video-on-demand system, it would likely prove inefficient in a larger network environment, such as, for example, the Internet, a corporate intranet, a wide area network, a wireless network, etc.
Returning to
As its name suggests, media server 208 is configured to serve or otherwise provide streaming media to client device 206. To accomplish this task, media server 208 exchanges information with client device 206 through interface 222 and network 204. The techniques and protocols used to provide communications between server device 202 and client device 206 are well known and therefore will not be described in great detail.
Media server 208 identifies the availability of streaming media programs to client device 206. In this example, media server 208 accesses/receives streaming media programs from two sources, namely media storage device 216 and broadcast media device 220. Media storage device 216 is representative of a data storage device, such as, for example, one or more magnetic disk drives, one or more optical disc drives, and the like. Here, media storage device 216 is configured to allow media server 208 to stream media “on-demand” to client device 206. As used herein, “on-demand” means that the media has been stored in media storage device 216 and made available for streaming and replay at subsequent times. Thus, for example, an earlier news program may be recorded and stored in its entirety on media storage device 216 and subsequently made available on-demand.
By contrast, broadcast media device 220 is representative of media that has not been stored to any significant extent, and certainly not in its entirety. An example would be a news program that is being aired in real-time or near real-time. Such a media program would not, therefore, be available on-demand. When client device 206 selects this broadcast media stream, the streaming media will “jump” into the news program at about the point where it is being aired.
Buffer 212 is used by media server 208 to temporarily store media data in support of the streaming process. Buffer 212 typically includes random access memory.
As shown, client device 206 includes a media client 224. Media client 224 is configured to support the selection, receipt and rendering of streaming media from server device 202 via interface 232 and network 204. To further accomplish its tasks, media client 224 includes fast startup logic 226 and renderer 228. Media client 224 is also operatively coupled to a buffer 230. Buffer 230 typically includes random access memory. Renderer 228 is configured to process the streamed media data and render the data as applicable for client device 206 and the received media. Rendering processes are well known, and the details of such are beyond the scope of the present invention.
With this exemplary streaming media system in mind, this detailed description will now focus on the functionality of fast startup logic 210 in server device 202 and the corresponding fast startup logic 226 of media client 224 in client device 206.
Media client 224 requires buffering of the streaming data for a variety of reasons. For example, buffering allows client device 206 to request and successfully obtain retransmissions when content packets are lost without impacting continuous playback. Buffering also allows playback to be smooth on networks that have jitter or inconsistent bandwidth response. Highly efficient compression technology often requires a significant duration of content (e.g., an entire frame or more) to be present on the client before decompression can begin. All of these issues contribute to the necessity of buffering content by media client 224.
Depending on the compression technology and the content type, buffering can vary anywhere from less than 1 second to many seconds. Certain conventional media players, for example, buffer roughly five seconds' worth of content before allowing any rendering to begin. Because conventional streaming media servers are designed to deliver the content at the encoded bitrate, the end-user will have to wait at least five seconds for the buffers to fill and rendering to begin.
Since broadcast media is typically already running when most client devices connect, a client device may be required to wait before even beginning the buffering process. With typical compression technologies in use today, for example, buffering needs to start at certain discrete points in the stream. When an individual client device subscribes to a conventional broadcast stream, it will need to wait for one of these discrete points to appear in the stream before even starting the buffering process. The frequency of these discrete points can vary dramatically depending on the compression technology used, the content type, and even the content characteristics, ranging from several times a second to as infrequently as once every sixteen seconds or more.
Given that a conventional streaming media client must first wait to find a discrete entry point and then wait for the buffers to fill, the user often experiences significant delay when attempting to start rendering a streaming media broadcast. In accordance with certain aspects of the present invention, methods and apparatuses are provided that tend to significantly reduce the time required to fill the client device's buffer(s) and ultimately allow rendering to begin faster for both broadcast and on-demand content. Thus, for example, in certain exemplary implementations, additional available network bandwidth is utilized to accelerate the streaming of content and, as such, to fill the client device's buffer(s) faster. Hence the term “fast startup”.
In accordance with certain aspects of the present invention, the various fast startup methods and apparatuses can be implemented by extending the usage/syntax of conventional streaming protocols, such as, for example, Microsoft Media Server (MMS), Real Time Streaming Protocol (RTSP), HyperText Transfer Protocol (HTTP), and the like.
Co-pending U.S. patent application Ser. No. 09/636,004, filed Aug. 9, 2000, and titled “Fast Dynamic Measurement of Connection Bandwidth”, which is incorporated by reference herein, describes, in greater detail, techniques by which media client 224 can determine the bandwidth present between client device 206 and server device 202, prior to requesting the actual delivery of the streaming media. This bandwidth is known as the link bandwidth.
Basically, the fast dynamic measurement of connection bandwidth utilizes a single pair of packets to calculate the bandwidth between client device 206 and server device 202. This calculation is based upon a packet-pair technique and is extremely quick. On its journey across network 204, communication equipment and modems may compress a packet. Such compression shrinks the packet and can therefore distort a bandwidth calculation based on the shrunken packet. To avoid this distortion, the fast dynamic measurement of connection bandwidth employs non-compressible, highly entropic packets, so that a packet cannot be compressed during its journey. In addition, on its journey across network 204, a packet may be rerouted, delayed, misrouted, and the like. These momentary delays may result in a momentarily bad bandwidth calculation. This problem is ameliorated by using a history list (not shown) at media client 224 that keeps track of recent measurements. Media client 224 can then determine the median value from the history list. That median value is representative of the link bandwidth.
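By way of illustration, the following sketch shows one way such a packet-pair estimate with a median-of-history filter might be implemented. The helper names, history length, and payload size are hypothetical and are not taken from the referenced application.

```python
import random
import statistics

HISTORY_LEN = 7  # number of recent measurements kept (illustrative)

def make_entropic_payload(size: int) -> bytes:
    # Highly random (entropic) bytes resist compression by intermediate equipment.
    return bytes(random.getrandbits(8) for _ in range(size))

class LinkBandwidthEstimator:
    """Packet-pair bandwidth estimate with a median-of-history filter (illustrative)."""

    def __init__(self) -> None:
        self.history = []

    def record_pair(self, packet_size_bits: int, arrival_gap_s: float) -> None:
        # Back-to-back packets arrive separated roughly by the time the
        # bottleneck link needs to forward one packet, so:
        #   bandwidth ~= packet_size / arrival_gap
        if arrival_gap_s > 0:
            self.history.append(packet_size_bits / arrival_gap_s)
            self.history = self.history[-HISTORY_LEN:]

    def link_bandwidth_bps(self) -> float:
        # The median damps momentary bad samples caused by rerouting or delays.
        return statistics.median(self.history) if self.history else 0.0

# e.g. a 1,500-byte pair arriving 12 ms apart suggests about 1,000,000 bits/s.
```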
This represents one exemplary technique for determining the link bandwidth. Those skilled in the art will recognize that other techniques may be employed to determine the link bandwidth to some degree of certainty.
Media client 224 can use a conventional protocol, such as, e.g., the Session Description Protocol (SDP), to communicate with media server 208 and identify the location and characteristics of the available streaming media.
In this manner, media client 224 is able to determine both the link bandwidth and the bandwidth of the individual stream(s) in the streaming media program. As such, fast startup logic 226 in media client 224 can request that the content be initially streamed at a rate faster than the encoded bitrate of the content. This request for fast startup is handled by fast startup logic 210 in media server 208.
Assuming normal playback speed, streaming the content at a rate greater than the encoded bitrate implies that the amount of data in client buffer 230 will increase over time. It is undesirable to continue to stream the content at a rate faster than the encoded bitrate indefinitely, given the limited amount of memory in buffer 230. Instead, client buffer 230 is sufficiently filled at the fast rate at the beginning of the streaming process, and the streaming rate subsequently changes to roughly match the encoded bitrate of the media program (file). This design has the benefit of using the additional link bandwidth to quickly fill client buffer 230 without requiring additional memory in buffer 230.
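The relationship between the accelerated rate and buffer growth can be sketched as follows, assuming rendering consumes data at the encoded bitrate from the start of playback; the function and its parameters are illustrative and not part of the specification.

```python
def client_buffer_bits(t_seconds: float,
                       accel_rate_bps: float,
                       encoded_rate_bps: float,
                       accel_duration_s: float) -> float:
    """Approximate un-rendered data held in client buffer 230 at time t,
    assuming rendering consumes data at the encoded bitrate from t = 0."""
    if t_seconds <= accel_duration_s:
        # During acceleration the buffer grows by the difference in rates.
        return (accel_rate_bps - encoded_rate_bps) * t_seconds
    # Afterwards the server sends at about the encoded bitrate, so the
    # buffer level holds roughly steady at its end-of-acceleration value.
    return (accel_rate_bps - encoded_rate_bps) * accel_duration_s
```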
Fast startup logic 210, within media server 208, is configured to respond to the fast startup request by streaming the content at the faster rate. In the case of broadcast media, such as a live video feed, fast startup logic 210 temporarily stores a portion of the streaming broadcast media to server buffer 212. In this manner, new client devices connecting to server device 202 can be sent content packets at a rate greater than the encoded bitrate of the broadcast stream.
Thus, for example, in certain implementations if the content is encoded at 16 kbps, then fast startup logic 210 will store the previous 10 seconds of the broadcast media in buffer 212. This exemplary buffering process therefore requires 20 Kbytes of memory. As a result, client devices that connect after the broadcast has started are able to request up to about 10 seconds of content at a rate much faster than 16 kbps.
This is just one example; in other implementations, the buffering process may store a longer or shorter amount of the broadcast media in buffer 212.
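The memory figure in the example above follows directly from the encoded bitrate and the buffered duration, as the short calculation below illustrates (the helper name is hypothetical).

```python
def server_buffer_bytes(encoded_bitrate_bps: int, buffered_seconds: int) -> int:
    # Memory needed to hold the most recent portion of the broadcast stream.
    return encoded_bitrate_bps * buffered_seconds // 8

# 16 kbps content buffered for 10 seconds -> 20,000 bytes (about 20 Kbytes).
assert server_buffer_bytes(16_000, 10) == 20_000
```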
Startup logic 210 is further configured to intelligently decide where to start sending content packets from buffer 212 as new clients connect to the broadcast. For example, assume that a broadcast program is running and a new client connects to server 202. If startup logic 210 has buffered the previous 10 seconds of content in buffer 212, then theoretically fast startup logic 210 can start sending content at roughly any point from time (ConnectTime_clientX − 10) to ConnectTime_clientX.
However, starting the streaming of content at the beginning of the 10 second buffer can be problematic because the content residing at (ConnectTime_clientX − 10) may not contain a discrete starting point as required by media client 224. Typically, for certain types of streamed content, media client 224 can only start rendering the content at discrete points within the streamed data, such as, for example, certain frame boundaries or “key frames”. By way of example, in MPEG streams, I frames are key frames, P frames are not. See, for further example,
Consequently, startup logic 210 is advantageously configured to selectively scan through the buffered content to locate, and/or otherwise identify/keep track of, a discrete point at which to start the streaming process for a new client device. Preferably, the discrete point will be the earliest one in buffer 212.
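One possible sketch of such a scan is shown below; the packet record and helper names are hypothetical, and a production server would more likely track key-frame positions as packets are buffered rather than scanning on demand.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BufferedPacket:
    timestamp_ms: int
    is_key_frame: bool   # True at a discrete rendering point (e.g., an I frame)
    payload: bytes

def earliest_start_index(buffer_list: List[BufferedPacket]) -> Optional[int]:
    """Index of the earliest discrete rendering point in the server's
    buffer list, or None if no such point is currently buffered."""
    for index, packet in enumerate(buffer_list):
        if packet.is_key_frame:
            return index
    return None
```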
Propagation latency is another factor in determining where to start sending content from the buffered list. Since essentially “old” (i.e., buffered) content is sent to new client devices, and the event may be a live broadcast, a time shift is introduced. The magnitude of the time shift resulting from fast startup logic 210 (and media server 208) is related to the amount of buffering done as well as the starting point chosen for content sent to new client devices.
Client devices can randomly connect at any point during a broadcast, and the 10 second buffer list used in this example is constantly changing, similar to a “waterfall” or “sliding window”. Therefore, the amount of content sent at a rate greater than the encoded bitrate and the starting point for transmission of content will vary over time. Furthermore, since each client device may have a different link bandwidth, the rate of the accelerated transmission may also vary. Each client device may even have different client-side buffer settings. All of these factors imply that client devices will not be synchronized during the rendering process.
If a client device connecting to server 202 does not have considerable additional network bandwidth available, starting from the earliest usable point in the content buffer list may unnecessarily increase the propagation latency for that specific client device. Thus, server device 202, and more particularly fast startup logic 210, is configured to “balance” the need for minimizing the startup time with the need for minimizing the propagation time. For example, to help balance the conflicting requirements of minimizing propagation latency and startup latency, logic similar to the following can be employed:
AccelDuration = RequestedAccelDuration − (AccelRate * RequestedAccelDuration);
The above exemplary logic essentially reduces the amount of “time-shifted” content sent from server buffer 212 as the encoded bitrate of the content approaches the available link bandwidth.
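Read together with the surrounding description, AccelRate is most naturally interpreted as the fraction of the link bandwidth consumed by the encoded bitrate; the sketch below makes that assumption explicit and is illustrative only.

```python
def accel_duration_ms(requested_accel_duration_ms: float,
                      encoded_bitrate_bps: float,
                      link_bandwidth_bps: float) -> float:
    """Shrink the amount of time-shifted content as the encoded bitrate
    approaches the available link bandwidth (assumed reading of AccelRate)."""
    accel_rate = min(encoded_bitrate_bps / link_bandwidth_bps, 1.0)
    return requested_accel_duration_ms - (accel_rate * requested_accel_duration_ms)

# e.g. 10,000 ms requested for 56 kbps content on a 700 kbps link leaves
# about 9,200 ms of accelerated, time-shifted content.
```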
For on-demand content, fast startup logic 210 is configured similarly to the broadcast scenario, except that there is no existing buffer when client device 206 connects. Therefore, fast startup logic 210 builds a buffer list quickly to satisfy the fast startup request. This is possible because a typical media storage device 216 is capable of delivering the on-demand content at a rate that is much faster than what client device 206 is requesting.
In certain exemplary implementations, the actual protocol mechanism used by client device 206 to request the accelerated buffering involves the use of headers. By way of example, for the RTSP protocol, a header “X-Accelerate-Streaming” is defined, which is used with the PLAY command. This header includes information regarding the client request for the duration of the acceleration and also the bandwidth to use for the acceleration. For example, “AccelDuration=10000;AccelBandwidth=1048576” might be included in a typical “X-Accelerate-Streaming” header by the client. This would inform the server that the client wishes to have 10,000 ms worth of content accelerated at a rate of 1,048,576 bits/s.
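By way of illustration, a client might attach this header to an RTSP PLAY request as sketched below; the URL, CSeq, and session values are placeholders, and only the header name and parameter syntax come from the example above.

```python
def build_rtsp_play_request(url: str, session_id: str,
                            accel_duration_ms: int,
                            accel_bandwidth_bps: int) -> str:
    # The X-Accelerate-Streaming header carries the fast startup parameters.
    return (
        f"PLAY {url} RTSP/1.0\r\n"
        f"CSeq: 3\r\n"
        f"Session: {session_id}\r\n"
        f"X-Accelerate-Streaming: AccelDuration={accel_duration_ms};"
        f"AccelBandwidth={accel_bandwidth_bps}\r\n"
        "\r\n"
    )

# e.g. build_rtsp_play_request("rtsp://example.com/news", "12345678", 10000, 1048576)
```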
With the HTTP protocol, for example, client fast startup logic 226 can use directives in the commonly used PRAGMA header in the GET command to specify the fast startup parameters. The text below shows the contents of a sample PRAGMA header in a GET request asking for fast startup.
“LinkBW=2147483647, AccelBW=1048576, AccelDuration=10000”
In this exemplary request, the client fast startup logic 226 is informing server fast startup logic 210 that the link bandwidth is 2,147,483,647 bits/s, but it only wants the content accelerated at a rate of 1,048,576 bits/s for a duration of 10,000 ms.
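A corresponding sketch for the HTTP case is shown below; the host and path are placeholders, and only the Pragma directive names come from the example above.

```python
def build_http_get_request(host: str, path: str,
                           link_bw_bps: int, accel_bw_bps: int,
                           accel_duration_ms: int) -> str:
    # The fast startup directives travel in the Pragma header of the GET request.
    pragma = (f"LinkBW={link_bw_bps}, AccelBW={accel_bw_bps}, "
              f"AccelDuration={accel_duration_ms}")
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Pragma: {pragma}\r\n"
        "\r\n"
    )
```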
Reference is now made to
In step 410, fast startup logic 226 requests streamable content from fast startup logic 210. In step 410, fast startup logic 226 selects the initial fast streaming bitrate and the subsequent slower streaming bitrate. Fast startup logic 226 also determines an amount of streamed media that is to be sent at the initial fast streaming bitrate.
For example, based on buffer 230 settings, the link bandwidth, and the encoded bitrate of the content, fast startup logic 226 can decide whether to submit a request to accelerate the transmission of content in order to fill buffer 230 quickly. If client device 206 decides to request fast startup, custom header syntax can be added to the final command that initiates the delivery of content from server device 202.
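One possible form of that decision is sketched below; the policy of accelerating for the full client buffer duration at about the link bandwidth is an illustrative assumption, not a requirement of the described implementations.

```python
from typing import Optional, Tuple

def choose_fast_startup(link_bw_bps: float,
                        encoded_bitrate_bps: float,
                        client_buffer_ms: int) -> Optional[Tuple[int, int]]:
    """Return (AccelDuration in ms, AccelBandwidth in bits/s) when a fast
    startup request looks worthwhile, or None to stream at the encoded rate."""
    if link_bw_bps <= encoded_bitrate_bps:
        return None                          # no spare bandwidth to exploit
    accel_duration_ms = client_buffer_ms     # enough to fill buffer 230 once
    accel_bandwidth_bps = int(link_bw_bps)   # accelerate up to about the link bandwidth
    return accel_duration_ms, accel_bandwidth_bps
```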
Thereafter, in step 410, media client 224 begins receiving streamed content from media server 208. In step 412, a beginning portion of the content is received at the initial faster streaming bitrate, which is greater than the encoded bitrate. Subsequently, in step 414, further portions of the streamed content are received at the slower streaming bitrate, which is about equal to the encoded bitrate.
Reference is now made to
Alternatively, if the requested streamable content includes broadcast content, then, in step 508, fast startup logic 210 uses the fast startup parameters received from fast startup logic 226 to determine the point in buffer 212 at which to begin streaming the broadcast content. When possible, fast startup logic 210 will preferably start the streaming at a discrete starting point in the buffer list so that media client 224 can immediately begin buffering useful content packets.
Next, in step 510, fast startup logic 210 initially streams the applicable content at the faster streaming bitrate, and subsequently, in step 512, at the lower streaming bitrate.
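A simplified sketch of this two-phase delivery follows; real servers pace packets against a clock and transport-level feedback, so the per-packet sleep here is purely illustrative.

```python
import time
from typing import Callable, Iterable

def stream_content(packets: Iterable[bytes],
                   send: Callable[[bytes], None],
                   encoded_bitrate_bps: float,
                   accel_bitrate_bps: float,
                   accel_duration_s: float) -> None:
    """Send roughly accel_duration_s worth of content at the accelerated
    rate, then continue at approximately the encoded bitrate."""
    accel_bits_budget = encoded_bitrate_bps * accel_duration_s  # content bits to accelerate
    sent_bits = 0.0
    for packet in packets:
        rate = accel_bitrate_bps if sent_bits < accel_bits_budget else encoded_bitrate_bps
        send(packet)
        sent_bits += len(packet) * 8
        time.sleep(len(packet) * 8 / rate)   # crude pacing, for illustration only
```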
Reference is now made to
For example, assume that content packets would normally be streamed at a fixed rate of about 56 kbps, even though the link bandwidth for the client device is about 700 kbps. In the fast startup scenario illustrated above, as requested, the content packets that comprise about the first 10 seconds of the media can be streamed at about the link bandwidth rate. Here, this would take roughly 0.8 seconds. Thereafter, the remaining content packets are streamed at the lower encoded bitrate.
In this example, if the round trip time is reasonably short in duration, then media server 208 will begin the fast startup stream about 0.1 seconds after the request is made. Media client 224 will have received about 5 seconds of the streaming media program at about 0.5 seconds following the initial request, and can begin rendering at about that time. The requested 10 seconds of fast startup streamed media will have been received at about 0.9 seconds following the initial request. At that time, renderer 228 will have rendered about 0.4 seconds of content, and about 9.6 seconds of content will be stored in buffer 230.
Thus, in this example, the startup time was reduced from over 5 seconds to less than about 1 second. Furthermore, client device 206 will be able to maintain about 10 seconds of buffered content. This additional buffering allows client device 206 to avoid short pauses due, for example, to jitter, as well as potentially longer network brownouts.
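The figures in this example follow from straightforward arithmetic, reproduced below as a check (the 0.1 second server delay is the assumed round-trip allowance mentioned above).

```python
encoded_bps = 56_000      # encoded bitrate of the content
link_bps = 700_000        # measured link bandwidth
accel_seconds = 10        # requested amount of accelerated content
server_delay_s = 0.1      # assumed delay before the stream begins

accel_transfer_s = encoded_bps * accel_seconds / link_bps                 # 0.8 s to send 10 s of content
five_seconds_received_at = server_delay_s + encoded_bps * 5 / link_bps    # ~0.5 s, rendering can begin
ten_seconds_received_at = server_delay_s + accel_transfer_s               # ~0.9 s
rendered_so_far = ten_seconds_received_at - five_seconds_received_at      # ~0.4 s rendered by then
still_buffered = accel_seconds - rendered_so_far                          # ~9.6 s remaining in buffer 230

print(accel_transfer_s, five_seconds_received_at, ten_seconds_received_at,
      rendered_so_far, still_buffered)
```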
Although some preferred implementations of the various methods and apparatuses of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the invention is not limited to the exemplary implementations disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the spirit of the invention as set forth and defined by the following claims.
Number | Name | Date | Kind |
--- | --- | --- | ---
4963995 | Lang | Oct 1990 | A |
5057932 | Lang | Oct 1991 | A |
5132964 | Esaki | Jul 1992 | A |
5164839 | Lang | Nov 1992 | A |
5262875 | Mincer et al. | Nov 1993 | A |
5440334 | Walters et al. | Aug 1995 | A |
5568181 | Greenwood et al. | Oct 1996 | A |
5710970 | Walters et al. | Jan 1998 | A |
5758076 | Wu et al. | May 1998 | A |
5787472 | Dan et al. | Jul 1998 | A |
5822524 | Chen et al. | Oct 1998 | A |
5822537 | Katseff et al. | Oct 1998 | A |
5835495 | Ferriere | Nov 1998 | A |
5850449 | McManis | Dec 1998 | A |
5872920 | Hausman et al. | Feb 1999 | A |
5890010 | Nishigami | Mar 1999 | A |
5913038 | Griffiths | Jun 1999 | A |
5931961 | Ranganathan et al. | Aug 1999 | A |
5963202 | Polish | Oct 1999 | A |
5978567 | Rebane et al. | Nov 1999 | A |
5983263 | Rothrock et al. | Nov 1999 | A |
5995705 | Lang | Nov 1999 | A |
5996015 | Day et al. | Nov 1999 | A |
6005621 | Linzer et al. | Dec 1999 | A |
6014694 | Aharoni et al. | Jan 2000 | A |
6014706 | Cannon et al. | Jan 2000 | A |
6041345 | Levi et al. | Mar 2000 | A |
6054943 | Lawrence | Apr 2000 | A |
6111567 | Savchenko et al. | Aug 2000 | A |
6118817 | Wang | Sep 2000 | A |
6120149 | Hosoi | Sep 2000 | A |
6161201 | Payne et al. | Dec 2000 | A |
6195692 | Hsu | Feb 2001 | B1 |
6209041 | Shaw et al. | Mar 2001 | B1 |
6216163 | Bharall et al. | Apr 2001 | B1 |
6262990 | Ejiri | Jul 2001 | B1 |
6272148 | Takagi et al. | Aug 2001 | B1 |
6292834 | Ravi et al. | Sep 2001 | B1 |
6292880 | Mattis et al. | Sep 2001 | B1 |
6314492 | Allen et al. | Nov 2001 | B1 |
6327421 | Tiwari et al. | Dec 2001 | B1 |
6329165 | Chattoraj et al. | Dec 2001 | B1 |
6343298 | Savchenko et al. | Jan 2002 | B1 |
6351767 | Batchelder et al. | Feb 2002 | B1 |
6369835 | Lin | Apr 2002 | B1 |
6385647 | Willis et al. | May 2002 | B1 |
6405256 | Lin et al. | Jun 2002 | B1 |
6407680 | Lai et al. | Jun 2002 | B1 |
6421348 | Gaudet et al. | Jul 2002 | B1 |
6449269 | Edholm | Sep 2002 | B1 |
6480498 | Gaudet et al. | Nov 2002 | B1 |
6484199 | Eyal | Nov 2002 | B2 |
6493748 | Nakayama et al. | Dec 2002 | B1 |
6502135 | Munger et al. | Dec 2002 | B1 |
6553376 | Lewis et al. | Apr 2003 | B1 |
6601009 | Florschuetz | Jul 2003 | B2 |
6611868 | Arutyunov | Aug 2003 | B1 |
6611898 | Slattery et al. | Aug 2003 | B1 |
6614763 | Kikuchi et al. | Sep 2003 | B1 |
6643259 | Borella et al. | Nov 2003 | B1 |
6691312 | Sen et al. | Feb 2004 | B1 |
6725333 | Degenaro et al. | Apr 2004 | B1 |
6735634 | Geagan, III et al. | May 2004 | B1 |
6757255 | Aoki et al. | Jun 2004 | B1 |
6760749 | Dunlap et al. | Jul 2004 | B1 |
6760765 | Asai et al. | Jul 2004 | B1 |
6765878 | Carlson | Jul 2004 | B1 |
6772375 | Banga | Aug 2004 | B1 |
6779043 | Crinion | Aug 2004 | B1 |
6785288 | Enns et al. | Aug 2004 | B1 |
6792449 | Colville et al. | Sep 2004 | B2 |
6801947 | Li | Oct 2004 | B1 |
6892354 | Servan-Schreiber et al. | May 2005 | B1 |
6952424 | Bass et al. | Oct 2005 | B1 |
6954430 | Haglund | Oct 2005 | B2 |
6986018 | O'Rourke et al. | Jan 2006 | B2 |
6990512 | Major et al. | Jan 2006 | B1 |
7007090 | Spangler et al. | Feb 2006 | B1 |
7020087 | Steinberg et al. | Mar 2006 | B2 |
7051110 | Hagai et al. | May 2006 | B2 |
7054774 | Batterberry et al. | May 2006 | B2 |
7054911 | Lango et al. | May 2006 | B1 |
7054949 | Jennings | May 2006 | B2 |
RE39184 | Schloss et al. | Jul 2006 | E |
7073028 | Lango et al. | Jul 2006 | B2 |
7076560 | Lango et al. | Jul 2006 | B1 |
7133881 | Sirivara et al. | Nov 2006 | B2 |
7185082 | del Val et al. | Feb 2007 | B1 |
7356605 | Khouri et al. | Apr 2008 | B1 |
20020047899 | Son et al. | Apr 2002 | A1 |
20020048448 | Daniels | Apr 2002 | A1 |
20020049817 | Drory et al. | Apr 2002 | A1 |
20020077900 | Thompson et al. | Jun 2002 | A1 |
20020090027 | Karczewicz et al. | Jul 2002 | A1 |
20020097727 | Prakash | Jul 2002 | A1 |
20020138641 | Taylor et al. | Sep 2002 | A1 |
20020170067 | Norstrom et al. | Nov 2002 | A1 |
20020194608 | Goldhor | Dec 2002 | A1 |
20030018799 | Eyal et al. | Jan 2003 | A1 |
20030055809 | Bhat | Mar 2003 | A1 |
20030099364 | Thompson et al. | May 2003 | A1 |
20030236902 | Weiss et al. | Dec 2003 | A1 |
20030236912 | Klemets et al. | Dec 2003 | A1 |
20040003101 | Roth et al. | Jan 2004 | A1 |
20040054912 | Adent et al. | Mar 2004 | A1 |
20040244010 | Kleyman et al. | Dec 2004 | A1 |
20050152400 | Suzuki | Jul 2005 | A1 |
20050157714 | Shlissel et al. | Jul 2005 | A1 |
20050256941 | Armstrong et al. | Nov 2005 | A1 |
Number | Date | Country |
--- | --- | ---
20020069272 | Aug 2002 | KR |
WO0143445 | Jun 2001 | WO |
WO0230125 | Apr 2002 | WO |
WO0245372 | Jun 2002 | WO |
Number | Date | Country
--- | --- | ---
20050044166 A1 | Feb 2005 | US
Number | Date | Country
--- | --- | ---
Parent 09895872 | Jun 2001 | US
Child 10929191 | | US