Apparatus, system, and method for adaptive-rate shifting of streaming content

Information

  • Patent Grant
  • Patent Number
    10,225,304
  • Date Filed
    Monday, July 11, 2016
  • Date Issued
    Tuesday, March 5, 2019
Abstract
An apparatus for adaptive-rate shifting of streaming content includes an agent controller module configured to simultaneously request at least portions of a plurality of streamlets. The agent controller module is further configured to continuously monitor streamlet requests and subsequent responses, and accordingly request higher or lower quality streamlets. A staging module is configured to stage the streamlets and arrange the streamlets for playback on a content player. A system includes a data communications network, a content server coupled to the data communications network and having a content module configured to process content and generate a plurality of high and low quality streams, and the apparatus. A method includes simultaneously requesting at least portions of a plurality of streamlets, continuously monitoring streamlet requests and subsequent responses, and accordingly requesting higher or lower quality streamlets, and staging the streamlets and arranging the streamlets for playback on a content player.
Description
BACKGROUND OF THE INVENTION

Field of the Invention


The invention relates to video streaming over packet switched networks such as the Internet, and more particularly relates to adaptive-rate shifting of streaming content over such networks.


Description of the Related Art


The Internet is fast becoming a preferred method for distributing media files to end users. It is currently possible to download music or video to computers, cell phones, or practically any network-capable device. Many portable media players are equipped with network connections and enabled to play music or videos. The music or video files (hereinafter “media files”) can be stored locally on the media player or computer, or streamed or downloaded from a server.


“Streaming media” refers to technology that delivers content at a rate sufficient for presenting the media to a user in real time as the data is received. The data may be stored in memory temporarily until played and then subsequently deleted. The user has the immediate satisfaction of viewing the requested content without waiting for the media file to completely download. Unfortunately, the audio/video quality that can be received for real time presentation is constrained by the available bandwidth of the user's network connection. Streaming may be used to deliver content on demand (previously recorded) or from live broadcasts.


Alternatively, media files may be downloaded and stored on persistent storage devices, such as hard drives or optical storage, for later presentation. Downloading complete media files can take large amounts of time depending on the network connection. Once downloaded, however, the content can be viewed repeatedly anytime or anywhere. Media files prepared for downloading usually are encoded with a higher quality audio/video than can be delivered in real time. Users generally dislike this option, as they tend to want to see or hear the media file instantaneously.


Streaming offers the advantage of immediate access to the content but currently sacrifices quality compared with downloading a file of the same content. Streaming also provides the opportunity for a user to select different content for viewing on an ad hoc basis, while downloading is by definition restricted to receiving a specific content selection in its entirety or not at all. Downloading also supports rewind, fast forward, and direct seek operations, while streaming is unable to fully support these functions. Streaming is also vulnerable to network failures or congestion.


Another technology, known as “progressive downloads,” attempts to combine the strengths of the above two technologies. When a progressive download is initiated, the media file download begins, and the media player delays playback until enough of the file has been downloaded that playback can begin, with the hope that the remainder of the file will be completely downloaded before playback “catches up.” This waiting period before playback can be substantial depending on network conditions, and therefore progressive downloading is not a complete or fully acceptable solution to the problem of media presentation over a network.


Generally, three basic challenges exist with regard to data transport streaming over a network such as the Internet that has a varying amount of data loss. The first challenge is reliability. Most streaming solutions use a TCP connection, or “virtual circuit,” for transmitting data. A TCP connection provides a guaranteed delivery mechanism so that data sent from one endpoint will be delivered to the destination, even if portions are lost and retransmitted. A break in the continuity of a TCP connection can have serious consequences when the data must be delivered in real-time. When a network adapter detects delays or losses in a TCP connection, the adapter “backs off” from transmission attempts for a moment and then slowly resumes the original transmission pace. This behavior is an attempt to alleviate the perceived congestion. Such a slowdown is detrimental to the viewing or listening experience of the user and therefore is not acceptable.


The second challenge to data transport is efficiency. Efficiency refers to how well the user's available bandwidth is used for delivery of the content stream. This measure is directly related to the reliability of the TCP connection. When the TCP connection is suffering reliability problems, a loss of bandwidth utilization results. The measure of efficiency sometimes varies suddenly, and can greatly impact the viewing experience.


The third challenge is latency. Latency is the measure, from the client's point of view, of the interval between the time a request is issued and the time the response data begins to arrive. This value is affected by the reliability and efficiency of the network connection and by the processing time required by the origin server to prepare the response. A busy or overloaded server, for example, will take more time to process a request. As well as affecting the start time of each request, latency has a significant impact on the network throughput of TCP.


From the foregoing discussion, it should be apparent that a need exists for an apparatus, system, and method that alleviate the problems of reliability, efficiency, and latency. Additionally, such an apparatus, system, and method would offer instantaneous viewing along with the ability to fast forward, rewind, direct seek, and browse multiple streams. Beneficially, such an apparatus, system, and method would utilize multiple connections between a source and destination, requesting varying bitrate streams depending upon network conditions.


SUMMARY OF THE INVENTION

The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available content streaming systems. Accordingly, the present invention has been developed to provide an apparatus, system, and method for adaptive-rate content streaming that overcome many or all of the above-discussed shortcomings in the art.


The apparatus for adaptive-rate content streaming is provided with a logic unit containing a plurality of modules configured to functionally execute the necessary steps. These modules in the described embodiments include an agent controller module configured to simultaneously request a plurality of streamlets, the agent controller module further configured to continuously monitor streamlet requests and subsequent responses, and accordingly request higher or lower quality streamlets, and a staging module configured to stage the streamlets and arrange the streamlets for playback on a content player.


The apparatus is further configured, in one embodiment, to establish multiple Transmission Control Protocol (TCP) connections with a content server, and request streamlets of varying bitrates. Each streamlet may further comprise a portion of a content file. Additionally, the agent controller module may be configured to generate a performance factor according to responses from streamlet requests.


In a further embodiment, the agent controller module is configured to upshift to a higher quality streamlet when the performance factor is greater than a threshold, and the agent controller module determines the higher quality playback can be sustained according to a combination of factors. The factors may include an amount of contiguously available streamlets stored in the staging module, a minimum safety margin, and a current read ahead margin.


The agent controller module may be configured to downshift to a lower quality streamlet when the performance factor is less than a second threshold. Also, the agent controller module is further configured to anticipate streamlet requests and pre-request streamlets to enable fast-forward, skip randomly, and rewind functionality. In one embodiment, the agent controller module is configured to initially request low quality streamlets to enable instant playback of the content file, and to subsequently upshift according to the performance factor.


A system of the present invention is also presented for adaptive-rate content streaming. In particular, the system, in one embodiment, includes a data communications network, and a content server coupled to the data communications network and having a content module configured to process content and generate a plurality of high and low quality streams. In one embodiment, each of the high and low quality streams may include a plurality of streamlets.


In a further embodiment, the system also includes an agent controller module configured to simultaneously request a plurality of streamlets, the agent controller module further configured to continuously monitor streamlet requests and subsequent responses, and accordingly request higher or lower quality streamlets, and a staging module configured to stage the streamlets and arrange the streamlets for playback on a content player.


A method of the present invention is also presented for adaptive-rate content streaming. The method in the disclosed embodiments substantially includes the steps necessary to carry out the functions presented above with respect to the operation of the described apparatus and system. In one embodiment, the method includes simultaneously requesting a plurality of streamlets, continuously monitoring streamlet requests and subsequent responses, and accordingly requesting higher or lower quality streamlets, and staging the streamlets and arranging the streamlets for playback on a content player.


In a further embodiment, the method may include establishing multiple Transmission Control Protocol (TCP) connections with a content server, and requesting streamlets of varying bitrates. Also, the method may include generating a performance factor according to responses from streamlet requests, upshifting to a higher quality streamlet when the performance factor is greater than a threshold, and determining if the higher quality playback can be sustained. Furthermore, the method may include downshifting to a lower quality streamlet when the performance factor is less than a second threshold.


In one embodiment, the method includes anticipating streamlet requests and pre-requesting streamlets to enable fast-forward, skip randomly, and rewind functionality. The method may also comprise initially requesting low quality streamlets to enable instant playback of a content file, and subsequent upshifting according to the performance factor.


Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.


Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.


These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:



FIG. 1 is a schematic block diagram illustrating one embodiment of a system for adaptive rate shifting of streaming content in accordance with the present invention;



FIG. 2a is a schematic block diagram graphically illustrating one embodiment of a content file in accordance with the present invention;



FIG. 2b is a schematic block diagram illustrating one embodiment of a plurality of streams having varying degrees of quality and bandwidth in accordance with the present invention;



FIG. 2c is a schematic block diagram illustrating one embodiment of a stream divided into a plurality of streamlets in accordance with the present invention;



FIG. 3 is a schematic block diagram illustrating one embodiment of a content module in accordance with the present invention;



FIG. 4 is a schematic block diagram graphically illustrating one embodiment of a client module in accordance with the present invention;



FIG. 5 is a schematic flow chart diagram illustrating one embodiment of a method for processing content in accordance with the present invention;



FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a method for playback of a plurality of streamlets in accordance with the present invention; and



FIG. 7 is a schematic flow chart diagram illustrating one embodiment of a method for requesting streamlets within an adaptive-rate content streaming environment in accordance with the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.


Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical, or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.


Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.


Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.


Reference to a signal bearing medium may take any form capable of generating a signal, causing a signal to be generated, or causing execution of a program of machine-readable instructions on a digital processing apparatus. A signal bearing medium may be embodied by a transmission line, a compact disk, digital-video disk, a magnetic tape, a Bernoulli drive, a magnetic disk, a punch card, flash memory, integrated circuits, or other digital processing apparatus memory device.


Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.



FIG. 1 is a schematic block diagram illustrating one embodiment of a system 100 for dynamic rate shifting of streaming content in accordance with the present invention. In one embodiment, the system 100 comprises a content server 102 and an end user station 104. The content server 102 and the end user station 104 may be coupled by a data communications network. The data communications network may include the Internet 106 and connections 108 to the Internet 106. Alternatively, the content server 102 and the end user station 104 may be located on a common local area network, wireless area network, cellular network, virtual local area network, or the like. The end user station 104 may comprise a personal computer (PC), an entertainment system configured to communicate over a network, or a portable electronic device configured to present content.


In the depicted embodiment, the system 100 also includes a publisher 110, and a web server 116. The publisher 110 may be a creator or distributor of content. For example, if the content to be streamed were a broadcast of a television program, the publisher 110 may be a television or cable network channel such as NBC®, or MTV®. Content may be transferred over the Internet 106 to the content server 102, where the content is received by a content module 112. The content module 112 may be configured to receive, process, and store content. In one embodiment, processed content is accessed by a client module 114 configured to play the content on the end user station 104. In a further embodiment, the client module 114 is configured to receive different portions of a content stream from a plurality of locations simultaneously. For example, the client module 114 may request and receive content from any of the plurality of web servers 116.



FIG. 2a is a schematic block diagram graphically illustrating one embodiment of a content file 200. In one embodiment, the content file 200 is distributed by the publisher 110. The content file 200 may comprise a television broadcast, sports event, movie, music, concert, etc. The content file 200 may also be live or archived content. The content file 200 may comprise uncompressed video and audio, or alternatively, video or audio. Additionally, the content file 200 may be compressed. Examples of a compressed content file 200 include, but are not limited to, DivX®, Windows Media Video 9®, Quicktime 6.5 Sorenson 3®, or Quicktime 6.5/MPEG-4® encoded content.



FIG. 2b is a schematic block diagram illustrating one embodiment of a plurality of streams 202 having varying degrees of quality and bandwidth. In one embodiment, the plurality of streams 202 comprises a low quality stream 204, a medium quality stream 206, and a high quality stream 208. Each of the streams 204, 206, 208 is a copy of the content file 200 encoded and compressed to varying bit rates. For example, the low quality stream 204 may be encoded and compressed to a bit rate of 100 kilobits per second (kbps), the medium quality stream 206 may be encoded and compressed to a bit rate of 200 kbps, and the high quality stream 208 may be encoded and compressed to 600 kbps.



FIG. 2c is a schematic block diagram illustrating one embodiment of a stream 210 divided into a plurality of streamlets 212. As used herein, streamlet refers to any sized portion of the content file 200. Each streamlet 212 may comprise a portion of the content contained in stream 210, encapsulated as an independent media object. The content in a streamlet 212 may have a unique time index in relation to the beginning of the content contained in stream 210. In one embodiment, the content contained in each streamlet 212 has a duration of two seconds. For example, streamlet 0 may have a time index of 00:00 representing the beginning of content playback, and streamlet 1 may have a time index of 00:02, and so on. Alternatively, the time duration of the streamlets 212 may be any duration smaller than the entire playback duration of the content in stream 210. In a further embodiment, the streamlets 212 may be divided according to file size instead of a time index.
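
The following Python sketch models the quality ladder of FIG. 2b and the time-indexed streamlets of FIG. 2c. The class names, field names, and example values are illustrative assumptions for this sketch only; the bit rates and two-second duration simply restate the examples given above.

```python
from dataclasses import dataclass

# Illustrative quality ladder using the example bit rates given above;
# names and structure are assumptions of this sketch.
QUALITY_BITRATES_KBPS = {"low": 100, "medium": 200, "high": 600}

STREAMLET_DURATION_SECONDS = 2.0  # example duration from the description


@dataclass
class Streamlet:
    quality: str     # which copy (low/medium/high) this streamlet belongs to
    index: int       # ordinal position within the stream
    duration: float  # playback length in seconds

    @property
    def time_index(self) -> float:
        """Start time relative to the beginning of the content."""
        return self.index * self.duration


def streamlet_for_time(seconds: float,
                       duration: float = STREAMLET_DURATION_SECONDS) -> int:
    """Map a playback time to the ordinal of the streamlet containing it."""
    return int(seconds // duration)


# Streamlet 0 starts at 00:00, streamlet 1 at 00:02, and a seek to 00:05
# falls inside streamlet 2.
assert streamlet_for_time(5.0) == 2
```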



FIG. 3 is a schematic block diagram illustrating in greater detail one embodiment of the content module 112 in accordance with the present invention. The content module 112 may comprise a stream module 302, a streamlet module 304, an encoder module 306, a streamlet database 308, and the web server 116. In one embodiment, the stream module 302 is configured to receive the content file 200 from the publisher 110 and generate the plurality of streams 202 of varying qualities. The original content file 200 from the publisher may be digital in form and may comprise content having a high bit rate such as, for example, 2 Mbps. The content may be transferred from the publisher 110 to the content module 112 over the Internet 106. Such transfers of data are well known in the art and do not require further discussion herein. Alternatively, the content may comprise a captured broadcast.


In the depicted embodiment, the plurality of streams 202 may comprise the low quality stream 204, the medium quality stream 206, and the high quality stream 208. Alternatively, the plurality of streams 202 may comprise any number of streams deemed necessary to accommodate end user bandwidth. The streamlet module 304 may be configured to receive the plurality of streams 202 from the stream module 302 and generate a plurality of streams 312, each stream comprising a plurality of streamlets 212. As described with reference to FIG. 2c, each streamlet 212 may comprise a pre-defined portion of the stream. The encoder module 306 is configured to encode each streamlet from the plurality of streams 312 and store the streamlets in the streamlet database 308. The encoder module 306 may utilize encoding schemes such as DivX®, Windows Media Video 9®, Quicktime 6.5 Sorenson 3®, or Quicktime 6.5/MPEG-4®. Alternatively, a custom encoding scheme may be employed.


The content module 112 may also include a metadata module 312 and a metadata database 314. In one embodiment, metadata comprises static searchable content information. For example, metadata includes, but is not limited to, air date of the content, title, actresses, actors, length, and episode name. Metadata is generated by the publisher 110, and may be configured to define an end user environment. In one embodiment, the publisher 110 may define an end user navigational environment for the content including menus, thumbnails, sidebars, advertising, etc. Additionally, the publisher 110 may define functions such as fast forward, rewind, pause, and play that may be used with the content file 200. The metadata module 312 is configured to receive the metadata from the publisher 110 and store the metadata in the metadata database 314. In a further embodiment, the metadata module 312 is configured to interface with the client module 114, allowing the client module 114 to search for content based upon at least one of a plurality of metadata criteria. Additionally, metadata may be generated by the content module 112 through automated process(es) or manual definition.


Once the streamlets 212 have been received and processed, the client module 114 may request streamlets 212 using HTTP from the web server 116. Such use of client side initiated requests requires no additional configuration of firewalls. Additionally, since the client module 114 initiates the request, the web server 116 is only required to retrieve and serve the requested streamlet. In a further embodiment, the client module 114 may be configured to retrieve streamlets 212 from a plurality of web servers 310. Each web server 116 may be located in various locations across the Internet 106. The streamlets 212 are essentially static files. As such, no specialized media server or server-side intelligence is required for a client module 114 to retrieve streamlets 212. Streamlets 212 may be served by the web server 116 or cached by cache servers of Internet Service Providers (ISPs), or any other network infrastructure operators, and served by the cache server. Use of cache servers is well known to those skilled in the art, and will not be discussed further herein. Thus, a highly scalable solution is provided that is not hindered by massive amounts of client module 114 requests to the web server 116 at any specific location.
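
Because each streamlet is an ordinary static file, a client-side request can be expressed as a plain HTTP GET, as in the following sketch. The host name and the path/naming scheme are assumptions chosen for illustration only; the specification does not prescribe a particular URL layout, only that each streamlet be independently retrievable from an ordinary web server or cache.

```python
from urllib.request import urlopen

# Hypothetical host and file-naming scheme; the specification only requires
# that each streamlet be an independently retrievable static file.
def fetch_streamlet(quality: str, index: int,
                    host: str = "http://content.example.com") -> bytes:
    """Retrieve one streamlet with a plain, cache-friendly HTTP GET."""
    url = f"{host}/{quality}/streamlet_{index:06d}.bin"
    with urlopen(url, timeout=10) as response:
        return response.read()
```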



FIG. 4 is a schematic block diagram graphically illustrating one embodiment of a client module 114 in accordance with the present invention. The client module 114 may comprise an agent controller module 402, a streamlet cache module 404, and a network controller module 406. In one embodiment, the agent controller module 402 is configured to interface with a viewer 408, and transmit streamlets 212 to the viewer 408. In a further embodiment, the client module 114 may comprise a plurality of agent controller modules 402. Each agent controller module 402 may be configured to interface with one viewer 408. Alternatively, the agent controller module 402 may be configured to interface with a plurality of viewers 408. The viewer 408 may be a media player (not shown) operating on a PC or handheld electronic device.


The agent controller module 402 is configured to select a quality level of streamlets to transmit to the viewer 408. The agent controller module 402 requests lower or higher quality streams based upon continuous observation of time intervals between successive receive times of each requested streamlet. The method of requesting higher or lower quality streams will be discussed in greater detail below with reference to FIG. 7.


The agent controller module 402 may be configured to receive user commands from the viewer 408. Such commands may include play, fast forward, rewind, pause, and stop. In one embodiment, the agent controller module 402 requests streamlets 212 from the streamlet cache module 404 and arranges the received streamlets 212 in a staging module 409. The staging module 409 may be configured to arrange the streamlets 212 in order of ascending playback time. In the depicted embodiment, the streamlets 212 are numbered 0, 1, 2, 3, 4, etc. However, each streamlet 212 may be identified with a unique filename.


Additionally, the agent controller module 402 may be configured to anticipate streamlet 212 requests and pre-request streamlets 212. By pre-requesting streamlets 212, the user may fast-forward, skip randomly, or rewind through the content and experience no buffering delay. In a further embodiment, the agent controller module 402 may request the streamlets 212 that correspond to time index intervals of 30 seconds within the total play time of the content. Alternatively, the agent controller module 402 may request streamlets at any interval less than the length of the time index. This enables a “fast-start” capability with no buffering wait when starting or fast-forwarding through the content file 200. In a further embodiment, the agent controller module 402 may be configured to pre-request streamlets 212 corresponding to specified index points within the content or within other content in anticipation of the end user 104 selecting new content to view.
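
A minimal sketch of the pre-request selection described above follows. The two-second streamlet duration and 30-second interval restate the examples in this description, while the function name and return format are assumptions of the sketch.

```python
# Streamlet duration and seek-point interval follow the examples above;
# the function name and return format are assumptions of this sketch.
def seek_point_indexes(total_duration_s: float,
                       streamlet_duration_s: float = 2.0,
                       interval_s: float = 30.0) -> list[int]:
    """Ordinals of the streamlets that begin at each seek-point interval."""
    indexes = []
    t = 0.0
    while t < total_duration_s:
        indexes.append(int(t // streamlet_duration_s))
        t += interval_s
    return indexes


# A one-hour program with two-second streamlets yields pre-requests for
# streamlets 0, 15, 30, ..., so any 30-second seek point is already staged.
print(seek_point_indexes(3600.0)[:5])  # [0, 15, 30, 45, 60]
```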


In one embodiment, the streamlet cache module 404 is configured to receive streamlet 212 requests from the agent controller module 402. Upon receiving a request, the streamlet cache module 404 first checks a streamlet cache 410 to verify if the streamlet 212 is present. In a further embodiment, the streamlet cache module 404 handles streamlet 212 requests from a plurality of agent controller modules 402. Alternatively, a streamlet cache module 404 may be provided for each agent controller module 402. If the requested streamlet 212 is not present in the streamlet cache 410, the request is passed to the network controller module 406. In order to enable fast forward and rewind capabilities, the streamlet cache module 404 is configured to store the plurality of streamlets 212 in the streamlet cache 410 for a specified time period after the streamlet 212 has been viewed. However, once the streamlets 212 have been deleted, they may be requested again from the web server 116.
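
The check-then-fetch behavior of the streamlet cache module 404 can be sketched as follows. The retention period, class name, and eviction policy are illustrative assumptions; the specification only requires that viewed streamlets be retained for a specified period and be re-requestable after deletion.

```python
import time

# Retention period, class name, and eviction policy are assumptions;
# fetch_from_network stands in for the network controller module 406.
RETENTION_SECONDS = 120.0


class StreamletCache:
    def __init__(self, fetch_from_network):
        self._fetch = fetch_from_network
        self._entries = {}  # key -> (data, last-used timestamp)

    def get(self, key):
        self._evict_stale()
        if key in self._entries:          # cache hit: no network traffic needed
            data, _ = self._entries[key]
        else:                             # cache miss: pass the request onward
            data = self._fetch(key)
        self._entries[key] = (data, time.monotonic())
        return data

    def _evict_stale(self):
        now = time.monotonic()
        stale = [k for k, (_, used) in self._entries.items()
                 if now - used > RETENTION_SECONDS]
        for k in stale:                   # deleted streamlets may be requested again
            del self._entries[k]
```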


The network controller module 406 may be configured to receive streamlet requests from the streamlet cache module 404 and open a connection to the web server 116 or other remote streamlet 212 database (not shown). In one embodiment, the network controller module 406 opens a TCP/IP connection to the web server 116 and generates a standard HTTP GET request for the requested streamlet 212. Upon receiving the requested streamlet 212, the network controller module 406 passes the streamlet 212 to the streamlet cache module 404 where it is stored in the streamlet cache 410. In a further embodiment, the network controller module 406 is configured to process and request a plurality of streamlets 212 simultaneously. The network controller module 406 may also be configured to request a plurality of streamlets, where each streamlet 212 is subsequently requested in multiple parts.


In a further embodiment, streamlet requests may comprise requesting pieces of any streamlet file. Splitting the streamlet 212 into smaller pieces or portions beneficially allows for an increased efficiency potential, and also eliminates problems associated with multiple full-streamlet requests sharing the bandwidth at any given moment. This is achieved by using parallel TCP/IP connections for pieces of the streamlets 212. Consequently, efficiency and network loss problems are overcome, and the streamlets arrive with more useful and predictable timing.


In one embodiment, the client module 114 is configured to use multiple TCP connections between the client module 114 and the web server 116 or web cache. The intervention of a cache may be transparent to the client or configured by the client as a forward cache. By requesting more than one streamlet 212 at a time in a manner referred to as “parallel retrieval,” or more than one part of a streamlet 212 at a time, efficiency is raised significantly and latency is virtually eliminated. In a further embodiment, the client module 114 allows a maximum of three outstanding streamlet 212 requests. The client module 114 may maintain additional open TCP connections as spares to be available should another connection fail. Streamlet 212 requests are rotated among all open connections to keep the TCP flow logic for any particular connection from falling into a slow-start or close mode. If the network controller module 406 has requested a streamlet 212 in multiple parts, with each part requested on mutually independent TCP/IP connections, the network controller module 406 reassembles the parts to present a complete streamlet 212 for use by all other components of the client module 114.
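
The parallel retrieval of streamlet parts over independent connections can be sketched with a small worker pool. The pool size of three mirrors the three outstanding requests mentioned above; the fetch_part callable, connection management, spare connections, and error handling are assumptions simplified or omitted for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# fetch_part is an assumed callable that retrieves one part of a streamlet
# (for example, via an HTTP range or per-part GET) and returns bytes.
def fetch_streamlet_in_parts(fetch_part, streamlet_id: int,
                             num_parts: int = 3) -> bytes:
    """Request the parts concurrently and reassemble them in order."""
    with ThreadPoolExecutor(max_workers=3) as pool:   # ~three requests in flight
        futures = [pool.submit(fetch_part, streamlet_id, part)
                   for part in range(num_parts)]
        # Join in part order regardless of arrival order, so the rest of the
        # client sees one complete streamlet.
        return b"".join(f.result() for f in futures)
```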


When a TCP connection fails completely, a new request may be sent on a different connection for the same streamlet 212. In a further embodiment, if a request is not being satisfied in a timely manner, a redundant request may be sent on a different connection for the same streamlet 212. If the first streamlet request's response arrives before the redundant request response, the redundant request can be aborted. If the redundant request response arrives before the first request response, the first request may be aborted.
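
A sketch of the redundant-request strategy follows. For simplicity it issues both requests immediately rather than after a stall is detected, and the best-effort cancel() stands in for aborting the slower connection, so it should be read as an assumption-laden illustration rather than the exact mechanism described above.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# fetch is an assumed callable returning the streamlet bytes.
def fetch_with_redundancy(fetch, streamlet_id: int) -> bytes:
    """Race two requests for the same streamlet and keep the first response."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        primary = pool.submit(fetch, streamlet_id)
        redundant = pool.submit(fetch, streamlet_id)   # sent on another connection
        done, pending = wait([primary, redundant], return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()                                 # best-effort abort of the loser
        return done.pop().result()
```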


Several streamlet 212 requests may be sent on a single TCP connection, and the responses are caused to flow back in matching order along the same connection. This eliminates all but the first request latency. Because multiple responses are always being transmitted, the processing latency of each new streamlet 212 response after the first is not a factor in performance. This technique is known in the industry as “pipelining.” Pipelining offers efficiency in request response processing by eliminating most of the effects of request latency. However, pipelining has serious vulnerabilities. Transmission delays affect all of the responses. If the single TCP connection fails, all of the outstanding requests and responses are lost. Pipelining causes a serial dependency between the requests.


Multiple TCP connections may be opened between the client module 114 and the web server 116 to achieve the latency-reduction efficiency benefits of pipelining while maintaining the independence of each streamlet 212 request. Several streamlet 212 requests may be sent concurrently, with each request being sent on a mutually distinct TCP connection. This technique is labeled “virtual pipelining” and is an innovation of the present invention. Multiple responses may be in transit concurrently, assuring that communication bandwidth between the client module 114 and the web server 116 is always being utilized. Virtual pipelining eliminates the vulnerabilities of traditional pipelining. A delay in or complete failure of one response does not affect the transmission of other responses because each response occupies an independent TCP connection. Any transmission bandwidth not in use by one of multiple responses (whether due to delays or TCP connection failure) may be utilized by other outstanding responses.


A single streamlet 212 request may be issued for an entire streamlet 212, or multiple requests may be issued, each for a different part or portion of the streamlet. If the streamlet is requested in several parts, the parts may be recombined by the client module 114 into a complete streamlet.


In order to maintain a proper balance between maximized bandwidth utilization and response time, the issuance of new streamlet requests must be timed such that the web server 116 does not transmit the response before the client module 114 has fully received a response to one of the previously outstanding streamlet requests. For example, if three streamlet 212 requests are outstanding, the client module 114 should issue the next request slightly before one of the three responses is fully received and “out of the pipe.” In other words, request timing is adjusted to keep three responses in transit. Sharing of bandwidth among four responses diminishes the net response time of the other three responses. The timing adjustment may be calculated dynamically by observation, and the request timing adjusted accordingly to maintain the proper balance of efficiency and response times.


The schematic flow chart diagrams that follow are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.



FIG. 5 is a schematic flowchart diagram illustrating one embodiment of a method 500 for processing content in accordance with the present invention. In one embodiment the method 500 starts 502, and the content module 112 receives 504 content from the publisher 110. Receiving content 504 may comprise receiving 504 a digital copy of the content file 200, or digitizing a physical copy of the content file 200. Alternatively, receiving 504 content may comprise capturing a radio or television broadcast. Once received 504, the stream module 302 generates 506 a plurality of streams 202, each stream 202 having a different quality. The quality may be predefined, or automatically set according to end user bandwidth, or in response to pre-designated publisher guidelines.


The streamlet module 304 receives the streams 202 and generates 508 a plurality of streamlets 212. In one embodiment, generating 508 streamlets comprises dividing the stream 202 into a plurality of two second streamlets 212. Alternatively, the streamlets may have any length less than or equal to the length of the stream 202. The encoder module 306 then encodes 510 the streamlets according to a compression algorithm. In a further embodiment, the algorithm comprises a proprietary codec such as WMV9®. The encoder module 306 then stores 512 the encoded streamlets in the streamlet database 308. Once stored 512, the web server 116 may then serve 514 the streamlets. In one embodiment, serving 514 the streamlets comprises receiving streamlet requests from the client module 114, retrieving the requested streamlet from the streamlet database 308, and subsequently transmitting the streamlet to the client module 114. The method 500 then ends 516.
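
The order of operations in method 500 can be summarized in a short sketch. The injected callables stand in for the stream module 302, streamlet module 304, encoder module 306, and streamlet database 308; their names and signatures are assumptions of this example, and only the sequence of steps is meaningful.

```python
# encode_stream, split_into_streamlets, encode_streamlet, and store are
# injected stand-ins for the stream module 302, streamlet module 304,
# encoder module 306, and streamlet database 308; their signatures are
# assumptions of this sketch.
def process_content(content_file, encode_stream, split_into_streamlets,
                    encode_streamlet, store,
                    bitrates_kbps=(100, 200, 600), streamlet_seconds=2.0):
    """Receive 504, generate streams 506 and streamlets 508, encode 510, store 512."""
    for bitrate in bitrates_kbps:                  # one copy per quality level
        stream = encode_stream(content_file, bitrate)
        pieces = split_into_streamlets(stream, streamlet_seconds)
        for index, piece in enumerate(pieces):
            store(bitrate, index, encode_streamlet(piece))
    # Once stored, the web server can serve 514 streamlets on request.
```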



FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a method 600 for viewing a plurality of streamlets in accordance with the present invention. The method 600 starts and an agent controller module 402 is provided 604, associated with a viewer 408, and provided with a staging module 409. The agent controller module 402 then requests 606 a streamlet from the streamlet cache module 404. Alternatively, the agent controller module 402 may simultaneously request 606 a plurality of streamlets from the streamlet cache module 404. If the streamlet is stored 608 locally in the streamlet cache 410, the streamlet cache module 404 retrieves 610 the streamlet and sends the streamlet to the agent controller module 402. Upon retrieving 610 or receiving a streamlet, the agent controller module 402 makes 611 a determination of whether or not to shift to a higher or lower quality stream 202. This determination will be described below in greater detail with reference to FIG. 7.


In one embodiment, the staging module 409 then arranges 612 the streamlets into the proper order, and the agent controller module 402 delivers 614 the streamlets to the viewer 408. In a further embodiment, delivering 614 streamlets to the end user comprises playing video and/or audio streamlets on the viewer 408. If the streamlets are not stored 608 locally, the streamlet request is passed to the network controller module 406. The network controller module 406 then requests 616 the streamlet from the web server 116. Once the streamlet is received, the network controller module 406 passes the streamlet to the streamlet cache module 404. The streamlet cache module 404 then archives 618 the streamlet and passes the streamlet to the agent controller module 402, and the method 600 continues from operation 610 as described above.


Referring now to FIG. 7, shown therein is a schematic flow chart diagram illustrating one embodiment of a method 700 for requesting streamlets within an adaptive-rate shifting content streaming environment in accordance with the present invention. The method 700 may be used in one embodiment as the operation 611 of FIG. 6. The method 700 starts and the agent controller module 402 receives 704 a streamlet as described above with reference to FIG. 6. The agent controller module 402 then monitors 706 the receive time of the requested streamlet. In one embodiment, the agent controller module 402 monitors the time intervals Δ between successive receive times for each streamlet response. Ordering of the responses in relation to the order of their corresponding requests is not relevant.


Because network behavioral characteristics fluctuate, sometimes quite suddenly, any given Δ may vary substantially from another. In order to compensate for this fluctuation, the agent controller module 402 calculates 708 a performance ratio r across a window of n samples for streamlets of playback length S. In one embodiment, the performance ratio r is calculated using the equation






$$ r = \frac{S \cdot n}{\sum_{i=1}^{n} \Delta_i}. $$






Due to multiple simultaneous streamlet processing, and in order to better judge the central tendency of the performance ratio r, the agent controller module 402 may calculate a geometric mean, or alternatively an equivalent averaging algorithm, across a window of size m, and obtain a performance factor φ:







$$ \varphi_{\mathrm{current}} = \left( \prod_{j=1}^{m} r_j \right)^{1/m}. $$





The policy determination about whether or not to upshift 710 playback quality begins by comparing φ_current with a trigger threshold Θ_up. If φ_current ≥ Θ_up, then an upshift to the next higher quality stream may be considered 716. In one embodiment, the trigger threshold Θ_up is determined by a combination of factors relating to the current read ahead margin (i.e., the amount of contiguously available streamlets that have been sequentially arranged by the staging module 409 for presentation at the current playback time index) and a minimum safety margin. In one embodiment, the minimum safety margin may be 24 seconds. The smaller the read ahead margin, the larger Θ_up is, to discourage upshifting until a larger read ahead margin may be established to withstand network disruptions. If the agent controller module 402 determines 716 that the upshifted quality can be sustained, the agent controller module 402 will upshift 717 the quality and subsequently request higher quality streams. The determination of whether use of the higher quality stream is sustainable 716 is made by comparing an estimate of the higher quality stream's performance factor, φ_higher, with Θ_up. If φ_higher ≥ Θ_up, then use of the higher quality stream is considered sustainable. If the decision of whether or not the higher stream rate is sustainable 716 is “no,” the agent controller module 402 will not attempt to upshift 717 the stream quality. If the end of the stream has been reached 714, the method 618 ends 716.


If the decision on whether or not to attempt an upshift 710 is “no,” a decision about whether or not to downshift 712 is made. In one embodiment, a trigger threshold Θ_down is defined in a manner analogous to Θ_up. If φ_current > Θ_down, then the stream quality may be adequate, and the agent controller module 402 does not downshift 718 the stream quality. However, if φ_current ≤ Θ_down, the agent controller module 402 does downshift 718 the stream quality. If the end of the stream has not been reached 714, the agent controller module 402 begins to request and receive 704 lower quality streamlets and the method 618 starts again. Of course, the above described equations and algorithms are illustrative only, and may be replaced by alternative streamlet monitoring solutions.
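
The two equations above and the threshold comparisons of method 700 can be sketched directly in code. The window sizes, threshold values, and the estimate of the higher-quality stream's performance factor used in the example are assumptions of this sketch; the specification leaves their exact values to the implementation.

```python
import math

def performance_ratio(deltas, streamlet_seconds):
    """r = S * n / sum(delta_i) over a window of n receive-time intervals."""
    n = len(deltas)
    return (streamlet_seconds * n) / sum(deltas)


def performance_factor(ratios):
    """Geometric mean of the last m performance ratios."""
    m = len(ratios)
    return math.prod(ratios) ** (1.0 / m)


def decide_shift(phi_current, phi_higher_estimate, theta_up, theta_down):
    """Return 'up', 'down', or 'hold' according to the trigger thresholds."""
    if phi_current >= theta_up and phi_higher_estimate >= theta_up:
        return "up"     # higher quality judged sustainable
    if phi_current <= theta_down:
        return "down"   # current quality is not being sustained
    return "hold"


# Example: two-second streamlets arriving roughly every 1.5 s give r > 1,
# meaning delivery is outpacing real-time playback.
r = performance_ratio([1.4, 1.6, 1.5], streamlet_seconds=2.0)   # ~1.33
phi = performance_factor([r, r, r])                             # ~1.33
print(decide_shift(phi, phi_higher_estimate=1.25,
                   theta_up=1.2, theta_down=0.9))               # "up"
```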


The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. An end user device for adaptive-rate content streaming of digital content from a video server over a network, the end user device comprising: a media player operating on the end user device configured to stream a video from the video server via at least one transmission control protocol (TCP) connection over the network, wherein multiple different copies of the video encoded at different bit rates are stored on the video server as multiple sets of streamlets, wherein each of the streamlets yields a different portion of the video on playback, wherein the streamlets across the different copies yield the same portions of the video on playback, and wherein each of the streamlets comprises a time index such that the streamlets whose playback is the same portion of the video for each of the different copies have the same time index in relation to the beginning of the video, and wherein the media player streams the video by:requesting sequential streamlets of one of the copies from the video server based on the time indexes;automatically requesting subsequent portions of the video from the video server by requesting, for each portion of the video, one of the streamlets from one of the copies of the video dependent upon successive determinations by the media player to shift the playback quality to a higher or lower quality one of the different copies, the automatically requesting including repeatedly generating a factor relating to the performance of the network that is indicative of an ability to sustain the streaming of the video;making the successive determinations to shift the playback quality based on the factor to achieve continuous playback of the video using the streamlets of the highest quality one of the copies that is determined to be sustainable at that time so that the media player upshifts to a higher quality one of the different copies when the factor is greater than a first threshold and downshifts to a lower quality one of the different copies when the factor is less than a second threshold; andpresenting the video by playing back the requested streamlets with the media player on the end user device in order of ascending playback time.
  • 2. The end user device of claim 1, wherein the at least one TCP connection comprises multiple Transmission Control protocol (TCP) connections with the content server.
  • 3. The end user device of claim 1 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content.
  • 4. The end user device of claim 1 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is encapsulated as an independent media object.
  • 5. The end user device of claim 1 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device according to the time index.
  • 6. The end user device of claim 1 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device sending an hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
  • 7. The end user device of claim 1 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content.
  • 8. The end user device of claim 1 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content, wherein each content file is independently-requestable by the end user device sending a hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
  • 9. A method executable by an end user device to present rate-adaptive streams received via at least one transmission control protocol (TCP) connection with a server over a network, the method comprising; streaming, by a media player operating on the end user device, a video from the server via the at least one TCP connection over the network, wherein multiple different copies of the video encoded at different bit rates are stored as multiple sets of streamlets on the server, wherein each of the streamlets yields a different portion of the video on playback, wherein the streamlets across the different copies yield the same portions of the video on playback, and wherein each of the streamlets comprises a time index such that the streamlets whose playback is the same portion of the video for each of the different copies have the same time index in relation to the beginning of the video, and wherein the streaming comprises:requesting by the media player a plurality of sequential streamlets of one of the copies from the server based on the time indexes;automatically requesting by the media player from the server subsequent portions of the video by requesting for each such portion one of the streamlets from one of the copies dependent upon successive determinations by the media player to shift the playback quality to a higher or lower quality one of the different copies, the automatically requesting including repeatedly generating a factor relating to the performance of the network that is indicative of an ability to sustain the streaming of the video;making the successive determinations to shift the playback quality based on the factor to achieve continuous playback of the video using the streamlets of the highest quality copy determined sustainable at that time, wherein the making the successive determinations to shift comprises upshifting to a higher quality one of the different copies when the at least one factor is greater than a first threshold and downshifting to a lower quality one of the different copies when the at least one factor is less than a second threshold; andpresenting the video by playing back the requested media streamlets with the media player on the end user device in order of ascending playback time.
  • 10. The method of claim 9, wherein the at least one TCP connection comprises a plurality of different connections, and wherein the requesting the plurality of sequential streamlets includes requesting sub-parts of the streamlets over different ones of the plurality of different TCP connections, and wherein said presenting includes reassembling the streamlets from the received sub-parts.
  • 11. The method of claim 9, wherein the server is a web server, and wherein the streamlets are requested from the web server using Hyper Text Transfer Protocol (HTTP) messages sent via the at least one TCP connection.
  • 12. The method of claim 9, wherein the server comprises a cache server of a network infrastructure operator.
  • 13. The method of claim 9 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content.
  • 14. The method of claim 9 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is encapsulated as an independent media object.
  • 15. The method of claim 9 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device according to the time index.
  • 16. The method of claim 9 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device sending an hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
  • 17. The method of claim 9 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content.
  • 18. The method of claim 9 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content, wherein each content file is independently-requestable by the end user device sending a hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
  • 19. An end user device to present rate-adaptive streams received via at least one transmission control protocol (TCP) connection with a server over a network, the end user device comprising a processor and a memory, wherein the memory comprises computer-executable instructions that, when executed by the processor, perform a method comprising: streaming, by a media player operating on the end user device, a video from the server via the at least one TCP connection over the network, wherein multiple different copies of the video encoded at different bit rates are stored as multiple sets of streamlets on the server, wherein each of the streamlets yields a different portion of the video on playback, wherein the streamlets across the different copies yield the same portions of the video on playback, and wherein each of the streamlets comprises a time index such that the streamlets whose playback is the same portion of the video for each of the different copies have the same time index in relation to the beginning of the video, and wherein the streaming comprises:requesting by the media player a plurality of sequential streamlets of one of the copies from the server based on the time indexes;automatically requesting by the media player from the server subsequent portions of the video by requesting for each such portion one of the streamlets from one of the copies dependent upon successive determinations by the media player to shift the playback quality to a higher or lower quality one of the different copies, the automatically requesting including repeatedly generating a factor relating to the performance of the network that is indicative of an ability to sustain the streaming of the video;making the successive determinations to shift the playback quality based on the factor to achieve continuous playback of the video using the streamlets of the highest quality copy determined sustainable at that time, wherein the making the successive determinations to shift comprises upshifting to a higher quality one of the different copies when the at least one factor is greater than a first threshold and downshifting to a lower quality one of the different copies when the at least one factor is less than a second threshold; andpresenting the video by playing back the requested media streamlets with the media player on the end user device in order of ascending playback time.
  • 20. The end user device of claim 19 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content.
  • 21. The end user device of claim 19 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is encapsulated as an independent media object.
  • 22. The end user device of claim 19 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device according to the time index.
  • 23. The end user device of claim 19 wherein each of the streamlets is a portion of a content file that represents one of the multiple different copies of the video content, and wherein each streamlet is independently-requestable by the end user device sending a hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
  • 24. The end user device of claim 19 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content.
  • 25. The end user device of claim 19 wherein each of the streamlets is a content file representing the portion of one of the multiple different copies of the video content, wherein each content file is independently-requestable by the end user device sending a hypertext transport protocol (HTTP) GET request via the network that identifies the streamlet to the video server according to the time index.
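The following is a minimal, non-authoritative sketch (not the patented implementation) of the mechanisms recited in claims 16-19 and 23-25 above: streamlets are fetched by HTTP GET requests that identify them according to a time index, a performance factor is derived from the observed request/response behavior, and playback quality is shifted up or down against two thresholds. The server URL scheme, bitrate ladder, streamlet duration, and threshold values are assumptions made purely for illustration.

```python
# Hypothetical sketch of the rate-adaptive streamlet request loop.
# All names, URLs, and constants below are illustrative assumptions,
# not values taken from the patent.

import time
import urllib.request

SERVER = "http://videoserver.example/content"   # hypothetical video server
BITRATES_KBPS = [300, 700, 1500, 3000]          # assumed quality ladder (low -> high)
STREAMLET_SECONDS = 2                           # assumed fixed streamlet duration
UPSHIFT_THRESHOLD = 2.0                         # first threshold: upshift above this
DOWNSHIFT_THRESHOLD = 1.0                       # second threshold: downshift below this


def streamlet_url(time_index: int, quality: int) -> str:
    """Build a URL identifying one streamlet of one copy by its time index
    (the naming convention is hypothetical)."""
    return f"{SERVER}/{BITRATES_KBPS[quality]}kbps/streamlet_{time_index}.bin"


def fetch_streamlet(time_index: int, quality: int) -> tuple[bytes, float]:
    """Issue an HTTP GET for the streamlet and return its bytes plus a
    performance factor: playback seconds delivered per second of download."""
    start = time.monotonic()
    with urllib.request.urlopen(streamlet_url(time_index, quality)) as resp:
        data = resp.read()
    elapsed = max(time.monotonic() - start, 1e-6)
    return data, STREAMLET_SECONDS / elapsed


def play(data: bytes) -> None:
    """Placeholder for handing a streamlet to the media player."""
    pass


def stream(num_streamlets: int, start_quality: int = 0) -> None:
    quality = start_quality
    for t in range(num_streamlets):
        data, factor = fetch_streamlet(t, quality)
        # Successive determinations to shift: upshift when the factor exceeds
        # the first threshold, downshift when it falls below the second.
        if factor > UPSHIFT_THRESHOLD and quality < len(BITRATES_KBPS) - 1:
            quality += 1
        elif factor < DOWNSHIFT_THRESHOLD and quality > 0:
            quality -= 1
        play(data)  # streamlets are played back in ascending time-index order
```

Because each streamlet is an independently requestable object addressed by its time index, the same loop could equally begin at any point in the video, which is consistent with the seek behavior the claims contemplate.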
CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/516,303 (now U.S. Pat. No. 9,407,564), which is a continuation of U.S. patent application Ser. No. 11/116,783 (now U.S. Pat. No. 8,868,772), which claims benefit of U.S. Provisional Patent Application 60/566,831 entitled “APPARATUS, SYSTEM, AND METHOD FOR DYNAMIC RATE SHIFTING OF STREAMING CONTENT” and filed on Apr. 30, 2004 for R. Drew Major and Mark B. Hurst, which is incorporated herein by reference.

US Referenced Citations (273)
Number Name Date Kind
4535355 Arn et al. Aug 1985 A
5168356 Acampora et al. Dec 1992 A
5267334 Normille et al. Nov 1993 A
5404446 Bowater et al. Apr 1995 A
5768527 Zhu et al. Jun 1998 A
5841432 Carmel et al. Nov 1998 A
5953506 Kalra et al. Sep 1999 A
6091775 Hibi et al. Jul 2000 A
6091777 Guetz et al. Jul 2000 A
6122660 Baransky et al. Sep 2000 A
6185736 Ueno Feb 2001 B1
6195680 Goldszmidt et al. Feb 2001 B1
6366614 Pian et al. Apr 2002 B1
6374289 Delaney et al. Apr 2002 B2
6389473 Carmel et al. May 2002 B1
6449719 Baker Sep 2002 B1
6486803 Luby et al. Nov 2002 B1
6490627 Kalra et al. Dec 2002 B1
6510553 Hazra Jan 2003 B1
6552227 Mendelovici et al. Apr 2003 B2
6574591 Kleiman et al. Jun 2003 B1
6604118 Kleiman et al. Aug 2003 B2
6618752 Moore et al. Sep 2003 B1
6654790 Ogle et al. Nov 2003 B2
6675199 Mohammed et al. Jan 2004 B1
6697072 Russell et al. Feb 2004 B2
6708213 Bommaiah et al. Mar 2004 B1
6721723 Gibson et al. Apr 2004 B1
6731600 Patel et al. May 2004 B1
6732183 Graham May 2004 B1
6757796 Hofmann Jun 2004 B1
6760772 Zou et al. Jul 2004 B2
6792449 Colville et al. Sep 2004 B2
6795863 Doty, Jr. Sep 2004 B1
6801947 Li Oct 2004 B1
6845107 Kitazawa et al. Jan 2005 B1
6850965 Allen Feb 2005 B2
6859839 Zahorjan et al. Feb 2005 B1
6874015 Kaminsky et al. Mar 2005 B2
6885471 Minowa et al. Apr 2005 B1
6968387 Lanphear Nov 2005 B2
6976090 Ben-Shaul et al. Dec 2005 B2
7031700 Weaver et al. Apr 2006 B1
7046805 Fitzhardinge et al. May 2006 B2
7054365 Kim et al. May 2006 B2
7054774 Batterberry et al. May 2006 B2
7054911 Lango et al. May 2006 B1
7075986 Girod et al. Jul 2006 B2
7093001 Yang et al. Aug 2006 B2
7096271 Omoigui et al. Aug 2006 B1
7099954 Li et al. Aug 2006 B2
7111044 Lee Sep 2006 B2
7116894 Chatterton Oct 2006 B1
7124164 Chemtob Oct 2006 B1
7174385 Li Feb 2007 B2
7176957 Ivashin et al. Feb 2007 B2
7177642 Sanchez Herrero et al. Feb 2007 B2
7190670 Varsa et al. Mar 2007 B2
7194549 Lee et al. Mar 2007 B1
7240100 Wein et al. Jul 2007 B1
7260640 Kramer et al. Aug 2007 B1
7274740 van Beek et al. Sep 2007 B2
7295520 Lee et al. Nov 2007 B2
7310678 Gunaseelan et al. Dec 2007 B2
7313236 Amini et al. Dec 2007 B2
7325073 Shao et al. Jan 2008 B2
7328243 Yeager et al. Feb 2008 B2
7330908 Jungck Feb 2008 B2
7334044 Allen Feb 2008 B1
7349358 Hennessey et al. Mar 2008 B2
7349976 Glaser et al. Mar 2008 B1
7369610 Xu et al. May 2008 B2
7376747 Hartop May 2008 B2
7391717 Klemets et al. Jun 2008 B2
7408984 Lu et al. Aug 2008 B2
7412531 Lange et al. Aug 2008 B1
7477688 Zhang et al. Jan 2009 B1
7478164 Lango Jan 2009 B1
7523181 Swildens et al. Apr 2009 B2
7529541 Cho et al. May 2009 B2
7536469 Chou et al. May 2009 B2
7546355 Kalnitsky Jun 2009 B2
7555464 Candelore Jun 2009 B2
7558472 Locket et al. Jul 2009 B2
7577750 Shen et al. Aug 2009 B2
7593333 Li et al. Sep 2009 B2
7599307 Seckni et al. Oct 2009 B2
7609652 Kellerer et al. Oct 2009 B2
7631039 Eisenberg Dec 2009 B2
7653735 Mandato et al. Jan 2010 B2
7657644 Zheng Feb 2010 B1
7660906 Armour Feb 2010 B1
7719985 Lee et al. May 2010 B2
7733830 Curcio et al. Jun 2010 B2
7760801 Ghanbari et al. Jul 2010 B2
7761609 Srinivasan et al. Jul 2010 B1
7779135 Hudson et al. Aug 2010 B2
7788395 Bowra et al. Aug 2010 B2
7797439 Cherkasova et al. Sep 2010 B2
7817985 Moon Oct 2010 B2
7818444 Brueck et al. Oct 2010 B2
7873040 Karlsgodt Jan 2011 B2
7912974 Alvarez Arevalo Mar 2011 B2
7925781 Chan et al. Apr 2011 B1
7930431 Kuroiwa Apr 2011 B2
7966374 Huynh Jun 2011 B2
8036265 Reynolds et al. Oct 2011 B1
8042132 Carney Oct 2011 B2
8135852 Nilsson et al. Mar 2012 B2
8209429 Jacobs et al. Jun 2012 B2
8370514 Hurst Feb 2013 B2
8402156 Brueck et al. Mar 2013 B2
8612624 Brueck et al. Dec 2013 B2
8683066 Hurst et al. Mar 2014 B2
8686066 Kwampian et al. Apr 2014 B2
8880721 Hurst et al. Nov 2014 B2
9344496 Hurst et al. May 2016 B2
9462074 Guo et al. Oct 2016 B2
9571551 Brueck et al. Feb 2017 B2
20010013068 Klemets Aug 2001 A1
20010013128 Hagai et al. Aug 2001 A1
20010047423 Shao et al. Nov 2001 A1
20020036640 Akiyoshi Mar 2002 A1
20020073167 Powell et al. Jun 2002 A1
20020087634 Ogle et al. Jul 2002 A1
20020087717 Artzi et al. Jul 2002 A1
20020091840 Pulier et al. Jul 2002 A1
20020097750 Gunaseelan et al. Jul 2002 A1
20020118809 Eisenberg Aug 2002 A1
20020122491 Karczewicz et al. Sep 2002 A1
20020131496 Vasudevan et al. Sep 2002 A1
20020133547 Lin Sep 2002 A1
20020136406 Fitzhardinge et al. Sep 2002 A1
20020138619 Ramaley et al. Sep 2002 A1
20020143972 Christopoulos Oct 2002 A1
20020144276 Radford Oct 2002 A1
20020146102 Lang Oct 2002 A1
20020152317 Wang et al. Oct 2002 A1
20020152318 Menon et al. Oct 2002 A1
20020156912 Hurst et al. Oct 2002 A1
20020161898 Hartop et al. Oct 2002 A1
20020161911 Pinckney, III et al. Oct 2002 A1
20020169887 MeLampy Nov 2002 A1
20020169926 Pinckney, III et al. Nov 2002 A1
20020174434 Lee et al. Nov 2002 A1
20020176418 Hunt et al. Nov 2002 A1
20020178138 Ender et al. Nov 2002 A1
20020178330 Schlowsky-Fischer et al. Nov 2002 A1
20020184391 Phillips Dec 2002 A1
20020188745 Hughes et al. Dec 2002 A1
20020194608 Goldhor Dec 2002 A1
20030005140 Dekel Jan 2003 A1
20030005455 Bowers Jan 2003 A1
20030007464 Balani Jan 2003 A1
20030014684 Kashyap Jan 2003 A1
20030018966 Cook et al. Jan 2003 A1
20030021166 Soloff Jan 2003 A1
20030037103 Salmi et al. Feb 2003 A1
20030037160 Wall Feb 2003 A1
20030055995 Ala-Honkola Mar 2003 A1
20030061370 Nakayama Mar 2003 A1
20030065803 Heuvelman Apr 2003 A1
20030067872 Harrell et al. Apr 2003 A1
20030078972 Tapissier et al. Apr 2003 A1
20030081582 Jain et al. May 2003 A1
20030093790 Logan et al. May 2003 A1
20030107994 Jacobs et al. Jun 2003 A1
20030135631 Li et al. Jul 2003 A1
20030140159 Campbell et al. Jul 2003 A1
20030151753 Li et al. Aug 2003 A1
20030152036 Quigg Brown et al. Aug 2003 A1
20030154239 Davis et al. Aug 2003 A1
20030204519 Sirivara et al. Oct 2003 A1
20030204602 Hudson et al. Oct 2003 A1
20030220972 Montet et al. Nov 2003 A1
20030236904 Walpole et al. Dec 2003 A1
20040003101 Roth et al. Jan 2004 A1
20040010613 Apostolopoulos et al. Jan 2004 A1
20040030547 Leaning et al. Feb 2004 A1
20040030599 Sie et al. Feb 2004 A1
20040030797 Akinlar et al. Feb 2004 A1
20040031054 Dankworth et al. Feb 2004 A1
20040049780 Gee Mar 2004 A1
20040054551 Ausubel et al. Mar 2004 A1
20040071209 Burg et al. Apr 2004 A1
20040083283 Sundaram et al. Apr 2004 A1
20040093420 Gamble May 2004 A1
20040098748 Bo et al. May 2004 A1
20040103444 Weinberg et al. May 2004 A1
20040117427 Allen et al. Jun 2004 A1
20040143672 Padmanabham et al. Jul 2004 A1
20040153458 Noble et al. Aug 2004 A1
20040168052 Clisham et al. Aug 2004 A1
20040170392 Lu et al. Sep 2004 A1
20040220926 Lamkin et al. Nov 2004 A1
20040260701 Lehikoinen et al. Dec 2004 A1
20050009520 Herrero et al. Jan 2005 A1
20050015509 Sitaraman Jan 2005 A1
20050024487 Chen Feb 2005 A1
20050033855 Moradi et al. Feb 2005 A1
20050050152 Penner et al. Mar 2005 A1
20050055425 Lango et al. Mar 2005 A1
20050066063 Grigorovitch et al. Mar 2005 A1
20050076136 Cho et al. Apr 2005 A1
20050084166 Bonch et al. Apr 2005 A1
20050108414 Taylor et al. May 2005 A1
20050120107 Kagan et al. Jun 2005 A1
20050123058 Greenbaum et al. Jun 2005 A1
20050185578 Padmanabhan et al. Aug 2005 A1
20050188051 Sneh Aug 2005 A1
20050204046 Watanabe Sep 2005 A1
20050204385 Sull et al. Sep 2005 A1
20050223087 Van Der Stok Oct 2005 A1
20050251832 Chiueh Nov 2005 A1
20050254508 Aksu et al. Nov 2005 A1
20050262257 Major et al. Nov 2005 A1
20060010003 Kruse Jan 2006 A1
20060047779 Deshpande Mar 2006 A1
20060059223 Klemets et al. Mar 2006 A1
20060080718 Gray et al. Apr 2006 A1
20060130118 Damm Jun 2006 A1
20060133809 Chow et al. Jun 2006 A1
20060165166 Chou et al. Jul 2006 A1
20060168290 Doron Jul 2006 A1
20060168295 Batterberry et al. Jul 2006 A1
20060184688 Ganguly et al. Aug 2006 A1
20060206246 Walker Sep 2006 A1
20060218264 Ogawa et al. Sep 2006 A1
20060236219 Grigorovitch et al. Oct 2006 A1
20060242315 Nichols Oct 2006 A1
20060270404 Tuohino et al. Nov 2006 A1
20060277564 Jarman Dec 2006 A1
20060282540 Tanimoto Dec 2006 A1
20060288099 Jefferson et al. Dec 2006 A1
20070024705 Richter et al. Feb 2007 A1
20070030833 Pirzada et al. Feb 2007 A1
20070037599 Tillet et al. Feb 2007 A1
20070067480 Beek et al. Mar 2007 A1
20070078768 Dawson Apr 2007 A1
20070079325 de Heer Apr 2007 A1
20070094405 Zhang Apr 2007 A1
20070204310 Hua et al. Aug 2007 A1
20070280255 Tsang et al. Dec 2007 A1
20080028428 Jeong et al. Jan 2008 A1
20080037527 Chan et al. Feb 2008 A1
20080046939 Lu et al. Feb 2008 A1
20080056373 Newlin et al. Mar 2008 A1
20080060029 Park et al. Mar 2008 A1
20080091838 Miceli Apr 2008 A1
20080120330 Reed et al. May 2008 A1
20080120342 Reed et al. May 2008 A1
20080133766 Luo Jun 2008 A1
20080162713 Bowra et al. Jul 2008 A1
20080195744 Bowra et al. Aug 2008 A1
20080195745 Bowra et al. Aug 2008 A1
20080205291 Li et al. Aug 2008 A1
20080219151 Ma et al. Sep 2008 A1
20080263180 Hurst et al. Oct 2008 A1
20080281803 Gentric Nov 2008 A1
20090006538 Risney, Jr. et al. Jan 2009 A1
20090049186 Agnihotri et al. Feb 2009 A1
20090055417 Hannuksela Feb 2009 A1
20090055471 Kozat et al. Feb 2009 A1
20090055547 Hudson et al. Feb 2009 A1
20090132599 Soroushian et al. May 2009 A1
20090132721 Soroushian et al. May 2009 A1
20090210549 Hudson et al. Aug 2009 A1
20100098103 Xiong et al. Apr 2010 A1
20100158101 Wu et al. Jun 2010 A1
20100262711 Bouazizi Oct 2010 A1
20110307545 Bouazizi Dec 2011 A1
20140207966 Hurst et al. Jul 2014 A1
20150058496 Hurst et al. Feb 2015 A1
Foreign Referenced Citations (23)
Number Date Country
2466482 May 2003 CA
0 711 077 May 1996 EP
0 919 952 Jun 1999 EP
1202487 Oct 2001 EP
1395014 Aug 2002 EP
1298931 Feb 2003 EP
1298931 Apr 2003 EP
1 641 271 Mar 2006 EP
1 670 256 Jun 2006 EP
1 777 969 Apr 2007 EP
2367219 Sep 2000 GB
2000201343 Jul 2000 JP
200192752 Apr 2001 JP
2004054930 Feb 2004 JP
2011004225 Jan 2011 JP
WO 0067469 Nov 2000 WO
2001067264 Sep 2001 WO
2003003760 Jan 2003 WO
2003009581 Jan 2003 WO
2003027876 Apr 2003 WO
2004025405 Mar 2004 WO
2004036824 Apr 2004 WO
2006010113 Jan 2006 WO
Non-Patent Literature Citations (64)
Entry
Bill Birney, “Intelligent Streaming”, Microsoft Corporation, May 2003 (Year: 2003).
USPTO, Office Action in U.S. Appl. No. 15/156,079 dated Feb. 16, 2017.
USPTO, Notice of Allowance and Fee(s) Due in U.S. Appl. No. 14/222,245 dated Apr. 12, 2017.
USPTO, Office Action in U.S. Appl. No. 15/414,025 dated Sep. 20, 2017.
USPTO, Office Action for U.S. Appl. No. 14/531,804, dated May 11, 2015.
USPTO, Notice of Allowance and Fee(s) Due for U.S. Appl. No. 14/106,051 dated Feb. 24, 2015.
USPTO, Final Office Action for U.S. Appl. No. 14/222,245 dated Mar. 18, 2015.
U.S. Patent and Trademark Office, Non-Final Office Action, dated Oct. 24, 2014 for U.S. Appl. No. 14/222,245.
Canadian Intellectual Property Office, Office Action, dated Sep. 10, 2014 for Canadian Application No. 2564861.
USPTO “International Search Report” dated Dec. 12, 2008; International Appln. No. PCT/US2008/061035, filed Apr. 21, 2008.
Australian Government “Examiner's First Report” dated Oct. 17, 2011; Australian Patent Appln. No. 2011213730.
Korean Intellectual Property Office “Official Notice of Preliminary Rejection” dated Jul. 28, 2011; Korean Patent Appln. No. 10-2006-7025274.
Japan Patent Office “Notice of Rejection Ground” dated Apr. 26, 2011; Japanese Patent Appln. No. 2007-511070.
Fujisawa, Hiroshi et al. “Implementation of Efficient Access Mechanism for Multiple Mirror-Servers” IPSJ SIG Technical Report, vol. 2004, No. 9 (2004-DPS-116), Jan. 30, 2004, Information Processing Society of Japan, pp. 37-42.
Liu, Jiangchuan et al. “Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast,” School of Computing Science, Simon Fraser University, British Columbia, Canada.
USPTO International Searching Authority “International Search Report and Written Opinion,” dated Nov. 5, 2008; International Appln. No. PCT/US2008/009281, filed Aug. 1, 2008.
Zhang, Xinyan et al. “CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming” IEEE 2005.
Guo, Yang “DirectStream: A Directory-Based Peer-To-Peer Video Streaming Service” LexisNexis, Elsevier B.V. 2007.
Lu, Jiangchuan et al. “Adaptive Video Multicast Over the Internet” IEEE Computer Society, 2003.
Rejaie, Reza et al. “Architectural Considerations for Playback of Quality Adaptive Video Over the Internet” University of Southern California, Information Sciences Institute, 1998.
Roy, Sumit et al. “A System Architecture for Managing Mobile Streaming Media Services” Streaming Media Systems Group, Hewlett-Packard Laboratories, 2003.
Xu, Dongyan et al. “On Peer-to-Peer Media Streaming” Department of Computer Sciences, Purdue University, 2002.
Kozamernik, Franc “Media Streaming Over the Internet—An Overview of Delivery Technologies” EBU Technical Review, Oct. 2002.
Lienhart, Rainer et al. “Challenges in Distributed Video Management and Delivery” Intel Corporation, EECS Dept., UC Berkeley, 2000-2002.
Japan Patent Office “Final Office Action” dated Feb. 28, 2012 in Patent Application No. 2007-511070 filed on Oct. 26, 2006.
Japan Patent Office “Interrogation” dated Nov. 6, 2012 in Patent Application No. 2007-511070 filed on Oct. 26, 2006.
Canadian Intellectual Property Office “Office Action” dated Sep. 9, 2013 in Patent Application No. 2,564,861 filed on Oct. 30, 2006.
USPTO “Office Action” dated Sep. 13, 2013 in U.S. Appl. No. 13/757,571, filed Feb. 1, 2013.
USPTO “Notice of Allowance” dated Jun. 24, 2014 in U.S. Appl. No. 13/757,571, filed Feb. 1, 2013.
European Patent Office “Extended Search Report” dated Jul. 10, 2014 in Patent Application No. 12154559.4 filed on Sep. 20, 2002.
Nguyen, Thinh, “Multiple Sender Distributed Video Streaming” in IEEE Transactions on Multimedia, vol. 6, No. 2, Published Apr. 2, 2004.
Weblio, The Meaning of Performance Factor—English-Japanese Weblio Dictionary, [online], Feb. 24, 2012; retrieved from the internet—URL:http://ejje.weblio.jp/content/performance+factor.
Masato Tsuru et al., Recent Evolution of the Internet Measurement and Inference Techniques, IEICE Technical Report, vol. 103, No. 123 (IN2003-16 to 23), IEICE, Jun. 12, 2003, pp. 37 to 42, ISSN: 0913-05685.
Takeshi Yoshimura et al., Mobile Streaming Media CDN Enabled by Dynamic SMIL, WWW2002, May 7-11, 2002; retrieved from the Internet at http://www2002.org/CDROM/refereed/515.
Canadian Intellectual Property Office, Office Action, dated Oct. 15, 2012 for Patent Application No. 2,564,861.
Clement, B., Move Networks Closes $11.3 Million on First Round VC Funding, Page One PR, Move Networks, Inc. Press Releases, Feb. 7, 2007, http://www.move.tv/press/press20070201.html.
Move Networks, Inc., The Next Generation Video Publishing System, Apr. 11, 2007; http://www.movenetworks.com/wp-content/uploads/move-networks-publishing-system.pdf.
U.S. Patent and Trademark Office, Non-Final Office Action, dated Aug. 7, 2014 for U.S. Appl. No. 14/106,051.
Final Office Action for U.S. Appl. No. 11/673,483, dated Feb. 4, 2010, 21 pages.
Advisory Action for U.S. Appl. No. 11/673,483, dated Apr. 9, 2010, 3 pages.
Advisory Action for U.S. Appl. No. 11/673,483, dated May 26, 2010, 3 pages.
Notice of Allowance for U.S. Appl. No. 11/673,483, dated Aug. 5, 2010, 7 pages.
Wicker, Stephen B., “Error Control Systems for Digital Communication and Storage”, Prentice-Hall, Inc., New Jersey, USA, 1995 (Book: see NPL's Parts 1-6).
PCT Notification of Transmittal of the International Search Report and Written Opinion of the International Searching Authority, for PCT/US05/15091, dated Oct. 29, 2007, 8 pages.
PCT Notification of Transmittal of International Preliminary Report on Patentability, for PCT/US05/15091, dated Oct. 29, 2007, 6 pages.
Office Action for U.S. Appl. No. 11/673,483, dated Jul. 9, 2009, 14 pages.
Office Action for U.S. Appl. No. 11/673,483, dated Feb. 3, 2009, 9 pages.
Albanese, Andres, et al. “Priority Encoding Transmission”, TR-94-039, Aug. 1994, 36 pages, International Computer Science Institute, Berkeley, California.
Puri, Rohit, et al. “Multiple Description Source Coding Using Forward Error Correction Codes”, Oct. 1999, 5 pages, Department of Electrical Engineering and Computer Science, University of California, Berkeley, California.
Goyal, Vivek K., “Multiple Description Coding: Compression Meets the Network”, Sep. 2001, pp. 74-93, IEEE Signal Processing Magazine.
Supplemental European Search Report, dated Sep. 30, 2008, (3 pages).
Pathan, Al-Mukaddim et al., “A Taxonomy and Survey of Content Delivery Networks”, Australia, Feb. 2007. Available at http://www.gridbus.org/reports/CDN-Taxonomy.pdf.
On2 Technologies, Inc., “TrueMotion VP7 Video Codec”, White Paper, Document Version 1.0, Jan. 10, 2005, (13 pages).
USPTO, Notice of Allowance and Fee(s) Due in U.S. Appl. No. 15/414,025 dated Feb. 15, 2018.
Krasic et al., Quality-Adaptive Media Streaming by Priority Drop, Oregon Graduate Institute, 2001.
Krasic et al., QoS Scalability for Streamed Media Delivery, Oregon Graduate Institute School of Science & Engineering Technical Report CSE 99-011, Sep. 1999.
Huang et al., Adaptive Live Video Streaming by Priority Drop, Portland State University PDXScholar, Jul. 21, 2003.
Walpole et al., A Player for Adaptive MPEG Video Streaming Over the Internet, Oregon Graduate Institute of Science and Technology, Oct. 25, 2012.
Roy, S., et al., “Architecture of a Modular Streaming Media Server for Content Delivery Networks,” 2002 IEEE. Published in the 2003 International Conference on Multimedia and Expo ICME 2001.
Bommaiah, E., et al., “Design and Implementation of a Caching System for Streaming Media over the Internet,” 2000 IEEE. Published in RTAS '00 Proceedings of the Sixth IEEE Real Time Technology and Applications Symposium (RTAS 2000), p. 111.
Claim chart referencing claim 16 of U.S. Pat. No. 9,071,668 submitted on Aug. 31, 2018.
United States Patent and Trademark Office, Notice of Allowance for U.S. Appl. No. 14/222,245, dated Jul. 18, 2018.
United States Patent and Trademark Office, Notice of Allowance for U.S. Appl. No. 15/679,079, dated Aug. 31, 2018.
USPTO, Notice of Allowance and Fee(s) Due for U.S. Appl. No. 15/156,079 dated Jun. 30, 2017.
Related Publications (1)
Number Date Country
20160323341 A1 Nov 2016 US
Provisional Applications (1)
Number Date Country
60566831 Apr 2004 US
Continuations (2)
Number Date Country
Parent 14516303 Oct 2014 US
Child 15207172 US
Parent 11116783 Apr 2005 US
Child 14516303 US