Reduction of latency in video distribution networks using adaptive bit rates

Information

  • Patent Grant
  • Patent Number
    9,204,203
  • Date Filed
    Tuesday, April 3, 2012
  • Date Issued
    Tuesday, December 1, 2015
Abstract
Systems and methods are provided for reducing and controlling playback latency in an unmanaged, buffered data network. A delay cost function is determined, the function representing the effect of playback latency on end user experience. An encoder transmits audiovisual data through the network to a client device. Network latency is measured, and the delay cost function is evaluated to establish an encoding bitrate for the encoder. The encoding of the audiovisual data is altered in response to dynamic network conditions, thereby controlling end-to-end playback latency of the system, which is represented by the playout length of data buffered between the encoder and the client device.
Description
TECHNICAL FIELD

The present invention relates to reducing playback latency in video distribution networks, and more particularly to adjusting an audiovisual encoding bitrate based on a detected network latency using a delay cost function that is indicative of the effect of playback latency on an end user experience.


BACKGROUND ART

Interactive television services provide a television viewer the ability to interact with their television. Such services have been used, for example, to provide navigable menuing systems and ordering systems that are used to implement electronic program guides and on-demand and pay-per-view program reservations without the need to call a television provider. These services typically employ an application that is executed on a server located remotely from the viewer. Such servers may be, for example, located at a cable television headend. The output of the application is streamed to the viewer, typically in the form of an audiovisual MPEG Transport Stream. This allows the stream to be displayed on virtually any client device that has MPEG decoding capabilities, including a television set top box. The client device allows the user to interact with the remote application by capturing keystrokes and passing these back to the application.


The client and the server are, in cable deployments, separated by a managed digital cable-TV network that uses well-known protocols such as ATSC or DVB-C. Here, ‘managed’ means that any bandwidth resources required to provide these services may be reserved prior to use. Once resources are allocated, the bandwidth is guaranteed to be available, and the viewer is assured of receiving a high-quality interactive application experience.


In recent years, audio-visual consumer electronics devices increasingly support a Local Area Network (LAN) connection, giving rise to a new class of client devices: so-called “Broadband Connected Devices”, or BCDs. These devices may be used in systems other than the traditional cable television space, such as on the Internet. For example, consider FIG. 1, in which a client device 110 (such as a Blu-ray player) implements a client application 112 to deliver audiovisual applications streamed over a public data network 120 from an audiovisual application streaming server 130 to a television 140. A user may employ a remote control 142 in conjunction with the client device 110 to transmit interactive commands back to the application streaming server 130, thereby controlling the content interactively.


However, because public data networks are not managed in the same way that private cable systems are, challenges arise. The transport protocols that are commonly used on the open Internet (such as TCP or RTSP) do not support bandwidth reservation. Since bandwidth cannot be guaranteed, the application server is not assured that the network connection can deliver the requested bandwidth. The actual throughput of an Internet connection can vary from second to second depending on many factors, including: network congestion anywhere between the application server and the client device; high-throughput downloads or uploads sharing the same physical Internet connection as the client device (e.g., an ADSL line); mechanisms at lower (data link) layers that introduce delay, for example Automatic Repeat reQuest (ARQ) mechanisms in (wireless) access protocols; lost packets at any link between the client and the server; Transmission Control Protocol (TCP) state and more specifically TCP congestion window size; and reordering of packets caused by any link between the client and the server. To the server streaming the data to the client device, these factors all manifest themselves as fluctuations in actual achieved throughput. Small fluctuations can be addressed by using sufficient buffering; however, buffering causes larger end-to-end delays (the time between the moment a user presses a remote control button and the moment the screen update resulting from the key press has been rendered on the user's screen). Delays as short as five seconds may result in an unpleasant viewer experience in some applications such as an electronic program guide, while delays of even one-half second may be extremely noticeable in high-performance gaming applications. Further, the use of such buffering cannot compensate for large fluctuations in throughput.



FIGS. 2-4 illustrate an example of the type of end-to-end playback latency in a typical network system, such as that of FIG. 1, during a transient network outage. There are three sources of playback latency: a server buffer that represents a source of pre-transmission latency; a network buffer that represents transmission latency in the public data network; and a client buffer that represents post-transmission latency in the client device before the audiovisual data are shown. Because of these sources of latency, at a time T1, as the application streaming server 130 generates data for display, the client device 110 is displaying data generated at an earlier time T0. The data that have been generated but not yet viewed are distributed in the three buffers awaiting display. The data themselves are visually represented and discussed in terms of video frames for ease of understanding.



FIG. 2 shows the system operating normally at time T1, just before a network outage occurs between the public data network 120 and the client device 110. FIG. 3 shows the system at a time T2 that is 200 ms later, at the end of the network outage. FIG. 4 shows the system at a time T3 that is another 30 ms later (that is, 230 ms after the start of the outage), after the network has had a chance to transmit some of its buffered data to the client device. These figures are now described in more detail.


More particularly, FIG. 2 shows a server buffer, a network buffer, and a client buffer at a time T1. This network is operating in equilibrium: on average, application server 130 generates one frame of video data in the length of time that each frame of video is displayed on the client device 110 (typically, 1/30 of a second). There are 180 ms of buffered playout data in this Figure: 50 ms in the server buffer, 80 ms in the network buffer, and 50 ms in the client buffer. To be even more specific, the 50 ms of data in the server buffer represent data generated in the 50 ms prior to time T1. Thus, the server buffer contains data spanning the playback range (T1−50 ms, T1), and the first frame of data in the server buffer was generated at time T1−50 ms, as indicated. The data in the network buffer were generated over the 80 ms prior, and therefore span the playback range (T1−130 ms, T1−50 ms). The data in the client buffer were generated over the 50 ms prior, and span the playback range (T1−180 ms, T1−130 ms). Therefore, the display device 140 is playing out the video frame for T0=T1−180 ms from the top of the client buffer. Assuming that the system continues operating with these latencies, and assuming that the application server can generate a frame instantly in response to user input, a keystroke entered using remote control 142 at time T1 will cause a visible reaction on the display device 140 at time T1+180 ms. That is, the keystroke will have a visible effect as soon as the buffered frames have emptied out of the three buffers onto the display, and the new frame can be displayed. Thus, the system as shown includes a response time of just under two tenths of a second. This delay is barely noticeable for an electronic program guide application.


Continuing the example, suppose a network outage between the network buffer and the client occurs immediately after the time T1, and lasts for 200 ms. At this point, the buffers may appear as in FIG. 3. Here, the client has drained its 50 ms of data, and playout is paused at T1−130 ms. It has been paused there for 150 ms (i.e., the amount of time that has elapsed for which it has not received any data). Meanwhile, the server has generated 200 ms of additional audiovisual data for playback. Based on the particular bandwidths available in the network during the outage, only 110 ms of playback have been sent to the network. Thus, the network buffer has 190 ms of stored data: 110 ms of new data, plus the 80 ms that it had at the beginning of the outage. No data have been sent from the network buffer to the client buffer, so the network buffer has data for 190 ms of playback in the range (T1−130 ms, T1+60 ms). In FIG. 2 the server buffer had data that began at T1−50 ms. In the intervening 200 ms, 110 ms of data have passed through the buffer and 90 ms of data have accumulated there. These 90 ms are in addition to the 50 ms already there, so the server buffer now has 140 ms of playback data. These data span the range (T1+60 ms, T1+200 ms).


The 200 ms of playback generated by the server during the outage have been buffered in the network. The 200 ms of data are split between the server buffer (90 ms of increase) and the network buffer (110 ms of increase). The total non-client buffering has increased from only 130 ms (about ⅛ of a second) to 330 ms (about ⅓ of a second).


Thus, after the outage has been resolved, an additional 200 ms of data will be buffered in the system. This can be seen in FIG. 4, which corresponds to the state of the system 30 ms after the outage. In these 30 ms, the network provided enough bandwidth to the client to transfer 50 ms of playback data, which are seen in the client buffer. These data span the range (T1−130 ms, T1−80 ms). The client has just received enough data to safely resume playback, so playback is resumed at T1−130 ms. Looking at the network buffer, 50 ms of playout data have been sent to the client, but 50 ms of playout data have been received from the server, so the network buffer still has 190 ms of data, now spanning the range (T1−80 ms, T1+110 ms). Meanwhile, the server has generated an additional 30 ms of data and transmitted 50 ms of data to the network, so the server buffer has 120 ms of data spanning the range (T1+110 ms, T1+230 ms).


From these figures it is clear that a buffer underrun at the client can lead to playback latency buildup. The system of FIG. 2 held 130 ms of delay in the server and network buffers, but by the end of FIG. 4, when playback resumed, an additional 230 ms of data had been generated and buffered in the system (while the client's original 50 ms were played out during the stall). Thus, in FIG. 4, there are 130 ms+230 ms=360 ms of total latency in the system, distributed among the three buffers. This playback latency buildup occurs for each client buffer underrun, and such buildups are cumulative. This is a highly undesirable situation for interactive applications.
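
The arithmetic of FIGS. 2-4 can be replayed in a short Python sketch. The bookkeeping function below is purely illustrative; the per-interval transfer amounts are the throughputs the figures assume, not values the system derives.

```python
# Playout milliseconds held in the (server, network, client) buffers.

def step(buffers, generated, server_to_net, net_to_client, played):
    """Add `generated` ms at the server, shuttle data downstream, and play out
    `played` ms at the client; returns the new buffer occupancies."""
    server, network, client = buffers
    return (server + generated - server_to_net,
            network + server_to_net - net_to_client,
            client + net_to_client - played)

state = (50, 80, 50)                    # FIG. 2: equilibrium at time T1
print("T1:", state, sum(state))         # (50, 80, 50) -> 180 ms total

# FIG. 3: a 200 ms outage. The server generates 200 ms and pushes 110 ms into
# the network; nothing reaches the client, which drains its 50 ms and stalls.
state = step(state, generated=200, server_to_net=110, net_to_client=0, played=50)
print("T2:", state, sum(state))         # (140, 190, 0) -> 330 ms total

# FIG. 4: 30 ms after the outage. The network delivers 50 ms to the client,
# the server generates 30 ms more and sends 50 ms; playback just resumes.
state = step(state, generated=30, server_to_net=50, net_to_client=50, played=0)
print("T3:", state, sum(state))         # (120, 190, 50) -> 360 ms total
```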


The prior art does not adequately solve this problem. The client cannot simply skip individual frames because typical encoding schemes, such as MPEG, may encode each frame based on the data contained in previous and subsequent frames. The client could skip to its next intracoded frame, but these frames may be infrequent, and in any event such a strategy might be jarring for the viewer watching the stream. The server cannot pause frame generation, since it has no indication of the playout problems at the client. A new approach is therefore needed.


SUMMARY OF THE EMBODIMENTS

Various embodiments of the invention optimize playback latency across an unmanaged network as a function of measured network latency and available network bandwidth. Users tolerate variations in playback latency differently for different applications, such as interactive channel menus and program guides, video games, billing systems and the like. Thus, in accordance with various embodiments, these variations are captured in a delay cost function, and playback latency is optimized based on the application. The delay cost function represents the effect of playback latency on the user experience.


Ideally, zero latency across all applications would be optimal, but this is not possible in practice because the data network introduces network latencies that are unknown in advance, and uncontrollable by the application. Thus, the systems and methods disclosed herein take measurements of the data network, and adjust playback latency accordingly. In some embodiments, playback latency is adjusted by varying the amount of new frame data being placed into the network for transmission. In others, playback latency is adjusted by notifying the application generating the source data, so that the source data themselves are modified.


Thus, in a first embodiment of the invention there is provided a method of controlling playback latency associated with transmission of source audiovisual data through an unmanaged, buffered data network. The method includes: encoding the source audiovisual data, according to an encoding bitrate, into transmission audiovisual data; transmitting the transmission audiovisual data to a client device through the data network; calculating a delay cost function based on a network latency associated with the data network; and altering the encoding of the source audiovisual data based on the calculated delay cost function.


Encoding may be performed according to an MPEG standard. The client device may be, among other things, a television set top box, a television, a personal computer, a tablet computer, a smartphone, or an optical disc player such as a Blu-ray or DVD player or game console. The unmanaged, buffered data network may be at least one of a cable data network, a broadcast wireless data network, a point-to-point wireless data network, a satellite network, and a portion of the Internet. Or, the unmanaged, buffered data network may be coupled to a managed data network that is capable of providing interactive television signals.


In a related embodiment, transmitting includes dividing the data of each video frame into at least one frame portion. In this embodiment, the method further calls for waiting to receive, from the client device, for each portion, an acknowledgement that the portion has been received by the client device, wherein the delay cost function is based on a length of time (“delay”) between the completion of the encoding of a portion and receipt from the client device of the acknowledgement of the portion. The delay cost function may be calculated as

cost = α + λ*(delay − rtt_min)^γ,

where α is a number that represents a minimum cost to transmit data, λ is a number that indicates a scaling factor of the delay cost function, γ is a number that indicates a curvature of the delay cost function, and “rtt_min” is a minimum round trip time associated with the data network. In a different embodiment, the bitrate cost is based not on the delay, but on variation of a round trip time (“rtt”) that is associated with the data network. In this case, the delay cost function may be calculated as cost = α + λ*(rtt − rtt_min)^γ.


In a further embodiment, transmitting is performed in accordance with the Transmission Control Protocol (“TCP”), and the method calls for calculating an estimated available bandwidth as

estimate=8*mss*cwnd/rtt,

where “mss” is a TCP Maximum Segment Size, “cwnd” is a TCP Congestion Window Size, and “rtt” is a round trip time associated with the data network. An encoding bitrate is established, equal to the ratio of “estimate” to the delay cost function. Altering the encoding of the source audiovisual data may include altering the encoding bitrate to be no greater than the established encoding bitrate. If the source audiovisual data include source audio data having an audio bitrate and source video frames, altering the encoding of the source audiovisual data may instead include altering a video encoding bitrate so that the sum of the audio bitrate and the video encoding bitrate is no greater than the established encoding bitrate.
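
By way of illustration, the following Python sketch combines the cost function and bandwidth estimate above into the established-bitrate rule. All names and parameter values are illustrative assumptions, and the normalization of the delay term by rtt_min is one option noted in the detailed description below; this is a sketch, not a definitive implementation.

```python
def delay_cost(delay_s, rtt_min_s, alpha=1.0, lam=0.1, gamma=4.0):
    """cost = alpha + lam * ((delay - rtt_min) / rtt_min) ** gamma.
    Dividing by rtt_min makes the term dimensionless before exponentiation."""
    x = max(delay_s - rtt_min_s, 0.0) / rtt_min_s
    return alpha + lam * x ** gamma

def tcp_bandwidth_estimate(mss_bytes, cwnd_segments, rtt_s):
    """estimate = 8 * mss * cwnd / rtt, in bits per second (cwnd in segments)."""
    return 8 * mss_bytes * cwnd_segments / rtt_s

def established_bitrate(delay_s, rtt_min_s, mss_bytes, cwnd_segments, rtt_s):
    """Established encoding bitrate = bandwidth estimate / delay cost."""
    return (tcp_bandwidth_estimate(mss_bytes, cwnd_segments, rtt_s)
            / delay_cost(delay_s, rtt_min_s))

# 1460-byte segments and a 40-segment window over a 100 ms round trip give an
# estimate near 4.7 Mbps; the established bitrate collapses as delay grows.
for delay in (0.10, 0.20, 0.40):
    bps = established_bitrate(delay, rtt_min_s=0.10, mss_bytes=1460,
                              cwnd_segments=40, rtt_s=0.10)
    print(f"delay {delay:.2f} s -> {bps / 1e6:.2f} Mbps")
```

With these parameters the established bitrate tracks the TCP estimate while the delay stays near rtt_min, and falls off steeply as the delay grows, mirroring the curve of FIG. 8.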


In another embodiment, the source audiovisual data are generated by an interactive software application according to the established encoding bitrate, and the method further includes notifying the application that the encoding bitrate has been increased or decreased. The application responds by altering the source audiovisual data that it produces, including adjusting various screen objects and optimizing the generation of dynamic elements. When the established encoding bitrate is lower than a given threshold defined by the application, the application may alter the source audiovisual data by not generating a transparent screen object. If the source audiovisual data include a graphical user interface, then when the established encoding bitrate is lower than a given threshold defined by the application, the application may alter the source audiovisual data by postponing the generation of a dynamic screen region in the graphical user interface. In yet another embodiment, the method includes pausing the encoder when the delay cost function falls below a given threshold.


System embodiments and computer program product embodiments that perform the above methods are also disclosed.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:



FIG. 1 is a system diagram showing an environment in which some embodiments of the invention may be used;



FIG. 2 is a latency diagram showing three sources of end-to-end latency in a typical network system that is performing normally;



FIG. 3 is a latency diagram showing latencies in the system of FIG. 2 a short time later, after a blockage has developed between the network buffer and the client device;



FIG. 4 is a latency diagram showing latencies in the system of FIG. 3 a short time later, after the blockage has been corrected;



FIG. 5 is a functional diagram showing relevant functional components of a particular embodiment of the invention;



FIG. 6 is a functional diagram showing relevant functional components of an alternate embodiment;



FIG. 7 is a flowchart showing the processes associated with a method of reducing latency in accordance with an embodiment of the invention;



FIG. 8 is a graph of a delay cost function used to respond to detected latency in one embodiment;



FIG. 9 is a graph of a delay cost function used to respond to detected latency in an alternate embodiment;



FIG. 10 is a flowchart showing four processes associated with a method of detecting a client buffer underrun in another embodiment; and



FIG. 11 shows the relationship between two timestamps used to detect the client buffer underrun.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

As used in this description and the accompanying claims, the following terms shall have the meanings indicated, unless the context otherwise requires:


Playback latency refers to a delay, experienced by the user of an interactive audiovisual application displayed on a display device, between an input provided by the user and a reaction in the displayed audiovisual information. In more concrete terms, one kind of playback latency is the time between pressing a “channel menu” key on a television remote control and the actual appearance of the menu on a television screen. Assuming that the application generating the displayed video reacts instantly to the receipt of the key press, the television nevertheless does not react immediately, because some video data are in transit between the application and the television, buffered in various locations. These buffered data represent the playback latency. Playback latency is typically measured in frames of video or audio data.


Network latency refers to a delay resulting from transmission of data through a data network. Because data networks are made of physical components, and because these components can only transmit information at a maximum physical speed, there is a delay caused by the transit time of data through a network. Additionally, if the receiving system cannot receive the data as fast as the sending system is generating it, or for other reasons, the data network itself may buffer data, slowing its transit. These delays give rise to network latency. Network latency is typically measured in milliseconds (ms).


Bandwidth refers to the rate at which data may be transmitted through a data network. Bandwidth is independent of latency. For example, an interplanetary data network may have large network latency but also a high bandwidth, while a serial cable connecting a keyboard to a computer may have a very low latency but also a low bandwidth. Public data networks like the Internet generally strive to have a high bandwidth and a low network latency; that is, the ability to move ‘as much data’ as possible, ‘as quickly’ as possible.


A managed data network is one that can guarantee a particular bandwidth will be available. Typical managed networks, such as cable networks based on ATSC or QAM, use bandwidth reservation protocols, such as the Resource Reservation Protocol (RSVP), to guarantee availability. By contrast, an unmanaged data network is one that cannot make such a guarantee. Most public data networks are unmanaged.


Various embodiments of the invention optimize playback latency across an unmanaged network as a function of available network bandwidth and measured network latency. Users tolerate variations in playback latency differently for different applications, such as interactive channel menus and program guides, video games, billing systems and the like. Thus, in accordance with various embodiments, these variations are captured in a delay cost function, one for each application, and playback latency is optimized based on the application.


Ideally, zero latency across all applications would be optimal, but this is not possible in practice because the data network introduces network latencies that are unknown in advance, and uncontrollable by the application. Thus, the systems and methods disclosed herein carefully balance the amount of playback latency generated by a streaming server, as a function of the current state of the network. In particular, these systems take measurements of the data network, and adjust playback latency accordingly. In some embodiments, playback latency is adjusted by varying the amount of new frame data being placed into the network for transmission. In others, playback latency is adjusted by notifying the application that is generating the source data, so that the source data themselves are modified.


In addition to addressing network latency, different embodiments account for variations in the network bandwidth. Audiovisual data are produced at one bitrate, transmitted through the data network at a different and uncontrolled bitrate (namely the instantaneously available network bandwidth), and consumed at a third bitrate. In addition to balancing the playback latency, systems and methods disclosed herein also carefully balance the amount of data being put into the data network against fluctuations in network bandwidth to avoid client data underruns. While client data underruns are preferably avoided, various methods for dealing with them are also discussed.



FIG. 5 is a functional diagram showing relevant functional components of a particular embodiment of the invention. Similar to FIG. 1, there is a client device 510, a data network 520, a display device 530, and one or more servers 540 that produce audiovisual data for display on the display device. The audiovisual data may be, for example, MPEG-encoded data.


The client device 510 may include, among other things, a television set top box, a broadband connected television, a personal computer, a tablet computer, a smartphone, or an optical disc player such as a DVD player or Blu-ray player (either as a standalone unit or as part of a video game console). The client device 510 should be able to receive the audiovisual data and convert it into video and audio signals that may be shown and/or heard on display device 530. Thus, the client device may be a ‘thin client’, as that phrase is known in the art. A buffer 512 in the client device corresponds to the client buffer of FIGS. 2-4, and is used to delay frames of data until the proper time to transmit them to the display device 530 for immediate display. A decoder 514 receives audiovisual data (which has been encoded according to an encoding, such as MPEG) and decodes it into video and audio data that are placed into the buffer 512. An input/output unit 516 receives user input commands (for example, from a remote control) and transmits them to an interactive software application, in the one or more servers 540, to control the received video and audio. The buffer 512, decoder 514, and I/O unit 516 may be implemented in hardware, software, or a combination of these.


The data network 520 may include a cable data network, a broadcast wireless data network, a point-to-point wireless data network, a satellite network, or a portion of the Internet. In some embodiments, the data network 520 is coupled to a managed data network that is capable of providing interactive television signals to a viewer. Thus, for instance, the data network 520 may be the Internet, and a managed cable television network, including a cable headend, may be interposed between the data network 520 and the client device 510. In this embodiment, the servers 540 are remote from the cable headend, yet are controlled directly by the viewer. This kind of arrangement is described in more detail in U.S. patent application Ser. No. 10/253,109, filed Sep. 24, 2002 and titled “Interactive Cable System with Remote Processors,” the contents of which are incorporated herein by reference in their entirety. Alternatively, a managed cable television network may be interposed between the data network 520 and servers 540. This embodiment corresponds, for example, to a situation in which a cable company transmits interactive signals to neighborhood signal distribution nodes, but the nodes themselves aggregate bandwidth to individual homes or businesses in an unmanaged fashion. In this embodiment, data network 520 corresponds to the “last mile” of connectivity, as that phrase is known in the art. In such systems, bandwidth to an individual viewer cannot be guaranteed (managed) as a consequence of performing the aggregation, even if data network 520 is privately owned by the cable company. In a combined embodiment, the server(s) 540 are remote from a cable headend and the last mile is unmanaged, so the system has two sources of unmanaged network latency. Variations of the network topology, in accordance with other embodiments of the invention, may be contemplated by those having ordinary skill in the art. The data network 520 is thus unmanaged, in that bandwidth throughput cannot be guaranteed. The data network 520 is also a source of network latency, represented by the network buffer of FIGS. 2-4.


The display device 530 is configured to convert audiovisual signals into images and sounds. The display device 530 may be, among other things, a television, a computer monitor, or a smartphone display. However, it is not intended to limit the scope of the invention by enumerating these various embodiments, and a person of ordinary skill in the art may see how to adapt other technologies to meet the requirements of the display device 530.


The box 540 denotes a system of one or more computers (servers) that generate transmission audiovisual data in accordance with an embodiment of the invention. The functional components of the element 540 include an interactive software application 542, an encoder 544, and a delay cost module 548. Their representation as a single element does not limit the scope of the invention, as these functional components may be implemented in a variety of hardware and/or software environments using only one computing processor or using multiple processors, or using ASICs or FPGAs.


The interactive software application 542 is a hardware or software component that produces audiovisual data, and alters that audiovisual data in response to receiving interactive commands from the client device 510, as previously described. The application 542 may provide, for example, an electronic program guide or other graphical user interface, a billing system, an authorization mechanism, a video game, a web browser, an email client, a music browsing and purchasing application, Internet access, or another type of interactive application. The source audiovisual data may be raw frames of uncompressed video and/or audio data, or they may be compressed according to a compression algorithm known in the art.


The application 542 sends the source audiovisual data, according to a source bitrate, to an encoder 544. The encoder 544 encodes the source audiovisual data, according to an encoding bitrate, to produce transmission audiovisual data. The transmission audiovisual data are formatted for distribution according to a particular encoding standard, such as MPEG, that the client device 510 (and more particularly, decoder 514) is capable of decoding for playback.


Transmission audiovisual data are stored in a transmission buffer 546 that corresponds to the server buffer of FIGS. 2-4. This buffer 546 may be used to store data for transmission in the event that the network 520 has insufficient bandwidth to transmit data at the encoding bitrate used by the encoder 544. Data are added to the buffer 546 at the instantaneous encoding bitrate, and removed from the buffer 546 at the instantaneous network transmission bitrate. A ring buffer may be used for this purpose.


In the case that the encoding output bitrate is greater than the network throughput, buffer 546 stores the data that cannot be transmitted immediately. In general, it is very difficult to determine when sufficient network throughput is available. The underlying transport protocol, which is typically TCP, can best be seen as elastic. That is, when less data is offered to TCP for transmission, it will allow other competing connections to take a larger share. Similarly, when more data is offered to TCP for transmission, it will take more bandwidth. However, TCP cannot take more throughput than it rightfully may claim according to the fair sharing principles that have been built into its algorithms. The only effect that is observable to a server 540 is that when more data is queued for transmission than TCP can allow (as a collective of all connections competing for the same bandwidth), the data will queue up in the transmit buffer.


As will be appreciated by those having skill in the art of video encoding, different frames of video have different sizes based on their content, and sometimes based on the content of other video frames. Thus, the instantaneous encoding bitrate of the encoder 544 may vary over time, and may differ from an established (target) encoding bitrate. Similarly, the instantaneous available bandwidth of network 520 may vary over time. However, as long as buffer 546 does not completely empty, the rate of data being transmitted to data network 520 over time should converge to the established encoding bitrate.


The amount of playback latency in the buffer 546 should be inversely related to the encoding bitrate, for two reasons. First, if the buffer 546 starts to fill up, further queuing should be avoided because it introduces additional playback latency that manifests as ‘sluggishness’ in the user experience. Second, if the buffer 546 often does not hold any data at all because the data network 520 has extra throughput, then the server(s) 540 should try to grab a bit more bandwidth from other, competing TCP connections in the data network 520 by requesting transmission of playback frames having more data, thereby increasing the visual quality of the video.


Thus, in accordance with various embodiments of the invention, a bitrate control algorithm monitors the amount of data that is queuing up, and establishes the encoding bitrate of encoder 544. The encoding bitrate for the encoder 544 changes as a function of a delay cost, and a delay cost module 548 is employed to calculate this function. The delay cost function may be broadly viewed as a measure of the cost of increased playback latency to an end user experience. This cost varies from one application to the next. Applications for which end users are sharply intolerant of playback latency beyond a small number of frames, such as video games, may have a delay cost function shaped as in FIG. 8 (which is described in more detail below). Applications for which end users are relatively tolerant of delay in response to key presses, such as an interactive program guide, may have a delay cost function shaped as in FIG. 9 (also described below). Returning to FIG. 5, the delay cost module 548 is used to establish an encoding bitrate for the encoder 544 (and, in some embodiments, for the interactive software application 542). Thus, as the measured network latency in the network 520 increases, the established encoding bitrate decreases, restricting the flow of new data (and new playback latency) into the system.


Encoding bitrates that are less than the playback bitrate cause new audiovisual data to be produced by the server 540 more slowly than they are consumed by the client device 510. If this behavior is prolonged, it will cause the buffer 512 to empty. If left uncorrected, this condition will manifest itself on the display device 530 as a frozen image, which may be viewed by an end user as even more unpleasant than excess latency.



FIG. 6 is a functional diagram showing relevant functional components of an alternate embodiment. This embodiment is similar to that shown in FIG. 5, except that the encoder has been replaced by a stitcher/encoder 610. An example of a stitcher/encoder as known in the art may be found in U.S. patent application Ser. No. 12/008,697, filed Jan. 11, 2008, the contents of which are incorporated herein by reference in their entirety. A stitcher is a hardware or software functional module that acts as a multiplexer of sorts: it stitches several video or audio frames together to form a single output frame. Such stitchers are useful, for example, in an interactive program guide application. In this connection, interactive software application 542 produces a textual or graphical channel listing in response to interactive commands. The stitcher portion of stitcher/encoder 610 combines this listing with the video and/or audio of a channel preview whose data come from other audiovisual data sources 620. The encoder portion of stitcher/encoder 610 then encodes the stitched content according to the target average bitrate, and places it in the buffer 612 for transmission to the data network 520.



FIG. 7 is a flowchart showing the processes associated with a method of reducing and controlling playback latency in accordance with an embodiment of the invention. Such a flowchart may be, for example, embodied as computer program code that is executed in one or more of the functional modules of FIGS. 5 and 6. In process 710, the method starts with encoding source audiovisual data into transmission audiovisual data. In process 720, the method requires transmitting the transmission audiovisual data to a client device through a data network. Process 730 includes calculating a delay cost function based on a network latency associated with the data network. In process 740, the method requires altering the stitching or encoding of the source audiovisual data based on the calculated delay cost function. As indicated in FIG. 7, these processes may repeat, thereby providing a dynamic method of controlling playback latency in the unmanaged data network.


Various processes 730 for calculating the delay cost function are now discussed, with reference to FIGS. 8 and 9. FIG. 8 is a graph of a delay cost function used to respond to detected network latency in one embodiment in which a user cannot tolerate much delay. FIG. 9 is a graph of a delay cost function used to respond to detected network latency in an alternate embodiment in which a user is more tolerant of delay.


The delay cost is a function of actual network latency, which may be measured using a number of different techniques. As is known in the art, TCP sends data in packets of a limited size. Therefore, when an application attempts to send more data than can be sent at once, the TCP sender breaks the data into sequenced packets, and sends the packets in sequence. The TCP receiver then acknowledges packets based on the sequence. MPEG video frames are often larger than the TCP packet size, so frames are often broken into multiple packets for transmission.


In one embodiment, the network latency is calculated in terms of MPEG video frame portions. Server(s) 540 divide the data of each video frame into at least one frame portion, but often into many portions. Each frame portion is transmitted to the data network 520 as a TCP packet, and eventually reaches the client device 510 which acknowledges it. The server(s) 540 wait to receive, from the client device, the acknowledgement for each frame portion sent through the network.


In this embodiment, the delay cost function is based on a length of time between the completion of the encoding of a frame portion and receipt of the acknowledgement of the frame portion (as a packet). This length of time is called the “delay”. To calculate the delay, a list of all TCP packets having MPEG frame data is maintained, and packets that have not been acknowledged by the client TCP stack are counted toward the network latency measure. Frames for which no packets have been acknowledged add their full playout length (e.g. 1/30 of a second) to the network latency. A frame for which some packets have been acknowledged but not others is counted as a fraction of its full playout length, based on the number of bytes required to transmit that given frame and the number of bytes in the unacknowledged packets. As some frames require more data to encode than others, this calculation will differ from one frame to the next.
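
This accounting may be sketched as follows in Python; the Frame record and its field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    playout_ms: float     # playout length, e.g. 1000/30 for 30 fps video
    total_bytes: int      # bytes needed to transmit this encoded frame
    unacked_bytes: int    # bytes in TCP packets not yet acknowledged

def network_latency_ms(in_flight: list[Frame]) -> float:
    """Each frame contributes its playout length, prorated by the fraction of
    its bytes still unacknowledged by the client TCP stack."""
    return sum(f.playout_ms * f.unacked_bytes / f.total_bytes for f in in_flight)

# A fully unacked frame counts in full; a half-acked frame counts at half weight.
frames = [Frame(1000 / 30, 12_000, 12_000), Frame(1000 / 30, 4_000, 2_000)]
print(round(network_latency_ms(frames), 1))  # 33.3 + 16.7 = 50.0 ms
```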


Once the delay has been calculated, the delay cost may be calculated. In one embodiment, the delay cost has the formula cost = α + λ*(delay − rtt)^γ. Here, rtt is the round trip time measured for the data network 520, discussed above. In some embodiments, the overall minimum round trip time rtt_min is used instead of the instantaneously measured round trip time rtt. The term (delay − rtt) represents the buffering delay at the server side, while the term (delay − rtt_min) represents this delay plus an amount of buffering (rtt − rtt_min) that is attributable to network delay. In some embodiments, this term (in either form) may be divided by a characteristic time, such as rtt or rtt_min, so that it becomes a dimensionless number prior to exponentiation. The minimum round-trip time is accepted as a given: it consists of network buffers and propagation delay that the server cannot control.


The shape of the cost function ensures that as the sender buffering delay increases, the server's generated bitrate is decreased. The parameter α specifies an offset with respect to the TCP estimated bitrate, and represents a minimum cost to transmit data. Selecting a value for α that is less than 1.0 will cause the server bitrate to be higher than the estimated available bitrate, which is useful to get TCP to ‘stretch up’ its maximum bandwidth. The positive parameter λ specifies a scaling factor, to allow the buffering delay to have a larger effect on the resulting bitrate. The parameter γ determines the shape of the cost function, and in particular its curvature. A γ value of 1.0 will provide a linear cost, so the target output bandwidth response will be purely hyperbolic. A γ value higher than 1.0 creates a steeply falling target bandwidth curve (and a more rapid response to detected network latency). Conversely, a γ value lower than 1.0 provides a more gradual response to detected network latency.


For example, FIG. 8 is a graph of a delay cost function used by the system to respond to detected network latency in one embodiment in which a user cannot tolerate much playback latency. For example, this curve may represent user tolerance in a video game application. This particular curve corresponds to an instantaneous estimated TCP bitrate of 3.75 Mbps, delay measured in tenths of a second, with α=1, λ=0.1, and γ=4. Thus, when the network is performing optimally with zero delay, the target output bitrate is 3.75 Mbps. When there is 0.1 s of delay detected, the target output bitrate is 3.50 Mbps, because this level of latency is still acceptable. But when delay rises to 0.2 s, the target output bitrate has fallen drastically to about 1.50 Mbps, and by 0.4 s of delay, the transmission of new data has virtually stopped. Thus, fewer data accumulate in the server buffer than before the bitrate reduction, but they will be timelier. This reduction in bitrate results in continued high responsiveness to user inputs, even if the quality of the resulting video is degraded. In an application that is intolerant of high playback latency, this is the proper tradeoff.


By contrast, FIG. 9 is a graph of a delay cost function used to respond to detected network latency in an alternate embodiment in which a user is more tolerant of playback latency, such as an interactive program guide. The instantaneous estimated TCP bitrate is still 3.75 Mbps, but now α=1, λ=0.3, and most importantly, γ=1. Since γ=1, the curve has a hyperbolic shape, and the response to delay is more gradual. When 0.1 s of delay is detected, the target output bitrate has fallen off to 2.90 Mbps. At 0.2 s of delay, however, the target output bitrate is still at 2.40 Mbps, and at 0.4 s of delay the bitrate is at 1.75 Mbps. These latter two numbers are much higher than in the previous example, because an end user of this application is much more tolerant of playback latency.


The algorithm discussed above does not take into account TCP's round-trip time (rtt) variations. On some occasions, especially when the data network has large network buffers, the observed round-trip time can become orders of magnitude larger than the minimum round-trip time rtt_min that is associated with the network. This is undesired since the rtt adds to the end-to-end network latency. Therefore, an alternate embodiment uses a cost function based on the instantaneous round-trip time, rather than the delay value calculated from MPEG frames. Thus, the delay cost function for this embodiment has the formula cost = α + λ*(rtt − rtt_min)^γ, where α, λ, and γ are selected accordingly.


Various processes 740 of altering the encoding of the source audiovisual data are now discussed. In general, there are two different ways to alter the encoding of the source audiovisual data: to change the encoding bitrate, and to change the source audiovisual data themselves. These approaches are explained in turn.


Generally speaking, the established encoding bitrate for the encoder is expressed as a ratio between an estimated available bandwidth and the delay cost function that corresponds to the currently measured latency. If TCP is used, the actual available bandwidth is unknown (i.e., TCP is “unmanaged”), although other protocols may be used that give an exact measure of available bandwidth. One formula for providing a TCP estimated bandwidth is estimate=8*mss*cwnd/rtt, where “mss” is the TCP maximum segment size, “cwnd” is the TCP congestion window size, and “rtt” is a round trip time associated with the data network. The value of rtt may be measured using tools known in the art, such as the packet acknowledgement (ACK) mechanism. This formula operates on the principle that as network round trip time increases, TCP will buffer more data in its congestion window, and vice versa. As TCP stores data in bytes, multiplication by eight is necessary to yield a bitrate.


The actual reduction in encoder bitrate may be achieved in one of two ways. If the source audiovisual content is being encoded for the first time, the established encoding bitrate is passed directly to the encoder, which will encode the source audiovisual content accordingly. However, if the source data were pre-encoded, then the encoder will typically have access to a selection of different bitrates for the particular audiovisual content. When establishing a lower encoding bitrate, the playback latency in the server buffer is quantized, and each time a new quantized value is reached, the next lower bitrate stream is selected. When establishing a higher bitrate, if a certain percentage of consecutive frames have been displayed on time, then the next higher bitrate stream is selected. The percentage is configurable to make the algorithm more ‘adventurous’. For example, a particular video stored for on-demand playback may be pre-encoded at bitrates of 0.5 Mbps, 1.0 Mbps, and 2.0 Mbps, shown in FIG. 6 as data sources 620. Then, if the encoding bitrate is established at 1.5 Mbps, the stitcher/encoder 610 may stitch the 1.0 Mbps source data into the transmission audiovisual data stream, rather than the 2.0 Mbps source data. If the established encoding bitrate falls below 0.5 Mbps, or another given threshold, the encoder may be paused, so that no more data accumulate in the output buffer at all. By contrast, when the established encoding bitrate increases again above 2.0 Mbps, and (say) at least 75% of the frames were transmitted and displayed on time, the stitcher/encoder 610 may revert to using the higher-bitrate source data.
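
One possible form of this selection rule, keyed for brevity on the established bitrate rather than on the quantized server-buffer latency, is sketched below using the 0.5/1.0/2.0 Mbps ladder and the 75% on-time figure from the example; the function shape is an assumption.

```python
LADDER_BPS = [500_000, 1_000_000, 2_000_000]   # 0.5, 1.0, 2.0 Mbps sources
PAUSE_BELOW_BPS = 500_000                      # pause encoder under the ladder
UPSWITCH_ON_TIME = 0.75                        # configurable 'adventurousness'

def select_stream(established_bps, on_time_fraction, current_bps):
    """Return the pre-encoded source bitrate to stitch, or None to pause."""
    if established_bps < PAUSE_BELOW_BPS:
        return None                            # stop accumulating output data
    # Pick the highest rung not above the established bitrate.
    target = max(r for r in LADDER_BPS if r <= established_bps)
    if target > current_bps and on_time_fraction < UPSWITCH_ON_TIME:
        return current_bps                     # not enough on-time frames yet
    return target

print(select_stream(1_500_000, 0.90, 2_000_000))  # 1000000: step down
print(select_stream(2_500_000, 0.80, 1_000_000))  # 2000000: earned up-switch
print(select_stream(400_000, 0.90, 500_000))      # None: pause the encoder
```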


More subtle bitrate manipulations may be performed in other embodiments. For example, the source audiovisual data may include both audio and visual data, each having its own separate bitrate. When the established encoding bitrate is reduced or increased, an encoder may reduce or increase the encoding bitrate of only the video portion of the source audiovisual data, while leaving the audio bitrate intact. If the video encoding bitrate is reduced enough, then the sum of the audio bitrate and the (reduced) video encoding bitrate may be no greater than the established encoding bitrate. In this way, a lower (or higher) target may be met without switching between two streams that have vastly different bitrates (and markedly different video qualities), as was discussed above. Further, by maintaining a constant audio bitrate, one may establish a more precise encoding bitrate by determining a more accurate estimate of the available bandwidth in the data network 520. Systems and methods for doing so are taught in U.S. application Ser. No. 12/651,203, filed Dec. 31, 2009, the contents of which are incorporated herein by reference in their entirety.
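
A brief sketch of this audio-preserving adjustment (the function name is illustrative):

```python
def video_bitrate_bps(established_bps: int, audio_bps: int) -> int:
    """Hold the audio bitrate constant and give the video encoder the
    remainder, so the sum of audio and video never exceeds the budget."""
    return max(established_bps - audio_bps, 0)

print(video_bitrate_bps(1_500_000, 128_000))  # 1372000 bps left for video
```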


Turning to a second way of altering the encoding of the source audiovisual data, in some embodiments of the invention, the source audiovisual data themselves are adapted to changing bitrates. This method of content adaptation corresponds to the dashed line in FIG. 5, whereby the delay cost module 548 notifies the interactive software application 542 that a new encoding bitrate has been established. In such embodiments, the interactive software application 542 responds not just to interactive commands from an end user, but also to changes in the encoding bitrate. In these embodiments, the application 542 creates source audiovisual data that may be encoded using a lower (or higher) bitrate, thereby cooperating with the encoder 544 that is trying to determine the best way to respond to the same, lower (or higher) established bitrate.


When an application is requested to generate a lower bitrate stream while maintaining high quality video, it can choose alternate video properties that require fewer bits to encode. This choice can still lead to satisfactory and attractive results, depending on the codec that is used for a given session. If MPEG is used, new screen elements are generally more expensive in terms of bitrate than moving elements, due to efficient encoding of motion vectors. Other seemingly simple effects, such as a fade in from black, are expensive (in MPEG2) because there is no compression primitive for recoloring or weighted prediction. However, MPEG4 has more facilities that allow such richer effects at low cost.


To reduce the encoding bitrate of the source audiovisual data to match the established target, the interactive software application 542 may use one or more of the following strategies. If it is generating a graphical user interface (GUI), it may select a less rich UI element. For example, instead of a cross-fade, fade-in from black or other color, the application 542 can generate the final element at once. Or the application can slide an object into view from the screen edge, rather than draw it all at once. Similarly, instead of an image having an alpha channel that smoothly blends into the background around the edges, an application can use a differently authored layout having an image without alpha channels (with smooth edges but no blending to the background). In this latter embodiment, a transparent or translucent screen object is not generated, in favor of an opaque object that requires fewer bits to encode.


An alternate strategy consists of postponing the production of selected screen regions until a later frame, if there are many updates in one frame and far fewer in later frames. For example, a GUI that includes a collection of buttons and a dynamic element (such as a channel preview) might update the dynamic element every other frame. While this reduces the effective frame rate of the dynamic element, it also reduces the actual size of the transmission audiovisual data. Another strategy includes showing only the end result (last frame) of an animation. Yet another strategy is to design a GUI background image to be a solid color rather than a complex pattern, to avoid having to re-render and retransmit the complex pattern when a pop-up window is removed from the screen. As a final strategy, for extremely low target average bitrates, the application could provide the source audiovisual data as a slideshow of images (e.g., JPEG or PNG images) rather than an MPEG stream.


There is another process 740 that may be employed, used when an underrun in the client buffer 512 occurs. The use of the delay cost function described above is meant to avoid an actual client buffer underrun; thus, when one actually occurs, other measures are required. The alternative process 740 generally includes a cycle of four sub-processes that are shown in FIG. 10. The first process 1010 of the cycle involves estimating the size of the client buffer, and is described in more detail below. When the client buffer is estimated to be empty, the second process 1020 transitions from a ‘streaming’ state to a ‘draining’ state. This process includes pausing the encoder, as described above, to allow the network buffer to drain its playback latency. The third process 1030 estimates the size of the network buffer, and is described in more detail below. When the network buffer is estimated to be empty, the system is again ready to accept more playback latency, and the fourth process 1040 switches from the ‘draining’ state to the ‘streaming’ state. This process 1040 includes unpausing the encoder. The system shortly reaches equilibrium, and the cycle begins again.
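
This cycle can be pictured as a two-state machine, sketched below in Python; the encoder is assumed to expose pause and unpause operations, and the two estimator callables stand in for processes 1010 and 1030, whose possible implementations are described next. All names are illustrative.

```python
class LatencyDrainController:
    """Cycle between 'streaming' and 'draining' per FIG. 10. The encoder is
    assumed to expose pause()/unpause(); the two callables stand in for the
    buffer estimators of processes 1010 and 1030."""

    def __init__(self, encoder, client_buffer_empty, network_buffer_empty):
        self.encoder = encoder
        self.client_buffer_empty = client_buffer_empty    # process 1010
        self.network_buffer_empty = network_buffer_empty  # process 1030
        self.state = "streaming"

    def tick(self):
        if self.state == "streaming" and self.client_buffer_empty():
            self.encoder.pause()      # process 1020: let buffered latency drain
            self.state = "draining"
        elif self.state == "draining" and self.network_buffer_empty():
            self.encoder.unpause()    # process 1040: resume streaming
            self.state = "streaming"
```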


There are two ways that the first process 1010 can detect a client buffer underrun, namely with or without direct client notice. Returning to FIGS. 5 and 6, in some embodiments, the client device 510 includes logic to detect the existence or approach of a buffer underrun condition. This condition manifests itself, in a different context, in the familiar “buffering” message shown in some video players. When the condition is detected, the client device 510 signals the encoder 544 (or stitcher/encoder 610) to indicate that corrective measures are urgently required. This notice is shown by a dashed line in FIGS. 5 and 6. When the client device 510 later signals that the buffer underrun has been cured, the encoder 544 unpauses to permit new data to reach the client.


In other embodiments, however, the client device 510 is unable to signal these underrun events, due to the modularity of software and shielding of software internals from developers. For example, often a set top box or a Blu-ray player incorporates a video decoding ASIC or system-on-chip, along with a software development kit (SDK) for the decoder. The SDK contains tools and software libraries to build a working product. Often the manufacturer of the client device can only access the ASIC via an application programming interface (API) that is provided with the SDK. If the API does not allow buffer underrun events to be signaled to other software modules, then this signal also cannot be sent from the client device 510 to the server.


In embodiments in which a client device 510 cannot signal an underrun in buffer 512 directly, the server estimates a buffer underrun event at the client. In one embodiment, the server(s) 540 detect when the server buffer has more data than a given threshold. Based on the particular delay cost function used, the given threshold indicates that a network outage has occurred. If dynamic video is streamed, the threshold is set to a value that permits some server-side latency, because responsiveness to user input is not the overriding concern. However, if a user interface is streamed, the threshold is preferably set to zero. Thus, an interactive application will mostly remain in the ‘draining’ state, with no server-side playback latency. Furthermore, when draining occurs while streaming a user interface, the application 542 is not paused. This ensures minimum playback latency while the user is actively operating the application. Since no moving pictures are displayed, any ‘hiccup’ or stutter that would otherwise be noticed is not relevant here.
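
A minimal sketch of this server-side test follows; the threshold value for dynamic video is an illustrative assumption.

```python
def should_drain(server_buffer_ms: float, streaming_ui: bool,
                 video_threshold_ms: float = 200.0) -> bool:
    """Enter the 'draining' state when queued server-side playout exceeds the
    threshold. For a streamed user interface the threshold is zero, so the
    system mostly stays draining, with no server-side playback latency."""
    threshold = 0.0 if streaming_ui else video_threshold_ms
    return server_buffer_ms > threshold
```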


In one MPEG embodiment that does not rely on a buffer measure, the system uses the Program Clock Reference (PCR) with the Decoding Time Stamp (DTS) or Presentation Time Stamp (PTS). As is known in the art, the PCR is a datum in an MPEG-2 transport stream that provides a timestamp associated with the encoder's model of the presentation. The timestamp is used by the decoder 514 as a reference against which the other two timestamps are judged. The DTS provides a marker relative to the PCR that indicates a correct decoding time of a given audio or video frame, while the PTS indicates the correct playback time.


The relation between PCR and PTS in an ideal situation is shown in FIG. 11. Here, the encoder is encoding data using a playback latency of PTS−PCR. This value is equal to the end-to-end network latency, so the PTS time in the client device occurs just as the frame that should be displayed at that time arrives at the top of the client buffer. In prior art managed networks, the difference PTS−PCR is usually fixed, and is chosen based on a priori knowledge of (controllable) network latencies in the network path. Client devices often use PCR as a reference to determine playout timing in cable networks with fixed delay, but not when streaming over the open Internet. The use of the PTS in a public data network in this embodiment is advantageous, because it permits the detection of client buffer underruns in thin clients without explicit signaling, thus solving a different problem than its customary use for controlling display timing.


In this embodiment, the server detects a client underrun if the DTS or PTS for a frame has passed, but acknowledgement was not received for the last TCP packet of that frame. At the moment the DTS occurs, the client decoder 514 will be attempting to decode a particular frame to place in the buffer 512. Similarly, when the PTS occurs, the client device 510 will be attempting to display that frame. However, if the frame has not yet even been received by the client (as indicated by lack of packet acknowledgment), then the client buffer 512 must be empty or critically low. This condition may be detected by the server(s) 540, which may take corrective action even without receiving a signal from the client device 510 that a buffer underrun has occurred.
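
This detection rule may be sketched as follows; the record fields and the stream-clock argument are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SentFrame:
    pts_seconds: float        # presentation deadline on the stream clock (PTS)
    last_packet_acked: bool   # whether the frame's final TCP packet was ACKed

def client_underrun_suspected(in_flight: list[SentFrame],
                              stream_clock_seconds: float) -> bool:
    """True if any frame is past its presentation time without having been
    fully delivered; the client buffer must then be empty or critically low."""
    return any(f.pts_seconds <= stream_clock_seconds and not f.last_packet_acked
               for f in in_flight)
```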


Corrective measures taken by a system that employs this method need not be limited to just pausing the encoder. When an underrun has occurred, the server may set a larger PTS−PCR difference, to allow the system more time to transfer the frames through the server, network, and client buffers. However, doing so would increase end-to-end playback latency. To reduce this playback latency, the server may receive buffer occupancy reports from the client. If the client buffer level is consistently above a certain threshold, then the server reduces the PTS−PCR difference accordingly.
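
The following Python sketch illustrates such a corrective loop; the step sizes, the headroom threshold, and the use of a single occupancy report (rather than a smoothed series of reports) are simplifying assumptions.

    MIN_PTS_PCR_S = 0.0
    STEP_UP_S = 0.5            # grow latency after an underrun
    STEP_DOWN_S = 0.05         # shrink latency when the client has headroom
    HEADROOM_THRESHOLD_S = 1.0

    def adjust_pts_pcr(current_s: float, underrun: bool,
                       client_buffer_s: float) -> float:
        """Return a new PTS - PCR difference (seconds) for later frames."""
        if underrun:
            # Allow more time for frames to traverse the server, network,
            # and client buffers, at the cost of added playback latency.
            return current_s + STEP_UP_S
        if client_buffer_s > HEADROOM_THRESHOLD_S:
            # Client reports headroom: claw back playback latency.
            return max(MIN_PTS_PCR_S, current_s - STEP_DOWN_S)
        return current_s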


The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art. All such variations and modifications are intended to be within the scope of the present invention as defined in any appended claims.


It should be noted that the logic flow diagrams are used herein to demonstrate various aspects of the invention, and should not be construed to limit the present invention to any particular logic flow or logic implementation. The described logic may be partitioned into different logic blocks (e.g., programs, modules, functions, or subroutines) without changing the overall results or otherwise departing from the true scope of the invention. Oftentimes, logic elements may be added, modified, omitted, performed in a different order, or implemented using different logic constructs (e.g., logic gates, looping primitives, conditional logic, and other logic constructs) without changing the overall results or otherwise departing from the true scope of the invention.


The present invention may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof. Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).


The aforementioned computer program logic and programmable logic may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).

Claims
  • 1. A method of controlling playback latency associated with transmission of source audiovisual data through an unmanaged, buffered data network, the method comprising: at a server system coupled to a client device through the data network: encoding a first portion of the source audiovisual data into a first portion of transmission audiovisual data at a first encoding bitrate; buffering the first portion of transmission audiovisual data; transmitting the first portion of transmission audiovisual data to the client device through the data network; calculating a delay cost function based on a delay associated with the buffering and with network delay, raised to a power, scaled by a scaling factor, and added to a minimum cost to transmit data; altering an encoding bitrate from the first encoding bitrate to a second encoding bitrate based on the calculated delay cost function; encoding a second portion of the source audiovisual data into a second portion of transmission audiovisual data at the second encoding bitrate; buffering the second portion of transmission audiovisual data; and transmitting the second portion of transmission audiovisual data to the client device through the data network.
  • 2. The method of claim 1, wherein encoding the first and second portions of the source audiovisual data is performed according to an MPEG standard.
  • 3. The method of claim 1, wherein the client device comprises at least one of a television set top box, a television, a personal computer, a tablet computer, a smartphone, and an optical disc player.
  • 4. The method of claim 1, wherein the unmanaged, buffered data network includes at least one of a cable data network, a broadcast wireless data network, a point-to-point wireless data network, a satellite network, and a portion of the Internet.
  • 5. The method of claim 1, wherein the unmanaged, buffered data network is coupled to a managed data network that is capable of providing interactive television signals.
  • 6. The method of claim 1, wherein transmitting the first and second portions of transmission audiovisual data includes dividing the data of each video frame into at least one frame portion, the method further comprising: waiting to receive, from the client device, for each frame portion, an acknowledgement that the frame portion has been received by the client device, wherein the delay cost function is calculated using a length of time between the completion of the encoding of a frame portion and receipt from the client device of the acknowledgement of the frame portion.
  • 7. The method of claim 1, wherein transmitting the first and second portions of transmission audiovisual data is performed in accordance with the Transmission Control Protocol (hereinafter “TCP”), further comprising calculating an estimated available bandwidth as estimate=8*mss*cwnd/rtt, where “mss” is a TCP Maximum Segment Size, “cwnd” is a TCP Congestion Window Size, and “rtt” is a round trip time associated with the data network; and establishing the second encoding bitrate as being equal to the value of the ratio of “estimate” to the delay cost function.
  • 8. The method of claim 1, wherein transmitting the first and second portions of transmission audiovisual data is performed in accordance with the Transmission Control Protocol (hereinafter “TCP”), further comprising calculating an estimated available bandwidth as estimate=8*mss*cwnd/rtt, where “mss” is a TCP Maximum Segment Size, “cwnd” is a TCP Congestion Window Size, and “rtt” is a round trip time associated with the data network; and establishing the second encoding bitrate as being equal to the value of the ratio of “estimate” to the delay cost function; wherein the source audiovisual data includes source audio data having an audio bitrate and source video frames having a video encoding bitrate, and the altering includes altering the video encoding bitrate so that the sum of the audio bitrate and the video encoding bitrate is no greater than the second encoding bitrate.
  • 9. The method of claim 1, wherein the source audiovisual data are generated by an interactive software application according to encoding bitrates including the first and second encoding bitrates, the method further comprising notifying the application of the altering.
  • 10. The method of claim 9, wherein, when the second encoding bitrate is lower than a given threshold defined by the application, the application alters the source audiovisual data by not generating a transparent screen object.
  • 11. The method of claim 9, wherein the source audiovisual data include a graphical user interface, and wherein when the second encoding bitrate is lower than a given threshold defined by the application, the application alters the source audiovisual data by postponing the generation of a dynamic screen region in the graphical user interface.
  • 12. The method of claim 1, further comprising pausing the encoder when the value of the delay cost function falls below a given threshold.
  • 13. A non-transitory computer-usable data storage medium on which is stored computer program code for instructing a server system that comprises at least one computing processor to execute a method of controlling playback latency associated with transmission of source audiovisual data through an unmanaged, buffered data network to a client device, the program code comprising: program code for encoding a first portion of the source audiovisual data into a first portion of transmission audiovisual data at a first encoding bitrate; program code for buffering the first portion of transmission audiovisual data; program code for transmitting the first portion of transmission audiovisual data to the client device through the data network; program code for calculating a delay cost function based on a delay associated with the buffering and with network delay, raised to a power, scaled by a scaling factor, and added to a minimum cost to transmit data; program code for altering an encoding bitrate from the first encoding bitrate to a second encoding bitrate based on the calculated delay cost function; program code for encoding a second portion of the source audiovisual data into a second portion of transmission audiovisual data at the second encoding bitrate; program code for buffering the second portion of transmission audiovisual data; and program code for transmitting the second portion of transmission audiovisual data to the client device through the data network.
  • 14. The non-transitory computer-usable data storage medium of claim 13, wherein encoding the first and second portions of the source audiovisual data is performed according to an MPEG standard.
  • 15. The non-transitory computer-usable data storage medium of claim 13, wherein the client device comprises at least one of a television set top box, a television, a personal computer, a tablet computer, a smartphone, and an optical disc player.
  • 16. The non-transitory computer-usable data storage medium of claim 13, wherein the unmanaged, buffered data network includes at least one of a cable data network, a broadcast wireless data network, a point-to-point wireless data network, a satellite network, and a portion of the Internet.
  • 17. The non-transitory computer-usable data storage medium of claim 13, wherein the unmanaged, buffered data network is coupled to a managed data network that is capable of providing interactive television signals.
  • 18. The non-transitory computer-usable data storage medium of claim 13, wherein the program code for transmitting the first and second portions of transmission audiovisual data includes program code for dividing the data of each video frame into at least one frame portion, the medium further comprising: program code for waiting to receive, from the client device, for each frame portion, an acknowledgement that the frame portion has been received by the client device, wherein the delay cost function is calculated using a length of time between the completion of the encoding of a frame portion and receipt from the client device of the acknowledgement of the frame portion.
  • 19. The non-transitory computer-usable data storage medium of claim 13, wherein transmitting the first and second portions of transmission audiovisual data is performed in accordance with the Transmission Control Protocol (hereinafter “TCP”), further comprising program code for calculating an estimated available bandwidth as estimate=8*mss*cwnd/rtt, where “mss” is a TCP Maximum Segment Size, “cwnd” is a TCP Congestion Window Size, and “rtt” is a round trip time associated with the data network; and for establishing the second encoding bitrate as being equal to the value of the ratio of “estimate” to the delay cost function, and wherein altering the encoding of the source audiovisual data includes encoding the source audiovisual data using the established encoding bitrate.
  • 20. The non-transitory computer-usable data storage medium of claim 13, wherein transmitting the first and second portions of transmission audiovisual data is performed in accordance with the Transmission Control Protocol (hereinafter “TCP”), further comprising program code for calculating an estimated available bandwidth as estimate=8*mss*cwnd/rtt, where “mss” is a TCP Maximum Segment Size, “cwnd” is a TCP Congestion Window Size, and “rtt” is a round trip time associated with the data network; and for establishing the second encoding bitrate as being equal to the value of the ratio of “estimate” to the delay cost function; wherein the source audiovisual data includes source audio data having an audio bitrate and source video frames having a video encoding bitrate, and the altering includes altering the video encoding bitrate so that the sum of the audio bitrate and the video encoding bitrate is no greater than the second encoding bitrate.
  • 21. The non-transitory computer-usable data storage medium of claim 13, wherein the source audiovisual data are generated by an interactive software application according to encoding bitrates including the first and second encoding bitrates, the medium further comprising program code for notifying the application of the altering.
  • 22. The non-transitory computer-usable data storage medium of claim 21, wherein, when the second encoding bitrate is lower than a given threshold defined by the application, the application alters the source audiovisual data by not generating a transparent screen object.
  • 23. The non-transitory computer-usable data storage medium of claim 21, wherein the source audiovisual data include a graphical user interface, and wherein when the second encoding bitrate is lower than a given threshold defined by the application, the application alters the source audiovisual data by postponing the generation of a dynamic screen region in the graphical user interface.
  • 24. The non-transitory computer-usable data storage medium of claim 13, further comprising program code for pausing the encoder when the value of the delay cost function falls below a given threshold.
  • 25. A server system for controlling playback latency associated with transmission of source audiovisual data through an unmanaged, buffered data network to a client device, the server system comprising: an interactive application, executing on one or more computing devices, the interactive application being controlled by commands received from the client device to generate source audiovisual data; an encoder for encoding the source audiovisual data into transmission audiovisual data according to an encoding bitrate; a transmission buffer for buffering the transmission audiovisual data; a transmitter for transmitting the transmission audiovisual data through the data network; and a delay cost module for calculating a delay cost function based on a delay associated with the buffering and with network delay, raised to a power, scaled by a scaling factor, and added to a minimum cost to transmit data, the delay cost module calculating the encoding bitrate based on the delay cost function and providing the encoding bitrate to the encoder.
  • 26. The server system of claim 25, wherein the client device comprises at least one of a television set top box, a television, a personal computer, a tablet computer, a smartphone, and an optical disc player.
  • 27. The server system of claim 25, wherein the unmanaged, buffered data network includes at least one of a cable data network, a broadcast wireless data network, a point-to-point wireless data network, a satellite network, and a portion of the Internet.
  • 28. The server system of claim 25, wherein the unmanaged, buffered data network is coupled to a managed data network that is capable of providing interactive television signals.
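
For reference, the following minimal Python sketch illustrates the bandwidth estimate and bitrate rule recited in claims 7, 8, 19, and 20. The function names and example values are assumptions; the TCP state inputs (mss, cwnd, rtt) would in practice come from the server's TCP stack.

    def estimated_bandwidth_bps(mss_bytes: int, cwnd_segments: int,
                                rtt_s: float) -> float:
        """estimate = 8 * mss * cwnd / rtt, in bits per second."""
        return 8 * mss_bytes * cwnd_segments / rtt_s

    def second_encoding_bitrate(estimate_bps: float, delay_cost: float) -> float:
        """Claimed rule: the new bitrate is the ratio of the estimate to
        the value of the delay cost function."""
        return estimate_bps / delay_cost

    # Example: mss = 1460 bytes, cwnd = 10 segments, rtt = 100 ms gives an
    # estimate of ~1.17 Mbit/s; with a delay cost of 1.5, the second
    # encoding bitrate is ~779 kbit/s.
    rate = second_encoding_bitrate(estimated_bandwidth_bps(1460, 10, 0.1), 1.5)
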
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 61/473,085, filed Apr. 7, 2011, the contents of which are herein incorporated by reference in their entirety.

US Referenced Citations (714)
Number Name Date Kind
3889050 Thompson Jun 1975 A
3934079 Barnhart Jan 1976 A
3997718 Ricketts et al. Dec 1976 A
4002843 Rackman Jan 1977 A
4032972 Saylor Jun 1977 A
4077006 Nicholson Feb 1978 A
4081831 Tang et al. Mar 1978 A
4107734 Percy et al. Aug 1978 A
4107735 Frohbach Aug 1978 A
4145720 Weintraub et al. Mar 1979 A
4168400 de Couasnon et al. Sep 1979 A
4186438 Benson et al. Jan 1980 A
4222068 Thompson Sep 1980 A
4245245 Matsumoto et al. Jan 1981 A
4247106 Jeffers et al. Jan 1981 A
4253114 Tang et al. Feb 1981 A
4264924 Freeman Apr 1981 A
4264925 Freeman et al. Apr 1981 A
4290142 Schnee et al. Sep 1981 A
4302771 Gargini Nov 1981 A
4308554 Percy et al. Dec 1981 A
4350980 Ward Sep 1982 A
4367557 Stern et al. Jan 1983 A
4395780 Gohm et al. Jul 1983 A
4408225 Ensinger et al. Oct 1983 A
4450477 Lovett May 1984 A
4454538 Toriumi Jun 1984 A
4466017 Banker Aug 1984 A
4471380 Mobley Sep 1984 A
4475123 Dumbauld et al. Oct 1984 A
4484217 Block et al. Nov 1984 A
4491983 Pinnow et al. Jan 1985 A
4506387 Walter Mar 1985 A
4507680 Freeman Mar 1985 A
4509073 Baran et al. Apr 1985 A
4523228 Banker Jun 1985 A
4533948 McNamara et al. Aug 1985 A
4536791 Campbell et al. Aug 1985 A
4538174 Gargini et al. Aug 1985 A
4538176 Nakajima et al. Aug 1985 A
4553161 Citta Nov 1985 A
4554581 Tentler et al. Nov 1985 A
4555561 Sugimori et al. Nov 1985 A
4562465 Glaab Dec 1985 A
4567517 Mobley Jan 1986 A
4573072 Freeman Feb 1986 A
4591906 Morales-Garza et al. May 1986 A
4602279 Freeman Jul 1986 A
4614970 Clupper et al. Sep 1986 A
4616263 Eichelberger Oct 1986 A
4625235 Watson Nov 1986 A
4627105 Ohashi et al. Dec 1986 A
4633462 Stifle et al. Dec 1986 A
4670904 Rumreich Jun 1987 A
4682360 Frederiksen Jul 1987 A
4695880 Johnson et al. Sep 1987 A
4706121 Young Nov 1987 A
4706285 Rumreich Nov 1987 A
4709418 Fox et al. Nov 1987 A
4710971 Nozaki et al. Dec 1987 A
4718086 Rumreich et al. Jan 1988 A
4732764 Hemingway et al. Mar 1988 A
4734764 Pocock et al. Mar 1988 A
4748689 Mohr May 1988 A
4749992 Fitzemeyer et al. Jun 1988 A
4750036 Martinez Jun 1988 A
4754426 Rast et al. Jun 1988 A
4760442 O'Connell et al. Jul 1988 A
4763317 Lehman et al. Aug 1988 A
4769833 Farleigh et al. Sep 1988 A
4769838 Hasegawa Sep 1988 A
4789863 Bush Dec 1988 A
4792849 McCalley et al. Dec 1988 A
4801190 Imoto Jan 1989 A
4805134 Calo et al. Feb 1989 A
4807031 Broughton et al. Feb 1989 A
4816905 Tweedy et al. Mar 1989 A
4821102 Ichikawa et al. Apr 1989 A
4823386 Dumbauld et al. Apr 1989 A
4827253 Maltz May 1989 A
4827511 Masuko May 1989 A
4829372 McCalley et al. May 1989 A
4829558 Welsh May 1989 A
4847698 Freeman Jul 1989 A
4847699 Freeman Jul 1989 A
4847700 Freeman Jul 1989 A
4848698 Newell et al. Jul 1989 A
4860379 Schoeneberger et al. Aug 1989 A
4864613 Van Cleave Sep 1989 A
4876592 Von Kohorn Oct 1989 A
4889369 Albrecht Dec 1989 A
4890320 Monslow et al. Dec 1989 A
4891694 Way Jan 1990 A
4901367 Nicholson Feb 1990 A
4903126 Kassatly Feb 1990 A
4905094 Pocock et al. Feb 1990 A
4912760 West, Jr. et al. Mar 1990 A
4918516 Freeman Apr 1990 A
4920566 Robbins et al. Apr 1990 A
4922532 Farmer et al. May 1990 A
4924303 Brandon et al. May 1990 A
4924498 Farmer et al. May 1990 A
4937821 Boulton Jun 1990 A
4941040 Pocock et al. Jul 1990 A
4947244 Fenwick et al. Aug 1990 A
4961211 Tsugane et al. Oct 1990 A
4963995 Lang Oct 1990 A
4975771 Kassatly Dec 1990 A
4989245 Bennett Jan 1991 A
4994909 Graves et al. Feb 1991 A
4995078 Monslow et al. Feb 1991 A
5003384 Durden et al. Mar 1991 A
5008934 Endoh Apr 1991 A
5014125 Pocock et al. May 1991 A
5027400 Baji et al. Jun 1991 A
5051720 Kittirutsunetorn Sep 1991 A
5051822 Rhoades Sep 1991 A
5057917 Shalkauser et al. Oct 1991 A
5058160 Banker et al. Oct 1991 A
5060262 Bevins, Jr et al. Oct 1991 A
5077607 Johnson et al. Dec 1991 A
5083800 Lockton Jan 1992 A
5088111 McNamara et al. Feb 1992 A
5093718 Hoarty et al. Mar 1992 A
5109414 Harvey et al. Apr 1992 A
5113496 McCalley et al. May 1992 A
5119188 McCalley et al. Jun 1992 A
5130792 Tindell et al. Jul 1992 A
5132992 Yurt et al. Jul 1992 A
5133009 Rumreich Jul 1992 A
5133079 Ballantyne et al. Jul 1992 A
5136411 Paik et al. Aug 1992 A
5142575 Farmer et al. Aug 1992 A
5144448 Hornbaker, III et al. Sep 1992 A
5155591 Wachob Oct 1992 A
5172413 Bradley et al. Dec 1992 A
5191410 McCalley et al. Mar 1993 A
5195092 Wilson et al. Mar 1993 A
5208665 McCalley et al. May 1993 A
5220420 Hoarty et al. Jun 1993 A
5230019 Yanagimichi et al. Jul 1993 A
5231494 Wachob Jul 1993 A
5236199 Thompson, Jr. Aug 1993 A
5247347 Litteral et al. Sep 1993 A
5253341 Rozmanith et al. Oct 1993 A
5262854 Ng Nov 1993 A
5262860 Fitzpatrick et al. Nov 1993 A
5303388 Kreitman et al. Apr 1994 A
5319455 Hoarty et al. Jun 1994 A
5319707 Wasilewski et al. Jun 1994 A
5321440 Yanagihara et al. Jun 1994 A
5321514 Martinez Jun 1994 A
5351129 Lai Sep 1994 A
5355162 Yazolino et al. Oct 1994 A
5359601 Wasilewski et al. Oct 1994 A
5361091 Hoarty et al. Nov 1994 A
5371532 Gelman et al. Dec 1994 A
5404393 Remillard Apr 1995 A
5408274 Chang et al. Apr 1995 A
5410343 Coddington et al. Apr 1995 A
5410344 Graves et al. Apr 1995 A
5412415 Cook et al. May 1995 A
5412720 Hoarty May 1995 A
5418559 Blahut May 1995 A
5422674 Hooper et al. Jun 1995 A
5422887 Diepstraten et al. Jun 1995 A
5442389 Blahut et al. Aug 1995 A
5442390 Hooper et al. Aug 1995 A
5442700 Snell et al. Aug 1995 A
5446490 Blahut et al. Aug 1995 A
5469283 Vinel et al. Nov 1995 A
5469431 Wendorf et al. Nov 1995 A
5471263 Odaka Nov 1995 A
5481542 Logston et al. Jan 1996 A
5485197 Hoarty Jan 1996 A
5487066 McNamara et al. Jan 1996 A
5493638 Hooper et al. Feb 1996 A
5495283 Cowe Feb 1996 A
5495295 Long Feb 1996 A
5497187 Banker et al. Mar 1996 A
5517250 Hoogenboom et al. May 1996 A
5526034 Hoarty et al. Jun 1996 A
5528281 Grady et al. Jun 1996 A
5537397 Abramson Jul 1996 A
5537404 Bentley et al. Jul 1996 A
5539449 Blahut et al. Jul 1996 A
RE35314 Logg Aug 1996 E
5548340 Bertram Aug 1996 A
5550578 Hoarty et al. Aug 1996 A
5557316 Hoarty et al. Sep 1996 A
5559549 Hendricks et al. Sep 1996 A
5561708 Remillard Oct 1996 A
5570126 Blahut et al. Oct 1996 A
5570363 Holm Oct 1996 A
5579143 Huber Nov 1996 A
5581653 Todd Dec 1996 A
5583927 Ely et al. Dec 1996 A
5587734 Lauder et al. Dec 1996 A
5589885 Ooi Dec 1996 A
5592470 Rudrapatna et al. Jan 1997 A
5594507 Hoarty Jan 1997 A
5594723 Tibi Jan 1997 A
5594938 Engel Jan 1997 A
5596693 Needle et al. Jan 1997 A
5600364 Hendricks et al. Feb 1997 A
5600573 Hendricks et al. Feb 1997 A
5608446 Carr et al. Mar 1997 A
5617145 Huang et al. Apr 1997 A
5621464 Teo et al. Apr 1997 A
5625404 Grady et al. Apr 1997 A
5630757 Gagin et al. May 1997 A
5631693 Wunderlich et al. May 1997 A
5631846 Szurkowski May 1997 A
5632003 Davidson et al. May 1997 A
5649283 Galler et al. Jul 1997 A
5668592 Spaulding, II Sep 1997 A
5668599 Cheney et al. Sep 1997 A
5708767 Yeo et al. Jan 1998 A
5710815 Ming et al. Jan 1998 A
5712906 Grady et al. Jan 1998 A
5740307 Lane Apr 1998 A
5742289 Naylor et al. Apr 1998 A
5748234 Lippincott May 1998 A
5754941 Sharpe et al. May 1998 A
5786527 Tarte Jul 1998 A
5790174 Richard, III et al. Aug 1998 A
5802283 Grady et al. Sep 1998 A
5812665 Hoarty et al. Sep 1998 A
5812786 Seazholtz et al. Sep 1998 A
5815604 Simons et al. Sep 1998 A
5818438 Howe et al. Oct 1998 A
5821945 Yeo et al. Oct 1998 A
5822537 Katseff et al. Oct 1998 A
5828371 Cline et al. Oct 1998 A
5844594 Ferguson Dec 1998 A
5845083 Hamadani et al. Dec 1998 A
5862325 Reed et al. Jan 1999 A
5864820 Case Jan 1999 A
5867208 McLaren Feb 1999 A
5883661 Hoarty et al. Mar 1999 A
5903727 Nielsen May 1999 A
5903816 Broadwin et al. May 1999 A
5905522 Lawler May 1999 A
5907681 Bates et al. May 1999 A
5917822 Lyles et al. Jun 1999 A
5946352 Rowlands et al. Aug 1999 A
5952943 Walsh et al. Sep 1999 A
5959690 Toebes et al. Sep 1999 A
5961603 Kunkel et al. Oct 1999 A
5963203 Goldberg et al. Oct 1999 A
5966163 Lin et al. Oct 1999 A
5978756 Walker et al. Nov 1999 A
5982445 Eyer et al. Nov 1999 A
5990862 Lewis Nov 1999 A
5995146 Rasmussen Nov 1999 A
5995488 Kalkunte et al. Nov 1999 A
5999970 Krisbergh et al. Dec 1999 A
6014416 Shin et al. Jan 2000 A
6021386 Davis et al. Feb 2000 A
6031989 Cordell Feb 2000 A
6034678 Hoarty Mar 2000 A
6049539 Lee et al. Apr 2000 A
6049831 Gardell et al. Apr 2000 A
6052555 Ferguson Apr 2000 A
6055314 Spies et al. Apr 2000 A
6055315 Doyle et al. Apr 2000 A
6064377 Hoarty et al. May 2000 A
6078328 Schumann et al. Jun 2000 A
6084908 Chiang et al. Jul 2000 A
6100883 Hoarty Aug 2000 A
6108625 Kim Aug 2000 A
6131182 Beakes et al. Oct 2000 A
6141645 Chi-Min et al. Oct 2000 A
6141693 Perlman et al. Oct 2000 A
6144698 Poon et al. Nov 2000 A
6167084 Wang et al. Dec 2000 A
6169573 Sampath-Kumar et al. Jan 2001 B1
6177931 Alexander et al. Jan 2001 B1
6182072 Leak et al. Jan 2001 B1
6184878 Alonso et al. Feb 2001 B1
6192081 Chiang et al. Feb 2001 B1
6198822 Doyle et al. Mar 2001 B1
6205582 Hoarty Mar 2001 B1
6226041 Florencio et al. May 2001 B1
6236730 Cowieson et al. May 2001 B1
6243418 Kim Jun 2001 B1
6253238 Lauder et al. Jun 2001 B1
6256047 Isobe et al. Jul 2001 B1
6266369 Wang et al. Jul 2001 B1
6266684 Kraus et al. Jul 2001 B1
6275496 Burns et al. Aug 2001 B1
6292194 Powell, III Sep 2001 B1
6305020 Hoarty et al. Oct 2001 B1
6317151 Ohsuga et al. Nov 2001 B1
6317885 Fries Nov 2001 B1
6349284 Park et al. Feb 2002 B1
6385771 Gordon May 2002 B1
6386980 Nishino et al. May 2002 B1
6389075 Wang et al. May 2002 B2
6389218 Gordon et al. May 2002 B2
6415031 Colligan et al. Jul 2002 B1
6415437 Ludvig et al. Jul 2002 B1
6438140 Jungers et al. Aug 2002 B1
6446037 Fielder et al. Sep 2002 B1
6459427 Mao et al. Oct 2002 B1
6477182 Calderone Nov 2002 B2
6481012 Gordon et al. Nov 2002 B1
6512793 Maeda Jan 2003 B1
6525746 Lau et al. Feb 2003 B1
6536043 Guedalia Mar 2003 B1
6557041 Mallart Apr 2003 B2
6560496 Michener May 2003 B1
6564378 Satterfield et al. May 2003 B1
6578201 LaRocca et al. Jun 2003 B1
6579184 Tanskanen Jun 2003 B1
6584153 Gordon et al. Jun 2003 B1
6588017 Calderone Jul 2003 B1
6598229 Smyth et al. Jul 2003 B2
6604224 Armstrong et al. Aug 2003 B1
6614442 Ouyang et al. Sep 2003 B1
6621870 Gordon et al. Sep 2003 B1
6625574 Taniguchi et al. Sep 2003 B1
6645076 Sugai Nov 2003 B1
6651252 Gordon et al. Nov 2003 B1
6657647 Bright Dec 2003 B1
6675385 Wang Jan 2004 B1
6675387 Boucher et al. Jan 2004 B1
6681326 Son et al. Jan 2004 B2
6681397 Tsai et al. Jan 2004 B1
6684400 Goode et al. Jan 2004 B1
6687663 McGrath et al. Feb 2004 B1
6691208 Dandrea et al. Feb 2004 B2
6697376 Son et al. Feb 2004 B1
6704359 Bayrakeri et al. Mar 2004 B1
6717600 Dutta et al. Apr 2004 B2
6718552 Goode Apr 2004 B1
6721794 Taylor et al. Apr 2004 B2
6721956 Wasilewski Apr 2004 B2
6727929 Bates et al. Apr 2004 B1
6731605 Deshpande May 2004 B1
6732370 Gordon et al. May 2004 B1
6747991 Hemy et al. Jun 2004 B1
6754271 Gordon et al. Jun 2004 B1
6754905 Gordon et al. Jun 2004 B2
6758540 Adolph et al. Jul 2004 B1
6766407 Lisitsa et al. Jul 2004 B1
6771704 Hannah Aug 2004 B1
6785902 Zigmond et al. Aug 2004 B1
6807528 Truman et al. Oct 2004 B1
6810528 Chatani Oct 2004 B1
6817947 Tanskanen Nov 2004 B2
6886178 Mao et al. Apr 2005 B1
6907574 Xu et al. Jun 2005 B2
6931291 Alvarez-Tinoco et al. Aug 2005 B1
6941019 Mitchell et al. Sep 2005 B1
6941574 Broadwin et al. Sep 2005 B1
6947509 Wong Sep 2005 B1
6952221 Holtz et al. Oct 2005 B1
6956899 Hall et al. Oct 2005 B2
7016540 Gong et al. Mar 2006 B1
7030890 Jouet et al. Apr 2006 B1
7031385 Inoue et al. Apr 2006 B1
7050113 Campisano et al. May 2006 B2
7089577 Rakib et al. Aug 2006 B1
7095402 Kunii et al. Aug 2006 B2
7114167 Slemmer et al. Sep 2006 B2
7146615 Hervet et al. Dec 2006 B1
7158676 Rainsford Jan 2007 B1
7200836 Brodersen et al. Apr 2007 B2
7212573 Winger May 2007 B2
7224731 Mehrotra May 2007 B2
7272556 Aguilar et al. Sep 2007 B1
7310619 Baar et al. Dec 2007 B2
7325043 Rosenberg et al. Jan 2008 B1
7346111 Winger et al. Mar 2008 B2
7360230 Paz et al. Apr 2008 B1
7412423 Asano Aug 2008 B1
7412505 Slemmer et al. Aug 2008 B2
7421082 Kamiya et al. Sep 2008 B2
7444306 Varble Oct 2008 B2
7444418 Chou et al. Oct 2008 B2
7500235 Maynard et al. Mar 2009 B2
7508941 O'Toole, Jr. et al. Mar 2009 B1
7512577 Slemmer et al. Mar 2009 B2
7543073 Chou et al. Jun 2009 B2
7596764 Vienneau et al. Sep 2009 B2
7623575 Winger Nov 2009 B2
7669220 Goode Feb 2010 B2
7742609 Yeakel et al. Jun 2010 B2
7743400 Kurauchi Jun 2010 B2
7751572 Villemoes et al. Jul 2010 B2
7757157 Fukuda Jul 2010 B1
7830388 Lu Nov 2010 B1
7840905 Weber et al. Nov 2010 B1
7936819 Craig et al. May 2011 B2
7970263 Asch Jun 2011 B1
7987489 Krzyzanowski et al. Jul 2011 B2
8027353 Damola et al. Sep 2011 B2
8036271 Winger et al. Oct 2011 B2
8046798 Schlack et al. Oct 2011 B1
8074248 Sigmon et al. Dec 2011 B2
8118676 Craig et al. Feb 2012 B2
8136033 Bhargava et al. Mar 2012 B1
8149917 Zhang et al. Apr 2012 B2
8155194 Winger et al. Apr 2012 B2
8155202 Landau Apr 2012 B2
8170107 Winger May 2012 B2
8194862 Herr et al. Jun 2012 B2
8243630 Luo et al. Aug 2012 B2
8270439 Herr et al. Sep 2012 B2
8284842 Craig et al. Oct 2012 B2
8296424 Malloy et al. Oct 2012 B2
8370869 Paek et al. Feb 2013 B2
8411754 Zhang et al. Apr 2013 B2
8442110 Pavlovskaia et al. May 2013 B2
8473996 Gordon et al. Jun 2013 B2
8619867 Craig et al. Dec 2013 B2
8621500 Weaver et al. Dec 2013 B2
20010008845 Kusuda et al. Jul 2001 A1
20010049301 Masuda et al. Dec 2001 A1
20020007491 Schiller et al. Jan 2002 A1
20020013812 Krueger et al. Jan 2002 A1
20020016161 Dellien et al. Feb 2002 A1
20020021353 DeNies Feb 2002 A1
20020026642 Augenbraun et al. Feb 2002 A1
20020027567 Niamir Mar 2002 A1
20020032697 French et al. Mar 2002 A1
20020040482 Sextro et al. Apr 2002 A1
20020047899 Son et al. Apr 2002 A1
20020049975 Thomas et al. Apr 2002 A1
20020054578 Zhang et al. May 2002 A1
20020056083 Istvan May 2002 A1
20020056107 Schlack May 2002 A1
20020056136 Wistendahl et al. May 2002 A1
20020059644 Andrade et al. May 2002 A1
20020062484 De Lange et al. May 2002 A1
20020067766 Sakamoto et al. Jun 2002 A1
20020069267 Thiele Jun 2002 A1
20020072408 Kumagai Jun 2002 A1
20020078171 Schneider Jun 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020083464 Tomsen et al. Jun 2002 A1
20020095689 Novak Jul 2002 A1
20020105531 Niemi Aug 2002 A1
20020108121 Alao et al. Aug 2002 A1
20020131511 Zenoni Sep 2002 A1
20020136298 Anantharamu et al. Sep 2002 A1
20020152318 Menon et al. Oct 2002 A1
20020171765 Waki et al. Nov 2002 A1
20020175931 Holtz et al. Nov 2002 A1
20020178447 Plotnick et al. Nov 2002 A1
20020188628 Cooper et al. Dec 2002 A1
20020191851 Keinan Dec 2002 A1
20020194592 Tsuchida et al. Dec 2002 A1
20020196746 Allen Dec 2002 A1
20030018796 Chou et al. Jan 2003 A1
20030020671 Santoro et al. Jan 2003 A1
20030027517 Callway et al. Feb 2003 A1
20030035486 Kato et al. Feb 2003 A1
20030038893 Rajamaki et al. Feb 2003 A1
20030039398 McIntyre Feb 2003 A1
20030046690 Miller Mar 2003 A1
20030051253 Barone, Jr. Mar 2003 A1
20030058941 Chen et al. Mar 2003 A1
20030061451 Beyda Mar 2003 A1
20030065739 Shnier Apr 2003 A1
20030071792 Safadi Apr 2003 A1
20030072372 Shen et al. Apr 2003 A1
20030076546 Johnson et al. Apr 2003 A1
20030088328 Nishio et al. May 2003 A1
20030088400 Nishio et al. May 2003 A1
20030095790 Joshi May 2003 A1
20030107443 Yamamoto Jun 2003 A1
20030122836 Doyle et al. Jul 2003 A1
20030123664 Pedlow, Jr. et al. Jul 2003 A1
20030126608 Safadi et al. Jul 2003 A1
20030126611 Chernock et al. Jul 2003 A1
20030131349 Kuczynski-Brown Jul 2003 A1
20030135860 Dureau Jul 2003 A1
20030169373 Peters et al. Sep 2003 A1
20030177199 Zenoni Sep 2003 A1
20030188309 Yuen Oct 2003 A1
20030189980 Dvir et al. Oct 2003 A1
20030196174 Pierre Cote et al. Oct 2003 A1
20030208768 Urdang et al. Nov 2003 A1
20030229719 Iwata et al. Dec 2003 A1
20030229900 Reisman Dec 2003 A1
20030231218 Amadio Dec 2003 A1
20040016000 Zhang et al. Jan 2004 A1
20040034873 Zenoni Feb 2004 A1
20040040035 Carlucci et al. Feb 2004 A1
20040078822 Breen et al. Apr 2004 A1
20040088375 Sethi et al. May 2004 A1
20040091171 Bone May 2004 A1
20040111526 Baldwin et al. Jun 2004 A1
20040117827 Karaoguz et al. Jun 2004 A1
20040128686 Boyer et al. Jul 2004 A1
20040133704 Krzyzanowski et al. Jul 2004 A1
20040136698 Mock Jul 2004 A1
20040139158 Datta Jul 2004 A1
20040157662 Tsuchiya Aug 2004 A1
20040163101 Swix et al. Aug 2004 A1
20040184542 Fujimoto Sep 2004 A1
20040193648 Lai et al. Sep 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040261106 Hoffman Dec 2004 A1
20040261114 Addington et al. Dec 2004 A1
20050015259 Thumpudi et al. Jan 2005 A1
20050015816 Christofalo et al. Jan 2005 A1
20050021830 Urzaiz et al. Jan 2005 A1
20050034155 Gordon et al. Feb 2005 A1
20050034162 White et al. Feb 2005 A1
20050044575 Der Kuyl Feb 2005 A1
20050055685 Maynard et al. Mar 2005 A1
20050055721 Zigmond et al. Mar 2005 A1
20050071876 van Beek Mar 2005 A1
20050076134 Bialik et al. Apr 2005 A1
20050089091 Kim et al. Apr 2005 A1
20050091690 Delpuch et al. Apr 2005 A1
20050091695 Paz et al. Apr 2005 A1
20050105608 Coleman et al. May 2005 A1
20050114906 Hoarty et al. May 2005 A1
20050132305 Guichard et al. Jun 2005 A1
20050135385 Jenkins et al. Jun 2005 A1
20050141613 Kelly et al. Jun 2005 A1
20050149988 Grannan Jul 2005 A1
20050160088 Scallan et al. Jul 2005 A1
20050166257 Feinleib et al. Jul 2005 A1
20050180502 Puri Aug 2005 A1
20050198682 Wright Sep 2005 A1
20050213586 Cyganski et al. Sep 2005 A1
20050216933 Black Sep 2005 A1
20050216940 Black Sep 2005 A1
20050226426 Oomen et al. Oct 2005 A1
20050273832 Zigmond et al. Dec 2005 A1
20050283741 Balabanovic et al. Dec 2005 A1
20060001737 Dawson et al. Jan 2006 A1
20060020960 Relan et al. Jan 2006 A1
20060020994 Crane et al. Jan 2006 A1
20060031906 Kaneda Feb 2006 A1
20060039481 Shen et al. Feb 2006 A1
20060041910 Hatanaka et al. Feb 2006 A1
20060088105 Shen et al. Apr 2006 A1
20060095944 Demircin et al. May 2006 A1
20060112338 Joung et al. May 2006 A1
20060117340 Pavlovskaia et al. Jun 2006 A1
20060143678 Chou et al. Jun 2006 A1
20060161538 Kiilerich Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060174026 Robinson et al. Aug 2006 A1
20060174289 Theberge Aug 2006 A1
20060195884 van Zoest et al. Aug 2006 A1
20060212203 Furuno Sep 2006 A1
20060218601 Michel Sep 2006 A1
20060230428 Craig et al. Oct 2006 A1
20060242570 Croft et al. Oct 2006 A1
20060256865 Westerman Nov 2006 A1
20060269086 Page et al. Nov 2006 A1
20060271985 Hoffman et al. Nov 2006 A1
20060285586 Westerman Dec 2006 A1
20060285819 Kelly et al. Dec 2006 A1
20070005783 Saint-Hillaire et al. Jan 2007 A1
20070009035 Craig et al. Jan 2007 A1
20070009036 Craig et al. Jan 2007 A1
20070009042 Craig et al. Jan 2007 A1
20070025639 Zhou et al. Feb 2007 A1
20070033528 Merrit et al. Feb 2007 A1
20070033631 Gordon et al. Feb 2007 A1
20070074251 Oguz et al. Mar 2007 A1
20070079325 de Heer Apr 2007 A1
20070115941 Patel et al. May 2007 A1
20070124282 Wittkotter May 2007 A1
20070124795 McKissick et al. May 2007 A1
20070130446 Minakami Jun 2007 A1
20070130592 Haeusel Jun 2007 A1
20070152984 Ording et al. Jul 2007 A1
20070162953 Bolliger et al. Jul 2007 A1
20070172061 Pinder Jul 2007 A1
20070174790 Jing et al. Jul 2007 A1
20070178243 Houck et al. Aug 2007 A1
20070237232 Chang et al. Oct 2007 A1
20070300280 Turner et al. Dec 2007 A1
20080046928 Poling et al. Feb 2008 A1
20080052742 Kopf et al. Feb 2008 A1
20080066135 Brodersen et al. Mar 2008 A1
20080084503 Kondo Apr 2008 A1
20080086688 Chandratillake et al. Apr 2008 A1
20080094368 Ording et al. Apr 2008 A1
20080098450 Wu et al. Apr 2008 A1
20080104520 Swenson et al. May 2008 A1
20080127255 Ress et al. May 2008 A1
20080154583 Goto et al. Jun 2008 A1
20080163059 Craner Jul 2008 A1
20080163286 Rudolph et al. Jul 2008 A1
20080170619 Landau Jul 2008 A1
20080170622 Gordon et al. Jul 2008 A1
20080178125 Elsbree et al. Jul 2008 A1
20080178243 Dong et al. Jul 2008 A1
20080178249 Gordon et al. Jul 2008 A1
20080181221 Kampmann et al. Jul 2008 A1
20080189740 Carpenter et al. Aug 2008 A1
20080195573 Onoda et al. Aug 2008 A1
20080201736 Gordon et al. Aug 2008 A1
20080212942 Gordon et al. Sep 2008 A1
20080232452 Sullivan et al. Sep 2008 A1
20080243918 Holtman Oct 2008 A1
20080243998 Oh et al. Oct 2008 A1
20080246759 Summers Oct 2008 A1
20080253440 Srinivasan et al. Oct 2008 A1
20080271080 Gossweiler et al. Oct 2008 A1
20090003446 Wu et al. Jan 2009 A1
20090003705 Zou et al. Jan 2009 A1
20090007199 La Joie Jan 2009 A1
20090025027 Craner Jan 2009 A1
20090031341 Schlack et al. Jan 2009 A1
20090041118 Pavlovskaia et al. Feb 2009 A1
20090083781 Yang et al. Mar 2009 A1
20090083813 Dolce et al. Mar 2009 A1
20090083824 McCarthy et al. Mar 2009 A1
20090089188 Ku et al. Apr 2009 A1
20090094113 Berry et al. Apr 2009 A1
20090094646 Walter et al. Apr 2009 A1
20090100465 Kulakowski Apr 2009 A1
20090100489 Strothmann Apr 2009 A1
20090106269 Zuckerman et al. Apr 2009 A1
20090106386 Zuckerman et al. Apr 2009 A1
20090106392 Zuckerman et al. Apr 2009 A1
20090106425 Zuckerman et al. Apr 2009 A1
20090106441 Zuckerman et al. Apr 2009 A1
20090106451 Zuckerman et al. Apr 2009 A1
20090106511 Zuckerman et al. Apr 2009 A1
20090113009 Slemmer et al. Apr 2009 A1
20090132942 Santoro et al. May 2009 A1
20090138966 Krause et al. May 2009 A1
20090144781 Glaser et al. Jun 2009 A1
20090146779 Kumar et al. Jun 2009 A1
20090157868 Chaudhry Jun 2009 A1
20090158369 Van Vleck et al. Jun 2009 A1
20090160694 Di Flora Jun 2009 A1
20090172757 Aldrey et al. Jul 2009 A1
20090178098 Westbrook et al. Jul 2009 A1
20090183219 Maynard et al. Jul 2009 A1
20090189890 Corbett et al. Jul 2009 A1
20090193452 Russ et al. Jul 2009 A1
20090196346 Zhang et al. Aug 2009 A1
20090204920 Beverley et al. Aug 2009 A1
20090210899 Lawrence-Apfelbaum et al. Aug 2009 A1
20090225790 Shay et al. Sep 2009 A1
20090228620 Thomas et al. Sep 2009 A1
20090228922 Haj-Khalil et al. Sep 2009 A1
20090233593 Ergen et al. Sep 2009 A1
20090251478 Maillot et al. Oct 2009 A1
20090254960 Yarom et al. Oct 2009 A1
20090265617 Randall et al. Oct 2009 A1
20090271512 Jorgensen Oct 2009 A1
20090271818 Schlack Oct 2009 A1
20090298535 Klein et al. Dec 2009 A1
20090313674 Ludvig et al. Dec 2009 A1
20090328109 Pavlovskaia et al. Dec 2009 A1
20100033638 O'Donnell et al. Feb 2010 A1
20100035682 Gentile et al. Feb 2010 A1
20100058404 Rouse Mar 2010 A1
20100067571 White et al. Mar 2010 A1
20100077441 Thomas et al. Mar 2010 A1
20100104021 Schmit Apr 2010 A1
20100115573 Srinivasan et al. May 2010 A1
20100118972 Zhang et al. May 2010 A1
20100131996 Gauld May 2010 A1
20100146139 Brockmann Jun 2010 A1
20100158109 Dahlby et al. Jun 2010 A1
20100161825 Ronca et al. Jun 2010 A1
20100166071 Wu et al. Jul 2010 A1
20100174776 Westberg et al. Jul 2010 A1
20100175080 Yuen et al. Jul 2010 A1
20100180307 Hayes et al. Jul 2010 A1
20100211983 Chou Aug 2010 A1
20100226428 Thevathasan et al. Sep 2010 A1
20100235861 Schein et al. Sep 2010 A1
20100242073 Gordon et al. Sep 2010 A1
20100251167 Deluca et al. Sep 2010 A1
20100254370 Jana et al. Oct 2010 A1
20100265344 Velarde et al. Oct 2010 A1
20100325655 Perez Dec 2010 A1
20110002376 Ahmed et al. Jan 2011 A1
20110002470 Purnhagen et al. Jan 2011 A1
20110023069 Dowens Jan 2011 A1
20110035227 Lee et al. Feb 2011 A1
20110067061 Karaoguz et al. Mar 2011 A1
20110096828 Chen et al. Apr 2011 A1
20110107375 Stahl et al. May 2011 A1
20110110642 Salomons et al. May 2011 A1
20110150421 Sasaki et al. Jun 2011 A1
20110153776 Opala et al. Jun 2011 A1
20110167468 Lee et al. Jul 2011 A1
20110191684 Greenberg Aug 2011 A1
20110243024 Osterling et al. Oct 2011 A1
20110258584 Williams et al. Oct 2011 A1
20110289536 Poder et al. Nov 2011 A1
20110317982 Xu et al. Dec 2011 A1
20120023126 Jin et al. Jan 2012 A1
20120030212 Koopmans et al. Feb 2012 A1
20120137337 Sigmon, Jr. et al. May 2012 A1
20120204217 Regis et al. Aug 2012 A1
20120209815 Carson et al. Aug 2012 A1
20120224641 Haberman et al. Sep 2012 A1
20120257671 Brockmann et al. Oct 2012 A1
20130003826 Craig et al. Jan 2013 A1
20130071095 Chauvier et al. Mar 2013 A1
20130086610 Brockmann Apr 2013 A1
20130179787 Brockmann et al. Jul 2013 A1
20130198776 Brockmann Aug 2013 A1
20130254308 Rose et al. Sep 2013 A1
20130272394 Brockmann et al. Oct 2013 A1
20140033036 Gaur et al. Jan 2014 A1
Foreign Referenced Citations (311)
Number Date Country
191599 Apr 2000 AT
198969 Feb 2001 AT
250313 Oct 2003 AT
472152 Jul 2010 AT
475266 Aug 2010 AT
550086 Feb 1986 AU
199060189 Nov 1990 AU
620735 Feb 1992 AU
199184838 Apr 1992 AU
643828 Nov 1993 AU
2004253127 Jan 2005 AU
2005278122 Mar 2006 AU
2010339376 Aug 2012 AU
2011249132 Nov 2012 AU
2011258972 Nov 2012 AU
2011315950 May 2013 AU
682776 Mar 1964 CA
2052477 Mar 1992 CA
1302554 Jun 1992 CA
2163500 May 1996 CA
2231391 May 1997 CA
2273365 Jun 1998 CA
2313133 Jun 1999 CA
2313161 Jun 1999 CA
2528499 Jan 2005 CA
2569407 Mar 2006 CA
2728797 Apr 2010 CA
2787913 Jul 2011 CA
2798541 Dec 2011 CA
2814070 Apr 2012 CA
1507751 Jun 2004 CN
1969555 May 2007 CN
101180109 May 2008 CN
101627424 Jan 2010 CN
101637023 Jan 2010 CN
102007773 Apr 2011 CN
4408355 Oct 1994 DE
69516139 Dec 2000 DE
69132518 Sep 2001 DE
69333207 Jul 2004 DE
98961961 Aug 2007 DE
602008001596 Aug 2010 DE
602006015650 D1 Sep 2010 DE
0093549 Nov 1983 EP
0128771 Dec 1984 EP
0419137 Mar 1991 EP
0449633 Oct 1991 EP
0477786 Apr 1992 EP
0523618 Jan 1993 EP
0534139 Mar 1993 EP
0568453 Nov 1993 EP
0588653 Mar 1994 EP
0594350 Apr 1994 EP
0612916 Aug 1994 EP
0624039 Nov 1994 EP
0638219 Feb 1995 EP
0643523 Mar 1995 EP
0661888 Jul 1995 EP
0714684 Jun 1996 EP
0746158 Dec 1996 EP
0761066 Mar 1997 EP
0789972 Aug 1997 EP
0830786 Mar 1998 EP
0861560 Sep 1998 EP
0933966 Aug 1999 EP
0933966 Aug 1999 EP
1026872 Aug 2000 EP
1038397 Sep 2000 EP
1038399 Sep 2000 EP
1038400 Sep 2000 EP
1038401 Sep 2000 EP
1051039 Nov 2000 EP
1055331 Nov 2000 EP
1120968 Aug 2001 EP
1345446 Sep 2003 EP
1422929 May 2004 EP
1428562 Jun 2004 EP
1521476 Apr 2005 EP
1645115 Apr 2006 EP
1725044 Nov 2006 EP
1767708 Mar 2007 EP
1771003 Apr 2007 EP
1772014 Apr 2007 EP
1877150 Jan 2008 EP
1887148 Feb 2008 EP
1900200 Mar 2008 EP
1902583 Mar 2008 EP
1908293 Apr 2008 EP
1911288 Apr 2008 EP
1918802 May 2008 EP
2100296 Sep 2009 EP
2105019 Sep 2009 EP
2106665 Oct 2009 EP
2116051 Nov 2009 EP
2124440 Nov 2009 EP
2248341 Nov 2010 EP
2269377 Jan 2011 EP
2271098 Jan 2011 EP
2304953 Apr 2011 EP
2364019 Sep 2011 EP
2384001 Nov 2011 EP
2409493 Jan 2012 EP
2477414 Jul 2012 EP
2487919 Aug 2012 EP
2520090 Nov 2012 EP
2567545 Mar 2013 EP
2577437 Apr 2013 EP
2628306 Aug 2013 EP
2632164 Aug 2013 EP
2632165 Aug 2013 EP
2695388 Feb 2014 EP
2207635 Jun 2004 ES
8211463 Jun 1982 FR
2529739 Jan 1984 FR
2891098 Mar 2007 FR
2207838 Feb 1989 GB
2248955 Apr 1992 GB
2290204 Dec 1995 GB
2365649 Feb 2002 GB
2378345 Feb 2003 GB
1134855 Oct 2010 HK
1116323 Dec 2010 HK
19913397 Apr 1992 IE
99586 Feb 1998 IL
215133 Dec 2011 IL
222829 Dec 2012 IL
222830 Dec 2012 IL
225525 Jun 2013 IL
180215 Jan 1998 IN
200701744 Nov 2007 IN
200900856 May 2009 IN
200800214 Jun 2009 IN
3759 Mar 1992 IS
60-054324 Mar 1985 JP
63-033988 Feb 1988 JP
63-263985 Oct 1988 JP
2001-241993 Sep 1989 JP
04-373286 Dec 1992 JP
06-054324 Feb 1994 JP
7015720 Jan 1995 JP
7-160292 Jun 1995 JP
8-265704 Oct 1996 JP
10-228437 Aug 1998 JP
10-510131 Sep 1998 JP
11-134273 May 1999 JP
H11-261966 Sep 1999 JP
2000-152234 May 2000 JP
2001-203995 Jul 2001 JP
2001-245271 Sep 2001 JP
2001-245291 Sep 2001 JP
2001-514471 Sep 2001 JP
2002-016920 Jan 2002 JP
2002-057952 Feb 2002 JP
2002-112220 Apr 2002 JP
2002-141810 May 2002 JP
2002-208027 Jul 2002 JP
2002-319991 Oct 2002 JP
2003-506763 Feb 2003 JP
2003-087785 Mar 2003 JP
2003-529234 Sep 2003 JP
2004-501445 Jan 2004 JP
2004-056777 Feb 2004 JP
2004-110850 Apr 2004 JP
2004-112441 Apr 2004 JP
2004-135932 May 2004 JP
2004-264812 Sep 2004 JP
2004-533736 Nov 2004 JP
2004-536381 Dec 2004 JP
2004-536681 Dec 2004 JP
2005-033741 Feb 2005 JP
2005-084987 Mar 2005 JP
2005-095599 Mar 2005 JP
8-095599 Apr 2005 JP
2005-156996 Jun 2005 JP
2005-519382 Jun 2005 JP
2005-523479 Aug 2005 JP
2005-309752 Nov 2005 JP
2006-067280 Mar 2006 JP
2006-512838 Apr 2006 JP
2007-522727 Aug 2007 JP
11-88419 Sep 2007 JP
2008-523880 Jul 2008 JP
2008-535622 Sep 2008 JP
04252727 Apr 2009 JP
2009-543386 Dec 2009 JP
2011-108155 Jun 2011 JP
2012-080593 Apr 2012 JP
04996603 Aug 2012 JP
05121711 Jan 2013 JP
53-004612 Oct 2013 JP
05331008 Oct 2013 JP
05405819 Feb 2014 JP
2006067924 Jun 2006 KR
2007038111 Apr 2007 KR
20080001298 Jan 2008 KR
2008024189 Mar 2008 KR
2010111739 Oct 2010 KR
2010120187 Nov 2010 KR
2010127240 Dec 2010 KR
2011030640 Mar 2011 KR
2011129477 Dec 2011 KR
20120112683 Oct 2012 KR
2013061149 Jun 2013 KR
2013113925 Oct 2013 KR
1333200 Nov 2013 KR
2008045154 Nov 2013 KR
2013138263 Dec 2013 KR
1032594 Apr 2008 NL
1033929 Apr 2008 NL
2004670 Nov 2011 NL
2004780 Jan 2012 NL
239969 Dec 1994 NZ
99110 Dec 1993 PT
WO 8202303 Jul 1982 WO
WO 8908967 Sep 1989 WO
WO 9013972 Nov 1990 WO
WO 9322877 Nov 1993 WO
WO 9416534 Jul 1994 WO
WO 9419910 Sep 1994 WO
WO 9421079 Sep 1994 WO
WO 9515658 Jun 1995 WO
WO 9532587 Nov 1995 WO
WO 9533342 Dec 1995 WO
WO 9614712 May 1996 WO
WO 9627843 Sep 1996 WO
WO 9631826 Oct 1996 WO
WO 9637074 Nov 1996 WO
WO 9642168 Dec 1996 WO
WO 9716925 May 1997 WO
WO 9733434 Sep 1997 WO
WO 9739583 Oct 1997 WO
WO 9826595 Jun 1998 WO
WO 9900735 Jan 1999 WO
WO 9904568 Jan 1999 WO
WO 9900735 Jan 1999 WO
WO 9930496 Jun 1999 WO
WO 9930497 Jun 1999 WO
WO 9930500 Jun 1999 WO
WO 9930501 Jun 1999 WO
WO 9935840 Jul 1999 WO
WO 9941911 Aug 1999 WO
WO 9956468 Nov 1999 WO
WO 9965232 Dec 1999 WO
WO 9965243 Dec 1999 WO
WO 9966732 Dec 1999 WO
WO 0002303 Jan 2000 WO
WO 0007372 Feb 2000 WO
WO 0008967 Feb 2000 WO
WO 0019910 Apr 2000 WO
WO 0038430 Jun 2000 WO
WO 0041397 Jul 2000 WO
WO 0139494 May 2001 WO
WO 0141447 Jun 2001 WO
WO 0182614 Nov 2001 WO
WO 0192973 Dec 2001 WO
WO 02089487 Jul 2002 WO
WO 02076097 Sep 2002 WO
WO 02076099 Sep 2002 WO
WO 03026232 Mar 2003 WO
WO 03026275 Mar 2003 WO
WO 03047710 Jun 2003 WO
WO 03065683 Aug 2003 WO
WO 03071727 Aug 2003 WO
WO 03091832 Nov 2003 WO
WO 2004012437 Feb 2004 WO
WO 2004018060 Mar 2004 WO
WO 2004073310 Aug 2004 WO
WO 2005002215 Jan 2005 WO
WO 2005041122 May 2005 WO
WO 2005053301 Jun 2005 WO
WO2005076575 Aug 2005 WO
WO 2005120067 Dec 2005 WO
WO 2006014362 Feb 2006 WO
WO 2006022881 Mar 2006 WO
WO 2006053305 May 2006 WO
WO 2006067697 Jun 2006 WO
WO 2006081634 Aug 2006 WO
WO 2006105480 Oct 2006 WO
WO 2006110268 Oct 2006 WO
WO 2007001797 Jan 2007 WO
WO 2007008319 Jan 2007 WO
WO 2007008355 Jan 2007 WO
WO 2007008356 Jan 2007 WO
WO 2007008357 Jan 2007 WO
WO 2007008358 Jan 2007 WO
WO 2007018722 Feb 2007 WO
WO 2007018726 Feb 2007 WO
WO 2008044916 Apr 2008 WO
WO 2008086170 Jul 2008 WO
WO 2008088741 Jul 2008 WO
WO 2008088752 Jul 2008 WO
WO 2008088772 Jul 2008 WO
WO 2008100205 Aug 2008 WO
WO 2009038596 Mar 2009 WO
WO 2009099893 Aug 2009 WO
WO 2009099895 Aug 2009 WO
WO 2009105465 Aug 2009 WO
WO 2009110897 Sep 2009 WO
WO 2009114247 Sep 2009 WO
WO 2009155214 Dec 2009 WO
WO 2010044926 Apr 2010 WO
WO 2010054136 May 2010 WO
WO 2010107954 Sep 2010 WO
WO 2011014336 Sep 2010 WO
WO 2011082364 Jul 2011 WO
WO 2011139155 Nov 2011 WO
WO 2011149357 Dec 2011 WO
WO 2012051528 Apr 2012 WO
WO 2012138660 Oct 2012 WO
WO 2013106390 Jul 2013 WO
WO 2013155310 Jul 2013 WO
Non-Patent Literature Citations (259)
Entry
Handley et al. “TCP Congestion Window Validation” RFC 2861. (Jun. 2000) Network Working Group.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2012/032010, Oct. 10, 2012, 6 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2012/032010, Oct. 17, 2013, 4 pgs.
ActiveVideo, http://www.activevideo.com/, as printed out in year 2012, 1 pg.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2013/020769, Jul. 24, 2014, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/030773, Jul. 25, 2014, 8 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/041416, Aug. 27, 2014, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 13168509.1, 10 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 13168376-5, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 12767642-7, 12 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP10841764.3, Jun. 6, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6, Jun. 26, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6-2223, May 10, 2011, 7 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP09713486.0, Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, Apr. 4, 2013, 5 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2010339376, Apr. 30, 2014, 4 pgs.
ActiveVideo Networks Inc., Examination Report, App. No. EP11749946.7, Oct. 8, 2013, 6 pgs.
ActiveVideo Networks Inc., Summons to attend oral-proceeding, Application No. EP09820936-4, Aug. 19, 2014, 4 pgs.
ActiveVideo Networks Inc., International Searching Authority, International Search Report—International application No. PCT/US2010/027724, dated Oct. 28, 2010, together with the Written Opinion of the International Searching Authority, 7 pages.
Adams, Jerry, NTZ Nachrichtechnische Zeitschrift. vol. 40, No. 7, Jul. 1987, Berlin DE pp. 534-536; Jerry Adams: ‘Glasfasernetz Für Breitbanddienste in London’, 5 pgs. No English Translation Found.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, Jan. 31, 2014, 10 pgs.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, Apr. 8, 2010, 5 pgs.
Avinity Systems B.V., International Preliminary Report on Patentability, PCT/NL2007/000245, Mar. 31, 2009, 12 pgs.
Avinity Systems B.V., International Search Report and Written Opinion, PCT/NL2007/000245, Feb. 19, 2009, 18 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, Sep. 3, 2013, 4 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, Sep. 25, 2012, 6 pgs.
Bird et al., “Customer Access to Broadband Services,” ISSLS 86—The International Symposium on Subrscriber Loops and Services Sep. 29, 1986, Tokyo,JP 6 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, Jul. 16, 2014, 20 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, Mar. 10, 2014, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, Dec. 23, 2013, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/438,617, May 12, 2014, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, Mar. 7, 2014, 21 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, Jun. 5, 2013, 18 pgs.
Chang, Shih-Fu, et al., “Manipulation and Compositing of MC-DOT Compressed Video,” IEEE Journal on Selected Areas of Communications, Jan. 1995, vol. 13, No. 1, 11 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, Jun. 5, 2014, 18 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, Feb. 4, 2013, 18 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, Aug. 16, 2012, 18 pgs.
Dukes, Stephen D., “Photonics for cable television system design, Migrating to regional hubs and passive networks,” Communications Engineering and Design, May 1992, 4 pgs.
Ellis, et al., “INDAX: An Operation Interactive Cabletext System”, IEEE Journal on Selected Areas in Communications, vol. sac-1, No. 2, Feb. 1983, pp. 285-294.
European Patent Office, Supplementary European Search Report, Application No. EP 09 70 8211, dated Jan. 5, 2011, 6 pgs.
Frezza, W., “The Broadband Solution—Metropolitan CATV Networks,” Proceedings of Videotex '84, Apr. 1984, 15 pgs.
Gecsei, J., “Topology of Videotex Networks,” The Architecture of Videotex Systems, Chapter 6, 1983 by Prentice-Hall, Inc.
Gobl, et al., “ARIDEM—a multi-service broadband access demonstrator,” Ericsson Review No. 3, 1996, 7 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, Mar. 20, 2014, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, Mar. 30, 2012, 16 pgs.
Gordon, Final Office Action, U.S. U.S. Appl. No. 12/035,236, Jun. 11, 2014, 14 pgs.
Gordon, Final Office Action, U.S. U.S. Appl. No. 12/035,236, Jun. 22, 2013, 7 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, Sep. 20, 2011, 8 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, Sep. 21, 2012, 9 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,697, Mar. 6, 2012, 48 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, Mar. 13, 2013, 9 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, Mar. 22, 2011, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, Mar. 28, 2012, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, Dec. 16, 2013, 11 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, Aug. 1, 2013, 43 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, Aug. 4, 2011, 39 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, Oct. 11, 2011, 16 pgs.
Henry et al. “Multidimensional Icons” ACM Transactions on Graphics, vol. 9, No. 1 Jan. 1990, 5 pgs.
Insight advertisement, “In two years this is going to be the most watched program on TV” on touch VCR programming, published not later than 2000, 10 pgs.
Isensee et al., “Focus Highlight for World Wide Web Frames,” Nov. 1, 1997, IBM Technical Disclosure Bulletin, vol. 40, No. 11, pp. 89-90.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000400, Jul. 14, 2009, 10 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000450, Jan. 26, 2009, 9 pgs.
Kato, Y., et al., “A Coding Control algorithm for Motion Picture Coding Accomplishing Optimal Assignment of Coding Distortion to Time and Space Domains,” Electronics and Communications in Japan, Part 1, vol. 72, No. 9, 1989, 11 pgs.
Koenen, Rob,“MPEG-4 Overview—Overview of the MPEG-4 Standard” Internet Citation, Mar. 2001, http://mpeg.telecomitalialab.com/standards/mpeg-4/mpeg-4.htm, May 9, 2005, 74 pgs.
Konaka, M. et al., “Development of Sleeper Cabin Cold Storage Type Cooling System,” SAE International, The Engineering Society for Advancing Mobility Land Sea Air and Space, SAE 2000 World Congress, Detroit, Michigan, Mar. 6-9,2000, 7 pgs.
Le Gall, Didier, “MPEG: A Video Compression Standard for Multimedia Applications”, Communication of the ACM, vol. 34, No. 4, Apr. 1991, New York, NY, 13 pgs.
Langenberg, E, et al., “Integrating Entertainment and Voice on the Cable Network,” SCTE , Conference on Emerging Technologies, Jan. 6-7, 1993, New Orleans, Louisiana, 9 pgs.
Large, D., “Tapped Fiber vs. Fiber-Reinforced Coaxial CATV Systems”, IEEE LCS Magazine, Feb. 1990, 7 pgs.
Mesiya, M.F, “A Passive Optical/Coax Hybrid Network Architecture for Delivery of CATV, Telephony and Data Services,” 1993 NCTA Technical Papers, 7 pgs.
“MSDL Specification Version 1.1” International Organisation for Standardisation Organisation Internationale EE Normalisation, ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Autdio, N1246, MPEG96/Mar. 1996, 101 pgs.
Noguchi, Yoshihiro, et al., “MPEG Video Compositing in the Compressed Domain,” IEEE International Symposium on Circuits and Systems, vol. 2, May 1, 1996, 4 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, Sep. 2, 2014, 8 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, May 14, 2014, 8 pgs.
Regis, Final Office Action, U.S. Appl. No. 13/273,803, Oct. 11, 2013, 23 pgs.
Regis, Office Action, U.S. Appl. No. 13/273,803, Mar. 27, 2013, 32 pgs.
Richardson, Ian E.G., "H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia," John Wiley & Sons, US, 2003, ISBN: 0-470-84837-5, pp. 103-105, 149-152, and 164.
Rose, K., “Design of a Switched Broad-Band Communications Network for Interactive Services,” IEEE Transactions on Communications, vol. com-23, No. 1, Jan. 1975, 7 pgs.
Saadawi, Tarek N., "Distributed Switching for Data Transmission over Two-Way CATV," IEEE Journal on Selected Areas in Communications, vol. SAC-3, No. 2, Mar. 1985, 7 pgs.
Schrock, “Proposal for a Hub Controlled Cable Television System Using Optical Fiber,” IEEE Transactions on Cable Television, vol. CATV-4, No. 2, Apr. 1979, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, Sep. 22, 2014, 5 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, Feb. 27, 2014, 14 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 13/311,203, Sep. 13, 2013, 20 pgs.
Sigmon, Office Action, U.S. Appl. No. 13/311,203, May 10, 2013, 21 pgs.
Smith, Brian C., et al., “Algorithms for Manipulating Compressed Images,” IEEE Computer Graphics and Applications, vol. 13, No. 5, Sep. 1, 1993, 9 pgs.
Smith, J. et al., "Transcoding Internet Content for Heterogeneous Client Devices," Proceedings of the 1998 IEEE International Symposium on Circuits and Systems (ISCAS '98), Monterey, CA, USA, May 31-Jun. 3, 1998, New York, NY, USA, IEEE, US, May 31, 1998, 4 pgs.
Stoll, G. et al., "GMF4iTV: Neue Wege zur Interaktivität mit bewegten Objekten beim digitalen Fernsehen" [GMF4iTV: New Approaches to Interactivity with Moving Objects in Digital Television], FKT Fernseh- und Kinotechnik, Fachverlag Schiele & Schön GmbH, Berlin, DE, vol. 60, No. 4, Jan. 1, 2006, ISSN: 1430-9947, 9 pgs. In German; no English translation found.
Tamitani et al., “An Encoder/Decoder Chip Set for the MPEG Video Standard,” 1992 IEEE International Conference on Acoustics, vol. 5, Mar. 1992, San Francisco, CA, 4 pgs.
Terry, Jack, “Alternative Technologies and Delivery Systems for Broadband ISDN Access”, IEEE Communications Magazine, Aug. 1992, 7 pgs.
Thompson, Jack, “DTMF-TV, The Most Economical Approach to Interactive TV,” GNOSTECH Incorporated, NCF'95 Session T-38-C, 8 pgs.
Thompson, John W. Jr., "The Awakening 3.0: PCs, TSBs, or DTMF-TV—Which Telecomputer Architecture is Right for the Next Generation's Public Network?," GNOSTECH Incorporated, 1995, The National Academy of Sciences, downloaded from The Unpredictable Certainty: White Papers, http://www.nap.edu/catalog/6062.html, pp. 546-552.
Tobagi, Fouad A., “Multiaccess Protocols in Packet Communication Systems,” IEEE Transactions on Communications, vol. Com-28, No. 4, Apr. 1980, 21 pgs.
Toms, N., “An Integrated Network Using Fiber Optics (Info) for the Distribution of Video, Data, and Telephone in Rural Areas,” IEEE Transactions on Communication, vol. Com-26, No. 7, Jul. 1978, 9 pgs.
Trott, A., et al., "An Enhanced Cost Effective Line Shuffle Scrambling System with Secure Conditional Access Authorization," 1993 NCTA Technical Papers, 11 pgs.
Jurgen, "Two-way applications for cable television systems in the '70s," IEEE Spectrum, Nov. 1971, 16 pgs.
van Beek, P., "Delay-Constrained Rate Adaptation for Robust Video Transmission over Home Networks," Image Processing, 2005, ICIP 2005, IEEE International Conference, Sep. 2005, vol. 2, No. 11, 4 pgs.
Van der Star, Jack A. M., “Video on Demand Without Compression: A Review of the Business Model, Regulations and Future Implication,” Proceedings of PTC'93, 15th Annual Conference, 12 pgs.
Welzenbach et al., “The Application of Optical Systems for Cable TV,” AEG-Telefunken, Backnang, Federal Republic of Germany, ISSLS Sep. 15-19, 1980, Proceedings IEEE Cat. No. 80 CH1565-1, 7 pgs.
Yum, T.S.P., "Hierarchical Distribution of Video with Dynamic Port Allocation," IEEE Transactions on Communications, vol. 39, No. 8, Aug. 1, 1991, XP000264287, 7 pgs.
AC-3 digital audio compression standard, Extract, Dec. 20, 1995, 11 pgs.
ActiveVideo Networks BV, International Preliminary Report on Patentability, PCT/NL2011/050308, Sep. 6, 2011, 8 pgs.
ActiveVideo Networks BV, International Search Report and Written Opinion, PCT/NL2011/050308, Sep. 6, 2011, 8 pgs.
Activevideo Networks Inc., International Preliminary Report on Patentability, PCT/US2011/056355, Apr. 16, 2013, 4 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2011/056355, Apr. 13, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/020769, May 9, 2013, 9 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/036182, Jul. 29, 2013, 12 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2009/032457, Jul. 22, 2009, 7 pgs.
ActiveVideo Networks Inc., Extended European Search Report, Application No. 09820936.4, 11 pgs.
ActiveVideo Networks Inc., Extended European Search Report, Application No. 10754084.1, 11 pgs.
ActiveVideo Networks Inc., Extended European Search Report, Application No. 10841764.3, 16 pgs.
ActiveVideo Networks Inc., Extended European Search Report, Application No. 11833486.1, 6 pgs.
ActiveVideo Networks Inc., Korean Intellectual Property Office, International Search Report, PCT/US2009/032457, Jul. 22, 2009, 7 pgs.
Annex C—Video buffering verifier, information technology—generic coding of moving pictures and associated audio information: video, Feb. 2000, 6 pgs.
Antonoff, Michael, "Interactive Television," Popular Science, Nov. 1992, 12 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163713.6, 10 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163712.8, 10 pgs.
Benjelloun, A summation algorithm for MPEG-1 coded audio signals: a first step towards audio processed domain, 2000, 9 pgs.
Broadhead, Direct manipulation of MPEG compressed digital audio, Nov. 5-9, 1995, 41 pgs.
Cable Television Laboratories, Inc., “CableLabs Asset Distribution Interface Specification, Version 1.1”, May 5, 2006, 33 pgs.
CD 11172-3, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, Jan. 1, 1992, 39 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, Dec. 23, 2010, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, Jan. 12, 2012, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, Jul. 19, 2012, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,189, Oct. 12, 2011, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, Mar. 23, 2011, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 13/609,183, Aug. 26, 2013, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, Feb. 5, 2009, 30 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, Aug. 25, 2010, 17 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, Jul. 6, 2010, 35 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,176, Oct. 10, 2010, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,183, Apr. 13, 2011, 16 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,177, Oct. 26, 2010, 12 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, Jun. 20, 2011, 21 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, May 12, 2009, 32 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, Aug. 19, 2008, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, Nov. 19, 2009, 34 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,176, May 6, 2010, 7 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, Mar. 29, 2011, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, Aug. 3, 2011, 26 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, Mar. 29, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, Feb. 11, 2011, 19 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, Mar. 29, 2010, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,182, Feb. 23, 2010, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, Dec. 6, 2010, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, Sep. 15, 2011, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, Feb. 19, 2010, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, Jul. 20, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, Nov. 9, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, Mar. 15, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, Jul. 23, 2009, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, May 26, 2011, 14 pgs.
Craig, Office Action, U.S. Appl. No. 13/609,183, May 9, 2013, 7 pgs.
Pavlovskaia, Office Action, JP 2011-516499, Feb. 14, 2014, 19 pgs.
Digital Audio Compression Standard (AC-3, E-AC-3), Advanced Television Systems Committee, Jun. 14, 2005, 236 pgs.
European Patent Office, Extended European Search Report for International Application No. PCT/US2010/027724, dated Jul. 24, 2012, 11 pages.
FFmpeg, http://www.ffmpeg.org, downloaded Apr. 8, 2010, 8 pgs.
FFmpeg-0.4.9 Audio Layer 2 Tables Including Fixed Psycho Acoustic Model, 2001, 2 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 11/620,593, May 23, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, Feb. 7, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, Sep. 28, 2011, 15 pgs.
Herr, Final Office Action, U.S. Appl. No. 11/620,593, Sep. 15, 2011, 104 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, Mar. 19, 2010, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, Apr. 21, 2009, 27 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, Dec. 23, 2009, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, Jan. 24, 2011, 96 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, Aug. 27, 2010, 41 pgs.
Herre, Thoughts on an SAOC Architecture, Oct. 2006, 9 pgs.
Hoarty, The Smart Headend—A Novel Approach to Interactive Television, Montreux Int'l TV Symposium, Jun. 9, 1995, 21 pgs.
ICTV, Inc., International Preliminary Report on Patentability, PCT/US2006/022585, Jan. 29, 2008, 9 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022585, Oct. 12, 2007, 15 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000419, May 15, 2009, 20 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022533, Nov. 20, 2006, 8 pgs.
Isovic, Timing constraints of MPEG-2 decoding for high quality video: misconceptions and realistic assumptions, Jul. 2-4, 2003, 10 pgs.
MPEG-2 Video elementary stream supplemental information, Dec. 1999, 12 pgs.
Ozer, Video Compositing 101, available from http://www.emedialive.com, Jun. 2, 2004, 5 pgs.
Porter, Compositing Digital Images, Computer Graphics, vol. 18, No. 3, Jul. 1984, pp. 253-259.
RSS Advisory Board, "RSS 2.0 Specification," published Oct. 15, 2007.
SAOC use cases, draft requirements and architecture, Oct. 2006, 16 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 11/258,602, Feb. 23, 2009, 15 pgs.
Sigmon, Office Action, U.S. Appl. No. 11/258,602, Sep. 2, 2008, 12 pgs.
TAG Networks, Inc., Communication pursuant to Article 94(3) EPC, European Patent Application, 06773714.8, May 6, 2009, 3 pgs.
TAG Networks Inc., Decision to Grant a Patent, JP 2009-544985, Jun. 28, 2013, 1 pg.
TAG Networks Inc., IPRP, PCT/US2006/010080, Oct. 16, 2007, 6 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024194, Jan. 10, 2008, 7 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024195, Apr. 1, 2009, 11 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024196, Jan. 10, 2008, 6 pgs.
TAG Networks Inc., International Search Report, PCT/US2008/050221, Jun. 12, 2008, 9 pgs.
TAG Networks Inc., Office Action, CN 200680017662.3, Apr. 26, 2010, 4 pgs.
TAG Networks Inc., Office Action, EP 06739032.8, Aug. 14, 2009, 4 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, May 6, 2009, 3 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, Jan. 12, 2010, 4 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, Oct. 1, 2012, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, Aug. 8, 2011, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-520254, Oct. 20, 2011, 2 pgs.
TAG Networks, IPRP, PCT/US2008/050221, Jul. 7, 2009, 6 pgs.
TAG Networks, International Search Report, PCT/US2010/041133, Oct. 19, 2010, 13 pgs.
TAG Networks, Office Action, CN 200880001325.4, Jun. 22, 2011, 4 pgs.
TAG Networks, Office Action, JP 2009-544985, Feb. 25, 2013, 3 pgs.
Talley, A general framework for continuous media transmission control, Oct. 13-16, 1997, 10 pgs.
The Toolame Project, Psych_nl.c, 1999, 1 pg.
Todd, AC-3: flexible perceptual coding for audio transmission and storage, Feb. 26-Mar. 1, 1994, 16 pgs.
Tudor, MPEG-2 Video Compression, Dec. 1995, 15 pgs.
TVHEAD, Inc., First Examination Report, in 1744/MUMNP/2007, Dec. 30, 2013, 6 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/010080, Jun. 20, 2006, 3 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024194, Dec. 15, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024195, Nov. 29, 2006, 9 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024196, Dec. 11, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024197, Nov. 28, 2006, 9 pgs.
Vernon, Dolby digital: audio coding for digital television and storage applications, Aug. 1999, 18 pgs.
Wang, A beat-pattern based error concealment scheme for music delivery with burst packet loss, Aug. 22-25, 2001, 4 pgs.
Wang, A compressed domain beat detector using MP3 audio bitstream, Sep. 30-Oct. 5, 2001, 9 pgs.
Wang, A multichannel audio coding algorithm for inter-channel redundancy removal, May 12-15, 2001, 6 pgs.
Wang, An excitation level based psychoacoustic model for audio compression, Oct. 30-Nov. 4, 1999, 4 pgs.
Wang, Energy compaction property of the MDCT in comparison with other transforms, Sep. 22-25, 2000, 23 pgs.
Wang, Exploiting excess masking for audio compression, Sep. 2-5, 1999, 4 pgs.
Wang, Schemes for re-compressing MP3 audio bitstreams, Nov. 30-Dec. 3, 2001, 5 pgs.
Wang, Selected advances in audio compression and compressed domain processing, Aug. 2001, 68 pgs.
Wang, The impact of the relationship between MDCT and DFT on audio compression, Dec. 13-15, 2000, 9 pgs.
ActiveVideo Networks Inc., Decision to refuse a European patent application (Art. 97(2) EPC), EP09820936.4, Feb. 20, 2015, 4 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, 10754084.1, Feb. 10, 2015, 12 pgs.
ActiveVideo Networks Inc., Communication under Rule 71(3) EPC, Intention to Grant, EP08713106.6, Feb. 19, 2015, 12 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2014-100460, Jan. 15, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2013-509016, Dec. 24, 2014 (Received Jan. 14, 2015), 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, Mar. 16, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, Mar. 18, 2015, 11 pgs.
Craig, Decision on Appeal—Reversed—, U.S. Appl. No. 11/178,177, Feb. 25, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,177, Mar. 5, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,181, Feb. 13, 2015, 8 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2013/036182, Oct. 14, 2014, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6, Jun. 25, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP09713486.0, Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP11833486.1, Apr. 24, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Rules 161(2) & 162 EPC, EP13775121.0, Jan. 20, 2015, 3 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, Jul. 21, 2014, 3 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2014/041430, Oct. 9, 2014, 9 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2012-547318, Sep. 26, 2014, 7 pgs.
ActiveVideo Networks Inc., Certificate of Patent JP5675765, Jan. 9, 2015, 3 pgs.
Avinity Systems B.V., Final Office Action, JP-2009-530298, Oct. 7, 2014, 8 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, Dec. 24, 2014, 14 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, Sep. 24, 2014, 13 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, Nov. 5, 2014, 26 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, Jan. 5, 2015, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, Jan. 29, 2015, 11 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, Dec. 3, 2014, 19 pgs.
ETSI, “Hybrid Broadcast Broadband TV,” ETSI Technical Specification 102 796 V1.1.1, Jun. 2010, 75 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, Dec. 8, 2014, 10 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, Nov. 28, 2014, 18 pgs.
OIPF, “Declarative Application Environment,” Open IPTV Forum, Release 1 Specification, vol. 5, V1.1, Oct. 8, 2009, 281 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, Nov. 18, 2014, 9 pgs.
Schierl, T., et al., 3GPP Compliant Adaptive Wireless Video Streaming Using H.264/AVC, © 2005, IEEE, 4 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, Dec. 19, 2014, 5 pgs.
TAG Networks Inc, Decision to Grant a Patent, JP 2008-506474, Oct. 4, 2013, 5 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, Feb. 26, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, Dec. 26, 2014, 12 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, Mar. 2, 2015, 8 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, Apr. 23, 2015, 8 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/262,674, May 21, 2015, 7 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, Apr. 1, 2015, 10 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, Apr. 14, 2015, 5 pgs.
Avinity Systems B.V., Pre-Trial Reexamination Report, JP2009-530298, Apr. 24, 2015, 6 pgs.
Related Publications (1)
    Number: 20120257671 A1    Date: Oct. 2012    Country: US
Provisional Applications (1)
    Number: 61/473,085    Date: Apr. 2011    Country: US