Class-based intelligent multiplexing over unmanaged networks

Information

  • Patent Grant
  • Patent Number
    10,506,298
  • Date Filed
    Monday, October 23, 2017
  • Date Issued
    Tuesday, December 10, 2019
Abstract
A method of adapting content-stream bandwidth includes generating a content stream for transmission over an unmanaged network with varying capacity and sending the content stream toward a client device. The method includes monitoring the capacity of the unmanaged network and determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity. The upcoming portion of the content stream includes video content and user-interface data. The method further includes, in response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity, prioritizing low latency for the user-interface data over maintaining a frame rate of the video content when the user-interface data is the result of a user interaction and reducing a size of the upcoming portion of the content stream in accordance with the prioritizing. The reducing comprises decreasing the frame rate of the video content.
Description
TECHNICAL FIELD

The present disclosure pertains generally to cable television network technology, and particularly to adaptive and dynamic multiplexing techniques for interactive television services delivered over various network topologies including the Internet.


BACKGROUND

Interactive television services provide a television viewer with the ability to interact with their television in meaningful ways. Such services have been used, for example, to provide navigable menuing and ordering systems that are used to implement electronic program guides and pay-per-view or other on-demand program reservations and purchases, eliminating the need to phone the television provider. Other uses include interacting with television programming for more information on characters, plot, or actors, or interacting with television advertisements for more information on a product or for a discount coupon.


These services typically employ a software application that is executed on a server system located remotely from the TV viewer such as at a cable television headend. The output of the application is streamed to the viewer, typically in the form of an audio-visual MPEG Transport Stream. This enables the stream to be displayed on virtually any client device that has MPEG decoding capabilities such as a television set-top box. The client device allows the user to interact with the remote application by capturing keystrokes and passing these back to the application running on the server.


In cable system deployments, the headend server and its in-home set-top or other client are separated by a managed digital cable-TV network that uses well-known protocols such as ATSC or DVB-C. Here, “managed” means that any bandwidth resources required to provide these services may be reserved prior to use. Once resources are allocated, the bandwidth is guaranteed to be available, and the viewer is assured of receiving a high-quality interactive application experience.


In recent years, audio-visual consumer electronics devices increasingly support a Local Area Network (LAN) connection, giving rise to a new class of client devices: so-called “Broadband Connected Devices”, or BCDs. These devices may be used in systems other than the traditional cable television space, such as on the Internet. For example, a client device, such as a so-called smart TV, may implement a client application to deliver audio-visual applications streamed over a public data network from an audio-visual application streaming server to a television. A user may employ a remote control in conjunction with the client device to transmit interactive commands back to the application streaming server, thereby interacting with the server controlling the choice and delivery of desired content.


The “last mile” (the final leg of the telecommunications network providing the actual connectivity to the end user) in public networks is typically made up of a number of network technologies, ranging from high-capacity fiber-optic networks to asymmetric digital subscriber lines. In contrast, inside a home, distribution is often realized by means of wireless technologies such as IEEE 802.11 networks (commonly known as Wi-Fi networks). As a result, capacity (here meaning the maximum aggregate bandwidth a specific link is able to carry) varies between end users, and, due to the wireless technologies involved, capacity for a particular end user also varies over time. Further, public data networks are not managed in the same way as private cable television distribution systems are. TCP, the most common transport protocol for the Internet, tries to maximize usage of its fair share of the capacity. As a result, it is impossible to guarantee a specific amount of bandwidth to applications running over such networks.


The intricacies of transmitting video over a network with varying capacity and available bandwidth (i.e., capacity not yet in use) are a known challenge that has been successfully addressed in several classes of systems. Examples of systems that transmit video under such conditions include:

    1. Video conference call systems,
    2. Cloud game services,
    3. HLS (HTTP Live Streaming), and
    4. Progressive download video-on-demand.


Video conference call systems and cloud game services represent a type of system where a continuous low-delay video signal is encoded in real-time. The encoded stream adapts to changing network conditions by changing the picture quality, where a lower picture quality (typically realized by a higher average quantization of the coefficients that represent the picture) yields a lower average bitrate. Typically, these systems stream over an unreliable transport (such as UDP or RTP) and employ error correction and/or concealment mechanisms to compensate for loss. Any artifacts due to this loss or imperfect concealment are corrected over time due to the continuous nature of the signal. These systems require a complex and often proprietary client not only because of the complexity of the employed methods of concealment, but also because the client plays an important role in the measurement and reporting of the statistics that allow the server to make intelligent decisions about network conditions.


On the other end of the spectrum are systems that stream an offline-encoded, non-real-time stream over a reliable transport protocol like TCP/HTTP. These streams are progressively downloaded, where buffering makes the system robust for temporal variations in available bandwidth or capacity and, in the case of HLS for example, the stream changes to a different quality level depending on the capacity or sustained available bandwidth. In this case, the complexity of the client is relatively low and the components that make up the client are well-defined.


An interactive television service has a combination of properties of both of these previously mentioned types of systems. The streams exhibit the low-delay, real-time properties typically associated with UDP/RTP and high-complexity, proprietary clients. However, the stream is received by relatively low-complexity clients using standard components. Typically, such clients are more akin to progressive-download clients using TCP/HTTP than to the clients that provide interactive or real-time services.


An interactive television service also has relatively static portions with a graphical user interface (GUI) that requires low-latency, artifact-free updates upon interactivity, combined with portions that have full motion video and audio that require smooth and uninterrupted play out.


Conventional systems do not adequately facilitate this combination of requirements. A new approach is therefore needed.


SUMMARY

Digital television over a managed network such as a cable television system uses constant-bandwidth channels to carry multiple program streams. Multiplexing within a fixed allocation of bandwidth requires a multiplexer controller to manage the allocation of bandwidth among a group of competing program streams or competing sessions. In this manner, an individual program stream or session competes for bandwidth against the remainder of the program streams or sessions in the group of program streams or sessions. Control logic in the multiplexer controller manages the byte allocation among the program streams so that as few compromises as possible in quality are required and the compromises are evenly distributed among the group.


Managed networks form the vast majority of commercial television program distribution networks. However, video program consumption is rapidly moving to both live and on-demand consumption via the Internet, an unmanaged network. Today, fully one-third of all Internet data traffic at primetime is from the popular Internet video service Netflix. In the near future, over 80% of all Internet traffic will be video data.


On an unmanaged network, such as the Internet, a single program stream (or session) competes for bandwidth from a large number of other unknown streams over which the multiplexer has no control. One of the many advantages of the systems and methods described herein is a multiplexer controller that can control sending video information over unmanaged networks and utilize a class-based, multi-dimensional control logic that optimizes the interactive user experience for interactive and on-demand television programming.


Interactive television services provide the viewer with the ability to interact with their television for the purposes of selecting certain television programming, requesting more information about the programming, or responding to offers, among many possible uses. Such services have been used, for example, to provide navigable menu and ordering systems that are used to implement electronic program guides and on-demand and pay-per-view program reservations. These services typically employ an application that is executed on a server located remotely from the viewer. Such servers may be, for example, located at a cable television headend. The output of a software application running on the server is streamed to the viewer, typically in the form of an audio-visual MPEG Transport Stream. This enables the stream to be displayed on virtually any client device that has MPEG decoding capabilities, including a “smart” television, television set-top box, game console, and various network-connected consumer electronics devices and mobile devices. The client device enables the user to interact with the remote application by capturing keystrokes and passing the keystrokes to the software application over a network connection.


An interactive television service combines the properties of both of the aforementioned types of systems (i.e., managed and unmanaged network topologies). Such services require low delay, perceptually real-time properties typically associated with Real Time Transport Protocol running over User Datagram Protocol (UDP/RTP) on high-complexity, proprietary clients. However, in interactive television applications the stream is received by relatively low-complexity clients using consumer-electronics-grade components. Typically, the clients are more akin to progressive download clients using Transmission Control Protocol/Hypertext Transfer Protocol (TCP/HTTP) than to the clients that typically provide interactive services.


An interactive television service is also a combination of relatively static image portions representing a graphical user interface (graphical UI or GUI) that requires low-latency, artifact-free updates responsive to user input, and other portions that may have video with associated audio that require smooth and uninterrupted play-out. Conventional multiplexers do not adequately facilitate this combination of data types over the Internet. For instance, in existing systems that send data over the Internet, when large user-interface graphics of a particular session need to be sent to a particular client and unpredictable network congestion impacts delivery, such systems have no means available (except a drastic reduction in image quality) to scale back or reorder the multiplex elements to allow a temporarily large data block representing the UI graphics to pass.


With an extraordinarily high number of sessions active across the Internet, disruption to video, audio, and/or GUI data is a certainty. The only alternatives available to conventional systems are often drastic reductions in video quality, greatly lowered frame rates, or, worse, interruption of program material while the receiving client device attempts to buffer sufficient data to proceed.


The present embodiments overcome these common obstacles to sending video programming and interactive television services over unmanaged networks to receiving client devices by exploiting class-based asset allocation. For example, improvement in video transmission across an unmanaged network is realized using multi-dimensional control-loop logic that is programmed to make the best choice in managing adverse network conditions by trading off latency, frame rate, and video quality. Critical data such as audio is maximally protected against packet loss, which is desirable because “the ears don't blink”: audio interruptions are usually far more objectionable than comparable interruptions in video.


Furthermore, network latency is measured such that useful measures of network congestion can be estimated.


In some embodiments, a method of adapting content-stream bandwidth includes generating a content stream for transmission over an unmanaged network with varying capacity; sending the content stream, via the unmanaged network, toward a client device; monitoring the capacity of the unmanaged network; determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity, wherein the upcoming portion of the content stream corresponds to a respective frame time and includes video content and user-interface data; and, in response to a determination that the aggregate bandwidth does not fit the capacity, reducing a size of the upcoming portion of the content stream.
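As a purely illustrative, non-normative sketch of this method, the following Python fragment models one portion per frame time and reduces an oversized portion by skipping its video frame. The Portion type and the monitor_capacity/send callables are assumptions made for the sketch, not elements of the claimed system.

    from dataclasses import dataclass

    @dataclass
    class Portion:          # one frame time of the content stream
        video: int          # bytes of video content
        ui: int             # bytes of user-interface data
        audio: int          # bytes of audio data

    def send_adapted(portions, monitor_capacity, send):
        """Minimal sketch: reduce any portion whose aggregate bandwidth
        does not fit the monitored capacity, then send it."""
        for p in portions:
            if p.video + p.ui + p.audio > monitor_capacity():
                p = Portion(0, p.ui, p.audio)  # reduce: drop this video frame
            send(p)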


In some embodiments, a server system includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for performing the above-described method. In some embodiments, a non-transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of a server system. The one or more programs include instructions for performing the above-described method.





BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings. Like reference numerals refer to corresponding parts throughout the figures and description.



FIG. 1A is a schematic according to some embodiments of an interactive television (ITV) application server, client device, and distribution network elements for exploiting adaptive bit rate communications over an unmanaged network such as the Internet.



FIG. 1B is a flow chart of a method of testing network congestion and mitigating its effects on a client device that is interacting with an interactive television (ITV) application server, in accordance with some embodiments.



FIG. 2A is a multi-dimensional control graph showing decision paths for multiplexing audio, video and graphical user interface (UI) elements, according to some embodiments. Each dimension indicates which components of the user experience can contribute bandwidth for use by other components while minimizing the perceptual degradation of the composite user front-of-screen experience.



FIG. 2B is a multi-dimensional control graph showing decision paths as in FIG. 2A with the additional decision dimension of entire application groups, in accordance with some embodiments.



FIG. 3 is a schematic according to some embodiments of an interactive television (ITV) application server and client device depicting the base transport control protocol (TCP) used for communication between the server and the client device. The server exploits the disclosed proprietary (i.e., modified) TCP protocol while the client can advantageously receive the data stream by means of unmodified TCP.



FIG. 4 is a schematic according to some embodiments of an interactive television (ITV) application server and client device depicting distribution network elements.



FIG. 5 is a time-flow diagram of class-based allocation for a frame distribution of UI, video elements and audio with adequate bandwidth.



FIG. 6 is a time-flow diagram of class-based allocation, illustrating a constrained bandwidth allocation mitigated by reducing the frame rate of certain video elements, allowing UI and audio to pass unchanged, in accordance with some embodiments.



FIG. 7 is a time-flow diagram of class-based allocation, illustrating a constrained bandwidth allocation mitigated by maintaining the video frame rate at the expense of user-interface latency, in accordance with some embodiments.



FIG. 8 is a time-flow diagram depicting a multi-framerate encoding of a video stream transcoded at four video frame rates and depicting an example of transitioning from one framerate to another at only certain transition times that represent valid encoding sequence transitions.



FIG. 9 is a schematic of a client device (e.g., a set-top box or smart TV host system) running an ITV client application and third-party applications.



FIG. 10 is a flow chart depicting a method of stuffing the video decoder with null frames to prevent video buffer decoder under-run.





DETAILED DESCRIPTION

Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide an understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.


In recent years, audio-visual consumer electronics devices increasingly support a Local Area Network (LAN) connection, giving rise to a new class of client devices: so-called “Broadband Connected Devices”, or BCDs. These devices may be used in systems other than traditional cable television, such as on the Internet. For example, a client device such as a smart TV may implement a client application to deliver audio-visual applications streamed over a public data network from an audio-visual application streaming server (also referred to as an application server) to a television. A user may employ a remote control in conjunction with the client device to transmit interactive commands back to the application streaming server, thereby controlling the content interactively.


Quality of service for the delivery of digital media over an unmanaged network is optimized by exploiting a class-based management scheme to control an adaptive bit-rate network multiplexer. FIG. 1A depicts such a system in accordance with some embodiments. The system includes a video transcoder 101, application engine 102, compositor 103, global frame time clock 104, proprietary TCP component 105, unmanaged downstream 106 and upstream 108 communication channel (e.g., the Internet), and client firmware 107. The client firmware 107 runs on a client device. The video transcoder 101, application engine (i.e., application execution engine) 102, compositor 103, global frame time clock 104, and proprietary TCP component (i.e., stack) 105 are situated on (e.g., run on) a server system (e.g., at a cable television headend).


The compositor 103 composites fragments and video streams from various sources such as, but not limited to, the application engine 102, which generates fragments representing UI updates, and the transcoder 101, which transcodes video assets into composite-able assets. Feedback 109 from the proprietary TCP component 105, obtained through TCP's acknowledgment mechanism over the upstream channel 108, is used to determine a global frame time clock 104.



FIG. 4 depicts a more detailed decomposition of the system of FIG. 1A, in accordance with some embodiments. It includes the same components: a transcoder 401, an application engine 402, a compositor 412, a proprietary transport such as TCP 413, an unmanaged network 414 such as the Internet, and client firmware 415. In some embodiments, FIG. 1A's frame rate feedback signal 109 is derived by a control loop 409 in the compositor 412 from information passed from the transport receive process 410 to the control loop 409. The control loop 409 and a scheduler 403 (also in the compositor 412) optimize the composited stream within the optimization space of FIG. 2A or 2B.



FIG. 2A is a three-dimensional control graph showing decision paths for multiplexing audio, video, and graphical user interface (UI) elements, according to some embodiments. Each dimension indicates which components of the user experience can contribute bandwidth for use by other components while minimizing the perceptual degradation of the composite front-of-screen experience. The three dimensions are latency 201, frame rate 202, and quality 203. The three-dimensional decision logic thus may adjust (i.e., trade off) frame size versus frame rate (and thus latency) versus frame quality (quantization). In some embodiments, the control logic of the scheduler makes decisions by trading off frame size for frame rate, which affects latency, or further trades image quality for either of, or a combination of, frame size and frame rate. FIG. 2B is a four-dimensional control graph showing decision paths as in FIG. 2A with the additional decision dimension of application groups 204.


Conventional systems typically trade picture quality 203 for bitrate, but this does not yield satisfactory results in the disclosed system. The system of FIG. 4 combines graphical user interfaces with video. The end-user experience with respect to user-interface portions of the screen benefits from low-latency, high-quality, and error-free updates. At the same time, the video signal is best served by smooth, uninterrupted playback, although quality should not be degraded too much or the picture becomes blocky or distorted. Tests on end-user experience have shown that trading frame rate 202 and latency 201 (FIGS. 2A-2B) for bit rate may actually result in a better experience than trading quality. Statistics received from the proprietary TCP component 413 are provided as input to the control loop 409. Examples of such statistics include:

    • Capacity (C),
    • Available bandwidth (A),
    • Average Delta One Way Delay (˜DOWD),
    • Round Trip Time (RTT), and
    • Loss rate.


Based on these inputs the control loop 409 calculates a frame rate, maximum chunk size, and pause signal that are provided as input to the application engine 402 and scheduler 403. For example, the frame rate is provided to the application engine 402, while the frame rate, maximum chunk size, and pause signal are provided to the scheduler 403.
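A minimal sketch of such a control loop, assuming the frame-rate flavors and the Bapp/Fs relationship described later in this section; the divisor set and default values are illustrative, not taken from the patent.

    def control_outputs(capacity_bps: float, b_app_bps: float = 6e6,
                        fs: float = 30.0):
        """Pick the highest effective frame rate whose bit rate fits the
        capacity, keeping the per-frame bit budget (and thus picture
        quality) constant; assert the pause signal if nothing fits."""
        max_chunk = int(b_app_bps / fs / 8)       # bytes per system frame
        for divisor in (1, 2, 3, 4):              # full, half, third, quarter rate
            if b_app_bps / divisor <= capacity_bps:
                return fs / divisor, max_chunk, False  # frame rate, MaxChunkSize, pause
        return fs / 4, max_chunk, True            # even quarter rate does not fit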


In some embodiments, the application engine 402 uses the frame rate to adapt to variable bandwidth conditions. A reduction in frame rate by a factor of 2 roughly yields a similar reduction in bit rate for equivalent picture quality. The fragments from the application engine 402 may use a fixed quantization parameter to keep quality uniform across the interface. The output of the application engine 402 is therefore generally more peaky than that of a typical video asset, because the fragments may use these fixed quantization parameters instead of being rate controlled.


In some embodiments, the transcoder 401 may have video assets in different frame-rate flavors instead of quality levels. Video assets may be transcoded ahead of time, and adaptability to various bandwidth conditions is achieved by transcoding a video asset in multiple bit-rate flavors (i.e., using multiple bit rates). In conventional systems, a reduction in bitrate is typically achieved by increasing the quantization of the coefficients that constitute the video frames. The result of this increase in quantization is a reduction in picture quality, generally perceived as ringing, contouring, posterizing, aliasing, blockiness, or any combination thereof, especially at scene changes. Instead of reducing the quality of the video and maintaining the frame rate, in some embodiments the frame rate is reduced and the quality maintained to achieve a similar reduction. The advantage is that for varying bandwidth conditions, the quality of the video remains the same, albeit at a different frame rate. Another advantage is that by having a choice of frame-rate options for video assets, the scheduler 403 can trade off UI latency for video frame rate.


In some embodiments, the transport component 413 employs a UDP-like streaming behavior with TCP semantics. The advantage of using the TCP protocol's semantics is that the client can run a standard TCP/HTTP protocol implementation. Using the TCP protocol also allows for easier traversal of NAT routers and firewalls that are often found in the last mile. The disadvantage of standard TCP is that it is generally not well suited for real-time, low-delay streaming because of its random back-off, fairness, and retransmission properties. Therefore, in some embodiments the server system does not adhere to typical TCP behavior such as the slow start, congestion window, and random back-off, and instead sends segments in a way that suits the described real-time streaming requirements, while maintaining enough compliance (such as following TCP's receive-window and retransmission rules) to be able to use standard TCP client implementations.


The transport component 413 may have a send process 404 and a receive process 410. The send process 404 sends scheduled chunks as bursts of TCP segments, without regard to traditional TCP fairness rules, and retransmits lost segments as mandated by the TCP protocol upon loss indications from the receive process 410. The receive process 410 processes TCP acknowledgments (ACKs) and selective acknowledgments (SACKs) and timestamps pertaining to the segments sent by the send process 404. RFC 1323 describes TCP timestamps. In standard TCP implementations, the TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers (PAWS). PAWS is used when the TCP window size exceeds the possible number of sequence numbers, typically in networks with a very high bandwidth-delay product. In some embodiments, the timestamps are used to determine, on the server side, the link's capacity and available bandwidth by leveraging the fact that the burst transmission timespan can be compared to the client-side reception timespan. Conventional systems have algorithms that use these delta one-way delays to derive the link's capacity and, by varying the exact timing of the segments in the burst, make an approximation of the available bandwidth. Instead of using special probe data to determine these statistics only at the start of a session, the server system uses the audio and video data itself to continuously measure changes in the link's capacity and available bandwidth by reading the timestamps of the TCP ACKs returning from the burst of TCP packets sent to the client. This measurement of return ACKs provides a means to determine network latency and congestion, allowing for more accurate use of available bandwidth.
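The following sketch illustrates the dispersion idea under stated assumptions: the segments of a chunk were sent back-to-back, and the client's echoed receive timestamps (in milliseconds) are available per segment. It is a simplified model of the measurement, not the patent's implementation.

    def burst_capacity_bps(seg_bytes, recv_ts_ms):
        """Estimate narrow-link capacity from the reception timespan of a
        back-to-back burst: bytes delivered divided by elapsed time."""
        span_s = (max(recv_ts_ms) - min(recv_ts_ms)) / 1000.0
        if span_s <= 0:
            return float("inf")   # burst fit within one timestamp tick
        # The first segment only anchors the timespan; count the rest.
        return sum(seg_bytes[1:]) * 8 / span_s

    # Example: twelve 1450-byte segments received over ~16 ms -> ~8 Mbps
    print(burst_capacity_bps([1450] * 12,
                             [0, 2, 3, 5, 7, 8, 10, 11, 13, 14, 15, 16]))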


The same mechanisms can be implemented on top of standard UDP instead of TCP, assuming packet loss is handled by standard mechanisms such as Forward Error Correction or retransmissions.


An unmanaged network 414 (e.g., the Internet), is the environment over which the described system is designed to work. Such a network is typified by a plurality of downstream networks with queues and associated properties 405 and upstream networks with queues and associated properties 411. The downstream and upstream networks are generally asymmetrical with respect to capacity, available bandwidth and latency properties. The disclosed system assumes no prior knowledge over the properties of the unmanaged network, other than that variable latency, loss and reordering are assumed to occur. Some links, such as Wi-Fi links, may also exhibit temporary loss of all connectivity.


The client device running the client firmware 415 may be a broadband-connected set-top box, a broadband-connected television, a computer, or any other device. In some embodiments, the client device has a standard TCP client implementation 406, a thin client implementation 407, and an audio/video decoder 408 (e.g., one that decodes MPEG-2 or H.264 video and MPEG audio, AC3, or AAC audio streams).


In some embodiments, the audio/video decoder 408 is a hardware decoder. Typically, hardware decoders rely on a constant stream of audio/video data and do not handle buffer under-runs very well. Therefore, the thin client implementation 407 may implement methods to prevent under-run, such as the method of FIG. 10. In the method of FIG. 10, the client injects the hardware decoder buffer with null-frames as needed to maintain the health of the decoding chain. These null-frames may be inter-coded (i.e., temporally predicted) frames that consist only of skip macroblocks or macroblocks that do not change the reference frame. The null-frames may also be disposable, so that the state of the decoder does not change. If null-frames were inserted, the thin client may compensate by later removing similar null-frames from the video stream. During the period in which more frames have been added than removed, the client may have to apply a timestamp compensation mechanism (such as re-stamping the presentation time stamps (PTSs)) to keep the hardware decoder's timing mechanism satisfied.


The method of FIG. 10 begins with video data being retrieved (1201) from the TCP receive buffer. If the video data is not a complete video frame (1202-No), a null frame is generated (1203) and injected (i.e., stuffed) into the hardware decoder buffer, and the method proceeds to operation 1206, discussed below. If the video data is a complete frame (1202-Yes), it is determined whether the frame is a non-null, non-disposable frame (1204). If it is not (1204-No), it is determined whether audio and video are in sync (1205). If audio and video are not in sync (1205-No), the method returns to operation 1201. If audio and video are in sync (1205-Yes), or if the frame is a non-null, non-disposable frame (1204-Yes), then the PTS is restamped (1206) and the frame is passed to the decoder. The method then waits (1207) for the next frame time and returns to operation 1201.
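A compact rendering of this flow in Python, using the flowchart's operation numbers as comments; the Frame fields and the av_in_sync callable are assumptions made for the sketch.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        complete: bool
        null_or_disposable: bool
        pts: int

    def feed_decoder(incoming, av_in_sync):
        """Return the frame sequence passed to the hardware decoder."""
        out, next_pts = [], 0
        for f in incoming:                                   # 1201: get video data
            if not f.complete:                               # 1202-No
                f = Frame(True, True, -1)                    # 1203: stuff a null frame
            elif f.null_or_disposable and not av_in_sync():  # 1204-No, 1205-No
                continue                                     # drop it so video catches up
            out.append(Frame(f.complete, f.null_or_disposable, next_pts))  # 1206: restamp PTS
            next_pts += 1                                    # 1207: next frame time
        return out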


The compositor 412 may generate transport streams with a system frame rate of 29.97 Hz for NTSC systems or 25 Hz for PAL systems. When the compositor 412 is said to change to another frame rate, it is the effective frame rate that may be changed. The effective frame rate is the rate at which the display changes, as opposed to the system frame rate, which is the rate at which the frame clock advances. If the effective frame rate is lower than the system frame rate, the compositor may output intermediate null-frames in between frames that carry data that change the display. Suppose the system frame rate is 30 Hz and the effective frame rate is 15 Hz. In this case the compositor may output the following frames: E0-N1-E2-N3-E4-N5, where Et denotes an effective frame at system frame time t and Nt denotes a null frame at system frame time t. This can be arbitrarily extended to any effective frame rate (e.g., E0-N1-N2-E3-N4-N5 for 10 Hz and E0-N1-N2-N3-E4-N5 for 7.5 Hz).
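The interleaving can be expressed mechanically; the following small sketch (with an invented function name) reproduces the patterns given above.

    def frame_pattern(system_fps, effective_fps, n):
        """Label each of n system frames as effective (E) or null (N)."""
        step = system_fps / effective_fps
        labels, next_e = [], 0.0
        for t in range(n):
            if t >= next_e:
                labels.append(f"E{t}")
                next_e += step
            else:
                labels.append(f"N{t}")
        return "-".join(labels)

    print(frame_pattern(30, 15, 6))   # E0-N1-E2-N3-E4-N5
    print(frame_pattern(30, 10, 6))   # E0-N1-N2-E3-N4-N5
    print(frame_pattern(30, 7.5, 6))  # E0-N1-N2-N3-E4-N5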


The client firmware 415 may remove null-frames to compensate for earlier null-frames it introduced, as instructed by the server. When the effective frame rate equals the system frame rate, the stream may not have frames that can be removed. It is therefore advantageous to always have a system frame rate that is double the maximum effective frame rate. For an NTSC system the system frame rate may be 59.94 Hz, and for PAL the system frame rate may be 50 Hz, although the maximum effective frame rate of transcoded assets is 29.97 Hz or 25 Hz respectively.


Another reason to use a system frame rate that is higher than the maximum effective frame rate may be to allow for more freedom in resampling video assets from film or cinema rate to the system frame rate. Similarly, converting assets from 29.97 Hz to 25 Hz and vice versa may yield better results when the system frame rate is higher and frames can be scheduled closer to their original frame times.


In some embodiments of the invention, the higher system frame rate may be used to separate video material from user interface material. This may be achieved server-side by using the even frames for encoded video while using the odd frames for composited user interface fragments (or vice versa). The advantages of this approach would be a reduced compositing overhead and the fact that the video may use encoding parameters that are incompatible with the fragment compositing process employed for the user interface fragments (for example, an embodiment that uses H.264 may use CABAC for encoding the video assets while using CAVLC for the composited fragments), resulting in higher-quality video.


In some embodiments of the invention, the concept of alternating video and user interface frames may also be used to retrieve and decode an out-of-band video asset. An additional benefit of such an approach is that, for the video stream, a progressive download of the asset can be used in combination with low-latency server-side encoded user interfaces. In some embodiments, the user interface and video share the same latency. It is not possible to send ahead video data without additional complexity on the client. If a system does send ahead video data, the client may be required to change timestamps to correct playback. However, tolerance with respect to varying link conditions would improve if the audio and video could be decoupled from the user interface and be buffered as in a normal progressive-download system.


In some embodiments, this decoupling may be partially achieved by sending ahead audio. Discontinuities in audio playback are much more noticeable than temporary disruptions in video. The user experience is considerably enhanced if a few hundred milliseconds of audio are available to bridge a temporary loss in connectivity. The system may send ahead audio because audio and video can be synched via timestamps. At the client, audio-video sync may be achieved by matching audio timestamps with video timestamps. Therefore, it is not a problem to send ahead audio up to a certain amount. This way, a certain degree of connectivity robustness and a continuous user experience is achieved without a latency penalty, which would otherwise spoil the interactive experience for the user.
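A sketch of the send-ahead budget under assumed numbers: the target lead of a few hundred milliseconds and the fixed audio-frame duration are both illustrative parameters, not values from the patent.

    def audio_frames_to_send_ahead(frames_buffered, frame_ms=24.0,
                                   target_lead_ms=300.0):
        """How many extra audio frames to schedule now so the client keeps
        roughly target_lead_ms of audio buffered ahead of the video."""
        deficit_ms = target_lead_ms - frames_buffered * frame_ms
        return max(0, round(deficit_ms / frame_ms))

    print(audio_frames_to_send_ahead(5))   # 5 * 24 ms buffered -> send ~8 more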


In the event of a temporary disruption of link connectivity, the audio and video may become out of sync because the audio keeps playing while the video buffer may under-run. To alleviate this problem, the thin client 407 may use a null-frame stuffing/removing mechanism as has been described.


Audio may also be sent ahead over a separate logical or physical connection.


As has been described, the compositor 412 may use frame rate and latency instead of, or in addition to, picture quality to adapt the audio/video streams and user interface to the available bandwidth. Adapting by changing the quantization of the coefficients is well-known. Adaptability using latency and/or frame rate is best described by example.


In some embodiments, an interactive application includes a user interface with a partial-screen video stream. FIG. 5 depicts the situation where the bandwidth required for the composited user interface 501, video stream 502, and audio stream 503 fits the available bandwidth as expressed by the MaxChunkSize. (The MaxChunkSize is the maximum chunk size in bytes that the system uses for a given frame rate.) From frame times t through t+3, the aggregate bandwidth for the three sources that make up the stream never exceeds the maximum chunk size for the system frame rate, and no policy decision has to be made.


Now suppose that the aggregate bandwidth does not fit (i.e., exceeds the maximum chunk size) because, for example, a user interface update at t is too big to fit the budget. Audio is typically a fixed component, and the user experience benefits from uninterrupted audio playback. Therefore, a policy decision has to be made whether to give precedence to the user interface update or to the video stream.


If the user interface update is the result of the user interacting with the system, it may be assumed that low latency of the response is more important than maintaining the video frame rate. An example of how this works out is depicted in FIG. 6. The user interface update, consisting of chunks 603 and 604, may be spread over t and t+1, and the video frame at t+1 may be skipped to make enough room. For sustained oversubscription (for example, when the user interface animates for a number of frames), this allocation scheme may be repeated, resulting in a similar strategy at t+2 and t+3. Because audio is fixed, no change is made to the scheduling of audio data.


If the user interface update is not the result of the user interacting but, for example, is application-timer induced, it may be assumed that the user is watching the video stream, and it may be beneficial for the user experience to maintain the video frame rate and delay the user interface. Such a scenario is depicted in FIG. 7. In this scenario audio 701 and the first video frame 702 are scheduled as before. However, instead of sending the frame representing the user interface update at t, video frames V1 703, V2 704, and V2′ 705 are sent ahead, and the user interface update is delayed until enough bandwidth is available to send the complete update. The trivial implementation of this strategy would generate UI0 as in FIG. 6 (603 and 604); however, a more optimal user experience is achieved by extrapolating the update at t to the time when the frame is actually displayed, which may be t+2, and the figure therefore depicts UI2.
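Stated as a Python sketch, the two policies of FIGS. 6 and 7 might look as follows; the byte accounting is a simplification and the function name is invented for the sketch.

    def schedule_frame(ui_bytes, video_bytes, audio_bytes,
                       max_chunk, user_initiated):
        """Class-based allocation for one frame time. Audio is always
        scheduled in full; the remaining budget is split per policy."""
        budget = max_chunk - audio_bytes
        if ui_bytes + video_bytes <= budget:
            return {"audio": audio_bytes, "video": video_bytes, "ui": ui_bytes}
        if user_initiated:
            # FIG. 6: favor UI latency; skip this video frame and spread
            # the UI update over as many frame times as needed.
            return {"audio": audio_bytes, "video": 0,
                    "ui": min(ui_bytes, budget)}
        # FIG. 7: favor video frame rate; delay the whole UI update.
        return {"audio": audio_bytes, "video": video_bytes, "ui": 0}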


The examples of FIGS. 5 to 7 assume that the effective frame rate before adaptation equals the system frame rate. It should be noted, though, that this need not be the case; the policy decision can be made for every effective frame rate as long as there is a lower video frame rate available. If there is not, the system always has the option to delay the user interface graphical elements.


Video frames may be sent ahead because the video streams may be pre-transcoded or real-time transcoded video assets that have been transcoded at least a few frames ahead of time. The structure of a typical multi-frame-rate video asset is depicted in FIG. 8. It contains multiple (e.g., four) video streams 801-804 at distinct respective frame rates (e.g., full, half, third, and quarter, respectively) and an audio stream. To save on the resources needed to transcode and store assets, the lower-frame-rate assets may be available in only a single permutation or a limited number of permutations. For example, the half frame rate may be available only in even frames; the odd-frame permutation may be omitted. This means that it is not always possible to switch from one frame rate to another instantaneously. For example, at time t the compositor 412 can switch from full frame rate V0-0 to V1-2, V2-3, and V3-4 because they all encode the difference to an equivalent frame at time t. At time t+4, however, the compositor can only return to full frame rate or half frame rate, because the third frame rate does not have a frame that transcodes the transition from the picture at time t+4 to time t+n.
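Under the simplifying assumption that each lower-rate stream exists in a single phase anchored at time 0, with each frame at time k·d referencing time (k−1)·d, valid switch points can be checked with modular arithmetic. This is a sketch of the alignment constraint only, not the figure's exact layout, and the actual asset may impose further constraints (e.g., which permutations were actually transcoded).

    def can_enter(t, divisor):
        """A stream at divisor d (assumed single phase from time 0) has
        frames at times d, 2d, ..., each referencing the previous multiple
        of d, so it can be entered only when t is a multiple of d."""
        return t % divisor == 0

    print(can_enter(4, 3))  # False: the third-rate stream has no reference at t=4
    print(can_enter(4, 2))  # True: the half-rate stream can be entered at t=4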


An advantage of reducing frame rate instead of reducing picture quality is that frames at a particular time t are more or less equivalent; they represent roughly the same picture. Especially for scene changes this is a considerable advantage, because it is typically at scene changes that blocking artifacts become very apparent. As has been noted before, a reduction in frame rate by a factor of 2 yields a reduction in bitrate by roughly a factor of 2 for equivalent picture quality. It should be noted, though, that equivalent frames may not be identical for different frame rates. Due to the intricacies of the block-based encoder and its quantization process, the exact pixel values depend on the exact sequence of intra and inter prediction and quantization. Switching from one frame rate to another may introduce small errors, albeit much smaller than when switching between streams of different quality. An appropriate intra-refresh process may be applied to alleviate a buildup of these small errors.


The concept of the effective frame rate is also used by the transport. As has been outlined in FIGS. 5-7, data of one or more composited frames is aggregated in a chunk of up to MaxChunkSize and sent. The MaxChunkSize is determined by the control loop component and may be derived from the capacity or available bandwidth and frame rate. A simple example of how to derive MaxChunkSize is given below.


Assume Bapp (in bits per second) is the bit rate at which an application is specified to work at full frame rate, with system frame rate Fs (in frames per second). Then the following may hold:

MaxChunkSize=(Bapp/Fs)/8


If the available bandwidth or capacity exceeds Bapp, the effective frame rate Fe may be equal to Fs, or half of Fs if the system is to benefit from the advantages outlined before. If the available bandwidth or capacity is less than Bapp, the control loop may decide either to shrink the MaxChunkSize or to change the effective frame rate to the next available divisor. In some embodiments, the effective frame rate may be changed. The advantages of maintaining the bit budget for individual frames and reducing the frame rate have been outlined for picture quality, but the advantage also extends to the transport: by reducing frame rate instead of picture quality, the average amount of data per frame remains the same for varying bitrates. For efficiency reasons it is advantageous to always send the data in the chunk using the maximum TCP segment size. Since the transport derives statistics per segment, reducing the amount of data would reduce the number of segments over which statistics can be derived, unless, of course, the segment size is reduced.
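Plugging in the Bapp and segment-size figures used in the next paragraph gives a concrete, purely illustrative sense of scale.

    b_app = 6_000_000                 # bits per second at full frame rate
    fs = 30.0                         # system frame rate, frames per second

    max_chunk = (b_app / fs) / 8      # bits per frame -> bytes per frame
    print(max_chunk)                  # 25000.0 bytes per frame time
    print(max_chunk / 1450)           # ~17 TCP segments of ~1450 bytes each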


Maintaining a relatively high number of segments from which to derive statistics is important because clients may have limited TCP timestamp properties. RFC 1323 does not specify the units in which the timestamps are expressed, nor the resolution of their updates. Tests have shown that common timestamp granularities (the resolution at which different segments can be distinguished from each other) range from one millisecond up to ten. A typical TCP segment for a typical Internet connection to the home may carry approximately 1450 bytes of data. A typical Bapp setting for BCD sessions may be, for example, 6 Mbps, at which a TCP segment takes roughly 2 milliseconds (assuming that the link's capacity roughly equals Bapp). A timer granularity of 10 milliseconds thus roughly equates to 5 segments, which is not enough to directly derive any useful statistics.


In the disclosed system, the transport 413 increases the accuracy of the measurements by building a histogram of arrival times. Suppose a client has a timestamp granularity of 10 milliseconds. The first segment in a frame marks the first histogram slot 0. The timestamp of this first segment is subtracted from the timestamps of any subsequent segments, adding to the histogram's slots 0, 1, ..., n. Note that the arrival of the first segment is typically not synchronized with the slot timing. Therefore, a typical histogram for 12 segments may look like the following (where # denotes a segment):


0: ###


1: ######


2: ###


3:


Histograms like these may be used to derive a number of network properties and conditions, some of which are specified below.


If the departure constellation of the segments (the intervals between the segments) was sufficiently dense, that is, if the segments were transmitted as a burst with minimal inter-segment intervals, then the capacity of the narrow link (the link with the lowest capacity) may be derived from the slot with the largest number of hits.
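A sketch of the histogram construction and the capacity read-out from the mode slot; the slot width and segment size are the illustrative figures used elsewhere in this text, and the function names are invented for the sketch.

    def arrival_histogram(recv_ts_ms, slot_ms=10):
        """Bucket a burst's receive timestamps into slots relative to the
        first segment (slot width = the client's timestamp granularity)."""
        t0 = recv_ts_ms[0]
        hist = {}
        for ts in recv_ts_ms:
            slot = int((ts - t0) // slot_ms)
            hist[slot] = hist.get(slot, 0) + 1
        return hist

    def capacity_from_mode(hist, seg_bytes=1450, slot_ms=10):
        """For a dense departure burst, the narrow link drains roughly
        (mode-slot hits) segments per slot; convert to bits per second."""
        return max(hist.values()) * seg_bytes * 8 * (1000 / slot_ms)

    # The 3/6/3 histogram above: mode slot holds 6 segments per 10 ms -> ~7 Mbps
    print(capacity_from_mode({0: 3, 1: 6, 2: 3}))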


If the width of the histogram of a dense departure constellation exceeds the number of slots expected within an effective frame time (four for NTSC at 30 frames per second, because the first and last slots are arbitrarily aligned to the arrival constellation), the stream may either be exceeding the capacity:


0: ##


1: ###


2: ###


3: ###


4: #


or may be experiencing intermittent network problems (such as latency due to Wi-Fi interference):


0: ###


1: ######


2: #


3:


4: ##


If the histogram is lean (using only the first two slots), the system is not using the full capacity of the link:


0: ###########


1: #


2:


3:


4:


The histogram approach may be used even if the client provides better timestamp granularity, by artificially limiting the granularity to around 10 milliseconds. For example, a common granularity of 4 milliseconds may be changed to 8-millisecond slot times.


Artificially limiting the granularity may also be used to scale the algorithm to lower effective frame rates. For example, when the effective frame rate is halved, the granularity may be halved as well (e.g., from 10 milliseconds to 20 milliseconds). This effectively scales the algorithm with the effective frame rate; all proportions including average amount of data and number of segments, picture quality, semantics of histogram, and others remain the same while only the effective frame rate changes. Note, however, that if more accuracy is available, histograms can be arbitrarily recalculated to see whether a ‘magnified’ histogram yields more information. Timestamps may also be used directly, if granularity permits.


If the timestamp granularity is too low, RTT (round trip time) may be used as an alternative with the disadvantage that variations in upstream congestion may skew the results.


Throughout the disclosure, references have been made to capacity and available bandwidth. Capacity, from an end-to-end perspective, is the maximum aggregate bandwidth the narrow link is able to carry, where the narrow link is the link with the lowest capacity. Available bandwidth, from the same perspective, is the unused capacity. Overflowing the capacity should be avoided, while overflowing the available bandwidth is a policy decision. The system continuously measures capacity and estimates an effective frame rate that will fit the capacity. By sending chunks as tightly spaced TCP segments (or bursts), the system is able to capture its share of the capacity and minimize latency. Coexistence with unmanaged protocols such as unmodified TCP may be achieved by the fact that interactive applications have a strongly variable bit rate (VBR) profile and hardly ever fully use the MaxChunkSize. Any additional knowledge about available bandwidth may enhance the decision to either maintain the current effective frame rate or reduce it.


In addition to on-demand feature files, more and more live cable television programming is moving to the Internet alongside cable and satellite distribution. Internet-delivered (unmanaged-network-delivered) content is typically received via the equivalent of a cable or satellite set-top box. An example of this type of receiver is the processing capability built into contemporary smart TVs where, in addition to a standard TV tuner, the system of FIG. 9 is also implemented in the client device. And, in addition to a TV tuner selecting a television program to receive and display, the system of FIG. 9 receives data packets from an unmanaged network (the Internet) by means of software programs running in the sub-system, typically either installed by the manufacturer or downloaded into the smart TV by the user.


Typically, network-connected set-top boxes have components similar to those summarized in FIG. 9. The unmanaged network 905 is addressed, typically via the TCP/IP protocol, through network interface 903, which feeds a data buffer 910. The audio, video, and graphic components are decoded by audio/video (A/V) decoder 911, which feeds its output to graphic overlay mixer 912. The mixer adds certain locally generated graphics and combines them with the video signal from A/V decoder 911 in accordance with information supplied to and associated with central processing unit (CPU) 901. Various third-party applications 907, 908, 909 in turn have access to the CPU 901 via the application program interface (API) 906. The received program information and locally generated information are mixed by the graphic overlay mixer 912 and output to the video display device as a video-out signal 913.



FIG. 3 summarizes the invention by illustrating the path of video program information: Transcoder 301 provides video and audio compatible with the client receiver 304 to the Compositor 302, which feeds the Transport Multiplexer 303 that employs the invention's proprietary TCP. The Client 304 needs only an unmodified TCP transport 305 to receive and display program material delivered by the invention. It is the class-based management of the audio, video, and graphic components in the Compositor 302, in concert with the network congestion information sensed via the Transport Multiplexer 303, that allows the invention to optimally fill the available channel bandwidth for the highest-quality, lowest-latency delivery of interactive video content hosted on a remote server over an unmanaged network.



FIG. 1B is a flowchart of a method of testing network congestion and mitigating its effects, in accordance with some embodiments. The proprietary TCP stack sends (1501) per-frame-time downstream packet trains (i.e., bursts) and utilizes the resulting upstream ACK timing to determine (1502) connection quality. A class-based adaptive bit-rate process utilizes channel congestion information to make (1503) allocation decisions among audio, video, and graphics information to optimize quality of playback and minimize latency. The client employs (1504) a deep audio buffer to maintain critical audio continuity, which assists the server in overcoming unpredictable channel congestion. The client automatically inserts (1505) filler video frames into an empty video buffer at full frame rate to assist the server in overcoming unpredictable channel congestion and avoiding buffer under-runs.


The functionality described herein may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general-purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof.


Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).


Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).


Programmable logic may be fixed either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), or other memory device. The programmable logic may be fixed in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The programmable logic may be distributed as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).


The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the embodiments with various modifications as are suited to the particular uses contemplated.

Claims
  • 1. A method of adapting content-stream bandwidth, comprising: generating a content stream for transmission over an unmanaged network with varying capacity;sending the content stream, via the unmanaged network, toward a client device;monitoring the capacity of the unmanaged network;determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity, wherein the upcoming portion of the content stream includes video content and user-interface data;in response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity: when the user-interface data is the result of a user interaction: prioritizing low latency for the user-interface data over maintaining a frame rate of the video content; andreducing a size of the upcoming portion of the content stream in accordance with the prioritizing, the reducing comprising decreasing the frame rate of the video content;determining whether decreasing the frame rate of the video content sufficiently reduces the aggregate bandwidth of the upcoming portion of the content stream; andin response to determining that decreasing the frame rate of the video content does not sufficiently reduce the aggregate bandwidth of the upcoming portion of the content stream, decreasing a frame rate of the user-interface data.
  • 2. The method of claim 1, wherein the upcoming portion of the content stream corresponds to a respective frame time.
  • 3. The method of claim 1, wherein the reducing comprises decreasing the frame rate of the video content while maintaining a quality of the video content.
  • 4. The method of claim 1, wherein: the upcoming portion of the content stream further includes audio data; and the reducing comprises sending the audio data ahead in a portion of the content stream that precedes the upcoming portion. (Claims 4 and 5 are illustrated by the second sketch following the claims.)
  • 5. The method of claim 1, wherein the reducing further comprises spreading the user-interface data over the upcoming portion and a second portion of the content stream that follows the upcoming portion.
  • 6. The method of claim 1, wherein: the upcoming portion of the content stream further includes audio data; and the reducing leaves the audio data unaffected.
  • 7. The method of claim 1, wherein: sending the content stream comprises sending bursts of TCP segments; and monitoring the capacity of the unmanaged network comprises receiving acknowledgments of the bursts, the acknowledgments including timestamps, and using the timestamps to determine the capacity of the unmanaged network. (Claims 7 and 8 are illustrated by the third sketch following the claims.)
  • 8. The method of claim 7, wherein using the timestamps to determine the capacity of the unmanaged network comprises: building a histogram of arrival times in accordance with the timestamps; and deriving the capacity from the histogram.
  • 9. An electronic device, comprising:
    one or more processors; and
    memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
      generating a content stream for transmission over an unmanaged network with varying capacity;
      monitoring the capacity of the unmanaged network;
      determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity, wherein the upcoming portion of the content stream includes video content and user-interface data;
      in response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity:
        when the user-interface data is the result of a user interaction:
          prioritizing low latency for the user-interface data over maintaining a frame rate of the video content; and
          reducing a size of the upcoming portion of the content stream in accordance with the prioritizing, the reducing comprising decreasing the frame rate of the video content;
      determining whether decreasing the frame rate of the video content sufficiently reduces the aggregate bandwidth of the upcoming portion of the content stream; and
      in response to determining that decreasing the frame rate of the video content does not sufficiently reduce the aggregate bandwidth of the upcoming portion of the content stream, decreasing a frame rate of the user-interface data.
  • 10. A non-transitory computer-readable storage medium storing one or more programs configured for execution by an electronic device, the one or more programs comprising instructions for:
    generating a content stream for transmission over an unmanaged network with varying capacity;
    monitoring the capacity of the unmanaged network;
    determining whether an aggregate bandwidth of an upcoming portion of the content stream fits the capacity, wherein the upcoming portion of the content stream includes video content and user-interface data;
    in response to a determination that the aggregate bandwidth of the upcoming portion of the content stream does not fit the capacity:
      when the user-interface data is the result of a user interaction:
        prioritizing low latency for the user-interface data over maintaining a frame rate of the video content; and
        reducing a size of the upcoming portion of the content stream in accordance with the prioritizing, the reducing comprising decreasing the frame rate of the video content;
    determining whether decreasing the frame rate of the video content sufficiently reduces the aggregate bandwidth of the upcoming portion of the content stream; and
    in response to determining that decreasing the frame rate of the video content does not sufficiently reduce the aggregate bandwidth of the upcoming portion of the content stream, decreasing a frame rate of the user-interface data.
  • 11. The method of claim 1, wherein decreasing the frame rate of the video content comprises decreasing the frame rate to a half, third, or quarter of the frame rate of the content stream.
  • 12. The method of claim 1, wherein decreasing the frame rate of the video content comprises transmitting even frames of the video content and omitting odd frames of the video content.
  • 13. The method of claim 1, wherein the video content comprises a total number of frames, and decreasing the frame rate of the video content comprises transmitting fewer frames than the total number of frames.
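
By way of illustration only, the following is a minimal sketch, in Python, of the adaptation cascade recited in claim 1 (and mirrored in claims 9 and 10). The Portion record, its field names, and the assumption that halving a frame rate halves the corresponding payload are introduced for the sketch only; the claims specify just the decision order: when the upcoming portion does not fit the measured capacity and the user-interface data results from a user interaction, decrease the video frame rate first, and decrease the frame rate of the user-interface data only if the first reduction is insufficient.

    from dataclasses import dataclass

    @dataclass
    class Portion:
        """One upcoming portion of the content stream (hypothetical record)."""
        video_bits: int   # encoded video payload in this portion
        ui_bits: int      # user-interface payload in this portion
        video_fps: int    # frame rate of the video content
        ui_fps: int       # frame rate of the user-interface data

    def fits(p: Portion, capacity_bps: float, duration_s: float) -> bool:
        # Aggregate bandwidth of the portion versus the monitored capacity.
        return (p.video_bits + p.ui_bits) / duration_s <= capacity_bps

    def adapt(p: Portion, capacity_bps: float, duration_s: float,
              ui_from_user_interaction: bool) -> Portion:
        if fits(p, capacity_bps, duration_s) or not ui_from_user_interaction:
            return p
        # Prioritize low latency for the user-interface data: decrease the
        # frame rate of the video content first (e.g., transmit even frames
        # and omit odd frames, as in claim 12; halved payload is an
        # assumption of this sketch).
        p.video_fps //= 2
        p.video_bits //= 2
        if not fits(p, capacity_bps, duration_s):
            # Decreasing the video frame rate did not sufficiently reduce
            # the aggregate bandwidth: also decrease the frame rate of the
            # user-interface data.
            p.ui_fps //= 2
            p.ui_bits //= 2
        return p

For example, adapt(Portion(video_bits=4_000_000, ui_bits=500_000, video_fps=60, ui_fps=30), capacity_bps=3_000_000, duration_s=1.0, ui_from_user_interaction=True) halves the video frame rate, after which the portion's 2,500,000 bits fit the 3,000,000 bit/s capacity and the user-interface data is left untouched.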
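A minimal sketch of the rescheduling options of claims 4 and 5 follows, under assumptions the claims leave open: portions are represented here as dictionaries of per-class payload lists, and the user-interface data is split evenly between two portions.

    def send_audio_ahead(preceding: dict, upcoming: dict) -> None:
        # Claim 4: send the audio data ahead in a portion of the content
        # stream that precedes the upcoming portion, so reducing the
        # upcoming portion need not touch audio.
        preceding["audio"].extend(upcoming["audio"])
        upcoming["audio"] = []

    def spread_ui_data(upcoming: dict, following: dict) -> None:
        # Claim 5: spread the user-interface data over the upcoming portion
        # and a second portion that follows it (an even split is assumed).
        ui = upcoming["ui"]
        half = len(ui) // 2
        upcoming["ui"] = ui[:half]
        following["ui"] = ui[half:] + following["ui"]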
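Last, a minimal sketch of the capacity measurement of claims 7 and 8. The claims recite sending bursts of TCP segments, receiving timestamped acknowledgments, building a histogram of arrival times, and deriving the capacity from the histogram; the segment size, the bucket width, and the use of the modal inter-arrival time as the bottleneck's per-segment service time are assumptions of this sketch.

    from collections import Counter

    SEGMENT_BITS = 1460 * 8   # assumed TCP payload per segment
    BUCKET_S = 0.0005         # assumed histogram bucket width, in seconds

    def estimate_capacity_bps(ack_times_s: list[float]) -> float:
        # Gaps between consecutive acknowledgment timestamps of one burst;
        # a bottleneck link spaces them by its per-segment service time.
        gaps = [b - a for a, b in zip(ack_times_s, ack_times_s[1:]) if b > a]
        if not gaps:
            return float("inf")   # whole burst acknowledged at once
        # Claim 8: build a histogram of the arrival-time gaps and derive
        # the capacity from its modal bucket.
        histogram = Counter(round(g / BUCKET_S) for g in gaps)
        modal_bucket = max(histogram.most_common(1)[0][0], 1)
        return SEGMENT_BITS / (modal_bucket * BUCKET_S)

Under these assumptions, acknowledgments arriving roughly 1 ms apart yield an estimate of about 11.7 Mbit/s.
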
RELATED APPLICATIONS

This application is a continuation of U.S. Non-Provisional Patent Application Ser. No. 14/696,463, filed Apr. 26, 2015, entitled “Class-Based Intelligent Multiplexing Over Unmanaged Networks,” which claims priority to U.S. Provisional Patent Application No. 61/984,703, entitled “Class-Based Intelligent Multiplexing over Unmanaged Networks,” filed Apr. 25, 2014, and which is a continuation-in-part of U.S. patent application Ser. No. 13/438,617, entitled “Reduction of Latency in Video Distribution Networks using Adaptive Bit Rates,” filed Apr. 3, 2012, all of which are incorporated by reference herein in their entirety.

US Referenced Citations (784)
Number Name Date Kind
3889050 Thompson Jun 1975 A
3934079 Barnhart Jan 1976 A
3997718 Ricketts et al. Dec 1976 A
4002843 Rackman Jan 1977 A
4032972 Saylor Jun 1977 A
4077006 Nicholson Feb 1978 A
4081831 Tang et al. Mar 1978 A
4107734 Percy et al. Aug 1978 A
4107735 Frohbach Aug 1978 A
4145720 Weintraub et al. Mar 1979 A
4168400 de Couasnon et al. Sep 1979 A
4186438 Benson et al. Jan 1980 A
4222068 Thompson Sep 1980 A
4245245 Matsumoto et al. Jan 1981 A
4247106 Jeffers et al. Jan 1981 A
4253114 Tang et al. Feb 1981 A
4264924 Freeman Apr 1981 A
4264925 Freeman et al. Apr 1981 A
4290142 Schnee et al. Sep 1981 A
4302771 Gargini Nov 1981 A
4308554 Percy et al. Dec 1981 A
4350980 Ward Sep 1982 A
4367557 Stern et al. Jan 1983 A
4395780 Gohm et al. Jul 1983 A
4408225 Ensinger et al. Oct 1983 A
4450477 Lovett May 1984 A
4454538 Toriumi Jun 1984 A
4466017 Banker Aug 1984 A
4471380 Mobley Sep 1984 A
4475123 Dumbauld et al. Oct 1984 A
4484217 Block et al. Nov 1984 A
4491983 Pinnow et al. Jan 1985 A
4506387 Walter Mar 1985 A
4507680 Freeman Mar 1985 A
4509073 Baran et al. Apr 1985 A
4523228 Banker Jun 1985 A
4533948 McNamara et al. Aug 1985 A
4536791 Campbell et al. Aug 1985 A
4538174 Gargini et al. Aug 1985 A
4538176 Nakajima et al. Aug 1985 A
4553161 Citta Nov 1985 A
4554581 Tentler et al. Nov 1985 A
4555561 Sugimori et al. Nov 1985 A
4562465 Glaab Dec 1985 A
4567517 Mobley Jan 1986 A
4573072 Freeman Feb 1986 A
4591906 Morales-Garza et al. May 1986 A
4602279 Freeman Jul 1986 A
4614970 Clupper et al. Sep 1986 A
4616263 Eichelberger Oct 1986 A
4625235 Watson Nov 1986 A
4627105 Ohashi et al. Dec 1986 A
4633462 Stifle et al. Dec 1986 A
4670904 Rumreich Jun 1987 A
4682360 Frederiksen Jul 1987 A
4695880 Johnson et al. Sep 1987 A
4706121 Young Nov 1987 A
4706285 Rumreich Nov 1987 A
4709418 Fox et al. Nov 1987 A
4710971 Nozaki et al. Dec 1987 A
4718086 Rumreich et al. Jan 1988 A
4732764 Hemingway et al. Mar 1988 A
4734764 Pocock et al. Mar 1988 A
4748689 Mohr May 1988 A
4749992 Fitzemeyer et al. Jun 1988 A
4750036 Martinez Jun 1988 A
4754426 Rast et al. Jun 1988 A
4760442 O'Connell et al. Jul 1988 A
4763317 Lehman et al. Aug 1988 A
4769833 Farleigh et al. Sep 1988 A
4769838 Hasegawa Sep 1988 A
4789863 Bush Dec 1988 A
4792849 McCalley et al. Dec 1988 A
4801190 Imoto Jan 1989 A
4805134 Calo et al. Feb 1989 A
4807031 Broughton et al. Feb 1989 A
4816905 Tweedy et al. Mar 1989 A
4821102 Ichikawa et al. Apr 1989 A
4823386 Dumbauld et al. Apr 1989 A
4827253 Maltz May 1989 A
4827511 Masuko May 1989 A
4829372 McCalley et al. May 1989 A
4829558 Welsh May 1989 A
4847698 Freeman Jul 1989 A
4847699 Freeman Jul 1989 A
4847700 Freeman Jul 1989 A
4848698 Newell et al. Jul 1989 A
4860379 Schoeneberger et al. Aug 1989 A
4864613 Van Cleave Sep 1989 A
4876592 Von Kohorn Oct 1989 A
4889369 Albrecht Dec 1989 A
4890320 Monslow et al. Dec 1989 A
4891694 Way Jan 1990 A
4901367 Nicholson Feb 1990 A
4903126 Kassatly Feb 1990 A
4905094 Pocock et al. Feb 1990 A
4912760 West, Jr. et al. Mar 1990 A
4918516 Freeman Apr 1990 A
4920566 Robbins et al. Apr 1990 A
4922532 Farmer et al. May 1990 A
4924303 Brandon et al. May 1990 A
4924498 Farmer et al. May 1990 A
4937821 Boulton Jun 1990 A
4941040 Pocock et al. Jul 1990 A
4947244 Fenwick et al. Aug 1990 A
4961211 Tsugane et al. Oct 1990 A
4963995 Lang Oct 1990 A
4975771 Kassatly Dec 1990 A
4989245 Bennett Jan 1991 A
4994909 Graves et al. Feb 1991 A
4995078 Monslow et al. Feb 1991 A
5003384 Durden et al. Mar 1991 A
5008934 Endoh Apr 1991 A
5014125 Pocock et al. May 1991 A
5027400 Baji et al. Jun 1991 A
5051720 Kittirutsunetorn Sep 1991 A
5051822 Rhoades Sep 1991 A
5057917 Shalkauser et al. Oct 1991 A
5058160 Banker et al. Oct 1991 A
5060262 Bevins, Jr. et al. Oct 1991 A
5077607 Johnson et al. Dec 1991 A
5083800 Lockton Jan 1992 A
5088111 McNamara et al. Feb 1992 A
5093718 Hoarty et al. Mar 1992 A
5109414 Harvey et al. Apr 1992 A
5113496 McCalley et al. May 1992 A
5119188 McCalley et al. Jun 1992 A
5130792 Tindell et al. Jul 1992 A
5132992 Yurt et al. Jul 1992 A
5133009 Rumreich Jul 1992 A
5133079 Ballantyne et al. Jul 1992 A
5136411 Paik et al. Aug 1992 A
5142575 Farmer et al. Aug 1992 A
5155591 Wachob Oct 1992 A
5172413 Bradley et al. Dec 1992 A
5191410 McCalley et al. Mar 1993 A
5195092 Wilson et al. Mar 1993 A
5208665 McCalley et al. May 1993 A
5220420 Hoarty et al. Jun 1993 A
5230019 Yanagimichi et al. Jul 1993 A
5231494 Wachob Jul 1993 A
5236199 Thompson, Jr. Aug 1993 A
5247347 Letterel et al. Sep 1993 A
5253341 Rozmanith et al. Oct 1993 A
5262854 Ng Nov 1993 A
5262860 Fitzpatrick et al. Nov 1993 A
5303388 Kreitman et al. Apr 1994 A
5319455 Hoarty et al. Jun 1994 A
5319707 Wasilewski et al. Jun 1994 A
5321440 Yanagihara et al. Jun 1994 A
5321514 Martinez Jun 1994 A
5351129 Lai Sep 1994 A
5355162 Yazolino et al. Oct 1994 A
5359601 Wasilewski et al. Oct 1994 A
5361091 Hoarty et al. Nov 1994 A
5371532 Gelman et al. Dec 1994 A
5404393 Remillard Apr 1995 A
5408274 Chang et al. Apr 1995 A
5410343 Coddington et al. Apr 1995 A
5410344 Graves et al. Apr 1995 A
5412415 Cook et al. May 1995 A
5412720 Hoarty May 1995 A
5418559 Blahut May 1995 A
5422674 Hooper et al. Jun 1995 A
5422887 Diepstraten et al. Jun 1995 A
5442389 Blahut et al. Aug 1995 A
5442390 Hooper et al. Aug 1995 A
5442700 Snell et al. Aug 1995 A
5446490 Blahut et al. Aug 1995 A
5469283 Vinel et al. Nov 1995 A
5469431 Wendorf et al. Nov 1995 A
5471263 Odaka Nov 1995 A
5481542 Logston et al. Jan 1996 A
5485197 Hoarty Jan 1996 A
5487066 McNamara et al. Jan 1996 A
5493638 Hooper et al. Feb 1996 A
5495283 Cowe Feb 1996 A
5495295 Long Feb 1996 A
5497187 Banker et al. Mar 1996 A
5517250 Hoogenboom et al. May 1996 A
5526034 Hoarty et al. Jun 1996 A
5528281 Grady et al. Jun 1996 A
5537397 Abramson Jul 1996 A
5537404 Bentley et al. Jul 1996 A
5539449 Blahut et al. Jul 1996 A
RE35314 Logg Aug 1996 E
5548340 Bertram Aug 1996 A
5550578 Hoarty et al. Aug 1996 A
5557316 Hoarty et al. Sep 1996 A
5559549 Hendricks et al. Sep 1996 A
5561708 Remillard Oct 1996 A
5570126 Blahut et al. Oct 1996 A
5570363 Holm Oct 1996 A
5579143 Huber Nov 1996 A
5581653 Todd Dec 1996 A
5583927 Ely et al. Dec 1996 A
5587734 Lauder et al. Dec 1996 A
5589885 Ooi Dec 1996 A
5592470 Rudrapatna et al. Jan 1997 A
5594507 Hoarty Jan 1997 A
5594723 Tibi Jan 1997 A
5594938 Engel Jan 1997 A
5596693 Needle et al. Jan 1997 A
5600364 Hendricks et al. Feb 1997 A
5600573 Hendricks et al. Feb 1997 A
5608446 Carr et al. Mar 1997 A
5617145 Huang et al. Apr 1997 A
5621464 Teo et al. Apr 1997 A
5625404 Grady et al. Apr 1997 A
5630757 Gagin et al. May 1997 A
5631693 Wunderlich et al. May 1997 A
5631846 Szurkowski May 1997 A
5632003 Davidson et al. May 1997 A
5642498 Kutner Jun 1997 A
5649283 Galler et al. Jul 1997 A
5668592 Spaulding, II Sep 1997 A
5668599 Cheney et al. Sep 1997 A
5708767 Yeo et al. Jan 1998 A
5710815 Ming et al. Jan 1998 A
5712906 Grady et al. Jan 1998 A
5740307 Lane Apr 1998 A
5748234 Lippincott May 1998 A
5754941 Sharpe et al. May 1998 A
5786527 Tarte Jul 1998 A
5790174 Richard, III et al. Aug 1998 A
5802283 Grady et al. Sep 1998 A
5812665 Hoarty et al. Sep 1998 A
5812786 Seazholtz et al. Sep 1998 A
5815604 Simons et al. Sep 1998 A
5818438 Howe et al. Oct 1998 A
5821945 Yeo et al. Oct 1998 A
5822537 Katseff et al. Oct 1998 A
5828371 Cline et al. Oct 1998 A
5844594 Ferguson Dec 1998 A
5845083 Hamadani et al. Dec 1998 A
5862325 Reed et al. Jan 1999 A
5864820 Case Jan 1999 A
5867208 McLaren Feb 1999 A
5883661 Hoarty Mar 1999 A
5903727 Nielsen May 1999 A
5903816 Broadwin et al. May 1999 A
5905522 Lawler May 1999 A
5907681 Bates et al. May 1999 A
5917822 Lyles et al. Jun 1999 A
5946352 Rowlands et al. Aug 1999 A
5952943 Walsh et al. Sep 1999 A
5961603 Kunkel et al. Oct 1999 A
5963203 Goldberg et al. Oct 1999 A
5966163 Lin et al. Oct 1999 A
5978756 Walker et al. Nov 1999 A
5982445 Eyer et al. Nov 1999 A
5990862 Lewis Nov 1999 A
5995146 Rasmussen Nov 1999 A
5995488 Kalhunte et al. Nov 1999 A
5999970 Krisbergh et al. Dec 1999 A
6014416 Shin et al. Jan 2000 A
6021386 Davis et al. Feb 2000 A
6031989 Cordell Feb 2000 A
6034678 Hoarty et al. Mar 2000 A
6049539 Lee et al. Apr 2000 A
6049831 Gardell et al. Apr 2000 A
6052555 Ferguson Apr 2000 A
6055247 Kubota Apr 2000 A
6055314 Spies et al. Apr 2000 A
6055315 Doyle et al. Apr 2000 A
6064377 Hoarty et al. May 2000 A
6078328 Schumann et al. Jun 2000 A
6084908 Chiang et al. Jul 2000 A
6100883 Hoarty Aug 2000 A
6108625 Kim Aug 2000 A
6115076 Linzer Sep 2000 A
6141645 Chi-Min et al. Oct 2000 A
6141693 Perlman et al. Oct 2000 A
6144698 Poon et al. Nov 2000 A
6167084 Wang et al. Dec 2000 A
6177931 Alexander et al. Jan 2001 B1
6182072 Leak et al. Jan 2001 B1
6184878 Alonso et al. Feb 2001 B1
6192081 Chiang et al. Feb 2001 B1
6198822 Doyle et al. Mar 2001 B1
6205582 Hoarty Mar 2001 B1
6226041 Florencio et al. May 2001 B1
6236730 Cowieson et al. May 2001 B1
6243418 Kim Jun 2001 B1
6253238 Lauder et al. Jun 2001 B1
6256047 Isobe et al. Jul 2001 B1
6266369 Wang et al. Jul 2001 B1
6268864 Chen et al. Jul 2001 B1
6275496 Burns et al. Aug 2001 B1
6292194 Powell, III Sep 2001 B1
6305020 Hoarty et al. Oct 2001 B1
6310601 Moore et al. Oct 2001 B1
6317151 Ohsuga et al. Nov 2001 B1
6317885 Fries Nov 2001 B1
6349284 Park et al. Feb 2002 B1
6386980 Nishino et al. May 2002 B1
6389075 Wang et al. May 2002 B2
6446037 Fielder et al. Sep 2002 B1
6459427 Mao et al. Oct 2002 B1
6480210 Martino et al. Nov 2002 B1
6481012 Gordon et al. Nov 2002 B1
6512793 Maeda Jan 2003 B1
6536043 Guedalia Mar 2003 B1
6539545 Dureau et al. Mar 2003 B1
6557041 Mallart Apr 2003 B2
6560496 Michnener May 2003 B1
6564378 Satterfield et al. May 2003 B1
6579184 Tanskanen Jun 2003 B1
6614442 Ouyang et al. Sep 2003 B1
6625574 Taniguchi et al. Sep 2003 B1
6645076 Sugai Nov 2003 B1
6657647 Bright Dec 2003 B1
6675385 Wang Jan 2004 B1
6675387 Boucher Jan 2004 B1
6687663 McGrath et al. Feb 2004 B1
6717600 Dutta et al. Apr 2004 B2
6721956 Wsilewski Apr 2004 B2
6727929 Bates et al. Apr 2004 B1
6731605 Deshpande May 2004 B1
6747991 Hemy et al. Jun 2004 B1
6754271 Gordon et al. Jun 2004 B1
6758540 Adolph et al. Jul 2004 B1
6766407 Lisitsa et al. Jul 2004 B1
6771704 Hannah Aug 2004 B1
6785902 Zigmond et al. Aug 2004 B1
6807528 Truman et al. Oct 2004 B1
6810528 Chatani Oct 2004 B1
6813690 Lango et al. Nov 2004 B1
6817947 Tanskanen Nov 2004 B2
6850490 Woo et al. Feb 2005 B1
6886178 Mao et al. Apr 2005 B1
6907574 Xu et al. Jun 2005 B2
6931291 Alvarez-Tinoco et al. Aug 2005 B1
6941019 Mitchell et al. Sep 2005 B1
6941574 Broadwin et al. Sep 2005 B1
6947509 Wong Sep 2005 B1
6952221 Holtz et al. Oct 2005 B1
6956899 Hall et al. Oct 2005 B2
7016540 Gong et al. Mar 2006 B1
7031385 Inoue et al. Apr 2006 B1
7050113 Campisano et al. May 2006 B2
7089577 Rakib et al. Aug 2006 B1
7093028 Shao et al. Aug 2006 B1
7095402 Kunil et al. Aug 2006 B2
7114167 Slemmer et al. Sep 2006 B2
7151782 Oz et al. Dec 2006 B1
7158676 Rainsford Jan 2007 B1
7212573 Winger May 2007 B2
7224731 Mehrotra May 2007 B2
7272556 Aguilar et al. Sep 2007 B1
7310619 Baar et al. Dec 2007 B2
7325043 Rosenberg et al. Jan 2008 B1
7346111 Winger et al. Mar 2008 B2
7412423 Asano Aug 2008 B1
7412505 Slemmer et al. Aug 2008 B2
7421082 Kamiya et al. Sep 2008 B2
7444306 Varble Oct 2008 B2
7500235 Maynard et al. Mar 2009 B2
7508941 O'Toole, Jr. et al. Mar 2009 B1
7512577 Slemmer et al. Mar 2009 B2
7596764 Vienneau et al. Sep 2009 B2
7623575 Winger Nov 2009 B2
7669220 Goode Feb 2010 B2
7742609 Yeakel et al. Jun 2010 B2
7743400 Kurauchi Jun 2010 B2
7751572 Villemoes et al. Jul 2010 B2
7830388 Lu Nov 2010 B1
7925775 Nishida Apr 2011 B2
7936819 Craig et al. May 2011 B2
7941645 Riach et al. May 2011 B1
7945616 Zeng et al. May 2011 B2
7987489 Krzyzanowski et al. Jul 2011 B2
8027353 Damola et al. Sep 2011 B2
8036271 Winger et al. Oct 2011 B2
8046798 Schlack et al. Oct 2011 B1
8074248 Sigmon et al. Dec 2011 B2
8078603 Chandratillake et al. Dec 2011 B1
8118676 Craig et al. Feb 2012 B2
8136033 Bhargava et al. Mar 2012 B1
8149917 Zhang et al. Apr 2012 B2
8155194 Winger et al. Apr 2012 B2
8155202 Landau Apr 2012 B2
8170107 Winger May 2012 B2
8194862 Herr et al. Jun 2012 B2
8270439 Herr et al. Sep 2012 B2
8284842 Craig et al. Oct 2012 B2
8370869 Paek et al. Feb 2013 B2
8411754 Zhang et al. Apr 2013 B2
8442110 Pavlovskaia et al. May 2013 B2
8473996 Gordon et al. Jun 2013 B2
8619867 Craig et al. Dec 2013 B2
8656430 Doyle Feb 2014 B2
8781240 Srinivasan et al. Jul 2014 B2
8839317 Rieger et al. Sep 2014 B1
8914813 Sigurdsson et al. Dec 2014 B1
9204113 Kwok et al. Dec 2015 B1
9226018 Filippov et al. Dec 2015 B1
9621926 Lee et al. Apr 2017 B1
9635440 Lacroix Apr 2017 B2
20010005360 Lee Jun 2001 A1
20010008845 Kusuda et al. Jul 2001 A1
20010027563 White et al. Oct 2001 A1
20010043215 Middleton, III et al. Nov 2001 A1
20010049301 Masuda et al. Dec 2001 A1
20020007491 Schiller et al. Jan 2002 A1
20020013812 Krueger et al. Jan 2002 A1
20020016161 Dellien et al. Feb 2002 A1
20020021353 DeNies Feb 2002 A1
20020026642 Augenbraun et al. Feb 2002 A1
20020027567 Niamir Mar 2002 A1
20020032697 French et al. Mar 2002 A1
20020040482 Sextro et al. Apr 2002 A1
20020047899 Son et al. Apr 2002 A1
20020049975 Thomas et al. Apr 2002 A1
20020054578 Zhang et al. May 2002 A1
20020056083 Istvan May 2002 A1
20020056107 Schlack May 2002 A1
20020056136 Wistendahl et al. May 2002 A1
20020059644 Andrade et al. May 2002 A1
20020062484 De Lange et al. May 2002 A1
20020067766 Sakamoto et al. Jun 2002 A1
20020069267 Thiele Jun 2002 A1
20020072408 Kumagai Jun 2002 A1
20020078171 Schneider Jun 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020083464 Tomsen et al. Jun 2002 A1
20020091738 Rohrabaugh Jul 2002 A1
20020095689 Novak Jul 2002 A1
20020105531 Niemi Aug 2002 A1
20020108121 Alao et al. Aug 2002 A1
20020116705 Perlman et al. Aug 2002 A1
20020131511 Zenoni Sep 2002 A1
20020136298 Anantharamu et al. Sep 2002 A1
20020152318 Menon et al. Oct 2002 A1
20020171765 Waki et al. Nov 2002 A1
20020175931 Holtz et al. Nov 2002 A1
20020178278 Ducharme Nov 2002 A1
20020178447 Plotnick et al. Nov 2002 A1
20020188628 Cooper et al. Dec 2002 A1
20020191851 Keinan Dec 2002 A1
20020196746 Allen Dec 2002 A1
20030005452 Rodriguez Jan 2003 A1
20030020671 Santoro et al. Jan 2003 A1
20030027517 Callway et al. Feb 2003 A1
20030035486 Kato et al. Feb 2003 A1
20030038893 Rajamaki et al. Feb 2003 A1
20030046690 Miller Mar 2003 A1
20030051253 Barone, Jr. Mar 2003 A1
20030058941 Chen et al. Mar 2003 A1
20030061451 Beyda Mar 2003 A1
20030065739 Shnier Apr 2003 A1
20030066093 Cruz-Rivera et al. Apr 2003 A1
20030071792 Safadi Apr 2003 A1
20030072372 Shen et al. Apr 2003 A1
20030088328 Nishio et al. May 2003 A1
20030088400 Nishio et al. May 2003 A1
20030107443 Yamamoto Jun 2003 A1
20030122836 Doyle et al. Jul 2003 A1
20030123664 Pedlow, Jr. et al. Jul 2003 A1
20030126608 Safadi Jul 2003 A1
20030126611 Chernock et al. Jul 2003 A1
20030131349 Kuczynski-Brown Jul 2003 A1
20030135860 Dureau Jul 2003 A1
20030169373 Peters et al. Sep 2003 A1
20030177199 Zenoni Sep 2003 A1
20030188309 Yuen Oct 2003 A1
20030189980 Dvir et al. Oct 2003 A1
20030196174 Pierre Cote et al. Oct 2003 A1
20030208768 Urdang et al. Nov 2003 A1
20030217360 Gordon et al. Nov 2003 A1
20030229719 Iwata et al. Dec 2003 A1
20030229900 Reisman Dec 2003 A1
20030231218 Amadio Dec 2003 A1
20040016000 Zhang et al. Jan 2004 A1
20040034873 Zenoni Feb 2004 A1
20040040035 Carlucci et al. Feb 2004 A1
20040055007 Allport Mar 2004 A1
20040073924 Pendakur Apr 2004 A1
20040078822 Breen et al. Apr 2004 A1
20040088375 Sethi et al. May 2004 A1
20040091171 Bone May 2004 A1
20040111526 Baldwin et al. Jun 2004 A1
20040128686 Boyer et al. Jul 2004 A1
20040133704 Krzyzanowski et al. Jul 2004 A1
20040139158 Datta Jul 2004 A1
20040151385 Oneda Aug 2004 A1
20040157662 Tsuchiya Aug 2004 A1
20040163101 Swix et al. Aug 2004 A1
20040184542 Fujimoto Sep 2004 A1
20040193648 Lai et al. Sep 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040216045 Martin et al. Oct 2004 A1
20040261106 Hoffman Dec 2004 A1
20040261114 Addington et al. Dec 2004 A1
20040268419 Danker et al. Dec 2004 A1
20050015259 Thumpudi et al. Jan 2005 A1
20050015816 Christofalo et al. Jan 2005 A1
20050021830 Urzaiz et al. Jan 2005 A1
20050034155 Gordon Feb 2005 A1
20050034162 White et al. Feb 2005 A1
20050042999 Rappaport Feb 2005 A1
20050044575 Der Kuyl Feb 2005 A1
20050055685 Maynard et al. Mar 2005 A1
20050055721 Zigmond et al. Mar 2005 A1
20050071876 van Beek Mar 2005 A1
20050076134 Bialik et al. Apr 2005 A1
20050089091 Kim et al. Apr 2005 A1
20050091690 Delpuch et al. Apr 2005 A1
20050091695 Paz et al. Apr 2005 A1
20050105608 Coleman et al. May 2005 A1
20050114906 Hoarty et al. May 2005 A1
20050135385 Jenkins et al. Jun 2005 A1
20050141613 Kelly et al. Jun 2005 A1
20050149988 Grannan Jul 2005 A1
20050155063 Bayrakeri Jul 2005 A1
20050160088 Scallan et al. Jul 2005 A1
20050166257 Feinleib et al. Jul 2005 A1
20050177853 Williams et al. Aug 2005 A1
20050180502 Puri Aug 2005 A1
20050216933 Black Sep 2005 A1
20050216940 Black Sep 2005 A1
20050226426 Oomen et al. Oct 2005 A1
20050232309 Kavaler Oct 2005 A1
20050273832 Zigmond et al. Dec 2005 A1
20050283741 Balabanovic et al. Dec 2005 A1
20060001737 Dawson et al. Jan 2006 A1
20060020960 Relan et al. Jan 2006 A1
20060020994 Crane et al. Jan 2006 A1
20060026663 Kortum et al. Feb 2006 A1
20060031906 Kaneda Feb 2006 A1
20060039481 Shen et al. Feb 2006 A1
20060041910 Hatanaka et al. Feb 2006 A1
20060064716 Sull et al. Mar 2006 A1
20060088105 Shen et al. Apr 2006 A1
20060095401 Krikorian et al. May 2006 A1
20060095944 Demircin et al. May 2006 A1
20060112338 Joung et al. May 2006 A1
20060117340 Pavlovskaia et al. Jun 2006 A1
20060161538 Kiilerich Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060174021 Osborne et al. Aug 2006 A1
20060174026 Robinson et al. Aug 2006 A1
20060174289 Theberge Aug 2006 A1
20060184614 Baratto et al. Aug 2006 A1
20060195884 van Zoest et al. Aug 2006 A1
20060203913 Kim et al. Sep 2006 A1
20060212203 Furuno Sep 2006 A1
20060218601 Michel Sep 2006 A1
20060230428 Craig et al. Oct 2006 A1
20060242570 Croft et al. Oct 2006 A1
20060267995 Radloff et al. Nov 2006 A1
20060269086 Page et al. Nov 2006 A1
20060271985 Hoffman et al. Nov 2006 A1
20060285586 Westerman Dec 2006 A1
20060285819 Kelly et al. Dec 2006 A1
20070009035 Craig et al. Jan 2007 A1
20070009036 Craig et al. Jan 2007 A1
20070009042 Craig Jan 2007 A1
20070011702 Vaysman Jan 2007 A1
20070043667 Qawami et al. Feb 2007 A1
20070074251 Oguz et al. Mar 2007 A1
20070115941 Patel et al. May 2007 A1
20070124282 Wittkotter May 2007 A1
20070124795 McKissick et al. May 2007 A1
20070130446 Minakami Jun 2007 A1
20070130592 Haeusel Jun 2007 A1
20070152984 Ordin et al. Jul 2007 A1
20070162953 Bollinger et al. Jul 2007 A1
20070172061 Pinder Jul 2007 A1
20070174790 Jing et al. Jul 2007 A1
20070180047 Dong et al. Aug 2007 A1
20070192798 Morgan Aug 2007 A1
20070234220 Khan et al. Oct 2007 A1
20070237232 Chang et al. Oct 2007 A1
20070266412 Trowbridge et al. Nov 2007 A1
20070300280 Turner et al. Dec 2007 A1
20080034306 Ording Feb 2008 A1
20080046373 Kim Feb 2008 A1
20080046928 Poling et al. Feb 2008 A1
20080052742 Kopf et al. Feb 2008 A1
20080060034 Egnal et al. Mar 2008 A1
20080066135 Brodersen et al. Mar 2008 A1
20080084503 Kondo Apr 2008 A1
20080086688 Chandratillake et al. Apr 2008 A1
20080086747 Rasanen et al. Apr 2008 A1
20080094368 Ording et al. Apr 2008 A1
20080097953 Levy et al. Apr 2008 A1
20080098212 Helms et al. Apr 2008 A1
20080098450 Wu et al. Apr 2008 A1
20080104520 Swenson et al. May 2008 A1
20080109556 Karlberg May 2008 A1
20080127255 Ress et al. May 2008 A1
20080144711 Chui et al. Jun 2008 A1
20080154583 Goto et al. Jun 2008 A1
20080163286 Rudolph et al. Jul 2008 A1
20080170619 Landau Jul 2008 A1
20080170622 Gordon et al. Jul 2008 A1
20080172441 Speicher et al. Jul 2008 A1
20080178125 Elsbree et al. Jul 2008 A1
20080178243 Dong et al. Jul 2008 A1
20080178249 Gordon et al. Jul 2008 A1
20080181221 Kampmann et al. Jul 2008 A1
20080184120 O-Brien-Strain et al. Jul 2008 A1
20080189740 Carpenter et al. Aug 2008 A1
20080195573 Onoda et al. Aug 2008 A1
20080201736 Gordon et al. Aug 2008 A1
20080212942 Gordon et al. Sep 2008 A1
20080222199 Tiu et al. Sep 2008 A1
20080232243 Oren et al. Sep 2008 A1
20080232452 Sullivan et al. Sep 2008 A1
20080243918 Holtman Oct 2008 A1
20080243998 Oh et al. Oct 2008 A1
20080244681 Gossweiler et al. Oct 2008 A1
20080253440 Srinivasan et al. Oct 2008 A1
20080253685 Kuranov et al. Oct 2008 A1
20080271080 Grossweiler et al. Oct 2008 A1
20090003446 Wu et al. Jan 2009 A1
20090003705 Zou et al. Jan 2009 A1
20090007199 La Joie Jan 2009 A1
20090025027 Craner Jan 2009 A1
20090031341 Schlack et al. Jan 2009 A1
20090041118 Pavlovskaia et al. Feb 2009 A1
20090083781 Yang et al. Mar 2009 A1
20090083813 Dolce et al. Mar 2009 A1
20090083824 McCarthy et al. Mar 2009 A1
20090089188 Ku et al. Apr 2009 A1
20090094113 Berry et al. Apr 2009 A1
20090094646 Walter et al. Apr 2009 A1
20090100465 Kulakowski Apr 2009 A1
20090100489 Strothmann Apr 2009 A1
20090106269 Zuckerman et al. Apr 2009 A1
20090106386 Zuckerman et al. Apr 2009 A1
20090106392 Zuckerman et al. Apr 2009 A1
20090106425 Zuckerman et al. Apr 2009 A1
20090106441 Zuckerman et al. Apr 2009 A1
20090106451 Zuckerman et al. Apr 2009 A1
20090106511 Zuckerman et al. Apr 2009 A1
20090113009 Slemmer et al. Apr 2009 A1
20090132942 Santoro et al. May 2009 A1
20090138966 Krause et al. May 2009 A1
20090144781 Glaser et al. Jun 2009 A1
20090146779 Kumar et al. Jun 2009 A1
20090157868 Chaudhry Jun 2009 A1
20090158369 Van Vleck et al. Jun 2009 A1
20090160694 Di Flora Jun 2009 A1
20090172431 Gupta et al. Jul 2009 A1
20090172726 Vantalon et al. Jul 2009 A1
20090172757 Aldrey et al. Jul 2009 A1
20090178098 Westbrook et al. Jul 2009 A1
20090183197 Matthews Jul 2009 A1
20090183219 Maynard et al. Jul 2009 A1
20090189890 Corbett et al. Jul 2009 A1
20090193452 Russ et al. Jul 2009 A1
20090196346 Zhang et al. Aug 2009 A1
20090204920 Beverly et al. Aug 2009 A1
20090210899 Lawrence-Apfelbaum et al. Aug 2009 A1
20090225790 Shay et al. Sep 2009 A1
20090228620 Thomas et al. Sep 2009 A1
20090228922 Haj-Khalil et al. Sep 2009 A1
20090233593 Ergen et al. Sep 2009 A1
20090251478 Maillot et al. Oct 2009 A1
20090254960 Yarom et al. Oct 2009 A1
20090265617 Randall et al. Oct 2009 A1
20090265746 Halen et al. Oct 2009 A1
20090271818 Schlack Oct 2009 A1
20090298535 Klein et al. Dec 2009 A1
20090313674 Ludvig et al. Dec 2009 A1
20090316709 Polcha et al. Dec 2009 A1
20090328109 Pavlovskaia et al. Dec 2009 A1
20100009623 Hennenhoefer et al. Jan 2010 A1
20100033638 O'Donnell et al. Feb 2010 A1
20100035682 Gentile et al. Feb 2010 A1
20100054268 Divivier Mar 2010 A1
20100058404 Rouse Mar 2010 A1
20100067571 White et al. Mar 2010 A1
20100073371 Ernst et al. Mar 2010 A1
20100077441 Thomas et al. Mar 2010 A1
20100104021 Schmit Apr 2010 A1
20100115573 Srinivasan et al. May 2010 A1
20100118972 Zhang et al. May 2010 A1
20100131411 Jogand-Coulomb et al. May 2010 A1
20100131996 Gauld May 2010 A1
20100146139 Brockmann Jun 2010 A1
20100153885 Yates Jun 2010 A1
20100158109 Dahlby Jun 2010 A1
20100161825 Ronca et al. Jun 2010 A1
20100166062 Perlman et al. Jul 2010 A1
20100166071 Wu et al. Jul 2010 A1
20100174776 Westberg et al. Jul 2010 A1
20100175080 Yuen et al. Jul 2010 A1
20100180307 Hayes et al. Jul 2010 A1
20100211983 Chou Aug 2010 A1
20100226428 Thevathasan et al. Sep 2010 A1
20100235861 Schein et al. Sep 2010 A1
20100242073 Gordon et al. Sep 2010 A1
20100251167 DeLuca et al. Sep 2010 A1
20100254370 Jana et al. Oct 2010 A1
20100265344 Velarde et al. Oct 2010 A1
20100325655 Perez Dec 2010 A1
20100325668 Young et al. Dec 2010 A1
20110002376 Ahmed et al. Jan 2011 A1
20110002470 Purnhagen et al. Jan 2011 A1
20110023069 Dowens Jan 2011 A1
20110035227 Lee et al. Feb 2011 A1
20110040894 Shrum, Jr. et al. Feb 2011 A1
20110067061 Karaoguz et al. Mar 2011 A1
20110072474 Springer et al. Mar 2011 A1
20110099594 Chen et al. Apr 2011 A1
20110107375 Stahl et al. May 2011 A1
20110110433 Bjontegaard May 2011 A1
20110110642 Salomons et al. May 2011 A1
20110150421 Sasaki et al. Jun 2011 A1
20110153776 Opala et al. Jun 2011 A1
20110161517 Ferguson Jun 2011 A1
20110167468 Lee et al. Jul 2011 A1
20110173590 Yanes Jul 2011 A1
20110191684 Greenberg Aug 2011 A1
20110202948 Bildgen et al. Aug 2011 A1
20110211591 Traub et al. Sep 2011 A1
20110231878 Hunter et al. Sep 2011 A1
20110258584 Williams et al. Oct 2011 A1
20110261889 Francisco Oct 2011 A1
20110283304 Roberts Nov 2011 A1
20110289536 Poder et al. Nov 2011 A1
20110296312 Boyer et al. Dec 2011 A1
20110317982 Xu et al. Dec 2011 A1
20120008786 Cronk et al. Jan 2012 A1
20120023126 Jin et al. Jan 2012 A1
20120023250 Chen et al. Jan 2012 A1
20120030212 Koopmans et al. Feb 2012 A1
20120030706 Hulse et al. Feb 2012 A1
20120092443 Mauchly Apr 2012 A1
20120137337 Sigmon et al. May 2012 A1
20120204217 Regis et al. Aug 2012 A1
20120209815 Carson et al. Aug 2012 A1
20120216232 Chen et al. Aug 2012 A1
20120221853 Wingert et al. Aug 2012 A1
20120224641 Haberman et al. Sep 2012 A1
20120257671 Brockmann et al. Oct 2012 A1
20120271920 Isaksson Oct 2012 A1
20120284753 Roberts et al. Nov 2012 A1
20120297081 Karlsson et al. Nov 2012 A1
20130003826 Craig et al. Jan 2013 A1
20130042271 Yellin et al. Feb 2013 A1
20130046863 Bastian et al. Feb 2013 A1
20130047074 Vestergaard et al. Feb 2013 A1
20130071095 Chauvier et al. Mar 2013 A1
20130086610 Brockmann Apr 2013 A1
20130132986 Mack et al. May 2013 A1
20130179787 Brockmann et al. Jul 2013 A1
20130198776 Brockmann Aug 2013 A1
20130254308 Rose et al. Sep 2013 A1
20130254675 de Andrade Sep 2013 A1
20130272394 Brockmann et al. Oct 2013 A1
20130276015 Rothschild Oct 2013 A1
20130283318 Wannamaker Oct 2013 A1
20130297887 Woodward et al. Nov 2013 A1
20130304818 Brumleve et al. Nov 2013 A1
20130305051 Fu et al. Nov 2013 A1
20140032635 Pimmel et al. Jan 2014 A1
20140033036 Gaur et al. Jan 2014 A1
20140081954 Elizarov Mar 2014 A1
20140089469 Ramamurthy et al. Mar 2014 A1
20140123169 Koukarine et al. May 2014 A1
20140157298 Murphy Jun 2014 A1
20140168515 Sagliocco et al. Jun 2014 A1
20140223307 McIntosh et al. Aug 2014 A1
20140223482 McIntosh et al. Aug 2014 A1
20140267074 Balci Sep 2014 A1
20140269930 Robinson et al. Sep 2014 A1
20140289627 Brockmann et al. Sep 2014 A1
20140317532 Ma et al. Oct 2014 A1
20140344861 Berner et al. Nov 2014 A1
20150023372 Boatright Jan 2015 A1
20150037011 Hubner et al. Feb 2015 A1
20150103880 Diard Apr 2015 A1
20150135209 LaBosco et al. May 2015 A1
20150139603 Silverstein et al. May 2015 A1
20150195525 Sullivan et al. Jul 2015 A1
20160050069 Griffin et al. Feb 2016 A1
20160119624 Frishman Apr 2016 A1
20160142468 Song et al. May 2016 A1
20160357583 Decker et al. Dec 2016 A1
20170078721 Brockmann et al. Mar 2017 A1
Foreign Referenced Citations (275)
Number Date Country
191599 Apr 2000 AT
198969 Feb 2001 AT
250313 Oct 2003 AT
472152 Jul 2010 AT
475266 Aug 2010 AT
620735 Feb 1992 AU
643828 Nov 1993 AU
2004253127 Jan 2005 AU
2005278122 Mar 2006 AU
682776 Mar 1964 CA
2052477 Mar 1992 CA
1302554 Jun 1992 CA
2163500 May 1996 CA
2231391 May 1997 CA
2273365 Jun 1998 CA
2313133 Jun 1999 CA
2313161 Jun 1999 CA
2528499 Jan 2005 CA
2569407 Mar 2006 CA
2728797 Apr 2010 CA
2787913 Jul 2011 CA
2798541 Dec 2011 CA
2814070 Apr 2012 CA
1507751 Jun 2004 CN
1969555 May 2007 CN
101180109 May 2008 CN
101627424 Jan 2010 CN
101637023 Jan 2010 CN
102007773 Apr 2011 CN
103647980 Mar 2014 CN
4408355 Oct 1994 DE
69516139 Dec 2000 DE
69132518 Sep 2001 DE
69333207 Jul 2004 DE
98961961 Aug 2007 DE
0128771 Dec 1984 EP
0419137 Mar 1991 EP
0449633 Oct 1991 EP
0477786 Apr 1992 EP
0523618 Jan 1993 EP
0534139 Mar 1993 EP
0568453 Nov 1993 EP
0588653 Mar 1994 EP
0594350 Apr 1994 EP
0612916 Aug 1994 EP
0624039 Nov 1994 EP
0638219 Feb 1995 EP
0643523 Mar 1995 EP
0661888 Jul 1995 EP
0714684 Jun 1996 EP
0746158 Dec 1996 EP
0761066 Mar 1997 EP
0789972 Aug 1997 EP
0830786 Mar 1998 EP
0861560 Sep 1998 EP
0 881 808 Dec 1998 EP
0933966 Aug 1999 EP
0933966 Aug 1999 EP
1026872 Aug 2000 EP
1038397 Sep 2000 EP
1038399 Sep 2000 EP
1038400 Sep 2000 EP
1038401 Sep 2000 EP
1051039 Nov 2000 EP
1055331 Nov 2000 EP
1120968 Aug 2001 EP
1345446 Sep 2003 EP
1422929 May 2004 EP
1428562 Jun 2004 EP
1521476 Apr 2005 EP
1645115 Apr 2006 EP
1725044 Nov 2006 EP
1767708 Mar 2007 EP
1771003 Apr 2007 EP
1772014 Apr 2007 EP
1877150 Jan 2008 EP
1887148 Feb 2008 EP
1900200 Mar 2008 EP
1902583 Mar 2008 EP
1908293 Apr 2008 EP
1911288 Apr 2008 EP
1918802 May 2008 EP
2100296 Sep 2009 EP
2105019 Sep 2009 EP
2106665 Oct 2009 EP
2116051 Nov 2009 EP
2124440 Nov 2009 EP
2248341 Nov 2010 EP
2269377 Jan 2011 EP
2271098 Jan 2011 EP
2304953 Apr 2011 EP
2357555 Aug 2011 EP
2364019 Sep 2011 EP
2409493 Jan 2012 EP
2477414 Jul 2012 EP
2487919 Aug 2012 EP
2520090 Nov 2012 EP
2567545 Mar 2013 EP
2577437 Apr 2013 EP
2628306 Aug 2013 EP
2632164 Aug 2013 EP
2632165 Aug 2013 EP
2695388 Feb 2014 EP
2207635 Jun 2004 ES
2529739 Jan 1984 FR
2891098 Mar 2007 FR
2207838 Feb 1989 GB
2248955 Apr 1992 GB
2290204 Dec 1995 GB
2378345 Feb 2003 GB
2479164 Oct 2011 GB
1134855 Oct 2010 HK
1116323 Dec 2010 HK
19913397 Apr 1992 IE
99586 Feb 1998 IL
180215 Jan 1998 IN
3759 Mar 1992 IS
60-054324 Mar 1985 JP
63-033988 Feb 1988 JP
63-263985 Oct 1988 JP
2001-241993 Sep 1989 JP
7-160292 Jun 1995 JP
8-265704 Oct 1996 JP
10-228437 Aug 1998 JP
11-134273 May 1999 JP
H11-261966 Sep 1999 JP
2001-145112 May 2001 JP
2001-203995 Jul 2001 JP
2001-245271 Sep 2001 JP
2001-245291 Sep 2001 JP
2002-057952 Feb 2002 JP
2002-112220 Apr 2002 JP
2002-141810 May 2002 JP
2002-208027 Jul 2002 JP
2002-300556 Oct 2002 JP
2002-319991 Oct 2002 JP
2003-506763 Feb 2003 JP
2003-087673 Mar 2003 JP
2004-056777 Feb 2004 JP
2004-110850 Apr 2004 JP
2004-112441 Apr 2004 JP
2004-135932 May 2004 JP
2004-264812 Sep 2004 JP
2004-312283 Nov 2004 JP
2004-533736 Nov 2004 JP
2004-536381 Dec 2004 JP
2004-536681 Dec 2004 JP
2005-033741 Feb 2005 JP
2005-084987 Mar 2005 JP
2005-095599 Mar 2005 JP
8-095599 Apr 2005 JP
2005-123981 May 2005 JP
2005-156996 Jun 2005 JP
2005-519382 Jun 2005 JP
2005-523479 Aug 2005 JP
2005-260289 Sep 2005 JP
2005-309752 Nov 2005 JP
2006-067280 Mar 2006 JP
2006-246358 Sep 2006 JP
2007-129296 May 2007 JP
2007-522727 Aug 2007 JP
11-88419 Sep 2007 JP
2007-264440 Oct 2007 JP
2008-535622 Sep 2008 JP
2009-159188 Jul 2009 JP
2009-543386 Dec 2009 JP
2012-080593 Apr 2012 JP
10-2005-0001362 Jan 2005 KR
10-2005-0085827 Aug 2005 KR
10-2006-0095821 Sep 2006 KR
20080001298 Jan 2008 KR
1032594 Apr 2008 NL
1033929 Apr 2008 NL
2004780 Jan 2012 NL
239969 Dec 1994 NZ
99110 Dec 1993 PT
WO 1982002303 Jul 1982 WO
WO 1989008967 Sep 1989 WO
WO 9013972 Nov 1990 WO
WO 9322877 Nov 1993 WO
WO 1994016534 Jul 1994 WO
WO 1994019910 Sep 1994 WO
WO 1994021079 Sep 1994 WO
WO 9515658 Jun 1995 WO
WO 1995032587 Nov 1995 WO
WO 1995033342 Dec 1995 WO
WO 1996014712 May 1996 WO
WO 1996027843 Sep 1996 WO
WO 1996031826 Oct 1996 WO
WO 1996037074 Nov 1996 WO
WO 1996042168 Dec 1996 WO
WO 1997016925 May 1997 WO
WO 1997033434 Sep 1997 WO
WO 1997039583 Oct 1997 WO
WO 1998026595 Jun 1998 WO
WO 9904568 Jan 1999 WO
WO 1999000735 Jan 1999 WO
WO 1999030496 Jun 1999 WO
WO 1999030497 Jun 1999 WO
WO 1999030500 Jun 1999 WO
WO 1999030501 Jun 1999 WO
WO 1999035840 Jul 1999 WO
WO 1999041911 Aug 1999 WO
WO 1999056468 Nov 1999 WO
WO 9965232 Dec 1999 WO
WO 9965243 Dec 1999 WO
WO 1999066732 Dec 1999 WO
WO 2000002303 Jan 2000 WO
WO 0007372 Feb 2000 WO
WO 0008967 Feb 2000 WO
WO 0019910 Apr 2000 WO
WO 0038430 Jun 2000 WO
WO 0041397 Jul 2000 WO
WO 0139494 May 2001 WO
WO 0141447 Jun 2001 WO
WO0156293 Aug 2001 WO
WO 0182614 Nov 2001 WO
WO 02089487 Jul 2002 WO
WO 02076097 Sep 2002 WO
WO 02076099 Sep 2002 WO
WO 03026232 Mar 2003 WO
WO 03026275 Mar 2003 WO
WO 03047710 Jun 2003 WO
WO 03065683 Aug 2003 WO
WO 03071727 Aug 2003 WO
WO 03091832 Nov 2003 WO
WO 2004012437 Feb 2004 WO
WO 2004018060 Mar 2004 WO
WO2004057609 Jul 2004 WO
WO 2004073310 Aug 2004 WO
WO 2005002215 Jan 2005 WO
WO 2005053301 Jun 2005 WO
WO 2005076575 Aug 2005 WO
WO 2006014362 Feb 2006 WO
WO 2006022881 Mar 2006 WO
WO 2006053305 May 2006 WO
WO 2006081634 Aug 2006 WO
WO 2006105480 Oct 2006 WO
WO 2006110268 Oct 2006 WO
WO 2007001797 Jan 2007 WO
WO 2007008319 Jan 2007 WO
WO 2007008355 Jan 2007 WO
WO 2007008356 Jan 2007 WO
WO 2007008357 Jan 2007 WO
WO 2007008358 Jan 2007 WO
WO 2007018722 Feb 2007 WO
WO 2007018726 Feb 2007 WO
WO2008044916 Apr 2008 WO
WO 2008044916 Apr 2008 WO
WO 2008086170 Jul 2008 WO
WO 2008088741 Jul 2008 WO
WO 2008088752 Jul 2008 WO
WO 2008088772 Jul 2008 WO
WO 2008100205 Aug 2008 WO
WO2009038596 Mar 2009 WO
WO 2009038596 Mar 2009 WO
WO 2009099893 Aug 2009 WO
WO 2009099895 Aug 2009 WO
WO 2009105465 Aug 2009 WO
WO 2009110897 Sep 2009 WO
WO 2009114247 Sep 2009 WO
WO 2009155214 Dec 2009 WO
WO 2010044926 Apr 2010 WO
WO 2010054136 May 2010 WO
WO 2010107954 Sep 2010 WO
WO 2011014336 Sep 2010 WO
WO 2011082364 Jul 2011 WO
WO 2011139155 Nov 2011 WO
WO 2011149357 Dec 2011 WO
WO 2012051528 Apr 2012 WO
W O 2012138660 Oct 2012 WO
WO 2012138660 Oct 2012 WO
WO 2013106390 Jul 2013 WO
WO 2013155310 Jul 2013 WO
WO2013184604 Dec 2013 WO
Non-Patent Literature Citations (301)
Entry
AC-3 digital audio compression standard, Extract, Dec. 20, 1995, 11 pgs.
ActiveVideo Networks BV, International Preliminary Report on Patentability, PCT/NL2011/050308, dated Sep. 6, 2011, 8 pgs.
ActiveVideo Networks BV, International Search Report and Written Opinion, PCT/NL2011/050308, dated Sep. 6, 2011, 8 pgs.
Activevideo Networks Inc., International Preliminary Report on Patentability, PCT/US2011/056355, dated Apr. 16, 2013, 4 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2012/032010, dated Oct. 8, 2013, 4 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2011/056355, dated Apr. 13, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2012/032010, dated Oct. 10, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/020769, dated May 9, 2013, 9 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/036182, dated Jul. 29, 2013, 12 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2009/032457, dated Jul. 22, 2009, 7 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 09820936-4, dated Oct. 26, 2012, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 10754084-1, dated Jul. 24, 2012, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 10841764.3, dated May 20, 2014, 16 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 11833486.1, dated Apr. 3, 2014, 6 pgs.
Annex C—Video buffering verifier, information technology—generic coding of moving pictures and associated audio information: video, Feb. 2000, 6 pgs.
Antonoff, Michael, “Interactive Television,” Popular Science, Nov. 1992, 12 pages.
Avinity Systems B.V., Extended European Search Report, Application No. 12163713.6, dated Feb. 7, 2014, 10 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163712-8, dated Feb. 3, 2014, 10 pgs.
Benjelloun, A summation algorithm for MPEG-1 coded audio signals: a first step towards audio processed domain, Annals of Telecommunications, Get Laudisier, Paris, vol. 55, No. 3/04, Mar. 1, 2000, 9 pgs.
Broadhead, Direct manipulation of MPEG compressed digital audio, Nov. 5-9, 1995, 41 pgs.
Cable Television Laboratories, Inc., “CableLabs Asset Distribution Interface Specification, Version 1.1”, May 5, 2006, 33 pgs.
CD 11172-3, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 MBIT, Jan. 1, 1992, 39 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, dated Dec. 23, 2010, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, dated Jan. 12, 2012, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, dated Jul. 19, 2012, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,189, dated Oct. 12, 2011, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, dated Mar. 23, 2011, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 13/609,183, dated Aug. 26, 2013, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, dated Feb. 5, 2009, 30 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, dated Aug. 25, 2010, 17 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, dated Jul. 6, 2010, 35 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,176, dated Oct. 1, 2010, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,183, dated Apr. 13, 2011, 16 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,177, dated Oct. 26, 2010, 12 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, dated Jun. 20, 2011, 21 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, dated May 12, 2009, 32 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, dated Aug. 19, 2008, 17 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, dated Nov. 19, 2009, 34 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,176, dated May 6, 2010, 7 pgs.
Craig, Office-Action U.S. Appl. No. 11/178,177, dated Mar. 29, 2011, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, dated Aug. 3, 2011, 26 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, dated Mar. 29, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, dated Feb. 11, 2011, 19 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, dated Mar. 29, 2010, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,182, dated Feb. 23, 2010, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Dec. 6, 2010, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Sep. 15, 2011, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Feb. 19, 2010, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Jul. 20, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Nov. 9, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Mar. 15, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Jul. 23, 2009, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated May 26, 2011, 14 pgs.
Craig, Office Action, U.S. Appl. No. 13/609,183, dated May 9, 2013, 7 pgs.
Pavlovskaia, Office Action, JP 2011-516499, dated Feb. 14, 2014, 19 pgs.
Digital Audio Compression Standard(AC-3, E-AC-3), Advanced Television Systems Committee, Jun. 14, 2005, 236 pgs.
European Patent Office, Extended European Search Report for International Application No. PCT/US2010/027724, dated Jul. 24, 2012, 11 pages.
FFMPEG, http://www.ffmpeg.org downloaded Apr. 8, 2010, 8 pgs.
FFMEG-0.4.9 Audio Layer 2 Tables Including Fixed Psycho Acoustic Model, 2001, 2 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 11/620,593, dated May 23, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, dated Feb. 7, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, dated Sep. 28, 2011, 15 pgs.
Herr, Final Office Action, U.S. Appl. No. 11/620,593, dated Sep. 15, 2011, 104 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Mar. 19, 2010, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Apr. 21, 2009 27 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Dec. 23, 2009, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Jan. 24, 2011, 96 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Aug. 27, 2010, 41 pgs.
Herre, Thoughts on an SAOC Architecture, Oct. 2006, 9 pgs.
Hoarty, The Smart Headend—A Novel Approach to Interactive Television, Montreux Int'l TV Symposium, Jun. 9, 1995, 21 pgs.
ICTV, Inc., International Preliminary Report on Patentability, PCT/US2006/022585, dated Jan. 29, 2008, 9 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022585, dated Oct. 12, 2007, 15 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000419, dated May 15, 2009, 20 pgs.
ICTV, Inc., International Search Report / Written Opinion; PCT/US2006/022533, dated Nov. 20, 2006; 8 pgs.
Isovic, Timing constraints of MPEG-2 decoding for high quality video: misconceptions and realistic assumptions, Jul. 2-4, 2003, 10 pgs.
MPEG-2 Video elementary stream supplemental information, Dec. 1999, 12 pgs.
Ozer, Video Compositing 101. available from hitp://www.emedialive.com, Jun. 2, 2004, 5pgs.
Porter, Compositing Digital Images, 18 Computer Graphics (No. 3), Jul. 1984, pp. 253-259.
RSS Advisory Board, “RSS 2.0 Specification”, published Oct. 15, 2007.
SAOC use cases, draft requirements and architecture, Oct. 2006, 16 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 11/258,602, dated Feb. 23, 2009, 15 pgs.
Sigmon, Office Action, U.S. Appl. No. 11/258,602, dated Sep. 2, 2008, 12 pgs.
TAG Networks, Inc., Communication pursuant to Article 94(3) EPC, European Patent Application, 06773714.8, dated May 6, 2009, 3 pgs.
TAG Networks Inc, Decision to Grant a Patent, JP 2009-544985, dated Jun 28, 2013, 1 pg.
TAG Networks Inc., IPRP, PCT/US2006/010080, dated Oct. 16, 2007, 6 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024194, dated Jan. 10, 2008, 7 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024195, dated Apr. 1, 2009, 11 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024196, dated Jan. 10, 2008, 6 pgs.
TAG Networks Inc., International Search Report, PCT/US2008/050221, dated Jun. 12, 2008, 9 pgs.
TAG Networks Inc., Office Action, CN 200680017662.3, dated Apr. 26, 2010, 4 pgs.
TAG Networks Inc., Office Action, EP 06739032.8, dated Aug. 14, 2009, 4 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, dated May 6, 2009, 3 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, dated Jan. 12, 2010, 4 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, dated Oct. 1, 2012, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, dated Aug. 8, 2011, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-520254, dated Oct. 20, 2011, 2 pgs.
TAG Networks, IPRP, PCT/US2008/050221, dated Jul. 7, 2009, 6 pgs.
TAG Networks, International Search Report, PCT/US2010/041133, dated Oct. 19, 2010, 13 pgs.
TAG Networks, Office Action, CN 200880001325.4, dated Jun. 22, 2011, 4 pgs.
TAG Networks, Office Action, JP 2009-544985, dated Feb. 25, 2013, 3 pgs.
Talley, A general framework for continuous media transmission control, 21st IEEE Conference on Local Computer Networks, Oct. 13-16, 1996, 10 pgs.
The Toolame Project, Psych_nl.c, Oct. 1, 1999, 1 pg.
Todd, AC-3: flexible perceptual coding for audio transmission and storage, 96th Convention, Audio Engineering Society, Feb. 26-Mar. 1, 1994, 16 pgs.
Tudor, MPEG-2 Video Compression, Dec. 1995, 15 pgs.
TVHEAD, Inc., First Examination Report, IN 1744/MUMNP/2007, dated Dec. 30, 2013, 6 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/010080, dated Jun. 20, 2006, 3 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024194, dated Dec. 15, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024195, dated Nov. 29, 2006, 9 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024196, dated Dec. 11, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024197, dated Nov. 28, 2006, 9 pgs.
Vernon, Dolby digital: audio coding for digital television and storage applications, Aug. 1999, 18 pgs.
Wang, A beat-pattern based error concealment scheme for music delivery with burst packet loss, IEEE International Conference on Multimedia and Expo, ICME, Aug. 22, 2001, 4 pgs.
Wang, A compressed domain beat detector using MP3 audio bitstream, Sep. 30, 2001, 9 pgs.
Wang, A multichannel audio coding algorithm for inter-channel redundancy removal, May 12-15, 2001, 6 pgs.
Wang, An excitation level based psychoacoustic model for audio compression, Oct. 30, 1999, 4 pgs.
Wang, Energy compaction property of the MDCT in comparison with other transforms, AES 109th Convention, Los Angeles, Sep. 22-25, 2000, 23 pgs.
Wang, Exploiting excess masking for audio compression, 17th International Conference on High Quality Audio Coding, Sep. 2-5, 1999, 4 pgs.
Wang, schemes for re-compressing mp3 audio bitstreams, Audio Engineering Society, 111th Convention Sep. 21-24, 2001, New York, 5 pgs.
Wang, Selected advances in audio compression and compressed domain processing, Aug. 2001, 68 pgs.
Wang, The impact of the relationship between MDCT and DFT on audio compression, IEEE-PCM2000, Dec. 13-15, 2000, Sydney, Australia, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentablity, PCT/US2013/036182, dated Oct. 14, 2014, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP08713106-6, dated Jun. 26, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP09713486.0, dated Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP11833486.1, dated Apr. 24, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Rules 161(2) & 162 EPC, EP13775121.0, dated Jan. 20, 2015, 3 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, dated Jul. 21, 2014, 3 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2014/041430, dated Oct. 9, 2014, 9 pgs.
Active Video Networks, Notice of Reasons for Rejection, JP2012-547318, dated Sep. 26, 2014, 7 pgs.
ActiveVideo Networks Inc., Certificate of Patent JP5675765, Jan. 9, 2015, 3 pgs.
ActiveVideo Networks Inc., Decision to refuse a European patent application (Art. 97(2) EPC, EP09820936.4, dated Feb. 20, 2015, 4 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP10754084.1, dated Feb. 10, 2015, 12 pgs.
ActiveVideo Networks Inc., Communication under Rule 71(3) EPC, Intention to Grant, EP08713106.6, dated Feb. 19, 2015, 12 pgs.
ActiveVideo Networks Inc., Examination Report No. 2, AU2011249132, dated May 29, 2015, 4 pgs.
Activevideo Networks Inc., Examination Report No. 2, AU2011315950, dated Jun. 25, 2015, 3 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027803, dated Jun. 24, 2015, 18 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027804, dated Jun. 25, 2015, 10 pgs.
ActiveVideo Networks B.V., Office Action, IL222830, dated Jun. 28, 2015, 7 pgs.
ActiveVideo Networks, Inc., Office Action, JP2013534034, dated Jun. 16, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2014-100460, dated Jan. 15, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2013-509016, dated Dec. 24, 2014, 11 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP08713106.6-1908, dated Aug. 5, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011258972, dated Nov. 19, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011315950, dated Dec. 17, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011249132, dated Jan. 7, 2016, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK10102800.4, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Certificate of Grant , EP13168509.11908, dated Sep. 30, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, JP2013534034, dateed Jan. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, IL215133, dated Mar. 31, 2016, 1 pg.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14722897.7, dated Oct. 28, 2015, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP14722897.7, dated Jun. 29, 2016, 6 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP11738835.5, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14740004.8, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14736535.7, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP08713106.6-1908, dated Jul. 9, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP13168509.6-1908, dated Sep. 3, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, JP2014100460, dated Jul. 24, 2015, 5 pgs.
ActiveVideo Networks, Inc., Decision to Refuse a European Patent Application, EP08705578.6, dated Nov. 26, 2015, 10 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP13735906.3, dated Nov. 11, 2015, 10 pgs.
ActiveVideo Networks, Inc., Partial Supplementary Extended European Search Report, EP13775121.0, dated Jun. 14, 2016, 7 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2010-7019512, dated Jul. 15, 2015, 15 pgs.
ActiveVideo Networks, Inc., KIPO's 2nd-Notice of Preliminary Rejection, KR10-2010-7019512, dated Feb. 12, 2016, 5 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-20107021116, dated Jul. 13, 2015, 19 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2011-7024417, dated Feb. 18, 2016, 16 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT-US2015028072, dated Aug. 7, 2015, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2014030773, dated Sep. 15, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2014041430, dated Dec. 8, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2014041416, dated Dec. 8, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2015/000502, dated May 6, 2016, 8 pgs.
ActiveVideo, Communication Pursuant to Article-94(3) EPC, EP12767642.7, dated Sep. 4, 2015, 4 pgs.
ActiveVideo, Communication Pursuant to Article 94(3) EPC, EP10841764.3, dated Dec. 18, 2015, 6 pgs. Dec. 18, 2015.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) abd 70a(2) EP13735906.3, dated Nov. 27, 2015, 1 pg.
ActiveVideo, Notice of Reasons for Rejection, JP2013-509016, dated Dec. 3, 2015, 7 pgs.
ActiveVideo, Notice of German Patent, EP602008040474-9, dated Jan. 6, 2016, 4 pgs.
Avinity Systems B. V., Final Office Action, JP-2009-530298, dated Oct. 7, 2014, 8 pgs.
Avinity Systems B.V., PreTrial-Reexam-Report, JP2009530298, dated Apr. 24, 2015, 6 pgs.
Avinity Systems B.V., Notice of Grant—JP2009530298, dated Apr. 12, 2016, 3 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Dec. 24, 2014, 14 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Sep. 24, 2014, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/438,617, dated Oct. 3, 2014, 19 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Nov. 5, 2014, 26 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Feb. 26, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Jan. 5, 2015, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Dec. 26, 2014, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Jan. 29, 2015, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated Mar. 16, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Jul. 10, 2015, 5 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 18, 2015, 11 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/438,617, dated May 22, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Apr. 23, 2015, 8 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Jul. 9, 2015, 28 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/262,674, dated May 21, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/262,674, dated Sep. 30, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 21, 2015, 6 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 5, 2015, 5 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Aug. 3, 2015, 18 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 25, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Aug. 12, 2015, 13 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Feb. 8, 2016, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Aug. 14, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/298,796, dated Sep. 11, 2015, 11 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 17, 2016, 9 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Aug. 1, 2016, 32 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Dec. 4, 2015, 30 pgs.
Craig, Decision on Appeal (Reversed), U.S. Appl. No. 11/178,177, dated Feb. 25, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,177, dated Mar. 5, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,181, dated Feb. 13, 2015, 8 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Dec. 3, 2014, 19 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Jul. 2, 2015, 25 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, dated Dec. 11, 2015, 25 pgs.
ETSI, “Hybrid Broadcast Broadband TV,” ETSI Technical Specification 102 796 V1.1.1, Jun. 2010, 75 pgs.
Gecsei, J., “Adaptation in Distributed Multimedia Systems,” IEEE Multimedia, IEEE Service Center, New York, NY, vol. 4, No. 2, Apr. 1, 1997, 10 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Dec. 8, 2014, 10 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, dated Nov. 28, 2014, 18 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Apr. 1, 2015, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, dated Jul. 2, 2015, 20 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,722, dated Feb. 17, 2016, 10 pgs.
Jacob, Bruce, “Memory Systems: Cache, DRAM, Disk,” Oct. 19, 2007, The Cache Layer, Chapter 22, p. 739.
Ohta, K., et al., "Selective Multimedia Access Protocol for Wireless Multimedia Communication," 1997 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Victoria, BC, Canada, Aug. 1997, vol. 1, 4 pgs.
OIPF, “Declarative Application Environment,” Open IPTV Forum, Release 1 Specification, vol. 5, V.1.1, Oct. 8, 2009, 281 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Nov. 18, 2014, 9 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Mar. 2, 2015, 8 pgs.
Schierl, T., et al., "3GPP Compliant Adaptive Wireless Video Streaming Using H.264/AVC," © 2005, IEEE, 4 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Dec. 19, 2014, 5 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Apr. 14, 2015, 5 pgs.
TAG Networks, Inc., Decision to Grant a Patent, JP2008-506474, dated Oct. 4, 2013, 5 pgs.
Wei, S., "QoS Tradeoffs Using an Application-Oriented Transport Protocol (AOTP) for Multimedia Applications Over IP," Sep. 23-26, 1999, Proceedings of the Third International Conference on Computational Intelligence and Multimedia Applications, New Delhi, India, 5 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK14101604, dated Sep. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15785776.4, dated Dec. 8, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721482.6, dated Dec. 13, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721483.4, dated Dec. 15, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Under Rule 71(3), Intention to Grant, EP11833486.1, dated Apr. 21, 2017, 7 pgs.
ActiveVideo Networks, Inc., Decision to Refuse an EP Patent Application, EP 10754084.1, dated Nov. 3, 2016, 4 pgs.
ActiveVideo Networks, Inc., Notice of Reasons for Rejection, JP2015-159309, dated Aug. 29, 2016, 11 pgs.
ActiveVideo Networks, Inc., Denial of Entry of Amendment, JP2013-509016, dated Aug. 30, 2016, 7 pgs.
ActiveVideo Networks, Inc., Notice of Final Rejection, JP2013-509016, dated Aug. 30, 2016, 3 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2012-7031648, dated Mar. 27, 2017, 3 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/028072, dated Nov. 1, 2016, 7 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/027803, dated Oct. 25, 2016, 8 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/027804, dated Oct. 25, 2016, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/040547, dated Sep. 19, 2016, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/051283, dated Nov. 29, 2016, 10 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP13735906.3, dated Jul. 18, 2016, 5 pgs.
ActiveVideo Networks, Inc., Intention to Grant, EP12767642.7, dated Jan. 2, 2017, 15 pgs.
Avinity Systems B.V., Decision to Refuse an EP Patent Application, EP07834561.8, dated Oct. 10, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Nov. 2, 2016, 20 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 31, 2017, 21 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated May 16, 2016, 23 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Oct. 20, 2016, 22 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/217,108, dated Apr. 13, 2016, 8 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/696,462, dated Feb. 8, 2017, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/139,166, dated Feb. 28, 2017, 10 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 14/217,108, dated Dec. 1, 2016, 9 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/696,463, dated Aug. 14, 2017, 14 pgs.
Dahlby, Advisory Action, U.S. Appl. No. 12/651,203, dated Nov. 21, 2016, 5 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Sep. 23, 2016, 28 pgs.
Hoeben, Final Office Action, U.S. Appl. No. 14/757,935, dated Apr. 12, 2017, 29 pgs.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 22, 2016, 14 pgs.
McElhatten, Final Office Action, U.S. Appl. No. 14/698,633, dated Aug. 18, 2016, 16 pgs.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 10, 2017, 15 pgs.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP12767642.7, dated May 11, 2017, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP06772771.9, dated Oct. 26, 2017, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP06772771.9, dated Nov. 22, 2017, 1 pg.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP11833486.1, dated Oct. 26, 2017, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP11833486.1, dated Nov. 22, 2017, 1 pg.
ActiveVideo Networks, Inc., Transmission of Certificate of Grant, EP12767642.7, dated Jun. 7, 2017, 1 pg.
ActiveVideo Networks, Inc., Certificate of Grant, EP12767642.7, dated Jun. 7, 2017, 1 pg.
ActiveVideo Networks, Inc., Intention to Grant, EP06772771.9, dated Jun. 12, 2017, 5 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP14722897.7, dated Jul. 19, 2017, 7 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP14740004.8, dated Aug. 24, 2017, 7 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP15721482.6, dated Nov. 20, 2017, 7 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP15721483.4, dated Dec. 10, 2018, 5 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(2) and 162, EP16818840.7, dated Feb. 20, 2018, 3 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP15785776.4, dated Aug. 18, 2017, 8 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP15873840.1, dated May 18, 2018, 9 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP15873840.1, dated Jun. 6, 2018, 1 pg.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP16845261.3, dated Jan. 18, 2019, 1 pg.
ActiveVideo Networks, Inc., Notification of German Patent, DE602012033235.2, dated Jun. 13, 2017, 3 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2016/040547, dated Jan. 2, 2018, 5 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP16818840.7, dated Nov. 30, 2018, 5 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/064972, dated Feb. 17, 2017, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2016/064972, dated Jun. 14, 2018, 7 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2017/068293, dated Mar. 19, 2018, 7 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated May 31, 2017, 36 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/696,462, dated Jul. 21, 2017, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/728,430, dated Jul. 27, 2018, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/217,108, dated Aug. 10, 2017, 14 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/139,166, dated Nov. 22, 2017, 9 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/139,166, dated Oct. 1, 2018, 7 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/791,198, dated Dec. 21, 2018, 18 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/199,503, dated Feb. 7, 2018, 12 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/199,503, dated Aug. 16, 2018, 13 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/199,503, dated Dec. 12, 2018, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/261,791, dated Feb. 21, 2018, 26 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 15/261,791, dated Oct. 16, 2018, 17 pgs.
Hoeben, Final Office Action, U.S. Appl. No. 14/757,935, dated Feb. 28, 2018, 33 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Jun. 28, 2018, 37 pgs.
Hoeben, Office Action, U.S. Appl. No. 15/851,589, dated Sep. 21, 2018, 19 pgs.
Hoeben, Notice of Allowance, U.S. Appl. No. 14/757,935, dated Jan. 28, 2019, 9 pgs.
Visscher, Office Action, U.S. Appl. No. 15/368,527, dated Feb. 23, 2018, 23 pgs.
Visscher, Final Office Action, U.S. Appl. No. 15/368,527, dated Sep. 11, 2018, 25 pgs.
Visscher, Office Action, U.S. Appl. No. 15/368,527, dated Feb. 1, 2019, 29 pgs.
Related Publications (1)
Number: 20180139511 A1; Date: May 2018; Country: US
Provisional Applications (1)
Number: 61984703; Date: Apr. 2014; Country: US
Continuations (1)
Parent: 14696463; Date: Apr. 2015; Country: US
Child: 15791198; Country: US
Continuation in Parts (1)
Parent: 13438617; Date: Apr. 2012; Country: US
Child: 14696463; Country: US