Intelligent multiplexing using class-based, multi-dimensional decision logic for managed networks

Abstract
Switched digital television programming, video-on-demand, and other interactive television services are combined utilizing class-based, multi-dimensional decision logic to simultaneously optimize video quality and audio uniformity while minimizing latency during user interactions with the system over managed networks such as cable and satellite television networks. A group of user sessions is assigned to a single modulator. The user sessions include data in a plurality of classes, each class having a respective priority. In response to a determination that an aggregate bandwidth of the group of user sessions for a first frame time exceeds a specified budget, bandwidth is allocated for the group of user sessions during the first frame time in accordance with the class priorities. The group of user sessions is multiplexed onto a channel corresponding to the modulator in accordance with the allocated bandwidth and transmitted over a managed network.
Description
TECHNICAL FIELD

The present disclosure generally pertains to cable television network technology, and particularly to adaptive and dynamic multiplexing techniques for traffic over various network topologies.


BACKGROUND

The delivery of digital television to the home was launched in earnest in 1995 for both cable television as well as satellite-delivery systems. This new technology enabled multi-channel video program distributors (MVPD) to provide far more television programming using available network bandwidth as compared to what had been possible using analog signals of the same programs. A plurality of digital television signals are combined to fit multiple digital channels into the space of one legacy analog channel via a process called “multiplexing.” When television programs were digitally encoded at a fixed bit rate (called Constant Bit Rate or ‘CBR’), then the early digital cable TV systems could carry perhaps six to eight digital television programs in the space of a single legacy analog channel (6 MHz for NTSC or 8 MHz for non-NTSC-based systems).


The distribution networks of MVPD systems, whether cable TV or satellite, are known as “managed networks” because the output of a multiplexer is typically of a fixed bit rate. For comparison, the Internet data network is known as an “unmanaged” network, since the public use of the Internet is not regulated by a central controlling mechanism and bandwidth between two points on the network varies unpredictably.


Variable bit rate (VBR) video encoding is more efficient in the use of bandwidth than CBR encoding. VBR also generally delivers a better quality picture for the same average bandwidth. However, VBR is more difficult to manage on a distribution network. Statistical multiplexing is used to address this difficulty.


With the advent of interactive services hosted in a central location, such as a cable TV headend, as well as media originating “in the cloud” and routed over a managed network to a consumer set-top box, managing the VBR sessions within a multiplexer becomes far more challenging and more prone to adverse interactions among the sessions within a multiplex stream.


Interactive television services provide the viewer with the ability to interact with their television for the purposes of selecting certain television programming, requesting more information about the programming, or responding to offers, among many possible uses. Such services have been used, for example, to provide navigable menu and ordering systems that are used to implement electronic program guides and on-demand and pay-per-view program reservations without the need to call a service provider. These services typically employ an application that is executed on a server located remotely from the viewer. Such servers may be, for example, located at a cable television headend. The output of a software application running on such servers is streamed to the viewer, typically in the form of an audio-visual MPEG Transport Stream. This enables the stream to be displayed on (or using) virtually any client device that has MPEG decoding capabilities, including a “smart” television, television set-top box, game console, and various network-connected consumer electronics devices and mobile devices. The client device enables the user to interact with the remote application by capturing keystrokes and passing these to the software application over a network connection.


An interactive television service combines the properties of managed and unmanaged network topologies. Such services require the low-delay, perceptually real-time properties typically associated with Real-time Transport Protocol running over User Datagram Protocol (RTP/UDP) and high-complexity, proprietary clients. However, in interactive television applications the stream must be received by relatively low-complexity clients using consumer-electronics-grade components. Typically, these clients do not have the capability of the much more powerful laptop and tablet computers to which the user has grown accustomed. Hence, interactive applications hosted on a cable or satellite set-top box are perceived as slow and old-fashioned compared to the contemporary norm. Hosting the application centrally (e.g., on a remote server located at a cable television headend) and providing the picture output to the set-top device mitigates this shortcoming and allows for the delivery of rich, highly interactive applications and services. It also places stronger demands on the distribution network to deliver these services.


A centrally (remotely) hosted interactive television service provides a combination of relatively static image portions representing a Graphical User Interface (graphical UI or GUI), which require low-latency, artifact-free updates responsive to user input, and other portions that may have video with associated audio, which require smooth and uninterrupted play-out. Conventional network distribution systems do not adequately facilitate this combination of data types. For instance, with existing statistical multiplexers for cable or satellite television systems, when large user interface graphics of a particular session need to be sent to a particular client, the many other sessions sharing the same multiplex have no means available (except a drastic reduction in image quality) to scale back the bandwidth requirements of adjacent streams to allow a temporary large data block representing the UI graphics to pass.


With many interactive sessions active within a single multiplex stream, a possibility exists for disruption to video, audio and/or GUI data. The only alternative that conventional systems have is the conservative allocation of bandwidth, which then supports many fewer simultaneous sessions per multiplex stream.


Therefore, it is desirable to provide an improved way for multiplexing interactive program streams.


Additional background information is provided in U.S. patent application Ser. Nos. 12/443,571; 13/438,617; and 14/217,108, all of which are incorporated by reference herein in their entirety.


SUMMARY

Digital television over a managed network such as a cable television system uses constant-bandwidth channels to carry multiple program streams. Multiplexing within a fixed allocation of bandwidth requires a multiplexer controller to manage the allocation of bandwidth among a group of competing program streams or competing sessions. In this manner, an individual program stream or session competes for bandwidth against the remainder of the program streams in the group of program streams. Control logic in the multiplexer controller is used to manage the byte allocation among the program streams so that as few compromises as possible in quality are required, and the necessary compromises are evenly distributed among the group.


On an unmanaged network, such as the Internet, a single program stream (or session) competes for bandwidth with a large number of other unknown streams over which the multiplexer's controller has no control. One of the many advantages of the systems and methods described herein is that a multiplexer controller can control both managed and unmanaged network traffic and utilize a class-based, multi-dimensional control logic that optimizes the interactive user experience for television programming distributed across any type of network.


In some embodiments, a method for prioritizing content classes in multiplexed content streams is performed at a server system. The method includes assigning a group of user sessions to a single modulator. The user sessions include data in a plurality of classes, each class having a respective priority. An aggregate bandwidth of the group of user sessions for a first frame time is computed. It is determined that the aggregate bandwidth for the first frame time exceeds a specified budget for the modulator. In response to determining that the aggregate bandwidth for the first frame time exceeds the specified budget, bandwidth is allocated for the group of user sessions during the first frame time in accordance with the class priorities. Using the modulator, the group of user sessions is multiplexed onto a channel corresponding to the modulator, in accordance with the allocated bandwidth. The multiplexed group of user sessions is transmitted over a managed network.


In some embodiments, a server system includes a plurality of modulators to multiplex respective groups of user sessions onto respective channels for transmission over a managed network, in accordance with allocated bandwidth. The server system also includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for performing the above-described method.


In some embodiments, a non-transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of a server system that also includes a plurality of modulators to multiplex respective groups of user sessions onto respective channels for transmission over a managed network in accordance with allocated bandwidth. The one or more programs include instructions for performing the above-described method.





BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings. Like reference numerals refer to corresponding parts throughout the figures and description.



FIG. 1 illustrates a distribution of bandwidth peaks within multiple VBR streams over time and a resulting aggregated bandwidth for a group of signals corresponding to the combined streams, with peaks at typical intervals illustrating the random dispersal of peaks within the group of signals.



FIG. 2 illustrates a vertical allocation strategy of assigning user sessions to a particular quadrature amplitude modulation (QAM) modulator until it is full, then allocating sessions to the next modulator with the goal of fully loading one modulator before allocating sessions to another, in accordance with some embodiments. This allocation strategy is also known as modulator (or QAM) affinity and is beneficial where high volumes of interactive television signals are multiplexed.



FIG. 3 illustrates a horizontal allocation strategy of evenly assigning user sessions among all available QAM modulators. This allocation strategy is beneficial for multiplexing mostly linear television programming.



FIG. 4A is a multi-dimensional control graph showing decision paths for multiplexing audio, video and graphical user interface (UI) elements, according to some embodiments. Each dimension indicates which components of the user experience can contribute bandwidth for use by other components while minimizing the perceptual degradation of the composite user front-of-screen experience.



FIG. 4B is a multi-dimensional control graph showing decision paths as in FIG. 4A above with the additional decision dimension of whole application groups, in accordance with some embodiments.



FIG. 5 is a time-flow diagram of class-based allocation, illustrating packet distribution where UI elements, video and audio elements of two sessions have adequate bandwidth within a QAM modulator.



FIG. 6 is a time-flow diagram of class-based allocation where one-frame latency is introduced for the UI elements of session b to accommodate full frames for video elements and UI elements of both session a and session b.



FIG. 7 is a time-flow diagram of an alternative class-based allocation scheme where session b's video elements are given precedence over session b's UI elements, by selectively dropping and rescheduling UI frames to make room for full-frame-rate video elements.



FIG. 8 is a time-flow diagram of an alternative class-based allocation scheme where session b's UI elements are given precedence over session b's video elements, by selectively dropping video frames to make room for UI elements.



FIG. 9 is a time-flow diagram of an alternative class-based allocation scheme where the quality of session b's UI elements is reduced to make room for session b's full-frame-rate video elements, while keeping the UI elements also at full frame rate.



FIG. 10 is a schematic of an application server platform and client device according to some embodiments of a cloud-based application server system that generates the various audio, video and graphical elements to be multiplexed and sent to the client device.



FIG. 11 is a schematic according to some embodiments of an interactive television (ITV) application server and client device depicting distribution-network elements.



FIG. 12 is a schematic of a client device (e.g., a set-top box or smart TV host system) running an ITV client application and third-party applications.



FIG. 13 is a flow chart depicting a method of assigning an interactive television or video-on-demand (VOD) session to a specific QAM modulator using the QAM affinity process in accordance with some embodiments. The interactive television session streams are managed as a group and by geographic region such that the sessions can be managed by class-based intelligent multiplexing as a group without impacting non-interactive content which cannot be managed.



FIG. 14 is a flow chart depicting a process of managing a group of streams within a multiplex group in accordance with some embodiments.



FIG. 15 is a flow chart depicting a method of managing class-based allocation for multiplexing audio, video and graphical user interface (UI) elements, in accordance with some embodiments.



FIG. 16 is a flow chart depicting the process, in the method of FIG. 15, of individually grooming a single stream within the first time domain of the intelligent multiplexing process, in accordance with some embodiments.





DETAILED DESCRIPTION

Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide an understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.


The legacy analog television channels of 6 MHz (8 MHz for non-U.S.-based systems) are now largely filled with digital carriers containing many digital TV channels each. More recent cable television distribution networks were designed for broadcast TV, and not interactive television. Hence, there are many obstacles for providing interactive experiences using such legacy distribution networks.


In order to take maximum advantage of the digital bandwidth on a managed network, digital encoding is employed that utilizes a variable-bit-rate (VBR) method to efficiently accommodate differences in video content. For example, a video program might have a prolonged scene with little action and hence have a comparatively low bit rate. When in this same program an action scene arrives, the bit rate will usually increase substantially. VBR encoding can generally follow the action, so to speak, and adjust the resulting bit rate in essentially direct proportion to the demands of the program or service producing the audio-video information: low bit rate for still scenes, slightly more for talking heads, and a high bit rate for action. In contrast, conventional systems typically can only employ a Constant Bit Rate (CBR) encoding regime, in which the bit rate must be high enough to accommodate the maximum requirement of the action scene even though the average bit rate might be less than one-third of the action scene's peak bit rate. It is this very difference between peak and average bit rate, which CBR encoding cannot exploit, that VBR encoding exploits to maximize the number of television programs that can be combined into a single multiplexed stream.


“Statistical multiplexing” effectively aggregates multiple VBR streams to smooth out transmission peaks. The peak utilization of the aggregated stream is less than the sum of the peak utilizations of the individual streams. In other words, the streams share a certain amount of headroom. “Headroom” represents a bits-per-second amount (e.g., eight megabits per second (Mbps)) that is set aside for one or more streams of audio-video data. As the system employs VBR encoding, a stream that may average 2 Mbps will peak occasionally to perhaps 6 Mbps. So if there is a multiplex of, for example, ten streams, one of the ten may peak at any given time. If it is expected that two of the ten might peak simultaneously at any given instant, one allots 8 Mbps of headroom to allow the two streams to overshoot by 4 Mbps each. As more streams are multiplexed into a group, less collective headroom is needed, because each stream will also drop below its 2 Mbps average momentarily and the collective under-utilized bandwidth can contribute to the headroom reserve.
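
As a worked example of the arithmetic above, the following minimal Python sketch (using the illustrative numbers from the text; the function name is hypothetical) computes the headroom reservation for a group of VBR streams:

```python
# Minimal sketch of the headroom arithmetic described above. The numbers
# follow the example in the text; the function name is illustrative only.

AVG_RATE_MBPS = 2.0   # average bit rate of one VBR stream
PEAK_RATE_MBPS = 6.0  # occasional peak bit rate of the same stream

def headroom_for_group(expected_simultaneous_peaks: int) -> float:
    """Mbps set aside so that the expected number of simultaneously
    peaking streams can overshoot their average rate."""
    overshoot_per_stream = PEAK_RATE_MBPS - AVG_RATE_MBPS  # 4 Mbps each
    return expected_simultaneous_peaks * overshoot_per_stream

# Two of ten streams expected to peak at once: reserve 2 * 4 = 8 Mbps.
print(headroom_for_group(2))  # 8.0
```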


The term “statistical” comes from combining (i.e., multiplexing) digital video streams with the expectation that, on average, not all streams will require maximum headroom at the same time. In fact, MVPD operators specifically group their channels by program type so that not all sports channels are in one multiplex, and the like. Hence, if one can disperse bursty traffic, such as MPEG where scene changes and intra-encoded frames (I-frames) are much larger than the average bit rate of the stream, among the less active program channels, statistical multiplexing can provide a significant gain.


There is a probability, although small, that too many individual streams within a multiplex group experience a peak at the same time. FIG. 1 illustrates a distribution of peaks within multiple VBR streams over time. The bit rates of three example streams 101, 102, and 103 are shown, with each having a peak bit rate of approximately 3.5 Mbps. Without statistical multiplexing, 3 × 3.5 = 10.5 Mbps would be required for the multiplex group. The fourth stream 104, which corresponds to the multiplex group, shows the aggregated bandwidth of the three individual streams, with a peak bit rate of 6.0 Mbps. Thus, the multiplexing gain is 4.5 Mbps (i.e., 10.5 Mbps − 6.0 Mbps). This savings can be used either for better quality for all streams, or for serving more streams in the same amount of bandwidth.


“Intelligent multiplexing” (i-muxing) adds a variety of mitigation techniques to statistical multiplexing for when too many peaks would otherwise collide, i.e., when the bandwidth allocated to the aggregate of the streams would be exceeded. For example, screen updates to the user interface for interactive television applications can be deferred to a later frame. This reduces the aggregated bandwidth, but at the cost of increasing the delay before the user sees an expected update to an on-screen menu, for example. This delay is known as latency. In another example, if the screen update for the user interface provides the most information (e.g., the user is expecting a menu to appear), a multiplexer controller with knowledge of this event sequence can decide not to delay the UI screen update and rather drop video frames, which the user may be less likely to notice. Other embodiments reduce bandwidth utilization when required by varying the picture quality of video from external sources, where the quality is reduced by changing encoding parameters in the encoding/transcoding process.


In order to fully exploit these bandwidth allocation means, a class-based prioritization scheme is used. The class basis is established by giving priority to various components of information sent to the client. In some embodiments, the classes are as follows, listed in order of priority, from highest to lowest (a minimal code sketch of these classes follows the list):

    • 1. Transport Control Information (for example, the MPEG Transport Stream), as this component carries the critical access control and timing information;
    • 2. Audio, as humans notice and react badly to interrupted audio much more than to glitches in video, possibly because our ears do not blink the way our eyes do;
    • 3. Video, such that video frames are dropped to open room for more critical data rather than reduce picture quality, in a process that is often unnoticeable;
    • 4. User interface elements, as the graphical elements of a user interface can be delayed to avoid too many video frames being dropped, thus adding latency to the interface which is rarely noticed; and
    • 5. Error correction information used to “repaint” parts of the video image to mitigate error build-up, which is lowest in priority since its contribution to the user experience is subtle and often subliminal.
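
The priority scheme above can be captured as an ordered enumeration. The following is a minimal Python sketch of the five classes (the names and numeric values are illustrative; only the relative ordering is taken from the list above):

```python
from enum import IntEnum

class TrafficClass(IntEnum):
    """Content classes in priority order; a lower value means a higher priority."""
    TRANSPORT_CONTROL = 1  # MPEG Transport Stream access control and timing
    AUDIO = 2              # interrupted audio is highly noticeable
    VIDEO = 3              # frames may be dropped rather than reducing quality
    UI = 4                 # GUI updates may be delayed, adding latency
    ERROR_CORRECTION = 5   # intra-refresh "repaint" data; subtle contribution

# When the bit budget is tight, packets are served in ascending value,
# so transport control and audio are accommodated first.
for cls in sorted(TrafficClass):
    print(cls.value, cls.name)
```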


In some embodiments, the system provides a “holistic” approach to mitigating problems in one layer (e.g., a layer of the OSI model) by changing behavior of another layer. For example, if the system (e.g., a server located at a cable television headend) detects bandwidth constraints on the data link layer, it mitigates this by manipulating packets on the transport layer by changing transmission timing or by modifying the on-screen presentation by postponing UI updates or reducing frame rates. The system can do this because the system spans almost all layers.


In some embodiments, the system is also aware of the actual program type and can dynamically alter the priorities described above, possibly even reversing them. For example, video frames may be dropped to give user-interface elements priority to pass based on the knowledge that in this particular content, the user is expecting the UI graphics to appear and will not notice the dropped video frames (e.g., because the video is in a reduced size and not the focus of the user's attention). The priorities of video and user-interface elements thus may be reversed.
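
For illustration, the following hypothetical sketch (building on the `TrafficClass` enumeration above; the application hint `expecting_ui_update` is an assumption, not something defined in the text) shows how the video and user-interface priorities might be swapped dynamically:

```python
# Hypothetical sketch of dynamically reordering class priorities based on
# application knowledge, as described above.

def effective_priorities(expecting_ui_update: bool) -> list:
    order = [TrafficClass.TRANSPORT_CONTROL, TrafficClass.AUDIO,
             TrafficClass.VIDEO, TrafficClass.UI,
             TrafficClass.ERROR_CORRECTION]
    if expecting_ui_update:
        # The user's attention is on the menu, not the video window:
        # let UI elements pass ahead of video frames.
        i, j = order.index(TrafficClass.VIDEO), order.index(TrafficClass.UI)
        order[i], order[j] = order[j], order[i]
    return order
```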


These three bandwidth mitigation techniques, combined with a flexible, class-based prioritization scheme, provide the ability to optimally control a digital-media multiplexing system.


The systems and methods described herein address (i) low session count on QAM (i.e., on a QAM channel), (ii) unused bandwidth, and (iii) latency and non-smooth animations. Another problem addressed is that, if requested bandwidth temporarily exceeds available bandwidth, deferring bits could cause unpleasant user experiences, particularly connection loss, audio dropouts, and loss of video synchronization or uneven playback speed. By prioritizing these classes of bits, continuity can be guaranteed at the expense of only an occasional increase in UI latency.


In addition, some embodiments described herein provide the following functionality:

    • Allocating multiple sessions on one compositor on the Session Resource Manager (SRM) on one QAM (i.e., on one QAM channel), with static bandwidth allocations on the resource manager, and dynamically shifting bandwidth between sessions;
    • Class-based bit budget allocation (audio/control, video, UI, intra-refresh); and/or
    • Latency optimized scheduling of UI updates.


This flexible and dynamic bandwidth allocation allows for combining (multiplexing) streams that have variable bit rate requirements with a high peak-to-average ratio. Traditionally on a cable network, a CBR session is allocated on a QAM to guarantee sufficient quality of service. The bandwidth allocated is either set high, resulting in a large amount of unused bandwidth, or set low, resulting in higher latency and a non-smooth user experience. Other advantages of this flexible and dynamic bandwidth allocation include increased session count on QAM transport streams, less unused bandwidth, better (lower) latency, better animation smoothness, and the ability to trade off some quality for an even higher session count.


In intelligent multiplexing (i-muxing), multiple user sessions share a set amount of bandwidth available to the i-mux group. The allocation occurs per frame time (e.g., 33 msec or 40 msec). If, during a frame time, the total sum of requested bandwidth for all sessions exceeds the available bandwidth of the i-mux group, the bit budget is allocated among the sessions and certain bits are deferred. Different classes of bits are allocated according to different priorities in the following order: (i) basic transport stream control and audio, (ii) video or partial-screen video, (iii) UI updates, and (iv) intra-refresh of video frames. (This list corresponds to the above list of five priority classes, except that bits for basic transport stream control and for audio are combined into a single highest-priority class.) The prioritizing of bits takes place while still generating a compliant MPEG transport stream for each session. Accordingly, the systems and methods described herein categorize bits into different classes, provide class-based allocation and, in case of delayed bits, create “filler slices” to maintain MPEG compliance and provide feedback to the user-interface rendering engine to defer UI updates if necessary.
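
The per-frame behavior described above might be sketched as follows (a minimal illustration, assuming hypothetical packet objects with `size_bits` and `traffic_class` attributes; a real multiplexer would additionally emit the “filler slices” mentioned above to keep each session's transport stream compliant):

```python
# Minimal sketch of per-frame, class-based bit-budget allocation. Bits that
# do not fit in the group budget are deferred, lowest-priority classes first.

def allocate_frame(packets, group_budget_bits):
    """Return (sent, deferred) packet lists for one frame time."""
    sent, deferred = [], []
    remaining = group_budget_bits
    # Serve classes in priority order (transport/audio first, intra-refresh last).
    for pkt in sorted(packets, key=lambda p: p.traffic_class):
        if pkt.size_bits <= remaining:
            remaining -= pkt.size_bits
            sent.append(pkt)
        else:
            deferred.append(pkt)  # e.g., a UI update postponed to a later frame
    return sent, deferred
```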


The following sections explain how class-based, intelligent multiplexing is performed in cable television systems that use Switched Digital Video, Video-On-Demand (VOD), or interactive television services, simultaneously optimizing video quality and minimizing the response time of interactive services over the same managed network of the cable plant while maintaining a standards-compliant transport stream suitable for decoding by the cable TV set-tops (e.g., set-top boxes) connected to the system.



FIGS. 5-9 are time-flow diagrams of class-based allocations for various scenarios. Each diagram depicts two sessions, a full-screen audio-video session (“session a”) and a mixed user-interface and partial-screen audio-and-video session (“session b”), and their allotted allocations for CBR audio A, VBR video V, and VBR UI (User Interface) packets, during frame times t (i.e., t+0) to t+4. Each block in the time-flow diagram is labeled according to its allocation for a specific class, session and frame. For example, UIb/0 identifies the allotted share of bandwidth for user-interface packets of session b, frame 0. The depicted blocks merely indicate a share of the total available bandwidth at frame time t+n. The blocks may include one or more transport stream (TS) packets, and the scheduling of the packets within a frame time may be subject to considerations other than the class-based i-muxing allocation scheme explained herein. Also, a typical QAM modulator has capacity for more than just two sessions.


Throughout the scenarios depicted in FIGS. 5-9, the allocations for session a's audio and video elements are kept the same. Although all sessions within an i-mux group may be subject to class-based allocation policies, for clarity of the discussion the allocations in these examples only affect session b. Furthermore, the sessions' CBR audio is never part of the tradeoff. As noted earlier, the human brain is far more sensitive to audio glitches than it is to minor video latency, frame rate or quality degradation. (As it's known: the ears don't blink.)


A time-flow diagram of class-based allocation is shown in FIG. 5, where the allotted shares of audio, video and user-interface bandwidth for each frame fit the QAM modulator's capacity (i.e., the maximum bandwidth). The aggregate 506 of session a's audio share 505 and video share 503 and session b's audio share 504, video share 502 and user-interface share 501 for the frame at t+0 fits the QAM modulator's capacity available for the frame at t+0, as do the aggregates 507 and 508 for the capacity available at t+1 and t+2. (“Share” in this context refers to a share of the data in an i-mux group at a given frame time.)


For comparison, FIG. 6 is a time-flow diagram of a class-based allocation where the data for session b's user-interface elements, starting with 601, are delayed by one frame time (i.e., from t to t+1 for UIb/0, from t+1 to t+2 for UIb/1, from t+2 to t+3 for UIb/2, and from t+3 to t+4 for UIb/3) to accommodate session b's larger share of data from its video elements. This delay is implemented in response to the fact that the aggregate 606 for frame 0 would exceed the QAM modulator capacity if UIb/0 were included. Although the introduction of latency (in this example, a latency of one frame time) for the user interface may not generally be beneficial from an interactivity perspective, the skilled person will understand that frames from the user interface are by far the easiest to delay. Moreover, not all data from the user-interface class may be the result of interactivity. Unsolicited updates (e.g., cover art, spinners or commercial banners) may occasionally animate. For such updates it is perfectly acceptable to give the data from the session's video elements preference over the session's user-interface elements.


Conversely, FIG. 7 provides a time-flow diagram of an alternative class-based allocation scheme where, instead of delaying the user-interface data of frame 0 to t+1, the frame rate for user-interface data is reduced. The user-interface data for frame 1 is rescheduled to be partially sent at t+0 (701) and the remainder at t+1 (709). Although user-interface data for frame 0 and frame 2 are dropped, the remaining elements are at their intended positions, thus masking the latency and possibly achieving the intended synchronization of the user-interface elements with the video elements. Dropping frames for user-interface elements is particularly efficient since user-interface elements typically produce bursty data, easily exceeding the session's average bit rate. Reducing the frame rate by the factor by which the data exceeds the average is an effective method to keep the data within the allotted share without the introduction of latency, especially if the updates are part of an animation.



FIG. 8 provides a time-flow diagram of an alternative class-based allocation scheme where instead of reducing the frame rate for user-interface elements of session b, frames (and corresponding packets) from the video elements are dropped. When comparing aggregate 807 to aggregate 507, it can be seen that Vb/1 (i.e., session-b video data for frame 1) is dropped at t+1 to accommodate the larger UIb/1 (i.e., the session-b user-interface data for frame 1). If the user-interface elements update because of input from the user, it may be assumed that the focus of the user is on the animating (i.e., updating) user-interface elements. Hence, the user may not notice a reduced frame rate of the video elements and frames from these elements may be dropped to accommodate the data from session b's user-interface elements.


Finally, FIG. 9 provides a time-flow diagram of an alternative class-based allocation scheme where no frames are dropped, but the size of the user-interface elements data 901 for frame 0 (i.e., UIb/0) is reduced compared to 501 by reducing the quality of the update, for example through a higher quantization, to accommodate the larger share of video data 902 compared to 502. Such a reduction may be particularly suited for updates that are part of a series of updates or animation, but may be less suited for updates that will remain on the screen for a longer time, such as for example the last frame of an animation or the end state of an updated user-interface element.


To help illustrate the interaction between the elements of the server and the relationship between the server elements and the client device, without regard to specific elements of the communications network (i.e., of the distribution network), FIG. 10 provides a block diagram overview. An application server platform 1000 (also referred to as an application server or remote application server), which is an example of the server system in the allocation examples of FIGS. 5-9, includes two or more sources that generate audio-visual information to be conveyed to a client device 1010 of a specific consumer at a remote location. One source of audio-visual information is the application engine 1001, which hosts (executes) a software application on an HTML5 application processor 1002, which itself is but one possible processor of many known to those skilled in the art. The consumer interacts with the application engine 1001 via a network, which could be a cable television network or the Internet. The application engine 1001 creates at least two types of visual information. One type of visual information is graphical bitmaps, such as on-screen menus or other static information displays. The bitmaps are encoded by a bitmap encoder 1003. In this example, the bitmap output of 1003 is transmitted through the network to a client device 1010 that is capable of rendering graphic overlays on command from the application server platform 1000. Otherwise, for client devices not capable of locally rendering graphic overlays, the bitmap outputs are first rendered to MPEG and then combined in the compositor 1006 prior to transmission to the client device as a completed video frame.


Another source of audio-visual information that produces output in response to the consumer's interaction at the client device 1010 with the remote application server 1000 is MPEG encoder 1004, which encodes or re-encodes full-motion video from a plurality of sources such as Internet videos (e.g., from a service such as YouTube) or video servers associated with the cable television system that hosts the application server 1000. Yet another source of full-motion video is content source 1011, which might be an Internet-based video service or website featuring video clips (e.g., YouTube) or other service (e.g., Netflix) which provides long-form content such as feature films.


Most Internet video sources do not produce video streams in an encoded-video format compatible with cable television digital set-top boxes, and, therefore, the server system may transcode the video from the native format of the content source 1011 to an MPEG format compatible with the client device 1010. An additional benefit of the application server platform 1000 is the ability to mix video and graphics information in many combinations of overlays as well as picture-in-picture windows, among many other options. This combination of resources can be programmatically determined by the application engine 1001, which outputs a single MPEG stream representing the composite audio-video information that is streamed to the client device 1010. In other embodiments, full-motion, full-screen video from either MPEG encoder 1004 or content source 1011 is sent to the client device 1010, and user-interface controls (e.g., digital video recorder (DVR) controls to play, stop, fast-forward, etc.) are sent to the client device 1010 by the bitmap encoder 1003 to be rendered by bitmap decoder 1007 and displayed as an overlay of the full-screen video as rendered by MPEG decoder 1008.


The client (e.g., client device 1010) and the server (e.g., application server platform 1000) are, in cable television systems, separated by a managed digital network that uses well-known digital video transport protocols such as the Society of Cable Telecommunications Engineers' Service Multiplex and Transport Subsystem Standard for Cable Television (SMTS) for United States cable TV or DVB-C for most international cable systems. In this instance, ‘managed’ means that bandwidth resources for providing these services may be reserved prior to use. Once resources are allocated, the bandwidth is generally guaranteed to be available, and the viewer is assured of receiving a high-quality interactive television viewing experience. When configured properly, class-based intelligent multiplexing provides consistent delivery such that the user generally cannot tell that the interactive television application is being executed on a remote server and is not, in fact, executing in the set-top box, smart TV or whatever client device is displaying the resulting audio-video information.



FIG. 12 is a schematic according to some embodiments of a client device (e.g., a set-top box or smart TV host system) running a dedicated ITV client application 1207 and third-party applications 1208 and 1209. A tuner 1210 feeds the signal from a particular program and channel to a video frame buffer 1211. A graphic overlay mixer 1212 adds certain locally generated graphics and combines them with the video signal from the video frame buffer 1211, in accordance with information supplied to central processing unit (CPU) 1201. CPU 1201 draws on inputs from the managed cable system network 1204, an unmanaged network (e.g., the Internet) 1205, various third-party applications 1208 and 1209, and the dedicated ITV client application 1207. A network interface 1203 provides inputs from the managed cable system network 1204 and unmanaged network 1205 to the CPU 1201. An application programming interface (API) 1206 serves as an interface between the CPU 1201 on one side and the various third-party applications 1208 and 1209 and dedicated ITV client application 1207 on the other side. The graphic overlay mixer 1212 generates a video output (“video out”) signal that is sent to a display.


The systems and methods for intelligent multiplexing described herein allow high-quality interactive television applications to be offered to client devices, and thus to the consumer, from a central location (e.g., the cable television headend) in place of interactive applications that run on the client device (e.g., in the consumer's home). Intelligent multiplexing thus provides interactive television applications to consumers through client devices that are not running dedicated ITV client applications 1207.



FIG. 11 is a schematic of an ITV application server (e.g., the application server platform 1000, FIG. 10), a client device 1107 (e.g., client device 1010, FIG. 10), and distribution-network elements, in accordance with some embodiments. The application engine 1102 is the programmatic means by which the user interacts with the application server via the client device 1107, using bandwidth provided by the headend or hub distribution network 1105 utilizing QAM modulators 1106 (e.g., for a cable TV network). The session output (audio, video, and graphics) from the application engine 1102 and/or the video transcoder 1101 is sent downstream to the client device 1107 over the network.


The session resource manager 1108 advantageously assigns dedicated user sessions from the headend 1105 to client devices 1107 using the managed network of the cable television system.


The timing of compositor 1103 is controlled by means of a global frame-time clock 1104, which maintains a cadence that coordinates the combination of video, audio, and user-interface graphical elements such that playback on the client device 1107 is as smooth as possible.


To perform statistical multiplexing, groups of streams that can share bandwidth are identified. (In this document, bandwidth means bits per second, not hertz). In existing cable television distribution systems employing switched digital video (SDV) and video-on-demand (VOD) services, multiple user sessions already share a common resource, which is the digital video Transport Stream (MPEG Transport) usually transmitted via a quadrature-amplitude modulated (QAM) carrier which is modulated into a six megahertz bandwidth slot (i.e., channel) in place of the older analog television signal of NTSC or an eight megahertz bandwidth slot (i.e., channel) in place of the older SECAM or PAL television signals. It is therefore logical to choose the Transport Stream on a QAM (or frequency-QAM pair) as the group within which class-based intelligent multiplexing (“i-muxing”) is performed.


When allocating resources for a switched video session of any type, the digital information is routed to one or more specific QAM modulators 1106 serving the neighborhood that services the subscriber requesting the service. The resulting set of QAM-modulated transport streams may already carry allocations for other i-mux sessions. To have maximum benefit from intelligent multiplexing, multiple sessions should be allocated to a single QAM modulator until its bandwidth is fully allocated following the vertical allocation strategy, called QAM affinity, illustrated in FIG. 2.


Managing Groups of Interactive Television and VOD Sessions Utilizing QAM Affinity


QAM affinity is a process by which each multiplex group is filled, as much as possible, before additional multiplex groups are utilized. QAM affinity is the converse of linear (or non-interactive) video services, where QAM loading is spread out among available modulators as evenly as possible. In addition, all sessions in a statistical multiplexing group will be served by the same i-mux process. QAM affinity is performed instead of allowing sessions to be load-balanced in an arbitrary fashion: if an intelligent multiplexer (i-mux) is already muxing a stream onto a given QAM channel, then a next session on that same QAM channel should be allocated to the same i-mux so that bandwidth can be shared between the sessions and the merits of intelligent multiplexing can be enjoyed. Additionally, it is beneficial in some cases to group sessions in the same service area on the same i-mux such that the i-mux controller can allocate bandwidth across multiple mux channels into a single neighborhood.



FIG. 13 is a flow chart depicting a method of assigning an interactive television (ITV) or video-on-demand (VOD) session to a specific QAM modulator using the QAM affinity process in accordance with some embodiments. In the method, a task of assigning an interactive session to a QAM modulator is identified (1301). It is determined (1302) whether there is an existing QAM modulator serving the client service area (i.e., the service location of the client device). The service location of a client device (e.g., set-top box) is provided by the client device itself in some cable television systems, or it may be read from a central database maintained by the cable system. If there is no such QAM modulator (1302-No), an available RF channel is found (1303) and a corresponding QAM modulator is assigned (1303) to the client service area.


It is then determined (1304) whether there is capacity available on the existing (or newly assigned) QAM modulator for another session. If capacity is available (1304-Yes), then the ITV or VOD session is assigned (1306) to that QAM modulator. The determination 1304 and subsequent assignment 1306 keeps sessions for a particular service area grouped on the same QAM channel as much as possible. If capacity is not available (1304-No), then an available RF channel is found (1305) and a corresponding QAM modulator is assigned (1305) to the client service area. The ITV or VOD session is then assigned (1306) to the QAM modulator. Resource classes for the ITV or VOD session are assigned in a class management table (1307), and the method of FIG. 13 ends. The class management table is maintained by the compositor 1103 in accordance with some embodiments.
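
The assignment flow of FIG. 13 might be sketched as follows (a condensed, hypothetical illustration; the modulator objects, their fields, and `claim_free_rf_channel` are assumed stand-ins for the cable system's actual resource-management interfaces):

```python
# Hedged sketch of the QAM-affinity assignment flow of FIG. 13.

def assign_session(session, modulators, service_area):
    """Assign an ITV/VOD session to a QAM modulator, preferring one that
    already serves the client's service area (QAM affinity)."""
    qam = next((m for m in modulators if m.service_area == service_area), None)
    if qam is None or not qam.has_capacity(session):           # 1302 / 1304
        qam = claim_free_rf_channel(modulators, service_area)  # 1303 / 1305
    qam.sessions.append(session)                               # 1306
    qam.class_table.register(session)                          # 1307
    return qam
```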


As defined above, there are two methods to allocate resources to a QAM-modulated channel: vertically (e.g., as in the method of FIG. 13), to fill up one QAM first before allocating to another, and horizontally, to allocate resources to QAM channels in a round-robin fashion to evenly distribute channel loading and to maximize redundancy.



FIG. 2 illustrates a situation where resources for eight sessions have been allocated using the vertical distribution approach with 201 (QAM 1) being filled first and 202 (QAM 2) being used for the next group of interactive sessions. FIG. 3 illustrates the horizontal approach with the first session being allocated to 301 (QAM 1) and the next session to 302 (QAM 2) and so on across all QAM channels.


A vertical allocation strategy results in more opportunity for statistical multiplexing using class-based allocation: in the horizontal-allocation strategy example each QAM channel has only two sessions. The drawback of vertical allocation is that QAM channels are not equally loaded, and should a fully loaded QAM channel develop a fault and cease operating, more sessions are affected than in the horizontal allocation case. For ordinary television broadcasts, a failed QAM channel impacts many end-users and an operator may choose to have redundant QAM modulators allocated via automatic failover; however, for unicast (one-to-one) interactive sessions, a failed QAM impacts far fewer end-users and redundancy is less important. The improved quality and efficiency benefits of vertical allocation outweigh the drawbacks of reduced redundancy.


In some embodiments, from a high-level view, intelligent multiplexing includes the following functions:

    • A grouping function in the i-mux that manages bit allocations for each session in the group of sessions as a whole;
    • A load-balancing and resource-management function that selects an i-mux to serve a session based on QAM affinity, determined by the location of the service area of the session; and
    • A vertical allocation function.


The compositor, via communications with a session resource manager, creates and manages groups. In some embodiments, the compositor has the following capabilities (a code sketch of such an interface follows the list):

    • Define (add) a group, where the group receives an identifier and an aggregated bandwidth (in bits per second);
    • Modify the bandwidth for a certain group; and/or
    • Delete a group and close the session.
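
A minimal sketch of such a group-management interface follows (the class and method names are illustrative; the text does not define a concrete API):

```python
# Minimal sketch of the compositor's group-management capabilities listed above.

class IMuxGroupManager:
    def __init__(self):
        self.groups = {}  # group identifier -> aggregated bandwidth (bits/sec)

    def add_group(self, group_id: str, aggregate_bps: int) -> None:
        """Define a group with an identifier and an aggregated bandwidth."""
        self.groups[group_id] = aggregate_bps

    def modify_group(self, group_id: str, aggregate_bps: int) -> None:
        """Modify the bandwidth for a certain group."""
        self.groups[group_id] = aggregate_bps

    def delete_group(self, group_id: str) -> None:
        """Delete a group (and close the associated sessions)."""
        del self.groups[group_id]
```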


Furthermore, when configuring a session, an optional group identifier can be passed to signal the application of intelligent multiplexing. If the group identifier is not present, intelligent multiplexing is not performed. The bit allocation of the compositor continues to respect the individual stream's bandwidth; when multiplexing the streams at the output of the compositor, the aggregated bandwidth allocated to the group of sessions (if present) is respected as well. Screen updates that do not fit in the session's bit budget or in the group bit budget are delayed, where an update gets a priority that increases with waiting time.


Alternative behaviors to mitigate bandwidth constraints could be to select a representation of the same content with fewer bits:

    • For video content encoded with multiple bitrates, select a lower bit rate version.
    • For UI content, select a representation of the same fragment with fewer bits, by applying greater video compression to the encoded fields.


Furthermore, the server system can configure the average bit rate (as opposed to the ‘base’ amount) and add a headroom (or reserve) amount that decreases as the number of sessions grows. The more sessions there are, the better the i-mux is able to make use of the multiple channels' fungible bandwidth, so the need for additional headroom gradually decreases. The advantage of this approach is that as the QAM channel nears full capacity, the headroom is almost zero, thus maximizing the QAM usage.
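
A minimal sketch of such a shrinking headroom reserve follows (the linear decay rule and the parameter `sessions_at_zero` are assumptions for illustration; the text only states that headroom decreases as the session count grows):

```python
# Sketch of a headroom reserve that shrinks as sessions are added, so that a
# nearly full QAM channel carries almost no idle headroom.

def headroom_bps(num_sessions: int, initial_headroom_bps: int,
                 sessions_at_zero: int = 20) -> int:
    """Linearly decaying headroom; zero once the channel is heavily loaded."""
    scale = max(0.0, 1.0 - num_sessions / sessions_at_zero)
    return int(initial_headroom_bps * scale)

# One session: nearly full headroom; twenty or more sessions: none.
print(headroom_bps(1, 6_000_000))   # 5700000
print(headroom_bps(20, 6_000_000))  # 0
```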


For the streaming GUI use case (i.e., a user application operating from a location remote from the user), the bandwidth requested for a session depends on the application that is executed. Furthermore, this bandwidth is requested in advance. Since multiple sessions share a part of the bandwidth, less bandwidth is reserved than the expected nominal application bandwidth. If, for example, the application has been designed for a maximum video stream density requiring 8 Mbps, then the system typically reserves only 2 Mbps, a “base amount”. The remaining 6 Mbps of additional bandwidth is considered “headroom” and needs to be reserved separately, to ensure that the necessary bandwidth is always available. Therefore, the following steps are typically taken (a code sketch of these steps follows the list):

    • 1. Reserve a headroom amount for a dummy session, that is, a session that will not output any data but helps with the establishment of other sessions by reserving a common overhead space to be shared by the other sessions.
    • 2. For each new (actual) session, the system reserves the base amount (e.g., 2 Mbps in the example above).
    • 3. The dummy session will be associated with a unique address and port at the QAM modulator, but no bits are actually sent. Hence, the associated program identifier (service ID) on the corresponding transport stream will carry no bits. Likewise, in some cases, the dummy session does not require a service ID to be associated with it. A service ID is typically a scarce resource on a QAM (a small number of predefined service IDs exists per Transport Stream). Therefore, preferably a service ID is not allocated for the dummy sessions (or dummy resource blocks).
    • 4. When the last session in a group has terminated, the dummy session can be also terminated.
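
These steps might be sketched as follows (a hypothetical illustration; `reserve_bandwidth`, `release`, and the session objects are assumed stand-ins for the QAM modulator's actual reservation interface):

```python
# Sketch of the dummy-session reservation steps above, using the example
# amounts from the text.

BASE_BPS = 2_000_000      # per-session base amount (2 Mbps)
HEADROOM_BPS = 6_000_000  # shared headroom (6 Mbps in the example)

def open_group(qam):
    # Step 1/3: the dummy session reserves the shared headroom; it sends
    # no bits and preferably has no service ID allocated.
    dummy = qam.reserve_bandwidth(HEADROOM_BPS, service_id=None)
    return {"dummy": dummy, "sessions": []}

def open_session(qam, group):
    # Step 2: each actual session reserves only the base amount.
    group["sessions"].append(qam.reserve_bandwidth(BASE_BPS))

def close_session(qam, group, session):
    group["sessions"].remove(session)
    qam.release(session)
    if not group["sessions"]:
        qam.release(group["dummy"])  # step 4: last session has terminated
```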


Once the base amount and headroom are defined, the application stays within the base amount plus headroom to accommodate worst-case situations. (For some applications the base amount would fluctuate depending on the number of video titles available.) This aspect is no different from the non-intelligent-muxing use case. However, what is different is that when more sessions are added to the group, the headroom is shared among more sessions and the probability of undesirable congestion due to insufficient aggregate bandwidth increases. So with intelligent multiplexing, if an interactive session is the only one, then the application may respond quickly. Correspondingly, when more sessions are added, the latency imposed on the interactive session may increase. If that occurs, then the base amount and headroom could be increased.


In a further refinement, the headroom for a multiplex channel can be allocated as a whole, or it can be allocated in blocks (e.g., of one Mbps). Block allocation allows headroom to be gradually reduced as described above, and allows different headroom sizes to be used as different application mixes are used on the same QAM channel; each application has its own characterization in terms of base and headroom amounts, or average and maximum bit rates.


The i-mux bit rate allocation works on two time domains. The first is per-frame: the allocation must stay within the group bandwidth, and does this by delaying UI updates or dropping frames. The second time domain is a feedback loop that requests more bandwidth for a group, rejects new sessions, or refuses requests from applications to play video (and indicates to the application that video is being refused so that the application can display a user-friendly message). The second-time-domain feedback loop is an algorithm that uses inputs such as needed versus obtained bandwidth, measurement of latency due to i-muxing (for example, as a histogram), frame drop rate, and other metrics. These inputs allow the i-mux to automatically increase or decrease an i-mux group's bandwidth. In the absence of such an algorithm, the application could indicate to the i-mux that it requires more bandwidth, which the i-mux will then request from the Session Resource Manager (SRM) 1108 of FIG. 11.



FIG. 14 is a flow chart depicting a process of managing a group of streams within a multiplex group in accordance with some embodiments. FIG. 14 represents the top layer (i.e., level) of a multi-layer process of intelligent multiplexing. Element 1400 represents the process depicted in FIG. 15, which in turn includes an element 1506 detailed in FIG. 16.


An aggregate multiplexed load for the first time domain is computed (1400). The aggregate multiplexed load is compared to the available channel bandwidth to determine (1402) whether an overflow condition has occurred. If an overflow condition has not occurred (1402-No), feedback of state information is performed (1401) for the second time domain and the computation 1400 is repeated. If an overflow condition has occurred (1402-Yes), the server system requests (1403) additional bandwidth from the cable system for a potentially over-subscribed multiplex stream. If additional bandwidth is available (1404-Yes), the overhead calculation is adjusted (1406) to account for the additional bandwidth, and execution of the method transitions to element 1401. If additional bandwidth is not available (1404-No), additional sessions are refused (1405), such that no additional sessions can be added to the i-mux group. In some embodiments, existing sessions are informed that bit sacrifices to one or more sessions will be necessary (e.g., are instructed to sacrifice bits), in response to the lack of availability of additional bandwidth (1404-No).
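
The FIG. 14 loop might be sketched as follows (a hypothetical illustration; `compute_aggregate_load`, `feed_back_state`, `request_bandwidth`, and `adjust_overhead` are assumed hooks for the operations identified by the reference numerals):

```python
# Sketch of the top-layer management loop of FIG. 14.

def manage_group(group, channel_bps):
    while group.active:
        load_bps = compute_aggregate_load(group)                        # 1400
        if load_bps <= channel_bps:                                     # 1402-No
            feed_back_state(group)                                      # 1401
            continue
        granted_bps = request_bandwidth(group, load_bps - channel_bps)  # 1403
        if granted_bps:                                                 # 1404-Yes
            channel_bps += granted_bps
            adjust_overhead(group, granted_bps)                         # 1406
        else:                                                           # 1404-No
            group.refuse_new_sessions()                                 # 1405
            group.notify_sessions_of_bit_sacrifice()
```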


Class-Based Allocation


The transport stream bits cannot be delayed or dropped, as the transport information provides necessary support information, such as access control and critical timing signals, among many other functions. Hence, it is assumed that the transport stream is provided with the bandwidth required.


Similarly, audio should not be interrupted as humans react poorly to disrupted audio, much more so than disrupted video. Hence, several strategies are employed to maintain smooth audio flow. In general, audio gets priority over video but should the need arise, audio can be sent ahead in blocks to an audio buffer of the client device and played out from the client audio buffer in timestamp-directed coordination with its respective video stream. Unlike conventional systems, the system described herein sends audio ahead to make room for oversize graphical elements of a user interface.


Video has the third priority, with the understanding that excessive compression of video is generally undesirable. Video bit rate is generally reduced by conventional systems by raising the quantization level of the compression scheme (e.g., MPEG-2 or H.264/H.265). The system described herein seeks to avoid this visual degradation and will instead first seek to sparingly drop video frames. Especially in the case of video in a window of an interactive application, substituting null frames for video frames provides an effective means to control video bandwidth.


The user interface graphics have the next priority after video. As these elements are generally not moving, or are moving slowly, their delivery may be delayed briefly when congestion in the channel is imminent. However, the decision logic of the intelligent multiplexer can be programmatically informed of the nature of the interactive application and can choose to raise the priority of the user interface graphics in the event that the video associated with the application is less important than the prompt appearance of a user interface element, such as a menu appearing when a user presses a button on the remote or clicks a virtual button displayed on the screen. A ready example would be a television program guide displaying many video windows of different television channels. When a user elects to interact with a menu, signaled by the button press, that person's attention is not likely on the various video windows. Hence, video frames can be dropped to make way for graphical elements of the interface without the user noticing the difference in the video picture, which is predictable especially for video in a window within a larger user interface application.



FIG. 15 is a flow diagram illustrating a method of managing class-based allocation for multiplexing audio, video, and GUI elements, using the priorities described above, in accordance with some embodiments. Bandwidth is allocated (1501) to the audio, video, and graphical user interface (UI) elements based on the following priorities, listed in order from highest priority to lowest priority: (1) transport control information, (2) audio data, (3) video quality (quantization level), (4) video frames, (5) GUI elements (i.e., UI graphical elements), and (6) error-correction information. This list of priorities corresponds to the previously presented list of five priority levels, except that video quality and video frames are broken out into two different priority levels. These priorities are not absolute and can be altered (e.g., in real time) depending on specific circumstances such as a change in the application program creating the output to be multiplexed (e.g., the user switches from a video game to a program guide.)


The aggregate bandwidth of the next data frame is computed (1502). If the aggregate bandwidth is within the bit budget (1503-Yes), no action is taken and the method returns to operation 1502 for a subsequent data frame. If the aggregate bandwidth is not within the bit budget (1503-No), all priority-6 packets (i.e., packets with error-correction information) are dropped. If this action drops the aggregate bandwidth to within the bit budget (1505-Yes), no further action is taken and the method returns to operation 1502 for a subsequent data frame. If, however, the aggregate bandwidth is still not within the bit budget (1505-No), individual streams are groomed (1506) until the aggregate bandwidth is within the bit budget (1507-Yes), at which point the method returns to operation 1502 for a subsequent data frame. Grooming of an individual stream is performed in the method of FIG. 16, described below. If grooming streams does not reduce the aggregate bandwidth to within the bit budget (1507-No), then the session load is reallocated (e.g., one or more sessions are reassigned to a different QAM modulator and corresponding channel).
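
A compact Python sketch of this control loop follows, assuming hypothetical per-stream helpers (frame_bits, drop_error_correction) and taking the grooming routine of FIG. 16 (sketched below) and the reallocation step as parameters; it is an illustration, not the disclosed implementation.

    def multiplex_frame(streams, bit_budget, groom, reallocate):
        """One pass of the FIG. 15 loop for a single frame time.

        streams: per-session stream objects (hypothetical interface);
        groom: the per-stream routine of FIG. 16, sketched below;
        reallocate: moves sessions to another QAM modulator and channel.
        """
        total = sum(s.frame_bits() for s in streams)       # operation 1502
        if total <= bit_budget:                            # 1503-Yes
            return
        for s in streams:                                  # drop priority-6 packets
            total -= s.drop_error_correction()
        if total <= bit_budget:                            # 1505-Yes
            return
        for s in streams:                                  # 1506: groom streams in turn
            total -= groom(s, total - bit_budget)
            if total <= bit_budget:                        # 1507-Yes
                return
        reallocate(streams)                                # 1507-No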



FIG. 16 is a flow chart depicting the process, in the method of FIG. 15, of individually grooming (1506) a stream within the first time domain of the intelligent multiplexing process, in accordance with some embodiments. This process is performed (1602) for each stream in a multiplexed group. If the stream is a UI session (1603-Yes), it is determined (1604) whether the corresponding graphical element is unimportant (i.e., whether an importance of the graphical element fails to satisfy a criterion). If the graphical element is not important (i.e., its importance does not satisfy the criterion) (1604-Yes), it is decided (1605) whether to delay the information in the UI session in favor of sending ahead one or more type-4 (i.e., video frame) packets. If so (1605-Yes), this action is taken and grooming then begins (1602) for another stream. If this action is not taken (1605-No), or if the graphical element is determined (1604-No) to be important (i.e., its importance satisfies the criterion), then processing continues (1612) for other sessions, and thus for other streams in the multiplex group. (Operation 1612 is reached through operations 1606-No and 1611-No.)


If the stream is video (i.e., corresponds to a video frame) (1606-Yes), it is determined (1607) whether a count of previously dropped video frames satisfies (e.g., is greater than, or greater than or equal to) a minimum count. If not (1607-No), a type-4 packet is dropped (1608) to free up bandwidth for the multiplex group. Grooming then begins (1602) for another stream. If so (1607-Yes), it is decided (1609) whether to adjust the quantization level of the video. If so (1609-Yes), the quantization level is raised (1610), thus lowering the video quality and freeing bandwidth for the multiplex group. If not (1609-No), then processing continues (1612) for other sessions, and thus for other streams in the multiplex group.


If the stream is audio (1611-Yes), one or more type-2 (i.e., audio) packets may be sent ahead (1614) to be queued at the client device before being played. If this and all other options have been exhausted (1613-Yes), however, then one or more type-2 (i.e., audio) packets are dropped (1615) and an overflow alarm issues (1616).
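
The branching of FIG. 16 can be sketched as follows in Python; every predicate and action on the stream object (importance_is_low, drop_video_frame, and so on) is a hypothetical stand-in for the application-informed policy described above, and the operation numbers in the comments refer to FIG. 16.

    def groom_stream(stream, deficit_bits, min_dropped=2):
        """Try to free up to deficit_bits from one stream; return bits freed."""
        if stream.kind == "ui":                            # 1603-Yes
            if stream.importance_is_low():                 # 1604-Yes
                if stream.ok_to_delay():                   # 1605-Yes
                    return stream.delay_ui_graphics()      # favor type-4 video
            return 0                                       # important UI: on to 1612
        if stream.kind == "video":                         # 1606-Yes
            if stream.frames_dropped < min_dropped:        # 1607-No
                return stream.drop_video_frame()           # 1608
            if stream.ok_to_requantize():                  # 1609-Yes
                return stream.raise_quantizer()            # 1610: lower quality
            return 0                                       # 1609-No: on to 1612
        if stream.kind == "audio":                         # 1611-Yes
            freed = stream.send_audio_ahead(deficit_bits)  # 1614: queue at client
            if freed == 0 and stream.all_options_exhausted():  # 1613-Yes
                freed = stream.drop_audio_packets()        # 1615
                stream.raise_overflow_alarm()              # 1616
            return freed
        return 0                                           # 1611-No: on to 1612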



FIG. 4A is a multi-dimensional control graph showing decision paths for multiplexing audio, video, and graphical user interface (UI) elements, according to some embodiments. Each dimension indicates which components of the user experience can contribute bandwidth for use by other components while minimizing the perceptual degradation of the composite user front-of-screen experience. FIG. 4A thus is a diagrammatic representation of the three-dimensional decision logic used in class-based intelligent multiplexing in accordance with some embodiments. The three dimensions are latency 401, frame rate 402, and quality 403. The three-dimensional decision logic thus may adjust (i.e., trade off) frame size versus frame rate (latency) versus frame quality (quantization). In some embodiments, the control logic of the multiplexer makes decisions by trading off frame size for frame rate, which affects latency, or further trades image quality for either of, or a combination of, frame size and frame rate. In some embodiments, this trading off of variables is performed regardless of (i.e., independently of) whether the multiplex ensemble is approaching overflow.
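
Purely as a sketch, the three-dimensional trade-off might be walked as follows; the axis names and the can_reduce/reduce interface are assumptions, and a real controller would choose among the decision paths of FIG. 4A (and, in FIG. 4B, per application group) rather than walking the axes in a fixed order.

    def trade_off(state, needed_bits):
        """Free needed_bits by moving along the decision dimensions.

        Each reduction step moves along one axis of FIG. 4A: frame rate
        402, quality 403, or latency 401.
        """
        freed = 0
        for axis in ("frame_rate", "quality", "latency"):
            while freed < needed_bits and state.can_reduce(axis):
                freed += state.reduce(axis)   # e.g., skip a frame, raise
                                              # the quantizer, or delay delivery
        return freed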



FIG. 4B is a multi-dimensional control graph showing decision paths as in FIG. 4A above with the additional decision dimension of whole application groups. Class-based intelligent multiplexing thus may use four-dimensional decision logic in accordance with some embodiments; the four dimensions are latency 401, frame rate 402, quality 403, and application group 404.


The functionality described herein may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means, including any combination thereof.


Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).


Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).


Programmable logic may be fixed either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), or other memory device. The programmable logic may be fixed in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The programmable logic may be distributed as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).


The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the embodiments with various modifications as are suited to the particular uses contemplated.

Claims
  • 1. A method for prioritizing content classes in multiplexed content streams, comprising: at a server system: assigning a group of user sessions to a single modulator, wherein the user sessions comprise data in a plurality of classes, each class having a respective priority, wherein the plurality of classes includes, in order of priority from highest priority to lowest priority, audio data, video data, and user-interface graphical elements; computing an aggregate bandwidth of the group of user sessions for a first frame time; determining that the aggregate bandwidth for the first frame time exceeds a specified budget for the modulator; in response to determining that the aggregate bandwidth for the first frame time exceeds the specified budget, allocating bandwidth for the group of user sessions during the first frame time in accordance with the class priorities; using the modulator, multiplexing the group of user sessions onto a channel corresponding to the modulator, in accordance with the allocated bandwidth; and transmitting the multiplexed group of user sessions over a managed network.
  • 2. The method of claim 1, wherein the plurality of classes further includes transport control information and error-correction information, wherein the transport control information has a higher priority than the audio data and the error-correction information has a lower priority than the user-interface graphical elements.
  • 3. The method of claim 2, wherein allocating the bandwidth comprises: providing bandwidth for all transport control information in the group of user sessions for the first frame time; and dropping packets containing error-correction information for one or more user sessions of the group of user sessions.
  • 4. The method of claim 3, wherein allocating the bandwidth comprises dropping all packets containing error-correction information for the group of user sessions during the first frame time.
  • 5. The method of claim 4, wherein allocating the bandwidth further comprises: after dropping all packets containing error-correction information, determining that the aggregate bandwidth for the first frame time still exceeds the specified budget; and in response to determining that the aggregate bandwidth for the first frame time still exceeds the specified budget, reducing the size of one or more user sessions in the group of user sessions until the aggregate bandwidth for the first frame time does not exceed the specified budget, wherein reducing the size of the one or more user sessions is performed in accordance with the class priorities.
  • 6. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: determining that an importance of a user-interface graphical element satisfies a criterion; and in response to determining that the importance of the user-interface graphical element satisfies the criterion, allocating bandwidth to the user-interface graphical element.
  • 7. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: determining that an importance of a user-interface graphical element does not satisfy a criterion; and in response to determining that the importance of the user-interface graphical element does not satisfy the criterion, delaying the user-interface graphical element until a second frame time subsequent to the first frame time.
  • 8. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: determining that a count of previously dropped video frames does not satisfy a threshold; and in response to determining that the count of previously dropped video frames does not satisfy the threshold, dropping a video frame.
  • 9. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: determining that a count of previously dropped video frames satisfies a threshold; and in response to determining that the count of previously dropped video frames satisfies the threshold, increasing a quantization level of a video frame to reduce the quality of the video frame.
  • 10. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: in response to a determination that all possible size reductions for video data and user-interface graphical elements have been achieved or that size reductions for video data and user-interface graphical elements cannot be achieved, sending audio data for the first frame time in an earlier frame time that precedes the first frame time.
  • 11. The method of claim 5, wherein reducing the size of the one or more user sessions comprises, for a respective user session of the group of user sessions: in response to a first determination that all possible size reductions for video data and user-interface graphical elements have been achieved or that size reductions for video data and user-interface graphical elements cannot be achieved, dropping packets containing audio data and issuing an overflow alarm.
  • 12. The method of claim 1, wherein: allocating the bandwidth comprises: providing bandwidth for all audio data in the group of user sessions; delaying packets containing data for a user-interface graphical element until a second frame time subsequent to the first frame time; and allocating bandwidth freed by delaying the packets containing data for the user-interface graphical element to video data.
  • 13. The method of claim 1, wherein: allocating the bandwidth comprises: providing bandwidth for all audio data in the group of user sessions; reducing a frame rate for a user-interface graphical element; and allocating bandwidth freed by reducing the frame rate for the user-interface graphical element to video data.
  • 14. The method of claim 1, further comprising, at the server system, altering the class priorities in real time, the altering comprising prioritizing a user-interface graphical element higher than video data for a video frame; wherein allocating the bandwidth comprises: providing bandwidth for all audio data in the group of user sessions; dropping the video frame; and allocating bandwidth freed by dropping the video frame to the user-interface graphical element.
  • 15. The method of claim 14, wherein prioritizing the user-interface graphical element higher than the video data for the video frame, dropping the video frame, and allocating the bandwidth freed by dropping the video frame to the user-interface graphical element are performed in response to the user-interface graphical element containing a user-interface update generated based on user input.
  • 16. The method of claim 1, wherein: allocating the bandwidth comprises: providing bandwidth for all audio data in the group of user sessions; reducing the quality of an update for a user-interface graphical element; and allocating bandwidth freed by reducing the quality of the update for the user-interface graphical element to video data.
  • 17. The method of claim 1, wherein assigning the group of user sessions to the single modulator comprises: allocating bandwidth headroom to the channel corresponding to the modulator; adding successive user sessions for client devices in a client service area served by the modulator to the group of user sessions until the group is full; for each successive user session added to the group of user sessions, allocating a base amount of bandwidth to the channel; and as each successive user session is added to the group of user sessions, reducing the bandwidth headroom.
  • 18. The method of claim 1, wherein the modulator is a quadrature-amplitude-modulation (QAM) modulator.
  • 19. A server system, comprising: a plurality of modulators to multiplex respective groups of user sessions onto respective channels for transmission over a managed network, in accordance with allocated bandwidth; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: assigning a group of user sessions to a single modulator of the plurality of modulators, wherein the user sessions comprise data in a plurality of classes, each class having a respective priority, wherein the plurality of classes includes, in order of priority from highest priority to lowest priority, audio data, video data, and user-interface graphical elements; computing an aggregate bandwidth of the group of user sessions for a first frame time; determining that the aggregate bandwidth for the first frame time exceeds a specified budget for the modulator; and in response to determining that the aggregate bandwidth for the first frame time exceeds the specified budget, allocating bandwidth for the group of user sessions during the first frame time in accordance with the class priorities.
  • 20. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of a server system that further comprises a plurality of modulators to multiplex respective groups of user sessions onto respective channels for transmission over a managed network in accordance with allocated bandwidth, the one or more programs comprising instructions for: assigning a group of user sessions to a single modulator of the plurality of modulators, wherein the user sessions comprise data in a plurality of classes, each class having a respective priority, wherein the plurality of classes includes, in order of priority from highest priority to lowest priority, audio data, video data, and user-interface graphical elements; computing an aggregate bandwidth of the group of user sessions for a first frame time; determining that the aggregate bandwidth for the first frame time exceeds a specified budget for the modulator; and in response to determining that the aggregate bandwidth for the first frame time exceeds the specified budget, allocating bandwidth for the group of user sessions during the first frame time in accordance with the class priorities.
RELATED APPLICATION

This application claims priority to U.S. Provisional Patent Application No. 61/984,697, titled “Intelligent Multiplexing Using Class-Based, Multi-Dimensioned Decision Logic for Managed and Unmanaged Networks,” filed Apr. 25, 2014, which is incorporated by reference herein in its entirety.

TVHEAD, Inc. International Search Report, PCT/US2006/024194, dated Dec. 15, 2006, 4 pgs.
TVHEAD, Inc. International Search Report, PCT/US2006/024195, dated Nov. 29, 2006, 9 pgs.
TVHEAD, Inc. International Search Report, PCT/US2006/024196, dated Dec. 11, 2006, 4 pgs.
TVHEAD, Inc. International Search Report, PCT/US2006/024197, dated Nov. 28, 2006, 9 pgs.
Vernon, Dolby digital: audio coding for digital television and storage applications, Aug. 1999, 18 pgs.
Wang, A beat-pattern based error concealment scheme for music delivery with burst packet loss, IEEE International Conference on Multimedia and Expo, ICME, Aug. 22, 2001, 4 pgs.
Wang, A compressed domain beat detector using MP3 audio bitstream, Sep. 30, 2001, 9 pgs.
Wang, A multichannel audio coding algorithm for inter-channel redundancy removal, May 12-15, 2001, 6 pgs.
Wang, An excitation level based psychoacoustic model for audio compression, Oct. 30, 1999, 4 pgs.
Wang, Energy compaction property of the MDCT in comparison with other transforms, AES 109th Convention, Los Angeles, Sep. 22-25, 2000, 23 pgs.
Wang, Exploiting excess masking for audio compression, 17th International Conference on High Quality Audio Coding, Sep. 2-5, 1999, 4 pgs.
Wang, schemes for re-compressing mp3 audio bitstreams, Audio Engineering Society, 111th Convention Sep. 21-24, 2001, New York, 5 pgs.
Wang, Selected advances in audio compression and compressed domain processing, Aug. 2001, 68 pgs.
Wang, The impact of the relationship between MDCT and DFT on audio compression, Dec. 13-15, 2000, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentablity, PCT/US2013/036182, dated Oct. 14, 2014, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP08713106-6, dated Jun. 26, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP09713486.0, dated Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP11833486.1, dated Apr. 24, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Rules 161(2) & 162 EPC, EP13775121.0, dated Jan. 20, 2015, 3 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, dated Jul. 21, 2014, 3 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2014/041430, dated Oct. 9, 2014, 9 pgs.
Active Video Networks, Notice of Reasons for Rejection, JP2012-547318, dated Sep. 26, 2014, 7 pgs.
ActiveVideo Networks Inc., Certificate of Patent JP5675765, dated Jan. 9, 2015, 3 pgs.
ActiveVideo Networks Inc., Decision to refuse a European patent application (Art. 97(2) EPC, EP09820936.4, dated Feb. 20, 2015, 4 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP10754084.1, dated Feb. 10, 2015, 12 pgs.
ActiveVideo Networks Inc., Communication under Rule 71(3) EPC, Intention to Grant, EP08713106.6, dated Feb. 19, 2015, 12 pgs.
ActiveVideo Networks Inc., Examination Report No. 2, AU2011249132, dated May 29, 2015, 4 pgs.
Activevideo Networks Inc., Examination Report No. 2, AU2011315950, dated Jun. 25, 2015, 3 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027803, dated Jun. 24, 2015, 18 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027804, dated Jun. 25, 2015, 10 pgs.
ActiveVideo Networks B.V., Office Action, IL222830, dated Jun. 28, 2015, 7 pgs.
ActiveVideo Networks, Inc., Office Action, JP2013534034, dated Jun. 16, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2014-100460, dated Jan. 15, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2013-509016, dated Dec. 24, 2014, 11 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP08713106.6-1908, dated Aug. 5, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011258972, dated Nov. 19, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011315950, dated Dec. 17, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011249132, dated Jan. 7, 2016, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK10102800.4, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Certificate of Grant , EP13168509.11908, dated Sep. 30, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, JP2013534034, dated Jan. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, IL215133, dated Mar. 31, 2016, 1 pg.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14722897.7, dated Oct. 28, 2015, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP14722897.7, dated Jun. 29, 2016, 6 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP11738835.5, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14740004.8, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14736535.7, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP08713106.6-1908, dated Jul. 9, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP13168509.1-1908, dated Sep. 3, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, JP2014100460, dated Jul. 24, 2015, 5 pgs.
ActiveVideo Networks, Inc., Decision to Refuse a European Patent Application, EP08705578.6, dated Nov. 26, 2015, 10 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP13735906.3, dated Nov. 11, 2015, 10 pgs.
ActiveVideo Networks, Inc., Partial Supplementary Extended European Search Report, EP13775121.0, dated Jun. 14, 2016, 7 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2010-7019512, dated Jul. 15, 2015, 15 pgs.
ActiveVideo Networks, Inc., KIPO's 2nd-Notice of Preliminary Rejection, KR10-2010-7019512, dated Feb. 12, 2016, 5 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-20107021116, dated Jul. 13, 2015, 19 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2011-7024417, dated Feb. 18, 2016, 16 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT-US2015028072, dated Aug. 7, 2015, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2014030773, dated Sep. 15, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2014041430, dated Dec. 8, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2014041416, dated Dec. 8, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2015/000502, dated May 6, 2016, 8 pgs.
ActiveVideo, Communication Pursuant to Article-94(3) EPC, EP12767642.7, dated Sep. 4, 2015, 4 pgs.
ActiveVideo, Communication Pursuant to Article 94(3) EPC, EP10841764.3, dated Dec. 18, 2015, 6 pgs. dated Dec. 18, 2015.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) abd 70a(2) EP13735906.3, dated Nov. 27, 2015, 1 pg.
ActiveVideo, Notice of Reasons for Rejection, JP2013-509016, dated Dec. 3, 2015, 7 pgs.
ActiveVideo, Notice of German Patent, EP602008040474-9, dated Jan. 6, 2016, 4 pgs.
Avinity Systems B. V., Final Office Action, JP-2009-530298, dated Oct. 7, 2014, 8 pgs.
Avinity Systems B.V., Notice of Grant—JP2009530298, dated Apr. 12, 2016, 3 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Dec. 24, 2014, 14 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Sep. 24, 2014, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/438,617, dated Oct. 3, 2014, 19 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Nov. 5, 2014, 26 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Feb. 26, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Jan. 5, 2015, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Dec. 26, 2014, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Jan. 29, 2015, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated Mar. 16, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Jul. 10, 2015, 5 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 18, 2015, 11 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/438,617, dated May 22, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Apr. 23, 2015, 8 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Jul. 9, 2015, 28 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/262,674, dated May 21, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/262,674, dated Sep. 30, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 21, 2015, 6 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 5, 2015, 5 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Aug. 3, 2015, 18 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 25, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Aug. 12, 2015, 13 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Feb. 8, 2016, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Aug. 14, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/298,796, dated Sep. 11, 2015, 11 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 17, 2016, 9 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Aug. 1, 2016, 32 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Dec. 4, 2015, 30 pgs.
Craig, Decision on Appeal—Reversed-, U.S. Appl. No. 11/178,177, dated Feb. 25, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,177, dated Mar. 5, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,181, dated Feb. 13, 2015, 8 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Dec. 3, 2014, 19 pgs.
Dahlby, Office Action U.S. Appl. No. 12/651,203, dated Jul. 2, 2015, 25 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, dated Dec. 11, 2015, 25 pgs.
Gecsei, J., “Adaptation in Distributed Multimedia Systems,” IEEE Multimedia, IEEE Service Center, New York, NY, vol. 4, No. 2, Apr. 1, 1997, 10 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Dec. 8, 2014, 10 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, dated Nov. 28, 2014, 18 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Apr. 1, 2015, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, dated Jul. 2, 2015, 20 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,722, dated Feb. 17, 2016, 10 pgs.
Jacob, Bruce, “Memory Systems: Cache, DRAM, Disk,” Oct. 19, 2007, The Cache Layer, Chapter 22, p. 739.
Ohta, K., et al., “Selective Multimedia Access Protocol for Wireless Multimedia Communication,” Communications, Computers and Signal Processing, 1997, IEEE Pacific Rim Conference NCE Victoria, BC, Canada, Aug. 1997, vol. 1, 4 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Nov. 18, 2014, 9 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Mar. 2, 2015, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Dec. 19, 2014, 5 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Apr. 14, 2015, 5 pgs.
Tag Networks Inc, Decision to Grant a Patent, JP 2008-506474, dated Oct. 4, 2013, 5 pgs.
Wei, S., “QoS Tradeoffs Using an Application-Oriented Transport Protocol (AOTP) for Multimedia Applications Over IP.” Sep. 23-26, 1999, Proceedings of the Third International Conference on Computational Intelligence and Multimedia Applications, New Delhi, India, 5 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK14101604, dated Sep. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15785776.4, dated Dec. 8, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721482.6, dated Dec. 13, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721483.4, dated Dec. 15, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Under Rule 71(3), Intention to Grant, EP11833486.1, dated Apr. 21, 2017, 7 pgs.
ActiveVideo Networks, Inc., Decision to Refuse an EP Patent Application, EP 10754084.1, dated Nov. 3, 2016, 4 pgs.
ActiveVideo Networks, Inc. Notice of Reasons for Rejection, JP2015-159309, dated Aug. 29, 2016, 11 pgs.
ActiveVideo Networks, Inc. Denial of Entry of Amendment, JP2013-509016, dated Aug. 30, 2016, 7 pgs.
ActiveVideo Networks, Inc. Notice of Final Rejection, JP2013-509016, dated Aug. 30, 2016, 3 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2012-7031648, dated Mar. 27, 2017, 3 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2015028072, dated Nov. 1, 2016, 7 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2015/027803, dated Oct. 25, 2016, 8 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT-US2015/027804, dated Oct. 25, 2016, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/040547, dated Sep. 19, 2016, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/051283, dated Nov. 29, 2016, 10 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP13735906.3, dated Jul. 18, 2016, 5 pgs.
ActiveVideo, Intent to Grant, EP12767642.7, dated Jan. 2, 2017, 15 pgs.
Avinity Systems B.V., Decision to Refuse an EP Patent Application, EP07834561.8, dated Oct. 10, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Nov. 2, 2016, 20 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 31, 2017, 21 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated May 16, 2016, 23 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Oct. 20, 2016, 22 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/217,108, dated Apr. 13, 2016, 8 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/696,462, dated Feb. 8, 2017, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/139,166, dated Feb. 28, 2017, 10 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 14/217,108, dated Dec. 1, 2016, 9 pgs.
Dahlby, Advisory Action, U.S. Appl. No. 12/651,203, dated Nov. 21, 2016, 5 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Sep. 23, 2016, 28 pgs.
Hoeben, Final Office Action, U.S. Appl. No. 14/757,935, dated Apr. 12, 2017, 29 pgs.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 22, 2016, 14 pgs.
McElhatten, Final Office Action, U.S. Appl. No. 14/698,633, dated Aug. 18, 2016, 16 pgs.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 10, 2017, 15 pgs.
Related Publications (1)
  Number: US 2015/0312599 A1
  Date: Oct. 2015
  Country: US
Provisional Applications (1)
  Number: 61/984,697
  Date: Apr. 2014
  Country: US