Multiple-mode system and method for providing user selectable video content

Abstract
A method provides audiovisual content to a client device configured to be coupled to a display. The method detects a selection of a graphical element corresponding to a video content item. In response to detecting the selection of the graphical element, a transmission mode is determined. The transmission mode is a function of: (i) one or more decoding capabilities of the client device; (ii) a video encoding format of the video content item; (iii) whether the video content item should be displayed in a full-screen or a partial-screen format; and (iv) whether the client device is capable of overlaying image data into a video stream. Next, audiovisual data that includes the video content item is prepared for transmission according to the determined transmission mode. Finally, the prepared audiovisual data is transmitted from a server toward the client device, according to the determined transmission mode, for display on the display.
Description
TECHNICAL FIELD

The present invention relates to providing user-selectable content with a graphical user interface in a streaming multimedia system, and more particularly to a multiple-mode system with automatic control logic for determining which mode to implement based upon a plurality of characteristics, including the capabilities of the decoding device and the selected user-selectable multimedia content.


BACKGROUND ART

It is known in the prior art to provide streaming video content to a client device and to allow a user to select the content to be streamed. In cable television systems that include legacy set-top boxes, providing a graphical user interface with full-screen streaming video content has proven to be quite challenging, since legacy set-top boxes often have disparate operating capabilities. Most legacy set-top boxes are capable of decoding MPEG-2 streams, but these legacy systems have little capability for providing graphic overlays or for receiving graphical user interface (GUI) data in a stream separate from the streaming video content. Thus, these legacy systems generally either provide no graphical user interface during full-screen playback or provide some rudimentary overlays that are generated by the cable-television set-top box. As technology progresses, cable television systems have become more diverse, with multiple generations and even different brands of set-top boxes with widely varying capabilities. Hence, there is a need for an adaptive system that can provide advanced graphical user interface elements to all users while dynamically using the resources within the cable television network to provide a consistent user experience.


SUMMARY OF THE EMBODIMENTS

In accordance with a first embodiment of the invention, a method provides an audiovisual experience to an individual having a client device that is capable of decoding audiovisual data using a video codec, and an audiovisual display coupled to the client device for display of decoded audiovisual data. The method includes first providing the client device with a first graphical user interface (GUI) that indicates a plurality of videos and includes an input for selecting a video from the plurality of videos. Next, in response to receiving a selection of a video in the plurality of videos by the individual using the input, the method includes determining a transmission mode as a function of: 1) the decoding capabilities of the client device, 2) a video encoding format of the selected video, 3) whether the selected video should be displayed full screen or partial screen, and 4) whether the client device is capable of overlaying image data into a video stream. Then, in a server device remote from the client device, the method calls for preparing, for transmission according to the determined transmission mode, audiovisual data that include the selected video. Finally, the method requires transmitting the prepared audiovisual data, from the server device to the client device, according to the determined transmission mode, for display on the audiovisual display coupled to the client device.


In accordance with the first embodiment of the invention, several transmission modes are possible. According to a first transmission mode, when the video should be displayed in a partial area of the screen, the audiovisual data include the first GUI, and preparation further includes: rendering the first GUI according to a previously determined screen resolution, and stitching the selected video into the previously rendered first GUI, where stitching is a method of combining previously encoded video streams by any of a variety of suitable processes. According to a second transmission mode, when the selected video should be displayed full screen and the client device cannot decode the video encoding format of the selected video, preparation includes transcoding the selected video, where transcoding is a method of altering already encoded video by changing its format, its encoding, or both. According to a third transmission mode, when the selected video should be displayed full screen, the client device can decode the format of the selected video, and no image data will be overlaid on the selected video, preparation includes repackaging the selected video. According to a fourth transmission mode, when the selected video should be displayed full screen, the client device can decode the selected video, the client device is capable of overlaying image data onto the selected video, and the audiovisual data from the server include a second GUI that provides various GUI elements such as video playback controls, preparation includes rendering the second GUI according to the client overlay resolution. According to a fifth transmission mode, when the selected video should be displayed full screen, the client device can decode the selected video, and the client device is not capable of overlaying image data onto the selected video, preparation includes: rendering the second GUI according to a video resolution, video size, and video frame rate compatible with the client device; decoding a portion of the selected video; blending the rendered second GUI into the decoded portion; and re-encoding the blended portion according to the video encoding format.
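Expressed as a minimal, hypothetical sketch (the numbering and step names follow the five transmission modes of this summary; nothing here is the claimed implementation), the preparation performed for each mode might be tabulated as follows:

```python
def preparation_steps(mode: int) -> list[str]:
    """Preparation pipeline for each transmission mode of the summary
    (a hypothetical sketch; mode numbers follow this summary's ordering)."""
    pipelines = {
        1: ["render first GUI at the previously determined screen resolution",
            "stitch the selected video into the rendered first GUI"],
        2: ["transcode the selected video into a client-decodable format"],
        3: ["repackage the selected video (container swap only, no transcode)"],
        4: ["render second GUI at the client overlay resolution",
            "client overlays the GUI onto the video itself"],
        5: ["render second GUI at the video resolution, size, and frame rate",
            "decode a portion of the selected video",
            "blend the rendered second GUI into the decoded portion",
            "re-encode the blended portion in the original video format"],
    }
    return pipelines[mode]
```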


In accordance with a second embodiment of the invention, a computer program product provides an audiovisual experience to an individual having a client device that is capable of decoding audiovisual data using a video codec, and an audiovisual display coupled to the client device for display of decoded audiovisual data. The computer program product has a non-transitory computer-usable medium on which is stored computer program code for executing the above-described method in its various transmission modes.


To implement these methods and execute their program code, there is also disclosed a third embodiment: a computer system for providing an audiovisual experience to an individual having a client device that is capable of decoding audiovisual data using a video codec, and an audiovisual display coupled to the client device for display of decoded audiovisual data. The computer system has an application engine for providing a first graphical user interface (GUI) that indicates a plurality of videos and includes an input for selecting a video from the plurality of videos, and for providing a second GUI that includes video playback controls. The computer system also has control logic for determining a transmission mode in response to receiving a selection of a video in the plurality of videos by the individual using the input. Determining the transmission mode is a function of: 1) the decoding capabilities of the client device, 2) a video encoding format of the selected video, 3) whether the selected video should be displayed full screen or partial screen, and 4) whether the client device is capable of overlaying image data into a video stream. The computer system also has a transcoder for transcoding the selected video from a second encoding format into a first encoding format that the client device is capable of decoding, according to the determined transmission mode. The computer system also has a blender for blending the second GUI into the selected video using the first encoding format, according to the determined transmission mode. The computer system also has a stitcher for stitching the output of the application engine with the output of the transcoder and the blender, according to the determined transmission mode. The computer system also has a packager for packaging audiovisual data according to the determined transmission mode. Finally, the computer system has a transmitter for transmitting the packaged audiovisual data toward the client device, according to the determined transmission mode, for display on the audiovisual display.


The components of the computer system may be configured according to the transmission mode. Thus, according to a first transmission mode, when the video should be displayed partial screen, the audiovisual data further include the first GUI; the application engine is configured to render the first GUI according to a previously set screen resolution; the transcoder is configured to transcode the selected video; and the stitcher is configured to stitch the transcoded video into the rendered first GUI. According to a second transmission mode, the transcoder is configured to transcode the selected video when the selected video should be displayed full screen and the client device cannot decode the video encoding format of the selected video. According to a third transmission mode, the packager is configured to repackage the selected video when the selected video should be displayed full screen, the client device can decode the video encoding format of the selected video, and no image data should be overlaid on the selected video. According to a fourth transmission mode, the audiovisual data further include a second GUI that includes video playback controls, and the application engine is configured to render the second GUI according to a client overlay resolution, when the selected video should be displayed full screen, the client device can decode the video encoding format of the selected video, and the client device is capable of overlaying image data into the selected video. According to a fifth transmission mode, when the selected video should be displayed full screen, the client device can decode the video encoding format of the selected video, and the client device is not capable of overlaying image data into the selected video, the application engine is configured to render the second GUI according to a video resolution, video size, and video frame rate; the transcoder is configured to decode a portion of the selected video; the blender is configured to blend the rendered second GUI into the decoded portion; and the transcoder is further configured to re-encode the blended portion according to the video encoding format. It should be clear that not all of these components must be active in each transmission mode; therefore, operation of each of the transcoder, blender, stitcher, and packager may be optional according to the determined transmission mode.


In accordance with a fourth embodiment of the invention, a method is disclosed for streaming user-selected video content encoded in a first protocol format having a protocol container. The method requires first receiving a request for streaming the user-selected video content, and obtaining the user-selected video content from a first source. Next, the method calls for removing the protocol container from the user-selected video content and repackaging the user-selected video content into an MPEG-2 transport stream. Finally, the method requires transmitting the MPEG-2 transport stream with the user-selected video content encoded in the first protocol, wherein the first protocol is different from MPEG and the client device is capable of decoding the first protocol.


Variations on the fourth embodiment are contemplated. For example, the method may also include adapting the presentation and synchronization timing of the stream based upon the presentation and synchronization timing of the user-selected video content. The method may be performed within a cable television network. The first protocol container may be MP4, DASH, or HTTP, and the first protocol container and the first protocol may be the same.


There is also provided a fifth embodiment of the invention: a method for adaptation of a stream for streaming a user-selected video asset. This method includes first streaming a graphical user interface from a server to a client device, wherein the stream has a plurality of streaming characteristics. Next, the method includes receiving a user request for playback of encoded video content encoded with one or more different streaming characteristics. Then, the method includes generating graphical user interface elements in accordance with the one or more different streaming characteristics. Finally, the method includes combining the encoded video content and the generated graphical user interface elements to form an encoded transport stream. The user-requested encoded video content may have a picture size that is less than a full video frame, and the generated graphical user interface elements, when combined with the user-requested encoded video content, may form a complete video frame. A different streaming characteristic between the graphical user interface and the requested encoded video content may be the frame rate, and the generated graphical user interface elements may have the same frame rate as the requested encoded video content. Moreover, the generated graphical user interface elements may have the same sampling rate as the requested encoded video content.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features of embodiments will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:



FIG. 1 is an illustration of a screen layout;



FIG. 2 is an environment for implementation of at least one embodiment of the invention;



FIG. 3 is a flow chart that discloses the control logic sequence for switching between different modes of operation;



FIG. 4 shows the steps for repackaging and resynchronizing a full-frame video sequence;



FIG. 5 shows an exemplary functional architecture for implementing a multi-modal platform for providing user-selectable video content;



FIG. 5A shows the functional architecture for supporting mode 1, which is a partial screen video with a stitched graphical user interface;



FIG. 5B shows the functional architecture to support modes 2 and 5, which are the display of full-screen video with and without blended overlays;



FIG. 5C shows the functional architecture for mode 3, which is a full-screen pass through where encoded video content is repackaged and re-streamed;



FIG. 5D shows the functional architecture for supporting mode 4, which is a full-screen transcode due to a decoding limitation of the client device; and



FIG. 6 shows the source architecture with overlays showing structural elements.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Definitions. As used in this description and the accompanying claims, the following terms shall have the meanings indicated, unless the context otherwise requires: The term “functional block” shall mean a function that may be performed by a hardware element either alone or in combination with software. The term “module” shall refer either to hardware or to a hardware and software combination wherein the software is operational on the hardware.


Embodiments of the present invention are directed to network transmission of user-selected multimedia content (video, audio, and audiovisual content). The selected multimedia content may be data encoded with a codec. Various embodiments also use one or more container protocols for putting encoded data into a format for transmission in a network, and use transport protocols for transferring the containers of encoded video and audio data to a client device within a network. Many of the embodiments described below mention MPEG, MPEG-2 transport streams, and H.264 encoding and transcoding. These descriptions are meant to be exemplary, and the present invention should not be seen as limited to only these protocols, as other encoding, container, and transport protocols may be used without deviating from the intended scope of the invention. Additionally, embodiments of the present invention operate on multimedia content. For simplicity, the disclosed embodiments in general describe video content; however, the embodiments may readily be adapted for the distribution of user-selectable audio content and user-selectable audiovisual content.



FIG. 1 is an illustration of a frame layout that includes graphical user interface elements for allowing a subscriber to select video content to be distributed to a client device in a content distribution network, such as a cable television network. The frame layout provides the location of various graphical user interface elements and video content that are to be added to the frame layout. In FIG. 1, the graphical user interface elements include buttons for the selection of SD (standard definition) and HD (high definition) content for both renting and purchasing. The video content elements to be added include a scaled movie preview, which is full-motion video, along with text-based video content elements such as “Movie Title”, “Actor, Year, Duration, Director”, and “Synopsis”. It should be understood that all of these elements are video elements, since the content distribution network transmits video content in a video stream. As a result, even static elements are displayed as frames of video. In a standard content distribution network, such as a cable television network, video content is distributed via a transport protocol, using a container protocol, wherein the video content is encoded in an encoded format (e.g., MPEG-2, H.264, etc.).


As should be understood by a person with ordinary skill in the art, the content distribution network includes a multitude of components, including a central platform that includes a plurality of processors for serving content. The processors generally perform the functions of providing broadcast video, user-interface guides, interactive content, and video-on-demand. The processors that are part of a content distribution platform are coupled to a number of nodes that broadcast and stream on-demand video content to a plurality of subscriber client devices. The subscriber client devices may include set-top boxes, tablets, televisions, and other electronic communications devices. Each client device has certain capabilities based upon both the hardware and software that are available to the client device. For example, disparate client devices may have different processors, memory, codecs, and capabilities to download and execute programs. In a cable television environment, most, if not all, devices can access MPEG-2 transport streams and decode MPEG-2 elementary streams. Some devices may have more advanced capabilities, including a local operating system, software components, and the ability to download and execute additional programs. Further, client devices may be able to receive and work with different transport protocols, such as UDP, HLS, HTTP, MPEG-DASH, and Smooth Streaming; work with different content containers, such as MP4, MPEG-2 transport stream, and MPEG-2 program stream; and decode different codecs, including H.264 and MPEG-2.



FIG. 2 shows an exemplary environment including a content distribution platform for providing a multi-modal operation of selectable video content to be streamed to a client device. The platform is a structure that includes a plurality of components. The platform includes an application engine for selection of a graphical user interface to be provided to a client device in response to a request from the client device. The application engine responds to requests from the client device for content. For example, the application engine may include an HTML5 application that defines a graphical user interface (GUI). The GUI may include a frame layout along with position information for insertion of content, for example as shown in FIG. 1. The layout includes a plurality of blocks (movie title, movie preview (video), SD price etc.) for video elements to be inserted into the layout. Thus, encoded video elements, such as MPEG encoded fragments, may be referenced at the block locations.
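Purely for illustration, a frame layout of the kind such an HTML5 application might define for the screen of FIG. 1 could be represented as a simple data structure; every block name, coordinate, and the fragment reference scheme below is hypothetical:

```python
# Hypothetical frame layout for the FIG. 1 screen; names, coordinates, and
# the "mpeg-fragment://" reference scheme are illustrative only.
FRAME_LAYOUT = {
    "resolution": (1280, 720),
    "blocks": [
        {"name": "movie_preview", "x": 60,  "y": 120, "w": 480, "h": 270,
         "source": "mpeg-fragment://previews/title-123"},  # scaled full-motion video
        {"name": "movie_title",   "x": 600, "y": 120, "w": 560, "h": 60},
        {"name": "synopsis",      "x": 600, "y": 200, "w": 560, "h": 200},
        {"name": "rent_sd",       "x": 600, "y": 430, "w": 120, "h": 48},  # button fragment
        {"name": "buy_hd",        "x": 740, "y": 430, "w": 120, "h": 48},  # button fragment
    ],
}
```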


The HTML5 application keeps track of state information regarding the elements of the graphical user interface. Thus, the HTML5 application can be reused for presenting different content to the user in a graphical display. The HTML5 application may access the encoded elements and may cause elements that are not already properly encoded to be encoded in an MPEG encoder. The MPEG elements of the HTML5 application may include MPEG fragments of buttons, sliders, switches, etc. that may be part of the HTML5 application. The MPEG elements may be static images and scaled versions of video content, such as movie previews.


Additionally, the HTML5 application may include encoded fragments that represent the layout. For example, the layout may be considered a background and therefore the layout may include a plurality of encoded elements to represent the layout. Similar content distribution platforms that can be used with embodiments of the present invention can be found in U.S. patent application Ser. No. 12/008,697 and U.S. patent application Ser. No. 13/445,104 both of which are incorporated herein by reference in their entirety.


The HTML5 application may also include a reference to video content from an outside source or stored at a different location. As shown in FIG. 2, there is a content source, which may be on a content source server. In response to a request from the content distribution platform, content from the content source server is provided to the content distribution platform.


The platform determines if the content needs to be transcoded based upon the capabilities of the client device. If necessary, the content from the content server is provided to a transcoder. The transcoder then scales and/or transcodes the video from the content source, so that the video from the content source can be stitched together with other encoded elements in a compositor module. If the content does not require scaling or transcoding, the content will be provided directly to a compositor. Whether a source is transcoded is determined by control logic that is part of the platform and will be explained in further detail below. The compositor receives encoded fragments, such as encoded MPEG fragments, and may also receive encoded video content. The compositor takes the various encoded elements and creates an MPEG elementary stream based upon the encoded elements and the frame layout from the HTML5 application. If a request for full-frame encoded video content is received, the compositor may receive the encoded video content in its native encoded format and may package the encoded video content in an MPEG transport stream without requiring the encoded video content to be transcoded. For example, if the client device is capable of decoding an H.264 encoded file and a full-screen video is requested from a source, the H.264 video will not be transcoded and will only be encapsulated into an MPEG-2 transport stream for transmission to the client device. The type of request and the content to be presented, along with the available processing resources at both the server and the client device, are used in determining the mode of operation and the format of the data to be transmitted to a requesting client device.


The client device in general includes an MPEG decoder and optionally may include a bitmap decoder for an overlay plane. The MPEG decoder receives an MPEG transport stream that contains one or more MPEG elementary streams or other encoded streams (e.g., H.264, etc.). The MPEG decoder decodes the encoded stream and presents the output to a display device. The bitmap decoder receives a bitmap overlay stream separate from the full-screen MPEG video content. The client device receives the bitmap overlay and displays the bitmap overlay on top of the decoded MPEG video content. The bitmap and the decoded MPEG video may be blended together in the spatial domain by the client device, or elements of the bitmap may replace elements of the spatially decoded MPEG video content. Thus, elements of a decoded MPEG video frame may be replaced by the bitmap, where the bitmap represents a graphical user interface. FIG. 2 represents one version of a content distribution platform and should not be viewed as limiting the scope of the present invention.
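The spatial-domain blend mentioned above is, in essence, per-pixel alpha compositing. A minimal sketch, assuming the decoded frame, the bitmap overlay, and an alpha mask are available as arrays (function and parameter names are illustrative):

```python
import numpy as np

def blend_overlay(frame: np.ndarray, overlay: np.ndarray,
                  alpha: np.ndarray) -> np.ndarray:
    """Blend a bitmap overlay onto a decoded video frame in the spatial
    domain: out = alpha * overlay + (1 - alpha) * frame, per pixel.
    frame and overlay are HxWx3 arrays; alpha is an HxW mask in [0, 1]."""
    a = alpha[..., None].astype(np.float32)      # broadcast over color channels
    blended = a * overlay + (1.0 - a) * frame    # alpha == 1 replaces pixels outright
    return blended.astype(frame.dtype)
```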


Thus, resources at both the server side and client side are relevant to determining how to efficiently process requests from client devices. As can be imagined, the user can select from a plurality of different content, and therefore the content distribution platform will operate in one of a plurality of different modes. A first mode provides a graphical user interface for selection of content to be displayed (e.g., movies, television shows, specials, etc.), which may either be static or have less-than-full-frame video streaming, using an HTML5 application wherein MPEG elements are stitched into a layout to form MPEG video frames. A second mode provides a full-screen trick-play mode wherein full-screen video is overlaid with graphical user interface controls (e.g., fast forward, rewind, pause, stop, etc.), where at least partial decoding and re-encoding of the video content and blending occur. A third mode provides a full-screen display wherein the video content is provided to the client device for full-screen playback in an encoded format compatible with the client device, without transcoding. A fourth mode provides a full-screen transcode of the source material, dependent in part on the client device's decoding capabilities. A fifth mode provides a full-screen trick-play mode wherein the client device performs the blending of separately transmitted, encoded graphical controls. In various embodiments of the present invention, these modes of operation can be selectively and automatically switched between based upon both requests from the client device and the capabilities of the client device, wherein the control is performed on the server side.



FIG. 3 is a flow chart showing the operation of control processing logic within the content distribution platform. The control processing logic queries whether the content requested by the client device contains visible video. If the answer to the query is no, the graphical user interface is processed at the server, wherein encoded elements, such as MPEG elements, are selected based upon a frame layout of an HTML5 application and stitched together to form a full (static) video frame that can then be transmitted to the client device and displayed on a display device.


If video is to be presented on the screen, the logic queries whether full-screen video is to be shown. If there is only a partial screen of video to be shown, the server switches to mode 1 and identifies an HTML5 application and frame layout. It then accesses source content that is scaled and stitched together to form a series of MPEG-encoded video frames defining an MPEG elementary stream. If, however, the video content that has been selected by the user is full-screen video, for example if the user indicates a desire to view a movie, TV show, full-screen video clip, or full-screen promotional content, the logic performs further queries.


During the establishment of a network session between a requesting client device and the server, the client device identifies itself and thus its capabilities. The client capabilities may be transmitted during the communication or may be stored in a user/device profile that can be accessed by the server and the control logic. The logic thus queries whether the video properties of the selected video content are compatible with the capabilities of the client device. For example, if the client device can only decode MPEG-2 and the selected video content is encoded using H.264, the logic switches to mode 4 and a full-screen transcode is performed, so that the selected video content will be transmitted to the client device in a format that the client device can decode (e.g., MPEG-2).


If the video properties of the selected video content are compatible with the client device's capabilities, the logic determines if a graphical user interface element is required to be placed on top of the video. A graphical user interface element may be required based upon signals received from the client device. For example, a user may have a full-screen video playing and may use a remote control coupled to the client device to pause or fast-forward the video content. This client-device-initiated signal informs the control logic that graphical user interface elements should be placed on top of the full-screen video to enable trick-play features (fast forward, rewind, play, pause, skip, etc.).


If user elements are not to be placed on top of the video, the logic will initiate mode 3, which provides a video pass-through capability. In said situation, the client device does not require content to be transcoded and therefore the content will remain in its native format. The video content will then be repackaged and streamed to the client device. Repackaging and streaming will be explained in further detail with respect to FIG. 4.


If the logic determines that the graphical user interface is to be placed on top of the video, the logic then queries whether the client and the control protocol support a local overlay. Thus, the control logic looks at the control protocol as implemented by the platform and the connecting network between the control logic (i.e., the server) and the client device. Certain networks will provide for more than one channel of communication with a client device, so that control data may be transmitted separately from MPEG elementary stream data (the requested video content). The control data may be transmitted using a different signaling system or may be provided in a separate MPEG elementary stream. Similarly, the client device must be capable of receiving instructions that a local overlay should be created.


As a result, if the control logic determines that the client and the control protocol support a local overlay, either graphical or full-motion video, the control logic switches to mode 5. In mode 5, the system renders GUI elements in accordance with the overlay resolution, size, and frame rate for the video content. For example, if the full-screen video is being rendered as 720p, the GUI elements will be scaled and rendered as 720p elements. These graphical user interface elements may be transmitted as encoded fragments. In such an embodiment, the client device is capable of receiving encoded fragments or spatially rendered fragments and includes local software for stitching the encoded fragments or spatially rendered fragments with the full-screen encoded video content. The client device may have a predetermined layout using templates that define screen locations and other parameters for adding in the GUI elements for the interface, or the platform may transmit a layout for the interface. The client device will then insert the GUI elements onto the full-screen video. This may be performed either in the encoded domain or in the spatial domain. If the client device performs the combination in the spatial domain, blending may occur wherein the GUI elements and the video content may be blended using an alpha layer.


If the client and control protocol do not support a local overlay, the control logic will cause the graphical user interface elements to be added on the server side (at the platform) in mode 2. In order to efficiently use resources, only a partial decode of the selected encoded video content may occur. For example, macroblock locations of the video content that will include GUI elements may be decoded to the spatial domain and then alpha blended together with the graphical user interface elements. The GUI elements may be stored locally or retrieved from a remote location. The GUI elements may be saved at a high resolution and then scaled as needed to meet the requirements of the respective client. The blended elements will then be encoded, and a compatible MPEG elementary stream will be formed and placed into an MPEG-2 transport stream. The platform sends the MPEG-2 transport stream to the client device, and the client device decodes and displays the user-selected video content with embedded user-interface overlays.
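The decision sequence of FIG. 3, as described in the preceding paragraphs, can be summarized in a short selection function. This is a sketch of the described control logic only, with hypothetical parameter names, using the mode numbering of FIGS. 3 and 5A-5D:

```python
def select_mode(has_visible_video: bool, full_screen: bool,
                client_can_decode: bool, overlay_required: bool,
                supports_local_overlay: bool):
    """Server-side mode selection following the FIG. 3 control sequence
    (a hypothetical sketch; parameter names are illustrative)."""
    if not has_visible_video:
        return "static GUI"  # stitch MPEG elements into a static full frame
    if not full_screen:
        return 1             # partial-screen video stitched into the HTML5 layout
    if not client_can_decode:
        return 4             # full-screen transcode to a client-decodable format
    if not overlay_required:
        return 3             # pass-through: repackage and re-stream only
    if supports_local_overlay:
        return 5             # client overlays server-rendered GUI elements
    return 2                 # server-side partial decode, blend, and re-encode
```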


One benefit of the present multi-modal system is that the number of transcoding steps is reduced compared to existing content distribution platforms, especially those coupled to cable television networks. Rather than having to transcode encoded video content selected by a user, the content may be passed through in its native encoding, provided that the client device is capable of decoding the native format. In this configuration, processor resources are conserved on the content distribution platform, and therefore more video streams can be processed simultaneously. Additionally, the present multi-modal system allows all of the user-interface graphics to be centrally controlled by the content distribution platform. The multi-modal system passes the graphical user interface elements from the platform to the client device or incorporates the graphical user interface elements into the video that is being streamed to the client. A further advantage of the present multi-modal system is that content providers do not need to re-author their applications (YouTube, Netflix, Amazon, etc.) or transcode their content for operation within this system. The native applications (e.g., YouTube, Netflix, Amazon) can be run on the content distribution platform in their native operating system and language (Java, iOS, Linux, etc.), and the content will either be passed through or transcoded without intervention by the content provider. Yet another advantage of the present multi-modal system is that full-screen playback, and control of full-screen playback, can be managed by the content distribution platform without requiring client devices to take over control. Thus, the client device becomes another processing element in load balancing, and both legacy client devices and modern client devices can be serviced with comparable content and consistent graphical presentations. Further, the client devices on the content distribution network do not need to be updated before more advanced features can be presented to a user. The content distribution system will automatically adapt to the capabilities of the client devices within the network, and therefore updates can be made on a rolling basis.


As mentioned above, video content can be repackaged and re-streamed without requiring transcoding. Thus, for a content source having video content for streaming in a particular container format and with audio and video synchronization information, the source video container format, such as MP4, DASH, HTTP, Smooth Streaming, or HLS, is removed so that only the actual encoded video and audio data remain. The encoded audio and video data are repackaged in a transport container that is compatible with the content distribution network. For example, the compressed video and audio content is repackaged into an MPEG-2 transport stream container. Additionally, the audio and video synchronization data are preserved, and the video stream from the content distribution platform to the client device is adapted based upon the audio and video synchronization data so that the stream timing complies with the transport protocol specifications (e.g., MPEG transport stream specifications).



FIG. 4 shows an example of the repackaging and re-streaming process. Element 400 shows video content from a content source in its native format. The video content has a first container format containing a video bit stream, an audio bit stream, synchronization information, and header information. In element 410, the container is removed from the video content, and the audio and video bit streams are extracted and stored to a memory location. Additionally, the synchronization data is extracted and stored to a memory location. Other information, such as subtitles for the bit stream, may also be extracted and saved. In element 420, a new container is created, and synchronization is redone using the new container's format. The stream is packetized and then transmitted to the client device, observing packet-layer jitter and delay requirements as caused by the network infrastructure. Element 430 shows a representation of video frames that have been decoded from a compressed format and transformed into the spatial domain for presentation on a video playback display associated with the requesting client device.
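As a rough, self-contained illustration of container-only repackaging, the same effect can be obtained with a remux that copies the elementary streams into an MPEG-2 transport stream without re-encoding. The sketch below shells out to ffmpeg, which is an assumption made only for illustration; the platform described here performs its own demultiplexing, resynchronization, and packetization rather than invoking an external tool:

```python
import subprocess

def repackage_to_mpegts(src: str, dst: str) -> None:
    """Remux encoded audio/video into an MPEG-2 transport stream, in the
    spirit of FIG. 4 elements 400-420 (illustrative only).
    '-c copy' copies the elementary streams without transcoding, and
    '-f mpegts' selects the new container, for which the timing
    (PTS/DTS) is rewritten to match the container's requirements."""
    subprocess.run(["ffmpeg", "-i", src, "-c", "copy", "-f", "mpegts", dst],
                   check=True)

# Example: repackage_to_mpegts("movie.mp4", "movie.ts")
```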


As mentioned above, a stream may be adapted for streaming a user-selected video asset. A stream may be adapted when a graphical user interface, having certain streaming characteristics, is streamed from a server to a client, and a user requests playback of video content encoded with streaming characteristics different from those of the graphical user interface. In one embodiment, a method includes first streaming a graphical user interface from a server to a client device, wherein the stream has a plurality of streaming characteristics. Next, the method includes receiving a user request for playback of encoded video content encoded with one or more streaming characteristics different from the plurality of streaming characteristics of the graphical user interface stream. Then, the method includes generating graphical user interface elements in accordance with the one or more different streaming characteristics. Finally, the method includes combining the encoded video content and the generated graphical user interface elements to form an encoded transport stream. The user-requested encoded video content may have a picture size that is less than a full video frame, and the generated user interface elements, when combined with the user-requested encoded video content, may form a complete video frame. A different streaming characteristic between the graphical user interface and the user-requested encoded video content may be the frame rate, and in one embodiment, the generated graphical user interface elements may have the same frame rate as the user-requested encoded video content. Moreover, the generated graphical user interface elements may have the same sampling rate as the user-requested encoded video content.
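A minimal sketch of one such adaptation, in which GUI frames are regenerated at the requested video's frame rate by repeating or dropping rendered frames (names are illustrative, and a real implementation would resample timestamps rather than frame lists):

```python
def adapt_gui_frame_rate(gui_frames: list, gui_fps: float,
                         video_fps: float) -> list:
    """Resample rendered GUI frames so the generated GUI elements share the
    frame rate of the requested video content (a hypothetical sketch)."""
    n_out = round(len(gui_frames) * video_fps / gui_fps)
    # Map each output frame index back to the nearest source GUI frame,
    # repeating frames when video_fps > gui_fps and dropping them otherwise.
    return [gui_frames[min(int(i * gui_fps / video_fps), len(gui_frames) - 1)]
            for i in range(n_out)]
```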



FIG. 5 shows an architecture of one embodiment of the content distribution platform in a network that can support all of the modes of delivering a graphical user interface and video content to a client device. As shown, the architecture presents functional blocks and data paths for video, audio, and graphical user interface elements. To support each specific mode, only a subset of the possible data paths and functional blocks is required.


A user may select between various types of content and the associated screens (VOD screen, content selection screen, video content information screen, ordering screen, full-screen playback, etc.). In response to a user selection, control logic automatically selects an appropriate application and corresponding frame layout for presentation of the desired content/screen to the client device, selects an appropriate mode of operation, and controls the various functional blocks. The control logic of the content distribution platform determines how to process the received request based upon the capacity of the network and the capabilities of the client device. The server-side architecture of FIG. 5 includes several functional blocks controlled by the control logic, including a source content network, an application execution environment, an audio encoder, a bitmap (image) encoder, an audiovisual (MPEG) encoder, a transcoder, a blender, a stitcher, and a packager/multiplexer/streamer. These functional blocks are now described in more detail.


The source content network provides various audio, video and audiovisual content to be supplied to the client device as part of an audiovisual experience. Content is provided by various content providers, each of whom may have a different storage and streaming format for their content. In particular, the content may be encoded in a variety of different formats, including MPEG-4, Flash video, AVI, RTMP, MKV, and others.


The architecture includes an application execution environment for generation of a graphical user interface in response to requests received from a client device. Based on the selected application, the application execution environment selects the appropriate graphical user interface (frame layout along with references/addresses of elements to be stitched into the frame layout) and provides audio and screen updates to the data paths of the architecture. For example, the state of a button may have changed in response to action by a user, and therefore the application will have a screen update for the graphics of the button and perhaps play an audible sound.


The application execution environment (AEE) requests video elements, such as encoded fragments, for example encoded MPEG fragments, for incorporation into the frame layout from one or more sources, including a source content network and one or more source content servers. The AEE may provide actual spatial data for the screen updates, or the AEE may provide pointers or addresses to content that is to be combined with a frame layout. Other examples of content that may be combined under the direction of the AEE are full-motion video, such as MPEG-2, and animated graphic elements encoded as MPEG-2. As MPEG-2 does not provide alpha channel information, which is useful for overlaying said content, the AEE can embed said alpha channel information either as a coded frame that is not directly displayed or as non-displaying coded information embedded in a portion of a frame (e.g., as an MPEG custom format). In either case, whether a full-frame alpha channel mask or an alpha channel mask embedded in a portion of a frame, the alpha mask information is extracted from the stream by the client device, as illustrated in FIG. 5, upon detection of an alpha channel mask identifier, and an empty frame or empty macroblocks are substituted by the receiving software of the client prior to the decoding of said video information. The application execution environment may include links to one or more graphical elements that may be either in the encoded domain (MPEG fragments, i.e., groups of encoded macroblocks) or the spatial domain.


It should be recognized that a screen update may be either a full frame or part of a frame. An API, such as the OpenGL API, can be used, where scene information (including bitmaps) is exchanged. In other embodiments, the screen update may be in a format where an abstract scene description (application description) is passed to the modules along with references to bitmaps/encoded fragments and textures.


The audio encoder receives audio output provided by the executing application and encodes it according to an audio encoding format supported by the client device. The audio format may be, for example, MP3, AAC, AC3, or others known in the art. The audio encoder may be employed if the client device is capable of mixing audio into a video stream; if not, then the audio encoder is not employed, and all audio received by the client device from the content distribution platform occurs as an integral part of an audiovisual stream.


The image encoder receives screen updates provided by the executing application and encodes them according to an image encoding format supported by the client device. The image format may be, for example, PNG, BMP, GIF, or JPG, or others known in the art. The image encoder may be employed if the client device is capable of overlaying image graphics onto a video stream. The images are directed to the client device through the network infrastructure. The images are received by the client device, which combines them with decoded MPEG content so that the bitmap is used as an overlay, and blending may be efficiently done in the spatial domain.


In some embodiments, the method comprises the step of adding a tag, such as a URL or other identifier, to graphic fragments for identification of said fragments. This enables the tracking of data relating to the frequency of use of a given fragment, and on this basis a certain priority can be given to a fragment, which further determines how long said fragment will remain in a cache. Furthermore, a method is provided for associating data relating to where said fragments are used on a client display, in order to reuse said fragments correctly in other parts of the respective user interface of said client display.


In some embodiments, systems for performing the methods described herein include fast-access memory, such as a cache memory, for temporarily storing encoded fragments. By temporarily storing and re-using said graphic fragments and by combining them with other elements of the user interface, a highly efficient personalized audiovisual experience can be generated using relatively little computational power and with short reaction times.
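A hypothetical sketch of such a fragment cache, in which a tag (for example, a URL) identifies each encoded fragment and frequency of use determines how long a fragment is retained; the class and its eviction policy are illustrative, not the claimed design:

```python
class FragmentCache:
    """Cache of encoded graphic fragments keyed by tag, e.g., a URL
    (hypothetical sketch). Frequently used fragments gain priority
    and therefore remain in the cache longer."""

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.entries = {}  # tag -> [fragment, use_count]

    def get(self, tag):
        entry = self.entries.get(tag)
        if entry is None:
            return None
        entry[1] += 1  # more frequent use -> higher retention priority
        return entry[0]

    def put(self, tag, fragment):
        if tag not in self.entries and len(self.entries) >= self.capacity:
            # Evict the least frequently used fragment.
            victim = min(self.entries, key=lambda t: self.entries[t][1])
            del self.entries[victim]
        self.entries[tag] = [fragment, 1]
```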


The MPEG encoder receives both audio and screen updates provided by the executing application and encodes them into a stitchable MPEG format. The MPEG encoder may be employed if the user has selected a mode in which a selected video is displayed on a partial screen only. The architecture also includes a transcoder. The transcoder receives audio and video content from the source content network and transcodes it when the source content is in an audio or video format that is not supported by the client device. Once transcoded, if required, the audiovisual content may be blended using a blender that receives graphical user interface audio and screen updates from the application execution environment. The output of the transcoder and blender is also in a stitchable MPEG format. If the application requires blending of screen elements or transcoding, the screen elements will be retrieved from a source (the application execution environment or the content distribution network), and the screen elements may be transcoded into stitchable MPEG elements or resized for the frame layout (e.g., from 480p to 200×200 pixels).


A stitching module receives stitchable MPEG from the MPEG encoder and from the transcoder and blender, and stitches them into a standards-compliant MPEG transport stream. Suppose the application changes the state of a button in response to a user input. Then the graphical element for the changed state of the button will be provided to the stitching module if the graphic is already an MPEG fragment, or if the graphical element is spatially encoded, the graphical element will be encoded as an MPEG fragment and passed to the stitching module that stitches together the frame layout. The MPEG fragments and the frame layout may be stitched together to form a complete MPEG frame in the stitching module.


The complete MPEG frames are packaged into an MPEG elementary stream and then into an MPEG transport stream container in a packaging and multiplexing stream module. More than one MPEG elementary stream may be multiplexed together and there may be multiple audio and/or video streams. It should be recognized that the graphical user interface elements that are to be placed on top of a video element (animation, scaled movie trailer, other partial video frame content) can be sent either as an overlay graphic to the client (e.g., a bitmap) or the overlay graphical user interface elements can be blended with the video element in the transcoding and blending module. The packaged MPEG transport stream (e.g., MPEG2 transport stream) is then sent through the network infrastructure to the client device. The client device will receive the MPEG transport stream and decode the MPEG elementary streams for display on a display device.



FIGS. 5A-5D show the functional blocks and data paths that may be used for each of the modes of operation. FIG. 5A shows the functional architecture for supporting mode 1, which is a partial screen video with a stitched graphical user interface. FIG. 5B shows the functional architecture to support modes 2 and 5, which are the display of full-screen video with and without blended overlays. FIG. 5C shows the functional architecture for mode 3, which is a full-screen pass through where encoded video content is repackaged and re-streamed. FIG. 5D shows the functional architecture for supporting mode 4, which is a full screen transcode due to a decoding limitation of the client device.



FIG. 5A is now described in more detail with respect to mode 1 and the operation of its relevant functional blocks. In mode 1, the user has requested an application that provides partial-screen video with a stitched graphical user interface. In this case, the application execution environment provides the graphical user interface, including the screen updates and audio updates that make up the portion of the screen not occupied by the video, while the partial-screen video itself is provided by the source content network. The screen updates are provided either to the image encoder (if the client is capable of performing graphics overlay) or to the MPEG encoder (if the client is incapable of performing graphics overlay). Video controls that overlay the video itself, for example to permit a trick-play mode, may be provided by the application execution environment to the transcoder and blended into the video received from the source content network. The encoded screen updates and the transcoded and blended video are stitched together to form MPEG video frames, which are packaged, multiplexed with audio, and streamed to the client. If the client is capable of performing a graphics overlay function, then the transcoded and blended video is provided separately from the graphical user interface screen updates.



FIG. 5B provides the functional architecture for providing full-screen video content to a requesting client device wherein the capabilities of the client device do not permit the client device to decode the content of the selected full-screen video in its native encoding scheme. Thus, the control logic determines that either mode 2 or mode 5 is required. When full-screen video is requested by a user through a request signal from the client device, the stitching components of the functional architecture of FIG. 5 are not used. Instead, the control logic evaluates whether the selected video content to be displayed on the display device through the client device of the user is in a format that is compatible with the decoder of the client device. In the scenario wherein the client device cannot decode the native format of the content retrieved from the content distribution network, the content is provided either to a blending module (if the requested full-screen video content is to have a graphical user interface overlay, e.g., trick-play controls or captions) and then to a transcoding module, or directly to the transcoding module. The transcoding module transcodes the full-screen video content from the native format to a format that can be decoded by the requesting client device. During display of a transcoded full-screen video, a user may use a device such as a remote control to request a graphical user interface for changing a parameter of the full-motion video. In response to such a request, the application execution environment will send screen update commands to the transcode and blend module. The transcode and blend module will first decode the full-screen video into the spatial domain and obtain the graphical elements for the graphical user interface overlay. It will then alpha blend the graphical elements with the relevant portion of the underlying full-motion video, and finally it will take the spatial-domain full-motion video with the graphical user interface overlaid and encode the data using an encoding format that can be decoded by the client device. The transcoded video data, with or without the overlay, is presented to a repackaging, re-streaming, and re-multiplexing module. That module encapsulates the encoded video content using a container and transport protocol that can be transmitted through the network infrastructure and be extracted and decoded by the client device.



FIG. 5C shows the functional architecture wherein selected full-screen video content is passed through the system to the client device without transcoding. In modes 3 and 6 of operation, the user selects full-screen playback content, and the control logic determines that the selected video content is in an encoding format that can be decoded by the requesting client device. Thus, the full-screen video does not need to be transcoded and can simply be passed to the repackaging, re-multiplexing, and re-streaming module. The repackaging, re-multiplexing, and re-streaming module performs the functions described with respect to FIG. 4 and can further multiplex multiple audio and video streams together (multiple MPEG elementary streams, H.264 streams, AAC audio, AC3 audio, MPEG-2 audio). The full-screen pass-through can also enable overlays where the client device is capable of receiving separate bitmap-encoded data, JPEG, GIF, or other data in an encoded data format for a graphical user interface element. Thus, the control logic confirms whether the client device can add overlays before initiating this mode. The full-screen video content can be passed through from the content distribution network to the client device without transcoding the full-screen video content. The client device thus receives both the full-screen video content and any required overlays, removes the video content from its container, and decodes and displays the video content on the user's display device. In certain embodiments this can be further optimized. For example, in mode 6 of operation, if the client has the capability to parse the container format of the source video, then there is no need to repackage, and this step can be omitted. The client then fetches the source video directly from the source content server and plays the source asset using its local video player. One example of such a container format is HTTP Live Streaming (HLS). This decouples the low-latency overlay path from the video playout path, allowing deeper buffering for the video and, hence, a potentially more stable video picture. With the video received and decoded separately from the GUI, the GUI decoding can be done with very shallow buffers, allowing a more responsive user interface with less latency.



FIG. 5D shows the functional architecture for supporting mode 4, which is a full-screen transcode due to a decoding limitation of the client device. In this mode, the user selects to view a source video that is encoded using a codec that cannot be decoded by the client device. The full-screen video must be transcoded and cannot simply be passed to the repackaging, re-multiplexing, and re-streaming module. Therefore, the video is obtained from the source content network and transcoded in the transcoder. Once transcoded, the video may be blended with a graphical user interface overlay for GUI elements such as video controls (e.g., start, stop, rewind, captions, etc.) and any associated sounds, if the client is incapable of performing graphics overlay and/or audio mixing; otherwise, these may be transmitted separately to the client (if the client is capable of performing these functions). Once the transcoded video has been blended, if required, the video is packaged in a compliant MPEG transport stream and sent to the client device for display.



FIG. 6 shows the source architecture with overlays showing structural elements. Each of the structural elements may include one or more hardware processors for performing the functions of the architecture. The structural elements include an application engine, a media farm, a compositor, and a set-top box (i.e., a client device). The application engine is a structural element that encapsulates the application execution environment, the audio encoder, the image encoder, and the MPEG encoder. These functions are tightly coupled in that the outputs of the encoders are all encoded data that are used by other functional components in the system. These functions may be advantageously distributed among the one or more hardware processors in a parallel fashion to improve response time and maximize the use of the processors. As executing graphical applications and encoding are CPU-intensive operations, the application engine may include a great deal of computational power, with less emphasis on storage and input/output operations.


The media farm handles bulk, computationally expensive media operations, including transcoding and blending. The media farm receives audio, video, and audiovisual content from the source content network and receives audio and screen updates from the application engine. Transcoding and blending must be performed in real time, while screen updates may be pre-encoded, and much more data passes through the media farm than is generated by the application engine. Therefore, managing operation of the media farm structural element differs from managing the application engine, and requires more storage and network bandwidth. The output of the media farm is stitchable MPEG.
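
A minimal sketch of this division of labor follows, assuming a simple in-memory cache keyed by fragment identifier (an assumption made purely for exposition): pre-encoded screen updates are served from the cache, while source video always takes the real-time transcode-and-blend path.

    PRE_ENCODED = {}    # fragment id -> stitchable MPEG (assumed in-memory cache)

    def transcode_and_blend(payload):
        return b""      # placeholder for the real-time transcode/blend operations

    def media_farm_handle(item_id, kind, payload):
        if kind == "screen_update" and item_id in PRE_ENCODED:
            return PRE_ENCODED[item_id]            # pre-encoded; no real-time work
        fragment = transcode_and_blend(payload)    # real-time path for source video
        if kind == "screen_update":
            PRE_ENCODED[item_id] = fragment        # cache for reuse across sessions
        return fragment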


The compositor receives stitchable MPEG from the application engine and the media farm, and stitches it together. Because the compositor outputs standards-compliant MPEG transport streams, it also includes the packager, multiplexer, and streamer. As with the other two structural elements, the compositor has its own unique responsibilities. All source video passes through the compositor, which must therefore have a great deal of network bandwidth available to it.
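
The compositor's pipeline can be pictured, purely as a sketch, as stitching paired GUI and video fragments and then cutting the result into fixed-size transport-stream packets. The byte-level details here are simplified placeholders rather than a real MPEG multiplexer; only the 188-byte packet size is a genuine MPEG transport stream property.

    def stitch(gui_fragments, video_fragments):
        # Pair each encoded GUI fragment with its video fragment so that every
        # output frame combines content from both sources (greatly simplified).
        return b"".join(g + v for g, v in zip(gui_fragments, video_fragments))

    def packetize_ts(elementary, size=188):
        # MPEG transport streams carry fixed 188-byte packets; pad the last one.
        for off in range(0, len(elementary), size):
            yield elementary[off:off + size].ljust(size, b"\xff")

    def compose_and_stream(gui_fragments, video_fragments, send):
        for packet in packetize_ts(stitch(gui_fragments, video_fragments)):
            send(packet)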


The client device, or set-top box, may or may not include graphics overlay capabilities and audio mixing capabilities. However, it can decode video content according to at least one codec, for example MPEG-2. As described above, any given client device may decode a variety of video formats, and the network infrastructure connects the content distribution framework to a wide variety of client devices in a heterogeneous network. The control logic in accordance with various embodiments of the invention is flexible enough to accommodate this variety.


The present invention may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof. In an embodiment of the present invention, predominantly all of the described logic may be implemented as a set of computer program instructions that is converted into a computer executable form, stored as such in a computer readable medium, and executed by a microprocessor under the control of an operating system.


Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies, networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software or a magnetic tape), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).


Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).


While the invention has been particularly shown and described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.


Embodiments of the present invention may be described, without limitation, by the following claims. While these embodiments have been described in the claims by process steps, an apparatus comprising a computer with an associated display capable of executing the process steps in the claims below is also included in the present invention. Likewise, a computer program product including computer executable instructions for executing the process steps in the claims below and stored on a computer readable medium is included within the present invention.

Claims
  • 1. A method of providing audiovisual content to a client device configured to be coupled to a display, the method comprising, at a server: detecting a selection of a graphical element corresponding to a video content item; in response to detecting the selection of the graphical element, determining a transmission mode as a function of: 1) one or more decoding capabilities of the client device; 2) a video encoding format of the video content item; 3) whether the video content item is to be displayed in a full screen or a partial screen format; and 4) whether the client device is capable of overlaying image data into a video stream; preparing, for transmission according to the determined transmission mode, a series of frames that includes audiovisual data including the video content item; and transmitting the prepared series of frames, from the server toward the client device, according to the determined transmission mode, for display on the display; wherein, in accordance with a determination that the video content item is to be displayed in a partial screen format, the transmission mode is a first transmission mode in which the series of frames includes the video content item and a first GUI, and the preparing includes, prior to transmitting the prepared series of frames: rendering the first GUI according to a previously set screen resolution; and stitching the video content item into the rendered first GUI, wherein the series of frames includes the video content item as stitched into the rendered first GUI, and wherein a respective frame of the series of frames combines content for the video content item and the rendered first GUI.
  • 2. The method of claim 1, wherein, in accordance with a determination that: the video content item is to be displayed in a full screen format, and the client device is not capable of decoding the video encoding format of the video content item: the transmission mode is a second transmission mode in which the preparing includes transcoding the video content item.
  • 3. The method of claim 2, wherein, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and no image data is to be overlaid on the video content item: the transmission mode is a third transmission mode in which the preparing includes repackaging the video content item.
  • 4. The method of claim 3, wherein, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is capable of overlaying image data into the video content item: the transmission mode is a fourth transmission mode in which: the overlaying image data includes a second GUI that includes various user interface elements, and the preparing includes rendering the second GUI according to a client overlay resolution.
  • 5. The method of claim 4, wherein, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is not capable of overlaying image data into the video content item, the transmission mode is a fifth transmission mode in which the preparing includes: rendering the second GUI according to a video resolution, video size, and video frame rate; decoding all or a portion of the video content item; blending the rendered second GUI into the decoded portion; and re-encoding the blended portion according to the video encoding format of the video content item.
  • 6. The method of claim 1, wherein, in accordance with a determination that the client device is capable of receiving streaming content in a native format, the transmitting step comprises passing through content that is in the native format.
  • 7. A computer program product for providing audiovisual content to a client device configured to be coupled to a display, the computer program product comprising a computer useable medium on which is stored non-transitory computer program code comprising program code for: detecting a selection of a graphical element corresponding to a video content item; in response to detecting the selection of the graphical element, determining a transmission mode as a function of: 1) one or more decoding capabilities of the client device; 2) a video encoding format of the video content item; 3) whether the video content item is to be displayed in a full screen or a partial screen format; and 4) whether the client device is capable of overlaying image data into a video stream; preparing, for transmission according to the determined transmission mode, a series of frames that includes audiovisual data including the video content item; and transmitting the prepared series of frames, from the server toward the client device, according to the determined transmission mode, for display on the display; wherein, in accordance with a determination that the video content item is to be displayed in a partial screen format, the transmission mode is a first transmission mode in which the series of frames includes the video content item and a first GUI, and the preparing includes, prior to transmitting the prepared series of frames: rendering the first GUI according to a previously set screen resolution; and stitching the video content item into the rendered first GUI, wherein the series of frames includes the video content item as stitched into the rendered first GUI, and wherein a respective frame of the series of frames combines content for the video content item and the rendered first GUI.
  • 8. The computer program product of claim 7, wherein the program code for preparing further includes program code for, in accordance with a determination that: the video content item is to be displayed in a full screen format, and the client device is not capable of decoding the video encoding format of the video content item: transcoding the video content item according to a second transmission mode.
  • 9. The computer program product of claim 8, wherein the program code for preparing further includes program code for, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and no image data should be overlaid on the video content item: repackaging the video content item according to a third transmission mode.
  • 10. The computer program product of claim 9, wherein the overlaying image data includes a second GUI, the second GUI includes video playback controls, and the program code for preparing includes program code for, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is capable of overlaying image data into the video content item: rendering the second GUI according to a client overlay resolution according to a fourth transmission mode.
  • 11. The computer program product of claim 10, wherein the program code for preparing further includes program code for, in accordance with a determination that: the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is not capable of overlaying image data into the video content item: according to a fifth transmission mode: rendering the second GUI according to a video resolution, video size, and video frame rate; decoding a portion of the video content item; blending the rendered second GUI into the decoded portion; and re-encoding the blended portion according to the video encoding format of the video content item.
  • 12. The computer program product of claim 7, wherein, in accordance with a determination that the client device is capable of receiving streaming content in a native format, the transmitting step comprises passing through content that is in the native format.
  • 13. A computer system for providing audiovisual content to a client device configured to be coupled to a display, the computer system comprising: an application engine for providing a graphical user interface (GUI); control logic for: in response to detecting a selection of a graphical element corresponding to a video content item, determining a transmission mode as a function of: 1) one or more decoding capabilities of the client device; 2) a video encoding format of the video content item; 3) whether the video content item is to be displayed in a full screen or a partial screen format; and 4) whether the client device is capable of overlaying image data into a video stream; and preparing, for transmission according to the determined transmission mode, a series of frames that includes audiovisual data including the video content item; and a transmitter for transmitting the prepared series of frames, from the computer system toward the client device, according to the determined transmission mode, for display on the display; wherein, in accordance with a determination that the video content item is to be displayed in a partial screen format, the transmission mode is a first transmission mode in which the series of frames includes the video content item and a first GUI, and the preparing includes, prior to transmitting the prepared series of frames: rendering the first GUI according to a previously set screen resolution; and stitching the video content item into the rendered first GUI, wherein the series of frames includes the video content item as stitched into the rendered first GUI, and wherein a respective frame of the series of frames combines content for the video content item and the rendered first GUI.
  • 14. The computer system of claim 13, further comprising a transcoder configured to, according to a second transmission mode, transcode the video content item, in accordance with a determination that the video content item is to be displayed in a full screen format and the client device is not capable of decoding the video encoding format of the video content item.
  • 15. The computer system of claim 14, further comprising a packager configured to, according to a third transmission mode, repackage the video content item, in accordance with a determination that the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and no image data is to be overlaid on the video content item.
  • 16. The computer system of claim 15, wherein the overlaying image data includes a second GUI, the second GUI includes interface elements, and wherein the application engine is configured to, according to a fourth transmission mode, render the second GUI according to a client overlay resolution, in accordance with a determination that the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is capable of overlaying image data into the video content item.
  • 17. The computer system of claim 16, wherein, in accordance with a determination that the video content item is to be displayed in a full screen format, the client device is capable of decoding the video encoding format of the video content item, and the client device is not capable of overlaying image data into the video content item, the transmission mode is a fifth transmission mode in which: the application engine is configured to render the second GUI according to a video resolution, video size, and video frame rate; the transcoder is configured to decode a portion of the video content item; a blender is configured to blend the rendered second GUI into the decoded portion; and the transcoder is further configured to re-encode the blended portion according to the video encoding format.
  • 18. The computer system of claim 13, wherein, in accordance with a determination that the client device is capable of receiving streaming content in a native format, the transmitting comprises passing through content that is in the native format.
  • 19. The computer system of claim 13, wherein operation of a transcoder, a blender, a stitcher, and a packager is optional according to the determined transmission mode.
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 61/793,898, entitled “Multiple-Mode System for Providing User Selectable Video Content,” filed Mar. 15, 2013, which is incorporated by reference herein in its entirety.

US Referenced Citations (835)
Number Name Date Kind
3889050 Thompson Jun 1975 A
3934079 Barnhart Jan 1976 A
3997718 Ricketts et al. Dec 1976 A
4002843 Rackman Jan 1977 A
4032972 Saylor Jun 1977 A
4077006 Nicholson Feb 1978 A
4081831 Tang et al. Mar 1978 A
4107734 Percy et al. Aug 1978 A
4107735 Frohbach Aug 1978 A
4145720 Weintraub et al. Mar 1979 A
4168400 de Couasnon et al. Sep 1979 A
4186438 Benson et al. Jan 1980 A
4222068 Thompson Sep 1980 A
4245245 Matsumoto et al. Jan 1981 A
4247106 Jeffers et al. Jan 1981 A
4253114 Tang et al. Feb 1981 A
4264924 Freeman Apr 1981 A
4264925 Freeman et al. Apr 1981 A
4290142 Schnee et al. Sep 1981 A
4302771 Gargini Nov 1981 A
4308554 Percy et al. Dec 1981 A
4350980 Ward Sep 1982 A
4367557 Stern et al. Jan 1983 A
4395780 Gohm et al. Jul 1983 A
4408225 Ensinger et al. Oct 1983 A
4450477 Lovett May 1984 A
4454538 Toriumi Jun 1984 A
4466017 Banker Aug 1984 A
4471380 Mobley Sep 1984 A
4475123 Dumbauld et al. Oct 1984 A
4484217 Block et al. Nov 1984 A
4491983 Pinnow et al. Jan 1985 A
4506387 Walter Mar 1985 A
4507680 Freeman Mar 1985 A
4509073 Baran et al. Apr 1985 A
4523228 Banker Jun 1985 A
4533948 McNamara et al. Aug 1985 A
4536791 Campbell et al. Aug 1985 A
4538174 Gargini et al. Aug 1985 A
4538176 Nakajima et al. Aug 1985 A
4553161 Citta Nov 1985 A
4554581 Tentler et al. Nov 1985 A
4555561 Sugimori et al. Nov 1985 A
4562465 Glaab Dec 1985 A
4567517 Mobley Jan 1986 A
4573072 Freeman Feb 1986 A
4591906 Morales-Garza et al. May 1986 A
4602279 Freeman Jul 1986 A
4614970 Clupper et al. Sep 1986 A
4616263 Eichelberger Oct 1986 A
4625235 Watson Nov 1986 A
4627105 Ohashi et al. Dec 1986 A
4633462 Stifle et al. Dec 1986 A
4670904 Rumreich Jun 1987 A
4682360 Frederiksen Jul 1987 A
4695880 Johnson et al. Sep 1987 A
4706121 Young Nov 1987 A
4706285 Rumreich Nov 1987 A
4709418 Fox et al. Nov 1987 A
4710971 Nozaki et al. Dec 1987 A
4718086 Rumreich et al. Jan 1988 A
4732764 Hemingway et al. Mar 1988 A
4734764 Pocock et al. Mar 1988 A
4748689 Mohr May 1988 A
4749992 Fitzemeyer et al. Jun 1988 A
4750036 Martinez Jun 1988 A
4754426 Rast et al. Jun 1988 A
4760442 O'Connell et al. Jul 1988 A
4763317 Lehman et al. Aug 1988 A
4769833 Farleigh et al. Sep 1988 A
4769838 Hasegawa Sep 1988 A
4789863 Bush Dec 1988 A
4792849 McCalley et al. Dec 1988 A
4801190 Imoto Jan 1989 A
4805134 Calo et al. Feb 1989 A
4807031 Broughton et al. Feb 1989 A
4816905 Tweedy et al. Mar 1989 A
4821102 Ichikawa et al. Apr 1989 A
4823386 Dumbauld et al. Apr 1989 A
4827253 Maltz May 1989 A
4827511 Masuko May 1989 A
4829372 McCalley et al. May 1989 A
4829558 Welsh May 1989 A
4847698 Freeman Jul 1989 A
4847699 Freeman Jul 1989 A
4847700 Freeman Jul 1989 A
4848698 Newell et al. Jul 1989 A
4860379 Schoeneberger et al. Aug 1989 A
4864613 Van Cleave Sep 1989 A
4876592 Von Kohorn Oct 1989 A
4889369 Albrecht Dec 1989 A
4890320 Monslow et al. Dec 1989 A
4891694 Way Jan 1990 A
4901367 Nicholson Feb 1990 A
4903126 Kassatly Feb 1990 A
4905094 Pocock et al. Feb 1990 A
4912760 West, Jr. et al. Mar 1990 A
4918516 Freeman Apr 1990 A
4920566 Robbins et al. Apr 1990 A
4922532 Farmer et al. May 1990 A
4924303 Brandon et al. May 1990 A
4924498 Farmer et al. May 1990 A
4937821 Boulton Jun 1990 A
4941040 Pocock et al. Jul 1990 A
4947244 Fenwick et al. Aug 1990 A
4961211 Tsugane et al. Oct 1990 A
4963995 Lang Oct 1990 A
4975771 Kassatly Dec 1990 A
4989245 Bennett Jan 1991 A
4994909 Graves et al. Feb 1991 A
4995078 Monslow et al. Feb 1991 A
5003384 Durden et al. Mar 1991 A
5008934 Endoh Apr 1991 A
5014125 Pocock et al. May 1991 A
5027400 Baji et al. Jun 1991 A
5051720 Kittirutsunetorn Sep 1991 A
5051822 Rhoades Sep 1991 A
5057917 Shalkauser et al. Oct 1991 A
5058160 Banker et al. Oct 1991 A
5060262 Bevins, Jr. et al. Oct 1991 A
5077607 Johnson et al. Dec 1991 A
5083800 Lockton Jan 1992 A
5088111 McNamara et al. Feb 1992 A
5093718 Hoarty et al. Mar 1992 A
5109414 Harvey et al. Apr 1992 A
5113496 McCalley et al. May 1992 A
5119188 McCalley et al. Jun 1992 A
5130792 Tindell et al. Jul 1992 A
5132992 Yurt et al. Jul 1992 A
5133009 Rumreich Jul 1992 A
5133079 Ballantyne et al. Jul 1992 A
5136411 Paik et al. Aug 1992 A
5142575 Farmer et al. Aug 1992 A
5144448 Hombaker, III et al. Sep 1992 A
5155591 Wachob Oct 1992 A
5172413 Bradley et al. Dec 1992 A
5191410 McCalley et al. Mar 1993 A
5195092 Wilson et al. Mar 1993 A
5208665 McCalley et al. May 1993 A
5220420 Hoarty et al. Jun 1993 A
5230019 Yanagimichi et al. Jul 1993 A
5231494 Wachob Jul 1993 A
5236199 Thompson, Jr. Aug 1993 A
5247347 Letteral et al. Sep 1993 A
5253341 Rozmanith et al. Oct 1993 A
5262854 Ng Nov 1993 A
5262860 Fitzpatrick et al. Nov 1993 A
5303388 Kreitman et al. Apr 1994 A
5319455 Hoarty et al. Jun 1994 A
5319707 Wasilewski et al. Jun 1994 A
5321440 Yanagihara et al. Jun 1994 A
5321514 Martinez Jun 1994 A
5351129 Lai Sep 1994 A
5355162 Yazolino et al. Oct 1994 A
5359601 Wasilewski et al. Oct 1994 A
5361091 Hoarty et al. Nov 1994 A
5371532 Gelman et al. Dec 1994 A
5404393 Remillard Apr 1995 A
5408274 Chang et al. Apr 1995 A
5410343 Coddington et al. Apr 1995 A
5410344 Graves et al. Apr 1995 A
5412415 Cook et al. May 1995 A
5412720 Hoarty May 1995 A
5418559 Blahut May 1995 A
5422674 Hooper et al. Jun 1995 A
5422887 Diepstraten et al. Jun 1995 A
5442389 Blahut et al. Aug 1995 A
5442390 Hooper et al. Aug 1995 A
5442700 Snell et al. Aug 1995 A
5446490 Blahut et al. Aug 1995 A
5469283 Vinel et al. Nov 1995 A
5469431 Wendorf et al. Nov 1995 A
5471263 Odaka Nov 1995 A
5481542 Logston et al. Jan 1996 A
5485197 Hoarty Jan 1996 A
5487066 McNamara et al. Jan 1996 A
5493638 Hooper et al. Feb 1996 A
5495283 Cowe Feb 1996 A
5495295 Long Feb 1996 A
5497187 Banker et al. Mar 1996 A
5517250 Hoogenboom et al. May 1996 A
5526034 Hoarty et al. Jun 1996 A
5528281 Grady et al. Jun 1996 A
5537397 Abramson Jul 1996 A
5537404 Bentley et al. Jul 1996 A
5539449 Blahut et al. Jul 1996 A
RE35314 Logg Aug 1996 E
5548340 Bertram Aug 1996 A
5550578 Hoarty et al. Aug 1996 A
5557316 Hoarty et al. Sep 1996 A
5559549 Hendricks et al. Sep 1996 A
5561708 Remillard Oct 1996 A
5570126 Blahut et al. Oct 1996 A
5570363 Holm Oct 1996 A
5579143 Huber Nov 1996 A
5581653 Todd Dec 1996 A
5583927 Ely et al. Dec 1996 A
5587734 Lauder et al. Dec 1996 A
5589885 Ooi Dec 1996 A
5592470 Rudrapatna et al. Jan 1997 A
5594507 Hoarty Jan 1997 A
5594723 Tibi Jan 1997 A
5594938 Engel Jan 1997 A
5596693 Needle et al. Jan 1997 A
5600364 Hendricks et al. Feb 1997 A
5600573 Hendricks et al. Feb 1997 A
5608446 Carr et al. Mar 1997 A
5617145 Huang et al. Apr 1997 A
5621464 Teo et al. Apr 1997 A
5625404 Grady et al. Apr 1997 A
5630757 Gagin et al. May 1997 A
5631693 Wunderlich et al. May 1997 A
5631846 Szurkowski May 1997 A
5632003 Davidson et al. May 1997 A
5642498 Kutner Jun 1997 A
5649283 Galler et al. Jul 1997 A
5668592 Spaulding, II Sep 1997 A
5668599 Cheney et al. Sep 1997 A
5708767 Yeo et al. Jan 1998 A
5710815 Ming et al. Jan 1998 A
5712906 Grady et al. Jan 1998 A
5740307 Lane Apr 1998 A
5742289 Naylor et al. Apr 1998 A
5748234 Lippincott May 1998 A
5754941 Sharpe et al. May 1998 A
5786527 Tarte Jul 1998 A
5790174 Richard, III et al. Aug 1998 A
5802283 Grady et al. Sep 1998 A
5812665 Hoarty et al. Sep 1998 A
5812786 Seazholtz et al. Sep 1998 A
5815604 Simons et al. Sep 1998 A
5818438 Howe et al. Oct 1998 A
5821945 Yeo et al. Oct 1998 A
5822537 Katseff et al. Oct 1998 A
5828371 Cline et al. Oct 1998 A
5844594 Ferguson Dec 1998 A
5845083 Hamadani et al. Dec 1998 A
5862325 Reed et al. Jan 1999 A
5864820 Case Jan 1999 A
5867208 McLaren Feb 1999 A
5883661 Hoarty Mar 1999 A
5903727 Nielsen May 1999 A
5903816 Broadwin et al. May 1999 A
5905522 Lawler May 1999 A
5907681 Bates et al. May 1999 A
5917822 Lyles et al. Jun 1999 A
5946352 Rowlands et al. Aug 1999 A
5952943 Walsh et al. Sep 1999 A
5959690 Toebes et al. Sep 1999 A
5961603 Kunkel et al. Oct 1999 A
5963203 Goldberg et al. Oct 1999 A
5966163 Lin et al. Oct 1999 A
5978756 Walker et al. Nov 1999 A
5982445 Eyer et al. Nov 1999 A
5990862 Lewis Nov 1999 A
5995146 Rasmusse Nov 1999 A
5995488 Kalhunte et al. Nov 1999 A
5999970 Krisbergh et al. Dec 1999 A
6014416 Shin et al. Jan 2000 A
6021386 Davis et al. Feb 2000 A
6031989 Cordell Feb 2000 A
6034678 Hoarty et al. Mar 2000 A
6049539 Lee et al. Apr 2000 A
6049831 Gardell et al. Apr 2000 A
6052555 Ferguson Apr 2000 A
6055247 Kubota et al. Apr 2000 A
6055314 Spies et al. Apr 2000 A
6055315 Doyle et al. Apr 2000 A
6064377 Hoarty et al. May 2000 A
6078328 Schumann et al. Jun 2000 A
6084908 Chiang et al. Jul 2000 A
6100883 Hoarty Aug 2000 A
6108625 Kim Aug 2000 A
6115076 Linzer Sep 2000 A
6131182 Beakes et al. Oct 2000 A
6141645 Chi-Min et al. Oct 2000 A
6141693 Perlman et al. Oct 2000 A
6144698 Poon et al. Nov 2000 A
6167084 Wang et al. Dec 2000 A
6169573 Sampath-Kumar et al. Jan 2001 B1
6177931 Alexander et al. Jan 2001 B1
6182072 Leak et al. Jan 2001 B1
6184878 Alonso et al. Feb 2001 B1
6192081 Chiang et al. Feb 2001 B1
6198822 Doyle et al. Mar 2001 B1
6205582 Hoarty Mar 2001 B1
6226041 Florencio et al. May 2001 B1
6236730 Cowieson et al. May 2001 B1
6243418 Kim Jun 2001 B1
6253238 Lauder et al. Jun 2001 B1
6256047 Isobe et al. Jul 2001 B1
6259826 Pollard et al. Jul 2001 B1
6266369 Wang et al. Jul 2001 B1
6266684 Kraus et al. Jul 2001 B1
6268864 Chen et al. Jul 2001 B1
6275496 Burns et al. Aug 2001 B1
6292194 Powell, III Sep 2001 B1
6305020 Hoarty et al. Oct 2001 B1
6310601 Moore et al. Oct 2001 B1
6317151 Ohsuga et al. Nov 2001 B1
6317885 Fries Nov 2001 B1
6349284 Park et al. Feb 2002 B1
6385771 Gordon May 2002 B1
6386980 Nishino et al. May 2002 B1
6389075 Wang et al. May 2002 B2
6389218 Gordon et al. May 2002 B2
6415031 Colligan et al. Jul 2002 B1
6415437 Ludvig et al. Jul 2002 B1
6438140 Jungers et al. Aug 2002 B1
6446037 Fielder et al. Sep 2002 B1
6459427 Mao et al. Oct 2002 B1
6477182 Calderone Nov 2002 B2
6480210 Martino et al. Nov 2002 B1
6481012 Gordon et al. Nov 2002 B1
6512793 Maeda Jan 2003 B1
6525746 Lau et al. Feb 2003 B1
6536043 Guedalia Mar 2003 B1
6539545 Dureau et al. Mar 2003 B1
6557041 Mallart Apr 2003 B2
6560496 Michnener May 2003 B1
6564378 Satterfield et al. May 2003 B1
6578201 LaRocca et al. Jun 2003 B1
6579184 Tanskanen Jun 2003 B1
6584153 Gordon et al. Jun 2003 B1
6588017 Calderone Jul 2003 B1
6598229 Smyth et al. Jul 2003 B2
6604224 Armstrong et al. Aug 2003 B1
6614442 Ouyang et al. Sep 2003 B1
6621870 Gordon et al. Sep 2003 B1
6625574 Taniguchi et al. Sep 2003 B1
6639896 Goode et al. Oct 2003 B1
6645076 Sugai Nov 2003 B1
6651252 Gordon et al. Nov 2003 B1
6657647 Bright Dec 2003 B1
6675385 Wang Jan 2004 B1
6675387 Boucher Jan 2004 B1
6681326 Son et al. Jan 2004 B2
6681397 Tsai et al. Jan 2004 B1
6684400 Goode et al. Jan 2004 B1
6687663 McGrath et al. Feb 2004 B1
6691208 Dandrea et al. Feb 2004 B2
6697376 Son et al. Feb 2004 B1
6704359 Bayrakeri et al. Mar 2004 B1
6717600 Dutta et al. Apr 2004 B2
6718552 Goode Apr 2004 B1
6721794 Taylor et al. Apr 2004 B2
6721956 Wsilewski Apr 2004 B2
6727929 Bates et al. Apr 2004 B1
6731605 Deshpande May 2004 B1
6732370 Gordon et al. May 2004 B1
6747991 Hemy et al. Jun 2004 B1
6754271 Gordon et al. Jun 2004 B1
6754905 Gordon et al. Jun 2004 B2
6758540 Adolph et al. Jul 2004 B1
6766407 Lisitsa et al. Jul 2004 B1
6771704 Hannah Aug 2004 B1
6785902 Zigmond et al. Aug 2004 B1
6807528 Truman et al. Oct 2004 B1
6810528 Chatani Oct 2004 B1
6813690 Lango et al. Nov 2004 B1
6817947 Tanskanen Nov 2004 B2
6850490 Woo et al. Feb 2005 B1
6886178 Mao et al. Apr 2005 B1
6907574 Xu et al. Jun 2005 B2
6931291 Alvarez-Tinoco et al. Aug 2005 B1
6941019 Mitchell et al. Sep 2005 B1
6941574 Broadwin et al. Sep 2005 B1
6947509 Wong Sep 2005 B1
6952221 Holtz et al. Oct 2005 B1
6956899 Hall et al. Oct 2005 B2
7016540 Gong et al. Mar 2006 B1
7030890 Jouet et al. Apr 2006 B1
7031385 Inoue et al. Apr 2006 B1
7050113 Campisano et al. May 2006 B2
7089577 Rakib et al. Aug 2006 B1
7093028 Shao et al. Aug 2006 B1
7095402 Kunil et al. Aug 2006 B2
7114167 Slemmer et al. Sep 2006 B2
7146615 Hervet et al. Dec 2006 B1
7151782 Oz et al. Dec 2006 B1
7158676 Rainsford Jan 2007 B1
7200836 Brodersen et al. Apr 2007 B2
7212573 Winger May 2007 B2
7224731 Mehrotra May 2007 B2
7272556 Aguilar et al. Sep 2007 B1
7310619 Baar et al. Dec 2007 B2
7325043 Rosenberg et al. Jan 2008 B1
7346111 Winger et al. Mar 2008 B2
7360230 Paz et al. Apr 2008 B1
7412423 Asano Aug 2008 B1
7412505 Slemmer et al. Aug 2008 B2
7421082 Kamiya et al. Sep 2008 B2
7444306 Varble Oct 2008 B2
7444418 Chou et al. Oct 2008 B2
7500235 Maynard et al. Mar 2009 B2
7508941 O'Toole, Jr. et al. Mar 2009 B1
7512577 Slemmer et al. Mar 2009 B2
7543073 Chou et al. Jun 2009 B2
7596764 Vienneau et al. Sep 2009 B2
7623575 Winger Nov 2009 B2
7669220 Goode Feb 2010 B2
7742609 Yeakel et al. Jun 2010 B2
7743400 Kurauchi Jun 2010 B2
7751572 Villemoes et al. Jul 2010 B2
7757157 Fukuda Jul 2010 B1
7830388 Lu Nov 2010 B1
7840905 Weber et al. Nov 2010 B1
7925775 Nishida Apr 2011 B2
7936819 Craig et al. May 2011 B2
7941645 Riach et al. May 2011 B1
7945616 Zeng et al. May 2011 B2
7970263 Asch Jun 2011 B1
7987489 Krzyzanowski et al. Jul 2011 B2
8027353 Damola et al. Sep 2011 B2
8036271 Winger et al. Oct 2011 B2
8046798 Schlack et al. Oct 2011 B1
8074248 Sigmon et al. Dec 2011 B2
8078603 Chandratillake et al. Dec 2011 B1
8118676 Craig et al. Feb 2012 B2
8136033 Bhargava et al. Mar 2012 B1
8149917 Zhang et al. Apr 2012 B2
8155194 Winger et al. Apr 2012 B2
8155202 Landau Apr 2012 B2
8170107 Winger May 2012 B2
8194862 Herr et al. Jun 2012 B2
8243630 Luo et al. Aug 2012 B2
8270439 Herr et al. Sep 2012 B2
8284842 Craig et al. Oct 2012 B2
8296424 Malloy et al. Oct 2012 B2
8370869 Paek et al. Feb 2013 B2
8411754 Zhang et al. Apr 2013 B2
8442110 Pavlovskaia et al. May 2013 B2
8473996 Gordon et al. Jun 2013 B2
8619867 Craig et al. Dec 2013 B2
8621500 Weaver et al. Dec 2013 B2
8656430 Doyle Feb 2014 B2
8781240 Srinivasan et al. Jul 2014 B2
8839317 Rieger et al. Sep 2014 B1
8914813 Sigurdsson et al. Dec 2014 B1
9204113 Kwok Dec 2015 B1
20010005360 Lee Jun 2001 A1
20010008845 Kusuda et al. Jul 2001 A1
20010027563 White et al. Oct 2001 A1
20010043215 Middleton, III et al. Nov 2001 A1
20010049301 Masuda et al. Dec 2001 A1
20020007491 Schiller et al. Jan 2002 A1
20020013812 Krueger et al. Jan 2002 A1
20020016161 Dellien et al. Feb 2002 A1
20020021353 DeNies Feb 2002 A1
20020026642 Augenbraun et al. Feb 2002 A1
20020027567 Niamir Mar 2002 A1
20020032697 French et al. Mar 2002 A1
20020040482 Sextro et al. Apr 2002 A1
20020047899 Son et al. Apr 2002 A1
20020049975 Thomas et al. Apr 2002 A1
20020054578 Zhang et al. May 2002 A1
20020056083 Istvan May 2002 A1
20020056107 Schlack May 2002 A1
20020056136 Wistendahl et al. May 2002 A1
20020059644 Andrade et al. May 2002 A1
20020062484 De Lange et al. May 2002 A1
20020067766 Sakamoto et al. Jun 2002 A1
20020069267 Thiele Jun 2002 A1
20020072408 Kumagai Jun 2002 A1
20020078171 Schneider Jun 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020083464 Tomsen et al. Jun 2002 A1
20020091738 Rohrabaugh Jul 2002 A1
20020095689 Novak Jul 2002 A1
20020105531 Niemi Aug 2002 A1
20020108121 Alao et al. Aug 2002 A1
20020116705 Perlman Aug 2002 A1
20020131511 Zenoni Sep 2002 A1
20020136298 Anantharamu et al. Sep 2002 A1
20020152318 Menon et al. Oct 2002 A1
20020171765 Waki et al. Nov 2002 A1
20020175931 Holtz et al. Nov 2002 A1
20020178278 Ducharme Nov 2002 A1
20020178447 Plotnick et al. Nov 2002 A1
20020188628 Cooper et al. Dec 2002 A1
20020191851 Keinan Dec 2002 A1
20020194592 Tsuchida et al. Dec 2002 A1
20020196746 Allen Dec 2002 A1
20030005452 Rodriguez Jan 2003 A1
20030018796 Chou et al. Jan 2003 A1
20030020671 Santoro et al. Jan 2003 A1
20030027517 Callway et al. Feb 2003 A1
20030035486 Kato et al. Feb 2003 A1
20030038893 Rajamaki et al. Feb 2003 A1
20030039398 McIntyre Feb 2003 A1
20030046690 Miller Mar 2003 A1
20030051253 Barone, Jr. Mar 2003 A1
20030058941 Chen et al. Mar 2003 A1
20030061451 Beyda Mar 2003 A1
20030065739 Shnier Apr 2003 A1
20030066093 Cruz-Rivera et al. Apr 2003 A1
20030071792 Safadi Apr 2003 A1
20030072372 Shen et al. Apr 2003 A1
20030076546 Johnson et al. Apr 2003 A1
20030088328 Nishio et al. May 2003 A1
20030088400 Nishio et al. May 2003 A1
20030095790 Joshi May 2003 A1
20030107443 Clancy Jun 2003 A1
20030122836 Doyle et al. Jul 2003 A1
20030123664 Pedlow, Jr. et al. Jul 2003 A1
20030126608 Safadi Jul 2003 A1
20030126611 Kuczynski-Brown Jul 2003 A1
20030131349 Kuczynski-Brown Jul 2003 A1
20030135860 Dureau Jul 2003 A1
20030169373 Peters et al. Sep 2003 A1
20030177199 Zenoni Sep 2003 A1
20030188309 Yuen Oct 2003 A1
20030189980 Dvir et al. Oct 2003 A1
20030196174 Pierre Cote et al. Oct 2003 A1
20030208768 Urdang et al. Nov 2003 A1
20030217360 Gordon et al. Nov 2003 A1
20030229719 Iwata et al. Dec 2003 A1
20030229900 Reisman Dec 2003 A1
20030231218 Amadio Dec 2003 A1
20040016000 Zhang et al. Jan 2004 A1
20040034873 Zenoni Feb 2004 A1
20040040035 Carlucci et al. Feb 2004 A1
20040055007 Allport Mar 2004 A1
20040073924 Pendakur Apr 2004 A1
20040078822 Breen et al. Apr 2004 A1
20040088375 Sethi et al. May 2004 A1
20040091171 Bone May 2004 A1
20040111526 Baldwin et al. Jun 2004 A1
20040117827 Karaoguz et al. Jun 2004 A1
20040128686 Boyer et al. Jul 2004 A1
20040133704 Krzyzanowski et al. Jul 2004 A1
20040136698 Mock Jul 2004 A1
20040139158 Datta Jul 2004 A1
20040157662 Tsuchiya Aug 2004 A1
20040163101 Swix et al. Aug 2004 A1
20040184542 Fujimoto Sep 2004 A1
20040193648 Lai et al. Sep 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040216045 Martin et al. Oct 2004 A1
20040261106 Hoffman Dec 2004 A1
20040261114 Addington et al. Dec 2004 A1
20040268419 Danker et al. Dec 2004 A1
20050015259 Thumpudi et al. Jan 2005 A1
20050015816 Christofalo et al. Jan 2005 A1
20050021830 Urzaiz et al. Jan 2005 A1
20050034155 Gordon et al. Feb 2005 A1
20050034162 White et al. Feb 2005 A1
20050044575 Der Kuyl Feb 2005 A1
20050055685 Maynard et al. Mar 2005 A1
20050055721 Zigmond et al. Mar 2005 A1
20050071876 van Beek Mar 2005 A1
20050076134 Bialik et al. Apr 2005 A1
20050089091 Kim et al. Apr 2005 A1
20050091690 Delpuch et al. Apr 2005 A1
20050091695 Paz et al. Apr 2005 A1
20050105608 Coleman et al. May 2005 A1
20050114906 Hoarty et al. May 2005 A1
20050132305 Guichard et al. Jun 2005 A1
20050135385 Jenkins et al. Jun 2005 A1
20050141613 Kelly et al. Jun 2005 A1
20050149988 Grannan Jul 2005 A1
20050155063 Bayrakeri Jul 2005 A1
20050160088 Scallan et al. Jul 2005 A1
20050166257 Feinleib et al. Jul 2005 A1
20050177853 Williams et al. Aug 2005 A1
20050180502 Puri Aug 2005 A1
20050198682 Wright Sep 2005 A1
20050213586 Cyganski et al. Sep 2005 A1
20050216933 Black Sep 2005 A1
20050216940 Black Sep 2005 A1
20050226426 Oomen et al. Oct 2005 A1
20050232309 Kavaler Oct 2005 A1
20050273832 Zigmond et al. Dec 2005 A1
20050283741 Balabanovic et al. Dec 2005 A1
20060001737 Dawson et al. Jan 2006 A1
20060020960 Relan et al. Jan 2006 A1
20060020994 Crane et al. Jan 2006 A1
20060026663 Kortum et al. Feb 2006 A1
20060031906 Kaneda Feb 2006 A1
20060039481 Shen et al. Feb 2006 A1
20060041910 Hatanaka et al. Feb 2006 A1
20060064716 Sull et al. Mar 2006 A1
20060088105 Shen et al. Apr 2006 A1
20060095944 Demircin et al. May 2006 A1
20060112338 Joung et al. May 2006 A1
20060117340 Pavlovskaia et al. Jun 2006 A1
20060143678 Cho et al. Jun 2006 A1
20060161538 Kiilerich Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060174021 Osborne Aug 2006 A1
20060174026 Robinson et al. Aug 2006 A1
20060174289 Theberge Aug 2006 A1
20060184614 Baratto et al. Aug 2006 A1
20060195884 van Zoest et al. Aug 2006 A1
20060203913 Kim et al. Sep 2006 A1
20060212203 Furuno Sep 2006 A1
20060218601 Michel Sep 2006 A1
20060230428 Craig et al. Oct 2006 A1
20060242570 Croft et al. Oct 2006 A1
20060256865 Westerman Nov 2006 A1
20060267995 Radloff Nov 2006 A1
20060269086 Page et al. Nov 2006 A1
20060271985 Hoffman et al. Nov 2006 A1
20060285586 Westerman Dec 2006 A1
20060285819 Kelly et al. Dec 2006 A1
20070009035 Craig et al. Jan 2007 A1
20070009036 Craig et al. Jan 2007 A1
20070009042 Craig Jan 2007 A1
20070011702 Vaysman Jan 2007 A1
20070025639 Zhou et al. Feb 2007 A1
20070033528 Merrit et al. Feb 2007 A1
20070033631 Gordon et al. Feb 2007 A1
20070043667 Qawami et al. Feb 2007 A1
20070074251 Oguz et al. Mar 2007 A1
20070079325 de Heer Apr 2007 A1
20070115941 Patel et al. May 2007 A1
20070124282 Wittkotter May 2007 A1
20070124795 McKissick et al. May 2007 A1
20070130446 Minakami Jun 2007 A1
20070130592 Haeusel Jun 2007 A1
20070152984 Ording et al. Jul 2007 A1
20070162953 Bollinger et al. Jul 2007 A1
20070172061 Pinder Jul 2007 A1
20070174790 Jing et al. Jul 2007 A1
20070178243 Dong et al. Aug 2007 A1
20070192798 Morgan Aug 2007 A1
20070234220 Khan et al. Oct 2007 A1
20070237232 Chang et al. Oct 2007 A1
20070266412 Trowbridge et al. Nov 2007 A1
20070300280 Turner et al. Dec 2007 A1
20080034306 Ording Feb 2008 A1
20080046373 Kim Feb 2008 A1
20080046928 Poling et al. Feb 2008 A1
20080052742 Kopf Feb 2008 A1
20080060034 Egnal et al. Mar 2008 A1
20080066135 Brodersen et al. Mar 2008 A1
20080084503 Kondo Apr 2008 A1
20080086688 Chandratillake et al. Apr 2008 A1
20080086747 Rasanen et al. Apr 2008 A1
20080094368 Ording et al. Apr 2008 A1
20080097953 Levy et al. Apr 2008 A1
20080098212 Helms et al. Apr 2008 A1
20080098450 Wu et al. Apr 2008 A1
20080104520 Swenson et al. May 2008 A1
20080127255 Ress et al. May 2008 A1
20080144711 Chui et al. Jun 2008 A1
20080154583 Goto et al. Jun 2008 A1
20080163059 Craner Jul 2008 A1
20080163286 Rudolph et al. Jul 2008 A1
20080170619 Landau Jul 2008 A1
20080170622 Gordon Jul 2008 A1
20080172441 Speicher et al. Jul 2008 A1
20080178125 Elsbree et al. Jul 2008 A1
20080178243 Dong et al. Jul 2008 A1
20080178249 Gordon et al. Jul 2008 A1
20080181221 Kampmann et al. Jul 2008 A1
20080184120 O-Brien-Strain et al. Jul 2008 A1
20080189740 Carpenter et al. Aug 2008 A1
20080195573 Onoda et al. Aug 2008 A1
20080201736 Gordon et al. Aug 2008 A1
20080212942 Gordon et al. Sep 2008 A1
20080222199 Tiu et al. Sep 2008 A1
20080232243 Oren et al. Sep 2008 A1
20080232452 Sullivan et al. Sep 2008 A1
20080243918 Holtman Oct 2008 A1
20080243998 Oh et al. Oct 2008 A1
20080244681 Gossweiler et al. Oct 2008 A1
20080246759 Summers Oct 2008 A1
20080253440 Srinivasan et al. Oct 2008 A1
20080253685 Kuranov et al. Oct 2008 A1
20080271080 Grossweiler et al. Oct 2008 A1
20090003446 Wu et al. Jan 2009 A1
20090003705 Zou et al. Jan 2009 A1
20090007199 La Joie Jan 2009 A1
20090025027 Craner Jan 2009 A1
20090031341 Schlack et al. Jan 2009 A1
20090041118 Pavlovskaia et al. Feb 2009 A1
20090083781 Yang et al. Mar 2009 A1
20090083813 Dolce et al. Mar 2009 A1
20090083824 McCarthy et al. Mar 2009 A1
20090089188 Ku et al. Apr 2009 A1
20090094113 Berry et al. Apr 2009 A1
20090094646 Walter et al. Apr 2009 A1
20090100465 Kulakowski Apr 2009 A1
20090100489 Strothmann Apr 2009 A1
20090106269 Zuckerman et al. Apr 2009 A1
20090106386 Zuckerman et al. Apr 2009 A1
20090106392 Zuckerman et al. Apr 2009 A1
20090106425 Zuckerman et al. Apr 2009 A1
20090106441 Zuckerman et al. Apr 2009 A1
20090106451 Zuckerman et al. Apr 2009 A1
20090106511 Zuckerman et al. Apr 2009 A1
20090113009 Slemmer et al. Apr 2009 A1
20090132942 Santoro et al. May 2009 A1
20090138966 Krause et al. May 2009 A1
20090144781 Glaser et al. Jun 2009 A1
20090146779 Kumar et al. Jun 2009 A1
20090157868 Chaudhry Jun 2009 A1
20090158369 Van Vleck et al. Jun 2009 A1
20090160694 Di Flora Jun 2009 A1
20090172431 Gupta et al. Jul 2009 A1
20090172726 Vantalon et al. Jul 2009 A1
20090172757 Aldrey et al. Jul 2009 A1
20090178098 Westbrook et al. Jul 2009 A1
20090183197 Matthews Jul 2009 A1
20090183219 Maynard et al. Jul 2009 A1
20090189890 Corbett et al. Jul 2009 A1
20090193452 Russ et al. Jul 2009 A1
20090196346 Zhang et al. Aug 2009 A1
20090204920 Beverly et al. Aug 2009 A1
20090210899 Lawrence-Apfelbaum et al. Aug 2009 A1
20090225790 Shay et al. Sep 2009 A1
20090228620 Thomas et al. Sep 2009 A1
20090228922 Haj-khalil et al. Sep 2009 A1
20090233593 Ergen et al. Sep 2009 A1
20090251478 Maillot et al. Oct 2009 A1
20090254960 Yarom et al. Oct 2009 A1
20090265617 Randall et al. Oct 2009 A1
20090271512 Jorgensen Oct 2009 A1
20090271818 Schlack Oct 2009 A1
20090298535 Klein et al. Dec 2009 A1
20090313674 Ludvig et al. Dec 2009 A1
20090316709 Polcha et al. Dec 2009 A1
20090328109 Pavlovskaia et al. Dec 2009 A1
20100009623 Hennenhoefer et al. Jan 2010 A1
20100033638 O'Donnell et al. Feb 2010 A1
20100035682 Gentile et al. Feb 2010 A1
20100054268 Divivier Mar 2010 A1
20100058404 Rouse Mar 2010 A1
20100067571 White et al. Mar 2010 A1
20100073371 Ernst et al. Mar 2010 A1
20100077441 Thomas et al. Mar 2010 A1
20100104021 Schmit Apr 2010 A1
20100115573 Srinivasan et al. May 2010 A1
20100118972 Zhang et al. May 2010 A1
20100131411 Jogand-Coulomb et al. May 2010 A1
20100131996 Gauld May 2010 A1
20100146139 Brockmann Jun 2010 A1
20100153885 Yates Jun 2010 A1
20100158109 Dahlby et al. Jun 2010 A1
20100161825 Ronca et al. Jun 2010 A1
20100166071 Wu et al. Jul 2010 A1
20100174776 Westberg et al. Jul 2010 A1
20100175080 Yuen et al. Jul 2010 A1
20100180307 Hayes et al. Jul 2010 A1
20100211983 Chou Aug 2010 A1
20100226428 Thevathasan et al. Sep 2010 A1
20100235861 Schein et al. Sep 2010 A1
20100242073 Gordon et al. Sep 2010 A1
20100251167 DeLuca et al. Sep 2010 A1
20100254370 Jana et al. Oct 2010 A1
20100265344 Velarde et al. Oct 2010 A1
20100325655 Perez Dec 2010 A1
20100325668 Young et al. Dec 2010 A1
20110002376 Ahmed et al. Jan 2011 A1
20110002470 Purnhagen et al. Jan 2011 A1
20110023069 Dowens Jan 2011 A1
20110035227 Lee et al. Feb 2011 A1
20110067061 Karaoguz et al. Mar 2011 A1
20110072474 Springer et al. Mar 2011 A1
20110096828 Chen et al. Apr 2011 A1
20110099594 Chen et al. Apr 2011 A1
20110107375 Stahl et al. May 2011 A1
20110110433 Bjontegaard May 2011 A1
20110110642 Salomons et al. May 2011 A1
20110150421 Sasaki et al. Jun 2011 A1
20110153776 Opala et al. Jun 2011 A1
20110161517 Ferguson Jun 2011 A1
20110167468 Lee et al. Jul 2011 A1
20110173590 Yanes Jul 2011 A1
20110191684 Greenberg Aug 2011 A1
20110202948 Bildgen et al. Aug 2011 A1
20110211591 Traub et al. Sep 2011 A1
20110231878 Hunter et al. Sep 2011 A1
20110243024 Osterling et al. Oct 2011 A1
20110258584 Williams et al. Oct 2011 A1
20110261889 Francisco Oct 2011 A1
20110283304 Roberts Nov 2011 A1
20110289536 Poder et al. Nov 2011 A1
20110296312 Boyer et al. Dec 2011 A1
20110317982 Xu et al. Dec 2011 A1
20120008786 Cronk et al. Jan 2012 A1
20120023126 Jin et al. Jan 2012 A1
20120023250 Chen et al. Jan 2012 A1
20120030212 Koopmans et al. Feb 2012 A1
20120030706 Hulse et al. Feb 2012 A1
20120137337 Sigmon et al. May 2012 A1
20120204217 Regis et al. Aug 2012 A1
20120209815 Carson et al. Aug 2012 A1
20120216232 Chen et al. Aug 2012 A1
20120221853 Wingert et al. Aug 2012 A1
20120224641 Haberman et al. Sep 2012 A1
20120257671 Brockmann et al. Oct 2012 A1
20120271920 Isaksson Oct 2012 A1
20120284753 Roberts et al. Nov 2012 A1
20120297081 Karlsson et al. Nov 2012 A1
20130003826 Craig et al. Jan 2013 A1
20130042271 Yellin et al. Feb 2013 A1
20130047074 Vestergaard et al. Feb 2013 A1
20130071095 Chauvier et al. Mar 2013 A1
20130086610 Brockmann Apr 2013 A1
20130179787 Brockmann Jul 2013 A1
20130198776 Brockmann Aug 2013 A1
20130254308 Rose et al. Sep 2013 A1
20130254675 de Andrade et al. Sep 2013 A1
20130272394 Brockmann et al. Oct 2013 A1
20130276015 Rothschild Oct 2013 A1
20130283318 Wannamaker Oct 2013 A1
20130297887 Woodward et al. Nov 2013 A1
20130304818 Brumleve et al. Nov 2013 A1
20130305051 Fu et al. Nov 2013 A1
20140032635 Pimmel et al. Jan 2014 A1
20140033036 Gaur et al. Jan 2014 A1
20140081954 Elizarov Mar 2014 A1
20140089469 Ramamurthy et al. Mar 2014 A1
20140123169 Koukarine et al. May 2014 A1
20140157298 Murphy Jun 2014 A1
20140168515 Sagliocco Jun 2014 A1
20140223307 McIntosh et al. Aug 2014 A1
20140223482 McIntosh et al. Aug 2014 A1
20140267074 Balci Sep 2014 A1
20140269930 Robinson Sep 2014 A1
20140289627 Brockmann et al. Sep 2014 A1
20140317532 Ma et al. Oct 2014 A1
20140344861 Berner et al. Nov 2014 A1
20150023372 Boatright Jan 2015 A1
20150037011 Hubner et al. Feb 2015 A1
20150103880 Diard Apr 2015 A1
20150135209 LaBosco et al. May 2015 A1
20150139603 Silverstein et al. May 2015 A1
20150195525 Sullivan et al. Jul 2015 A1
20160119624 Frishman Apr 2016 A1
20160142468 Song May 2016 A1
20160357583 Decker et al. Dec 2016 A1
20170078721 Brockmann et al. Mar 2017 A1
Foreign Referenced Citations (336)
Number Date Country
191599 Apr 2000 AT
198969 Feb 2001 AT
250313 Oct 2003 AT
472152 Jul 2010 AT
475266 Aug 2010 AT
550086 Feb 1986 AU
199060189 Nov 1990 AU
620735 Feb 1992 AU
199184838 Apr 1992 AU
643828 Nov 1993 AU
2004253127 Jan 2005 AU
2005278122 Mar 2006 AU
2010339376 Aug 2012 AU
2011249132 Nov 2012 AU
2011258972 Nov 2012 AU
2011315950 May 2013 AU
682776 Mar 1964 CA
2052477 Mar 1992 CA
1302554 Jun 1992 CA
2163500 May 1996 CA
2231391 May 1997 CA
2273365 Jun 1998 CA
2313133 Jun 1999 CA
2313161 Jun 1999 CA
2528499 Jan 2005 CA
2569407 Mar 2006 CA
2728797 Apr 2010 CA
2787913 Jul 2011 CA
2798541 Dec 2011 CA
2814070 Apr 2012 CA
1507751 Jun 2004 CN
1969555 May 2007 CN
101180109 May 2008 CN
101627424 Jan 2010 CN
101637023 Jan 2010 CN
102007773 Apr 2011 CN
103647980 Mar 2014 CN
4408355 Oct 1994 DE
69516139 Dec 2000 DE
69132518 Sep 2001 DE
69333207 Jul 2004 DE
98961961 Aug 2007 DE
602008001596 Aug 2010 DE
602006015650 Sep 2010 DE
0093549 Nov 1983 EP
0128771 Dec 1984 EP
0419137 Mar 1991 EP
0449633 Oct 1991 EP
0477786 Apr 1992 EP
0523618 Jan 1993 EP
0534139 Mar 1993 EP
0568453 Nov 1993 EP
0588653 Mar 1994 EP
0594350 Apr 1994 EP
0612916 Aug 1994 EP
0624039 Nov 1994 EP
0638219 Feb 1995 EP
0643523 Mar 1995 EP
0661888 Jul 1995 EP
0714684 Jun 1996 EP
0746158 Dec 1996 EP
0761066 Mar 1997 EP
0789972 Aug 1997 EP
0830786 Mar 1998 EP
0861560 Sep 1998 EP
0 881 808 Dec 1998 EP
0933966 Aug 1999 EP
0933966 Aug 1999 EP
1026872 Aug 2000 EP
1038397 Sep 2000 EP
1038399 Sep 2000 EP
1038400 Sep 2000 EP
1038401 Sep 2000 EP
1051039 Nov 2000 EP
1055331 Nov 2000 EP
1120968 Aug 2001 EP
1345446 Sep 2003 EP
1422929 May 2004 EP
1428562 Jun 2004 EP
1521476 Apr 2005 EP
1645115 Apr 2006 EP
1725044 Nov 2006 EP
1767708 Mar 2007 EP
1771003 Apr 2007 EP
1772014 Apr 2007 EP
1877150 Jan 2008 EP
1887148 Feb 2008 EP
1900200 Mar 2008 EP
1902583 Mar 2008 EP
1908293 Apr 2008 EP
1911288 Apr 2008 EP
1918802 May 2008 EP
2100296 Sep 2009 EP
2105019 Sep 2009 EP
2106665 Oct 2009 EP
2116051 Nov 2009 EP
2124440 Nov 2009 EP
2248341 Nov 2010 EP
2269377 Jan 2011 EP
2271098 Jan 2011 EP
2304953 Apr 2011 EP
2357555 Aug 2011 EP
2364019 Sep 2011 EP
2384001 Nov 2011 EP
2409493 Jan 2012 EP
2477414 Jul 2012 EP
2487919 Aug 2012 EP
2520090 Nov 2012 EP
2567545 Mar 2013 EP
2577437 Apr 2013 EP
2628306 Aug 2013 EP
2632164 Aug 2013 EP
2632165 Aug 2013 EP
2695388 Feb 2014 EP
2207635 Jun 2004 ES
8211463 Jun 1982 FR
2529739 Jan 1984 FR
2891098 Mar 2007 FR
2207838 Feb 1989 GB
2248955 Apr 1992 GB
2290204 Dec 1995 GB
2365649 Feb 2002 GB
2378345 Feb 2003 GB
2479164 Oct 2011 GB
1134855 Oct 2010 HK
1116323 Dec 2010 HK
19913397 Apr 1992 IE
99586 Feb 1998 IL
215133 Dec 2011 IL
222829 Dec 2012 IL
222830 Dec 2012 IL
225525 Jun 2013 IL
180215 Jan 1998 IN
200701744 Nov 2007 IN
200900856 May 2009 IN
200800214 Jun 2009 IN
3759 Mar 1992 IS
60-054324 Mar 1985 JP
63-033988 Feb 1988 JP
63-263985 Oct 1988 JP
2001-241993 Sep 1989 JP
04-373286 Dec 1992 JP
06-054324 Feb 1994 JP
7015720 Jan 1995 JP
7-160292 Jun 1995 JP
7160292 Jun 1995 JP
8095599 Apr 1996 JP
8-265704 Oct 1996 JP
8265704 Oct 1996 JP
10-228437 Aug 1998 JP
10-510131 Sep 1998 JP
11-134273 May 1999 JP
H11-261966 Sep 1999 JP
2000-152234 May 2000 JP
2001-145112 May 2001 JP
2001-203995 Jul 2001 JP
2001-245271 Sep 2001 JP
2001-245291 Sep 2001 JP
2001-514471 Sep 2001 JP
2002-016920 Jan 2002 JP
2002-057952 Feb 2002 JP
2002-112220 Apr 2002 JP
2002-141810 May 2002 JP
2002-208027 Jul 2002 JP
2002-300556 Oct 2002 JP
2002-319991 Oct 2002 JP
2003-506763 Feb 2003 JP
2003-087673 Mar 2003 JP
2003-087785 Mar 2003 JP
2003-529234 Sep 2003 JP
2004-501445 Jan 2004 JP
2004-056777 Feb 2004 JP
2004-110850 Apr 2004 JP
2004-112441 Apr 2004 JP
2004-135932 May 2004 JP
2004-264812 Sep 2004 JP
2004-312283 Nov 2004 JP
2004-533736 Nov 2004 JP
2004-536381 Dec 2004 JP
2004-536681 Dec 2004 JP
2005-033741 Feb 2005 JP
2005-084987 Mar 2005 JP
2005-095599 Mar 2005 JP
8-095599 Apr 2005 JP
2005-123981 May 2005 JP
2005-156996 Jun 2005 JP
2005-519382 Jun 2005 JP
2005-523479 Aug 2005 JP
2005-260289 Sep 2005 JP
2005-309752 Nov 2005 JP
2006-067280 Mar 2006 JP
2006-512838 Apr 2006 JP
2006-246358 Sep 2006 JP
2007-129296 May 2007 JP
2007-522727 Aug 2007 JP
11-88419 Sep 2007 JP
2007-264440 Oct 2007 JP
2008-523880 Jul 2008 JP
2008-535622 Sep 2008 JP
04252727 Apr 2009 JP
2009-159188 Jul 2009 JP
2009-543386 Dec 2009 JP
2011-108155 Jun 2011 JP
2012-080593 Apr 2012 JP
04996603 Aug 2012 JP
05121711 Jan 2013 JP
53-004612 Oct 2013 JP
05331008 Oct 2013 JP
05405819 Feb 2014 JP
10-2005-0001362 Jan 2005 KR
10-2005-0085827 Aug 2005 KR
2006067924 Jun 2006 KR
10-2006-0095821 Sep 2006 KR
2007038111 Apr 2007 KR
20080001298 Jan 2008 KR
2008024189 Mar 2008 KR
2010111739 Oct 2010 KR
2010120187 Nov 2010 KR
2010127240 Dec 2010 KR
2011030640 Mar 2011 KR
2011129477 Dec 2011 KR
20120112683 Oct 2012 KR
2013061149 Jun 2013 KR
2013113925 Oct 2013 KR
1333200 Nov 2013 KR
2008045154 Nov 2013 KR
2013138263 Dec 2013 KR
1032594 Apr 2008 NL
1033929 Apr 2008 NL
2004670 Nov 2011 NL
2004780 Jan 2012 NL
239969 Dec 1994 NZ
99110 Dec 1993 PT
WO 1982002303 Jul 1982 WO
WO 1989008967 Sep 1989 WO
WO 9013972 Nov 1990 WO
WO 9322877 Nov 1993 WO
WO 1994016534 Jul 1994 WO
WO 1994019910 Sep 1994 WO
WO 1994021079 Sep 1994 WO
WO 9515658 Jun 1995 WO
WO 1995032587 Nov 1995 WO
WO 1995033342 Dec 1995 WO
WO 1996014712 May 1996 WO
WO 1996027843 Sep 1996 WO
WO 1996031826 Oct 1996 WO
WO 1996037074 Nov 1996 WO
WO 1996042168 Dec 1996 WO
WO 1997016925 May 1997 WO
WO 1997033434 Sep 1997 WO
WO 1997039583 Oct 1997 WO
WO 1998026595 Jun 1998 WO
WO 9900735 Jan 1999 WO
WO 9904568 Jan 1999 WO
WO 1999000735 Jan 1999 WO
WO 1999030496 Jun 1999 WO
WO 1999030497 Jun 1999 WO
WO 1999030500 Jun 1999 WO
WO 1999030501 Jun 1999 WO
WO 1999035840 Jul 1999 WO
WO 1999041911 Aug 1999 WO
WO 1999056468 Nov 1999 WO
WO 9965232 Dec 1999 WO
WO 9965243 Dec 1999 WO
WO 1999066732 Dec 1999 WO
WO 2000002303 Jan 2000 WO
WO 0007372 Feb 2000 WO
WO 0008967 Feb 2000 WO
WO 0019910 Apr 2000 WO
WO 0038430 Jun 2000 WO
WO 0041397 Jul 2000 WO
WO 0139494 May 2001 WO
WO 0141447 Jun 2001 WO
WO0156293 Aug 2001 WO
WO 0182614 Nov 2001 WO
WO 0192973 Dec 2001 WO
WO 02089487 Jul 2002 WO
WO 02076097 Sep 2002 WO
WO 02076099 Sep 2002 WO
WO 03026232 Mar 2003 WO
WO 03026275 Mar 2003 WO
WO 03047710 Jun 2003 WO
WO 03065683 Aug 2003 WO
WO 03071727 Aug 2003 WO
WO 03091832 Nov 2003 WO
WO 2004012437 Feb 2004 WO
WO 2004018060 Mar 2004 WO
WO2004057609 Jul 2004 WO
WO 2004073310 Aug 2004 WO
WO 2005002215 Jan 2005 WO
WO 2005041122 May 2005 WO
WO 2005053301 Jun 2005 WO
WO2005076575 Aug 2005 WO
WO 05120067 Dec 2005 WO
WO 2006014362 Feb 2006 WO
WO 2006022881 Mar 2006 WO
WO 2006053305 May 2006 WO
WO 2006067697 Jun 2006 WO
WO 2006081634 Aug 2006 WO
WO 2006105480 Oct 2006 WO
WO 2006110268 Oct 2006 WO
WO 2007001797 Jan 2007 WO
WO 2007008319 Jan 2007 WO
WO 2007008355 Jan 2007 WO
WO 2007008356 Jan 2007 WO
WO 2007008357 Jan 2007 WO
WO 2007008358 Jan 2007 WO
WO 2007018722 Feb 2007 WO
WO 2007018726 Feb 2007 WO
WO2008044916 Apr 2008 WO
WO 2008044916 Apr 2008 WO
WO 2008086170 Jul 2008 WO
WO 2008088741 Jul 2008 WO
WO 2008088752 Jul 2008 WO
WO 2008088772 Jul 2008 WO
WO 2008100205 Aug 2008 WO
WO2009038596 Mar 2009 WO
WO 2009038596 Mar 2009 WO
WO 2009099893 Aug 2009 WO
WO 2009099895 Aug 2009 WO
WO 2009105465 Aug 2009 WO
WO 2009110897 Sep 2009 WO
WO 2009114247 Sep 2009 WO
WO 2009155214 Dec 2009 WO
WO 2010044926 Apr 2010 WO
WO 2010054136 May 2010 WO
WO 2010107954 Sep 2010 WO
WO 2011014336 Sep 2010 WO
WO 2011082364 Jul 2011 WO
WO 2011139155 Nov 2011 WO
WO 2011149357 Dec 2011 WO
WO 2012051528 Apr 2012 WO
WO 2012138660 Oct 2012 WO
WO 2013106390 Jul 2013 WO
WO 2013155310 Jul 2013 WO
WO2013184604 Dec 2013 WO
Non-Patent Literature Citations (390)
Entry
ActiveVideo, http://www.activevideo.com/, as printed out in year 2012, 1 pg.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2013/020769, dated Jul. 24, 2014, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/030773, dated Jul. 25, 2014, 8 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/041416, dated Aug. 27, 2014, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 13168509.1, 10 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 13168376-5, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 12767642-7, 12 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP10841764.3, dated Jun. 6, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP11833486.1, dated Apr. 24, 2014, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6-1908, dated Jun. 26, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6-2223, dated May 10, 2011, 7 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP09713486.0, dated Apr. 14, 2014, 6 pgS.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, dated Apr. 4, 2013, 5 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2010339376, dated Apr. 30, 2014, 4 pgs.
ActiveVideo Networks Inc., Examination Report, App. No. EP11749946.7, dated Oct. 8, 2013, 6 pgs.
ActiveVideo Networks Inc., Summons to attend oral-proceeding, Application No. EP09820936-4, Aug. 19, 2014, 4 pgs.
ActiveVideo Networks Inc., International Searching Authority, International Search Report—International application No. PCT/US2010/027724, dated Oct. 28, 2010, together with the Written Opinion of the International Searching Authority, 7 pages.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2014/041430, dated Oct. 9, 2014, 9 pgs.
ActiveVideo Networks, Inc., Notice of Reasons for Rejection, JP2012-547318, dated Sep. 26, 2014, 7 pgs.
Adams, Jerry, “Glasfasernetz für Breitbanddienste in London” [Fiber-optic network for broadband services in London], NTZ Nachrichtentechnische Zeitschrift, vol. 40, No. 7, Jul. 1987, Berlin, DE, pp. 534-536, 5 pgs. No English translation found.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, dated Jan. 31, 2014, 10 pgs.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, dated Apr. 8, 2010, 5 pgs.
Avinity Systems B.V., International Preliminary Report on Patentability, PCT/NL2007/000245, dated Mar. 31, 2009, 12 pgs.
Avinity Systems B.V., International Search Report and Written Opinion, PCT/NL2007/000245, dated Feb. 19, 2009, 18 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, dated Sep. 3, 2013, 4 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, dated Sep. 25, 2012, 6 pgs.
Avinity Systems B. V., Final Office Action, JP-2009-530298, dated Oct. 7, 2014, 8 pgs.
Bird et al., “Customer Access to Broadband Services,” ISSLS 86, The International Symposium on Subscriber Loops and Services, Sep. 29, 1986, Tokyo, JP, 6 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Mar. 7, 2014, 21 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Jul. 16, 2014, 20 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Sep. 24, 2014, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/438,617, dated Oct. 3, 2014, 19 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Mar. 10, 2014, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Dec. 23, 2013, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/438,617, dated May 12, 2014, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Jun. 5, 2013, 18 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Nov. 5, 2014, 26 pgs.
Chang, Shih-Fu, et al., “Manipulation and Compositing of MC-DCT Compressed Video,” IEEE Journal on Selected Areas in Communications, Jan. 1995, vol. 13, No. 1, 11 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Jun. 5, 2014, 18 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, dated Feb. 4, 2013, 18 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Aug. 16, 2012, 18 pgs.
Dukes, Stephen D., “Photonics for cable television system design, Migrating to regional hubs and passive networks,” Communications Engineering and Design, May 1992, 4 pgs.
Ellis, et al., “INDAX: An Operational Interactive Cabletext System,” IEEE Journal on Selected Areas in Communications, vol. SAC-1, No. 2, Feb. 1983, pp. 285-294.
European Patent Office, Supplementary European Search Report, Application No. EP 09 70 8211, dated Jan. 5, 2011, 6 pgs.
Frezza, W., “The Broadband Solution—Metropolitan CATV Networks,” Proceedings of Videotex '84, Apr. 1984, 15 pgs.
Gecsei, J., “Topology of Videotex Networks,” The Architecture of Videotex Systems, Chapter 6, 1983 by Prentice-Hall, Inc.
Gobl, et al., “ARIDEM—a multi-service broadband access demonstrator,” Ericsson Review No. 3, 1996, 7 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Mar. 20, 2014, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, dated Mar. 30, 2012, 16 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, dated Jun. 11, 2014, 14 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, dated Jul. 22, 2013, 7 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, dated Sep. 20, 2011, 8 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, dated Sep. 21, 2012, 9 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,697, dated Mar. 6, 2012, 48 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, dated Mar. 13, 2013, 9 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, dated Mar. 22, 2011, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, dated Mar. 28, 2012, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, dated Dec. 16, 2013, 11 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, dated Aug. 1, 2013, 43 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, dated Aug. 4, 2011, 39 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, dated Oct. 11, 2011, 16 pgs.
Handley et al., “TCP Congestion Window Validation,” RFC 2861, Jun. 2000, Network Working Group, 22 pgs.
Henry et al., “Multidimensional Icons,” ACM Transactions on Graphics, vol. 9, No. 1, Jan. 1990, 5 pgs.
Insight advertisement, “In two years this is going to be the most watched program on TV,” On touch VCR programming, published not later than 2000, 10 pgs.
Isensee et al., “Focus Highlight for World Wide Web Frames,” Nov. 1, 1997, IBM Technical Disclosure Bulletin, vol. 40, No. 11, pp. 89-90.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000400, dated Jul. 14, 2009, 10 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000450, dated Jan. 26, 2009, 9 pgs.
Kato, Y., et al., “A Coding Control algorithm for Motion Picture Coding Accomplishing Optimal Assignment of Coding Distortion to Time and Space Domains,” Electronics and Communications in Japan, Part 1, vol. 72, No. 9, 1989, 11 pgs.
Koenen, Rob, “MPEG-4 Overview—Overview of the MPEG-4 Standard,” Internet Citation, Mar. 2001, http://mpeg.telecomitalialab.com/standards/mpeg-4/mpeg-4.htm, retrieved May 9, 2002, 74 pgs.
Konaka, M. et al., “Development of Sleeper Cabin Cold Storage Type Cooling System,” SAE International, The Engineering Society for Advancing Mobility Land Sea Air and Space, SAE 2000 World Congress, Detroit, Michigan, Mar. 6-9, 2000, 7 pgs.
Le Gall, Didier, “MPEG: A Video Compression Standard for Multimedia Applications”, Communication of the ACM, vol. 34, No. 4, Apr. 1991, New York, NY, 13 pgs.
Langenberg, E., et al., “Integrating Entertainment and Voice on the Cable Network,” SCTE, Conference on Emerging Technologies, Jan. 6-7, 1993, New Orleans, Louisiana, 9 pgs.
Large, D., “Tapped Fiber vs. Fiber-Reinforced Coaxial CATV Systems”, IEEE LCS Magazine, Feb. 1990, 7 pgs.
Mesiya, M.F, “A Passive Optical/Coax Hybrid Network Architecture for Delivery of CATV, Telephony and Data Services,” 1993 NCTA Technical Papers, 7 pgs.
“MSDL Specification Version 1.1,” International Organisation for Standardisation, Organisation Internationale de Normalisation, ISO/IEC JTC1/SC29/WG11, Coding of Moving Pictures and Audio, N1246, MPEG96/Mar. 1996, 101 pgs.
Noguchi, Yoshihiro, et al., “MPEG Video Compositing in the Compressed Domain,” IEEE International Symposium on Circuits and Systems, vol. 2, May 1, 1996, 4 pgs.
Regis, Notice of Allowance U.S. Appl. No. 13/273,803, dated Sep. 2, 2014, 8 pgs.
Regis, Notice of Allowance U.S. Appl. No. 13/273,803, dated May 14, 2014, 8 pgs.
Regis, Final Office Action U.S. Appl. No. 13/273,803, dated Oct. 11, 2013, 23 pgs.
Regis, Office Action U.S. Appl. No. 13/273,803, dated Mar. 27, 2013, 32 pgs.
Richardson, Ian E.G., “H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia,” John Wiley & Sons, US, 2003, ISBN: 0-470-84837-5, pp. 103-105, 149-152, and 164.
Rose, K., “Design of a Switched Broad-Band Communications Network for Interactive Services,” IEEE Transactions on Communications, vol. com-23, No. 1, Jan. 1975, 7 pgs.
Saadawi, Tarek N., “Distributed Switching for Data Transmission over Two-Way CATV,” IEEE Journal on Selected Areas in Communications, vol. SAC-3, No. 2, Mar. 1985, 7 pgs.
Schrock, “Proposal for a Hub Controlled Cable Television System Using Optical Fiber,” IEEE Transactions on Cable Television, vol. CATV-4, No. 2, Apr. 1979, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Sep. 22, 2014, 5 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Feb. 27, 2014, 14 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 13/311,203, dated Sep. 13, 2013, 20 pgs.
Sigmon, Office Action, U.S. Appl. No. 13/311,203, dated May 10, 2013, 21 pgs.
Smith, Brian C., et al., “Algorithms for Manipulating Compressed Images,” IEEE Computer Graphics and Applications, vol. 13, No. 5, Sep. 1, 1993, 9 pgs.
Smith, J., et al., “Transcoding Internet Content for Heterogeneous Client Devices,” Proceedings of the 1998 IEEE International Symposium on Circuits and Systems (ISCAS '98), Monterey, CA, USA, May 31-Jun. 3, 1998, New York, NY, USA, IEEE, 4 pgs.
Stoll, G., et al., “GMF4iTV: Neue Wege zur Interaktivität mit bewegten Objekten beim digitalen Fernsehen” [GMF4iTV: New approaches to interactivity with moving objects in digital television], Fkt Fernseh und Kinotechnik, Fachverlag Schiele & Schön GmbH, Berlin, DE, vol. 60, No. 4, Jan. 1, 2006, ISSN: 1430-9947, 9 pgs. No English translation found.
Tamitani et al., “An Encoder/Decoder Chip Set for the MPEG Video Standard,” 1992 IEEE International Conference on Acoustics, vol. 5, Mar. 1992, San Francisco, CA, 4 pgs.
Terry, Jack, “Alternative Technologies and Delivery Systems for Broadband ISDN Access”, IEEE Communications Magazine, Aug. 1992, 7 pgs.
Thompson, Jack, “DTMF-TV, The Most Economical Approach to Interactive TV,” GNOSTECH Incorporated, NCF'95 Session T-38-C, 8 pgs.
Thompson, John W. Jr., “The Awakening 3.0: PCs, TSBs, or DTMF-TV—Which Telecomputer Architecture is Right for the Next Generation's Public Network?,” GNOSTECH Incorporated, 1995 The National Academy of Sciences, downloaded from the Unpredictable Certainty: White Papers, http://www.nap.edu/catalog/6062.html, pp. 546-552.
Tobagi, Fouad A., “Multiaccess Protocols in Packet Communication Systems,” IEEE Transactions on Communications, vol. Com-28, No. 4, Apr. 1980, 21 pgs.
Toms, N., “An Integrated Network Using Fiber Optics (Info) for the Distribution of Video, Data, and Telephone in Rural Areas,” IEEE Transactions on Communication, vol. Com-26, No. 7, Jul. 1978, 9 pgs.
Trott, A., et al.“An Enhanced Cost Effective Line Shuffle Scrambling System with Secure Conditional Access Authorization,” 1993 NCTA Technical Papers, 11 pgs.
Jurgen, “Two-way applications for cable television systems in the '70s,” IEEE Spectrum, Nov. 1971, 16 pgs.
Van Beek, P., “Delay-Constrained Rate Adaptation for Robust Video Transmission over Home Networks,” Image Processing, 2005, ICIP 2005, IEEE International Conference, Sep. 2005, vol. 2, No. 11, 4 pgs.
Van der Star, Jack A. M., “Video on Demand Without Compression: A Review of the Business Model, Regulations and Future Implication,” Proceedings of PTC'93, 15th Annual Conference, 12 pgs.
Welzenbach et al., “The Application of Optical Systems for Cable TV,” AEG-Telefunken, Backnang, Federal Republic of Germany, ISSLS Sep. 15-19, 1980, Proceedings IEEE Cat. No. 80 CH1565-1, 7 pgs.
Yum, T. S. P., “Hierarchical Distribution of Video with Dynamic Port Allocation,” IEEE Transactions on Communications, vol. 39, No. 8, Aug. 1, 1991, XP000264287, 7 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP08713106.6-1908, dated Aug. 5, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011258972, dated Nov. 19, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011315950, dated Dec. 17, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, AU2011249132, dated Jan. 7, 2016, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP13168509.1-1908, dated Sep. 30, 2015, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, JP2013534034, dated Jan. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14722897.7, dated Oct. 28, 2015, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14740004.8, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP08713106.6-1908, dated Jul. 9, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, EP13168509.1-1908, dated Sep. 3, 2015, 2 pgs.
ActiveVideo Networks, Inc., Decision to Grant, JP2014100460, dated Jul. 24, 2015, 5 pgs.
ActiveVideo Networks, Inc., Decision to Refuse a European Patent Application, EP08705578.6, dated Nov. 26, 2015, 10 pgs.
ActiveVideo Networks Inc., Examination Report No. 2, AU2011249132, dated May 29, 2015, 4 pgs.
ActiveVideo Networks Inc., Examination Report No. 2, AU2011315950, dated Jun. 25, 2015, 3 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP13735906.3, dated Nov. 11, 2015, 10 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2010-7019512, dated Jul. 15, 2015, 15 pgs.
ActiveVideo Networks, Inc., KIPO's 2nd-Notice of Preliminary Rejection, KR10-2010-7019512, dated Feb. 12, 2016, 5 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2010-7021116, dated Jul. 13, 2015, 19 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2011-7024417, dated Feb. 18, 2016, 16 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2015/028072, dated Aug. 7, 2015, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2014/030773, dated Sep. 15, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2014/041430, dated Dec. 8, 2015, 6 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2014/041416, dated Dec. 8, 2015, 6 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027803, dated Jun. 24, 2015, 18 pgs.
ActiveVideo, International Search Report and Written Opinion, PCT/US2015/027804, dated Jun. 25, 2015, 10 pgs.
ActiveVideo, Communication Pursuant to Article 94(3) EPC, EP12767642.7, dated Sep. 4, 2015, 4 pgs.
ActiveVideo, Communication Pursuant to Article 94(3) EPC, EP10841764.3, dated Dec. 18, 2015, 6 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP13735906.3, dated Nov. 27, 2015, 1 pg.
ActiveVideo Networks, Inc., Office Action, JP2013534034, dated Jun. 16, 2015, 6 pgs.
ActiveVideo, Notice of German Patent, DE602008040474.9, dated Jan. 6, 2016, 4 pgs.
ActiveVideo Networks B.V., Office Action, IL222830, dated Jun. 28, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/262,674, dated Sep. 30, 2015, 7 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 21, 2015, 6 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Aug. 5, 2015, 5 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Aug. 3, 2015, 18 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, dated Aug. 12, 2015, 13 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/911,948, dated Jul. 10, 2015, 5 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Jul. 9, 2015, 28 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Aug. 14, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/298,796, dated Sep. 11, 2015, 11 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 17, 2016, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated Dec. 4, 2015, 30 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, dated Dec. 11, 2015, 25 pgs.
Dahlby, Office Action U.S. Appl. No. 12/651,203, dated Jul. 2, 2015, 25 pgs.
Gecsei, J., “Adaptation in Distributed Multimedia Systems,” IEEE Multimedia, IEEE Service Center, New York, NY, vol. 4, No. 2, Apr. 1, 1997, 10 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,722, dated Feb. 17, 2016, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, dated Jul. 2, 2015, 20 pgs.
Jacob, Bruce, “Memory Systems: Cache, DRAM, Disk,” Oct. 19, 2007, The Cache Layer, Chapter 22, p. 739.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 22, 2016, 14 pgs.
Ohta, K., et al., “Selective Multimedia Access Protocol for Wireless Multimedia Communication,” Communications, Computers and Signal Processing, 1997, IEEE Pacific Rim Conference NCE Victoria, BC, Canada, Aug. 1997, vol. 1, 4 pgs.
Wei, S., “QoS Tradeoffs Using an Application-Oriented Transport Protocol (AOTP) for Multimedia Applications Over IP,” Sep. 23-26, 1999, Proceedings of the Third International Conference on Computational Intelligence and Multimedia Applications, New Delhi, India, 5 pgs.
AC-3 digital audio compression standard, Extract, Dec. 20, 1995, 11 pgs.
ActiveVideo Networks BV, International Preliminary Report on Patentability, PCT/NL2011/050308, dated Sep. 6, 2011, 8 pgs.
ActiveVideo Networks BV, International Search Report and Written Opinion, PCT/NL2011/050308, dated Sep. 6, 2011, 8 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2011/056355, dated Apr. 16, 2013, 4 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2012/032010, dated Oct. 8, 2013, 4 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2011/056355, dated Apr. 13, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2012/032010, dated Oct. 10, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/020769, dated May 9, 2013, 9 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/036182, dated Jul. 29, 2013, 12 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2009/032457, dated Jul. 22, 2009, 7 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 09820936.4, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 10754084.1, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 10841764.3, 16 pgs.
ActiveVideo Networks Inc., Extended EP Search Rpt, Application No. 11833486.1, 6 pgs.
ActiveVideo Networks Inc., Korean Intellectual Property Office, International Search Report, PCT/US2009/032457, dated Jul. 22, 2009, 7 pgs.
Annex C—Video buffering verifier, information technology—generic coding of moving pictures and associated audio information: video, Feb. 2000, 6 pgs.
Antonoff, Michael, “Interactive Television,” Popular Science, Nov. 1992, 12 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163713.6, 10 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163712.8, 10 pgs.
Benjelloun, A summation algorithm for MPEG-1 coded audio signals: a first step towards audio processed domain, 2000, 9 pgs.
Broadhead, Direct manipulation of MPEG compressed digital audio, Nov. 5-9, 1995, 41 pgs.
Cable Television Laboratories, Inc., “CableLabs Asset Distribution Interface Specification, Version 1.1”, May 5, 2006, 33 pgs.
CD 11172-3, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, Jan. 1, 1992, 39 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, dated Dec. 23, 2010, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, dated Jan. 12, 2012, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, dated Jul. 19, 2012, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,189, dated Oct. 12, 2011, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, dated Mar. 23, 2011, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 13/609,183, dated Aug. 26, 2013, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, dated Feb. 5, 2009, 30 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, dated Aug. 25, 2010, 17 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, dated Jul. 6, 2010, 35 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,176, dated Oct. 1, 2010, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,183, dated Apr. 13, 2011, 16 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,177, dated Oct. 26, 2010, 12 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,181, dated Jun. 20, 2011, 21 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, dated May 12, 2009, 32 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, dated Aug. 19, 2008, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, dated Nov. 19, 2009, 34 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,176, dated May 6, 2010, 7 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, dated Mar. 29, 2011, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, dated Aug. 3, 2011, 26 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, dated Mar. 29, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, dated Feb. 11, 2011, 19 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, dated Mar. 29, 2010, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,182, dated Feb. 23, 2010, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Dec. 6, 2010, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Sep. 15, 2011, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Feb. 19, 2010, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, dated Jul. 20, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Nov. 9, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Mar. 15, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated Jul. 23, 2009, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, dated May 26, 2011, 14 pgs.
Craig, Office Action, U.S. Appl. No. 13/609,183, dated May 9, 2013, 7 pgs.
Pavlovskaia, Office Action, JP 2011-516499, dated Feb. 14, 2014, 19 pgs.
Digital Audio Compression Standard (AC-3, E-AC-3), Advanced Television Systems Committee, Jun. 14, 2005, 236 pgs.
European Patent Office, Extended European Search Report for International Application No. PCT/US2010/027724, dated Jul. 24, 2012, 11 pgs.
FFMPEG, http://www.ffmpeg.org, downloaded Apr. 8, 2010, 8 pgs.
FFMPEG-0.4.9 Audio Layer 2 Tables Including Fixed Psycho Acoustic Model, 2001, 2 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 11/620,593, dated May 23, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, dated Feb. 7, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, dated Sep. 28, 2011, 15 pgs.
Herr, Final Office Action, U.S. Appl. No. 11/620,593, dated Sep. 15, 2011, 104 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Mar. 19, 2010, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Apr. 21, 2009, 27 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Dec. 23, 2009, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Jan. 24, 2011, 96 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, dated Aug. 27, 2010, 41 pgs.
Herre, Thoughts on an SAOC Architecture, Oct. 2006, 9 pgs.
Hoarty, The Smart Headend—A Novel Approach to Interactive Television, Montreux Int'l TV Symposium, Jun. 9, 1995, 21 pgs.
ICTV, Inc., International Preliminary Report on Patentability, PCT/US2006/022585, dated Jan. 29, 2008, 9 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022585, dated Oct. 12, 2007, 15 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000419, dated May 15, 2009, 20 pgs.
ICTV, Inc., International Search Report / Written Opinion; PCT/US2006/022533, dated Nov. 20, 2006; 8 pgs.
Isovic, Timing constraints of MPEG-2 decoding for high quality video: misconceptions and realistic assumptions, Jul. 2-4, 2003, 10 pgs.
MPEG-2 Video elementary stream supplemental information, Dec. 1999, 12 pgs.
Ozer, Video Compositing 101, available from http://www.emedialive.com, Jun. 2, 2004, 5 pgs.
Porter, Compositing Digital Images, 18 Computer Graphics (No. 3), Jul. 1984, pp. 253-259.
RSS Advisory Board, “RSS 2.0 Specification”, published Oct. 15, 2007. Not Found.
SAOC use cases, draft requirements and architecture, Oct. 2006, 16 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 11/258,602, dated Feb. 23, 2009, 15 pgs.
Sigmon, Office Action, U.S. Appl. No. 11/258,602, dated Sep. 2, 2008, 12 pgs.
TAG Networks, Inc., Communication pursuant to Article 94(3) EPC, European Patent Application, 06773714.8, dated May 6, 2009, 3 pgs.
TAG Networks Inc., Decision to Grant a Patent, JP 2009-544985, dated Jun. 28, 2013, 1 pg.
TAG Networks Inc., IPRP, PCT/US2006/010080, Oct. 16, 2007, 6 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024194, Jan. 10, 2008, 7 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024195, Apr. 1, 2009, 11 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024196, Jan. 10, 2008, 6 pgs.
TAG Networks Inc., International Search Report, PCT/US2008/050221, dated Jun. 12, 2008, 9 pgs.
TAG Networks Inc., Office Action, CN 200680017662.3, dated Apr. 26, 2010, 4 pgs.
TAG Networks Inc., Office Action, EP 06739032.8, dated Aug. 14, 2009, 4 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, dated May 6, 2009, 3 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, dated Jan. 12, 2010, 4 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, dated Oct. 1, 2012, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, dated Aug. 8, 2011, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-520254, dated Oct. 20, 2011, 2 pgs.
TAG Networks, IPRP, PCT/US2008/050221, Jul. 7, 2009, 6 pgs.
TAG Networks, International Search Report, PCT/US2010/041133, dated Oct. 19, 2010, 13 pgs.
TAG Networks, Office Action, CN 200880001325.4, dated Jun. 22, 2011, 4 pgs.
TAG Networks, Office Action, JP 2009-544985, dated Feb. 25, 2013, 3 pgs.
Talley, A general framework for continuous media transmission control, Oct. 13-16, 1997, 10 pgs.
The Toolame Project, Psych_nl.c, 1999, 1 pg.
Todd, AC-3: flexible perceptual coding for audio transmission and storage, Feb. 26-Mar. 1, 1994, 16 pgs.
Tudor, MPEG-2 Video Compression, Dec. 1995, 15 pgs.
TVHEAD, Inc., First Examination Report, IN 1744/MUMNP/2007, dated Dec. 30, 2013, 6 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/010080, dated Jun. 20, 2006, 3 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024194, dated Dec. 15, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024195, dated Nov. 29, 2006, 9 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024196, dated Dec. 11, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024197, dated Nov. 28, 2006, 9 pgs.
Vernon, Dolby digital: audio coding for digital television and storage applications, Aug. 1999, 18 pgs.
Wang, A beat-pattern based error concealment scheme for music delivery with burst packet loss, Aug. 22-25, 2001, 4 pgs.
Wang, A compressed domain beat detector using MP3 audio bitstream, Sep. 30-Oct. 5, 2001, 9 pgs.
Wang, A multichannel audio coding algorithm for inter-channel redundancy removal, May 12-15, 2001, 6 pgs.
Wang, An excitation level based psychoacoustic model for audio compression, Oct. 30-Nov. 4, 1999, 4 pgs.
Wang, Energy compaction property of the MDCT in comparison with other transforms, Sep. 22-25, 2000, 23 pgs.
Wang, Exploiting excess masking for audio compression, Sep. 2-5, 1999, 4 pgs.
Wang, Schemes for re-compressing MP3 audio bitstreams, Nov. 30-Dec. 3, 2001, 5 pgs.
Wang, Selected advances in audio compression and compressed domain processing, Aug. 2001, 68 pgs.
Wang, The impact of the relationship between MDCT and DFT on audio compression, Dec. 13-15, 2000, 9 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2013/036182, dated Oct. 14, 2014, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6, dated Jun. 25, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 161(2) & 162 EPC, EP13775121.0, dated Jan. 20, 2015, 3 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, dated Jul. 21, 2014, 3 pgs.
ActiveVideo Networks Inc., Certificate of Patent JP5675765, dated Jan. 9, 2015, 3 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Dec. 24, 2014, 14 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Feb. 26, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Jan. 5, 2015, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Dec. 26, 2014, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, dated Jan. 29, 2015, 11 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, dated Dec. 3, 2014, 19 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Dec. 8, 2014, 10 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, dated Nov. 28, 2014, 18 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Nov. 18, 2014, 9 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, dated Mar. 2, 2015, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Dec. 19, 2014, 5 pgs.
TAG Networks Inc, Decision to Grant a Patent, JP 2008-506474, dated Oct. 4, 2013, 5 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/438,617, dated May 22, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, dated Apr. 23, 2015, 8 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/262,674, dated May 21, 2015, 7 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, dated Apr. 1, 2015, 10 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, dated Apr. 14, 2015, 5 pgs.
Avinity Systems B.V., Pre-Trial Reexamination Report, JP2009-530298, dated Apr. 24, 2015, 6 pgs.
ActiveVideo Networks Inc., Decision to Refuse a European Patent Application (Art. 97(2) EPC), EP09820936.4, dated Feb. 20, 2015, 4 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, 10754084.1, dated Feb. 10, 2015, 12 pgs.
ActiveVideo Networks Inc., Communication under Rule 71(3) EPC, Intention to Grant, EP08713106.6, dated Feb. 19, 2015, 12 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2014-100460, dated Jan. 15, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2013-509016, dated Dec. 24, 2014, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated Mar. 16, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, dated Mar. 18, 2015, 11 pgs.
Craig, Decision on Appeal (Reversed), U.S. Appl. No. 11/178,177, dated Feb. 25, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,177, dated Mar. 5, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,181, dated Feb. 13, 2015, 8 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK10102800.4, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Certificate of Patent, IL215133, dated Mar. 31, 2016, 1 pg.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP14722897.7, dated Jun. 29, 2016, 6 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3) EPC, EP11738835.5, dated Jun. 10, 2016, 3 pgs.
ActiveVideo Networks, Inc., Partial Supplementary Extended European Search Report, EP13775121.0, dated Jun. 14, 2016, 7 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2015/000502, dated May 6, 2016, 8 pgs.
Avinity Systems B.V., Notice of Grant, JP2009-530298, dated Apr. 12, 2016, 3 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 25, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, dated Aug. 1, 2016, 32 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, HK14101604, dated Sep. 8, 2016, 4 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP14736535.7, dated Jan. 26, 2016, 2 pgs.
ActiveVideo Networks, Inc., Decision to Refuse an EP Patent Application, EP 10754084.1, dated Nov. 3, 2016, 4 pgs.
ActiveVideo Networks, Inc., Notice of Reasons for Rejection, JP2015-159309, dated Aug. 29, 2016, 11 pgs.
ActiveVideo Networks, Inc., Denial of Entry of Amendment, JP2013-509016, dated Aug. 30, 2016, 7 pgs.
ActiveVideo Networks, Inc., Notice of Final Rejection, JP2013-509016, dated Aug. 30, 2016, 3 pgs.
ActiveVideo, Notice of Reasons for Rejection, JP2013-509016, dated Dec. 3, 2015, 7 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/028072, dated Nov. 1, 2016, 7 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/027803, dated Oct. 25, 2016, 8 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2015/027804, dated Oct. 25, 2016, 6 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/040547, dated Sep. 19, 2016, 6 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP13735906.3, dated Jul. 18, 2016, 5 pgs.
Avinity Systems B.V., Decision to Refuse an EP Patent Application, EP07834561.8, dated Oct. 10, 2016, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, dated Nov. 2, 2016, 20 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, dated Feb. 8, 2016, 13 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, dated May 16, 2016, 23 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/737,097, dated Oct. 20, 2016, 22 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Sep. 23, 2016, 28 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15785776.4, dated Dec. 8, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721482.6, dated Dec. 13, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(1) and 162 EPC, EP15721483.4, dated Dec. 15, 2016, 2 pgs.
ActiveVideo Networks, Inc., Communication Under Rule 71(3), Intention to Grant, EP11833486.1, dated Apr. 21, 2017, 7 pgs.
ActiveVideo Networks, Inc., KIPO's Notice of Preliminary Rejection, KR10-2012-7031648, dated Mar. 27, 2017, 4 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/051283, dated Nov. 29, 2016, 10 pgs.
ActiveVideo, Intention to Grant, EP12767642.7, dated Jan. 2, 2017, 15 pgs.
ActiveVideo Networks, Inc., Intention to Grant, EP06772771.9, dated Jan. 12, 2017, 5 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, dated Mar. 31, 2017, 21 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/217,108, dated Apr. 13, 2016, 8 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/696,462, dated Feb. 8, 2017, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/139,166, dated Feb. 28, 2017, 10 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 14/217,108, dated Dec. 1, 2016, 9 pgs.
Dahlby, Advisory Action, U.S. Appl. No. 12/651,203, dated Nov. 21, 2016, 5 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Apr. 12, 2017, 29 pgs.
McElhatten, Final Office Action, U.S. Appl. No. 14/698,633, dated Aug. 18, 2016, 16 pgs.
McElhatten, Office Action, U.S. Appl. No. 14/698,633, dated Feb. 10, 2017, 15 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, dated May 31, 2017, 36 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/696,462, dated Jul. 21, 2017, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 14/217,108, dated Aug. 10, 2017, 14 pgs.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP12767642.7, dated May 11, 2017, 2 pgs.
ActiveVideo Networks, Inc., Transmission of Certificate of Grant, EP12767642.7, dated Jun. 7, 2017, 1 pg.
ActiveVideo Networks, Inc., Intention to Grant, EP06772771.9, dated Jun. 12, 2017, 5 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP14722897.7, dated Jul. 19, 2017, 7 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP14740004.8, dated Aug. 24, 2017, 7 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP15785776.4, dated Aug. 18, 2017, 8 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2016/064972, dated Feb. 17, 2017, 9 pgs.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP06772771.9, dated Oct. 26, 2017, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP06772771.9, dated Nov. 22, 2017, 1 pg.
ActiveVideo Networks, Inc., Decision to Grant a European Patent, EP11833486.1, dated Oct. 26, 2017, 2 pgs.
ActiveVideo Networks, Inc., Certificate of Grant, EP11833486.1, dated Nov. 22, 2017, 1 pg.
ActiveVideo Networks, Inc., Certificate of Grant, EP12767642.7, dated Jun. 7, 2017, 1 pg.
ActiveVideo Networks, Inc., Communication Pursuant to Article 94(3), EP15721482.6, dated Nov. 20, 2017, 7 pgs.
ActiveVideo Networks, Inc., Notification of German Patent, DE602012033235.2, dated Jun. 13, 2017, 3 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/139,166, dated Nov. 22, 2017, 9 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 161(2) and 162 EPC, EP16818840.7, dated Feb. 20, 2018, 3 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP15873840.1, dated May 18, 2018, 9 pgs.
ActiveVideo Networks, Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP15873840.1, dated Jun. 6, 2018, 1 pg.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2016/040547, dated Jan. 2, 2018, 5 pgs.
ActiveVideo Networks, Inc., Extended European Search Report, EP16818840.7, dated Nov. 30, 2018, 5 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2016/064972, dated Jun. 14, 2018, 7 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2017/068293, dated Mar. 19, 2018, 7 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/199,503, dated Feb. 7, 2018, 12 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/199,503, dated Aug. 16, 2018, 13 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/199,503, dated Dec. 12, 2018, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/261,791, dated Feb. 21, 2018, 26 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 15/261,791, dated Oct. 16, 2018, 17 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 15/139,166, dated Oct. 1, 2018, 7 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/728,430, dated Jul. 27, 2018, 6 pgs.
Brockmann, Office Action, U.S. Appl. No. 15/791,198, dated Dec. 21, 2018, 18 pgs.
Hoeben, Final Office Action, U.S. Appl. No. 14/757,935, dated Feb. 28, 2018, 33 pgs.
Hoeben, Office Action, U.S. Appl. No. 14/757,935, dated Jun. 28, 2018, 37 pgs.
Hoeben, Office Action, U.S. Appl. No. 15/851,589, dated Sep. 21, 2018, 19 pgs.
Visscher, Office Action, U.S. Appl. No. 15/368,527, dated Feb. 23, 2018, 23 pgs.
Visscher, Final Office Action, U.S. Appl. No. 15/368,527, dated Sep. 11, 2018, 25 pgs.
Related Publications (1)
Number Date Country
20140289627 A1 Sep 2014 US
Provisional Applications (1)
Number Date Country
61793898 Mar 2013 US