Interactive encoded content system including object models for viewing on a remote device

Information

  • Patent Grant
  • Patent Number: 9,042,454
  • Date Filed: Friday, January 11, 2008
  • Date Issued: Tuesday, May 26, 2015
Abstract
A system for creating composite encoded video from two or more encoded video sources in the encoded domain. In response to user input, a markup language-based graphical layout is retrieved. The graphical layout includes frame locations within a composite frame for at least a first encoded source and a second encoded source. The system either retrieves or receives the first and second encoded sources. The sources include block-based transform encoded data. The system also includes a stitcher module for stitching together the first encoded source and the second encoded source according to the frame locations of the graphical layout to form an encoded frame. The system outputs an encoded video stream that is transmitted to a client device associated with the user. In response to further user input, the system updates the state of an object model and replaces all or a portion of one or more frames of the encoded video stream. The system may be used with MPEG encoded video.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

U.S. patent application Ser. No. 12/008,722, entitled “MPEG Objects and Systems and Methods for Using MPEG Objects,” assigned to the same assignee and filed contemporaneously herewith on Jan. 11, 2008, is related generally to the subject matter of the present application and is incorporated herein by reference in its entirety.


The present application claims priority from U.S. provisional application Ser. No. 60/884,773, filed Jan. 12, 2007, Ser. No. 60/884,744, filed Jan. 12, 2007, and Ser. No. 60/884,772, filed Jan. 12, 2007, the full disclosures of which are hereby incorporated herein by reference.


TECHNICAL FIELD AND BACKGROUND ART

The present invention relates to systems and methods for providing interactive content to a remote device and more specifically to systems and methods wherein an object model is associated with pre-encoded video content.


In cable television systems, the cable headend transmits content to one or more subscribers, wherein the content is transmitted in an encoded form. Typically, the content is encoded as digital MPEG video, and each subscriber has a set-top box or cable card that is capable of decoding the MPEG video stream. Beyond providing linear content, cable providers can now provide interactive content, such as web pages or walled-garden content. As the Internet has become more dynamic, with web pages including video content that requires applications or scripts for decoding, cable providers have adapted to allow subscribers to view these dynamic web pages. In order to composite a dynamic web page for transmission to a requesting subscriber in encoded form, the cable headend retrieves the requested web page and renders it. Thus, the cable headend must first decode any encoded content that appears within the dynamic web page. For example, if a video is to be played on the web page, the headend must retrieve the encoded video and decode each frame of the video. The cable headend then renders each frame to form a sequence of bitmap images of the Internet web page. The web page can therefore only be composited together if all of the content that forms the web page is first decoded. Once the composite frames are complete, the composited video is sent to an encoder, such as an MPEG encoder, to be re-encoded. The compressed MPEG video frames are then sent in an MPEG video stream to the user's set-top box.


Creating such composite encoded video frames in a cable television network requires intensive CPU and memory processing, since all encoded content must first be decoded, then composited, rendered, and re-encoded. In particular, the cable headend must decode and re-encode all of the content in real-time. Thus, allowing users to operate in an interactive environment with dynamic web pages is quite costly to cable operators because of the required processing. Such systems have the further drawback that the image quality is degraded due to re-encoding of the encoded video.


SUMMARY OF THE INVENTION

Embodiments of the invention disclose a system for encoding at least one composite encoded video frame for display on a display device. The system includes a markup language-based graphical layout, the graphical layout including frame locations within the composite frame for at least a first encoded source and a second encoded source, each of which includes block-based transform encoded data. Additionally, the system has a stitcher module for stitching together the first encoded source and the second encoded source according to the frame locations of the graphical layout. The stitcher forms an encoded frame without having to decode the block-based transform encoded data for at least the first source. The encoded video may be encoded using one of the MPEG standards, AVS, VC-1, or another block-based encoding protocol.


In certain embodiments of the invention, the system allows a user to interact with graphical elements on a display device. The processor maintains state information about one or more graphical elements identified in the graphical layout. The graphical elements in the graphical layout are associated with one of the encoded sources. A user transmits a request to change the state of one of the graphical elements through a client device in communication with the system. The request for the change in state causes the processor to register the change in state and to obtain a new encoded source. The processor causes the stitcher to stitch the new encoded source in place of the encoded source representing the graphical element. The processor may also execute or interpret computer code associated with the graphical element.


For example, the graphical element may be a button object that has a plurality of states, associated encoded content for each state, and methods associated with each of the states. The system may also include a transmitter for transmitting the composited video content to the client device. The client device can then decode the composited video content and cause the composited video content to be displayed on a display device. In certain embodiments each graphical element within the graphical layout is associated with one or more encoded MPEG video frames or portions of a video frame, such as one or more macroblocks or slices. The compositor may use a single graphical element repeatedly within the MPEG video stream. For example, the button may be only a single video frame in one state and a single video frame in another state, and the button may be composited together with MPEG encoded video content wherein the encoded macroblocks representing the button are stitched into the MPEG encoded video content in each frame.


Other embodiments of the invention disclose a system for creating one or more composite MPEG video frames forming an MPEG video stream. The MPEG video stream is provided to a client device that includes an MPEG decoder. The client device decodes the MPEG video stream and outputs the video to a display device. The composite MPEG video frames are created by obtaining a graphical layout for a video frame. The graphical layout includes frame locations within the composite MPEG video frame for at least a first MPEG source and a second MPEG source. Based upon the graphical layout, the first and second MPEG sources are obtained. The first and second MPEG sources are provided to a stitcher module. The stitcher module stitches together the first MPEG source and the second MPEG source according to the frame locations of the graphical layout to form an MPEG frame without having to decode the macroblock data of the MPEG sources. In certain embodiments, the MPEG sources are only decoded to the slice layer and a processor maintains the positions of the slices within the frame for the first and second MPEG sources. This process is repeated for each frame of MPEG data in order to form an MPEG video stream.


In certain embodiments, the system includes a groomer. The groomer grooms the MPEG sources so that each MPEG element of the MPEG source is converted to an MPEG P-frame format. The groomer module may also identify any macroblocks in the second MPEG source that include motion vectors referencing other macroblocks in a section of the first MPEG source and re-encode those macroblocks as intracoded macroblocks.


The system may include an association between an MPEG source and a method for the MPEG source forming an MPEG object. In such a system, a processor would receive a request from a client device and in response to the request, a method of the MPEG object would be used. The method may change the state of the MPEG object and cause the selection of a different MPEG source. Thus, the stitcher may replace a first MPEG source with a third MPEG source and stitch together the third and second MPEG sources to form a video frame. The video frame would be streamed to the client device and the client device could decode the updated MPEG video frame and display the updated material on the client's display. For example, an MPEG button object may have an “on” state and an “off” state and the MPEG button object may also include two MPEG graphics composed of a plurality of macroblocks forming slices. In response to a client requesting to change the state of the button from off to on, a method would update the state and cause the MPEG encoded graphic representing an “on” button to be passed to the stitcher.


In certain embodiments, the video frame may be constructed from an unencoded graphic or a graphic that is not MPEG encoded and a groomed MPEG video source. The unencoded graphic may first be rendered. For example, a background may be rendered as a bit map. The background may then be encoded as a series of MPEG macroblocks divided up into slices. The stitcher can then stitch together the background and the groomed MPEG video content to form an MPEG video stream. The background may then be saved for later reuse. In such a configuration, the background would have cut-out regions wherein the slices in those regions would have no associated data, thus video content slices could be inserted into the cut-out. In other embodiments, real-time broadcasts may be received and groomed for creating MPEG video streams.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features of the invention will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:



FIG. 1 is a block diagram showing a communications environment for implementing one version of the present invention;



FIG. 1A shows the regional processing offices and the video content distribution network;



FIG. 1B is a sample composite stream presentation and interaction layout file;



FIG. 1C shows the construction of a frame within the authoring environment;



FIG. 1D shows breakdown of a frame by macroblocks into elements;



FIG. 2 is a diagram showing multiple sources composited onto a display;



FIG. 3 is a diagram of a system incorporating grooming;



FIG. 4 is a diagram showing a video frame prior to grooming, after grooming, and with a video overlay in the groomed section;



FIG. 5 is a diagram showing how grooming is done, for example, removal of B-frames;



FIG. 6 is a diagram showing an MPEG frame structure;



FIG. 7 is a flow chart showing the grooming process for I, B, and P frames;



FIG. 8 is a diagram depicting removal of region boundary motion vectors;



FIG. 9 is a diagram showing the reordering of the DCT coefficients;



FIG. 10 shows an alternative groomer;



FIG. 11 shows an environment for a stitcher module;



FIG. 12 is a diagram showing video frames starting in random positions relative to each other;



FIG. 13 is a diagram of a display with multiple MPEG elements composited within the picture;



FIG. 14 is a diagram showing the slice breakdown of a picture consisting of multiple elements;



FIG. 15 is a diagram showing slice based encoding in preparation for stitching;



FIG. 16 is a diagram detailing the compositing of a video element into a picture;



FIG. 17 is a diagram detailing compositing of a 16×16 sized macroblock element into a background comprised of 24×24 sized macroblocks;



FIG. 18 is a diagram depicting elements of a frame;



FIG. 19 is a flowchart depicting compositing multiple encoded elements;



FIG. 20 is a diagram showing that the composited element does not need to be rectangular nor contiguous;



FIG. 21 shows a diagram of elements on a screen wherein a single element is non-contiguous;



FIG. 22 shows a groomer for grooming linear broadcast content for multicasting to a plurality of processing offices and/or session processors;



FIG. 23 shows an example of a customized mosaic when displayed on a display device;



FIG. 24 is a diagram of an IP based network for providing interactive MPEG content;



FIG. 25 is a diagram of a cable based network for providing interactive MPEG content;



FIG. 26 is a flow-chart of the resource allocation process for a load balancer for use with a cable based network; and



FIG. 27 is a system diagram used to show communication between cable network elements for load balancing.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

As used in the following detailed description and in the appended claims the term “region” shall mean a logical grouping of MPEG (Moving Picture Experts Group) slices that are either contiguous or non-contiguous. When the term MPEG is used it shall refer to all variants of the MPEG standard including MPEG-2 and MPEG-4. The present invention as described in the embodiments below provides an environment for interactive MPEG content and communications between a processing office and a client device having an associated display, such as a television. Although the present invention specifically references the MPEG specification and encoding, principles of the invention may be employed with other encoding techniques that are based upon block-based transforms. As used in the following specification and appended claims, the terms encode, encoded, and encoding shall refer to the process of compressing a digital data signal and formatting the compressed digital data signal to a protocol or standard. Encoded video data can be in any state other than a spatial representation. For example, encoded video data may be transform coded, quantized, and entropy encoded, or any combination thereof. Therefore, data that has been transform coded will be considered to be encoded.


Although the present application refers to the display device as a television, the display device may be a cell phone, a Personal Digital Assistant (PDA) or other device that includes a display. A client device including a decoding device, such as a set-top box that can decode MPEG content, is associated with the display device of the user. In certain embodiments, the decoder may be part of the display device. The interactive MPEG content is created in an authoring environment that allows an application designer to design the interactive MPEG content, creating an application having one or more scenes from various elements, including video content from content providers and linear broadcasters. An application file is formed in the Active Video Markup Language (AVML). The AVML file produced by the authoring environment is an XML-based file defining the video graphical elements (i.e. MPEG slices) within a single frame/page, the sizes of the video graphical elements, the layout of the video graphical elements within the page/frame for each scene, links to the video graphical elements, and any scripts for the scene. In certain embodiments, an AVML file may be authored directly in a text editor as opposed to being generated by the authoring environment. The video graphical elements may be static graphics, dynamic graphics, or video content. It should be recognized that each element within a scene is really a sequence of images; a static graphic is an image that is repeatedly displayed and does not change over time. Each of the elements may be an MPEG object that can include both MPEG data for graphics and operations associated with the graphics. The interactive MPEG content can include multiple interactive MPEG objects within a scene with which a user can interact. For example, the scene may include a button MPEG object that provides encoded MPEG data forming the video graphic for the object and also includes a procedure for keeping track of the button state. The MPEG objects may work in coordination with the scripts. For example, an MPEG button object may keep track of its state (on/off), but a script within the scene will determine what occurs when that button is pressed. The script may associate the button state with a video program so that the button will indicate whether the video content is playing or stopped. MPEG objects always have an associated action as part of the object. In certain embodiments, the MPEG objects, such as a button MPEG object, may perform actions beyond keeping track of the status of the button. In such embodiments, the MPEG object may also include a call to an external program, wherein the MPEG object will access the program when the button graphic is engaged. Thus, for a play/pause MPEG object button, the MPEG object may include code that keeps track of the state of the button, provides a graphical overlay based upon a state change, and/or causes a video player object to play or pause the video content depending on the state of the button.
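By way of illustration only, the object model described above can be sketched in a few lines of code. The following Python fragment is a minimal, hypothetical rendering of a two-state button MPEG object that tracks its own state while deferring behavior to a scene script; none of the names (MPEGButtonObject, on_change) come from the patent or the AVML specification.

```python
class MPEGButtonObject:
    """Sketch of a two-state MPEG button object. Each state maps to
    pre-encoded MPEG slices (the graphic shown for that state); the
    object tracks its state and exposes a hook that a scene script
    can attach behavior to."""

    def __init__(self, slices_off, slices_on, on_change=None):
        self.slices = {"off": slices_off, "on": slices_on}  # encoded graphic per state
        self.state = "off"
        self.on_change = on_change  # script-supplied callback (hypothetical hook)

    def press(self):
        # Toggle the state and hand back the slices the stitcher
        # should place into the next composite frame.
        self.state = "on" if self.state == "off" else "off"
        if self.on_change:
            self.on_change(self.state)  # e.g. tell a player object to play/pause
        return self.slices[self.state]


# A scene script might wire the button to a player object:
# button = MPEGButtonObject(off_slices, on_slices,
#     on_change=lambda s: player.play() if s == "on" else player.pause())
```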


Once an application is created within the authoring environment, and an interactive session is requested by a requesting client device, the processing office assigns a processor for the interactive session.


The assigned processor operational at the processing office runs a virtual machine and accesses and runs the requested application. The processor prepares the graphical part of the scene for transmission in the MPEG format. Upon receipt of the MPEG transmission by the client device and display on the user's display, a user can interact with the displayed content by using an input device in communication with the client device. The client device sends input requests from the user through a communication network to the application running on the assigned processor at the processing office or other remote location. In response, the assigned processor updates the graphical layout based upon the request and the state of the MPEG objects, hereinafter referred to collectively as the application state. New elements may be added to the scene, elements may be replaced within the scene, or a completely new scene may be created. The assigned processor collects the elements and the objects for the scene, and either the assigned processor or another processor processes the data and operations according to the object(s) and produces the revised graphical representation in an MPEG format that is transmitted to the transceiver for display on the user's television. Although the above passage indicates that the assigned processor is located at the processing office, the assigned processor may be located at a remote location and need only be in communication with the processing office through a network connection. Similarly, although the assigned processor is described as handling all transactions with the client device, other processors may also be involved with requests and assembly of the content (MPEG objects) of the graphical layout for the application.



FIG. 1 is a block diagram showing a communications environment 100 for implementing one version of the present invention. The communications environment 100 allows an applications programmer to create an application for two-way interactivity with an end user. The end user views the application on a client device 110, such as a television, and can interact with the content by sending commands upstream through an upstream network 120, wherein upstream and downstream may be part of the same network or a separate network providing the return path link to the processing office. The application programmer creates an application that includes one or more scenes. Each scene is the equivalent of an HTML web page except that each element within the scene is a video sequence. The application programmer designs the graphical representation of the scene and incorporates links to elements, such as audio and video files, and objects, such as buttons and controls, for the scene. The application programmer uses a graphical authoring environment 130 to graphically select the objects and elements. The authoring environment 130 may include a graphical interface that allows an application programmer to associate methods with elements, creating video objects. The graphics may be MPEG encoded video, groomed MPEG video, still images, or video in another format. The application programmer can incorporate content from a number of sources, including content providers 160 (news sources, movie studios, RSS feeds, etc.) and linear broadcast sources (broadcast media and cable, on demand video sources and web-based video sources) 170, into an application. The application programmer creates the application as a file in AVML (Active Video Markup Language) and sends the application file to a proxy/cache 140 within a video content distribution network 150. The AVML file format is an XML format. For example, see FIG. 1B, which shows a sample AVML file.


The content provider 160 may encode the video content as MPEG video/audio, or the content may be in another graphical format (e.g. JPEG, BITMAP, H263, H264, VC-1 etc.). The content may be subsequently groomed and/or scaled in a Groomer/Scaler 190 to place the content into a preferable encoded MPEG format that will allow for stitching. If the content is not placed into the preferable MPEG format, the processing office will groom the content when an application that requires the content is requested by a client device. Linear broadcast content 170 from broadcast media services, like content from the content providers, will be groomed. The linear broadcast content is preferably groomed and/or scaled in Groomer/Scaler 180, which encodes the content in the preferable MPEG format for stitching prior to passing the content to the processing office.


The video content from the content providers 160, along with the applications created by application programmers, is distributed through a video content distribution network 150 and stored at distribution points 140. These distribution points are represented as the proxy/cache within FIG. 1. Content providers place their content for use with the interactive processing office at a proxy/cache 140 location within the video content distribution network. Thus, content providers 160 can provide their content to the cache 140 of the video content distribution network 150, and one or more processing offices that implement the present architecture may access the content through the video content distribution network 150 when needed for an application. The video content distribution network 150 may be a local network, a regional network or a global network. Thus, when a virtual machine at a processing office requests an application, the application can be retrieved from one of the distribution points and the content as defined within the application's AVML file can be retrieved from the same or a different distribution point.


An end user of the system can request an interactive session by sending a command through the client device 110, such as a set-top box, to a processing office 105. In FIG. 1, only a single processing office is shown. However, in real-world applications, there may be a plurality of processing offices located in different regions, wherein each of the processing offices is in communication with a video content distribution network as shown in FIG. 1A. The processing office 105 assigns a processor for the end user for an interactive session. The processor maintains the session including all addressing and resource allocation. As used in the specification and the appended claims, the term “virtual machine” 106 shall refer to the assigned processor, as well as other processors at the processing office that perform functions, such as session management between the processing office and the client device as well as resource allocation (i.e. assignment of a processor for an interactive session).


The virtual machine 106 communicates its address to the client device 110 and an interactive session is established. The user can then request presentation of an interactive application (AVML) through the client device 110. The request is received by the virtual machine 106 and in response, the virtual machine 106 causes the AVML file to be retrieved from the proxy/cache 140 and installed into a memory cache 107 that is accessible by the virtual machine 106. It should be recognized that the virtual machine 106 may be in simultaneous communication with a plurality of client devices 110 and the client devices may be different device types. For example, a first device may be a cellular telephone, a second device may be a set-top box, and a third device may be a personal digital assistant, wherein each device accesses the same or a different application.


In response to a request for an application, the virtual machine 106 processes the application and requests elements and MPEG objects that are part of the scene to be moved from the proxy/cache into memory 107 associated with the virtual machine 106. An MPEG object includes both a visual component and an actionable component. The visual component may be encoded as one or more MPEG slices or provided in another graphical format. The actionable component may store the state of the object, perform computations, access an associated program, or display overlay graphics to identify the graphical component as active. An overlay graphic may be produced by a signal being transmitted to a client device wherein the client device creates a graphic in the overlay plane on the display device. It should be recognized that a scene is not a static graphic, but rather includes a plurality of video frames wherein the content of the frames can change over time.


Based upon the scene information, including the application state, the virtual machine 106 determines the size and location of the various elements and objects for a scene. Each graphical element may be formed from contiguous or non-contiguous MPEG slices. The virtual machine keeps track of the location of all of the slices for each graphical element. All of the slices that define a graphical element form a region, and the virtual machine 106 keeps track of each region. Based on the display position information within the AVML file, the slice positions for the elements and background within a video frame are set. If a graphical element is not already in a groomed format, the virtual machine passes that element to an element renderer. The renderer renders the graphical element as a bitmap and passes the bitmap to an MPEG element encoder 109. The MPEG element encoder encodes the bitmap as an MPEG video sequence. The MPEG encoder processes the bitmap so that it outputs a series of P-frames. An example of content that is not already pre-encoded and pre-groomed is personalized content. For example, if a user has stored music files at the processing office and the graphical element to be presented is a listing of the user's music files, this graphic would be created in real-time as a bitmap by the virtual machine. The virtual machine would pass the bitmap to the element renderer 108, which would render the bitmap and pass it to the MPEG element encoder 109 for grooming.


After the graphical elements are groomed by the MPEG element encoder, the MPEG element encoder 109 passes the graphical elements to memory 107 for later retrieval by the virtual machine 106 for other interactive sessions by other users. The MPEG encoder 109 also passes the MPEG encoded graphical elements to the stitcher 115. The rendering of an element and MPEG encoding of an element may be accomplished in the same or a separate processor from the virtual machine 106. The virtual machine 106 also determines if there are any scripts within the application that need to be interpreted. If there are scripts, the scripts are interpreted by the virtual machine 106.


Each scene in an application can include a plurality of elements including static graphics, object graphics that change based upon user interaction, and video content. For example, a scene may include a background (static graphic), along with a media player for playback of audio video and multimedia content (object graphic) having a plurality of buttons, and a video content window (video content) for displaying the streaming video content. Each button of the media player may itself be a separate object graphic that includes its own associated methods.


The virtual machine 106 acquires each of the graphical elements (background, media player graphic, and video frame) for a frame and determines the location of each element. Once all of the objects and elements (background, video content) are acquired, the elements and graphical objects are passed to the stitcher/compositor 115 along with positioning information for the elements and MPEG objects. The stitcher 115 stitches together each of the elements (video content, buttons, graphics, background) according to the mapping provided by the virtual machine 106. Each of the elements is placed on a macroblock boundary, and when stitched together the elements form an MPEG video frame. On a periodic basis, all of the elements of a scene frame are encoded to form a reference P-frame in order to refresh the sequence and avoid dropped macroblocks. The MPEG video stream is then transmitted to the address of the client device through the downstream network. The process continues for each of the video frames. Although the specification refers to MPEG as the encoding process, other encoding processes may also be used with this system.


The virtual machine 106 or other processor or process at the processing office 105 maintains information about each of the elements and the location of the elements on the screen. The virtual machine 106 also has access to the methods for the objects associated with each of the elements. For example, a media player may have a media player object that includes a plurality of routines. The routines can include play, stop, fast forward, rewind, and pause. Each of the routines includes code, and upon a user sending a request to the processing office 105 for activation of one of the routines, the object is accessed and the routine is run. The routine may be a JAVA-based applet, a script to be interpreted, or a separate computer program capable of being run within the operating system associated with the virtual machine.


The processing office 105 may also create a linked data structure for determining the routine to execute or interpret based upon a signal received by the processor from the client device associated with the television. The linked data structure may be formed by an included mapping module. The data structure associates each resource and associated object relative to every other resource and object. For example, if a user has already engaged the play control, a media player object is activated and the video content is displayed. As the video content is playing in a media player window, the user can depress a directional key on the user's remote control. In this example, the depression of the directional key is indicative of pressing a stop button. The transceiver produces a directional signal and the assigned processor receives the directional signal. The virtual machine 106 or other processor at the processing office 105 accesses the linked data structure and locates the element in the direction of the directional key press. The data structure indicates that the element is a stop button that is part of a media player object, and the processor implements the routine for stopping the video content. The routine will cause the requested content to stop. The last video content frame will be frozen and a depressed stop button graphic will be interwoven by the stitcher module into the frame. The routine may also include a focus graphic to provide focus around the stop button. For example, the virtual machine can cause the stitcher to enclose the graphic having focus with a border that is one macroblock wide. Thus, when the video frame is decoded and displayed, the user will be able to identify the graphic/object that the user can interact with. The frame will then be passed to a multiplexor and sent through the downstream network to the client device. The MPEG encoded video frame is decoded by the client device and displayed either on the client device itself (cell phone, PDA) or on a separate display device (monitor, television). This process occurs with a minimal delay. Thus, each scene from an application results in a plurality of video frames, each representing a snapshot of the media player application state.
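The role of the linked data structure can be pictured with a short sketch. The fragment below is hypothetical (ElementNode, handle_key, and the routine names are invented, not the patent's API); it shows only the two behaviors described above: directional key presses walk the element graph, and a selection runs the focused element's routine.

```python
class ElementNode:
    """One on-screen element in the linked data structure."""
    def __init__(self, name, routines=None):
        self.name = name
        self.routines = routines or {}  # e.g. {"activate": stop_video}
        self.neighbors = {}             # "up"/"down"/"left"/"right" -> ElementNode

def handle_key(focused, key):
    """Move focus on a directional key; run the element's routine on select."""
    if key in ("up", "down", "left", "right"):
        return focused.neighbors.get(key, focused)  # element in that direction
    if key == "select" and "activate" in focused.routines:
        focused.routines["activate"]()  # e.g. freeze the frame and stitch in a
                                        # depressed stop-button graphic
    return focused
```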


The virtual machine 106 will repeatedly receive commands from the client device and, in response to the commands, will either directly or indirectly access the objects and execute or interpret the routines of the objects in accordance with user interaction and the application interaction model. In such a system, the video content material displayed on the television of the user is merely decoded MPEG content, and all of the processing for the interactivity occurs at the processing office and is orchestrated by the assigned virtual machine. Thus, the client device only needs a decoder and need not cache or process any of the content.


It should be recognized that through user requests from a client device, the processing office could replace a video element with another video element. For example, a user may select from a list of movies to display, and therefore a first video content element would be replaced by a second video content element if the user selects to switch between two movies. The virtual machine, which maintains a listing of the location of each element and region forming an element, can easily replace elements within a scene, creating a new MPEG video frame wherein the frame is stitched together including the new element in the stitcher 115.



FIG. 1A shows the interoperation between the digital content distribution network 100A, the content providers 130A and the processing offices 120A. In this example, the content providers 130A distribute content into the video content distribution network 100A. Either the content providers 130A or processors associated with the video content distribution network convert the content to an MPEG format that is compatible with the processing office's 120A creation of interactive MPEG content. A content management server 140A of the digital content distribution network 100A distributes the MPEG-encoded content among proxy/caches 150A-154A located in different regions if the content is of a global/national scope. If the content is of a regional/local scope, the content will reside in a regional/local proxy/cache. The content may be mirrored throughout the country or world at different locations in order to increase access times. When an end user, through their client device 160A, requests an application from a regional processing office, the regional processing office will access the requested application. The requested application may be located within the video content distribution network, or the application may reside locally to the regional processing office or within the network of interconnected processing offices. Once the application is retrieved, the virtual machine assigned at the regional processing office will determine the video content that needs to be retrieved. The content management server 140A assists the virtual machine in locating the content within the video content distribution network. The content management server 140A can determine if the content is located on a regional or local proxy/cache and also locate the nearest proxy/cache. For example, the application may include advertising, and the content management server will direct the virtual machine to retrieve the advertising from a local proxy/cache. As shown in FIG. 1A, both the Midwestern and Southeastern regional processing offices 120A also have local proxy/caches 153A, 154A. These proxy/caches may contain local news and local advertising. Thus, the scenes presented to an end user in the Southeast may appear different to an end user in the Midwest. Each end user may be presented with different local news stories or different advertising. Once the content and the application are retrieved, the virtual machine processes the content and creates an MPEG video stream. The MPEG video stream is then directed to the requesting client device. The end user may then interact with the content, requesting an updated scene with new content, and the virtual machine at the processing office will update the scene by requesting the new video content from the proxy/cache of the video content distribution network.


Authoring Environment


The authoring environment includes a graphical editor as shown in FIG. 1C for developing interactive applications. An application includes one or more scenes. As shown in FIG. 1C, the application window shows that the application is composed of three scenes (scene 1, scene 2 and scene 3). The graphical editor allows a developer to select elements to be placed into the scene, forming a display that will eventually be shown on a display device associated with the user. In some embodiments, the elements are dragged-and-dropped into the application window. For example, a developer may want to include a media player object and media player button objects and will select these elements from a toolbar and drag and drop the elements into the window. Once a graphical element is in the window, the developer can select the element and a property window for the element is provided. The property window includes at least the location of the graphical element (address) and the size of the graphical element. If the graphical element is associated with an object, the property window will include a tab that allows the developer to switch to a bitmap event screen and alter the associated object parameters. For example, a user may change the functionality associated with a button or may define a program associated with the button.


As shown in FIG. 1D, the stitcher of the system creates a series of MPEG frames for the scene based upon the AVML file that is the output of the authoring environment. Each element/graphical object within a scene is composed of different slices defining a region. A region defining an element/object may be contiguous or non-contiguous. The system snaps the slices forming the graphics on a macroblock boundary. Each element need not have contiguous slices. For example, the background has a number of non-contiguous slices, each composed of a plurality of macroblocks. The background, if it is static, can be defined by intracoded macroblocks. Similarly, graphics for each of the buttons can be intracoded; however, the buttons are associated with a state and have multiple possible graphics. For example, the button may have a first state “off” and a second state “on”, wherein the first graphic shows an image of a button in a non-depressed state and the second graphic shows the button in a depressed state. FIG. 1C also shows a third graphical element, which is the window for the movie. The movie slices are encoded with a mix of intracoded and intercoded macroblocks and dynamically change based upon the content. Similarly, if the background is dynamic, the background can be encoded with both intracoded and intercoded macroblocks, subject to the requirements below regarding grooming.


When a user selects an application through a client device, the processing office will stitch together the elements in accordance with the layout from the graphical editor of the authoring environment. The output of the authoring environment includes an Active Video Markup Language (AVML) file. The AVML file provides state information about multi-state elements such as a button, the address of the associated graphic, and the size of the graphic. The AVML file indicates the locations within the MPEG frame for each element, indicates the objects that are associated with each element, and includes the scripts that define changes to the MPEG frame based upon a user's actions. For example, a user may send an instruction signal to the processing office and the processing office will use the AVML file to construct a set of new MPEG frames based upon the received instruction signal. A user may want to switch between various video elements and may send an instruction signal to the processing office. The processing office will remove a video element within the layout for a frame and will select the second video element, causing the second video element to be stitched into the MPEG frame at the location of the first video element. This process is described below.


AVML File


The application programming environment outputs an AVML file. The AVML file has an XML-based syntax. The AVML file syntax includes a root object <AVML>. Other top level tags include <initialscene> that specifies the first scene to be loaded when an application starts. The <script> tag identifies a script and a <scene> tag identifies a scene. There may also be lower level tags to each of the top level tags, so that there is a hierarchy for applying the data within the tag. For example, a top level stream tag may include <aspect ratio> for the video stream, <video format>, <bit rate>, <audio format> and <audio bit rate>. Similarly, a scene tag may include each of the elements within the scene. For example, <background> for the background, <button> for a button object, and <static image> for a still graphic. Other tags include <size> and <pos> for the size and position of an element and may be lower level tags for each element within a scene. An example of an AVML file is provided in FIG. 1B. Further discussion of the AVML file syntax is provided in Appendix A attached hereto.
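Using only the tags enumerated above, a scene definition might look like the following fragment. This is an illustrative sketch, not a normative sample: the attribute spellings, values, and nesting are invented here, and the authoritative syntax is given in FIG. 1B and Appendix A.

```xml
<AVML>
  <initialscene>Scene1</initialscene>
  <stream>
    <aspectratio>4:3</aspectratio>
    <videoformat>MPEG-2</videoformat>
    <bitrate>2000000</bitrate>
  </stream>
  <scene name="Scene1">
    <background src="background.mpg">
      <size>720 480</size>
      <pos>0 0</pos>
    </background>
    <button name="PlayButton" src="play_button.mpg">
      <size>64 32</size>
      <pos>96 416</pos>
    </button>
  </scene>
  <script src="scene1_controls.script"/>
</AVML>
```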


Groomer



FIG. 2 is a diagram of a representative display that could be provided to a television of a requesting client device. The display 200 shows three separate video content elements appearing on the screen. Element #1 211 is the background into which element #2 215 and element #3 217 are inserted.



FIG. 3 shows a first embodiment of a system that can generate the display of FIG. 2. In this diagram, the three video content elements come in as encoded video: element #1 303, element #2 305, and element #3 307. The groomers 310 each receive an encoded video content element, and the groomers process each element before the stitcher 340 combines the groomed video content elements into a single composited video 380. It should be understood by one of ordinary skill in the art that groomers 310 may be a single processor or multiple processors that operate in parallel. The groomers may be located within the processing office, at content providers' facilities, or at linear broadcast providers' facilities. The groomers may not be directly connected to the stitcher, as shown in FIG. 1 wherein the groomers 190 and 180 are not directly coupled to stitcher 115.


The process of stitching is described below and can be performed in a much more efficient manner if the elements have been groomed first.


Grooming removes some of the interdependencies present in compressed video. The groomer will convert I and B frames to P frames and will fix any stray motion vectors that reference a section of another frame of video that has been cropped or removed. Thus, a groomed video stream can be used in combination with other groomed video streams and encoded still images to form a composite MPEG video stream. Each groomed video stream includes a plurality of frames and the frames can be easily inserted into another groomed frame, wherein the composite frames are grouped together to form an MPEG video stream. It should be noted that the groomed frames may be formed from one or more MPEG slices and may be smaller in size than an MPEG video frame in the MPEG video stream.



FIG. 4 is an example of a composite video frame that contains a plurality of elements 410, 420. This composite video frame is provided for illustrative purposes. The groomers as shown in FIG. 1 only receive a single element and groom that element (video sequence), so that the video sequence can be stitched together in the stitcher; the groomers do not receive a plurality of elements simultaneously. In this example, the background video frame 410 includes one row per slice (this is an example only; the row could be composed of any number of slices). As shown in FIG. 1, the layout of the video frame, including the location of all of the elements within the scene, is defined by the application programmer in the AVML file. For example, the application programmer may design the background element for a scene. Thus, the application programmer may have the background encoded as MPEG video and may groom the background prior to having the background placed into the proxy/cache 140. Therefore, when an application is requested, each of the elements within the scene of the application may be groomed video and the groomed video can easily be stitched together. It should be noted that although two groomers are shown within FIG. 1 for the content provider and for the linear broadcasters, groomers may be present in other parts of the system.


As shown, video element 420 is inserted within the background video frame 410 (also for example only; this element could also consist of multiple slices per row). If a macroblock within the original video frame 410 references another macroblock in determining its value and the referenced macroblock is removed from the frame because the video image 420 is inserted in its place, the macroblock's value needs to be recalculated. Similarly, if a macroblock references another macroblock in a subsequent frame and that macroblock is removed and other source material is inserted in its place, the macroblock values need to be recalculated. This is addressed by grooming the video 430. The video frame is processed so that the rows contain multiple slices, some of which are specifically sized and located to match the substitute video content. After this process is complete, it is a simple task to replace some of the current slices with the overlay video, resulting in a groomed video with overlay 440. The groomed video stream has been specifically defined to address that particular overlay; a different overlay would dictate different grooming parameters. Thus, this type of grooming addresses the process of segmenting a video frame into slices in preparation for stitching. It should be noted that there is never a need to add slices to the overlay element. Slices are only added to the receiving element, that is, the element into which the overlay will be placed. The groomed video stream can contain information about the stream's groomed characteristics. Characteristics that can be provided include: (1) the locations of the upper left and lower right corners of the groomed window; or (2) the location of the upper left corner only, together with the size of the window, with slice sizes accurate to the pixel level.


There are also two ways to provide the characteristic information in the video stream. The first is to provide that information in the slice header. The second is to provide the information in the extended data slice structure. Either of these options can be used to successfully pass the necessary information to future processing stages, such as the virtual machine and stitcher.
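Whichever carriage point is chosen, the record being carried is small. A sketch of its contents, assuming the two options described above (the class and field names are illustrative, not drawn from the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GroomedWindowInfo:
    # Option 1: both corners of the groomed window.
    upper_left_x: int
    upper_left_y: int
    lower_right_x: Optional[int] = None
    lower_right_y: Optional[int] = None
    # Option 2: upper-left corner plus the window size,
    # accurate to the pixel level.
    width: Optional[int] = None
    height: Optional[int] = None
```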



FIG. 5 shows the video sequence for a video graphical element before and after grooming. The original incoming encoded stream 500 has a sequence of MPEG I-frames 510, B-frames 530, 550, and P-frames 570, as are known to those of ordinary skill in the art. In this original stream, the I-frame is used as a reference 512 for all the other frames, both B and P. This is shown via the arrows from the I-frame to all the other frames. Also, the P-frame is used as a reference frame 572 for both B-frames. The groomer processes the stream and replaces all the frames with P-frames. First, the original I-frame 510 is converted to an intracoded P-frame 520. Next, the B-frames 530, 550 are converted 535 to P-frames 540 and 560 and modified to reference only the frame immediately prior. Also, the P-frames 570 are modified to move their reference 574 from the original I-frame 510 to the newly created P-frame 560 immediately preceding them. The resulting P-frame 580 is shown in the output stream of groomed encoded frames 590.
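The re-referencing shown in FIG. 5 amounts to simple bookkeeping over the frame sequence, with the heavy lifting (recomputing motion vectors and residuals) happening per macroblock. A schematic Python sketch, with invented names and the per-macroblock work elided:

```python
def groom_to_p_frames(frames):
    """Convert an I/B/P sequence so every output frame is a P-frame
    referencing only the frame immediately before it.

    `frames` is a list of dicts such as {"type": "I"} -- a stand-in
    for real parsed frames; vector/residual recomputation is elided.
    """
    groomed = []
    for i, frame in enumerate(frames):
        out = dict(frame)
        if frame["type"] == "I":
            out["type"] = "P"   # becomes an intracoded P-frame
            out["ref"] = None   # still references nothing
        else:                   # B or P
            out["type"] = "P"
            out["ref"] = i - 1  # re-point at the immediately preceding frame
            # ...recalculate motion vectors and residuals here...
        groomed.append(out)
    return groomed

# groom_to_p_frames([{"type": "I"}, {"type": "B"}, {"type": "P"}])
# -> all "P" frames, with refs None, 0, 1
```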



FIG. 6 is a diagram of a standard MPEG-2 bitstream syntax. MPEG-2 is used as an example, and the invention should not be viewed as limited to this example. The hierarchical structure of the bitstream starts at the sequence level. This contains the sequence header 600 followed by group of pictures (GOP) data 605. The GOP data contains the GOP header 620 followed by picture data 625. The picture data 625 contains the picture header 640 followed by the slice data 645. The slice data 645 consists of some slice overhead 660 followed by macroblock data 665. Finally, the macroblock data 665 consists of some macroblock overhead 680 followed by block data 685 (the block data is broken down further, but that is not required for purposes of this reference). Sequence headers act as normal in the groomer. However, there are no GOP headers at the output of the groomer since all frames are P-frames. The remainder of the headers may be modified to meet the output parameters required.



FIG. 7 provides a flow for grooming the video sequence. First, the frame type is determined 700: I-frame 703, B-frame 705, or P-frame 707. I-frames 703, like B-frames 705, need to be converted to P-frames. In addition, I-frames need to match the picture information that the stitcher requires. For example, this information may indicate the encoding parameters set in the picture header. Therefore, the first step is to modify the picture header information 730 so that the information in the picture header is consistent for all groomed video sequences. The stitcher settings are system level settings that may be included in the application. These are the parameters that will be used for all levels of the bit stream. The items that require modification are provided in the table below:









TABLE 1
Picture Header Information

#   Name                         Value
A   Picture Coding Type          P-Frame
B   Intra DC Precision           Match stitcher setting
C   Picture Structure            Frame
D   Frame Prediction Frame DCT   Match stitcher setting
E   Quant Scale Type             Match stitcher setting
F   Intra VLC Format             Match stitcher setting
G   Alternate Scan               Normal scan
H   Progressive Frame            Progressive scan

Next, the slice overhead information 740 must be modified. The parameters to modify are given in the table below.









TABLE 2
Slice Overhead Information

#   Name                   Value
A   Quantizer Scale Code   Will change if there is a “scale type” change in the picture header.

Next, the macroblock overhead 750 information may require modification. The values to be modified are given in the table below.









TABLE 3
Macroblock Information

#   Name                         Value
A   Macroblock Type              Change the variable length code from that for an I-frame to that for a P-frame
B   DCT Type                     Set to frame if not already
C   Concealment Motion Vectors   Removed
Finally, the block information 760 may require modification. The items to modify are given in the table below.









TABLE 4
Block Information

#   Name                       Value
A   DCT Coefficient Values     Require updating if there were any quantizer changes at the picture or slice level.
B   DCT Coefficient Ordering   Need to be reordered if “alternate scan” was changed from what it was before.
Once the block changes are complete, the process can start over with the next frame of video. If the frame type is a B-frame 705, the same steps required for an I-frame are also required for the B-frame. However, in addition, the motion vectors 770 need to be modified. There are two scenarios: B-frame immediately following an I-frame or P-frame, or a B-frame following another B-frame. Should the B-frame follow either an I or P frame, the motion vector, using the I or P frame as a reference, can remain the same and only the residual would need to change. This may be as simple as converting the forward looking motion vector to be the residual.


For the B-frames that follow another B-frame, the motion vector and its residual will both need to be modified. The second B-frame must now reference the newly converted B-to-P frame immediately preceding it. First, the B-frame and its reference are decoded, and the motion vector and the residual are recalculated. It must be noted that while the frame is decoded to update the motion vectors, there is no need to re-encode the DCT coefficients; these remain the same. Only the motion vector and residual are calculated and modified.


The last frame type is the P-frame. This frame type also follows the same path as an I-frame. FIG. 8 diagrams the motion vector modification for macroblocks adjacent to a region boundary. It should be recognized that motion vectors on a region boundary are most relevant to background elements into which other video elements are being inserted. Therefore, grooming of the background elements may be accomplished by the application creator. Similarly, if a video element is cropped and is being inserted into a “hole” in the background element, the cropped element may include motion vectors that point to locations outside of the “hole”. Grooming motion vectors for a cropped image may be done by the content creator if the content creator knows the size to which the video element needs to be cropped, or the grooming may be accomplished by the virtual machine in combination with the element renderer and MPEG encoder if the video element to be inserted is larger than the size of the “hole” in the background.



FIG. 8 graphically shows the problems that occur with motion vectors that surround a region that is being removed from a background element. In the example of FIG. 8, the scene includes two regions: #1 800 and #2 820. There are two examples of improper motion vector references. In the first instance, region #2 820, which is inserted into region #1 800 (the background), uses region #1 800 as a reference for motion 840. Thus, the motion vectors in region #2 need to be corrected. The second instance of improper motion vector references occurs where region #1 800 uses region #2 820 as a reference for motion 860. The groomer removes these improper motion vector references by either re-encoding them using a frame within the same region or converting the macroblocks to intracoded blocks.
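A sketch of that boundary check follows; the helper and field names are hypothetical stand-ins for the real bitstream bookkeeping. Every macroblock whose motion vector lands in a different region than the macroblock itself is re-encoded as intracoded.

```python
def fix_region_boundaries(macroblocks, region_of, mv_target):
    """Remove motion vectors that cross a region boundary.

    region_of(position) -> region id for a macroblock position;
    mv_target(mb) -> the position a macroblock's motion vector
    references. Both helpers are hypothetical.
    """
    for mb in macroblocks:
        if mb.get("mv") is None:
            continue  # already intracoded, nothing to fix
        if region_of(mb["pos"]) != region_of(mv_target(mb)):
            mb["mv"] = None          # drop the improper reference
            mb["intracoded"] = True  # or re-encode against a frame
                                     # within the same region
    return macroblocks
```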


In addition to updating motion vectors and changing frame types, the groomer may also convert field-based encoded macroblocks to frame-based encoded macroblocks. FIG. 9 shows this conversion. For reference, a frame-based set of blocks 900 is compressed. The compressed block set 910 contains the same information in the same blocks, but now in compressed form. On the other hand, a field-based macroblock 940 is also compressed. When this is done, all the even rows (0, 2, 4, 6) are placed in the upper blocks (0 & 1) while the odd rows (1, 3, 5, 7) are placed in the lower blocks (2 & 3). When the compressed field-based macroblock 950 is converted to a frame-based macroblock 970, the coefficients need to be moved from one block to another 980. That is, the rows must be reconstructed in numerical order rather than in even-odd order. Rows 1 & 3, which in the field-based encoding were in blocks 2 & 3, are moved back up to blocks 0 or 1, respectively. Correspondingly, rows 4 & 6 are moved from blocks 0 & 1 and placed down in blocks 2 & 3.
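The row shuffle is easiest to see in the spatial domain. The sketch below reorders a 16x16 luma macroblock from field arrangement (even rows gathered in the top half, odd rows in the bottom half) back to natural frame order; the groomer performs the equivalent move on the DCT blocks themselves.

```python
import numpy as np

def field_to_frame_luma(mb):
    """Reorder a 16x16 field-arranged luma macroblock into frame order."""
    out = np.empty_like(mb)
    out[0::2] = mb[:8]   # top half held the even rows (0, 2, ..., 14)
    out[1::2] = mb[8:]   # bottom half held the odd rows (1, 3, ..., 15)
    return out

# Round trip: build a field-ordered macroblock, then restore frame order.
frame = np.arange(256).reshape(16, 16)
field = np.vstack([frame[0::2], frame[1::2]])
assert (field_to_frame_luma(field) == frame).all()
```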



FIG. 10 shows a second embodiment of the grooming platform. All the components are the same as in the first embodiment: groomers 1110A and stitcher 1130A. The inputs are also the same: input #1 1103A, input #2 1105A, and input #3 1107A, as well as the composited output 1280. The difference in this system is that the stitcher 1130A provides feedback, both synchronization and frame type information, to each of the groomers 1110A. With the synchronization and frame type information, the stitcher 1130A can define a GOP structure that the groomers 1110A follow. With this feedback and the GOP structure, the output of the groomer is no longer P-frames only but can also include I-frames and B-frames. The limitation of an embodiment without feedback is that no groomer would know what type of frame the stitcher was building. In this second embodiment, with the feedback from the stitcher 1130A, the groomers 1110A know what picture type the stitcher is building and can provide a matching frame type. Because more reference frames are available and fewer existing frames must be modified, this improves the picture quality at a given data rate; alternatively, because B-frames are allowed, the data rate can be reduced while keeping the quality level constant.


Stitcher



FIG. 11 shows an environment for implementing a stitcher module, such as the stitcher shown in FIG. 1. The stitcher 1200 receives video elements from different sources. Uncompressed content 1210 is encoded in an encoder 1215, such as the MPEG element encoder shown in FIG. 1, prior to its arrival at the stitcher 1200. Content that arrives already compressed or encoded 1220 does not need to be encoded. In both cases, however, the audio 1217, 1227 must be separated from the video 1219, 1229. The audio is fed into an audio selector 1230 to be included in the stream. The video is fed into a frame synchronization block 1240 before it is put into a buffer 1250. The frame constructor 1270 pulls data from the buffers 1250 based on input from the controller 1275. The video out of the frame constructor 1270 is fed into a multiplexer 1280 along with the audio, after the audio has been delayed 1260 to align with the video. The multiplexer 1280 combines the audio and video streams and outputs the composited, encoded output stream 1290 that can be played on any standard decoder. Multiplexing a data stream into a program or transport stream is well known to those familiar in the art. The encoded video sources can be real-time, from a stored location, or a combination of both. There is no requirement that all of the sources arrive in real-time.



FIG. 12 shows an example of three video content elements that are temporally out of sync. In order to synchronize the three elements, element #1 (1300) is used as an “anchor” or “reference” frame. That is, it is used as the master frame and all other frames are aligned to it (this is for example only; the system could have its own master frame reference separate from any of the incoming video sources). The output frame timing 1370, 1380 is set to match the frame timing of element #1 (1300). Elements #2 and #3 (1320 and 1340) do not align with element #1 (1300). Therefore, their frame starts are located and they are stored in a buffer. For example, element #2 (1320) will be delayed one frame so that an entire frame is available before it is composited along with the reference frame. Element #3 (1340) is much slower than the reference frame, so it is collected over two frames and presented over two frames. That is, each frame of element #3 (1340) is displayed for two consecutive output frames in order to match the frame rate of the reference frame. Conversely, if an element (not shown) were running at twice the rate of the reference frame, every other frame of that element would be dropped. More than likely all elements run at almost the same speed, so only infrequently would a frame need to be repeated or dropped in order to maintain synchronization.
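
The repeat-and-drop policy amounts to mapping each output frame, at the reference rate, to a source frame index. The helper below is a toy model under the assumption of constant frame rates; the names and rates are illustrative.

```python
def map_to_reference(src_fps: float, ref_fps: float, num_out_frames: int) -> list[int]:
    """For each output frame (at the reference rate), pick the source
    frame to show: a slow source has frames repeated, a fast source has
    frames dropped, matching the policy described for FIG. 12."""
    return [int(i * src_fps / ref_fps) for i in range(num_out_frames)]

# An element at half the reference rate is shown twice per output frame:
# map_to_reference(15, 30, 6) -> [0, 0, 1, 1, 2, 2]
# A source at twice the reference rate has every other frame dropped:
# map_to_reference(60, 30, 6) -> [0, 2, 4, 6, 8, 10]
```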



FIG. 13 shows an example composited video frame 1400. In this example, the frame is made up of 40 macroblocks per row 1410 with 30 rows per picture 1420. The size is used as an example and is not intended to restrict the scope of the invention. The frame includes a background 1430 that has elements 1440 composited in various locations. These elements 1440 can be video elements, static elements, etc. That is, the frame is constructed of a full background, which then has particular areas replaced with different elements. This particular example shows four elements composited on a background.



FIG. 14 shows a more detailed version of the screen illustrating the slices within the picture. The diagram depicts a picture consisting of 40 macroblocks per row and 30 rows per picture (non-restrictive, for illustration purposes only). However, it also shows the picture divided up into slices. The size of the slice can be a full row 1590 (shown as shaded) or a few macroblocks within a row 1580 (shown as a rectangle with diagonal lines inside element #4 (1528)). The background 1530 has been broken into multiple regions with the slice size matching the width of each region. This can be better seen by looking at element #1 (1522). Element #1 (1522) has been defined to be twelve macroblocks wide. The slice size for this region, for both the background 1530 and element #1 (1522), is then defined to be that exact number of macroblocks. Element #1 (1522) is thus comprised of six slices, each slice containing twelve macroblocks. In a similar fashion, element #2 (1524) consists of four slices of eight macroblocks per slice; element #3 (1526) is eighteen slices of 23 macroblocks per slice; and element #4 (1528) is seventeen slices of five macroblocks per slice. It is evident that the background 1530 and the elements can be defined to be composed of any number of slices which, in turn, can be any number of macroblocks. This gives full flexibility to arrange the picture and the elements in any fashion desired. The process of determining the slice content for each element, along with the positioning of the elements within the video frame, is determined by the virtual machine of FIG. 1 using the AVML file.
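
The slice boundaries around each element can be derived mechanically from the element rectangles. The helper below, a sketch using hypothetical names, splits one macroblock row of the background into slices whose edges align exactly with the element “holes” in that row, as FIG. 14 requires.

```python
def background_slices(row_width_mb: int, holes: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Split one macroblock row into background slices around the given
    holes, where each hole is (start_mb, width_mb). Returns background
    slices as (start_mb, width_mb), edge-aligned with the element slices."""
    slices, cursor = [], 0
    for start, width in sorted(holes):
        if start > cursor:
            slices.append((cursor, start - cursor))
        cursor = start + width
    if cursor < row_width_mb:
        slices.append((cursor, row_width_mb - cursor))
    return slices

# A 40-macroblock row with a 12-wide element starting at column 10:
# background_slices(40, [(10, 12)]) -> [(0, 10), (22, 18)]
```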



FIG. 15 shows the preparation of the background 1600 by the virtual machine in order for stitching to occur in the stitcher. The virtual machine gathers an uncompressed background based upon the AVML file and forwards the background to the element encoder, along with the locations within the background where elements will be placed in the frame. As shown, the background 1620 has been broken into a particular slice configuration by the virtual machine, with a hole or holes that exactly align with where the elements are to be placed, prior to passing the background to the element encoder. The encoder compresses the background, leaving a “hole” or “holes” where the elements will be placed, and passes the compressed background to memory. The virtual machine then accesses the memory, retrieves each element for a scene, and passes the encoded elements to the stitcher along with a list of the locations of each slice of each element. The stitcher takes each of the slices and places the slices into the proper positions.


This particular type of encoding is called “slice based encoding”. A slice based encoder/virtual machine is one that is aware of the desired slice structure of the output frame and performs its encoding appropriately. That is, the encoder knows the size of the slices and where they belong. It knows where to leave holes if that is required. By being aware of the desired output slice configuration, the virtual machine provides an output that is easily stitched.



FIG. 16 shows the compositing process after the background element has been compressed. The background element 1700 has been compressed into seven slices with a hole where the element 1740 is to be placed. The composite image 1780 shows the result of the combination of the background element 1700 and element 1740. The composite video frame 1780 shows the slices that have been inserted in grey. Although this diagram depicts a single element composited onto a background, it is possible to composite any number of elements that will fit onto a user's display. Furthermore, the number of slices per row for the background or the element can be greater than what is shown. The slice start and slice end points of the background and elements must align.



FIG. 17 is a diagram showing different macroblock sizes between the background element 1800 (24 pixels by 24 pixels) and the added video content element 1840 (16 pixels by 16 pixels). The composited video frame 1880 shows two cases. Horizontally, the pixels align: there are 24 pixels/block*4 blocks=96 pixels of width in the background 1800 and 16 pixels/block*6 blocks=96 pixels of width for the video content element 1840. Vertically, however, there is a difference. The background 1800 is 24 pixels/block*3 blocks=72 pixels tall. The element 1840 is 16 pixels/block*4 blocks=64 pixels tall. This leaves a vertical gap of 8 pixels 1860. The stitcher is aware of such differences and can extrapolate either the element or the background to fill the gap. It is also possible to leave a gap so that there is a dark or light border region. Any combination of macroblock sizes is acceptable even though this example uses macroblock sizes of 24×24 and 16×16. DCT based compression formats may rely on macroblocks of sizes other than 16×16 without deviating from the intended scope of the invention. Similarly, a DCT based compression format may also rely on variable sized macroblocks for temporal prediction without deviating from the intended scope of the invention. Finally, frequency domain representations of content may also be achieved using other Fourier-related transforms without deviating from the intended scope of the invention.
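
The gap arithmetic reduces to one line; the function below simply restates the FIG. 17 numbers.

```python
def vertical_gap_px(bg_block_px: int, bg_blocks: int, el_block_px: int, el_blocks: int) -> int:
    """Pixels left over when a column of el_blocks element macroblocks
    (each el_block_px tall) sits in a hole of bg_blocks background
    macroblocks (each bg_block_px tall)."""
    return bg_block_px * bg_blocks - el_block_px * el_blocks

# FIG. 17: a 24*3 = 72 pixel hole minus a 16*4 = 64 pixel element leaves
# the 8-pixel gap the stitcher must extrapolate or leave as a border.
assert vertical_gap_px(24, 3, 16, 4) == 8
```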


It is also possible for there to be an overlap in the composited video frame. Referring back to FIG. 17, the element 1840 consisted of four slices. Should this element actually be five slices, it would overlap with the background element 1800 in the composited video frame 1880. There are multiple ways to resolve this conflict with the easiest being to composite only four slices of the element and drop the fifth. It is also possible to composite the fifth slice into the background row, break the conflicting background row into slices and remove the background slice that conflicts with the fifth element slice (then possibly add a sixth element slice to fill any gap).


The possibility of different slice sizes requires the compositing function to perform a check of the incoming background and video elements to confirm that they are proper. That is, it must make sure each one is complete (e.g., a full frame), that there are no sizing conflicts, and so on.



FIG. 18 is a diagram depicting elements of a frame. A simple composited picture 1900 is composed of an element 1910 and a background element 1920. To control the building of the video frame for the requested scene, the stitcher builds a data structure 1940 based upon the position information for each element as provided by the virtual machine. The data structure 1940 contains a linked list describing how many macroblocks to take and where the macroblocks are located. For example, data row 1 (1943) shows that the stitcher should take 40 macroblocks from buffer B, which is the buffer for the background. For data row 2 (1945), the stitcher should take 12 macroblocks from buffer B, then 8 macroblocks from buffer E (the buffer for element 1910), and then another 20 macroblocks from buffer B. This continues down to the last row 1947, wherein the stitcher uses the data structure to take 40 macroblocks from buffer B. The buffer structure 1970 has separate areas for each background or element. The B buffer 1973 contains all the information for stitching in B macroblocks. The E buffer 1975 has the information for stitching in E macroblocks.
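
A hypothetical rendition of that data structure in code: each row is a list of (buffer, count) runs that the stitcher consumes in order. The dictionary-of-lists buffer model is an assumption for illustration.

```python
# Row plan for the FIG. 18 example: buffer "B" is the background,
# buffer "E" is the element.
ROW_PLAN = [
    [("B", 40)],                        # row 1: background only
    [("B", 12), ("E", 8), ("B", 20)],   # row 2: element in columns 13-20
    # ... intervening rows omitted ...
    [("B", 40)],                        # last row: background only
]

def build_row(plan: list[tuple[str, int]], buffers: dict[str, list]) -> list:
    """Assemble one output row by pulling macroblocks, in order, from
    the named buffers according to the row plan."""
    row = []
    for name, count in plan:
        for _ in range(count):
            row.append(buffers[name].pop(0))
    return row
```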



FIG. 19 is a flow chart depicting the process for building a picture from multiple encoded elements. The sequence 2000 begins by starting the video frame composition 2010. First the frames are synchronized 2015 and then each row 2020 is built up by grabbing the appropriate slice 2030. The slice is then inserted 2040 and the system checks to see if it is the end of the row 2050. If not, the process goes back to the "fetch next slice" block 2030 until the end of row 2050 is reached. Once the row is complete, the system checks to see if it is the end of frame 2080. If not, the process goes back to the "for each row" block 2020. Once the frame is complete, the system checks whether the end of the sequence 2090 for the scene has been reached. If not, it goes back to the "compose frame" step 2010 and repeats the frame building process. If the end of sequence 2090 has been reached, the scene is complete and the process either ends or can start the construction of another frame.
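
The flow chart maps onto three nested loops. The sketch below shows that structure over plain data, with each frame given as a list of rows and each row as a list of already-encoded slices; the numbered comments refer to the FIG. 19 blocks.

```python
def compose_sequence(frames: list[list[list[bytes]]]) -> list[list[list[bytes]]]:
    """Build each picture of a scene from pre-encoded slices, mirroring
    the loop structure of FIG. 19."""
    stitched = []
    for frame in frames:          # compose frame (2010), until end of sequence (2090)
        rows_out = []
        for row in frame:         # for each row (2020)
            built = []
            for sl in row:        # fetch next slice (2030)
                built.append(sl)  # insert slice (2040), until end of row (2050)
            rows_out.append(built)
        stitched.append(rows_out) # end of frame (2080)
    return stitched
```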


The performance of the stitcher can be improved (build frames faster with less processor power) by providing the stitcher advance information on the frame format. For example, the virtual machine may provide the stitcher with the start location and size of the areas in the frame to be inserted. Alternatively, the information could be the start location for each slice and the stitcher could then figure out the size (the difference between the two start locations). This information could be provided externally by the virtual machine or the virtual machine could incorporate the information into each element. For instance, part of the slice header could be used to carry this information. The stitcher can use this foreknowledge of the frame structure to begin compositing the elements together well before they are required.



FIG. 20 shows a further improvement on the system. As explained above in the groomer section, the graphical video elements can be groomed, thereby providing stitchable elements that are already compressed and do not need to be decoded in order to be stitched together. In FIG. 20, a frame has a number of encoded slices 2100. Each slice is a full row (this is used as an example only; the rows could consist of multiple slices prior to grooming). The virtual machine, in combination with the AVML file, determines that there should be an element 2140 of a particular size placed in a particular location within the composited video frame. The groomer processes the incoming background 2100 and converts the full-row encoded slices to smaller slices that match the areas around and in the desired element 2140 location. The resulting groomed video frame 2180 has a slice configuration that matches the desired element 2140. The stitcher then constructs the stream by selecting all the slices except #3 and #6 from the groomed frame 2180. In place of those slices, the stitcher grabs the element 2140 slices and uses them instead. In this manner, the background never leaves the compressed domain and the system is still able to composite the element 2140 into the frame.
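
In code, the substitution is a per-slice selection over opaque compressed payloads; nothing is transform decoded. The slice numbering and byte-string payloads below are illustrative assumptions.

```python
def substitute_slices(groomed_bg: dict[int, bytes], element: dict[int, bytes]) -> list[bytes]:
    """Build the FIG. 20 output frame: keep every groomed background
    slice except those the element replaces, splicing the element's
    slices in at those positions. Payloads stay compressed throughout."""
    return [element.get(n, groomed_bg[n]) for n in sorted(groomed_bg)]

# Background slices 1-8, with element slices replacing #3 and #6:
bg = {n: b"bg" for n in range(1, 9)}
el = {3: b"element", 6: b"element"}
frame = substitute_slices(bg, el)   # slices 3 and 6 now carry element data
```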



FIG. 21 shows the flexibility available to define the element to be composited. Elements can be of different shapes and sizes. The elements need not reside contiguously and in fact a single element can be formed from multiple images separated by the background. This figure shows a background element 2230 (areas colored grey) that has had a single element 2210 (areas colored white) composited on it. In this diagram, the composited element 2210 has areas that are shifted, are different sizes, and even where there are multiple parts of the element on a single row. The stitcher can perform this stitching just as if there were multiple elements used to create the display. The slices for the frame are labeled contiguously S1-S45. These include the slice locations where the element will be placed. The element also has its slice numbering from ES1-ES14. The element slices can be placed in the background where desired even though they are pulled from a single element file.


The source for the element slices can be any one of a number of options. It can come from a real-time encoded source. It can be a complex slice that is built from separate slices, one having a background and the other having text. It can be a pre-encoded element that is fetched from a cache. These examples are for illustrative purposes only and are not intended to limit the options for element sources.



FIG. 22 shows an embodiment using a groomer 2340 for grooming linear broadcast content. The content is received by the groomer 2340 in real-time. Each channel is groomed by the groomer 2340 so that the content can be easily stitched together. The groomer 2340 of FIG. 22 may include a plurality of groomer modules for grooming all of the linear broadcast channels. The groomed channels may then be multicast to one or more processing offices 2310, 2320, 2330 and to one or more virtual machines within each of the processing offices for use in applications. As shown, client devices request an application for receipt of a mosaic 2350 of linear broadcast sources and/or other groomed content selected by the client. A mosaic 2350 is a scene that includes a background frame 2360 that allows for viewing of a plurality of sources 2371-2376 simultaneously, as shown in FIG. 23. For example, if there are multiple sporting events that a user wishes to watch, the user can request each of the channels carrying the sporting events for simultaneous viewing within the mosaic. The user can even select an MPEG object (edit) 2380 and then edit the desired content sources to be displayed. For example, the groomed content can be selected from linear/live broadcasts and also from other video content (e.g., movies, pre-recorded content, etc.). A mosaic may even include both user selected material and material provided by the processing office/session processor, such as advertisements. As shown in FIG. 22, client devices 2301-2305 each request a mosaic that includes channel 1. Thus, the multicast groomed content for channel 1 is used by different virtual machines and different processing offices in the construction of personalized mosaics.


When a client device sends a request for a mosaic application, the processing office associated with the client device assigns a processor/virtual machine for the client device for the requested mosaic application. The assigned virtual machine constructs the personalized mosaic by compositing the groomed content from the desired channels using a stitcher. The virtual machine sends the client device an MPEG stream that has a mosaic of the channels that the client has requested. Thus, by grooming the content first so that the content can be stitched together, the virtual machines that create the mosaics do not need to first decode the desired channels, render the channels within the background as a bitmap and then encode the bitmap.


An application, such as a mosaic, can be requested either directly through a client device or indirectly through another device, such as a PC, for display of the application on a display associated with the client device. The user could log into a website associated with the processing office by providing information about the user's account. The server associated with the processing office would provide the user with a selection screen for selecting an application. If the user selected a mosaic application, the server would allow the user to select the content that the user wishes to view within the mosaic. In response to the selected content for the mosaic and using the user's account information, the processing office server would direct the request to a session processor and establish an interactive session with the client device of the user. The session processor would then be informed by the processing office server of the desired application. The session processor would retrieve the desired application, the mosaic application in this example, and would obtain the required MPEG objects. The processing office server would then inform the session processor of the requested video content, and the session processor would operate in conjunction with the stitcher to construct the mosaic and provide the mosaic as an MPEG video stream to the client device. Thus, the processing office server may include scripts or applications for performing the functions of the client device in setting up the interactive session, requesting the application, and selecting content for display. While the mosaic elements may be predetermined by the application, they may also be user configurable, resulting in a personalized mosaic.



FIG. 24 is a diagram of an IP based content delivery system. In this system, content may come from a broadcast source 2400, a proxy cache 2415 fed by a content provider 2410, Network Attached Storage (NAS) 2425 containing configuration and management files 2420, or other sources not shown. For example, the NAS may include asset metadata that provides information about the location of content. This content could be available through a load balancing switch 2460. Blade session processors/virtual machines 2460 can perform different processing functions on the content to prepare it for delivery. Content is requested by the user via a client device such as a set top box 2490. This request is processed by the controller 2430, which then configures the resources and the path to provide this content. The client device 2490 receives the content and presents it on the user's display 2495.



FIG. 25 provides a diagram of a cable based content delivery system. Many of the components are the same: a controller 2530, a broadcast source 2500, a content provider 2510 providing content via a proxy cache 2515, configuration and management files 2520 on a file server NAS 2525, session processors 2560, a load balancing switch 2550, a client device such as a set top box 2590, and a display 2595. However, a number of additional pieces of equipment are required due to the different physical medium. In this case the added resources include: QAM modulators 2575, a return path receiver 2570, a combiner and diplexer 2580, and a Session and Resource Manager (SRM) 2540. QAM upconverters 2575 are required to transmit data (content) downstream to the user. These modulators convert the data into a form that can be carried across the coax that goes to the user. Correspondingly, the return path receiver 2570 is used to demodulate the data that comes up the cable from the set top 2590. The combiner and diplexer 2580 is a passive device that combines the downstream QAM channels and splits out the upstream return channel. The SRM is the entity that controls how the QAM modulators are configured and assigned and how the streams are routed to the client device.


These additional resources add cost to the system. As a result, the desire is to minimize the number of additional resources that are required to deliver a level of performance to the user that mimics a non-blocking system such as an IP network. Since there is not a one-to-one correspondence between the cable network resources and the users on the network, the resources must be shared. Shared resources must be managed so they can be assigned when a user requires a resource and then freed when the user is finished utilizing that resource. Proper management of these resources is critical to the operator because without it, the resources could be unavailable when needed most. Should this occur, the user either receives a “please wait” message or, in the worst case, a “service unavailable” message.



FIG. 26 is a diagram showing the steps required to configure a new interactive session based on input from a user. This diagram depicts only those items that must be allocated or managed or used to do the allocation or management. A typical request would follow the steps listed below:


(1) The Set Top 2609 requests content 2610 from the Controller 2607


(2) The Controller 2607 requests QAM bandwidth 2620 from the SRM 2603


(3) The SRM 2603 checks QAM availability 2625


(4) The SRM 2603 allocates the QAM modulator 2630


(5) The QAM modulator returns confirmation 2635


(6) The SRM 2603 confirms QAM allocation success 2640 to the Controller


(7) The Controller 2607 allocates the Session processor 2650


(8) The Session processor confirms allocation success 2653


(9) The Controller 2607 allocates the content 2655


(10) The Controller 2607 configures 2660 the Set Top 2609. This includes:

    • a. Frequency to tune
    • b. Programs to acquire or alternatively PIDs to decode
    • c. IP port to connect to the Session processor for keystroke capture


(11) The Set Top 2609 tunes to the channel 2663


(12) The Set Top 2609 confirms success 2665 to the Controller 2607.


The Controller 2607 allocates the resources based on a request for service from a set top box 2609. It frees these resources when the set top or server sends an “end of session”. While the Controller 2607 can react quickly with minimal delay, the SRM 2603 can only allocate a set number of QAM sessions per second, e.g., 200. Demand that exceeds this rate results in unacceptable delays for the user. For example, if 500 requests came in at the same time against a rate of 200 sessions per second, the last user would have to wait 2.5 seconds before the request was granted. It is also possible that, rather than the request being granted, an error message could be displayed, such as “service unavailable”.
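
The worst-case wait is simply the burst size divided by the SRM's allocation rate, as the small helper below shows.

```python
def worst_case_wait_s(burst_requests: int, srm_sessions_per_sec: float) -> float:
    """Seconds the last requester in a simultaneous burst waits when the
    SRM can only set up a fixed number of QAM sessions per second."""
    return burst_requests / srm_sessions_per_sec

# 500 simultaneous requests against 200 sessions/second:
assert worst_case_wait_s(500, 200) == 2.5
```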


While the example above describes the request and response sequence for an AVDN session over a cable TV network, the example below describes a similar sequence over an IPTV network. Note that the sequence in itself is not a claim, but rather illustrates how AVDN would work over an IPTV network.

    • (1) Client device requests content from the Controller via a Session Manager (i.e. controller proxy).
    • (2) Session Manager forwards request to Controller.
    • (3) Controller responds with the requested content via Session Manager (i.e. client proxy).
    • (4) Session Manager opens a unicast session and forwards Controller response to client over unicast IP session.
    • (5) Client device acquires Controller response sent over unicast IP session.
    • (6) The Session Manager may simultaneously narrowcast the response over a multicast IP session, sharing it with other clients on the node group that request the same content at the same time, as a bandwidth usage optimization technique.



FIG. 27 is a simplified system diagram used to break out each area for performance improvement. This diagram focuses only on the data and equipment that will be managed and removes all other non-managed items. Therefore, the switch, return path, combiner, etc. are removed for the sake of clarity. This diagram will be used to step through each item, working from the end user back to the content origination.


A first issue is the assignment of QAMs 2770 and QAM channels 2775 by the SRM 2720. In particular, the resources must be managed to prevent SRM overload, that is, eliminating the delay the user would see when requests to the SRM 2720 exceed its sessions per second rate.


To prevent SRM “overload”, “time based modeling” may be used. For time based modeling, the Controller 2700 monitors the history of past transactions, in particular, high load periods. Using this history, the Controller 2700 can predict when a high load period will occur, for example, at the top of an hour. The Controller 2700 uses this knowledge to pre-allocate resources before the period arrives. That is, it uses predictive algorithms to determine future resource requirements. As an example, if the Controller 2700 predicts that 475 users are going to join at a particular time, it can start allocating those resources 5 seconds early so that when the load hits, the resources have already been allocated and no user sees a delay.
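
A toy model of time based modeling: from the predicted load for a time slot and the SRM's setup rate, compute how early the Controller should begin allocating. The interface and the safety margin parameter are illustrative assumptions.

```python
def preallocation_lead_s(expected_sessions: int, srm_sessions_per_sec: float,
                         safety_margin_s: float = 0.0) -> float:
    """Seconds before a predicted load spike at which the Controller
    should start allocating so every session is ready when the spike
    hits."""
    return expected_sessions / srm_sessions_per_sec + safety_margin_s

# If history predicts 475 joins at the top of the hour and the SRM
# grants 200 sessions/second, allocation must start at least ~2.4 s
# early (the text's example uses 5 s of lead time):
# preallocation_lead_s(475, 200) -> 2.375
```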


Secondly, the resources could be pre-allocated based on input from an operator. Should the operator know a major event is coming, e.g., a pay per view sporting event, he may want to pre-allocate resources in anticipation. In both cases, the SRM 2720 releases unused QAM 2770 resources when not in use and after the event.


Thirdly, QAMs 2770 can be allocated based on a “rate of change” which is independent of previous history. For example, if the controller 2700 recognizes a sudden spike in traffic, it can then request more QAM bandwidth than needed in order to avoid the QAM allocation step when adding additional sessions. An example of a sudden, unexpected spike might be a button as part of the program that indicates a prize could be won if the user selects this button.


Currently, there is one request to the SRM 2720 for each session to be added. Instead, the Controller 2700 could request the whole QAM 2770, or a large part of a single QAM's bandwidth, and manage the data within that QAM channel 2775 itself. Since one aspect of this system is the ability to create a channel that is only 1, 2, or 3 Mb/sec, this could reduce the number of requests to the SRM 2720 by replacing up to 27 requests with a single request.


The user will also experience a delay when requesting different content, even from within an active session. Currently, if a set top 2790 is in an active session and requests a new set of content 2730, the Controller 2700 has to tell the SRM 2720 to de-allocate the QAM 2770, then the Controller 2700 must de-allocate the session processor 2750 and the content 2730, and then request another QAM 2770 from the SRM 2720 and allocate a different session processor 2750 and content 2730. Instead, the Controller 2700 can change the video stream 2755 feeding the QAM modulator 2770, thereby leaving the previously established path intact. There are a couple of ways to accomplish the change. First, since the QAM modulators 2770 are on a network, the Controller 2700 can merely change the session processor 2750 driving the QAM 2770. Second, the Controller 2700 can leave the session processor 2750 to set top 2790 connection intact but change the content 2730 feeding the session processor 2750, e.g., "CNN Headline News" to "CNN World Now". Both of these methods eliminate the QAM initialization and set top tuning delays.


Thus, resources are intelligently managed to minimize the amount of equipment required to provide these interactive services. In particular, the Controller can manipulate the video streams 2755 feeding the QAM 2770. By profiling these streams 2755, the Controller 2700 can maximize the channel usage within a QAM 2770. That is, it can maximize the number of programs in each QAM channel 2775 reducing wasted bandwidth and the required number of QAMs 2770. There are three primary means to profile streams: formulaic, pre-profiling, and live feedback.


The first profiling method, formulaic, consists of adding up the bit rates of the various video streams used to fill a QAM channel 2775. In particular, there may be many video elements that are used to create a single video stream 2755. The maximum bit rate of each element can be added together to obtain an aggregate bit rate for the video stream 2755. By monitoring the bit rates of all video streams 2755, the Controller 2700 can create a combination of video streams 2755 that most efficiently uses a QAM channel 2775. For example, if there were four video streams 2755: two that were 16 Mb/sec and two that were 20 Mb/sec then the controller could best fill a 38.8 Mb/sec QAM channel 2775 by allocating one of each bit rate per channel. This would then require two QAM channels 2775 to deliver the video. However, without the formulaic profiling, the result could end up as 3 QAM channels 2775 as perhaps the two 16 Mb/sec video streams 2755 are combined into a single 38.8 Mb/sec QAM channel 2775 and then each 20 Mb/sec video stream 2755 must have its own 38.8 Mb/sec QAM channel 2775.
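
Formulaic profiling turns channel filling into a bin-packing problem over aggregate bit rates. The sketch below uses a first-fit-decreasing heuristic, which is one reasonable choice; the patent does not mandate a particular packing algorithm.

```python
def pack_streams(stream_rates: list[float], qam_capacity: float = 38.8) -> list[list[float]]:
    """Pack aggregate stream bit rates (Mb/sec) into QAM channels using
    first-fit-decreasing, never exceeding a channel's capacity."""
    channels: list[list[float]] = []
    for rate in sorted(stream_rates, reverse=True):
        for ch in channels:
            if sum(ch) + rate <= qam_capacity:
                ch.append(rate)
                break
        else:
            channels.append([rate])  # no existing channel fits; open a new one
    return channels

# The example above: two 20 Mb/sec and two 16 Mb/sec streams fit in two
# channels of one each, rather than three:
# pack_streams([16, 16, 20, 20]) -> [[20, 16], [20, 16]]
```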


A second method is pre-profiling. In this method, a profile for the content 2730 is either received or generated internally. The profile information can be provided in metadata with the stream or in a separate file. The profiling information can be generated from the entire video or from a representative sample. The controller 2700 is then aware of the bit rate at various times in the stream and can use this information to effectively combine video streams 2755 together. For example, if two video streams 2755 both had a peak rate of 20 Mb/sec, they would need to be allocated to different 38.8 Mb/sec QAM channels 2775 if they were allocated bandwidth based on their peaks. However, if the controller knew that the nominal bit rate was 14 Mb/sec and knew their respective profiles so there were no simultaneous peaks, the controller 2700 could then combine the streams 2755 into a single 38.8 Mb/sec QAM channel 2775. The particular QAM bit rate is used for the above examples only and should not be construed as a limitation.
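
Pre-profiling can be reduced to a pairwise test over time-aligned bit-rate profiles: two streams may share a channel only if their profiles never sum past the channel capacity. The sketch below assumes equally sampled profiles in Mb/sec.

```python
def can_share_channel(profile_a: list[float], profile_b: list[float],
                      qam_capacity: float = 38.8) -> bool:
    """True if two streams' time-aligned bit-rate profiles never sum
    past the channel capacity, even if their individual peaks would."""
    return all(a + b <= qam_capacity for a, b in zip(profile_a, profile_b))

# Two streams that each peak at 20 Mb/sec, but never at the same instant,
# can share one 38.8 Mb/sec channel:
# can_share_channel([20, 14, 14], [14, 14, 20]) -> True
```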


A third method for profiling is via feedback provided by the system. The system can inform the controller 2700 of the current bit rate for all video elements used to build streams and the aggregate bit rate of the stream after it has been built. Furthermore, it can inform the controller 2700 of bit rates of stored elements prior to their use. Using this information, the controller 2700 can combine video streams 2755 in the most efficient manner to fill a QAM channel 2775.


It should be noted that it is also acceptable to use any or all of the three profiling methods in combination. That is, there is no restriction that they must be used independently.


The system can also address the usage of the resources themselves. For example, if a session processor 2750 can support 100 users and 350 users are currently active, four session processors are required. However, when the demand goes down to, say, 80 users, it makes sense to consolidate those sessions onto a single session processor 2750, thereby freeing the resources of three session processors. This is also useful in failure situations. Should a resource fail, the system can reassign sessions to other resources that are available. In this way, disruption to the user is minimized.
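
The consolidation decision is a ceiling division, as restated below with the figures from the example.

```python
import math

def processors_needed(active_users: int, capacity_per_processor: int = 100) -> int:
    """Number of session processors required for the current load."""
    return max(1, math.ceil(active_users / capacity_per_processor))

# 350 active users need four processors; when demand falls to 80, the
# sessions can be consolidated onto one, freeing the other three:
assert processors_needed(350) == 4
assert processors_needed(80) == 1
```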


The system can also repurpose functions depending on the expected usage. The session processors 2750 can implement a number of different functions, for example, processing video, processing audio, etc. Since the Controller 2700 has a history of usage, it can adjust the functions on the session processors 2750 to meet expected demand. For example, if in the early afternoons there is typically a high demand for music, the Controller 2700 can reassign additional session processors 2750 to process music in anticipation of the demand. Correspondingly, if in the early evening there is a high demand for news, the Controller 2700 anticipates the demand and reassigns the session processors 2750 accordingly. The flexibility and anticipation of the system allow it to provide the optimum user experience with the minimum amount of equipment. That is, no equipment sits idle because it has only a single purpose and that purpose is not required.


The present invention may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof. In an embodiment of the present invention, predominantly all of the reordering logic may be implemented as a set of computer program instructions that is converted into a computer executable form, stored as such in a computer readable medium, and executed by a microprocessor within the array under the control of an operating system.


Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies, networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software or a magnetic tape), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web.)


Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL.)


While the invention has been particularly shown and described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.


Embodiments of the present invention may be described, without limitation, by the following claims. While these embodiments have been described in the claims by process steps, an apparatus comprising a computer with associated display capable of executing the process steps in the claims below is also included in the present invention. Likewise, a computer program product including computer executable instructions for executing the process steps in the claims below and stored on a computer readable medium is included within the present invention.

Claims
  • 1. A system allowing for interactivity with video content that is transmitted to a client device as compressed video content, wherein the compressed video content includes a plurality of composited video frames, the system comprising: a processor located remotely from the client device for controlling an interactive session with the client device and maintaining state information about a first compressed graphical element within the compressed video content, wherein the state information comprises a state of the first compressed graphical element, and wherein the state is configured to be associated with a user input operation performed with respect to the first compressed graphical element; anda stitcher responsive to the processor for stitching together, in a frequency domain, at least the first compressed graphical element and a second compressed graphical element to form a new composited video frame;wherein when the processor receives a signal requesting a change in state of the first compressed graphical element, the processor retrieves a new compressed graphical element indicative of the state change of the first compressed graphical element and the stitcher stitches the new compressed graphical element with the second compressed graphical element in the frequency domain to form a new composited video frame defining new compressed video content to be transmitted to the client device.
  • 2. A system according to claim 1, further comprising: a transmitter for transmitting the new compressed video content to the requesting client device.
  • 3. A system according to claim 1, wherein the first graphical element is one or more MPEG encoded video frames.
  • 4. A system according to claim 1 wherein each of the graphical elements are block-based transform encoded.
  • 5. The system of claim 1, wherein the state of the first compressed graphical element indicates an on state or an off state of the first compressed graphical element.
  • 6. The system of claim 1, wherein the state of the first compressed graphical element indicates whether a video program within the compressed video content is playing or stopped.
  • 7. The system of claim 1, wherein when the processor receives a signal requesting a change in state of the first compressed graphical element, the processor further executes a script.
  • 8. The system of claim 7, wherein the script includes instructions for: determining if a video program within the compressed video content is playing or stopped;if the video program is playing, executing an operation such that the video program stops playing; andif the video program is stopped, executing an operation such that the video program resumes playing.
  • 9. The system of claim 1, wherein when the processor receives a signal requesting a change in state of the first compressed graphical element, the processor further executes a call to an external program.
  • 10. A method for creating a custom MPEG mosaic, the method comprising: receiving a request from a requesting client device for a custom MPEG mosaic containing a number of video sources including at least a first video source and a second video source, wherein the first and second video sources are video streams of different broadcast channels;receiving one or more groomed MPEG content streams including the first video source and the second video source;stitching the groomed MPEG content streams including the first video source and the second video source with a background, without transform decoding the first video source and the second video source into a spatial domain, thereby creating a sequence of MPEG mosaic frames configured to simultaneously present the first and second video sources; andtransmitting the sequence of MPEG mosaic frames to the requesting client device as an MPEG elementary stream.
  • 11. The method of claim 10, wherein the first video source includes user selected material and the second video source includes material selected by a processing office.
  • 12. The method of claim 11, wherein the material selected by the processing office includes one or more advertisements.
  • 13. The method of claim 10, further comprising: receiving a request from a requesting client device to edit the custom MPEG mosaic;receiving an additional groomed MPEG content stream including a third video source;replacing the first video source with the third video source by stitching the additional groomed MPEG content stream including the third video source with the one or more groomed MPEG content streams including the second video source, without transform decoding the second video source and the third video source into a spatial domain, thereby creating a new sequence of MPEG mosaic frames wherein the second and third video sources are presented simultaneously;transmitting the new sequence of MPEG mosaic frames to the requesting client device as an MPEG elementary stream.
  • 14. The method of claim 10, further comprising: receiving a request from a requesting client device to edit the custom MPEG mosaic;receiving an additional groomed MPEG content stream including a third video source;stitching the additional groomed MPEG content stream including the third video source with the one or more groomed MPEG content streams including the first video source and the second video source, without transform decoding the first, second, and third video sources into a spatial domain, thereby creating a new sequence of MPEG mosaic frames wherein the first, second, and third video sources are presented simultaneously;transmitting the new sequence of MPEG mosaic frames to the requesting client device as an MPEG elementary stream.
  • 15. The method of claim 10, wherein each of the one or more groomed MPEG content streams includes a plurality of intracoded MPEG P-frames.
20040111526 Baldwin et al. Jun 2004 A1
20040117827 Karaoguz et al. Jun 2004 A1
20040128686 Boyer et al. Jul 2004 A1
20040133704 Krzyzanowski et al. Jul 2004 A1
20040136698 Mock Jul 2004 A1
20040139158 Datta Jul 2004 A1
20040157662 Tsuchiya Aug 2004 A1
20040163101 Swix et al. Aug 2004 A1
20040184542 Fujimoto Sep 2004 A1
20040193648 Lai et al. Sep 2004 A1
20040210824 Shoff et al. Oct 2004 A1
20040261106 Hoffman Dec 2004 A1
20040261114 Addington et al. Dec 2004 A1
20050015259 Thumpudi et al. Jan 2005 A1
20050015816 Christofalo et al. Jan 2005 A1
20050021830 Urzaiz et al. Jan 2005 A1
20050034155 Gordon et al. Feb 2005 A1
20050034162 White et al. Feb 2005 A1
20050044575 Der Kuyl Feb 2005 A1
20050055685 Maynard et al. Mar 2005 A1
20050055721 Zigmond et al. Mar 2005 A1
20050071876 van Beek Mar 2005 A1
20050076134 Bialik et al. Apr 2005 A1
20050089091 Kim et al. Apr 2005 A1
20050091690 Delpuch et al. Apr 2005 A1
20050091695 Paz et al. Apr 2005 A1
20050105608 Coleman et al. May 2005 A1
20050114906 Hoarty et al. May 2005 A1
20050132305 Guichard et al. Jun 2005 A1
20050135385 Jenkins et al. Jun 2005 A1
20050141613 Kelly et al. Jun 2005 A1
20050149988 Grannan Jul 2005 A1
20050160088 Scallan et al. Jul 2005 A1
20050166257 Feinleib et al. Jul 2005 A1
20050180502 Puri Aug 2005 A1
20050198682 Wright Sep 2005 A1
20050213586 Cyganski et al. Sep 2005 A1
20050216933 Black Sep 2005 A1
20050216940 Black Sep 2005 A1
20050226426 Oomen et al. Oct 2005 A1
20050273832 Zigmond et al. Dec 2005 A1
20050283741 Balabanovic et al. Dec 2005 A1
20060001737 Dawson et al. Jan 2006 A1
20060020960 Relan et al. Jan 2006 A1
20060020994 Crane et al. Jan 2006 A1
20060031906 Kaneda Feb 2006 A1
20060039481 Shen et al. Feb 2006 A1
20060041910 Hatanaka et al. Feb 2006 A1
20060088105 Shen et al. Apr 2006 A1
20060095944 Demircin et al. May 2006 A1
20060112338 Joung et al. May 2006 A1
20060117340 Pavlovskaia et al. Jun 2006 A1
20060143678 Chou et al. Jun 2006 A1
20060161538 Kiilerich Jul 2006 A1
20060173985 Moore Aug 2006 A1
20060174026 Robinson et al. Aug 2006 A1
20060174289 Theberge Aug 2006 A1
20060195884 van Zoest et al. Aug 2006 A1
20060212203 Furuno Sep 2006 A1
20060218601 Michel Sep 2006 A1
20060230428 Craig et al. Oct 2006 A1
20060239563 Chebil et al. Oct 2006 A1
20060242570 Croft et al. Oct 2006 A1
20060256865 Westerman Nov 2006 A1
20060269086 Page et al. Nov 2006 A1
20060271985 Hoffman et al. Nov 2006 A1
20060285586 Westerman Dec 2006 A1
20060285819 Kelly et al. Dec 2006 A1
20070009035 Craig et al. Jan 2007 A1
20070009036 Craig et al. Jan 2007 A1
20070009042 Craig Jan 2007 A1
20070025639 Zhou et al. Feb 2007 A1
20070033528 Merrit et al. Feb 2007 A1
20070033631 Gordon et al. Feb 2007 A1
20070074251 Oguz et al. Mar 2007 A1
20070079325 de Heer Apr 2007 A1
20070115941 Patel et al. May 2007 A1
20070124282 Wittkotter May 2007 A1
20070124795 McKissick et al. May 2007 A1
20070130446 Minakami Jun 2007 A1
20070130592 Haeusel Jun 2007 A1
20070147804 Zhang et al. Jun 2007 A1
20070152984 Ording et al. Jul 2007 A1
20070172061 Pinder Jul 2007 A1
20070174790 Jing et al. Jul 2007 A1
20070178243 Dong et al. Aug 2007 A1
20070237232 Chang et al. Oct 2007 A1
20070300280 Turner et al. Dec 2007 A1
20080052742 Kopf et al. Feb 2008 A1
20080066135 Brodersen et al. Mar 2008 A1
20080084503 Kondo Apr 2008 A1
20080094368 Ording et al. Apr 2008 A1
20080098450 Wu et al. Apr 2008 A1
20080104520 Swenson et al. May 2008 A1
20080127255 Ress et al. May 2008 A1
20080154583 Goto et al. Jun 2008 A1
20080163059 Craner Jul 2008 A1
20080163286 Rudolph et al. Jul 2008 A1
20080170619 Landau Jul 2008 A1
20080170622 Gordon et al. Jul 2008 A1
20080178243 Dong et al. Jul 2008 A1
20080178249 Gordon et al. Jul 2008 A1
20080187042 Jasinschi Aug 2008 A1
20080189740 Carpenter et al. Aug 2008 A1
20080195573 Onoda et al. Aug 2008 A1
20080201736 Gordon et al. Aug 2008 A1
20080212942 Gordon et al. Sep 2008 A1
20080232452 Sullivan et al. Sep 2008 A1
20080243918 Holtman Oct 2008 A1
20080243998 Oh et al. Oct 2008 A1
20080246759 Summers Oct 2008 A1
20080253440 Srinivasan et al. Oct 2008 A1
20080271080 Gossweiler et al. Oct 2008 A1
20090003446 Wu et al. Jan 2009 A1
20090003705 Zou et al. Jan 2009 A1
20090007199 La Joie Jan 2009 A1
20090025027 Craner Jan 2009 A1
20090031341 Schlack et al. Jan 2009 A1
20090041118 Pavlovskaia et al. Feb 2009 A1
20090083781 Yang et al. Mar 2009 A1
20090083813 Dolce et al. Mar 2009 A1
20090083824 McCarthy et al. Mar 2009 A1
20090089188 Ku et al. Apr 2009 A1
20090094113 Berry et al. Apr 2009 A1
20090094646 Walter et al. Apr 2009 A1
20090100465 Kulakowski Apr 2009 A1
20090100489 Strothmann Apr 2009 A1
20090106269 Zuckerman et al. Apr 2009 A1
20090106386 Zuckerman et al. Apr 2009 A1
20090106392 Zuckerman et al. Apr 2009 A1
20090106425 Zuckerman et al. Apr 2009 A1
20090106441 Zuckerman et al. Apr 2009 A1
20090106451 Zuckerman et al. Apr 2009 A1
20090106511 Zuckerman et al. Apr 2009 A1
20090113009 Slemmer et al. Apr 2009 A1
20090138966 Krause et al. May 2009 A1
20090144781 Glaser et al. Jun 2009 A1
20090146779 Kumar et al. Jun 2009 A1
20090157868 Chaudhry Jun 2009 A1
20090158369 Van Vleck et al. Jun 2009 A1
20090160694 Di Flora Jun 2009 A1
20090172757 Aldrey et al. Jul 2009 A1
20090178098 Westbrook et al. Jul 2009 A1
20090183219 Maynard et al. Jul 2009 A1
20090189890 Corbett et al. Jul 2009 A1
20090193452 Russ et al. Jul 2009 A1
20090196346 Zhang et al. Aug 2009 A1
20090204920 Beverley et al. Aug 2009 A1
20090210899 Lawrence-Apfelbaum et al. Aug 2009 A1
20090225790 Shay et al. Sep 2009 A1
20090228620 Thomas et al. Sep 2009 A1
20090228922 Haj-Khalil et al. Sep 2009 A1
20090233593 Ergen et al. Sep 2009 A1
20090251478 Maillot et al. Oct 2009 A1
20090254960 Yarom et al. Oct 2009 A1
20090265617 Randall et al. Oct 2009 A1
20090271512 Jorgensen Oct 2009 A1
20090271818 Schlack Oct 2009 A1
20090298535 Klein et al. Dec 2009 A1
20090313674 Ludvig et al. Dec 2009 A1
20090328109 Pavlovskaia et al. Dec 2009 A1
20100033638 O'Donnell et al. Feb 2010 A1
20100058404 Rouse Mar 2010 A1
20100067571 White et al. Mar 2010 A1
20100077441 Thomas et al. Mar 2010 A1
20100104021 Schmit Apr 2010 A1
20100115573 Srinivasan et al. May 2010 A1
20100118972 Zhang et al. May 2010 A1
20100131996 Gauld May 2010 A1
20100146139 Brockmann Jun 2010 A1
20100158109 Dahlby et al. Jun 2010 A1
20100166071 Wu et al. Jul 2010 A1
20100174776 Westberg et al. Jul 2010 A1
20100175080 Yuen et al. Jul 2010 A1
20100180307 Hayes et al. Jul 2010 A1
20100211983 Chou Aug 2010 A1
20100226428 Thevathasan et al. Sep 2010 A1
20100235861 Schein et al. Sep 2010 A1
20100242073 Gordon et al. Sep 2010 A1
20100251167 Deluca et al. Sep 2010 A1
20100254370 Jana et al. Oct 2010 A1
20100325655 Perez Dec 2010 A1
20110002376 Ahmed et al. Jan 2011 A1
20110002470 Purnhagen et al. Jan 2011 A1
20110023069 Dowens Jan 2011 A1
20110035227 Lee et al. Feb 2011 A1
20110067061 Karaoguz et al. Mar 2011 A1
20110096828 Chen et al. Apr 2011 A1
20110107375 Stahl et al. May 2011 A1
20110110642 Salomons et al. May 2011 A1
20110150421 Sasaki et al. Jun 2011 A1
20110153776 Opala et al. Jun 2011 A1
20110167468 Lee et al. Jul 2011 A1
20110243024 Osterling et al. Oct 2011 A1
20110258584 Williams et al. Oct 2011 A1
20110289536 Poder et al. Nov 2011 A1
20110317982 Xu et al. Dec 2011 A1
20120023126 Jin et al. Jan 2012 A1
20120030212 Koopmans et al. Feb 2012 A1
20120137337 Sigmon et al. May 2012 A1
20120204217 Regis et al. Aug 2012 A1
20120209815 Carson et al. Aug 2012 A1
20120224641 Haberman et al. Sep 2012 A1
20120257671 Brockmann et al. Oct 2012 A1
20130003826 Craig et al. Jan 2013 A1
20130086610 Brockmann Apr 2013 A1
20130179787 Brockmann et al. Jul 2013 A1
20130198776 Brockmann Aug 2013 A1
20130272394 Brockmann et al. Oct 2013 A1
20140033036 Gaur et al. Jan 2014 A1
Foreign Referenced Citations (313)
Number Date Country
191599 Apr 2000 AT
198969 Feb 2001 AT
250313 Oct 2003 AT
472152 Jul 2010 AT
475266 Aug 2010 AT
550086 Feb 1986 AU
199060189 Nov 1990 AU
620735 Feb 1992 AU
199184838 Apr 1992 AU
643828 Nov 1993 AU
2004253127 Jan 2005 AU
2005278122 Mar 2006 AU
2010339376 Aug 2012 AU
2011249132 Nov 2012 AU
2011258972 Nov 2012 AU
2011315950 May 2013 AU
682776 Mar 1964 CA
2052477 Mar 1992 CA
1302554 Jun 1992 CA
2163500 May 1996 CA
2231391 May 1997 CA
2273365 Jun 1998 CA
2313133 Jun 1999 CA
2313161 Jun 1999 CA
2528499 Jan 2005 CA
2569407 Mar 2006 CA
2728797 Apr 2010 CA
2787913 Jul 2011 CA
2798541 Dec 2011 CA
2814070 Apr 2012 CA
1507751 Jun 2004 CN
1969555 May 2007 CN
101180109 May 2008 CN
101627424 Jan 2010 CN
101637023 Jan 2010 CN
102007773 Apr 2011 CN
4408355 Oct 1994 DE
69516139 D1 Dec 2000 DE
69132518 D1 Sep 2001 DE
69333207 D1 Jul 2004 DE
98961961 Aug 2007 DE
602008001596 D1 Aug 2010 DE
602006015650 D1 Sep 2010 DE
0093549 Nov 1983 EP
0128771 Dec 1984 EP
0419137 Mar 1991 EP
0449633 Oct 1991 EP
0477786 Apr 1992 EP
0477786 Apr 1992 EP
0523618 Jan 1993 EP
0534139 Mar 1993 EP
0568453 Nov 1993 EP
0588653 Mar 1994 EP
0594350 Apr 1994 EP
0612916 Aug 1994 EP
0624039 Nov 1994 EP
0638219 Feb 1995 EP
0643523 Mar 1995 EP
0661888 Jul 1995 EP
0714684 Jun 1996 EP
0746158 Dec 1996 EP
0761066 Mar 1997 EP
0789972 Aug 1997 EP
0830786 Mar 1998 EP
0861560 Sep 1998 EP
0933966 Aug 1999 EP
0933966 Aug 1999 EP
1026872 Aug 2000 EP
1038397 Sep 2000 EP
1038399 Sep 2000 EP
1038400 Sep 2000 EP
1038401 Sep 2000 EP
1051039 Nov 2000 EP
1055331 Nov 2000 EP
1120968 Aug 2001 EP
1345446 Sep 2003 EP
1422929 May 2004 EP
1428562 Jun 2004 EP
1521476 Apr 2005 EP
1645115 Apr 2006 EP
1725044 Nov 2006 EP
1767708 Mar 2007 EP
1771003 Apr 2007 EP
1772014 Apr 2007 EP
1877150 Jan 2008 EP
1887148 Feb 2008 EP
1900200 Mar 2008 EP
1902583 Mar 2008 EP
1908293 Apr 2008 EP
1911288 Apr 2008 EP
1918802 May 2008 EP
2100296 Sep 2009 EP
2105019 Sep 2009 EP
2106665 Oct 2009 EP
2116051 Nov 2009 EP
2124440 Nov 2009 EP
2248341 Nov 2010 EP
2269377 Jan 2011 EP
2271098 Jan 2011 EP
2304953 Apr 2011 EP
2364019 Sep 2011 EP
2384001 Nov 2011 EP
2409493 Jan 2012 EP
2477414 Jul 2012 EP
2487919 Aug 2012 EP
2520090 Nov 2012 EP
2567545 Mar 2013 EP
2577437 Apr 2013 EP
2628306 Aug 2013 EP
2632164 Aug 2013 EP
2632165 Aug 2013 EP
2695388 Feb 2014 EP
2207635 Jun 2004 ES
8211463 Jun 1982 FR
2529739 Jan 1984 FR
2891098 Mar 2007 FR
2207838 Feb 1989 GB
2248955 Apr 1992 GB
2290204 Dec 1995 GB
2365649 Feb 2002 GB
2378345 Feb 2003 GB
1134855 Oct 2010 HK
1116323 Dec 2010 HK
19913397 Apr 1992 IE
99586 Feb 1998 IL
215133D0 Dec 2011 IL
222829D0 Dec 2012 IL
222830D0 Dec 2012 IL
225525D0 Jun 2013 IL
180215 Jan 1998 IN
200701744 Nov 2007 IN
200900856 May 2009 IN
200800214 Jun 2009 IN
3759 Mar 1992 IS
60-054324 Mar 1985 JP
63-033988 Feb 1988 JP
63-263985 Oct 1988 JP
2001-241993 Sep 1989 JP
04-373286 Dec 1992 JP
06-054324 Feb 1994 JP
7015720 Jan 1995 JP
7-160292 Jun 1995 JP
8-265704 Oct 1996 JP
10-228437 Aug 1998 JP
10-510131 Sep 1998 JP
11-134273 May 1999 JP
H11-261966 Sep 1999 JP
2000-152234 May 2000 JP
2001-203995 Jul 2001 JP
2001-245271 Sep 2001 JP
2001-514471 Sep 2001 JP
2002-016920 Jan 2002 JP
2002-057952 Feb 2002 JP
2002-112220 Apr 2002 JP
2002-141810 May 2002 JP
2002-208027 Jul 2002 JP
2002-319991 Oct 2002 JP
2003-506763 Feb 2003 JP
2003-087785 Mar 2003 JP
2003-529234 Sep 2003 JP
2004-501445 Jan 2004 JP
2004-056777 Feb 2004 JP
2004-110850 Apr 2004 JP
2004-112441 Apr 2004 JP
2004-135932 May 2004 JP
2004-264812 Sep 2004 JP
2004-533736 Nov 2004 JP
2004-536381 Dec 2004 JP
2004-536681 Dec 2004 JP
2005-033741 Feb 2005 JP
2005-084987 Mar 2005 JP
2005-095599 Mar 2005 JP
8-095599 Apr 2005 JP
2005-156996 Jun 2005 JP
2005-519382 Jun 2005 JP
2005-523479 Aug 2005 JP
2005-309752 Nov 2005 JP
2006-067280 Mar 2006 JP
2006-512838 Apr 2006 JP
11-88419 Sep 2007 JP
2008-523880 Jul 2008 JP
2008-535622 Sep 2008 JP
04252727 Apr 2009 JP
2009-543386 Dec 2009 JP
2011-108155 Jun 2011 JP
2012-080593 Apr 2012 JP
04996603 Aug 2012 JP
05121711 Jan 2013 JP
53-004612 Oct 2013 JP
05331008 Oct 2013 JP
05405819 Feb 2014 JP
2006067924 Jun 2006 KR
2007038111 Apr 2007 KR
20080001298 Jan 2008 KR
2008024189 Mar 2008 KR
2010111739 Oct 2010 KR
2010120187 Nov 2010 KR
2010127240 Dec 2010 KR
2011030640 Mar 2011 KR
2011129477 Dec 2011 KR
20120112683 Oct 2012 KR
2013061149 Jun 2013 KR
2013113925 Oct 2013 KR
1333200 Nov 2013 KR
2008045154 Nov 2013 KR
2013138263 Dec 2013 KR
1032594 Apr 2008 NL
1033929 Apr 2008 NL
2004670 Nov 2011 NL
2004780 Jan 2012 NL
239969 Dec 1994 NZ
99110 Dec 1993 PT
WO 8202303 Jul 1982 WO
WO 8908967 Sep 1989 WO
WO 9013972 Nov 1990 WO
WO 9322877 Nov 1993 WO
WO 9416534 Jul 1994 WO
WO 9419910 Sep 1994 WO
WO 9421079 Sep 1994 WO
WO 9515658 Jun 1995 WO
WO 9532587 Nov 1995 WO
WO 9533342 Dec 1995 WO
WO 9533342 Dec 1995 WO
WO 9614712 May 1996 WO
WO 9627843 Sep 1996 WO
WO 9631826 Oct 1996 WO
WO 9637074 Nov 1996 WO
WO 9637074 Nov 1996 WO
WO 9642168 Dec 1996 WO
WO 9716925 May 1997 WO
WO 9733434 Sep 1997 WO
WO 9739583 Oct 1997 WO
WO 9826595 Jun 1998 WO
WO 9900735 Jan 1999 WO
WO 9904568 Jan 1999 WO
WO 9900735 Jan 1999 WO
WO 9930496 Jun 1999 WO
WO 9930497 Jun 1999 WO
WO 9930500 Jun 1999 WO
WO 9930501 Jun 1999 WO
WO 9935840 Jul 1999 WO
WO 9941911 Aug 1999 WO
WO 9956468 Nov 1999 WO
WO 9965232 Dec 1999 WO
WO 9965243 Dec 1999 WO
WO 9966732 Dec 1999 WO
WO 9966732 Dec 1999 WO
WO 0002303 Jan 2000 WO
WO 0007372 Feb 2000 WO
WO 0008967 Feb 2000 WO
WO 0019910 Apr 2000 WO
WO 0038430 Jun 2000 WO
WO 0041397 Jul 2000 WO
WO 0139494 May 2001 WO
WO 0141447 Jun 2001 WO
WO 0182614 Nov 2001 WO
WO 0192973 Dec 2001 WO
WO 02089487 Jul 2002 WO
WO 02076097 Sep 2002 WO
WO 02076099 Sep 2002 WO
WO 03026232 Mar 2003 WO
WO 2003026275 Mar 2003 WO
WO 03047710 Jun 2003 WO
WO 03065683 Aug 2003 WO
WO 03071727 Aug 2003 WO
WO 03091832 Nov 2003 WO
WO 2004012437 Feb 2004 WO
WO 2004018060 Mar 2004 WO
WO 2004073310 Aug 2004 WO
WO 2005002215 Jan 2005 WO
WO 2005041122 May 2005 WO
WO 2005053301 Jun 2005 WO
WO 2005120067 Dec 2005 WO
WO 2006014362 Feb 2006 WO
WO 2006022881 Mar 2006 WO
WO 2006053305 May 2006 WO
WO 2006067697 Jun 2006 WO
WO 2006081634 Aug 2006 WO
WO 2006105480 Oct 2006 WO
WO 2006110268 Oct 2006 WO
WO 2007001797 Jan 2007 WO
WO 2007008319 Jan 2007 WO
WO 2007008355 Jan 2007 WO
WO 2007008356 Jan 2007 WO
WO 2007008357 Jan 2007 WO
WO 2007008358 Jan 2007 WO
WO 2007018722 Feb 2007 WO
WO 2007018722 Feb 2007 WO
WO 2007018726 Feb 2007 WO
WO 2008044916 Apr 2008 WO
WO 2008086170 Jul 2008 WO
WO 2008088741 Jul 2008 WO
WO 2008088752 Jul 2008 WO
WO 2008088772 Jul 2008 WO
WO 2008100205 Aug 2008 WO
WO 2009038596 Mar 2009 WO
WO 2009099893 Aug 2009 WO
WO 2009099895 Aug 2009 WO
WO 2009105465 Aug 2009 WO
WO 2009110897 Sep 2009 WO
WO 2009114247 Sep 2009 WO
WO 2009155214 Dec 2009 WO
WO 2010044926 Apr 2010 WO
WO 2010054136 May 2010 WO
WO 2010107954 Sep 2010 WO
WO 2011014336 Sep 2010 WO
WO 2011082364 Jul 2011 WO
WO 2011139155 Nov 2011 WO
WO 2011149357 Dec 2011 WO
WO 2012051528 Apr 2012 WO
WO 2012138660 Oct 2012 WO
WO 2013106390 Jul 2013 WO
WO 2013155310 Jul 2013 WO
Non-Patent Literature Citations (255)
Entry
Authorized Officer Jürgen Güttlich, International Search Report and Written Opinion, dated Jan. 12, 2007, PCT/US2008/000400.
Authorized Officer Jürgen Güttlich, International Search Report and Written Opinion, dated Jan. 12, 2007, PCT/US2008/000450.
Hoarty, W. L., “The Smart Headend—A Novel Approach to Interactive Television”, Montreux Int'l TV Symposium, Jun. 9, 1995.
Robert Koenen, “MPEG-4 Overview—Overview of the MPEG-4 Standard”, Internet Citation, Mar. 2001.
Avaro, O., et al., “MPEG-4 Systems: Overview”, Signal Processing: Image Communication, Elsevier Science Publishers, vol. 15, Jan. 1, 2000, pp. 281-298.
Stoll, G. et al., “GMF4iTV: Neue Wege zur Interaktivität mit bewegten Objekten beim digitalen Fernsehen”, FKT Fernseh und Kinotechnik, Fachverlag Schiele & Schon GmbH, vol. 60, No. 4, Jan. 1, 2006, pp. 171-178.
AC-3 digital audio compression standard, Extract, Dec. 20, 1995, 11 pgs.
ActiveVideo Networks Bv, International Search Report and Written Opinion, PCT/NL2011/050308, Sep. 6, 2011, 8 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2011/056355, Apr. 16, 2013, 4 pgs.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2012/032010, Oct. 17, 2013, 4 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2011/056355, Apr. 13, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2012/032010, Oct. 10, 2012, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/020769, May 9, 2013, 9 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2013/036182, Jul. 29, 2013, 12 pgs.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2009/032457, Jul. 22, 2009, 7 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 09820936.4, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 10754084.1, 11 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 10841764.3, 16 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 11833486.1, 6 pgs.
Annex C—Video buffering verifier, information technology—generic coding of moving pictures and associated audio information: video, Feb. 2000, 6 pgs.
Antonoff, Michael, “Interactive Television,” Popular Science, Nov. 1992, 12 pages.
Askenas, M., U.S. Appl. No. 10/253,109 (unpublished), filed Sep. 24, 2002. Not Found.
Avinity Systems B.V., Extended European Search Report, Application No. 12163713.6, 10 pgs.
Benjelloun, A summation algorithm for MPEG-1 coded audio signals: a first step towards audio processed domain, 2000, 9 pgs.
Broadhead, Direct manipulation of MPEG compressed digital audio, Nov. 5-9, 1995, 41 pgs.
Cable Television Laboratories, Inc., “CableLabs Asset Distribution Interface Specification, Version 1.1”, May 5, 2006, 33 pgs.
CD 11172-3, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s, Jan. 1, 1992, 39 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, filed Dec. 23, 2010, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, filed Jan. 12, 2012, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,183, filed Jul. 19, 2012, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,189, filed Oct. 12, 2011, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,176, filed Mar. 23, 2011, 8 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 13/609,183, filed Aug. 26, 2013, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, filed Feb. 5, 2009, 30 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/103,838, filed Jul. 6, 2010, 35 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,176, filed Oct. 1, 2010, 8 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,183, filed Apr. 13, 2011, 16 pgs.
Craig, Final Office Action, U.S. Appl. No. 11/178,177, filed Oct. 26, 2010, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, filed May 12, 2009, 32 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, filed Aug. 19, 2008, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/103,838, filed Nov. 19, 2009, 34 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,176, filed May 6, 2010, 7 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, filed Mar. 29, 2011, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, filed Aug. 3, 2011, 26 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,177, filed Mar. 29, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, filed Feb. 11, 2011, 19 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, filed Jun. 20, 2011, 21 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, filed Aug. 25, 2010, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,181, filed Mar. 29, 2010, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,182, filed Feb. 23, 2010, 15 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, filed Dec. 6, 2010, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, filed Sep. 15, 2011, 12 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, filed Feb. 19, 2010, 17 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,183, filed Jul. 20, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, filed Nov. 9, 2010, 13 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, filed Mar. 15, 2010, 11 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, filed Jul. 23, 2009, 10 pgs.
Craig, Office Action, U.S. Appl. No. 11/178,189, filed May 26, 2011, 14 pgs.
Craig, Office Action, U.S. Appl. No. 13/609,183, filed May 9, 2013, 7 pgs.
Pavlovskaia, Office Action, JP 2011-516499, Feb. 14, 2014, 19 pgs.
Digital Audio Compression Standard (AC-3, E-AC-3), Advanced Television Systems Committee, Jun. 14, 2005, 236 pgs.
European Patent Office, Extended European Search Report for International Application No. PCT/US2010/027724, dated Jul. 24, 2012, 11 pages.
FFMPEG, http://www.ffmpeg.org, downloaded Apr. 8, 2010, 8 pgs.
FFMPEG-0.4.9 Audio Layer 2 Tables Including Fixed Psycho Acoustic Model, 2001, 2 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 11/620,593, filed May 23, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, filed Feb. 7, 2012, 5 pgs.
Herr, Notice of Allowance, U.S. Appl. No. 12/534,016, filed Sep. 28, 2011, 15 pgs.
Herr, Final Office Action, U.S. Appl. No. 11/620,593, filed Sep. 15, 2011, 104 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, filed Mar. 19, 2010, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, filed Apr. 21, 2009, 27 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, filed Dec. 23, 2009, 58 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, filed Jan. 24, 2011, 96 pgs.
Herr, Office Action, U.S. Appl. No. 11/620,593, filed Aug. 27, 2010, 41 pgs.
Herre, Thoughts on an SAOC Architecture, Oct. 2006, 9 pgs.
Hoarty, The Smart Headend—A Novel Approach to Interactive Television, Montreux Int'l TV Symposium, Jun. 9, 1995, 21 pgs.
ICTV, Inc., International Preliminary Report on Patentability, PCT/US2006/022585, Jan. 29, 2008, 9 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022585, Oct. 12, 2007, 15 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2008/000419, May 15, 2009, 20 pgs.
ICTV, Inc., International Search Report / Written Opinion, PCT/US2006/022533, Nov. 20, 2006, 8 pgs.
Isovic, Timing constraints of MPEG-2 decoding for high quality video: misconceptions and realistic assumptions, Jul. 2-4, 2003, 10 pgs.
Korean Intellectual Property Office, International Search Report, PCT/US2009/032457, Jul. 22, 2009, 7 pgs.
MPEG-2 Video elementary stream supplemental information, Dec. 1999, 12 pgs.
Ozer, Video Compositing 101, available from http://www.emedialive.com, Jun. 2, 2004, 5 pgs.
Porter, Compositing Digital Images, 18 Computer Graphics (No. 3), Jul. 1984, pp. 253-259.
RSS Advisory Board, “RSS 2.0 Specification”, published Oct. 15, 2007. Not Found.
SAOC use cases, draft requirements and architecture, Oct. 2006, 16 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 11/258,602, Feb. 23, 2009, 15 pgs.
Sigmon, Office Action, U.S. Appl. No. 11/258,602, Sep. 2, 2008, 12 pgs.
TAG Networks, Inc., Communication pursuant to Article 94(3) EPC, European Patent Application, 06773714.8, May 6, 2009, 3 pgs.
TAG Networks Inc., Decision to Grant a Patent, JP 2009-544985, Jun. 28, 2013, 1 pg.
TAG Networks Inc., IPRP, PCT/US2006/010080, Oct. 16, 2007, 6 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024194, Jan. 10, 2008, 7 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024195, Apr. 1, 2009, 11 pgs.
TAG Networks Inc., IPRP, PCT/US2006/024196, Jan. 10, 2008, 6 pgs.
TAG Networks Inc., International Search Report, PCT/US2008/050221, Jun. 12, 2008, 9 pgs.
TAG Networks Inc., Office Action, CN 200680017662.3, Apr. 26, 2010, 4 pgs.
TAG Networks Inc., Office Action, EP 06739032.8, Aug. 14, 2009, 4 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, May 6, 2009, 3 pgs.
TAG Networks Inc., Office Action, EP 06773714.8, Jan. 12, 2010, 4 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, Oct. 1, 2012, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-506474, Aug. 8, 2011, 5 pgs.
TAG Networks Inc., Office Action, JP 2008-520254, Oct. 20, 2011, 2 pgs.
TAG Networks, IPRP, PCT/US2008/050221, Jul. 7, 2009, 6 pgs.
TAG Networks, International Search Report, PCT/US2010/041133, Oct. 19, 2010, 13 pgs.
TAG Networks, Office Action, CN 200880001325.4, Jun. 22, 2011, 4 pgs.
TAG Networks, Office Action, JP 2009-544985, Feb. 25, 2013, 3 pgs.
Talley, A general framework for continuous media transmission control, Oct. 13-16, 1997, 10 pgs.
The Toolame Project, Psych_nl.c, 1999, 1 pg.
Todd, AC-3: flexible perceptual coding for audio transmission and storage, Feb. 26-Mar. 1, 1994, 16 pgs.
Tudor, MPEG-2 Video Compression, Dec. 1995, 15 pgs.
TVHEAD, Inc., First Examination Report, IN 1744/MUMNP/2007, Dec. 30, 2013, 6 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/010080, Jun. 20, 2006, 3 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024194, Dec. 15, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024195, Nov. 29, 2006, 9 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024196, Dec. 11, 2006, 4 pgs.
TVHEAD, Inc., International Search Report, PCT/US2006/024197, Nov. 28, 2006, 9 pgs.
Vernon, Dolby digital: audio coding for digital television and storage applications, Aug. 1999, 18 pgs.
Wang, A beat-pattern based error concealment scheme for music delivery with burst packet loss, Aug. 22-25, 2001, 4 pgs.
Wang, A compressed domain beat detector using MP3 audio bitstream, Sep. 30-Oct. 5, 2001, 9 pgs.
Wang, A multichannel audio coding algorithm for inter-channel redundancy removal, May 12-15, 2001, 6 pgs.
Wang, An excitation level based psychoacoustic model for audio compression, Oct. 30-Nov. 4, 1999, 4 pgs.
Wang, Energy compaction property of the MDCT in comparison with other transforms, Sep. 22-25, 2000, 23 pgs.
Wang, Exploiting excess masking for audio compression, Sep. 2-5, 1999, 4 pgs.
Wang, Schemes for re-compressing mp3 audio bitstreams, Nov. 30-Dec. 3, 2001, 5 pgs.
Wang, Selected advances in audio compression and compressed domain processing, Aug. 2001, 68 pgs.
Wang, The impact of the relationship between MDCT and DFT on audio compression, Dec. 13-15, 2000, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP11833486.1, Apr. 24, 2014, 1 pg.
ActiveVideo Networks, Inc., International Search Report and Written Opinion, PCT/US2014/041430, Oct. 9, 2014, 9 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, Jul. 21, 2014, 3 pgs.
ActiveVideo Networks, Notice of Reasons for Rejection, JP2012-547318, Sep. 26, 2014, 7 pgs.
Avinity Systems B.V., Final Office Action, JP-2009-530298, Oct. 7, 2014, 8 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/686,548, filed Sep. 24, 2014, 13 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/438,617, filed Oct. 3, 2014, 19 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, filed Nov. 5, 2014, 26 pgs.
ActiveVideo, http://www.activevideo.com/, as printed in 2012, 1 pg.
ActiveVideo Networks Inc., International Preliminary Report on Patentability, PCT/US2013/020769, Jul. 24, 2014, 6 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/030773, Jul. 25, 2014, 8 pgs.
ActiveVideo Networks Inc., International Search Report and Written Opinion, PCT/US2014/041416, Aug. 27, 2014, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 13168509.1, 10 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 13168376.5, 8 pgs.
ActiveVideo Networks Inc., Extended EP Search Report, Application No. 12767642.7, 12 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 70(2) and 70a(2), EP10841764.3, Jun. 22, 2011, 1 pg.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP08713106.6, Jun. 26, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, EP09713486.0, Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2011258972, Apr. 4, 2013, 5 pgs.
ActiveVideo Networks Inc., Examination Report No. 1, AU2010339376, Apr. 30, 2014, 4 pgs.
ActiveVideo Networks Inc., Examination Report, App. No. EP11749946.7, Oct. 8, 2013, 6 pgs.
ActiveVideo Networks Inc., Summons to attend oral proceedings, Application No. EP09820936.4, Aug. 19, 2014, 4 pgs.
ActiveVideo Networks Inc., International Searching Authority, International Search Report—International application No. PCT/US2010/027724, dated Oct. 28, 2010, together with the Written Opinion of the International Searching Authority, 7 pages.
Adams, Jerry, “Glasfasernetz für Breitbanddienste in London”, NTZ Nachrichtentechnische Zeitschrift, vol. 40, No. 7, Jul. 1987, Berlin, DE, pp. 534-536, 5 pgs. No English Translation Found.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, Jan. 31, 2014, 10 pgs.
Avinity Systems B.V., Extended European Search Report, Application No. 12163712.8, 10 pgs.
Avinity Systems B.V., Communication pursuant to Article 94(3) EPC, EP 07834561.8, Apr. 8, 2010, 5 pgs.
Avinity Systems B.V., International Preliminary Report on Patentability, PCT/NL2007/000245, Feb. 19, 2009, 7 pgs.
Avinity Systems B.V., International Search Report and Written Opinion, PCT/NL2007/000245, Feb. 19, 2009, 18 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, Sep. 3, 2013, 4 pgs.
Avinity Systems B.V., Notice of Grounds of Rejection for Patent, JP 2009-530298, Sep. 25, 2012, 6 pgs.
Bird et al., “Customer Access to Broadband Services,” ISSLS 86—The International Symposium on Subscriber Loops and Services, Sep. 29, 1986, Tokyo, JP, 6 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 13/668,004, filed Jul. 16, 2014, 20 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, filed Mar. 10, 2014, 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, filed Dec. 23, 2013, 9 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/438,617, filed May 12, 2014, 17 pgs.
Brockmann, Final Office Action, U.S. Appl. No. 12/443,571, filed Mar. 7, 2014, 21 pgs.
Brockmann, Office Action, U.S. Appl. No. 12/443,571, filed Jun. 5, 2013, 18 pgs.
Chang, Shih-Fu, et al., “Manipulation and Compositing of MC-DCT Compressed Video,” IEEE Journal on Selected Areas in Communications, Jan. 1995, vol. 13, No. 1, 11 pgs. Best Copy Available.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, filed Jun. 5, 2014, 18 pgs.
Dahlby, Final Office Action, U.S. Appl. No. 12/651,203, filed Feb. 4, 2013, 18 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, filed Aug. 16, 2012, 18 pgs.
Dukes, Stephen D., “Photonics for cable television system design, Migrating to regional hubs and passive networks,” Communications Engineering and Design, May 1992, 4 pgs.
Ellis, et al., “INDAX: An Operational Interactive Cabletext System”, IEEE Journal on Selected Areas in Communications, vol. sac-1, No. 2, Feb. 1983, pp. 285-294.
European Patent Office, Supplementary European Search Report, Application No. EP 09 70 8211, dated Jan. 5, 2011, 6 pgs.
Frezza, W., “The Broadband Solution—Metropolitan CATV Networks,” Proceedings of Videotex '84, Apr. 1984, 15 pgs.
Gecsei, J., “Topology of Videotex Networks,” The Architecture of Videotex Systems, Chapter 6, 1983 by Prentice-Hall, Inc.
Gobi, et al., “ARIDEM—a multi-service broadband access demonstrator,” Ericsson Review No. 3, 1996, 7 pgs.
Gordon, Notice of Allowance, U.S. Appl. No. 12/008,697, filed Mar. 20, 2014, 10 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,722, filed Mar. 30, 2012, 16 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, filed Jun. 11, 2014, 14 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, filed Jul. 22, 2013, 7 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, filed Sep. 20, 2011, 8 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/035,236, filed Sep. 21, 2012, 9 pgs.
Gordon, Final Office Action, U.S. Appl. No. 12/008,697, filed Mar. 6, 2012, 48 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, filed Mar. 13, 2013, 9 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, filed Mar. 22, 2011, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, filed Mar. 28, 2012, 8 pgs.
Gordon, Office Action, U.S. Appl. No. 12/035,236, filed Dec. 16, 2013, 11 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, filed Aug. 1, 2013, 43 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,697, filed Aug. 4, 2011, 39 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, filed Oct. 11, 2011, 16 pgs.
Handley et al., “TCP Congestion Window Validation,” RFC 2861, Jun. 2000, Network Working Group, 22 pgs.
Henry et al. “Multidimensional Icons” ACM Transactions on Graphics, vol. 9, No. 1 Jan. 1990, 5 pgs.
Insight advertisement, “In two years this is going to be the most watched program on TV” On touch VCR programming, published not later than 2000, 10 pgs.
Isensee et al., “Focus Highlight for World Wide Web Frames,” Nov. 1, 1997, IBM Technical Disclosure Bulletin, vol. 40, No. 11, pp. 89-90.
ICTV, Inc., International Search Report/Written Opinion, PCT/US2008/000400, Jul. 14, 2009, 10 pgs.
Kato, Y., et al., “A Coding Control algorithm for Motion Picture Coding Accomplishing Optimal Assignment of Coding Distortion to Time and Space Domains,” Electronics and Communications in Japan, Part 1, vol. 72, No. 9, 1989, 11 pgs.
Koenen, Rob, “MPEG-4 Overview—Overview of the MPEG-4 Standard”, Internet Citation, Mar. 2001, http://mpeg.telecomitalialab.com/standards/mpeg-4/mpeg-4.htm, May 9, 2002, 74 pgs.
Konaka, M. et al., “Development of Sleeper Cabin Cold Storage Type Cooling System,” SAE International, The Engineering Society for Advancing Mobility Land Sea Air and Space, SAE 2000 World Congress, Detroit, Michigan, Mar. 6-9, 2000, 7 pgs.
Le Gall, Didier, “MPEG: A Video Compression Standard for Multimedia Applications”, Communications of the ACM, vol. 34, No. 4, Apr. 1991, New York, NY, 13 pgs.
Langenberg, E., “Integrating Entertainment and Voice on the Cable Network,” by Earl Langenberg, TeleWest International, and Ed Callahan, ANTEC.
Large, D., “Tapped Fiber vs. Fiber-Reinforced Coaxial CATV Systems”, IEEE LCS Magazine, Feb. 1990, 7 pgs. Best Copy Available.
Mesiya, M.F., “A Passive Optical/Coax Hybrid Network Architecture for Delivery of CATV, Telephony and Data Services,” 1993 NCTA Technical Papers, 7 pgs.
“MSDL Specification Version 1.1”, International Organisation for Standardisation / Organisation Internationale de Normalisation, ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, N1246, MPEG96/Mar. 1996, 101 pgs.
Noguchi, Yoshihiro, et al., “MPEG Video Compositing in the Compressed Domain,” IEEE International Symposium on Circuits and Systems, vol. 2, May 1, 1996, 4 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, filed Sep. 2, 2014, 8 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, filed May 14, 2014, 8 pgs.
Regis, Final Office Action, U.S. Appl. No. 13/273,803, filed Oct. 11, 2013, 23 pgs.
Regis, Office Action, U.S. Appl. No. 13/273,803, filed Mar. 27, 2013, 32 pgs.
Richardson, Ian E.G., “H.264 and MPEG-4 Video Compression, Video Coding for Next-Generation Multimedia,” John Wiley & Sons, US, 2003, ISBN: 0-470-84837-5, pp. 103-105, 149-152, and 164.
Rose, K., “Design of a Switched Broad-Band Communications Network for Interactive Services,” IEEE Transactions on Communications, vol. com-23, No. 1, Jan. 1975, 7 pgs.
Saadawi, Tarek N., “Distributed Switching for Data Transmission over Two-Way CATV”, IEEE Journal on Selected Areas in Communications, vol. Sac-3, No. 2, Mar. 1985, 7 pgs.
Schrock, “Proposal for a Hub Controlled Cable Television System Using Optical Fiber,” IEEE Transactions on Cable Television, vol. CATV-4, No. 2, Apr. 1979, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, filed Sep. 22, 2014, 5 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, filed Feb. 27, 2014, 14 pgs.
Sigmon, Final Office Action, U.S. Appl. No. 13/311,203, filed Sep. 13, 2013, 20 pgs.
Sigmon, Office Action, U.S. Appl. No. 13/311,203, filed May 10, 2013, 21 pgs.
Smith, Brian C., et al., “Algorithms for Manipulating Compressed Images,” IEEE Computer Graphics and Applications, vol. 13, No. 5, Sep. 1, 1993, 9 pgs.
Smith, J. et al., “Transcoding Internet Content for Heterogeneous Client Devices” Circuits and Systems, 1998. ISCAS '98. Proceedings of the 1998 IEEE International Symposium on Monterey, CA, USA May 31-Jun. 3, 1998, New York, NY, USA,IEEE, US, May 31, 1998, 4 pgs.
Stoll, G. et al., “GMF4iTV: Neue Wege zur Interaktivität mit bewegten Objekten beim digitalen Fernsehen,” FKT Fernseh und Kinotechnik, Fachverlag Schiele & Schon GmbH, Berlin, DE, vol. 60, No. 4, Jan. 1, 2006, ISSN: 1430-9947, 9 pgs. No English Translation Found.
Tamitani et al., “An Encoder/Decoder Chip Set for the MPEG Video Standard,” 1992 IEEE International Conference on Acoustics, vol. 5, Mar. 1992, San Francisco, CA, 4 pgs.
Terry, Jack, “Alternative Technologies and Delivery Systems for Broadband ISDN Access”, IEEE Communications Magazine, Aug. 1992, 7 pgs.
Thompson, Jack, “DTMF-TV, The Most Economical Approach to Interactive TV,” GNOSTECH Incorporated, NCF'95 Session T-38-C, 8 pgs.
Thompson, John W. Jr., “The Awakening 3.0: PCs, TSBs, or DTMF-TV—Which Telecomputer Architecture is Right for the Next Generation's Public Network?,” GNOSTECH Incorporated, 1995 The National Academy of Sciences, downloaded from the Unpredictable Certainty: White Papers, http://www.nap.edu/catalog/6062.html, pp. 546-552.
Tobagi, Fouad A., “Multiaccess Protocols in Packet Communication Systems,” IEEE Transactions on Communications, Vol. Com-28, No. 4, Apr. 1980, 21 pgs.
Toms, N., “An Integrated Network Using Fiber Optics (Info) for the Distribution of Video, Data, and Telephone in Rural Areas,” IEEE Transactions on Communication, vol. Com-26, No. 7, Jul. 1978, 9 pgs.
Trott, A., et al., “An Enhanced Cost Effective Line Shuffle Scrambling System with Secure Conditional Access Authorization,” 1993 NCTA Technical Papers, 11 pgs.
Jurgen, Two-way applications for cable television systems in the '70s, IEEE Spectrum, Nov. 1971, 16 pgs.
van Beek, P., “Delay-Constrained Rate Adaptation for Robust Video Transmission over Home Networks,” Image Processing, 2005, ICIP 2005, IEEE International Conference, Sep. 2005, vol. 2, No. 11, 4 pgs.
Van der Star, Jack A. M., “Video on Demand Without Compression: A Review of the Business Model, Regulations and Future Implication,” Proceedings of PTC'93, 15th Annual Conference, 12 pgs.
Welzenbach et al., “The Application of Optical Systems for Cable TV,” AEG-Telefunken, Backnang, Federal Republic of Germany, ISSLS Sep. 15-19, 1980, Proceedings IEEE Cat. No. 80 CH1565-1, 7 pgs.
Yum, T.S.P., “Hierarchical Distribution of Video with Dynamic Port Allocation,” IEEE Transactions on Communications, vol. 39, No. 8, Aug. 1, 1991, XP000264287, 7 pgs.
ActiveVideo Networks, Inc., International Preliminary Report on Patentability, PCT/US2013/036182, Oct. 14, 2014, 9 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP08713106.6, Jun. 25, 2014, 5 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rule 94(3), EP09713486.0, Apr. 14, 2014, 6 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Rules 161(2) & 162 EPC, EP13775121.0, Jan. 20, 2015, 3 pgs.
ActiveVideo Networks Inc., Certificate of Patent, JP5675765, Jan. 9, 2015, 3 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 13/445,104, filed Dec. 24, 2014, 14 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/668,004, filed Feb. 26, 2015, 17 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/686,548, filed Jan. 5, 2015, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, filed Dec. 26, 2014, 12 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/911,948, filed Jan. 29, 2015, 11 pgs.
Dahlby, Office Action, U.S. Appl. No. 12/651,203, filed Dec. 3, 2014, 19 pgs.
Gordon, Office Action, U.S. Appl. No. 12/008,722, filed Nov. 28, 2014, 18 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, filed Nov. 18, 2014, 9 pgs.
Regis, Notice of Allowance, U.S. Appl. No. 13/273,803, filed Mar. 2, 2015, 8 pgs.
Sigmon, Notice of Allowance, U.S. Appl. No. 13/311,203, filed Dec. 19, 2014, 5 pgs.
TAG Networks Inc., Decision to Grant a Patent, JP 2008-506474, Oct. 4, 2013, 5 pgs.
ActiveVideo Networks Inc., Decision to refuse a European patent application (Art. 97(2) EPC), EP09820936.4, Feb. 20, 2015, 4 pgs.
ActiveVideo Networks Inc., Communication Pursuant to Article 94(3) EPC, 10754084.1, Feb. 10, 2015, 12 pgs.
ActiveVideo Networks Inc., Communication under Rule 71(3) EPC, Intention to Grant, EP08713106.6, Feb. 19, 2015, 12 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2014-100460, Jan. 15, 2015, 6 pgs.
ActiveVideo Networks Inc., Notice of Reasons for Rejection, JP2013-509016, Dec. 24, 2014 (Received Jan. 14, 2015), 11 pgs.
Brockmann, Office Action, U.S. Appl. No. 13/737,097, filed Mar. 16, 2015, 18 pgs.
Brockmann, Notice of Allowance, U.S. Appl. No. 14/298,796, filed Mar. 18, 2015, 11 pgs.
Craig, Decision on Appeal—Reversed—, U.S. Appl. No. 11/178,177, filed Feb. 25, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,177, filed Mar. 5, 2015, 7 pgs.
Craig, Notice of Allowance, U.S. Appl. No. 11/178,181, filed Feb. 13, 2015, 8 pgs.
Related Publications (1)
Number Date Country
20080170622 A1 Jul 2008 US
Provisional Applications (3)
Number Date Country
60884773 Jan 2007 US
60884744 Jan 2007 US
60884772 Jan 2007 US