Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device

Information

  • Patent Grant
  • Patent Number
    10,945,006
  • Date Filed
    Tuesday, July 23, 2019
  • Date Issued
    Tuesday, March 9, 2021
Abstract
Techniques and systems are provided for identifying a video segment displayed on a screen of a remote television system, and providing an option to switch to an alternative or related version of the video program that includes the video segment. For example, video segments displayed on a screen of a television system can be identified, and contextually-targeted content or contextually-related alternative content can be provided to a television system based on the identification of a video segment. The alternative or related version of the video program can include the currently displayed program in an on-demand format that can be viewed off-line and can be started over from a beginning portion of the program.
Description
FIELD

The present disclosure relates generally to identifying video content being displayed by a television system and providing options related to the video content. For example, various techniques and systems are provided for identifying a video segment being displayed, and providing an option to switch to an alternative or related version of the video program that includes the video segment.


BACKGROUND

Advancements in fiber-optic and digital transmission technology have enabled the television programming distribution industry to rapidly increase channel capacity and to provide a degree of interactive television (ITV) services, due in large part to the industry combining the increased carriage capacity of its networks with the processing power of contemporary consumer computer systems, such as smart televisions (TVs), set-top boxes (STBs), or other devices.


SUMMARY

Certain aspects and features of the present disclosure relate to identifying a video segment being displayed, and providing an option to switch to an alternative or related version of the video program that includes the video segment. For example, techniques and systems are described for identifying video segments displayed on a screen of a television system, and for providing contextually-targeted content or contextually-related alternative content to a television system based on the identification of a video segment.


In some examples, a video segment of a currently viewed program (called the original program) can be identified by deriving data from television signals and comparing the derived data to data stored in a reference database. In some cases, this feature can be used to extract a reaction of a viewer (e.g., changing the channel, or the like) to a specific video segment and to report the extracted information as statistical data-metrics to interested parties. In some examples, contextually-targeted or contextually-related content can be provided to the television system, presenting the viewer with an option to view the currently displayed program off-line in an on-demand format. The viewer can thus start over viewing a program that had already begun and for which the viewer perhaps missed a segment. In some cases, upon selection of the option to start over, the television system can present the viewer with an option to select a different format for the program (e.g., a higher resolution version of the program, a 3D video version of said program, or the like). In some instances, fewer third party content items (e.g., television commercials) can be provided with the on-demand program, and the viewer may be informed of the reduced third party content. In some examples, the system can substitute certain third party content that is estimated to be of interest to the demographics of the viewer in place of certain third party content that is part of the original program.


According to at least one example, a matching server may be provided for identifying video content being displayed by a television system. The matching server includes one or more processors. The matching server further includes a non-transitory machine-readable storage medium containing instructions which, when executed by the one or more processors, cause the one or more processors to perform operations including: receiving video data of a video segment being displayed by the television system, wherein the video segment includes at least a portion of a video program; identifying the video segment being displayed by the television system, wherein identifying the video segment includes comparing the video data of the video segment with stored video data to find a closest match; determining contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and providing the contextual content to the television system.


In another example, a computer-implemented method is provided that includes: receiving, by a computing device, video data of a video segment being displayed by a television system, wherein the video segment includes at least a portion of a video program; identifying the video segment being displayed by the television system, wherein identifying the video segment includes comparing the video data of the video segment with stored video data to find a closest match; determining contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and providing the contextual content to the television system.


In another example, a computer-program product tangibly embodied in a non-transitory machine-readable storage medium of a computing device may be provided. The computer-program product may include instructions configured to cause one or more data processors to: receive video data of a video segment being displayed by a television system, wherein the video segment includes at least a portion of a video program; identify the video segment being displayed by the television system, wherein identifying the video segment includes comparing the video data of the video segment with stored video data to find a closest match; determine contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and provide the contextual content to the television system.


According to at least one other example, a television system may be provided that includes one or more processors. The television system further includes a non-transitory machine-readable storage medium containing instructions which, when executed by the one or more processors, cause the one or more processors to perform operations including: displaying a video segment; transmitting video data of the video segment being displayed, wherein the video segment includes at least a portion of a video program, wherein the video data is addressed to a matching server, and wherein the video data of the video segment is compared with stored video data to identify the video segment being displayed; receiving contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and displaying the contextual content on a screen.


In another example, a computer-implemented method is provided that includes: displaying, by a television system, a video segment; transmitting video data of the video segment being displayed, wherein the video segment includes at least a portion of a video program, wherein the video data is addressed to a matching server, and wherein the video data of the video segment is compared with stored video data to identify the video segment being displayed; receiving contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and displaying the contextual content on a screen.


In another example, a computer-program product tangibly embodied in a non-transitory machine-readable storage medium of a television system may be provided. The computer-program product may include instructions configured to cause one or more data processors to: display a video segment; transmit video data of the video segment being displayed, wherein the video segment includes at least a portion of a video program, wherein the video data is addressed to a matching server, and wherein the video data of the video segment is compared with stored video data to identify the video segment being displayed; receive contextual content, wherein the contextual content is contextually related to the identified video segment, and wherein the contextual content includes an option to switch to an alternative or related version of the video program from a video server; and display the contextual content on a screen.


In some embodiments, selection of the option to switch to the alternative or related version of the video program causes the television system to receive a version of the video program starting from the beginning of the video program.


In some embodiments, the contextual content is displayed by the television system while the video program is displayed on a video screen of the television system. In some embodiments, the contextual content includes a graphical interface with the option to switch to the alternative or related version of the video program.


In some embodiments, the video server includes a video-on-demand server.


In some embodiments, the contextual content further includes an option to select from a plurality of video program choices, wherein the plurality of video program choices includes options for video control capability, display format of the video content, or reduced commercial messaging.


In some embodiments, the television system requests the alternative or related version of the video content from the video server when the option to switch to the alternative or related version of the video program is selected.


In some embodiments, the television system is connected with a third party content server when the option to switch to the alternative or related version of the video program is selected, wherein the television system connects with the third party content server to obtain third party content at a specified time interval of the alternative or related version of the video program.


This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, any or all drawings, and each claim.


The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Illustrative embodiments of the present invention are described in detail below with reference to the following drawing figures:



FIG. 1 is a block diagram of an example of an interactive television environment.



FIG. 2A is a block diagram of another example of an interactive television environment.



FIG. 2B is a block diagram of another example of an interactive television environment.



FIG. 2C is a block diagram of an example of an interactive television environment with an alternative viewing device.



FIG. 3 is a block diagram of an example of an interactive television environment with alternative content options.



FIG. 4 is a block diagram of another example of an interactive television environment with alternative content options.



FIG. 5 is a flowchart illustrating an embodiment of a process of identifying video content being displayed and providing related content.



FIG. 6 is a flowchart illustrating another embodiment of a process of identifying video content being displayed and providing related content.



FIG. 7 is a flowchart illustrating an embodiment of a process of identifying video content being displayed.



FIG. 8 is a flowchart illustrating another embodiment of a process of providing information used to identify video content being displayed.





DETAILED DESCRIPTION

In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.


The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.


Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.


Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.


The term “machine-readable storage medium” or “computer-readable storage medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A machine-readable storage medium or computer-readable storage medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted using any suitable means including memory sharing, message passing, token passing, network transmission, etc.


Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.


Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system.


As described in further detail below, certain aspects and features of the present disclosure relate to identifying a video segment being displayed, and providing an option to switch to an alternative or related version of the video program that includes the video segment. For example, techniques and systems are described for identifying video segments displayed on a screen of a television system, and for providing contextually-targeted content or contextually-related alternative content to a television system based on the identification of a video segment.


The technology of interactive television (ITV) has been developed in an attempt to enable TV Systems to serve as a two-way information distribution mechanism in a manner approximating certain aspects of the World Wide Web. Features of an ITV service accommodate a wide variety of marketing, entertainment, and educational capabilities such as allowing a user to get more information about a product or service, order said product or service, compete against contestants in a game show, and the like. In many instances, the interactive functionality is controlled by a set-top box (STB) that executes an interactive program created for inclusion with the TV broadcast. The interactive functionality can be displayed on the TV's screen and may include icons or menus to allow a user to make selections or otherwise interact with the contextually-related content via the TV's remote control or a keyboard linked to the TV.


In some cases, the interactive content can be incorporated into the broadcast stream (also referred to herein as the “channel/network feed”). In the present disclosure, the term “broadcast stream” refers to the broadcast signal received by a television, regardless of the method of reception of that signal, e.g., by TV antenna, satellite dish, cable TV connection, Internet delivery or any other method of signal transmission. One technique for incorporating interactive content into a broadcast stream is the insertion of timed data triggers into the broadcast stream for a particular program. Program content in which triggers have been inserted is sometimes referred to as enhanced program content or as an enhanced TV program. Triggers may be used to alert a STB or a processor in a Smart TV that interactive content is available that is associated with said television programming. The trigger may contain information about available content as well as the memory location of the content. A trigger may also contain user-perceptible text that is displayed on the screen, for example, at the bottom of the screen, which may prompt the user to perform some action or choose amongst a plurality of options.


Connected TVs are TVs that are connected to the Internet via a viewer's home network (wired and/or wireless). Connected TVs can run an application platform such as Google's Android, or other proprietary platforms enabling interactive, smartphone or tablet-like applications to run on these TVs. The basic common features of such connected TV platforms are: (1) a connection to the Internet; and (2) the ability to run software applications with graphics from said applications overlaid on or occupying all of the TV display. Many models of TVs with this capability have been in the market in large quantities since 2009 and virtually all new TVs now support these features.


Currently, few TVs (Internet connected or otherwise) have access to metadata about what the viewer is watching at the moment, or about who the viewer is, from the perspective of providing that viewer with programming or commercial opportunities customized for them. While some information about a content offering is available in bits and pieces in the content distribution pipeline, by the time a television show reaches the viewer's screen over typical legacy distribution systems, such as cable and satellite TV, all information other than video and audio has been lost.


Many efforts have been underway for well over a decade to encode the identification information, also known as metadata, into entertainment and third party content in the form of watermarks applied to the audio or video portions of said programming in a way that can survive compression and decompression. Many commercial services and products are available for applying watermarking information to audio and video content, but no means or method has been adopted for widespread use. Even once these watermarking means are standardized, reliable, and generally available, they are forecast to have the ability to identify a point in a television segment that is being displayed on a certain TV system to a time resolution of tens of seconds. Said time resolution is far coarser than is provided by certain embodiments of the current disclosure.


As a result, in legacy distribution systems, the TV system does not have the means to know what TV channel or show the viewer is watching at the present moment nor what the show is about. The channel and show information as seen on a television screen by a viewer is currently overlaid by the STB from sometimes incomplete information. This barrier is the result of the fundamental structure of the TV content distribution industry.


Despite cable system operators' STBs and Smart TVs supporting ever more-sophisticated on-screen electronic program guides (EPGs), rather than searching for a specific program or program type, many TV viewers still “channel surf” in the sense of randomly or semi-randomly browsing currently-available programming, either on the EPG or by physically changing channels. If the programming being viewed was not already captured by a user's DVR, then joining already playing content, particularly when it is in a longer format such as movies, sporting events, or reality programs, can be less than satisfying. Meanwhile, finding that same movie or long-format program playing from the beginning at another time or on another channel is a complex process or may not even be possible.


To increase the quality of the viewing experience by mitigating the problem of wanting to view a video program that is already in progress, what is then needed is a system that enables a simple user interface (UI) gesture by the viewer such as, by way of example only, clicking on an on-screen graphic of a button or the like conveying a command to “Start Over” or the like. This gesture would cause a request to be communicated to a central server means to automatically switch the viewer from, for example, a live television program to a customized Video-on-Demand (VoD) service which restarts the displayed movie, sporting event, or other programming content from the beginning or continues from the currently viewed time in the program. Restarting of the content can be referred to as an alternative mode. The restarting of the movie can occur immediately or can be delayed. Once viewing in the alternative mode, the user is free to rewind or pause and the like. Other options could include, by way of example and without limitation, offering the just-selected VoD programming in a higher than normal resolution or in a 3D format, or other enhanced state, with fewer or no commercial breaks, or with specific commercials inserted that are more relevant to the demographics or previous shopping or browsing behavior of said viewer. In yet another embodiment, said programming from the VoD source could be viewed by said user on another viewing device. This would make the consumption of content and the personalization of commercial messages closer to the means by which video content and commercial offerings are already available on the Internet.


Embodiments of the present disclosure are directed to systems and methods for identifying video segments as they are displayed on a screen of a television (TV) system (e.g., in a user's or consumer's home). In particular, the resulting data identifying the video segment currently displaying on the TV system can be used to enable the capture and appropriate response to a TV viewer's reaction, such as requesting that the programming be restarted from its beginning, therefore enabling the seamless switching of a viewer from a conventional, real-time broadcast environment delivered over the cable system's network to a custom-configured, video on demand (VoD) product delivered over an Internet connection or over a cable TV network's managed channels. The VoD programming could also include the substitution of more relevant third party content (e.g., commercial or advertisement) messages, among other options.


As used herein, the term “Television System” includes, but is not limited to, a television device such as a television receiver or an Internet-connected TV (also known as a “Smart TV”). Equipment can be incorporated in, or co-located with, the television system, such as a set-top box (STB), a digital video disc (DVD) player or a digital video recorder (DVR). As used herein, the term “television signals” includes signals representing video and audio data which are broadcast together (with or without metadata) to provide the picture and sound components of a television program or television commercial. As used herein, the term “metadata” refers to information (data) about or relating to the video/audio data in television signals.


In accordance with some embodiments, the video segment is identified by sampling (e.g., at fixed intervals, such as five times per second, ten times per second, fifteen times per second, twenty times a second, or any other suitable interval) a subset of the pixel data being displayed on the screen, or associated audio data, and then finding similar pixel or audio data in a content database. In accordance with other embodiments, the video segment is identified by extracting audio or image data associated with such video segment and then finding similar audio or image data in a content database. In accordance with alternative embodiments, the video segment is identified by processing the audio data associated with such video segment using known automated speech recognition techniques then searching a keyword database for matching words and then further processing said matched words in a context-sensitive natural language processing means. In accordance with further alternative embodiments, the video segment is identified by processing metadata associated with such video segment.
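

By way of example only and without limitation, the sampling-and-matching approach can be sketched as follows in Python. The patch-grid cue, the Euclidean distance, and the match threshold here are illustrative assumptions, not the disclosed matching algorithm.

```python
# Minimal sketch of cue sampling and nearest-match lookup; the patch grid,
# distance metric, and threshold are illustrative assumptions.
def sample_cue(frame, grid=8):
    """Reduce a grayscale frame (a list of pixel rows) to a grid of mean values."""
    h, w = len(frame), len(frame[0])
    cue = []
    for gy in range(grid):
        for gx in range(grid):
            patch = [frame[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            cue.append(sum(patch) / len(patch))
    return cue

def nearest_match(cue, reference_db, max_distance=50.0):
    """Find the stored cue closest to `cue`; each reference_db entry carries
    a segment identifier, an offset into the program, and a stored cue."""
    best, best_dist = None, float("inf")
    for entry in reference_db:
        dist = sum((a - b) ** 2 for a, b in zip(cue, entry["cue"])) ** 0.5
        if dist < best_dist:
            best, best_dist = entry, dist
    return best if best_dist <= max_distance else None
```

In a client, `sample_cue` would run at the fixed sampling interval (e.g., ten times per second) against the TV's frame buffer, with the resulting cues sent to the matching server.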


In some embodiments, systems and methods are described for providing contextually targeted content (also referred to as contextual content) to an interactive television system. The contextual targeting is based on identification of the video segment being displayed, and is also based on a determination concerning the playing time or offset time of the particular portion of the video segment being currently displayed. The terms “playing time” and “offset time” will be used interchangeably herein and refer to a time which is offset from a fixed point in time, such as the starting time of a particular television program or commercial.
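

As a worked illustration of the offset-time concept, and under the assumption (made here for illustration only) that each reference entry stores its offset from the start of the program, the playing time of the currently displayed portion can be estimated as the matched offset plus the capture-to-match latency:

```python
import time

# Hedged sketch: estimate the playing time of the currently displayed frame
# as the matched reference offset plus the capture-to-match latency.
# Field names are assumptions for illustration.
def estimate_playing_time(match, capture_time):
    latency = time.monotonic() - capture_time  # seconds since the frame was sampled
    return match["offset_s"] + latency

capture_time = time.monotonic()
match = {"segment_id": "program-123", "offset_s": 1432.6}  # hypothetical match
print(f"{match['segment_id']} at ~{estimate_playing_time(match, capture_time):.1f}s")
```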


In some embodiments, systems and methods are described that detect what is playing on a connected TV, determine the subject matter of what is being played, and interact with the television system (and viewer) accordingly. In particular, the technology disclosed herein overcomes the limitation of interactive TVs being responsive only to predetermined contextually related content from a central server means via the Internet, and enables novel features including the ability to provide instant access to video-on-demand versions of content, to provide the user with the option to view video programming in higher resolutions or in 3D formats if available, and to restart a program from its beginning and/or to fast forward, pause, and rewind said programming. The systems and methods also enable having some or all third party messages included in the now on-demand programming customized, by way of example only and without limitation, with respect to the viewer's location, demographic group, or shopping history, or having the commercials reduced in number or length or eliminated altogether to support various business models.


In accordance with some embodiments, the video segment is identified and the offset time is determined by sampling a subset of the pixel data and/or associated audio data as the pixel data is displayed on the screen or the audio data is played through the audio system of the TV, and then finding similar pixel or audio data in a content database. In accordance with other embodiments, the video segment is identified and the offset time is determined by extracting audio or image data associated with said video segment and then finding similar audio or image data in a content database. In accordance with alternative embodiments, the video segment is identified and the offset time is determined by processing the audio data associated with such video segment using known automated speech recognition techniques. In accordance with further alternative embodiments, the video segment is identified and the offset time is determined by processing metadata associated with such video segment.


As will be described in more detail below, the software for identifying video segments being viewed on a connected TV and, optionally, determining offset times, can reside in a non-transitory medium of the computing means of a television system of which the connected TV is a component. In accordance with alternative embodiments, one part of the software for identifying video segments resides on the television system and another part resides on a remote computer server means connected to the television system via the Internet. Other aspects of the invention are disclosed below.



FIG. 1 is a block diagram of an example of an interactive television environment 100 that can perform automated content recognition to identify video content being displayed by a television system (a viewing event), and can provide content that is contextually related to the identified video content. The interactive television environment 100 includes an interactive television system 101 that can be part of a smart TV and that is connected to a matching server 107 that processes video data of one or more video segments. One of ordinary skill in the art will understand that the matching server 107 can include one server, or can include multiple servers. A server can include an actual machine running matching algorithms, or can include one or more virtual machines running matching algorithms.


The video data of the one or more video segments can be used to generate video fingerprints 103. For example, the video fingerprints 103 are generated by processing video data from a video frame buffer of the smart TV, and sending the video data to the matching server system 107. The video segment recognition system 110 (or channel recognition system 110) receives the video data or samples (which can be referred to as cues) from the smart TV and compares the cues to a reference database 112 of video samples (or video data). In some examples, the video data or video samples can include pixels of video frames. The video samples in the reference database 112 can be received from the television program ingest engine 120.


Successful matches of video segments against the video samples 112 trigger the signaling of identified video segment information and respective client identification to the contextual targeting system 111 for further processing of the viewing event. The further processing can include sending a trigger to the respective client TV system. The trigger invokes one or more software applications resident in the client TV system (e.g., a processor or machine-readable storage medium), such as a context-sensitive or context-targeted application. The one or more software applications can then present information to the viewer in the form of a graphical interface, such as an on-screen graphical interface in a window displayed over the video content being displayed on the TV system. For example, the information can be displayed by the TV system while the video program is displayed on a video screen of the television system. Various information can be presented by the graphical interface. For example, the information may substitute an item of third party content for a third party content item (e.g., a television commercial, or other third party content) currently being displayed on the TV system. In another example, the information may include additional information about a currently displayed TV program. In another example, the information may include an option to switch to an alternative or related version of the video program from a video server. For instance, the option may include an offer to watch a currently displayed video program from the beginning by means of a video-on-demand server.
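

By way of illustration only, a trigger from the contextual targeting system 111 might carry a payload along the following lines; the field names, overlay stub, and URL are assumptions rather than a disclosed wire format.

```python
# Illustrative trigger payload; field names are assumptions, not the
# disclosed wire format.
trigger = {
    "client_id": "tv-8842",
    "segment_id": "program-123",
    "offset_s": 1432.6,
    "action": "offer_start_over",
    "overlay_text": "Start this program over from the beginning?",
    "vod_url": "https://vod.example.com/program-123",  # hypothetical
}

def show_overlay(text):
    """Stub for the TV's graphics layer; a real client would draw a window
    over the video content currently being displayed."""
    print(f"[overlay] {text}")

def on_trigger(t):
    """Client-side handler that launches a context-targeted application."""
    if t["action"] == "offer_start_over":
        show_overlay(t["overlay_text"])

on_trigger(trigger)
```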



FIG. 2A is a block diagram of another example of an interactive television environment 200A. The interactive television environment 200A includes an example configuration of the TV system 101 that is connected to the automatic content recognition-based matching server 107. The TV system 101 is supplied with content by a Video-on-Demand (VoD) content server 211 and a third party content server 214, both of which are also in two-way communication with the client TV system 101 over the Internet using, for example, communication links 202a and 202b. The third party content server 214 can provide third party content, such as one or more items of commercial content. In one instance of the invention, the user is presented with a graphical interface (e.g., a window, a banner overlay, or the like) on a TV display or screen of the TV system 101. The graphical interface can present information that is context dependent upon the video segment as detected by the matching server 107. For example, once the matching server 107 finds a closest match between a video segment being displayed and a video segment in the content database, a trigger causing the graphical interface to be presented can be sent from the matching server 107 to the TV system 101. If the video segment is also available as a video-on-demand product from the VoD content server 211, the user is offered the option to “start over” and view a version of the program being viewed from the beginning, which will then be under full VoD control.


When the trigger is received and a user selects the option to receive an alternative or related version of the program (e.g., to start the program over, or to select a similar program in a different format), the TV system 101 can establish one or more communication links with the VoD server 211 (or the VoD server 312 described below). The communication links can include connections 202a, 202b, 211a, and 211b. In one example, a communication link can include one or more Internet or web-based communication links, such as a TCP link to set up and control the communication channel and/or a UDP link for streaming content from VoD server 211 or 312 to the TV system 101. The TV system 101 can communicate directly with the VoD server 211 or 312, and can obtain the alternative or related version of the program without being required to go through a central server (e.g., a cable provider, the matching server 107, or other server). Accordingly, an Internet-based communication link is established between the TV system 101 and the VoD server 211 or 312 instead of through a cable provider head-end, allowing web-based rules and traffic monitoring to be used. The VoD server 211 or 312 and the third party server 214 can utilize information provided in the direct Internet-based communications from the TV system 101 to select contextually related third party content or other information that is known to be of interest to the user of the TV system 101 (e.g., a viewer of content presented by the TV system 101). For example, the VoD server 211 or 312 and the third party server 214 can take advantage of IP traffic sent from the TV system 101 to collect a cookie pool or web history (e.g., browsing history of website usage by the user) of the user. In one instance, the TV system's IP address can be used to obtain the cookie pool, and the third party server 214 can provide third party content to the VoD server 211 that is targeted based on the cookies.
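

A minimal sketch of the two-link pattern described above, assuming a line-oriented control protocol over TCP and raw UDP media packets (both placeholders; the disclosure does not specify a wire format):

```python
import socket

# Placeholder endpoint values for the sketch; not real addresses.
VOD_HOST, VOD_TCP_PORT, LOCAL_UDP_PORT = "vod.example.com", 8554, 5004

def start_vod_session(program_id):
    """Open the TCP control link and request the program from its start."""
    ctrl = socket.create_connection((VOD_HOST, VOD_TCP_PORT))
    ctrl.sendall(f"PLAY {program_id} FROM 0 UDP {LOCAL_UDP_PORT}\n".encode())
    return ctrl  # kept open for later control commands

def decode_and_display(packet):
    """Stub: hand the media payload to the TV's video pipeline."""

def receive_stream():
    """Receive the media stream on the UDP link."""
    media = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    media.bind(("", LOCAL_UDP_PORT))
    while True:
        packet, _ = media.recvfrom(65536)
        decode_and_display(packet)
```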


By using automated content recognition to identify a portion of original content that is being viewed at a particular point in time, the identification of viewed original content is disconnected from the provider of content. As a result, any content being viewed can be identified, including over-the-top content, cable-provided content, or other content that provides video programs to a viewer. For example, regardless of the source of the content (e.g., from a cable provider, a streaming service, or the like), the current program being viewed can be identified, and the TV system 101 can obtain a related program from the VoD server 211 or 312 without involving the provider or the source of the original content.


In some embodiments, the application processor 102 can detect remote control commands performed by the user through a remote control. The remote control commands can include any suitable command, such as a command to start a video segment from the beginning, a rewind command, a fast-forward command, a pause command, or other suitable functions or commands. The application processor 102 can communicate the commands through communication link 211a to control the VoD server 211. In other embodiments, the VoD commands can be processed by the matching server 107 and relayed on behalf of the respective user to the VoD content server 211. In yet a further embodiment, a third party server 214 can be instructed to insert third party content (e.g., television commercial) in coordination with VoD content server 211 in order to substitute for third party content items that were part of the video program when originally broadcast.
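

Continuing the previous sketch, remote-control commands detected by the application processor 102 could be relayed over the control link roughly as follows; the command vocabulary is an assumption made for illustration.

```python
# Hedged sketch: map remote-control keys to control-link commands sent over
# the TCP session from the previous sketch. The vocabulary is illustrative.
COMMANDS = {
    "PLAY": "RESUME",
    "PAUSE": "PAUSE",
    "REWIND": "SEEK -10",       # jump back ten seconds
    "FAST_FORWARD": "SEEK +10",
    "RESTART": "SEEK 0",        # start the program over from the beginning
}

def handle_remote_key(key, ctrl):
    """Relay a recognized remote-control key to the VoD control link."""
    if key in COMMANDS:
        ctrl.sendall((COMMANDS[key] + "\n").encode())
```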



FIG. 2B is a block diagram of another example of an interactive television environment 200B. The interactive television environment 200B includes an example configuration that is similar to that as depicted in FIG. 2A, with the addition of direct communication links 211c and 211d between VoD content server 211 and the third party content server 214. The direct communication links 211c and 211d enable a control system of the VoD content server 211 to perform seamless substitution of commercial messages or other third party content in an on-demand video stream at predetermined times in the video program provided by the VoD server 211.



FIG. 2C is a block diagram of another example of an interactive television environment 200C including an alternative viewing device. The interactive television environment 200C enables a user to view the VoD programming from the VoD server 211 on an alternative viewing device. The alternative viewing device can include another network connected smart TV system, a mobile device, a desktop computer, or the like. In one example, the VoD programming from the VoD server 211 can be viewed on the alternative viewing device after the user selects the graphic option to enable the “start over” function.



FIG. 3 is a block diagram of an example of an interactive television environment 300 with alternative content options. For example, the environment 300 is similar to the environment 200A, with the addition of a server 312 that provides alternative formats of the available content. In some examples, alternative formats that can be selected by the user include receiving the video content at a higher resolution (e.g., at a 4K or ultra-high-definition (UHD) resolution using a 4K television), in a 3D video mode, in a high definition format (in the event the original video content was displayed in standard definition format), or the like.



FIG. 4 is a block diagram of another example of an interactive television environment 400 with alternative content options. For example, the environment 400 is similar to the environment 300, with the addition of a third party content broker server 413. The third party content broker server 413 can provide a third party content brokering system allowing third-party networks or third party content providers to bid for, and upon winning, provide third party message content that may be substituted for the third party content messages in the video content from video server 211 or 312.



FIG. 5 is a flowchart summarizing steps performed by an example process implemented by any of the systems shown in the previous figures. At step 501, the server 107 can provide context-sensitive applications to the TV system 101 in advance of the occurrence of one or more context-sensitive events. At 502, the application processor of the TV system 101 receives the context-sensitive applications from the matching server 107 over connection 107b, and prepares the one or more services offered by the applications. A context-sensitive application (also referred to as a context-targeted application) contains embedded address information to access and enable video substitution from remote servers 211 and 312. The access occurs when the content matching system 107 identifies appropriate video content displaying on TV system 101 (based on matching video data to stored video data, as previously described). For example, at step 503, the video fingerprint client 103 can send one or more fingerprints to the matching server 107 over connection 107a. At step 504, the matching server 107 can detect whether there is an event match between a received fingerprint (a cue or video data of a video segment) and stored video data. In the event no match is found, the matching server 107 continues to process fingerprints until a match is found. If a match is found, the matching server 107 sends a trigger to the client application processor 102 over connection 107c. If more than one match is found for a given fingerprint or cue, the closest match that has the most similarities with a received fingerprint is used.
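

Steps 503 through 505 can be summarized as a matching loop of the following shape; the distance function and threshold are stand-ins for the actual matcher.

```python
# Sketch of steps 503-505: match incoming fingerprints until one clears a
# threshold; among multiple candidates, the closest is used.
def distance(a, b):
    """Euclidean distance between two cues (an illustrative metric)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_loop(fingerprints, reference_db, threshold, send_trigger):
    for cue in fingerprints:                           # step 503: cues arrive
        dist, best = min(
            ((distance(cue, e["cue"]), e) for e in reference_db),
            key=lambda t: t[0],                        # closest match wins
        )
        if dist <= threshold:                          # step 504: match found
            send_trigger(best["segment_id"], best["offset_s"])  # step 505
```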


Upon receiving the trigger, the application processor 102 at step 506 launches a context-sensitive application (e.g., application 202). At step 507, the context-sensitive application displays an overlay on a TV screen of the TV system that includes an option to switch to an alternative or related version of the video program from a video server (e.g., a “Start Over” button overlay). At step 508, a channel change event or an event timeout can be detected. If either event is detected, the process starts over at step 502. In the event neither event is detected, the context-sensitive application detects a selection of the option by a user at step 509. At step 510, the context-sensitive application establishes a connection to VoD server 211 or 312 using a URL of the VoD server 211 or 312. At step 511, the context-sensitive application instructs the client video display to switch from live TV (or other current source of displayed content) to the VoD service.


In some examples, the context-sensitive application also contains address information of a third party server 214 or third party content broker 413 from which to obtain substitute third party content (e.g., a commercial). Further, the context-sensitive application can also contain timing information of when in the playout of a substitute or alternative video program to substitute the third party content. For example, at 512, the context-sensitive application can read the timecode of the VoD stream from the VoD server 211 or 312. At 513, it can be determined if it is time to substitute third party content of the VoD content with alternative third party content (e.g., third party content that is targeted to the user based on cookie data or browsing history). At 514, if a time for substitution is detected, the context-sensitive application can connect to the third party server 214 through a URL of the third party server 214 to receive a third party content stream. At 515, the context-sensitive application instructs the client video display to switch from the VoD stream to the third party content stream. At 516, it is determined whether a third party content timeout has occurred. If not, the context-sensitive application continues to receive the third party content stream. If a timeout has occurred, at step 518, the context-sensitive application instructs the client video display to switch from the third party content stream back to the VoD stream. At step 519, the context-sensitive application informs the matching server 107 that the third party content display event has completed (e.g., over communication link 107d).
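

A compact sketch of the timing loop of steps 512 through 519, with the break schedule, third party content URL, and stream-switching callbacks supplied as hypothetical inputs:

```python
import time

# Sketch of steps 512-519: poll the VoD timecode, cut to the third party
# stream at each scheduled break, and cut back when the break times out.
def play_with_substitution(read_timecode, switch_to, breaks, notify_server):
    pending = sorted(breaks)  # [(start_s, ad_url, length_s), ...]
    while pending:
        start_s, ad_url, length_s = pending[0]
        if read_timecode() >= start_s:       # step 513: time to substitute
            switch_to(ad_url)                # steps 514-515: ad stream
            time.sleep(length_s)             # step 516: wait for timeout
            switch_to("vod")                 # step 518: back to VoD stream
            notify_server("third_party_content_complete")  # step 519
            pending.pop(0)
        else:
            time.sleep(0.5)                  # step 512: keep reading timecode
```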


At step 520, the context-sensitive application determines whether the VoD program has ended. If the VoD program has not ended, the process returns to step 512 where the context-sensitive application reads the timecode of the VoD stream. At step 521, if the VoD program has ended, the context-sensitive application instructs the TV system (or client TV) to return to live programming (or other current source of the content displayed before selection of the option to switch to the alternative or related version of the video program).


A program monitor 500 represents an independent software process that monitors user input to the TV system 101. Upon detection by the program monitor 500 of a channel change or other input change to the TV system 101, the program monitor 500 can cause a termination of the alternative video program initiated by user response to the context sensitive application (referred to as “Start Over”) supplied by the matching server 107. The program monitor 500 can include a daemon that runs as a background process, rather than being under the control of an interactive user.
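

The program monitor 500 could be realized as a background daemon thread along the following lines; the event source and event names are assumptions made for illustration.

```python
import threading

# Sketch of program monitor 500: a daemon thread that watches user input
# events and tears down the "Start Over" session on a channel or input change.
def start_program_monitor(user_events, terminate_start_over):
    def watch():
        for event in user_events:            # blocking iterator of input events
            if event in ("CHANNEL_CHANGE", "INPUT_CHANGE"):
                terminate_start_over()
                break
    monitor = threading.Thread(target=watch, daemon=True)
    monitor.start()                          # runs in the background, not
    return monitor                           # under interactive user control
```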



FIG. 6 is a flow chart summarizing steps performed by another example process in which the content matching system 107 detects first video content displaying on the TV system 101, and then transmits to the TV system 101 an application (e.g., a context-sensitive application) that executes within the processing means of the TV system 101 upon being received by the TV system 101. Execution of the application instructs the TV system to obtain video from VoD server 211 or the server 312.


At step 603, the video fingerprint client 103 can send one or more fingerprints to the matching server 107 over connection 107a. At step 604, the matching server 107 can detect whether there is an event match between a received fingerprint (a cue or video data of a video segment) and stored video data. In the event no match is found, the matching server 107 continues to process fingerprints until a match is found. If a match is found, the matching server 107 sends the application to the client application processor 102 over connection 107c. If more than one match is found for a given fingerprint or cue, the closest match that has the most similarities with a received fingerprint is used.


Upon receiving the application, the application processor 102 at step 606 launches the application (e.g., context-targeted application 202). At step 607, the application displays an overlay on a TV screen of the TV system that includes an option to switch to an alternative or related version of the video program from a video server (e.g., a “Start Over” button overlay). At step 608, a channel change event or an event timeout can be detected. If either event is detected, the process starts over at step 602. In the event neither event is detected, the application detects a selection of the option by a user at step 609. At step 610, the application establishes a connection to VoD server 211 or 312 using a URL of the VoD server 211 or 312. At step 611, the application instructs the client video display to switch from live TV (or other current source of displayed content) to the VoD service.


In some examples, as with the process of FIG. 5, the context-sensitive application could contain address and timing instructions of where and when to obtain alternative video information. In some examples, the context-sensitive application can obtain alternative video information and, by means of instructions internal to its processes, substitute third party content or provide alternative information at prescribed times relative to the program material being offered. For example, at 612, the VoD server 211 or 312 can read the timecode of the VoD stream sent to TV system 101. At 613, the VoD server 211 or 312 can determine if it is time to substitute third party content of the VoD content with alternative third party content (e.g., third party content that is targeted to the user based on cookie data or browsing history). At 615, if a time for substitution is detected, the VoD server 211 or 312 can obtain substitute third party content from third party server 214 or 413, and can substitute the substitute third party content into the VoD content stream that is transmitted to the context-sensitive application. At 616, the VoD server 211 or 312 can determine whether a third party content timeout has occurred. If not, the VoD server 211 or 312 continues to provide the VoD content with the third party content. If a timeout has occurred, at step 618, the VoD server 211 or 312 switches back from the third party content stream to the VoD content.
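

In contrast to FIG. 5, where the client switches streams, steps 612 through 618 keep the splice on the server side. A sketch follows, with the segment and break representations assumed for illustration only.

```python
# Sketch of steps 612-618: the VoD server splices substitute third party
# content into the outgoing stream itself; the client never switches sources.
def serve_program(segments, breaks, fetch_ad, send):
    """segments: [(duration_s, payload)]; breaks: sorted [(timecode_s, ad_id)]."""
    elapsed, pending = 0.0, list(breaks)
    for duration_s, payload in segments:
        while pending and pending[0][0] <= elapsed:  # step 613: break is due
            send(fetch_ad(pending.pop(0)[1]))        # step 615: splice the ad
        send(payload)                                # VoD content continues
        elapsed += duration_s                        # step 612: track timecode
```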


At step 620, the context-sensitive application determines whether the VoD program has ended. If the VoD program has not ended, the process returns to step 612 where the context-sensitive application reads the timecode of the VoD stream. At step 621, if the VoD program has ended, the context-sensitive application instructs the TV system (or client TV) to return to live programming (or other current source of the content displayed before selection of the option to switch to the alternative or related version of the video program).


As with the process of FIG. 5, the process shown in FIG. 6 includes a user input program monitor 600 that acts on user-initiated video input (e.g., changes in channel, video input to TV system 101, or the like) in order to terminate the context-sensitive application upon said changes.


The example embodiments disclosed in FIG. 1-FIG. 4 and the flow charts depicted in FIG. 5 and FIG. 6 provide systems and methods that extend the meaning of the previously used term “contextually targeted” beyond the display of simple graphics or short video segments related to the associated content, to include the complete substitution of the same or substantially enhanced forms of the selected content. These systems and methods replace a video segment currently being viewed in its entirety with a video that is in a VoD-like format, which enables the viewer to re-start the content from the beginning, and can include complete “virtual DVR” control including restarting, pausing, fast-forwarding, and rewinding functions. The systems and methods also provide an ability to view the content at higher resolutions, in 3D video format, or in another enhanced format, if available. The systems and methods also provide an ability to remove commercial messages from the replacement video segment and to substitute messages that are more closely related to anticipated interests of the viewer, for example, based on location, demographics, or previous shopping behavior. The anticipated interests of the user can be determined based on such information being stored in the form of compact data modules of the type often called “cookies” in the memory of a connected TV viewing system, such as a Smart TV. This enables the development and sale to sponsors or brokers of various premium, closely-targeted third party content products, or, in an alternative business model, the removal of some or all of the third party content messaging as a premium service for the viewer.


The methods and systems described herein employ a central automated content recognition system (e.g., matching server 107) to detect a video program currently displayed on the remote client TV system 101 (e.g., based on a displayed video segment). The matching server 107 (or in some embodiments the TV system 101 itself) can determine if the video program has an available on-demand copy, and the central system can cause the remote TV system 101 to display a graphical interface (e.g., a graphical overlay overlaid over the displayed video program) offering the option for the viewer to switch to an on-demand version of the video program, either from the beginning of the program or at the point where the viewer is currently watching.


In some embodiments, when a viewer accepts the option to view the video program in an on-demand mode, the system can provide DVR-like control of the video program with control originating from the client application operating in the television system 101 (smart-TV). The DVR-like control is conveyed to a server (e.g., VoD server 211, server 312, or other central server) that is responsive to the control commands and provides transport control of the video program to allow the viewer to change aspects of the video program (e.g., to rewind, pause, fast-forward, stop, or perform some other suitable function to the video program).


In some embodiments, the systems and methods described herein provide the user with the option for a higher-quality version of the detected video program that is being displayed. The viewer has the option to view the high-quality version of the program from the beginning or at any point in the program, including the currently viewed instant, as described above. The higher-quality program may be, for example, a new ultra-high-definition format such as the 4K ultra-HD programming, or other higher quality format than that currently being viewed by the viewer. It should be obvious to the skilled person that any enhanced variation of a video program is equivalent. Such variations may include 3D-video versions, or even higher definition versions than 4K ultra-HD.


In some embodiments, the systems and methods described herein can provide other viewing options for a version of the program being viewed. For example, further viewing options can include the provision of versions of a video program with fewer or even no commercials. In some examples, when a viewer accepts the option to view a video program in an on-demand mode (e.g., to start the program over from the beginning, to view a different quality video, or the like), the system may substitute third party content (e.g., a television commercial) at specific times during the program. The third party content can be provided by a server means separate from the on-demand source of said video. The availability of a third party content substitution opportunity is conveyed by a computer application operating within the viewer's network-enabled television system and in communication with a third party content delivery server. The network-enabled television system may also retain code modules or “cookies” with data on the user's demographics and previous viewing and purchase behavior to enable sponsors to more tightly target commercial messages. In some embodiments, the third party content substitution may take place between the VoD server (e.g., server 211 or 312) and the third party content server (e.g., third party server 214) without the involvement of the TV system 101.


In some embodiments, when a viewer accepts the option to view a video program in an on-demand mode (e.g., to start the program over from the beginning, to view a different quality video, or the like), the system can substitute third party content at specific times during the program, and the third party content may be provided by a server that is separate from the on-demand source of said video. The substitute third party content may be selected through a third party content bidding process, in which certain demographic or other information regarding the television viewer is provided to an auction system so that bidders can bid on the time slot as it becomes available in the course of viewing the on-demand video program. The availability of a third party content substitution opportunity is conveyed by a computer application operating within the viewer's network-enabled television system and in communication with a video third party content delivery means.
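For illustration only, the bidding step might be organized as in the sketch below, where limited viewer information is offered to each bidder and the highest bid fills the upcoming slot; the Bid structure and the make_bid call on each bidder are assumptions, not a specified auction protocol.

    from dataclasses import dataclass
    from typing import Iterable, Optional

    @dataclass
    class Bid:
        bidder: str
        price_cpm: float   # price per thousand impressions offered for the slot
        content_url: str   # the third party content the bidder wants to place

    def run_slot_auction(slot_length_s: int, viewer_segment: str,
                         bidders: Iterable) -> Optional[Bid]:
        """Offer the upcoming slot (with limited viewer information) to each
        bidder and return the highest bid, or None if nobody bids."""
        bids = [b.make_bid(slot_length_s=slot_length_s, viewer_segment=viewer_segment)
                for b in bidders]
        bids = [b for b in bids if b is not None]
        return max(bids, key=lambda b: b.price_cpm, default=None)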


In some embodiments, when a viewer accepts the option to view a video program in an on-demand mode (e.g., to start the program over from the beginning, to view a different quality video, or the like), the matching server 107 or the TV system 101 can present an option for the viewer to consume the on-demand content on a device other than the TV with which the viewer is currently engaged. Such devices may include mobile phones, tablets (e.g., the Apple iPad), or any other device separate from the television system on which the original content was viewed.


In some embodiments, when a viewer accepts the option to view a video program in an on-demand mode (e.g., to start the program over from the beginning, to view a different quality video, or the like), the matching server 107 or the TV system 101 can provide an option for the viewer to consume the on-demand content in a different form, such as with a different screen resolution, a longer version of the program, a version with additional scenes, a version with an alternate plot, or another form.


In some examples, a system, including a centrally located computer means, is provided for automatically identifying a video program currently being displayed on a remote television and providing contextually-targeted content to the remotely located television system while the live video segment is displayed on the video screen, where the contextually-targeted content comprises a visual graphic offering the viewer the option to select an alternative or related version of the content, delivered to said remote television system from a video-on-demand server or other means.


In some embodiments, the alternative or replacement version of the content may be viewer-selectable from a plurality of video program choices, with features including DVR control capability, the display format and resolution of the content, and reduced commercial messaging.


In some embodiments, commercial sponsors or commercial brokers may interact with the centrally located computer system to purchase targeted local third party content opportunities that may be available in the customized content stream.


In some embodiments, a computer system that is part of the remote television system is given instructions to address a central video server system when a user of said television chooses to accept the option to switch from the original to the alternative version of a television program.


In some embodiments, a computer system that is part of the remote television system is given instructions to address a central third party content server system when a user of said television chooses to accept the option to switch from the original to the alternative version of a television program. Said instructions further contain the run-time of said alternative television program at which the remote television system will address the third party content server, in order to replace an item of third party content at that time interval of the alternative program with an item of third party content demographically targeted to the viewer's interests.
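As a sketch only, the run-time instructions described above might be represented as a list of cues that the television system checks against the playback position; the SubstitutionCue structure and the ad_server and player objects are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SubstitutionCue:
        runtime_seconds: float  # run-time in the alternative program
        slot_id: str            # identifies the original item to be replaced

    def on_playback_tick(position: float, cues: List[SubstitutionCue],
                         ad_server, player) -> None:
        """When playback reaches a cued run-time, fetch a demographically
        targeted item and splice it in place of the original content."""
        for cue in cues:
            if abs(position - cue.runtime_seconds) < 0.5:  # within half a second
                item_url = ad_server.fetch_targeted_item(cue.slot_id)
                player.splice(item_url)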


In some embodiments, the information concerning the interests of the viewer is stored in a cookie or other data means in the memory of the processor contained in said television system.


In some embodiments, the information concerning the interests of the viewer is derived by a computer program means that records previous actions of the user, such as changing channels during ad breaks, requesting additional information when presented with the opportunity, or activity performed while using a personal computer or mobile device that interacts with Internet sources.
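For illustration only, such derivation might score ad categories from recorded actions as in the sketch below; the action names, categories, and weights are hypothetical assumptions rather than the disclosed method.

    from collections import Counter

    def derive_interests(events):
        """events: iterable of (action, ad_category) pairs, e.g.
        ("changed_channel", "insurance") or ("requested_info", "automotive")."""
        scores = Counter()
        for action, category in events:
            if action == "requested_info":
                scores[category] += 2   # explicit interest: strong positive signal
            elif action == "watched_full":
                scores[category] += 1   # sat through the ad: mild positive signal
            elif action == "changed_channel":
                scores[category] -= 1   # tuned away during the break: negative signal
        return [cat for cat, score in scores.most_common() if score > 0]

    # Example: derive_interests([("requested_info", "travel"),
    #                            ("changed_channel", "insurance")]) -> ["travel"]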


In some embodiments, the alternative or replacement version of the content may be viewed on a second video screen, such as a laptop computer, a tablet, or another television system.



FIG. 7 illustrates an embodiment of a process 700 for identifying video content being displayed by a television system. In some aspects, the process 700 may be performed by a computing device, such as the matching server 107.



FIG. 8 illustrates an embodiment of a process 800 for providing information for identifying video content being displayed by a television system. In some aspects, the process 800 may be performed by a computing device, such as the television system 101. The computing device may include a network-connected television (a smart TV), a mobile device, a mobile telephone, a smartphone, a desktop computer, a laptop computer, a tablet computer, or any other suitable computing device.


Process 700 and process 800 are each illustrated as a logical flow diagram, the operations of which represent a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.


Additionally, the process 700 and process 800 each may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The machine-readable storage medium may be non-transitory.


As noted above, the process 700 includes a process for identifying video content being displayed by a television system. At 702, the process 700 includes receiving video data of a video segment being displayed by the television system, wherein the video segment includes at least a portion of a video program. In some examples, the video segment can include a portion of the video program other than a beginning portion (e.g., a video segment in the middle of the program). At 704, the process 700 includes identifying the video segment being displayed by the television system. Identifying the video segment includes comparing the video data of the video segment with stored video data to find a closest match. In one example, the video segment can be identified by sampling (e.g., at fixed intervals, such as five, ten, fifteen, or twenty times per second, or any other suitable interval) a subset of the pixel data being displayed on the screen (the pixel data of a frame comprising the video segment), and then finding similar pixel data in the content database. One of ordinary skill in the art will appreciate that the other methods described herein for identifying displayed video content can also be used.
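For illustration only, the sampling-and-matching step might look like the following sketch, where frames are plain pixel arrays and the reference database is an in-memory dictionary keyed by segment identifier; a production system would use more robust fingerprints and an indexed datastore.

    from typing import Dict, List, Tuple

    def sample_pixels(frame: List[List[int]],
                      points: List[Tuple[int, int]]) -> List[int]:
        """Take pixel values at a fixed set of (row, col) sample points."""
        return [frame[r][c] for r, c in points]

    def closest_match(sample: List[int],
                      reference_db: Dict[str, List[int]]) -> str:
        """Return the reference segment whose stored samples are nearest
        under a simple L1 distance."""
        def distance(ref: List[int]) -> int:
            return sum(abs(a - b) for a, b in zip(sample, ref))
        return min(reference_db, key=lambda seg_id: distance(reference_db[seg_id]))

Sampling only a sparse, fixed set of points keeps the data sent to the matching server small while still distinguishing segments.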


At 706, the process 700 includes determining contextually related content. The contextually related content is contextually related to the identified video segment, and includes an option to switch to an alternative or related version of the video program from a video server. For example, the alternative or related version of the video program can include a version of the video program starting from the beginning of the video program. In one instance, the alternative version can be provided from a VoD server that delivers a version of the program starting from the beginning. For example, the video server can include a video-on-demand server. At 708, the process 700 includes providing the contextually related content to the television system. The television system can then present the contextually related content to a viewer.


As noted above, the process 800 includes a process for providing information for identifying video content being displayed by a television system. At 802, the process 800 includes displaying a video segment. The video segment includes at least a portion of a video program. At 804, the process 800 includes transmitting video data of the video segment being displayed. The video data is addressed to, and thus sent to, a matching server. The video data of the video segment is compared with stored video data to identify the video segment being displayed. For example, the matching server can compare the video data of the video segment with stored video data.


At 806, the process 800 includes receiving contextually related content. The contextually related content is contextually related to the identified video segment, and includes an option to switch to an alternative or related version of the video program from a video server. For example, the alternative or related version of the video program can include a version of the video program starting from the beginning of the video program. In one instance, the alternative version can be provided from a VoD server that delivers a version of the program starting from the beginning. For example, the video server can include a video-on-demand server. At 808, the process 800 includes displaying the contextually related content on a screen.
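By way of illustration only, the client-side steps 802 through 808 might be driven by a loop such as the following sketch, in which tv and matching_server are hypothetical objects standing in for the television system 101 and the matching server 107.

    import time

    def run_client_matching_loop(tv, matching_server, interval_s: float = 0.2) -> None:
        """Repeatedly sample the displayed frame (802), send it for matching
        (804), and display any contextually related content returned (806/808)."""
        while tv.is_on():
            video_data = tv.sample_current_frame()
            result = matching_server.identify(video_data)
            if result is not None and result.contextual_content is not None:
                tv.display_overlay(result.contextual_content)
            time.sleep(interval_s)  # 0.2 s matches the five-samples-per-second example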


In some embodiments of process 700 and process 800, selection of the option to switch to the alternative or related version of the video program causes the television system to receive a version of the video program starting from the beginning of the video program. In some embodiments, the contextually related content is displayed by the television system while the video program is displayed on a video screen of the television system. For example, the option can be displayed as an overlay or pop-up window over the displayed video program. In some embodiments, the contextually related content includes a graphical interface with the option to switch to the alternative or related version of the video program.


In some embodiments, the contextually related content further includes an option to select from a plurality of video program choices. In some examples, the plurality of video program choices includes choices offering video control capability, a different display format of the video content, or reduced commercial messaging.


In some embodiments, the television system requests the alternative or related version of the video content from the video server when the option to switch to the alternative or related version of the video program is selected.


In some embodiments, the television system is connected with a third party content server when the option to switch to the alternative or related version of the video program is selected. For instance, the television system can connect with the third party content server to obtain third party content at a specified time interval of the alternative or related version of the video program.


Substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other access or computing devices such as network input/output devices may be employed.


In the foregoing specification, aspects of the invention are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Various features and aspects of the above-described invention may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.


In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate embodiments, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.


Where components are described as being configured to perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.


While illustrative embodiments of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

Claims
  • 1. A system for identifying video content being displayed by a media system, the system comprising: one or more processors; and a non-transitory machine-readable storage medium containing instructions which when executed on the one or more processors, cause the one or more processors to: identify a video segment being displayed by the media system, wherein the video segment includes at least a portion of a version of a video program; determine contextual content being contextually related to the identified video segment, the contextual content including an alternative version of the video program, wherein the version and the alternative version include a same video program, the alternative version including at least one of a customization or a substitution of third party content relative to third party content that is part of the version of the video program; and transmit one or more software instructions, wherein the one or more software instructions, when received by the media system, cause a software application associated with the contextual content to execute on the media system, the software application providing the contextual content to the media system and causing at least one of the customized or substituted third party content to be presented in place of the third party content that is part of the version of the video program.
  • 2. The system of claim 1, further comprising instructions which when executed on the one or more processors, cause the one or more processors to: instruct a third party server to provide at least one of the customized or substituted third party content to the media system.
  • 3. The system of claim 2, wherein the alternative version of the video program is from a video server.
  • 4. The system of claim 1, further comprising instructions which when executed on the one or more processors, cause the one or more processors to: instruct a third party server to provide at least one of the customized or substituted third party content to a video server, the video server being configured to provide the alternative version of the video program and at least one of the customized or substituted third party content to the media system.
  • 5. The system of claim 1, wherein the software application associated with the contextual content includes embedded address information, the embedded address information providing access to at least one of the customized or substituted third party content.
  • 6. The system of claim 1, wherein the software application associated with the contextual content includes timing information, the timing information indicating a time in the alternative version of the video program to substitute at least one of the customized or substituted third party content.
  • 7. The system of claim 1, wherein the contextual content is determined based on at least one of cookie data or browsing history.
  • 8. The system of claim 1, wherein the contextual content includes an option to switch to the alternative version of the video program, and wherein selection of the option causes the media system to switch from the version of the video program to the alternative version of the same video program.
  • 9. The system of claim 8, wherein the media system is connected with a third party content server when the option is selected, wherein the media system connects with the third party content server to obtain at least one of the customized or substituted third party content at a specified time interval of the alternative version.
  • 10. The system of claim 1, further comprising instructions which when executed on the one or more processors, cause the one or more processors to: transmit, to the media system, one or more software applications configured for execution by the media system.
  • 11. A computer-implemented method comprising: identifying a video segment being displayed by the media system, wherein the video segment includes at least a portion of a version of a video program; determining contextual content being contextually related to the identified video segment, the contextual content including an alternative version of the video program, wherein the version and the alternative version include a same video program, the alternative version including at least one of a customization or a substitution of third party content relative to third party content that is part of the version of the video program; and transmitting one or more software instructions, wherein the one or more software instructions, when received by the media system, cause a software application associated with the contextual content to execute on the media system, the software application providing the contextual content to the media system and causing at least one of the customized or substituted third party content to be presented in place of the third party content that is part of the version of the video program.
  • 12. The method of claim 11, further comprising instructions which when executed on the one or more processors, cause the one or more processors to: instruct a third party server to provide at least one of the customized or substituted third party content to the media system.
  • 13. The method of claim 12, wherein the alternative version of the video program is from a video server.
  • 14. The method of claim 11, further comprising instructions which when executed on the one or more processors, cause the one or more processors to: instruct a third party server to provide at least one of the customized or substituted third party content to a video server, the video server being configured to provide the alternative version of the video program and at least one of the customized or substituted third party content to the media system.
  • 15. The method of claim 11, wherein the software application associated with the contextual content includes embedded address information, the embedded address information providing access to at least one of the customized or substituted third party content.
  • 16. The method of claim 11, wherein the software application associated with the contextual content includes timing information, the timing information indicating a time in the alternative version of the video program to substitute at least one of the customized or substituted third party content.
  • 17. The method of claim 11, wherein the contextual content is determined based on at least one of cookie data or browsing history.
  • 18. The method of claim 11, wherein the contextual content includes an option to switch to the alternative version of the video program, and wherein selection of the option causes the media system to switch from the version of the video program to the alternative version of the same video program.
  • 19. The method of claim 18, wherein the media system is connected with a third party content server when the option is selected, wherein the media system connects with the third party content server to obtain at least one of the customized or substituted third party content at a specified time interval of the alternative version.
  • 20. A non-transitory computer-readable medium having stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: identify a video segment being displayed by the media system, wherein the video segment includes at least a portion of a version of a video program; determine contextual content being contextually related to the identified video segment, the contextual content including an alternative version of the video program, wherein the version and the alternative version include a same video program, the alternative version including at least one of a customization or a substitution of third party content relative to third party content that is part of the version of the video program; and transmit one or more software instructions, wherein the one or more software instructions, when received by the media system, cause a software application associated with the contextual content to execute on the media system, the software application providing the contextual content to the media system and causing at least one of the customized or substituted third party content to be presented in place of the third party content that is part of the version of the video program.
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a continuation of U.S. patent application Ser. No. 16/141,598, filed Sep. 25, 2018, which is a continuation of U.S. patent application Ser. No. 15/011,099, filed Jan. 29, 2016, which claims the benefit of U.S. Provisional Application No. 62/110,024, filed Jan. 30, 2015, both of which are incorporated herein by reference in their entirety for all purposes. This application is related to U.S. patent application Ser. No. 14/089,003, filed Nov. 25, 2013, which claims the benefit of U.S. Provisional Application No. 61/182,334, filed May 29, 2009 and the benefit of U.S. Provisional Application No. 61/290,714, filed Dec. 29, 2009, all of which are hereby incorporated by reference in their entirety. This application is also related to U.S. patent application Ser. No. 12/788,748, filed May 27, 2010, U.S. patent application Ser. No. 12/788,721, filed May 27, 2010, and U.S. patent application Ser. No. 14/217,075, filed Mar. 17, 2014, all of which are hereby incorporated by reference in their entirety.

US Referenced Citations (321)
Number Name Date Kind
4677466 Lert, Jr. et al. Jun 1987 A
4697209 Kiewit et al. Sep 1987 A
4739398 Thomas et al. Apr 1988 A
5019899 Boles et al. May 1991 A
5193001 Kerdranvrat Mar 1993 A
5210820 Kenyon May 1993 A
5319453 Copriviza et al. Jun 1994 A
5436653 Ellis et al. Jul 1995 A
5481294 Thomas et al. Jan 1996 A
5557334 Legate Sep 1996 A
5572246 Ellis et al. Nov 1996 A
5812286 Li Sep 1998 A
5826165 Echeita et al. Oct 1998 A
5918223 Blum et al. Jun 1999 A
6008802 Goldschmidt et al. Dec 1999 A
6025837 Matthews, III et al. Feb 2000 A
6035177 Moses et al. Mar 2000 A
6064764 Bhaskaran et al. May 2000 A
6298482 Seidman Oct 2001 B1
6381362 Deshpande et al. Apr 2002 B1
6415438 Blackketter et al. Jul 2002 B1
6463585 Hendricks et al. Oct 2002 B1
6469749 Dimitrova Oct 2002 B1
6577346 Perlman Jun 2003 B1
6577405 Kranz et al. Jun 2003 B2
6628801 Powell et al. Sep 2003 B2
6647548 Lu et al. Nov 2003 B1
6675174 Bolle et al. Jan 2004 B1
6771316 Iggulden Aug 2004 B1
6804659 Graham et al. Oct 2004 B1
6978470 Swix et al. Dec 2005 B2
6990453 Wang et al. Jan 2006 B2
7028327 Dougherty et al. Apr 2006 B1
7039930 Goodman et al. May 2006 B1
7050068 Bastos et al. May 2006 B1
7051351 Goldman et al. May 2006 B2
7064796 Roy et al. Jun 2006 B2
7089575 Agnihotri et al. Aug 2006 B2
7098959 Mishima et al. Aug 2006 B2
7136875 Anderson et al. Nov 2006 B2
7178106 Lamkin et al. Feb 2007 B2
7210157 Devara Apr 2007 B2
7346512 Wang et al. Mar 2008 B2
7356830 Dimitrova Apr 2008 B1
7421723 Harkness et al. Sep 2008 B2
7545984 Kiel et al. Jun 2009 B1
7590998 Hanley Sep 2009 B2
7623823 Zito et al. Nov 2009 B2
7787696 Wilhelm et al. Aug 2010 B2
7793318 Deng Sep 2010 B2
7933451 Kloer Apr 2011 B2
8001571 Schwartz et al. Aug 2011 B1
8094872 Yagnik et al. Jan 2012 B1
8171004 Kaminski, Jr. et al. May 2012 B1
8171030 Peira et al. May 2012 B2
8175413 Ioffe et al. May 2012 B1
8189945 Stojancic et al. May 2012 B2
8195589 Bakke et al. Jun 2012 B2
8195689 Ramanathan et al. Jun 2012 B2
8229227 Stojancic et al. Jul 2012 B2
8335786 Peira et al. Dec 2012 B2
8352535 Peled Jan 2013 B2
8364703 Ramanathan et al. Jan 2013 B2
8385644 Stojancic et al. Feb 2013 B2
8392789 Biscondi et al. Mar 2013 B2
8494234 Djordjevic et al. Jul 2013 B1
8522283 Laligand et al. Aug 2013 B2
8595781 Neumeier et al. Nov 2013 B2
8619877 McDowell Dec 2013 B2
8625902 Baheti et al. Jan 2014 B2
8769854 Battaglia Jul 2014 B1
8776105 Sinha et al. Jul 2014 B2
8832723 Sinha et al. Sep 2014 B2
8856817 Sinha et al. Oct 2014 B2
8893167 Sinha et al. Nov 2014 B2
8893168 Sinha et al. Nov 2014 B2
8898714 Neumeier et al. Nov 2014 B2
8918804 Sinha et al. Dec 2014 B2
8918832 Sinha et al. Dec 2014 B2
8930980 Neumeier et al. Jan 2015 B2
8959202 Haitsma et al. Feb 2015 B2
9055309 Neumeier et al. Jun 2015 B2
9055335 Neumeier et al. Jun 2015 B2
9071868 Neumeier et al. Jun 2015 B2
9094714 Neumeier et al. Jul 2015 B2
9094715 Neumeier et al. Jul 2015 B2
9262671 Unzueta Feb 2016 B2
9368021 Touloumtzis Jun 2016 B2
9449090 Neumeier et al. Sep 2016 B2
9465867 Hoarty Oct 2016 B2
9838753 Neumeier et al. Dec 2017 B2
9955192 Neumeier et al. Apr 2018 B2
10080062 Neumeier et al. Sep 2018 B2
10116972 Neumeier et al. Oct 2018 B2
10169455 Neumeier et al. Jan 2019 B2
10185768 Neumeier et al. Jan 2019 B2
10192138 Neumeier et al. Jan 2019 B2
10271098 Neumeier et al. Apr 2019 B2
10284884 Neumeier et al. May 2019 B2
10306274 Neumeier et al. May 2019 B2
10375451 Neumeier et al. Aug 2019 B2
10405014 Neumeier et al. Sep 2019 B2
20010039658 Walton Nov 2001 A1
20010044992 Jahrling Nov 2001 A1
20020026635 Wheeler et al. Feb 2002 A1
20020054069 Britt et al. May 2002 A1
20020054695 Bjorn et al. May 2002 A1
20020056088 Silva, Jr. et al. May 2002 A1
20020059633 Harkness et al. May 2002 A1
20020078144 Lamkin et al. Jun 2002 A1
20020083060 Wang et al. Jun 2002 A1
20020083442 Eldering Jun 2002 A1
20020100041 Rosenberg et al. Jul 2002 A1
20020105907 Bruekers et al. Aug 2002 A1
20020120925 Logan Aug 2002 A1
20020122042 Bates Sep 2002 A1
20020144262 Plotnick Oct 2002 A1
20020162117 Pearson et al. Oct 2002 A1
20020162118 Levy et al. Oct 2002 A1
20030026422 Gerheim et al. Feb 2003 A1
20030086341 Wells May 2003 A1
20030105794 Jasinschi Jun 2003 A1
20030121037 Swix et al. Jun 2003 A1
20030121046 Roy et al. Jun 2003 A1
20030147561 Faibish et al. Aug 2003 A1
20030182291 Kurupati Sep 2003 A1
20030188321 Shoff et al. Oct 2003 A1
20030208763 McElhatten et al. Nov 2003 A1
20040045020 Witt et al. Mar 2004 A1
20040059708 Dean et al. Mar 2004 A1
20040183825 Stauder et al. Sep 2004 A1
20040216171 Barone et al. Oct 2004 A1
20040221237 Foote et al. Nov 2004 A1
20040226035 Hauser Nov 2004 A1
20040237102 Konig Nov 2004 A1
20040240562 Bargeron et al. Dec 2004 A1
20050015795 Iggulden Jan 2005 A1
20050015796 Bruckner et al. Jan 2005 A1
20050027766 Ben Feb 2005 A1
20050066352 Herley Mar 2005 A1
20050120372 Itakura Jun 2005 A1
20050172312 Lienhart et al. Aug 2005 A1
20050207416 Rajkotia Sep 2005 A1
20050209065 Schlosser Sep 2005 A1
20050235318 Grauch et al. Oct 2005 A1
20060029286 Lim et al. Feb 2006 A1
20060029368 Harville Feb 2006 A1
20060031914 Dakss et al. Feb 2006 A1
20060133647 Werner et al. Jun 2006 A1
20060153296 Deng Jul 2006 A1
20060155952 Haas Jul 2006 A1
20060173831 Basso et al. Aug 2006 A1
20060187358 Lienhart et al. Aug 2006 A1
20060193506 Dorphan et al. Aug 2006 A1
20060195857 Wheeler et al. Aug 2006 A1
20060195860 Eldering et al. Aug 2006 A1
20060245724 Hwang et al. Nov 2006 A1
20060245725 Lim Nov 2006 A1
20060253330 Maggio et al. Nov 2006 A1
20060277047 DeBusk et al. Dec 2006 A1
20060294561 Grannan et al. Dec 2006 A1
20070009235 Walters et al. Jan 2007 A1
20070033608 Eigeldinger Feb 2007 A1
20070050832 Wright et al. Mar 2007 A1
20070061724 Slothouber et al. Mar 2007 A1
20070061831 Savoor et al. Mar 2007 A1
20070083901 Bond Apr 2007 A1
20070094696 Sakai Apr 2007 A1
20070109449 Cheung May 2007 A1
20070113263 Chatani May 2007 A1
20070124762 Chickering et al. May 2007 A1
20070139563 Zhong Jun 2007 A1
20070143796 Malik Jun 2007 A1
20070168409 Cheung Jul 2007 A1
20070180459 Smithpeters et al. Aug 2007 A1
20070192782 Ramaswamy Aug 2007 A1
20070242880 Stebbings Oct 2007 A1
20070250901 McIntire et al. Oct 2007 A1
20070261070 Brown et al. Nov 2007 A1
20070261075 Glasberg Nov 2007 A1
20070271300 Ramaswamy Nov 2007 A1
20070274537 Srinivasan Nov 2007 A1
20070300280 Turner et al. Dec 2007 A1
20080044102 Ekin Feb 2008 A1
20080046945 Hanley Feb 2008 A1
20080089551 Heather et al. Apr 2008 A1
20080138030 Bryan et al. Jun 2008 A1
20080155588 Roberts et al. Jun 2008 A1
20080155627 O'Connor et al. Jun 2008 A1
20080172690 Kanojia et al. Jul 2008 A1
20080208891 Wang et al. Aug 2008 A1
20080240562 Fukuda et al. Oct 2008 A1
20080249986 Clarke-Martin Oct 2008 A1
20080263620 Berkvens et al. Oct 2008 A1
20080276266 Huchital et al. Nov 2008 A1
20080310731 Stojancic et al. Dec 2008 A1
20080313140 Pereira et al. Dec 2008 A1
20090007195 Beyabani Jan 2009 A1
20090024923 Hartwig et al. Jan 2009 A1
20090028517 Shen et al. Jan 2009 A1
20090052784 Covell et al. Feb 2009 A1
20090087027 Eaton et al. Apr 2009 A1
20090088878 Otsuka et al. Apr 2009 A1
20090100361 Abello et al. Apr 2009 A1
20090116702 Conradt May 2009 A1
20090131861 Braig et al. May 2009 A1
20090172728 Shkedi et al. Jul 2009 A1
20090172746 Aldrey et al. Jul 2009 A1
20090175538 Bronstein et al. Jul 2009 A1
20090213270 Ismert et al. Aug 2009 A1
20090235312 Morad et al. Sep 2009 A1
20100010648 Bull et al. Jan 2010 A1
20100083299 Nelson Apr 2010 A1
20100115543 Falcon May 2010 A1
20100115574 Hardt et al. May 2010 A1
20100125870 Ukawa et al. May 2010 A1
20100166257 Wredenhagen Jul 2010 A1
20100199295 Katpelly Aug 2010 A1
20100235486 White et al. Sep 2010 A1
20100253838 Garg et al. Oct 2010 A1
20100269138 Krikorian et al. Oct 2010 A1
20100306805 Neumeier et al. Dec 2010 A1
20100306808 Neumeier et al. Dec 2010 A1
20110015996 Kassoway et al. Jan 2011 A1
20110026761 Radhakrishnan et al. Feb 2011 A1
20110041154 Olson Feb 2011 A1
20110055552 Francis et al. Mar 2011 A1
20110096955 Voloshynovskiy et al. Apr 2011 A1
20110247042 Mallinson Oct 2011 A1
20110251987 Buchheit Oct 2011 A1
20110289099 Quan Nov 2011 A1
20110299770 Vaddadi et al. Dec 2011 A1
20120017240 Shkedi Jan 2012 A1
20120054143 Doig et al. Mar 2012 A1
20120076357 Yamamoto et al. Mar 2012 A1
20120095958 Pereira et al. Apr 2012 A1
20120117584 Gordon May 2012 A1
20120158511 Lucero et al. Jun 2012 A1
20120174155 Mowrey et al. Jul 2012 A1
20120177249 Levy et al. Jul 2012 A1
20120185566 Nagasaka Jul 2012 A1
20120272259 Cortes Oct 2012 A1
20120294586 Weaver et al. Nov 2012 A1
20120317240 Wang Dec 2012 A1
20120324494 Burger et al. Dec 2012 A1
20130007191 Klappert et al. Jan 2013 A1
20130042262 Riethmueller Feb 2013 A1
20130050564 Adams et al. Feb 2013 A1
20130054356 Richman et al. Feb 2013 A1
20130067523 Etsuko et al. Mar 2013 A1
20130070847 Iwamoto et al. Mar 2013 A1
20130108173 Lienhart et al. May 2013 A1
20130139209 Urrabazo et al. May 2013 A1
20130198768 Kitazato Aug 2013 A1
20130202150 Sinha et al. Aug 2013 A1
20130205315 Sinha Aug 2013 A1
20130209065 Yeung Aug 2013 A1
20130212609 Sinha et al. Aug 2013 A1
20130290502 Bilobrov Oct 2013 A1
20130297727 Levy Nov 2013 A1
20130318096 Cheung Nov 2013 A1
20140016696 Nelson Jan 2014 A1
20140025837 Swenson Jan 2014 A1
20140052737 Ramanathan et al. Feb 2014 A1
20140082663 Neumeier et al. Mar 2014 A1
20140088742 Srinivasan Mar 2014 A1
20140105567 Casagrande Apr 2014 A1
20140123165 Mukherjee et al. May 2014 A1
20140130092 Kunisetty May 2014 A1
20140188487 Perez Gonzalez Jul 2014 A1
20140193027 Scherf Jul 2014 A1
20140195548 Harron Jul 2014 A1
20140196085 Dunker Jul 2014 A1
20140201769 Neumeier et al. Jul 2014 A1
20140201772 Neumeier et al. Jul 2014 A1
20140201787 Neumeier et al. Jul 2014 A1
20140219554 Yamaguchi et al. Aug 2014 A1
20140237576 Zhang Aug 2014 A1
20140258375 Munoz Sep 2014 A1
20140270489 Jaewhan et al. Sep 2014 A1
20140270504 Baum et al. Sep 2014 A1
20140270505 McCarthy Sep 2014 A1
20140282671 McMillan Sep 2014 A1
20140293794 Zhong et al. Oct 2014 A1
20140344880 Geller et al. Nov 2014 A1
20150026728 Carter et al. Jan 2015 A1
20150040074 Hofmann et al. Feb 2015 A1
20150100979 Moskowitz et al. Apr 2015 A1
20150112988 Pereira et al. Apr 2015 A1
20150120839 Kannan et al. Apr 2015 A1
20150121409 Zhang et al. Apr 2015 A1
20150128161 Conrad et al. May 2015 A1
20150163545 Freed et al. Jun 2015 A1
20150181311 Navin et al. Jun 2015 A1
20150229998 Kaushal et al. Aug 2015 A1
20150256891 Kim et al. Sep 2015 A1
20150302890 Ergen et al. Oct 2015 A1
20150382075 Neumeier et al. Dec 2015 A1
20160227261 Neumeier et al. Aug 2016 A1
20160227291 Shaw et al. Aug 2016 A1
20160286244 Chang et al. Sep 2016 A1
20160307043 Neumeier Oct 2016 A1
20160314794 Leitman et al. Oct 2016 A1
20160316262 Chen Oct 2016 A1
20160330529 Byers Nov 2016 A1
20160353172 Miller et al. Dec 2016 A1
20160359791 Zhang et al. Dec 2016 A1
20170017645 Neumeier et al. Jan 2017 A1
20170017651 Neumeier et al. Jan 2017 A1
20170017652 Neumeier et al. Jan 2017 A1
20170019716 Neumeier et al. Jan 2017 A1
20170019719 Neumeier et al. Jan 2017 A1
20170026671 Neumeier et al. Jan 2017 A1
20170031573 Kaneko Feb 2017 A1
20170032033 Neumeier et al. Feb 2017 A1
20170032034 Neumeier et al. Feb 2017 A1
20170134770 Neumeier et al. May 2017 A9
20170186042 Wong et al. Jun 2017 A1
20170311014 Fleischman Oct 2017 A1
20170325005 Liassides Nov 2017 A1
20170353776 Holden et al. Dec 2017 A1
Foreign Referenced Citations (33)
Number Date Country
2501316 Sep 2005 CA
1557096 Dec 2004 CN
101162470 Apr 2008 CN
1681304 Jul 2010 CN
102377960 May 2012 CN
101681373 Sep 2012 CN
248 533 Aug 1994 EP
1578126 Sep 2005 EP
1 760 693 Mar 2007 EP
1504445 Aug 2008 EP
2 084 624 Aug 2009 EP
2 352 289 Aug 2011 EP
2 541 963 Jan 2013 EP
2 685 450 Jan 2014 EP
2457694 Aug 2009 GB
0144992 Jun 2001 WO
2005101998 Nov 2005 WO
2007114796 Oct 2007 WO
2008065340 Jun 2008 WO
2009131861 Oct 2009 WO
2009150425 Dec 2009 WO
2010135082 Nov 2010 WO
2011090540 Jul 2011 WO
2012057724 May 2012 WO
2012108975 Aug 2012 WO
2012170451 Dec 2012 WO
2014142758 Sep 2014 WO
2014145929 Sep 2014 WO
2015100372 Jul 2015 WO
2016123495 Aug 2016 WO
2016168556 Oct 2016 WO
2017011758 Jan 2017 WO
2017011792 Jan 2017 WO
Non-Patent Literature Citations (107)
Entry
International Search Report and Written Opinion dated Mar. 31, 2015 for PCT Application No. PCT/US2014/072255, 8 pages.
International Search Report and Written Opinion dated Apr. 26, 2016 for PCT Application No. PCT/US2016/015681, 13 pages.
“How to: Watch from the beginning | About DISH” (Dec. 31, 2014) XP055265764, retrieved on Apr. 15, 2016 from URL: http://about.dish.com/blog/hopper/how-watch-beginning, 2 pages.
International Search Report and Written Opinion dated Jun. 24, 2016 for PCT Application No. PCT/US2016/027691, 13 pages.
Gionis et al., “Similarity Search in High Dimension via Hashing”, Proceedings of the 25th VLDB Conference, 1999, 12 pages.
Huang , “Bounded Coordinate System Indexing for Real-time Video Clip Search”, Retrieved from the Internet:URL:http://staff.itee.uq.edu.aujzxf/papers/TOIS.pdf, Jan. 1, 2009, 32 pages.
Kim et al., “Edge-Based Spatial Descriptor Using Color Vector Angle for Effective Image Retrieval”, Modeling Decisions for Artificial Intelligence; [Lecture Notes in Computer Science; Lecture Notes in Artificial Intelligence, Jul. 1, 2005, pp. 365-375.
Liu et al., “Near-duplicate video retrieval”, ACM Computing Surveys, vol. 45, No. 4, Aug. 30, 2013, pp. 1-23.
International Search Report and Written Opinion dated Oct. 12, 2016 for PCT Application No. PCT/US2016/042522, 13 pages.
International Search Report and Written Opinion dated Oct. 11, 2016 for PCT Application No. PCT/US2016/042621, 13 pages.
International Search Report and Written Opinion dated Oct. 20, 2016 for PCT Application No. PCT/US2016/042611, 12 pages.
Scouarnec et al., “Cache policies for cloud-based systems:To keep or not to keep”, 2014 IEEE 7th International Conference on Cloud Computing, IEEE XP032696624, Jun. 27, 2014, pp. 1-8.
International Search Report and Written Opinion dated Oct. 25, 2016 for PCT Application No. PCT/US2016/042564, 14 pages.
Anonymous; “Cache (computing)”, Wikipedia, the free encyclopedia, URL: http://en.wikipedia.org/w/index.php?title=Cache_(computing)&oldid=474222804, Jan. 31, 2012; 6 pages.
International Search Report and Written Opinion dated Oct. 24, 2016 for PCT Application No. PCT/US2016/042557, 11 pages.
Anil K. Jain, “Image Coding Via a Nearest Neighbors Image Model” IEEE Transactions on Communications, vol. Com-23, No. 3, Mar. 1975, pp. 318-331.
Lee et al., “Fast Video Search Algorithm for Large Video Database Using Adjacent Pixel Intensity Difference Quantization Histogram Feature” International Journal of Computer Science and Network Security, vol. 9, No. 9, Sep. 2009, pp. 214-220.
Li et al., A Confidence Based Recognition System for TV Commercial Extraction, Conferences in Research and Practice in Information Technology vol. 75, 2008.
International Search Report and Written Opinion dated Jul. 27, 2011 for PCT Application No. PCT/US2010/057153, 8 pages.
International Search Report and Written Opinion dated Aug. 31, 2011 for PCT Application No. PCT/US2010/057155, 8 pages.
International Search Report and Written Opinion dated Aug. 26, 2014 for PCT Application No. PCT/US2014/030782; 11 pages.
International Search Report and Written Opinion dated Jul. 21, 2014 for PCT Application No. PCT/US2014/030795; 10 pages.
International Search Report and Written Opinion, dated Jul. 25, 2014 for PCT Application No. PCT/US2014/030805, 10 pages.
Extended European Search Report dated Mar. 7, 2013 for European Application No. 12178359.1, 8 pages.
Extended European Search Report dated Oct. 11, 2013 for European Application No. 10844152.8, 19 pages.
Kabal (P.), Ramachandran (R.P.): The computation of line spectral frequencies using Chebyshev polynomials, IEEE Trans. on ASSP, vol. 34, No. 6, pp. 1419-1426, 1986.
Itakura (F.): Line spectral representation of linear predictive coefficients of speech signals, J. Acoust. Soc. Amer., vol. 57, Supplement No. 1, S35, 1975, 3 pages.
Bistritz (Y.), Pellerm (S.): Immittance Spectral Pairs (ISP) for speech encoding, Proc. ICASSP'93, pp. 11-9 to 11-12.
International Search Report and Written Opinion dated Mar. 8, 2016 for PCT Application No. PCT/US2015/062945; 9 pages.
Extended European Search Report dated Dec. 21, 2016 for European Application No. 14763506.4, 11 pages.
Extended European Search Report dated Nov. 23, 2016 for European Application No. 14764182.3, 10 pages.
Extended European Search Report dated Jan. 24, 2017 for European Application No. 14762850.7, 12 pages.
Extended European Search Report dated Jun. 16, 2017, for European Patent Application No. 14873564.0, 8 pages.
Extended European Search Report dated Jun. 28, 2019, for European Patent Application No. 19166400.2, 9 pages.
U.S. Appl. No. 14/551,933, “Final Office Action”, dated May 23, 2016, 19 pages.
U.S. Appl. No. 14/551,933 , “Non-Final Office Action”, dated Oct. 17, 2016, 15 pages.
U.S. Appl. No. 14/551,933 , “Non-Final Office Action”, dated Dec. 31, 2015, 24 pages.
U.S. Appl. No. 14/551,933 , “Notice of Allowance”, dated Mar. 21, 2017, 8 pages.
U.S. Appl. No. 14/217,039 , “Non-Final Office Action”, dated May 23, 2014, 27 pages.
U.S. Appl. No. 14/217,039 , “Final Office Action”, dated Nov. 7, 2014, 15 pages.
U.S. Appl. No. 14/217,039 , “Notice of Allowance”, dated Jan. 29, 2015, 8 pages.
U.S. Appl. No. 14/678,856, “Non-Final Office Action”, dated Dec. 1, 2015, 28 pages.
U.S. Appl. No. 14/678,856, “Notice of Allowance”, dated May 20, 2016, 9 pages.
U.S. Appl. No. 14/217,075, “Non-Final Office Action”, dated Jul. 16, 2014, 39 pages.
U.S. Appl. No. 14/217,075, “Notice of Allowance ”, dated Feb. 20, 2015, 51 pages.
U.S. Appl. No. 14/217,094, “Notice of Allowance ”, dated Sep. 4, 2014, 30 pages.
U.S. Appl. No. 14/217,375, “Non-Final Office Action”, dated Apr. 1, 2015, 39 pages.
U.S. Appl. No. 14/217,375, “Notice of Allowance”, dated Apr. 1, 2015, 31 pages.
U.S. Appl. No. 14/217,425, “Non-Final Office Action”, dated Apr. 7, 2015, 12 pages.
U.S. Appl. No. 14/217,425, “Notice of Allowance”, dated May 20, 2015, 15 pages.
U.S. Appl. No. 14/217,435, “Non-Final Office Action”, dated Nov. 24, 2014, 9 pages.
U.S. Appl. No. 14/217,435, “Notice of Allowance”, dated Jun. 5, 2015, 9 pages.
U.S. Appl. No. 15/011,099 , “First Action Interview Office Action Summary”, dated May 9, 2017, 6 pages.
U.S. Appl. No. 15/011,099 , “First Action Interview Pilot Program Pre-Interview Communication”, dated Feb. 28, 2017, 5 pages.
U.S. Appl. No. 12/788,721 , “Non-Final Office Action”, dated Mar. 28, 2012, 15 Pages.
U.S. Appl. No. 12/788,721 , “Final Office Action”, dated Aug. 15, 2012, 22 Pages.
U.S. Appl. No. 12/788,721 , “Notice of Allowance”, dated Aug. 15, 2013, 16 Pages.
U.S. Appl. No. 14/763,158 , “Non-Final Office Action”, dated Jun. 27, 2016, 16 Pages.
U.S. Appl. No. 14/763,158 , “Final Office Action”, dated Sep. 7, 2016, 12 Pages.
U.S. Appl. No. 14/763,158 , “Notice of Allowance”, dated Mar. 17, 2016, 8 Pages.
U.S. Appl. No. 14/807,849 , “Non-Final Office Action”, dated Nov. 25, 2015, 12 Pages.
U.S. Appl. No. 14/807,849 , “Final Office Action”, dated Apr. 19, 2016, 13 pages.
U.S. Appl. No. 14/807,849 , “Non-Final Office Action”, dated Feb. 28, 2017, 10 Pages.
U.S. Appl. No. 14/089,003 , “Notice of Allowance”, dated Jul. 30, 2014, 24 Pages.
U.S. Appl. No. 12/788,748 , “Non-Final Office Action”, dated Jan. 10, 2013, 10 Pages.
U.S. Appl. No. 12/788,748 , “Final Office Action”, dated Nov. 21, 2013, 13 Pages.
U.S. Appl. No. 12/788,748 , “Notice of Allowance”, dated Mar. 6, 2014, 7 Pages.
U.S. Appl. No. 14/953,994 , “Non-Final Office Action”, dated Mar. 3, 2016, 34 Pages.
U.S. Appl. No. 14/953,994 , “Final Office Action”, dated Jun. 1, 2016, 36 Pages.
U.S. Appl. No. 14/953,994 , “Notice of Allowance”, dated Aug. 31, 2016, 15 Pages.
U.S. Appl. No. 14/807,849 , “Final Office Action”, dated Jun. 22, 2017, 10 pages.
U.S. Appl. No. 15/011,099 , “Final Office Action”, dated Jul. 24, 2017, 22 pages.
U.S. Appl. No. 15/240,801 , “Non-Final Office Action”, dated Aug. 11, 2017, 18 pages.
U.S. Appl. No. 15/240,815 , “Non-Final Office Action”, dated Aug. 23, 2017, 15 pages.
U.S. Appl. No. 15/211,345 , “First Action Interview Pilot Program Pre-Interview Communication”, dated Sep. 19, 2017, 8 pages.
U.S. Appl. No. 14/807,849 , “Notice of Allowance”, dated Nov. 30, 2017, 9 Pages.
U.S. Appl. No. 15/240,801 , “Final Office Action”, dated Dec. 22, 2017, 24 pages.
U.S. Appl. No. 15/011,099, “Non-Final Office Action”, dated Jan. 22, 2018, 23 pages.
U.S. Appl. No. 15/240,815 , “Final Office Action”, dated Mar. 2, 2018, 14 pages.
U.S. Appl. No. 15/211,345 , “Final Office Action”, dated Mar. 2, 2018, 14 pages.
Extended European Search Report dated Mar. 22, 2018 for European Application No. 15865033.3, 10 pages.
U.S. Appl. No. 15/099,842 , “Final Office Action”, dated Apr. 2, 2018, 8 pages.
U.S. Appl. No. 15/210,730, “Notice of Allowance”, dated May 23, 2018, 10 pages.
U.S. Appl. No. 15/796,706, “Non-Final Office Action”, dated Jun. 26, 2018, 17 pages.
U.S. Appl. No. 15/011,099, “Notice of Allowance”, dated Jun. 28, 2018, 12 pages.
U.S. Appl. No. 15/796,698, “Non-Final Office Action”, dated Jul. 5, 2018, 15 pages.
U.S. Appl. No. 15/240,801, “Notice of Allowance”, dated Aug. 30, 2018, 9 pages.
U.S. Appl. No. 15/211,345, “Non-Final Office Action”, dated Sep. 4, 2018, 13 pages.
U.S. Appl. No. 15/099,842, “Notice of Allowance”, dated Sep. 7, 2018, 10 pages.
U.S. Appl. No. 15/240,815, “Notice of Allowance”, dated Sep. 12, 2018, 9 pages.
U.S. Appl. No. 15/290,848 , “First Action Interview Office Action Summary”, dated Nov. 2, 2018, 5 pages.
U.S. Appl. No. 15/796,692, “Notice of Allowance”, dated Dec. 5, 2018, 8 pages.
U.S. Appl. No. 15/796,698, “Notice of Allowance”, dated Dec. 20, 2018, 8 pages.
U.S. Appl. No. 15/796,706, “Notice of Allowance”, dated Jan. 11, 2019, 9 pages.
U.S. Appl. No. 15/211,508, “Non-Final Office Action”, dated Jan. 10, 2019, 20 pages.
U.S. Appl. No. 15/211,492, “Non-Final Office Action”, dated Jan. 11, 2019, 19 pages.
U.S. Appl. No. 16/141,598, “First Action Interview Office Action Summary”, dated Jan. 11, 2019, 8 pages.
U.S. Appl. No. 15/290,848, “Non-Final Office Action”, dated Mar. 5, 2019, 5 pages.
U.S. Appl. No. 15/211,991, “Non-Final Office Action”, dated Feb. 26, 2019, 8 pages.
U.S. Appl. No. 15/211,345, “Notice of Allowance”, dated Mar. 20, 2019, 6 pages.
U.S. Appl. No. 16/210,796, “Non-Final Office Action”, dated Mar. 22, 2019, 7 pages.
U.S. Appl. No. 16/141,598, “Notice of Allowance”, dated Apr. 25, 2019, 11 pages.
U.S. Appl. No. 15/211,508, “Final Office Action”, dated Jun. 20, 2019, 20 pages.
U.S. Appl. No. 15/211,492, “Final Office Action”, dated Jun. 20, 2019, 18 pages.
U.S. Appl. No. 16/210,796, “Notice of Allowance”, dated Jul. 11, 2019, 9 pages.
U.S. Appl. No. 16/214,875, “Non-Final Office Action”, dated Jul. 29, 2019, 12 pages.
U.S. Appl. No. 15/211,991, “Final Office Action”, dated Oct. 31, 2019, 9 pages.
Related Publications (1)
Number Date Country
20200059681 A1 Feb 2020 US
Provisional Applications (1)
Number Date Country
62110224 Jan 2015 US
Continuations (2)
Number Date Country
Parent 16141598 Sep 2018 US
Child 16519292 US
Parent 15011099 Jan 2016 US
Child 16141598 US