The present invention relates to an interactive digital media platform and more particularly to methods and apparatus for detecting and dynamically synchronizing to media content (e.g., television (TV) programs or movies) that a viewer is watching while providing related content on a second screen for enhancing the viewer experience.
In general there is a large amount of related content that exists with respect to movies and television shows, e.g., reviews, ratings, trailers, movie tickets, fan gear, actor and actress biographies, crew biographies, celebrity stats, sound tracks, etc. However, there are no efficient, easy-to-use, automated mechanisms for delivering that related content to a viewer at a time and in a format that encourages the viewer to actively use the related content to enhance the viewing experience, e.g., concurrently review, search in depth and/or make purchasing decisions or take other actions based on that related content. Instead, the viewer is left on his own to actively seek out other media channels, e.g., websites, that provide related content, and on these other channels is required to initiate a search based on what he or she can recall from the original programming that spurred the viewer's interest. These multiple steps, delays, and other burdens placed on the viewer mean that viewers are less likely to engage with and take action on the programming and advertising they see, thereby limiting the return on the investment by the owner of the related content (e.g., movie studio and/or TV programmers).
In one embodiment, the present invention provides a second screen interactive platform that can synchronize to the movie or TV programming (the primary content) that a viewer is viewing from another source. In various embodiments, the second screen is a tablet computer, smartphone or laptop computer. In various embodiments, the primary content is being delivered on a television, a personal computer, a mobile device such as a smartphone or portable media player, or the like.
In one embodiment, the primary content (e.g., TV programming) that the viewer is watching is determined by detecting an audio signal of the primary content which the viewer is listening to. For example, the second screen device can detect (e.g., via a microphone) the audio signal and the second screen platform can determine the identity of the primary content. In this embodiment the audio signal (primary content) can be generated by substantially any source, and the provider of the second screen interactive platform need not be associated with nor licensed by the primary content provider. Similarly, the user is not required to take any active steps (other than start the second screen application) to identify what the viewer is watching on the other (primary content) device. The viewer is free to select from any of the available sources of primary content for viewing, and to randomly reselect (e.g., change TV channels, change media service providers, or change video display devices) without limitation.
In accordance with one embodiment of the invention, the second screen platform continuously tracks what the viewer is watching and provides related content which changes as the primary content being viewed changes. This continuous tracking can be in substantially real time to enable the delivery of related content to the viewer substantially concurrently with the primary content being viewed.
In one embodiment, the second screen platform captures an audio portion of the primary content that the viewer is watching, and from this captured audio content determines what is being viewed. In one example, this determination is made by audio fingerprinting, a passive technique whereby key elements or signatures are extracted from the detected audio to generate a sample fingerprint which is then compared against a known set of fingerprints generated (by the same or similar fingerprinting algorithm) to determine a match.
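As a rough illustration of this matching step, the sketch below uses a toy fingerprinting scheme. The function names and the frame-based signature are hypothetical stand-ins, not the algorithm of the invention: a signature is extracted from audio samples, then compared against a library of known fingerprints to find the best match.

```python
def make_fingerprint(samples, frame_size=4):
    """Reduce a list of audio samples to a coarse per-frame signature.
    (Toy stand-in for a real audio fingerprinting algorithm.)"""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return tuple(max(f) for f in frames if f)

def similarity(fp_a, fp_b):
    """Fraction of frame signatures on which two fingerprints agree."""
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return 0.0
    return sum(a == b for a, b in zip(fp_a, fp_b)) / n

def best_match(sample_fp, known):
    """Compare a sample fingerprint against known fingerprints and
    return the (content_id, score) pair with the highest score."""
    return max(((cid, similarity(sample_fp, fp)) for cid, fp in known.items()),
               key=lambda t: t[1])
```

A real system would use robust spectral features and approximate matching; the point here is only the generate-then-compare structure, with the same algorithm applied to both the detected audio and the reference library.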
In one embodiment, the platform then transmits to the viewer on the second screen information relevant to the primary content being viewed. This information may include one or more of the following: identification of the primary content being viewed, a time or location marker of what is being viewed, additional information about the primary content, or the like.
In one embodiment, the second screen presentation is synchronized in time to that portion of the primary content currently or recently viewed by the viewer. In one example, the second screen presentation includes a plurality of time synchronized pages.
In one embodiment, the platform automatically advances the presentation through the pages without input from the viewer (i.e., a passive viewer mode). Alternatively, the viewer can himself engage the screen to interact with content or actively move through the content (i.e., active viewer mode).
In one embodiment, a page on the second screen presentation can be a direct connection to a web page of related content. Alternatively, the second screen page can link to another page of related content provided by the platform itself.
In yet another embodiment, the presentation on the second screen may be asynchronous with respect to time, e.g., no time indicator. In one example, the presentation may include one or more pages directed to the general subject matter of the primary content, the cast, multiple episodes, and/or a social networking forum.
Preferably, the second screen presentation compels the viewer to interact with the presentation. This interaction may include the viewer requesting more information, conducting a search, reviewing advertisements, scheduling a future event, contributing to the related content, interacting with other viewers and/or non-viewers having an interest in related content, social networking associated with the primary or related content, purchasing services or goods as a result of such interaction, or the like.
In one embodiment, the second screen platform works with primary content comprising a live broadcast, streaming or stored video content.
In one embodiment of the invention, a second screen interactive content delivery system is provided comprising:
In another embodiment, the first stored program utilizes a fingerprinting algorithm for processing the detected audio portion.
In another embodiment, the known primary content comprises fingerprints and for each fingerprint an associated television program and a time offset within the program.
In another embodiment, the interactive content is presented as a series of web-based pages synchronized in time with respect to the detected primary content.
In another embodiment, the series of pages are synchronized to time codes in the television programming.
In another embodiment, the series of pages can be scrolled via the interactive display screen.
In another embodiment, individual pages can be selected via the interactive display screen.
In another embodiment, the series of pages comprises a flipbook, organized horizontally, vertically or stacked, and in order of the time codes.
In another embodiment, one or more asynchronous pages is presented at the beginning and/or end of the series of synchronized pages.
In another embodiment, the first stored program operates, automatically after a designated time period or in response to a communication from a user selectable option on the display screen, to return to a page having a time code closest to but not exceeding a current time.
In another embodiment, each page is displayed only after its associated time code has passed in the primary content being detected.
In another embodiment, the first stored program operates to conceal a page until after its associated time code has passed in the primary content being detected, and presents a user selectable option on the display screen to reveal the page.
In another embodiment, the first stored program operates to automatically advance through the time synchronized pages presented on the interactive display screen.
In another embodiment, the first stored program halts the automatic advancement in response to an input signal from the display screen indicating a user interaction with the display screen.
In another embodiment, the first stored program communicates as a client with a fingerprinting identification server external to the second screen device for determining a match of a detected primary content and a stored primary content.
In another embodiment, the client and server accumulate and share matching information over several request-response transactions to determine a match.
In another embodiment, the server executes a second stored program that sends a cookie to the client with partial match information.
In another embodiment, the first stored program receives the cookie and sends the cookie back to the server along with a subsequently detected audio portion.
In another embodiment, the first stored program communicates with a fingerprinting identification service to search for a match across a data store of known primary content, and once a match is identified, subsequent searches for matches with subsequently detected audio portions are performed within a neighborhood of the identified match.
In another embodiment, the neighborhood is a range of time prior to and after the matched primary content.
In another embodiment, the second screen device is Internet-enabled for communicating with the identification process and source(s) of the interactive content.
In another embodiment, the second screen device includes a browser process communicating with an external web server for aggregating the interactive content.
In another embodiment, the second screen device includes a web browser, a data store of detected primary content, and an inter-process interface communicating with the browser and data store for processing the interactive content.
In another embodiment, the first stored program on the second screen device includes a fingerprinting generation process communicating with an external web service that stores detected primary content.
In another embodiment, the second screen device includes a browser process having a fingerprinting generation process embedded in the browser process.
In another embodiment, the second screen device includes a fingerprinting process and a browser process embedded in a primary application stored on the second screen device.
In another embodiment, the identification process utilizes metadata of the known primary content and the detected audio portion to determine a match.
In another embodiment, the metadata comprises a characteristic of the known primary content including one or more of:
In another embodiment, the first stored program utilizes the metadata to determine one or more of:
In another embodiment, the first screen device comprises a television, a personal computer, a smartphone, a portable media player, a cable or satellite set-top box, an Internet-enabled streaming device, a gaming device, or a DVD/Blu-ray device.
In another embodiment, the second screen device comprises a tablet computer, a smartphone, a laptop computer, or a portable media player.
In another embodiment, the second screen device continually tracks the detected audio portion and the first stored program presents in substantially real time interactive content which changes as the detected audio portion changes.
In another embodiment, once a match is determined the identification service sends a portion of the data store defined by the neighborhood to the second screen device, which portion is then stored on the second screen device for use locally on the second screen device in subsequent identification searches.
In another embodiment, the second screen device includes a user selectable input and the first stored program responds to a communication from the user selectable input to advance through the time synchronized pages.
In another embodiment, the first stored program operates to process communications from the user selectable input including one or more of:
In another embodiment, the primary content comprises a live broadcast, streaming content, or stored video content.
In another embodiment, the interactive content comprises one or more of a direct connection to a web page, and a link to a web page.
In one embodiment of the invention, a method is provided for substantially real time comparison and recognition of what primary content a viewer is watching on a first screen device comprising:
In another embodiment, the method includes utilizing metadata of the primary content detection signal or representation thereof in the step of identifying the detected primary content or in a step of selecting the content presented on the second screen device.
In another embodiment, the method includes the step of extracting information from one or more content streams to generate the metadata.
In another embodiment, the streams comprise one or more of video, audio and closed captioning of the primary content.
In one embodiment of the invention, a method is provided of substantially real time sharing of video content a viewer is watching on a first screen device comprising:
In another embodiment the method includes:
These and other embodiments of the invention will be more fully understood with regard to the following detailed description and accompanying drawings.
Various embodiments of the present invention are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.
As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
The present invention may also be illustrated as a flow chart of a process of the invention. While, for the purposes of simplicity of explanation, the one or more methodologies shown therein, e.g., in the form of a flow chart, are described as a series of acts, it is to be understood and appreciated that the present invention is not limited by the order of acts, as some acts may, in accordance with the present invention, occur in a different order and/or concurrent with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the present invention.
Thus, the SSCP 220 is responsible for processing the detected signal from the viewer's first screen and determining the related content, enabling a substantially real time comparison and recognition of what primary content the viewer is watching.
This processing and determination can be implemented in various methods according to the invention. In one embodiment, the client (second screen) and server (SSCP Platform) use symmetric fingerprinting algorithms to generate fingerprints of primary content that are used in a matching process to identify what the viewer is watching on the first screen. In one embodiment, the SSCP (backend listening servers and ingestion/indexing servers) receives live broadcast video, and optionally metadata concerning that video (e.g., show description, cast, episode, gossip, news), together with or separate from the live broadcast video, and generates fingerprints of the video content; optionally the SSCP generates further metadata concerning the video content, which the SSCP then stores (e.g., fingerprints, received metadata, generated metadata) in the SSCP servers. Then, when the SSCP receives a detection signal from the second screen device, it uses the same or similar fingerprinting algorithm to generate fingerprints which are then compared to the fingerprints (and optionally metadata) stored in the SSCP servers. The SSCP determines a match between the respective fingerprints alone and/or on the basis of the metadata. In some examples, described below, the SSCP uses the metadata in the matching process. In one example, the metadata includes classification information concerning the primary content, such as whether the primary content has a distinguishing characteristic (e.g., is unique in the database) or whether it is repetitive (i.e., recurrent) content. If it is classified as repetitive, the SSCP server may defer (reject) the determination of a match, and instead utilize further detection signals based upon distinguishing characteristics of the primary content. Alternatively, the metadata may be annotation information useful in selecting related content.
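The deferral of a match for repetitive content, described above, might be sketched as follows. This is a simplified model; the function name, the candidate list, and the metadata layout are assumptions, not the actual SSCP implementation:

```python
def resolve_match(candidates, metadata):
    """candidates: content ids whose fingerprints matched a detection signal.
    metadata: maps content id -> classification info, e.g. {'repetitive': bool}.
    Returns a content id when the match is unambiguous, else None (defer
    and wait for further, more distinguishing detection signals)."""
    unique = [c for c in candidates
              if not metadata.get(c, {}).get("repetitive", False)]
    if len(unique) == 1:
        return unique[0]
    return None  # all candidates repetitive, or more than one unique candidate
```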
For example, the metadata may identify the cast members or news relating to the show, which the SSCP then utilizes in the post match process for selecting the related content to send to the second screen device. These and other embodiments are discussed further below.
The following flow charts illustrate more specific embodiments for implementing the present invention.
More specific examples of systems and methods for implementing the present invention are described below.
In one embodiment the second screen interactive content is arranged as a series of pages (e.g., web-based pages) that can be swiped across the screen (e.g., left/right) to advance through the pages, herein referred to as a “flipbook”. An optional set of thumbnails or icons representing each page is presented underneath the pages, in the format of a “filmstrip”. By clicking on a particular icon on the filmstrip, the user can select the related page (of the flipbook) which is then presented on the second screen.
The individual pages of the flipbook may contain synchronous or asynchronous content. The synchronous pages are designed to be shown to the user (viewer) at a specific point in the primary content (e.g., TV program), as the related content is specific to a particular portion of the program. In contrast, asynchronous pages are designed to be of general interest throughout the program, and not necessarily at a specific portion of the program.
It is expected that the user may be viewing a program on the primary screen, other than at the time the primary content provider distributed (e.g., broadcast) that program. For example, the user may have recorded the program for later viewing, and/or acquired a stored version of the program from various third-party providers. To avoid displaying the synchronous pages to the user prematurely, the synchronous pages can be displayed on the second screen device according to different methods, for example:
A useful feature according to one embodiment of the present invention is the ability to provide related content for reviewing in alternative user modes, namely active navigation of the second screen content and passive navigation of the second screen content. In the passive navigation mode, the user is not required to interact with the second screen device, but can simply view the second screen interface as desired, and interact therewith when desired. The second screen content provider enables such passive viewing by propelling the viewer through related content even if the viewer is not interacting with the second screen. This automatic moving through pages, without requiring the user to do anything, generates page views and ad impressions for the second screen provider and third-party content providers. By default, the second screen interface may be provided in passive mode when the user enters the second screen interface. Then, when a user actively selects something on the second screen, the platform switches into active mode. Now, the automatic progression of the flipbook stops, and the user is redirected to another page based on the user selection. Later, if a user double-taps on the page, for example, or otherwise signals that he wants to return to the current point in the primary content being viewed on the first screen, or if the user does not interact with the flipbook for some designated period of time, the second screen content returns to the most current live position (in the primary content) and to passive mode.
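The passive/active mode switching described in this paragraph can be modeled as a small state machine. The sketch below is illustrative only; the class and method names are hypothetical:

```python
class FlipbookNavigator:
    """Sketch of passive/active navigation mode switching."""

    def __init__(self, idle_timeout=30.0):
        self.mode = "passive"            # auto-advance through pages by default
        self.idle_timeout = idle_timeout # designated inactivity period, seconds
        self.last_interaction = 0.0

    def on_user_select(self, now):
        """Any user selection halts auto-advance and enters active mode."""
        self.mode = "active"
        self.last_interaction = now

    def tick(self, now):
        """Called periodically; returns to passive mode (and the live
        position in the primary content) after the inactivity period."""
        if self.mode == "active" and now - self.last_interaction >= self.idle_timeout:
            self.mode = "passive"
        return self.mode
```

A real implementation would also handle the explicit "return to live" gesture (e.g., a double tap) as an immediate transition back to passive mode.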
As another example,
If the “Social” icon on
If the “Cast” icon is selected in
Additional embodiments of the invention will now be described including specific implementations designed to enhance the viewer's experience of the primary and related content. These embodiments relate generally to: a) presenting webpages synchronous with the viewer's location in the primary content; b) client-server fingerprint matching to enable identification of the primary content; c) continuous tracking of the primary content being viewed to enable more efficient identification of the primary content; and d) classification of primary content attributes for more efficient identification of the primary content and/or selection of related content.
One feature of the present invention is the ability to control when during a program viewing a page in the flipbook is shown to a user. In one embodiment, the SSCP utilizes an identifier (e.g., a live broadcast timecode) of the primary content and links the second screen (SS) presentation (of related content) to that identifier. The SS presentation is now synchronized to the primary content. However, the SSCP may also wish to monitor and control when individual flipbook pages are presented to each user. For example, a “quiz” page of the flipbook may pose a series of questions to viewers at specific moments in a program. If there are two viewers of a program, one watching live, and another watching time-shifted five minutes behind the live program airing, it is desired that the second viewer not see the questions on that page until five minutes after the first viewer. In one embodiment, a Javascript library enables a webpage developer to design a webpage that calls to retrieve a current timecode of the viewer in the program. An alternative approach is to provide a backend web service (e.g., provided by the SSCP) which posts a user's current program and timecode. Then, other web pages can call that service to determine where the user is in the program.
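The backend web service approach described above, where a user's current program and timecode are posted and other pages query that service before revealing a synchronized page, might look roughly like this in-memory sketch (the names are illustrative, not a real SSCP API):

```python
# In-memory stand-in for the backend "current position" web service.
positions = {}  # user_id -> (program_id, timecode_seconds)

def post_position(user_id, program_id, timecode):
    """Called as the viewer's location in the program is tracked."""
    positions[user_id] = (program_id, timecode)

def may_reveal(user_id, program_id, page_timecode):
    """Other pages call this to decide whether a synchronized page
    (e.g., a quiz page) may be shown to this particular viewer yet.
    A time-shifted viewer sees the page only once his own timecode
    in the program has reached the page's time code."""
    prog, tc = positions.get(user_id, (None, 0))
    return prog == program_id and tc >= page_timecode
```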
Referring now to
As previously described, it may be beneficial for the SSCP to continuously track the viewer's location in the primary content (e.g., via a primary content time identifier), in order to control the time of delivery of the related content. For this purpose, after step 1404 the process returns to step 1400 (see dashed line in
It may also be desirable to provide a web-based fingerprint matching service in which each client, e.g., an iPad application on the viewer's second screen device, sends one or more fingerprints to the server of the second screen content provider (SSCP) in order to identify what the viewer is watching. The SSCP likely would desire a highly scalable backend service for conducting such fingerprint matching. Generally, the less state a web server has to store about an individual client application (user) the faster and better it will scale because the request can be handled by any one of multiple available servers without regard to previous requests. However, the activity must be coordinated across sequential request-response transactions. Thus, it may take several cycles for a client to accumulate enough information to distinguish the primary content. In one embodiment, the second screen application sends fingerprint data to the SSCP server periodically, e.g., every 2 to 4 seconds, with the expectation that it may take several, e.g., 3 to 5 such transactions (over a 6 to 20 second time period) to gather enough information to have a high quality fingerprint match to identify the primary content being viewed. The SSCP would like to minimize the amount of state its server(s) have to retain in order to provide this service. In one embodiment, the SSCP platform utilizes a cookie model. On a first request from the second screen application for identification of the primary content, the server attempts to find a match but may find a weak or incomplete match. The server reports to the client “no match” but stores the partial match information in a cookie which the server sends to the client. The client receives this cookie and saves it, sending it back to the server along with the next transmission of fingerprint data. 
The server then performs a match on this new query, combining in the comparison process the latest fingerprint data with the partial match information contained in the cookie. If a match is found of complete or high quality, the server then tells the client “match found”. If a match is still not found, then the server sends the updated partial information (now including the cumulative results of the two queries) back to the client, and the process continues.
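The cookie-based request/response cycle described in the last two paragraphs can be sketched as follows. This is a simplified model under assumed data layouts (a flat fingerprint-to-content map and a hit-count cookie); a production service would use opaque, signed cookies and a more elaborate match score:

```python
MATCH_THRESHOLD = 3  # agreeing fingerprint hits required for a firm match

def server_identify(fingerprints, reference, cookie=None):
    """One request/response cycle of the stateless matching service.
    `reference` maps known fingerprints to content ids; `cookie` carries
    the partial match evidence accumulated over previous cycles.
    Returns (content_id, None) on a match, or (None, cookie) otherwise."""
    hits = dict(cookie or {})                 # fold in prior partial evidence
    for fp in fingerprints:
        cid = reference.get(fp)
        if cid is not None:
            hits[cid] = hits.get(cid, 0) + 1
    for cid, count in hits.items():
        if count >= MATCH_THRESHOLD:
            return cid, None                  # "match found"
    return None, hits                         # "no match": client echoes cookie back
```

Because all accumulated state travels in the cookie, any one of multiple available servers can handle each request, which is the scalability property the text describes.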
According to another embodiment of the invention, a method is provided for more efficient fingerprinting services, for example one that scales better to a large volume of users. In this process, during an initial primary content matching process, a search is conducted for a match across the entire database, e.g., of stored primary content fingerprints. Once a sufficient match is found identifying the primary content being viewed, future searches for that user can be conducted within a subset of the database related to the initially identified primary content, thus reducing the processing load of subsequent searches on the fingerprinting service. In one example, the SSCP identifies what the viewer is watching (e.g., finds the program and timecode in the program), and then restricts future searches to a time window just beyond that timecode. This drastically reduces the search space. In another embodiment, future searches are restricted to plus or minus a specified amount of time (e.g., five minutes) to account for the fact that users may jump forward or backward using digital recording devices or services. In an alternative process, future searches are conducted at the second screen device, rather than at the SSCP server. Thus, once the SSCP server has made an initial identification of a program and a timecode, instead of the client sending future fingerprint data to the server for a match, the server starts sending the predicted fingerprints back to the client (second screen), and the client then performs a local verification of the primary content location. If the client cannot verify, then the whole search process may start again beginning with the initial determination at the server.
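The neighborhood-restricted search might be sketched as below, assuming a simple list-of-entries index (the function names and index layout are hypothetical):

```python
def full_search(fingerprint, index):
    """Initial search across the entire index.
    index: list of (fingerprint, program_id, timecode) entries."""
    for fp, prog, tc in index:
        if fp == fingerprint:
            return prog, tc
    return None

def neighborhood_search(fingerprint, index, prog, last_tc, window=300):
    """After an initial match, consider only entries within +/- `window`
    seconds of the last known position in the identified program,
    drastically shrinking the candidate set for each subsequent search."""
    for fp, p, tc in index:
        if p == prog and abs(tc - last_tc) <= window and fp == fingerprint:
            return p, tc
    return None
```

If the neighborhood search fails (e.g., the user changed channels or jumped beyond the window), the caller would fall back to `full_search`, mirroring the restart described in the text.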
In another embodiment of the invention, a system and method are provided for classifying different types of primary content detection signals to determine whether the content being detected is truly unique or whether it is ambiguous. More specifically, in a fingerprint-based content identification model, the SSCP is inferring what the viewer is watching by comparing it with a library of primary content data, using for example a symmetric fingerprint, i.e., the client and server compute fingerprints using the same algorithm. While the SSCP may have scheduling information about what primary content is being broadcast, e.g., on TV, this information is often not accurate to the minute or second, and it does not identify where commercials will occur. Furthermore, without prior collaboration with the TV networks and advertisers, which can be difficult to arrange, the SSCP may not know beforehand the fingerprint signatures that the primary content will generate until the content is broadcast and publicly available. Finally, much of the primary content, such as TV content, is repetitive. TV shows are often broadcast multiple times, commercials are aired repetitively across many networks, and TV shows use theme songs which may occur many times on the show. As a result of these factors, it is often ambiguous what a viewer is watching based on fingerprints alone.
What is needed is a system that can predict whether a piece of content is truly identifying (unique) or whether it is ambiguous. In one embodiment, this determination is made based on live and accumulated broadcast content. If it is determined that something is identifying, then the related content is determined and sent to the client. However, if it is ambiguous, the second screen content provider defers providing related content until an identification can be made (it is no longer ambiguous).
According to one embodiment, the SSCP accumulates and classifies content as follows. The SSCP receives content (e.g., a live broadcast, streaming, DVD, file, etc.), and as the content comes in, the SSCP classifies the content as it is added to the library. In one embodiment, the SSCP labels and stores the content, attempting to classify it into one of a plurality of categories, e.g.:
The content may be classified into these and other categories based on a number of attributes, for example:
In another embodiment, the SSCP uses previously classified advertisements, or produces fingerprints from sample advertisements (e.g., provided by a third-party source) to locate the edges (i.e., beginning and ending) of a TV program. If the SSCP knows all of the advertisements in a program, then by subtraction the second screen content provider can find the program segments between those advertisements.
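The "subtraction" of recognized advertisement spans to recover program segments can be sketched as a simple interval computation (illustrative only; assumes the ad breaks are already recognized, sorted, and non-overlapping):

```python
def program_segments(total_seconds, ad_breaks):
    """ad_breaks: sorted, non-overlapping (start, end) spans in seconds
    where known advertisements were recognized. The program segments
    are whatever time remains between and around those spans."""
    segments, cursor = [], 0
    for start, end in ad_breaks:
        if start > cursor:
            segments.append((cursor, start))  # program content before this break
        cursor = max(cursor, end)
    if cursor < total_seconds:
        segments.append((cursor, total_seconds))  # trailing program content
    return segments
```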
There are a number of potential advantages that result from listening to and classifying live broadcasts. One advantage of classifying content into one of several buckets (e.g., advertising, programming) is that this helps to perform faster, more accurate content recognition (i.e., determine what content the viewer is watching on the primary device). Secondly, extracting information related to the broadcast (e.g., cast information, keywords, products, texts/titles on screen) provides insights into what additional (related) content to show the user.
For example, in one embodiment the SSCP utilizes multiple content streams, e.g., video, audio, closed captioning, to generate metadata that can be used to improve the quality of the matching and/or selection of related content process step(s). The SSCP backend servers (ingestion servers) have access to the video and closed captioning content of the programming (primary content) from which the SSCP can extract information (e.g., images from the video and text from the closed captioning). Also, the ingestion servers presumably receive a cleaner audio signal that the SSCP can use to extract additional information for use in generating related content. Typically, the client signal (detected from the primary device) will be noisy (e.g., include ambient noise in the user's environment such as noise generated by air conditioners, traffic, conversations, etc.). Preferably, a fingerprinting algorithm is used that can detect the more prominent distinctive characteristics (versus the noise) of the audio signal from the primary device, for use in the matching process. However, the fingerprint generated from the primary device may contain significantly less informational content than the stored fingerprint generated by the SSCP backend ingestion servers. Thus, by using the stored (cleaner) signal the SSCP can facilitate the matching process, e.g., utilize the metadata from the stored signal to assist in classifying the incoming detection signal and determining whether it is a unique or ambiguous signal (e.g., relates to an advertisement or is a repetitive segment relating to a theme song, etc.). Also, the stored metadata can be used for determining additional related content for sending to the second screen device.
The third-party providers 1804 may supply data from external sources to the SSCP data center 1802. The other data centers 1806 may share information directly or indirectly with the SSCP data center 1802, allowing for load balancing and rapid synchronization of user data. The other third-party related content providers and content delivery network 1808 includes commercial services that store and distribute web pages to users so that the data centers do not become bottlenecked, and/or provide related content that is selected by the SSCP for inclusion in the second screen web page 1810.
The previously described methods may be implemented in a suitable computing environment, e.g., in the context of computer-executable instructions that may run on one or more computers. In a distributed computing environment, for example, certain tasks are performed by remote processing devices that are linked through a communications network, and program modules may be located in both local and remote memory storage devices.
A computer may include a processing unit, a system memory, and a system bus, wherein the system bus couples the system components including, but not limited to, the system memory and the processing unit. A computer may further include disk drives and interfaces to external components. A variety of computer-readable media can be accessed by the computer, including both volatile and nonvolatile media, removable and non-removable media.
The second screen device may be a wired or wireless device enabling a user to enter commands and information into the second screen device via a touch screen, game pad, keyboard or mouse. In the disclosed embodiment, the second screen device includes an internal microphone for detecting the primary content being watched or heard by the user from the primary content device. The second screen device includes a monitor or other type of display device for viewing the second screen content. The second screen device may be connected to the SSCP via a global communications network, e.g., the Internet. The communications network may include a local area network, a wide area network or other computer network. It will be appreciated that the network connections shown herein are exemplary and other means of establishing communications between the computers may be used.
In one embodiment, a method of generating fingerprints and associated information of the primary content includes obtaining (e.g., licensing from one of several vendors) television broadcast schedule information. For a particular broadcast program or series of programs, time slices of the audio signal are captured along with the time of capture. For each audio signal time slice a fingerprint is produced, and with each fingerprint there is associated the program airing at the time of capture and the relative time (offset) within that program. Each fingerprint and its associated program and time offset information is stored in a data store (such as a database). The data store, and a program for generating the primary content fingerprints and associated information and for determining a match (between a detected primary content and the stored primary content), may be external to the second screen device as described in the embodiments of
Thus, when a second screen device (client) captures (detects) an audio portion of a primary content presented on a first screen device, and generates a fingerprint of that detected content, the detected fingerprint (and associated information, if any) is sent to the external fingerprint identification process for determining a match. Along with a match, the external service returns to the client the program and time offset information associated with the match, which again can be stored locally (on the second screen device) or remotely (external to the second screen device).
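The register/identify flow just described can be sketched as follows. This is a hypothetical stand-in for the external fingerprint identification process (the class and field names are assumptions): ingestion registers fingerprints of known primary content with their program and time-offset information, and a client queries with the fingerprint of detected content.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchResult:
    program_id: str
    offset_seconds: float  # relative time within the program at capture

class FingerprintIdentificationService:
    """Toy fingerprint identification service: maps each stored
    fingerprint to its program and time-offset information."""

    def __init__(self):
        self._store = {}  # fingerprint -> (program_id, offset_seconds)

    def register(self, fp, program_id, offset_seconds):
        """Ingestion side: store a known-content fingerprint."""
        self._store[fp] = (program_id, offset_seconds)

    def identify(self, detected_fp) -> Optional[MatchResult]:
        """Client side: return program and offset for a match, else None."""
        hit = self._store.get(detected_fp)
        return MatchResult(*hit) if hit else None

svc = FingerprintIdentificationService()
svc.register("fp-123", "ExampleShow S01E02", 754.0)
result = svc.identify("fp-123")
```

A real service would tolerate partial or approximate matches between the noisy client fingerprint and the stored fingerprint rather than requiring exact equality, as discussed above.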
In other embodiments, the metadata associated with a match can be further extended to include linking the program to other information such as a Twitter or Facebook conversation occurring around the program, articles or photos associated with the program, etc.
The primary content (first screen) device 1902 presents video programming to a user, which includes an audio signal transmitted via air 1913 that is captured (detected) by the microphone 1903 of the second screen device.
The microphone 1903 transmits the detected (optionally processed) audio signal on a communication channel 1914 to a fingerprint generation process 1910 for generating a fingerprint of the detected audio signal using the same fingerprinting algorithm used for generating fingerprints of known primary content. The fingerprinting process 1910 communicates via channel 1917 with an external identification server 1911 for determining a match. Once a match has occurred the process 1910 transmits on channel 1915 certain match data to data store 1904, such as the current (detected) primary content and time offset within the detected program. That data is then available via channel 1916 to an inter-process interface 1905, which communicates via channel 1918 with the browser process 1901 to determine an associated secondary (related) content for presentation on the second screen display 1908. The browser 1901 communicates on channel 1921 with an external web server 1906 for supplying via channel 1922 relevant secondary content web pages 1907. The web pages sent to the browser 1901 can then be associated with the current program and position (time offset) information for presentation on the display screen 1908, e.g., as a series of time synchronized pages.
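The final step above, associating web pages with the current program and time offset for presentation as a series of time-synchronized pages, can be illustrated with a simple schedule lookup. All class, method, and URL names below are illustrative assumptions; this is a sketch of the concept, not the platform's implementation.

```python
import bisect

class SecondaryContentSchedule:
    """Maps a matched program and time offset to the related-content
    page scheduled at that point, enabling a series of
    time-synchronized pages on the second screen display."""

    def __init__(self):
        self._pages = {}  # program_id -> sorted list of (start_offset, url)

    def add_page(self, program_id, start_offset, url):
        """Schedule a secondary-content page from start_offset onward."""
        bisect.insort(self._pages.setdefault(program_id, []),
                      (start_offset, url))

    def page_for(self, program_id, offset):
        """Return the page whose start offset most recently preceded
        the matched offset, or None before the first page."""
        entries = self._pages.get(program_id, [])
        starts = [s for s, _ in entries]
        i = bisect.bisect_right(starts, offset) - 1
        return entries[i][1] if i >= 0 else None

sched = SecondaryContentSchedule()
sched.add_page("movie-42", 0, "https://example.com/cast")
sched.add_page("movie-42", 120, "https://example.com/soundtrack")
```

As the detected time offset advances, repeated calls to `page_for` naturally step through the scheduled pages in sync with the primary content.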
In one embodiment, the inter-process communication interface 1905 is a JavaScript library, a browser plug-in, or other interface. The web browser 1901 receives web pages from web server 1906 in response to various events (e.g., an application starting, an event on the second screen device, or some user interaction on the second screen display), and executes the JavaScript or other dynamic web scripts that are embedded on the page. One of these scripts may call for an interaction with the custom JavaScript library that accesses the information stored by the fingerprinting process. The browser then takes an action conditional on this information, which may include presenting current information differently, or retrieving new secondary content from the same web server 1906 or from another web server. The browser may then present the secondary content to the user via the display screen 1908.
More specifically, system 2000 is illustrated including a second screen device 2012 and external thereto a primary content device 2002, fingerprint identification server 2011, web server 2006 and secondary content web page 2007, all of which are comparable to the similarly defined elements in
In the embodiment of
The embodiment illustrated in
Various classifications can be used to simplify and/or speed the identification (matching) process, for example:
The use of such classifications may also vary. In one embodiment of the classification process, if the current content matches a prior fingerprint that has already been labeled as an advertisement, then the current content is also classified as an advertisement. In another embodiment, if the current content matches a different episode of the same program, it is labeled as a theme song or repetitive program element. In another embodiment, if the current content does not match any prior fingerprints, it is classified as unique programming.
At a later time, when content is received from a second screen device 2425, a matching process 2426 attempts to find a match within the fingerprint store 2422. If a match occurs, the matching engine retrieves the classification information associated with the same content from within the classification store 2424. In one embodiment, if the matching engine finds a match within content classified as an advertisement, it may communicate to the second screen device that, for example, no match has occurred (assuming this is not a primary content of interest to the user of the second screen device), or alternatively that a match to an advertisement has occurred. In another embodiment, if the matching engine finds a match with a theme song, it may communicate to the second screen device that a known program is being viewed, but it is not clear which episode is being viewed.
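The classification-aware matching behavior described above can be sketched as a single decision function. The status strings and store layouts are illustrative assumptions used only to make the branching concrete.

```python
def classified_match(fp, fingerprint_store, classification_store):
    """Return a match report that depends on the stored classification.
    fingerprint_store: fingerprint -> (program_id, offset)
    classification_store: fingerprint -> label"""
    hit = fingerprint_store.get(fp)
    if hit is None:
        return {"status": "no_match"}
    label = classification_store.get(fp, "unique_programming")
    if label == "advertisement":
        # Per the embodiment above, this could instead be reported
        # to the client as "no_match".
        return {"status": "advertisement_match"}
    if label == "theme_song":
        # Program is known, but the specific episode is not.
        program_id, _ = hit
        return {"status": "program_known_episode_unknown",
                "program_id": program_id}
    program_id, offset = hit
    return {"status": "match", "program_id": program_id, "offset": offset}

fps = {"a": ("ShowX", 10.0), "b": ("ShowX", 0.0), "c": ("FilmY", 99.0)}
cls = {"a": "advertisement", "b": "theme_song"}
```

Keeping classification separate from identification lets the same fingerprint store serve different reporting policies per client.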
In one or more embodiments described herein, various programming languages can be used as would be apparent to those skilled in the art for accomplishing the functionality described herein. By way of example only, where the second screen device comprises a mobile or tablet device, suitable programming languages may include Objective C, C++, and Java. For Internet services, suitable programming languages may include Java, Python, MySQL and Perl. For ingesting the primary content (e.g., capturing and generating audio fingerprints), the programming languages may include Java, Python, and MySQL.
The primary content source 2501 consists of an audio signal 2502, a video signal 2505, and a unique primary content (PC) identifier (ID) and current timecode 2516. The audio signal 2502 is fingerprinted 2503 and the resulting fingerprints, PC ID, and timecode 2517 are stored in a fingerprint identification database 2504. Simultaneous to the audio process, the video signal 2505 and current PC ID and timecode 2516 are captured and processed 2506. Short segments of video, still images, PC ID, and timecode 2518 are stored in a video and image database 2507.
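The parallel audio and video ingestion paths just described can be summarized in one toy function; the store layouts and names are assumptions for illustration, standing in for the fingerprint identification database 2504 and the video and image database 2507.

```python
def ingest_primary_content(audio_slices, video_captures, pc_id, fingerprint_fn):
    """Toy ingestion step: fingerprint each timed audio slice into the
    fingerprint store, and file each captured still or short clip into
    the video/image store, both keyed by PC ID and timecode.

    audio_slices:   iterable of (timecode, audio_data)
    video_captures: iterable of (timecode, media)"""
    fingerprint_db = {}  # fingerprint -> (pc_id, timecode)
    media_db = []        # (pc_id, timecode, media)
    for timecode, audio in audio_slices:
        fingerprint_db[fingerprint_fn(audio)] = (pc_id, timecode)
    for timecode, media in video_captures:
        media_db.append((pc_id, timecode, media))
    return fingerprint_db, media_db

# Hypothetical usage with a stub fingerprint function.
fp_db, media_db = ingest_primary_content(
    [(0, "aud0"), (10, "aud1")],
    [(0, "frame0.jpg"), (10, "clip1.mp4")],
    "pc-7",
    fingerprint_fn=lambda a: "fp-" + a)
```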
A second screen device 2530 is used in conjunction with a primary content source 2520. Audio 2519 is received by the second screen device and fingerprinted 2509. The second screen sends fingerprints 2521 to audio identification servers 2511 of the SSCP. If a match is found, a corresponding PC ID and timecode 2522 are returned. The same PC ID and timecode are then sent (2523) to the video and image database 2507 of the SSCP, and an associated set of videos and/or images from that PC and neighboring timecodes are returned 2524.
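The "neighboring timecodes" lookup above can be sketched as a window query against the media store; the window size and store layout are illustrative assumptions.

```python
def neighboring_media(media_db, pc_id, timecode, window=30):
    """Given a matched PC ID and timecode, return stored stills/clips
    whose timecodes fall within `window` seconds of the match."""
    return [media for (pid, tc, media) in media_db
            if pid == pc_id and abs(tc - timecode) <= window]

# Hypothetical media store: (pc_id, timecode, media) tuples.
db = [("pc-7", 0, "a.jpg"), ("pc-7", 25, "b.jpg"),
      ("pc-7", 200, "c.jpg"), ("pc-9", 10, "d.jpg")]
nearby = neighboring_media(db, "pc-7", 10)
```

Returning media from a window around the match, rather than the exact timecode only, gives the user shareable stills and clips from the scene currently on screen.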
The second screen may then display these images and videos to a user 2513. The user may then select an image or video, add a comment, and share them (2514). The image or video and comment may then be delivered to other people via email, Twitter, Facebook or other similar communications protocol or social network 2515. In various embodiments, the invention includes:
What has been described above includes examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art may recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the present disclosure and/or claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” when employed as a transitional word in a U.S. claim.
Number | Date | Country
---|---|---
61535511 | Sep 2011 | US