It is increasingly common for television viewers to watch a show while using a computing device. Frequently, viewers search the Internet for content related to the show to extend the entertainment experience. In view of the vast amount of information available on the Internet, it can be difficult for the viewer to find content specifically related to the television show the viewer is watching at a particular instant. Further, because the viewer's attention may be distracted from the show while searching for relevant content, the viewer may miss exciting developments in the television show, potentially spoiling the viewer's entertainment experience.
Embodiments related to distributing an identity of a video item being presented on a video presentation device within a video viewing environment to applications configured to obtain content related to the video item are provided. In one example embodiment, an alert is provided by determining an identity of the video item currently being presented on the video presentation device, and, responsive to a trigger, transmitting the identity of the video item while the video item is being presented on the video presentation device. The identity may then be received by a receiving device and used to obtain supplemental content.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Viewers may enjoy viewing supplementary content (like web content) that is contextually related to video content while the video content is being watched. For example, a viewer may enjoy finding trivia for an actor while watching a movie, sports statistics for a team while watching a game, and character information for a television series while watching an episode of that series. However, the act of searching for such content may distract the viewer, who may miss out on part of the video content due to having to manually enter search terms and sort through search results, or otherwise manually navigate to content.
Thus, the disclosed embodiments relate to facilitating the retrieval and presentation of such supplemental information by transmitting an identity of a video item being presented on a device in a viewing environment to one or more applications configured to present such supplemental information. The identity of the video content item and/or a particular scene or other portion of the video content item may be determined and transmitted by an identity transmission service to a receiving application registered with the identity transmission service. Upon receipt of the identity, the receiving application may fetch related content and present it to the viewer. Thus, the viewer is presented with potentially interesting related content at a lower search burden. It will be understood that, in various embodiments, the receiving application may be on a different device or the same device as the identity transmission service.
The identity of the video content item may be determined in any suitable manner. For example, in some situations, an identifier may be included with a video item upon creation of the video item in the form of metadata that contains identity information in some format recognizable by the identity transmission service. As a more specific example, a television network that broadcasts a series over cable, satellite, or other television transmission medium may include metadata with the transmission that is readable by a set-top box, an application running on a media presentation computer, or other media presentation device, to determine an identification of the broadcast. The format of such metadata may be proprietary, or may be an agreed-upon format utilized by multiple unrelated entities.
The identity information may include any suitable information about the associated video item. For example, the identity information may identify particular scenes within the video item, in addition to the video content item as a whole. As a more specific example, a particular scene may include actors and/or objects specific to that scene that may not appear in other portions of the video content item. Therefore, the transmission of such identity information may allow a device that receives the identity information to fetch information related to that particular scene while the scene is playing.
In other cases, a video content item may lack such identification metadata. For example, as a television program is syndicated, adapted into different languages, or adapted for different formats (broadcast as opposed to streaming, for example), the media content item may be edited. Such editing may involve shortening the content by removing frames, whether at the opening or closing credits or within the content itself. Thus, any identification metadata that is associated with a particular scene in the video content may be lost if such edits are made. Furthermore, at times, a clip of a video content item may be presented separately from the rest of the video content item.
In light of such issues, and considering the proliferation of video clips on the Internet, a snippet taken from a longer video item may be extremely difficult to identify in an automated fashion once set adrift from its identifier. As a consequence, an application seeking to automatically obtain supplemental content related to a video item being viewed may be unable to identify the video item in many situations. Indeed, even a human viewer, much less an automated identity transmission service, may have a difficult time identifying such clips.
To overcome such difficulties, in some embodiments, video fingerprinting technologies may be used to build a digital fingerprint for a video item, or for a portion of a video item, from the content itself. Later, the digital fingerprint may be detected and identified, and an alert may be transmitted to the application so that the application may obtain related content. The “fingerprint” of a video item may be identified based on patterns detected in one or more of a video signal and/or an audio signal for the video item. For example, color and/or motion tracking techniques may be used to identify variations between selected frames in the video signal, and the result of such tracking may provide an extracted video fingerprint, either for an overall video item or for a specific scene in the video item (such that multiple scenes are fingerprinted). A similar approach may be used for an audio signal. For example, audio features (e.g., sound frequency, intensity, and duration) may be tracked, providing an extracted audio fingerprint. In other words, fingerprinting techniques extract perceptible characteristics of the video item (like the visual and/or audible characteristics that human viewers and listeners use to identify such items) when building a digital fingerprint for a video item. Consequently, fingerprinting techniques may overcome potential variations in a video and/or audio signal resulting from video items that may have been modified during editing (e.g., from compression, rotation, cropping, frame reversal, insertion of new elements, etc.). Given the ability to potentially identify video items despite such alterations, a viewer encountering an unknown video item may still discover supplementary content related to the video item and/or scenes in a video item, potentially enriching the viewer's entertainment experience.
Once constructed, the digital fingerprints may be stored in a database so that they may be accessed for identification in response to a request to identify a particular video item in real time. Further, in some embodiments, such a database may be used as a clearinghouse for licensing rights to enable the tracking of reproduction and/or presentation of video content items virtually independent of the format into which the video item may eventually be recorded.
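The fingerprint-and-lookup flow described above can be sketched as follows. This is a deliberately coarse illustration, not the disclosed fingerprinting algorithm: frames are modeled as lists of luminance samples, the per-frame signature is a small histogram-derived bit pattern, and the bucket count and matching threshold are arbitrary assumptions. Real systems would use far more robust perceptual features.

```python
# Hypothetical sketch: building a coarse video fingerprint from per-frame
# luminance histograms, then matching it against a fingerprint database.
# All parameters (bucket count, distance threshold) are illustrative.

from typing import Dict, List, Optional

def frame_signature(frame: List[int], buckets: int = 4) -> int:
    """Reduce one frame (a list of 0-255 luminance samples) to a small bit
    pattern: one bit per histogram bucket, set when that bucket holds an
    above-average share of the samples."""
    hist = [0] * buckets
    for px in frame:
        hist[min(px * buckets // 256, buckets - 1)] += 1
    mean = len(frame) / buckets
    sig = 0
    for i, count in enumerate(hist):
        if count > mean:
            sig |= 1 << i
    return sig

def fingerprint(frames: List[List[int]]) -> List[int]:
    """A fingerprint here is simply the sequence of per-frame signatures."""
    return [frame_signature(f) for f in frames]

def hamming(a: List[int], b: List[int]) -> int:
    """Total number of differing bits between two fingerprints."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def identify(sample: List[int],
             database: Dict[str, List[int]],
             max_distance: int = 2) -> Optional[str]:
    """Return the identity whose stored fingerprint is closest to the
    sampled one, or None if nothing is close enough. Tolerating a small
    distance is what lets edited or re-encoded items still match."""
    best, best_dist = None, max_distance + 1
    for identity, stored in database.items():
        d = hamming(sample, stored)
        if d < best_dist:
            best, best_dist = identity, d
    return best
```

Because matching is by nearest fingerprint within a tolerance rather than by exact equality, a lightly edited clip can still resolve to its source item, which is the property the paragraph above relies on.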
Video viewing environment sensor system 112 provides sensor data collected from video viewing environment 100 to media computing device 106. Video viewing environment sensor system 112 may include any suitable sensors, including but not limited to one or more image sensors, depth sensors, and/or microphones or other acoustic sensors. Further, in some embodiments, sensors that reside in other devices than video viewing environment sensor system 112 may be used to provide input to media computing device 106. For example, in some embodiments, an acoustical sensor included in a mobile computing device 105 (e.g., a mobile phone, a laptop computer, a tablet computer, etc.) held by viewer 116 within video viewing environment 100 may collect and provide sensor data to media computing device 106. It will be appreciated that the various sensor inputs described herein are optional, and that some of the methods and processes described herein may be performed in the absence of such sensors and sensor data.
First, method 200 comprises, at 202, registering an application with an identity transmission service. The identity transmission service may act like a beacon, transmitting the identity of the video item to registered applications so that the applications may then obtain suitable related content. Further, such transmission may be repeated on a desired time interval so that mobile devices of later-joining viewers also may receive the identity information. The identity transmission service also may provide identity information when requested, instead of as a beacon.
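The register-then-notify behavior described above might be sketched as follows. The class and method names are illustrative assumptions; the disclosure does not prescribe a particular API. The sketch also replays the current identity to late-registering applications, mirroring the repeated-beacon behavior for later-joining viewers.

```python
# Minimal sketch of an identity transmission service acting as a beacon
# to registered applications. Names and payload shape are hypothetical.

from typing import Callable, Dict, List

class IdentityTransmissionService:
    def __init__(self) -> None:
        self._listeners: List[Callable[[Dict], None]] = []
        self._current_identity: Dict = {}

    def register(self, on_identity: Callable[[Dict], None]) -> None:
        """Register an application callback; immediately replay the current
        identity so a late-joining viewer's device is not missed."""
        self._listeners.append(on_identity)
        if self._current_identity:
            on_identity(self._current_identity)

    def announce(self, identity: Dict) -> None:
        """Beacon-style transmission: push the identity of the item now
        playing to every registered application."""
        self._current_identity = identity
        for listener in self._listeners:
            listener(identity)
```

A registered application would supply a callback that, on each announcement, fetches content related to the announced identity.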
Any suitable application may register with the identity transmission service. For example, some viewers may use a mobile computing device while watching another display device to access supplementary content about the video item being watched. Therefore, process 202 may comprise, at 204, registering an application on the mobile device with the identity transmission service. Likewise, in some cases, an application (e.g., a web browser) running on the same device used to present the primary video item may be used to obtain supplemental content. As such, process 202 may comprise registering an application on the same device as that used to present the primary video content. In another example, the application may be a digital rights management application configured to obtain digital rights to the video item from a digital rights clearinghouse based on the video item's identity, the related content including appropriate licenses for the video item.
At 206, method 200 includes receiving a request to play the video item. The request may be received from the registered application, or from any suitable device, without departing from the scope of the present disclosure.
Responsive to the request, the video content item is presented. Method 200 then includes, at 208, determining an identity of the video item currently being presented on the video presentation device. As used herein, the identity includes any information that may be used to identify the video item. For example, in some embodiments, 208 may include, at 210, determining the identity from a digital fingerprint of the video item. As described above, such a “fingerprint” of a video item may be identified based on patterns detected in one or more of a video signal and/or an audio signal for the video item, and therefore may be used even for video content items having no identification information, including but not limited to edited or derivative versions of a video content item in which identity information has been removed.
In one scenario, the identity may be determined from a digital fingerprint of the video item by collecting sound data from an audio signal included in an audio track for the video item and identifying the digital fingerprint based on the sound data.
In other embodiments, as indicated at 212, the identity may be determined from metadata that is included with the video content item. The metadata may specify any suitable information, including but not limited to a universal identifier (e.g. a unique code for a particular video item and/or a particular scene in a particular item) that may be directly used to identify relevant content, and/or used to look up the video item in a database to retrieve title and other relevant information, such as actors appearing in the item, directors and filming locations related to the item, trivia for the item, and so on. Likewise, in some embodiments, the identifier may include text metadata that is human-readable and/or directly enterable into a search engine by a receiving application, and may include information such as show name, series number, season number, episode number, episode name, and the like.
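As a small illustration of the human-readable text metadata mentioned above, a receiving application might fold such fields into a query string for a search engine. The field names here are hypothetical assumptions, not a standardized schema from the disclosure.

```python
# Illustrative sketch: turning assumed text-metadata fields (show name,
# season number, episode number, episode name) into a search query a
# receiving application could submit to obtain related content.

def identity_to_query(identity: dict) -> str:
    parts = [identity.get("show_name", "")]
    season = identity.get("season_number")
    episode = identity.get("episode_number")
    if season is not None and episode is not None:
        # Conventional SxxEyy shorthand for a specific episode.
        parts.append(f"S{season:02d}E{episode:02d}")
    if identity.get("episode_name"):
        # Quote the episode title to keep it as a phrase.
        parts.append(f'"{identity["episode_name"]}"')
    return " ".join(p for p in parts if p)
```

For example, a metadata record naming a show, season 2, episode 5, and an episode title would yield a query like `Example Show S02E05 "Pilot"`.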
Identity metadata may be included with a video item upon creation (including the creation of a derivative version of the video item), and/or sent as supplemental content by a content provider or distributor, such as a digital content identifier sent by a cable or satellite television provider to a set-top box. Where stored during the initial creation of a video item or video item version, the metadata may have a proprietary format or a more widely-used format. Likewise, where the metadata is provided as supplemental content by a content provider or distributor, the identity metadata may be transmitted continuously during transmission of the associated video item, periodically, or in any other suitable manner.
The video item identity may be transmitted in any suitable manner. For example, in some embodiments, the identity may be transmitted to the application via a peer-to-peer network connection at 222.
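A peer-to-peer handoff of the identity could be sketched with ordinary sockets, as below. The host, port, and JSON payload shape are illustrative assumptions; any direct device-to-device transport would serve.

```python
# Hypothetical sketch: a transmitting device serves the video item
# identity as JSON over a direct socket connection, and a receiving
# device (e.g., a viewer's mobile device) connects and reads it.

import json
import socket
import threading

def serve_identity(identity: dict, host: str = "127.0.0.1", port: int = 0) -> int:
    """Listen for one connection and send the identity as JSON.
    Returns the port actually bound (useful when port=0)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def _run() -> None:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(json.dumps(identity).encode("utf-8"))
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return bound_port

def receive_identity(host: str, port: int) -> dict:
    """Connect to the transmitting device and read the identity."""
    with socket.create_connection((host, port)) as conn:
        data = b""
        while chunk := conn.recv(4096):
            data += chunk
    return json.loads(data.decode("utf-8"))
```

In practice the transmitting side would be the media computing device or set-top box, and the receiving side the registered application; discovery of the host and port is left out of this sketch.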
In other embodiments, the identity may be transmitted to one or more applications via a server computing device networked with the computing device and application, respectively.
In yet other embodiments, the identity may be transmitted to the mobile computing device and/or the application at 226 via a local light and/or sound transmission. For example, an ultrasonic signal encoding the identity may be output by an audio presentation device into the video viewing environment, where it is received by an audio input device connected with a viewer's mobile computing device. It will be appreciated that any suitable sound frequency may be used to transmit the identity without departing from the scope of the present disclosure. Further, it will be appreciated that, in some embodiments, the identity may be transmitted to the mobile computing device via an optical communications channel. In one non-limiting example, a visible light encoding of the identity may be output by the video presentation device for receipt by an optical sensor connected with the mobile device during presentation, in a manner such that the encoded identity is not perceptible by a viewer. Likewise, identity information may be transmitted via an infrared communication channel provided by an infrared beacon on a display device or media computing device.
In yet other embodiments, as indicated at 228, the identity may be transmitted to a supplementary content presentation module on the same computing device. In other words, the identity may be detected at one module on a computing device where the video item is being presented and transmitted to a supplementary content module on the same computing device so that contextually-related content may be presented on the same computing device. In one specific embodiment, the identity transmission service may be implemented as an operating system component that automatically determines the identification of video content items being presented, and then provides the identifications to applications registered with the identity transmission service.
The supplementary content module 310 may display the supplementary content in any suitable manner, including but not limited to in a different display region of a video presentation device on which the video item is being displayed, as a partially transparent overlay over the video item, etc. For example, sidecar links spawned by a web browser may be presented in a display region next to a display region where the video presentation module is displaying the video item.
The transmission examples provided above are not intended to be limiting, and it will be appreciated that combinations of computing devices running services from any suitable combination of service providers may be employed without departing from the scope of the present disclosure. For example, a user may have a cable service with a set-top-box provider and a web service with a separate online service provider. In such an instance, the user's mobile device may use an application programming interface (API) provided by the cable service (or any suitable API provider) to communicate with a set-top-box or other transmitting device and receive video item identities. Once identified, the mobile device may then obtain contextually-related supplemental content from the web.
At 232, method 200 includes performing a software event based on the video item identity.
It will be appreciated that the application may perform other tasks associated with obtaining the related content. For example, in some embodiments, the application may provide analytical data about the content the viewer received to an analytical service. As a more specific example, in the case of digital rights management applications, analytical data may be provided to a digital rights management service and used to track license compliance and manage royalty payments. Further, in the case of web services, page view analytics may be tracked and fed to advertisers to assist in tracking clickthrough rates on advertisements sent with the contextually related content. For example, tracking clickthrough rates as a function of scene-specific video item identity may help advertisers understand market segments comparatively better than approaches that are unconnected with video item identity information.
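The scene-level clickthrough analysis mentioned above might be tallied as follows. The event record format (item identifier, scene identifier, clicked flag) is a hypothetical assumption used purely for illustration.

```python
# Sketch of aggregating ad clickthrough rates per (video item, scene)
# identity, so that rates can be compared across scenes of an item.
# The (item_id, scene_id, clicked) record shape is an assumption.

from collections import defaultdict
from typing import Dict, Iterable, Tuple

def clickthrough_rates(
    events: Iterable[Tuple[str, int, bool]]
) -> Dict[Tuple[str, int], float]:
    """events: (item_id, scene_id, was_clicked) records, one per ad shown.
    Returns a clickthrough rate keyed by (item_id, scene_id)."""
    shown: Dict[Tuple[str, int], int] = defaultdict(int)
    clicked: Dict[Tuple[str, int], int] = defaultdict(int)
    for item_id, scene_id, was_clicked in events:
        key = (item_id, scene_id)
        shown[key] += 1
        if was_clicked:
            clicked[key] += 1
    return {key: clicked[key] / shown[key] for key in shown}
```

Keying the aggregation by scene-specific identity, rather than by video item alone, is what would let an advertiser compare how the same advertisement performs against different scenes.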
In some embodiments, the above described methods and processes may be tied to a computing system including one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
Computing system 300 includes a logic subsystem 302 and a data-holding subsystem 304. Computing system 300 may optionally include a display subsystem, communication subsystem, and/or other components not shown.
Logic subsystem 302 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
Logic subsystem 302 may include one or more processors that are configured to execute software instructions. Additionally or alternatively, logic subsystem 302 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of logic subsystem 302 may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. Logic subsystem 302 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of logic subsystem 302 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 304 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by logic subsystem 302 to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 304 may be transformed (e.g., to hold different data).
Data-holding subsystem 304 may include removable media and/or built-in devices. Data-holding subsystem 304 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 304 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 302 and data-holding subsystem 304 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
It is to be appreciated that data-holding subsystem 304 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 300 that is implemented to perform one or more particular functions. In some cases, such a module, program, or engine may be instantiated via logic subsystem 302 executing instructions held by data-holding subsystem 304. It is to be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” are meant to encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It is to be appreciated that a “service”, as used herein, may be an application program executable across multiple user sessions and available to one or more system components, programs, and/or other services. In some implementations, a service may run on a server responsive to a request from a client.
When included, a display subsystem may be used to present a visual representation of data held by data-holding subsystem 304. As the herein described methods and processes change the data held by data-holding subsystem 304, and thus transform the state of data-holding subsystem 304, the state of the display subsystem may likewise be transformed to visually represent changes in the underlying data. A display subsystem may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 302 and/or data-holding subsystem 304 in a shared enclosure, or such display devices may be peripheral display devices.
When included, a communication subsystem may be configured to communicatively couple computing system 300 with one or more other computing devices. A communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem may allow computing system 300 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.