The invention is particularly applicable to a Social Media Platform in which the source of the original content is a broadcast television signal, and it is in this context that the invention will be described. It will be appreciated, however, that the system and method have greater utility since they can be used with a plurality of different types of original source content.
The ecosystem of the Social Media Platform may include primary sources of media, generative media, participatory media, generative programming, parallel programming, and accessory devices. The Social Media Platform uses the different sources of original content to create generative media, which is made available through generative programming and parallel programming (when published in parallel with the primary source of original content). The generative media may be any network-connected media that is generated based on the media coming from the primary sources. The generative programming is the way the generative media is exposed for consumption by an internal or external system. The parallel programming is achieved when the generative programming is contextually synchronized and published in parallel with the transmitted media (the source of original content). The participatory media means that third parties can produce generative media, which can be contextually linked and tuned with the transmitted media. The accessory devices of the Social Media Platform and the parallel programming experience may include desktop or laptop PCs, mobile phones, PDAs, wireless email devices, handheld gaming units and/or PocketPCs, which serve as the new remote controls.
The contextual content source 12 may include different types of contextual media, including text, images, audio, video, advertising and commerce (purchasing), as well as third party content such as publisher content (for example, Time, Inc. XML), web content, consumer content, advertiser content and retail content. An example of an embodiment of the user interface of the contextual content source is described below with reference to
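Purely for illustration, the following is a minimal sketch of how an individual piece of contextual media from the contextual content source 12 might be represented in software; the ContextualItem class, its field names and the example values are hypothetical assumptions and do not form part of the described embodiment.

    # Hypothetical sketch: one way to model a piece of contextual content
    # supplied by the contextual content source 12. Names and fields are
    # illustrative assumptions, not part of the described embodiment.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ContextualItem:
        media_type: str          # "text", "image", "audio", "video", "advertising", "commerce"
        source: str              # e.g. "publisher", "web", "consumer", "advertiser", "retail"
        payload: str             # URL or inline content for the item
        keywords: list = field(default_factory=list)   # subject-matter tags used for matching
        expires_at: Optional[float] = None              # optional time after which the item is stale

    # Example: a hypothetical publisher article offered as contextual content.
    article = ContextualItem(
        media_type="text",
        source="publisher",
        payload="https://example.com/article-about-topic",
        keywords=["baseball", "playoffs"],
    )
    print(article)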
The original/primary content source 10 is fed into a media transcriber 13 that extracts information from the original content source; the extracted information is fed into a social media platform 14 that contains an engine and an API for the contextual content and the users. The Social Media Platform 14 then extracts, analyzes, and associates the generative media (shown in more detail in
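The flow from the original content source through the transcriber and into the platform can be pictured, again purely for illustration, as the following pipeline sketch; the function names (transcribe, analyze, associate, publish) and their toy behavior are assumptions made for explanation and are not the actual implementation.

    # Hypothetical end-to-end sketch of the flow described above: a segment of
    # original content is transcribed, analyzed for subject matter, associated
    # with contextual content, and published alongside the original content.
    # All names and behaviors here are illustrative assumptions.

    def transcribe(segment: dict) -> str:
        """Stand-in for the media transcriber 13: pull out the textual data
        (e.g. closed captions) carried with a segment of original content."""
        return segment.get("closed_captions", "")

    def analyze(text: str) -> list:
        """Stand-in for the analysis step: reduce the text to subject-matter keywords."""
        return [word.lower().strip(".,!?") for word in text.split() if len(word) > 3]

    def associate(keywords: list, catalog: list) -> list:
        """Stand-in for the association step: pick contextual items whose tags
        overlap the keywords extracted from the original content."""
        return [item for item in catalog if set(item["tags"]) & set(keywords)]

    def publish(segment: dict, contextual_items: list) -> dict:
        """Stand-in for the publishing step: bundle contextual items with the
        timestamp of the original segment so they can be shown in parallel."""
        return {"timestamp": segment["timestamp"], "contextual": contextual_items}

    # Example run with made-up data.
    segment = {"timestamp": 12.0, "closed_captions": "Bottom of the ninth inning at the ballpark"}
    catalog = [{"tags": ["ninth", "inning"], "payload": "stats-widget"},
               {"tags": ["weather"], "payload": "forecast-widget"}]
    print(publish(segment, associate(analyze(transcribe(segment)), catalog)))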
The social media platform uses linear broadcast programming (the original content) to generate participative, parallel programming (the contextual/secondary content) wherein the original content and the secondary content may be synchronized and delivered to the user. The social media platform enables viewers to jack into broadcasts to tune and publish their own content. The social media platform also extends the reach of advertising and integrates communication, community and commerce.
The social media platform 14, in this embodiment, may be a computer implemented system that has one or more units (on the same computer resources, such as servers, or spread across a plurality of computer resources) that provide the functionality of the system, wherein each unit may have a plurality of lines of computer code, executed by the computer resource on which the unit is located, that implement the processes, steps and functions described below in more detail. The social media platform 14 may capture data from the original content source and analyze the captured data to determine the context/subject matter of the original content, associate the data with one or more pieces of contextual data that are relevant to the original content based on the determined context/subject matter of the original content, and provide the one or more pieces of contextual data to the user synchronized with the original content. The social media platform 14 may include an extraction unit 22 that performs extraction functions and steps, an analysis unit 24 that performs an analysis of the extracted data from the original source, an association unit 26 that associates contextual content with the original content based on the analysis, a publishing unit 28 that publishes the contextual content in synchronism with the original content, and a participatory unit 30. The extraction unit 22 captures the digital data from the original content source 10 and extracts or determines information about the original content based on an analysis of the original content. The analysis may occur through keyword analysis, context analysis, visual analysis and speech/audio recognition analysis. For example, the digital data from the original content may include closed captioning information or metadata associated with the original content that can be analyzed for keywords and context to determine the subject matter of the original content. As another example, the image information in the original content can be analyzed by a computer, such as by video optical character recognition to text conversion, to generate information about the subject matter of the original content. Similarly, the audio portion of the original content can be converted using speech/audio recognition to obtain a textual representation of the audio. The extracted closed captioning and other textual data are fed to an analysis component which is responsible for extracting the topic and the meaning of the context. The extraction unit 22 may also include a mechanism to address an absence or lack of closed caption data in the original content and/or a mechanism for addressing too much data, which may be known as “informational noise.”
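As an illustrative sketch only of the kind of processing the extraction unit 22 might perform on closed caption text, the following assumes a trivial stop-word list to stand in for the handling of informational noise and a simple metadata fallback for missing captions; none of these names or choices are part of the described embodiment.

    # Hypothetical sketch of closed-caption keyword extraction with simple
    # handling of missing captions and of "informational noise" (stop words).
    # Names, the stop-word list, and the fallback are illustrative assumptions.
    import re

    STOP_WORDS = {"the", "and", "with", "that", "this", "from", "have"}  # toy noise filter

    def extract_keywords(segment: dict) -> list:
        """Return candidate subject-matter keywords for a segment of original content."""
        text = segment.get("closed_captions")
        if not text:
            # No closed caption data: fall back to whatever textual metadata is present.
            text = segment.get("metadata", "")
        words = re.findall(r"[a-zA-Z']+", text.lower())
        # Drop short tokens and stop words to reduce informational noise.
        return [w for w in words if len(w) > 3 and w not in STOP_WORDS]

    print(extract_keywords({"closed_captions": "The pitcher winds up and throws a fastball"}))
    # -> ['pitcher', 'winds', 'throws', 'fastball']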
Once the keywords/subject matter/context of the original content is determined, that information is fed into the analysis unit 24, which may include a contextual search unit. The analysis unit 24 may perform one or more searches, such as database searches, web searches, desktop searches and/or XML searches, to identify contextual content in real time that is relevant to the particular subject matter of the original content at the particular time. The resultant contextual content, also called generative media, is then fed into the association unit 26 which generates the real-time contextual data for the original content at that particular time. As shown in
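The contextual search can be pictured as a fan-out over several search backends whose results are merged; in the illustrative sketch below, the backends are in-memory stand-ins for the database, web, desktop and XML searches, and all names and structures are assumptions rather than the actual search mechanism.

    # Hypothetical sketch of the contextual search: query several sources for
    # content relevant to the extracted keywords and merge the results.
    # The "backends" here are in-memory stand-ins for database/web/desktop/XML
    # searches; names and structures are illustrative assumptions.

    def search_backend(name: str, index: dict, keywords: list) -> list:
        """Return hits from one stand-in backend for the given keywords."""
        hits = []
        for keyword in keywords:
            for payload in index.get(keyword, []):
                hits.append({"source": name, "payload": payload})
        return hits

    def contextual_search(keywords: list, backends: dict) -> list:
        """Fan out over all backends and merge their hits, dropping duplicates."""
        merged, seen = [], set()
        for name, index in backends.items():
            for hit in search_backend(name, index, keywords):
                key = (hit["source"], hit["payload"])
                if key not in seen:
                    seen.add(key)
                    merged.append(hit)
        return merged

    backends = {
        "database": {"fastball": ["pitch-speed-stats"]},
        "web":      {"pitcher": ["pitcher-bio-page"], "fastball": ["rules-explainer"]},
    }
    print(contextual_search(["pitcher", "fastball"], backends))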
The participatory unit 30 may be used to add other third party/user contextual data into the association unit 26. The participatory contextual data may include user publishing information (information/content generated by the user or a third party), user tuning (permitting the user to tune the contextual data sent to the user) and user profiling (permitting the user to create a profile that will affect the contextual data sent to the user). An example of the user publishing information may be a voiceover recorded by the user that is then played over the muted original content. For example, a user who is a baseball fan might do the play-by-play for a game and then play that play-by-play while the game is being shown with the audio of the original announcer muted, which may be known as fan casting.
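For illustration, the fan-casting and tuning behaviors described above can be sketched as a simple selection between audio tracks combined with a profile-based filter over the contextual items; every name, field and rule in this sketch is a hypothetical assumption rather than the described implementation.

    # Hypothetical sketch of two participatory features described above:
    # (1) fan casting, where the original announcer audio is muted and a user's
    #     voiceover is played instead, and
    # (2) user tuning/profiling, where a profile filters the contextual items shown.
    # All names and logic are illustrative assumptions.

    def select_audio(original_audio: str, fan_cast_audio: str, fan_casting: bool) -> str:
        """Pick the audio track to play: mute the original when fan casting is on."""
        return fan_cast_audio if fan_casting and fan_cast_audio else original_audio

    def tune_contextual(items: list, profile: dict) -> list:
        """Keep only contextual items whose media type the user's profile allows."""
        allowed = set(profile.get("allowed_media_types", []))
        return [item for item in items if item["media_type"] in allowed]

    profile = {"allowed_media_types": ["text", "commerce"]}
    items = [{"media_type": "text", "payload": "recap"},
             {"media_type": "video", "payload": "highlight-clip"}]

    print(select_audio("announcer-track", "fan-voiceover-track", fan_casting=True))
    print(tune_contextual(items, profile))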
The publishing unit 28 may receive data from the association unit 26 and interact with the participatory unit 30. The publishing unit 28 may publish the contextual data into one or more formats that may include, for example, a proprietary application format, a PC format (including, for example, a website, a widget, a toolbar, an IM plug-in or a media player plug-in) or a mobile device format (including, for example, WAP format, JAVA format or BREW format). The formatted contextual data is then provided, in real time and in synchronization with the original content, to the devices 16 that display the contextual content.
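A minimal sketch follows of how the publishing unit 28 might stamp contextual data with a synchronization timestamp and format it for different device classes; the format names echo those listed above, but the field names, structures and dispatch logic are hypothetical assumptions made only for illustration.

    # Hypothetical sketch of the publishing step: take associated contextual data,
    # stamp it with the timestamp of the original content for synchronization,
    # and format it for a particular device class. Field names and the dispatch
    # logic are illustrative assumptions.
    import json

    def publish_for_target(contextual_items: list, original_timestamp: float, target: str) -> str:
        """Format contextual data for one of the target formats listed above."""
        envelope = {"sync_timestamp": original_timestamp, "items": contextual_items}
        if target == "pc":          # e.g. website/widget/toolbar/IM or player plug-in
            return json.dumps(envelope)
        if target == "mobile":      # e.g. WAP/JAVA/BREW client
            # A mobile client might receive a trimmed payload to save bandwidth.
            trimmed = {"sync_timestamp": original_timestamp,
                       "items": contextual_items[:3]}
            return json.dumps(trimmed)
        # Fallback: a stand-in for a proprietary application format.
        return repr(envelope)

    print(publish_for_target([{"media_type": "text", "payload": "recap"}], 12.0, "mobile"))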
While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.