Social media viewing system

Information

  • Patent Grant
  • Patent Number: 9,571,606
  • Date Filed: Thursday, March 14, 2013
  • Date Issued: Tuesday, February 14, 2017
Abstract
Methods, systems and computer program products are described that facilitate enhanced interactions via social media that can be enabled, at least in part, by using various content identification techniques. Enhanced viewing of a content can be accomplished by monitoring activities of a user related to the user's accessing of a particular content and analyzing information acquired from the monitoring in conjunction with stored data related to additional users. Next, a subset of the additional users that are associated with the user or with the particular content is identified, and enhanced viewing of the particular content is enabled amongst the user and the identified subset of the additional users.
Description
FIELD OF INVENTION

The present application generally relates to the field of social interaction and particularly to facilitating social viewing of media content.


BACKGROUND

This section is intended to provide a background or context to the disclosed embodiments that are recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.


In recent years, social media and social media-related technologies and platforms have proliferated such that interactions through social media have become an increasingly important part of many people's daily lives. Enabled by ubiquitously accessible and scalable communication technologies, social media has provided alternate means for communication between organizations, communities, and individuals. Social media often utilize web-based and mobile technologies to turn communication into an interactive dialogue. For example, a group of Internet-based applications can allow the creation and exchange of user-generated content. The content that is exchanged in a social media setting can include, but is not limited to, audio, video, still image, text, mark-up language, software, and combinations thereof.


Identification of content (i.e., user-generated or otherwise produced content) can be accomplished in different ways. For example, when a content is organized in a file structure, a filename may be used to identify the content. Additionally, or alternatively, additional information, often called metadata, can accompany the content to enable the inclusion of identification information as part of a content file structure. For example, such metadata can be stored as part of a file header.


Other techniques for identifying a content rely on the inclusion of embedded watermarks in a content. Watermarks are designed to carry auxiliary information without substantially affecting fidelity of the host content, or without interfering with normal usage of the host content. Embedded watermarks can be utilized to convey information such as a content identifier (ID), a content name, a content owner, and the like.


Fingerprinting is yet another technique that may be used to identify a content. As opposed to watermarks, which are additional signals embedded into a host content, fingerprints are calculated based on inherent characteristics of the content. Similar to an actual fingerprint that uniquely identifies a person, specific characteristics (e.g., distribution of frequency components) of a content can be computed and distilled into a set of parameters, or a bit string, that allows unique identification of that content. Fingerprint computations are often carried out for consecutive segments of a content to produce a series of parameters or bit strings, which are then stored at a fingerprint database along with other identification information, such as content name, content owner and the like. When a received content is to be identified, the content's fingerprint is computed and compared against the stored database fingerprints until a match is found.
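
By way of illustration only, the following Python sketch shows a drastically simplified fingerprinting scheme of the kind described above: each content segment is reduced to coarse frequency-band energies, quantized into a bit string, and matched against a database of previously registered fingerprints. The segment length, band count, and database layout are assumptions made for the sketch, not features of any deployed system, and production systems use far more robust features.

```python
# A minimal sketch of segment-based fingerprinting and database matching.
# All parameters (segment length, band count) are illustrative assumptions.
import numpy as np

SEGMENT_SAMPLES = 8192   # hypothetical segment length
NUM_BANDS = 16           # coarse spectral bands per fingerprint

def segment_fingerprint(samples: np.ndarray) -> int:
    """Distill one content segment into a compact bit string."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, NUM_BANDS)
    energies = np.array([band.sum() for band in bands])
    # One bit per band: is this band's energy above the segment median?
    bits = energies > np.median(energies)
    return int("".join("1" if b else "0" for b in bits), 2)

def fingerprint_content(samples: np.ndarray) -> list[int]:
    """Fingerprints for consecutive segments of a content."""
    n = len(samples) // SEGMENT_SAMPLES
    return [segment_fingerprint(samples[i * SEGMENT_SAMPLES:(i + 1) * SEGMENT_SAMPLES])
            for i in range(n)]

# Hypothetical registration database: fingerprint sequence -> identification info.
database: dict[tuple[int, ...], dict] = {}

def register(samples: np.ndarray, name: str, owner: str) -> None:
    database[tuple(fingerprint_content(samples))] = {"name": name, "owner": owner}

def identify(samples: np.ndarray) -> dict | None:
    """Compare a received content's fingerprints against the stored database."""
    query = tuple(fingerprint_content(samples))
    for stored, info in database.items():
        # Match if the query appears as a run of consecutive stored segments.
        if any(stored[i:i + len(query)] == query
               for i in range(len(stored) - len(query) + 1)):
            return info
    return None
```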


Despite the advent of social media that has enabled unique interactions among various entities, the role of content identification techniques has largely remained unchanged.


SUMMARY

The disclosed embodiments relate to methods, devices, systems, and computer program products that facilitate enhanced interactions via social media that can be enabled, at least in part, by using various content identification techniques. One aspect of the disclosed embodiments relates to a method that includes monitoring activities of a user related to the user's accessing of a particular content, analyzing information acquired from the monitoring in conjunction with stored data related to additional users, identifying a subset of the additional users that are associated with the user or with the particular content, and allowing enhanced viewing of the particular content amongst the user and the identified subset of the additional users.


In one embodiment, allowing enhanced viewing of the particular content comprises enabling at least one of the identified subset of additional users to receive the particular content through a second type of communication channel while the user simultaneously receives the particular content through a first type of communication channel. In another embodiment, analyzing the information acquired from the monitoring in conjunction with the stored data identifies the particular content. Additionally, in such an embodiment, identifying the subset of additional users includes identifying users who are currently accessing the particular content. In yet another embodiment, the above noted method further includes identifying users who have previously accessed the particular content.


According to one embodiment, monitoring activities of the user includes receiving information extracted from embedded watermarks in the particular content. In another exemplary embodiment, the received information enables identification of one or more of: (a) information specific to the particular content, (b) a media distribution channel associated with the particular content, or (c) a time of transmission of the particular content. In another embodiment, monitoring activities of the user includes receiving fingerprints computed from one or more segments of the particular content, and comparing the received fingerprints with information at a fingerprint database, where the fingerprint database comprises stored fingerprints associated with previously registered content.


In another exemplary embodiment, activities of the user include one or more inputs provided by the user on a user interface of a user device. For example, such inputs can be received from one or more of: a remote control device, a keyboard, a mouse, a physical button, or a virtual button. In still another embodiment, analyzing the information acquired from the monitoring comprises analyzing metadata associated with the particular content. In another embodiment, the information acquired from the monitoring includes information indicative of one or more uniform resource locators (URLs) accessed by the user. In yet another embodiment, analyzing the information acquired from the monitoring comprises performing a network data analysis related to the particular content.


According to another embodiment, analyzing the information acquired from the monitoring includes analyzing program guide information to determine an identity of the particular content. In one exemplary embodiment, the particular content includes one or more of: an audio component, a video component, an image component, a text component, a mark-up language component, or a software component. In another embodiment, allowing enhanced viewing of the particular content includes enabling the user and one or more of the identified subset of additional users to communicate with one another. For example, communications between the user and the one or more of the identified subset of additional users can be effectuated using at least one of: an instant messaging, a voice chat, a conference call or a video call.


In another exemplary embodiment, allowing enhanced viewing of the particular content includes designating a lead user for navigating the particular content, and allowing the lead user to navigate through one or more segments of the particular content or an associated content. In still another exemplary embodiment, allowing enhanced viewing of the particular content comprises designating a lead user for navigating the particular content for a first period of time, allowing the lead user to navigate through one or more segments of the particular content or an associated content during the first period of time, designating a new lead user for navigating the particular content for a second period of time, and allowing the new lead user to navigate through one or more segments of the particular content or an associated content during the second period of time.


According to another embodiment, the additional users are associated with the user or with the particular content based on one or more of the following: (a) a consensual action by the user and by the additional users, (b) geographic proximity of the user and the additional users, or (c) a shared interest between the user and the additional users. In another embodiment, allowing enhanced viewing of the particular content comprises providing supplemental content to the user and/or to one or more of the identified subset of additional users. For example, the supplemental content can include at least one of: a program information, an advertisement, a group purchasing opportunity, additional programming material, alternate programming material, or an interactive feature. In one embodiment, the user accesses the particular content on a first device, and the supplemental content is provided to a second device that is different from the first device.


In one exemplary embodiment, allowing enhanced viewing of the particular content includes enabling communication between users who have previously viewed the particular content. In another embodiment, the particular content is provided to the user through at least one of: a broadcast channel, a cable channel, an on-demand delivery service, or playback from a local storage unit. In yet another exemplary embodiment, the above noted method further includes creating one or more groups in a social network, where the one or more groups include a first group that includes the user and one or more of the additional users who have previously accessed, or are currently accessing, the particular content. In one example embodiment, creating the one or more groups comprises at least one of: (a) issuing an invitation to the user to join the one or more groups, (b) allowing the user to request to join the one or more groups, or (c) allowing the user to browse additional existing groups. In another exemplary embodiment, creating the one or more groups comprises providing privacy controls around the one or more groups to limit visibility of members of the one or more groups or behavior of the members of the one or more groups. In still another embodiment, at least one of the one or more groups is formed around one or more of: a particular event, or a particular media content.


In another exemplary embodiment, allowing enhanced viewing of the particular content comprises remotely controlling presentation of the particular content on a plurality of user devices to allow synchronized presentation of the particular content on the plurality of user devices, and substantially simultaneously changing the presentation of the particular content, or an associated content, on the plurality of user devices. In one embodiment, change of the presentation of the particular content or an associated content is effected automatically based on an algorithm. In another exemplary embodiment, the algorithm determines the change in the presentation of the particular content or an associated content based on one or more of the following: (a) information about the user and the identified subset of the additional users, or (b) input about the particular content provided by the user or the identified subset of the additional users.


Another aspect of the disclosed embodiments relates to a device that includes one or more processors, and one or more memory units comprising processor executable code. The processor executable code, when executed by the one or more processors, configures the device to monitor activities of a user related to the user's accessing of a particular content, analyze information acquired from the monitoring in conjunction with stored data related to additional users, identify a subset of the additional users that are associated with the user or with the particular content, and generate one or more signals to enable enhanced viewing of the particular content amongst the user and the identified subset of the additional users.


In one exemplary embodiment, the one or more signals enable the user and one or more of the identified subset of additional users to communicate with one another. In another exemplary embodiment, the one or more signals: (a) designate a lead user for navigating the particular content for a first period of time, (b) allow the lead user to navigate through one or more segments of the particular content or an associated content during the first period of time, (c) designate a new lead user for navigating the particular content for a second period of time, and (d) allow the new lead user to navigate through one or more segments of the particular content or an associated content during the second period of time.


According to another exemplary embodiment, the one or more signals enable enhanced viewing of the particular content by allowing supplemental content to be provided to the user and/or to one or more of the identified subset of additional users. In one exemplary embodiment, the one or more signals enable enhanced viewing of the particular content by allowing the supplemental content to be provided to a second device while the user is accessing the particular content on a first device. In yet another exemplary embodiment, the one or more signals enable communication between users who have previously viewed the particular content. In still another embodiment, the one or more signals remotely control presentation of the particular content on a plurality of user devices to allow synchronized presentation of the particular content on the plurality of user devices, and substantially simultaneously cause the presentation of the particular content, or an associated content, to be changed on the plurality of user devices.


In another exemplary embodiment, the above noted device further includes a communication unit configured to allow communications with the user and with one or more of the additional users.


Another aspect of the disclosed embodiments relates to a system that includes the above noted device, in addition to a first user device configured to receive at least the one or more signals, and to receive and present the particular content. The system further includes a second user device configured to receive at least the one or more signals and the particular content, and to present the particular content in synchronization with presentation of the particular content on the first user device in accordance with the one or more signals. In one exemplary embodiment, at least one of the first or second user devices is further configured to receive a supplemental content to be presented in synchronization with the particular content.


In another exemplary embodiment, the first user device and the second user device are configured to communicate with one another through at least one of: an instant messaging, a voice chat, a conference call or a video call. In yet another exemplary embodiment, the above noted system further includes a third user device configured to receive a supplemental content, and to present the supplemental content in synchronization with the particular content. In one exemplary embodiment, at least one of the first and the second user devices is configured to extract one or more watermarks that are embedded in the particular content. In another embodiment, at least one of the first and the second user devices is configured to compute one or more fingerprints associated with one or more segments of the particular content. In yet another embodiment, at least one of the first and the second user devices is configured to process metadata associated with the particular content to enable identification of the particular content.


Another aspect of the disclosed embodiments relates to a computer program product, embodied on one or more non-transitory computer readable media, comprising program code for monitoring activities of a user related to the user's accessing of a particular content, program code for analyzing information acquired from the monitoring in conjunction with stored data related to additional users, program code for identifying a subset of the additional users that are associated with the user or with the particular content, and program code for allowing enhanced viewing of the particular content amongst the user and the identified subset of the additional users.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a set of operations that can be carried out in accordance with an exemplary embodiment.



FIG. 2 illustrates another set of operations that can be carried out in accordance with an exemplary embodiment.



FIG. 3 illustrates another set of operations that can be carried out in accordance with an exemplary embodiment.



FIG. 4 illustrates an exemplary device that can be used to implement at least some of the exemplary embodiments.



FIG. 5 illustrates a system within which enhanced viewing of content can be implemented in accordance with an exemplary embodiment.



FIG. 6 illustrates an exemplary device that can be used to implement at least some of the exemplary embodiments.



FIG. 7 illustrates another set of operations that can be carried out in accordance with an exemplary embodiment.





DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

In the following description, for purposes of explanation and not limitation, details and descriptions are set forth in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments that depart from these details and descriptions.


Additionally, in the subject description, the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word exemplary is intended to present concepts in a concrete manner.


The disclosed embodiments facilitate interactions through social media by providing improved media discovery and viewing experiences. In some embodiments, viewing behaviors of social media users are automatically identified and stored to facilitate social interaction among users regarding this behavior. Media (or “content”) may be audio, video, image, text, mark-up language, software, and the like, or combinations thereof.


In some example embodiments, a content is automatically identified by employing one or more content identification techniques. To this end, content identification may be carried out by, for example, detecting embedded watermarks, computing one or more fingerprints that are subsequently matched to a fingerprint database, analyzing user inputs (such as via remote control, keyboard, mouse, or button push), analyzing content metadata, URL or network data, or program guide information sources. In the case where content identification is accomplished through the use of watermarks, watermarks that are embedded in the content may contain information specific to the media content, the media distribution channel, the time of transmission of the content or other mechanism that supports identification of the content. The embedded information can then be used to facilitate shared media viewing or formation of social groups.
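
As a hedged illustration of the watermark case, the sketch below decodes a hypothetical fixed-layout payload carrying a content identifier, a distribution-channel code, and a transmission timestamp. The field widths, byte order, and example values are invented for the sketch; the disclosure does not prescribe a payload format.

```python
# A minimal sketch of interpreting an extracted watermark payload, assuming
# a hypothetical layout: a 32-bit content identifier, a 16-bit
# distribution-channel code, and a 32-bit transmission timestamp.
import struct
from datetime import datetime, timezone

def parse_watermark_payload(payload: bytes) -> dict:
    """Decode content ID, channel, and time of transmission from a payload."""
    content_id, channel_code, tx_time = struct.unpack(">IHI", payload)
    return {
        "content_id": content_id,
        "distribution_channel": channel_code,
        "transmitted_at": datetime.fromtimestamp(tx_time, tz=timezone.utc),
    }

# Example: a payload recovered by a watermark extractor (values illustrative).
example = struct.pack(">IHI", 421_337, 7, 1_360_800_000)
print(parse_watermark_payload(example))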


In the case where content identification is accomplished through the use of fingerprints, once a content is received at a user device, fingerprints are generated from the received content and compared against previously generated fingerprints that are stored at a fingerprint database. The stored fingerprints correspond to previously identified (or “registered”) content and can be linked to a plurality of content identification information, such as content title, content owner, and the like. Alternately, the fingerprints generated from a particular content that is received by a first user device may be compared against fingerprints generated from content received by other users (e.g., by a friend's user device) to identify users that are viewing a common content. Additionally, the system may populate the fingerprint database with fingerprints and related identifiers or metadata based on fingerprints obtained in a professional environment, such as by using a dedicated fingerprint-generating component employed for the purpose of populating the database, or based on further processing of fingerprint data collected during content access by a user and/or user device. It should be noted that the identification of a common content that is being viewed by a plurality of users can be accomplished using other identification techniques, such as through comparison of detected watermarks from content that is being viewed by the plurality of users.
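
The alternate, database-free path just described — comparing fingerprint streams reported by different users' devices — might be sketched as follows. The minimum matching-run length and the report format are illustrative assumptions.

```python
# A minimal sketch of detecting that two users are viewing a common content
# by comparing fingerprint sequences reported by their devices, rather than
# consulting a registration database.

def sequences_overlap(a: list[int], b: list[int], min_run: int = 4) -> bool:
    """True if the two fingerprint streams share a run of matching segments."""
    runs = {tuple(a[i:i + min_run]) for i in range(len(a) - min_run + 1)}
    return any(tuple(b[j:j + min_run]) in runs
               for j in range(len(b) - min_run + 1))

def common_content_users(reports: dict[str, list[int]]) -> list[tuple[str, str]]:
    """Pair up users whose devices report overlapping fingerprint streams."""
    users = sorted(reports)
    return [(u, v) for i, u in enumerate(users) for v in users[i + 1:]
            if sequences_overlap(reports[u], reports[v])]
```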


In some example embodiments, social interaction amongst users of social media is facilitated by including capabilities for creating simultaneous and enhanced media viewing experiences among a group of users. Such simultaneous and enhanced experiences may include instant messaging, voice chat, conference call, video call, and/or remotely-synchronized media navigation that is enabled or enhanced for users that form a group. A group of users may include groups identified by a consensual “friend” relationship (e.g., Facebook friends), via geographic proximity (e.g., neighbors in the real world), via shared interests (e.g., “Physics PhDs who love Big Bang Theory”), by content (e.g., “Jets versus Dolphins”), and combinations thereof.
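
A minimal sketch of the group-association tests just listed appears below, assuming a simple user record with friends, location, and declared interests. All field names and the distance threshold are illustrative, not taken from the disclosure.

```python
# A sketch of deciding whether two users are associated by consensual
# "friend" relationship, geographic proximity, or shared interest.
from dataclasses import dataclass, field
import math

@dataclass
class UserProfile:
    user_id: str
    friends: set[str] = field(default_factory=set)
    lat_lon: tuple[float, float] = (0.0, 0.0)
    interests: set[str] = field(default_factory=set)

def are_associated(a: UserProfile, b: UserProfile, max_km: float = 5.0) -> bool:
    consensual = b.user_id in a.friends and a.user_id in b.friends
    # Crude equirectangular distance; adequate for a proximity-check sketch.
    km_per_deg = 111.0
    dx = (a.lat_lon[1] - b.lat_lon[1]) * km_per_deg * math.cos(math.radians(a.lat_lon[0]))
    dy = (a.lat_lon[0] - b.lat_lon[0]) * km_per_deg
    nearby = math.hypot(dx, dy) <= max_km
    shared_interest = bool(a.interests & b.interests)
    return consensual or nearby or shared_interest
```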


The formation of groups can be facilitated by including features that provide, for example, invitations to join, requests for admittance, the ability to browse existing groups, and the like. Privacy controls may be provided to limit visibility of individual and group behaviors. Groups of users may be formed on an ad-hoc basis (i.e., for a particular time or piece of media content), or persistent group affiliations may be created and preserved over time. Formation of relationships or groups may be facilitated based on user viewing behavior, such as by informing a user of other selected users or groups of users with similar interests or that are viewing (or have viewed) the same content.



FIG. 1 illustrates a set of exemplary operations 100 for enabling a common and/or an enhanced content usage in accordance with an exemplary embodiment. At 102, a content that is received by, or is being consumed by, a user is identified using one or more content identification techniques. As noted earlier, a content may include, but is not limited to, a multimedia content, such as audio, video, image or text, as well as hypertext or programs. The consumption of a content can include, but is not limited to, viewing, editing, playback, typing, or otherwise interacting with, or generating, the content. At 104, content identification results are assessed to determine common content users, if any. Such common content users include users that are currently consuming the content, users that have previously consumed the content, or even users that have indicated a preference or desire to consume that content. The detection of common content users can be carried out, as noted earlier, through, for example, a comparison of detected and/or stored watermarks (or fingerprints) associated with other users' content and the detected watermarks (or fingerprints) associated with the received content, as well as through comparison of content identification information obtained through other identification techniques.


Referring back to FIG. 1, if common content users are detected (“YES” at 106), enhanced content use amongst one or more users corresponding to one or more groups is enabled. As noted earlier, such enhanced use may include common viewing of a content (e.g., the content is made available simultaneously to the users of the group), exchange of commentary, instant messages, voice chat, conference call, video call, remotely-synchronized media navigation and the like. If the determination at 106 fails to identify a common user (i.e., “NO” at 106), at 110, the history of content usage for the user that has received the content is retained. Such a usage history may be used by a social media network, by other users and/or by the same user to facilitate future interactions with the social media.
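
The control flow of FIG. 1 can be summarized in the following sketch, in which identify_content, find_common_users, and enable_enhanced_use are stubbed stand-ins for the identification techniques and group features described above, not the disclosed implementations.

```python
# A minimal sketch of the FIG. 1 control flow; all helpers are placeholders.

usage_history: dict[str, list[str]] = {}   # step 110: retained usage history
currently_viewing: dict[str, str] = {}     # user -> content ID

def identify_content(signal: bytes) -> str:
    return "cid-" + signal.hex()[:8]       # stand-in for identification at 102

def find_common_users(content_id: str, user: str) -> list[str]:
    return [u for u, cid in currently_viewing.items()
            if cid == content_id and u != user]   # stand-in for step 104

def enable_enhanced_use(user: str, peers: list[str], content_id: str) -> None:
    print(f"enhanced viewing of {content_id}: {user} with {peers}")

def on_content_received(user: str, signal: bytes) -> None:
    content_id = identify_content(signal)
    currently_viewing[user] = content_id
    peers = find_common_users(content_id, user)
    if peers:                              # "YES" at 106
        enable_enhanced_use(user, peers, content_id)
    else:                                  # "NO" at 106; retain history at 110
        usage_history.setdefault(user, []).append(content_id)
```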


Non-simultaneous enhanced usage (or viewing) experiences may also be facilitated using features similar to the ones discussed above in connection with simultaneous experiences, except that interactions rely on stored communications, such as records of what content individual users have accessed or are accessing, user message postings or comments, user ratings, and the like. For instance, in some embodiments, social media viewing is enhanced by sorting through and responding to one or more user inputs to facilitate functions such as shared viewing or group formation. To this end, a variety of user inputs can be used to automatically identify user behavior.


The media content may be provided via broadcast and/or on-demand delivery, played back from local storage media, or obtained through other media content access mechanisms. In some embodiments, different users can access the same content in different ways. For example, if users A and B are interested in having a shared viewing experience of a particular piece of content, it may be the case that the content is available to the two users on different television channels (e.g., channel 1 for user A and channel 2 for user B) or via different services (e.g., broadcast television for user A and an on-demand subscription service for user B). In order to facilitate each user's access to the same content, it may be necessary for components of the shared media viewing system to access information sources regarding services, content libraries, and programming schedules that are accessible to individual users to ensure that all users can access and use the content.


The following example scenario further illustrates how enhanced content usage amongst users can be facilitated in accordance with an exemplary embodiment. Assume user A is interested in classic silent movies from the early 1900's. User A uses a video service from an Internet source (e.g., YouTube) to view such movies on his tablet device. Further, assume that user B is also interested in viewing classic silent movies from the early 1900's. However, user B's main source of movie consumption is through a cable service (e.g., Time Warner Cable) that allows user B to view movies on his personal computer. Under normal circumstances, user A and user B can be completely oblivious of one another's existence. Even if users A and B were aware of having a shared interest in silent movies, it is unlikely that one would be aware of another's movie viewing schedule. In accordance with an exemplary embodiment, whenever user A starts viewing a movie, user A's tablet can convey information to a linked database that allows the content to be identified. Similarly, but optionally, whenever user B starts viewing a movie, user B's PC can convey information to the linked database that allows the content to be identified. If data stored at the database indicates that user A and user B are members of the “classic silent movie” group, whenever user A (or user B) starts viewing a silent movie, user B (or user A) can be notified so as to allow a common and/or enhanced viewing experience with user A or other members of the group. If user A and user B are not part of the “classic silent movie” group, or if such a group does not exist, user A and user B can be allowed to form, or become members of, the “classic silent movie” group, and then enjoy a common and/or enhanced viewing experience.


If user A and user B are using different media services, these services may not use the same mechanism for identifying a given content item. For example, each media service may adhere to a different numbering system or protocol for referencing a given content item or portion thereof. In such cases, it may be necessary for the system to employ more than one linked database, or to employ a linked database that includes translation mappings between the different numbering systems or protocols, in order to translate content identification information between the point of identification and the services and thereby permit interoperability. Alternatively, one or more linked databases may be employed that provide translation mappings between a common numbering system or protocol and the numbering system or protocol used by a media service or identification technology.
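
The translation-mapping idea might be realized as in the sketch below, which routes each service's native identifier through a common numbering system. The service names and identifier formats are invented for the example.

```python
# A minimal sketch of translating content identifiers between services via
# a common numbering system; all identifiers are illustrative.

# Each linked table maps a service's native identifier to a common ID.
to_common = {
    "service_a": {"A-98765": "common-0001"},
    "service_b": {"b/12/34": "common-0001"},
}
from_common = {svc: {c: native for native, c in table.items()}
               for svc, table in to_common.items()}

def translate(native_id: str, source: str, target: str) -> str | None:
    """Translate a content ID from one service's scheme to another's."""
    common = to_common.get(source, {}).get(native_id)
    return from_common.get(target, {}).get(common) if common else None

print(translate("A-98765", "service_a", "service_b"))  # -> 'b/12/34'
```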


In some example embodiments, upon user A's viewing of a particular content, other users, such as user B, with a shared interest in that particular content may be notified (e.g., via email, text message, etc.) to start viewing the same content. In some embodiments, user B's media player can automatically start playing the same content that user A is viewing if certain conditions are satisfied. These conditions can include, but are not limited to, whether or not user B has the capability to obtain user A's content (e.g., user B has a valid subscription to a media service), whether or not user B is already viewing another content, whether or not user B's profile (e.g., residing at, or accessible to, a database) has authorized such automatic shared viewing with user A, and the like. If the requisite conditions are satisfied, then user B's media player can automatically enable enhanced viewing of the content. In scenarios where user B is already viewing the same content as user A, user B (and/or user A) can commence enhanced viewing based on received notifications, or in an automatic fashion, in order to allow exchange of comments, content navigation capabilities and other enhanced viewing options.
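
The condition checks that gate automatic shared viewing could be expressed as in the following sketch; the profile fields are hypothetical stand-ins for data residing at, or accessible to, a database.

```python
# A minimal sketch of checking the conditions named above before user B's
# player auto-joins user A's viewing session; field names are illustrative.

def may_auto_join(profile: dict, content_id: str) -> bool:
    has_access = content_id in profile.get("reachable_content", set())
    idle = profile.get("now_playing") is None
    authorized = profile.get("auto_shared_viewing", False)
    return has_access and idle and authorized

user_b = {"reachable_content": {"cid-123"}, "now_playing": None,
          "auto_shared_viewing": True}
if may_auto_join(user_b, "cid-123"):
    print("starting synchronized playback of cid-123 for user B")
```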


In some embodiments, enhanced use of a content includes presentation of additional content to the users. Such additional content may include program information, advertisements, group purchasing opportunities, additional programming material, alternate programming material, and other interactive features such as games. For example, in one embodiment, if a group of users are experiencing a shared viewing of a content, the shared viewing experience can be enhanced by presentation of additional information related to the content of the television program or its advertisers, such as information about actors, scenes, characters, storyline, props, advertised products, retailers, coupons and the like.


In some embodiments, a content may be remotely navigated in a synchronized fashion amongst a plurality of users. In one example embodiment, the navigation of a content is directed by a particular user acting as the leader of a group. In such a scenario, a lead user can manually control the presentation of the content for the plurality of other users, such that those users are presented with the same content as the lead user navigates through successive contents. Alternatively, an algorithm may automatically lead the group navigation based on factors that may include the interests and history of the members of the group at the time. Navigation may be through one kind of media, a variety of media, various web sites, or different locations on a geographical map (e.g., Google Earth), with the group members commenting on the “trip” as they move from place to place. In some embodiments, a first lead user may be designated for a first period of time, and a new lead user is designated for a second period of time when the first period expires.
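
The rotating lead-user arrangement might look like the sketch below, in which the current leader's navigation commands are echoed to every group member and leadership passes to the next member when its period expires. Class and member names are illustrative assumptions.

```python
# A minimal sketch of rotating lead-user navigation for a shared session.
import time

class SharedSession:
    def __init__(self, members: list[str], lead_period_s: float = 300.0):
        self.members = members
        self.lead_period_s = lead_period_s
        self.lead_index = 0
        self.lead_since = time.monotonic()

    def leader(self) -> str:
        # Designate a new lead user when the current period expires.
        if time.monotonic() - self.lead_since >= self.lead_period_s:
            self.lead_index = (self.lead_index + 1) % len(self.members)
            self.lead_since = time.monotonic()
        return self.members[self.lead_index]

    def navigate(self, user: str, segment: str) -> None:
        if user != self.leader():
            return  # only the lead user may steer the group
        for member in self.members:
            print(f"present {segment} to {member}")  # synchronized change

session = SharedSession(["alice", "bob"])
session.navigate("alice", "scene-1")   # alice leads; both members follow
```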



FIG. 2 illustrates a set of exemplary operations that may be carried out in accordance with an exemplary embodiment. At 202, activities of a user related to the user's accessing of one or more contents are monitored. At 204, information acquired from the monitoring is stored. At 206, social interaction between the user and other users is facilitated using the stored information.


In one embodiment, the stored information identifies a content accessed by the user and facilitating social interaction can include identifying other users who have previously accessed the same content. In another embodiment, the other users who are currently accessing the same content as the user are also identified. In another embodiment, the monitoring of activities comprises detecting watermarks in a content that is being accessed by the user. In yet another embodiment, the detected watermarks include one or more of: information specific to the content that is being accessed by the user, a media distribution channel associated with the content that is being accessed by the user, or a time of transmission of the content that is being accessed by the user. In still another embodiment, monitoring of activities includes generating digital fingerprints associated with one or more segments of a content that is being accessed by the user. In one example embodiment, monitoring of activities includes comparing the generated fingerprints with information in a fingerprint database, where such fingerprint database includes fingerprints associated with previously registered content. In another example embodiment, the fingerprint database is further populated with fingerprints and/or related metadata using a dedicated fingerprint-generating component.


According to another embodiment, monitoring of activities includes monitoring one or more user inputs. In one example, the one or more user inputs are received from one or more of: a remote control device, a keyboard, a mouse, a physical button, or a virtual button. In another embodiment, monitoring of activities includes analyzing metadata associated with a content that is accessed by the user. In one embodiment, the monitoring comprises detecting one or more uniform resource locators (URLs) accessed by the user. In another embodiment, the monitoring comprises performing a network data analysis related to a content accessed by the user. In one embodiment, the monitoring comprises analysis of program guide information related to a content accessed by the user. In another embodiment, the one or more contents include one or more of: an audio component, a video component, an image component, a text component, a mark-up language component, or a software component.


In another example embodiment, where the user is accessing a first content, facilitating social interaction includes allowing the user to communicate with one or more of the other users that are also accessing the first content. In one embodiment, facilitating social interaction includes enabling communication between users who have previously viewed the same content. According to another embodiment, one or more contents are provided to the user through at least one of: a broadcast channel, an on-demand delivery service, or playback from a local storage unit. In yet another embodiment, one or more contents are provided to the user through a different communication delivery service than at least one of the other users.


According to another embodiment, communications between the user and one or more of the other users are effectuated using at least one of: an instant messaging, a voice chat, a conference call, a video call, or remotely synchronized media navigation. In one embodiment, the users accessing the first content are organized in one or more groups based on one or more of the following: a consensual action by the users in that group (such as friending), geographic proximity, or shared interest. According to yet another embodiment, supplemental content is further provided to the user or to the one or more of the other users. For example, the supplemental content includes at least one of: a program information, an advertisement, a group purchasing opportunity, additional programming material, alternate programming material, or an interactive feature. In another example embodiment, a content that is being accessed by the user is presented on a first device and the supplemental content is presented on a second device.


In one embodiment, facilitating social interaction includes creating one or more groups in a social network, where the one or more groups comprise at least some of the users who have all accessed a first content. In another example embodiment, one or more groups are created through at least one of: issuing an invitation to join the one or more groups, allowing a user of the first content to request to join the one or more groups, or allowing a user to browse existing groups. According to another embodiment, privacy controls are provided around the one or more groups to limit visibility of members of the one or more groups or behavior of members of the one or more groups. In some embodiments, at least one of the one or more groups is formed around a particular event. Further, in some embodiments, at least one of the one or more groups is formed around a particular media content. According to another example embodiment, the user or other users or groups are informed about users who have similar interests or have accessed the first content.



FIG. 3 illustrates a set of operations 300 that can be used to facilitate enhanced content use in accordance with an exemplary embodiment. At 302, presentation of a content is remotely controlled for a plurality of users of the content. At 304, presentation of the content is synchronized amongst the plurality of users and, at 306, presentation of the content is substantially simultaneously changed for the plurality of users. In one embodiment, a lead user manually controls the presentation of the content for the plurality of users such that the plurality of users are presented with the same content as the lead user navigates through successive contents. In another exemplary embodiment, the content that is being presented to the plurality of users is determined based on an algorithm. In yet another exemplary embodiment, the algorithm determines the content based on information about the plurality of users. In one example, the algorithm determines the content based on input about the content from the users.
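
The algorithm-led alternative of FIG. 3 could be sketched as follows: the next content presented to the group is chosen from explicit user votes and stored interests, then pushed to all devices substantially simultaneously. The data shapes are assumptions made for the example.

```python
# A minimal sketch of algorithm-led group navigation: choose the next
# content from user input and interests, then push it to every device.
from collections import Counter

def choose_next_content(votes: dict[str, str],
                        interests: dict[str, set[str]]) -> str:
    tally = Counter(votes.values())       # explicit input from the users
    for user_tags in interests.values():  # plus stored interest information
        tally.update(user_tags)
    return tally.most_common(1)[0][0]

def push_to_group(devices: list[str], content_id: str) -> None:
    for device in devices:
        print(f"sync: present {content_id} on {device}")

votes = {"alice": "cid-7", "bob": "cid-7", "carol": "cid-9"}
interests = {"alice": {"cid-9"}, "bob": set(), "carol": {"cid-9"}}
push_to_group(["tv-1", "tablet-2", "pc-3"],
              choose_next_content(votes, interests))
```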


It is understood that the various embodiments of the present disclosure may be implemented individually, or collectively, in devices comprised of various hardware and/or software modules and components. In describing the disclosed embodiments, sometimes separate components have been illustrated as being configured to carry out one or more operations. It is understood, however, that two or more of such components can be combined together and/or each component may comprise sub-components that are not depicted. Further, the operations that are described in various figures of the present application are presented in a particular sequential order in order to facilitate the understanding of the underlying concepts. It is understood, however, that such operations may be conducted in a different sequential order, and further, additional or fewer steps may be used to carry out the various disclosed operations.


In one exemplary embodiment, a device is provided that includes a first component that is configured to monitor activities of a user related to the user's accessing of one or more contents, a second component that is configured to store information acquired from the monitoring, and a third component that is configured to facilitate social interaction between the user and other users using the stored information. Such a device may be implemented entirely at a database (e.g., a remote location that includes servers and storage devices) that is in communication with various user devices. In other embodiments, such a device may be implemented partly at the server and partly at another location, such as at user premises. Such a device is configured to receive content information that allows identification of the content. Such information can include watermark extraction results, computed content fingerprints, metadata associated with the content (e.g., source of content, time of content viewing, URL associated with content, user input, etc.). Based on the received information, the device can identify the content and monitor how the content is being consumed. The device can further facilitate enhanced viewing of the content for a plurality of viewers.


In some examples, the devices that are described in the present application can comprise one or more processors, one or more memory units, and an interface that are communicatively connected to each other, and may range from desktop and/or laptop computers to consumer electronic devices such as media players, mobile devices and the like. For example, FIG. 4 illustrates a block diagram of a device 400 within which various disclosed embodiments may be implemented. The device 400 comprises at least one processor 402 and/or controller, at least one memory unit 404 that is in communication with the processor 402, and at least one communication unit 406 that enables the exchange of data and information, directly or indirectly, through the communication link 408 with other entities, devices, databases and networks. The communication unit 406 may provide wired and/or wireless communication capabilities in accordance with one or more communication protocols, and therefore it may comprise the proper transmitter/receiver, antennas, circuitry and ports, as well as the encoding/decoding capabilities that may be necessary for proper transmission and/or reception of data and other information. The exemplary device 400 that is depicted in FIG. 4 may be integrated as part of a content handling device to carry out some or all of the operations that are described in the present application.


In some embodiments, the device 400 of FIG. 4 may also be incorporated into a device that resides at a database and is configured to perform some or all of the operations that are described in accordance with various disclosed embodiments. For instance, one aspect of the disclosed embodiments relates to a device that includes a processor, and a memory comprising processor executable code. The processor executable code, when executed by the processor, configures the device to perform any one and/or all of the operations that are described in the present application. For example, it may configure the device to: monitor activities of a user related to the user's accessing of one or more contents, store information acquired from the monitoring, and facilitate social interaction between the user and other users using the stored information.



FIG. 5 illustrates a system within which enhanced viewing of a content can be implemented in accordance with an exemplary embodiment. FIG. 5 illustrates a household 502, a corporation 504 and a vehicle 506, all of which include one or more user devices. For example, the household 502 can include a television 502(a), a set top box 502(b), a computer 502(c) (e.g., a PC, a laptop, a tablet, etc.) and a smart phone 502(e) that are each capable of accessing and presenting a content (e.g., a primary content). A secondary device 502(d) is also depicted in FIG. 5, which is in communication with one or more other user devices. The secondary device 502(d) may have the capability to access and present the primary content; however, in some embodiments, the secondary device 502(d) is used to access and present a secondary or supplemental content (e.g., an advertisement, commentary from other users, other content related to a primary content, etc.). FIG. 5 also illustrates a corporation 504, which, similar to the household 502, can include one or more user devices 504(a) through 504(e). FIG. 5 further shows a vehicle 506 that includes a mobile device 506(a). User devices 502(a) through 502(e), user devices 504(a) through 504(e) and user device 506(a) are configured to be in communication with a database 508 through a communication link 510. Although the exemplary diagram of FIG. 5 shows a single database 508, it is understood that the embodiments of the present application can be implemented using a distributed network of databases that can communicate with user devices and with one another. Further, the database 508 can include a variety of additional components, such as servers, processors, memory devices, and communication units (not depicted). In one example, the database 508 includes translation mappings between the different numbering systems or protocols of different content distribution channels/sources. Such translation mappings enable translation of content identification information in order to permit interoperability.


The communication link 510 of FIG. 5 can be a wired or wireless communication link that utilizes one or more communication protocols. While not explicitly shown in FIG. 5, the household 502 and/or the corporation 504 can further include a gateway device to manage communications between various user devices within the household 502 and/or corporation 504, with the database and with other outside entities. For example, such a gateway device can provide various security and authentication functionalities. The household 502, the corporation 504 and the vehicle 506 are further capable of receiving content from a source, such as through a satellite source, a cable source, the Internet, or from a local storage. In some example embodiments, a user-generated content is provided (e.g., retrieved from local storage, or captured in real-time) at one of the household 502, the corporation 504 or the vehicle 506. When a user accesses a content for viewing at one or more of the household 502, the corporation 504 or the vehicle 506, other users in the same group (e.g., friends of the user) can simultaneously view that content, provide comments, interact with each other and take control of navigating the content.


One exemplary aspect of the present application relates to a device that includes components that allow the device to monitor activities of a user related to the user's accessing of a particular content, and to analyze information acquired from the monitoring in conjunction with stored data related to additional users. Such an exemplary device can also be configured to identify a subset of the additional users that are associated with the user or with the particular content, and to generate one or more signals to enable enhanced viewing of the particular content amongst the user and the identified subset of the additional users. The components of such a device can be implemented at least partially in hardware by using, for example, discrete analog and digital circuit components, ASICs, FPGAs. Such a device can also be implemented at least partially using software that configures the device to perform various operations. In one exemplary embodiment, the device includes one or more processors, and one or more memory units comprising processor executable code. The processor executable code, when executed by the one or more processors, configures the device to carry out various operations such as to process and analyze information, to generate signals, and to transmit and receive information and data using appropriate communication protocols.



FIG. 6 illustrates a block diagram of a device 600 within which certain disclosed embodiments may be implemented. The exemplary device 600 of FIG. 6 may be, for example, incorporated as part of the user devices 502(a) through 502(e), 504(a) through 504(e) and 506(a) that are illustrated in FIG. 5. Some of the components in FIG. 6 (e.g., the metadata processing component 610) may reside at a remote database, such as database 508 that is shown in FIG. 5. The device 600 comprises at least one processor 604 and/or controller, at least one memory unit 602 that is in communication with the processor 604, and at least one communication unit 606 that enables the exchange of data and information, directly or indirectly, through the communication link 608 with at least other entities, devices, databases and networks (collectively illustrated in FIG. 6 as Other Entities 616). The communication unit 606 of the device 600 can also include a number of input and output ports that can be used to receive and transmit information from/to a user and other devices or systems. The communication unit 606 may provide wired and/or wireless communication capabilities in accordance with one or more communication protocols and, therefore, it may comprise the proper transmitter/receiver, antennas, circuitry and ports, as well as the encoding/decoding capabilities that may be necessary for proper transmission and/or reception of data and other information. In some embodiments, the device 600 can also include a microphone 618 that is configured to receive an input audio signal.


In some embodiments, the device 600 can also include a camera 620 that is configured to capture a video and/or a still image. The signals generated by the microphone 618 and the camera 620 may further undergo various signal processing operations, such as analog to digital conversion, filtering, sampling, and the like. It should be noted that while the microphone 618 and/or camera 620 are illustrated as separate components, in some embodiments, the microphone 618 and/or camera 620 can be incorporated into other components of the device 600, such as the communication unit 606. The received audio, video and/or still image signals can be processed (e.g., converted from analog to digital, color corrected, sub-sampled, evaluated to detect embedded watermarks, analyzed to obtain fingerprints, etc.) in cooperation with the processor 604. In some embodiments, instead of, or in addition to, a built-in microphone 618 and camera 620, the device 600 may be equipped with an input audio port and an input/output video port that can be interfaced with an external microphone and camera, respectively.


The device 600 also includes an information extraction/processing component 622 that is configured to extract information from one or more content segments and/or associated metadata to enable determination of content identification, as well as other information. In some embodiments, the information extraction component 622 includes a watermark detector 612 that is configured to extract watermarks from one or more components (e.g., audio or video components) of a multimedia content, and to determine the information (such as a content identifier (CID) and time codes) carried by such watermarks. Such audio (or video) components may be obtained using the microphone 618 (or camera 620), or may be obtained from multimedia content that is stored on a data storage media (or broadcast in real-time) and communicated to the device 600. The information extraction component 622 can additionally, or alternatively, include a fingerprint computation component 614 that is configured to compute fingerprints for one or more segments of a multimedia content. The fingerprint computation component 614 can operate on one or more components (e.g., audio or video components) of the multimedia content to compute fingerprints for one or more content segments, and to communicate with a database. The metadata processing component 610 is configured to obtain metadata associated with the multimedia content, and to process the metadata to extract identification or other information. In some embodiments, the operations of the information extraction component 622 are at least partially controlled and/or implemented by the processor 604.


The device 600 is also coupled to one or more user interface devices 624, including but not limited to a display device, a keyboard, a speaker, a mouse, a touch pad, a motion sensor, a remote control, and the like. The user interface device(s) 624 allow a user of the device 600 to view, and/or listen to, multimedia content, to input information such as text, to click on various fields within a graphical user interface, and the like. While in the exemplary block diagram of FIG. 6 the user interface devices 624 are depicted as residing outside of the device 600, it is understood that, in some embodiments, one or more of the user interface devices 624 may be implemented as part of the device 600. Moreover, the user interface devices 624 may be in communication with the device 600 through the communication unit 606.



FIG. 7 illustrates a set of operations 700 for enabling enhanced viewing of a content in accordance with an exemplary embodiment. At 702, activities of a user related to the user's accessing of a particular content are monitored. The particular content may be accessible to a user through a variety of sources, such as cable, Internet, satellite, over-the-air, over-the-top (OTT), and more generally from any transmission and/or storage medium. In some examples, the particular content can be a user-generated content. At 704, information acquired from the monitoring is analyzed in conjunction with stored data related to additional users. For example, the stored data can include identification information of additional users that share an interest in the particular content, or users that are currently viewing, or have previously accessed, the particular content. Other examples of stored information can include geographical locations of additional users (e.g., users in the same household, neighbors, etc.), affiliations of the additional users, and the like. At 706, a subset of the additional users that are associated with the user or with the particular content is identified and, at 708, enhanced viewing of the particular content amongst the user and the identified subset of the additional users is allowed.


Various embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), Blu-ray Discs, etc. Therefore, the computer-readable media described in the present application include non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.


For example, one aspect of the disclosed embodiments relates to a computer program product that is embodied on a non-transitory computer readable medium. The computer program product includes program code for carrying out any one and/or all of the operations of the disclosed embodiments. Such a computer program product may include program code for monitoring activities of a user related to the user's accessing of one or more contents, program code for storing information acquired from the monitoring, and program code for facilitating social interaction between the user and other users using the stored information.


A content that is embedded with watermarks in accordance with the disclosed embodiments may be stored on a storage medium. In some embodiments, when such a stored content, which includes one or more imperceptibly embedded watermarks, is accessed by a content handling device (e.g., a software or hardware media player) that is equipped with a watermark extractor, it can trigger a watermark extraction process, as well as the additional operations that are needed to allow enhanced viewing of the content in accordance with the disclosed embodiments.
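A minimal sketch of that trigger flow appears below, assuming a hypothetical extractor API. Note that real watermarks are imperceptible signal modifications rather than literal byte sequences; the marker used here exists only to keep the sketch runnable.

```python
MAGIC = b"WM:"  # illustrative marker only; real watermarks are imperceptible
                # signal modifications, not literal byte sequences

def extract_watermark(media_bytes):
    # Stand-in for a real watermark extractor: returns the payload that
    # follows the illustrative marker, or None if no watermark is found.
    idx = media_bytes.find(MAGIC)
    if idx < 0:
        return None
    start = idx + len(MAGIC)
    return media_bytes[start:start + 10].decode(errors="ignore")

def on_content_access(media_bytes, user_id):
    # A player equipped with an extractor checks accessed content; a
    # successful extraction triggers the additional operations (e.g., the
    # monitor/analyze/identify/enable steps of FIG. 7).
    content_id = extract_watermark(media_bytes)
    if content_id is not None:
        print(f"user {user_id} opened watermarked content {content_id!r}; "
              "triggering enhanced-viewing operations")

on_content_access(b"...compressed video...WM:show-42...", "alice")
```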


The foregoing description of embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit embodiments of the present invention to the precise forms disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and their practical application, so as to enable one skilled in the art to utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products.

Claims
  • 1. A method, comprising: monitoring activities of a user related to the user's accessing of a particular primary content; analyzing information acquired from the monitoring in conjunction with stored data related to additional users; identifying a subset of the additional users that are associated with the user or with the particular primary content; and allowing enhanced viewing of the particular primary content amongst the user and the identified subset of the additional users, wherein allowing enhanced viewing of the particular primary content comprises enabling at least one of the identified subset of additional users to receive the particular primary content through a second type of communication channel while the user simultaneously receives the particular primary content through a first type of communication channel, and wherein the monitoring comprises receiving information extracted from embedded watermarks in the particular primary content that enables identification of one or more of: information specific to the particular primary content, a media distribution channel associated with the particular primary content, or a time of transmission of the particular primary content.
  • 2. The method of claim 1, wherein: analyzing the information acquired from the monitoring in conjunction with the stored data identifies the particular primary content; and the identifying comprises identifying users who are currently accessing the particular primary content.
  • 3. The method of claim 2, further comprising identifying users who have previously accessed the particular primary content.
  • 4. The method of claim 1, wherein the monitoring comprises: receiving fingerprints computed from one or more segments of the particular primary content; and comparing the received fingerprints with information at a fingerprint database, wherein the fingerprint database comprises stored fingerprints associated with previously registered content.
  • 5. The method of claim 1, wherein activities of the user comprise one or more inputs provided by the user on a user interface of a user device.
  • 6. The method of claim 5, wherein the one or more inputs is received from one or more of: a remote control device, a keyboard, a mouse, a physical button, or a virtual button.
  • 7. The method of claim 1, wherein the analyzing comprises analyzing metadata associated with the particular primary content.
  • 8. The method of claim 1, wherein the information acquired from the monitoring comprises information indicative of one or more uniform resource locators (URLs) accessed by the user.
  • 9. The method of claim 1, wherein the analyzing comprises performing a network data analysis related to the particular primary content.
  • 10. The method of claim 1, wherein the analyzing comprises analyzing a program guide information to determine an identity of the particular primary content.
  • 11. The method of claim 1, wherein the particular primary content comprises one or more of: an audio component, a video component, an image component, a text component, a mark-up language component, or a software component.
  • 12. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises enabling the user and one or more of the identified subset of additional users to communicate with one another.
  • 13. The method of claim 12, wherein communications between the user and the one or more of the identified subset of additional users are effectuated using at least one of: an instant messaging, a voice chat, a conference call, or a video call.
  • 14. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises: designating a lead user for navigating the particular primary content; and allowing the lead user to navigate through one or more segments of the particular primary content or an associated content.
  • 15. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises: designating a lead user for navigating the particular primary content for a first period of time; allowing the lead user to navigate through one or more segments of the particular primary content or an associated content during the first period of time; designating a new lead user for navigating the particular primary content for a second period of time; and allowing the new lead user to navigate through one or more segments of the particular primary content or an associated content during the second period of time.
  • 16. The method of claim 1, wherein the additional users are associated with the user or with the particular primary content based on one or more of the following: a consensual action by the user and by the additional users, geographic proximity of the user and the additional users, or a shared interest between the user and the additional users.
  • 17. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises providing supplemental content to the user and/or to one or more of the identified subset of additional users.
  • 18. The method of claim 17, wherein the supplemental content includes at least one of: a program information, an advertisement, a group purchasing opportunity, additional programming material, alternate programming material, or an interactive feature.
  • 19. The method of claim 17, wherein: the user accesses the particular primary content on a first device; and the supplemental content is provided to a second device that is different from the first device.
  • 20. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises enabling communication between users who have previously viewed the particular primary content.
  • 21. The method of claim 1, wherein the particular primary content is provided to the user through at least one of: a broadcast channel, a cable channel, an on-demand delivery service, or playback from a local storage unit.
  • 22. The method of claim 1, further comprising creating one or more groups in a social network, wherein the one or more groups comprise a first group that includes the user and one or more of the additional users who have previously accessed, or are currently accessing, the particular primary content.
  • 23. The method of claim 22, wherein creating the one or more groups comprises at least one of: issuing an invitation to the user to join the one or more groups, allowing the user to request to join the one or more groups, or allowing the user to browse additional existing groups.
  • 24. The method of claim 22, wherein creating the one or more groups comprises providing privacy controls around the one or more groups to limit visibility of members of the one or more groups or behavior of the members of the one or more groups.
  • 25. The method of claim 22, wherein at least one of the one or more groups is formed around one or more of: a particular event, or a particular media content.
  • 26. The method of claim 1, wherein allowing enhanced viewing of the particular primary content comprises: remotely controlling presentation of the particular primary content on a plurality of user devices to allow synchronized presentation of the particular primary content on the plurality of user devices; and substantially simultaneously changing the presentation of the particular primary content, or an associated content, on the plurality of user devices.
  • 27. The method of claim 26, wherein change of the presentation of the particular primary content or an associated content is effected automatically based on an algorithm.
  • 28. The method of claim 27, wherein the algorithm determines the change in the presentation of the particular primary content or an associated content based on one or more of the following: information about the user and the identified subset of the additional users; or input about the particular primary content provided by the user or the identified subset of the additional users.
  • 29. A device, comprising: one or more processors; and one or more memory units comprising processor executable code that, when executed by the one or more processors, configures the device to: monitor activities of a user related to the user's accessing of a particular primary content and receive information extracted from embedded watermarks in the particular primary content that enables identification of one or more of: information specific to the particular primary content, a media distribution channel associated with the particular primary content, or a time of transmission of the particular primary content; analyze information acquired from the monitoring in conjunction with stored data related to additional users; identify a subset of the additional users that are associated with the user or with the particular primary content; and generate one or more signals to enable enhanced viewing of the particular primary content amongst the user and the identified subset of the additional users, wherein the one or more signals enable at least one of the identified subset of additional users to receive the particular primary content through a second type of communication channel while the user simultaneously receives the particular primary content through a first type of communication channel.
  • 30. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to: identify the particular primary content upon analysis of the information acquired from monitoring the activities; and identify users who are currently accessing the particular primary content.
  • 31. The device of claim 30, wherein the processor executable code, when executed by the one or more processors, further configures the device to identify users who have previously accessed the particular primary content.
  • 32. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to: receive fingerprints computed from one or more segments of the particular primary content; and compare the received fingerprints with information at a fingerprint database, wherein the fingerprint database comprises stored fingerprints associated with previously registered content.
  • 33. The device of claim 29, wherein activities of the user comprise one or more inputs provided by the user on a user interface of a user device.
  • 34. The device of claim 33, wherein the one or more inputs is received from one or more of: a remote control device, a keyboard, a mouse, a physical button, or a virtual button.
  • 35. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to analyze metadata associated with the particular primary content.
  • 36. The device of claim 29, wherein the information acquired from the monitoring comprises information indicative of one or more uniform resource locators (URLs) accessed by the user.
  • 37. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to perform a network data analysis related to the particular primary content.
  • 38. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to analyze a program guide information to determine an identity of the particular primary content.
  • 39. The device of claim 29, wherein the particular primary content comprises one or more of: an audio component, a video component, an image component, a text component, a mark-up language component, or a software component.
  • 40. The device of claim 29, wherein the one or more signals enable the user and one or more of the identified subset of additional users to communicate with one another.
  • 41. The device of claim 29, wherein the one or more signals: designate a lead user for navigating the particular primary content for a first period of time; allow the lead user to navigate through one or more segments of the particular primary content or an associated content during the first period of time; designate a new lead user for navigating the particular primary content for a second period of time; and allow the new lead user to navigate through one or more segments of the particular primary content or an associated content during the second period of time.
  • 42. The device of claim 29, wherein the additional users are associated with the user or with the particular primary content based on one or more of the following: a consensual action by the user and by the additional users, geographic proximity of the user and the additional users, or a shared interest between the user and the additional users.
  • 43. The device of claim 29, wherein the one or more signals enable enhanced viewing of the particular primary content by allowing supplemental content to be provided to the user and/or to one or more of the identified subset of additional users.
  • 44. The device of claim 43, wherein the supplemental content includes at least one of: a program information, an advertisement, a group purchasing opportunity, additional programming material, alternate programming material, or an interactive feature.
  • 45. The device of claim 43, wherein the one or more signals enable enhanced viewing of the particular primary content by allowing the supplemental content to be provided to a second device while the user is accessing the particular primary content on a first device.
  • 46. The device of claim 29, wherein the one or more signals enable communication between users who have previously viewed the particular primary content.
  • 47. The device of claim 29, further comprising a communication unit configured to allow communications with the user and with one or more of the additional users.
  • 48. The device of claim 29, wherein the processor executable code, when executed by the one or more processors, configures the device to create one or more groups in a social network, wherein the one or more groups comprise a first group that includes the user and one or more of the additional users who have previously accessed, or are currently accessing, the particular primary content.
  • 49. The device of claim 48, wherein the processor executable code, when executed by the one or more processors, configures the device to perform at least one of the following: issuing an invitation to the user to join the one or more groups, allowing the user to request to join the one or more groups, or allowing the user to browse additional existing groups.
  • 50. The device of claim 48, wherein the processor executable code, when executed by the one or more processors, configures the device to provide privacy controls around the one or more groups to limit visibility of members of the one or more groups or behavior of the members of the one or more groups.
  • 51. The device of claim 48, wherein the processor executable code, when executed by the one or more processors, configures the device to form at least one of the one or more groups based on one or more of: a particular event, or a particular media content.
  • 52. The device of claim 29, wherein the one or more signals: remotely control presentation of the particular primary content on a plurality of user devices to allow synchronized presentation of the particular primary content on the plurality of user devices; and substantially simultaneously cause the presentation of the particular primary content, or an associated content, to be changed on the plurality of user devices.
  • 53. The device of claim 52, wherein change of the presentation of the particular primary content or an associated content is effected automatically based on an algorithm.
  • 54. The device of claim 53, wherein the algorithm determines the change in the presentation of the particular primary content or an associated content based on one or more of the following: information about the user and the identified subset of the additional users; or input about the particular primary content provided by the user or the identified subset of the additional users.
  • 55. A system comprising the device of claim 29, and further comprising: a first user device configured to receive at least the one or more signals, and to receive and present the particular primary content; and a second user device configured to receive at least the one or more signals and the particular primary content, and to present the particular primary content in synchronization with presentation of the particular primary content on the first user device in accordance with the one or more signals.
  • 56. The system of claim 55, wherein at least one of the first or second user devices is further configured to receive a supplemental content to be presented in synchronization with the particular primary content.
  • 57. The system of claim 56, wherein the first user device and the second user device are configured to communicate with one another through at least one of: an instant messaging, a voice chat, a conference call or a video call.
  • 58. The system of claim 55, further comprising a third user device configured to receive a supplemental content, and to present the supplemental content in synchronization with the particular primary content.
  • 59. The system of claim 55, wherein at least one of the first and the second user devices is configured to extract one or more watermarks that are embedded in the particular primary content.
  • 60. The system of claim 55, wherein at least one of the first or the second user devices is configured to compute one or more fingerprints associated with one or more segments of the particular primary content.
  • 61. The system of claim 55, wherein at least one of the first or the second user devices is configured to process a metadata associated with the particular primary content to enable identification of the particular primary content.
  • 62. A computer program product, embodied on one or more non-transitory computer readable media, comprising: program code for monitoring activities of a user related to the user's accessing of a particular primary content; program code for analyzing information acquired from the monitoring in conjunction with stored data related to additional users; program code for identifying a subset of the additional users that are associated with the user or with the particular primary content; and program code for allowing enhanced viewing of the particular primary content amongst the user and the identified subset of the additional users, wherein allowing enhanced viewing of the particular primary content comprises enabling at least one of the identified subset of additional users to receive the particular primary content through a second type of communication channel while the user simultaneously receives the particular primary content through a first type of communication channel, and wherein the monitoring comprises receiving information extracted from embedded watermarks in the particular primary content that enables identification of one or more of: information specific to the particular primary content, a media distribution channel associated with the particular primary content, or a time of transmission of the particular primary content.
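By way of a rough, non-limiting illustration of the fingerprint-comparison step recited in claims 4 and 32, the sketch below matches received segment fingerprints against a small database of previously registered content. The short bit-string fingerprints and the Hamming-distance threshold are assumptions of the sketch; the claims do not prescribe a particular matching algorithm.

```python
# Illustrative fingerprint lookup: received segment fingerprints are
# compared against stored fingerprints of previously registered content.

FINGERPRINT_DB = {
    0b10110010: "registered-content-A",
    0b01101100: "registered-content-B",
}

def hamming(a, b):
    # number of differing bits between two fingerprints
    return bin(a ^ b).count("1")

def match_fingerprints(received, max_distance=1):
    # return the registered content whose stored fingerprint is closest to
    # any received segment fingerprint, within the allowed distance
    best = None
    for fp in received:
        for stored, name in FINGERPRINT_DB.items():
            d = hamming(fp, stored)
            if d <= max_distance and (best is None or d < best[0]):
                best = (d, name)
    return best[1] if best else None

print(match_fingerprints([0b10110011]))  # -> registered-content-A
```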
RELATED APPLICATIONS

This patent application claims the benefit of priority to U.S. Provisional Patent Application No. 61/695,938 filed on Aug. 31, 2012, which is incorporated herein by reference in its entirety for all purposes.

Related Publications (1)
Number Date Country
20140067950 A1 Mar 2014 US
Provisional Applications (1)
Number Date Country
61695938 Aug 2012 US