The technology of the present disclosure relates generally to audiovisual entertainment systems, and more particularly to a system and methods for dynamic video content modification based on user reactions.
The variety and depth of audiovisual entertainment systems has expanded substantially in recent years. High definition video, full surround sound, and other advances in video and audio technology have provided users a theater-like experience in the home.
Certain forms of audiovisual entertainment may be inherently user-centric, with the experience being tailored automatically for each user. For example, video games in particular tend to adapt to the user's manner of progress through the game. Video and audio content will change based on the user's actions within the game, resulting in each user having an individualized experience. Such individualized experience enhances the entertainment value.
Certain other forms of audiovisual entertainment, however, tend to be substantially fixed in nature at the conclusion of production. Movies and television shows, for example, currently are not significantly alterable while being watched. As such, all users tend to experience the same or substantially similar content, and without an individualized experience the enjoyment of such audiovisual entertainment can be limited.
Audiovisual entertainment, such as movies, television shows, user-produced content, and the like, can be obtained through a variety of means. For example, content can be obtained from traditional broadcast and cable networks, or streamed from pay-per-view or subscription services. Accessing content is not limited to traditional viewing devices, such as televisions in the home or computer devices, but also may be accomplished with a variety of mobile devices such as mobile telephones and other portable media playing devices. Audiovisual content also may be played back from a storage medium, such as a DVD, Blu-ray disc (BD), the hard disk drive of a digital video recorder (DVR), and the like.
As referenced above, certain forms of audiovisual entertainment, such as movies and television shows for example, are substantially fixed and provide a relatively uniform viewing experience to all viewers. Content providers, however, have provided a variety of limited mechanisms for adjusting or modifying the viewing experience. For example, viewers can fast-forward through undesirable scenes, or rewind to and replay highly enjoyable scenes. Content providers commonly provide a “scene selection” feature, particularly for relatively lengthy content such as movies, which permits a user to select and jump to a particularly desired scene. Certain content media, such as DVDs and BDs in particular, may have a “special features” or “bonus features” section. Such features may provide additional content associated with the main content, such as deleted scenes, commentary, alternative versions of the content or portions of the content (e.g., a “director's cut” or an alternative ending), and the like. These features provide certain selections for minor modifications of the otherwise fixed content, which can provide somewhat of an improvement to the viewing experience.
The above features for altering content, however, are deficient in that they do not provide a truly individualized, enhanced viewing experience. The extent of the modifications to the main content is relatively minor. In addition, the scope of the modifications and enhancements is essentially fixed by the content provider. Accordingly, users essentially are selecting from a fixed and finite set of content provider enhancements. Such enhancements, therefore, are not tailored to individual users. In addition, the above features are for the most part highly manual. A user must select a modification, enhancement, or additional features, which reduces the overall effectiveness in improving the viewing experience. Accordingly, conventional features for modifying substantially fixed audiovisual content, and movies and television shows in particular, are deficient in not providing a truly individualized and enhanced viewing experience.
To improve the consumer experience with audiovisual entertainment, there is a need in the art for an improved system and method for modifying audiovisual content for providing an individualized and enhanced viewing experience. The system described herein overcomes the deficiencies of conventional systems by providing a system and methods for dynamic modification of audiovisual content based on user reactions.
Accordingly, an aspect of the invention is a dynamic content modification system for dynamically modifying content playback based on a user reaction. The system includes a sensor module configured to receive a plurality of sensor measurements of at least one user, a user model database including a plurality of user models associated with content preferences, and a controller. The controller is configured to receive the sensor measurements and apply the sensor measurements to at least one user model to determine a prediction of a user reaction to content. The controller is configured to determine a content modification to a playback of the content based on the prediction.
According to an embodiment of the dynamic content modification system, the controller is further configured to cause a content reproduction device to play back the content in a manner that incorporates the determined content modification.
According to an embodiment of the dynamic content modification system, the sensor module comprises a plurality of sensor devices.
According to an embodiment of the dynamic content modification system, the plurality of sensor devices includes at least one of a face detection camera, a motion sensor, and a photoplethysmography measuring system.
According to an embodiment of the dynamic content modification system, the plurality of sensor devices includes at least one of a headset worn by the user, a probe sensor worn by the user, and a remote control device operated by the user.
According to an embodiment of the dynamic content modification system, the probe sensor includes at least one of a photoplethysmography measuring system and a galvanic skin response measuring sensor.
According to an embodiment of the dynamic content modification system, the sensor module further comprises a wireless interface, and the controller receives the sensor measurements via the wireless interface.
According to an embodiment of the dynamic content modification system, the at least one user is a plurality of users, and the sensor module is configured to distinguish sensor measurements associated with each respective user. The controller is configured to receive the sensor measurements for each respective user and apply the sensor measurements to a corresponding user model for each respective user to determine a prediction of a reaction to content for each respective user. The controller is configured to determine a content modification to a playback of the content based on one or more of the user predictions.
According to an embodiment of the dynamic content modification system, the user model database contains at least one of an individual model that is specific to a corresponding individual user, and a default model that is not specific to a corresponding individual user.
According to an embodiment of the dynamic content modification system, the individual model includes a user profile for the corresponding individual user.
According to an embodiment of the dynamic content modification system, the individual model is generated automatically based on a learning process utilizing a usage history of a plurality of default models.
According to an embodiment of the dynamic content modification system, the controller is configured to determine the content modification based on information contained in a media file associated with the content.
According to an embodiment of the dynamic content modification system, the controller is configured to determine the content modification based on information contained in metadata of the media file associated with the content.
According to an embodiment of the dynamic content modification system, the system further includes a wireless interface and a server storing the user model database. The controller is configured to apply the sensor measurements to the at least one user model by accessing the at least one user model from the server over the wireless interface.
According to an embodiment of the dynamic content modification system, the controller is configured to determine the content modification based on information contained in metadata of a media file associated with the content, the server includes a metadata database containing the content modification information, and the controller is configured to determine the content modification by accessing the metadata database from the server over the wireless interface.
According to an embodiment of the dynamic content modification system, the system further includes a wireless interface, and a server storing the user model database and including a server controller. The server controller is configured to receive the sensor measurements over the wireless interface and apply the sensor measurements to at least one user model to determine a prediction of a user reaction to content. The server controller is configured to determine a content modification to a playback of the content based on the prediction and to transmit the determined content modification to the controller over the wireless interface. The controller is further configured to cause a content playback device to play back the content in a manner that incorporates the determined content modification.
Another aspect of the invention is a content reproduction system including the described content modification system and a content reproduction device. The controller is further configured to cause the content reproduction device to play back the content in a manner that incorporates the determined content modification.
Another aspect of the invention is a method of dynamically modifying content playback based on a user reaction. The method includes the steps of receiving a plurality of sensor measurements of at least one user, applying the sensor measurements to at least one user model to determine a prediction of a user reaction to content, and determining a content modification to a playback of the content based on the prediction.
According to one embodiment of the method, the method further includes causing a content reproduction device to play back the content in a manner that incorporates the determined content modification.
According to one embodiment of the method, the at least one user is a plurality of users. The method further includes receiving the plurality of sensor measurements from the plurality of users, distinguishing sensor measurements associated with each respective user, applying the sensor measurements of each respective user to a corresponding user model for each respective user to determine a prediction of a reaction to content for each respective user, and determining a content modification to a playback of the content based on one or more of the user predictions.
These and further features of the present invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the invention may be employed, but it is understood that the invention is not limited correspondingly in scope. Rather, the invention includes all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
It should be emphasized that the terms “comprises” and “comprising,” when used in this specification, are taken to specify the presence of stated features, integers, steps or components but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
The system described herein overcomes the deficiencies of conventional systems by providing a system and methods for dynamic modification of audiovisual content based on user reactions.
The system generally includes the following components, which are described in more detail below. A sensor module detects various sensor measurements from a user that can be indicative of a user emotional state in reaction to viewed content, which can include a variety of physical parameters such as facial expressions and features, heart rate, blood pressure, pupil size, etc. A controller or processing device is configured to receive the various sensor inputs, and determines the state or emotional condition of the user in reaction to the viewed content. For example, a combination of high heart rate, high blood pressure, and small pupil size may be associated with an excited state, whereas the reverse may be associated with a relaxed or even bored state. The sensor module also may include face recognition features, which, in addition to emotional state determination, can be employed to determine a user identity (and thus such user features as age and gender).
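For illustration only, the described mapping from raw sensor measurements to a coarse emotional state may be sketched as follows. The thresholds, state labels, and function name are hypothetical assumptions introduced for this sketch, and are not values prescribed by the disclosed system.

```python
# Illustrative sketch: mapping raw sensor measurements to a coarse
# emotional state. All thresholds and labels are hypothetical.

def classify_emotional_state(heart_rate_bpm, systolic_bp, pupil_size_mm):
    """Return a coarse emotional-state label from three measurements."""
    # High heart rate and blood pressure with small pupils may indicate
    # an excited state; the reverse, a relaxed or even bored state.
    arousal = 0
    if heart_rate_bpm > 90:
        arousal += 1
    if systolic_bp > 130:
        arousal += 1
    if pupil_size_mm < 3.0:
        arousal += 1
    if arousal >= 2:
        return "excited"
    if arousal == 0:
        return "relaxed"
    return "neutral"
```

In practice the determination would weigh many more parameters (facial expression, body position, perspiration, and the like), but the principle of reducing measurements to a state label is the same.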
Based on the determination of the user emotional state in reaction to viewed content, the controller or processing device is configured to predict whether a user will react favorably versus unfavorably to upcoming content. The prediction occurs as follows. The controller applies the emotional state determination based on data of the sensor inputs to one or more user models. Each model constitutes a database of entries that relate the user emotional reaction to user preferences so as to permit a prediction of a user reaction to upcoming content. For frequent users, the user models each may be individualized to particular users. For example, in a home a father, mother, and each child may be associated with his or her own individual model. For relatively infrequent users, such as a house guest, a default model may be employed based on more generalized default associations between emotional states and likely user reactions to upcoming content.
Once a user reaction to upcoming content is predicted, the controller or processing device accesses the audiovisual content so as to determine a modification to the audiovisual content. For example, associations between a user state or condition and a content modification may be stored in the metadata of the audiovisual content. For example, if a prediction is made that a user is favorably excited about a scene, the metadata may include an associated modification entry to extend the scene. If, however, a prediction is made that a scene would be inappropriate to a user (such as based on the user age), the metadata may include an associated modification entry to delete the scene. More details concerning the modification selection process are described below. Once a modification is determined, the controller or processing device causes the modification to be applied to the audiovisual content to provide a viewing experience that is individualized to the user based on the user reaction to content.
In accordance with the above general description, the method overview may begin at step 100, at which a sensor module detects a plurality of user sensory measurements. At step 110, a controller determines a user emotional state based on an analysis of such sensory measurements. At step 120, the controller associates the determined user emotional state with a user model. At step 130, the controller predicts a user reaction to upcoming content based on the determined user emotional state as applied to the appropriate user model. At step 140, the controller accesses the metadata of the audiovisual content being modified, wherein the metadata includes correspondences between user states and predictions, and content modifications. At step 150, the controller determines a content modification(s) based on the metadata, and at step 160, the controller causes the content modification(s) to be applied to the audiovisual content playback. In this manner, the audiovisual content is dynamically modified based on user reactions to the content so as to provide a highly individualized user viewing experience.
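The flow of steps 100 through 160 can be sketched end to end as follows. The function names and the simplified state, model, and metadata structures are stand-ins introduced for this sketch only; each stand-in collapses an analysis that the system would perform in far greater detail.

```python
# Hypothetical sketch of the step 100-160 flow. Each function is a
# simplified stand-in for the corresponding analysis described above.

def determine_state(measurements):
    # Step 110: reduce raw measurements to a state label (simplified).
    return "excited" if measurements.get("heart_rate", 0) > 90 else "calm"

def predict_reaction(state, user_model):
    # Steps 120-130: an excited state on preferred content predicts a
    # favorable reaction to comparable upcoming content.
    if state == "excited" and user_model.get("likes_action"):
        return "favorable"
    return "unfavorable"

def select_modification(prediction, metadata):
    # Steps 140-150: metadata maps predictions to modification entries.
    return metadata.get(prediction, "none")

measurements = {"heart_rate": 105}          # step 100: sensor input
user_model = {"likes_action": True}
metadata = {"favorable": "extend_scene", "unfavorable": "shorten_scene"}
modification = select_modification(
    predict_reaction(determine_state(measurements), user_model), metadata)
# Step 160: the controller would then apply `modification` to playback.
```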
As stated above, the method of
As also depicted in
As further depicted in
As depicted in
In exemplary embodiments, one of the internal sensing devices may be a face detection camera 32.
In exemplary embodiments, the face detection camera 32 may be a camera array of a plurality of cameras, wherein each camera of the array is dedicated to a particular portion or aspect of face detection. For example, a more precise camera may be dedicated to pupil detection, while another camera is dedicated to detecting broader facial features and movements. In another embodiment, the face detection camera 32 may be more broadly configured to perform body detection. For example, such a camera can detect whether a person is in a huddled position (such as may occur when one is afraid), vigorously moving (such as may occur in connection with strong laughter), or lying down (such as may occur when one is tired and bored).
Referring again to
The plurality of sensing devices of the sensor module 30 further may include a light emitter and a corresponding light sensor (e.g., respectively sensors 36 and 38). The light sensor 38 may be configured as another digital camera. As is known in the art, a process known as photoplethysmography may be employed to determine cardiovascular vital statistics in a non-invasive manner. In such a process, a light source or light emitter is configured to emit light of a wavelength suitable for detecting blood flow. The light is emitted toward a body tissue, and light transmitted in turn from the body tissue is detected and analyzed as a series of images of the living tissue as captured by the light sensor or camera. By analyzing how the tissue absorbs, reflects, and/or transmits the light received from the light emitter in the obtained images, vital statistics can be determined. For example, photoplethysmography may be employed to determine heart rate and blood pressure, which can vary based on differing emotional states of the user.
In accordance with such features, the light emitter 36 and light sensor/camera 38 may be part of a photoplethysmography system that detects blood flow through a particular body part of the user, such as through the major arteries of a user's neck. In particular, the light emitter 36 may emit a light pulse, and the light sensor/camera 38 may generate blood flow images of the appropriate artery. The processing device 40 of the sensor module 30 may perform the photoplethysmography analysis to determine heart rate and blood pressure from the images obtained by the camera 38. Relatedly, the photoplethysmography components may include a long wave infrared sensor from which body temperature can be determined.
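The heart-rate portion of such a photoplethysmography analysis can be illustrated with a simple peak-counting sketch over a sampled intensity signal. The peak-counting method, the synthetic signal, and all names below are illustrative assumptions; a real analysis by the processing device 40 would operate on camera-derived images with considerably more filtering.

```python
# Illustrative heart-rate estimate from a photoplethysmography-style
# intensity signal: count peaks in sampled brightness over time.
import math

def estimate_heart_rate(samples, sample_rate_hz):
    """Estimate beats per minute by counting local maxima above the mean."""
    mean = sum(samples) / len(samples)
    peaks = 0
    for i in range(1, len(samples) - 1):
        # A peak is a sample above the mean that exceeds its left
        # neighbor and is not exceeded by its right neighbor.
        if samples[i] > mean and samples[i] > samples[i - 1] and samples[i] >= samples[i + 1]:
            peaks += 1
    duration_s = len(samples) / sample_rate_hz
    return peaks * 60.0 / duration_s

# Synthetic stand-in signal: 10 seconds of a 1.2 Hz (72 bpm) pulse
# sampled at 30 Hz, mimicking per-frame brightness of an artery image.
rate = 30
signal = [math.sin(2 * math.pi * 1.2 * t / rate) for t in range(10 * rate)]
bpm = estimate_heart_rate(signal, rate)
```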
Referring again to
As seen in
Another external sensing device (e.g., sensing device 52) may be a probe 52. Such probes typically can be worn on the user's finger and may include a light emitter and light sensor suitable for taking photoplethysmography measurements comparably as referenced above. Photoplethysmography measurements may then be transmitted from the probe 52 wirelessly to the wireless interface 42 of the sensor module 30. As described above, the processing device 40 of the sensor module 30 may perform a photoplethysmography analysis to determine such parameters as heart rate and blood pressure from the referenced measurements. In exemplary embodiments, the probe 52 may include a galvanic skin response (GSR) measurement sensor 54. GSR measurements capture changes in the electrical conductivity about the surface of the skin, particularly caused by varying degrees of perspiration. As referenced above, the degree of perspiration also may vary with different emotional states.
The use of sensing devices worn by the user has certain drawbacks in terms of adding potential discomfort to the user. On the other hand, particularly as to the measurement of biological parameters like blood flow and electrical biosignals, measurements are improved when the sensing devices are adjacent to the appropriate body parts. The use of worn sensing devices versus sensing devices incorporated into a more remote electronic device can represent a balance between user comfort and efficient measurement processes. Accordingly, it will be appreciated that the above configuration of sensing devices represents an example, and the precise number, nature, and configuration of the sensing devices may be varied substantially. In this vein more broadly, other suitable additional or alternative sensing devices, and combinations thereof, may be employed.
Another external sensing device (e.g., sensing device 56) may be a user input device, such as a remote control or like device. User inputs to the remote control 56 also may be indicative of a user's emotional reaction with respect to a portion of audiovisual content. For example, if a user routinely fast-forwards through a particular scene or category of scene, such an input may be indicative that the particular type of scene is disfavored. Conversely, if a user repeatedly plays back a particular scene or category of scene, such an input may be indicative that the particular type of scene is favored. In such manner, user inputs to an input device such as remote control device 56 also may be considered in determining a user's emotional state while watching audiovisual content.
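The inference from remote-control inputs can be illustrated as a simple tally over input events. The event representation, scoring rule, and names below are hypothetical; an actual system might weigh input frequency, recency, and scene context.

```python
# Hypothetical tally of remote-control inputs per scene category:
# repeated fast-forwards suggest a disfavored category, repeated
# replays a favored one.

def infer_from_inputs(events):
    """events: list of (category, action) pairs, action in {"ff", "replay"}."""
    score = {}
    for category, action in events:
        # Replays count toward a category, fast-forwards against it.
        score[category] = score.get(category, 0) + (1 if action == "replay" else -1)
    return {c: ("favored" if s > 0 else "disfavored")
            for c, s in score.items() if s != 0}
```

For example, a user who replays car-chase scenes twice but fast-forwards through a romance scene would yield a "favored" car-chase category and a "disfavored" romance category.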
In certain situations, there may be multiple viewers and it is desirable to distinguish the sensor measurements for each particular viewer. The processing device 40 of sensor module 30, therefore, may be configured to distinguish which sensor measurements pertain to which respective viewer. These viewer groupings become part of the sensor measurements being gathered by sensor module 30. In exemplary embodiments, therefore, the face detection capabilities may include determining a user identity. As further described below, frequent users, such as family members within a household, may be associated with user models particular to each respective user. Accordingly, sensor measurements may be associated with a recognized identity. Even when an identity is not recognized as to a particular group of sensor measurements, such sensor measurements still may be grouped as to a corresponding viewer and denoted by an unspecified or “guest” identity that would be associated with a particular grouping of sensor measurements for a particular viewer.
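The per-viewer grouping described above can be sketched as follows; the measurement records and identity tags are hypothetical stand-ins for the output of the face detection capabilities.

```python
# Hypothetical sketch of grouping sensor measurements per viewer.
# Measurements with a recognized identity are grouped under that
# identity; unrecognized viewers fall back to a "guest" grouping.

def group_by_viewer(measurements):
    groups = {}
    for m in measurements:
        identity = m.get("identity") or "guest"
        groups.setdefault(identity, []).append(m)
    return groups

readings = [
    {"identity": "John", "heart_rate": 102},
    {"identity": None, "heart_rate": 75},    # unrecognized house guest
    {"identity": "John", "pupil_mm": 2.8},
]
grouped = group_by_viewer(readings)
```

Each resulting group can then be applied to the corresponding user model, with the "guest" group applied to a default model as described below.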
Referring back to
As referenced above, the content modification system 20 also includes a user model database 60. The user model database may be stored in any conventional computer readable medium or memory as are known in the art, such as a volatile or non-volatile memory, hard drive or hard disk, and the like. For transferring between different modification systems, the user model database also may be stored on a removable storage device, such as a USB device, optical storage disc or device, flash memory or memory card, and the like.
Generally, the controller 22 is configured to execute the content modification application 24 to combine information from the user model database 60 with the results of the sensor measurements of the sensor module 30 to predict a user attitude toward the audiovisual content, such as whether the content being viewed is liked or disliked, and to what degree. In other words, the controller is configured to receive the sensor measurements and apply the sensor measurements to at least one user model to determine a prediction of user reaction to content.
For example, the system may determine that a viewer is afraid while watching a particular movie scene. The determination of a “fear” reaction, however, by itself does not predict whether the scene, as well as comparable upcoming content, is (or will be) liked or disliked. If a particular viewer enjoys horror movies, a “fear” reaction would be indicative of a favorable reaction to the scene, insofar as a fear reaction to a horror movie is the desired effect of such content. In contrast, if a user dislikes horror-type content, or if a fear reaction is determined as to a scene that perhaps is not intended to be scary (e.g., a young viewer becomes afraid during a scene that an adult actually may find humorous), the “fear” reaction would be indicative of an unfavorable attitude toward the content. As can be seen, therefore, a given emotional reaction may have a different meaning depending upon the user and content. The user model database 60, therefore, is employed in combination with the sensor measurements so as to predict whether upcoming content will be considered favorably or unfavorably based on the user reaction to the content being viewed. As further explained below, based on such predictions, the content may be modified so as to be tailored to the user's preferences in accordance with the user's reactions.
As seen in
The individual model entries 66 further may include a section 70 of preferences and a section 72 of non-preferences. For example, John has a preference for action, horror, and comedy, while he has a non-preference of romance and “tear-jerker” content. Like John, Betty prefers action movies as well, but also likes romance. She has non-preferences for gory violence and profanity. Content characteristics that are not deemed either preferences or non-preferences may be considered by the system to be neutral characteristics, in which case likes versus dislikes would be determined more specifically by the user reaction and movie genre. It will be appreciated that the content of the individual model entries of the individual model database 62 can be far more extensive and varied than depicted. The intent of the individual models is to capture the preferences and non-preferences of specific users. The more detailed the individual models, the more the content can be tailored to the user so as to maximize enjoyment of the content.
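An individual model entry of this kind can be represented, for illustration only, as a small record with preference and non-preference sections; the entry below mirrors the John example, and the lookup function is a hypothetical simplification.

```python
# Hypothetical representation of an individual model entry (mirroring
# the John example): a preferences section and a non-preferences
# section, with everything else treated as neutral.

john = {
    "preferences": {"action", "horror", "comedy"},
    "non_preferences": {"romance", "tear-jerker"},
}

def classify_genre(model, genre):
    """Classify a content characteristic against a user model."""
    if genre in model["preferences"]:
        return "preferred"
    if genre in model["non_preferences"]:
        return "non-preferred"
    # Neutral characteristics are decided by the measured user
    # reaction and the movie genre, as described above.
    return "neutral"
```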
The preferences and non-preferences may be generated by a variety of mechanisms. For example, they may be inputted manually by logging into the system under a particular user profile. Additionally, preferences and non-preferences may be automatically generated based on user history of emotional reactions to like content. In exemplary embodiments, the content modification system may be linked to an external network such as the Internet or a cellular network. Such links may include links to social networking sites such as Facebook, Twitter, LinkedIn, and the like. Preferences and non-preferences, in the form of “likes” and “dislikes” and comparable indications on such sites, may be accessed by the content modification system and incorporated into a corresponding individual model entry 66 for the user. Such preferences or non-preferences may be content specific, even scene specific within content, or may be more general indications of preferences or non-preferences by genre and the like. Additional network features are explained in more detail below.
In addition to preferences and non-preferences, the individual model entries 66 may include a prohibitions section 73. The prohibitions section 73 may include outright prohibitions from viewing certain kinds of content. For example, a user may employ the prohibitions section 73 to exert parental controls over a minor aged user to preclude such minors from viewing age-inappropriate content, such as graphic violence, inappropriate sexual content, or the like. In the above examples of
The individual models 62 are particularly suited to common users of the particular system, such as members of a household. Reactions of non-common or “guest” viewers, however, also can be determined by the system. For non-common users, as referenced above, the user model database 60 also may include default models 64.
Returning to the individual model database 62, such models may be generated manually by a user utilizing any suitable input interface, such as by menu selection and/or key entries. In one embodiment, however, the individual models 62 may be generated automatically based on a learning process utilizing a usage history of a plurality of default models 64. For example, suppose a user is consistently watching content that invokes usage of high action and horror models, and favorable emotional reactions are measured for high action and scary scenes. In contrast, the romantic model is rarely invoked, and when romance scenes are present in action movies, the user's emotional reaction is negative. The system will build an individual model for that user based on action and horror preferences, with romance categories being deemed non-preferences. In this manner, the system can perform in a fully automated manner with little significant user input.
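This learning process can be illustrated with a sketch that averages measured reactions per genre and promotes consistently positive genres to preferences. The history format, score range, and thresholds are hypothetical assumptions; an actual learning process could be substantially more sophisticated.

```python
# Hypothetical sketch of deriving an individual model from a usage
# history of default models: genres with consistently positive
# measured reactions become preferences, consistently negative ones
# become non-preferences. The 0.5 thresholds are illustrative.

def learn_individual_model(history):
    """history: list of (genre, reaction_score) pairs, score in [-1, 1]."""
    totals, counts = {}, {}
    for genre, score in history:
        totals[genre] = totals.get(genre, 0.0) + score
        counts[genre] = counts.get(genre, 0) + 1
    model = {"preferences": set(), "non_preferences": set()}
    for genre in totals:
        avg = totals[genre] / counts[genre]
        if avg > 0.5:
            model["preferences"].add(genre)
        elif avg < -0.5:
            model["non_preferences"].add(genre)
    return model

# Usage mirroring the example above: favorable reactions to action and
# horror scenes, negative reactions to romance scenes.
history = [("action", 0.9), ("action", 0.8), ("horror", 0.7),
           ("romance", -0.8), ("romance", -0.6)]
model = learn_individual_model(history)
```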
As referenced above, the controller 22 combines the emotional reaction of the user as determined from the sensor measurements, with the content of the appropriate user model, so as to generate a prediction as to whether the user will consider upcoming content favorable or unfavorable. The prediction generation is essentially a balance of the various factors of the system as applied to specific content being viewed. For example, a user model may have a preference for high action while romance is not preferred. However, a user may have a strong favorable emotional reaction to a romance scene within an action movie. Accordingly, the system may predict a user will have a favorable reaction to comparable upcoming romance scenes in this action movie, essentially determining that the weight of the emotional reaction supersedes the negative preference in the user profile. In this manner, the prediction system is highly flexible and specific to particular viewing circumstances.
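This balancing can be sketched, for illustration only, as a weighted combination in which a strong measured reaction can outweigh a stored non-preference. The weights and score range are hypothetical assumptions, not values prescribed by the system.

```python
# Hypothetical weighting sketch: a sufficiently strong measured
# reaction can supersede a stored non-preference, as in the romance
# scene example. The 0.7 and 0.3 weights are illustrative only.

def predict_upcoming(reaction_strength, is_preference):
    """reaction_strength in [-1, 1]; positive means a favorable reaction."""
    profile_score = 0.3 if is_preference else -0.3
    combined = 0.7 * reaction_strength + profile_score
    return "favorable" if combined > 0 else "unfavorable"
```

Under these weights, a strongly favorable reaction (e.g., 0.9) to a non-preferred genre still yields a "favorable" prediction, while a mild reaction (e.g., 0.1) to the same genre does not.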
Based on the prediction as to whether an upcoming content portion will be favored or disfavored, the controller is configured to apply a modification to the content portion while the content portion plays. In particular, the controller is configured to determine the content modification based on information contained in a media file associated with the content. In exemplary embodiments, content modification instructions are stored as part of the media file itself. For example, content modification instructions may be stored as part of the metadata of a media file, such as the metadata of an audiovisual file.
At the outset, as referenced above, the controller 22 also may extract metadata information as part of the prediction process. For example, whether a detected viewer reaction is commensurate or appropriate to the genre of the content (e.g., action), may be a pertinent factor in whether a favorable or unfavorable prediction is made.
In the example of
As another example in
The modification instructions 84d of the movie file metadata may also include a “linked scenes” section 90. A linked scene is a second scene that must also be modified for consistency when a first scene is modified. For example, if content is added in which a particular character is killed, for consistency such character must not be present in any subsequent scenes. The controller 22 may read any linked scenes identifications from the movie file metadata, and apply commensurate content modifications to any linked scene such that the content remains consistent throughout the entirety of the viewing. In this particular example, a linked scene is identified by a time of when the scene falls within the content. For example, the linked scene entry of “1:05:42” indicates a linked scene begins at one hour, five minutes, and forty-two seconds into the content. It will be appreciated that other forms of linked scene identification may be employed.
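The conversion of such a linked-scene timecode into a playback offset can be illustrated as follows; the function name and the seconds-based offset representation are assumptions of this sketch.

```python
# Illustrative parsing of a linked-scene timecode such as "1:05:42"
# into a playback offset in seconds, so the controller can locate the
# linked scene and apply a consistent modification there.

def timecode_to_seconds(timecode):
    """Convert an "H:MM:SS" timecode string to total seconds."""
    hours, minutes, seconds = (int(part) for part in timecode.split(":"))
    return hours * 3600 + minutes * 60 + seconds
```

For the “1:05:42” entry above, this yields an offset of 3942 seconds into the content.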
The controller is configured to determine a content modification to a playback of the content based on the prediction. In the exemplary operation of the system being described, the system controller 22 determines the appropriate modification instructions and extracts the modification instructions 84d from the metadata of the media file. The controller is further configured to cause a content reproduction device to play back the content in a manner that incorporates the determined content modification. As referenced above, such modified playback may include extracting additional content portions from the content modifications section 86 of the media file, and incorporating such additional content into the playback. Referring back to
The following description sets forth certain examples of the operation of the content modification system 20 in accordance with the above. It will be appreciated that the following examples are provided for illustrative purposes and not intended to limit the scope of the invention. Numerous variations of the described examples may be employed.
In a first example, John is associated with an individual model as depicted in
The controller then accesses John's individual model in the user model database. John's model indicates that action is one of his preferences. The controller also may read from the metadata associated with the media file that the scene is an action scene, which confirms that John's excited state is an appropriate reaction to the scene. Based on John's excited state, his preference for action, and the nature of the scene being an action scene, the controller predicts that John would enjoy additional high action content. The controller, therefore, extracts appropriate content modification instructions from the metadata of the movie file. Based on such instructions, the controller causes the scene to play back with increased volume of the music and special effects. In addition, the modification instructions indicate that five minutes of content are available to add to the scene, which are contained in the content modifications portion of the movie file. The controller, in turn, causes the upcoming action to be modified in playback to have increased music and special effects volume, and the additional five minutes of content are added to the playback of the scene.
In addition, the controller reads from the media file metadata that there are linked scenes based on the additional content. Additional buildings are destroyed by virtue of the added five minutes of content, so future scenes of the same locality are modified to incorporate such destruction and the content remains consistent in view of the modifications.
In a second example, Betty is watching the same action movie instead of John. Betty is associated with an individual model as depicted in
In a third example, both Betty and John are watching the action movie. Accordingly, the processor of the sensor module is configured to delineate the sensor measurements of John versus the sensor measurements of Betty. In one exemplary embodiment, one of the users can be deemed the “lead user”, in this case John or Betty, and the controller will determine and process content modifications based on the reactions of the lead user.
Preferably, however, the controller is configured to determine modifications based on the combined reactions of both users. In such circumstances, the controller is configured to receive the sensor measurements for each respective user and apply the sensor measurements to a corresponding user model for each respective user to determine a prediction of a reaction to content for each respective user. In this manner, the controller balances the reactions of the multiple users so as to maximize the enjoyment of the group of viewers as a whole. For example, both John and Betty generally have a preference for action. Accordingly, both users may enjoy increased volume of the music and special effects, and also may enjoy an extension of the action scene. In consideration of Betty's revulsion to particularly graphic and gory content, however, such highly graphic content is toned down in upcoming portions of the content (e.g., a less gory version is played back) to accommodate this specific aspect of Betty's user model. As a result, the content is modified in a manner that attempts to maximize the enjoyment of the users as a group: the music and special effects are enhanced and the action scene is extended, but the level of gore is reduced.
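One possible way to sketch the group balancing in the third example is shown below; the per-aspect scores, the veto threshold, and the function names are all hypothetical illustrations of the principle that a strong objection by any one viewer (such as Betty's reaction to gore) tones an aspect down even when the group otherwise favors it:

```python
# Illustrative sketch of balancing multiple viewers: each user's model
# yields a per-aspect enjoyment score in -1..1, and an aspect is enhanced
# only if the group average is positive and no viewer strongly objects.
def group_modifications(per_user_scores, veto_threshold=-0.5):
    """per_user_scores: {user: {aspect: score}}.
    Returns (aspects to enhance, aspects to tone down)."""
    aspects = {a for scores in per_user_scores.values() for a in scores}
    enhance, tone_down = [], []
    for aspect in sorted(aspects):
        values = [scores.get(aspect, 0.0) for scores in per_user_scores.values()]
        if min(values) <= veto_threshold:
            tone_down.append(aspect)       # one strong objection tones it down
        elif sum(values) / len(values) > 0:
            enhance.append(aspect)         # group as a whole favors it
    return enhance, tone_down

scores = {
    "John":  {"action": 0.8, "gore": 0.4},
    "Betty": {"action": 0.7, "gore": -0.9},  # Betty strongly objects to gore
}
print(group_modifications(scores))  # (['action'], ['gore'])
```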
In a fourth example, Bobby is ten years old. His parents have entered into his user model a prohibition from viewing content beyond a minimal sexual nature. Bobby begins watching a movie that in most respects may be considered appropriate viewing for Bobby, but two scenes are of a sexual nature that exceeds the scope of the prohibition in Bobby's user model. When Bobby begins viewing the movie, the controller identifies Bobby as the viewer based on the images detected by the face detection camera. The controller then accesses the prohibition in Bobby's user model, and detects the sexual nature of an upcoming scene from the metadata of the movie file. The combined “sexual nature/prohibition” is associated in the metadata with a modification instruction to delete the scene. The controller reads such modification instruction from the metadata and causes the playback of the movie to proceed without the sexual content.
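The prohibition mechanism of the fourth example might be sketched as a simple playlist filter; the rating fields and numeric scale below are invented for illustration and are not part of the disclosure:

```python
# Sketch of the fourth example: scenes whose rated content exceeds a
# prohibition stored in the viewer's user model are dropped from playback.
# The "prohibitions" and "ratings" field names are hypothetical.
def filter_playlist(scenes, user_model):
    """Keep only scenes whose rated aspects stay within the user's limits."""
    limits = user_model["prohibitions"]  # e.g., {"sexual_content": 1}
    return [
        s for s in scenes
        if all(s["ratings"].get(aspect, 0) <= limit for aspect, limit in limits.items())
    ]

bobby = {"prohibitions": {"sexual_content": 1}}  # minimal sexual content only
movie = [
    {"id": 1, "ratings": {"sexual_content": 0}},
    {"id": 2, "ratings": {"sexual_content": 3}},  # exceeds the prohibition
    {"id": 3, "ratings": {"sexual_content": 0}},
]
print([s["id"] for s in filter_playlist(movie, bobby)])  # [1, 3]
```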
As indicated above, the described examples are provided for illustrative purposes of the operation of the content modification system 20, and are not intended to limit the scope of the invention. Numerous variations of the described examples may be employed.
In the embodiments described above, the content modification system is described as being a unitary or substantially unitary device, in which the various components are present in a single location, such as the living room of a house. In other exemplary embodiments, one or more components of the content modification system may be external and accessed over a long range network, such as over a cellular network or the Internet.
Referring to
In one embodiment, the server 94 may operate as a content modification server to perform content modification function as part of the content modification system.
In this networked embodiment, the server 94 also may include a controller 97 and wireless interface 98. The sensor measurements may be received by the wireless interface 42 of the sensor module 30, and in turn be transmitted from the wireless interface 42 to the server wireless interface 98 of the server. The server controller 97 is configured to perform the analysis described above utilizing the networked user model database 95 so as to determine appropriate modification instructions from the metadata database 96. The modification instructions may then be transmitted back to the local components of the content modification system 20 via the wireless interface 42, and the controller 22 may implement the received content modification instructions to cause a modified playback of the content.
In networked embodiments, functionality may be distributed over the network components in different ways. In one exemplary embodiment, the controller is located locally, such as in a user's home, as in the above descriptions. The controller is configured to apply the sensor measurements to the at least one user model by accessing the at least one user model from the server over the wireless interface. The controller further is configured to determine the content modifications based on information contained in metadata of a media file associated with the content as accessed also via the wireless interface 43 (which may be a unitary component with the wireless interface 42 of the sensor module described above) and wireless interface 98 of the server. The server includes the metadata database containing the content modification information, and the controller is configured to determine the content modification by accessing the metadata database from the server over the wireless interfaces 43 and 98 of the networked components.
In another exemplary embodiment, the controller functionality may be performed at the server level by the server controller 97. The server controller is configured to receive the sensor measurements over the wireless interface and apply the sensor measurements to at least one user model stored by the server to determine the prediction of a user reaction to content. The server controller further is configured to determine a content modification to a playback of the content based on the prediction and to transmit the determined content modification to the local controller in the user's home over the wireless interface. The local controller is then configured to cause a content playback device to play back the content in a manner that incorporates the determined content modification.
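The server-side variant just described can be sketched, under stated assumptions, as a single request handler: the sensor measurements arrive over the network, the server controller applies them to the stored user model and metadata database, and only the resulting instruction is returned to the home. The message fields, the simplistic favorability rule, and all names are hypothetical; the transport (e.g., the wireless interfaces 43 and 98) is omitted:

```python
# Minimal sketch of the server-side embodiment: the server controller (97)
# receives measurements, consults the networked user model database (95)
# and metadata database (96), and returns a modification instruction to
# the local controller (22). The favorability rule is a placeholder.
def server_handle(measurements, user_id, user_models, metadata_db):
    """Runs at the server; returns the instruction sent back to the home."""
    model = user_models[user_id]
    favorable = measurements["arousal"] * model["action_preference"] > 0
    key = "on_favorable" if favorable else "on_unfavorable"
    return metadata_db[measurements["scene_id"]][key]

user_models = {"john": {"action_preference": 1.0}}
metadata_db = {"scene_12": {"on_favorable": {"music_volume_delta": 6},
                            "on_unfavorable": {"trim_seconds": 90}}}
msg = {"scene_id": "scene_12", "arousal": 0.8}
print(server_handle(msg, "john", user_models, metadata_db))  # {'music_volume_delta': 6}
```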
Although the invention has been shown and described with respect to certain preferred embodiments, it is understood that equivalents and modifications will occur to others skilled in the art upon the reading and understanding of the specification. The present invention includes all such equivalents and modifications, and is limited only by the scope of the following claims.
Number | Date | Country
---|---|---
61636855 | Apr 2012 | US