Embodiments of the present invention will be described by way of exemplary embodiments, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which:
Methods, apparatuses and storage medium associated with multi-sensorial emotional expression are disclosed herein. In various embodiments, a large display screen may be provided to display images with associated emotion classifications. The images may, e.g., be photos uploaded to Instagram or other social media by participants/viewers, analyzed and assigned the associated emotion classifications. Analysis and assignment of the emotion classifications may be made based on, e.g., sentiment analysis or matching techniques.
The large display screen may be touch sensitive. Individuals may interact with the touch sensitive display screen to select an emotion classification for an image in a multi-sensorial manner, e.g., by touching coordinates on a mood map projected at the back of an image with visual and/or audio accompaniment. The visual and audio accompaniment may vary in response to the user's selection to provide real time multi-sensorial response to the user. The initially assigned or aggregated emotion classification may be adjusted to integrate the user's selection.
Emotion classification may be initially assigned to photos based on sentiment analysis of captions; when a caption is not available, various matching techniques may be applied. For example, matching techniques may include, but are not limited to, association with other images by computer vision (such as association of a photo with other images having the same or similar colors that were captioned and classified), and association based on time and place (such as association of a photo with other images taken/created in the same context (event, locale, and so forth) that were captioned and classified).
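By way of a non-limiting illustration only, the sketch below shows one way such a caption-first, match-as-fallback classification might be ordered. The Photo structure, the helper functions (caption_sentiment, nearest_by_color, nearest_by_context), and the classification labels are hypothetical stand-ins, not part of any specific embodiment.

```python
# Non-limiting sketch: caption sentiment first, then color matching, then
# time/place matching. Helper logic is a placeholder for real NLP / vision models.
from dataclasses import dataclass
from typing import Optional, List, Tuple


@dataclass
class Photo:
    caption: Optional[str] = None
    dominant_color: Optional[Tuple[int, int, int]] = None   # e.g., (r, g, b)
    event: Optional[str] = None                              # e.g., "graduation-2012"
    emotion: Optional[str] = None                            # assigned classification


def caption_sentiment(caption: str) -> str:
    """Placeholder sentiment analysis; a real system would use an NLP model."""
    positive = {"happy", "love", "fun", "great"}
    return ("positive-high-energy" if positive & set(caption.lower().split())
            else "negative-low-energy")


def nearest_by_color(photo: Photo, labeled: List[Photo]) -> Optional[str]:
    """Borrow the label of the already-classified photo with the closest color."""
    candidates = [p for p in labeled if p.dominant_color and p.emotion]
    if not photo.dominant_color or not candidates:
        return None
    best = min(candidates, key=lambda p: sum(
        (a - b) ** 2 for a, b in zip(p.dominant_color, photo.dominant_color)))
    return best.emotion


def nearest_by_context(photo: Photo, labeled: List[Photo]) -> Optional[str]:
    """Borrow the label of a classified photo from the same event/locale."""
    for p in labeled:
        if p.event and p.event == photo.event and p.emotion:
            return p.emotion
    return None


def classify(photo: Photo, labeled: List[Photo]) -> str:
    """Caption sentiment first; fall back to color, then time/place matching."""
    if photo.caption:
        return caption_sentiment(photo.caption)
    return (nearest_by_color(photo, labeled)
            or nearest_by_context(photo, labeled)
            or "unclassified")
```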
Visual and aural accompaniments may be provided on a per-region basis. For example, as one moves an icon of the image around a projection of the mood map behind the image, the color of the mood map may change to reflect the different moods of the different regions or areas of the mood map. The mood map may, e.g., be grey when an area associated with low energy and negative mood is touched, versus pink when a positive, high energy area is touched. Additionally or alternatively, the interaction may be complemented aurally. Tonal feedback may be associated with different emotional states. When the user touches a specific spot (coordinates) on the mood map, an appropriate musical phrase may be played. The music may change as the user touches different places on the mood map. An extensive library of musical phrases (snippets) may be created/provided to be associated with 16 zones of the circumplex model of emotion.
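By way of a non-limiting illustration, one possible realization of the zone-to-feedback mapping is sketched below: touch coordinates on the mood map are treated as valence/arousal values, quantized into one of 16 circumplex zones, and each zone indexes a display color and a musical snippet. The 4x4 zone layout, the color choices, and the snippet file names are assumptions made for the sketch.

```python
# Non-limiting sketch: quantize a mood-map touch into one of 16 circumplex zones
# (4x4 grid over valence and arousal) and look up a color and musical snippet.
# The grid layout, colors, and snippet file names are illustrative assumptions.


def zone_of(valence: float, arousal: float, grid: int = 4) -> int:
    """Quantize valence/arousal in [-1, 1] into one of grid*grid zones (0..15)."""
    col = min(grid - 1, int((valence + 1) / 2 * grid))
    row = min(grid - 1, int((arousal + 1) / 2 * grid))
    return row * grid + col


def color_of(zone: int, grid: int = 4) -> str:
    """Grey for low-energy/negative zones, pink for high-energy/positive zones."""
    row, col = divmod(zone, grid)
    if row < grid // 2 and col < grid // 2:
        return "grey"
    if row >= grid // 2 and col >= grid // 2:
        return "pink"
    return "amber"   # mixed zones; illustrative default


# One snippet per zone; file names are hypothetical.
ZONE_FEEDBACK = {z: {"color": color_of(z), "snippet": f"snippet_{z:02d}.wav"}
                 for z in range(16)}


def feedback_for_touch(x: float, y: float) -> dict:
    """Return the zone, color, and snippet for a touch at mood-map coords (x, y)."""
    z = zone_of(valence=x, arousal=y)
    return {"zone": z, **ZONE_FEEDBACK[z]}


if __name__ == "__main__":
    print(feedback_for_touch(0.8, 0.9))    # positive, high-energy corner -> pink
    print(feedback_for_touch(-0.9, -0.8))  # negative, low-energy corner -> grey
```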
By touching numerous images and/or selecting/adjusting their emotion classifications, individuals may create musical and visual compositions. The visual and aural association of images can convey the socio-emotional dynamics (e.g., emotional contagion, entrainment, attunement). Longer visual sequences or musical phrases may be strung together by touching different photos (that have been assigned emotion classifications by mood mapping or sentiment analysis of image captions). These longer phrases reflect the “mood wave” or collective mood of the images that have been selected. The music composition affordances in this system may be nearly infinite, with hundreds of millions of permutations possible. The result may be an engaging, rich exploration and composition space.
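As a non-limiting sketch of the composition loop suggested above, each touched photo might contribute its zone's snippet while a running average of valence/arousal tracks the “mood wave”; the dictionary fields used here are illustrative assumptions.

```python
# Non-limiting sketch: string per-photo snippets into a longer phrase and track
# the running "mood wave" as the mean valence/arousal of the touched photos.
from statistics import mean


def compose(selected_photos: list) -> dict:
    """Each photo dict is assumed to carry 'snippet', 'valence', and 'arousal'."""
    phrase = [p["snippet"] for p in selected_photos]     # longer musical phrase
    mood_wave = {
        "valence": mean(p["valence"] for p in selected_photos),
        "arousal": mean(p["arousal"] for p in selected_photos),
    }
    return {"phrase": phrase, "mood_wave": mood_wave}


if __name__ == "__main__":
    touched = [
        {"snippet": "snippet_14.wav", "valence": 0.7, "arousal": 0.8},
        {"snippet": "snippet_01.wav", "valence": -0.6, "arousal": -0.5},
    ]
    print(compose(touched))   # two-snippet phrase, mood wave near neutral
```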
Resultantly, people of a community can compose jointly and build on one another's compositions. People can look back at the history of color and music that have been associated with a particular photograph. Together, their collective interactions may form/convey the socio-emotional dynamics of the community.
Further, the photos/pictures may be displayed based on a determined mood of the user. The user's mood may be determined, e.g., based on analysis of facial expression and/or other mood indicators of the user. The analysis may be performed on data captured via the individuals' phones (e.g., camera capture of expression or reading of pulse via light) or embedded camera on the large screen display. Thus, the individuals' emotions may guide the arrangement of photos/pictures displayed. By changing their expressions, the individuals may cause the arrangement and musical composition to be changed.
In embodiments, the system may also allow users to experiment with emotional contagion effects and other social dynamics. In response to user inputs, emotion tags and filters may be used to juxtapose a particular image with other images that have either a similar affect/mood, or with similar context and different affect/mood.
Further, images may be sorted by the various models of emotion (e.g., the circumplex model which is organized by the dimensions of arousal and valence, or a simple negative to positive arrangement).
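By way of a non-limiting illustration, the sketch below orders a set of classified images either along a simple negative-to-positive (valence) axis or by a circumplex (arousal, valence) arrangement; the field names are assumed for the example.

```python
# Non-limiting sketch: sort classified images by a simple negative-to-positive
# (valence) arrangement or by a circumplex (arousal, valence) arrangement.


def sort_images(images: list, model: str = "valence") -> list:
    if model == "valence":
        return sorted(images, key=lambda im: im["valence"])
    if model == "circumplex":
        return sorted(images, key=lambda im: (im["arousal"], im["valence"]))
    raise ValueError(f"unknown emotion model: {model}")


if __name__ == "__main__":
    imgs = [
        {"id": "a", "valence": 0.9, "arousal": 0.2},
        {"id": "b", "valence": -0.4, "arousal": 0.8},
        {"id": "c", "valence": 0.1, "arousal": -0.6},
    ]
    print([im["id"] for im in sort_images(imgs, "valence")])   # ['b', 'c', 'a']
```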
Various aspects of illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that alternate embodiments may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternate embodiments may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments.
Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Further, descriptions of operations as separate operations should not be construed as requiring that the operations be necessarily performed independently and/or by separate entities. Descriptions of entities and/or modules as separate modules should likewise not be construed as requiring that the modules be separate and/or perform separate operations. In various embodiments, illustrated and/or described operations, entities, data, and/or modules may be merged, broken into further sub-parts, and/or omitted.
The phrase “in one embodiment” or “in an embodiment” is used repeatedly. The phrase generally does not refer to the same embodiment; however, it may. The terms “comprising,” “having,” and “including” are synonymous, unless the context dictates otherwise. The phrase “A/B” means “A or B.” The phrase “A and/or B” means “(A), (B), or (A and B).” The phrase “at least one of A, B and C” means “(A), (B), (C), (A and B), (A and C), (B and C) or (A, B and C).”
The terms “images,” “photos,” “pictures,” and their variants may be considered synonymous, unless the context of their usage clearly indicates otherwise.
In embodiments, arrangement 100 may include speakers 108 for providing the audio accompaniments/responses. Audio accompaniments/responses may include playing of a music snippet 114 representative of the selected or aggregated emotion classification. Visual accompaniments/responses may include changing the color of a boundary trim of the selected photo to correspond to the selected or aggregated emotion classification. As described earlier, the visual/audio accompaniments may also vary for different regions of a photo/picture, corresponding to different colors or other attributes of the regions of the photo/picture, as the user hovers/moves around different regions of the photo/picture.
In embodiments, mood map 110 may be a two-dimensional mood grid. Mood map 110 may be displayed on the back of the selected photo/picture. Mood map 110 may be presented through an animation of flipping the selected photo/picture. In embodiments, icon 112 may be a thumbnail of the selected photo/picture.
In embodiments, display device 104 may include a touch sensitive display screen. Selection of a photo/picture may be made by touching the photo/picture. Selection of the mood may be made by dragging and dropping icon 112.
In embodiments, arrangement 100 may be equipped to recognize user gestures. Display of the next or previous set of photos/pictures may be commanded through user gestures. In embodiments, display device 104 may also include embedded cameras 116 to allow capturing of the user's gestures and/or facial expressions for analysis. The photos/pictures displayed may be a subset of photos/pictures of particular emotion classifications selected from a collection of photos/pictures based on a determination of the user's mood in accordance with a result of the facial expression analysis, e.g., photos/pictures with emotion classifications commensurate with the happy or more somber mood of the user, or, conversely, photos/pictures with emotion classifications intended to induce a happier mood for users in a somber mood. In alternate embodiments, arrangement 100 may include communication facilities to receive similar data from, e.g., cameras or mobile phones of the users.
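A non-limiting sketch of the mood-based subset selection described above follows: a detected user mood (from whatever facial-expression analyzer is employed) filters the collection either for matching classifications or for classifications chosen to lift a somber mood. The mood labels and the lift mapping are illustrative assumptions.

```python
# Non-limiting sketch: filter the photo collection by the user's detected mood.
# "match" selects commensurate classifications; "lift" selects happier photos
# for a somber mood. Mood labels and the lift mapping are illustrative.
LIFT = {"somber": "happy", "neutral": "happy", "happy": "happy"}


def select_photos(photos: list, detected_mood: str, strategy: str = "match") -> list:
    target = detected_mood if strategy == "match" else LIFT.get(detected_mood, "happy")
    return [p for p in photos if p["emotion"] == target]


if __name__ == "__main__":
    collection = [{"id": 1, "emotion": "happy"}, {"id": 2, "emotion": "somber"}]
    # A somber user with the "lift" strategy is shown the happy photo.
    print(select_photos(collection, detected_mood="somber", strategy="lift"))
```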
In embodiments, display device 104 may be a large display device, e.g., a wall size display device, allowing a wall of photos/pictures of a community to be displayed for individual and/or collective viewing and multi-sensorial expression of mood.
In various embodiments, computing device(s) 102 may include one or more local and/or remote computing devices. In embodiments, computing device(s) 102 may include a local computing device coupled to one or more remote computing servers via one or more networks. The local computing device and the remote computing servers may be any one of such devices known in the art. The one or more networks may be one or more local or wide area, wired or wireless networks, including, e.g., the Internet.
Referring now to
Prior to and/or during the display, block 202 may also include the upload of photos/pictures by various users, e.g., from Instagram or other social media. The users may be of a particular community or association. Uploaded photos/pictures may be analyzed and assigned emotion classifications, by computing device(s) 102, based on sentiment analysis of captions of the photos/pictures. When captions are not available, various matching techniques may be applied. For example, matching techniques may include, but are not limited to, association with other images by computer vision (such as association of an image with other images having the same or similar colors that were captioned and classified), and association based on time and place (such as association of an image with other images taken in the same context (event, locale, and so forth) that were captioned and classified).
From block 202, on selection of a photo/picture, process 200 may proceed to block 204. At block 204, a mood map, e.g., a mood grid, may be displayed by computing device(s) 102. As described earlier, in embodiments, the mood map may be displayed at the back of the selected photo/picture, and presented to the user, e.g., via an animation of flipping the selected photo/picture. Individuals may be allowed to adjust/correct the emotion classification of an image by touching the image, and moving an icon of the image to the desired spot on the “mood map”. In embodiments, the user's mood selection may be aggregated with other users' selections. From block 204, on selection of a mood, process 200 may proceed to block 206. At block 206, audio and/or visual responses corresponding to the selected or updated aggregated emotion classification may be provided.
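By way of a non-limiting illustration, one plausible aggregation of a user's mood selection with prior selections for the same photo is a running average over the mood-map coordinates, as sketched below; the unweighted averaging scheme and field names are assumptions made for the sketch.

```python
# Non-limiting sketch: aggregate a new mood-map selection with prior selections
# for the same photo via an unweighted running average of (valence, arousal).


def aggregate(prior: dict, new_selection: tuple) -> dict:
    """prior = {'valence': v, 'arousal': a, 'count': n}; new_selection = (v, a)."""
    n = prior["count"]
    v, a = new_selection
    return {
        "valence": (prior["valence"] * n + v) / (n + 1),
        "arousal": (prior["arousal"] * n + a) / (n + 1),
        "count": n + 1,
    }


if __name__ == "__main__":
    state = {"valence": 0.2, "arousal": -0.1, "count": 4}
    print(aggregate(state, (0.8, 0.5)))   # new selection nudges the aggregate upward
```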
As described earlier, in embodiments, tonal feedback may be associated with different emotional states. When the user touches a specific spot (coordinates) on the mood map, an appropriate musical phrase may be played. The music may change as the user touches different places on the mood map. An extensive library of musical phrases (snippets) may be created/provided to be associated with, e.g., 16 zones of the circumplex model of emotion. Visual response may also be provided. As one moves an icon of the image around a projection of the mood map behind the image, the color of the mood map may be changed to reflect the mood of the zone. For example, the mood map may be grey when an area associated with low energy and negative mood is touched, versus pink when a positive, high energy area is touched.
From block 206, process 200 may return to block 202, and continue therefrom.
Thus, by touching numerous images, users, individually or jointly, may create musical and visual compositions. The visual and aural association of images can convey the socio-emotional dynamics (e.g., emotional contagion, entrainment, attunement). As the user successively touches different images on the projection (one or more at a time), the photos/pictures, including their visual and/or aural responses, may be successively displayed/played or refreshed, providing a background swath of color that represents the collective mood of the photos and the transition in mood across the photos. Further, compound or longer musical phrases may be formed by successively touching different photos (one or more at a time) that have been assigned a mood by mood mapping or sentiment analysis of image captions. These compound and/or longer phrases may also reflect the “mood wave” or collective mood of the images that have been selected. The music composition affordances in this system may be nearly infinite, with hundreds of millions of permutations possible. The result may be an engaging, rich exploration and composition space.
Resultantly, as earlier described, people of a community can compose jointly and build on one another's compositions. People can look back at the history of color and music that have been associated with a particular photograph. Together, their collective interactions may form/convey the socio-emotional dynamics of the community.
In embodiments, at block 202, facial expressions and/or other mood indicators may be analyzed. The analysis may be performed on data captured via the individuals' phones (e.g., camera capture of expression or reading of pulse via light) or embedded cameras on the large screen display. The individuals' emotions may guide the selection and/or arrangement of the photos/pictures. By changing their expressions, the individuals may cause the arrangement (and musical composition) to be changed.
In embodiments, at block 204, history of past mood selections by other users of the community (along with the associated audio and/or visual responses) may also be presented in response to a selection of a photo/picture.
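As a non-limiting illustration, such a per-photo history could be kept as a simple append-only log keyed by photo identifier, as sketched below; the record fields are assumed for the example.

```python
# Non-limiting sketch: append-only history of mood selections per photo, so
# later viewers can look back at the colors and music tied to a photograph.
from collections import defaultdict
from datetime import datetime, timezone

history = defaultdict(list)   # photo_id -> list of selection records


def record_selection(photo_id: str, zone: int, color: str, snippet: str) -> None:
    history[photo_id].append({
        "when": datetime.now(timezone.utc).isoformat(),
        "zone": zone,
        "color": color,
        "snippet": snippet,
    })


def past_selections(photo_id: str) -> list:
    return history[photo_id]


if __name__ == "__main__":
    record_selection("photo-42", zone=14, color="pink", snippet="snippet_14.wav")
    print(past_selections("photo-42"))
```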
Thus, the arrangement may allow users to experiment with emotional contagion effects and other social dynamics. In response to user inputs, emotion tags and filters may be used to juxtapose a particular image with other images that have either a similar affect/mood, or a similar context and a different affect/mood.
Further, images may be sorted by the various models of emotion (e.g., the circumplex model which is organized by the dimensions of arousal and valence, or a simple negative to positive arrangement).
Referring now to
Each of these elements may perform its conventional functions known in the art. In particular, system memory 304 and mass storage devices 306 may be employed to store a working copy and a permanent copy of the programming instructions implementing the multi-sensorial emotional expression functions described earlier. The various elements may be implemented by assembler instructions supported by processor(s) 302 or high-level languages, such as, for example, C, that can be compiled into such instructions.
The permanent copy of the programming instructions may be placed into permanent storage devices 306 in the factory, or in the field, through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interface 310 (from a distribution server (not shown)). That is, one or more distribution media having an implementation of the agent program may be employed to distribute the agent and program various computing devices.
The constitution of these elements 302-312 is known, and accordingly will not be further described.
Referring back to
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described, without departing from the scope of the embodiments of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments of the present disclosure be limited only by the claims.
This application is a non-provisional application of provisional application 61/662,132, and claims priority to the 61/662,132 provisional application. The specification of the 61/662,132 provisional application is hereby incorporated by reference.