With the growing volume of content available over the Internet, people are increasingly seeking content online as part of their multimedia experience (MME), not only for useful information to address their problems but also for the benefit of an emotional experience. Here, the content may include one or more of a text, an image, a video clip, or an audio clip. Content that impacts a viewer/user emotionally can be psychoactive (psyche-transforming) in nature, i.e., the content may be beautiful, sensational, even evocative, and thus may induce emotional reactions from the user.
It has been taken for granted by media professionals, particularly in the advertising field, that imagery and montage can have psychoactive properties and an impact on a user. Vendors of online content in various market segments, including but not limited to advertising, computer games, leadership/management training, and adult education, have been trying to provide psychoactive content in order to elicit certain emotions and behaviors from users. However, it is often hard to identify, select, and tag psychoactive content to achieve the desired psychotherapeutic effect or purpose on a specific user. Although some online vendors do keep track of the web surfing and/or purchasing history or tendencies of an online user for the purpose of recommending services and products to the user based on such information, such an online footprint does not truly reflect the emotional impact of the online content on the user. For a non-limiting example, the fact that a person purchased certain books as gifts for his/her friends is not indicative of the emotional impact the books may or may not have on the purchaser him/herself.
The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent upon a reading of the specification and a study of the drawings.
(a)-(f) show examples of images with various inherent properties.
(a)-(c) depict a pixel selection grid used for identifying a centroid in a color space.
(a)-(c) show examples of distributions of pixels to clusters.
(a)-(b) depict examples of images used for user-specific content-feeling associations.
The approach is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” or “some” embodiment(s) in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
A new approach is proposed that contemplates systems and methods to identify, select, and present psychoactive content to a user in order to achieve a desired psychotherapeutic effect or purpose on the user. More specifically, content items in a content library are tagged and categorized under various psychoactive properties. In addition, image-feeling associations are assessed on a per user basis to determine what types of content items induce what types of feelings/reactions from the specific user. A script of content (also known as a user experience, referred to hereinafter as “content”) comprising one or more content items can then be presented to the user based on its ability to induce a desired shift in the emotional state of the user. With the in-depth knowledge and understanding of the psychoactive properties of the content and the possible emotional reactions of a user to such content, an online vendor is capable of identifying and presenting the “right kind” of content to the user that specifically addresses his/her emotional needs at the time, and thus provides the user with a unique emotional experience that distinguishes it from his/her experiences with other types of content.
Content referred to herein can include one or more content items, each of which can be individually identified, retrieved, composed, and presented to the user online as part of the user's multimedia experience (MME). Here, each content item can be, but is not limited to, a media type of a (displayed or spoken) text (for a non-limiting example, an article, a quote, a personal story, or a book passage), a (still or moving) image, a video clip, an audio clip (for a non-limiting example, a piece of music or sounds from nature), and other types of content items from which a user can learn information or be emotionally impacted. Here, each item of the content can either be provided by another party or created or uploaded by the user him/herself.
In some embodiments, each of a text, image, video, and audio item can include one or more elements of: title, author (name, unknown, or anonymous), body (the actual item), source, type, and location. For a non-limiting example, a text item can include a source element of one of literary, personal experience, psychology, self help, spiritual, and religious, and a type element of one of essay, passage, personal story, poem, quote, sermon, speech, and summary. For another non-limiting example, a video, an audio, and an image item can all include a location element that points to the location (e.g., file path or URL) or access method of the video, audio, or image item. In addition, an audio item may also include elements on album, genre, or track number of the audio item as well as its audio type (music or spoken word).
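The element structure described above can be sketched as a simple data class. The class and field names below are illustrative assumptions for this sketch, not the disclosure's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentItem:
    """One content item; fields follow the elements described above."""
    title: str
    author: str                     # a name, "unknown", or "anonymous"
    body: str                       # the actual item, empty for external media
    source: str                     # e.g. "literary", "spiritual", "self help"
    item_type: str                  # e.g. "quote", "poem", "speech", "music"
    location: Optional[str] = None  # file path or URL for video/audio/image

# a text item carries its body inline; a media item points to its location
quote = ContentItem("On Stillness", "anonymous", "Be still and know.",
                    "spiritual", "quote")
clip = ContentItem("Ocean Waves", "unknown", "", "nature", "music",
                   location="https://example.com/media/waves.mp3")
```

The `location` element is populated only for media items, matching the description that video, audio, and image items include a location element pointing to a file path or URL.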
In the example of
As used herein, the term engine refers to software, firmware, hardware, or other component that is used to effectuate a purpose. The engine will typically include software instructions that are stored in non-volatile memory (also referred to as secondary memory). When the software instructions are executed, at least a subset of the software instructions is loaded into memory (also referred to as primary memory) by a processor. The processor then executes the software instructions in memory. The processor may be a shared processor, a dedicated processor, or a combination of shared or dedicated processors. A typical program will include calls to hardware components (such as I/O devices), which typically requires the execution of drivers. The drivers may or may not be considered part of the engine, but the distinction is not critical.
As used herein, the term library or database is used broadly to include any known or convenient means for storing data, whether centralized or distributed, relational or otherwise.
In the example of
In the example of
In the example of
In the example of
In the example of
Abstractness (Concrete vs. Abstract)—Images rendered more for form than content naturally tend to decrease the significance of the content of the image and increase the significance of the form (i.e., other image properties). In addition, more abstract images may allow the user to project his/her feelings and imagination onto the specific image and the MME as a whole more readily than more concrete images.
Energy (Static vs. Kinetic)—An image of an ocean can be calm or raging;
Scale (Micro vs. Macro)—Whether shot from an extreme macro point of view (POV), such as high above Earth or from outer space, or in extreme close-up, such as of the stamen of a flower, an image has a distinct effect on the viewer's mood.
Time of day (Dawn through Night)—Time of day strongly affects the mood of an image.
Urbanity (Urban to Natural)—Many images are a blend of both man-made and natural elements, and the precise ratio can elicit a unique response.
Season (Summer, Fall, Winter, Spring)—The same scene elicits different reactions when embellished with flowers vs. snow. Seasons can be selected by radio button or check box rather than slider when tagged manually.
Facial expressions and depictions of behavior—There is an entire class of psychoactive image properties pertaining to the presence within the image of facial expressions (such as happy, sad, angry, etc.) or depictions of behavior (such as kindness, cruelty, tenderness, etc.). Both the expressions and the behaviors can be rapidly categorized via a custom screen built using emotive icons.
Note that the content characterization component 108 can tag multiple properties, such as Abstract, Night, and Summer, on a single content item for the purpose of easy identification and retrieval.
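Multi-property tagging of a single item can be sketched as a mapping from property name to value, with a small matcher for retrieval. The property names, value scales, and function names here are assumptions for illustration:

```python
# a single image can carry several psychoactive property tags at once;
# sliders (e.g. Abstractness) take a 0..1 value, while discrete
# properties (e.g. Season) take a category label
image_tags = {
    "abstractness": 0.9,      # highly abstract
    "time_of_day": "night",
    "season": "summer",
}

def matches(tags, **criteria):
    """Return True when an item's tags satisfy every requested criterion."""
    return all(tags.get(key) == value for key, value in criteria.items())

matches(image_tags, time_of_day="night", season="summer")  # True
matches(image_tags, season="winter")                       # False
```

Storing tags this way makes retrieval of, say, all Abstract/Night/Summer images a simple filter over the content library.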
In some embodiments, the content characterization component 108 of the content engine 102 identifies/detects a color profile and/or brightness of an image algorithmically, as the colors and how light or dark an image is affect a user's mood dramatically, such as a painting whose dark scenes are sometimes punctuated by a single candle's light. In one embodiment, the content characterization component 108 uses the identified color profile as an index to a table of predefined “dark” and “bright” color values to select images from the content library for the desired effect on the user. Here, the color profile is defined as the set of RGB values that occur most frequently in an image. In most cases, it is insufficient to simply count the number of times each color appears in an image and pick a winner.
In the example of
In the example of
In some embodiments, the content characterization component 108 adopts the k-means clustering approach, which defines a set of k clusters, where the centroid of each cluster is the mean of all values within the cluster. In some embodiments, the content characterization component 108 starts by building a grid over the image, with each vertical and horizontal line spaced at 1/10th of the image size, as shown in
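The grid-based seeding step can be sketched as below, sampling candidate centroids at grid intersections spaced at 1/10th of the image dimensions. The function name, the nested-list image representation, and the synthetic test image are assumptions for this sketch:

```python
def grid_seeds(image, divisions=10):
    """Sample pixels at grid intersections, with each vertical and
    horizontal line spaced at 1/divisions of the image size."""
    height, width = len(image), len(image[0])
    xs = [width * i // divisions for i in range(1, divisions)]
    ys = [height * j // divisions for j in range(1, divisions)]
    return [image[y][x] for y in ys for x in xs]

# a tiny synthetic "image": left half red pixels, right half blue pixels
image = [[(200, 0, 0)] * 10 + [(0, 0, 200)] * 10 for _ in range(20)]
seeds = grid_seeds(image)
len(seeds)   # 81 candidate centroids from the 9x9 interior grid
```

These sampled pixels then serve as the initial centroids that the k-means iterations refine, each centroid converging toward the mean of the pixel values assigned to its cluster.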
In the example of
d = √((r1−r2)² + (g1−g2)² + (b1−b2)²)
Two colors with a lower value of d (a shorter distance) are more similar than two colors with a larger value of d (a greater distance). For each pixel in the image, the content characterization component 108 obtains its RGB value and computes a distance value d from that pixel to each centroid in the color space. The pixel is assigned to the centroid with the shortest distance (i.e., the nearest centroid).
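The distance computation and nearest-centroid assignment described above can be sketched as follows (the function names are assumptions for illustration):

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance between two RGB triples:
    d = sqrt((r1-r2)^2 + (g1-g2)^2 + (b1-b2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def nearest_centroid(pixel, centroids):
    """Index of the centroid at the shortest distance from the pixel."""
    return min(range(len(centroids)),
               key=lambda i: rgb_distance(pixel, centroids[i]))

rgb_distance((255, 0, 0), (250, 5, 0))    # ~7.07: two very similar reds
nearest_centroid((240, 10, 10), [(255, 0, 0), (0, 0, 255)])  # 0 (the red)
```

Running this assignment over every pixel partitions the image among the centroids, after which each centroid can be recomputed as the mean of its cluster.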
In the example of
In the example of
In the example of
In some embodiments, the content characterization component 108 identifies a color name for the detected colors, using the same distance measure as used in the k-means clustering discussed above. There are about 16.7 million colors in the color space (256³), and there is no standard mapping of color names to all possible RGB values. In some embodiments, the content characterization component 108 uses a set of color names taken from an extended HTML color set and finds the closest named color for the identified RGB values. Because there are so few color names, the closest named color may not be a strong match to the perception of the actual color; for determining whether two images have a similar color profile, however, the actual RGB values of the color are used, not the RGB value of the nearest named color.
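The nearest-named-color lookup can be sketched as below. Only a handful of entries from the extended HTML color set are shown, and the function name is an assumption for this sketch:

```python
# a few entries from the extended HTML color names; the full set has on
# the order of 140 names, far fewer than the ~16.7 million RGB values
NAMED_COLORS = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "crimson": (220, 20, 60),
    "steelblue": (70, 130, 180),
    "forestgreen": (34, 139, 34),
}

def closest_color_name(rgb):
    """Pick the named color at the shortest Euclidean distance,
    the same measure used for the k-means clustering above."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c)) ** 0.5
    return min(NAMED_COLORS, key=lambda name: dist(NAMED_COLORS[name]))

closest_color_name((200, 30, 70))   # "crimson"
```

As the passage notes, the name is only a human-readable label; profile similarity between images is computed on the actual RGB values, not on the named approximations.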
In the example of
In some embodiments, the user assessment engine 110 presents the user with one or more content items, such as images, preceded by one or more questions regarding the user's feeling towards the images via the display component 120 of user interaction engine 116 for the purpose of soliciting and gathering at least part of the information needed to assess the types of feelings/reactions of the user to content items with certain psychoactive properties. Here, each image presented to the user has a specific (generally unique) combination of potentially psychoactive properties, and to the extent the user provides honest answers about what he or she is feeling when viewing each image during the image/feeling association assessment, the assessment engine 110 may be able to induce similar feelings during future content presentations by including images with substantially similar psychoactive property values. The initial content-feeling association assessment can be fairly short—perhaps 5-6 questions/image sets—ideally at the user's registration. If necessary, the assessment engine 110 can recommend additional image/feeling assessments at regular intervals, such as once per user log-in. Here, the questions preceding the images may focus on the user's emotional feelings towards certain content items. For a non-limiting example, such a question can be “which image makes you feel most peaceful?”—followed by a set of images, which may then be followed by another question and another set of images to probe a different image/feeling association. For non-limiting examples,
The process of iterative question-image set probing described above is quick, perhaps even fun for some users, and it can be repeated as many times as necessary for the assessment engine 110 to build increasingly effective associations between psychoactive properties of a group of content items and associated emotional inductions (e.g., peaceful, excited, loved, hopeful, etc.) of that specific user. Once established, the content-feeling associations of the specific user can be maintained in a user library 126 for management and later retrieval.
In some embodiments, the assessment component 114 of the assessment engine 110 also assesses the current emotional state of the user before any content is retrieved and presented to the user. For non-limiting examples, such emotional state may include but is not limited to, Love, Joy, Surprise, Anger, Sadness, or Fear, each having its own set of secondary emotions. The assessment of the user's emotional state is especially important when the user's emotional state lies at positive or negative extremes, such as joy, rage, or terror, since it may substantially affect content-feeling associations and the psychoactive content to be presented to the user—the user apparently would look for different things depending upon whether he/she is happy or sad.
In some embodiments, the assessment engine 110 may initiate one or more questions to the user via the user interaction engine 116 for the purpose of soliciting and gathering at least part of the information necessary to assess the user's emotional state. Here, such questions focus on the aspects of the user's life and his/her current emotional state that are not available through other means. The questions initiated by the assessment engine 110 may focus on the personal interests and/or the spiritual dimensions of the user as well as the present emotional well-being of the user. For a non-limiting example, the questions may focus on how the user is feeling right now and whether he/she is up or down for the moment, which may not be truly obtained by simply observing the user's past behavior or activities. In some embodiments, the assessment engine 110 may present a visual representation of emotions, such as a three-dimensional emotion circumplex as shown in
In some embodiments, in order to gather responses based on the current state of mind of the user, the user assessment engine 110 may always perform an emotional state and an emotional-state-specific content-feeling association assessment of the user whenever psychoactive content is to be retrieved or presented to the user. Such assessment aims at identifying the user's emotional state as well as his/her content-feeling associations at the time, and is especially important when the user's emotional state lies at positive or negative extremes. For a non-limiting example, the user may report that a certain image is exciting in one state of mind, but not in another state of mind. Thus, different kinds of psychoactive content may need to be recommended and retrieved for the user depending upon whether he/she is currently happy or sad. The user assessment engine 110 may then save the assessed content-feeling associations in the user library 126 together with the user's emotional state at the time.
In some embodiments, the user assessment engine 110 may perform content-feeling association assessments on a regular basis in order to assess emotional-state-neutral, instead of emotional-state-specific, content-feeling associations of the user. Differing responses based on differing states of mind of the user may eventually average out, resulting in a more predictable and neutral set of image/feeling associations. Such regular content-feeling association assessment is to address the concern that any single assessment alone may be strongly affected by the user's emotional state of mind at the time when such assessment is performed on him/her as discussed above. The content-feeling associations so identified can be used to recommend or retrieve content when the user's emotional state lies within a normal or neutral range.
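The averaging of repeated assessments into a state-neutral profile can be sketched as below. The mapping from an image property profile to a 0..1 feeling score, and the function name, are assumptions for illustration:

```python
from collections import defaultdict

def neutral_associations(assessments):
    """Average repeated content-feeling assessments so that responses
    taken in differing emotional states even out into a neutral profile.
    Each assessment maps an image property profile to a 0..1 score."""
    totals, counts = defaultdict(float), defaultdict(int)
    for assessment in assessments:
        for profile, score in assessment.items():
            totals[profile] += score
            counts[profile] += 1
    return {profile: totals[profile] / counts[profile] for profile in totals}

# the same image profile rated "peaceful" differently on different days
sessions = [{"abstract-night": 0.9}, {"abstract-night": 0.5},
            {"abstract-night": 0.7}]
neutral_associations(sessions)   # ~0.7 for "abstract-night"
```

A single 0.9 or 0.5 reading reflects the mood of the moment; the running mean is what the engine would consult when the user's emotional state lies within a neutral range.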
In the example of
In the example of
While the system 100 depicted in
In the example of
In the example of
In the example of
In the example of
In the example of
One embodiment may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The invention may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
One embodiment includes a computer program product which is a machine readable medium (media) having instructions stored thereon/in which can be used to program one or more hosts to perform any of the features presented herein. The machine readable medium can include, but is not limited to, one or more types of disks including floppy disks, optical discs, DVD, CD-ROMs, micro drive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Stored on any one of the computer readable medium (media), the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human viewer or other mechanism utilizing the results of the present invention. Such software may include, but is not limited to, device drivers, operating systems, execution environments/containers, and applications.
The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. Particularly, while the concept “interface” is used in the embodiments of the systems and methods described above, it will be evident that such concept can be interchangeably used with equivalent software concepts such as class, method, type, module, component, bean, object model, process, thread, and other suitable concepts. While the concept “component” is used in the embodiments of the systems and methods described above, it will be evident that such concept can be interchangeably used with equivalent concepts such as class, method, type, interface, module, object model, and other suitable concepts. Embodiments were chosen and described in order to best describe the principles of the invention and its practical application, thereby enabling others skilled in the relevant art to understand the claimed subject matter in various embodiments and with various modifications that are suited to the particular use contemplated.
This application is related to U.S. Ser. No. 12/476,953 filed Jun. 2, 2009, which is a continuation-in-part of U.S. Ser. No. 12/253,893, filed Oct. 17, 2008, both of which applications are fully incorporated herein by reference.