The technology described herein generally relates to computer-implemented systems and methods of natural language understanding and processing, and more particularly to the mapping of emotion dyads using computing systems to create an assessment of an individual's emotion state.
Measuring private events such as emotion is a difficult task. Private events consist of two independent factors: the first is the perception of an event as a bodily sensation, and the second is the categorization that converts that bodily sensation into a label for the event. This data may originate in several forms, from written or oral responses to input in a form or field within a computing environment.
Most assessments simply measure the labeling of an event, for example, “Do you feel sad?” These assessments are highly structured and, in part, operate under rigid conditions and standards. The labeling of an event, however, is difficult to interpret without information about the bodily sensation together with the categorization parameters. With regard to the emotion of “sad,” for instance, there is both a bodily sensation and a categorization of the emotion.
Accordingly, there is a need to enable a better understanding of emotions, how they are often perceived differently, and across different ranges. The technology described herein addresses this need and the limitations of conventional methods of emotion categorization with the assistance of computing systems and the analysis of emotion dyads to advance the understanding and determination of an emotion state, and further provides improved systems and methods for assessing emotion categorization and labeling.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
At a high level, embodiments of the technology described herein are generally directed towards systems and methods for assessing emotion categorization and labeling utilizing generated bounded visual analog scales incorporating emotion dyads.
According to some embodiments, a computer-implemented method for emotion mapping is disclosed. In some aspects, a computing device generates an emotional dyad, wherein the emotional dyad comprises two different emotion states. Next, the computing device displays the emotional dyad with a degree of separation for receiving input on a scale from an individual. Next, the computing device receives input of a first endpoint anchor on the dyad scale. Next, the computing device receives input of a second endpoint anchor on the dyad scale. Next, the computing device receives input of an emotion anchor within the degree of separation. Then the computing device calculates the length of separation between the first endpoint anchor and the second endpoint anchor, wherein the length of separation defines a neutral region and lengths outside of the neutral region are emotion regions, and determines the location of the emotion anchor. Lastly, the computing device determines the individual's assessment by averaging the neutral region, the endpoint anchors, and the emotion anchor over one or more cycles of acquiring endpoint anchors and emotion anchors and determining the lengths of the neutral and emotion regions.
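The method steps above can be sketched in code as follows. This is a minimal illustration, not the disclosed implementation: the function names, record layout, and 0-100 scale are assumptions made for the example.

```python
# Hypothetical sketch of the described method: two endpoint anchors bound
# a neutral region on a dyad scale, and the assessment averages values
# across one or more cycles of input. Scale units (0-100) are assumed.

def neutral_region(first_anchor: float, second_anchor: float) -> tuple:
    """Return the (low, high) bounds of the neutral region."""
    return (min(first_anchor, second_anchor), max(first_anchor, second_anchor))

def assess(cycles: list) -> dict:
    """Average neutral-region bounds and emotion-anchor placements
    across repeated cycles of input on a 0-100 dyad scale."""
    lows, highs, anchors = [], [], []
    for c in cycles:
        low, high = neutral_region(c["first"], c["second"])
        lows.append(low)
        highs.append(high)
        anchors.append(c["emotion"])
    n = len(cycles)
    return {
        "neutral_low": sum(lows) / n,
        "neutral_high": sum(highs) / n,
        "neutral_length": sum(h - l for l, h in zip(lows, highs)) / n,
        "emotion_anchor": sum(anchors) / n,
    }

# Two cycles of hypothetical input (first anchor, second anchor, emotion anchor).
result = assess([
    {"first": 40.0, "second": 60.0, "emotion": 75.0},
    {"first": 35.0, "second": 65.0, "emotion": 70.0},
])
```

The averaging across cycles mirrors the final step of the method, in which repeated acquisitions of endpoint and emotion anchors are combined into a single assessment.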
According to other embodiments, a system for emotion mapping and assessment thereof is provided. In some aspects, a computing device or user device is equipped with a user input device and a display device. Further, the computing device is configured with a software application held on memory. The software application can comprise: (a) a dyad engine, wherein the dyad engine generates dyad pairs; (b) an input engine, wherein the input engine accepts user input of a first anchor endpoint, a second anchor endpoint, and an emotion anchor; and (c) an assessment engine, wherein the assessment engine calculates an emotion state based on the dyad engine and the input engine.
Additional objects, advantages, and novel features of the technology will be set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the following, or can be learned by practice of the invention.
Aspects of the present disclosure will be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, with emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. It should be recognized that these implementations and embodiments are merely illustrative of the principles of the present disclosure. In the drawings:
The subject matter of aspects of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” can be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps disclosed herein unless and except when the order of individual steps is explicitly described.
Accordingly, embodiments described herein can be understood more readily by reference to the following detailed description, examples, and figures. Elements, apparatus, and methods described herein, however, are not limited to the specific embodiments presented in the detailed description, examples, and figures. It should be recognized that the exemplary embodiments herein are merely illustrative of the principles of the invention. Numerous modifications and adaptations will be readily apparent to those of skill in the art without departing from the spirit and scope of the invention.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
In addition, all ranges disclosed herein are to be understood to encompass any and all subranges subsumed therein. For example, a stated range of “1.0 to 10.0” should be considered to include any and all subranges beginning with a minimum value of 1.0 or more and ending with a maximum value of 10.0 or less, e.g., 1.0 to 5.3, or 4.7 to 10.0, or 3.6 to 7. All ranges disclosed herein are also to be considered to include the end points of the range, unless expressly stated otherwise. For example, a range of “between 5 and 10” or “5 to 10” or “5-10” should generally be considered to include the end points 5 and 10. Further, when the phrase “up to” is used in connection with an amount or quantity, it is to be understood that the amount is at least a detectable amount or quantity. For example, a material present in an amount “up to” a specified amount can be present from a detectable amount and up to and including the specified amount.
Described herein are systems and methods for assessing emotion categorization and labeling utilizing a bounded visual analog scale (also referred to herein as a universal emotion line, emotion scale, or visual scale). As will be appreciated, a visual analog scale can utilize received input from a user and further utilizes signals or information represented by continuously variable spatial position or distance indicators, for instance a spatial position and/or distance between two endpoint anchors or bounding points. The bounded visual scale or universal emotion line additionally can be implemented to measure how people categorize emotions and how that categorization relates to their reported magnitude of underlying emotional affect.
According to some aspects, a visual scale can be generated (by a user device or computing device) for an emotional dyad, where the emotional dyad can comprise two different emotional states. In some instances, the emotional dyads can include: angry-compassionate; anxious-calm; cold-hot; fatigued-energized; fearful-brave; not in control-in control; powerless-powerful; sad-happy; tired-awake; uninterested-excited; unsafe-safe; and unsatisfied-satisfied. A generated emotional dyad may be presented to a user via a user device or display screen. Further, an emotional dyad and/or visual scale can include one or more indicators (e.g., two indicators) presented on the visual scale, where the indicators are displayed with an initial degree of separation. In some other aspects, indicators are not initially displayed. In some instances, one or more prompts may be displayed to a user, either initially or subsequent to receiving a user input. Dyads and associated prompts may be stored in a repository to which the system can store data and from which it can pull data. A user can subsequently interact with the visual scale and/or emotional dyad via one or more input signals. In one instance, a user device can receive a first endpoint anchor corresponding to one of the emotional states of the emotional dyad, where the first endpoint anchor indicates when the individual or user moves from a first emotion state to a first neutral state. A user device can receive a second endpoint anchor corresponding to the other of the emotional states of the emotional dyad, where the second endpoint anchor indicates when the individual or user moves from a second emotion state to a second neutral state. Additionally, a user device can further receive an input corresponding to an emotion anchor within the degree of separation. In some instances, the emotion anchor can correspond to a time-based emotion indication.
The user device, based on the inputs, can calculate and/or determine the length of separation between the first endpoint anchor and the second endpoint anchor, wherein the length of separation is a neutral region and lengths outside of the neutral region are emotion regions, as well as a location of the emotion anchor. In some instances, a device can determine a proportion corresponding to each of the lengths based on the degree of separation. Based on the determining of one or more aspects of the visual scale, a user emotion score or user/individual assessment can be determined and/or generated and subsequently compared against one or more cut scores (e.g., previously determined cut scores or cutoff scores) for a selected or given emotional dyad to determine a course of action for the user. As will be appreciated, cut scores can be stored such that they correspond to a given emotional dyad.
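The proportion and cut-score comparison described above can be sketched as follows. The cut-score values, dyad key, scale length, and outcome labels are hypothetical placeholders, not values from the disclosure.

```python
# Illustrative sketch: derive region proportions from the degree of
# separation, then compare a user's score against a stored per-dyad cut
# score to select a course of action. All constants are assumptions.

SCALE_LENGTH = 100.0  # assumed degree of separation between the dyad pair

CUT_SCORES = {"sad-happy": 30.0}  # hypothetical stored cut score per dyad

def region_proportions(first: float, second: float) -> dict:
    """Proportion of the scale covered by each region, given two anchors."""
    low, high = sorted((first, second))
    return {
        "negative_region": low / SCALE_LENGTH,
        "neutral_region": (high - low) / SCALE_LENGTH,
        "positive_region": (SCALE_LENGTH - high) / SCALE_LENGTH,
    }

def course_of_action(dyad: str, score: float) -> str:
    """Compare a user's emotion score against the dyad's cut score."""
    return "flag_for_follow_up" if score < CUT_SCORES[dyad] else "no_action"

props = region_proportions(40.0, 70.0)
action = course_of_action("sad-happy", 25.0)
```

Storing cut scores keyed by dyad, as in the dictionary above, reflects the statement that cut scores correspond to a given emotional dyad.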
The present technology may be embodied as, among other things, a system, method, or computer-product. Accordingly, embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. In one embodiment, the present invention takes the form of a computer program product that includes computer useable instructions embodied on one or more computer readable media and executed by one or more processors.
Computer readable media includes both volatile and nonvolatile media, removable and non-removable media, and media readable by a database, a switch, and various other network devices. Network switches, routers, access points, and related components in some instances act as a means of communication within the scope of the technology. By way of example, computer readable media comprise computer storage media and communications media.
Computer storage media or machine readable media can include media implemented in any method or technology for storing and/or transmitting information or data. Examples of such information include computer-useable instructions, data elements, data structures, programs and program modules, and other data representations.
The presently disclosed subject matter now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the presently disclosed subject matter are shown. Like numbers refer to like elements throughout. The presently disclosed subject matter may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Indeed, many modifications and other embodiments of the presently disclosed subject matter set forth herein will come to mind to one skilled in the art to which the presently disclosed subject matter pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the presently disclosed subject matter is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.
Referring now to
In other aspects, the computer-implemented system may further track the first and second endpoint anchor placement, along with the speed of selection and whether or not the individual moved the anchor, including the degree and amount of movement, including over a time series set of assessments. Further, in other aspects, a third anchor point may be utilized to indicate how an individual felt; this anchor point may be placed in between the first and second endpoints to provide a more accurate assessment. Similarly, a fourth endpoint anchor may be added along with the first, second, and third to further constrain the range of emotion, for example to modify the neutral state range.
In the disclosed embodiments, the assessment and methods may be performed on private events such as emotions, pain, hunger, and health symptoms. The principles remain the same: an emotion state dyad, often of opposite emotion states, is formed, and endpoint anchors establish a zone for each emotion state and a zone for neutral. The assessment may be refined over repetitive trials in time series, with observations made about the individual's selections and how those selections may evolve over time, over the course of treatment, or otherwise.
Referring now to
Continuing, the input engine receives input, often a first, second, and third anchor point (in other embodiments a fourth, fifth, or sixth anchor endpoint may be established), wherein the anchor points may be selected by a mouse moving an anchor point to a specific location, thus defining the beginning of an emotion state, a neutral state, and the opposite emotion state. This selection may be made by any number of input devices, including a touchscreen, a keyboard, augmented reality, or voice (including the use of natural language processing). As referenced earlier, as when determining the hue of green, there is a bodily sensation and a categorization of it, and the emotion state may be further characterized by the individual. The input engine utilizes aspects of the individual's input, such as selection speed, change in selection, amount of change, time spent in selecting, or other aspects associated with selecting an endpoint. In combination, the dyad engine 220 and the selection engine 222 provide input for the assessment engine 224 to provide a neutral zone or state and an emotion zone or state that provides an assessment of an individual's emotion state, including a mapping.
Continuing, in one aspect the emotion region is detected by an individual selecting a first endpoint anchor while moving towards a neutral bodily sensation. The individual then selects a second endpoint anchor while moving away from a second emotion, utilizing a bodily sensation. The space between the two endpoint anchors is defined as a neutral region, wherein an individual's emotion is neutral. Lastly, the individual selects an emotion state and places an emotion anchor. This emotion anchor may appear at any degree of separation between the dyad pairs and may be sampled on any timeline, including within minutes, days, or weeks, and the assessment is then built from the sampling data. This data is typically averaged or otherwise assimilated across multiple time series samples of endpoint anchors and emotion anchors. In doing so one may see a change in neutrality, an increase in various emotion regions, and where an individual places themselves in relation to the dyad pair.
Most assessments are highly structured, and that structure provides validity for standardization. However, the rigid structure fails to encompass the breadth of human emotion states, including the bodily sensation of an emotion state and its respective categorization. The disclosure herein provides for a flexible presentation that includes randomization and variation as a means to support validity. For example, the following parameters may be randomized when conducting the mapping and presenting dyads: (1) the order of questions; (2) the order of endpoint anchors; (3) the number of questions provided at one sitting; (4) the length of space between dyads, or the degree of separation; (5) the orientation of the separation between dyads (circle form, line form, etc.); and (6) the start point and response tick time. The variation of the above elements may be utilized to increase engagement and retain validity of the representations made by individuals. Further benefits include a reduction in memory effects and flexibility in integrating the mapping assessment into other applications.
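The randomization of presentation parameters enumerated above could be sketched as follows. The dyad list, parameter ranges, and field names are assumptions chosen for illustration only.

```python
# Minimal sketch of randomizing the six presentation parameters listed
# in the text. Ranges (e.g., separation in pixels) are hypothetical.
import random

DYADS = ["sad-happy", "anxious-calm", "tired-awake"]  # illustrative subset

def build_session(seed=None) -> dict:
    rng = random.Random(seed)
    dyads = DYADS[:]
    rng.shuffle(dyads)  # (1) order of questions
    return {
        "dyads": dyads,
        "flip_endpoints": rng.random() < 0.5,            # (2) order of endpoint anchors
        "questions_per_sitting": rng.randint(1, len(dyads)),  # (3) amount per sitting
        "separation_px": rng.randint(300, 600),          # (4) degree of separation
        "orientation": rng.choice(["line", "circle"]),   # (5) orientation of separation
        "start_point": rng.uniform(0.0, 1.0),            # (6) start point on the scale
    }

session = build_session(seed=42)
```

Seeding the generator, as shown, would allow a given randomized presentation to be reproduced for later analysis, which may be useful when comparing responses across a time series of assessments.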
In
In the example of
In the example depicted in
In the example of
In the example of
Further disclosed in the example embodiment of
Referring now to
Once generated, the computing device displays the dyad word pairings, and displays a degree of separation between the word pairs that allows a user to place endpoints or anchors. Next, the individual places a first endpoint anchor leading from the emotion state to the feeling of neutrality. This order of operation may move from either word. Next, a second endpoint anchor is placed, from the feeling of the opposite second emotion state to a place of neutrality. The bounding of the first endpoint anchor and the second endpoint anchor delineate the neutral zone. Outside of the neutral zone is the emotion zone for each dyad. The computing system calculates the neutral zone and the emotion zones and stores the information within a data repository, often a relational database. The assessment may be repeated and the respective zones analyzed. In one aspect, an average may be taken of the first endpoint anchor and the average may be placed on a summary dyad pair that accounts for all trials. This may occur for the second endpoint anchor and the selected emotion state. Thus, the assessment engine may summarize the input received and provide an overall theme or time series response to an individual's emotion state, including how the emotion state is changing in response to treatment.
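The summary step described above, in which the assessment engine averages repeated trials into a time-series response, can be sketched as follows. The trial record layout, the 0-100 scale, and the function names are illustrative assumptions.

```python
# Hedged sketch of the summary computation: average each anchor across
# repeated trials and report how the neutral zone changes over the
# series (e.g., in response to treatment). Layout is an assumption.

def summarize_trials(trials: list) -> dict:
    """Each trial is a tuple (first_endpoint, second_endpoint, emotion_anchor)."""
    n = len(trials)
    first_zone = trials[0][1] - trials[0][0]   # neutral zone in earliest trial
    last_zone = trials[-1][1] - trials[-1][0]  # neutral zone in latest trial
    return {
        "avg_first": sum(t[0] for t in trials) / n,
        "avg_second": sum(t[1] for t in trials) / n,
        "avg_emotion": sum(t[2] for t in trials) / n,
        # Positive value: neutral zone widened over the series.
        "neutral_zone_change": last_zone - first_zone,
    }

summary = summarize_trials([(40, 60, 75), (38, 64, 72), (35, 65, 70)])
```

The averaged anchors correspond to the "summary dyad pair that accounts for all trials" described above, while the zone-change value captures how the emotion state is evolving across the series.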
Referring now to
Embodiments of the invention can be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine (virtual or otherwise), such as a smartphone or other handheld device. Generally, program modules, or engines, including routines, programs, objects, components, data structures etc., refer to code that perform particular tasks or implement particular abstract data types. Embodiments of the invention can be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialized computing devices, etc. Embodiments of the invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
Computing device 800 includes a bus 810 that directly or indirectly couples the following devices: memory 812, one or more processors 814, one or more presentation components 816, input/output ports 818, input/output components 820, and an illustrative power supply 822. In some embodiments, devices described herein utilize wired power supplies and rechargeable batteries. Bus 810 represents what can be one or more busses (such as an address bus, a data bus, or a combination thereof). Although the various blocks of
Computing device 800 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 800, and includes both volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media.
Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 800. Computer storage media excludes signals per se.
Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner at to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, NFC, Bluetooth, cellular, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 812 includes computer storage media in the form of volatile and/or non-volatile memory. As depicted, memory 812 includes instructions 824 that, when executed by processor(s) 814, cause the computing device to perform any of the operations described herein, in reference to the above discussed figures, or to implement any program modules described herein. The memory can be removable, non-removable, or a combination thereof. Illustrative hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 800 includes one or more processors that read data from various entities such as memory 812 or I/O components 820. Presentation component(s) 816 present data indications to a user or other device. Illustrative presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 818 allow computing device 800 to be logically coupled to other devices including I/O components 820, some of which can be built in. Illustrative components include a microphone, joystick, touch screen, presentation component, satellite dish, scanner, printer, wireless device, battery, etc.
Embodiments of the technology are further illustrated by way of the following example aspects. According to some aspects of the technology described herein, systems and methods are provided for assessing emotion categorization and labeling utilizing a bounded visual analog scale (also referred to herein as a universal emotion line, emotion scale, or visual scale). As will be appreciated, a visual analog scale can utilize received input from a user and further utilizes signals or information represented by a continuously variable spatial position or distance, for instance a spatial position and/or distance between two endpoint anchors or bounding points. The bounded visual scale or universal emotion line additionally can be implemented to measure how people categorize emotions and how that categorization relates to their reported magnitude of underlying emotional affect.
The universal emotion line generates and presents a bounded line segment that is labeled on the left and right boundary with an emotion dyad of opposing emotions (e.g., happy-sad). Twelve emotion dyads were presented in the universal emotion line: angry-compassionate; anxious-calm; cold-hot; fatigued-energized; fearful-brave; not in control-in control; powerless-powerful; sad-happy; tired-awake; uninterested-excited; unsafe-safe; and unsatisfied-satisfied.
Questions may be presented (e.g., five questions) for every emotion dyad. Two questions assessed how the participant categorized emotions. These two boundary questions asked participants to indicate where one emotion of the dyad ends and neutral begins (i.e., “Please move the line on the spectrum to the point where happy feelings end and neutral feelings begin”). Three questions assessed the participant's emotional experiences. These three time questions asked participants to indicate how they felt during three temporal reference points: now, typically, and the past two weeks (i.e., “Please move the line on the spectrum to the point that indicates how you typically feel”). The three temporal references can be selected to measure both current emotional states (i.e., now) and more generalized mood states (i.e., typically, two weeks, etc.).
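The five-question structure described above (two boundary questions plus three temporal questions per dyad) can be sketched as follows. The exact wording is paraphrased from the quoted prompts, and the function and variable names are illustrative, not part of the disclosure.

```python
# Hypothetical generation of the five questions per emotion dyad:
# two boundary questions and three temporal questions.

TEMPORAL_REFERENCES = ["now", "typically", "over the past two weeks"]

def questions_for_dyad(dyad: tuple) -> list:
    """Build the five prompts for one dyad given as (negative, positive)."""
    low, high = dyad
    boundary = [
        f"Please move the line on the spectrum to the point where "
        f"{high} feelings end and neutral feelings begin.",
        f"Please move the line on the spectrum to the point where "
        f"{low} feelings end and neutral feelings begin.",
    ]
    temporal = [
        f"Please move the line on the spectrum to the point that "
        f"indicates how you feel {ref}."
        for ref in TEMPORAL_REFERENCES
    ]
    return boundary + temporal

qs = questions_for_dyad(("sad", "happy"))
```

Generating prompts programmatically in this way would also allow the question order to be randomized per sitting, consistent with the randomization parameters discussed elsewhere in the disclosure.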
In some example embodiments a universal emotion line can be implemented to determine underlying changes in emotion categorization and labeling. In some instances, additional variables may be generated and analyzed, for example neutral region and typical affective state. The neutral region represents the size of the neutral area between the positive and negative emotion dyad (i.e. bounded by endpoints or anchors), calculated by subtracting the negative boundary from the positive boundary. The typical affective state represents a user's emotional state at a given time or averaged across how they typically feel and how they felt over a period of time (e.g. the last two weeks).
In some further example embodiments, changes in a typical affective state can be related to changes in emotion classification for a given dyad. An additional variable, an emotion label, can be utilized to categorize a user's typical affective state based on the placement of their boundaries for each emotion dyad. Specifically, a user's typical affective state may be labeled as positive if it was placed above their positive emotion boundary, labeled as negative if it was placed below the negative emotion boundary, and labeled neutral if it was placed in between the emotion boundaries in the neutral region.
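The neutral-region and emotion-label variables described in the two paragraphs above can be sketched as follows. The scale direction (negative pole at the low end) and the function names are assumptions for the example.

```python
# Sketch of the two derived variables: the neutral region size
# (positive boundary minus negative boundary) and the emotion label,
# which classifies a typical affective state against the user's own
# boundaries for a dyad. Scale direction is an assumption.

def neutral_region_size(negative_boundary: float,
                        positive_boundary: float) -> float:
    """Size of the neutral area between the dyad's emotion regions."""
    return positive_boundary - negative_boundary

def emotion_label(typical_state: float,
                  negative_boundary: float,
                  positive_boundary: float) -> str:
    """Label a typical affective state relative to per-user boundaries."""
    if typical_state > positive_boundary:
        return "positive"
    if typical_state < negative_boundary:
        return "negative"
    return "neutral"

size = neutral_region_size(35.0, 65.0)
label = emotion_label(80.0, negative_boundary=35.0, positive_boundary=65.0)
```

Because the boundaries are the user's own placements, the same typical affective state could be labeled differently for different users, which is the point of relating labeling to individual categorization.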
Various embodiments of the invention have been described in fulfillment of the various objectives of the invention. It should be recognized that these embodiments are merely illustrative of the principles of the present invention. Numerous modifications and adaptations thereof will be readily apparent to those skilled in the art without departing from the scope of the invention. Many different arrangements of the various components and/or steps depicted and described, as well as those not shown, are possible without departing from the scope of the claims below. Embodiments of the present technology have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent from reference to this disclosure. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and subcombinations are of utility and can be employed without reference to other features and subcombinations and are contemplated within the scope of the claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/357,245 filed Jun. 30, 2022, the entirety of which is incorporated by reference herein.