This disclosure relates generally to the field of automated association testing which may be used for psychological testing.
Basic and applied research in the social sciences often entails measuring people's attitudes toward certain subjects or concepts. The standard method for measuring attitudes is to simply administer a survey to a group of people that asks them to self-report their attitudes. However, when attitudes involving socially sensitive topics such as race, gender, politics, and/or suicidality are surveyed, self-report measures may become less accurate as the people being surveyed attempt to answer the survey questions according to socially acceptable norms or according to what they believe the surveyor expects. For example, due to the social stigma around suicide, people may be reluctant to self-identify as struggling with low self-esteem or suicidal thoughts even when they are in fact prone to suicidal thoughts. Thus, a simple survey that asks people to self-report their suicidality may be ineffective. A second problem with self-report measures is that people may have subtle attitudes or biases they are not fully aware of, and thus cannot report. For example, many people believe that they are not gender biased and are able to answer surveys in a way that confirms this belief. However, these same people may exhibit observably gender-biased behavior in other contexts, whether they know it or not.
Today more than ever, corporations, police departments, municipalities, and academia are interested in employee, client, and student attitudes toward issues such as race, sexuality, age, and gender. Of course, psychologists also benefit from understanding their patients' attitudes about themselves and others to better provide care for those patients. In the late 1990s, Professor Anthony Greenwald of the University of Washington and others developed the Implicit Association Test (IAT) to test a subject's attitude toward certain topics or concepts. However, critiques of the IAT have made it desirable to improve the testing of implicit attitudes, and researchers have worked to improve upon Dr. Greenwald's initial IAT. Therefore, devices and techniques to accurately and efficiently measure attitudes in general, and especially attitudes or biases toward socially sensitive topics, are desirable.
Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Embodiments of a device and method for measuring attitudes are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In this disclosure, example devices and processes generate implicit attitude values by measuring how quickly users can correctly select the target category that an item belongs to. The reaction time of the user may be measured as the difference between the time the item is presented on a display and the time the user correctly selects a user input (e.g. a keyboard key or touchscreen zone) that corresponds with the target category. The user may be presented with a number of different items to categorize to generate a statistically relevant sample size. An attitude value is then generated based on the reaction times, where shorter reaction times correspond to stronger associations and longer reaction times correspond to weaker associations.
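The per-trial timing described above can be sketched as follows. This is an illustrative Python sketch, not the claimed implementation; `render_pairing`, `wait_for_key`, and `correct_key` are hypothetical stand-ins for the display driver and the user input interface:

```python
import time

def measure_reaction_time(render_pairing, wait_for_key, correct_key):
    # Render the pairing and record the moment it appears on the display.
    render_pairing()
    start = time.monotonic()
    # Wait until the user selects the input corresponding to the correct
    # target category; incorrect selections do not stop the clock.
    while True:
        if wait_for_key() == correct_key:
            return time.monotonic() - start
```

In a real device, the start timestamp would be captured by the reaction time engine when the display driver renders the pairing, rather than by the same thread that draws it.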
In one illustrative example, the target categories of “Flowers” and “Insects” are rendered to a display of a computer or mobile device. A pairing that includes a member (e.g. Lovely) of a first association category (e.g. positive attributes) and an item (e.g. Lilac) linked to one of the target categories is also rendered to the display. A user must first determine which of the two presented items is a member of the “Flowers” or “Insects” category and then select which category it belongs to by selecting a key on a keyboard or a software button on a touchscreen that corresponds to the category. In one example, the user selects the “e” key on a keyboard to correctly categorize the Lovely/Lilac pairing as belonging to the “Flowers” category. The reaction time of the user from when the Lovely/Lilac pairing is rendered to the display until when the user correctly selects the “Flowers” category is measured. Briefly turning to
A pairing that includes a member (e.g. Gross) of a second association category (e.g. negative attributes) and an item (e.g. Wasp) linked to the "Insects" category may also be rendered to the display, and the user can select the "Insects" category by pressing an "i" key of a keyboard or a software button on a touchscreen that corresponds to the "Insects" category. The reaction time of the user from when the Gross/Wasp pairing is rendered to the display until when the user correctly selects the "Insects" category is measured. In the specific illustrated embodiment, the first association category (positive attributes) opposes the second association category (negative attributes).
Additionally, Flower items (e.g. Rose, Daisy, Pansy) are paired with members of the second association category (e.g. negative attributes such as disgusting, nasty, gross) and the categorization reaction times of users are measured. The Insect items (e.g. beetle, mosquito, ant) are paired with members of the first association category (e.g. positive attributes such as nice, beautiful, lovely) and the categorization reaction times of users are measured.
In this illustrative example of the Flower and Insect target categories, most users have faster reaction times for the first pairings (flowers paired with positive attributes) and the second pairings (insects paired with negative attributes) because the pairings are more congruent in the user's mind. In contrast, most users have slower reaction times with the third pairings (flowers paired with negative attributes) and the fourth pairings (insects paired with positive attributes) because the pairings are less congruent in the user's mind. Hence, the reaction times in categorizing the pairings are indicative of a user's attitudes toward the different categories, and an attitude value can be generated based on the reaction times. In this illustrative example, the difference in reaction times would indicate a positive attitude toward flowers relative to insects. While measuring people's attitudes toward Insects and Flowers is for lighthearted illustration purposes, similar techniques can be used to measure users' attitudes toward sensitive issues such as male vs. female, dark skin tone vs. light skin tone, or old vs. young, for example. More features and examples of the devices and processes of the disclosure are described below.
The term “processing logic” (e.g. 201) in this disclosure may include one or more processors, microprocessors, multi-core processors, and/or Field Programmable Gate Arrays (FPGAs) to execute operations disclosed herein. In some embodiments, non-transitory computer-readable memories (not illustrated) are integrated into the processing logic to store instructions to execute operations and/or store data. Processing logic may include analog or digital circuitry to perform the operations disclosed herein.
A “memory” or “memories” (e.g. 203) described in this disclosure may include volatile or non-volatile memory architectures. In
Processing logic 201 includes a display driver 207 configured to drive images onto display 220. Display 220 may be an LCD (liquid crystal display), plasma display, or AMOLED (active matrix organic light emitting diode) display, for example.
User input interface 210 includes a first user input 211 and a second user input 212. First user input 211 is configured to output a first signal when a user selects the first user input 211. Second user input 212 is configured to output a second signal when a user selects the second user input 212. As will be described in more detail, first user input 211 may correspond to selecting a first target category and second user input 212 may correspond to a second target category. Processing logic 201 is coupled to receive the first signal from input 211 via communication channel 255, in the illustrated embodiment. Processing logic 201 is coupled to receive the second signal from input 212 via communication channel 254, in the illustrated embodiment. In one embodiment, the first signal and the second signal are digital highs or lows and communication channels 254 and 255 are simply wire traces going to I/O (input/output) pins of processing logic 201.
In the illustrated embodiment, reaction time engine 230 is coupled to receive the first signal from the first user input 211 (via communication channel 256 connected to communication channel 255) and coupled to receive the second signal from the second user input 212 (via communication channel 257 connected to communication channel 254). In the illustrated embodiment, reaction time engine 230 is also coupled to display driver 207 via communication channel 252 to sense when display driver 207 renders certain images (e.g. pairings) to display 220. Hence, reaction time engine 230 may be configured to generate a reaction time between rendering a given image and a user selection of a user input that corresponds to a target category that is rendered to display 220. The reaction time may be sent from reaction time engine 230 to processing logic 201 via communication channel 253. In one embodiment (not illustrated) reaction time engine 230 is included in processing logic 201.
Target category 320 and target category 340 are examples of category 120 and category 140. In the illustrated embodiment, the plurality of first items 330 includes items 331, 332, 333, 334, and 335. Each of the items in the plurality of first items 330 is associated with target category 320 in computer-readable medium (e.g. memory 203) such that target category 320 is the “correct” category for each of the first items 330. Each of the first items 330 may be associated with target category 320 in any way known in the art, including linking each item with target category 320 or including first items 330 in a data structure where each of the first items 330 are a sub-species of the target category 320. In the illustrated embodiment, the plurality of second items 350 includes items 351, 352, 353, 354, and 355. Each of the items in the plurality of second items 350 is associated with target category 340 in a computer-readable medium (e.g. memory 203) such that target category 340 is the “correct” category for each of the second items 350. Each of the second items 350 may be associated with target category 340 in any way known in the art, including linking each item with target category 340 or including second items 350 in a data structure where each of the second items 350 are a sub-species of the target category 340.
In the illustrated embodiment, the plurality of first members 380 of first association category 360 includes members 381, 382, 383, 384, and 385. The plurality of second members 390 of second association category 370 includes members 391, 392, 393, 394, and 395. In one embodiment, the first association category 360 opposes the second association category 370 and, consequently, the first members 380 oppose the second members 390.
In a first specific illustrative example, target category 320 is “Flowers” and target category 340 is “Insects.” The plurality of first items 330 includes “Lilac” as item 331, “Rose” as item 332, “Daffodil” as item 333, “Violet” as item 334, and “Poppy” as item 335. The plurality of second items 350 includes “Moth” as item 351, “Wasp” as item 352, “Beetle” as item 353, “Termite” as item 354, and “Ant” as item 355. In this first specific illustrative example, the items are sub-species of their respective categories. The items are associated to their target category such that a user's categorization of any of the plurality of items 330 as corresponding to target category 320 is considered a “correct” categorization and such that a user's categorization of any of the plurality of items 350 as corresponding to target category 340 is considered a “correct” categorization.
Still referring to the first specific illustrative example, a plurality of first members 380 includes “Beautiful” as member 381, “Lovely” as member 382, “Wonderful” as member 383, “Peaceful” as member 384, and “Delightful” as member 385. A plurality of second members 390 includes “Terrible” as member 391, “Awful” as member 392, “Disgusting” as member 393, “Filthy” as member 394, and “Ugly” as member 395.
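The associations described above (items linked to the target category that is "correct" for them, and members linked to their association category) can be sketched as a simple in-memory data structure. The layout and the helper name below are illustrative assumptions, not the structure of memory 203:

```python
# Each target category maps to the items for which it is the "correct" answer.
TARGET_ITEMS = {
    "Flowers": ["Lilac", "Rose", "Daffodil", "Violet", "Poppy"],
    "Insects": ["Moth", "Wasp", "Beetle", "Termite", "Ant"],
}

# Each association category maps to its members.
ASSOCIATION_MEMBERS = {
    "positive": ["Beautiful", "Lovely", "Wonderful", "Peaceful", "Delightful"],
    "negative": ["Terrible", "Awful", "Disgusting", "Filthy", "Ugly"],
}

def correct_category(item):
    # Look up which target category an item is linked to, so a user's
    # selection can be scored as "correct" or "incorrect".
    for category, items in TARGET_ITEMS.items():
        if item in items:
            return category
    raise KeyError(item)
```

Any equivalent linkage (foreign keys, a sub-species hierarchy, etc.) would serve the same purpose of making exactly one target category the "correct" selection for each item.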
The “target categories” of the disclosure are the subject of the attitude of the user desired to be measured. The items associated with the different target categories may be sub-species of the target categories. In the first specific illustrative embodiment, the first association category may be “positive attribute” and the second association category may be “negative attribute.” The “association categories” of the disclosure may never be rendered to the display for the user to view, yet the members of the association category will be rendered to the display as part of a pairing. In one embodiment, the association categories are opposing categories. For example, “positive attributes” would oppose “negative attributes” when the association test concerns measuring positive and negative attitudes toward the target categories. In another example, the association categories are “Democrat” vs. “Republican.”
In a second specific illustrative example, target category 320 is “Self” and target category 340 is “Others.” The second specific illustrative example may be used to measure a person's attitudes regarding self-esteem or suicidality, for example. The plurality of first items 330 includes “I” as item 331, “Myself” as item 332, “Mine” as item 333, “My” as item 334, and “Me” as item 335. The plurality of second items 350 includes “Them” as item 351, “They” as item 352, “Their” as item 353, “Themselves” as item 354, and “Theirs” as item 355. In this second specific illustrative example, the items are associated with their respective target categories.
Still referring to the second specific illustrative example, a plurality of first members 380 includes "Happy" as member 381, "Alive" as member 382, "Thrive" as member 383, "Cheer" as member 384, and "Flourish" as member 385 that belong to a first association category 360. In this case, the first association category 360 may be characterized as life-affirming attributes and members 381-385 are life-affirming attributes. A plurality of second members 390 includes "Hopeless" as member 391, "Dead" as member 392, "Suicide" as member 393, "Die" as member 394, and "Despair" as member 395. In this case, the second association category 370 may be characterized as life-negating attributes and members 391-395 are life-negating attributes.
In one embodiment, each of the first items 330 is potentially descriptive of, or associated with, target category 320. In one embodiment, each of the second items 350 is potentially descriptive of, or associated with, target category 340. In one embodiment, each of the first members 380 is potentially descriptive of, or associated with, association category 360. In one embodiment, each of the second members 390 is potentially descriptive of, or associated with, association category 370.
In process block 405, a first target category (e.g. 320) and a second target category (e.g. 340) are rendered to a display (e.g. 220).
In process block 410, a first user input corresponding to the first target category is provided. In process block 415, a second user input corresponding to the second target category is provided. In one embodiment, the first user input is a first key on a computer keyboard and the second user input is a second key on the computer keyboard. In one embodiment, the first user input is a first software button displayed under a first zone of a touch-screen interface and the second user input is a second software button displayed under a second zone of the touch-screen interface. In
In process block 420, a first item (e.g. lily) and a first member (e.g. beautiful) are rendered to the display at a first time, where the first item is linked to the first target category. The first item is one of the plurality of first items 330 that are linked to the first target category 320 and the first member is one of the plurality of first members 380 linked to a first association category 360 (e.g. positive attribute). The rendering of an item paired with a member may be referred to as a "pairing" in this disclosure. When a pairing includes one of the plurality of first items 330 and one of the first members 380, the pairing may be referred to as a "first pairing" for the purposes of this disclosure.
In process block 425, a first signal from the first user input is received at a second time subsequent to the first time. Since the first item (e.g. lily) is linked to the first target category (e.g. flower), a user's selection of the first user input corresponding to the first target category indicates a “correct” selection while a user's selection of the second user input corresponding to the second target category would indicate an “incorrect” selection.
In process block 430, a first reaction time is determined. The first reaction time is a first difference between the second time and the first time.
In process block 435, a second item (e.g. wasp) and a second member (e.g. gross) are rendered to the display at a third time, where the second item is linked to the second target category. The second item is one of the plurality of second items 350 that are linked to the second target category 340 and the second member is one of the plurality of second members 390 linked to a second association category 370 (e.g. negative attribute). When a pairing includes one of the plurality of second items 350 and one of the second members 390, the pairing may be referred to as a "second pairing" for the purposes of this disclosure. In one embodiment, the second association category opposes the first association category. In one embodiment, each of the first members opposes each of the second members.
In process block 440, a second signal from the second user input is received at a fourth time subsequent to the third time. Since the second item (e.g. wasp) is linked to the second target category (e.g. insect), a user's selection of the second user input corresponding to the second target category indicates a “correct” selection.
In process block 445, a second reaction time is determined. The second reaction time is a second difference between the fourth time and the third time.
In one embodiment, process blocks 420, 425, and 430 are executed many times to generate multiple reaction times that measure a user's reaction time in correctly categorizing any of the plurality of first items 330 (that are linked with the first target category) to the first target category when the items in the plurality of first items 330 are paired with any of the members in the plurality of first members 380 (first pairings).
In one embodiment, process blocks 435, 440, and 445 are executed many times to generate multiple reaction times that measure a user's reaction time in correctly categorizing any of the plurality of second items 350 (that are linked with the second target category) to the second target category when the items in the plurality of second items 350 are paired with any of the members in the plurality of second members 390 (second pairings).
Measuring many reaction times for the first pairings and second pairings may increase the reliability of attitude values that are generated by measuring the reaction times.
In process block 450, a third item (e.g. violet) and a third member (e.g. gross) are rendered to the display at a fifth time, where the third item is linked to the first target category 320. The third item is one of the plurality of first items 330 that are linked to the first target category 320 and the third member is one of the plurality of second members 390 linked to the second association category 370. When a pairing includes one of the plurality of first items 330 and one of the second members 390, the pairing may be referred to as a "third pairing" for the purposes of this disclosure.
In process block 455, a third signal from the first user input is received at a sixth time subsequent to the fifth time. Since the third item (e.g. violet) is linked to the first target category (e.g. flowers), a user's selection of the first user input corresponding to the first target category indicates a “correct” selection.
In process block 460, a third reaction time is determined. The third reaction time is a third difference between the sixth time and the fifth time.
In process block 465, a fourth item (e.g. beetle) and a fourth member (e.g. lovely) are rendered to the display at a seventh time, where the fourth item is linked to the second target category 340. The fourth item is one of the plurality of second items 350 that are linked to the second target category 340 and the fourth member is one of the plurality of first members 380. When a pairing includes one of the plurality of second items 350 and one of the members of the plurality of first members 380, the pairing may be referred to as a "fourth pairing" for the purposes of this disclosure.
In process block 470, a fourth signal from the second user input is received at an eighth time subsequent to the seventh time. Since the fourth item (e.g. beetle) is linked to the second target category (e.g. insects), a user's selection of the second user input corresponding to the second target category indicates a “correct” selection.
In process block 475, a fourth reaction time is determined. The fourth reaction time is a fourth difference between the eighth time and the seventh time.
In one embodiment, process blocks 450, 455, and 460 are executed many times to generate multiple reaction times that measure a user's reaction time in correctly categorizing any of the plurality of first items 330 (that are linked with the first target category) to the first target category when the items in the plurality of first items 330 are paired with any of the members of the plurality of second members 390 (third pairings).
In one embodiment, process blocks 465, 470, and 475 are executed many times to generate multiple reaction times that measure a user's reaction time in correctly categorizing any of the plurality of second items 350 (that are linked with the second target category) to the second target category when the items in the plurality of second items 350 are paired with any of the members of the plurality of first members 380 (fourth pairings).
Measuring many reaction times for the third pairings and fourth pairings may increase the reliability of attitude values that are generated by measuring the reaction times. In one embodiment, the first and second target categories are rendered to the display from the first time until the eighth time. In one embodiment, the first and second target categories are rendered to the display between the first and second time, between the third and fourth time, between the fifth and sixth time, and between the seventh and eighth time.
In process block 480, a first average of at least the first reaction time and the second reaction time is computed. In process block 485, a second average of at least the third reaction time and the fourth reaction time is computed.
In process block 490, an attitude value toward the first and second target categories is generated based at least in part on a difference between the first average and the second average. In the Flowers/Insects illustrative example, a first average of 0.25 seconds and a second average of 1.5 seconds would indicate that the user has a very positive attitude toward flowers (and a negative attitude toward insects). In one illustrative example, an attitude value is within a range of −2 to 2, where an attitude value of 2 indicates the strongest positive attitude toward flowers and an attitude value of −2 indicates the strongest positive attitude toward insects. Of course, other values and other ranges may be used.
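Process blocks 480 through 490 can be sketched as follows. The scaling and clamping constants here are assumptions chosen only to reproduce the −2 to 2 illustrative range above; they are not values from the disclosure:

```python
def attitude_value(congruent_rts, incongruent_rts, scale=2.0, cap=2.0):
    # First average: reaction times for the congruent (first and second)
    # pairings. Second average: reaction times for the incongruent (third
    # and fourth) pairings.
    first_avg = sum(congruent_rts) / len(congruent_rts)
    second_avg = sum(incongruent_rts) / len(incongruent_rts)
    # Positive raw value: user was faster on the congruent pairings.
    raw = second_avg - first_avg
    # Map onto the illustrative -2..2 range (scale and cap are assumptions).
    return max(-cap, min(cap, raw * scale))
```

With the averages from the example above (0.25 s and 1.5 s), this sketch returns the maximum value of 2, indicating the strongest positive attitude toward the first target category.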
In the illustrated embodiment of
As described in association with process 400, presenting multiples of the first pairings, second pairings, third pairings, and fourth pairings to measure a user's reaction time for categorization may be useful to generate more robust statistics for analysis.
Hence, in one embodiment, reaction time measurement block 571 is presented to the user for categorizing each of the first pairings and the second pairings into the first and second target category. In one embodiment, the reaction time measurement block 571 includes intermixing the renderings of the first and second pairings so that the user cannot predict if one of the first pairings or one of the second pairings will be presented next. In one embodiment, the first pairings and second pairings are randomly rendered to the display for reaction time measurement block 571.
In one embodiment, reaction time measurement block 572 is presented to the user for categorizing each of the third pairings and the fourth pairings into the first and second target category. In one embodiment, the reaction time measurement block 572 includes intermixing the renderings of the third and fourth pairings so that the user cannot predict if one of the third pairings or one of the fourth pairings will be presented next. In one embodiment, the third pairings and fourth pairings are randomly rendered to the display for reaction time measurement block 572.
In one embodiment, two reaction time measurement blocks 571 are presented to the user for categorizing and two reaction time measurement blocks 572 are presented to the user for categorizing. The number of pairings presented in each reaction time measurement block may vary. In one embodiment, twenty pairings are presented in each of the reaction time measurement blocks and twenty reaction times corresponding to the categorization of the pairings are measured. In one embodiment, reaction time measurement blocks are separated by a rest period. In one embodiment, the reaction time measurement blocks are preceded by a training module that includes rendering two categories to the display and only one item (without being paired with a member) so the user can learn to match the item with the correct category using the first user input and the second user input.
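Assembling one reaction time measurement block from two sets of pairings can be sketched as follows. The helper name is hypothetical; the twenty-trial size follows the example above, and duplicating the pool so that ten distinct pairings fill twenty trials is an assumption about one possible scheduling scheme:

```python
import random

def build_measurement_block(first_pairings, second_pairings, trials=20, seed=None):
    # Duplicate the pool so ten distinct pairings can fill a twenty-trial block.
    pool = (list(first_pairings) + list(second_pairings)) * 2
    rng = random.Random(seed)
    # Shuffle to intermix the two kinds of pairings, so the user cannot
    # predict which kind will be presented next.
    rng.shuffle(pool)
    return pool[:trials]
```

The same helper would serve for a block 572 by passing the third and fourth pairings instead of the first and second.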
As described above, each pairing has an item and a member. In
With regard to process block 490, in some embodiments, generating the attitude value toward the first and second target categories includes measuring a variability of the first, second, third, and fourth reaction times, adjusting a weighting value based on the variability of the reaction times, and applying the weighting value to a raw attitude value to generate the attitude value. The raw attitude value may simply be the difference between the first and second average. In embodiments where multiple first, second, third, and fourth pairings are rendered and corresponding reaction times measured, the measuring of variability, adjusting a weighting value, and applying the weighting value to a raw attitude value to generate the attitude value may also be performed on the multiple reaction times. During experimentation and corresponding statistical analysis, Applicant's data indicates that more tightly bracketed (less variability) reaction times are indicative of a higher assurance that the raw attitude value is legitimate. Thus, less variability in the reaction times may warrant increasing the weighting value such that the attitude value is increased to reflect a stronger attitude toward one of the target categories. Applicant's data also indicates that more dispersed (more variability) reaction times are indicative of a lower assurance that the raw attitude value is legitimate. Thus, more variability in the reaction times may warrant decreasing the weighting value such that the attitude value is decreased to reflect a less strong attitude toward one of the categories.
In one embodiment, the variability in reaction times is separately calculated for the first reaction time measurement blocks 571 that include the first pairings and second pairings and separately calculated for the second reaction time measurement blocks 572 that include the third and fourth pairings, since reaction time variability between the two different measurement blocks is common and expected. In one embodiment, standard deviation techniques are used to quantify the variability of the reaction times.
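The variability weighting described above can be sketched as follows. The baseline standard deviation, the ratio-based weighting formula, and the bounds on the weight are all assumptions for illustration; the disclosure specifies only that low variability increases the weighting and high variability decreases it:

```python
import statistics

def weighted_attitude(raw_value, reaction_times, baseline_sd=0.3):
    # Quantify variability with the sample standard deviation.
    sd = statistics.stdev(reaction_times)
    # Tightly bracketed times (low sd) raise the weight above 1;
    # dispersed times (high sd) lower it below 1.
    weight = baseline_sd / sd if sd > 0 else 1.0
    weight = max(0.5, min(1.5, weight))  # keep the adjustment bounded
    return raw_value * weight
```

Applied to a raw attitude value, this strengthens the reported attitude when the reaction times cluster tightly and attenuates it when they are dispersed.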
In process block 605, a first target category and a second target category are rendered to a display.
In process block 610, first reaction times of a user are measured. The first reaction times are measured between when first pairings (e.g. 511, 512, 513, 514, or 515) are rendered to the display and when the user correctly selects the first target category (via a first user input, for example), where each of the first pairings includes one of a plurality of first items (e.g. 330) that are linked to the first target category (e.g. 320). Each of the first pairings also includes one of a plurality of first members (e.g. 380).
In process block 615, second reaction times of a user are measured. The second reaction times are measured between when second pairings (e.g. 516, 517, 518, 519, or 520) are rendered to the display and when the user correctly selects the second target category (via a second user input, for example), where each of the second pairings includes one of a plurality of second items (e.g. 350) that are linked to the second target category (e.g. 340). Each of the second pairings also includes one of a plurality of second members (e.g. 390).
In process block 620, third reaction times of a user are measured. The third reaction times are measured between when third pairings (e.g. 521, 522, 523, 524, or 525) are rendered to the display and when the user correctly selects the first target category, where each of the third pairings includes one of the plurality of first items (e.g. 330) that are linked to the first target category (e.g. 320). Each of the third pairings also includes one of the plurality of second members (e.g. 390).
In process block 625, fourth reaction times of a user are measured. The fourth reaction times are measured between when fourth pairings (e.g. 526, 527, 528, 529, or 530) are rendered to the display and when the user correctly selects the second target category, where each of the fourth pairings includes one of the plurality of second items (e.g. 350) that are linked to the second target category (e.g. 340). Each of the fourth pairings also includes one of the plurality of first members (e.g. 380).
In process block 630, an attitude value toward the first and second target categories is generated based at least in part on the first, second, third, and fourth reaction times. In some embodiments, generating the attitude value toward the first and second target categories includes measuring a variability of the first, second, third, and fourth reaction times, adjusting a weighting value based on the variability of the reaction times, and applying the weighting value to a raw attitude value to generate the attitude value. The raw attitude value may simply be the difference between an average of the first and second reaction times and an average of the third and fourth reaction times. In one embodiment, when the variability of the reaction times is low, the weighting is increased, and when the variability is high, the weighting is decreased.
In embodiments of the disclosure, words/text are rendered to the display as part of a pairing. For example, the words “beetle,” “ant,” “rose,” “lily,” “happy,” and “hopeless” may be rendered to the display screen as part of pairings. However, images may also be rendered to the screen instead of words/text. In one embodiment, a video is played on the screen as part of the pairing. In one embodiment, audio is played on the speakers of a computer or mobile device instead of rendering words/text in the pairing. In one embodiment, the first members and/or the second members are images. In one illustrative example, an image of a person smiling may be substituted for the positive attribute word “happy” (a member of 380, in one example), or an image of a person appearing sad may be substituted for the negative attribute word “hopeless” (a member of 390, in one example). In one embodiment, at least one of the first and second items includes an image that is rendered to the display. In one illustrative example, an image of a rose could replace the word item “rose.” In one illustrative example, an image of a beetle could replace the word item “beetle.”
The potential advantages of the disclosed devices and systems include automating implicit attitude tests, increasing the speed and efficiency with which implicit attitude tests may be administered and taken, and increasing the accuracy with which implicit attitudes are measured.
The disclosed embodiments of the disclosure described in association with
In contrast to the implementation of
The features described in the preceding paragraph also contribute to a simpler testing interface, fewer required testing instructions, the potential elimination of the test's training blocks altogether, and, as a result, a faster testing process with fewer invalid testing results. Additionally, Applicant has observed that reducing eye scanning between categories and reducing or eliminating the need for a user to remember the instructions contribute to decreased variability in reaction times, which allows fewer testing iterations (e.g. pairings) to be presented to the user to generate statistically stable averages. Thus, the test can be administered in a shorter amount of time and consume fewer processing resources.
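The link between variability and the number of required iterations follows from a standard statistical relationship (not specific to the disclosure): the standard error of a mean is the standard deviation divided by the square root of the number of trials, so the trials needed to reach a target precision grow with the square of the variability:

```python
import math

def trials_for_stable_average(rt_stdev, target_sem):
    """Trials needed so that SEM = rt_stdev / sqrt(n) <= target_sem.

    rt_stdev: observed standard deviation of reaction times (seconds).
    target_sem: desired standard error of the mean (seconds).
    """
    # Solve SEM = stdev / sqrt(n) for n and round up.
    return math.ceil((rt_stdev / target_sem) ** 2)
```

Halving the reaction-time variability therefore cuts the required number of pairings by a factor of four, which is consistent with the shorter test times described above.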
Describing yet another advantage of the features of the embodiments of the disclosure, the implementation of
Therefore, disclosed herein is a technical solution to the long-standing technical problem in the implicit attitude testing industry of accurately and efficiently measuring implicit attitudes. However, the disclosed device and method for measuring reaction times to generate implicit attitude values does not encompass, embody, or preclude other forms of innovation in implicit attitude or implicit bias testing. In addition, the disclosed device and method for measuring reaction times to generate implicit attitude values is not related to any fundamental economic practice, mental steps, or pen-and-paper based solution. In fact, the disclosed embodiments would not be possible using a pen-and-paper solution because pen-and-paper surveys are susceptible to survey takers consciously or unconsciously gaming the surveys, and pen-and-paper surveys are not capable of measuring reaction times between when a pairing is rendered to the screen and when a user correctly categorizes the pairing. Consequently, the disclosed device and method for measuring reaction times to generate implicit attitude values is not directed to, does not encompass, and is not merely, an abstract idea or concept.
In addition, the disclosed device and method for measuring reaction times to generate implicit attitude values provides significant improvements to the technical field of implicit attitude testing, including faster and more accurate implicit attitude testing while consuming fewer processor resources. Consequently, using the disclosed device and method for measuring reaction times to generate implicit attitude values results in more efficient use of human and non-human resources, fewer processor cycles being utilized, and reduced memory utilization. As a result, computing systems and devices are transformed into faster, more efficient, and more effective computing systems by implementing the disclosed device and method for measuring reaction times to generate implicit attitude values.
The processes and methods explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or otherwise.
A tangible non-transitory machine-readable storage medium includes any mechanism that provides (i.e., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine-readable storage medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.