Investigating how people form subjective estimates of unknown attributes has been explored in fields spanning economics, I/O psychology, neuroscience, and artificial intelligence. What makes this topic challenging is not only the complex nature of estimating subjective attributes, but also the divergent approaches and definitions used to investigate the same concept. For example, the attribute of trust in cooperative economic tasks is often defined by the monetary investment in a partner, whereas it is defined through facial properties in some social neuroscience research. Moreover, the experimental techniques used to investigate trust have ranged from subjective, naturalistic approaches to quantitatively based experimental designs.
Another hurdle is teasing apart, or quantifying, the differences in a signal-receiving person's reaction to a signal that arise from that person's own characteristics (e.g., risk aversion/seeking, bias), as opposed to differences that are due to the true attributes of the signal-making person. This capability requires quantitative measurement of individual biases, in addition to quantification of changes in the person's reactions due to reliable attribute information from the signal maker.
The following summary is included only to introduce some concepts discussed in the Detailed Description below. This summary is not comprehensive and is not intended to delineate the scope of protectable subject matter, which is set forth by the claims presented at the end.
Some embodiments of the methods and systems described herein provide a method for quantifying an entity's reaction to communication signals in a simulated or real interaction. This ability is provided by quantifying a probabilistic relationship between the communication signal and the known relationship of an attribute to the communication signal. With this quantification, the entity's reactions can also be modeled as probability distributions that can be compared to the communication signal and the known relationship. With this information, each entity's reactions can be compared to an ideal algorithm that optimally integrates the known relationships and communication signals in the task to arrive at an optimal reaction. By making this comparison between the entity's reaction and an optimal reaction, a quantitative calibration, such as an estimate of bias, can be determined. In some embodiments, the methods can be iterated and the reactions can be dynamically updated throughout the iterations. The meaning of the communication signals, or their relationships to an attribute, may or may not be known, and in some embodiments the quantification of reactions provides an ability to estimate a hidden/unknown attribute from the observable communication signals. Furthermore, sensitivity measures can be achieved that determine the ability of each entity to use reliable task information, and to ignore irrelevant task information, when making decisions.
It is an object of one embodiment of the invention to provide a computer-based method of measuring an entity reaction comprising receiving a reaction of a second entity to a first and second communication signal, the reaction representing an estimate of an attribute of a first entity given the first and second communication signal, and automatically determining an entity reaction measure from the reaction, wherein the entity reaction measure is a probability of the reaction of the second entity to the first and second communication signal. In some embodiments, the entity reaction measure comprises a probability curve automatically computed as a probability distribution for the reaction as a function of the communication signal. In some embodiments, the first and second communication signals are mapped to a first and second quantitative representation of the communication signals according to a translation protocol, and the quantitative representation comprises a Gaussian distribution of the probability of the first and second communication signals given the attribute. In some embodiments, the communication signal can be a visual signal, a verbal signal, or a gesture signal.
It is another object of an embodiment of the invention to provide a method of measuring an entity reaction wherein the entity reaction measure is a quantitative measure comprising a Gaussian distribution of the probability of the reaction of the second entity to the first and second communication signal. In some embodiments, the quantitative measure can reflect a bias of the entity.
It is yet another object of an embodiment of the invention to provide the method of measuring an entity reaction further comprising the step of determining an optimal reaction measure reflecting a probability of an optimal reaction of the second entity to the first and second communication signal. In some embodiments, one of the first and second communication signals has a known relationship to the attribute, and the optimal reaction measure is determined by a probability distribution for the known relationship to the attribute as a function of the communication signal. In some embodiments, the method further comprises comparing the optimal reaction measure to the entity reaction measure to create an entity calibration measure.
It is an object of another embodiment of the invention to provide a computer-based method of measuring an entity reaction comprising receiving a reaction of a second entity to a first and second communication signal, the first and second communication signals comprising computer generated signals, the reaction representing an estimate of an attribute of a first entity given the first and second communication signal, the first communication signal having a known relationship to the attribute and the second communication signal having an unknown relationship to the attribute; determining an entity reaction measure from the reaction, wherein the entity reaction measure is a probability of the reaction of the second entity given the first and second communication signal; determining an optimal reaction measure, wherein the optimal reaction measure comprises a probability of the reaction given the first communication signal; and determining an entity calibration measure from the entity reaction measure and the optimal reaction measure. In some embodiments, the entity reaction measure and the optimal reaction measure are determined from a plurality of reactions to a plurality of first and second communication signals.
It is an object of an embodiment of the invention to provide a computer-based system for measuring an entity reaction, said system comprising a means for receiving a reaction of a second entity to a first and second communication signal, the reaction representing an estimate of an attribute of the first entity given the first and second communication signal, and a means for automatically determining an entity reaction measure from the reaction, wherein the entity reaction measure is a probability of the reaction of the second entity to the first and second communication signal. In some embodiments, the computer-based system further comprises a means for translating the communication signal to a quantitative representation of the communication signal comprising a Gaussian distribution of the probability of the first and second communication signals given the attribute, and the means for automatically determining an entity reaction measure comprises a processor executing a computer program product capable of computing a probability distribution for the reaction as a function of the communication signal to determine the entity reaction measure.
In order that the manner in which the above-recited and other advantages and features of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
FIGS. 8A-8C illustrate quantitative measures from several subjects who participated in the poker experiment described below.
System and methods to quantify reactions to communications will now be described in detail with reference to the accompanying drawings. It will be appreciated that, while the following description focuses on an assembly that is capable of quantifying reactions of a person to communications of another person, or a simulated other person, the systems and methods disclosed herein have wide applicability. For example, the systems and methods described herein may be readily employed to determine influences of groups of persons, organizations, computer programs or other entities that select choices or make decisions. Examples of this include using the methods to determine the bias of an individual given different visual communication signals, or to determine the buying preferences of a group of people given different product designs. The systems and methods may also be used to determine influences of those entities based on information sources such as, but not limited to, real persons, groups of persons, newspapers, books, on-line information sources or any other type of information source. Notwithstanding the specific example embodiments set forth below, all such variations and modifications that would be envisioned by one of ordinary skill in the art are intended to fall within the scope of this disclosure.
As used throughout this description, a quantitative representation means any type of representation such as, but not limited to, numeric, vector, graphic, mathematical representation or any other representation that can be used to quantitatively compare different variables.
As used throughout this description, a communication signal means any action or inaction imparting information, such as, but not limited to, visual, gesture, verbal, electronic or computer generated communications or signals of information. A communication signal may be contextualized, or it may include no information about the context of the communication signal.
As used throughout this description, a reaction means any type of response to some influence such as but not limited to verbal, physical, neurological, physiological, conscious or subconscious responses to a communication signal.
As used throughout these methods, a protocol is a set of rules governing the translation of one type of information set to another. For example only, and not for limitation, a protocol is able to translate a communication signal to a quantitative representation of the communication signal.
A general overview of the influence determining methods is shown in the accompanying drawings.
Example embodiments of selected steps in the methods are described in more detail below.
Step 131 shows details of translating a known relationship of the communication signal to an attribute to a “known”. The translation protocol is defined to categorize the communication signal and its relationships with attributes, and allows the known to be mapped to the communication signal and a quantitative representation. An example of a known relationship is whether the answers to questions are known to be true or untrue. Again, an example of a quantitative representation can be a probability curve of X (probability of true/untrue responses) given the frequency of questions Y (communication signal/question). The result of this step 131 is a quantitative representation of the known for the related signal and/or communication signal.
At step 171, an optimal reaction measure is determined. This measure is determined by using a technique similar to step 170, where the optimal reaction measure is a probability distribution for the known as a function of the communication signal.
Although not necessary, in some embodiments of the methods the communication signal and the relationships of communication signals are translated to a context. In these embodiments, this contextual signal can be analyzed using methods similar to those for communication signals above.
To illustrate these steps only, and not for limitation, one embodiment of systems and methods will be described below.
To illustrate example embodiments of methods to quantify reaction to communication, and not for limitation, an embodiment of the methods used to quantify one person's (trustor's) interpretation of the attribute of truthfulness of another entity (trustee) will be used. The person whose reaction will be monitored and quantified will be termed the “trustor” and the entity that will be communicating signals will be the “trustee”. In this context, it is the job of the trustor to determine the reliability or truthfulness of the information provided by the trustee, which unfolds/updates across the interview process. Techniques in this process include the trustor asking both questions to which s/he knows the answer [known questions: a known relationship (known correct/incorrect answer) of the attribute (trust) to the communication signals (answer statement)], in addition to questions to which the answer is not known [unknown questions: an unknown relationship of the attribute to the communication signals]. The idea is that the trustee's answers to the known questions may provide insight into the reliability of the unknown information. Moreover, the trustor can use the trustee's behavioral patterns to determine when the response provided is truthful (e.g., ‘tells’, in poker terms).
A general challenge for using behavioral patterns is that it is often unclear when such information is indicative of truth telling, or is an uninformative ‘tic’. When participating in a communication with an unfamiliar individual, rapid impressions of the opponent are formed through observable information, and depending on the situation, different attributes become important to estimate. For example, success in a poker game is limited by a player's ability to estimate their opponent's strategy. Since an opponent's strategy cannot be directly observed, it must be inferred through auxiliary information (e.g., facial properties). This inferential process is deeply related to concepts in Bayesian explaining away, which provides a formal framework for how information is used to arrive at estimates of hidden variables.
In order to estimate an attribute (A) of another entity, the attribute must be estimated through observable information/communication signals (O). One method of quantifying the attribute (A) is to utilize Bayes' rule, with the nuisance variables marginalized out of the posterior:

p(A|O) = ∫∫ p(A, Nc, Nu|O) dNc dNu ∝ ∫∫ p(O|A, Nc, Nu) p(A, Nc, Nu) dNc dNu

Where p(A|O) is the (posterior) probability of attribute (A), given the fused behavioral observations (O) about the other. Notice that the nuisance variables (Nc, Nu) were integrated out, allowing us to focus on the attribute of interest.
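As an illustration only, the following Python sketch computes such a posterior on a discrete grid, summing out the two nuisance variables; the grids, priors, and likelihood are hypothetical stand-ins rather than quantities defined by the methods:

```python
import numpy as np

A = np.array([0, 1])                  # attribute: 0 = untrustworthy, 1 = trustworthy
Nc = np.linspace(-1, 1, 5)            # grid for controlled-signal nuisance variable
Nu = np.linspace(-1, 1, 5)            # grid for uncontrolled-signal nuisance variable

p_A = np.array([0.5, 0.5])            # prior over the attribute
p_Nc = np.full(Nc.size, 1 / Nc.size)  # flat nuisance priors (an assumption)
p_Nu = np.full(Nu.size, 1 / Nu.size)

def likelihood(o, a, nc, nu):
    """Hypothetical p(O|A, Nc, Nu): Gaussian around an attribute-dependent mean."""
    mean = (1.0 if a == 1 else -1.0) + nc + nu
    return np.exp(-0.5 * (o - mean) ** 2)

o_obs = 0.8                           # one fused behavioral observation
post = np.zeros(A.size)
for i, a in enumerate(A):
    # sum (integrate) the nuisance variables out of the joint distribution
    post[i] = p_A[i] * sum(
        likelihood(o_obs, a, nc, nu) * pc * pu
        for nc, pc in zip(Nc, p_Nc)
        for nu, pu in zip(Nu, p_Nu))
post /= post.sum()                    # normalize by p(O)
print(f"p(A=untrustworthy|O)={post[0]:.3f}, p(A=trustworthy|O)={post[1]:.3f}")
```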
Using Bayes' rule in this way, the method is able to accommodate many attributes of interest about another (e.g., trustworthiness, strategy, competence, etc.). Moreover, it is robust enough to fuse information from many different sources. For example, when estimating another's trustworthiness, you have two very different sources of information available: 1) information they offer you (i.e., controlled behavioral information—Bc); and 2) behavioral ‘tells’ that are not being consciously offered by the other person (i.e., uncontrolled behavioral information—Bu). Examples of controlled behavioral information may include verbal information, monetary returns/outcomes (e.g., in poker or negotiation), and clothing/appearance, while uncontrolled behavioral information includes face information (e.g., gaze, sweating, pupil dilation, etc.), posture, and nervous tics.
The challenge of fusing the information requires the various observable variables to be transformed to a quantitative representation. For example, if we assume that there are four sources of information about another's trustworthiness (clothing—c, verbal—v, gaze—g, and posture—p), each of which can be quantitatively represented as a Gaussian N(μi, σi²), then the mean of the optimal combined estimate N(μ̂, σ̂²) is provided by:

μ̂ = ωcμc + ωvμv + ωgμg + ωpμp,

where each weight is the cue's relative precision, ωi = (1/σi²)/(1/σc² + 1/σv² + 1/σg² + 1/σp²), and the combined variance satisfies 1/σ̂² = 1/σc² + 1/σv² + 1/σg² + 1/σp².
Note that the representation need not be Gaussian; rather, the distributions need only be conjugate. The optimal fused estimates will take on different forms according to the parameterization.
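As a concrete sketch of the Gaussian case above, the following Python code performs the standard precision-weighted fusion for the four cues named in the text; the means and variances are invented for illustration:

```python
import numpy as np

# (mean, variance) of each Gaussian cue estimate -- illustrative values only
cues = {"clothing": (0.2, 1.0), "verbal": (0.6, 0.5),
        "gaze": (0.4, 0.25), "posture": (0.1, 2.0)}

means = np.array([m for m, _ in cues.values()])
variances = np.array([v for _, v in cues.values()])
precisions = 1.0 / variances

weights = precisions / precisions.sum()  # the omegas: relative precisions
mu_hat = weights @ means                 # fused (combined) mean
var_hat = 1.0 / precisions.sum()         # fused variance

print(f"weights={np.round(weights, 3)}, mu_hat={mu_hat:.3f}, var_hat={var_hat:.3f}")
```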
Once an estimate has been fused, that information can be used to make a decision. In the Bayesian decision-theoretic sense, an optimal decision is one that maximizes the expected gain, or minimizes the expected risk, associated with the decision:
d* = arg min_d R(d), where the risk R(d) = ∫ p(A|O) L(A, d) dA (5)
The loss function (L(A,d)) allows the cost of making an error to impact the decision. In the context of economic decision making, it would include the monetary consequences corresponding to each possible decision, whereas in the context of judging another's trustworthiness, it would involve the cost of incorrectly deciding whether the person was trustworthy or untrustworthy. Defining optimal behavior allows the model to be compared to human performance to assess whether humans are acting optimally. This could be used for training or more basic research purposes.
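A minimal sketch of equation (5) for a discrete trust/no-trust decision; the posterior and loss values are invented for illustration:

```python
import numpy as np

p_A_given_O = np.array([0.3, 0.7])   # posterior: p(untrustworthy|O), p(trustworthy|O)

# L[a, d] = cost of decision d when the true attribute is a
# decisions: d=0 "do not trust", d=1 "trust" (costs are hypothetical)
L = np.array([[0.0, 10.0],           # truly untrustworthy: trusting is very costly
              [2.0, 0.0]])           # truly trustworthy: mild cost of distrust

risk = p_A_given_O @ L               # R(d) = sum_A p(A|O) L(A, d), for each d
d_star = int(np.argmin(risk))
print(f"R(d)={risk}, optimal decision d*={d_star}")
```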
The result of defining this translation protocol is a way to create a database or table that includes a list of candidate communication signals and potential mappings to different probability distributions. This database or table can be predefined, or it can be refined and extended as part of an iterative process that feeds and updates it.
Using this translation protocol, it is possible to quantify elements of the process such as communication signals and knowns. The result is a mapping that has the following desirable characteristics in a trustee/trustor embodiment: 1) Probabilistically defining trust information allows for different concepts/definitions of trust to be mapped into the same experimental/quantitative framework; 2) Trust can be measured and updated dynamically across an ‘interview/interrogation’; 3) Ability to elicit implicit/explicit biases through a rigorous baseline procedure, allowing for individual factors to be distinguished from reliable trust communication signals; 4) Ability to quantitatively determine individual sensitivities to reliable trust information; 5) Ability to systematically manipulate information in a complex/realistic simulation to allow for the clean interpretation of data, in addition to maximizing the generalization of the results; and 6) Ability to distinguish if reliable neural/physiological communication signals are being factored into trust decisions. This is useful for signal amplification and signal correlating/substituting, which could play an important role in ‘detaching’ the trustor from the equipment, in other embodiments.
In this embodiment, it is the job of the trustor to determine the reliability of the information provided by the trustee, which unfolds/updates across the interview process. Techniques in this embodiment include the trustor asking both questions to which s/he knows the answer ('known questions'), in addition to questions to which the answer is not known ('unknown questions'). The idea is that the trustee's answers to the known questions may provide insight into the reliability of the unknown information. Moreover, the trustor can use the trustee's behavioral patterns to determine when the response provided is truthful (i.e., ‘tells’, in poker terms). A challenge for using behavioral patterns is that it is often unclear when such information is indicative of truth telling, or is an uninformative ‘tic’. One property of these systems and methods is that they allow the reliability of communication signals, such as behavioral cues, to be experimentally controlled, providing insight into how well trustors are using reliable behavioral patterns, in addition to their ability to ignore uninformative behavioral information.
Moreover, since we are using an approach that maps communication signals to hidden attributes, we can determine an optimal reaction measure. Having this optimal measure allows us to compare it to the trustor's reaction measure and determine the trustor's calibration measure, which includes insight into each trustor's sensitivity to true information (i.e., the slope parameter) as well as their bias in responding to information (i.e., the offset parameter). Moreover, physiological reactions during trust decisions can be measured and summarized by a (robust) sufficient statistic (e.g., maximum activation, average response, etc.) to use for entity reaction measures, as a means of determining entity calibration measures.
An obstacle to discovering reliable trust communication signals is that individual differences in behavioral, neural and physiological responses must be measured and factored into the analysis. The disclosed methods of this embodiment accomplish this by running each subject through an entity reaction measure with a translation protocol in which all of the relevant information regarding the trustworthiness of the ‘trustee’ is kept constant (i.e., sampled from a uniform distribution). Mapping this technique into a realistic scenario is robust and has profound implications: 1) It allows each individual's implicit/explicit biases to be quantitatively measured; 2) Important factors from the literature such as race [13], risk seeking/aversion [14], and competence [15] can be measured and/or manipulated to quantify their influence on trust estimation, and later factored out as nuisance variables, if desired; 3) Optimal aggregate estimates (across trustors) can be accomplished via optimal data fusion techniques [16] that give more ‘weight’ to trustors who are more sensitive to trust information. Moreover, each person's bias can be removed during the aggregation process.
These methods are also able to discover reliable signals in the trustor that reflect beliefs about the trustee. In order to realize this goal, entity calibration measures are computed across responses to determine if responses are sensitive to reliable changes in communication signals. This is afforded by the baseline task, where reactions such as continuous neural data are turned into a binary response to allow entity calibration measures to be calculated. More specifically, neural and physiological summary statistics are categorized as either above baseline (1) or below baseline (0). Moreover, since these methods systematically control the behavioral patterns that reliably predict the trustworthiness of the trustee, they allow us to determine how sensitive each trustor is to reliable changes in trust information.
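For illustration only, here is a minimal Python sketch of this baseline binarization, using simulated traces and a hypothetical 'maximum activation' summary statistic:

```python
import numpy as np

rng = np.random.default_rng(2)
baseline_trials = rng.normal(0.0, 1.0, (50, 100))  # 50 baseline trials x 100 samples
test_trials = rng.normal(0.4, 1.0, (20, 100))      # 20 experimental trials

summary = test_trials.max(axis=1)                  # summary statistic: max activation
baseline_level = np.median(baseline_trials.max(axis=1))   # robust per-subject baseline
binary_response = (summary > baseline_level).astype(int)  # 1 = above, 0 = below
print(binary_response)
```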
These methods have several desirable characteristics: 1) They allow reliable behavioral patterns to be teased apart from uninformative movements, to assess whether trustors are using un/informative communication signals to make their decisions; 2) Since experimental measures are interpreted with respect to each person's baseline, the approach is only sensitive to changes in behavioral/neural/physiological responses due to the trustworthiness of the trustee; 3) They can discover reliable neural/physiological responses that are not being used to make trust decisions, a potential candidate for communication signal amplification/bio-feedback; 4) They provide the ability to assess the correlation between ‘high-level’ neural/physiological measures (e.g., EEG) and ‘low-level’ measures (e.g., GSR, HR, etc.). If reliable neural/physiological responses are correlated across levels, the trustor can be ‘un-hooked’ from the expensive/bulky machines, for easier transition into field applications.
Again, using the process diagram referenced above, the steps of this example embodiment are described in more detail below.
At step 110, the communication signals to be presented are received and stored in a communication signal database.
With step 111, a known relationship of the received communication signal to the attribute is also received. This known relationship may be for one or more of the communication signals. A known relationship does not need to be received for all communication signals. In this example, the known relationship is the truth of the communication signal and is stored in the communication signal database with the communication signal.
At step 130, the communication signals are translated into a quantitative representation according to the translation protocol. Examples of this quantitative representation include sampling communication signals from a uniform distribution in the baseline condition and from a Gaussian distribution in the experimental conditions.
At step 131, the known relationship of the communication signal to the attribute is translated to a known. This relationship between the communication signal and hidden attribute is represented by a conditional distribution p(s|a) that maps the probability of a communication signal (s) being present to the presence of the hidden attribute (a). For example, the mean of a Gaussian distribution would reflect the strength of this relationship, and the variance would reflect the consistency of the relationship. In this example there is no reliable relationship between communication signals and attributes in the control condition.
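A minimal sketch of how such conditional distributions might generate trial-by-trial signals, assuming (as in this example) a uniform baseline condition and a Gaussian experimental condition; the strength and consistency values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_signal(attribute, condition, strength=1.0, consistency=0.5):
    """Draw one communication signal s given the hidden attribute a (0 or 1)."""
    if condition == "baseline":
        return rng.uniform(-1.0, 1.0)          # no reliable s-a relationship
    # experimental: p(s|a) = N(+/-strength, consistency^2); the mean encodes
    # the strength of the relationship, the variance its consistency
    mean = strength if attribute == 1 else -strength
    return rng.normal(mean, consistency)

truthful = rng.integers(0, 2, size=5)          # hidden attribute on each trial
signals = [sample_signal(a, "experimental") for a in truthful]
print(list(zip(truthful.tolist(), np.round(signals, 2).tolist())))
```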
At step 150, the communication signals are presented to the subject. Examples of this presentation include an avatar whose communication signals are controlled by the conditional distributions described in step 131. In this example, the avatar's hand, eye, and verbal communication signals are controlled by the probability of truth.
At step 160, the reaction to communication signal is received. Examples of a reaction include a behavioral, neural, or physiological response. Examples of receiving the reaction include recording the response to a data-file for analysis. In this example, decisions about the trustworthiness of the avatar were recorded, in addition to both neural and physiological communication signals.
At step 170, an entity reaction measure is determined. This measure is determined by updating the probability distribution for the reaction as a function of the communication signal presented. Examples of this measure include probit and logistic curves, in addition to non-parametric statistical techniques. In this example the probability of a particular response (e.g., trust decision) is updated using probit techniques. In this respect, the subject's current response is used to update the probability distribution from previous responses, which forms a new decision curve for the current time. Therefore, these entity reaction measures dynamically update across time. More specifically, each response is recorded as a point on a scatter plot, where the y-axis of the plot represents the response value (e.g., 1=trust decision; 0=no trust decision), and the x-axis is determined by a binomial learning model that perfectly estimates p(truthful), based on the (known) values of the hidden attribute. This allows the reaction measures to be formed for each response type, and later fused to form a combined measure, if desired. Moreover, entity measures can be developed in a similar manner to determine the sensitivity of subjects to un/reliable communication signals by exchanging the x-axis from p(truth) to p(truth|signal).
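As an illustrative sketch of such a probit-based reaction measure, the following Python code fits a probit curve to simulated trust decisions using statsmodels; the simulated subject's bias and sensitivity values are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
p_truth = rng.uniform(0, 1, 200)        # x-axis: ideal-learner p(truthful)
# hypothetical subject: slightly biased and imperfectly sensitive
latent = 1.5 * (p_truth - 0.5) - 0.3 + rng.normal(0, 0.5, p_truth.size)
decisions = (latent > 0).astype(int)    # y-axis: 1 = trust decision, 0 = no trust

X = sm.add_constant(p_truth)            # columns: offset (bias), slope (sensitivity)
fit = sm.Probit(decisions, X).fit(disp=False)
offset, slope = fit.params
print(f"probit offset (bias)={offset:.2f}, slope (sensitivity)={slope:.2f}")
```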
At step 171, an optimal reaction measure is determined. This measure is determined by using a technique similar to step 170, except that the y-axis values (actual responses) are selected according to an optimal decision rule that takes into account both the probability of the attribute and the loss function.
At step 190, an entity calibration measure is created by comparing the optimal reaction measures to the entity reaction measures. Examples of this calibration measure include, but are not limited to, the comparison of the probit parameters that resulted in the entity measure to those achieved by the optimal reaction measure. In this example, any differences in the slope term would suggest that the entity measure is less sensitive than optimal, whereas differences in the offset term would suggest that the entity measures are biased.
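A minimal sketch of this comparison, assuming both probit fits are already available; the parameter values are illustrative placeholders:

```python
# Fitted probit parameters for the subject (step 170) and for the optimal
# decision rule (step 171); the numbers below are hypothetical.
subject = {"offset": -0.30, "slope": 1.50}
optimal = {"offset": 0.00, "slope": 4.00}

calibration = {
    "bias": subject["offset"] - optimal["offset"],                # nonzero => biased
    "relative_sensitivity": subject["slope"] / optimal["slope"],  # < 1 => less sensitive
}
print(calibration)
```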
The result of this example is a quantitative bias and sensitivity measure for how the responses of a trustor correspond to a hidden attribute of a trustee, based on the available communication signals.
The various method embodiments of the invention will be generally implemented by a computer executing a sequence of program instructions for carrying out the steps of the methods, assuming all required data for processing is accessible to the computer, which sequence of program instructions may be embodied in a computer program product comprising media storing the program instructions. One example of a computer-based system for quantifying reactions to communications is depicted in the accompanying drawings.
The program product may also be stored on hard disk drives within the processing unit or may be located on a remote system, such as a server, coupled to the processing unit via a network interface, such as an Ethernet interface. The monitor, mouse and keyboard can be coupled to the processing unit through an input receiver or an output transmitter to provide user interaction. The scanner and printer can be provided for document input and output. The printer can be coupled to the processing unit via a network connection or may be coupled directly to the processing unit. The scanner can be coupled to the processing unit directly, but it should be understood that peripherals may be network coupled or direct coupled without affecting the ability of the workstation computer to perform the method of the invention.
As will be readily apparent to those skilled in the art, the present invention can be realized in hardware, software, or a combination of hardware and software. Any kind of computer/server system(s), or other apparatus adapted for carrying out the methods described herein, is suited. A typical combination of hardware and software could be a general-purpose computer system with a computer program that, when loaded and executed, carries out the respective methods described herein. Alternatively, a specific use computer, containing specialized hardware or software for carrying out one or more of the functional tasks of the invention, could be utilized.
The present invention, or aspects of the invention, can also be embodied in a computer program product, which comprises all the respective features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods. Computer program, software program, program, or software, in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or reproduction in a different material form.
The memory 520 stores information within the system 500. In some implementations, the memory 520 is a computer-readable storage medium. In one implementation, the memory 520 is a volatile memory unit. In another implementation, the memory 520 is a non-volatile memory unit.
The storage device 530 is capable of providing mass storage for the system 500. In some implementations, the storage device 530 is a computer-readable storage medium. In various different implementations, the storage device 530 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. Computer-readable media include both transitory propagating signals and non-transitory tangible media.
The input/output device 540 provides input/output operations for the system 500 and may be in communication with a user interface 540A as shown. In one implementation, the input/output device 540 includes a keyboard and/or pointing device. In another implementation, the input/output device 540 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them, such as, but not limited to, digital phones, cellular phones, laptop computers, desktop computers, digital assistants, servers or server/client systems. An apparatus can be implemented in a computer program product tangibly embodied in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and a sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube), LCD (liquid crystal display) or Plasma monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
One embodiment of the computer program product capable of executing the described methods is shown in the accompanying functional diagram.
A means to receive the communication signal and the known is provided by a receiving module 671. This module receives the communication signal and the known relationships from memory or from the input/output device.
The translation protocol module 672 receives a representation of the communication signal and utilizes the defined mapping of communication signals to identify the quantitative representations. The translation protocol also receives the known and translates it into a quantitative representation. These quantitative representations are then made available to the entity reaction measure module 675 and the optimal reaction measure module 676.
The presentation module 673 provides the means to present the communication signal to an entity.
The receive reaction module 674 provides the means to receive the entity's reaction to the communication signal. The module makes the reaction, or a representation of the reaction, available to the entity reaction measure module.
The entity reaction measure module 675 provides the means to calculate the entity reaction measure. This measure is created with information from the translation protocol module 672 and the receive reaction module 674.
The optimal reaction measure module 676 provides the means to determine the optimal reaction measure. This measure is determined primarily with input from the translation protocol module 672.
The entity calibration measure module 677 provides the means to compare the entity reaction measure and the optimal reaction measure.
The combination of these modules determines the entity reaction measure and the entity calibration measure. These measures can be communicated to other system elements, such as input/output devices like a computer monitor or another computer system.
Once rapid impressions have been formed, beliefs can later be updated by direct experience with the individual, to develop a new estimate that will be used going forward. Within the poker scenario provided above, experience could include return-on-investment percentages achieved against a particular opponent. In fact, research in strategic games has explored how wagering decisions are modified through experience with a partner. In repeated trust games, people's willingness to share money with a partner is strongly influenced by both previous return rates and the probability of wagering with the same partner in future negotiations. It is also known that areas of the brain responsible for people's experience-based impressions of a partner are the same areas that are known to be involved in predicting future reward, and that these regions activate differently in autistic adults.
In all of these studies, subjective estimates of a partner are used to modify wagering decisions in an economic situation that is mutually beneficial to both parties. However, little is known about how rapid impressions of an opponent, based on face information, operate and influence behavior in competitive (i.e., zero-sum) games, where one person's gain is another person's loss. In fact, research and theory in competitive games has focused on how opponent models are developed through previous outcomes (i.e., the likelihood), and how people's decisions relate to normative predictions. This test explored whether rapid estimates of opponents are used in competitive games with hidden information, even when no feedback about outcomes is provided.
To investigate whether people are inferring their opponent's style through face information, participants competed in a simplified poker game against opponents whose faces varied along an axis of trustworthiness: untrustworthy, neutral, and trustworthy. If people use information about an opponent's face, they should systematically adjust their wagering decisions, despite the fact that they receive no feedback about outcomes and the value associated with the gambles is identical between conditions. Conversely, if people only use outcome-based information in competitive games, or use face information inconsistently, then there should be no reliable differences in wagering decisions between the groups.
Fourteen adults consented to participate in this study for monetary compensation. Participants were between 19 and 36 years of age, and all had normal or corrected-to-normal vision. In order to be eligible for this study, participants achieved a minimum score of 7/10 on a pre-experimental exam about the rules of Texas Hold'em poker, in addition to demonstrating no previous history of gambling addiction. The experimental protocol used in this study was approved by a Harvard University Human Subjects Review Committee.
Data from a pre-experimental inventory found that participants were novice poker players as 12 of the 14 in this study played less than 10 hours/year. Moreover, all participants in this study tended to play more ‘live’ games than online games. In fact, 12 of 14 participants played more than 90% of their games ‘live’, rather than online.
Participants saw a simple Texas Hold'em scenario that was developed using MATLAB's Psychtoolbox, running on a Mac OS X system. The stimuli consisted of the participant's starting hand and the blind and bet amounts, in addition to the opponent's face.
The opponent's faces were derived from an online database that morphed neutral faces along an axis that optimally predicts people's subjective ratings of trustworthiness. More specifically, faces in the trustworthy condition are 3 standard deviations above the mean/neutral face along an axis of trustworthiness, whereas untrustworthy faces are 3 standard deviations below the mean/neutral face along this dimension.
The database provided 100 different ‘identities’. Each of the faces was morphed to three trust levels, giving a neutral, trustworthy and untrustworthy exemplar for each face. Therefore, in this experiment, there were 300 total trials (100 identities×3 trust levels each), that were presented in a random order.
Two-card hand distributions were selected to be identical between levels of trustworthiness. In order to minimize the probability that participants would detect this manipulation, we used hand distributions that had identical value but differed in their appearance (e.g., cards were changed in their absolute suit (i.e., hearts, diamonds, clubs, spades) without changing the fact that they were suited (e.g., heart, heart) or unsuited (e.g., heart, club)). This precaution seemed to work, as no participant reported noticing this manipulation.
Within each level of trustworthiness, we also selected hand distributions to have an equal number of optimal call (i.e., accepting a bet) and fold (not accepting a bet) decisions (50 call/50 fold). Optimal decisions are considered to be the decision that maximizes the expected value (i.e., the number of chips earned).
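For illustration, a hedged sketch of this expected-value comparison under the task's blind/bet structure (big blind 100, opponent bet 5000); the pot accounting and win probabilities here are assumptions, not the study's exact payoff tables:

```python
def expected_values(p_win, blind=100, bet=5000):
    """EV of calling versus folding, in chips (simplified pot accounting)."""
    ev_fold = -blind                                     # folding forfeits the posted blind
    ev_call = p_win * (bet + blind) - (1 - p_win) * bet  # calling risks the bet amount
    return ev_call, ev_fold

for p in (0.30, 0.45, 0.60):
    ev_call, ev_fold = expected_values(p)
    choice = "call" if ev_call > ev_fold else "fold"
    print(f"p_win={p:.2f}: EV(call)={ev_call:+.0f}, EV(fold)={ev_fold:+.0f} -> {choice}")
```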
After participants passed the Texas Hold'em exam and signed the consent form, they were provided with task instructions. The instructions explained that they would be participating in a simplified version of Texas Hold'em poker. Unlike ‘real’ poker, they would always be in the big blind (i.e., they were required by the rules to make an initial bet of 100 chips), facing only one opponent who always bets 5000 chips.
Moreover, they would only be allowed two possible decisions: call or fold. Therefore, unlike ‘real’ poker, they would not be able to ‘bluff’ their opponent out of the pot or ‘out-play’ their opponent, since no extra cards are dealt. Participants were instructed that the only information they have available to make their betting decisions is their starting hand and the opponent who is betting. It was explained that, similar to ‘real’ poker, different opponents may have different ‘styles’ of play. We did not mention anything about the opponent's face or the trustworthiness of the opponent. They were only told that if they choose to call, the probability of their hand winning is going to be based on their starting hand and their opponent's style. Of course, unknown to them, the opponents were always betting randomly in this study.
Unlike ‘real’ poker, no feedback about outcomes was provided after each trial and no ‘community cards’ were dealt. Rather, the hand was simulated and the outcome was recorded for consideration in their bonus pay. Participants received bonus pay based on the outcome of one randomly selected trial from the 300 possible hands. If participants chose to call the randomly selected trial, and the outcome was a win, they would earn a total of $15 ($5 participation+$5 gambling allowance+$5 bonus). Whereas, if participants decided to call and the outcome was a loss, then they would only earn $5 ($5 participation+$5 gambling allowance−$5 bonus). Finally, if participants chose to fold the randomly selected hand, they would earn $10 ($5 participation+$5 gambling allowance+$0 bonus; −$0.10 rounded to the nearest dollar amount). Therefore, participants were motivated to make optimal decisions, as that would maximize their chance of winning bonus money. After completing the 300 trials, participants were paid and debriefed.
In order to directly investigate how opponent information impacts wagering decisions, a softmax expected utility model (see Supplementary Material) was used that separates the influence of three different choice parameters: a loss aversion parameter (lambda), a risk aversion parameter (rho), and a sensitivity parameter (gamma). These parameters have been shown to partially explain risky choices with numerical outcomes in many experimental studies, and in some field studies (Sokol-Hessner, et al., 2008). They were fit to each subject's data and averaged across subjects to explore the impact of opponent information on the components of risk and loss preference revealed by wagering.
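The following Python sketch shows the general shape of such a softmax expected-utility model; the parameterization follows the common lambda/rho/gamma form, but the exact specification used in the study is in its supplementary material, and all numbers here are hypothetical:

```python
import numpy as np

def utility(x, lam, rho):
    """Power utility with loss aversion: u(x)=x^rho for gains, -lam*(-x)^rho for losses."""
    return x ** rho if x >= 0 else -lam * (-x) ** rho

def p_call(win, lose, p_win, lam=1.5, rho=0.9, gamma=0.005):
    """Softmax probability of calling rather than folding for a sure small loss."""
    eu_call = p_win * utility(win, lam, rho) + (1 - p_win) * utility(lose, lam, rho)
    eu_fold = utility(-100, lam, rho)   # folding forfeits the 100-chip blind
    # gamma is scaled small here because payoffs are chip-denominated (illustrative)
    return 1.0 / (1.0 + np.exp(-gamma * (eu_call - eu_fold)))

print(f"p(call)={p_call(win=5100, lose=-5000, p_win=0.5):.3f}")
```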
The loss aversion parameter discussed above provides a way to directly quantify the observed ‘shift’ in calling decisions.
From these results, it is clear that people are using face information to modify their wagering decisions in a competitive task. These results can be easily framed within a Bayesian interpretation and are related to ideas in Bayesian explaining away. Since an opponent's ‘style’ is a hidden state, participants must estimate it through observable variables. For example, a Bayesian estimator could assume that an opponent is random (i.e., they bet uniformly across hand value) until information to the contrary is acquired. In our experiment, the only information participants have available about their opponent's style is the trustworthiness expressed by their face. If people hold beliefs that trustworthy opponents tend to bet with high-value hands, then they should adjust their decision criterion by making it more stringent than against a random opponent. Indeed, participants' observed changes in betting behavior are consistent with this interpretation.
However, unknown to participants, their increased loss aversion against trustworthy-looking opponents was unwarranted, since all opponents were in fact betting randomly.
Although the faces used in this experiment are thought to optimally predict subjective ratings of trustworthiness, it is also known that impressions of trust are deeply related to other attributes, such as perceived happiness, dominance, competence, etc. To investigate the possible role of these attributes, we conducted an independent rating task using a different group of subjects and correlated these results with the wagering behavior observed in this study. The results demonstrate that the impressions of trustworthiness also influence impressions of many other attributes that correlate with wagering decisions. Therefore, a more general conclusion is that common avoidance cues (dominant, angry, masculine) lead to more aggressive wagering decisions (i.e., increased calling), whereas approach cues (happy, friendly, trustworthy, attractive) tend to lead to conservative wagering decisions (i.e., increased folding). Although this seems contrary to evolutionary predictions, it is rational within the context of poker since approach cues may suggest the opponent has a good hand and/or is less likely to bluff. This interpretation is supported by the fact that subjects were more likely to call against opponents who were perceived to frequently bluff, and these opponents have similar subjective impression rating trends as those who are high on avoidance dimensions.
The increased influence of trustworthiness on reaction time is consistent with this interpretation.
It is also interesting that all of the changes in wagering decisions were observed against trustworthy opponents, while untrustworthy opponents did not yield any significant results. This asymmetry is even more fascinating given that people's perception of trustworthiness is more sensitive to changes between untrustworthy and neutral faces than between neutral and trustworthy faces. One possible explanation stems from the assumption that people use a random-opponent decision criterion in this task unless there is information that an opponent is betting with non-random hands. In this respect, neutral and untrustworthy faces are functionally the same: neutral faces do not provide information about an opponent's style, while untrustworthy faces may suggest that opponents are betting with poor hands. However, since participants are already assuming opponents bet randomly, they cannot decrease their criterion any further. In agreement with this proposal, wagering decisions against untrustworthy opponents did not differ reliably from those against neutral opponents.
Although we have been interpreting the results with respect to normative decision theory, research has also demonstrated that impressions of trust can occur extremely rapidly, and that implicit information can also modify brain activity and behavior. In fact, research has also shown that loss aversion is tightly related to emotional arousal, suggesting that the loss aversion observed against trustworthy opponents may be driven by rapid, emotional processes.
In conclusion, we have shown that rapid impressions of opponents modify wagering decisions in a zero-sum game with hidden (opponent) information. Interestingly, contrary to the popular belief that the optimal poker face is neutral in appearance, the face that invokes the most betting mistakes by our subjects has attributes that are correlated with trustworthiness. This suggests that poker players who bluff frequently may actually benefit from appearing trustworthy, since the natural tendency seems to be to infer that a trustworthy-looking player bluffs less. More generally, these results are important for competitive situations in which opponents have little or no experience with one another, such as the early stages of a game, or in one-shot negotiation situations among strangers where ‘first impressions’ matter.
Although this invention has been described in the above forms with a certain degree of particularity, it is understood that the foregoing is considered as illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention which is defined in the claims and their equivalents.
This application claims the benefit of U.S. application Ser. No. 61/331,814, filed on May 5, 2010, entitled “SYSTEMS AND METHODS FOR DETERMINING DECISION INFLUENCES,” the entire contents of which is incorporated herein by reference.