The present invention relates generally to systems and methods for collective intelligence, and more specifically to systems and methods for enabling users to provide consistent assessments.
When capturing the insights from the members of a population it is often important to record not just their expressed sentiment, but also the strength of their sentiment, whether it be their level of confidence in a predicted outcome or their level of conviction in an expressed point of view. For example, when asking the members of a population to make forecasts about future events, it is often valuable to record not just their forecasted outcome, but their confidence in that forecasted outcome.
Soliciting honest expressions of confidence is particularly important when aggregating sentiment across a population, as some members may harbor a sentiment with only slight confidence, while other members may harbor very strong confidence. In such situations, it is sometimes desirable to aggregate sentiments across users such that they are weighted by the expressed confidence (or conviction) of each member.
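As a simple illustration of such confidence-weighted aggregation (the specific weighting scheme below is an illustrative assumption, not a formula prescribed by this disclosure), sentiments may be combined as a weighted average:

```python
def confidence_weighted_sentiment(sentiments, confidences):
    """Aggregate sentiment values, weighting each member's report by
    that member's expressed confidence (or conviction)."""
    total_weight = sum(confidences)
    if total_weight == 0:
        raise ValueError("at least one member must report nonzero confidence")
    return sum(s * c for s, c in zip(sentiments, confidences)) / total_weight

# Three members lean positive (+1) with high confidence; one leans
# negative (-1) with only slight confidence, so it barely moves the result.
avg = confidence_weighted_sentiment([1, 1, 1, -1], [0.9, 0.8, 0.7, 0.2])
```

In this sketch the weakly held negative sentiment contributes far less to the aggregate than the strongly held positive sentiments, which is precisely why honest confidence values matter.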
The question remains—how can we best capture confidence or conviction levels from participants?
In the past we have used various methods to address this need. For example, we have required participants to assign numerical probabilities (0% to 100%) that reflect their expected likelihood of an outcome. Alternatively, we have required participants to place wagers on outcomes, the level of their wagers reflecting their confidence or conviction. Whatever method we choose, the objective is to get participants to (a) report their confidence (or conviction) as accurately as they can, and (b) do so using a numerical scale that is highly consistent from person to person.
Unfortunately, current methods often perform poorly on both fronts. That's because human participants have been shown to be inaccurate at reporting their internal confidence (or conviction) on numerical scales. This is true when asked to express confidence as percentages or wagers.
In addition, current methods are often inconsistent from person to person. For example, when asking participants to predict which team will win a sporting event (Team A or Team B) along with a confidence value, it is difficult to get accurate and consistent confidence values. Why is it so difficult? The approaches tried to date illustrate the problems faced:
One approach that aims to motivate participants in an authentic manner, is to ask participants to place a wager upon their chosen outcome. For example, if they predict Team A will win, we can then ask how much they would bet on that outcome. If they stand to benefit from a correct prediction in proportion to their wager, they may express confidence in an authentic manner.
We have used this method in the past, through a value we call “Dollar Confidence”. While our results for Dollar Confidence have enabled amplification of intelligence, analysis shows that this value is not ideal. The fact is, people are inconsistent when asked to place wagers, as personality differences cause some individuals to be risk averse, and other individuals to be risk tolerant.
So, how can we drive participants to give authentic, accurate, and consistent expressions of confidence when making forecasts? An innovative solution is needed that provides participants with a new form of wagering, one that forces them to think probabilistically and reduces the differences between risk-averse and risk-tolerant personalities.
Several embodiments of the invention advantageously address the needs above as well as other needs by providing an interactive system for eliciting from a user a probabilistic indication of the likelihood of each of two possible outcomes of a future event, the interactive system comprising: a processor connected to a graphical display and a user interface; a graphical user interface presented upon the graphical display and including a user manipulatable wager marker that can be moved by the user across a range of positions between a first limit and a second limit, wherein the first limit is associated with a first outcome of the two possible outcomes and the second limit is associated with a second outcome of the two possible outcomes; a first reward value presented on the graphical display and visually associated with the first outcome, the first reward value interactively responsive to the position of the user manipulatable wager marker; a second reward value presented on the graphical display and visually associated with the second outcome, the second reward value interactively responsive to the position of the user manipulatable wager marker; a first software routine configured to run on the processor, the first software routine configured to repeatedly update both the first and second reward values in response to user manipulation of the wager marker, the first software routine using a non-linear model for updating the first and second reward values in response to linear manipulation of the wager marker, the non-linear model following a monotonic power function that is implemented such that a non-linear increase in the first reward value corresponds to a non-linear decrease of the second reward value, and a non-linear increase in the second reward value corresponds to a non-linear decrease in the first reward value; a second software routine configured to run on the processor and determine a final first reward value and a final second reward value based upon a 
final position of the wager marker; and a third software routine configured to run on the processor and generate a forecast probability value associated with each of the two possible outcomes based upon the final position of the wager marker between the two limits, the forecast probability value being a linear function of the position of the wager marker.
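The relationship described above (a linear forecast probability paired with power-function reward values) can be sketched as follows. The exponent `gamma` and `max_reward` values are illustrative assumptions; the specification does not fix particular constants:

```python
def marker_to_forecast(x, gamma=2.0, max_reward=100.0):
    """Map a wager marker position x in [0, 1] (0 = first limit,
    1 = second limit) to forecast probabilities and reward values.

    The forecast probability is a linear function of the marker position,
    while the two reward values follow a monotonic power function, so a
    non-linear increase in one reward corresponds to a non-linear decrease
    in the other.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("marker position must lie between the two limits")
    p_first = 1.0 - x   # probability assigned to the first outcome (linear)
    p_second = x        # probability assigned to the second outcome (linear)
    # Moving the marker away from an outcome raises that outcome's payout,
    # non-linearly, while lowering the opposite payout.
    reward_first = max_reward * x ** gamma
    reward_second = max_reward * (1.0 - x) ** gamma
    return p_first, p_second, reward_first, reward_second
```

With `gamma=2.0`, a marker at the midpoint yields equal probabilities (0.5 each) and equal reward values, while moving the marker toward one limit raises the opposite outcome's payout at an accelerating rate.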
The above and other aspects, features and advantages of several embodiments of the present invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings.
Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention.
The following description is not to be taken in a limiting sense, but is made merely for the purpose of describing the general principles of exemplary embodiments. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
Real-time occurrences as referenced herein are those that are substantially current within the context of human perception and reaction.
As referred to in this specification, “media items” refers to video, audio, streaming media, and any combination thereof. In addition, the audio subsystem is envisioned to optionally include features such as graphic equalization, volume, balance, fading, bass and treble controls, surround sound emulation, and noise reduction. One skilled in the relevant art will appreciate that the above list of media types is not intended to be all-inclusive.
Historical research demonstrates that the insights generated by groups can be more accurate than the insights generated by individuals in many situations. A classic example is estimating the number of beans in a jar. Many researchers have shown that taking the statistical average of estimates made by many individuals will yield an answer that is more accurate than that of the typical member of the population queried. When the individuals provide their input as isolated data points, to be aggregated statistically, the process is often referred to as “crowdsourcing”. When the individuals form a real-time system and provide their input together, with feedback loops enabling the group to converge on a solution in synchrony, the process is often referred to as “swarming”. While very different entities, crowds and swarms share one characteristic: the thoughts, feelings, insights, and intuitions of human participants need to be captured and represented as data in an accurate manner that can be processed across trials and across populations. Inconsistencies in the data captured from participants, arising from difficulties in expressing internal thinking as external reports, can significantly degrade the ability to amplify the intelligence of a population. Swarms outperform crowds because they capture data indicating the behaviors of participants, but the requirement that all members of a swarm participate at the same time is a logistical constraint. What is needed is a method that combines the asynchronous benefits of polling (i.e. the participants do not all need to be engaged at the same time) and the behavioral benefits of swarming (i.e. participants don't just report, they interact over time, revealing far more about their internal thinking than they might even be able to consciously express).
The present invention addresses this by creating a new form of interactive behavioral polling, combined with machine learning, to optimize the collective intelligence and/or collective insights of populations.
Interactive Behavioral Polling and Machine Learning
Human participants possess deep insights across a vast range of topics, having knowledge, wisdom, and intuition that can be harnessed and combined to build an emergent collective intelligence. A significant problem, however, is that people are very bad at reporting the sentiments inside their heads and are even worse at expressing their relative levels of confidence and/or conviction in those sentiments. Reports from human participants are inconsistent from trial to trial, using scales that are highly non-linear, and highly inconsistent from participant to participant. To solve this problem, innovative methods and systems have been developed to change the process of poll-based “reporting” to a dynamic process of “behaving” such that behavioral data is collected and processed which gives far deeper insights into the true sentiments of human participants, as well as far deeper insights into the confidence and/or conviction that go with the expressed sentiments.
The methods and systems involve providing a question prompt, providing a dynamic interface with data that is tracked and stored over time, and providing a countdown timer that puts temporal pressure on the participant to drive their behavior.
The methods and systems then further involve providing a perturbation prompt which inspires the participant to adjust their answer, while again providing the dynamic interface with data that is tracked and stored over time, and providing the countdown timer that puts temporal pressure on the participant to drive their behavior. In many preferred embodiments, the perturbation prompt is an authoritative reference point which may be a real or fictional indication of an alternative view on the answer. The usefulness of a fictional indication is that the direction and magnitude of the perturbation (with respect to the participant's initial answer) can be varied across a pre-planned spectrum, across trials and across the population, so as to capture a spectrum of behavioral data for processing and machine learning.
Referring next to
Generally, human participants possess deep insights across a vast range of topics, having knowledge, wisdom, and intuition that can be harnessed and combined to build a collective intelligence. A significant problem, however, is that people are very bad at reporting the sentiments inside their heads and are even worse at expressing their relative levels of confidence and/or conviction in those sentiments. Reports from human participants are inconsistent from trial to trial, using scales that are highly non-linear, and highly inconsistent from participant to participant. To solve this problem, these innovative methods and systems have been developed to change the process of poll-based “reporting” to a dynamic process of “behaving” such that behavioral data is collected and processed which gives far deeper insights into the true sentiments of human participants, as well as far deeper insights into the confidence and/or conviction that go with the expressed sentiments.
The interactive behavioral polling method involves providing a prompt to each user in the group, providing a dynamic interface with data that is tracked and stored over time, and providing a countdown timer that puts temporal pressure on each participant to drive their behavior.
The methods and systems then further involve providing a perturbation prompt which inspires each participant to adjust their answer, while again providing a dynamic interface with data that is tracked and stored over time, and providing a countdown timer that puts temporal pressure on the participant to drive their behavior. In many preferred embodiments, the perturbation prompt is an authoritative reference point which may be a real or fictional indication of an alternative view on the answer. The usefulness of a fictional indication is that the direction and magnitude of the perturbation (with respect to the participant's initial answer) can be varied across a pre-planned spectrum, across trials and across the population, so as to capture a spectrum of behavioral data for processing and machine learning.
A good way to describe the systems and methods enabled by the hardware and software disclosed herein is by example. In the example illustrated in
In this example, the objective is to predict the outcome of football games, forecasting both the winner of the game and the number of points the winner wins by. In this example, a set of 15 games will be forecast by a population of 100 human participants, with the objective of aggregating the data from the 100 human participants to generate the most accurate group forecast possible. Traditionally, this would be done by asking a single question and capturing static data, which has all the problems described above. In the inventive system and method, a dynamic process is provided using a unique interface, unique prompts, a unique timer, and a unique machine learning process.
In a first provide prompt and dynamic user interface step 400 of the interactive behavioral polling method, the CCS 142 sends to each computing device 100 instructions and data for displaying the prompt and the dynamic user interface. A dynamic user interface is presented to each user. As illustrated in
In this example, the first choice 504 is displayed as “49ers” and the second choice 506 is displayed as “Raiders”. The selection values 508 range from +16 on the 49ers (left) side to +16 on the Raiders (right) side of the selection line 502. An origin is located at the center of the selection line 502. The prompt 514 is displayed as “Who will win and by how much, the San Francisco 49ers or the Oakland Raiders?”.
It will be understood by those of ordinary skill in the art that although the exemplary interface is a slider, the dynamic data collecting and prompting methods can be achieved with a variety of other layouts.
As illustrated in the exemplary display 500, in this case the prompt 514 is a textual question, and the interface is a sliding-type interface. The prompt 514 will appear on the screen under software control of the computing device 100, allowing control of the exact timing of when the user reads the question and provides a response. In preferred embodiments, during step 402, the countdown timer 512 also initially appears. The countdown timer applies time pressure to the user. As shown in
In the next provide dynamic input step 404, during a pre-determined time period the user is allowed to provide input to indicate his response to the prompt. If the countdown timer 512 is shown, the countdown timer 512 counts down during the time period. During the dynamic input step 404, the computing device 100 tracks the user input, collecting data about the timing, position, speed, acceleration, and trajectory of the user's response using the dynamic interface (in this case the slider 510). Any time delay between the prompt 514 appearing on the display and a first motion of the slider 510 is captured as well. This behavioral information reflects not just a reported final answer, but also indicates the internal confidence and/or conviction the participant has in the expressed value, especially when processed by machine learning in later steps. For example, a slower movement of the slider may result in the user being assigned a lower confidence value.
An example of the dynamic interface 500 at the end of the first time period is shown in
In some embodiments, a textual indication 600 of the selection location may be shown.
The system generally includes the countdown timer 512, so the user feels pressured to act promptly, but still has significant time—for example, 15 seconds counting down in the example above. The user might move the slider 510 quickly during the 15 seconds, or they might adjust and readjust, all of which is captured by the software. The system also generally includes the clickable or pressable “DONE” button (as shown in
Again, the important thing is that the software tracks behavioral information for the user, not just the final resting position of the slider 510. This behavioral information includes (1) the time delay between the prompt and the user first grabbing the slider and moving it, (2) the speed of the slider, (3) the motion dynamics of the slider—does it go straight to an answer or does it overshoot and come back or does it get adjusted and readjusted during the allotted time period, as the countdown timer ticks down.
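The tracked behavioral quantities above can be derived from timestamped slider samples. The sampling representation and the function below are an illustrative sketch, not the specification's actual implementation:

```python
def behavioral_metrics(samples):
    """Derive behavioral features from timestamped slider samples.

    `samples` is a list of (t_seconds, position) pairs recorded from the
    moment the prompt appears. Returns the initial delay before first
    movement, the maximum observed speed, and the total distance traveled
    (overshoots and double-backs increase the distance), plus the final
    resting position.
    """
    delay = None
    max_speed = 0.0
    total_distance = 0.0
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        step = abs(x1 - x0)
        total_distance += step
        if step > 0:
            if delay is None:
                delay = t1  # first sample at which the slider has moved
            max_speed = max(max_speed, step / (t1 - t0))
    return {
        "delay": delay,
        "max_speed": max_speed,
        "total_distance": total_distance,
        "final_position": samples[-1][1],
    }

# A user hesitates for ~1.5 s, overshoots to +9, then settles back on +8.
metrics = behavioral_metrics(
    [(0.0, 0.0), (1.0, 0.0), (1.5, 6.0), (2.0, 9.0), (2.5, 8.0)]
)
```

Note that the total distance (10 units) exceeds the distance to the final position (8 units) because the overshoot-and-return is counted, which is exactly the kind of behavioral signal a static poll discards.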
In the next perturbation analysis step 406, after the first time period has ended or the user clicks the “done” button 516, the software is then configured to determine (and then present to the user) a perturbation stimulus, which will drive a second round of behavior to be captured in real-time during a second time period. This may be an indicator telling them what an “EXPERT” thinks the answer is to the given prompt. This may be an indicator telling them what an “AI” thinks the answer is to the given prompt. This may be an indicator telling them what a “population” thinks the answer is to the given prompt. The primary goal is psychological—to inform them that a voice of authority and/or credibility has an alternate view to be considered. In many preferred embodiments, the perturbation is fictional. In other embodiments perturbation can be real (based on actual expert or AI or population). The population may be the group of users currently engaged with the behavioral polling method through individual computing devices 100 and the perturbation indicator may be derived based on a statistical mean or median of their initial input values prior to any perturbation.
In some preferred embodiments, the perturbation analysis step 406 includes collecting and processing user dynamic input data from a plurality of users responding to the prompt, each interacting with a computing device connected to the server, such that the perturbation stimulus is generated based at least in part on the data from the plurality of users responding to the prompt. In this way, a statistical mean or median or other statistical aggregation can be used alone or in part when generating the perturbation stimulus.
In some embodiments the perturbation indicator communicated to each of the plurality of computing devices is identical, such that all participants are given the same perturbation. In other embodiments, a distribution of perturbations is calculated and transmitted such that a range of perturbation indicators are communicated to the plurality of computing devices. In some such embodiments the distribution is a normal distribution. In some embodiments, the distribution is a randomized distribution. An inventive aspect of using a distribution is that the population of participants is provided with a range of unique perturbation stimuli and thereby provides a range of responses to said stimuli for analysis and processing.
In some embodiments the perturbation indicator computed and provided to a first group of participants is based on data collected from a second group of participants.
For this particular example embodiment, the perturbation is a fictional perturbation where the system displays a selection and identifies it as an “expert opinion”. The perturbation may be randomly selected to be either higher or lower than the user's response, by a random margin. Or, instead of a random margin, a pre-planned distribution of margins across all users may be employed. For example, if 100 people were given this survey, a pre-planned distribution of margins (above or below the user's initial prediction) may be used for the expert perturbation, enabling the system to collect a diverse data set that has a range of desired perturbations. Alternately, a range of perturbations could be given not across all users who are answering this same question, but across each given user, across a set of predictions (for example, across a set of 10 games being predicted).
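One plausible way to realize such a pre-planned distribution of margins is to cycle through a balanced margin set and shuffle the assignments. The margin values and seeding are illustrative assumptions:

```python
import random

def planned_perturbations(n_users, margins=(-6, -4, -2, 2, 4, 6), seed=0):
    """Assign each user a perturbation margin from a pre-planned set.

    Margins are relative to each user's initial answer (negative = the
    fictional "expert" opinion is shown below the user's prediction,
    positive = above). Cycling through the margin set guarantees a balanced
    spread across the population; shuffling removes any correlation between
    a user's margin and their enrollment order.
    """
    assignment = [margins[i % len(margins)] for i in range(n_users)]
    rng = random.Random(seed)  # seeded for reproducible experiment design
    rng.shuffle(assignment)
    return assignment

# For 100 survey participants, each margin appears 16 or 17 times.
margins_for_population = planned_perturbations(100)
```

The same helper could equally be applied across a single user's set of predictions (e.g. 10 games) rather than across users, as the text suggests.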
In the present perturbation stimulus step 408, the perturbation stimulus is shown on the display. An example of the display of the perturbation stimulus is shown in
Next, in the provide prompt for updating step 410, the system is configured to prompt the user again, giving them the opportunity during a second time period to update their prediction. In the next second time period step 412, during the second time period the user has the option to adjust the location of the slider 510. In a preferred embodiment, a countdown timer again appears during the second time period, giving the user, for example, 15 seconds to update their prediction. The user could just click the “DONE” button 516 during the second period and not adjust the location of the slider 510. Or the user could adjust the location of the slider 510. During the second time period the software tracks not just the final answer, but again tracks the behavioral dynamics of the process—including (1) the time delay between the prompt and the user grabbing the slider 510, (2) the speed of the slider motion, (3) the trajectory of the slider motion, and (4) how quickly the user settles on an updated location. This additional behavioral data, during the second time period in response to the perturbation stimulus, is a further indicator of confidence/conviction in the answer given, especially when processed by machine learning.
After the second time period is ended, in optional calculate scaling factor step 414, the computing device 100 (or the CCS 142, if the data is sent over the network to the CCS 142) calculates a scaling factor based on the data collected during the first and second time periods. This scaling factor could be then used when the user is participating in a real-time collaborative session, as previously described in the related applications.
In the next calculate prediction step 416, the CCS 142 receives the data from each user computing device 100 participating in the polling session, and computes a prediction related to the prompt using the behavioral data collected from all users during the time periods, in addition to other data. In the optional final display prediction step 418, the CCS 142 sends an indication of the prediction to each computing device 100, and each computing device 100 displays the prediction on the display, as illustrated in
Behavioral data for each user for each polling session is stored. At the completion of all instances of the dynamic poll, across the set of questions (for example all 15 NFL football games being played during a given week) and across a population (for example, 100 football fans), a detailed and expressive behavioral data set will have been collected and stored in the database of the system (for example, sent to and stored in the CCS 142)—indicating not just a set of final predictions across a set of users, but representing the confidence and/or conviction in those predictions, especially when processed by machine learning. In some embodiments the data obtained during the interactive behavioral polling method can be used to identify from a group of users a sub-population of people who are the most effective and/or insightful participants, to be used in a real-time swarming process.
Typical “Wisdom of Crowds” methods for making a sports prediction collect a set of data points for a particular game, wherein each data point indicates a single user's forecast of which team will win, and by how much. If there were 100 users predicting the 49ers/Raiders game, the process would generate an average value across the simple data set and produce a mean. The problem is that every user in that group (a) has a different level of confidence and/or conviction in their answer, (b) is very poor at expressing or even knowing their confidence, and (c) if asked to report their confidence, uses a very different internal scale—so confidence can't be averaged with any accuracy. Thus, traditional methods fail because they combine predictions from a population of very different individuals, but those values are not all equivalent in their scales, confidence, or accuracy.
To solve this problem, the unique behavioral data collected in the steps of
In a first obtain and store behavioral data step 1000, the system collects behavioral data using the interactive polling method as previously described. For example, the system collects behavioral data that includes thinking delay time between when the prompt 514 appears on the screen for a given user and that user starts to move the slider 510, in combination with elapsed time taken for the user to settle on an initial answer using the slider 510, the max speed of the slider 510, and the total distance traveled of the slider 510 over time (including overshoots and/or double-backs), along with the user's answer value for the initial prompt phase, and the adjustment amount that the user changed their answer when prompted with a perturbation, as well as the elapsed time taken for the adjustment, the max speed during the adjustment, and the total distance traveled of the slider 510 during the adjustment (including overshoots and/or double-backs).
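The behavioral quantities enumerated above can be flattened into a fixed-order vector for the machine learning steps that follow. The field names below are hypothetical labels for the quantities named in the text, not identifiers from the actual system:

```python
# One entry per behavioral quantity named in the text, for both the
# initial-prompt phase and the perturbation-adjustment phase.
FEATURES = [
    "thinking_delay",      # delay between prompt and first slider motion
    "initial_elapsed",     # time to settle on the initial answer
    "initial_max_speed",   # max slider speed during the initial phase
    "initial_distance",    # total slider travel, incl. overshoots/double-backs
    "initial_answer",      # answer value at the end of the initial phase
    "adjustment_amount",   # change in answer after the perturbation
    "adjust_elapsed",      # time taken for the adjustment
    "adjust_max_speed",    # max slider speed during the adjustment
    "adjust_distance",     # total slider travel during the adjustment
]

def feature_vector(record):
    """Flatten one user's stored behavioral record for one question
    (both phases) into a fixed-order numeric vector."""
    return [float(record[name]) for name in FEATURES]
```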
In the next obtain additional data step 1002, additional event/user/group data is obtained. For example, in some embodiments, the system collects and/or computes, for each user, the amount of time they spent pulling away from the majority sentiment in the population. In addition to the above behavioral data, the system may also collect and store data regarding how correct each user was in their initial estimate, as well as how correct they were in their final estimate in the face of the perturbation. In some embodiments, the users are required to also engage in a real-time swarm, where the amount of time the user spends pulling against the swarm (i.e. being defiant to the prevailing sentiment) is tracked.
In some embodiments, users are asked “How much would you bet on this outcome?” as a means of gathering further confidence data. In some such embodiments, the dynamic behavioral data is used to estimate confidence by training on the correlation between the behavioral data and the “How much would you bet on this outcome?” answers. (This is called ‘dollar confidence’ in our current methods).
In some embodiments, webcam data is collected from users during the interactive period while they are dynamically manipulating the slider input in response to the prompt. This webcam data, which is configured to capture facial expressions, is run through sentiment analysis software (i.e. facial emotion determination and/or sentiment determination and/or engagement determination) to generate and store sentiment, emotion, and/or engagement data for each user. For example, tools like Emotient® and/or EmoVu may be used, or another suitable emotion-detection API. Facial emotion, sentiment, and/or engagement data collected during the first time period, and separately collected during the second adjustment time period driven by the perturbation, are stored for users for each question they answer. Facial data is thus a secondary form of behavioral data, indicating confidence and/or conviction of users during their interactive responses. This data can then be used in later steps to (a) optimize predictions of confidence in the answer, and/or (b) optimize predictions of accuracy in the answer.
In the compute user score step 1004, the system computes a user score using an algorithm and the obtained data. In one example, the system computes a skill score for each user. Using a football game as an example, if the true outcome of the game is +4, and the user's initial guess is +9, the user might get an initial accuracy score of: |4−9|=5, where the lower the score, the better, with a perfect score being 0. If their updated guess was +8, they will get an updated accuracy score of: |4−8|=4. The updated accuracy score is also calculated with consideration as to whether the perturbation was an influence towards the correct score, or away from the correct score. In one such embodiment, the updated accuracy score could be a function of both the user's initial and final skill scores. As an example, if the user's initial guess is +9, and then after the perturbation the user gives a final guess of +7, where the true outcome of the game is +4, the skill score may be calculated as: |4−9|+|4−7|=8. These scores act as a measure of the user's skill in prediction overall.
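The scoring arithmetic described above reduces to absolute differences, and can be sketched as:

```python
def accuracy_score(true_margin, guess):
    """Absolute-error accuracy score; lower is better, 0 is perfect."""
    return abs(true_margin - guess)

def combined_skill_score(true_margin, initial_guess, final_guess):
    """Combined skill score per the example in the text: the sum of the
    initial error and the post-perturbation error."""
    return accuracy_score(true_margin, initial_guess) + accuracy_score(
        true_margin, final_guess
    )

# Worked example from the text: true outcome +4, initial guess +9,
# final guess +7 after the perturbation.
score = combined_skill_score(4, 9, 7)  # |4-9| + |4-7| = 8
```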
In other embodiments, instead of computing accuracy (i.e. how correct the initial and updated predictions were), the system computes scores for user confidence and/or conviction. Defiance time can be used by the current system to compute a defiance score. The defiance score and/or other real-time behavioral data is of unique inventive value because it enables weighting of participants without needing historical performance data regarding the accuracy of prior forecasts.
In the final train machine learning algorithm step 1006, the behavioral data and/or the user scores are used to train a machine learning system. For example, the system can train the machine learning algorithm on user confidence and/or conviction scores. The system can train a machine learning algorithm using the defiance score to estimate the defiance time of each user. With this estimate, we are able to weight new users' contributions to new crowds based on their estimated defiance time, thereby increasing the accuracy of the new crowd.
By training a Machine Learning algorithm on these scores, the system can predict which users are more likely to be most skillful at answering the question (e.g. generating an accurate forecast) and which users are less likely to do so. With this predicted skill level, the software of the system is then able to weight new users' contributions to new crowds (or swarms) based on their behavioral data and the combination of other factors, thereby increasing the accuracy of the new crowd (or swarm).
The dynamic behavioral data obtained can be used in an adaptive outlier analysis, for example as previously described in related application Ser. No. 16/059,658 for ADAPTIVE OUTLIER ANALYSIS FOR AMPLIFYING THE INTELLIGENCE OF CROWDS AND SWARMS. For example, the behavioral data can be used in combination with other characteristics determined from the survey responses for use in machine learning, including the Outlier Index for that user, as described in the aforementioned related patent application. The contribution of each user to the crowd's statistical average can be weighted by (1−Outlier_Index) for that user. Similarly, when enabling users to participate in a real-time swarm, the User Intent values (as disclosed in the related applications) that are applied in real time can be scaled by a weighting factor of (1−Outlier_Index) for that user. In this way, users who are statistically most likely to provide incorrect insights are either removed from the population and/or have reduced influence on the outcome. What is significant about this method is that it does not use any historical data about the accuracy of participants in prior forecasting events. It enables a fresh pool of participants to be curated into a population that will give amplified accuracy in many cases.
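The (1−Outlier_Index) weighting can be sketched as follows (a minimal illustration; the function and variable names are hypothetical):

```python
def weighted_crowd_average(predictions, outlier_indices):
    """Aggregate user predictions, down-weighting users with a high
    Outlier Index.

    Each user's contribution is scaled by (1 - Outlier_Index), so a
    user with an Outlier Index of 1.0 is effectively removed from
    the crowd's statistical average.
    """
    weights = [1.0 - oi for oi in outlier_indices]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total
```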
In addition to using the behavioral data disclosed above to make a more accurate crowd prediction, as described above, it is also useful to use the behavioral data to identify from the population, a sub-population of people who are the most effective and/or insightful participants, to be used in a real-time swarming process. The present invention enables the curation of human participants by using dynamic behavioral data in two inventive forms—(I) taking an initial pool of baseline participants and culling that pool down to a final pool of curated participants which are then used for crowd-based or swarm-based intelligence generation, and/or (II) taking a pool of baseline participants and assigning weighting factors to those participants based on their likelihood of giving accurate insights (i.e. giving a higher weight within a swarm to participants who are determined to be more likely to give accurate, correctly-confident insights than participants who are determined to be less likely to give accurate, over- or under-confident insights). In some inventive embodiments, both culling and weighting are used in combination—giving a curated pool that has eliminated the participants who are most likely to be low insight performers, and weighting the remaining members of the pool based on their likelihood of being accurate insight performers.
For example, the behavioral data described above may be used in some embodiments, in combination with other characteristics determined from the survey responses for use in machine learning, including the Outlier Index for that user, as described in the aforementioned co-pending patent application Ser. No. 16/059,658.
The contribution of each user to a question in a crowd can be weighted by a machine-learned representation of their confidence and predicted accuracy. Similarly, when enabling users to participate in a real-time swarm, the User Intent values that are applied in real time can be scaled by a weighting factor that is machine-learned from the Outlier Index and behavioral data. In this way, users who are statistically most likely to provide consistently incorrect insights are either removed from the population and/or have reduced influence on the outcome. What is significant about this method is that it does not use any historical data about the accuracy of participants in prior forecasting events. It enables a fresh pool of participants to be curated into a population that will give amplified accuracy in many cases. (This amplification is based only on analysis of that user's responses and behaviors on the current set of questions, which does not require historical accuracy data for that user.)
In some embodiments of the present invention, a plurality of values are generated for each participant within the population of participants that reflect that participant's overall character across the set of events being predicted. The Outlier Index is one such multi-event value that characterizes each participant with respect to the other participants within the population across a set of events being predicted. In addition, a Confidence Index is generated in some embodiments of the present invention as a normalized aggregation of the confidence values provided in conjunction with each prediction within the set of predictions. For example, in the sample set of questions provided above, each prediction includes a confidence question on a scale of 0% to 100%. For each user, the Confidence Index is the average confidence the user reports across the full set of predictions, divided by the average confidence across all users and all predictions in the set. This makes the Confidence Index a normalized confidence value that can be compared across users. In addition, multi-event self-assessment values are also collected at the end of a session, after a participant has provided a full set of predictions.
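The Confidence Index normalization described above can be sketched as follows (hypothetical names; assumes confidence is reported on a 0 to 100 scale):

```python
def confidence_index(confidences_by_user):
    """Confidence Index: each user's mean reported confidence divided
    by the mean confidence across all users and all predictions,
    yielding a normalized value comparable across users."""
    user_means = {u: sum(c) / len(c) for u, c in confidences_by_user.items()}
    all_values = [c for cs in confidences_by_user.values() for c in cs]
    population_mean = sum(all_values) / len(all_values)
    return {u: m / population_mean for u, m in user_means.items()}
```

A user whose average confidence matches the population average receives an index of exactly 1.0; over-confident users score above 1.0 and under-confident users below it.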
In some embodiments of the present invention, a plurality of multi-event characterization values are computed during the data collection and analysis process including (1) Outlier Index, (2) Confidence Index, (3) Predicted Self Accuracy, (4) Predicted Group Accuracy, (5) Self-Assessment of Knowledge, (6) Group-Estimation of Knowledge, (7) Behavioral Accuracy Prediction, and (8) Behavioral Confidence Prediction. In such embodiments, additional methods are added to the curation step wherein Machine Learning is used to find a correlation between the multi-event characterization values and the performance of participants when predicting events similar to the set of events.
In such embodiments, a training phase is employed using machine learning techniques such as regression analysis and/or classification analysis employing one or more learning algorithms. The training phase is conducted by first engaging a large group of participants (for example, 500 to 1000 participants) who are employed to make predictions across a large set of events (for example, 20 to 40 baseball games). For each of these 500 to 1000 participants, and across the set of 20 to 40 events to be predicted, a set of values are computed including an Outlier Index (OI) and at least one or more of a Confidence Index (CI), a Predicted Self Accuracy (PSA), a Predicted Group Accuracy (PGA), a Self-Assessment of Knowledge (SAK), a Group Estimation of Knowledge (GAK), a Behavioral Accuracy Prediction (BAP), and a Behavioral Confidence Prediction (BCP).
In addition, user performance data is collected after the predicted events have transpired (for example, after the 20 to 40 baseball games have been played). This data is then used to generate a score for each of the large pool of participants, the score being an indication of how many (or what percent) of the predicted events were forecast correctly by each user. This value is preferably computed as a normalized value with respect to the mean score and standard deviation of scores earned across the large pool of participants. This normalized value is referred to as a Normalized Event Prediction Score (NEPS). It should be noted that in some embodiments, instead of discrete event predictions, user predictions can be collected as probability percentages provided by the user to reflect the likelihood of each team winning the game; for example, in a Dodgers vs. Padres game, the user could be required to assign percentages such as 78% likelihood the Dodgers win and 22% likelihood the Padres win. In such embodiments, alternate scoring methods may be employed by the software system disclosed herein, for example computing a Brier Score or other similar cost function for each user.
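Both scoring quantities mentioned above, the Brier Score and the Normalized Event Prediction Score, follow standard formulas and can be sketched as (hypothetical names; a minimal illustration):

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and actual
    outcomes (1 if the event happened, 0 if not); lower is better."""
    n = len(forecast_probs)
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / n

def normalized_scores(raw_scores):
    """Normalized Event Prediction Score: each raw score expressed in
    standard deviations from the mean of the participant pool."""
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    std = (sum((s - mean) ** 2 for s in raw_scores) / n) ** 0.5
    return [(s - mean) / std for s in raw_scores]
```

For the Dodgers example in the text, a 78% forecast for a Dodgers win that comes true contributes (0.78 − 1)² = 0.0484 to the user's Brier Score.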
The next step is the training phase wherein the machine learning system is trained (for example, using a regression analysis algorithm or a neural network system) to find a correlation between a plurality of the collected characterization values for a given user (i.e. a plurality of the Outlier Index, the Confidence Index, a Predicted Self Accuracy, a Predicted Group Accuracy, a Self-Assessment of Knowledge, a Group Estimation of Knowledge, a Behavioral Accuracy Prediction, and a Behavioral Confidence Prediction) and the Normalized Event Prediction Score for a given user. This correlation, once derived, can then be used by the inventive methods herein on characterization value data collected from new users (new populations of users) to predict whether those users are likely to be strong performers (i.e. have high Normalized Event Prediction Scores). In such embodiments, the machine learning system (for example using multivariate regression analysis) will provide a certainty metric as to whether or not a user with a particular combination of characterization values (including an Outlier Index) is likely to be a strong or weak performer when making event predictions. In other embodiments, the machine learning system will select a group of participants from the input pool of participants that are predicted to perform well in unison.
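As a simplified sketch of this training phase, a single characterization value can be regressed against the Normalized Event Prediction Score by ordinary least squares. This is a toy stand-in for the multivariate regression or neural network the text describes, with hypothetical names throughout:

```python
def fit_ols(char_values, neps_values):
    """One-variable ordinary least squares fit of characterization
    value against NEPS. Returns (slope, intercept)."""
    n = len(char_values)
    mx = sum(char_values) / n
    my = sum(neps_values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(char_values, neps_values))
    var = sum((x - mx) ** 2 for x in char_values)
    slope = cov / var
    return slope, my - slope * mx

def predict_neps(model, char_value):
    """Predict a new user's NEPS from their characterization value,
    without any historical performance data for that user."""
    slope, intercept = model
    return slope * char_value + intercept
```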
Thus, the final step in the Optimization and Machine Learning process is to use the correlation that comes out of the training phase of the machine learning system. Specifically, the trained model is used by providing as input a set of characterization values for each member of a new population of users, and generating as output a statistical profile for each member of the new population of users that predicts the likelihood that each user will be a strong performer based only on their characterization values (not their historical performance). In some embodiments the output is rather a grouping of agents that is predicted to perform optimally. This is a significant value because it enables a new population of participants to be curated into a high performing sub-population even if historical data does not exist for those new participants.
Non-Linear Probabilistic Wagering
To solve the problem of obtaining authentic, accurate, and consistent expressions of confidence from a user making a prediction, an innovative approach for soliciting confidence from human forecasters has been developed. Rather than asking participants to assign probabilities to their forecast (which is too abstract for most participants to accurately provide), or asking participants to place simple wagers on outcomes (which is too susceptible to variations in risk tolerance), a new methodology has been created where participants express the relative probability of each outcome, but do so in a way that is presented as an authentic wager, and which motivates participants to be as accurate as they can.
This solution is called Non-Linear Probabilistic Wagering (NPW). It is a methodology in which participants (a) are asked to place a wager on each outcome, thereby requiring them to make an authentic assessment of their confidence in each possible result, and (b) are required to distribute capital between the possible outcomes, based not on a simple linear scale as is used in traditional wagering, but on a novel Mean Square Difference scale.
Furthermore, the innovation enables distribution of wagers in response to an easily understood interface control, like a slider interface or dial interface. This allows participants to move an element (like a slider) and adjust the relative wagers on the possible outcomes of a forecasted event, the non-linear computations happening automatically. An example slider interface display 1100 including a slider selection line 1104 with a slider 1102 (also referred to in general as a user manipulatable wager marker) at a neutral (center) position is shown in
It is important to note that the above exemplary slider interface display 1100, while appearing simple, performs unlike any prior confidence interface that we know of. As will be described later in this document, the values assigned to each side of the slider selection line 1104 vary with slider position in a unique and powerful way.
Specifically, this method enables participants to place wagers upon the predicted outcomes, but does so using a unique non-linear scale that models a probabilistic forecast without the users needing to be skilled in thinking in terms of probabilities, or even needing to know anything about probabilities.
The users just need to think in terms of wagers. In some embodiments the users are authentically motivated, for example when their compensation is tied to the true outcome of these events. For example, the users only win wagered amounts (real winnings or simulated points) for the outcome that actually happens in the real event.
This method can be described with respect to the sequence of slider positions shown in the
Referring next to
In the present embodiment, the methods are executed by one or more software routines configured to run on the processor of the computing device. In the initial display user interface step 1500, first, a graphical user interface is displayed, in this case the linear slider interface display 1100, with the amount paid for each outcome (first reward value 1106 and second reward value 1110) clearly identified. In step 1502 the user can adjust some aspect of this interface (in the examples of
Pay if Chelsea Wins = 100*(1−(1−p1)^2)
Pay if Arsenal Wins = 100*(1−(p1)^2)
In this way the reward values 1106, 1110 are interactively responsive to the position of the slider 1102. In some embodiments, such as shown in
In step 1508 the reward values 1106, 1110 are updated in the display 1100, 1200, 1300, 1400 in real time to reflect the interface's current pay-outs.
In the next decision step 1510, the user decides whether he is satisfied with his wager. If the user is satisfied, the method proceeds to step 1512 and the user submits the wager with the slider at the current value. If the user is not satisfied with the wager, the method returns to step 1502, where the user modifies the slider location. In this way the user can interact with the interface until they are satisfied with the wager split and submit their wager (for example, the wagers shown in
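The quadratic payout rule given by the formulas above can be sketched as follows (a minimal illustration with a $100 stake; p1 is the slider position mapped to the range [0, 1], and the names are hypothetical):

```python
def npw_payouts(p1, stake=100.0):
    """Non-linear (Mean Square Difference) payouts for a two-outcome event.

    p1 is the slider position toward the first outcome (e.g. Chelsea),
    expressed as a value between 0 and 1. Moving the slider toward one
    outcome increases that payout non-linearly while decreasing the other.
    """
    pay_first = stake * (1.0 - (1.0 - p1) ** 2)   # paid if first outcome occurs
    pay_second = stake * (1.0 - p1 ** 2)          # paid if second outcome occurs
    return pay_first, pay_second

# At the neutral (center) position, p1 = 0.5, both outcomes pay the same:
# npw_payouts(0.5) -> (75.0, 75.0)
```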
In some embodiments, the system comprises a first software routine configured to run on the processor, the first software routine configured to repeatedly update a first and a second reward value in response to user manipulation of a wager marker between a first limit and a second limit, the first software routine using a non-linear model for updating the first and second reward values in response to linear manipulation of the wager marker, the non-linear model following a monotonic power function that is implemented such that a non-linear increase in the first reward value corresponds to a non-linear decrease of the second reward value, and a non-linear increase in the second reward value corresponds to a non-linear decrease in the first reward value.
In some embodiments the system comprises a second software routine configured to run on the processor and determine a final first reward value and a final second reward value based upon a final position of the wager marker.
In some embodiments the system comprises a third software routine configured to run on the processor and generate a forecast probability value associated with each of the two possible outcomes based upon the final position of the wager marker between the two limits, the forecast probability value being a linear function of the position of the wager marker.
In some embodiments the system comprises a scoring software routine configured to run on the processor, the scoring software routine configured to be executed after an actual outcome of the future event is known, the scoring software routine configured to assign a score to the user based at least in part upon the final reward value associated with the actual outcome.
In some embodiments the Expressed Probability is then mapped to an Implied Probability, which represents the real-world probability of the event after accounting for human biases and individual risk aversion.
Another inventive aspect of this methodology is the mapping of Expressed to Implied probabilities, which can be computed based on either (a) the behavioral and/or performance history of a general pool of participants, (b) the behavioral and/or performance history of this user, or (c) a combination of (a) and (b).
For example, one mapping can be found using a technique called Driven Surveys. In this novel technique, individuals interact with the Probabilistic Wagering program over a series of survey questions and are paid a bonus depending on their wagering success. The questions have a known probabilistic outcome, which the users are told about, and then they are asked to distribute wagers using the unique slider system above. In this way, we get a direct mapping between probabilities that are known to the users and the wager splits that they produce from authentic visceral response.
In the example shown in
But again, the odds in this example were “driven” such that the user knows there is an 80% chance of Heads turning up, and a 20% chance of tails. The user proceeds to adjust the position of the slider 1102 according to the method of
As you can see, the dollar amounts for reward values 1106 and 1110 in
After the survey is complete and the results of the simulated coin flips known, they receive pay (or points) in proportion to the wagers they made. This motivates users to place their wagers in the proportion they believe will maximize their expected return, rather than splitting their money evenly or placing it all on one side.
This process allows the Expressed Probability (the wager split that the user made) to be associated with the true probability of an event (the likelihood of a coin flipping heads) for any individual who takes this test. Additionally, it allows generalizations to be made about the mapping for an average person by taking the statistical average over all users who have taken the test.
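One simple way to build such a mapping from Driven Survey data is to average the wager splits that users produced for each known probability. This is a hypothetical sketch; the specification does not prescribe this exact aggregation:

```python
from collections import defaultdict

def expressed_probability_map(survey_records):
    """survey_records: (known_probability, expressed_probability) pairs
    collected from Driven Survey questions with known odds.

    Returns the average expressed wager split for each driven probability,
    which can then be inverted to map a new user's wager split back to an
    implied real-world probability.
    """
    buckets = defaultdict(list)
    for known_p, expressed_p in survey_records:
        buckets[known_p].append(expressed_p)
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```

Restricting the records to a single user yields that individual's mapping; pooling records across all test-takers yields the average-person mapping described above.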
This unique method can be described by the flowchart of
Referring again to
In some embodiments the bias calibration routine uses an optimized mapping from the forecast probability values to the calibrated forecast value, where the mapping is generated using historical data captured for a population of users who have previously used the interactive system, the historical data including forecast probability values and known outcomes for a set of prior events.
In some embodiments the bias calibration routine uses an optimized mapping from forecast probability values to the calibrated forecast value, where the mapping is generated using historical data captured for the user during a series of previous uses of the interactive system, the historical data including forecast probability values and known outcomes for a set of prior events.
One thing that both poll-based methods and swarm-based methods have in common when making predictions based on input from populations of participants is that a smarter population generally results in more accurate forecasts. As disclosed in co-pending U.S. patent application Ser. No. 16/059,698 by the current inventors, entitled “ADAPTIVE POPULATION OPTIMIZATION FOR AMPLIFYING THE INTELLIGENCE OF CROWDS AND SWARMS,” which is hereby incorporated by reference, methods and systems are disclosed that enable the use of polling data to curate a refined population of people to form a swarm intelligence. While this method is effective, by incorporating improved assessments of participant confidence using the unique NPW process above, deeper and more accurate assessments of human confidence and human conviction are attained and used to significantly improve the population curation process. Specifically, this enables higher accuracy when distinguishing members of the population who are likely to be high-insight performers on a given prediction task from members of the population who are likely to be low-insight performers on that task, and does so without using historical data about their performance on similar tasks. Instead, we can perform outlier analysis to determine on which forecasts the participant went against the conventional wisdom, and then look at their NPW confidence to determine if they were self-aware that their picks were perceived as risky by the general population.
While many embodiments are described herein, it is appreciated that this invention can have a range of variations that practice the same basic methods and achieve the novel collaborative capabilities that have been disclosed above. Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
While the invention herein disclosed has been described by means of specific embodiments, examples and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
This application is a continuation of U.S. application Ser. No. 17/024,474 entitled SYSTEM AND METHOD OF NON-LINEAR PROBABILISTIC FORECASTING TO FOSTER AMPLIFIED COLLECTIVE INTELLIGENCE OF NETWORKED HUMAN GROUPS filed Sep. 17, 2020, which is a continuation of U.S. application Ser. No. 16/356,777 entitled NON-LINEAR PROBABILISTIC WAGERING FOR AMPLIFIED COLLECTIVE INTELLIGENCE filed Mar. 18, 2019, now U.S. Pat. No. 10,817,159, which claims the benefit of U.S. Provisional Application No. 62/648,424 entitled NON-LINEAR PROBABILISTIC WAGERING FOR AMPLIFIED COLLECTIVE INTELLIGENCE filed Mar. 27, 2018, which is a continuation-in-part of U.S. application Ser. No. 16/230,759 entitled METHOD AND SYSTEM FOR A PARALLEL DISTRIBUTED HYPER-SWARM FOR AMPLIFYING HUMAN INTELLIGENCE, filed Dec. 21, 2018, now U.S. Pat. No. 10,817,158, claiming the benefit of U.S. Provisional Application No. 62/611,756 entitled METHOD AND SYSTEM FOR A PARALLEL DISTRIBUTED HYPER-SWARM FOR AMPLIFYING HUMAN INTELLIGENCE, filed Dec. 29, 2017, which is a continuation-in-part of U.S. application Ser. No. 16/154,613 entitled INTERACTIVE BEHAVIORAL POLLING AND MACHINE LEARNING FOR AMPLIFICATION OF GROUP INTELLIGENCE, filed Oct. 8, 2018, now U.S. Pat. No. 11,269,502, claiming the benefit of U.S. Provisional Application No. 62/569,909 entitled INTERACTIVE BEHAVIORAL POLLING AND MACHINE LEARNING FOR AMPLIFICATION OF GROUP INTELLIGENCE, filed Oct. 9, 2017, which is a continuation-in-part of U.S. application Ser. No. 16/059,698 entitled ADAPTIVE POPULATION OPTIMIZATION FOR AMPLIFYING THE INTELLIGENCE OF CROWDS AND SWARMS, filed Aug. 9, 2018, now U.S. Pat. No. 11,151,460, claiming the benefit of U.S. Provisional Application No. 62/544,861, entitled ADAPTIVE OUTLIER ANALYSIS FOR AMPLIFYING THE INTELLIGENCE OF CROWDS AND SWARMS, filed Aug. 13, 2017 and of U.S. Provisional Application No. 
62/552,968 entitled SYSTEM AND METHOD FOR OPTIMIZING THE POPULATION USED BY CROWDS AND SWARMS FOR AMPLIFIED EMERGENT INTELLIGENCE, filed Aug. 31, 2017, which is a continuation-in-part of U.S. application Ser. No. 15/922,453 entitled PARALLELIZED SUB-FACTOR AGGREGATION IN REAL-TIME SWARM-BASED COLLECTIVE INTELLIGENCE SYSTEMS, filed Mar. 15, 2018, claiming the benefit of U.S. Provisional Application No. 62/473,424 entitled PARALLELIZED SUB-FACTOR AGGREGATION IN A REAL-TIME COLLABORATIVE INTELLIGENCE SYSTEMS filed Mar. 19, 2017, which in turn is a continuation-in-part of U.S. application Ser. No. 15/904,239 entitled METHODS AND SYSTEMS FOR COLLABORATIVE CONTROL OF A REMOTE VEHICLE, filed Feb. 23, 2018, now U.S. Pat. No. 10,416,666, claiming the benefit of U.S. Provisional Application No. 62/463,657 entitled METHODS AND SYSTEMS FOR COLLABORATIVE CONTROL OF A ROBOTIC MOBILE FIRST-PERSON STREAMING CAMERA SOURCE, filed Feb. 26, 2017 and also claiming the benefit of U.S. Provisional Application No. 62/473,429 entitled METHODS AND SYSTEMS FOR COLLABORATIVE CONTROL OF A ROBOTIC MOBILE FIRST-PERSON STREAMING CAMERA SOURCE, filed Mar. 19, 2017, which is a continuation-in-part of U.S. application Ser. No. 15/898,468 entitled ADAPTIVE CONFIDENCE CALIBRATION FOR REAL-TIME SWARM INTELLIGENCE SYSTEMS, filed Feb. 17, 2018, now U.S. Pat. No. 10,712,929, claiming the benefit of U.S. Provisional Application No. 62/460,861 entitled ARTIFICIAL SWARM INTELLIGENCE WITH ADAPTIVE CONFIDENCE CALIBRATION, filed Feb. 19, 2017 and also claiming the benefit of U.S. Provisional Application No. 62/473,442 entitled ARTIFICIAL SWARM INTELLIGENCE WITH ADAPTIVE CONFIDENCE CALIBRATION, filed Mar. 19, 2017, which is a continuation-in-part of U.S. application Ser. No. 15/815,579 entitled SYSTEMS AND METHODS FOR HYBRID SWARM INTELLIGENCE, filed Nov. 16, 2017, now U.S. Pat. No. 10,439,836, claiming the benefit of U.S. Provisional Application No. 
62/423,402 entitled SYSTEM AND METHOD FOR HYBRID SWARM INTELLIGENCE filed Nov. 17, 2016, which is a continuation-in-part of U.S. application Ser. No. 15/640,145 entitled METHODS AND SYSTEMS FOR MODIFYING USER INFLUENCE DURING A COLLABORATIVE SESSION OF REAL-TIME COLLABORATIVE INTELLIGENCE, filed Jun. 30, 2017, now U.S. Pat. No. 10,353,551, claiming the benefit of U.S. Provisional Application No. 62/358,026 entitled METHODS AND SYSTEMS FOR AMPLIFYING THE INTELLIGENCE OF A HUMAN-BASED ARTIFICIAL SWARM INTELLIGENCE filed Jul. 3, 2016, which is a continuation-in-part of U.S. application Ser. No. 15/241,340 entitled METHODS FOR ANALYZING DECISIONS MADE BY REAL-TIME COLLECTIVE INTELLIGENCE SYSTEMS, filed Aug. 19, 2016, now U.S. Pat. No. 10,222,961, claiming the benefit of U.S. Provisional Application No. 62/207,234 entitled METHODS FOR ANALYZING THE DECISIONS MADE BY REAL-TIME COLLECTIVE INTELLIGENCE SYSTEMS filed Aug. 19, 2015, which is a continuation-in-part of U.S. application Ser. No. 15/199,990 entitled METHODS AND SYSTEMS FOR ENABLING A CREDIT ECONOMY IN A REAL-TIME COLLABORATIVE INTELLIGENCE, filed Jul. 1, 2016, claiming the benefit of U.S. Provisional Application No. 62/187,470 entitled METHODS AND SYSTEMS FOR ENABLING A CREDIT ECONOMY IN A REAL-TIME SYNCHRONOUS COLLABORATIVE SYSTEM filed Jul. 1, 2015, which is a continuation-in-part of U.S. application Ser. No. 15/086,034 entitled SYSTEM AND METHOD FOR MODERATING REAL-TIME CLOSED-LOOP COLLABORATIVE DECISIONS ON MOBILE DEVICES, filed Mar. 30, 2016, now U.S. Pat. No. 10,310,802, claiming the benefit of U.S. Provisional Application No. 62/140,032 entitled SYSTEM AND METHOD FOR MODERATING A REAL-TIME CLOSED-LOOP COLLABORATIVE APPROVAL FROM A GROUP OF MOBILE USERS filed Mar. 30, 2015, which is a continuation-in-part of U.S. patent application Ser. No. 15/052,876, filed Feb. 25, 2016, entitled DYNAMIC SYSTEMS FOR OPTIMIZATION OF REAL-TIME COLLABORATIVE INTELLIGENCE, now U.S. Pat. No. 
10,110,664, claiming the benefit of U.S. Provisional Application No. 62/120,618 entitled APPLICATION OF DYNAMIC RESTORING FORCES TO OPTIMIZE GROUP INTELLIGENCE IN REAL-TIME SOCIAL SWARMS, filed Feb. 25, 2015, which is a continuation-in-part of U.S. application Ser. No. 15/047,522 entitled SYSTEMS AND METHODS FOR COLLABORATIVE SYNCHRONOUS IMAGE SELECTION, filed Feb. 18, 2016, now U.S. Pat. No. 10,133,460, which in turn claims the benefit of U.S. Provisional Application No. 62/117,808 entitled SYSTEM AND METHODS FOR COLLABORATIVE SYNCHRONOUS IMAGE SELECTION, filed Feb. 18, 2015, which is a continuation-in-part of U.S. application Ser. No. 15/017,424 entitled ITERATIVE SUGGESTION MODES FOR REAL-TIME COLLABORATIVE INTELLIGENCE SYSTEMS, filed Feb. 5, 2016 which in turn claims the benefit of U.S. Provisional Application No. 62/113,393 entitled SYSTEMS AND METHODS FOR ENABLING SYNCHRONOUS COLLABORATIVE CREATIVITY AND DECISION MAKING, filed Feb. 7, 2015, which is a continuation-in-part of U.S. application Ser. No. 14/925,837 entitled MULTI-PHASE MULTI-GROUP SELECTION METHODS FOR REAL-TIME COLLABORATIVE INTELLIGENCE SYSTEMS, filed Oct. 28, 2015, now U.S. Pat. No. 10,551,999, which in turn claims the benefit of U.S. Provisional Application No. 62/069,360 entitled SYSTEMS AND METHODS FOR ENABLING AND MODERATING A MASSIVELY-PARALLEL REAL-TIME SYNCHRONOUS COLLABORATIVE SUPER-INTELLIGENCE, filed Oct. 28, 2014, which is a continuation-in-part of U.S. application Ser. No. 14/920,819 entitled SUGGESTION AND BACKGROUND MODES FOR REAL-TIME COLLABORATIVE INTELLIGENCE SYSTEMS, filed Oct. 22, 2015, now U.S. Pat. No. 10,277,645, which in turn claims the benefit of U.S. Provisional Application No. 62/067,505 entitled SYSTEM AND METHODS FOR MODERATING REAL-TIME COLLABORATIVE DECISIONS OVER A DISTRIBUTED NETWORKS, filed Oct. 23, 2014, which is a continuation-in-part of U.S. application Ser. No. 
14/859,035 entitled SYSTEMS AND METHODS FOR ASSESSMENT AND OPTIMIZATION OF REAL-TIME COLLABORATIVE INTELLIGENCE SYSTEMS, filed Sep. 18, 2015, now U.S. Pat. No. 10,122,775, which in turns claims the benefit of U.S. Provisional Application No. 62/066,718 entitled SYSTEM AND METHOD FOR MODERATING AND OPTIMIZING REAL-TIME SWARM INTELLIGENCES, filed Oct. 21, 2014, which is a continuation-in-part of U.S. patent application Ser. No. 14/738,768 entitled INTUITIVE INTERFACES FOR REAL-TIME COLLABORATIVE INTELLIGENCE, filed Jun. 12, 2015, now U.S. Pat. No. 9,940,006, which in turn claims the benefit of U.S. Provisional Application 62/012,403 entitled INTUITIVE INTERFACE FOR REAL-TIME COLLABORATIVE CONTROL, filed Jun. 15, 2014, which is a continuation-in-part of U.S. application Ser. No. 14/708,038 entitled MULTI-GROUP METHODS AND SYSTEMS FOR REAL-TIME MULTI-TIER COLLABORATIVE INTELLIGENCE, filed May 8, 2015, which in turn claims the benefit of U.S. Provisional Application 61/991,505 entitled METHODS AND SYSTEM FOR MULTI-TIER COLLABORATIVE INTELLIGENCE, filed May 10, 2014, which is a continuation-in-part of U.S. patent application Ser. No. 14/668,970 entitled METHODS AND SYSTEMS FOR REAL-TIME COLLABORATIVE INTELLIGENCE, filed Mar. 25, 2015, now U.S. Pat. No. 9,959,028, which in turn claims the benefit of U.S. Provisional Application 61/970,885 entitled METHOD AND SYSTEM FOR ENABLING A GROUPWISE COLLABORATIVE CONSCIOUSNESS, filed Mar. 26, 2014, all of which are incorporated in their entirety herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5236199 | Thompson, Jr. | Aug 1993 | A |
5400248 | Chisholm | Mar 1995 | A |
5808908 | Ghahramani | Sep 1998 | A |
5867799 | Lang | Feb 1999 | A |
6064978 | Gardner | May 2000 | A |
6480210 | Martino | Nov 2002 | B1 |
6606615 | Jennings | Aug 2003 | B1 |
6792399 | Phillips | Sep 2004 | B1 |
6903723 | Forest | Jun 2005 | B1 |
6944596 | Gray | Sep 2005 | B1 |
7031842 | Musat | Apr 2006 | B1 |
7040982 | Jarvis | May 2006 | B1 |
7155510 | Kaplan | Dec 2006 | B1 |
7158112 | Rosenberg | Jan 2007 | B2 |
7451213 | Kaplan | Nov 2008 | B2 |
7489979 | Rosenberg | Feb 2009 | B2 |
7542816 | Rosenberg | Jun 2009 | B2 |
7562117 | Rosenberg | Jul 2009 | B2 |
7603414 | Rosenberg | Oct 2009 | B2 |
7624077 | Bonabeau | Nov 2009 | B2 |
7653726 | Kaplan | Jan 2010 | B2 |
7690991 | Black | Apr 2010 | B2 |
7831928 | Rose | Nov 2010 | B1 |
7856602 | Armstrong | Dec 2010 | B2 |
7880741 | Proebsting | Feb 2011 | B2 |
7917148 | Rosenberg | Mar 2011 | B2 |
7937285 | Goldberg | May 2011 | B2 |
7958006 | Keil | Jun 2011 | B2 |
8176101 | Rosenberg | May 2012 | B2 |
8209250 | Bradway | Jun 2012 | B2 |
8229824 | Berg | Jul 2012 | B2 |
8250071 | Killalea | Aug 2012 | B1 |
8341065 | Berg | Dec 2012 | B2 |
8396777 | Fine | Mar 2013 | B1 |
8468580 | Casey | Jun 2013 | B1 |
8583470 | Fine | Nov 2013 | B1 |
8589488 | Huston | Nov 2013 | B2 |
8612331 | Hanson | Dec 2013 | B2 |
8655804 | Carrabis | Feb 2014 | B2 |
8660972 | Heidenreich | Feb 2014 | B1 |
8676735 | Heidenreich | Mar 2014 | B1 |
8745104 | Rosenberg | Jun 2014 | B1 |
8762435 | Rosenberg | Jun 2014 | B1 |
8814660 | Thompson | Aug 2014 | B2 |
9005016 | Amaitis | Apr 2015 | B2 |
9483161 | Wenger | Nov 2016 | B2 |
9710836 | O'Malley | Jul 2017 | B1 |
9772759 | Hogan | Sep 2017 | B2 |
9852239 | Natarajan | Dec 2017 | B2 |
9940006 | Rosenberg | Apr 2018 | B2 |
9947174 | Rangarajan | Apr 2018 | B2 |
9959028 | Rosenberg | May 2018 | B2 |
10110664 | Rosenberg | Oct 2018 | B2 |
10122775 | Rosenberg | Nov 2018 | B2 |
10133460 | Rosenberg | Nov 2018 | B2 |
10222961 | Rosenberg | Mar 2019 | B2 |
10277645 | Rosenberg | Apr 2019 | B2 |
10310802 | Rosenberg | Jun 2019 | B2 |
10332412 | Swank | Jun 2019 | B2 |
10353551 | Rosenberg | Jul 2019 | B2 |
10410287 | Marsh | Sep 2019 | B2 |
10416666 | Rosenberg | Sep 2019 | B2 |
10439836 | Rosenberg | Oct 2019 | B2 |
10515516 | Eckman | Dec 2019 | B1 |
10551999 | Rosenberg | Feb 2020 | B2 |
10599315 | Rosenberg | Mar 2020 | B2 |
10606463 | Rosenberg | Mar 2020 | B2 |
10606464 | Rosenberg | Mar 2020 | B2 |
10609124 | Rosenberg | Mar 2020 | B2 |
10656807 | Rosenberg | May 2020 | B2 |
10712929 | Rosenberg | Jul 2020 | B2 |
10713303 | Ashoori | Jul 2020 | B2 |
10817158 | Rosenberg | Oct 2020 | B2 |
10817159 | Willcox | Oct 2020 | B2 |
10902194 | Toronto | Jan 2021 | B2 |
11037400 | Cohen | Jun 2021 | B2 |
11151460 | Rosenberg | Oct 2021 | B2 |
11269502 | Rosenberg | Mar 2022 | B2 |
11360655 | Willcox | Jun 2022 | B2 |
11360656 | Rosenberg | Jun 2022 | B2 |
20010042010 | Hassell | Nov 2001 | A1 |
20020042920 | Thomas | Apr 2002 | A1 |
20020107726 | Torrance | Aug 2002 | A1 |
20020129106 | Gutfreund | Sep 2002 | A1 |
20020152110 | Stewart | Oct 2002 | A1 |
20020171690 | Fox | Nov 2002 | A1 |
20030023685 | Cousins | Jan 2003 | A1 |
20030033193 | Holloway | Feb 2003 | A1 |
20030065604 | Gatto | Apr 2003 | A1 |
20030079218 | Goldberg | Apr 2003 | A1 |
20030088458 | Afeyan | May 2003 | A1 |
20030100357 | Walker | May 2003 | A1 |
20030119579 | Walker | Jun 2003 | A1 |
20030210227 | Smith | Nov 2003 | A1 |
20030227479 | Mizrahi | Dec 2003 | A1 |
20040015429 | Tighe | Jan 2004 | A1 |
20040064394 | Wallman | Apr 2004 | A1 |
20040210550 | Williams | Oct 2004 | A1 |
20050067493 | Urken | Mar 2005 | A1 |
20050075919 | Kim | Apr 2005 | A1 |
20050168489 | Ausbeck | Aug 2005 | A1 |
20050218601 | Capellan | Oct 2005 | A1 |
20050261953 | Malek | Nov 2005 | A1 |
20060010057 | Bradway | Jan 2006 | A1 |
20060147890 | Bradford | Jul 2006 | A1 |
20060200401 | Lisani | Sep 2006 | A1 |
20060204945 | Masuichi | Sep 2006 | A1 |
20060218179 | Gardner | Sep 2006 | A1 |
20060250357 | Safai | Nov 2006 | A1 |
20070011073 | Gardner | Jan 2007 | A1 |
20070039031 | Cansler, Jr. | Feb 2007 | A1 |
20070055610 | Palestrant | Mar 2007 | A1 |
20070067211 | Kaplan | Mar 2007 | A1 |
20070072156 | Kaufman | Mar 2007 | A1 |
20070073606 | Lai | Mar 2007 | A1 |
20070078977 | Kaplan | Apr 2007 | A1 |
20070097150 | Ivashin | May 2007 | A1 |
20070099162 | Sekhar | May 2007 | A1 |
20070121843 | Atazky | May 2007 | A1 |
20070124503 | Ramos | May 2007 | A1 |
20070208727 | Saklikar | Sep 2007 | A1 |
20070209069 | Saklikar | Sep 2007 | A1 |
20070211050 | Ohta | Sep 2007 | A1 |
20070216712 | Louch | Sep 2007 | A1 |
20070220100 | Rosenberg | Sep 2007 | A1 |
20070226296 | Lowrance | Sep 2007 | A1 |
20070294067 | West | Dec 2007 | A1 |
20080003559 | Toyama | Jan 2008 | A1 |
20080015115 | Guyot-Sionnest | Jan 2008 | A1 |
20080016463 | Marsden | Jan 2008 | A1 |
20080091777 | Carlos | Apr 2008 | A1 |
20080103877 | Gerken | May 2008 | A1 |
20080140477 | Tevanian | Jun 2008 | A1 |
20080140688 | Clayton | Jun 2008 | A1 |
20080189634 | Tevanian | Aug 2008 | A1 |
20080195459 | Stinski | Aug 2008 | A1 |
20090037355 | Brave | Feb 2009 | A1 |
20090063379 | Kelly | Mar 2009 | A1 |
20090063463 | Turner | Mar 2009 | A1 |
20090063991 | Baron | Mar 2009 | A1 |
20090063995 | Baron | Mar 2009 | A1 |
20090073174 | Berg | Mar 2009 | A1 |
20090076939 | Berg | Mar 2009 | A1 |
20090076974 | Berg | Mar 2009 | A1 |
20090125821 | Johnson | May 2009 | A1 |
20090170595 | Walker | Jul 2009 | A1 |
20090182624 | Koen | Jul 2009 | A1 |
20090239205 | Morgia | Sep 2009 | A1 |
20090254425 | Horowitz | Oct 2009 | A1 |
20090254836 | Bajrach | Oct 2009 | A1 |
20090287685 | Charnock | Nov 2009 | A1 |
20090325533 | Lele | Dec 2009 | A1 |
20100023857 | Mahesh | Jan 2010 | A1 |
20100100204 | Ng | Apr 2010 | A1 |
20100144426 | Winner | Jun 2010 | A1 |
20100145715 | Cohen | Jun 2010 | A1 |
20100169144 | Estill | Jul 2010 | A1 |
20100174579 | Hughes | Jul 2010 | A1 |
20100199191 | Takahashi | Aug 2010 | A1 |
20100205541 | Rapaport | Aug 2010 | A1 |
20100299616 | Chen | Nov 2010 | A1 |
20110003627 | Nicely | Jan 2011 | A1 |
20110016137 | Goroshevsky | Jan 2011 | A1 |
20110080341 | Helmes | Apr 2011 | A1 |
20110087687 | Immaneni | Apr 2011 | A1 |
20110119048 | Shaw | May 2011 | A1 |
20110141027 | Ghassabian | Jun 2011 | A1 |
20110166916 | Inbar | Jul 2011 | A1 |
20110208328 | Cairns | Aug 2011 | A1 |
20110208684 | Dube | Aug 2011 | A1 |
20110208822 | Rathod | Aug 2011 | A1 |
20110276396 | Rathod | Nov 2011 | A1 |
20110288919 | Gross | Nov 2011 | A1 |
20110320536 | Lobb | Dec 2011 | A1 |
20120005131 | Horvitz | Jan 2012 | A1 |
20120011006 | Schultz | Jan 2012 | A1 |
20120013489 | Earl | Jan 2012 | A1 |
20120072843 | Durham | Mar 2012 | A1 |
20120079396 | Neer | Mar 2012 | A1 |
20120088220 | Feng | Apr 2012 | A1 |
20120088222 | Considine | Apr 2012 | A1 |
20120101933 | Hanson | Apr 2012 | A1 |
20120109883 | Iordanov | May 2012 | A1 |
20120110087 | Culver | May 2012 | A1 |
20120179567 | Soroca | Jul 2012 | A1 |
20120191774 | Bhaskaran | Jul 2012 | A1 |
20120290950 | Rapaport | Nov 2012 | A1 |
20120316962 | Rathod | Dec 2012 | A1 |
20120322540 | Shechtman | Dec 2012 | A1 |
20130013248 | Brugler | Jan 2013 | A1 |
20130019205 | Gil | Jan 2013 | A1 |
20130035981 | Brown | Feb 2013 | A1 |
20130035989 | Brown | Feb 2013 | A1 |
20130041720 | Spires | Feb 2013 | A1 |
20130097245 | Adarraga | Apr 2013 | A1 |
20130103692 | Raza | Apr 2013 | A1 |
20130132284 | Convertino | May 2013 | A1 |
20130160142 | Lai | Jun 2013 | A1 |
20130171594 | Gorman | Jul 2013 | A1 |
20130184039 | Steir | Jul 2013 | A1 |
20130191181 | Balestrieri | Jul 2013 | A1 |
20130191390 | Engel | Jul 2013 | A1 |
20130203506 | Brown | Aug 2013 | A1 |
20130231595 | Zoss | Sep 2013 | A1 |
20130254146 | Ellis | Sep 2013 | A1 |
20130298690 | Bond | Nov 2013 | A1 |
20130300740 | Snyder | Nov 2013 | A1 |
20130311904 | Tien | Nov 2013 | A1 |
20130317966 | Bass | Nov 2013 | A1 |
20130339445 | Perincherry | Dec 2013 | A1 |
20140006042 | Keefe | Jan 2014 | A1 |
20140012780 | Sanders | Jan 2014 | A1 |
20140047356 | Ameller-Van-Baumberghen et al. | Feb 2014 | A1 |
20140057240 | Colby | Feb 2014 | A1 |
20140074751 | Rocklitz | Mar 2014 | A1 |
20140075004 | Van Dusen | Mar 2014 | A1 |
20140087841 | Council | Mar 2014 | A1 |
20140089233 | Ellis | Mar 2014 | A1 |
20140089521 | Horowitz | Mar 2014 | A1 |
20140100924 | Ingenito | Apr 2014 | A1 |
20140108293 | Barrett | Apr 2014 | A1 |
20140108915 | Lu | Apr 2014 | A1 |
20140128162 | Arafat | May 2014 | A1 |
20140129946 | Harris | May 2014 | A1 |
20140155142 | Conroy | Jun 2014 | A1 |
20140162241 | Morgia | Jun 2014 | A1 |
20140171039 | Bjontegard | Jun 2014 | A1 |
20140214831 | Chi | Jul 2014 | A1 |
20140249689 | Bienkowski | Sep 2014 | A1 |
20140249889 | Park | Sep 2014 | A1 |
20140258970 | Brown | Sep 2014 | A1 |
20140278835 | Moseson | Sep 2014 | A1 |
20140279625 | Carter | Sep 2014 | A1 |
20140282586 | Shear | Sep 2014 | A1 |
20140310607 | Abraham | Oct 2014 | A1 |
20140316616 | Kugelmass | Oct 2014 | A1 |
20140337097 | Farlie | Nov 2014 | A1 |
20140351719 | Cattermole | Nov 2014 | A1 |
20140358825 | Phillipps | Dec 2014 | A1 |
20140379439 | Sekhar | Dec 2014 | A1 |
20150006492 | Wexler | Jan 2015 | A1 |
20150065214 | Olson | Mar 2015 | A1 |
20150089399 | Megill | Mar 2015 | A1 |
20150120619 | Baughman | Apr 2015 | A1 |
20150149932 | Yamada | May 2015 | A1 |
20150154557 | Skaaksrud | Jun 2015 | A1 |
20150156233 | Bergo | Jun 2015 | A1 |
20150170050 | Price | Jun 2015 | A1 |
20150192437 | Bouzas | Jul 2015 | A1 |
20150236866 | Colby | Aug 2015 | A1 |
20150242755 | Gross | Aug 2015 | A1 |
20150242972 | Lemmey | Aug 2015 | A1 |
20150248817 | Steir | Sep 2015 | A1 |
20150262208 | Bjontegard | Sep 2015 | A1 |
20150294527 | Kolomiiets | Oct 2015 | A1 |
20150302308 | Bartek | Oct 2015 | A1 |
20150310687 | Morgia | Oct 2015 | A1 |
20150326625 | Rosenberg | Nov 2015 | A1 |
20150331601 | Rosenberg | Nov 2015 | A1 |
20150339020 | D'Amore | Nov 2015 | A1 |
20150347903 | Saxena | Dec 2015 | A1 |
20150378587 | Falaki | Dec 2015 | A1 |
20160034305 | Shear | Feb 2016 | A1 |
20160044073 | Rosenberg | Feb 2016 | A1 |
20160048274 | Rosenberg | Feb 2016 | A1 |
20160055236 | Frank | Feb 2016 | A1 |
20160057182 | Rosenberg | Feb 2016 | A1 |
20160062735 | Wilber | Mar 2016 | A1 |
20160078458 | Gold | Mar 2016 | A1 |
20160082348 | Kehoe | Mar 2016 | A1 |
20160092989 | Marsh | Mar 2016 | A1 |
20160098778 | Blumenthal | Apr 2016 | A1 |
20160133095 | Shraibman | May 2016 | A1 |
20160154570 | Rosenberg | Jun 2016 | A1 |
20160170594 | Rosenberg | Jun 2016 | A1 |
20160170616 | Rosenberg | Jun 2016 | A1 |
20160189025 | Hayes | Jun 2016 | A1 |
20160209992 | Rosenberg | Jul 2016 | A1 |
20160210602 | Siddique | Jul 2016 | A1 |
20160274779 | Rosenberg | Sep 2016 | A9 |
20160277457 | Rosenberg | Sep 2016 | A9 |
20160284172 | Weast | Sep 2016 | A1 |
20160314527 | Rosenberg | Oct 2016 | A1 |
20160320956 | Rosenberg | Nov 2016 | A9 |
20160335647 | Rebrovick | Nov 2016 | A1 |
20160349976 | Lauer | Dec 2016 | A1 |
20160357418 | Rosenberg | Dec 2016 | A1 |
20160366200 | Healy | Dec 2016 | A1 |
20170083974 | Guillen | Mar 2017 | A1 |
20170091633 | Vemula | Mar 2017 | A1 |
20170223411 | De Juan | Aug 2017 | A1 |
20170300198 | Rosenberg | Oct 2017 | A1 |
20170337498 | Rahimi | Nov 2017 | A1 |
20180076968 | Rosenberg | Mar 2018 | A1 |
20180181117 | Rosenberg | Jun 2018 | A1 |
20180196593 | Rosenberg | Jul 2018 | A1 |
20180203580 | Rosenberg | Jul 2018 | A1 |
20180204184 | Rosenberg | Jul 2018 | A1 |
20180217745 | Rosenberg | Aug 2018 | A1 |
20180239523 | Rosenberg | Aug 2018 | A1 |
20180373991 | Rosenberg | Dec 2018 | A1 |
20180375676 | Bader-Natal | Dec 2018 | A1 |
20190014170 | Rosenberg | Jan 2019 | A1 |
20190034063 | Rosenberg | Jan 2019 | A1 |
20190042081 | Rosenberg | Feb 2019 | A1 |
20190066133 | Cotton | Feb 2019 | A1 |
20190121529 | Rosenberg | Apr 2019 | A1 |
20190212908 | Willcox | Jul 2019 | A1 |
20200005341 | Marsh | Jan 2020 | A1 |
20210004149 | Willcox | Jan 2021 | A1 |
20210004150 | Rosenberg | Jan 2021 | A1 |
20210150443 | Shih | May 2021 | A1 |
20210209554 | Hill | Jul 2021 | A1 |
20210241127 | Rosenberg | Aug 2021 | A1 |
Number | Date | Country |
---|---|---|
2414397 | Aug 2003 | CA |
3123442 | Feb 2017 | EP |
3155584 | Apr 2017 | EP |
3210386 | Aug 2017 | EP |
2561458 | Oct 2018 | GB |
2010191533 | Sep 2010 | JP |
5293249 | Sep 2013 | JP |
101273535 | Jun 2013 | KR |
2011121275 | Oct 2011 | WO |
2014023432 | Jan 2014 | WO |
2014190351 | Nov 2014 | WO |
2015148738 | Oct 2015 | WO |
2015195492 | Dec 2015 | WO |
2016064827 | Apr 2016 | WO |
2017004475 | Jan 2017 | WO |
2018006065 | Jan 2018 | WO |
2018094105 | May 2018 | WO |
Entry |
---|
“Dialogr—A simple tool for collective thinking”; Mar. 25, 2015; http://www.dialogr.com./; 1 page. |
Atanasov et al.; “Distilling the Wisdom of Crowds: Prediction Markets vs. Prediction Polls”; Management Science, Articles in Advance; pp. 1-16; http://dx.doi.org/10.1287/mnsc.2015.2374; 2016 INFORMS (Year 2016). |
Beni; “From Swarm Intelligence to Swarm Robotics”; Swarm Robotics WS 2004, LNCS 3342; pp. 1-9; 2005. |
Combined Search and Examination Report under Sections 17 & 18(3) for GB2202778.3 issued by the GB Intellectual Property Office dated Mar. 10, 2022. |
Cuthbertson; “Artificial Intelligence Turns $20 into $11,000 in Kentucky Derby Bet”; Newsweek Tech & Science; http://www.newsweek.com/artificial-intelligence-turns-20-11000-kentucky-derby-bet-457783; May 10, 2016; 9 pages. |
Cuthbertson; “Oscar Predictions: AI Calculates Leonardo DiCaprio Will Finally Get His Oscar”; Newsweek Tech & Science; http://www.newsweek.com/oscar-predictions-artificial-intelligence-predicts-leo-will-finally-get-his-430712; Feb. 26, 2016; 3 pages. |
Cuthbertson; “Swarm Intelligence: AI Algorithm Predicts the Future”; Newsweek Tech & Science; http://www.newsweek.com/swarm-intelligence-ai-algorithm-predicts-future-418707; Jan. 25, 2016; 4 pages. |
Deck et al.; “Prediction Markets in the Laboratory”; University of Arkansas and Chapman University; J. Econ. Surv., 2013; 33 pages. |
Deneubourg et al.; “Collective Patterns and Decision-Making”; Ethology Ecology & Evolution; Mar. 22, 1989; pp. 295-311. |
Ding et al.; “Time Weight Collaborative Filtering”; CIKM'05, Oct. 31-Nov. 5, 2005; Bremen, Germany; pp. 485-492. |
EP; Extended European Search Report for EP Application No. 15767909.3 mailed from the European Patent Office dated Sep. 4, 2017. |
EP; Extended European Search Report for EP Application No. 15808982.1 mailed from the European Patent Office dated Nov. 28, 2017. |
EP; Extended European Search Report for EP Application No. 15852495.9 mailed from the European Patent Office dated Mar. 21, 2018. |
Examination Report for Indian Patent Application No. 201747000968 mailed from the Indian Patent Office dated Nov. 4, 2020. |
Examination Report under Section 18(3) for GB1805236.5 mailed from the Intellectual Property Office dated Jul. 30, 2021. |
Examination Report under Section 18(3) for GB1805236.5 mailed from the Intellectual Property Office dated Apr. 1, 2021. |
Examination Report under Section 18(3) for GB1805236.5 mailed from the Intellectual Property Office dated Mar. 24, 2022. |
Gauchou et al.; “Expression of Nonconscious Knowledge via Ideomotor Actions”; Consciousness and Cognition; Jul. 28, 2011; 9 pages. |
Green; “Testing and Quantifying Collective Intelligence”; Collective Intelligence 2015; May 31, 2015; 4 pages. |
Gurcay et al.; “The Power of Social Influence on Estimation Accuracy”; Journal of Behavioral Decision Making; J. Behav. Dec. Making (2014); Published online in Wiley Online Library (wileyonlinelibrary.com); DOI:10.1002/bdm.1843; 12 pages (Year: 2014). |
Hanson et al.; “Information Aggregation and Manipulation in an Experimental Market”; Interdisciplinary Center for Economic Science, George Mason University; Jul. 12, 2005; 15 pages. |
Herkewitz; “Upvotes, Downvotes, and the Science of the Reddit Hivemind”; Aug. 8, 2013; http://www.popularmechanics.com/science/health/a9335/upvotes-downvotes-and-the-scien . . . ; downloaded Mar. 25, 2015; 10 pages. |
International Search Report and Written Opinion of the International Searching Authority for PCT/ US2015/022594 dated Jun. 29, 2015. |
Malone et al.; “Harnessing Crowds: Mapping the Genome of Collective Intelligence”; MIT Center for Collective Intelligence; Feb. 2009; 20 pages. |
Mathematics Libre Texts; “3.3: Power Functions and Polynomial Functions”; Accessed on Oct. 8, 2021 at https://math.libretexts.org/Bookshelves/Precalculus/Precalculus_(OpenStax)/03%3A_Polynomial_and_Rational_Functions/3.03%3A_Power_Functions_and_Polynomial_Functions (Year:2021); 12 pages. |
Meyer; “Meet Loomio, The Small-Scale Decision-Making Platform With The Biggest Ambitions”; Mar. 13, 2014; https://gigaom.com/2014/03/13/meet-loomio-the-small-scale-decision-making-platform-wi . . . ; downloaded Mar. 25, 2015; 11 pages. |
PCT; International Search Report and Written Opinion of the International Searching Authority for PCT/US2015/035694 dated Aug. 28, 2015. |
PCT; International Search Report and Written Opinion of the International Searching Authority for PCT/US2015/56394 dated Feb. 4, 2016. |
PCT; International Search Report and Written Opinion of the International Searching Authority for PCT/US2016/040600 dated Nov. 29, 2016. |
PCT; International Search Report and Written Opinion of the International Searching Authority for PCT/US2017/040480 dated Oct. 23, 2017. |
PCT; International Search Report and Written Opinion of the International Searching Authority for PCT/US2017/062095 dated May 23, 2018. |
Puleston et al.; “Predicting the Future: Primary Research Exploring the Science of Prediction”; Copyright Esomar 2014; 31 pages; (Year: 2014). |
Quora; “What are Some Open Source Prediction Market Systems?”; retrieved from https://www.quora.com/What-are-some-Open-Source-Prediction-Market-systems; on Mar. 19, 2020 (Year: 2016). |
Rand et al.; “Dynamic Social Networks Promote Cooperation in Experiments with Humans”; PNAS; Nov. 29, 2011; vol. 108, No. 48; pp. 19193-19198. |
Robertson; “After Success of Mob-Run ‘Pokemon’, Twitch Bets on Turning Viewers Into ‘Torture Artists’ Streaming Game Platform Helps Fund ‘Choice Chamber’, Where the Chat Window Sets the Challenges”; The Verge; Apr. 16, 2014; http://www.theverge.com/2014/4/16/5618334/twitch-streaming-platform-funds-viewer-con . . . ; downloaded Mar. 25, 2015; 4 pages. |
Rosenberg et al.; “Amplifying Prediction Accuracy Using Swarm A. I.”; Intelligent Systems Conference 2017; Sep. 7, 2017; 5 pages. |
Rosenberg et al.; “Crowds vs. Swarms, A Comparison of Intelligence”; IEEE; Oct. 21, 2016; 4 pages. |
Rosenberg; U.S. Appl. No. 17/024,580, filed Sep. 17, 2020. |
Rosenberg; “Artificial Swarm Intelligence vs. Human Experts”; Neural Networks (IJCNN); 2016 International Joint Conference on IEEE; Jul. 24, 2016; 5 pages. |
Rosenberg; “Artificial Swarm Intelligence, a human-in-the-loop approach to A. I.”; Association for the Advancement of Artificial Intelligence; Feb. 12, 2016; 2 pages. |
Rosenberg; “Human Swarming and The Future of Collective Intelligence”; Singularity WebLog; https://www.singularityweblog.com/human-swarming-and-the-future-of-collective-intelligence/; Jul. 19, 2015; 7 pages. |
Rosenberg; “Human Swarming, a real-time method for Parallel Distributed Intelligence”; Proceedings of IEEE, 2015 Swarm/Human Blended Intelligence; Sep. 28, 2015; 7 pages. |
Rosenberg; “Human Swarms Amplify Accuracy in Honesty Detection”; Collective Intelligence 2017; Jun. 15, 2017; 5 pages. |
Rosenberg; “Human Swarms, a Real-Time Method for Collective Intelligence”; Proceedings of the European Conference on Artificial Life 2015; Jul. 20, 2015; pp. 658-659. |
Rosenberg; “Monkey Room Book One”; Outland Pictures; Amazon ebook; Jan. 15, 2014; 39 pages. |
Rosenberg; “Monkey Room Book Three”; Outland Pictures; Amazon ebook; Feb. 20, 2014; 22 pages. |
Rosenberg; “Monkey Room Book Two”; Outland Pictures; Amazon ebook; Feb. 9, 2014; 27 pages. |
Rosenberg; “Monkey Room”; Outland Pictures; Amazon; Mar. 30, 2014; 110 pages. |
Rosenberg; “New Hope for Humans in an A. I. World”; TEDxKC—You Tube; Sep. 7, 2017; http://www.youtube.com/watch?v=Eu-RyZT_Uas. |
Rosenberg; U.S. Appl. No. 14/668,970, filed Mar. 25, 2015. |
Rosenberg; U.S. Appl. No. 14/708,038, filed May 8, 2015. |
Rosenberg; U.S. Appl. No. 14/738,768, filed Jun. 12, 2015. |
Rosenberg; U.S. Appl. No. 14/859,035, filed Sep. 18, 2015. |
Rosenberg; U.S. Appl. No. 14/920,819, filed Oct. 22, 2015. |
Rosenberg; U.S. Appl. No. 14/925,837, filed Oct. 28, 2015. |
Rosenberg; U.S. Appl. No. 15/017,424, filed Feb. 5, 2016. |
Rosenberg; U.S. Appl. No. 15/047,522, filed Feb. 18, 2016. |
Rosenberg; U.S. Appl. No. 15/052,876, filed Feb. 25, 2016. |
Rosenberg; U.S. Appl. No. 15/086,034, filed Mar. 30, 2016. |
Rosenberg; U.S. Appl. No. 15/199,990, filed Jul. 1, 2016. |
Rosenberg; U.S. Appl. No. 15/241,340, filed Aug. 19, 2016. |
Rosenberg; U.S. Appl. No. 15/640,145, filed Jun. 30, 2017. |
Rosenberg; U.S. Appl. No. 15/898,468, filed Feb. 17, 2018. |
Rosenberg; U.S. Appl. No. 15/904,239, filed Feb. 23, 2018. |
Rosenberg; U.S. Appl. No. 15/910,934, filed Mar. 2, 2018. |
Rosenberg; U.S. Appl. No. 15/922,453, filed Mar. 15, 2018. |
Rosenberg; U.S. Appl. No. 15/936,324, filed Mar. 26, 2018. |
Rosenberg; U.S. Appl. No. 16/059,698, filed Aug. 9, 2018. |
Rosenberg; U.S. Appl. No. 16/154,613, filed Oct. 8, 2018. |
Rosenberg; U.S. Appl. No. 17/581,769, filed Jan. 21, 2022. |
Rosenberg; U.S. Appl. No. 17/744,464, filed May 13, 2022. |
Rosenberg; U.S. Appl. No. 15/959,080, filed Apr. 20, 2018. |
Salminen; “Collective Intelligence in Humans: A Literature Review”; Lappeenranta University of Technology, Lahti School of Innovation; 1Proceedings; 2012; 8 pages. |
Search and Examination Report under Sections 17 & 18(3) for Application No. GB1805236.5 issued by the UK Intellectual Property Office dated Jan. 28, 2022. |
Search Report under Section 17 for GB2202778.3 issued by the GB Intellectual Property Office dated Mar. 4, 2022. |
Souppouris; “Playing ‘Pokemon’ with 78,000 People is Frustratingly Fun”; The Verge; Feb. 17, 2014; http://www.theverge.com/2014/2/17/5418690/play-this-twitch-plays-pokemon-crowdsource . . . ; downloaded Mar. 25, 2015; 3 pages. |
Stafford; “How the Ouija Board Really Moves”; BBC Future; Jul. 30, 2013; http://www.bbc.com/future/story/20130729-what-makes-the-ouija-board-move; downloaded Mar. 25, 2015; 5 pages. |
Surowiecki; “The Wisdom of Crowds—Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations”; Business Book Review; vol. 21, No. 43; 2006; 10 pages. |
Transforming Relationships; “Lottery: A tax on the statistically-challenged.”; Chapter 4, Sec 4.1; Accessed on Oct. 8, 2021 at http://www.henry.k12.ga.us/ugh/apstat/chaptemotes/sec4.1.html (Year:2021); 3 pages. |
Unanimous A. I.; “What is Swarm Intelligence”; 2015; http://unu.ai/swarm-intelligence/; downloaded Oct. 6, 2016; 3 pages. |
USPTO; Non-Final Office Action for U.S. Appl. No. 17/024,474 dated Oct. 12, 2021. |
USPTO; Examiner Interview Summary for U.S. Appl. No. 15/086,034 dated Feb. 13, 2019. |
USPTO; Examiner Interview Summary for U.S. Appl. No. 15/086,034 dated Jan. 9, 2019. |
USPTO; Examiner Interview Summary for U.S. Appl. No. 16/059,698 dated Feb. 2, 2021. |
USPTO; Final Office Action for U.S. Appl. No. 14/925,837 dated Aug. 7, 2019. |
USPTO; Final Office Action for U.S. Appl. No. 15/086,034 dated Jul. 17, 2018. |
USPTO; Final Office Action for U.S. Appl. No. 16/059,698 dated Dec. 31, 2020. |
USPTO; Non-Final Office Action for U.S. Appl. No. 14/738,768 dated Sep. 8, 2017. |
USPTO; Non-Final Office Action for U.S. Appl. No. 17/024,580 dated Nov. 23, 2021. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/047,522 dated Jan. 5, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 14/708,038 dated Feb. 15, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 14/859,035 dated Feb. 12, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 14/920,819 dated Jun. 27, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 14/925,837 dated Apr. 3, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/017,424 dated Apr. 2, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/052,876 dated Feb. 22, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/086,034 dated Feb. 2, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/199,990 dated Sep. 25, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/241,340 dated Jul. 19, 2018. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/898,468 dated Mar. 3, 2020. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/910,934 dated Oct. 16, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/922,453 dated Dec. 23, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/936,324 dated Oct. 21, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 15/959,080 dated Nov. 7, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/059,698 dated Jun. 8, 2020. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/130,990 dated Oct. 21, 2019. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/147,647 dated Jan. 27, 2020. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/154,613 dated Aug. 20, 2021. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/230,759 dated Mar. 26, 2020. |
USPTO; Non-Final Office Action for U.S. Appl. No. 16/668,970 dated Aug. 15, 2017. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/154,613 dated Nov. 3, 2021. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/130,990 dated Jan. 21, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 14/668,970 dated Feb. 8, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 14/738,768 dated Feb. 2, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 14/859,035 dated Aug. 23, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 14/920,819 dated Dec. 27, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 14/925,837 dated Nov. 7, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/959,080 dated Jan. 31, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/047,522 dated Aug. 30, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/052,876 dated Aug. 13, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/086,034 dated Mar. 5, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/241,340 dated Nov. 20, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/640,145 dated May 23, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/640,145 dated Nov. 15, 2018. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/815,579 dated Jul. 31, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/898,468 dated May 1, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/910,934 dated Jan. 15, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 15/936,324 dated Dec. 9, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/059,698 dated Mar. 15, 2021. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/147,647 dated Mar. 18, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/230,759 dated Jul. 17, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/356,777 dated Jul. 22, 2020. |
USPTO; Notice of Allowance for U.S. Appl. No. 16/904,239 dated Jun. 17, 2019. |
USPTO; Notice of Allowance for U.S. Appl. No. 17/024,580 dated Apr. 12, 2022. |
USPTO; Office Action for U.S. Appl. No. 14/708,038 dated Apr. 23, 2019. |
USPTO; Restriction Requirement for U.S. Appl. No. 15/017,424 dated Oct. 2, 2018. |
USPTO; Restriction Requirement for U.S. Appl. No. 15/199,990 dated Aug. 1, 2019. |
USPTO; Notice of Allowance issued in U.S. Appl. No. 17/024,474 dated Apr. 11, 2022. |
Wikipedia; “Swarm (simulation)”; Jul. 22, 2016; http://en.wikipedia.org/wiki/Swarm_(simulation); downloaded Oct. 6, 2016; 2 pages. |
Wikipedia; “Swarm intelligence”; Aug. 31, 2016; http://en.wikipedia.org/wiki/Swarm_intelligence; downloaded Oct. 6, 2016; 8 pages. |
Willcox; U.S. Appl. No. 16/356,777, filed Mar. 18, 2019. |
Willcox; U.S. Appl. No. 17/024,474, filed Sep. 17, 2020. |
Yeung et al.; “Metacognition in human decision-making: confidence and error monitoring”; Philosophical Transactions of the Royal Society B; 2012; pp. 1310-1321. |
Number | Date | Country | |
---|---|---|---|
20220276775 A1 | Sep 2022 | US |
Number | Date | Country | |
---|---|---|---|
62648424 | Mar 2018 | US | |
62611756 | Dec 2017 | US | |
62569909 | Oct 2017 | US | |
62552968 | Aug 2017 | US | |
62544861 | Aug 2017 | US | |
62473424 | Mar 2017 | US | |
62473442 | Mar 2017 | US | |
62473429 | Mar 2017 | US | |
62463657 | Feb 2017 | US | |
62460861 | Feb 2017 | US | |
62423402 | Nov 2016 | US | |
62358026 | Jul 2016 | US | |
62207234 | Aug 2015 | US | |
62187470 | Jul 2015 | US | |
62140032 | Mar 2015 | US | |
62120618 | Feb 2015 | US | |
62117808 | Feb 2015 | US | |
62113393 | Feb 2015 | US | |
62069360 | Oct 2014 | US | |
62067505 | Oct 2014 | US | |
62066718 | Oct 2014 | US | |
62012403 | Jun 2014 | US | |
61991505 | May 2014 | US | |
61970885 | Mar 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17024474 | Sep 2020 | US |
Child | 17744479 | US | |
Parent | 16356777 | Mar 2019 | US |
Child | 17024474 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16230759 | Dec 2018 | US |
Child | 16356777 | US | |
Parent | 16154613 | Oct 2018 | US |
Child | 16230759 | US | |
Parent | 16059698 | Aug 2018 | US |
Child | 16154613 | US | |
Parent | 15922453 | Mar 2018 | US |
Child | 16059698 | US | |
Parent | 15904239 | Feb 2018 | US |
Child | 15922453 | US | |
Parent | 15898468 | Feb 2018 | US |
Child | 15904239 | US | |
Parent | 15815579 | Nov 2017 | US |
Child | 15898468 | US | |
Parent | 15640145 | Jun 2017 | US |
Child | 15815579 | US | |
Parent | 15241340 | Aug 2016 | US |
Child | 15640145 | US | |
Parent | 15199990 | Jul 2016 | US |
Child | 15241340 | US | |
Parent | 15086034 | Mar 2016 | US |
Child | 15199990 | US | |
Parent | 15052876 | Feb 2016 | US |
Child | 15086034 | US | |
Parent | 15047522 | Feb 2016 | US |
Child | 15052876 | US | |
Parent | 15017424 | Feb 2016 | US |
Child | 15047522 | US | |
Parent | 14925837 | Oct 2015 | US |
Child | 15017424 | US | |
Parent | 14920819 | Oct 2015 | US |
Child | 14925837 | US | |
Parent | 14859035 | Sep 2015 | US |
Child | 14920819 | US | |
Parent | 14738768 | Jun 2015 | US |
Child | 14859035 | US | |
Parent | 14708038 | May 2015 | US |
Child | 14738768 | US | |
Parent | 14668970 | Mar 2015 | US |
Child | 14708038 | US |