This disclosure relates to the field of analysis of physiological responses from viewers of media instances.
A key to creating a high-performing media instance is to ensure that every event in the media elicits the desired responses from viewers. Here, the media instance can be, but is not limited to, a video game, an advertisement clip, a movie, a computer application, a printed media (e.g., a magazine), a website, an online advertisement, a recorded video, a live performance of media, and other types of media.
Physiological data, which includes but is not limited to heart rate, brain waves, electroencephalogram (EEG) signals, blink rate, breathing, motion, muscle movement, galvanic skin response and any other response correlated with changes in emotion of a viewer of a media instance, can give a trace (e.g., a line drawn by a recording instrument) of the viewer's responses while he/she is watching the media instance. The physiological data can be measured by one or more physiological sensors, each of which can be but is not limited to, an electroencephalogram, an electrocardiogram, an accelerometer, a blood oxygen sensor, a galvanometer, an electromyograph, a skin temperature sensor, a breathing sensor, an eye tracking sensor, a pupil dilation sensor, and any other physiological sensor.
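For illustration only, the following minimal sketch shows one way such time-stamped physiological data could be represented in software. The class names, field names, and channel labels are assumptions made for the example and are not a schema taken from this disclosure.

```python
# Illustrative only: a possible representation of time-stamped physiological
# samples and a per-viewer trace. Field and channel names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhysiologicalSample:
    timestamp: float            # seconds from the start of the media instance
    channels: Dict[str, float]  # e.g., {"heart_rate": 72.0, "gsr": 0.8, "eeg_alpha": 0.3}

@dataclass
class ViewerTrace:
    viewer_id: str
    samples: List[PhysiologicalSample] = field(default_factory=list)

    def channel(self, name: str) -> List[float]:
        """Extract one channel (e.g., heart rate) as a list of values over time."""
        return [s.channels.get(name, float("nan")) for s in self.samples]
```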
It is well established that physiological data in the human body of a viewer correlates with the viewer's change in emotions. Thus, from the measured “low level” physiological data, “high level” (e.g., easier to understand, intuitive to look at) physiological responses from the viewers of the media instance can be created. An effective media instance that connects with its audience/viewers is able to elicit the desired emotional response. Here, the high level physiological responses include, but are not limited to, liking (valence)—positive/negative responses to events in the media instance, intent to purchase or recall, emotional engagement in the media instance, thinking—amount of thoughts and/or immersion in the experience of the media instance, and adrenaline—anger, distraction, frustration, and other emotional experiences to events in the media instance, and tension and stress.
Advertisers, media producers, educators, scientists, engineers, doctors and other relevant parties have long desired to have greater access to collected reactions to their media products, and to records of responses throughout the day, from their targets, customers, clients and pupils. These parties desire to understand the responses people have to their particular stimulus in order to tailor their information or media instances to better suit the needs of end users and/or to increase the effectiveness of the media instance created. Making the reactions to the media instances available remotely over the Web to these interested parties has potentially very large commercial and socially positive impacts. Consequently, allowing a user to remotely access and analyze the media instance and the physiological responses from numerous viewers to the media instance is desired.
Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference. Notwithstanding the prior sentence, U.S. patent application Ser. No. 12/244,737, filed Oct. 2, 2008; U.S. patent application Ser. No. 12/244,748, filed Oct. 2, 2008; U.S. patent application Ser. No. 12/263,331, filed Oct. 31, 2008; U.S. patent application Ser. No. 12/244,752, filed Oct. 2, 2008; U.S. patent application Ser. No. 12/263,350, filed Oct. 31, 2008; U.S. patent application Ser. No. 12/326,016, filed Dec. 1, 2008; and U.S. patent application Ser. No. 13/252,910, filed Oct. 4, 2011 are not incorporated by reference.
Examples disclosed herein enable remote and interactive access, navigation, and analysis of reactions from one or more viewers to a specific media instance. Here, the reactions include, but are not limited to, physiological responses, survey results, verbatim feedback, event-based metadata, and derived statistics for indicators of success and failure from the viewers. The reactions from the viewers are aggregated and stored in a database and are delivered to a user via a web-based graphical interface or application, such as a Web browser. Through the web-based graphical interface, the user is able to remotely access and navigate the specific media instance, together with one or more of: the aggregated physiological responses that have been synchronized with the media instance, the survey results, and the verbatim feedbacks related to the specific media instance. Instead of being presented with static data (such as a snapshot) of the viewers' reactions to the media instance, the user is now able to interactively divide, dissect, parse, and analyze the reactions in any way he/she prefers. The examples disclosed herein provide automation that enables those who are not experts in the field of physiological analysis to understand and use physiological data by enabling these non-experts to organize the data and organize and improve presentation or visualization of the data according to their specific needs. In this manner, the examples disclosed herein provide an automated process that enables non-experts to understand complex data, and to organize the complex data in such a way as to present conclusions as appropriate to the media instance.
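As a purely illustrative sketch of such web-based delivery (not the disclosed implementation), the snippet below serves aggregated reaction data, keyed to the media timeline, as JSON that a browser-based interface could request. Flask, the route path, and the in-memory store are assumptions introduced for the example.

```python
# Hypothetical sketch of a web endpoint that delivers aggregated, synchronized
# reaction data to a remote, browser-based interface.
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed in-memory store: media id -> per-second aggregated response traces.
AGGREGATED_REACTIONS = {
    "ad-123": {
        "liking":   [0.2, 0.4, 0.5, 0.3],   # one value per second of media
        "thinking": [0.1, 0.1, 0.6, 0.7],
    },
}

@app.route("/media/<media_id>/reactions")
def get_reactions(media_id):
    """Return the aggregated reactions synchronized to the media timeline."""
    data = AGGREGATED_REACTIONS.get(media_id)
    if data is None:
        return jsonify({"error": "unknown media instance"}), 404
    return jsonify({"media_id": media_id, "responses": data})

if __name__ == "__main__":
    app.run(port=8080)
```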
In the following description, numerous specific details are introduced to provide a thorough understanding of, and enabling description for, example systems and methods. One skilled in the relevant art, however, will recognize that these examples can be practiced without one or more of the specific details, or with other components, systems, etc. In other instances, well-known structures or operations are not shown, or are not described in detail, to avoid obscuring aspects of the disclosed examples.
Having multiple reactions from the viewers (e.g., physiological responses, survey results, verbatim feedback, events tagged with metadata, etc.) available in one place and at a user's fingertips, along with the automated methods for aggregating the data provided herein, allows the user to view the reactions to hundreds of media instances in one sitting by navigating through them. For each of the media instances, the integration of multiple reactions provides the user with more information than the sum of each of the reactions to the media instance. For a non-limiting example, if one survey says that an ad is bad, that is just information; but if independent surveys, verbatim feedbacks and physiological data across multiple viewers say the same, the reactions to the media instance become more trustworthy. By combining these reactions before a user sees them, the correct result is presented to the user.
Referring to
Referring to
In some examples, alternative forms of access to the one or more reactions from the viewers other than over the network may be adopted. For non-limiting examples, the reactions can be made available to the user on a local server on a computer or on a recordable medium, such as a DVD disc, containing all of the information about the media.
In some examples, with reference to
In some examples, user database 113 stores information of users who are allowed to access the media instances and the reactions from the viewers, and the specific media instances and the reactions each user is allowed to access. The access module 106 may add or remove a user for access, and limit or expand the list of media instances and/or reactions the user can access and/or the analysis features the user can use by checking the user's login name and password. Such authorization/limitation on a user's access can be determined based upon who the user is, e.g., different amounts of information for different types of users. For a non-limiting example, Company ABC can have access to certain ads and survey results of viewers' reactions to the ads, to which Company XYZ has no access or only limited access.
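A minimal sketch of the kind of per-user authorization check the access module might perform is shown below; the data layout, the plaintext credentials, and the check_access function are assumptions made for illustration (a real system would, at minimum, store hashed passwords).

```python
# Illustrative per-user access check; not the disclosed implementation.
USERS = {
    # login -> (password, set of media ids the user may access)
    "company_abc": ("secret-abc", {"ad-123", "ad-456"}),
    "company_xyz": ("secret-xyz", {"ad-789"}),
}

def check_access(login: str, password: str, media_id: str) -> bool:
    """Allow access only if the credentials match and the media instance is on the user's list."""
    record = USERS.get(login)
    if record is None:
        return False
    stored_password, allowed_media = record
    return password == stored_password and media_id in allowed_media

assert check_access("company_abc", "secret-abc", "ad-123")
assert not check_access("company_xyz", "secret-xyz", "ad-123")  # no or limited access
```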
In some examples, one or more physiological responses aggregated from the viewers can be presented in the response panel 111 as lines or traces 301 in a two-dimensional graph or plot as shown in
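The disclosure does not tie the aggregation to a specific formula; as one assumption-laden illustration, the sketch below averages each second of a response channel across viewers to produce the single trace that could be drawn in the response panel. The trace values and function name are invented for the example.

```python
# Illustrative aggregation of per-viewer, per-second responses into one trace.
from statistics import mean
from typing import List

def aggregate_traces(viewer_traces: List[List[float]]) -> List[float]:
    """Average per-second values across viewers (traces assumed equal length)."""
    return [mean(values_at_t) for values_at_t in zip(*viewer_traces)]

liking_by_viewer = [
    [0.1, 0.4, 0.6, 0.2],   # viewer 1, one value per second
    [0.3, 0.5, 0.7, 0.1],   # viewer 2
    [0.2, 0.3, 0.8, 0.3],   # viewer 3
]
print(aggregate_traces(liking_by_viewer))  # approximately [0.2, 0.4, 0.7, 0.2]
```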
In some examples, change (trend) in amplitude of the aggregated responses is also a good measure of the quality of the media instance. If the media instance is able to change viewers' emotions up and down in a strong manner (for a non-limiting example, the mathematical deviation of the response is large), such strong change in amplitude corresponds to a good media instance that puts the viewers into different emotional states. In contrast, a poor performing media instance does not put the viewers into different emotional states. The amplitudes and the trend of the amplitudes of the responses are good measures of the quality of the media instance. Such information can be used by media designers to identify if the media instance is eliciting the desired response and which key events/scenes/sections of the media instance need to be changed in order to match the desired response. A good media instance should contain multiple moments/scenes/events that are intense and produce positive amplitude of response across viewers. A media instance that fails to create such responses may not achieve what the creators of the media instance have intended.
In some examples, in addition to providing a second-by-second view for the user to see how specific events in the media instance affect the viewers' emotions, the aggregated responses collected and calculated can also be used for the compilation of aggregate statistics, which are useful in ranking the overall affect of the media instance. Such statistics include but are not limited to Average Liking and Heart Rate Deviation.
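The exact formulas behind statistics such as Average Liking and Heart Rate Deviation are not specified here; as a hedged stand-in, the sketch below uses a simple mean and a population standard deviation over the aggregated traces.

```python
# Illustrative aggregate statistics over aggregated traces; the formulas are assumptions.
from statistics import mean, pstdev

def average_liking(liking_trace):
    """Mean of the aggregated liking trace over the whole media instance."""
    return mean(liking_trace)

def heart_rate_deviation(heart_rate_trace):
    """Spread of heart rate over the media instance (population standard deviation)."""
    return pstdev(heart_rate_trace)

liking = [0.2, 0.4, 0.7, 0.2]
heart_rate = [68.0, 72.0, 81.0, 74.0]
print(round(average_liking(liking), 2), round(heart_rate_deviation(heart_rate), 2))
```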
In some examples, the viewers of the media instance are free to write comments (e.g., what they like, what they dislike, etc.) on the media instance, and the verbatim (free flowing text) comments or feedbacks 401 from the viewers can be recorded and presented in a response panel 111 as shown in
In some examples, the viewers' comments about the media instance can be characterized as positive or negative in a plurality of categories/topics/aspects related to the product, wherein such categories include but are not limited to, product, event, logo, song, spokesperson, jokes, narrative, key events, storyline. These categories need not be predetermined, but may instead be extracted from analysis of the viewers' comments.
In some examples, answers to one or more survey questions 501 aggregated from the viewers can be rendered graphically, for example, by being presented in the response panel 111 in a graphical format 502 as shown in
In some examples, the survey questions can be posed or presented to the viewers while they are watching the specific media instance, and their answers to the questions are collected, recorded, and summed up by pre-defined categories via a surveying module 114. Once the survey results are made available to the user (creator of the media instance), the user may pick any of the questions and be automatically presented with a visual representation of the survey results corresponding to that question. The user may then view and analyze how viewers respond to specific questions to obtain a more complete picture of the viewers' reactions.
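A minimal sketch of how a surveying module could sum answers per question across viewers follows; the question texts, answer labels, and data layout are assumptions made for the example.

```python
# Illustrative tally of survey answers, per question, across all viewers.
from collections import Counter
from typing import Dict, List

def tally_answers(all_answers: List[Dict[str, str]]) -> Dict[str, Counter]:
    """Count each answer, per question, across all viewers."""
    tallies: Dict[str, Counter] = {}
    for viewer_answers in all_answers:
        for question, answer in viewer_answers.items():
            tallies.setdefault(question, Counter())[answer] += 1
    return tallies

survey = [
    {"Did you like the ad?": "yes", "Would you buy the product?": "maybe"},
    {"Did you like the ad?": "yes", "Would you buy the product?": "no"},
    {"Did you like the ad?": "no",  "Would you buy the product?": "maybe"},
]
print(tally_answers(survey)["Did you like the ad?"])  # Counter({'yes': 2, 'no': 1})
```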
In some examples, many different facets of the one or more reactions from the viewers described above can be blended into a few simple metrics that the user can use to see how the user is currently positioned against the rest of its industry. For the user, knowing where it ranks in its industry in comparison to its competition is often the first step in getting to where it wants to be. For a non-limiting example, in addition to the individual survey results of a specific media instance, the surveying module may also provide the user with a comparison of survey results and statistics across multiple media instances. This automation allows the user not only to see the feedback that the viewers provided with respect to the specific media instance, but also to evaluate how the specific media instance compares to other media instances designed by the same user or its competitors.
Some examples disclosed herein provide a user not only with tools for accessing and obtaining a maximum amount of information out of reactions from a plurality of viewers to a specific media instance, but also with actionable insights on what changes the user can make to improve the media instance based on in-depth analysis of the viewers' reactions. Such analysis requires expert knowledge on the viewers' physiological behavior and large amounts of analysis time, which the user may not possess. Here, the reactions include but are not limited to, physiological responses, survey results, and verbatim feedbacks from the viewers, to name a few. The reactions from the viewers are aggregated and stored in a database and presented to the user via a graphical interface, as described above. In some examples, predefined methods for extracting information from the reactions and presenting that information are provided so that the user is not required to be an expert in physiological data analysis to reach and understand conclusions supported by the information. Making in-depth analysis of reactions to media instances and actionable insights available to a user enables a user who is not an expert in analyzing physiological data to obtain critical information that can have significant commercial and socially positive impacts.
Referring to
The media instance and its pertinent data can be stored in a media database 804, and the one or more reactions from the viewers can be stored in a reaction database 805, respectively. An analysis module 806 performs in-depth analysis on the viewers' reactions and provides actionable insights on the viewers' reactions to a user 807 so that the user can draw its own conclusion on how the media instance can/should be improved. A presentation module 808 is operable to retrieve and present the media instance 801 together with the one or more reactions 802 from the viewers of the media instance via an interactive browser 809. Here, the interactive browser includes at least two panels—a media panel 810, operable to present, play, and pause the media instance, and a reaction panel 811, operable to display the one or more reactions corresponding to the media instance as well as the key insights provided by the analysis module 806.
Referring to
In some examples, the analysis module is operable to provide insights or present data based on in-depth analysis of the viewers' reactions to the media instance with respect to at least one question. An example question is whether the media instance performs most effectively across all demographic groups or especially well for a specific demographic group, e.g., older women. Another example question is whether certain elements of the media instance, such as loud noises, were very effective at engaging viewers in a positive, challenging way. Yet another example question is whether thought-provoking elements in the media instance were much more engaging to viewers than product shots. Also, an example question includes whether certain characters, such as lead female characters, appearing in the media instance were effective for male viewers and/or across target audiences in the female demographic. Still another example question includes whether physiological responses to the media instance from the viewers were consistent with viewers identifying or associating positively with the characters in the media instance. A further question is whether the media instance was universal, i.e., performed well at connecting across gender, age, and income boundaries, or was highly polarizing.
The analysis module therefore automates the analysis through use of one or more questions, as described above. The questions provide a context for analyzing and presenting the data or information received from viewers in response to the media instance. The analysis module is configured, using the received data, to answer some number of questions, where answers to the questions provide or correspond to the collected data. When a user desires results from the data for a particular media instance, the user selects a question to which he/she desires an answer for the media instance. In response to the question selection, the results of the analysis are presented in the form of an answer to the question, where the answer is derived or generated using the data collected and corresponding to the media instance. The results of the analysis can be presented using textual and/or graphical outputs or presentations. The results of the analysis can also be generated and presented using previous knowledge of how to represent the data to answer the question, the previous knowledge coming from similar data analyzed in the past. Furthermore, presentation of data of the media instance can be modified by the user through use or generation of other questions.
The analysis module performs the operations described above in conjunction with the presentation module, where the presentation module includes numerous different renderings for data. In operation, a rendering is specified or selected for a portion of data of a media instance, and the rendering is then tagged with one or more questions that apply to the data. This architecture allows users to modify how data is represented using a set of tools. The system remembers or stores information of how data was represented and the question or question type that was being answered. This information of prior system configurations allows the system, at a subsequent time, to self-configure to answer the same or similar questions for the same media instance or for different media instances. Users thus continually improve the ability of the system to answer questions and improve the quality of data provided in the answers.
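One plausible realization of this question-to-rendering tagging is sketched below: each rendering is tagged with the question it answered, and the stored tags let the system reuse the same presentation when the same question is asked again. The class name, method names, and default rendering are assumptions, not the disclosed design.

```python
# Illustrative registry that tags renderings with questions and reuses them later.
from typing import Dict, List

class RenderingRegistry:
    def __init__(self) -> None:
        self._by_question: Dict[str, List[str]] = {}

    def tag(self, question: str, rendering: str) -> None:
        """Remember that this rendering was used to answer this question."""
        self._by_question.setdefault(question, []).append(rendering)

    def renderings_for(self, question: str) -> List[str]:
        """Self-configure: reuse previously tagged renderings for a repeated question."""
        return self._by_question.get(question, ["line_trace"])  # hypothetical default

registry = RenderingRegistry()
registry.tag("Does the media instance end on a happy note?", "valence_trend_chart")
print(registry.renderings_for("Does the media instance end on a happy note?"))
```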
In some examples, the presentation module is operable to enable the user to pick a certain section 1001 of the reactions to the media instance 1002, such as the physiological responses 1003 from the viewers shown in the reaction panel 1011 via, for a non-limiting example, “shading”, as shown in
In some examples, the analysis module is operable to analyze the shaded section of the media instance and/or responses by being preprogrammed either by an analyst or by the users themselves. Usually, a user is most often interested in a certain number of attributes of the viewers' responses. The analysis module provides the user with insights, conclusions, and findings that the user can review from the bottom up. Although the analysis result provides insight into and in-depth analysis of the data, as well as various possible interpretations of the shaded section of the media instance, which often leaves a conclusion evident, such analysis is no substitute for a conclusion reached by the user. Instead, the user is left to draw his/her own conclusion about the section based on the analysis provided.
In some examples, a user may pick a section and choose one of the questions/tasks/requests 1004 that he/she is interested in from a prepared list. The prepared list may include, but is not limited to, any number of questions. Some example questions follow, along with the response each evokes in the analysis module.
An example question is “Where were there intense responses to the media instance?” In response the analysis module may calculate the intensity of the responses automatically by looking for high coherence areas of responses.
Another example question is “Does the media instance end on a happy note?” or “Does the audience think the event (e.g., joke) is funny?” In response the analysis module may check if the physiological data shows that viewer acceptance or approval is higher at the end than at the beginning of the media instance.
Yet another example question is “Where do people engage in the spot?” In response to this question the analysis module may check if there is a coherent change in viewers' emotions.
Still another example question is “What is the response to the brand moment?” In response the analysis module may check if thought goes up, but acceptance or approval goes down during the shaded section of the media.
An additional example question is “Which audience does the product introduction work on best?” In response the analysis module analyzes the responses from various segments of the viewers, which include but are not limited to, males, females, gamers, republicans, engagement relative to an industry, etc.
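For the first example question above (locating intense responses), one plausible realization of a coherence-based check is sketched below: a second of the media is flagged when the across-viewer response is both strong (high mean magnitude) and consistent (low spread). The thresholds, data layout, and this particular coherence measure are assumptions, not the disclosed method.

```python
# Illustrative check for intense, coherent moments; thresholds and measure are assumed.
from statistics import mean, pstdev
from typing import List

def intense_coherent_seconds(
    traces: List[List[float]],          # one per-second response trace per viewer
    min_mean: float = 0.5,
    max_spread: float = 0.2,
) -> List[int]:
    """Flag seconds where responses are strong on average and consistent across viewers."""
    flagged = []
    for second, values in enumerate(zip(*traces)):
        if abs(mean(values)) >= min_mean and pstdev(values) <= max_spread:
            flagged.append(second)
    return flagged

viewers = [
    [0.1, 0.7, 0.8, 0.2],
    [0.2, 0.6, 0.9, 0.1],
    [0.0, 0.7, 0.7, 0.3],
]
print(intense_coherent_seconds(viewers))  # [1, 2] for this toy data
```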
In some examples, the presentation module (
In some examples, verbal explanation 1007 of the analysis results in response to the questions raised can be provided to the user together with graphical markings shown in
In some examples, with reference to
In some examples, optional user database 814 stores information of users who are allowed to access the media instances and the verbatim reactions from the viewers, and the specific media instances and the reactions each user is allowed to access. The access module 810 may add or remove a user for access, and limit or expand the list of media instances and/or reactions the user can access and/or the analysis features the user can use by checking the user's login name and password. Such authorization/limitation on a user's access can be determined based upon who the user is, e.g., different amounts of information for different types of users. For a non-limiting example, Company ABC can have access to certain ads and feedbacks from viewers' reactions to the ads, to which Company XYZ cannot have access or can have only limited access.
In some examples, a specific media instance is synchronized with physiological responses to the media instance from a plurality of viewers continuously over the entire time duration of the media instance. Once the media instance and the physiological responses are synchronized, an interactive browser enables a user to navigate through the media instance (or the physiological responses) in one panel while presenting the corresponding physiological responses (or the section of the media instance) at the same point in time in another panel.
The interactive browser allows the user to select a section/scene from the media instance, and to correlate, present, and compare the viewers' physiological responses to the particular section. Alternatively, the user may monitor the viewers' physiological responses continuously as the media instance is being displayed. Being able to see the continuous (instead of a static snapshot of) changes in physiological responses and the media instance side by side, and to compare aggregated physiological responses from the viewers to a specific event of the media instance in an interactive way, enables the user to obtain a better understanding of the true reaction from the viewers to whatever stimuli are being presented to them.
Referring to
In some examples, the synchronization module 1103 synchronizes and correlates a media instance 1101 with one or more physiological responses 1102 aggregated from a plurality of viewers of the media instance by synchronizing each event of the media. The physiological response data of a person includes but is not limited to heart rate, brain waves, electroencephalogram (EEG) signals, blink rate, breathing, motion, muscle movement, galvanic skin response, skin temperature, and any other physiological response of the person. The physiological response data corresponding to each event or point in time is then retrieved from the media database 1104. The data is offset to account for cognitive delays in the human brain corresponding to the signal collected (e.g., the cognitive delay of the brain associated with human vision is different than the cognitive delay associated with auditory information) and processing delays of the system, and then synchronized with the media instance 1101. Optionally, an additional offset may be applied to the physiological response data 1102 of each individual to account for time zone differences between the viewer and reaction database 1105.
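The sketch below illustrates the offset-and-synchronize step in its simplest form: each response sample is shifted back by an assumed cognitive delay plus a processing delay so that it lines up with the media event that caused it. The delay values, signal names, and data layout are illustrative assumptions and are not taken from the disclosure.

```python
# Illustrative offset of response samples onto the media timeline; delays are assumed.
from typing import List, Tuple

ASSUMED_COGNITIVE_DELAY_S = {       # hypothetical per-signal cognitive delays
    "visual_engagement": 0.25,
    "auditory_engagement": 0.15,
}
PROCESSING_DELAY_S = 0.05           # hypothetical fixed system processing delay

def synchronize(samples: List[Tuple[float, float]], signal: str) -> List[Tuple[float, float]]:
    """Shift (timestamp, value) samples earlier so they align with the causing media event."""
    offset = ASSUMED_COGNITIVE_DELAY_S.get(signal, 0.0) + PROCESSING_DELAY_S
    return [(max(0.0, t - offset), v) for t, v in samples]

raw = [(1.30, 0.4), (2.30, 0.9)]
print(synchronize(raw, "visual_engagement"))  # samples shifted ~0.3 s earlier
```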
Referring to
In some examples, with reference to
In some examples, change (trend) in amplitude of the aggregated responses is a good measure of the quality of the media instance. If the media instance is able to change viewers' emotions up and down in a strong manner (for a non-limiting example, the mathematical deviation of the response is large), such strong change in amplitude corresponds to a good media instance that puts the viewers into different emotional states. In contrast, a poor performing media instance does not put the viewers into different emotional states. Such information can be used by media designers to identify if the media instance is eliciting the desired response and which key events/scenes/sections of the media instance need to be changed in order to match the desired response. A good media instance should contain multiple moments/scenes/events that are intense and produce positive amplitude of response across viewers. A media instance that fails to create such responses may not achieve what the creators of the media instance have intended.
In some examples, the media instance can be divided up into instances of key moments/events/scenes/segments/sections in the profile, wherein such key events can be identified and/or tagged according to the type of the media instance. In the case of video games, such key events include but are not limited to, elements of a video game such as levels, cut scenes, major fights, battles, conversations, etc. In the case of Web sites, such key events include but are not limited to, progression of Web pages, key parts of a Web page, advertisements shown, content, textual content, video, animations, etc. In the case of an interactive media/movie/ads, such key events can be but are not limited to, chapters, scenes, scene types, character actions, events (for non-limiting examples, car chases, explosions, kisses, deaths, jokes) and key characters in the movie.
In some examples, an event module 1111 can be used to quickly identify a number of moments/events/scenes/segments/sections in the media instance retrieved from the media database 1104 and then automatically calculate the length of each event. The event module may enable each user, or a trained administrator, to identify and tag the important events in the media instance so that, once the “location” (current event) in the media instance (relative to other pertinent events in the media instance) is selected by the user, the selected event may be better correlated with the aggregated responses from the viewers.
In some examples, the events in the media instance can be identified, automatically if possible, through one or more applications that parse user actions in an environment (e.g., virtual environment, real environment, online environment, etc.) either before the viewer's interaction with the media instance in the case of non-interactive media such as a movie, or afterwards by reviewing the viewer's interaction with the media instance through recorded video, a log of actions or other means. In video games, web sites and other electronic interactive media instances, the program that administers the media can create this log and thus automate the process.
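As an illustration of such automated event identification for interactive media, the sketch below parses an assumed action log into tagged events and computes the length of each event; the log format and the enter/exit markers are invented for the example and are not the disclosed format.

```python
# Illustrative parsing of an action log into tagged events with computed lengths.
from typing import Dict, List

def parse_events(log: List[Dict]) -> List[Dict]:
    """Turn ordered {'time', 'action'} records into events with start, end, and length."""
    events: List[Dict] = []
    for entry in log:
        if entry["action"].startswith("enter:"):
            events.append({"name": entry["action"][len("enter:"):], "start": entry["time"]})
        elif entry["action"].startswith("exit:") and events:
            events[-1]["end"] = entry["time"]
            events[-1]["length"] = entry["time"] - events[-1]["start"]
    return events

game_log = [
    {"time": 0.0,   "action": "enter:level_1"},
    {"time": 95.0,  "action": "exit:level_1"},
    {"time": 95.0,  "action": "enter:cut_scene"},
    {"time": 120.0, "action": "exit:cut_scene"},
]
print(parse_events(game_log))  # level_1 lasts 95 s, cut_scene lasts 25 s
```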
An example enables graphical presentation and analysis of verbatim comments and feedbacks from a plurality of viewers to a specific media instance. These verbatim comments are first collected from the viewers and stored in a database before being analyzed and categorized into various categories. Once categorized, the comments can then be presented to a user in various graphical formats, allowing the user to obtain an intuitive visual impression of the positive/negative reactions to and/or the most impressive characteristics of the specific media instance, as perceived by the viewers. Instead of parsing through and dissecting the comments and feedbacks word by word, the user is now able to visually evaluate how well the media instance is being received by the viewers at a glance.
Referring to
Referring to
In some examples, the viewers of the media instance are free to write what they like and don't like about the media instance, and the verbatim (free flowing text) comments or feedbacks 501 from the viewers can be recorded and presented in the comments panel 111 verbatim as shown in
In some examples, the analysis module is operable to characterize the viewers' comments about the media instance as positive or negative in a plurality of categories/topics/aspects related to the product, wherein such categories include but are not limited to, product, event, logo, song, spokesperson, jokes, narrative, key events, storyline. These categories need not be predetermined, but may instead be extracted from analysis of the viewers' comments.
In some examples, the presentation module is operable to present a summation of the viewers' positive and negative comments on various aspects/topics/events of the media instance to the user (creator of the media instance) in a bubble graph, as shown in
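A minimal, keyword-based sketch of the categorization that could feed such a bubble graph follows; the category keywords, the negative-word list, and the naive sentiment rule are assumptions, and a production system would use real text analysis rather than this toy.

```python
# Illustrative positive/negative tally of comments per category for a bubble graph.
from collections import defaultdict

CATEGORY_KEYWORDS = {"logo": ["logo"], "song": ["song", "music"], "spokesperson": ["spokesperson", "actor"]}
NEGATIVE_WORDS = {"hate", "boring", "annoying", "bad"}

def tally_comments(comments):
    counts = defaultdict(lambda: {"positive": 0, "negative": 0})
    for comment in comments:
        words = set(comment.lower().split())
        sentiment = "negative" if words & NEGATIVE_WORDS else "positive"
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(keyword in words for keyword in keywords):
                counts[category][sentiment] += 1
    return dict(counts)

print(tally_comments(["I loved the song", "The logo was annoying", "Great music"]))
# e.g., {'song': {'positive': 2, 'negative': 0}, 'logo': {'positive': 0, 'negative': 1}}
```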
In some examples, the verbatim comments from the viewers can be analyzed, and key words and concepts (adjectives) can be extracted and presented in a word cloud, as shown in
In some examples, the viewers may simply be asked to answer a specific question, for example, “What are three adjectives that best describe your response to this media?” The adjectives in the viewers' responses to the question can then be collected, categorized, and summed up, and presented in a word cloud. Alternatively, the adjectives the viewers used to describe their responses to the media instance may be extracted from collected survey data.
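A hedged sketch of turning the collected adjectives into word-cloud counts is shown below; the tokenization and the sample answers are assumptions made for illustration.

```python
# Illustrative count of adjectives across viewers' free-text answers for a word cloud.
from collections import Counter
import re

def adjective_counts(responses):
    """Count word mentions across viewers' free-text answers."""
    counts = Counter()
    for text in responses:
        for word in re.findall(r"[a-z']+", text.lower()):
            counts[word] += 1
    return counts

answers = ["funny, clever, loud", "loud, boring, long", "clever, funny, fresh"]
print(adjective_counts(answers).most_common(3))  # [('funny', 2), ('clever', 2), ('loud', 2)]
```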
In some examples, with reference to
In some examples, optional user database 1314 stores information of users who are allowed to access the media instances and the verbatim reactions from the viewers, and the specific media instances and the reactions each user is allowed to access. The access module 1310 may add or remove a user for access, and limit or expand the list of media instances and/or reactions the user can access and/or the analysis features the user can use by checking the user's login name and password. Such authorization/limitation on a user's access can be determined based upon who the user is, e.g., different amounts of information for different types of users. For a non-limiting example, Company ABC can have access to certain ads and feedback from viewers' reactions to the ads, while Company XYZ cannot have access or can only have limited access to the same ads and/or feedback.
Some of the examples described herein include a method comprising: receiving a media instance, the media instance including a plurality of media events; receiving reaction data from a plurality of viewers while the plurality of viewers are viewing the media instance; generating aggregated reaction data by aggregating the reaction data from the plurality of viewers; generating synchronized data by synchronizing the plurality of media events of the media instance with corresponding aggregated reaction data; and providing controlled access to the synchronized data from a remote device.
The method of a disclosed example comprises providing, via the controlled access, remote interactive manipulation of the reaction data synchronized to corresponding events of the media instance.
The manipulation of a disclosed example includes at least one of dividing, dissecting, aggregating, parsing, organizing, and analyzing the reaction data.
The method of a disclosed example comprises providing controlled access to at least one of the reaction data and aggregated reaction data.
The method of a disclosed example comprises enabling via the controlled access interactive analysis of at least one of the media instance and the synchronized data.
The method of a disclosed example comprises enabling via the controlled access interactive analysis of at least one of the reaction data, the aggregated reaction data, and parsed reaction data.
The reaction data of a disclosed example includes at least one of physiological responses, survey results, feedback generated by the viewers, metadata, and derived statistics.
The reaction data of a disclosed example includes physiological responses.
The reaction data of a disclosed example includes survey results.
The reaction data of a disclosed example includes feedback generated by the viewers.
The reaction data of a disclosed example includes metadata, wherein the metadata is event-based metadata.
The reaction data of a disclosed example includes derived statistics, wherein the derived statistics are derived statistics for indicators of success and failure of the media instance.
Receiving the reaction data of a disclosed example comprises receiving the reaction data from a plurality of sensor devices via wireless couplings, wherein each viewer wears a sensor device of the plurality of sensor devices.
The method of a disclosed example comprises presenting a user interface (UI), wherein the controlled access is made via the UI.
The method of a disclosed example comprises presenting the synchronized data using a rendering of a plurality of renderings.
The plurality of renderings of a disclosed example includes text, charts, graphs, histograms, images, and video.
The aggregating of a disclosed example comprises aggregating the reaction data according to at least one of maximums, minimums, averages, deviations, derivatives, amplitudes, and trends of at least one parameter of the reaction data.
The method of a disclosed example comprises selecting, via the controlled access, a portion of the media instance for which at least one of the synchronized data, the reaction data, the aggregated reaction data, and parsed reaction data is viewed. The portion of a disclosed example includes a point in time. The portion of a disclosed example includes a period of time.
The method of a disclosed example comprises automatically analyzing the reaction data.
The method of a disclosed example comprises providing remote access to results of the analyzing, and presenting the results, the presenting including presenting actionable insights corresponding to a portion of the media instance via at least one of a plurality of renderings, wherein the actionable insights correspond to emotional reactions of the plurality of viewers.
The analyzing of a disclosed example includes applying expert knowledge of physiological behavior to the reaction data.
The method of a disclosed example comprises generating a first set of questions that represent the results.
The analyzing of a disclosed example includes analyzing the reaction data in the context of the first set of questions.
The method of a disclosed example comprises selecting at least one rendering of the plurality of renderings.
The method of a disclosed example comprises tagging the selected rendering with at least one question of the first set of questions.
A user of a disclosed example can modify the presenting of the results via the selecting of at least one rendering of the plurality of renderings.
The presenting of a disclosed example includes presenting the results via presentation of the first set of questions.
The method of a disclosed example comprises, in response to the user selecting a question of the first set of questions, presenting an answer to the selected question that includes the actionable insight.
The method of a disclosed example comprises receiving comments from the plurality of viewers in response to the viewing. The comments of a disclosed example are textual comments. The synchronized data of a disclosed example includes the comments.
The method of a disclosed example comprises presenting survey questions to the plurality of viewers, the survey questions relating to the media instance. The method of a disclosed example comprises receiving answers to the survey questions from the plurality of viewers. The answers to the survey questions of a disclosed example are textual comments. The synchronized data of a disclosed example includes the answers to the survey questions.
The plurality of viewers of a disclosed example is at a location.
The plurality of viewers of a disclosed example is at a plurality of locations.
A first set of the plurality of viewers of a disclosed example is at a first location and a second set of the plurality of viewers is at a second location different from the first location.
A first set of the plurality of viewers of a disclosed example is viewing the media instance at a first time and a second set of the plurality of viewers is viewing the media instance at a second time different from the first time.
The reaction data of a disclosed example corresponds to electrical activity in brain tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in muscle tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in heart tissue of the user.
Examples described herein include a method comprising: receiving a media instance; receiving reaction data from a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; aggregating the reaction data from the plurality of viewers; and providing remote access to at least one of the reaction data and aggregated reaction data, wherein the remote access enables interactive analysis of at least one of the media instance, the reaction data, aggregated reaction data, and parsed reaction data.
Examples described herein include a method comprising: receiving a media instance; receiving reaction data from a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; aggregating the reaction data from the plurality of viewers; and enabling remote interactive analysis of the media instance and at least one of the reaction data, aggregated reaction data, and parsed reaction data.
Examples described herein include a method comprising: receiving a media instance; receiving reaction data from a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; and enabling remote interactive manipulation of the reaction data synchronized to corresponding events of the media instance, the manipulation including at least one of dividing, dissecting, aggregating, parsing, and analyzing the reaction data.
Examples described herein include a system comprising: a processor coupled to a database, the database including a media instance and reaction data, the media instance comprising a plurality of media events, the reaction data received from a plurality of viewers viewing the media instance; a first module coupled to the processor, the first module generating aggregated reaction data by aggregating the reaction data from the plurality of viewers, the first module generating synchronized data by synchronizing the plurality of media events of the media instance with corresponding aggregated reaction data; and a second module coupled to the processor, the second module comprising a plurality of renderings and a user interface (UI) that provide controlled access to the synchronized data from a remote device.
The controlled access of a disclosed example is through the UI and includes remote interactive manipulation of the reaction data synchronized to corresponding events of the media instance.
The manipulation of a disclosed example includes at least one of dividing, dissecting, aggregating, parsing, organizing, and analyzing the reaction data.
The controlled access of a disclosed example includes access to at least one of the reaction data and aggregated reaction data.
The controlled access of a disclosed example includes interactive analysis of at least one of the media instance and the synchronized data.
The controlled access of a disclosed example includes interactive analysis of at least one of the reaction data, the aggregated reaction data, and parsed reaction data.
The plurality of renderings of a disclosed example includes text, charts, graphs, histograms, images, and video.
The UI of a disclosed example presents the synchronized data using at least one rendering of the plurality of renderings.
The UI of a disclosed example allows selection of a portion of the media instance for which at least one of the synchronized data, the reaction data, the aggregated reaction data, and parsed reaction data is viewed. The portion of a disclosed example includes a point in time. The portion of a disclosed example includes a period of time.
The first module of a disclosed example analyzes the reaction data.
The UI of a disclosed example provides remote access to results of the analysis.
The UI of a disclosed example presents the results using at least one rendering of the plurality of renderings, the results including actionable insights corresponding to a portion of the media instance.
The actionable insights of a disclosed example correspond to emotional reactions of the plurality of viewers.
The analyzing of a disclosed example comprises applying expert knowledge of physiological behavior to the reaction data.
The system of a disclosed example comprises generating a first set of questions that represent the results.
The analyzing of a disclosed example includes analyzing the reaction data in the context of the first set of questions.
The system of a disclosed example comprises selecting at least one rendering of the plurality of renderings.
The system of a disclosed example comprises tagging the selected rendering with at least one question of the first set of questions.
A user of a disclosed example can modify presentation of the results via the UI by selecting at least one rendering of the plurality of renderings.
The presenting of a disclosed example includes presenting the results via presentation of the first set of questions on the UI.
The system of a disclosed example comprises, in response to the user selecting a question of the first set of questions, presenting via the UI an answer to the selected question that includes the actionable insight.
The reaction data of a disclosed example includes at least one of physiological responses, survey results, feedback generated by the viewers, metadata, and derived statistics.
The reaction data of a disclosed example includes physiological responses.
The reaction data of a disclosed example includes survey results.
The reaction data of a disclosed example includes feedback generated by the viewers.
The reaction data of a disclosed example includes metadata. The metadata of a disclosed example is event-based metadata.
The reaction data of a disclosed example includes derived statistics. The derived statistics of a disclosed example are derived statistics for indicators of success and failure of the media instance.
The system of a disclosed example comprises a plurality of sensor devices, wherein each viewer wears a sensor device of the plurality of sensor devices, wherein each sensor device receives the reaction data from a corresponding viewer and transmits the reaction data to at least one of the first module and the database.
The aggregating of a disclosed example comprises aggregating the reaction data according to at least one of maximums, minimums, averages, deviations, derivatives, amplitudes, and trends of at least one parameter of the reaction data.
The system of a disclosed example comprises a third module coupled to the second module, the third module receiving comments from the plurality of viewers in response to the viewing. The comments of a disclosed example are textual comments. The synchronized data of a disclosed example includes the comments.
The system of a disclosed example comprises a third module coupled to the second module, the third module presenting survey questions to the plurality of viewers via the UI, the survey questions relating to the media instance.
The third module of a disclosed example receives answers to the survey questions from the plurality of viewers via the UI. The answers to the survey questions of a disclosed example are textual comments. The synchronized data of a disclosed example includes the answers to the survey questions.
The plurality of viewers of a disclosed example is at a location.
The plurality of viewers of a disclosed example is at a plurality of locations.
A first set of the plurality of viewers of a disclosed example is at a first location and a second set of the plurality of viewers is at a second location different from the first location.
A first set of the plurality of viewers of a disclosed example is viewing the media instance at a first time and a second set of the plurality of viewers is viewing the media instance at a second time different from the first time.
The reaction data of a disclosed example corresponds to electrical activity in brain tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in muscle tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in heart tissue of the user.
Examples described herein include a system comprising: a processor coupled to a database, the database including a media instance and reaction data of a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; a first module that aggregates the reaction data from the plurality of viewers; and a second module that provides remote access to at least one of the reaction data and aggregated reaction data, wherein the remote access enables interactive analysis of at least one of the media instance, the reaction data, aggregated reaction data, and parsed reaction data.
Examples described herein include a system comprising: a processor coupled to a database, the database receiving a media instance and reaction data from a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; a first module aggregating the reaction data from the plurality of viewers; and a second module enabling remote interactive analysis and presentation of the media instance and at least one of the reaction data, aggregated reaction data, and parsed reaction data.
Examples described herein include a system comprising: a processor coupled to a database, the database receiving a media instance and reaction data from a plurality of viewers, the reaction data generated in response to viewing of the media instance and including physiological response data; and an interface coupled to the processor, the interface enabling remote interactive manipulation of the reaction data synchronized to corresponding events of the media instance, the manipulation including at least one of dividing, dissecting, aggregating, parsing, and analyzing the reaction data.
Examples described herein include a method comprising: receiving a media instance, the media instance including a plurality of media events; receiving reaction data from a plurality of viewers while the plurality of viewers are viewing the media instance; automatically analyzing the reaction data; and providing remote access to results of the analyzing, and presenting the results, the presenting including presenting actionable insights corresponding to a portion of the media instance via at least one of a plurality of renderings, wherein the actionable insights correspond to emotional reactions of the plurality of viewers.
The analyzing of a disclosed example includes applying expert knowledge of physiological behavior to the reaction data.
The method of a disclosed example comprises generating a first set of questions that represent the results.
The analyzing of a disclosed example includes analyzing the reaction data in the context of the first set of questions.
The method of a disclosed example comprises selecting at least one rendering of the plurality of renderings.
The method of a disclosed example comprises tagging the selected rendering with at least one question of the first set of questions.
A user of a disclosed example can modify the presenting of the results via the selecting of at least one rendering of the plurality of renderings.
The presenting of a disclosed example includes presenting the results via presentation of the first set of questions.
The method of a disclosed example comprises, in response to the user selecting a question of the first set of questions, presenting an answer to the selected question that includes the actionable insight.
The method of a disclosed example comprises selecting a second set of questions that represent the results, wherein the second set of questions were generated prior to the first set of questions to represent previous results from analysis of preceding reaction data of a preceding media instance, wherein the preceding reaction data is similar to the reaction data.
The analyzing of a disclosed example includes analyzing the reaction data in the context of the second set of questions.
The method of a disclosed example comprises selecting at least one rendering of the plurality of renderings.
The method of a disclosed example comprises tagging the selected rendering with at least one question of the second set of questions.
A user of a disclosed example can modify the presenting of the results via the selecting of at least one rendering of the plurality of renderings.
The presenting of a disclosed example includes presenting the results via presentation of the second set of questions.
The method of a disclosed example comprises, in response to the user selecting a question of the second set of questions, presenting an answer to the selected question that includes the actionable insight.
The method of a disclosed example comprises selecting a set of the reaction data to which the analyzing is applied, the selecting including selecting a portion of the media instance to which the set of the reaction data corresponds. The portion of a disclosed example includes a point in time. The portion of a disclosed example includes a period of time.
The method of a disclosed example comprises generating aggregated reaction data by aggregating the reaction data from the plurality of viewers.
The aggregating of a disclosed example comprises aggregating the reaction data according to at least one of maximums, minimums, averages, deviations, derivatives, amplitudes, and trends of at least one parameter of the reaction data.
The method of a disclosed example comprises generating synchronized data by synchronizing the plurality of media events of the media instance with the reaction data.
The method of a disclosed example comprises enabling remote interactive manipulation of the media instance.
The method of a disclosed example comprises enabling remote interactive manipulation of the reaction data.
The method of a disclosed example comprises enabling remote interactive manipulation of the plurality of renderings.
The method of a disclosed example comprises enabling remote interactive manipulation of the actionable insights.
The plurality of renderings of a disclosed example includes text, charts, graphs, histograms, images, and video.
The reaction data of a disclosed example includes at least one of physiological responses, survey results, feedback generated by the viewers, metadata, and derived statistics.
The reaction data of a disclosed example includes physiological responses.
The reaction data of a disclosed example includes survey results.
The reaction data of a disclosed example includes feedback generated by the viewers.
The reaction data of a disclosed example includes metadata, wherein the metadata is event-based metadata.
The reaction data of a disclosed example includes derived statistics, wherein the derived statistics are derived statistics for indicators of success and failure of the media instance.
Receiving the reaction data of a disclosed example comprises receiving the reaction data from a plurality of sensor devices via wireless couplings, wherein each viewer wears a sensor device of the plurality of sensor devices.
The reaction data of a disclosed example corresponds to electrical activity in brain tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in muscle tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in heart tissue of the user.
A first set of the plurality of viewers of a disclosed example is at a first location and a second set of the plurality of viewers is at a second location different from the first location.
A first set of the plurality of viewers of a disclosed example is viewing the media instance at a first time and a second set of the plurality of viewers is viewing the media instance at a second time different from the first time.
Examples described herein include a method comprising: receiving a media instance; receiving reaction data from a plurality of viewers while the plurality of viewers are viewing the media instance; automatically analyzing the reaction data; and presenting the results by presenting actionable insights corresponding to a portion of the media instance via at least one of a plurality of renderings, wherein the actionable insights correspond to emotional reactions of the plurality of viewers.
Examples described herein include a method comprising: receiving a media instance; receiving reaction data from a plurality of viewers viewing the media instance; analyzing the reaction data; and presenting results of the analyzing by presenting a set of questions corresponding to a portion of the media instance, the set of questions corresponding to at least one of a plurality of renderings, wherein answers to questions of the set of questions present actionable insights of the reaction data, the actionable insights corresponding to emotional reactions of the plurality of viewers.
Examples described herein include a system comprising: a processor coupled to a database, the database including a media instance and reaction data, the media instance including a plurality of media events, the reaction data received from a plurality of viewers while the plurality of viewers are viewing the media instance; a first module coupled to the processor, the first module analyzing the reaction data; and a second module coupled to the processor, the second module comprising a plurality of renderings and a user interface (UI) that provide remote access to results of the analyzing and the results, the results including actionable insights corresponding to a portion of the media instance, wherein the actionable insights correspond to emotional reactions of the plurality of viewers.
The analyzing of a disclosed example includes applying expert knowledge of physiological behavior to the reaction data.
The first module of a disclosed example generates a first set of questions that represent the results.
The analyzing of a disclosed example includes analyzing the reaction data in the context of the first set of questions.
At least one of the second module and the UI of a disclosed example enables selection of at least one rendering of the plurality of renderings.
At least one of the second module and the UI of a disclosed example enables tagging of a selected rendering with at least one question of the first set of questions.
A user of a disclosed example can modify presentation of the results via the UI by selecting at least one rendering of the plurality of renderings.
At least one of the second module and the UI of a disclosed example presents the results via presentation of the first set of questions.
In response to receipt of a selected question of the first set of questions, the second module of a disclosed example presents an answer to the selected question that includes the actionable insight.
The first module of a disclosed example selects a second set of questions that represent the results, wherein the second set of questions were generated prior to the first set of questions to represent previous results from analysis of preceding reaction data of a preceding media instance, wherein the preceding reaction data is similar to the reaction data.
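One way to read the statement above is that a question set generated for an earlier, similar media instance can be reused for the current one. The sketch below shows one possible similarity-based selection; the cosine-similarity measure, the 0.9 threshold, and every name in it are assumptions made for illustration.

```python
# Illustrative sketch of reusing a previously generated question set when the
# new reaction data resembles the reaction data that produced it.
import math
from typing import Dict, List, Optional


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def select_question_set(new_trace: List[float],
                        previous_traces: Dict[str, List[float]],
                        question_sets: Dict[str, List[str]],
                        threshold: float = 0.9) -> Optional[List[str]]:
    """Return the question set of the most similar preceding media instance,
    or None if nothing is similar enough and a new set must be generated."""
    best_id, best_score = None, threshold
    for media_id, trace in previous_traces.items():
        score = cosine_similarity(new_trace, trace)
        if score >= best_score:
            best_id, best_score = media_id, score
    return question_sets.get(best_id) if best_id is not None else None
```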
The analyzing of a disclosed example includes analyzing the reaction data in the context of the second set of questions.
The UI of a disclosed example enables selection of at least one rendering of the plurality of renderings.
The method of a disclosed example comprises tagging the selected rendering with at least one question of the second set of questions.
A user of a disclosed example can modify presentation of the results via the UI by the selecting of at least one rendering of the plurality of renderings.
At least one of the second module and the UI of a disclosed example presents the results via presentation of the second set of questions.
In response to the user selecting a question of the second set of questions, at least one of the second module and the UI of a disclosed example presents an answer to the selected question that includes the actionable insight.
The UI of a disclosed example enables selection of a set of the reaction data to which the analyzing is applied, the selection including selecting a portion of the media instance to which the set of the reaction data corresponds.
The portion of a disclosed example includes a point in time.
The portion of a disclosed example includes a period of time.
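A selection of the reaction data by a point in time or a period of time, as described above, might be a simple time filter along the following lines; the sample representation and the half-second window around a point are assumptions for illustration.

```python
# Illustrative sketch only; representation and window width are assumptions.
from typing import List, Optional, Tuple

Sample = Tuple[float, float]  # (timestamp in seconds, reaction value)


def select_reactions(samples: List[Sample],
                     start: float,
                     end: Optional[float] = None,
                     point_window: float = 0.5) -> List[Sample]:
    """Return the samples within the selected portion of the media instance:
    a period [start, end], or a narrow window around a single point in time
    when no end is given."""
    if end is None:
        start, end = start - point_window, start + point_window
    return [(t, v) for (t, v) in samples if start <= t <= end]
```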
The first module of a disclosed example generates aggregated reaction data by aggregating the reaction data from the plurality of viewers.
The aggregating of a disclosed example comprises aggregating the reaction data according to at least one of maximums, minimums, averages, deviations, derivatives, amplitudes, and trends of at least one parameter of the reaction data.
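The parameters listed above (maximums, minimums, averages, deviations, derivatives, amplitudes, and trends) could be computed over a single reaction trace roughly as follows; the function and key names are assumptions, and the trend indicator is a deliberately simple stand-in.

```python
# Illustrative aggregation over one reaction trace; assumes a non-empty list.
from statistics import mean, pstdev
from typing import Dict, List


def aggregate_trace(values: List[float]) -> Dict[str, float]:
    derivatives = [b - a for a, b in zip(values, values[1:])]
    return {
        "maximum": max(values),
        "minimum": min(values),
        "average": mean(values),
        "deviation": pstdev(values),
        "amplitude": max(values) - min(values),
        "mean_derivative": mean(derivatives) if derivatives else 0.0,
        "trend": values[-1] - values[0],  # crude first-to-last change
    }
```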
The method of a disclosed example comprises generating synchronized data by synchronizing the plurality of media events of the media instance with the reaction data.
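Synchronizing the media events with the reaction data could amount to grouping each sample under the event that was playing when the sample was recorded, roughly as sketched below; the tuple layouts are assumptions.

```python
# Illustrative sketch of pairing media events with the reaction samples
# recorded while each event was on screen.
from typing import Dict, List, Tuple

Event = Tuple[str, float, float]  # (label, start_time, end_time), in seconds
Sample = Tuple[float, float]      # (timestamp, reaction value)


def synchronize(events: List[Event], samples: List[Sample]) -> Dict[str, List[float]]:
    synced: Dict[str, List[float]] = {label: [] for label, _, _ in events}
    for timestamp, value in samples:
        for label, start, end in events:
            if start <= timestamp < end:
                synced[label].append(value)
                break
    return synced
```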
The method of a disclosed example comprises enabling remote interactive manipulation of the media instance via the UI.
The method of a disclosed example comprises enabling remote interactive manipulation of the reaction data via the UI.
The method of a disclosed example comprises enabling remote interactive manipulation of the plurality of renderings via the UI.
The method of a disclosed example comprises enabling remote interactive manipulation of the actionable insights via the UI.
The plurality of renderings of a disclosed example includes text, charts, graphs, histograms, images, and video.
The reaction data of a disclosed example includes at least one of physiological responses, survey results, feedback generated by the viewers, metadata, and derived statistics.
The reaction data of a disclosed example includes physiological responses.
The reaction data of a disclosed example includes survey results.
The reaction data of a disclosed example includes feedback generated by the viewers.
The reaction data of a disclosed example includes metadata, wherein the metadata is event-based metadata.
The reaction data of a disclosed example includes derived statistics, wherein the derived statistics are derived statistics for indicators of success and failure of the media instance.
The method of a disclosed example comprises a plurality of sensor devices, wherein each viewer wears a sensor device of the plurality of sensor devices, wherein each sensor device receives the reaction data from a corresponding viewer and transmits the reaction data to at least one of the first module and the database.
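A per-viewer sensor device of the kind described above might, in the simplest reading, sample a physiological signal and forward timestamped reaction data to the first module or database; the callback interface below is an assumption made for illustration.

```python
# Illustrative sketch only; the read/transmit callbacks are assumptions.
import time
from typing import Callable, Tuple

Sample = Tuple[str, float, float]  # (viewer_id, timestamp, value)


class SensorDevice:
    def __init__(self,
                 viewer_id: str,
                 read_signal: Callable[[], float],
                 transmit: Callable[[Sample], None]) -> None:
        self.viewer_id = viewer_id
        self.read_signal = read_signal  # e.g., wraps an EEG/EMG/ECG channel
        self.transmit = transmit        # e.g., sends to the first module or database

    def sample_once(self) -> None:
        """Take one reading and transmit it with the viewer id and a timestamp."""
        self.transmit((self.viewer_id, time.time(), self.read_signal()))
```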
The reaction data of a disclosed example corresponds to electrical activity in brain tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in muscle tissue of the user.
The reaction data of a disclosed example corresponds to electrical activity in heart tissue of the user.
A first set of the plurality of viewers of a disclosed example is at a first location and a second set of the plurality of viewers of a disclosed example is at a second location different from the first location.
A first set of the plurality of viewers of a disclosed example is viewing the media instance at a first time and a second set of the plurality of viewers is viewing the media instance at a second time different from the first time.
Examples described herein include a system comprising: a processor coupled to a database, the database receiving a media instance and reaction data from a plurality of viewers while the plurality of viewers are viewing the media instance; a first module coupled to the processor, the first module automatically analyzing the reaction data; and a second module coupled to the processor, the second module presenting the results by presenting actionable insights corresponding to a portion of the media instance via at least one of a plurality of renderings, wherein the actionable insights correspond to emotional reactions of the plurality of viewers.
Examples described herein include a system comprising: a processor coupled to a database, the database receiving a media instance and reaction data from a plurality of viewers viewing the media instance; a first module coupled to the processor, the first module analyzing the reaction data; and a second module coupled to the processor, the second module presenting results of the analyzing by presenting a set of questions corresponding to a portion of the media instance, the set of questions corresponding to at least one of a plurality of renderings, wherein answers to questions of the set of questions present actionable insights of the reaction data, the actionable insights corresponding to emotional reactions of the plurality of viewers.
Examples described herein may be implemented using a conventional general purpose or a specialized digital computer or microprocessor(s) programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The teachings of this disclosure may also be implemented by the preparation of integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
A disclosed example includes a computer program product which is a machine readable medium (media) having instructions stored thereon/in which can be used to program one or more computing devices to perform any of the features presented herein. The machine readable medium can include, but is not limited to, one or more types of disks including floppy disks, optical discs, DVD, CD-ROMs, micro drive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Stored on any one of the computer readable medium (media), the teachings of the present disclosure include software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human viewer or other mechanism utilizing the results of the teachings of this disclosure. Such software may include, but is not limited to, device drivers, operating systems, execution environments/containers, and applications.
The examples described herein include and/or run under and/or in association with a processing system. The processing system includes any collection of processor-based devices or computing devices operating together, or components of processing systems or devices, as is known in the art. For example, the processing system can include one or more of a portable computer, portable communication device operating in a communication network, and/or a network server. The portable computer can be any of a number and/or combination of devices selected from among personal computers, cellular telephones, personal digital assistants, portable computing devices, and portable communication devices, but is not so limited. The processing system can include components within a larger computer system.
The processing system of a disclosed example includes at least one processor and at least one memory device or subsystem. The processing system can also include or be coupled to at least one database. The term “processor” as generally used herein refers to any logic processing unit, such as one or more central processing units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASIC), etc. The processor and memory can be monolithically integrated onto a single chip, distributed among a number of chips or components of the systems described herein, and/or provided by some combination of algorithms. The methods described herein can be implemented in one or more of software algorithm(s), programs, firmware, hardware, components, circuitry, in any combination.
The components described herein can be located together or in separate locations. Communication paths couple the components and include any medium for communicating or transferring files among the components. The communication paths include wireless connections, wired connections, and hybrid wireless/wired connections. The communication paths also include couplings or connections to networks including local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), proprietary networks, interoffice or backend networks, and the Internet. Furthermore, the communication paths include removable fixed mediums like floppy disks, hard disk drives, and CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB) connections, RS-232 connections, telephone lines, buses, and electronic mail messages.
Aspects of the systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the systems and methods include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the systems and methods may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.
It should be noted that any system, method, and/or other components disclosed herein may be described using computer aided design tools and expressed (or represented), as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, HTTPs, FTP, SMTP, WAP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described components may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
The above description of example systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific examples of, and examples for, the systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above.
The elements and acts of the various examples described above can be combined to provide other examples. These and other changes can be made to the systems and methods in light of the above detailed description.
In general, in the following claims, the terms used should not be construed to limit the claims to the specific examples disclosed in the specification and the claims, but should be construed to include all systems and methods under the claims. Accordingly, the examples are not limited by the disclosure, but instead the scope of the examples is to be determined entirely by the claims.
While certain aspects of the examples are presented below in certain claim forms, the inventors contemplate the various aspects of the examples in any number of claim forms. Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects disclosed in the various examples.
This patent arises from a continuation of U.S. patent application Ser. No. 14/673,077, filed on Mar. 30, 2015, U.S. patent application Ser. No. 13/659,592, filed on Oct. 24, 2012, now U.S. Pat. No. 9,021,515, U.S. patent application Ser. No. 12/244,751, filed on Oct. 2, 2008, now U.S. Pat. No. 8,327,395, and U.S. patent application Ser. No. 12/244,752, filed on Oct. 2, 2008, now U.S. Pat. No. 8,332,883, which are hereby incorporated by reference in their entireties. This patent claims the benefit of U.S. Patent Application Ser. No. 60/977,035, filed Oct. 2, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/977,040, filed Oct. 2, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/977,042, filed Oct. 2, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/977,045, filed Oct. 2, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/984,260, filed Oct. 31, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/984,268, filed Oct. 31, 2007. This patent claims the benefit of U.S. Patent Application Ser. No. 60/991,591, filed Nov. 30, 2007. This patent is related to U.S. patent application Ser. No. 11/681,265, filed Mar. 2, 2007; U.S. patent application Ser. No. 11/804,517, filed May 17, 2007; U.S. patent application Ser. No. 11/779,814, filed Jul. 18, 2007; U.S. patent application Ser. No. 11/846,068, filed Aug. 28, 2007; U.S. patent application Ser. No. 11/959,399, filed Dec. 18, 2007; U.S. patent application Ser. No. 12/244,737, filed Oct. 2, 2008; U.S. patent application Ser. No. 12/244,748, filed Oct. 2, 2008; U.S. patent application Ser. No. 12/263,331, filed Oct. 31, 2008; U.S. patent application Ser. No. 12/263,350, filed Oct. 31, 2008; U.S. patent application Ser. No. 12/326,016, filed Dec. 1, 2008; and U.S. patent application Ser. No. 13/252,910, filed Oct. 4, 2011.
Number | Name | Date | Kind |
---|---|---|---|
4145122 | Rinard et al. | Mar 1979 | A |
4610259 | Cohen et al. | Sep 1986 | A |
4626904 | Lurie | Dec 1986 | A |
4686999 | Snyder et al. | Aug 1987 | A |
4695879 | Weinblatt | Sep 1987 | A |
4755045 | Borah et al. | Jul 1988 | A |
4846190 | John | Jul 1989 | A |
4859050 | Borah et al. | Aug 1989 | A |
4870579 | Hey | Sep 1989 | A |
4931934 | Snyder | Jun 1990 | A |
4955388 | Silberstein | Sep 1990 | A |
4973149 | Hutchinson | Nov 1990 | A |
4974602 | Abraham-Fuchs et al. | Dec 1990 | A |
5226177 | Nickerson | Jul 1993 | A |
5243517 | Schmidt et al. | Sep 1993 | A |
5331544 | Lu et al. | Jul 1994 | A |
5345281 | Taboada et al. | Sep 1994 | A |
5363858 | Farwell | Nov 1994 | A |
5392788 | Hudspeth | Feb 1995 | A |
5406957 | Tansey | Apr 1995 | A |
5410609 | Kado et al. | Apr 1995 | A |
5436830 | Zaltman | Jul 1995 | A |
5447166 | Gevins | Sep 1995 | A |
5450855 | Rosenfeld | Sep 1995 | A |
5513649 | Gevins et al. | May 1996 | A |
5537618 | Boulton et al. | Jul 1996 | A |
5550928 | Lu et al. | Aug 1996 | A |
5579774 | Miller et al. | Dec 1996 | A |
5601090 | Musha | Feb 1997 | A |
5622168 | Keusch et al. | Apr 1997 | A |
5676138 | Zawilinski | Oct 1997 | A |
5676148 | Koo et al. | Oct 1997 | A |
5687322 | Deaton et al. | Nov 1997 | A |
5692906 | Corder | Dec 1997 | A |
5720619 | Fisslinger | Feb 1998 | A |
5724987 | Gevins et al. | Mar 1998 | A |
5726701 | Needham | Mar 1998 | A |
5736986 | Sever, Jr. | Apr 1998 | A |
5740812 | Cowan | Apr 1998 | A |
5762611 | Lewis et al. | Jun 1998 | A |
5774591 | Black et al. | Jun 1998 | A |
5802208 | Podilchuk et al. | Sep 1998 | A |
5802220 | Black et al. | Sep 1998 | A |
5812642 | Leroy | Sep 1998 | A |
5842199 | Miller et al. | Nov 1998 | A |
5892566 | Bullwinkel | Apr 1999 | A |
5974262 | Fuller et al. | Oct 1999 | A |
5983129 | Cowan et al. | Nov 1999 | A |
5983214 | Lang et al. | Nov 1999 | A |
5995868 | Dorfmeister et al. | Nov 1999 | A |
6001065 | DeVito | Dec 1999 | A |
6016475 | Miller et al. | Jan 2000 | A |
6032129 | Greef et al. | Feb 2000 | A |
6088040 | Oda et al. | Jul 2000 | A |
6099319 | Zaltman et al. | Aug 2000 | A |
6170018 | Voll et al. | Jan 2001 | B1 |
6171239 | Humphrey | Jan 2001 | B1 |
6182113 | Narayanaswami | Jan 2001 | B1 |
6190314 | Ark et al. | Feb 2001 | B1 |
6212502 | Ball et al. | Apr 2001 | B1 |
6228038 | Claessens | May 2001 | B1 |
6236885 | Hunter et al. | May 2001 | B1 |
6236975 | Boe et al. | May 2001 | B1 |
6254536 | DeVito | Jul 2001 | B1 |
6286005 | Cannon | Sep 2001 | B1 |
6292688 | Patton | Sep 2001 | B1 |
6299308 | Voronka et al. | Oct 2001 | B1 |
6309342 | Blazey et al. | Oct 2001 | B1 |
6315569 | Zaltman | Nov 2001 | B1 |
6322368 | Young et al. | Nov 2001 | B1 |
6358201 | Childre et al. | Mar 2002 | B1 |
6370513 | Kolawa et al. | Apr 2002 | B1 |
6422999 | Hill | Jul 2002 | B1 |
6435878 | Reynolds et al. | Aug 2002 | B1 |
6453194 | Hill | Sep 2002 | B1 |
6453241 | Bassett, Jr. et al. | Sep 2002 | B1 |
6487444 | Mimura | Nov 2002 | B2 |
6577329 | Flickner et al. | Jun 2003 | B1 |
6585521 | Obrador | Jul 2003 | B1 |
6609024 | Ryu et al. | Aug 2003 | B1 |
6615209 | Gomes et al. | Sep 2003 | B1 |
6623428 | Miller et al. | Sep 2003 | B2 |
6626676 | Freer | Sep 2003 | B2 |
6648822 | Hamamoto et al. | Nov 2003 | B2 |
6652283 | Van Schaack et al. | Nov 2003 | B1 |
6656116 | Kim et al. | Dec 2003 | B2 |
6678685 | McGill et al. | Jan 2004 | B2 |
6678866 | Sugimoto et al. | Jan 2004 | B1 |
6688890 | von Buegner | Feb 2004 | B2 |
6712468 | Edwards | Mar 2004 | B1 |
6792304 | Silberstein | Sep 2004 | B1 |
6839682 | Blume et al. | Jan 2005 | B1 |
6842877 | Robarts et al. | Jan 2005 | B2 |
6850252 | Hoffberg | Feb 2005 | B1 |
6852875 | Prakash | Feb 2005 | B2 |
6888457 | Wilkinson et al. | May 2005 | B2 |
6904408 | McCarthy et al. | Jun 2005 | B1 |
6909451 | Latypov et al. | Jun 2005 | B1 |
6978115 | Whitehurst et al. | Dec 2005 | B2 |
7010497 | Nyhan et al. | Mar 2006 | B1 |
7020508 | Stivoric et al. | Mar 2006 | B2 |
7035685 | Ryu et al. | Apr 2006 | B2 |
7043056 | Edwards et al. | May 2006 | B2 |
7047550 | Yasukawa et al. | May 2006 | B1 |
7050753 | Knutson | May 2006 | B2 |
7113916 | Hill | Sep 2006 | B1 |
7120880 | Dryer et al. | Oct 2006 | B1 |
7150715 | Collura et al. | Dec 2006 | B2 |
7246081 | Hill | Jul 2007 | B2 |
7249708 | McConnell et al. | Jul 2007 | B2 |
7340060 | Tomkins et al. | Mar 2008 | B2 |
D565735 | Washbon | Apr 2008 | S |
7359894 | Liebman et al. | Apr 2008 | B1 |
7383200 | Walker et al. | Jun 2008 | B1 |
7394385 | Franco, Jr. et al. | Jul 2008 | B2 |
7483844 | Takakura et al. | Jan 2009 | B2 |
7519860 | Hatonen et al. | Apr 2009 | B2 |
7623823 | Zito et al. | Nov 2009 | B2 |
7630757 | Dorfmeister et al. | Dec 2009 | B2 |
7636456 | Collins et al. | Dec 2009 | B2 |
7658327 | Tuchman et al. | Feb 2010 | B2 |
7689272 | Farwell | Mar 2010 | B2 |
7698238 | Barletta et al. | Apr 2010 | B2 |
7729755 | Laken | Jun 2010 | B2 |
7765564 | Deng | Jul 2010 | B2 |
7774052 | Burton et al. | Aug 2010 | B2 |
7797186 | Dybus | Sep 2010 | B2 |
7840250 | Tucker | Nov 2010 | B2 |
7844484 | Arnett et al. | Nov 2010 | B2 |
7895075 | Gettys et al. | Feb 2011 | B2 |
7895625 | Bryan et al. | Feb 2011 | B1 |
7917366 | Levanon et al. | Mar 2011 | B1 |
7930199 | Hill | Apr 2011 | B1 |
7966012 | Parker | Jun 2011 | B2 |
7984468 | Westberg | Jul 2011 | B2 |
8014847 | Shastri et al. | Sep 2011 | B2 |
8027518 | Baker et al. | Sep 2011 | B2 |
8055722 | Hille | Nov 2011 | B2 |
8065203 | Chien et al. | Nov 2011 | B1 |
8069125 | Jung et al. | Nov 2011 | B2 |
8073707 | Teller et al. | Dec 2011 | B2 |
8082215 | Jung et al. | Dec 2011 | B2 |
8086563 | Jung et al. | Dec 2011 | B2 |
8099315 | Amento et al. | Jan 2012 | B2 |
8126220 | Greig | Feb 2012 | B2 |
8151292 | Lee et al. | Apr 2012 | B2 |
8151298 | Begeja et al. | Apr 2012 | B2 |
8196168 | Bryan et al. | Jun 2012 | B1 |
8200775 | Moore | Jun 2012 | B2 |
8235725 | Hill | Aug 2012 | B1 |
8255267 | Breiter | Aug 2012 | B2 |
8296172 | Marci et al. | Oct 2012 | B2 |
8300526 | Saito et al. | Oct 2012 | B2 |
8327395 | Lee et al. | Dec 2012 | B2 |
8332883 | Lee et al. | Dec 2012 | B2 |
8381244 | King et al. | Feb 2013 | B2 |
8473345 | Pradeep et al. | Jun 2013 | B2 |
8484081 | Pradeep et al. | Jul 2013 | B2 |
8494610 | Pradeep et al. | Jul 2013 | B2 |
8494905 | Pradeep et al. | Jul 2013 | B2 |
8533042 | Pradeep et al. | Sep 2013 | B2 |
8561095 | Dimitrova et al. | Oct 2013 | B2 |
8635105 | Pradeep et al. | Jan 2014 | B2 |
8744237 | Baldwin et al. | Jun 2014 | B2 |
8764652 | Lee et al. | Jul 2014 | B2 |
8788372 | Kettner et al. | Jul 2014 | B2 |
9021515 | Lee et al. | Apr 2015 | B2 |
9336535 | Pradeep et al. | May 2016 | B2 |
9521960 | Lee et al. | Dec 2016 | B2 |
9560984 | Pradeep et al. | Feb 2017 | B2 |
9571877 | Lee et al. | Feb 2017 | B2 |
20010016874 | Ono et al. | Aug 2001 | A1 |
20010013009 | Greening et al. | Sep 2001 | A1 |
20010020236 | Cannon | Sep 2001 | A1 |
20010029468 | Yamaguchi et al. | Oct 2001 | A1 |
20010032140 | Hoffman | Oct 2001 | A1 |
20010040591 | Abbott et al. | Nov 2001 | A1 |
20010056225 | DeVito | Dec 2001 | A1 |
20020053076 | Landesmann | May 2002 | A1 |
20020055857 | Mault | May 2002 | A1 |
20020056087 | Berezowski et al. | May 2002 | A1 |
20020056124 | Hay | May 2002 | A1 |
20020059577 | Lu et al. | May 2002 | A1 |
20020065826 | Bell et al. | May 2002 | A1 |
20020072952 | Hamzy et al. | Jun 2002 | A1 |
20020082902 | Ando et al. | Jun 2002 | A1 |
20020103429 | deCharms | Aug 2002 | A1 |
20020111796 | Nemoto | Aug 2002 | A1 |
20020143627 | Barsade et al. | Oct 2002 | A1 |
20020154833 | Koch et al. | Oct 2002 | A1 |
20020169665 | Hughes et al. | Nov 2002 | A1 |
20020182574 | Freer | Dec 2002 | A1 |
20020188217 | Farwell | Dec 2002 | A1 |
20030003433 | Carpenter et al. | Jan 2003 | A1 |
20030036955 | Tanaka | Feb 2003 | A1 |
20030037333 | Ghashghai et al. | Feb 2003 | A1 |
20030044050 | Clark et al. | Mar 2003 | A1 |
20030063222 | Creed et al. | Apr 2003 | A1 |
20030063780 | Gutta et al. | Apr 2003 | A1 |
20030065524 | Giacchetti et al. | Apr 2003 | A1 |
20030076369 | Resner et al. | Apr 2003 | A1 |
20030081834 | Philomin et al. | May 2003 | A1 |
20030093784 | Dimitrova et al. | May 2003 | A1 |
20030093792 | Labeeb et al. | May 2003 | A1 |
20030126593 | Mault | Jul 2003 | A1 |
20030131351 | Shapira | Jul 2003 | A1 |
20030149344 | Nizan | Aug 2003 | A1 |
20030153841 | Kilborn | Aug 2003 | A1 |
20030165270 | Endrikhovski et al. | Sep 2003 | A1 |
20030172374 | Vinson et al. | Sep 2003 | A1 |
20030204412 | Brier | Oct 2003 | A1 |
20030208754 | Sridhar et al. | Nov 2003 | A1 |
20040001616 | Gutta et al. | Jan 2004 | A1 |
20040018476 | LaDue | Jan 2004 | A1 |
20040039268 | Barbour et al. | Feb 2004 | A1 |
20040055448 | Byon | Mar 2004 | A1 |
20040068431 | Smith et al. | Apr 2004 | A1 |
20040072133 | Kullok et al. | Apr 2004 | A1 |
20040101212 | Fedorovskaya et al. | May 2004 | A1 |
20040117831 | Ellis et al. | Jun 2004 | A1 |
20040133081 | Teller et al. | Jul 2004 | A1 |
20040187167 | Maguire et al. | Sep 2004 | A1 |
20040193068 | Burton et al. | Sep 2004 | A1 |
20040208496 | Pilu | Oct 2004 | A1 |
20040219184 | Brown et al. | Nov 2004 | A1 |
20040230989 | Macey et al. | Nov 2004 | A1 |
20040267141 | Amano et al. | Dec 2004 | A1 |
20050010087 | Banet et al. | Jan 2005 | A1 |
20050010637 | Dempski et al. | Jan 2005 | A1 |
20050041951 | Inoue et al. | Feb 2005 | A1 |
20050043646 | Viirre et al. | Feb 2005 | A1 |
20050043774 | Devlin et al. | Feb 2005 | A1 |
20050045189 | Jay | Mar 2005 | A1 |
20050047647 | Rutishauser et al. | Mar 2005 | A1 |
20050060312 | Curtiss et al. | Mar 2005 | A1 |
20050062637 | El Zabadani et al. | Mar 2005 | A1 |
20050066307 | Patel et al. | Mar 2005 | A1 |
20050071462 | Bodin et al. | Mar 2005 | A1 |
20050071865 | Martins | Mar 2005 | A1 |
20050079474 | Lowe | Apr 2005 | A1 |
20050097594 | O'Donnell et al. | May 2005 | A1 |
20050113649 | Bergantino | May 2005 | A1 |
20050113656 | Chance | May 2005 | A1 |
20050132401 | Boccon-Gibod et al. | Jun 2005 | A1 |
20050149964 | Thomas et al. | Jul 2005 | A1 |
20050154290 | Langleben | Jul 2005 | A1 |
20050165766 | Szabo | Jul 2005 | A1 |
20050172311 | Hjelt et al. | Aug 2005 | A1 |
20050177058 | Sobell | Aug 2005 | A1 |
20050216071 | Devlin et al. | Sep 2005 | A1 |
20050216243 | Graham et al. | Sep 2005 | A1 |
20050223237 | Barletta et al. | Oct 2005 | A1 |
20050256905 | Gruhl et al. | Nov 2005 | A1 |
20050261980 | Hadi | Nov 2005 | A1 |
20050262542 | DeWeese et al. | Nov 2005 | A1 |
20050267798 | Panara | Dec 2005 | A1 |
20050288954 | McCarthy et al. | Dec 2005 | A1 |
20050289582 | Tavares et al. | Dec 2005 | A1 |
20060009702 | Iwaki et al. | Jan 2006 | A1 |
20060010470 | Kurosaki et al. | Jan 2006 | A1 |
20060031882 | Swix et al. | Feb 2006 | A1 |
20060041548 | Parsons et al. | Feb 2006 | A1 |
20060042483 | Work et al. | Mar 2006 | A1 |
20060069663 | Adar et al. | Mar 2006 | A1 |
20060149337 | John | Jul 2006 | A1 |
20060176289 | Horn | Aug 2006 | A1 |
20060190822 | Basson et al. | Aug 2006 | A1 |
20060190966 | McKissick et al. | Aug 2006 | A1 |
20060218046 | Carfi et al. | Sep 2006 | A1 |
20060256133 | Rosenberg | Nov 2006 | A1 |
20060257834 | Lee et al. | Nov 2006 | A1 |
20060258926 | Ali et al. | Nov 2006 | A1 |
20060259371 | Perrier et al. | Nov 2006 | A1 |
20060259922 | Sandgren et al. | Nov 2006 | A1 |
20060277102 | Agliozzo | Dec 2006 | A1 |
20060293921 | McCarthy et al. | Dec 2006 | A1 |
20070016096 | McNabb | Jan 2007 | A1 |
20070038516 | Apple et al. | Feb 2007 | A1 |
20070050256 | Walker et al. | Mar 2007 | A1 |
20070053513 | Hoffberg | Mar 2007 | A1 |
20070055169 | Lee et al. | Mar 2007 | A1 |
20070060830 | Le et al. | Mar 2007 | A1 |
20070060831 | Le et al. | Mar 2007 | A1 |
20070066914 | Le et al. | Mar 2007 | A1 |
20070066916 | Lemos | Mar 2007 | A1 |
20070067305 | Ives | Mar 2007 | A1 |
20070078700 | Lenzmann et al. | Apr 2007 | A1 |
20070101360 | Gutta et al. | May 2007 | A1 |
20070112460 | Kiselik | May 2007 | A1 |
20070116037 | Moore | May 2007 | A1 |
20070124756 | Covell et al. | May 2007 | A1 |
20070130580 | Covell et al. | Jun 2007 | A1 |
20070136753 | Bovenschulte et al. | Jun 2007 | A1 |
20070143778 | Covell et al. | Jun 2007 | A1 |
20070150916 | Begole et al. | Jun 2007 | A1 |
20070162505 | Cecchi et al. | Jul 2007 | A1 |
20070168461 | Moore | Jul 2007 | A1 |
20070173733 | Le et al. | Jul 2007 | A1 |
20070179396 | Le et al. | Aug 2007 | A1 |
20070184420 | Mathan et al. | Aug 2007 | A1 |
20070192168 | Van Luchene | Aug 2007 | A1 |
20070192785 | Pellinat et al. | Aug 2007 | A1 |
20070209047 | Hallberg et al. | Sep 2007 | A1 |
20070214471 | Rosenberg | Sep 2007 | A1 |
20070225585 | Washbon et al. | Sep 2007 | A1 |
20070235716 | Delic et al. | Oct 2007 | A1 |
20070238945 | Delic et al. | Oct 2007 | A1 |
20070240180 | Shanks et al. | Oct 2007 | A1 |
20070244977 | Atkins | Oct 2007 | A1 |
20070250846 | Swix et al. | Oct 2007 | A1 |
20070250901 | Mcintire et al. | Oct 2007 | A1 |
20070265507 | de Lemos | Nov 2007 | A1 |
20070282566 | Whitlow et al. | Dec 2007 | A1 |
20080004940 | Rolleston Phillips | Jan 2008 | A1 |
20080024725 | Todd | Jan 2008 | A1 |
20080065468 | Berg et al. | Mar 2008 | A1 |
20080091463 | Shakamuri | Apr 2008 | A1 |
20080091512 | Marci et al. | Apr 2008 | A1 |
20080133724 | Clark | Jun 2008 | A1 |
20080144882 | Leinbach et al. | Jun 2008 | A1 |
20080147488 | Tunick et al. | Jun 2008 | A1 |
20080147742 | Allen | Jun 2008 | A1 |
20080159365 | Dubocanin et al. | Jul 2008 | A1 |
20080162182 | Cazares et al. | Jul 2008 | A1 |
20080177197 | Lee et al. | Jul 2008 | A1 |
20080195471 | Dube et al. | Aug 2008 | A1 |
20080211768 | Breen et al. | Sep 2008 | A1 |
20080214902 | Lee et al. | Sep 2008 | A1 |
20080218472 | Breen et al. | Sep 2008 | A1 |
20080221400 | Lee et al. | Sep 2008 | A1 |
20080221472 | Lee et al. | Sep 2008 | A1 |
20080221969 | Lee et al. | Sep 2008 | A1 |
20080222670 | Lee et al. | Sep 2008 | A1 |
20080222671 | Lee et al. | Sep 2008 | A1 |
20080249834 | Zigmond et al. | Oct 2008 | A1 |
20080249865 | Angell et al. | Oct 2008 | A1 |
20080275830 | Greig | Nov 2008 | A1 |
20080306398 | Uchiyama et al. | Dec 2008 | A1 |
20090018996 | Hunt et al. | Jan 2009 | A1 |
20090024049 | Pradeep et al. | Jan 2009 | A1 |
20090024447 | Pradeep et al. | Jan 2009 | A1 |
20090024448 | Pradeep et al. | Jan 2009 | A1 |
20090024449 | Pradeep et al. | Jan 2009 | A1 |
20090024475 | Pradeep et al. | Jan 2009 | A1 |
20090024747 | Moses et al. | Jan 2009 | A1 |
20090025023 | Pradeep et al. | Jan 2009 | A1 |
20090030287 | Pradeep et al. | Jan 2009 | A1 |
20090030303 | Pradeep et al. | Jan 2009 | A1 |
20090030717 | Pradeep et al. | Jan 2009 | A1 |
20090030780 | York et al. | Jan 2009 | A1 |
20090030930 | Pradeep et al. | Jan 2009 | A1 |
20090036755 | Pradeep et al. | Feb 2009 | A1 |
20090036756 | Pradeep et al. | Feb 2009 | A1 |
20090062629 | Pradeep et al. | Mar 2009 | A1 |
20090062681 | Pradeep et al. | Mar 2009 | A1 |
20090063255 | Pradeep et al. | Mar 2009 | A1 |
20090063256 | Pradeep et al. | Mar 2009 | A1 |
20090070798 | Lee et al. | Mar 2009 | A1 |
20090082643 | Pradeep et al. | Mar 2009 | A1 |
20090082692 | Hale et al. | Mar 2009 | A1 |
20090083129 | Pradeep et al. | Mar 2009 | A1 |
20090088610 | Lee et al. | Apr 2009 | A1 |
20090094286 | Lee et al. | Apr 2009 | A1 |
20090094627 | Lee et al. | Apr 2009 | A1 |
20090094628 | Lee et al. | Apr 2009 | A1 |
20090094629 | Lee et al. | Apr 2009 | A1 |
20090105576 | Do et al. | Apr 2009 | A1 |
20090112077 | Nguyen et al. | Apr 2009 | A1 |
20090132441 | Muller et al. | May 2009 | A1 |
20090133047 | Lee et al. | May 2009 | A1 |
20090138356 | Pomplun | May 2009 | A1 |
20090150919 | Lee et al. | Jun 2009 | A1 |
20090153328 | Otani et al. | Jun 2009 | A1 |
20090156907 | Jung et al. | Jun 2009 | A1 |
20090156925 | Jin et al. | Jun 2009 | A1 |
20090156955 | Jung et al. | Jun 2009 | A1 |
20090163777 | Jung et al. | Jun 2009 | A1 |
20090164132 | Jung et al. | Jun 2009 | A1 |
20090171164 | Jung et al. | Jul 2009 | A1 |
20090187467 | Fang et al. | Jul 2009 | A1 |
20090214060 | Chuang et al. | Aug 2009 | A1 |
20090221928 | Einav et al. | Sep 2009 | A1 |
20090222330 | Leinbach | Sep 2009 | A1 |
20090249223 | Barsook et al. | Oct 2009 | A1 |
20090253996 | Lee et al. | Oct 2009 | A1 |
20090259509 | Landvater | Oct 2009 | A1 |
20090271294 | Hadi | Oct 2009 | A1 |
20090300672 | Van Gulik | Dec 2009 | A1 |
20090305006 | Steffen | Dec 2009 | A1 |
20090318773 | Jung et al. | Dec 2009 | A1 |
20090327068 | Pradeep et al. | Dec 2009 | A1 |
20090328089 | Pradeep et al. | Dec 2009 | A1 |
20090328122 | Amento et al. | Dec 2009 | A1 |
20100004977 | Marci et al. | Jan 2010 | A1 |
20100063881 | Ghosh et al. | Mar 2010 | A1 |
20100094702 | Silberstein | Apr 2010 | A1 |
20100180029 | Fourman | Jul 2010 | A1 |
20100228604 | Desai et al. | Sep 2010 | A1 |
20100292998 | Bodlaender et al. | Nov 2010 | A1 |
20110022459 | Milanese et al. | Jan 2011 | A1 |
20110153423 | Elvekrog et al. | Jun 2011 | A1 |
20120321271 | Baldwin et al. | Dec 2012 | A1 |
20130046577 | Marci et al. | Feb 2013 | A1 |
20130060602 | Rupp et al. | Mar 2013 | A1 |
20130117771 | Lee et al. | May 2013 | A1 |
20130185140 | Pradeep et al. | Jul 2013 | A1 |
20130185141 | Pradeep et al. | Jul 2013 | A1 |
20130185142 | Pradeep et al. | Jul 2013 | A1 |
20130185145 | Pradeep et al. | Jul 2013 | A1 |
20130304540 | Pradeep et al. | Nov 2013 | A1 |
20130332259 | Pradeep et al. | Dec 2013 | A1 |
20140164095 | Pradeep et al. | Jun 2014 | A1 |
20140221866 | Quy | Aug 2014 | A1 |
20140244345 | Sollis et al. | Aug 2014 | A1 |
20150208113 | Lee et al. | Jul 2015 | A1 |
20170053296 | Lee et al. | Feb 2017 | A1 |
Number | Date | Country |
---|---|---|
1087618 | Mar 2001 | EP |
1609418 | Dec 2005 | EP |
2221759 | Feb 1990 | GB |
2001147944 | May 2001 | JP |
2005051654 | Feb 2005 | JP |
2005160805 | Jun 2005 | JP |
2006006355 | Jan 2006 | JP |
2006227994 | Aug 2006 | JP |
2006305334 | Nov 2006 | JP |
20000072489 | Dec 2000 | KR |
20010104579 | Nov 2001 | KR |
200422399 | Jul 2006 | KR |
2008030831 | Mar 2008 | WO |
2008055078 | May 2008 | WO |
2008064431 | Jun 2008 | WO |
2008121651 | Oct 2008 | WO |
2008137579 | Nov 2008 | WO |
2008137581 | Nov 2008 | WO |
2008141340 | Nov 2008 | WO |
2008154410 | Dec 2008 | WO |
2009018374 | Feb 2009 | WO |
Entry |
---|
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/263,331, dated Aug. 4, 2016, 46 pages. |
Knutson et al., “Neural Predictors of Purchases,” Neuron vol. 53 (Jan. 4, 2007), pp. 147-156, 10 pages. |
Schaefer et al., “Neural Correlates of Culturally Familiar Brands of Car Manufacturers,” Neuroimage, vol. 31 (2006), pp. 861-865, 5 pages. |
Aharon et al., “Beautiful Faces Have Variable Reward Value: fMRI and Behavioral Evidence,” Neuron, vol. 32 (2001), pp. 537-551, 15 pages. |
M. Corbetta et al., “Control of Goal-Directed and Stimulus-Driven Attention in the Brain,” Nature Reviews Neuroscience, vol. 3, pp. 201-215 (Mar. 2002), 15 pages. |
Becker, S. “A Study of Web Usability for Older Adults Seeking Online Health Resources,” ACM Transactions on Computer-Human Interaction, vol. 11, No. 4, pp. 387-406 (Dec. 2004), 20 pages. |
Landau et al., “Different Effects of Voluntary and Involuntary Attention on EEG Activity in the Gamma Band,” The Journal of Neuroscience 27(44), Oct. 31, 2007, pp. 11986-11990, 5 pages. |
Kamba et al., “The Krakatoa Chronicle—An Interactive, Personalized Newspaper on the Web,” available at: http://www.w3.org/Conferences/WWW4/Papers/93/ (retrieved on Apr. 11, 2016), 11 pages. |
Ehrenberg et al., “Understanding Brand Performance Measures: Using Dirichlet Benchmarks,” 2004, Journal of Business Research, vol. 57, pp. 1307-1325, 19 pages. |
Leeflang et al., “Building Models for Marketing Decisions,” 2000, Springer Science + Business Media, pp. 192-235, 482-521, 86 pages. |
Bhattacharya , “Is your brand's loyalty too much, too little, or just right?: Explaining deviations in loyalty from the Dirichlet norm,” 1997, International Journal of Research in Marketing, vol. 14, pp. 421-435, 15 pages. |
Nikolaeva et al., “The Moderating Role of Consumer and Product Characteristics on the Value of Customized On-Line Recommendations,” 2006, International Journal of Electronic Commerce, vol. 11, No. 2, pp. 101-123, 24 pages. |
Ehrenberg, “New Brands and the Existing Market,” 1991, International Journal of Market Research, vol. 33, No. 4, 10 pages. |
Foxall, “The Substitutability of Brands,” 1999, Managerial and Decision Economics, vol. 20, pp. 241-257, 17 pages. |
Pammer, “Forecasting the Penetration of a New Product—A Bayesian Approach,” 2000, Journal of Business and Economic Statistics, vol. 18, No. 4, pp. 428-435, 8 pages. |
Rungie et al., “Calculation of Theoretical Brand Performance Measures from the Parameters of the Dirichlet Model,” 2004, Marketing Bulletin, Massey University, 15, Technical Note 2, pp. 1-19, 20 pages. |
Uncles et al., “Patterns of Buyer Behavior: Regularities, Models, and Extensions,” 1995, Marketing Science, vol. 14, No. 3, pp. G71-G78, 9 pages. |
Boltz, “The cognitive processing of film and musical soundtracks,” Memory and Cognition, 2004, 32 (7), 1194-1205, 12 pages. |
Christie et al., “Autonomic specificity of discrete emotion and dimensions of affective space: a multivariate approach,” International Journal of Psychophysiology, 51 (2004) 143-153, 11 pages. |
Coombes et al., “Emotion and movement: Activation of defensive circuitry alters the magnitude of a sustained muscle contraction,” University of Florida, USA, Neuroscience Letters 396 (2006) 192-196, 5 pages. |
Cryer et al. “Pull the Plug on Stress,” Harvard Business Review, Jul. 2003, 8 pages. |
Demaree et al., “Predicting facial valence to negative stimuli from resting RSA: Not a function of active emotion regulation,” Cognition and Emotion vol. 20, Issue 2, 2006, pp. 161-176, published on Sep. 9, 2010, http://www.tandfonline.com/doi/abs/10.1080/02699930500260427, 6 pages. (Abstract provided). |
Ekman et al., “Autonomic Nervous System Activity Distinguishes among Emotions,” Science, New Series, vol. 221, No. 4616 (Sep. 16, 1983), pp. 1208-1210, http://links.jstor.org/sici?sici=0036-0875%2819830916%293%3A221%3A4616%3Cl208%3AANSADA%3E2.0.C0%3B2-H, 5 pages. |
Elton, “Measuring emotion at the symphony,” The Boston Globe, Apr. 5, 2006, http://www.psych.mcgill.ca/labs/levitin1media/measuring—emotion—boston.html, 3 pages. |
Goldberg, “Getting wired could help predict emotions,” The Boston Globe, Jun. 13, 2005, http://www.boston.com/yourlife/health/mental/articles/2005/06/13/getting—wired—could—help—predict—emotions/?page=full, 4 pages. |
Gomez et al., “Respiratory Responses Associated with Affective Processing of Film Stimuli,” Biological Psychology, vol. 68, Issue 3, Mar. 2005, pp. 223-235, 2 pages. (Abstract provided). |
Hall, “Is cognitive processing the right dimension,” World Advertising Research Center, Jan. 2003, 3 pages. |
Hall, “On Measuring the Power of Communications,” Journal of Advertising Research, 44, pp. 1-11, doi: 10.1017/S0021849904040139, Jun. 2004, 1 page. (Abstract provided). |
Hall, “Research and strategy: a fall from grace,” ADMAP, Issue 443, pp. 18-20, 2003, 1 page. (Abstract provided). |
Hubert et al., “Autonomic, neuroendocrine, and subjective responses to emotion-inducing film stimuli,” Int J Psychophysiol, Aug. 1991, 2 pages. (Abstract provided). |
Levenson et al., “Emotion and Autonomic Nervous System Activity in the Minangkabau of West Sumatra,” Department of Psychology, University of California, Berkeley, Journal of Personality and Social Psychology, 1992, 2 pages. (Abstract provided). |
Marci et al., “The effect of emotional distance on psychophysiologic concordance and perceived empathy between patient and interviewer,” Applied Psychophysiology and Biofeedback, Jun. 2006, vol. 31, issue 2, 31:115-129, 8 pages. (Abstract provided). |
McCraty et al., “Analysis of twenty-four hour heart rate variability in patients with panic disorder,” Biological Psychology, vol. 56, Issue 2, Jun. 2001, pp. 131-150, 1 page. (Abstract provided). |
McCraty et al., “Electrophysiological Evidence of Intuition: Part 1. The Surprising Role of the Heart,” The Journal of Alternative and Complementary Medicine, vol. 10, No. 1, 2004, pp. 133-143, Mary Ann Liebert, Inc., 12 pages. |
McCraty et al., “Electrophysiological Evidence of Intuition: Part 2. A System-Wide Process?,” The Journal of Alternative and Complementary Medicine, vol. 10, No. 2, 2004, pp. 325-336, Mary Ann Liebert, Inc., 12 pages. |
McCraty et al., “The Effects of Different Types of Music on Mood, Tension, and Mental Clarity,” Original Research, Alternative Therapies, Jan. 1998, vol. 4., No. 1, pp. 75-84, 10 pages. |
McCraty et al., “The Effects of Emotions on Short-Term Power Spectrum Analysis of Heart Rate Variability,” The American Journal of Cardiology, vol. 76, No. 14, Nov. 15, 1995, pp. 1089-1093, 6 pages. |
McCraty et al., “The Impact of a New Emotional Self-Management Program on Stress, Emotions, Heart Rate Variability, DHEA and Cortisol,” Integrative Physiological and Behavioral Science, Apr.-Jun. 1998, vol. 33, No. 2, 151-170, 20 pages. |
McCraty et al., “The Impact of an Emotional Self-Management Skills Course on Psychosocial Functioning and Autonomic Recovery to Stress in Middle School Children,” Integrative Physiological and Behavioral Science, Oct.-Dec. 1999, vol. 34, No. 4, 246-268, 23 pages. |
Melillo, “Inside the Consumer Mind; What Neuroscience Can Tell Us About Marketing,” Adweek, Public Citizen's Commercial Alert, Jan. 16, 2006, http://www.adweek.com/news/advertising/insideconsumer-mind-83549, 8 pages. |
Miller et al., “Influence of Specific Emotional States on Autonomic Reactivity and Pulmonary Function in Asthmatic Children,” Journal of the American Academy of Child & Adolescent Psychiatry, vol. 36, Issue 5, May 1997, pp. 669-677, 3 pages. (Abstract provided). |
Murphy et al., “The Heart Reinnervates After Transplantation,” Official Journal of the Society of Thoracic Surgeons and the Southern Thoracic Surgical Association, Jun. 2000, vol. 69, Issue 6, pp. 1769-1781, 13 pages. |
Rosenberg, “Emotional R.O.I.,” The Hub, May/Jun. 2006, pp. 24-25, 2 pages. |
Tiller et al., “Cardiac Coherence: A New, Noninvasive Measure of Autonomic Nervous System Order,” Alternative Therapies, Jan. 1996, vol. 2, No. 1, 14 pages. |
Umetani et al. “Twenty-Four Hour Time Domain Heart Rate Variability and Heart Rate: Relations to Age and Gender Over Nine Decades,” J. Am. Coll. Cardiol., Mar. 1, 1998, pp. 593-601, 9 pages. |
Von Leupoldt et al., “Emotions in a Body Plethysmograph,” Journal of Psychophysiology (2004), 18, pp. 170-176, 1 page. (Abstract provided). |
Kallman et al., “Effect of Blank Time on Picture Recognition,” The American Journal of Psychology, vol. 97, No. 3 (Autumn, 1984), pp. 399-406, 4 pages. (Abstract provided). |
Larose, Data Mining Methods and Models, Department of Mathematical Sciences, Central Connecticut State University, www.dbeBooks.com—An Ebook Library, published by John Wiley & Sons, Inc., 2006, 340 pages. (Book). |
Han et al., Data Mining: Concepts and Techniques, 2nd Edition, Elsevier, 2006, 772 pages. (Book). |
Liu et al., Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data, Springer Science & Business Media, 2007, 532 pages. (Book). |
Berry et al., Data Mining Techniques: for Marketing, Sales, and Customer Support, Wiley Publishing Inc., Jun. 1997, 464 pages. (Book). |
Horovitz, “Watching Ads Is Real Science Research Companies Monitor Physiological Reactions to Commercials to Determine Their Effectiveness,” Los Angeles Times, Sep. 1, 1991, 3 pages. |
Sung et al., “Wearable feedback systems for rehabilitation,” Journal of NeuroEngineering and Rehabilitation, 2005, 12 pages. |
Jaffe, Casting for Big Ideas, Adweek Magazine Series, Book 8, 2003, 256 pages. (Book). |
Hall, “A New Model for Measuring Advertising Effectiveness,” J. Advertising Research, vol. 42(2) (Mar./ Apr. 2002), 1 page. (Abstract provided). |
Hall, “Advertising as a Factor of Production,” ADMAP, Apr. 2003, pp. 20-23, 1 page. (Abstract provided). |
Ranii, “Adding Science to Gut Check,” The News & Observer, D3 (Apr. 6, 2005), 1 page. (Abstract provided). |
McCraty et al., “Impact of a Workplace Stress Reduction Program on Blood Pressure and Emotional Health in Hypertensive Employees”, the Journal of Alternative and Complementary Medicine, vol. 9, No. 3, 2003, pp. 355-369, Mary Ann Liebert, Inc., 15 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Apr. 1, 2016, 32 pages. |
Egner et al., “EEG Signature and Phenomenology of Alpha/Theta Neurofeedback Training Versus Mock Feedback.” Applied Psychophysiology and Biofeedback. vol. 27, No. 4, Dec. 2002, 10 pages. |
El-Bab et al., “Cognitive Event Related Potentials During a Learning Task,” Doctoral Dissertation, Faculty of Medicine, University of Southampton, UK, 2010, 25 pages. |
Engel, et al., “Dynamic Predictions: Oscillations and Synchrony in Top-Down Processing,” Macmillan Magazines Ltd., vol. 2, Oct. 2001, 13 pages. |
Fries, “A Mechanism for Cognitive Dynamics: Neuronal Communication Through Neuronal Coherence,” Trends in Cognitive Sciences vol. 9 No. 10, Oct. 2005, 7 pages. |
Gazzaley, et al., “Top-down Enhancement and Suppression of the Magnitude and Speed of Neural Activity,” Massachusetts Institute of Technology, Journal of Cognitive Neuroscience, No. 17:3,2005, 11 pages. |
Gevins et al., “High-resolution EEG Mapping of Cortical Activation Related to a Working Memory: Effects of Task Difficulty, Type of Processing, and Practice,” Cerebral Cortex, vol. 7, No. 4, Jun. 1997, 12 pages. |
Government of Newfoundland and Labrador, “Who Responded,” Budget 1996, http://www.budget.gov.nl.ca/budget96/who.gif, 1996, 1 page. |
Harmony et al., “Specific EEG Frequencies Signal General Common Cognitive Processes as well as Specific Tasks Processes in Man,” International Journal of Psychophysiology, vol. 53, 2004, 10 pages. |
Haq, This is Your Brain on Advertising, BusinessWeek.com, http://www.businessweek.com/print/globalbiz/content/oct2007/gb2007108—286282.htm, Oct. 8, 2007, 3 pages. |
Hazlett et al., “Emotional Response to Television Commercials: Facial EMG vs. Self-Report,” Journal of Advertising Research, Apr. 1999, 17 pages. |
Hopf, et al., “Neural Sources of Focused Attention in Visual Search,” Cerebral Cortex, vol. 10, No. 12, Dec. 2000, 9 pages. |
Hughes et al., “Conventional and Quantitative Electroencephalography in Psychiatry,” Journal of Neuropsychiatry and Clinical Neurosciences, vol. 11, No. 2, Spring 1999, 19 pages. |
Klimesch et al., “Episodic and semantic memory: an analysis in the EEG theta and alpha band,” Electroencephalography and Clinical Neurophysiology, vol. 91, 1994, pp. 428-441, 14 pages. |
Lee et al., “What is ‘neuromarketing’? A discussion and agenda for future research,” International Journal of Psychophysiology, vol. 63, 2007, pp. 199-204, 6 pages. |
Lewis et al., “Market Researchers make Increasing use of Brain Imaging,” ACNR, vol. 5, No. 3, Jul./Aug. 2005, pp. 36-37, 2 pages. |
Makeig, et al., “Mining event-related brain dynamics,” Trends in Cognitive Sciences, vol. 8, No. 5, May 2004, 7 pages. |
Mizuhara et al., “A long range cortical network emerging with theta oscillation in mental task,” Neuroreport, vol. 15, No. 8, Jun. 7, 2004, pp. 1233-1238, 11 pages. |
Oxford English Dictionary, Definition of “Question,” retrieved from oed.com on Nov. 11, 2011, 2 pages. |
Page et al., “Cognitive Neuroscience, Marketing and Research,” Congress 2006—Foresight—The Predictive Power of Research Conference Papers, ESOMAR Publications, Sep. 17, 2006, 25 pages. |
Ruchkin et al., “Modality-specific processing streams in verbal working memory: evidence from spatio-temporal patterns of brain activity,” Cognitive Brain Research, vol. 6, 1997, pp. 95-113, 19 pages. |
Selden, “Machines that Read Minds,” Science Digest, Oct. 1981, 9 pages. |
Sutherland, “Neuromarketing: What's it all about?,” retrieved from Max Sutherland's Weblog on Aug. 23, 2011, http://www.sutherlandsurvey.com/Column—pages/Neuromarketing—whats—it—all—about.htm, Mar. 2007, 5 pages. |
Innerscope Research, “Technology Platform: SmartShirt + Eye-Tracking,” Mar. 2007, 1 page. |
Willis et al., Discover Your Child's Learning Style: Children Learn in Unique Ways—Here's the Key to Every Child's Learning Success, Prime Publishing, Roseville, CA, pp. 13-15, 20-22, 143-156, 1999, 22 pages. |
Wise, The High Performance Mind, Mastering Brainwaves for Insight, Healing and Creativity, Penguin Putnam Inc., New York, 1996, pp. 13-14, 16, 42, 44-47, 50, 11 pages. |
Wise, The High Performance Mind, Mastering Brainwaves for Insight, Healing and Creativity, Penguin Putnam Inc., New York, 1996, pp. 156-158; 165-170; 186-187, 189-192, 15 pages. |
Ziegenfuss, “Neuromarketing: Advertising Ethical & Medical Technology,” The Brownstone Journal, vol. XII, May 2005, 9 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 13/659,592, dated Dec. 24, 2014, 65 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Jun. 1, 2015, 37 pages. |
McClure, Samuel, et al., “Neural Correlates of Behavioral Preference for Culturally Familiar Drinks,” Neuron (Oct. 14, 2004), 379-387, 9 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Aug. 2, 2013, 56 pages. |
United States Patent and Trademark Office, “Advisory Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Dec. 16, 2013, 6 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Jan. 15, 2014, 22 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Jul. 31, 2014, 29 pages. |
Beaver, John D., et al., “Individual Differences in Reward Drive Predict Neural Responses to Images of Food”, J. of Neuroscience, May 10, 2006, pp. 5160-5166, 7 pages. |
Cassanello, Carlos R., et al., “Neuronal Responses to Moving Targets in Monkey Frontal Eye Fields”, Journal of Neurophysiology, Sep. 2008, pp. 1544-1556, 16 pages. |
Darrow, Chester, “Psychological and Psychophysiological Significance of the Electroencephalogram,” Psychological Review, May 1947, pp. 157-168, 12 pages. |
Duchowski, “A Breadth-First Survey of Eye-tracking Applications,” Behavior Research Methods, Instruments, and Computers, Nov. 2002, pp. 455-470, 16 pages. |
Ekman, P., Friesen, W., “Measuring Facial Movement,” Environmental Psychology and Nonverbal Behavior, vol. 1, No. 1, Fall 1976, pp. 56-75, 20 pages. |
Enghoff, Sigurd, Thesis: “Moving ICA and Time-Frequency Analysis in Event-Related EEG Studies of Selective Attention,” Technical University of Denmark, Dec. 1999, 54 pages. |
Ekman, P., Friesen, W.V., Facial Action Coding System: A Technique for Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, California, 1978. (Book). |
Ekman, P., Friesen, W., Unmasking the Face—A Guide to Recognizing Emotions from Facial Clues, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1979. (Book). |
Ekman, P., Friesen, W., Ancoli, S., “Facial Signs of Emotional Experience,” J. Personality & Social Psychology, vol. 39, No. 6, Dec. 1980, pp. 1125-1134, 10 pages. |
Heo et al., “Wait! Why is it Not Moving? Attractive and Distractive Ocular Responses to Web Ads,” Paper presented to AEJMC, Aug. 2001, Washington DC, http://www.psu.edu/deptlmedialab/researchpage/newabstracts/wait.html, 3 pages. |
Izard, C. E., The Maximally Discriminative Facial Movement Coding System, (Rev. ed.), Instructional Resources Center, University of Delaware, Newark, Delaware, 1983. (Book). |
Izard, C., Dougherty, L., Hembree, E., A System for Identifying Affect Expressions by Holistic Judgments (AFFEX), Instructional Resources Center, University of Delaware, Newark, Delaware, 1983. (Book). |
Jaimes, A., Sebe, N., “Multimodal Human-Computer Interaction: A Survey,” Computer Vision and Image Understanding, 108, Oct.-Nov. 2007, pp. 116-134, 19 pages. |
Jia, X., Nixon, M.S., “Extending the Feature Set for Automatic Face Recognition,” International Conference on Image Processing and its Applications, Apr. 7-9, 1992, 6 pages. |
Lisetti, C., Nasoz, F., “Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals,” EURASIP J. Applied Signal Processing, No. 11, Sep. 2004, pp. 1672-1687, 16 pages. |
Mehta, A. et al., “Reconsidering Recall and Emotion in Advertising”, Journal of Advertising Research, Mar. 2006, pp. 49-56, 9 pages. |
Rothschild et al., “Predicting Memory for Components of TV Commercials from EEG,” Journal of Consumer Research, Mar. 1990, pp. 472-478, 8 pages. |
Shadlen et al., “A Computational Analysis of the Relationship between Neuronal and Behavioral Responses to Visual Motion”, The Journal of Neuroscience, Feb. 15, 1996, pp. 1486-1510, 25 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/244,748, dated Dec. 17, 2010, 35 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/244,751, dated Feb. 7, 2011, 48 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/244,752, dated Feb. 18, 2011, 34 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/326,016, dated Mar. 21, 2011, 41 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/244,737, dated May 16, 2011, 42 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/244,748, dated Aug. 30, 2011, 19 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/244,751, dated Sep. 7, 2011, 29 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/263,350, dated Oct. 24, 2011, 21 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/244,752, dated Nov. 23, 2011, 22 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/244,737, dated Nov. 29, 2011, 37 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/326,016, dated Nov. 30, 2011, 27 pages. |
United States Patent and Trademark Office, “Restriction Requirement,” issued in connection with U.S. Appl. No. 12/263,331, dated Jan. 19, 2012, 10 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/244,752, dated May 29, 2012, 35 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/244,751, dated Jun. 12, 2012, 47 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/244,751, dated Jul. 26, 2012, 19 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/244,752, dated Sep. 7, 2012, 32 pages. |
United States Patent and Trademark Office, “Restriction Requirement,” issued in connection with U.S. Appl. No. 12/263,331, dated Sep. 14, 2012, 9 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/263,331, dated Jan. 7, 2013, 43 pages. |
State Intellectual Property Office of China, “First Office Action,” issued in connection with Chinese Patent Application No. 200880123640.4, dated Feb. 29, 2012, 14 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/15019, dated Jun. 11, 2008, 5 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/15019, dated Jun. 11, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/15019, dated Sep. 17, 2009, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/14955, dated Jul. 3, 2008, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/14955, dated Jul. 3, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/14955, dated Sep. 8, 2009, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/16796, dated Jul. 3, 2008, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/16796, dated Jul. 3, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/16796, dated Sep. 8, 2009, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US06/031569, dated Feb. 20, 2007, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US06/031569, dated Feb. 20, 2007, 2 pages. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US06/031569, dated Mar. 4, 2008, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/20714, dated Apr. 8, 2008, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/20714, dated Apr. 8, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/20714, dated Sep. 8, 2009, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/17764, dated May 6, 2008, 7 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/17764, dated May 6, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/17764, dated Sep. 8, 2009, 8 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US07/20713, dated May 13, 2008, 5 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US07/20713, dated May 13, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US07/20713, dated Sep. 8, 2009, 6 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/009110, dated Feb. 20, 2009, 4 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/009110, dated Feb. 20, 2009, 2 pages. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/009110, dated Jan. 26, 2010, 5 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/75640, dated Nov. 7, 2008, 3 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/75640, dated Nov. 7, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/75640, dated Mar. 9, 2010, 4 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/78663, dated Dec. 5, 2008, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/78663, dated Dec. 5, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/78663, dated Apr. 7, 2010, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/82147, dated Jan. 21, 2009, 14 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/82147, dated Jan. 21, 2009, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/82147, dated May 4, 2010, 15 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/82149, dated Jan. 21, 2009, 14 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/82149, dated Jan. 21, 2009, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/82149, dated May 4, 2010, 15 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/75651, dated Nov. 28, 2008, 9 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/75651, dated Nov. 28, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/75651, dated Mar. 9, 2010, 10 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/85723, dated Mar. 20, 2009, 7 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/85723, dated Mar. 20, 2009, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/85723, dated Jun. 22, 2010, 8 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/85203, dated Feb. 27, 2009, 6 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/85203, dated Feb. 27, 2009, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/85203, dated Jun. 2, 2010, 7 pages. |
International Bureau, “Written Opinion of the International Searching Authority,” issued in connection with International Patent Application No. PCT/US08/75649, dated Nov. 19, 2008, 5 pages. |
International Bureau, “International Search Report,” issued in connection with International Patent Application No. PCT/US08/75649, dated Nov. 19, 2008, 1 page. |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US08/75649, dated Mar. 9, 2010, 6 pages. |
Aaker et al., “Warmth in Advertising: Measurement, Impact, and Sequence Effects,” The Journal of Consumer Research, vol. 12, No. 4, Mar. 1986, pp. 365-381, 18 pages. |
Ambler et al., “Ads on the Brain: A Neuro-Imaging Comparison of Cognitive and Affective Advertising Stimuli,” London Business School, Centre for Marketing Working Paper, No. 00-902, Mar. 2000, 23 pages. |
Ambler et al., “Salience and Choice: Neural Correlates of Shopping Decisions,” Psychology & Marketing, vol. 21, No. 4, Apr. 2004, pp. 247-261, 16 pages. |
Bagozzi et al., “The Role of Emotions in Marketing,” Journal of the Academy of Marketing Science, vol. 27, No. 2, 1999, pp. 184-206, 23 pages. |
Belch et al., “Psychophysiological and Cognitive Responses to Sex in Advertising,” Advances in Consumer Research, vol. 9, 1982, pp. 424-427, 6 pages. |
Blakeslee, “If You Have a ‘Buy Button’ in Your Brain, What Pushes It?” The New York Times, www.nytimes.com, Oct. 19, 2004, 3 pages. |
Braeutigam, “Neuroeconomics-From Neural Systems to Economic Behavior,” Brain Research Bulletin, vol. 67, 2005, pp. 355-360, 6 pages. |
Carter, “Mapping the Mind,” University of California Press, Berkeley, 1998, p. 28, 3 pages. |
Clarke et al., “EEG Analysis of Children with Attention-Deficit/Hyperactivity Disorder and Comorbid Reading Disabilities,” Journal of Learning Disabilities, vol. 35, No. 3, May-Jun. 2002, pp. 276-285, 10 pages. |
Crawford et al., “Self-Generated Happy and Sad Emotions in Low and Highly Hypnotizable Persons During Waking and Hypnosis: Laterality and Regional EEG Activity Differences,” International Journal of Psychophysiology, vol. 24, Dec. 1996, 28 pages. |
Desmet, “Measuring Emotion: Development and Application of an Instrument to Measure Emotional Responses to Products,” to be published in Funology: From Usability to Enjoyment, pp. 111-123, Kluwer Academic Publishers, Blythe et al., eds., 2004, 13 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/673,077, dated Jan. 7, 2016, 87 pages. |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 14/673,077, dated Jun. 9, 2016, 53 pages. |
United States Patent and Trademark Office, “Advisory Action,” issued in connection with U.S. Appl. No. 14/673,077, dated Aug. 30, 2016, 11 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 14/673,077, dated Sep. 23, 2016, 55 pages. |
Prior Publication Data

Number | Date | Country
---|---|---
20170127113 A1 | May 2017 | US
Provisional Applications

Number | Date | Country
---|---|---
60977035 | Oct 2007 | US
60977040 | Oct 2007 | US
60977042 | Oct 2007 | US
60977045 | Oct 2007 | US
60984260 | Oct 2007 | US
60984268 | Oct 2007 | US
60991591 | Nov 2007 | US
Related U.S. Application Data (Continuations)

Relation | Number | Date | Country
---|---|---|---
Parent | 14673077 | Mar 2015 | US
Child | 15400357 | | US
Parent | 13659592 | Oct 2012 | US
Child | 14673077 | | US
Parent | 12244751 | Oct 2008 | US
Child | 13659592 | | US
Parent | 12244752 | Oct 2008 | US
Child | 12244751 | | US