Electronic personal interactive device

Information

  • Patent Grant
  • Patent Number
    11,341,962
  • Date Filed
    Thursday, April 20, 2017
  • Date Issued
    Tuesday, May 24, 2022
Abstract
An interface device and method of use, comprising audio and image inputs; a processor for determining topics of interest, and receiving information of interest to the user from a remote resource; an audio-visual output for presenting an anthropomorphic object conveying the received information, having a selectively defined and adaptively alterable mood; an external communication device adapted to remotely communicate at least a voice conversation with a human user of the personal interface device. Also provided is a system and method adapted to receive logic for, synthesize, and engage in conversation dependent on received conversational logic and a personality.
Description
FIELD OF THE INVENTION

The present invention relates generally to consumer electronics and telecommunications, and, more particularly, to personal devices having social human-machine user interfaces.


BACKGROUND OF THE INVENTION

Many systems and methods intended for use by elderly people are known in the art. Elderly people as a group have less developed technological skills than younger generations. These people may also have various disabilities or degraded capabilities as compared to their youth. Further, elderly people tend to be retired, and thus do not spend their time focused on a vocation.


Speech recognition technologies, as described, for example, in Gupta, U.S. Pat. No. 6,138,095, incorporated herein by reference, are programmed or trained to recognize the words that a person is saying. Various methods of implementing these speech recognition technologies include associating the words spoken by a human with a dictionary lookup and error checker, or using neural networks which are trained to recognize words.


See also: U.S. Pat. Nos. 7,711,569, 7,711,571, 7,711,560, 7,711,559, 7,707,029, 7,702,512, 7,702,505, 7,698,137, 7,698,136, 7,698,131, 7,693,718, 7,693,717, 7,689,425, 7,689,424, 7,689,415, 7,689,404, 7,684,998, 7,684,983, 7,684,556, 7,680,667, 7,680,666, 7,680,663, 7,680,662, 7,680,661, 7,680,658, 7,680,514, 7,676,363, 7,672,847, 7,672,846, 7,672,841, US Patent App. Nos. 2010/0106505, 2010/0106497, 2010/0100384, 2010/0100378, 2010/0094626, 2010/0088101, 2010/0088098, 2010/0088097, 2010/0088096, 2010/0082343, 2010/0082340, 2010/0076765, 2010/0076764, 2010/0076758, 2010/0076757, 2010/0070274, 2010/0070273, 2010/0063820, 2010/0057462, 2010/0057461, 2010/0057457, 2010/0057451, 2010/0057450, 2010/0049525, 2010/0049521, 2010/0049516, 2010/0040207, 2010/0030560, 2010/0030559, 2010/0030400, 2010/0023332, 2010/0023331, 2010/0023329, 2010/0010814, 2010/0004932, 2010/0004930, 2009/0326941, 2009/0326937, 2009/0306977, 2009/0292538, 2009/0287486, 2009/0287484, 2009/0287483, 2009/0281809, 2009/0281806, 2009/0281804, 2009/0271201, each of which is expressly incorporated herein by reference.


The current scholarly trend is to use statistical modeling to determine whether a sound is a phoneme and whether a certain set of phonemes corresponds to a word. This method is discussed in detail in Turner, Statistical Methods for Natural Sounds (Thesis, University of London, 2010), incorporated herein by reference. Other scholars have applied Hidden Markov Models (HMMs) to speech recognition. Hidden Markov Models are probabilistic models that assume that at any given time, the system is in a state (e.g., uttering the first phoneme). In the next time-step, the system moves to another state with a certain probability (e.g., uttering the second phoneme, completing a word, or completing a sentence). The model keeps track of the current state and attempts to determine the next state in accordance with a set of rules. See, generally, Brown, Decoding HMMs using the k best paths: algorithms and applications, BMC Bioinformatics (2010), incorporated herein by reference, for a more complete discussion of the application of HMMs.
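By way of illustration only, the following is a minimal sketch of Viterbi decoding over a toy phoneme-level hidden Markov model of the kind described above. The states, observation labels, and probabilities are hypothetical and are not drawn from any cited reference; a practical recognizer would use acoustic and language models trained on real data.

```python
# Minimal Viterbi decoding over a toy hidden Markov model of phonemes.
# States, observations, and probabilities are illustrative only.

states = ["ph_K", "ph_AE", "ph_T"]          # hidden states: phonemes of "cat"
observations = ["burst", "vowel", "burst"]   # acoustic feature labels per frame

start_p = {"ph_K": 0.8, "ph_AE": 0.1, "ph_T": 0.1}
trans_p = {
    "ph_K":  {"ph_K": 0.2, "ph_AE": 0.7, "ph_T": 0.1},
    "ph_AE": {"ph_K": 0.1, "ph_AE": 0.3, "ph_T": 0.6},
    "ph_T":  {"ph_K": 0.1, "ph_AE": 0.2, "ph_T": 0.7},
}
emit_p = {
    "ph_K":  {"burst": 0.7, "vowel": 0.3},
    "ph_AE": {"burst": 0.1, "vowel": 0.9},
    "ph_T":  {"burst": 0.8, "vowel": 0.2},
}

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) for the most probable state sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states
            )
            V[t][s] = (prob, path)
    return max(V[-1].values())

probability, best_path = viterbi(observations, states, start_p, trans_p, emit_p)
print(best_path, probability)   # e.g. ['ph_K', 'ph_AE', 'ph_T']
```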


In addition to recognizing the words that a human has spoken, speech recognition software can also be programmed to determine the mood of a speaker, or to determine basic information that is apparent from the speaker's voice, tone, and pronunciation, such as the speaker's gender, approximate age, accent, and language. See, for example, Bohacek, U.S. Pat. No. 6,411,687, incorporated herein by reference, describing an implementation of these technologies. See also, Leeper, Speech Fluency, Effect of Age, Gender and Context, International Journal of Phoniatrics, Speech Therapy and Communication Pathology (1995), incorporated herein by reference, discussing the relationship between the age of the speaker, the gender of the speaker, and the context of the speech, in the fluency and word choice of the speaker. In a similar field of endeavor, Taylor, U.S. Pat. No. 6,853,971, teaches an application of speech recognition technology to determine the speaker's accent or dialect. See also: US App. 2007/0198261, US App. 2003/0110038, and U.S. Pat. No. 6,442,519, all incorporated herein by reference.


In addition, a computer with a camera attached thereto can be programmed to recognize facial expressions and facial gestures in order to ascertain the mood of a human. See, for example, Black, U.S. Pat. No. 5,774,591, incorporated herein by reference. One implementation of Black's technique is by comparing facial images with a library of known facial images that represent certain moods or emotions. An alternative implementation would ascertain the facial expression through neural networks trained to do so. Similarly, Kodachi, U.S. Pat. No. 6,659,857, incorporated herein by reference, teaches about the use of a “facial expression determination table” in a gaming situation so that a user's emotions can be determined. See also U.S. Pat. Nos. 6,088,040, 7,624,076, 7,003,139, 6,681,032, and US App. 2008/0101660.
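For example, the "library comparison" approach mentioned above could be sketched as a nearest-neighbor match of an incoming face descriptor against labeled prototype descriptors, as in the following illustrative Python fragment. The four-element feature vectors (e.g., mouth curvature, mouth openness, brow height, eye openness) and their values are hypothetical stand-ins for the output of a real face-analysis stage.

```python
# Nearest-neighbor matching of a face descriptor against a library of
# labeled expression prototypes.  Descriptor values are illustrative only.
import math

PROTOTYPES = {
    "happy":   [0.9, 0.4, 0.5, 0.6],
    "sad":     [0.1, 0.2, 0.3, 0.4],
    "angry":   [0.2, 0.3, 0.9, 0.5],
    "neutral": [0.5, 0.3, 0.5, 0.5],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_expression(descriptor):
    """Return the mood label of the closest prototype descriptor."""
    return min(PROTOTYPES, key=lambda label: euclidean(descriptor, PROTOTYPES[label]))

print(classify_expression([0.85, 0.45, 0.55, 0.6]))   # -> "happy"
```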


Takeuchi, “Communicative Facial Displays as a New Conversational Modality,” (1993), incorporated herein by reference, notes that facial expressions themselves could be communicative. Takeuchi's study compared a group of people who heard a voice only and a group of people who viewed a face saying the same words as the voice. The people who saw the face had a better understanding of the message, suggesting a communicative element in human facial expressions. Catrambone, “Anthropomorphic Agents as a User Interface Paradigm: Experimental Findings and a Framework for Research,” incorporated herein by reference, similarly, notes that users who learn computing with a human face on the computer screen guiding them through the process feel more comfortable with the machines as a result.


Lester goes even further, noting that “animated pedagogical agents” can be used to show a face to students as a complex task is demonstrated on a video or computer screen. The computer (through the face and the speaker) can interact with the students through a dialog. Lester, “Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments,” North Carolina State University (1999), incorporated herein by reference. Cassell, similarly, teaches about conversational agents. Cassell's “embodied conversational agents” (ECAs) are computer interfaces that are represented by human or animal bodies and are lifelike or believable in their interaction with the human user. Cassell requires ECAs to have the following features: the ability to recognize and respond to verbal and nonverbal input; the ability to generate verbal and nonverbal output; the ability to deal with conversational functions such as turn taking, feedback, and repair mechanisms; and the ability to give signals that indicate the state of the conversation, as well as to contribute new propositions to the discourse. Cassell, “Conversation as a System Framework: Designing Embodied Conversational Agents,” incorporated herein by reference.


Massaro continues the work on conversation theory by developing Baldi, a computer animated talking head. When speaking, Baldi imitates the intonations and facial expressions of humans. Baldi has been used in language tutoring for children with hearing loss. Massaro, “Developing and Evaluating Conversational Agents,” Perceptual Science Laboratory, University of California. In later developments, Baldi was also given a body so as to allow for communicative gesturing and was taught to speak multiple languages. Massaro, “A Multilingual Embodied Conversational Agent,” University of California, Santa Cruz (2005), incorporated herein by reference.


Bickmore continues Cassell's work on embodied conversational agents. Bickmore finds that, in ECAs, the nonverbal channel is crucial for social dialogue because it is used to provide social cues, such as attentiveness, positive affect, and liking and attraction. Facial expressions also mark shifts into and out of social activities. Also, there are many gestures, e.g. waving one's hand to hail a taxi, crossing one's arms and shaking one's head to say “No,” etc. that are essentially communicative in nature and could serve as substitutes for words.


Bickmore further developed a computerized real estate agent, Rea, where, “Rea has a fully articulated graphical body, can sense the user passively through cameras and audio input, and is capable of speech with intonation, facial display, and gestural output. The system currently consists of a large projection screen on which Rea is displayed and which the user stands in front of. Two cameras mounted on top of the projection screen track the user's head and hand positions in space. Users wear a microphone for capturing speech input.” Bickmore & Cassell, “Social Dialogue with Embodied Conversational Agents,” incorporated herein by reference.


Similar to the work of Bickmore and Cassell, Beskow at the Royal Institute of Technology in Stockholm, Sweden created Olga, a conversational agent with gestures that is able to engage in conversations with users, interpret gestures, and make its own gestures. Beskow, “Olga—A Conversational Agent with Gestures,” Royal Institute of Technology, incorporated herein by reference.


In “Social Cues in Animated Conversational Agents,” Louwerse et al. note that people who interact with ECAs tend to react to them just as they do to real people. People tend to follow traditional social rules and to express their personality in usual ways in conversations with computer-based agents. Louwerse, M. M., Graesser, A. C., Lu, S., & Mitchell, H. H. (2005). Social cues in animated conversational agents. Applied Cognitive Psychology, 19, 1-12, incorporated herein by reference.


In another paper, Beskow further teaches how to model the dynamics of articulation for a parameterized talking head based on the phonetic input. Beskow creates four models of articulation (and the corresponding facial movements). To achieve this result, Beskow makes use of neural networks. Beskow further notes several uses of “talking heads.” These include virtual language tutors, embodied conversational agents in spoken dialogue systems, and talking computer game characters. In the computer game area, proper visual speech movements are essential for the realism of the characters. (This factor also causes “dubbed” foreign films to appear unrealistic.) Beskow, “Trainable Articulatory Control Models for Visual Speech Synthesis” (2004), incorporated herein by reference.


Ezzat goes even further, presenting a technique in which a human subject is recorded by a video camera while uttering a predetermined speech corpus. A visual speech model is created from this recording. The computer can then synthesize novel utterances and show how the subject would move her head and mouth while making them. Ezzat creates a “multidimensional morphable model” to synthesize new, previously unseen mouth configurations from a small set of mouth image prototypes.


In a similar field of endeavor, Picard proposes computers that can respond to users' emotions. Picard's ECAs can be used as an experimental emotional aid, as a pre-emptive tool to avert user frustration, and as an emotional skill-building mirror.


In the context of a customer call center, Bushey, U.S. Pat. No. 7,224,790, incorporated herein by reference, discusses conducting a “verbal style analysis” to determine a customer's level of frustration and the customer's goals in calling customer service. The “verbal style analysis” takes into account the number of words that the customer uses and the method of contact. Based in part on the verbal style analysis, customers are segregated into behavioral groups, and each behavioral group is treated differently by the customer service representatives. Gong, US App. 2003/0187660, incorporated herein by reference, goes further than Bushey, teaching an “intelligent social agent” that receives a plurality of physiological data and forms a hypothesis regarding the “affective state of the user” based on this data. Gong also analyzes vocal and verbal content and integrates the analysis to ascertain the user's physiological state.


Mood can be determined by various biometrics. For example, the tone of a voice or music is suggestive of the mood. See Liu et al., Automatic Mood Detection from Acoustic Music Data, Johns Hopkins University Scholarship Library (2003). The mood can also be ascertained based on a person's statements. For example, if a person says, “I am angry,” then the person is most likely telling the truth. See Kent et al., Detection of Major and Minor Depression in Children and Adolescents, Journal of Child Psychology (2006). One's facial expression is another strong indicator of one's mood. See, e.g., Cloud, How to Lift Your Mood? Try Smiling, Time Magazine (Jan. 16, 2009).


Therefore, it is feasible for a human user to convey his mood to a machine with an audio and a visual input by speaking to the machine, thereby allowing the machine to read his voice tone and words, and by looking at the machine, thereby allowing the machine to read his facial expressions.


It is also possible to change a person's mood through a conversational interface. For example, when people around one are smiling and laughing, one is more likely to forget one's worries and to smile and laugh oneself. In order to change a person's mood through a conversational interface, the machine implementing the interface must first determine the starting mood of the user. The machine would then go through a series of “optimal transitions” seeking to change the mood of the user. This might not be a direct transition. Various theories discuss how a person's mood might be changed by people or other external influences. For example, Neumann, “Mood Contagion”: The Automatic Transfer of Mood Between persons, Journal of Personality and Social Psychology (2000), suggests that if people around one are openly experiencing a certain mood, one is likely to join them in experiencing said mood. Other scholars suggest that logical mood mediation might be used to persuade someone to be happy. See, e.g., DeLongis, The Impact of Daily Stress on Health and Mood: Psychological and Social Resources as Mediators, Journal of Personality and Social Psychology (1988). Schwarz notes that mood can be impacted by presenting stimuli that were previously associated with certain moods, e.g. the presentation of chocolate makes one happy because one was previously happy when one had chocolate. Schwarz, Mood and Persuasion: Affective States Influence the Processing of Persuasive Communications, in Advances in Experimental Social Psychology, Vol. 24 (Academic Press 1991). Time Magazine suggests that one can improve one's mood merely by smiling or changing one's facial expression to imitate the mood one wants to experience. Cloud, How to Lift Your Mood? Try Smiling. Time Magazine (Jan. 16, 2009).
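The "series of optimal transitions" described above can be illustrated as a search over a graph of mood states, as in the following Python sketch. The transition graph is hypothetical; in practice its edges would encode which mood shifts are plausibly achievable through conversational techniques such as those surveyed above.

```python
# Planning a sequence of mood transitions from a starting mood to a target
# mood using breadth-first search.  The transition graph is hypothetical.
from collections import deque

MOOD_TRANSITIONS = {
    "angry":     ["calm"],
    "calm":      ["content", "sad"],
    "sad":       ["calm", "nostalgic"],
    "nostalgic": ["content"],
    "content":   ["happy"],
    "happy":     [],
}

def plan_mood_path(start, goal):
    """Return the shortest list of moods leading from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in MOOD_TRANSITIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan_mood_path("angry", "happy"))  # ['angry', 'calm', 'content', 'happy']
```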


Liquid crystal display (LCD) screens are known in the art as well. An LCD screen is a thin, flat electronic visual display that uses the light modulating properties of liquid crystals. These are used in cell phones, smartphones, laptops, desktops, and televisions. See Huang, U.S. Pat. No. 6,437,975, incorporated herein by reference, for a detailed discussion of LCD screen technology.


Many other displays are known in the art. For example, three-dimensional televisions and monitors are available from Samsung Corp. and Philips Corp. One embodiment of the operation of three-dimensional television, described by Imsand in U.S. Pat. No. 4,723,159, involves taking two cameras and applying mathematical transforms to combine the two received images of an object into a single image, which can be displayed to a viewer. On its website, Samsung notes that its three-dimensional televisions operate by “display[ing] two separate but overlapping images of the same scene simultaneously, and at slightly different angles as well.” One of the images is intended to be perceived by the viewer's left eye. The other is intended to be perceived by the right eye. The human brain should convert the combination of the views into a three-dimensional image. See, generally, Samsung 3D Learning Resource, www.samsung.com/us/learningresources3D (last accessed May 10, 2010).


Projectors are also known in the art. These devices project an image from one screen to another. Thus, for example, a small image on a cellular phone screen that is difficult for an elderly person to perceive may be displayed as a larger image on a wall by connecting the cell phone with a projector. Similarly, a netbook with a small screen may be connected by a cable to a large plasma television or plasma screen. This would allow the images from the netbook to be displayed on the plasma display device.


Devices for forming alternative facial expressions are known in the art. There are many children's toys and pictures with changeable facial expressions. For example, Freynet, U.S. Pat. No. 6,146,721, incorporated herein by reference, teaches a toy having alternative facial expressions. An image of a face stored on a computer can be similarly presented on an LCD screen with a modified facial expression. See also U.S. Pat. Nos. 5,215,493, 5,902,169, 3,494,068, and 6,758,717, expressly incorporated herein by reference.


In addition, emergency detection systems taking input from cameras and microphones are known in the art. These systems are programmed to detect whether an emergency is ongoing and to immediately notify the relevant parties (e.g. police, ambulance, hospital or nursing home staff, etc.). One such emergency detection system is described by Lee, U.S. Pat. No. 6,456,695, expressly incorporated herein by reference. Lee suggests that an emergency call could be made when an emergency is detected, but does not explain how an automatic emergency detection would take place. However, Kirkor, U.S. Pat. No. 4,319,229, proposes a fire emergency detector comprising “three separate and diverse sensors . . . a heat detector, a smoke detector, and an infrared radiation detector.” Under Kirkor's invention, when a fire emergency is detected (through the combination of inputs to the sensors), an alarm is sounded to alert individuals in the building and the local fire department is notified via PSTN. In addition, some modern devices, for example, the Emfit Movement Monitor/Nighttime Motion Detection System, www.gosouthernmd.com/store/store/comersus_viewItem.asp?idProduct=35511, last accessed May 10, 2010, comprise a camera and a pressure sensor adapted to watch a sleeping person and to alert a caregiver when the sleeping patient is exhibiting unusual movements.


See, also (each of which is expressly incorporated herein by reference):

  • Andre, et al., “Employing AI Methods to Control the Behavior of Animated Interface Agents.”
  • Andre, et al., “The Automated Design of Believable Dialogues for Animated Presentation Teams”; in J. Cassell, S. Prevost, J. Sullivan, and E. Churchill: Embodied Conversational Agents, The MIT Press, pp. 220-255, 2000.
  • Aravamuden, U.S. Pat. No. 7,539,676, expressly incorporated herein by reference, teaches presenting content to a user based on its believed relevance, as inferred from the text query that the user entered and how the user responded to prior search results.
  • Atmmarketplace.com (2003) ‘New bank to bring back old ATM character,’ News Article, 7 Apr. 2003
  • Barrow, K (2000) ‘What's anthropomorphism got to with artificial intelligence? An investigation into the extent of anthropomorphism within the field of science’. Unpublished student dissertation, University of the West of England
  • Beale, et al., “Agent-Based Interaction,” in People and Computers IX: Proceedings of HCI '94, Glasgow, UK, August 1994, pp. 239-245.
  • Becker, et al., “Simulating the Emotion Dynamics of a Multimodal Conversational Agent.”
  • Bentahar, et al., “Towards a Formal Framework for Conversational Agents.”
  • Beskow, “Trainable Articulatory Control Models for Visual Speech Synthesis.”
  • Beskow, et al., “Olga-a Conversational Agent with Gestures,” In André, E. (Ed.), Proc of the IJCAI-97 Workshop on Animated Interface Agents: Making them Intelligent (pp. 39-44). Nagoya, Japan.
  • Beun, et al., “Embodied Conversational Agents: Effects on Memory Performance and Anthropomorphisation”; T. Rist et al. (Eds.): IVA 2003, LNAI 2792, pp. 315-319, 2003
  • Bickmore, et al., “Establishing and Maintaining Long-Term Human-Computer Relationships.”
  • Bickmore, et al., “Relational Agents: A Model and Implementation of Building User Trust.”
  • Bickmore, et al., “Social Dialogue with Embodied Conversational Agents”; T.H.E. Editor(s) (ed.), Book title, 1-6, pages 1-27.
  • Biever, C (2004) ‘Polite computers win users’ hearts and minds' News article, 17 Jul. 2004, New Scientist
  • Brennan, S E & Ohaeri, J O (1994) ‘Effects of message style on users' attributions toward agents.’ Proceedings of the ACM CHI '94 Human Factors in Computing Systems: Conference Companion, Boston, 24-28 Apr. 1994, 281-282.
  • Brennan, S E, Laurel, B, & Shneiderman, B (1992) ‘Anthropomorphism: from ELIZA to Terminator 2. Striking a balance’ Proceedings of the 1992 ACM/SIGCHI Conference on Human Factors in Computing Systems, New York: ACM Press, 67-70.
  • Cassell, “Embodied Conversational Agents: Representation and Intelligence in User Interface”; In press, AI Magazine.
  • Cassell, et al., “Animated Conversation: Rule-based Generation of Facial Expression, Gesture & Spoken Intonation for Multiple Conversational Agents”, Computer Graphics (1994), Volume: 28, Issue: Annual Conference Series, Publisher: ACM Press, Pages: 413-420.
  • Cassell, et al., “Conversation as a System Framework: Designing Embodied Conversational Agents.”
  • Cassell, et al., “Negotiated Collusion: Modeling Social Language and its Relationship Effects in Intelligent Agents”; User Modeling and User-Adapted Interaction 13: 89-132, 2003.
  • Catrambone, et al., “Anthropomorphic Agents as a User Interface Paradigm: Experimental Findings and a Framework for Research.”
  • Cole, et al., “Intelligent Animated Agents for Interactive Language Training.”
  • Dawson, Christian, W (2000) The Essence of Computing Projects: A Student's Guide, Prentice Hall
  • De Laere, K, Lundgren, D & Howe, S (1998) ‘The Electronic Mirror: Human-Computer Interaction and Change in Self-Appraisals’ Computers in Human Behavior, 14 (1) 43-59
  • Dix, A, Finlay, J, Abowd, G & Beale, R (2002) Human-Computer Interaction, Second Edition, Pearson Education, Harlow, Essex
  • Egges, et al., “Generic Personality and Emotion Simulation for Conversational Agents.”
  • Ezzat, et al., “Trainable Videorealistic Speech Animation.”
  • Flind, Allison, (2006) “Is Anthropomorphic Design a Viable Way of Enhancing Interface Usability?”, B. Sc. Thesis Apr. 14, 2005, University of West England, Bristol, www.anthropomorphism.co.uk/index.html, ww.anthropomorphism.co.uk/anthropomorphism.pdf
  • Fogg, B J & Nass, C (1997) ‘Silicon sycophants: the effects of computers that flatter,’ International Journal of Human-Computer Studies 46 551-561.
  • Forbes (1998) ‘Banks that chat and other irrelevancies’ Interview with Ben Shneiderman,
  • Gates, B. (1995) ‘Bill's speech at Lakeside High-School 1995.’
  • Grosz, “Attention, Intentions, and the Structure of Discourse,” Computational Linguistics, Volume 12, Number 3, July-September 1986, pp. 175-204.
  • Guthrie, S (1993) Faces in the clouds—a new theory of religion, Oxford U. Press, NY
  • Harper, W, M (1965) Statistics, Unwin, London
  • Harris, B (1996) ‘No stamps in cyberspace’ News article, August 1996, govtech.net
  • Hartmann, et al., “Implementing Expressive Gesture Synthesis for Embodied Conversational Agents.”
  • Hasegawa, et al., “A CG Tool for Constructing Anthropomorphic Interface Agents.”
  • Henderson, M, E, Lyons Morris, L, Taylor Fitz-Gibbon, C (1987) How to Measure Attitudes, 2nd Edition, Sage Publications
  • Heylen, et al., “Experimenting with the Gaze of a Conversational Agent.”
  • Hodgkinson, T (1993) ‘Radical mushroom reality,’ An interview with author Terence McKenna, Fortean Times Magazine, 71, October/November 1993
  • Horvitz, E (2005) ‘Lumiére Project: Bayesian Reasoning for Automated Assistance,’ research.microsoft.com/˜horvitz/lum.htm).
  • Horvitz, E, Breese, J, Heckerman, D, Hovel, D & Rommelse, K (1998) ‘The Lumiére project: Bayesian user modeling for inferring the goals and needs of software users’, Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Madison, Wis., 256-265, Morgan Kaufmann, San Francisco. http://research.microsoft.com/˜horvitz/lumiere.htm.
  • Isbister, K & Nass, C (2000). ‘Consistency of personality in interactive characters: verbal cues, non-verbal cues, and user characteristics.’ International Journal of Human-Computer Studies, 53 (1), 251-267.
  • Johnson, et al., “Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments.” International Journal of Artificial Intelligence in Education, 2000.
  • Ju, W, Nickell, S, Eng & Nass, C (2005) ‘Influence of colearner behavior on learner performance and attitudes’ Proceedings of the CHI Conference on Human Factors in Computing Systems 2005, Portland, Oreg.
  • Lanier, J (1995) ‘Agents of alienation’, Journal of Consciousness Studies, 2 (1), 76-81.
  • Laurel, B (1992) ‘In defense of anthropomorphism,’ speech delivered at the ACM SIGCHI 92, published on Laurel's website, www.tauzero.com/Brenda_Laurel/Severed_Heads/DefenseOfAnthropomorphism.html
  • Lester, et al., “The Persona Effect: Affective Impact of Animated Pedagogical Agents.”
  • Louwerse, et al., “Social Cues in Animated Conversational Agents”; Applied Cognitive Psychology, 19, 1-12.
  • Luck, Martin (1999) Your Student Research Project, Gower
  • Markoff, J (2000). ‘Microsoft sees software “agent” as way to avoid distractions.’ New York Times, Technology Section.
  • Massaro, et al., “A Multilingual Embodied Conversational Agent.” Proceedings of the 38th Hawaii International Conference on System Sciences-2005, pp. 1-8.
  • Massaro, et al., “Developing and Evaluating Conversational Agents”; Paper for First Workshop on Embodied Conversational Characters (WECC) Granlibakken Resort & Conference Center, November 1998, Lake Tahoe.
  • Mc Breen, et al., “Evaluating Humanoid Synthetic Agents in E-Retail Applications”; IEEE Transactions on Systems, Man and Cybernetics-Part A: Systems and Humans, Vol. 31, No. 5, September 2001, pp. 394-405.
  • McNeil, Patrick (1990) Research Methods, 2nd Edition, Routledge
  • Morkes, J, Kernal, H & Nass, C ‘Humour in Computer-Mediated Communication and Human-Computer Interaction’. Proceedings of the ACM CHI '98, Los Angeles, Calif., p. 215-216
  • Morris, “Conversational Agents for Game-Like Virtual Environments”; American Association for Artificial Intelligence, pp. 82-86.
  • Nass, C & Moon, Y (2000) ‘Machines and mindlessness: social responses to computers,’ Journal of social issues, 56 (1) 81-103
  • Nass, C (1998). Are computers scapegoats? Attributions of responsibility in human-computer interaction. International Journal of Human-Computer Studies, 49 (1), 79-94.
  • Nass, C, Moon, Y, Fogg, B J, Reeves, B, & Dryer, C (1995). ‘Can computer personalities be human personalities?’ International Journal of Human-Computer Studies, 43, 223-239.
  • Nass, C, Steuer, J & Tauber, E (1994) ‘Computers are social actors,’ Proceedings of the CHI Conference, 72-77. Boston, Mass.
  • Nass, C, Steuer, J S, Henriksen, L, & Dryer, C (1994) ‘Machines and social attributions: Performance assessments of computers subsequent to “self-” or “other-” evaluations,’ International Journal of Human-Computer Studies, 40, 543-559
  • Nass, et al., “Truth is Beauty: Researching Embodied Conversational Agents.”
  • New Scientist Archive (2004) ‘Strictly non-PC,’ News article about Microsoft's cultural insensitivity (22 Nov. 2004)
  • Office Assistant Demonstration (1996) ‘From Office 97 Comdex Roll Out.’
  • Picard, “Affective Computing”; M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 321, pp. 1-16.
  • Picard, et al., “Computers that Recognise and Respond to User Emotion: Theoretical and Practical Implications,” MIT Media Lab Tech Report 538, Interacting with Computers (2001).
  • Preece, J, Rogers, Y, Sharp, H, Benyon, D, Holland, S, Carey, T (1994) Human-Computer Interaction, Addison-Wesley
  • Reeves, B & Nass, C. (1996), The media equation—How people treat computers, television, and new media like real people and places. CSLI Publications, Cambridge
  • Resnik, P V & Lammers, H B (1986) ‘The influence of self-esteem on cognitive Responses to machine-like versus human-like computer feedback,’ The Journal of Social Psychology, 125 (6), 761-769
  • Rickenberg, R & Reeves, B (2000) The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. Proceedings of CHI 2000-Conference on Human Factors in Computing Systems. New York, N.Y., 49-56.
  • Roy, US App. 2009/0063147, expressly incorporated herein by reference, teaches phonetic, syntactic, and conceptual analysis-driven speech recognition.
  • Schneider, David I (1999) Essentials of Visual Basic 6.0 programming Prentice-Hall, NJ.
  • Shneiderman, B & Plaisant, C (2004) Designing the User Interface: Strategies for Effective Human-Computer Interaction Fourth Edition, Pearson Addison Wesley, London
  • Shneiderman, B (1992) Designing the User Interface: Strategies for Effective Human-Computer Interaction Second Edition, Addison Wesley Longman, London
  • Swartz, L (2003) ‘Why people hate the paperclip: labels, appearance, Behaviour and social responses to user interface agents,’ Student thesis, symbolic systems program, Stanford University, xenon.stanford.edu/˜lswartz/paperclip/
  • Takeuchi, et al., “Communicative Facial Displays as a New Conversational Modality.”
  • Technovelgy.com (2005) ‘Mac Mini and KITT the Knight Rider’ News article about the Mac Mini, 13 Jan. 2005, www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=311
  • Toastytech.com (2005) ‘Microsoft Bob Version 1.00’, Summary of Microsoft Bob, toastytech.com/guis/bob.html (10 Jan. 2005)
  • Tzeng, J-Y, (2004) ‘Towards a more civilised design, studying the effects of computers that apologise,’ International Journal of Human-Computer Studies, 61 319-345
  • Vertegaal, et al., “Why Conversational Agents Should Catch the Eye”, CHI 2000, 1-6 Apr. 2000, pp. 257-258.
  • Wetmore, J (1999) ‘Moving relationships: befriending the automobile to relieve anxiety’ www.drdriving.org/misc/anthropomorph.html


SUMMARY OF THE INVENTION

The present system and method provide a conversational interactive interface for an electronic system, which communicates using traditional human communication paradigms, and employs artificial intelligence to respond to the user. Many of the technologies employed by components of the system and method are available. For example, by combining the technologies of Gupta, U.S. Pat. No. 6,138,095 (word recognizer), Bohacek, U.S. Pat. No. 6,411,687 (mood detector based on speech), Black, U.S. Pat. No. 5,774,591 (facial expression to mood converter), and Bushey, U.S. Pat. No. 7,224,790 (analysis of word use to detect the attitude of the customer), the mood of a user of a computer with a camera and a microphone who is looking into the camera and speaking into the microphone can effectively be ascertained.


Conversation is a progression of exchanges (usually oral, but occasionally written) by participants. Each participant is a “learning system,” that is, a system that is adaptive and changes internally as a consequence of experience. This highly complex type of interaction is also quite powerful, for conversation is the means by which existing knowledge is conveyed, and new knowledge is generated. Conversation is different from other interactions, such as a mechanical response (e.g., a door that opens when one presses a button or an Internet search query that returns a pre-determinable set of results), because conversation is not a simple reactive system. It is a uniquely personal interaction to the degree that any output response must be based on the input prior statement, as well as other information about one's dealings with the other party to the conversation and former conversations. It often involves synthesis of ideas with new information or preexisting information not previously expressed for the purpose at hand, and can also involve a form of debate, where a party adopts a position or hypothesis that it does not hold firmly, in order to continue the interaction. As a result, the thesis or topic can itself evolve, since the conversation need not be purposeful. Indeed, for social conversation, the process is not intended to resolve or convince, but rather to entertain. One would normally converse very differently with one's spouse, one's child, one's social friend, and one's business colleague, thus making conversation dependent on the counterparty. See, generally, Gordon Pask, Conversation Theory, Applications in Education and Epistemology, Elsevier, 1976; Gordon Pask, Heinz von Foerster's Self-Organisation, the Progenitor of Conversation and Interaction Theories, 1996. We say that an output response is “conversationally relevant” to an input prior statement and course of dealings if the output builds on the input, and does more than merely repeat the information that can be found in the prior course of dealings. Often, the evolution of a conversation incorporates “new” facts, such as current events or changes from a prior conversation.


In spite of a large amount of technology created for the care of elderly people, a problem which many elderly people experience is loneliness. Many elderly individuals live alone or in nursing homes and do not have as much company as they would like due to the fact that many of their friends and families are far away, unavailable, sick or deceased. In addition, a large percentage of elderly people do not drive and have difficulty walking, making it difficult for them to transport themselves to visit their friends. Social and business networking websites, such as Facebook and LinkedIn, which are popular among younger generations, are not as popular with elderly people, creating a need in the elderly community for updates regarding their friends and families. One particular issue is a generation gap in technological proficiency, and comfort level with new types of man-machine interfaces. For example, older generations are more comfortable using a telephone than a computer for communications, and may also prefer “face to face” conversation to voice-only paradigms.


The present invention provides, according to one aspect, an automated device that allows humans, and especially elderly people, to engage in conversational interactions, when they are alone. Such automated devices may provide users with entertainment and relevant information about the world around them. Also, preferably, this device would contribute to the safety of the elderly people by using the camera and microphone to monitor the surroundings for emergency situations, and notify the appropriate people if an emergency takes place.


A preferred embodiment of the invention provides a personal interface device. The personal interface device is, for example, particularly adapted for use by an elderly or lonely person in need of social interaction.


In a first embodiment, the personal interface device has a microphone adapted to receive audio input, and a camera adapted to receive image input. Persons having ordinary skill in the art will recognize many such devices that have a microphone and a camera and could be used to implement this invention. For example, the invention could be implemented on a cell phone, a smartphone, such as a Blackberry or Apple iPhone, a PDA, such as an Apple iPad, Apple iPod or Amazon Kindle, a laptop computer, a desktop computer, or a special purpose computing machine designed solely to implement this invention. Preferably, the interface device comprises a single integral housing, such as a cellular telephone, adapted for video conferencing, in which both a video camera and image display face the user.


In a preferred embodiment, the device is responsive to voice commands, for example supporting natural language interaction. This embodiment is preferred because many elderly people have difficulty operating the small buttons on a typical keyboard or cell phone. Thus the oral interaction features, for both communication and command and control, are helpful.


Embodiments of the invention further comprise at least one processor executing software adapted to determine the mood of the user based on at least one of the audio input and the image input. This mood determination could take into account many factors. In addition to the actual words spoken by the user, the mood might be inferred from the content of the conversation, the user's tone, hand gestures, and facial expressions. The mood could be ascertained, for example, through an express input, a rule-based or logical system, through a trainable neural network, or other known means. For example, a user mood may be determined in a system according to an embodiment of the present invention which combines and jointly analyzes data derived from application of the technologies of Gupta (U.S. Pat. No. 6,138,095), which provides a word recognizer, Bohacek (U.S. Pat. No. 6,411,687), which provides a mood detector based on speech, Black (U.S. Pat. No. 5,774,591), which provides a system and method to ascertain mood based on facial expression, and Bushey (U.S. Pat. No. 7,224,790), which analyzes word use to detect the attitude of the customer.
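One simple rule-based realization of such a multi-channel mood determination is a weighted vote across the word-based, tone-based, and expression-based estimates, as in the following sketch. The individual channel classifiers are assumed to exist upstream, and the labels and weights are illustrative only.

```python
# Rule-based fusion of mood estimates from three channels: spoken words,
# voice tone, and facial expression.  Weights and labels are hypothetical.
from collections import Counter

CHANNEL_WEIGHTS = {"words": 1.0, "tone": 1.5, "face": 2.0}

def fuse_mood(words_mood, tone_mood, face_mood):
    """Weighted vote across channels; the highest-scoring mood label wins."""
    votes = Counter()
    votes[words_mood] += CHANNEL_WEIGHTS["words"]
    votes[tone_mood]  += CHANNEL_WEIGHTS["tone"]
    votes[face_mood]  += CHANNEL_WEIGHTS["face"]
    return votes.most_common(1)[0][0]

# e.g. the words sound neutral, but tone and expression both suggest sadness
print(fuse_mood("neutral", "sad", "sad"))   # -> "sad"
```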


In one embodiment, in order to have conversations that are interesting to the user, the device is adapted to receive information of interest to the user from at least one database or network, which is typically remote from the device, but may also include a local database and/or cache, and which may also be provided over a wireless or wired network, which may comprise a local area network, a wide area network, the Internet, or some combination. Information that is of interest to the user can also be gathered from many sources. For example, if the user is interested in finance, the device could receive information from Yahoo Finance and the Wall Street Journal. If the user is interested in sports, the device could automatically upload the latest scores and keep track of ongoing games to be able to discuss with the user. Also, many elderly people are interested in their families, but rarely communicate with them. The device might therefore also gather information about the family through social networking websites, such as Facebook and LinkedIn. Optionally, the device might also track newspaper or other news stories about family members. In one embodiment, artificial intelligence techniques may be applied to make sure that the news story is likely to be about the family member and not about someone with the same name. For example, if a grandson recently graduated from law school, it is likely that the grandson passed the local Bar Exam, but unlikely that the grandson committed an armed robbery on the other side of the country. In another embodiment, the device could notify the user when an interesting item of information is received, or indeed raise this as part of the “conversation” which is supported by other aspects of the system and method. Therefore, the device could proactively initiate a conversation with the user under such a circumstance, or respond in a contextually appropriate manner to convey the new information. A preferred embodiment of this feature would ensure that the user was present and available to talk before offering to initiate a conversation. Thus, for example, if there were other people present already engaged in conversation (as determined by the audio information input and/or image information input), an interruption might be both unwarranted and unwelcome.


The gathering of information might be done electronically, by an automatic search, an RSS (most commonly expanded as “Really Simple Syndication” but sometimes “Rich Site Summary”) feed, or a similar technique. The automatic information gathering could take place without a prompt or other action from the user. Alternatively, in one embodiment, the device communicates with a remote entity (e.g., a call center employee), who may be someone other than the user-selected person displayed on the screen, and who communicates information in response to the requests of the user. In one embodiment, the remote entity is a human being who is responsible for keeping the conversation interesting for the user and for ensuring the truth and veracity of the information being provided. This embodiment is useful because it ensures that a software bug would not report something that is upsetting or hurtful to the user.
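Unprompted RSS-based gathering of this kind could be sketched as follows using only the Python standard library. The feed URL and keyword list are placeholders; a deployed device would derive them from the user's profile and interests.

```python
# Fetch an RSS feed and keep only the headlines that match the user's
# known interests.  The feed URL and keywords are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_rss_headlines(feed_url, keywords):
    """Return feed item titles that mention any of the user's interests."""
    with urllib.request.urlopen(feed_url, timeout=10) as response:
        tree = ET.parse(response)
    titles = [item.findtext("title", default="") for item in tree.iter("item")]
    return [t for t in titles if any(k.lower() in t.lower() for k in keywords)]

if __name__ == "__main__":
    # Hypothetical feed address; substitute a real sports or finance feed.
    matches = fetch_rss_headlines("https://example.com/sports.rss", ["Yankees"])
    for headline in matches:
        print("Possible conversation topic:", headline)
```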


In various embodiments, the device has a display. The display may, for example, present an image of a face of a person. The person could be, for example, anyone of whom a photograph or image is available, or even a synthetic person (avatar). It could be a spouse, a relative, or a friend who is living or dead. The image is preferably animated in an anthropomorphically accurate manner, thus producing an anthropomorphic interface. The interface may adopt mannerisms from the person depicted, or the mood and presentation may be completely synthetic.


The device preferably also has at least one speaker. The speaker is adapted to speak in a voice associated with the gender of the person on the display. In one embodiment, the voice could also be associated with the race, age, accent, profession, and background of the person in the display. In one embodiment, if samples of the person's voice and speech are available, the device could be programmed to imitate the voice.


Also, the invention features at least one programmable processor that is programmed with computer executable code, stored in a non-transitory computer-readable medium such as flash memory or magnetic media, which when executed is adapted to respond to the user's oral requests with at least audio output that is conversationally relevant to the audio input. As noted above, the audio output is preferably in the voice of the person whose image appears on the display, and both of these may be user selected. In one embodiment, the processor stores information of interest to the user locally, and is able to respond to the user's queries quickly, even if remote communication is unavailable. For example, a user might ask about a score in the recent Yankees game. Because the device “knows” (from previous conversations) that the user is a Yankees fan, the processor will have already uploaded the information and is able to report it to the user. In another embodiment, the device is connected to a remote system, such as a call center, where the employees look up information in response to user requests. Under this “concierge” embodiment, the device does not need to predict the conversation topics, and the accuracy of the information provided is verified by a human being.
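The local pre-fetching described above can be illustrated with the following cache sketch: topics the user has previously shown interest in are refreshed in the background, so a later question can be answered even when remote communication is unavailable. The fetch function here is a stand-in for a real network lookup, and the staleness threshold is arbitrary.

```python
# Pre-fetch cache for topics of interest.  fetch_remote() is a stand-in
# for a real network lookup; the staleness threshold is illustrative.
import time

class InterestCache:
    def __init__(self, fetch_remote, max_age_seconds=3600):
        self.fetch_remote = fetch_remote
        self.max_age = max_age_seconds
        self.entries = {}                 # topic -> (timestamp, payload)

    def refresh(self, topics):
        """Background pre-fetch of every known topic of interest."""
        for topic in topics:
            try:
                self.entries[topic] = (time.time(), self.fetch_remote(topic))
            except OSError:
                pass                      # keep the stale entry if offline

    def answer(self, topic):
        """Answer from cache, noting when the entry may be stale."""
        if topic not in self.entries:
            return "I don't have anything on that yet."
        fetched_at, payload = self.entries[topic]
        stale = (time.time() - fetched_at) > self.max_age
        return payload + (" (as of my last update)" if stale else "")

# Usage with a fake fetcher standing in for a real sports-score service.
cache = InterestCache(lambda topic: f"Latest on {topic}: Yankees won 5-3.")
cache.refresh(["Yankees game"])
print(cache.answer("Yankees game"))
```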


In a preferred embodiment, the processor implementing the invention is further adapted to receive input from the microphone and/or the camera and to process the input to determine the existence of an emergency. The emergency could be detected either based on a rule-based (logical) system or based on a neural network trained to detect various emergency scenarios. If an emergency is detected, the processor might inform an emergency assistance services center, which is contacted, for example, through a cellular telephone network (e.g., e911), a cellular data network, or the Internet, or might produce a local audio and/or visual alert. Emergency assistance services may include, for example, police, fire, ambulance, nursing home staff, hospital staff, and/or family members. The device could be further adapted to provide information about the emergency to emergency assistance personnel. For example, the device could store a video recording of events taking place immediately before the accident, and/or communicate live audio and/or video.
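A rule-based (logical) variant of such emergency detection could be sketched as follows, combining coarse audio and video cues. The thresholds, cue names, and distress phrases are hypothetical; a deployed system would tune them and would likely combine them with a trained classifier.

```python
# Rule-based emergency detection from coarse audio/video features.
# Thresholds and phrases are illustrative only.
EMERGENCY_PHRASES = ("help", "fallen", "ambulance", "can't breathe")

def detect_emergency(transcript, audio_level_db, minutes_without_motion):
    """Return (is_emergency, reason) from simple sensor-derived features."""
    text = transcript.lower()
    if any(phrase in text for phrase in EMERGENCY_PHRASES):
        return True, "distress phrase spoken"
    if audio_level_db > 90:
        return True, "very loud noise (possible fall or breaking glass)"
    if minutes_without_motion > 180:
        return True, "no movement detected for an extended period"
    return False, ""

is_emergency, reason = detect_emergency("help I have fallen", 55, 2)
if is_emergency:
    print("Contacting emergency assistance services:", reason)
```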


Another embodiment of the invention is directed to a machine-implemented method of engaging in a conversation with a user. In the first step, the machine receives audio and visual input from the user. Such input could come from a microphone and camera connected to the machine. Next, the machine determines the mood of the user based on at least one of the audio input and the visual input. To do this, the machine considers features including facial expressions and gestures, hand gestures, voice tone, etc. In the following step, the machine presents to the user a face of a user-selected person or another image, wherein the facial expression of the person depends on, or is responsive to, the user's mood. The person could be anyone of whom a photograph is available, for example, a dead spouse or friend or relative with whom the user wishes that she were speaking. Alternatively, the user-selected person could be a famous individual, such as the President. If the user does not select a person, a default will be provided. The device may develop its own “personality” based on a starting state, and the various interactions with the user.
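The ordering of these steps can be summarized in the following control-flow sketch. Every helper function here is a stub standing in for a real speech, vision, rendering, or dialogue component; only the sequence of operations is meant to be illustrative.

```python
# Control flow for one conversational turn, in the order described above.
# All helpers are stubs; a real system would replace each with its own
# speech-recognition, vision, rendering, and dialogue components.

def capture_audio():            return "Tell me about the weather"
def capture_frame():            return "frame-bytes"
def estimate_mood(audio, frame): return "cheerful"
def render_face(person, mood):  print(f"[showing {person} with a {mood} expression]")
def generate_reply(utterance, mood): return "It should be sunny this afternoon."
def speak(text, voice, tone):   print(f"[{voice}, {tone} tone] {text}")

def conversation_turn(selected_person="grandfather", voice="male"):
    audio = capture_audio()                 # step 1: audio input
    frame = capture_frame()                 # step 1: visual input
    mood = estimate_mood(audio, frame)      # step 2: determine the user's mood
    render_face(selected_person, mood)      # step 3: face responsive to mood
    reply = generate_reply(audio, mood)     # step 4: conversationally relevant reply
    speak(reply, voice, tone=mood)          # step 5: voice matched to the persona

conversation_turn()
```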


In a preferred embodiment, the machine receives information of interest to a user from a database or network. For example, if a user is interested in weather, the machine might upload weather data to be able to “discuss” the weather intelligently. If the user is interested in college football, the machine might follow recent games and “learn” about key plays. In one embodiment, the current conversation could also be taken into account in determining the information that is relevant to the machine's data mining.


Finally, the last step involves providing audio output in a voice associated with a gender of the user-selected person, the tone of the voice being dependent on at least the mood of the user, wherein the audio output is conversationally relevant to the audio input from the user.


In an embodiment of the invention where the machine initiates a conversation with the user, the first step is to receive information of interest from at least one database or network, such as the Internet. The next step is to request to initiate a conversation with the user. Optionally, the machine could check that the user is present and available before offering to initiate a conversation. The machine would then receive from the user an audio input (words spoken into a microphone) and visual input (the user would look at the screen and into a camera). The user would then be presented with an image of the person he selected to view on the screen. The facial expression of the person would be dependent on the mood of the user. In one embodiment, the machine would either imitate the mood of the user or try to cheer up the user and improve his mood. Finally, the machine would provide audio output in a voice associated with the gender of the user-selected person on the screen. The tone of the voice will be dependent on the mood of the user. The audio output will be conversationally relevant to the audio input from the user.
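The optional "check before interrupting" behavior could be realized along the lines of the following sketch: the device only offers to start a conversation if it has something new to say and the user appears to be present and not already in conversation. The boolean inputs are hypothetical outputs of the camera and microphone pipeline.

```python
# Decide whether to proactively offer a conversation.  The detector inputs
# are hypothetical booleans produced by the camera/microphone pipeline.

def should_initiate(user_visible, other_voices_detected, user_speaking, new_items):
    if not new_items:
        return False            # nothing interesting to bring up
    if not user_visible:
        return False            # user is not in front of the device
    if other_voices_detected or user_speaking:
        return False            # an interruption would be unwelcome
    return True

if should_initiate(user_visible=True, other_voices_detected=False,
                   user_speaking=False, new_items=["Jimmy graduated law school"]):
    print("Excuse me, I just heard some news about Jimmy. Would you like to talk?")
```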


Persons skilled in the art will recognize many forms of hardware which could implement this invention. For example, a user interface system may be provided by an HP Pavilion dv4t laptop computer, which has a microphone, video camera, display screen, speakers, processor, and wireless local area network communications, with capacity for Bluetooth communication to a headset and wide area networking (cellular data connection), and thus features key elements of various embodiments of the invention in the body of the computer. If the laptop or desktop computer does not have any of these features, an external screen, webcam, microphone, and speakers could be used. Alternatively, aspects of the invention could be implemented on a smartphone, such as the Apple iPhone or a Google/Motorola Android “Droid.” However, an inconvenience in these devices is that the camera usually faces away from the user, such that the user cannot simultaneously look at the screen and into the camera. This problem can be remedied by connecting an iPhone 3G with an external camera or screen or by positioning mirrors such that the user can see the screen while the camera is facing a reflection of the user.


Almost any modern operating system can be used to implement this invention. For example, one embodiment can run on Windows 7. Another embodiment can run on Linux. Yet another embodiment can be implemented on Apple Mac OS X. Also, an embodiment can be run as an Apple iPhone App, a Windows Mobile 6.5 or 7.0 App, a RIM Blackberry App, an Android App, or a Palm App. The system need not be implemented as a single application, except on systems which limit multitasking, e.g., Apple iPhone, and therefore may be provided as a set of cooperating software modules. The advantage of a modular architecture, especially with an open application programming interface, is that it allows replacement and/or upgrade of different modules without replacing the entire suite of software. Likewise, this permits competition between providers for the best module, operating within a common infrastructure.
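One way such an open application programming interface for interchangeable modules could look is sketched below: conversation-logic providers implement a small common interface so that modules can be swapped without replacing the rest of the software. The interface and provider names are hypothetical.

```python
# A hypothetical open interface for pluggable conversation-logic modules.
from abc import ABC, abstractmethod

class ConversationModule(ABC):
    @abstractmethod
    def respond(self, utterance: str, user_profile: dict) -> str:
        """Return a conversationally relevant reply to the utterance."""

class EchoProvider(ConversationModule):
    """Trivial placeholder provider; a commercial module would plug in here."""
    def respond(self, utterance, user_profile):
        name = user_profile.get("name", "there")
        return f"That's interesting, {name}. Tell me more about {utterance.lower()}."

def load_module(provider: ConversationModule) -> ConversationModule:
    return provider            # in practice, selected by configuration

module = load_module(EchoProvider())
print(module.respond("The Yankees game", {"name": "Rose"}))
```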


Thus, for example, the conversation logic provided to synthesize past communications and external data sources may be designed in different ways. Rather than mandating a single system, this module may be competitively provided from different providers, such as Google, Microsoft, Yahoo!, or other providers with proprietary databases and/or algorithms. Likewise, in some cases, a commercial subsidy may be available from a sponsor or advertiser for display or discussion of its products, presumably within the context of the conversation. Thus, for example, if the subject of “vacation” is raised, the agent within the device might respond by discussing a sponsor's vacation offering. The user might say: “I hate sitting here—I want to go on vacation somewhere fun!”. The device, recognizing the word “vacation” in the context of an open-ended declarative, might respond: “early summer is a great time to go to Florida, before the hurricane season. Hilton Hotels are having a timeshare promotion like the one you went on last year. You can invite grandson Jimmy, who did well in school this year.” The user may respond: “that's a great idea. How much does it cost? And I don't want to sit in an endless timeshare sales pitch!” The device might then respond: “If you sit in the sales pitch, which is 90 minutes, you get $300 off the hotel rate, plus it keeps you out of the sun midday. Besides, your friend Wendy Montclair owns a timeshare there and wrote good things about it on her blog. You always liked Wendy.” The user might respond: “I don't like her anymore. She's going out with Snidely Whiplash!” The device might then respond, “You're joking. Snidely Whiplash is a cartoon character from Dudley Do-Right. Besides, the timeshare you now own went up in value, and you can sell it at a profit to buy this one.” The user might respond, “I bought the last one to be near Harry. He's a good friend.” The conversational interface might respond: “I just checked; Harry Lefkowitz passed away last month at age 79. His obituary is in the Times. Would you like me to read it to you?”


As can be seen from this exchange, the conversational interface seeks to synthesize information, some of which can be gathered in real time based on the context of the conversation, and may optionally have commercial motivation. This motivation or biasing is generally not too strong, since that might undermine the conversational value of the device, but the commercial biasing might be used to reduce the acquisition and/or usage costs of the device, and adaptively provide useful information to the user.


In another embodiment, ads and incentives may be brokered in real time by a remote database. That is, there is no predetermined commercial biasing, but after the user interacts with the device to trigger a “search,” a commercial response may be provided, perhaps accompanied by “organic” responses, which can then be presented to the user or synthesized into the conversation. For example, the remote system may have “ads” that are specifically generated for this system and are communicated with sophisticated logic and perhaps images or voices. An example of this is a T-Mobile ad presented conversationally by a Catherine Zeta Jones avatar, talking with the user about the service and products, using her voice and likeness. Assuming the user is a fan, this “personalized” communication may be welcomed, in place of the normal images and voices of the interface. Special rules may be provided regarding what information is uploaded from the device to a remote network, in order to preserve privacy, but in general, an ad-hoc persona provided to the device may inherit the knowledge base and user profile database of the system. Indeed, this paradigm may form a new type of “website,” in which the information is conveyed conversationally, and not as a set of static or database-driven visual or audio-visual depictions.


Yet another embodiment does not require the use of a laptop or desktop computer. Instead, the user could dial a phone number from a home, office, or cellular phone and turn on the television to a prearranged channel. The television would preferably be connected to the cable or telephone company's network, such that the cable or telephone company would know which video output to provide. The telephone would be used to obtain audio input from the user. Note that video input from the user is not provided here.


The software for running this app could be programmed in almost any programming language, such as Java or C++. Microphones, speakers, and video cameras typically have drivers for providing input or output. Also, Skype provides a video calling platform that receives video and audio input from a user. Skype can be modified such that, instead of calling a second user, a user would “call” an avatar implementing the present invention, which would use the words the user speaks, as well as the audio and video input provided by the Skype software, to make conversationally relevant responses to the user.


It is therefore an object to provide a method, and system for performing the method comprising: receiving audio-visual information; determining at least one of a topic of interest to a user and a query by a user, dependent on received audio-visual information; presenting an anthropomorphic object through an audio-visual output controlled by at least one automated processor, conveying information of interest to the user, dependent on at least one of the determined topic of interest and the query; and telecommunicating audio-visual information through a telecommunication interface. The anthropomorphic object may have an associated anthropomorphic mood which is selectively varied in dependence on at least one of the audio-visual information input, the topic of interest, and the received information.


The receiving, presenting and telecommunicating may be performed using a self-contained cellular telephone communication device. The system may respond to spoken commands. The system may determine an existence of an emergency condition. The system may automatically telecommunicate information about the emergency condition without required human intervention. The emergency condition may be automatically telecommunicated with a responder selected from one or more of the group consisting of police, fire, and emergency medical. The query or topic of interest may be automatically derived from the audio-visual information input and communicated remotely from the device through the Internet. The system may automatically interact with a social networking website and/or an Internet search engine and/or a call center through the telecommunication interface. The system may respond to the social networking website, Internet search engine, or call center by transmitting audio-visual information. The system may automatically receive at least one unit of information of interest to the user from a resource remote from the device substantially without requiring an express request from the user, and may further proactively interact with the user in response to receiving said at least one unit of information. The anthropomorphic object may be modified to emulate a received image of a person. The audio-visual output may be configured to emulate a voice corresponding to characteristics of the person represented in the received image of the person. The system may present at least one advertisement responsive to at least one of the topic of interest and the query, and financially account for at least one of a presentation of the at least one advertisement and a user interaction with the at least one advertisement. The system may generate structured light, and capture three-dimensional information based at least on the generated structured light. The system may capture a user gesture, and control the anthropomorphic object in dependence on the user gesture. The system may automatically generate a user profile based on at least prior interaction with the user.


It is a further object to provide a user interface device, and method of use, comprising: an audio-visual information input configured to receive information sufficient to determine at least one of a topic of interest to a user and a query by a user, dependent on received audio-visual information; at least one audio-visual output configured to present an anthropomorphic object controlled by at least one automated processor, conveying information of interest to the user, dependent on at least one of the determined topic of interest and the query; and an audio-visual telecommunication interface. The at least one automated processor may control the anthropomorphic object to have an associated anthropomorphic mood which is selectively varied in dependence on at least one of the audio-visual information input, the topic of interest, and the received information.


The audio-visual information input and audio-visual output may be implemented on a self-contained cellular telephone communication device. The at least one automated processor may be configured to respond to spoken commands, and to process the received information and to determine an emergency condition. The at least one processor may be configured to automatically telecommunicate information about the determined emergency condition without required human intervention. The determined emergency condition may be automatically telecommunicated with a responder selected from one or more of the group consisting of police, fire, and emergency medical. The system may automatically interact with a social networking website based on at least an implicit user command. The system may be configured to automatically interact with a call center, and to automatically respond to the call center to transmit audio-visual information. The at least one processor may be configured to automatically receive at least one unit of information of interest to the user from a resource remote from the device substantially without requiring an express request from the user and to initiate an interaction with the user in response to receiving said at least one unit of information. The anthropomorphic object may be configured to represent a received image of a person and to provide an audio output in a voice corresponding to a characteristic of the received image of the person. The at least one processor may be configured to present at least one advertisement responsive to at least one of the topic of interest and the query and to permit the user to interact with the advertisement. The audio-visual information input may comprise a structured light image capture device. The at least one processor may be configured to automatically generate a user profile based on at least a prior interaction of the user. The mood may correspond to a human emotional state, and the at least one processor may be configured to determine a user emotional state based on at least the audio-visual information.


It is a further object to provide a method comprising: defining an automated interactive interface having an anthropomorphic personality characteristic, for semantically interacting with a human user to receive user input and present information in a conversational style; determining at least one of a topic of interest to a user dependent on the received user input; automatically generating a query seeking information corresponding to the topic of interest from a database; receiving information of interest to the user from the database, comprising at least a set of facts or information; and providing at least a portion of the received facts or information to the user through the automated interactive interface, in accordance with the conversational style, responsive to the received user input, and the information of interest. The conversational style may be defined by a set of conversational logic comprising at least a persistent portion and an information of interest responsive portion. The anthropomorphic personality characteristic may comprise an automatically controlled human emotional state, the human emotional state being controlled responsive to at least the received user input. Telecommunications with the database may be conducted through a wireless network interface.


It is another object to provide a user interface system comprising an interactive interface; and at least one automated processor configured to control the interactive interface to provide an anthropomorphic personality characteristic, configured to semantically interact with a human user to receive user input and present information in a conversational style; determine at least one of a topic of interest to a user dependent on the received user input; automatically generate a query seeking information corresponding to the topic of interest from a database; receive information of interest to the user from the database, comprising at least a set of facts or information; and provide at least a portion of the received facts or information to the user through the interactive interface, in accordance with the conversational style, responsive to the received user input, and the information of interest. The conversational style may be defined by a set of conversational logic comprising at least a persistent portion and an information of interest responsive portion. The anthropomorphic personality characteristic may comprise a human emotional state, the human emotional state being controlled responsive to at least the received user input. A wireless network interface telecommunications port may be provided, configured to communicate with the database.


Another object provides a method comprising: defining an automated interactive interface having an artificial intelligence-based anthropomorphic personality, configured to semantically interact with a human user through an audio-visual interface, to receive user input and present information in a conversational style; determining at least one of a topic of interest to a user dependent on at least the received user input and a history of interaction with the user; automatically generating a query seeking information corresponding to the topic of interest from a remote database through a telecommunication port; receiving information of interest to the user from the remote database through the telecommunication port, comprising at least a set of facts or information; and controlling the automated interactive interface to convey the facts or information to the user in the conversational style, subject to user interruption and modification of the topic of interest.


A still further object provides a system, comprising: a user interface, comprising a video output port, an audio output port, a camera, a structured lighting generator, and an audio input port; a telecommunication interface, configured to communicate at least a voice conversation through an Internet interface; and at least one processor, configured to receive user input from the user interface, to generate signals for presentation through the user interface, and to control the telecommunication interface, the at least one processor being responsive to at least one user gesture captured by the camera in conjunction with the structured lighting generator to provide control commands for voice conversation communication.


Another object provides a system and method for presenting information to a user, comprising: generating a data file corresponding to a topic of information, the data file comprising facts and conversational logic; communicating the data file to a conversational processor system, having a human user interface configured to communicate a conversational semantic dialog with a user; processing the data file in conjunction with a past state of the conversational semantic dialog with the conversational processor; outputting through the human user interface a first semantic construct in dependence on at least the data file; receiving, after outputting said first semantic construct, through the human user interface a semantic user input; and outputting, after receiving said semantic user input, through the human user interface, a conversationally appropriate second semantic construct in dependence on at least the data file and said semantic user input. The method may further comprise receiving a second data file comprising at least one additional fact, after said receiving said semantic user input, wherein said conversationally appropriate second semantic construct is generated in dependence on at least the second data file.


These and other objects will become apparent from a review of the preferred embodiments and figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary machine implementing an embodiment of the present invention.



FIG. 2 illustrates a flowchart of a method implementing an embodiment of the present invention.



FIG. 3 illustrates an embodiment of this invention which can be run on a substantially arbitrary cell phone with low processing abilities.



FIG. 4 illustrates a flowchart for a processor implementing an embodiment of the present invention.



FIG. 5 illustrates a smart clock radio implementing an embodiment of the present invention.



FIG. 6 illustrates a television with a set-top box implementing an embodiment of the present invention.



FIG. 7 illustrates a special purpose robot implementing an embodiment of the present invention.



FIG. 8 shows a prior art computer system.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Example 1
Cell Phone


FIG. 1 illustrates an exemplary machine 100 that can be used to implement an embodiment of the present invention. The machine comprises a microphone 110 adapted to receive audio information input and a camera 120 adapted to receive image information input. The camera 120 preferably faces the user. There are one or more speakers 130 for audio output (e.g., voice reproduction) and a display 140, which also preferably faces the user. There is also a processor (not illustrated in FIG. 1, but an exemplary processor appears in FIG. 4), and the machine is preferably at least sometimes able to connect to the Internet or a remote database server which stores a variety of human-interest information. The image 150 in display 140 is preferably the face of a person who is selected by the user. The face may also be of another species, or completely synthetic. In one embodiment, the lips of image 150 move as image 150 speaks, and image 150's facial expression is determined to convey an anthropomorphic mood, which itself may be responsive to the mood of the user, as signaled by the audio and image input through microphone 110 and camera 120. The mood of the user may be determined from the words spoken by the user, the voice tone of the user, the facial expression and gestures of the user, the hand gestures of the user, etc. The device 100 may be configured as a cellular telephone or so-called smartphone, but persons having ordinary skill in the art will realize that this invention could be implemented in many other form factors and configurations. For example, the device could be run on a cell phone, a smart phone (e.g., Blackberry, Apple iPhone), a PDA (e.g., Apple iPod, Apple iPad, Amazon Kindle), a laptop computer, a desktop computer, or a special purpose computing machine, with relatively minor modifications. The interface may be used for various consumer electronics devices, such as automobiles, televisions, set-top boxes, stereo equipment, kitchen appliances, thermostats and HVAC equipment, laundry appliances, and the like. The interface may be employed in public venues, such as vending machines and ATMs. In some cases, the interface may be an audio-only interface, in which imaging may be unidirectional or absent. In audio-only systems, the interface seeks to conduct an intelligent conversational dialog and may be part of a call center or interactive voice response system. Thus, for example, the technology might be employed to make waiting queues for call centers more interesting and tolerable for users.



FIG. 2 is a flowchart 200 illustrating the operation of one embodiment of the invention. In step 210, the user Ulysses looks into the camera and speaks into the microphone. Preferably, the user would naturally be looking into the camera because it is located near the screen where an image of a person is displayed. The person could be anyone whom the user selects, of whom the user can provide a photograph. For example, it might be a deceased friend or spouse, or a friend or relative who lives far away and visits rarely. Alternatively, the image might be of a famous person. In the example, the image in the machine (not illustrated) is of Ulysses' wife, Penelope.


In the example, in step 210, Ulysses says, “Is my grandson James partying instead of studying?” Ulysses has an angry voice and a mad facial expression. In step 220, the machine detects the mood of the user (angry/mad) based on audio input (angry voice) and image input (mad facial expression). This detection is done by one or more processors, which may be, for example, a Qualcomm Snapdragon processor. Also, the one or more processors are involved in detecting the meaning of the speech, such that the machine would be able to provide a conversationally relevant response that is at least partially responsive to any query or comment the user makes, and builds on the user's last statement, in the context of this conversation and the course of dealings between the machine and the user. Roy, US App. 2009/0063147, incorporated herein by reference, discusses an exemplary phonetic, syntactic and conceptual analysis driven speech recognition system. Roy's system, or a similar technology, could be used to map the words and grammatical structures uttered by the user to a “meaning”, which could then be responded to, with a response converted back to speech, presented in conjunction with an anthropomorphic avatar on the screen, in order to provide a conversationally relevant output. Another embodiment of this invention might use hierarchical stacked neural networks, such as those described by Commons, U.S. Pat. No. 7,613,663, incorporated herein by reference, in order to detect the phonemes the user pronounces and to convert those phonemes into meaningful words and sentences or other grammatical structures. In one embodiment, the facial expression and/or the intonation of the user's voice are coupled with the words chosen by the user to generate the meaning. In any case, at a high level, the device may interpret the user input as a concept with a purpose, and generate a response as a related concept with a counter-purpose. The purpose need not be broader than furthering the conversation, or it may be goal-oriented. In step 230, the machine then adjusts the facial expression of the image of Penelope to angry/mad to mirror the user, as a contextually appropriate emotive response. In another embodiment, the machine might use a different facial expression in order to attempt to modify the user's mood. Thus, if the machine determines that a heated argument is an appropriate path, then a similar emotion to that of the user would carry the conversation forward. In other cases, the interface adopts a more submissive response, to defuse the aggression of the user.
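For illustration only, the following Java sketch shows one way the mood handling described above might be arranged: a voice-tone score and a facial-expression label are combined into a user mood, and the avatar either mirrors that mood or adopts a calmer counter-mood. The class, enum values, and thresholds are illustrative assumptions, not a definitive implementation.

    // Sketch of mood inference and avatar mood selection (mirror vs. defuse).
    public class MoodLogic {
        enum Mood { ANGRY, SAD, NEUTRAL, HAPPY }

        // voiceArousal in [0,1] from pitch/energy analysis; face label from a classifier.
        static Mood inferUserMood(double voiceArousal, String facialExpression) {
            if ("angry".equals(facialExpression) && voiceArousal > 0.6) return Mood.ANGRY;
            if ("sad".equals(facialExpression)) return Mood.SAD;
            if ("smile".equals(facialExpression)) return Mood.HAPPY;
            return Mood.NEUTRAL;
        }

        // Mirror the user's mood to carry the conversation forward, or adopt a
        // calmer expression to defuse aggression.
        static Mood chooseAvatarMood(Mood userMood, boolean defuse) {
            if (userMood == Mood.ANGRY && defuse) return Mood.NEUTRAL;
            return userMood;
        }

        public static void main(String[] args) {
            Mood user = inferUserMood(0.8, "angry");
            System.out.println("User: " + user + ", avatar: " + chooseAvatarMood(user, false));
        }
    }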


Clearly, the machine has no way of knowing whether James is partying or studying without relying on external data. However, according to one embodiment of the invention, the machine can access a network, such as the Internet, or a database to get some relevant information. Here, in step 240, the machine checks the social networking website Facebook to determine James' recent activity. Facebook reveals that James got a C on his biology midterm and displays several photographs of James getting drunk and engaging in “partying” behavior. The machine then replies 250 to the user, in an angry female voice, “It is horrible. James got a C on his biology midterm, and he is drinking very heavily. Look at these photographs taken by his neighbor.” The machine then proceeds to display the photographs to the user. In step 260, the user continues the conversation, “Oh my God. What will we do? Should I tell James that I will disinherit him unless he improves his grades?”


Note that a female voice was used because Penelope is a woman. In one embodiment, other features of Penelope, for example, her race, age, accent, profession, and background could be used to select an optimal voice, dialect, and intonation for her. For example, Penelope might be a 75-year-old, lifelong white Texan housewife who speaks with a strong rural Texas accent.


The machine could look up the information about James in response to the query, as illustrated here. In another embodiment, the machine could know that the user has some favorite topics that he likes to discuss (e.g., family, weather, etc.). The machine would then prepare for these discussions in advance or in real-time by looking up relevant information on the network and storing it. This way, the machine would be able to discuss James' college experience in a place where there was no Internet access. In accordance with this embodiment, at least one Internet search may occur automatically, without a direct request from the user. In yet another embodiment, instead of doing the lookup electronically, the machine could connect to a remote computer server or a remote person who would select a response to give the user. Note that the remote person might be different from the person whose photograph appears on the display. This embodiment is useful because it ensures that the machine will not advise the user to do something rash, such as disinheriting his grandson.
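The pre-fetching variant described above might be sketched as follows; the Fetcher interface, topic list, and cache structure are illustrative assumptions showing only how results could be looked up while connected and recalled later without Internet access.

    // Sketch: periodically refresh a cache of the user's favorite topics so
    // the conversation can continue when no network is available.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class TopicPrefetcher {
        interface Fetcher { String lookup(String topic); } // e.g., backed by a search engine

        private final Map<String, String> cache = new HashMap<>();
        private final Fetcher fetcher;

        TopicPrefetcher(Fetcher fetcher) { this.fetcher = fetcher; }

        // Run while a network connection is available.
        void refresh(List<String> favoriteTopics) {
            for (String topic : favoriteTopics) {
                cache.put(topic, fetcher.lookup(topic));
            }
        }

        // Used later, possibly with no Internet access.
        String recall(String topic) {
            return cache.getOrDefault(topic, "I have nothing new on that yet.");
        }
    }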


Note that both the machine's response to the user's first inquiry and the user's response to the machine are conversationally relevant, meaning that the statements respond to the queries, add to the conversation, and increase the knowledge available to the other party. In the first step, the user asked a question about what James was doing. The machine then responded that James' grades were bad and that he had been drunk on several occasions. This information added to the user's base of knowledge about James. The user then built on what the machine had to say by suggesting threatening to disinherit James as a potential solution to the problem of James' poor grades.


In one embodiment, the machine starts up and shuts down in response to the user's oral commands. This is convenient for elderly users who may have difficulty pressing buttons. Deactivation permits the machine to enter a power-saving, low power consumption mode. In another embodiment, the microphone and camera continuously monitor the scene for the presence of an emergency. If an emergency is detected, emergency assistance services, selected for example from the group of one or more of police, fire, ambulance, nursing home staff, hospital staff, and family members, might be called. Optionally, the device could store and provide information relevant to the emergency to emergency assistance personnel. Information relevant to the emergency includes, for example, a video, photograph or audio recording of the circumstance causing the emergency. To the extent the machine is a telephone, an automated e911 call might be placed, which typically conveys the user's location. The machine, therefore, may include a GPS receiver, other satellite geolocation receiver, or be usable with a network-based location system.
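A minimal sketch of the continuous emergency monitoring follows; the Detector and Notifier interfaces are assumptions standing in for the device's audio/video analysis and its e911 or caregiver-notification path, and are not a specific emergency-services API.

    // Sketch: every captured audio chunk and video frame is checked for an
    // emergency; on detection, location and a snapshot are forwarded.
    public class EmergencyMonitor {
        interface Detector { boolean emergencyDetected(byte[] audio, byte[] video); }
        interface Notifier { void alert(String message, byte[] evidenceSnapshot); }

        private final Detector detector;
        private final Notifier notifier;

        EmergencyMonitor(Detector detector, Notifier notifier) {
            this.detector = detector;
            this.notifier = notifier;
        }

        // Called for each capture interval while the device is active.
        void observe(byte[] audioChunk, byte[] videoFrame, double latitude, double longitude) {
            if (detector.emergencyDetected(audioChunk, videoFrame)) {
                // Include location and a snapshot of the circumstance for responders.
                notifier.alert("Emergency at " + latitude + "," + longitude, videoFrame);
            }
        }
    }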


In another embodiment of this invention, the machine provides a social networking site by providing the responses of various people to different situations. For example, Ulysses is not the first grandfather to deal with a grandson with poor grades who drinks and parties a lot. If the machine could provide Ulysses with information about how other grandparents dealt with this problem (without disinheriting their grandchildren), it might be useful to Ulysses.


In yet another embodiment (not illustrated), the machine implementing the invention could be programmed to periodically start conversations with the user on its own initiative, for example, if the machine learns of an event that would be interesting to the user. (E.g., in the above example, if James received an A+ in chemistry, the machine might be prompted to share the happy news with Ulysses.) To implement this embodiment, the machine would receive relevant information from a network or database, for example through a web crawler or an RSS feed. Alternatively, the machine itself could check various relevant websites, such as James' social networking pages, to determine whether there are updates. The machine might also receive proactive communications from a remote system, such as an SMS or MMS message, email, IP packet, or other electronic communication.
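One way to sketch the proactive-update behavior is shown below; the watched URL is a placeholder supplied by the caller, and a real deployment might instead use an RSS reader, SMS/MMS, or push messaging as described above.

    // Sketch: poll a page the user cares about and open a conversation when it changes.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ProactiveUpdater {
        private final HttpClient client = HttpClient.newHttpClient();
        private int lastContentHash;

        // Returns a conversation opener if the watched page changed, else null.
        String checkForNews(String watchedUrl) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(watchedUrl)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            int hash = response.body().hashCode();
            if (hash != lastContentHash) {
                lastContentHash = hash;
                return "I just saw an update you might like to talk about.";
            }
            return null;
        }
    }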


Example 2
Cell Phone with Low Processing Abilities

This embodiment of the invention, as illustrated in FIG. 3, can be run on an arbitrary cell phone 310, such as the Motorola Razr or Sony Ericsson W580, connected to a cellular network, such as the GSM and CDMA networks available in the US. The cell phone implementing this embodiment of the invention preferably has an ability to place calls, a camera, a speakerphone, and a color screen. To use the invention, the user of the cell phone 310 places a call to a call center 330. The call could be placed by dialing a telephone number or by running an application on the phone. The call is carried over cell tower 320. In response to placing the call, an image of a person selected by the user or an avatar appears on the screen of the cell phone 310. Preferably, the call center is operated by the telephone company that provides cell phone service for cell phone 310. This way, the telephone company has control over the output on the screen of the cell phone as well as over the voice messages that are transmitted over the network.


The user says something that is heard at call center 330 by employee 332. The employee 332 can also see the user through the camera in the user's telephone. An image of the user appears on the employee's computer 334, such that the employee can look at the user and infer the user's mood. The employee then selects a conversationally relevant response, which builds on what the user said and is at least partially responsive to the query, to say to the user. The employee can control the facial expression of the avatar on the user's cell phone screen. In one embodiment, the employee sets up the facial expression on the computer screen by adjusting the face through mouse “drag and drop” techniques. In another embodiment, the computer 334 has a camera that detects the employee's facial expression and makes the same expression appear on the user's screen. This is processed by the call center computer 334 to provide an output to the user through the speaker of cell phone 310. If the user asks a question, such as, “What will the weather be in New York tomorrow?” the call center employee 332 can look up the answer through a Google or Microsoft Bing search on computer 334.


Preferably, each call center employee is assigned to a small group of users whose calls she answers. This way, the call center employee can come to personally know the people with whom she speaks and the topics that they enjoy discussing. Conversations will thus be more meaningful to the users.


Example 3
Smart Phone, Laptop or Desktop with CPU Connected to a Network

Another embodiment of the invention, illustrated in FIG. 4, is implemented on a smartphone, laptop computer, or desktop computer with a CPU connected to a network, such as a cellular network or an Ethernet or WiFi network that is connected to the Internet. The phone or computer implementing the invention has a camera 410 and a microphone 420 for receiving input from the user. The image data received by the camera and the audio data received by the microphone are fed to a logic to determine the user's mood 430 and a speech recognizer 440. The logic to determine the user's mood 430 provides as output a representation of the mood, and the speech recognizer 440 provides as output a representation of the speech.
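The data flow of FIG. 4 might be wired together as in the sketch below, where the interfaces stand in for the mood logic 430, the speech recognizer 440, and the conversation logic 450; the interface names and signatures are illustrative assumptions.

    // Sketch of the FIG. 4 front end: camera and microphone data fan out to a
    // mood classifier and a speech recognizer, whose outputs feed the
    // conversation logic.
    public class FrontEndPipeline {
        interface MoodClassifier { String classify(byte[] videoFrame, byte[] audioChunk); } // 430
        interface SpeechRecognizer { String transcribe(byte[] audioChunk); }                // 440
        interface ConversationLogic { String respond(String transcript, String mood); }     // 450

        private final MoodClassifier mood;
        private final SpeechRecognizer asr;
        private final ConversationLogic logic;

        FrontEndPipeline(MoodClassifier mood, SpeechRecognizer asr, ConversationLogic logic) {
            this.mood = mood;
            this.asr = asr;
            this.logic = logic;
        }

        // One pass over the latest captured inputs.
        String process(byte[] videoFrame, byte[] audioChunk) {
            String userMood = mood.classify(videoFrame, audioChunk);
            String transcript = asr.transcribe(audioChunk);
            return logic.respond(transcript, userMood);
        }
    }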


As noted above, persons skilled in the art will recognize many ways the mood-determining logic 430 could operate. For example, Bohacek, U.S. Pat. No. 6,411,687, incorporated herein by reference, teaches that a speaker's gender, age, and dialect or accent can be determined from the speech. Black, U.S. Pat. No. 5,774,591, incorporated herein by reference, teaches about using a camera to ascertain the facial expression of a user and determining the user's mood from the facial expression. Bushey, U.S. Pat. No. 7,224,790, similarly teaches about “verbal style analysis” to determine a customer's level of frustration when the customer telephones a call center. A similar “verbal style analysis” can be used here to ascertain the mood of the user. Combining the technologies taught by Bohacek, Black, and Bushey would provide the best picture of the emotional state of the user, taking many different factors into account.


Persons skilled in the art will also recognize many ways to implement the speech recognizer 440. For example, Gupta, U.S. Pat. No. 6,138,095, incorporated herein by reference, teaches a speech recognizer where the words that a person is saying are compared with a dictionary. An error checker is used to determine the degree of the possible error in pronunciation. Alternatively, in a preferred embodiment, a hierarchical stacked neural network, as taught by Commons, U.S. Pat. No. 7,613,663, incorporated herein by reference, could be used. If the neural networks of Commons are used to implement the invention, the lowest level neural network would recognize speech as speech (rather than background noise). The second level neural network would arrange speech into phonemes. The third level neural network would arrange the phonemes into words. The fourth level would arrange words into sentences. The fifth level would combine sentences into meaningful paragraphs or idea structures. The neural network is the preferred embodiment for the speech recognition software because the meanings of words (especially keywords) used by humans are often fuzzy and context sensitive. Rules, which are programmed to process clear-cut categories, are not efficient for interpreting ambiguity.
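The five-level stacked arrangement described above could be composed as in the following sketch, in which each level consumes the output of the level below; the Level functions would in practice be trained networks, and the names here are placeholders that show only the composition, not the Commons architecture itself.

    // Sketch: speech/noise -> phonemes -> words -> sentences -> idea structures.
    import java.util.List;
    import java.util.function.Function;

    public class StackedRecognizer {
        private final Function<double[], double[]> speechDetector;   // level 1: speech vs. noise
        private final Function<double[], List<String>> phonemizer;   // level 2: phonemes
        private final Function<List<String>, List<String>> wordizer; // level 3: words
        private final Function<List<String>, String> sentencer;      // level 4: sentences
        private final Function<String, String> ideaBuilder;          // level 5: meaning

        StackedRecognizer(Function<double[], double[]> l1, Function<double[], List<String>> l2,
                          Function<List<String>, List<String>> l3, Function<List<String>, String> l4,
                          Function<String, String> l5) {
            speechDetector = l1; phonemizer = l2; wordizer = l3; sentencer = l4; ideaBuilder = l5;
        }

        String interpret(double[] audioFeatures) {
            double[] speech = speechDetector.apply(audioFeatures);
            List<String> phonemes = phonemizer.apply(speech);
            List<String> words = wordizer.apply(phonemes);
            String sentence = sentencer.apply(words);
            return ideaBuilder.apply(sentence);
        }
    }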


The output of the logic to determine mood 430 and the speech recognizer 440 are provided to a conversation logic 450. The conversation logic selects a conversationally relevant response 452 to the user's verbal (and preferably also image and voice tone) input to provide to the speakers 460. It also selects a facial expression for the face on the screen 470. The conversationally relevant response should expand on the user's last statement and what was previously said in the conversation. If the user's last statement included at least one query, the conversationally relevant response preferably answers at least part of the query. If necessary, the conversation logic 450 could consult the internet 454 to get an answer to the query 456. This could be necessary if the user asks a query such as “Is my grandson James partying instead of studying?” or “What is the weather in New York?”


To determine whether the user's grandson James is partying or studying, the conversation logic 450 would first convert “grandson James” into a name, such as James Kerner. The last name could be determined either through memory (stored either in the memory of the phone or computer or on a server accessible over the Internet 454) of prior conversations or by asking the user, “What is James' last name?” The data as to whether James is partying or studying could be determined using a standard search engine accessed through the Internet 454, such as Google or Microsoft Bing. While these might not provide accurate information about James, these might provide conversationally relevant information to allow the phone or computer implementing the invention to say something to keep the conversation going. Alternatively, to provide more accurate information the conversation logic 450 could search for information about James Kerner on social networking sites accessible on the Internet 454, such as Facebook, LinkedIn, Twitter, etc., as well as any public internet sites dedicated specifically to providing information about James Kerner. (For example, many law firms provide a separate web page describing each of their attorneys.) If the user is a member of a social networking site, the conversation logic could log into the site to be able to view information that is available to the user but not to the general public. For example, Facebook allows users to share some information with their “friends” but not with the general public. The conversation logic 450 could use the combination of text, photographs, videos, etc. to learn about James' activities and to come to a conclusion as to whether they constitute “partying” or “studying.”
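A sketch of that lookup flow is given below: a relative's nickname is resolved to a full name from conversation memory (or left unresolved, prompting the machine to ask), and material is then gathered from several sources. The SocialSource interface and the stored relation map are assumptions for illustration.

    // Sketch: resolve "grandson James" -> "James Kerner", then gather posts.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class PersonLookup {
        interface SocialSource { List<String> findPosts(String fullName); }

        private final Map<String, String> knownRelations; // e.g., "grandson James" -> "James Kerner"
        private final List<SocialSource> sources;          // e.g., social networks, public web pages

        PersonLookup(Map<String, String> knownRelations, List<SocialSource> sources) {
            this.knownRelations = knownRelations;
            this.sources = sources;
        }

        // Returns gathered snippets, or an empty list if the full name is still
        // unknown (the conversation logic would then ask, "What is James' last name?").
        List<String> gather(String nickname) {
            String fullName = knownRelations.get(nickname);
            if (fullName == null) return new ArrayList<>();
            List<String> snippets = new ArrayList<>();
            for (SocialSource source : sources) {
                snippets.addAll(source.findPosts(fullName));
            }
            return snippets;
        }
    }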


To determine the weather in New York, the conversation logic 450 could use a search engine accessed through the Internet 454, such as Google or Microsoft Bing. Alternatively, the conversation logic could connect with a server adapted to provide weather information, such as The Weather Channel, www.weather.com, or AccuWeather, www.accuweather.com, or the National Oceanic and Atmospheric Administration, www.nws.noaa.gov.
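A minimal sketch of such a weather lookup follows; the endpoint URL is supplied by the caller and is a placeholder assumption, since the specific query format of any given weather service or search engine is not addressed here.

    // Sketch: fetch raw forecast text from a caller-supplied endpoint; the
    // conversation logic would summarize the result for speech output.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class WeatherLookup {
        private final HttpClient client = HttpClient.newHttpClient();

        // Example placeholder: "https://example-weather-service/forecast?city=New+York"
        String fetchForecast(String endpointUrl) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(endpointUrl)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }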


Note that, to be conversationally relevant, each statement must expand on what was said previously. Thus, if the user asks the question, “What is the weather in New York?” twice, the second response must be different from the first. For example, the first response might be, “It will rain in the morning,” and the second response might be, “It will be sunny after the rain stops in the afternoon.” However, if the second response were exactly the same as the first, it would not be conversationally relevant, as it would not build on the knowledge available to the parties.
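One simple way to enforce this "no verbatim repetition" rule is sketched below: responses already given in the conversation are remembered, and the next unused candidate is chosen instead. Candidate generation itself is outside this sketch, and the class name is hypothetical.

    // Sketch: remember what has already been said and skip to a new candidate.
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class NonRepeatingResponder {
        private final Set<String> alreadySaid = new HashSet<>();

        // Picks the first candidate not yet used, so a repeated question
        // ("What is the weather in New York?") gets a new, additive answer.
        String pick(List<String> rankedCandidates) {
            for (String candidate : rankedCandidates) {
                if (alreadySaid.add(candidate)) {
                    return candidate;
                }
            }
            return "We already covered that; is there anything else you would like to know?";
        }
    }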


The phone or computer implementing the invention can say arbitrary phrases. In one embodiment, if the voice samples of the person on the screen are available, that voice could be used. In another embodiment, the decision as to which voice to use is made based on the gender of the speaker alone.


In a preferred embodiment, the image on the screen 470 looks like it is talking. When the image on the screen is talking, several parameters need to be modified, including jaw rotation and thrust, horizontal mouth width, lip corner and protrusion controls, lower lip tuck, vertical lip position, horizontal and vertical teeth offset, and tongue angle, width, and length. Preferably, the processor of the phone or computer that is implementing the invention will model the talking head as a 3D mesh that can be parametrically deformed (in response to facial movements during speech and facial gestures).
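The articulation parameters listed above might be grouped into a single pose object that a renderer applies to the 3D face mesh each frame, as in the sketch below; the field set mirrors the parameters named in the text, while the value ranges and the linear blending between poses are illustrative assumptions.

    // Sketch: talking-head articulation state, with linear interpolation
    // between two poses (e.g., between visemes while speaking).
    public class TalkingHeadPose {
        double jawRotation;          // radians
        double jawThrust;            // normalized 0..1
        double mouthWidth;           // horizontal mouth width, normalized
        double lipCornerPull;        // lip corner and protrusion control
        double lowerLipTuck;
        double verticalLipPosition;
        double teethOffsetX, teethOffsetY;
        double tongueAngle, tongueWidth, tongueLength;

        static TalkingHeadPose blend(TalkingHeadPose a, TalkingHeadPose b, double t) {
            TalkingHeadPose p = new TalkingHeadPose();
            p.jawRotation = a.jawRotation + t * (b.jawRotation - a.jawRotation);
            p.jawThrust = a.jawThrust + t * (b.jawThrust - a.jawThrust);
            p.mouthWidth = a.mouthWidth + t * (b.mouthWidth - a.mouthWidth);
            p.lipCornerPull = a.lipCornerPull + t * (b.lipCornerPull - a.lipCornerPull);
            p.lowerLipTuck = a.lowerLipTuck + t * (b.lowerLipTuck - a.lowerLipTuck);
            p.verticalLipPosition = a.verticalLipPosition + t * (b.verticalLipPosition - a.verticalLipPosition);
            p.teethOffsetX = a.teethOffsetX + t * (b.teethOffsetX - a.teethOffsetX);
            p.teethOffsetY = a.teethOffsetY + t * (b.teethOffsetY - a.teethOffsetY);
            p.tongueAngle = a.tongueAngle + t * (b.tongueAngle - a.tongueAngle);
            p.tongueWidth = a.tongueWidth + t * (b.tongueWidth - a.tongueWidth);
            p.tongueLength = a.tongueLength + t * (b.tongueLength - a.tongueLength);
            return p;
        }
    }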


Example 4
Smart Clock Radio

Another embodiment of this invention, illustrated in FIG. 5, includes a smart clock radio 500, such as the Sony Dash, adapted to implement the invention. The radio once again includes a camera 510 and a microphone 520 for receiving input from the user. Speakers 530 provide audio output, and a screen 550 provides visual output. The speakers 530 may also be used for other purposes, for example, to play music or news on AM, FM, XM, or Internet radio stations or to play CDs or electronic audio files. The radio is able to connect to the Internet through the home WiFi network 540. In another embodiment, an Ethernet wire or another wired or wireless connection is used to connect the radio to the Internet.


In one embodiment, the radio 500 operates in a manner equivalent to that described in the smartphone/laptop embodiment illustrated in FIG. 4. However, it should be noted that, while a user typically sits in front of a computer or cell phone while she is working with it, users typically are located further away from the clock radio. For example, the clock radio might be located in a fixed corner of the kitchen, and the user could talk to the clock radio while the user is washing the dishes, setting the table or cooking.


Therefore, in a preferred embodiment, the camera 510 is more powerful than a typical laptop camera and is adapted to view the user's face and determine the facial expression from a distance. Camera resolutions on the order of 8-12 megapixels are preferred, although any camera will suffice for the purposes of the invention.


Example 5
Television with Set-Top Box

The next detailed embodiment of the invention, illustrated in FIG. 6, is a television 600 with a set-top box (STB) 602. The STB is a standard STB, such as a cable converter box or a digital TV tuner available from many cable companies. However, the STB preferably either has or is configured to receive input from a camera 610 and microphone 620. The output is provided to the user through the TV screen 630 and speakers 640.


If the STB has a memory and is able to process machine instructions and connect to the internet (over WiFi, Ethernet or similar), the invention may be implemented on the STB (not illustrated). Otherwise, the STB may connect to a remote server 650 to implement the invention. The remote server will take as input the audio and image data gathered by the STB's microphone and camera. The output provided is an image to display in screen 630 and audio output for speakers 640.


The logic to determine mood 430, the speech recognizer 440, and the conversation logic 450, which connects to the Internet 454 to provide data for discussion, all operate in a manner identical to the description of FIG. 4.


When setting up the person to be displayed on the screen, the user needs to either select a default display or send a photograph of a person that the user wishes to speak with to the company implementing the invention. In one embodiment, the image is transmitted electronically over the Internet. In another embodiment, the user mails a paper photograph to an office, where the photograph is scanned, and a digital image of the person is stored.


Example 6
Robot with a Face


FIG. 7 illustrates a special purpose robot 700 designed to implement an embodiment of this invention. The robot receives input through a camera 710 and at least one microphone 720. The output is provided through a screen 730, which displays the face of a person 732, or non-human being, which is either selected by the user or provided by default. There is also at least one speaker 740. The robot further has joints 750, which it can move in order to make gestures.


The logic implementing the invention operates in a manner essentially identical to that illustrated in FIG. 4. In a preferred embodiment, all of the logic is internal to the robot. However, other embodiments, such as a processor external to the robot connecting to the robot via the Internet or via a local connection, are possible.


There are some notable differences between the present embodiment and that illustrated in FIG. 4. In a preferred embodiment, the internet connection, which is essential for the conversation logic 450 of FIG. 4, is provided by WiFi router 540, and the robot 700 is able to connect to WiFi. Alternatively, the robot 700 could connect to the internet through a cellular network or through an Ethernet cable. In addition to determining words, voice tone, and facial expression, the conversation logic 450 can now suggest gestures, e.g., wave the right hand, point the middle finger, etc., to the robot.


In one embodiment, the camera is mobile, and the robot rotates the camera so as to continue looking at the user when the user moves. Further, the camera is a three-dimensional camera comprising a structured light illuminator. Preferably, the structured light illuminator is not in a visible frequency, thereby allowing it to ascertain the image of the user's face and all of the contours thereon.


Structured light involves projecting a known pattern of pixels (often grids or horizontal bars) onto a scene. These patterns deform when striking surfaces, thereby allowing vision systems to calculate the depth and surface information of the objects in the scene. For the present invention, this feature of structured light is useful to calculate and ascertain the facial features of the user. Structured light could be outside the visible spectrum, for example, infrared light. This allows the robot to effectively detect the user's facial features without the user being discomforted.
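The depth recovery underlying this approach can be sketched with a simple triangulation: for a projector and camera separated by a known baseline, the observed shift (disparity) of a projected stripe gives depth. The pinhole-camera model and the numeric constants below are illustrative assumptions only.

    // Sketch: depth from structured-light disparity by triangulation.
    public class StructuredLightDepth {
        // baselineMeters: projector-to-camera separation; focalLengthPixels: camera focal length.
        static double depthFromDisparity(double disparityPixels, double baselineMeters,
                                         double focalLengthPixels) {
            if (disparityPixels <= 0) {
                return Double.POSITIVE_INFINITY; // no measurable shift: effectively at infinity
            }
            return (baselineMeters * focalLengthPixels) / disparityPixels;
        }

        public static void main(String[] args) {
            // A stripe expected at column 400 is observed at column 412 -> 12 px disparity.
            double depth = depthFromDisparity(412 - 400, 0.08, 900);
            System.out.printf("Estimated depth: %.2f m%n", depth);
        }
    }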


In a preferred embodiment, the robot is completely responsive to voice prompts and has very few buttons, all of which are rather large. This embodiment is preferred because it makes the robot easier to use for elderly and disabled people who might have difficulty pressing small buttons.


In this disclosure, we have described several embodiments of this broad invention. Persons skilled in the art will definitely have other ideas as to how the teachings of this specification can be used. It is not our intent to limit this broad invention to the embodiments described in the specification. Rather, the invention is limited by the following claims.


With reference to FIG. 8, a generic system, such as disclosed in U.S. Pat. No. 7,631,317, for processing program instructions is shown which includes a general purpose computing device in the form of a conventional personal computer 20, including a processing unit 21, a system memory 22, and a system bus 23 that couples various system components including the system memory to the processing unit 21. The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory includes read only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system 26 (BIOS) containing the basic routines that help to transfer information between elements within the personal computer 20, such as during start-up, is stored in ROM 24. In one embodiment of the present invention on a server computer 20 with a remote client computer 49, commands are stored in system memory 22 and are executed by processing unit 21 for creating, sending, and using self-descriptive objects as messages over a message queuing network in accordance with the invention. The personal computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD-ROM or other optical media. The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the personal computer 20. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 29 and a removable optical disk 31, it should be appreciated by those skilled in the art that other types of computer-readable media which can store data that is accessible by a computer, such as flash memory, network storage systems, magnetic cassettes, random access memories (RAM), read only memories (ROM), and the like, may also be used in the exemplary operating environment.


A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into the personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial data interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or another type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers.


The personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49, through a packet data network interface to a packet switch data network. The remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the personal computer 20, although only a memory storage device 50 has been illustrated in FIG. 8. The logical connections depicted in FIG. 8 include a local area network (LAN) 51 and a wide area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.


When used in a LAN networking environment, the personal computer 20 is connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the personal computer 20 typically includes a modem 54 or other elements for establishing communications over the wide area network 52, such as the Internet. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the personal computer 20, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other elements for establishing a communications link between the computers may be used.


Typically, a digital data stream from a superconducting digital electronic processing system may have a data rate which exceeds a capability of a room temperature processing system to handle. For example, complex (but not necessarily high data rate) calculations or user interface functions may be more efficiently executed on a general purpose computer than a specialized superconducting digital signal processing system. In that case, the data may be parallelized or decimated to provide a lower clock rate, while retaining essential information for downstream processing.


The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The disclosure shall be interpreted to encompass all of the various combinations and permutations of the elements, steps, and claims disclosed herein, to the extent consistent, and shall not be limited to specific combinations as provided in the detailed embodiments.

Claims
  • 1. A user interface device comprising: an input port configured to receive a spoken natural language input from a user relating to a topic; a communication port configured to communicate a received spoken language from a user, with a communication network comprising the Internet; at least one processor configured to: control the communication port to communicate the spoken natural language input received through the input port over the Internet to an automated data processing system; analyze the spoken natural language input and a past history of interaction to determine a topic of interest to the user; automatically generate a query seeking information corresponding to the topic of interest; generate a search request for communication to a database through the Internet based on the query and the topic of interest; receive at least one response from the database through the Internet comprising information selectively dependent on the generated search request; automatically gather information relating to the topic of interest and the query from a plurality of sources dependent on the at least one response from the database, without a prompt or other action from the user; receive sponsored or advertising content related to the generated search request through the Internet; determine a context of the spoken natural language input based on at least prior spoken natural language inputs from the user received as part of an interactive conversation about the topic of interest to the user and the query; select information from the gathered information in a context-sensitive manner; and formulate a context-appropriate interactive spoken natural language output, dependent on the spoken natural language input and a past history of interaction, comprising the sponsored or advertising content and the selected information, as a response to the spoken natural language input; and the communication port being further configured to communicate the context-appropriate interactive spoken natural language output to the user, wherein the communication port is controlled by the at least one processor to engage in a context-appropriate interactive spoken natural language conversation with the user comprising the topic of interest to the user, the query, the selected information, the gathered information, and the received sponsored or advertising content related to the generated search request.
  • 2. The user interface device of claim 1, wherein the spoken natural language output is presented according to an associated user mood, which is inferred depending on at least one of the spoken natural language input, the topic of interest, the query, and the determined context.
  • 3. The user interface device of claim 1, wherein the input port is configured to continuously receive audio and image information to monitor surroundings for presence of an emergency.
  • 4. The user interface device of claim 1, wherein the input port, a portion of said at least one processor sufficient to process the spoken natural language input to selectively start up a communication over the communication port, and said at least one output port are together implemented in a mobile communication device, and the input port receives the spoken natural language input from a microphone of the mobile communication device.
  • 5. The user interface device of claim 1, wherein said at least one processor is configured to respond to a request for establishment of an interactive speech communication channel in the spoken natural language input, to automatically initiate the interactive communication channel through the communication port.
  • 6. The user interface device of claim 1, further comprising a geolocation receiver for determining a geolocation of the user, and the at least one processor is further configured to have at least one mode of operation in which the geolocation of the user is communicated through the communication port and wherein the at least one response is dependent on the communicated geolocation.
  • 7. The user interface device of claim 1 further comprising a display, wherein a portion of said at least one automated processor proximate to the input port and the at least one output port is further configured to control the display to produce an avatar which selectively corresponds to the selected information.
  • 8. The user interface of claim 1, wherein the database comprises a remote social networking service.
  • 9. The user interface device of claim 1, wherein a portion of said at least one processor is configured to recognize speech, and the at least one processor is further configured to: process the oral command to selectively start up the user interface device from a low power consumption mode in which data from the input port is not communicated through the communication port, to a non-low power consumption mode in which data from the input port is communicated through the communication port; and shut down the user interface device from a non-low power consumption mode in which data from the input port is communicated through the communication port to a low power consumption mode in which data from the input port is not communicated through the communication port.
  • 10. The user interface device of claim 1, wherein said at least one processor is further configured to capture spatial information about the user derived from at least one of an acoustic data input device and an image data input device.
  • 11. The user interface device of claim 1, wherein said at least one processor is further configured to automatically generate a user profile of the user based on at least one prior interaction of the user with the user interface device.
  • 12. The user interface device of claim 1, further comprising a digital video receiver, and an image output port for driving an image display device proximate to the user with media from the digital video receiver.
  • 13. The user interface device of claim 1, wherein said at least one processor is further configured to receive at least an audio program from the communication network representing at least one of music and a talk show, and to present said at least one of music and a talk show through the at least one output port, controlled according to the spoken natural language input.
  • 14. The user interface device of claim 1, further comprising at least one camera, wherein a first portion of said at least one processor is further configured to control acquisition of at least one image through said at least one camera and to communicate the at least one image through the communication port to a second portion of the at least one processor.
  • 15. The user interface device of claim 1, wherein the contextually-appropriate interactive natural language output is generated by a multilayered neural network.
  • 16. The user interface device of claim 1, wherein the communication network comprises a cellular wireless network.
  • 17. The user interface device of claim 1, wherein the communication network comprises a WiFi network.
  • 18. The user interface device of claim 1, wherein the input port is configured to receive spoken language input from a microphone within a device selected from the group consisting of a smartphone, a tablet, a laptop and a personal digital assistant.
  • 19. The user interface device of claim 1, wherein the database is an Internet search engine, and the at least one automated processor financially accounts for a presentation of the sponsored or advertising content.
  • 20. A user interface method, comprising: receiving audio information comprising a spoken natural language input through a microphone of a user interface device, of a topic of interest to a user and a query; communicating processed audio information representing the topic of interest to the user and the query to a remotely located automated data processing system through the Internet; automatically analyzing the communicated processed audio information and a past history of interaction, at the remotely located automated data processing system, to determine a topic of interest and the query; automatically generating a search request for a database outside of the remotely located automated data processing system through the Internet, based on the query and the topic of interest; automatically gathering information relating to the topic of interest and the query from a plurality of sources dependent on the at least one response to the search request from the database, without a prompt or other action from the user; receiving at least one response through the Internet selectively dependent on the search request; receiving sponsored or advertising content related to the generated search request through the Internet; determining a context of the spoken natural language input based on at least prior spoken natural language inputs from the user, representing an interactive conversation about the topic of interest to the user and the query; selecting information from the gathered information in a context-sensitive manner; formulating a context-dependent interactive spoken natural language output, dependent on the spoken natural language input and a past history of interaction, from the selected information, the gathered information, and the received sponsored or advertising content related to the generated search request; and presenting the formulated context-dependent interactive spoken natural language output to the user through a speaker of the user interface device.
  • 21. The method of claim 20, further comprising inferring a mood of the user, and presenting the spoken natural language output selectively dependent on the inferred mood of the user.
  • 22. The method of claim 20, further comprising determining a user mood associated with the user, and presenting the spoken natural language output according to the determined associated user mood, the spoken natural language output being varied depending on at least one of the received audio information, the gathered information, the topic of interest to the user, the query, and the received at least one response.
  • 23. The method of claim 20, further comprising: receiving a user oral command when the user interface is in a first mode which is a power-saving low power consumption mode; responding to the user oral command without communication through the Internet, to enter a second mode which is not a power-saving low power consumption mode; receiving the spoken natural language input through the microphone of the user interface device and communicating the spoken natural language input over the Internet selectively when the user interface device is in the second mode; determining within the spoken natural language input, a user request for initiation of an interactive voice communication channel; automatically initiating the requested interactive voice communication channel; and returning the user interface device to the first mode.
  • 24. The method of claim 20, wherein the user interface device further comprises a display, further comprising controlling the display to present an avatar associated with the spoken natural language output, the avatar being animated depending on the received at least one response from the database, and the determined context.
  • 25. The method of claim 20, further comprising automatically generating a user profile based on at least one prior user input or prior user action; and formulating the context-dependent interactive spoken natural language output further selectively dependent on the user profile.
  • 26. A user interface method, comprising: receiving audio information through a microphone of a user interface device comprising an interactive spoken natural language input; receiving image information through a camera of the user interface device; communicating a message to an automated database system representing the interactive spoken natural language input and the image information; automatically determining a query represented by the interactive spoken natural language input; automatically determining a topic of interest to the user based on the interactive spoken natural language input and a history of prior interaction with the user; automatically recognizing at least one user gesture represented in the image information with at least one automated processor; receiving a response from the automated database system responsive to the query; inferring a mood of the user based on the interactive spoken natural language input and the at least one user gesture; automatically gathering information relating to the topic of interest and the query from a plurality of sources without a prompt or other action from the user; receiving a sponsored or advertising content related to the topic of interest to the user and the query; determining a context of the interactive spoken natural language input based on at least prior user interactions; selecting information from the response dependent on the determined context; formulating a contextually-appropriate conversational spoken natural language reply to the user from the selected information, the gathered information, the inferred mood, and the sponsored or advertising content, further dependent on the recognized at least one user gesture; and presenting the contextually-appropriate conversational spoken natural language reply through an anthropomorphic object of the user interface device, wherein the anthropomorphic object conveys an anthropomorphic mood responsive to the inferred mood of the user.
  • 27. The user interface method of claim 26, wherein the user interface device further comprises a geolocation receiver, further comprising communicating a geolocation of the user interface device to the automated database system, and wherein the response is dependent on the communicated geolocation.
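
Claim 23 above recites a two-mode power-management flow: a low-power first mode in which the device answers an oral wake command locally, an active second mode in which spoken input may be sent over the Internet and a requested voice channel is opened, and a return to the first mode. The following sketch illustrates one way such a mode transition could be organized; the class, method, and intent names are hypothetical and are not drawn from the specification.

```python
from enum import Enum, auto


class PowerMode(Enum):
    LOW_POWER = auto()   # first mode: only local wake-command detection runs
    ACTIVE = auto()      # second mode: full speech pipeline, Internet allowed


class InterfaceDevice:
    """Minimal sketch of the mode transitions recited in claim 23 (illustrative only)."""

    def __init__(self, wake_word="hello"):
        self.mode = PowerMode.LOW_POWER
        self.wake_word = wake_word

    def on_audio(self, utterance: str) -> None:
        if self.mode is PowerMode.LOW_POWER:
            # Respond to the oral command locally, without any Internet communication.
            if self.wake_word in utterance.lower():
                self.mode = PowerMode.ACTIVE
            return

        # ACTIVE mode: spoken input may be communicated over the Internet for processing.
        request = self.send_to_cloud(utterance)
        if request.get("intent") == "start_call":
            self.initiate_voice_channel(request["callee"])
        # After servicing the request, drop back to the power-saving first mode.
        self.mode = PowerMode.LOW_POWER

    def send_to_cloud(self, utterance: str) -> dict:
        # Stand-in for a remote speech/NLU service (assumed interface, not the patented system).
        if "call" in utterance.lower():
            return {"intent": "start_call", "callee": utterance.split()[-1]}
        return {"intent": "chat"}

    def initiate_voice_channel(self, callee: str) -> None:
        print(f"Opening interactive voice channel to {callee} ...")


if __name__ == "__main__":
    device = InterfaceDevice()
    device.on_audio("hello there")        # local wake: enters the second mode
    device.on_audio("please call Alice")  # cloud NLU, opens a voice channel, returns to the first mode
```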
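
Claim 26 above walks through a multi-step pipeline: determine the query and topic from the spoken input and interaction history, recognize gestures from the camera image, infer the user's mood from speech and gesture, gather and receive topical and sponsored content, and formulate a context-dependent reply. The sketch below shows one way those steps could compose; every function, field, and heuristic here is an illustrative assumption, not the claimed implementation.

```python
from dataclasses import dataclass


@dataclass
class Turn:
    """One interaction turn assembled per the steps of claim 26 (field names are illustrative)."""
    speech_text: str
    gestures: list
    mood: str = "neutral"
    reply: str = ""


def infer_mood(speech_text: str, gestures: list) -> str:
    # Toy fusion of the spoken input and the recognized gestures (stand-in for a classifier).
    if "great" in speech_text.lower() or "thumbs_up" in gestures:
        return "happy"
    if "tired" in speech_text.lower() or "slump" in gestures:
        return "subdued"
    return "neutral"


def formulate_reply(selected: str, gathered: list, mood: str, sponsored: str) -> str:
    # Vary the phrasing of the reply with the inferred mood, then append gathered and sponsored content.
    tone = {"happy": "Great news!", "subdued": "Take it easy.", "neutral": "Here you go."}[mood]
    extras = " ".join(gathered + ([sponsored] if sponsored else []))
    return f"{tone} {selected} {extras}".strip()


def handle_turn(speech_text: str, gestures: list, history: list) -> Turn:
    query = speech_text                        # query represented by the spoken input
    topic = history[-1] if history else query  # topic of interest from prior interaction
    mood = infer_mood(speech_text, gestures)
    response = f"Result for '{query}'"         # stand-in for the automated database response
    gathered = [f"Background on {topic}."]     # unprompted gathering on the topic of interest
    sponsored = f"[Sponsored: more about {topic}]"
    reply = formulate_reply(response, gathered, mood, sponsored)
    return Turn(speech_text, gestures, mood, reply)


if __name__ == "__main__":
    turn = handle_turn("That concert was great, what else is on?", ["thumbs_up"], ["live music"])
    print(turn.mood, "->", turn.reply)
```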
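
Claims 22, 24, and 26 above also tie the presentation itself (the spoken output and the displayed avatar or anthropomorphic object) to the determined or inferred user mood. A minimal sketch of that final step follows, assuming a hypothetical mapping from mood to animation and speech-delivery parameters; the preset names and values are placeholders chosen for illustration.

```python
# Hypothetical mapping from an inferred user mood to an avatar animation and
# speech-delivery parameters; the keys and values are illustrative only.
AVATAR_PRESETS = {
    "happy":   {"animation": "smile_nod", "speech_rate": 1.1, "pitch_shift": +2},
    "subdued": {"animation": "soft_gaze", "speech_rate": 0.9, "pitch_shift": -1},
    "neutral": {"animation": "idle_blink", "speech_rate": 1.0, "pitch_shift": 0},
}


def present_reply(reply: str, user_mood: str) -> dict:
    """Choose how the anthropomorphic object delivers the reply, responsive to the user mood."""
    preset = AVATAR_PRESETS.get(user_mood, AVATAR_PRESETS["neutral"])
    return {"text": reply, **preset}


if __name__ == "__main__":
    print(present_reply("Here is today's weather.", "happy"))
```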
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a Continuation of U.S. patent application Ser. No. 13/106,575, filed May 12, 2011, now U.S. Pat. No. 9,634,855, issued Apr. 25, 2017, which claims priority benefit of provisional U.S. Patent Application Ser. No. 61/334,564, entitled ELECTRONIC PERSONAL INTERACTIVE DEVICE, filed on May 13, 2010, which applications are hereby incorporated by reference in their entirety, including all Figures, Tables, and Claims.

US Referenced Citations (1595)
Number Name Date Kind
3494068 Crosman et al. Feb 1970 A
4319229 Kirkor Mar 1982 A
4723159 Imsand Feb 1988 A
5215493 Zgrodek et al. Jun 1993 A
5774591 Black et al. Jun 1998 A
5875108 Hoffberg et al. Feb 1999 A
5877759 Bauer Mar 1999 A
5901246 Hoffberg et al. May 1999 A
5902169 Yamakawa May 1999 A
5907706 Brodsky et al. May 1999 A
6070140 Tran May 2000 A
6081750 Hoffberg et al. Jun 2000 A
6088040 Oda et al. Jul 2000 A
6108640 Slotznick Aug 2000 A
6138095 Gupta et al. Oct 2000 A
6146721 Freynet Nov 2000 A
6206829 Iliff Mar 2001 B1
6311159 Van Tichelen et al. Oct 2001 B1
6314410 Tackett et al. Nov 2001 B1
6336029 Ho et al. Jan 2002 B1
6400996 Hoffberg et al. Jun 2002 B1
6411687 Bohacek et al. Jun 2002 B1
6418424 Hoffberg et al. Jul 2002 B1
6434527 Horvitz Aug 2002 B1
6437975 Huang Aug 2002 B1
6442519 Kanevsky et al. Aug 2002 B1
6456695 Lee Sep 2002 B2
6466213 Bickmore et al. Oct 2002 B2
6480698 Ho et al. Nov 2002 B2
6482156 Iliff Nov 2002 B2
6501937 Ho et al. Dec 2002 B1
6513009 Comerford et al. Jan 2003 B1
6561811 Rapoza et al. May 2003 B2
6564261 Gudjonsson et al. May 2003 B1
6615172 Bennett et al. Sep 2003 B1
6633846 Bennett et al. Oct 2003 B1
6640145 Hoffberg et al. Oct 2003 B2
6659857 Ryan et al. Dec 2003 B2
6665640 Bennett et al. Dec 2003 B1
6681032 Bortolussi et al. Jan 2004 B2
6691151 Cheyer et al. Feb 2004 B1
6721706 Strubbe et al. Apr 2004 B1
6728679 Strubbe et al. Apr 2004 B1
6731307 Strubbe et al. May 2004 B1
6754647 Tackett et al. Jun 2004 B1
6758717 Park et al. Jul 2004 B1
6778970 Au Aug 2004 B2
6782364 Horvitz Aug 2004 B2
6785651 Wang Aug 2004 B1
6795808 Strubbe et al. Sep 2004 B1
6826540 Plantec et al. Nov 2004 B1
6842737 Stiles et al. Jan 2005 B1
6849045 Iliff Feb 2005 B2
6850252 Hoffberg Feb 2005 B1
6851115 Cheyer et al. Feb 2005 B1
6853971 Taylor Feb 2005 B2
6859931 Cheyer et al. Feb 2005 B1
6865370 Ho et al. Mar 2005 B2
6904408 McCarthy et al. Jun 2005 B1
6961748 Murrell et al. Nov 2005 B2
6970821 Shambaugh et al. Nov 2005 B1
6975970 Thorisson Dec 2005 B2
6988072 Horvitz Jan 2006 B2
7003139 Endrikhovski et al. Feb 2006 B2
7006098 Bickmore et al. Feb 2006 B2
7006881 Hoffberg et al. Feb 2006 B1
7007235 Hussein et al. Feb 2006 B1
7019749 Guo et al. Mar 2006 B2
7023979 Wu et al. Apr 2006 B1
7036128 Julia et al. Apr 2006 B1
7047226 Rubin May 2006 B2
7050977 Bennett May 2006 B1
7069560 Cheyer et al. Jun 2006 B1
7076430 Cosatto et al. Jul 2006 B1
7092928 Elad et al. Aug 2006 B1
7115393 Shu et al. Oct 2006 B2
7127497 Nonaka Oct 2006 B2
7136710 Hoffberg et al. Nov 2006 B1
7139714 Bennett et al. Nov 2006 B2
7177798 Hsu et al. Feb 2007 B2
7203646 Bennett Apr 2007 B2
7206303 Karas et al. Apr 2007 B2
7222075 Petrushin May 2007 B2
7224790 Bushey et al. May 2007 B1
7225125 Bennett et al. May 2007 B2
7225128 Kim et al. May 2007 B2
7240011 Horvitz Jul 2007 B2
7249117 Estes Jul 2007 B2
7253817 Plantec et al. Aug 2007 B1
7269568 Stiles et al. Sep 2007 B2
7277854 Bennett et al. Oct 2007 B2
7292979 Karas et al. Nov 2007 B2
7305345 Bares et al. Dec 2007 B2
7306560 Iliff Dec 2007 B2
7316000 Poole et al. Jan 2008 B2
7330787 Agrawala et al. Feb 2008 B2
7333967 Bringsjord et al. Feb 2008 B1
7343303 Meyer et al. Mar 2008 B2
7349852 Cosatto et al. Mar 2008 B2
7353177 Cosatto et al. Apr 2008 B2
7376556 Bennett May 2008 B2
7376632 Sadek et al. May 2008 B1
7379071 Liu et al. May 2008 B2
7391421 Guo et al. Jun 2008 B2
7392185 Bennett Jun 2008 B2
7398211 Wang Jul 2008 B2
7433876 Spivack et al. Oct 2008 B2
7437279 Agrawala et al. Oct 2008 B2
7451005 Hoffberg et al. Nov 2008 B2
7478047 Loyall et al. Jan 2009 B2
7480546 Kamdar et al. Jan 2009 B2
7496484 Agrawala et al. Feb 2009 B2
7502730 Wang Mar 2009 B2
7505921 Lukas et al. Mar 2009 B1
7536323 Hsieh May 2009 B2
7539676 Aravamudan et al. May 2009 B2
7542882 Agrawala et al. Jun 2009 B2
7542902 Scahill et al. Jun 2009 B2
7555431 Bennett Jun 2009 B2
7555448 Hsieh Jun 2009 B2
7574332 Ballin et al. Aug 2009 B2
7580908 Horvitz et al. Aug 2009 B1
7610556 Guo et al. Oct 2009 B2
7613663 Commons et al. Nov 2009 B1
7624007 Bennett Nov 2009 B2
7624076 Movellan et al. Nov 2009 B2
7627475 Petrushin Dec 2009 B2
7631032 Refuah et al. Dec 2009 B1
7631317 Caron Dec 2009 B2
7643985 Horvitz Jan 2010 B2
7647225 Bennett et al. Jan 2010 B2
7657424 Bennett Feb 2010 B2
7657434 Thompson et al. Feb 2010 B2
7672841 Bennett Mar 2010 B2
7672846 Washio et al. Mar 2010 B2
7672847 He et al. Mar 2010 B2
7676034 Wu et al. Mar 2010 B1
7676363 Chengalvarayan et al. Mar 2010 B2
7680514 Cook et al. Mar 2010 B2
7680658 Chung et al. Mar 2010 B2
7680661 Co et al. Mar 2010 B2
7680662 Shu et al. Mar 2010 B2
7680663 Deng Mar 2010 B2
7680666 Manabe et al. Mar 2010 B2
7680667 Sonoura et al. Mar 2010 B2
7684556 Jaiswal Mar 2010 B1
7684983 Shikano et al. Mar 2010 B2
7684998 Charles Mar 2010 B1
7685252 Maes et al. Mar 2010 B1
7689404 Khasin Mar 2010 B2
7689415 Jochumson Mar 2010 B1
7689420 Paek et al. Mar 2010 B2
7689424 Monne et al. Mar 2010 B2
7689425 Kim et al. Mar 2010 B2
7693717 Kahn et al. Apr 2010 B2
7693718 Jan et al. Apr 2010 B2
7698131 Bennett Apr 2010 B2
7698136 Nguyen et al. Apr 2010 B1
7698137 Kashima et al. Apr 2010 B2
7702505 Jung Apr 2010 B2
7702508 Bennett Apr 2010 B2
7702512 Gopinath et al. Apr 2010 B2
7702665 Huet et al. Apr 2010 B2
7707029 Seltzer et al. Apr 2010 B2
7711103 Culbertson et al. May 2010 B2
7711559 Kuboyama et al. May 2010 B2
7711560 Yamada et al. May 2010 B2
7711569 Takeuchi et al. May 2010 B2
7711571 Heiner et al. May 2010 B2
7716066 Rosow et al. May 2010 B2
7720695 Rosow et al. May 2010 B2
7720784 Froloff May 2010 B1
7725307 Bennett May 2010 B2
7725320 Bennett May 2010 B2
7725321 Bennett May 2010 B2
7729904 Bennett Jun 2010 B2
7734479 Rosow et al. Jun 2010 B2
7747785 Baker, III et al. Jun 2010 B2
7751285 Cain Jul 2010 B1
7752152 Paek et al. Jul 2010 B2
7756723 Rosow et al. Jul 2010 B2
7762665 Vertegaal et al. Jul 2010 B2
7769809 Samdadiya et al. Aug 2010 B2
7774215 Rosow et al. Aug 2010 B2
7775885 Van Luchene et al. Aug 2010 B2
7778632 Kurlander et al. Aug 2010 B2
7778948 Johnson et al. Aug 2010 B2
7813822 Hoffberg Oct 2010 B1
7814048 Zhou et al. Oct 2010 B2
7831426 Bennett Nov 2010 B2
7844467 Cosatto et al. Nov 2010 B1
7849034 Visel Dec 2010 B2
7855977 Morrison et al. Dec 2010 B2
7860921 Murrell et al. Dec 2010 B2
7873519 Bennett Jan 2011 B2
7873654 Bernard Jan 2011 B2
7877349 Huet et al. Jan 2011 B2
7882055 Estes Feb 2011 B2
7890347 Rosow et al. Feb 2011 B2
7904187 Hoffberg et al. Mar 2011 B2
7912702 Bennett Mar 2011 B2
7925743 Neely et al. Apr 2011 B2
7940914 Petrushin May 2011 B2
7941536 Murrell et al. May 2011 B2
7941540 Murrell et al. May 2011 B2
7949552 Korenblit et al. May 2011 B2
7953219 Freedman et al. May 2011 B2
7953610 Rosow et al. May 2011 B2
7962578 Makar et al. Jun 2011 B2
7966078 Hoffberg et al. Jun 2011 B2
7970664 Linden et al. Jun 2011 B2
7974714 Hoffberg Jul 2011 B2
7983411 Huet et al. Jul 2011 B2
7987003 Hoffberg et al. Jul 2011 B2
7991649 Libman Aug 2011 B2
7991770 Covell et al. Aug 2011 B2
8001067 Visel et al. Aug 2011 B2
8005720 King et al. Aug 2011 B2
8015138 Iliff Sep 2011 B2
8015143 Estes Sep 2011 B2
8019648 King et al. Sep 2011 B2
8022831 Wood-Eyre Sep 2011 B1
8027839 Da Palma et al. Sep 2011 B2
8027945 Elad et al. Sep 2011 B1
8031060 Hoffberg et al. Oct 2011 B2
8032375 Chickering et al. Oct 2011 B2
8037125 Murrell et al. Oct 2011 B2
8041570 Mirkovic et al. Oct 2011 B2
8046313 Hoffberg et al. Oct 2011 B2
8063929 Kurtz et al. Nov 2011 B2
8069131 Luechtefeld et al. Nov 2011 B1
8082353 Huber et al. Dec 2011 B2
8094551 Huber et al. Jan 2012 B2
8096660 Vertegaal et al. Jan 2012 B2
8121618 Rhoads et al. Feb 2012 B2
8121653 Marti et al. Feb 2012 B2
8135128 Marti et al. Mar 2012 B2
8135472 Fowler et al. Mar 2012 B2
8150872 Bernard Apr 2012 B2
8154578 Kurtz et al. Apr 2012 B2
8154583 Kurtz et al. Apr 2012 B2
8156054 Donovan et al. Apr 2012 B2
8156060 Borzestowski et al. Apr 2012 B2
8159519 Kurtz et al. Apr 2012 B2
8165916 Hoffberg et al. Apr 2012 B2
RE43433 Iliff May 2012 E
8167826 Oohashi et al. May 2012 B2
8175617 Rodriguez May 2012 B2
8195430 Lawler et al. Jun 2012 B2
8200493 Cosatto et al. Jun 2012 B1
8204182 Da Palma et al. Jun 2012 B2
8204884 Freedman et al. Jun 2012 B2
RE43548 Iliff Jul 2012 E
8214214 Bennett Jul 2012 B2
8224906 Mikkonen et al. Jul 2012 B2
8229734 Bennett Jul 2012 B2
8234184 Libman Jul 2012 B2
8237771 Kurtz et al. Aug 2012 B2
8239204 Da Palma et al. Aug 2012 B2
8239335 Schmidtler et al. Aug 2012 B2
8243893 Hayes, Jr. et al. Aug 2012 B2
8249886 Meyer et al. Aug 2012 B2
8253770 Kurtz et al. Aug 2012 B2
8260920 Murrell et al. Sep 2012 B2
8262714 Hulvershorn et al. Sep 2012 B2
8274544 Kurtz et al. Sep 2012 B2
8275117 Huet et al. Sep 2012 B2
8275546 Xiao et al. Sep 2012 B2
8275796 Spivack et al. Sep 2012 B2
8281246 Xiao et al. Oct 2012 B2
8285652 Biggs et al. Oct 2012 B2
8289283 Kida et al. Oct 2012 B2
8291319 Li et al. Oct 2012 B2
8292433 Vertegaal Oct 2012 B2
8296383 Lindahl Oct 2012 B2
8311838 Lindahl et al. Nov 2012 B2
8311863 Kemp Nov 2012 B1
8322856 Vertegaal et al. Dec 2012 B2
8326690 Dicker et al. Dec 2012 B2
8331228 Huber et al. Dec 2012 B2
8345665 Vieri et al. Jan 2013 B2
8346563 Hjelm et al. Jan 2013 B1
8346800 Szummer et al. Jan 2013 B2
8352268 Naik et al. Jan 2013 B2
8352272 Rogers et al. Jan 2013 B2
8352277 Bennett Jan 2013 B2
8352388 Estes Jan 2013 B2
8355919 Silverman et al. Jan 2013 B2
8364136 Hoffberg et al. Jan 2013 B2
8364694 Volkert Jan 2013 B2
8369967 Hoffberg et al. Feb 2013 B2
8370203 Dicker et al. Feb 2013 B2
8380503 Gross Feb 2013 B2
8380507 Herman et al. Feb 2013 B2
8385971 Rhoads et al. Feb 2013 B2
8386482 Gopalakrishnan Feb 2013 B2
8396714 Rogers et al. Mar 2013 B2
8401527 Weltlinger Mar 2013 B2
8407105 Linden et al. Mar 2013 B2
8411700 Mani Apr 2013 B2
8422994 Rhoads et al. Apr 2013 B2
8428908 Lawler et al. Apr 2013 B2
8433621 Linden et al. Apr 2013 B2
8442125 Covell et al. May 2013 B2
8452859 Long et al. May 2013 B2
8457959 Kaiser Jun 2013 B2
8457967 Audhkhasi et al. Jun 2013 B2
8458052 Libman Jun 2013 B2
8458278 Christie et al. Jun 2013 B2
8463726 Jerram et al. Jun 2013 B2
8464159 Refuah et al. Jun 2013 B2
8473420 Bohus et al. Jun 2013 B2
8473449 Visel Jun 2013 B2
8479225 Covell et al. Jul 2013 B2
8489115 Rodriguez et al. Jul 2013 B2
8489399 Gross Jul 2013 B2
8489769 Chuah Jul 2013 B2
8494854 Gross Jul 2013 B2
8510801 Majmundar et al. Aug 2013 B2
8516266 Hoffberg et al. Aug 2013 B2
8522312 Huber et al. Aug 2013 B2
8527861 Mercer Sep 2013 B2
8553849 Michaelis et al. Oct 2013 B2
8566097 Nakano et al. Oct 2013 B2
8566413 Horvitz Oct 2013 B2
8572076 Xiao et al. Oct 2013 B2
8574075 Dickins et al. Nov 2013 B2
8583263 Hoffberg et al. Nov 2013 B2
8583418 Silverman et al. Nov 2013 B2
8586360 Abbot et al. Nov 2013 B2
8600743 Lindahl et al. Dec 2013 B2
8600941 Raj et al. Dec 2013 B1
8602794 Cohen Dec 2013 B2
8612603 Murrell et al. Dec 2013 B2
8614431 Huppi et al. Dec 2013 B2
8620662 Bellegarda Dec 2013 B2
8620767 Linden et al. Dec 2013 B2
8638908 Leeds et al. Jan 2014 B2
8639516 Lindahl et al. Jan 2014 B2
8639638 Shae et al. Jan 2014 B2
8639716 Volkert Jan 2014 B2
8645137 Bellegarda et al. Feb 2014 B2
8649500 Cohen et al. Feb 2014 B1
8654940 Da Palma et al. Feb 2014 B2
8660355 Rodriguez et al. Feb 2014 B2
8660849 Gruber et al. Feb 2014 B2
8666928 Tunstall-Pedoe Mar 2014 B2
8670979 Gruber et al. Mar 2014 B2
8670985 Lindahl et al. Mar 2014 B2
8672482 Vertegaal et al. Mar 2014 B2
8676565 Larcheveque et al. Mar 2014 B2
8676807 Xiao et al. Mar 2014 B2
8676904 Lindahl Mar 2014 B2
8677377 Cheyer et al. Mar 2014 B2
8682649 Bellegarda Mar 2014 B2
8682667 Haughay Mar 2014 B2
8688446 Yanagihara Apr 2014 B2
8694304 Larcheveque et al. Apr 2014 B2
8696364 Cohen Apr 2014 B2
8700428 Rosow et al. Apr 2014 B2
8700641 Covell et al. Apr 2014 B2
8702432 Cohen Apr 2014 B2
8702433 Cohen Apr 2014 B2
8706472 Ramerth et al. Apr 2014 B2
8706503 Cheyer et al. Apr 2014 B2
8712776 Bellegarda et al. Apr 2014 B2
8713021 Bellegarda Apr 2014 B2
8713119 Lindahl Apr 2014 B2
8714987 Cohen May 2014 B2
8718047 Vieri et al. May 2014 B2
8719006 Bellegarda May 2014 B2
8719014 Wagner May 2014 B2
8719114 Libman May 2014 B2
8719197 Schmidtler et al. May 2014 B2
8719200 Beilby et al. May 2014 B2
8719318 Tunstall-Pedoe May 2014 B2
8731942 Cheyer et al. May 2014 B2
8737986 Rhoads et al. May 2014 B2
8744850 Gross Jun 2014 B2
8750098 Fan et al. Jun 2014 B2
8751238 James et al. Jun 2014 B2
8751428 Jerram et al. Jun 2014 B2
8755837 Rhoads et al. Jun 2014 B2
8762152 Bennett et al. Jun 2014 B2
8762156 Chen Jun 2014 B2
8762316 Jerram et al. Jun 2014 B2
8762469 Lindahl Jun 2014 B2
8768313 Rodriguez Jul 2014 B2
8768702 Mason et al. Jul 2014 B2
8768934 Jones et al. Jul 2014 B2
8775195 Stiles et al. Jul 2014 B2
8775442 Moore et al. Jul 2014 B2
8781836 Foo et al. Jul 2014 B2
8782069 Jockish et al. Jul 2014 B2
8792419 Wohlert et al. Jul 2014 B2
8799000 Guzzoni et al. Aug 2014 B2
8805110 Rhoads et al. Aug 2014 B2
8805698 Stiles et al. Aug 2014 B2
8812171 Filev et al. Aug 2014 B2
8812294 Kalb et al. Aug 2014 B2
8831205 Wu et al. Sep 2014 B1
8838659 Tunstall-Pedoe Sep 2014 B2
8849259 Rhoads et al. Sep 2014 B2
8850048 Huber et al. Sep 2014 B2
8855375 Macciola et al. Oct 2014 B2
8855712 Lord et al. Oct 2014 B2
8856878 Wohlert et al. Oct 2014 B2
8862252 Rottler et al. Oct 2014 B2
8863198 Sirpal et al. Oct 2014 B2
8873813 Tadayon et al. Oct 2014 B2
8874447 Da Palma et al. Oct 2014 B2
8879120 Thrasher et al. Nov 2014 B2
8885229 Amtrup et al. Nov 2014 B1
8886206 Lord et al. Nov 2014 B2
8886222 Rodriguez et al. Nov 2014 B1
8892419 Lundberg et al. Nov 2014 B2
8892446 Cheyer et al. Nov 2014 B2
8897437 Tan et al. Nov 2014 B1
8898098 Luechtefeld Nov 2014 B1
8898568 Bull et al. Nov 2014 B2
8903711 Lundberg et al. Dec 2014 B2
8903716 Chen et al. Dec 2014 B2
8908003 Raffle et al. Dec 2014 B2
8929877 Rhoads et al. Jan 2015 B2
8930191 Gruber et al. Jan 2015 B2
8934617 Haserodt et al. Jan 2015 B2
8935167 Bellegarda Jan 2015 B2
8942849 Maisonnier et al. Jan 2015 B2
8942986 Cheyer et al. Jan 2015 B2
8943089 Volkert Jan 2015 B2
8948372 Beall et al. Feb 2015 B1
8949126 Gross Feb 2015 B2
8949377 Makar et al. Feb 2015 B2
8958605 Amtrup et al. Feb 2015 B2
8959082 Davis et al. Feb 2015 B2
8963916 Reitan Feb 2015 B2
8965770 Petrushin Feb 2015 B2
8971587 Macciola et al. Mar 2015 B2
8972313 Ahn et al. Mar 2015 B2
8972445 Gorman et al. Mar 2015 B2
8972840 Karas et al. Mar 2015 B2
8977255 Freeman et al. Mar 2015 B2
8977293 Rodriguez et al. Mar 2015 B2
8977584 Jerram et al. Mar 2015 B2
8977632 Xiao et al. Mar 2015 B2
8989515 Shustorovich et al. Mar 2015 B2
8990235 King et al. Mar 2015 B2
8992227 Al Bandar et al. Mar 2015 B2
8995972 Cronin Mar 2015 B1
8996376 Fleizach et al. Mar 2015 B2
8996429 Francis, Jr. et al. Mar 2015 B1
9005119 Iliff Apr 2015 B2
9008724 Lord Apr 2015 B2
9019819 Huber et al. Apr 2015 B2
9020487 Brisebois et al. Apr 2015 B2
9021517 Selim Apr 2015 B2
9026660 Murrell et al. May 2015 B2
9031838 Nash et al. May 2015 B1
9053089 Bellegarda Jun 2015 B2
9055254 Selim Jun 2015 B2
9055255 Burdzinski et al. Jun 2015 B2
9058515 Amtrup et al. Jun 2015 B1
9058580 Amtrup et al. Jun 2015 B1
9060152 Sirpal et al. Jun 2015 B2
9064006 Hakkani-Tur et al. Jun 2015 B2
9064211 Visel Jun 2015 B2
9066040 Selim et al. Jun 2015 B2
9070087 Hatami-Hanza Jun 2015 B2
9070156 Linden et al. Jun 2015 B2
9075783 Wagner Jul 2015 B2
9075977 Gross Jul 2015 B2
9076448 Bennett et al. Jul 2015 B2
9077928 Milano et al. Jul 2015 B2
9081411 Kalns et al. Jul 2015 B2
9085303 Wolverton et al. Jul 2015 B2
9098492 Tunstall-Pedoe Aug 2015 B2
9100402 Lawler et al. Aug 2015 B2
9100481 O'Connor et al. Aug 2015 B2
9104670 Wadycki et al. Aug 2015 B2
9106866 de Paz et al. Aug 2015 B2
9110882 Overell et al. Aug 2015 B2
9117447 Gruber et al. Aug 2015 B2
9118771 Rodriguez Aug 2015 B2
9118864 Sirpal et al. Aug 2015 B2
9118967 Sirpal et al. Aug 2015 B2
9131053 Tan et al. Sep 2015 B1
9137417 Macciola et al. Sep 2015 B2
9141926 Kilby et al. Sep 2015 B2
9142217 Miglietta et al. Sep 2015 B2
9154626 Uba et al. Oct 2015 B2
9158841 Hu et al. Oct 2015 B2
9158967 Shustorovich et al. Oct 2015 B2
9161080 Crowe et al. Oct 2015 B2
9165187 Macciola et al. Oct 2015 B2
9165188 Thrasher et al. Oct 2015 B2
9167186 Csiki Oct 2015 B2
9167187 Dourado et al. Oct 2015 B2
9172896 de Paz et al. Oct 2015 B2
9177257 Kozloski et al. Nov 2015 B2
9177318 Shen et al. Nov 2015 B2
9183560 Abelow Nov 2015 B2
9185323 Sirpal Nov 2015 B2
9185324 Shoykher et al. Nov 2015 B2
9185325 Selim Nov 2015 B2
9189479 Spivack et al. Nov 2015 B2
9189742 London Nov 2015 B2
9189749 Estes Nov 2015 B2
9189879 Filev et al. Nov 2015 B2
9190062 Haughay Nov 2015 B2
9190063 Bennett et al. Nov 2015 B2
9190075 Cronin Nov 2015 B1
9191604 de Paz et al. Nov 2015 B2
9191708 Soto et al. Nov 2015 B2
9196245 Larcheveque et al. Nov 2015 B2
9197736 Davis et al. Nov 2015 B2
9202171 Kuhn Dec 2015 B2
9204038 Lord et al. Dec 2015 B2
9208536 Macciola et al. Dec 2015 B2
9213936 Visel Dec 2015 B2
9213940 Beilby et al. Dec 2015 B2
9215393 Voth Dec 2015 B2
9223776 Bernard Dec 2015 B2
9232064 Skiba et al. Jan 2016 B1
9232168 Sirpal Jan 2016 B2
9234744 Rhoads et al. Jan 2016 B2
9237291 Selim Jan 2016 B2
9239951 Hoffberg et al. Jan 2016 B2
9244894 Dale et al. Jan 2016 B1
9244984 Heck et al. Jan 2016 B2
9247174 Sirpal et al. Jan 2016 B2
9248172 Srivastava et al. Feb 2016 B2
9253349 Amtrup et al. Feb 2016 B2
9255248 Abbot et al. Feb 2016 B2
9256806 Aller et al. Feb 2016 B2
9258421 Matula et al. Feb 2016 B2
9258423 Beall et al. Feb 2016 B1
9262612 Cheyer Feb 2016 B2
9264503 Donovan et al. Feb 2016 B2
9264775 Milano Feb 2016 B2
9268852 King et al. Feb 2016 B2
9271039 Sirpal et al. Feb 2016 B2
9271133 Rodriguez Feb 2016 B2
9274595 Reitan Mar 2016 B2
9275042 Larcheveque et al. Mar 2016 B2
9275341 Cruse et al. Mar 2016 B2
9275641 Gelfenbeyn et al. Mar 2016 B1
9280610 Gruber et al. Mar 2016 B2
9292254 Simpson et al. Mar 2016 B2
9292952 Giuli et al. Mar 2016 B2
9298287 Heck et al. Mar 2016 B2
9299268 Aravkin et al. Mar 2016 B2
9300784 Roberts et al. Mar 2016 B2
9301003 Soto et al. Mar 2016 B2
9305101 Volkert Apr 2016 B2
9311043 Rottler et al. Apr 2016 B2
9311531 Amtrup et al. Apr 2016 B2
9318108 Gruber et al. Apr 2016 B2
9319964 Huber et al. Apr 2016 B2
9323784 King et al. Apr 2016 B2
9330381 Anzures et al. May 2016 B2
9330720 Lee May 2016 B2
9335904 Junqua et al. May 2016 B2
9336302 Swamy May 2016 B1
9338493 Van Os et al. May 2016 B2
9342742 Amtrup et al. May 2016 B2
9349100 Kozloski et al. May 2016 B2
9355312 Amtrup et al. May 2016 B2
9361886 Yanagihara Jun 2016 B2
9363457 Dourado Jun 2016 B2
9367490 Huang et al. Jun 2016 B2
9368114 Larson et al. Jun 2016 B2
9369578 Michaelis et al. Jun 2016 B2
9369654 Shoykher et al. Jun 2016 B2
9374468 George Jun 2016 B2
9374546 Milano Jun 2016 B2
9378202 Larcheveque et al. Jun 2016 B2
9380017 Gelfenbeyn et al. Jun 2016 B2
9380334 Selim et al. Jun 2016 B2
9384334 Burba et al. Jul 2016 B2
9384335 Hunt et al. Jul 2016 B2
9389729 Huppi et al. Jul 2016 B2
9392461 Huber et al. Jul 2016 B2
9396388 Amtrup et al. Jul 2016 B2
9412392 Lindahl Aug 2016 B2
9413835 Chen et al. Aug 2016 B2
9413836 Wohlert et al. Aug 2016 B2
9413868 Cronin Aug 2016 B2
9413891 Dwyer et al. Aug 2016 B2
9414108 Sirpal et al. Aug 2016 B2
9418663 Chen et al. Aug 2016 B2
9424861 Jerram et al. Aug 2016 B2
9424862 Jerram et al. Aug 2016 B2
9426515 Sirpal Aug 2016 B2
9426527 Selim et al. Aug 2016 B2
9430463 Futrell et al. Aug 2016 B2
9430570 Button et al. Aug 2016 B2
9430667 Burba et al. Aug 2016 B2
9431006 Bellegarda Aug 2016 B2
9431028 Jerram et al. Aug 2016 B2
9432742 Sirpal et al. Aug 2016 B2
9432908 Wohlert et al. Aug 2016 B2
RE46139 Kida et al. Sep 2016 E
9444924 Rodriguez et al. Sep 2016 B2
9450901 Smullen et al. Sep 2016 B1
9454760 Klemm et al. Sep 2016 B2
9454962 Tur et al. Sep 2016 B2
9456086 Wu et al. Sep 2016 B1
9462107 Rhoads et al. Oct 2016 B2
9474076 Fan et al. Oct 2016 B2
9477625 Huang et al. Oct 2016 B2
9483461 Fleizach et al. Nov 2016 B2
9483794 Amtrup et al. Nov 2016 B2
9489039 Donovan et al. Nov 2016 B2
9489625 Kalns et al. Nov 2016 B2
9489679 Mays Nov 2016 B2
9489854 Haruta et al. Nov 2016 B2
9491293 Matula et al. Nov 2016 B2
9491295 Shaffer et al. Nov 2016 B2
9495129 Fleizach et al. Nov 2016 B2
9495331 Govrin et al. Nov 2016 B2
9495787 Gusikhin et al. Nov 2016 B2
9501666 Lockett et al. Nov 2016 B2
9501741 Cheyer et al. Nov 2016 B2
9502031 Paulik et al. Nov 2016 B2
9509701 Wohlert et al. Nov 2016 B2
9509799 Cronin Nov 2016 B1
9509838 Leeds et al. Nov 2016 B2
9510040 Selim et al. Nov 2016 B2
9514357 Macciola et al. Dec 2016 B2
9514748 Reddy Dec 2016 B2
9516069 Hymus et al. Dec 2016 B2
9519681 Tunstall-Pedoe Dec 2016 B2
9521252 Leeds et al. Dec 2016 B2
9524291 Teodosiu et al. Dec 2016 B2
9531862 Vadodaria Dec 2016 B1
9535563 Hoffberg et al. Jan 2017 B2
9535906 Lee et al. Jan 2017 B2
9547647 Badaskar Jan 2017 B2
9548050 Gruber et al. Jan 2017 B2
9557162 Rodriguez et al. Jan 2017 B2
9558337 Gross Jan 2017 B2
RE46310 Hoffberg et al. Feb 2017 E
9565512 Rhoads et al. Feb 2017 B2
9569439 Davis et al. Feb 2017 B2
9571651 Hp et al. Feb 2017 B2
9571652 Zeppenfeld et al. Feb 2017 B1
9575963 Pasupalak et al. Feb 2017 B2
9576574 van Os Feb 2017 B2
9578384 Selim et al. Feb 2017 B2
9582608 Bellegarda Feb 2017 B2
9582762 Cosic Feb 2017 B1
9584984 Huber et al. Feb 2017 B2
9591427 Lyren et al. Mar 2017 B1
9595002 Leeman-Munk et al. Mar 2017 B2
9601115 Chen et al. Mar 2017 B2
9606986 Bellegarda Mar 2017 B2
9607023 Swamy Mar 2017 B1
9607046 Hakkani-Tur et al. Mar 2017 B2
9609107 Rodriguez et al. Mar 2017 B2
9614724 Menezes et al. Apr 2017 B2
9619079 Huppi et al. Apr 2017 B2
9620104 Naik et al. Apr 2017 B2
9620105 Mason Apr 2017 B2
9621669 Crowe et al. Apr 2017 B2
9626152 Kim et al. Apr 2017 B2
9626955 Fleizach et al. Apr 2017 B2
9633004 Giuli et al. Apr 2017 B2
9633660 Haughay Apr 2017 B2
9633674 Sinha Apr 2017 B2
9634855 Poltorak Apr 2017 B2
9640180 Chen et al. May 2017 B2
9641470 Smullen et al. May 2017 B2
9646609 Naik et al. May 2017 B2
9646614 Bellegarda et al. May 2017 B2
9647968 Smullen et al. May 2017 B2
9653068 Gross May 2017 B2
9654634 Fedorov et al. May 2017 B2
9668024 Os et al. May 2017 B2
9668121 Naik et al. May 2017 B2
9672467 Gilbert Jun 2017 B2
9679495 Cohen Jun 2017 B2
9684678 Hatami-Hanza Jun 2017 B2
9686582 Sirpal et al. Jun 2017 B2
9691383 Mason et al. Jun 2017 B2
9692984 Lord Jun 2017 B2
9697198 Davis Jones et al. Jul 2017 B2
9697477 Oh et al. Jul 2017 B2
9697820 Jeon Jul 2017 B2
9697822 Naik et al. Jul 2017 B1
9697823 Kuo et al. Jul 2017 B1
9697835 Kuo et al. Jul 2017 B1
9704097 Devarajan et al. Jul 2017 B2
9704103 Suskind et al. Jul 2017 B2
9711141 Henton et al. Jul 2017 B2
9715875 Piernot et al. Jul 2017 B2
9721257 Navaratnam Aug 2017 B2
9721563 Naik Aug 2017 B2
9721566 Newendorp et al. Aug 2017 B2
9722957 Dymetman et al. Aug 2017 B2
9727874 Navaratnam Aug 2017 B2
9733821 Fleizach Aug 2017 B2
9734046 Karle et al. Aug 2017 B2
9734193 Rhoten et al. Aug 2017 B2
9736308 Wu et al. Aug 2017 B1
9740677 Kim et al. Aug 2017 B2
9749766 Lyren et al. Aug 2017 B2
9760559 Dolfing et al. Sep 2017 B2
9760566 Heck et al. Sep 2017 B2
9774918 Sirpal et al. Sep 2017 B2
9775036 Huber et al. Sep 2017 B2
9785630 Willmore et al. Oct 2017 B2
9785891 Agarwal et al. Oct 2017 B2
9792279 Kim et al. Oct 2017 B2
9792903 Kim et al. Oct 2017 B2
9792909 Kim et al. Oct 2017 B2
9798393 Neels et al. Oct 2017 B2
9798799 Wolverton et al. Oct 2017 B2
9802125 Suskind et al. Oct 2017 B1
9805020 Gorman et al. Oct 2017 B2
9805309 Donovan et al. Oct 2017 B2
9807446 Sirpal et al. Oct 2017 B2
9811519 Perez Nov 2017 B2
9811935 Filev et al. Nov 2017 B2
9812127 Perez et al. Nov 2017 B1
9818400 Paulik et al. Nov 2017 B2
9819986 Shoykher et al. Nov 2017 B2
9820003 Milano et al. Nov 2017 B2
9823811 Brown et al. Nov 2017 B2
9830039 Stifelman et al. Nov 2017 B2
9830044 Brown et al. Nov 2017 B2
9836453 Radford et al. Dec 2017 B2
9836700 Bohus et al. Dec 2017 B2
9842101 Wang et al. Dec 2017 B2
9842105 Bellegarda Dec 2017 B2
9842168 Heck et al. Dec 2017 B2
9848271 Lyren et al. Dec 2017 B2
9852136 Venkataraman et al. Dec 2017 B2
9854049 Kelly et al. Dec 2017 B2
9858343 Heck et al. Jan 2018 B2
9858925 Gruber et al. Jan 2018 B2
9860391 Wu et al. Jan 2018 B1
9865248 Fleizach et al. Jan 2018 B2
9865260 Vuskovic et al. Jan 2018 B1
9865280 Sumner et al. Jan 2018 B2
9866693 Tamblyn et al. Jan 2018 B2
9871881 Crowe et al. Jan 2018 B2
9871927 Perez et al. Jan 2018 B2
9874914 Obie et al. Jan 2018 B2
9876886 McLaren et al. Jan 2018 B1
9881614 Thirukovalluru et al. Jan 2018 B1
9886432 Bellegarda et al. Feb 2018 B2
9886845 Rhoads et al. Feb 2018 B2
9886953 Lemay et al. Feb 2018 B2
9888105 Rhoads Feb 2018 B2
9899019 Bellegarda et al. Feb 2018 B2
9904370 Selim et al. Feb 2018 B2
9912810 Segre et al. Mar 2018 B2
9913130 Brisebois et al. Mar 2018 B2
9916519 Rodriguez et al. Mar 2018 B2
9916538 Zadeh et al. Mar 2018 B2
9918183 Rhoads et al. Mar 2018 B2
9922210 Oberg et al. Mar 2018 B2
9922642 Pitschel et al. Mar 2018 B2
9927879 Sirpal et al. Mar 2018 B2
9928383 York et al. Mar 2018 B2
9929982 Morris et al. Mar 2018 B2
9934775 Raitio et al. Apr 2018 B2
9934786 Miglietta et al. Apr 2018 B2
9940390 Seiber et al. Apr 2018 B1
9942783 Fan et al. Apr 2018 B2
9946706 Davidson et al. Apr 2018 B2
9946985 Macciola et al. Apr 2018 B2
9948583 Smullen et al. Apr 2018 B2
9953088 Gruber et al. Apr 2018 B2
9955012 Stolyar et al. Apr 2018 B2
9956393 Perez et al. May 2018 B2
9958987 Huppi et al. May 2018 B2
9959870 Hunt et al. May 2018 B2
9965553 Lyren May 2018 B2
9965748 Chen et al. May 2018 B2
9966060 Naik et al. May 2018 B2
9966065 Gruber et al. May 2018 B2
9966068 Cash et al. May 2018 B2
9967211 Galley et al. May 2018 B2
9967799 Wohlert et al. May 2018 B2
9971766 Pasupalak et al. May 2018 B2
9971774 Badaskar May 2018 B2
9972304 Paulik et al. May 2018 B2
9977779 Winer May 2018 B2
9978361 Sarikaya et al. May 2018 B2
9980072 Lyren et al. May 2018 B2
9986076 McLaren et al. May 2018 B1
9986419 Naik et al. May 2018 B2
9996532 Sarikaya et al. Jun 2018 B2
9997158 Chen et al. Jun 2018 B2
20010044751 Pugliese, III et al. Nov 2001 A1
20020005865 Hayes-Roth Jan 2002 A1
20020010000 Chern et al. Jan 2002 A1
20020077726 Thorisson Jun 2002 A1
20020111811 Bares et al. Aug 2002 A1
20020151992 Hoffberg et al. Oct 2002 A1
20020154124 Han Oct 2002 A1
20020194002 Petrushin Dec 2002 A1
20030017439 Rapoza et al. Jan 2003 A1
20030018510 Sanches Jan 2003 A1
20030018790 Nonaka Jan 2003 A1
20030028380 Freeland et al. Feb 2003 A1
20030028498 Hayes-Roth Feb 2003 A1
20030074222 Rosow et al. Apr 2003 A1
20030110038 Sharma et al. Jun 2003 A1
20030120593 Bansal et al. Jun 2003 A1
20030135630 Murrell et al. Jul 2003 A1
20030167195 Fernandes et al. Sep 2003 A1
20030167209 Hsieh Sep 2003 A1
20030187660 Gong Oct 2003 A1
20030191627 Au Oct 2003 A1
20030220799 Kim et al. Nov 2003 A1
20030228658 Shu et al. Dec 2003 A1
20040019560 Evans et al. Jan 2004 A1
20040024724 Rubin Feb 2004 A1
20040030556 Bennett Feb 2004 A1
20040030741 Wolton et al. Feb 2004 A1
20040054610 Amstutz et al. Mar 2004 A1
20040117189 Bennett Jun 2004 A1
20040179043 Viellescaze et al. Sep 2004 A1
20040181145 Al Bandar et al. Sep 2004 A1
20040199430 Hsieh Oct 2004 A1
20040203629 Dezonno et al. Oct 2004 A1
20040221224 Blattner Nov 2004 A1
20040236580 Bennett Nov 2004 A1
20040249635 Bennett Dec 2004 A1
20040249650 Freedman et al. Dec 2004 A1
20050080614 Bennett Apr 2005 A1
20050080625 Bennett et al. Apr 2005 A1
20050086046 Bennett Apr 2005 A1
20050086049 Bennett Apr 2005 A1
20050086059 Bennett Apr 2005 A1
20050119896 Bennett et al. Jun 2005 A1
20050119897 Bennett et al. Jun 2005 A1
20050138081 Alshab et al. Jun 2005 A1
20050144001 Bennett et al. Jun 2005 A1
20050144004 Bennett et al. Jun 2005 A1
20050213743 Huet et al. Sep 2005 A1
20050246412 Murrell et al. Nov 2005 A1
20050288954 McCarthy et al. Dec 2005 A1
20060004703 Spivack et al. Jan 2006 A1
20060010240 Chuah Jan 2006 A1
20060036430 Hu Feb 2006 A1
20060106637 Johnson et al. May 2006 A1
20060111931 Johnson et al. May 2006 A1
20060122834 Bennett Jun 2006 A1
20060155398 Hoffberg et al. Jul 2006 A1
20060165104 Kaye Jul 2006 A1
20060200253 Hoffberg et al. Sep 2006 A1
20060200258 Hoffberg et al. Sep 2006 A1
20060200259 Hoffberg et al. Sep 2006 A1
20060200260 Hoffberg et al. Sep 2006 A1
20060200353 Bennett Sep 2006 A1
20060221935 Wong et al. Oct 2006 A1
20060235696 Bennett Oct 2006 A1
20060282257 Huet et al. Dec 2006 A1
20060293921 McCarthy et al. Dec 2006 A1
20070011270 Klein et al. Jan 2007 A1
20070015121 Johnson et al. Jan 2007 A1
20070016476 Hoffberg et al. Jan 2007 A1
20070036334 Culbertson et al. Feb 2007 A1
20070053513 Hoffberg Mar 2007 A1
20070061735 Hoffberg et al. Mar 2007 A1
20070070038 Hoffberg et al. Mar 2007 A1
20070074114 Adjali et al. Mar 2007 A1
20070078294 Jain et al. Apr 2007 A1
20070082324 Johnson et al. Apr 2007 A1
20070094032 Bennett et al. Apr 2007 A1
20070127704 Marti et al. Jun 2007 A1
20070156625 Visel Jul 2007 A1
20070162283 Petrushin Jul 2007 A1
20070179789 Bennett Aug 2007 A1
20070185716 Bennett Aug 2007 A1
20070185717 Bennett Aug 2007 A1
20070198261 Chen Aug 2007 A1
20070203693 Estes Aug 2007 A1
20070206017 Johnson et al. Sep 2007 A1
20070217586 Marti et al. Sep 2007 A1
20070218987 Van Luchene et al. Sep 2007 A1
20070244980 Baker et al. Oct 2007 A1
20070250464 Hamilton Oct 2007 A1
20070255785 Hayashi et al. Nov 2007 A1
20070266042 Hsu et al. Nov 2007 A1
20070282765 Visel et al. Dec 2007 A1
20080016020 Estes Jan 2008 A1
20080021708 Bennett et al. Jan 2008 A1
20080046394 Zhou et al. Feb 2008 A1
20080052063 Bennett et al. Feb 2008 A1
20080052077 Bennett et al. Feb 2008 A1
20080052078 Bennett Feb 2008 A1
20080059153 Bennett Mar 2008 A1
20080065430 Rosow et al. Mar 2008 A1
20080065431 Rosow et al. Mar 2008 A1
20080065432 Rosow et al. Mar 2008 A1
20080065433 Rosow et al. Mar 2008 A1
20080065434 Rosow et al. Mar 2008 A1
20080089490 Mikkonen et al. Apr 2008 A1
20080091692 Keith et al. Apr 2008 A1
20080096533 Manfredi et al. Apr 2008 A1
20080101660 Seo May 2008 A1
20080162471 Bernard Jul 2008 A1
20080215327 Bennett Sep 2008 A1
20080221892 Nathan Sep 2008 A1
20080221926 Rosow et al. Sep 2008 A1
20080254419 Cohen Oct 2008 A1
20080254423 Cohen Oct 2008 A1
20080254424 Cohen Oct 2008 A1
20080254425 Cohen Oct 2008 A1
20080254426 Cohen Oct 2008 A1
20080255845 Bennett Oct 2008 A1
20080269958 Filev et al. Oct 2008 A1
20080297515 Bliss Dec 2008 A1
20080297586 Kurtz et al. Dec 2008 A1
20080297587 Kurtz et al. Dec 2008 A1
20080297588 Kurtz et al. Dec 2008 A1
20080297589 Kurtz et al. Dec 2008 A1
20080298571 Kurtz et al. Dec 2008 A1
20080300878 Bennett Dec 2008 A1
20080306959 Spivack et al. Dec 2008 A1
20080312971 Rosow et al. Dec 2008 A2
20080312972 Rosow et al. Dec 2008 A2
20080312973 Rosow et al. Dec 2008 A2
20080312974 Rosow et al. Dec 2008 A2
20080312975 Rosow et al. Dec 2008 A2
20090037398 Horvitz Feb 2009 A1
20090055190 Filev et al. Feb 2009 A1
20090055824 Rychtyckyj et al. Feb 2009 A1
20090063147 Roy Mar 2009 A1
20090063154 Gusikhin et al. Mar 2009 A1
20090064155 Giuli et al. Mar 2009 A1
20090094517 Brody et al. Apr 2009 A1
20090113033 Long et al. Apr 2009 A1
20090119127 Rosow et al. May 2009 A2
20090157401 Bennett Jun 2009 A1
20090187425 Thompson Jul 2009 A1
20090187455 Fernandes et al. Jul 2009 A1
20090193123 Mitzlaff Jul 2009 A1
20090210259 Cardot et al. Aug 2009 A1
20090222551 Neely et al. Sep 2009 A1
20090259619 Hsieh Oct 2009 A1
20090271201 Yoshizawa Oct 2009 A1
20090281804 Watanabe et al. Nov 2009 A1
20090281806 Parthasarathy Nov 2009 A1
20090281809 Reuss Nov 2009 A1
20090281966 Biggs et al. Nov 2009 A1
20090286509 Huber et al. Nov 2009 A1
20090286512 Huber et al. Nov 2009 A1
20090287483 Co et al. Nov 2009 A1
20090287484 Bushey et al. Nov 2009 A1
20090287486 Chang Nov 2009 A1
20090288140 Huber et al. Nov 2009 A1
20090292538 Barnish Nov 2009 A1
20090292778 Makar et al. Nov 2009 A1
20090299126 Fowler et al. Dec 2009 A1
20090306977 Takiguchi et al. Dec 2009 A1
20090326937 Chitsaz et al. Dec 2009 A1
20090326941 Catchpole Dec 2009 A1
20100004930 Strope et al. Jan 2010 A1
20100004932 Washio et al. Jan 2010 A1
20100005081 Bennett Jan 2010 A1
20100010814 Patel Jan 2010 A1
20100023329 Onishi Jan 2010 A1
20100023331 Duta et al. Jan 2010 A1
20100023332 Smith et al. Jan 2010 A1
20100027431 Morrison et al. Feb 2010 A1
20100030400 Komer et al. Feb 2010 A1
20100030559 Bou-Ghazale et al. Feb 2010 A1
20100030560 Yamamoto Feb 2010 A1
20100036660 Bennett Feb 2010 A1
20100040207 Bushey et al. Feb 2010 A1
20100048242 Rhoads et al. Feb 2010 A1
20100049516 Talwar et al. Feb 2010 A1
20100049521 Ruback et al. Feb 2010 A1
20100049525 Paden Feb 2010 A1
20100050078 Refuah et al. Feb 2010 A1
20100057450 Koll Mar 2010 A1
20100057451 Carraux et al. Mar 2010 A1
20100057457 Ogata et al. Mar 2010 A1
20100057461 Neubacher et al. Mar 2010 A1
20100057462 Herbig et al. Mar 2010 A1
20100063820 Seshadri Mar 2010 A1
20100070273 Rodriguez et al. Mar 2010 A1
20100070274 Cho et al. Mar 2010 A1
20100070448 Omoigui Mar 2010 A1
20100076334 Rothblatt Mar 2010 A1
20100076642 Hoffberg et al. Mar 2010 A1
20100076757 Li et al. Mar 2010 A1
20100076758 Li et al. Mar 2010 A1
20100076764 Chengalvarayan Mar 2010 A1
20100076765 Zweig et al. Mar 2010 A1
20100082340 Nakadai et al. Apr 2010 A1
20100082343 Levit et al. Apr 2010 A1
20100088096 Parsons Apr 2010 A1
20100088097 Tian et al. Apr 2010 A1
20100088098 Harada Apr 2010 A1
20100088101 Knott et al. Apr 2010 A1
20100088262 Visel et al. Apr 2010 A1
20100094626 Li et al. Apr 2010 A1
20100100378 Kroeker et al. Apr 2010 A1
20100100384 Ju et al. Apr 2010 A1
20100100828 Khandelwal et al. Apr 2010 A1
20100106497 Phillips Apr 2010 A1
20100106505 Shu Apr 2010 A1
20100145890 Donovan et al. Jun 2010 A1
20100152869 Morrison et al. Jun 2010 A1
20100180030 Murrell et al. Jul 2010 A1
20100191521 Huet et al. Jul 2010 A1
20100205541 Rapaport Aug 2010 A1
20100211683 Murrell et al. Aug 2010 A1
20100228540 Bennett Sep 2010 A1
20100228565 Rosow et al. Sep 2010 A1
20100235175 Donovan et al. Sep 2010 A1
20100235341 Bennett Sep 2010 A1
20100238262 Kurtz et al. Sep 2010 A1
20100245532 Kurtz et al. Sep 2010 A1
20100250196 Lawler et al. Sep 2010 A1
20100251147 Donovan et al. Sep 2010 A1
20100265834 Michaelis et al. Oct 2010 A1
20100266115 Fedorov et al. Oct 2010 A1
20100266116 Stolyar et al. Oct 2010 A1
20100274847 Anderson et al. Oct 2010 A1
20100322391 Michaelis et al. Dec 2010 A1
20100324926 Rosow et al. Dec 2010 A1
20100332231 Nakano et al. Dec 2010 A1
20100332648 Bohus et al. Dec 2010 A1
20100332842 Kalaboukis Dec 2010 A1
20110010367 Jockish et al. Jan 2011 A1
20110014932 Estevez Jan 2011 A1
20110034176 Lord et al. Feb 2011 A1
20110055186 Gopalakrishnan Mar 2011 A1
20110063404 Raffle et al. Mar 2011 A1
20110093271 Bernard Apr 2011 A1
20110093913 Wohlert et al. Apr 2011 A1
20110098029 Rhoads et al. Apr 2011 A1
20110098056 Rhoads et al. Apr 2011 A1
20110116505 Hymus et al. May 2011 A1
20110125793 Erhart et al. May 2011 A1
20110143811 Rodriguez Jun 2011 A1
20110152729 Oohashi et al. Jun 2011 A1
20110156896 Hoffberg et al. Jun 2011 A1
20110161076 Davis et al. Jun 2011 A1
20110165945 Dickins Jul 2011 A1
20110167078 Benjamin Jul 2011 A1
20110167110 Hoffberg et al. Jul 2011 A1
20110178803 Petrushin Jul 2011 A1
20110206198 Freedman et al. Aug 2011 A1
20110208798 Murrell et al. Aug 2011 A1
20110212717 Rhoads et al. Sep 2011 A1
20110213642 Makar et al. Sep 2011 A1
20110231203 Rosow et al. Sep 2011 A1
20110235530 Mani Sep 2011 A1
20110235797 Huet et al. Sep 2011 A1
20110238408 Larcheveque et al. Sep 2011 A1
20110238409 Larcheveque et al. Sep 2011 A1
20110238410 Larcheveque et al. Sep 2011 A1
20110244919 Aller et al. Oct 2011 A1
20110249658 Wohlert et al. Oct 2011 A1
20110250895 Wohlert et al. Oct 2011 A1
20110252011 Morris et al. Oct 2011 A1
20110275350 Weltlinger Nov 2011 A1
20110283190 Poltorak Nov 2011 A1
20110307496 Jones et al. Dec 2011 A1
20110313919 Evans et al. Dec 2011 A1
20110320277 Isaacs Dec 2011 A1
20110320951 Paillet et al. Dec 2011 A1
20120026865 Fan et al. Feb 2012 A1
20120036016 Hoffberg et al. Feb 2012 A1
20120047261 Murrell et al. Feb 2012 A1
20120052476 Graesser et al. Mar 2012 A1
20120059776 Estes Mar 2012 A1
20120066259 Huber et al. Mar 2012 A1
20120069131 Abelow Mar 2012 A1
20120078700 Pugliese et al. Mar 2012 A1
20120083246 Huber et al. Apr 2012 A1
20120089394 Teodosiu et al. Apr 2012 A1
20120094643 Brisebois et al. Apr 2012 A1
20120101865 Zhakov Apr 2012 A1
20120102050 Button et al. Apr 2012 A1
20120134480 Leeds et al. May 2012 A1
20120150651 Hoffberg et al. Jun 2012 A1
20120165046 Rhoads et al. Jun 2012 A1
20120191629 Shae et al. Jul 2012 A1
20120191716 Omoigui Jul 2012 A1
20120197824 Donovan et al. Aug 2012 A1
20120210171 Lawler et al. Aug 2012 A1
20120218436 Rhoads et al. Aug 2012 A1
20120220311 Rodriguez et al. Aug 2012 A1
20120221502 Jerram et al. Aug 2012 A1
20120232907 Ivey Sep 2012 A1
20120258776 Lord et al. Oct 2012 A1
20120259891 Edoja Oct 2012 A1
20120265531 Bennett Oct 2012 A1
20120271625 Bernard Oct 2012 A1
20120317294 Murrell et al. Dec 2012 A1
20120330869 Durham Dec 2012 A1
20120330874 Jerram et al. Dec 2012 A1
20130031476 Coin et al. Jan 2013 A1
20130050260 Reitan Feb 2013 A1
20130079002 Huber et al. Mar 2013 A1
20130091090 Spivack et al. Apr 2013 A1
20130106682 Davis et al. May 2013 A1
20130106683 Davis et al. May 2013 A1
20130106685 Davis et al. May 2013 A1
20130106695 Davis et al. May 2013 A1
20130106892 Davis et al. May 2013 A1
20130106893 Davis et al. May 2013 A1
20130106894 Davis et al. May 2013 A1
20130110565 Means, Jr. et al. May 2013 A1
20130110804 Davis et al. May 2013 A1
20130124435 Estes May 2013 A1
20130128060 Rhoads et al. May 2013 A1
20130132318 Tanimoto et al. May 2013 A1
20130135332 Davis et al. May 2013 A1
20130138665 Hu et al. May 2013 A1
20130148525 Sanchez et al. Jun 2013 A1
20130159235 Hatami-Hanza Jun 2013 A1
20130173281 Rosow et al. Jul 2013 A1
20130204619 Berman et al. Aug 2013 A1
20130204813 Master et al. Aug 2013 A1
20130212501 Anderson et al. Aug 2013 A1
20130217440 Lord et al. Aug 2013 A1
20130218339 Maisonnier et al. Aug 2013 A1
20130219357 Reitan Aug 2013 A1
20130222371 Reitan Aug 2013 A1
20130226758 Reitan Aug 2013 A1
20130226847 Cruse et al. Aug 2013 A1
20130229433 Reitan Sep 2013 A1
20130232430 Reitan Sep 2013 A1
20130234933 Reitan Sep 2013 A1
20130235034 Reitan Sep 2013 A1
20130235079 Reitan Sep 2013 A1
20130238778 Reitan Sep 2013 A1
20130246392 Farmaner et al. Sep 2013 A1
20130246512 Lawler et al. Sep 2013 A1
20130249947 Reitan Sep 2013 A1
20130249948 Reitan Sep 2013 A1
20130252604 Huber et al. Sep 2013 A1
20130262096 Wilhelms-Tricarico et al. Oct 2013 A1
20130262107 Bernard Oct 2013 A1
20130266925 Nunamaker, Jr. et al. Oct 2013 A1
20130268260 Lundberg et al. Oct 2013 A1
20130273968 Rhoads et al. Oct 2013 A1
20130294648 Rhoads et al. Nov 2013 A1
20130295881 Wohlert et al. Nov 2013 A1
20130295894 Rhoads et al. Nov 2013 A1
20130303119 Huber et al. Nov 2013 A1
20130317826 Jerram et al. Nov 2013 A1
20130324161 Rhoads et al. Dec 2013 A1
20130335407 Reitan Dec 2013 A1
20130346066 Deoras et al. Dec 2013 A1
20140012574 Pasupalak et al. Jan 2014 A1
20140019116 Lundberg et al. Jan 2014 A1
20140029472 Michaelis et al. Jan 2014 A1
20140040312 Gorman et al. Feb 2014 A1
20140049651 Voth Feb 2014 A1
20140049691 Burdzinski et al. Feb 2014 A1
20140049692 Sirpal et al. Feb 2014 A1
20140049693 Selim et al. Feb 2014 A1
20140049696 Sirpal et al. Feb 2014 A1
20140052785 Sirpal Feb 2014 A1
20140052786 de Paz Feb 2014 A1
20140053176 Milano et al. Feb 2014 A1
20140053177 Voth Feb 2014 A1
20140053178 Voth et al. Feb 2014 A1
20140053179 Voth Feb 2014 A1
20140053180 Shoykher Feb 2014 A1
20140053190 Sirpal Feb 2014 A1
20140053191 Selim Feb 2014 A1
20140053192 Sirpal Feb 2014 A1
20140053193 Selim et al. Feb 2014 A1
20140053194 Shoykher et al. Feb 2014 A1
20140053195 Sirpal et al. Feb 2014 A1
20140053196 Selim Feb 2014 A1
20140053197 Shoykher et al. Feb 2014 A1
20140053198 Sirpal et al. Feb 2014 A1
20140053200 de Paz et al. Feb 2014 A1
20140053202 Selim Feb 2014 A1
20140053203 Csiki Feb 2014 A1
20140053204 Milano Feb 2014 A1
20140053205 Sirpal et al. Feb 2014 A1
20140053206 Shoykher Feb 2014 A1
20140053207 Shoykher et al. Feb 2014 A1
20140053208 Sirpal et al. Feb 2014 A1
20140053211 Milano Feb 2014 A1
20140053212 Shoykher et al. Feb 2014 A1
20140053221 Sirpal et al. Feb 2014 A1
20140053222 Shoykher et al. Feb 2014 A1
20140053225 Shoykher et al. Feb 2014 A1
20140055673 Sirpal et al. Feb 2014 A1
20140059480 de Paz et al. Feb 2014 A1
20140059578 Voth et al. Feb 2014 A1
20140059589 Sirpal Feb 2014 A1
20140059596 Dourado Feb 2014 A1
20140059598 Milano Feb 2014 A1
20140059599 Sirpal et al. Feb 2014 A1
20140059600 Dourado Feb 2014 A1
20140059601 Sirpal Feb 2014 A1
20140059602 Sirpal Feb 2014 A1
20140059603 Lee et al. Feb 2014 A1
20140059605 Sirpal et al. Feb 2014 A1
20140059606 Selim et al. Feb 2014 A1
20140059609 Dourado Feb 2014 A1
20140059610 Sirpal et al. Feb 2014 A1
20140059612 Selim Feb 2014 A1
20140059613 Burdzinski et al. Feb 2014 A1
20140059614 Shoykher et al. Feb 2014 A1
20140059615 Sirpal et al. Feb 2014 A1
20140059625 Dourado et al. Feb 2014 A1
20140059626 Selim Feb 2014 A1
20140059635 Sirpal et al. Feb 2014 A1
20140059637 Chen et al. Feb 2014 A1
20140063061 Reitan Mar 2014 A1
20140067375 Wooters Mar 2014 A1
20140067954 Sirpal Mar 2014 A1
20140068673 Sirpal et al. Mar 2014 A1
20140068674 Sirpal et al. Mar 2014 A1
20140068682 Selim et al. Mar 2014 A1
20140068683 Selim et al. Mar 2014 A1
20140068685 Selim et al. Mar 2014 A1
20140068689 Sirpal et al. Mar 2014 A1
20140071272 Rodriguez et al. Mar 2014 A1
20140075475 Sirpal et al. Mar 2014 A1
20140075476 de Paz et al. Mar 2014 A1
20140075477 de Paz et al. Mar 2014 A1
20140075479 Soto et al. Mar 2014 A1
20140075483 de Paz et al. Mar 2014 A1
20140075484 Selim et al. Mar 2014 A1
20140075487 Selim Mar 2014 A1
20140079297 Tadayon et al. Mar 2014 A1
20140080428 Rhoads et al. Mar 2014 A1
20140086399 Haserodt et al. Mar 2014 A1
20140089241 Hoffberg et al. Mar 2014 A1
20140093849 Ahn et al. Apr 2014 A1
20140101319 Murrell et al. Apr 2014 A1
20140114886 Mays Apr 2014 A1
20140115633 Selim et al. Apr 2014 A1
20140129418 Jerram et al. May 2014 A1
20140129651 Gelfenbeyn et al. May 2014 A1
20140136013 Wolverton et al. May 2014 A1
20140136187 Wolverton et al. May 2014 A1
20140146644 Chen May 2014 A1
20140161250 Leeds et al. Jun 2014 A1
20140172899 Hakkani-Tur et al. Jun 2014 A1
20140173452 Hoffberg et al. Jun 2014 A1
20140177813 Leeds et al. Jun 2014 A1
20140180159 Rothblatt Jun 2014 A1
20140200891 Larcheveque et al. Jul 2014 A1
20140201126 Zadeh et al. Jul 2014 A1
20140207441 Larcheveque et al. Jul 2014 A1
20140229405 Govrin et al. Aug 2014 A1
20140235261 Fan et al. Aug 2014 A1
20140250145 Jones et al. Sep 2014 A1
20140254776 O'Connor et al. Sep 2014 A1
20140254790 Shaffer et al. Sep 2014 A1
20140255895 Shaffer et al. Sep 2014 A1
20140270138 Uba et al. Sep 2014 A1
20140279719 Bohus et al. Sep 2014 A1
20140287767 Wohlert et al. Sep 2014 A1
20140297268 Govrin et al. Oct 2014 A1
20140297568 Beilby et al. Oct 2014 A1
20140313208 Filev et al. Oct 2014 A1
20140316785 Bennett et al. Oct 2014 A1
20140317030 Shen et al. Oct 2014 A1
20140317193 Mitzlaff Oct 2014 A1
20140323142 Rodriguez et al. Oct 2014 A1
20140333794 Rhoads et al. Nov 2014 A1
20140337266 Kalns et al. Nov 2014 A1
20140337733 Rodriguez et al. Nov 2014 A1
20140337814 Kalns et al. Nov 2014 A1
20140342703 Huber et al. Nov 2014 A1
20140343950 Simpson et al. Nov 2014 A1
20140351765 Rodriguez et al. Nov 2014 A1
20140358549 O'Connor et al. Dec 2014 A1
20140359439 Lyren Dec 2014 A1
20140370852 Wohlert et al. Dec 2014 A1
20140379923 Oberg et al. Dec 2014 A1
20140380425 Lockett et al. Dec 2014 A1
20150003595 Yaghi et al. Jan 2015 A1
20150011194 Rodriguez Jan 2015 A1
20150012464 Gilbert Jan 2015 A1
20150022675 Lord et al. Jan 2015 A1
20150024800 Rodriguez et al. Jan 2015 A1
20150066479 Pasupalak et al. Mar 2015 A1
20150072321 Cohen Mar 2015 A1
20150081361 Lee et al. Mar 2015 A1
20150089399 Megill et al. Mar 2015 A1
20150100157 Houssin et al. Apr 2015 A1
20150112666 Jerram et al. Apr 2015 A1
20150112895 Jerram et al. Apr 2015 A1
20150127558 Erhart et al. May 2015 A1
20150134325 Skiba et al. May 2015 A1
20150142704 London May 2015 A1
20150142706 Gilbert May 2015 A1
20150156548 Sirpal et al. Jun 2015 A1
20150156554 Sirpal et al. Jun 2015 A1
20150161651 Rodriguez et al. Jun 2015 A1
20150161656 Rodriguez et al. Jun 2015 A1
20150163358 Klemm et al. Jun 2015 A1
20150163361 George Jun 2015 A1
20150163537 Sirpal et al. Jun 2015 A1
20150170236 O'Connor et al. Jun 2015 A1
20150170671 Jerram et al. Jun 2015 A1
20150172765 Shoykher et al. Jun 2015 A1
20150178392 Jockisch et al. Jun 2015 A1
20150185996 Brown et al. Jul 2015 A1
20150186154 Brown et al. Jul 2015 A1
20150186155 Brown et al. Jul 2015 A1
20150186156 Brown et al. Jul 2015 A1
20150186504 Gorman et al. Jul 2015 A1
20150189390 Sirpal et al. Jul 2015 A1
20150189585 Huber et al. Jul 2015 A1
20150195406 Dwyer et al. Jul 2015 A1
20150201147 Sirpal et al. Jul 2015 A1
20150207938 Shaffer et al. Jul 2015 A1
20150208135 Sirpal et al. Jul 2015 A1
20150208231 Brisebois et al. Jul 2015 A1
20150227559 Hatami-Hanza Aug 2015 A1
20150244850 Rodriguez et al. Aug 2015 A1
20150262016 Rothblatt Sep 2015 A1
20150281760 Sirpal et al. Oct 2015 A1
20150302536 Wahl et al. Oct 2015 A1
20150304797 Rhoads et al. Oct 2015 A1
20150319305 Matula et al. Nov 2015 A1
20150324727 Erhart et al. Nov 2015 A1
20150356127 Pierre et al. Dec 2015 A1
20150358525 Lord Dec 2015 A1
20160006875 Burmeister et al. Jan 2016 A1
20160012123 Hu et al. Jan 2016 A1
20160014222 Chen et al. Jan 2016 A1
20160014233 Chen et al. Jan 2016 A1
20160035353 Chen et al. Feb 2016 A1
20160037207 Soto et al. Feb 2016 A1
20160044362 Shoykher et al. Feb 2016 A1
20160044380 Barrett Feb 2016 A1
20160050462 Sirpal et al. Feb 2016 A1
20160055563 Grandhi Feb 2016 A1
20160057480 Selim et al. Feb 2016 A1
20160057502 Sirpal et al. Feb 2016 A1
20160066022 Sirpal et al. Mar 2016 A1
20160066023 Selim et al. Mar 2016 A1
20160066047 Sirpal et al. Mar 2016 A1
20160071517 Beaver et al. Mar 2016 A1
20160078866 Gelfenbeyn et al. Mar 2016 A1
20160086108 Abelow Mar 2016 A1
20160092522 Harden et al. Mar 2016 A1
20160092567 Li et al. Mar 2016 A1
20160094490 Li et al. Mar 2016 A1
20160094492 Li et al. Mar 2016 A1
20160094506 Harden et al. Mar 2016 A1
20160094507 Li et al. Mar 2016 A1
20160098663 Skiba et al. Apr 2016 A1
20160112567 Matula et al. Apr 2016 A1
20160117593 London Apr 2016 A1
20160117598 Donovan et al. Apr 2016 A1
20160119675 Voth et al. Apr 2016 A1
20160124945 Cruse et al. May 2016 A1
20160125200 York et al. May 2016 A1
20160127282 Nezarati et al. May 2016 A1
20160140236 Estes May 2016 A1
20160154631 Cruse et al. Jun 2016 A1
20160165316 Selim et al. Jun 2016 A1
20160170946 Lee Jun 2016 A1
20160171387 Suskind Jun 2016 A1
20160182958 Milano et al. Jun 2016 A1
20160205621 Huber et al. Jul 2016 A1
20160210116 Kim et al. Jul 2016 A1
20160210117 Kim et al. Jul 2016 A1
20160210279 Kim et al. Jul 2016 A1
20160210962 Kim et al. Jul 2016 A1
20160210963 Kim et al. Jul 2016 A1
20160217784 Gelfenbeyn et al. Jul 2016 A1
20160218933 Porras et al. Jul 2016 A1
20160219048 Porras et al. Jul 2016 A1
20160219078 Porras et al. Jul 2016 A1
20160220903 Miller et al. Aug 2016 A1
20160225372 Cheung et al. Aug 2016 A1
20160239480 Larcheveque et al. Aug 2016 A1
20160259767 Gelfenbeyn et al. Sep 2016 A1
20160259775 Gelfenbeyn et al. Sep 2016 A1
20160260029 Gelfenbeyn et al. Sep 2016 A1
20160285798 Smullen et al. Sep 2016 A1
20160285881 Huber et al. Sep 2016 A1
20160293043 Lacroix et al. Oct 2016 A1
20160294739 Stoehr et al. Oct 2016 A1
20160316055 Wohlert et al. Oct 2016 A1
20160328667 Macciola et al. Nov 2016 A1
20160335606 Chen et al. Nov 2016 A1
20160343378 Chen et al. Nov 2016 A1
20160349935 Gelfenbeyn et al. Dec 2016 A1
20160350101 Gelfenbeyn et al. Dec 2016 A1
20160351193 Chen et al. Dec 2016 A1
20160352656 Galley et al. Dec 2016 A1
20160352657 Galley et al. Dec 2016 A1
20160352903 Hp et al. Dec 2016 A1
20160360970 Tzvieli et al. Dec 2016 A1
20160379082 Rodriguez et al. Dec 2016 A1
20170004645 Donovan et al. Jan 2017 A1
20170011232 Xue et al. Jan 2017 A1
20170011233 Xue et al. Jan 2017 A1
20170011745 Navaratnam Jan 2017 A1
20170012907 Smullen et al. Jan 2017 A1
20170013127 Xue et al. Jan 2017 A1
20170013536 Wohlert et al. Jan 2017 A1
20170026514 Dwyer et al. Jan 2017 A1
20170032377 Navaratnam Feb 2017 A1
20170034718 Fan et al. Feb 2017 A1
20170041797 Wohlert et al. Feb 2017 A1
20170048170 Smullen et al. Feb 2017 A1
20170060835 Radford et al. Mar 2017 A1
20170068551 Vadodaria Mar 2017 A1
20170070478 Park et al. Mar 2017 A1
20170075877 Lepeltier Mar 2017 A1
20170075944 Overman Mar 2017 A1
20170075979 Overman Mar 2017 A1
20170076111 Overman Mar 2017 A1
20170078374 Overman Mar 2017 A1
20170078448 Overman Mar 2017 A1
20170080207 Perez et al. Mar 2017 A1
20170091171 Perez Mar 2017 A1
20170091312 Ajmera et al. Mar 2017 A1
20170097928 Davis Jones et al. Apr 2017 A1
20170099521 Sirpal et al. Apr 2017 A1
20170103329 Reddy Apr 2017 A1
20170116982 Gelfenbeyn et al. Apr 2017 A1
20170119295 Twyman et al. May 2017 A1
20170124457 Jerram et al. May 2017 A1
20170124460 Jerram et al. May 2017 A1
20170148431 Catanzaro et al. May 2017 A1
20170148433 Catanzaro et al. May 2017 A1
20170148434 Monceaux et al. May 2017 A1
20170160813 Divakaran et al. Jun 2017 A1
20170161372 Fernández et al. Jun 2017 A1
20170164037 Selim et al. Jun 2017 A1
20170173262 Veltz Jun 2017 A1
20170178005 Kumar et al. Jun 2017 A1
20170178144 Follett et al. Jun 2017 A1
20170180284 Smullen et al. Jun 2017 A1
20170180499 Gelfenbeyn et al. Jun 2017 A1
20170185582 Gelfenbeyn et al. Jun 2017 A1
20170185945 Matula et al. Jun 2017 A1
20170186115 Sheppard et al. Jun 2017 A1
20170188168 Lyren et al. Jun 2017 A1
20170193997 Chen et al. Jul 2017 A1
20170199909 Hakkani-Tur et al. Jul 2017 A1
20170200075 Suskind et al. Jul 2017 A1
20170214701 Hasan Jul 2017 A1
20170214799 Perez et al. Jul 2017 A1
20170215028 Rhoads et al. Jul 2017 A1
20170221483 Poltorak Aug 2017 A1
20170221484 Poltorak Aug 2017 A1
20170228367 Pasupalak et al. Aug 2017 A1
20170236407 Rhoads et al. Aug 2017 A1
20170236524 Ray et al. Aug 2017 A1
20170245081 Lyren et al. Aug 2017 A1
20170249387 Hatami-Hanza Aug 2017 A1
20170250930 Ben-Itzhak Aug 2017 A1
20170251985 Howard Sep 2017 A1
20170256257 Froelich Sep 2017 A1
20170256258 Froelich Sep 2017 A1
20170256259 Froelich Sep 2017 A1
20170256261 Froelich Sep 2017 A1
20170257474 Rhoads et al. Sep 2017 A1
20170269946 Mays Sep 2017 A1
20170270822 Cohen Sep 2017 A1
20170285641 Goldman-Shenhar et al. Oct 2017 A1
20170287469 Kuo et al. Oct 2017 A1
20170288942 Plumb et al. Oct 2017 A1
20170288943 Plumb et al. Oct 2017 A1
20170289069 Plumb et al. Oct 2017 A1
20170289070 Plumb et al. Oct 2017 A1
20170289341 Rodriguez et al. Oct 2017 A1
20170299426 Lee et al. Oct 2017 A1
20170300648 Charlap Oct 2017 A1
20170308904 Navaratnam Oct 2017 A1
20170308905 Navaratnam Oct 2017 A1
20170316777 Perez et al. Nov 2017 A1
20170324866 Segre et al. Nov 2017 A1
20170324867 Tamblyn et al. Nov 2017 A1
20170324868 Tamblyn et al. Nov 2017 A1
20170334066 Levine et al. Nov 2017 A1
20170339503 Lyren et al. Nov 2017 A1
20170344532 Zhou et al. Nov 2017 A1
20170344886 Tong Nov 2017 A1
20170344889 Sengupta et al. Nov 2017 A1
20170345334 DiGiorgio Nov 2017 A1
20170347146 Selim et al. Nov 2017 A1
20170353405 O'Driscoll et al. Dec 2017 A1
20170353582 Zavesky et al. Dec 2017 A1
20170364336 Khan et al. Dec 2017 A1
20170364505 Sarikaya et al. Dec 2017 A1
20170365250 Sarikaya et al. Dec 2017 A1
20170366478 Mohammed et al. Dec 2017 A1
20170366479 Ladha et al. Dec 2017 A1
20170366842 Shoykher et al. Dec 2017 A1
20170371885 Aggarwal et al. Dec 2017 A1
20170372703 Sung et al. Dec 2017 A1
20170373992 Nair Dec 2017 A1
20180000347 Perez et al. Jan 2018 A1
20180006978 Smullen et al. Jan 2018 A1
20180007199 Quilici et al. Jan 2018 A1
20180011843 Lee et al. Jan 2018 A1
20180024644 Sirpal et al. Jan 2018 A1
20180025275 Jerram et al. Jan 2018 A1
20180025726 Gatte de Bayser et al. Jan 2018 A1
20180032889 Donovan et al. Feb 2018 A1
20180046923 Jerram et al. Feb 2018 A1
20180047201 Filev et al. Feb 2018 A1
20180048594 de Silva et al. Feb 2018 A1
20180052664 Zhang et al. Feb 2018 A1
20180053119 Zeng et al. Feb 2018 A1
20180054464 Zhang et al. Feb 2018 A1
20180054523 Zhang et al. Feb 2018 A1
20180060301 Li et al. Mar 2018 A1
20180060303 Sarikaya et al. Mar 2018 A1
20180061408 Andreas et al. Mar 2018 A1
20180063568 Shoykher et al. Mar 2018 A1
20180068234 Bohus et al. Mar 2018 A1
20180075335 Braz et al. Mar 2018 A1
20180075847 Lee et al. Mar 2018 A1
20180077131 Averboch et al. Mar 2018 A1
20180078215 Park et al. Mar 2018 A1
20180078754 Perez et al. Mar 2018 A1
20180084359 Lyren et al. Mar 2018 A1
20180085580 Perez et al. Mar 2018 A1
20180089163 Ben Ami et al. Mar 2018 A1
20180089315 Seiber et al. Mar 2018 A1
20180090135 Schlesinger et al. Mar 2018 A1
20180090141 Periorellis et al. Mar 2018 A1
20180096686 Borsutsky et al. Apr 2018 A1
20180098030 Morabia et al. Apr 2018 A1
20180101854 Jones-McFadden et al. Apr 2018 A1
20180108050 Halstvedt et al. Apr 2018 A1
20180108343 Stevans et al. Apr 2018 A1
20180113854 Vig et al. Apr 2018 A1
20180121062 Beaver et al. May 2018 A1
20180121678 York et al. May 2018 A1
20180122363 Braz et al. May 2018 A1
20180125689 Perez et al. May 2018 A1
20180129484 Kannan et al. May 2018 A1
20180129648 Chakravarthy et al. May 2018 A1
20180129941 Gustafson et al. May 2018 A1
20180129959 Gustafson et al. May 2018 A1
20180130067 Lindsay May 2018 A1
20180130156 Grau May 2018 A1
20180130372 Vinkers et al. May 2018 A1
20180130463 Jeon et al. May 2018 A1
20180131904 Segal May 2018 A1
20180137179 Kawanabe May 2018 A1
20180137203 Hennekey et al. May 2018 A1
20180137424 Gabaldon Royval et al. May 2018 A1
20180139069 Rawlins et al. May 2018 A1
20180144738 Yasavur et al. May 2018 A1
20180158068 Ker Jun 2018 A1
20180165581 Hwang et al. Jun 2018 A1
20180165723 Wright et al. Jun 2018 A1
20180173322 de Paz et al. Jun 2018 A1
20180173714 Moussa et al. Jun 2018 A1
20180173999 Renard Jun 2018 A1
20180174055 Tirumale et al. Jun 2018 A1
20180181558 Emery et al. Jun 2018 A1
20180183735 Naydonov Jun 2018 A1
20180189400 Gordon Jul 2018 A1
20180189408 O'Driscoll et al. Jul 2018 A1
20180189695 Macciola et al. Jul 2018 A1
20180190253 O'Driscoll et al. Jul 2018 A1
20180191654 O'Driscoll et al. Jul 2018 A1
20180192082 O'Driscoll et al. Jul 2018 A1
20180192286 Brisebois et al. Jul 2018 A1
20180196874 Seiber et al. Jul 2018 A1
20180197104 Marin et al. Jul 2018 A1
20180203852 Goyal et al. Jul 2018 A1
20180204107 Tucker Jul 2018 A1
20180204111 Zadeh et al. Jul 2018 A1
20180212904 Smullen et al. Jul 2018 A1
20180218042 Krishnan et al. Aug 2018 A1
20180218080 Krishnamurthy et al. Aug 2018 A1
20180218734 Somech et al. Aug 2018 A1
20180225365 Altaf et al. Aug 2018 A1
20180226066 Harris et al. Aug 2018 A1
20180226068 Hall et al. Aug 2018 A1
20180227422 Stolyar et al. Aug 2018 A1
20180227690 Lyren et al. Aug 2018 A1
20180232376 Zhu et al. Aug 2018 A1
20180233028 Rhoads et al. Aug 2018 A1
20180239758 Cruse et al. Aug 2018 A1
20180239815 Yi et al. Aug 2018 A1
20180240162 Krishnaswamy et al. Aug 2018 A1
20180246954 Andreas et al. Aug 2018 A1
20180247649 Chen et al. Aug 2018 A1
20180248995 Rhoads Aug 2018 A1
Foreign Referenced Citations (4)
Number Date Country
2002304401 Oct 2002 JP
2007207218 Aug 2007 JP
WO2004017596 Feb 2004 WO
WO2009077901 Jun 2009 WO
Related Publications (1)
Number Date Country
20170221483 A1 Aug 2017 US
Provisional Applications (1)
Number Date Country
61334564 May 2010 US
Continuations (1)
Number Date Country
Parent 13106575 May 2011 US
Child 15492833 US