Systems and methods to present voice message information to a user of a computing device

Information

  • Patent Grant
  • Patent Number
    10,714,091
  • Date Filed
    Monday, October 1, 2018
  • Date Issued
    Tuesday, July 14, 2020
Abstract
Systems and methods to process and/or present information relating to voice messages for a user that are received from other persons. In one embodiment, a method implemented in a data processing system includes: receiving first data associated with prior communications or activities for a first user on a mobile device; receiving a voice message for the first user; transcribing the voice message using the first data to provide a transcribed message; and sending the transcribed message to the mobile device for display to the user.
Description
FIELD OF THE TECHNOLOGY

At least some embodiments disclosed herein relate to information processing systems in general, and more particularly, but not limited to, processing and/or presentation of information relating to voice messages that a user of a computing device receives from other persons (e.g., persons having called the user).


BACKGROUND

Users of mobile devices such as Android and iPhone devices typically receive voice messages from other persons (e.g., friends or business associates). When the user of the mobile device is not available, the caller often leaves a voice message. The user in many cases may have numerous voice messages to review, and may desire to take follow-up action after reviewing one or more of these messages.


SUMMARY OF THE DESCRIPTION

Systems and methods to process and/or present information for a user regarding voice messages received from other persons are described herein. Some embodiments are summarized in this section.


In one embodiment, a method includes: receiving first data associated with prior communications or activities for a first user on a mobile device of the first user; receiving, via a computing apparatus, a voice message for the first user; transcribing, via the computing apparatus, the voice message using the first data to provide a transcribed message; and sending the transcribed message to the mobile device for display to the user.


In another embodiment, a method includes causing a mobile device of a first user to: send, using the mobile device, first data to a computing apparatus, wherein the first data is associated with prior communications or activities for the first user on the mobile device; send, using the mobile device, a voice message for the first user to the computing apparatus; and receive, at the mobile device, a transcribed message from the computing apparatus, wherein the computing apparatus has transcribed the voice message using the first data to create the transcribed message.


The disclosure includes methods and apparatuses which perform these methods, including data processing systems which perform these methods, and computer readable media containing instructions which when executed on data processing systems cause the systems to perform these methods.


Other features will be apparent from the accompanying drawings and from the detailed description which follows.





BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.



FIG. 1 shows an example of a display screen provided to a user of a mobile device for reviewing voice messages according to one embodiment.



FIG. 2 shows an example of the display of several options for selection by the user for correcting misspelled words in the voice message of FIG. 1 according to one embodiment.



FIG. 3 shows an example of the display to the user of a list of voice messages awaiting review by the user according to one embodiment.



FIG. 4 shows a system to present voice message information to a user of a computing device according to one embodiment.



FIG. 5 shows a block diagram of a data processing system which can be used in various embodiments.



FIG. 6 shows a block diagram of a user device according to one embodiment.





DETAILED DESCRIPTION

The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to “one embodiment” or “an embodiment” in the present disclosure are not necessarily references to the same embodiment; such references mean at least one embodiment.


Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.


In one embodiment, a computing device (e.g., a mobile device) owned by a user stores data (e.g., in a database in the form of person profiles) associated with prior communications and/or other activity of the user on the mobile device (e.g., data extracted from prior emails received by the user). A caller calls the mobile device and leaves a voice message for the user. The caller is identified (e.g., using caller ID). A subset of social and/or other data associated with the caller is retrieved from the database of the user (e.g., a person profile of the caller and/or a predefined number of the most recent emails sent by the caller to the user). This subset of data is used by a speech recognition system to transcribe the voice message. The transcribed message is provided to the user on a display of the mobile device.


In another embodiment, the user is further presented with a list of persons and/or emails or other communications that have been referenced in the transcribed message. For example, person profiles (or links thereto) for two friends mentioned in the transcribed message may be displayed to the user on the same screen or page as the transcribed message. Also, a link to an email referenced by the caller in the transcribed message may be displayed on the same page or on another page (e.g., accessible by a link or icon on the page with the transcribed message).


Numerous examples of various types of data (e.g., person profiles for callers associated with the user) that may be collected in such a database (or collected in another form of data repository) for the user are described in U.S. patent application Ser. No. 14/792,698, incorporated by reference above.


In one embodiment, a mobile device of a user stores data (e.g., in a database in the form of person profiles) associated with prior communications and/or other activity of the user on the mobile device (e.g., data extracted from one or more of the following: prior communications such as email or text messages, voice messages, or other documents or information received by the user from the user's friends or other persons such as work associates). The other activity may include the manner or ways in which the user operates the mobile device (e.g., what buttons or functions are activated when the user has previously interacted with the caller, what online service is used by the user when previously interacting with the caller, etc.).


A caller calls the mobile device and leaves a voice message for the user. The caller is identified (e.g., using caller ID). A subset of social and/or other data associated with the caller is retrieved from the database of the user (e.g., a person profile of the caller and/or a predefined number of the most recent emails sent by the caller to the user). In one embodiment, the subset of data and the identification of the caller are sent to a speech-to-text service (e.g., an online service) along with the voice message to be transcribed. This subset of data is used by the speech recognition service to transcribe the voice message. The transcribed message is provided to the user on a display of the mobile device.
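

By way of a non-limiting illustration, this overall flow may be sketched as follows. The function name and the callables passed in are hypothetical stand-ins for the user's social-data store, the speech-to-text service, and delivery to the mobile device; they are not part of any particular embodiment.

```python
from typing import Any, Callable, Dict

def handle_voicemail(caller_number: str,
                     audio: bytes,
                     select_subset: Callable[[str], Dict[str, Any]],
                     transcribe: Callable[[bytes, Dict[str, Any]], str],
                     deliver: Callable[[str], None]) -> str:
    """End-to-end handling of a received voice message (illustrative sketch;
    names and callables are hypothetical)."""
    # Select a subset of the user's social data keyed to the identified caller
    # (e.g., the caller's person profile and recent emails from the caller).
    context = select_subset(caller_number)
    # Transcribe the recorded message, providing that subset as context.
    text = transcribe(audio, context)
    # Provide the transcribed message to the user's mobile device for display.
    deliver(text)
    return text
```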



FIG. 1 shows an example of a display screen 100 provided to a user of a mobile device for reviewing voice messages (e.g., a message from Amy Bonforte) according to one embodiment. A voice message from Amy (left for the user, who may have been unavailable or not aware of the call from Amy) has been transcribed as described above and a transcription 102 is presented under Amy's name.


When a voice message is being reviewed, a visual indicator 106 indicates progress of the playing of the message. Also, a visual cursor 114 indicates the position in the transcribed message of the words being heard by the user at that point during playback.


The transcription 102 is generated by a speech recognition system using a subset of the user's social data that is sent to the system prior to transcription. This subset of data is collected (e.g., by a server associated with the mobile device) after the voice message from Amy has been recorded. The subset may include a person profile for Amy (which includes the correct spelling of Amy's name), recent emails sent by Amy to the user, and person profiles for other persons that Amy and the user have in common (e.g., other persons that have been cc′d on emails between Amy and the user).


The speech recognition system uses the subset of social data for transcribing this particular voice message. As other voice messages for the user arrive and need transcription, a new and different subset of data is selected and sent to the speech recognition service for use in transcription of the corresponding voice message. Thus, in one embodiment, each subset of data may be unique for each voice message to be transcribed, though this is not required. Each subset of data may be sent to the speech recognition service from the server associated with the mobile phone that is storing an implicit social graph for the user, or may be sent directly from the mobile device.
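

As a non-limiting illustration of how such a per-message subset might be assembled, the following sketch selects the caller's profile, a predefined number of the caller's most recent emails to the user, and profiles of persons they have in common. The field names (“phone”, “email”, “cc”, “timestamp”) are hypothetical.

```python
from typing import Any, Dict, List

def build_transcription_context(caller_number: str,
                                profiles: List[Dict[str, Any]],
                                emails: List[Dict[str, Any]],
                                max_emails: int = 5,
                                max_profiles: int = 10) -> Dict[str, Any]:
    """Assemble the per-message subset of social data sent with a voice message
    (illustrative sketch; field names are hypothetical)."""
    # Identify the caller by matching caller ID against stored person profiles.
    caller = next((p for p in profiles if p.get("phone") == caller_number), None)
    if caller is None:
        return {"caller": None, "recent_emails": [], "related_profiles": []}

    # A predefined number of the most recent emails sent by the caller to the user.
    from_caller = [e for e in emails if e.get("from") == caller.get("email")]
    recent = sorted(from_caller, key=lambda e: e["timestamp"], reverse=True)[:max_emails]

    # Profiles of persons the user and the caller have in common,
    # e.g., persons cc'd on those recent emails.
    cc_addresses = {addr for e in recent for addr in e.get("cc", [])}
    related = [p for p in profiles if p.get("email") in cc_addresses][:max_profiles]

    return {"caller": caller, "recent_emails": recent, "related_profiles": related}
```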


The caller's name 108 (“Amy”) is correctly transcribed from use of the caller name data provided to the transcription service. The two friends 110 (Terra and Matte), although not previously known to the transcription service, are transcribed with correct spelling using the subset of data provided from the user's social data database for use in transcription.


The transcribed message mentions an email 112 (which could be other forms of prior communication). Triggered by the use of this word “email”, the system uses correlation or other matching techniques to select prior emails from the caller to the user that are most closely associated with this message (e.g., by correlation of words in the message to words in prior emails and/or by the time that has passed since a prior email was sent to the user; also, a ranking system based on relevancy may be used). The single or multiple emails selected as being most relevant are presented in list 104 (along with other relevant information referenced in the message).
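

One possible, purely illustrative way to implement such correlation is to score each prior email by word overlap with the transcribed message combined with recency, as in the following sketch (the scoring formula, weights, and field names are assumptions, not a required implementation).

```python
import re
from datetime import datetime
from typing import Any, Dict, List

def rank_related_emails(transcript: str,
                        emails: List[Dict[str, Any]],
                        now: datetime,
                        recency_weight: float = 0.5) -> List[Dict[str, Any]]:
    """Order the caller's prior emails by likely relevance to a transcribed
    message (illustrative sketch; field names are hypothetical)."""
    transcript_words = set(re.findall(r"[a-z']+", transcript.lower()))

    def score(email: Dict[str, Any]) -> float:
        text = (email.get("subject", "") + " " + email.get("body", "")).lower()
        email_words = set(re.findall(r"[a-z']+", text))
        # Correlation of words in the message to words in the prior email.
        overlap = len(transcript_words & email_words) / (len(email_words) or 1)
        # More recently sent emails are weighted higher.
        hours_old = (now - email["timestamp"]).total_seconds() / 3600.0
        recency = 1.0 / (1.0 + hours_old)
        return overlap + recency_weight * recency

    return sorted(emails, key=score, reverse=True)
```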


Links 104 may also include links to contact information, person profiles, or other information for persons (e.g., Terra and Matte) that have been referenced in the transcribed message, and these links may be presented to the user in a display on the mobile device. The links to persons and emails permit the user to click on a link 104 to initiate an action to contact the applicable person by phone or email.



FIG. 2 shows an example of a display of several options 200 for selection by the user for correcting misspelled words in the voice message of FIG. 1 according to one embodiment. When the user reviews the transcribed message, the user may select a word such as “Matte” in order to provide a corrected spelling. The options 200 presented to the user for correction are selected, at least in part, from the subset of data sent to the transcription service for transcribing the voice message. If the user selects a different spelling, the speech recognition system stores that correction and uses it in future transcriptions (e.g., for future voice messages to the user from Amy or even other callers) to improve accuracy.
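

A minimal, illustrative sketch of this correction flow appears below. The similarity measure is a hypothetical stand-in for whatever phonetic or spelling comparison the speech recognition system actually uses, and the corrections dictionary simply records the user's choice for reuse in later transcriptions.

```python
from typing import Dict, List

def correction_options(selected_word: str,
                       candidate_names: List[str],
                       max_options: int = 5) -> List[str]:
    """Rank candidate replacements for a tapped word, drawn from the subset of
    data sent to the transcription service (illustrative sketch)."""
    def similarity(a: str, b: str) -> float:
        # Crude stand-in for a phonetic/spelling comparison (hypothetical).
        a, b = a.lower(), b.lower()
        shared = sum(1 for x, y in zip(a, b) if x == y)
        return shared / (max(len(a), len(b)) or 1)

    ranked = sorted(candidate_names,
                    key=lambda name: similarity(selected_word, name),
                    reverse=True)
    return ranked[:max_options]

def remember_correction(corrections: Dict[str, str],
                        original: str, corrected: str) -> None:
    """Record the user's choice so future transcriptions can map the same word
    (or sound) to the corrected spelling."""
    corrections[original.lower()] = corrected
```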



FIG. 3 shows an example of a display to the user of a list 300 of voice messages awaiting review by the user according to one embodiment. For example, voice message 302 has been transcribed using a subset of social graph data from the user's social database (e.g., stored on the server associated with the mobile device). In one embodiment, the list 300 may be presented in a ranked order based on relevancy to the user by ranking of the person associated with each voice message. For example, this ranking may be done as described for ranking of contacts in U.S. patent application Ser. No. 14/792,698, incorporated by reference above.
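

For illustration only, such an ordering might be produced as follows, assuming a hypothetical mapping from each caller to a relevancy rank derived from the user's social graph.

```python
from typing import Any, Dict, List

def rank_voicemail_list(messages: List[Dict[str, Any]],
                        contact_rank: Dict[str, float]) -> List[Dict[str, Any]]:
    """Order pending voice messages by the relevancy rank of the person who
    left each one; unknown callers sort to the bottom (illustrative sketch,
    field names are hypothetical)."""
    return sorted(messages,
                  key=lambda m: contact_rank.get(m.get("caller_id", ""), 0.0),
                  reverse=True)
```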


Additional specific, non-limiting examples of the transcription and presentation of voice messages are discussed below. In a first example, the above approach is used to improve transcription services for voice messages as provided by a telecommunications carrier to its mobile phone subscribers (e.g., Apple iPhone or Android device subscribers using a voicemail system). A telecommunications carrier may use person profile and/or other implicit social graph data to improve its voicemail service. When a user receives a voicemail from a caller, caller ID information may be used to identify the caller. This identification (optionally along with other information and/or predefined criteria) is used to select the subset of data from the social graph data to send to a transcription service (e.g., a service used regularly by the carrier).


In another example, when the voice message is left, the subset of data sent includes the name of the person that called, and also the names and other information for persons that the user and the caller both know in common (and that will likely appear in the voice message). A relevancy ranking of these persons may also be provided. This subset of data becomes part of the voice message metadata. So, when the voice message is run through speech recognition, accuracy for names and other information in the transcribed message is improved. Thus, a context associated with the user is provided to the speech recognition system in order to better interpret words in the transcribed message.
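

An illustrative, hypothetical shape for this metadata is sketched below; the structure and field names are examples only and not required by any embodiment.

```python
from typing import Any, Dict, List

def voicemail_metadata(caller_name: str,
                       mutual_contacts: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Build the context metadata attached to a voice message before it is run
    through speech recognition: the caller's name plus ranked names of persons
    the user and the caller know in common (illustrative sketch)."""
    ranked = sorted(mutual_contacts, key=lambda c: c.get("rank", 0.0), reverse=True)
    return {
        "caller_name": caller_name,
        "expected_names": [{"name": c["name"], "rank": c.get("rank", 0.0)}
                           for c in ranked],
    }
```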


As illustrated in FIG. 1 above, names of friends are often included in voice messages, but such names are difficult for a voice processing system to recognize because the system has not previously learned them. However, the approach described above may result in providing the correct spelling of such names in the transcribed message. The names of persons (or person profiles) selected for the subset of social data may be limited to a predefined number (e.g., 10 names).


The subset of data is sent to an online service on the Internet that does speech-to-text conversion. It takes the recorded voicemail message provided by the handset or the carrier, and does the transcription. The subset of data may be provided in a server-to-server manner or via the user's smart phone to the transcription server. The online service may generate the transcription and send back the results as a text message or a webpage.
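

As a non-limiting sketch of such a submission, the following example posts the recorded message and the subset of data to a hypothetical speech-to-text endpoint; the URL, request fields, and response format are assumptions, not those of any particular service.

```python
import json
import requests  # assumes the requests package is available

def transcribe_via_service(audio_path: str,
                           context: dict,
                           endpoint: str = "https://speech.example.com/v1/transcribe") -> str:
    """Send the recorded voice message and the social-data subset to an online
    speech-to-text endpoint and return the resulting transcript (illustrative
    sketch; the endpoint and payload format are hypothetical)."""
    with open(audio_path, "rb") as audio:
        response = requests.post(
            endpoint,
            files={"audio": audio},                  # the recorded voicemail
            data={"context": json.dumps(context)},   # subset of social data as metadata
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("transcript", "")
```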


As mentioned above, the reference in a message to a prior communication (e.g., “I just sent you an email”) may be used as a trigger for selecting certain types of information related to prior communications. For example, the subset of data may include all recent emails to and/or from the caller (and may include the subject lines for these emails) to aid in the transcription of factual or other information included in the voice message (e.g., the name of a performer or concert may not be known to the speech recognition system, but may be included in a prior email). Words or other data used in or associated with recent emails may significantly improve the ability to transcribe those words, and other words, in the transcribed message and thus increase accuracy.


In one example, the user interface permits the user to make corrections to the transcribed message, as discussed above. If a word is spelled incorrectly, the user may simply tap on that word and briefly hold his finger down on the screen. Then, a list of relevant options for the user to select from appears (e.g., these options may be other likely candidates from other people in the user's social graph, such as other person names that sound like “Matte”, that the system might have chosen from in doing the transcription). This also improves the speech recognition system, which remembers the audio clip and the correction, so that this voice pattern maps more correctly in future transcriptions.


In another example, if the transcribed message references a prior email (e.g., “I just sent you an email ten minutes ago.”), the subset of data may include people who were cc′d on prior emails over the last 10 or 30 minutes or other time period as an additional set of people (whether or not the people are highly correlated to the user) in order to provide additional information to the speech recognition system.
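

One illustrative way to gather that additional set of people is sketched below, assuming hypothetical email records with “timestamp” and “cc” fields.

```python
from datetime import datetime, timedelta
from typing import Any, Dict, List, Set

def recently_ccd_addresses(emails: List[Dict[str, Any]],
                           now: datetime,
                           window_minutes: int = 30) -> Set[str]:
    """Collect addresses cc'd on emails sent within a recent time window, for
    inclusion in the subset when a voice message references a just-sent email
    (illustrative sketch; field names are hypothetical)."""
    cutoff = now - timedelta(minutes=window_minutes)
    recent = [e for e in emails if e["timestamp"] >= cutoff]
    return {addr for e in recent for addr in e.get("cc", [])}
```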


In another example, an email to a user will frequently include an introduction to a new person (e.g., “Hey Jeff, this is David. I just sent you an email introduction to Jacob, who is the founder of this start-up company I want you to talk to. Can you give Jacob a call.”). Such introductions are often followed by a phone call. The prior email is sent in the subset of data, and the speech recognition system thereby has improved accuracy in handling a name not previously encountered by the system. The subset of data may also include information from the user's database about the persons at the start-up company in order to transcribe that particular message more accurately.


In one example, the voicemail message is displayed to the user with the context (e.g., emails and contacts) believed to be referenced in the voicemail message.


In another example, based on the caller ID (from the mobile device or server having seen the caller's phone number before), the subset of data includes a small subset of the user's implicit graph, which is sent to the speech recognition system. In one example, the voice message may go simultaneously to the speech recognition system and to the user's phone. The user's smart phone can do some of the processing, but services that do voice message receipt and handling may do some or all of the processing.


For example, in a server-to-server case, the carrier sends a voice message to a service for transcription, but first pings the server associated with the user's mobile device (and that stores the user's social graph) to indicate that the user got a voicemail from a particular telephone number. The server creates a subset of social data around that telephone number that includes, as metadata, people, phone numbers, etc., that may have been referenced. The transcription is sent back to the carrier, and the carrier sends the transcription to the mobile device.
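

This server-to-server sequence may be sketched, for illustration only, as follows; the callables stand in for the carrier's ping to the social-graph server, the transcription service, and delivery to the handset, and are hypothetical.

```python
from typing import Any, Callable, Dict

def carrier_voicemail_flow(caller_number: str,
                           audio: bytes,
                           ping_graph_server: Callable[[str], Dict[str, Any]],
                           transcription_service: Callable[[bytes, Dict[str, Any]], str],
                           send_to_device: Callable[[str], None]) -> str:
    """Server-to-server sequence: the carrier first obtains a subset of social
    data around the caller's number, then submits the voice message with that
    subset for transcription, and finally forwards the result to the handset
    (illustrative sketch; callables are hypothetical)."""
    subset = ping_graph_server(caller_number)          # people, phone numbers, etc.
    transcript = transcription_service(audio, subset)  # transcription using the subset
    send_to_device(transcript)                         # carrier delivers the transcript
    return transcript
```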


In one example, the subset of data is highly targeted and highly tuned to this specific instance. The subset of data is also an implicit graph (derived by simply watching a user's prior communication habits). It does not need to be explicitly maintained like prior directory graphs.



FIG. 4 shows a system to present voice message information to a user of a computing device (e.g., a mobile device 150 such as an iPhone device) according to one embodiment. In FIG. 4, the user terminals (e.g., 141, 143, . . . , 145) and/or mobile devices including mobile device 150 are used to access a server 123 over a communication network 121.


The server 123 may include one or more web servers (or other types of data communication servers) to communicate with the user terminals (e.g., 141, 143, . . . , 145) and/or mobile devices.


The server 123 may be connected to a data storage facility to store user provided content, such as multimedia content, navigation data, preference data, etc. The server 123 may also store or have access to stored person profiles 154.


Person profiles 154 may be created and updated based on email or other communications to and from mobile device 150 and other mobile devices of various users. In an alternative embodiment, person profiles 152 may be stored in a memory of mobile device 150. During operation, mobile device 150 may access and use person profiles obtained locally from mobile device 150 or obtained over communication network 121 from server 123.


When a voice message sent or addressed to the user of mobile device 150 is received, one or more person profiles and/or data as described herein may be sent along with the voice message to a speech recognition system 160 over a communication network 121 in order to be transcribed as discussed herein.


System 160 may store person profiles 162, which may include profiles received from mobile device 150 and/or server 123. Person profiles 162 may also be received from other computing devices not illustrated in FIG. 4.


Although FIG. 4 illustrates an example system implemented in client server architecture, embodiments of the disclosure can be implemented in various alternative architectures. For example, the system can be implemented via a peer to peer network of user terminals, where content and data are shared via peer to peer communication connections.


In some embodiments, a combination of client server architecture and peer to peer architecture can be used, in which one or more centralized server may be used to provide some of the information and/or services and the peer to peer network is used to provide other information and/or services. Thus, embodiments of the disclosure are not limited to a particular architecture.



FIG. 5 shows a block diagram of a data processing system which can be used in various embodiments (e.g., to implement server 123 or speech recognition system 160). While FIG. 5 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components may also be used.


In FIG. 5, the system 201 includes an inter-connect 202 (e.g., bus and system core logic), which interconnects a microprocessor(s) 203 and memory 208. The microprocessor 203 is coupled to cache memory 204 in the example of FIG. 5.


The inter-connect 202 interconnects the microprocessor(s) 203 and the memory 208 together and also interconnects them to a display controller and display device 207 and to peripheral devices such as input/output (I/O) devices 205 through an input/output controller(s) 206. Typical I/O devices include mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices which are well known in the art.


The inter-connect 202 may include one or more buses connected to one another through various bridges, controllers and/or adapters. In one embodiment the I/O controller 206 includes a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.


The memory 208 may include ROM (Read Only Memory), and volatile RAM (Random Access Memory) and non-volatile memory, such as hard drive, flash memory, etc.


Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, or an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory.


The non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system. A non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used. In one embodiment, a data processing system as illustrated in FIG. 5 is used to implement a server or speech recognition system, and/or other servers.


In one embodiment, a data processing system as illustrated in FIG. 5 is used to implement a user terminal. A user terminal may be in the form of a personal digital assistant (PDA), a cellular phone, a notebook computer or a personal desktop computer.


In some embodiments, one or more servers of the system can be replaced with the service of a peer to peer network of a plurality of data processing systems, or a network of distributed computing systems. The peer to peer network, or a distributed computing system, can be collectively viewed as a server data processing system.


Embodiments of the disclosure can be implemented via the microprocessor(s) 203 and/or the memory 208. For example, the functionalities described can be partially implemented via hardware logic in the microprocessor(s) 203 and partially using the instructions stored in the memory 208. Some embodiments are implemented using the microprocessor(s) 203 without additional instructions stored in the memory 208. Some embodiments are implemented using the instructions stored in the memory 208 for execution by one or more general purpose microprocessor(s) 203. Thus, the disclosure is not limited to a specific configuration of hardware and/or software.



FIG. 6 shows a block diagram of a user device (e.g., mobile device 150) according to one embodiment. In FIG. 6, the user device includes an inter-connect 221 connecting the presentation device 229, user input device 231, a processor 233, a memory 227, a position identification unit 225 and a communication device 223.


In FIG. 6, the position identification unit 225 is used to identify a geographic location for user content created for sharing. The position identification unit 225 may include a satellite positioning system receiver, such as a Global Positioning System (GPS) receiver, to automatically identify the current position of the user device.


In FIG. 6, the communication device 223 is configured to communicate with a server and/or speech recognition system. In one embodiment, the user input device 231 is configured to generate user data content. The user input device 231 may include a text input device, a still image camera, a video camera, and/or a sound recorder, etc.


Various further embodiments are now described. In one embodiment, a method comprises: receiving first data associated with prior communications or activities for a first user on a mobile device of the first user; receiving, via a computing apparatus, a voice message for the first user; transcribing, via the computing apparatus, the voice message using the first data to provide a transcribed message; and sending the transcribed message to the mobile device for display to the user.


In one embodiment, the first data comprises at least one person profile including a person profile for a caller that created the voice message. In one embodiment, the voice message is created by a caller, and the first data includes a predefined number of recent messages sent by the caller to the first user.


In one embodiment, the first data comprises a plurality of person profiles, including a person profile for a person referenced in the voice message other than the first user. In one embodiment, the voice message and the first data are received from the mobile device.


The first data may be received from a server, and the server may store a plurality of person profiles for users of mobile devices including the first user. The transcribing may be performed using a speech recognition system.


In one embodiment, the method further comprises sending, to the mobile device, a list of persons or messages for display to the first user, each person or message in the list being referenced in the transcribed message. In one embodiment, the first data is associated with prior activities for the first user including manner of operation of the mobile device.


In one embodiment, the method further comprises sending, to the mobile device, a link to an email referenced in the transcribed message. The voice message may be created by a caller, and the method may further comprise sending a person profile to the mobile device for at least one person referenced in the transcribed message other than the caller.


In one embodiment, a non-transitory computer-readable storage medium stores computer-readable instructions, which when executed, cause a mobile device of a first user to: send, using the mobile device, first data to a computing apparatus, wherein the first data is associated with prior communications or activities for the first user on the mobile device; send, using the mobile device, a voice message for the first user to the computing apparatus; and receive, at the mobile device, a transcribed message from the computing apparatus, wherein the computing apparatus has transcribed the voice message using the first data to create the transcribed message.


In one embodiment, the first data comprises a plurality of person profiles, including a person profile for a person referenced in the voice message other than the first user, and the instructions further cause the mobile device to store the plurality of person profiles in a memory of the mobile device. In one embodiment, the instructions further cause the mobile device to send a person profile to a server other than the computing apparatus, wherein the server is configured to store a plurality of person profiles for users of mobile devices including the first user.


The computing apparatus may be a speech recognition system. The instructions may further cause the mobile device to receive person profiles for persons referenced in the transcribed message. The instructions may further cause the mobile device to present, on a display of the mobile device, a list of persons or messages to the first user, each person or message in the list being referenced in the transcribed message.


In one embodiment, a system comprises: at least one processor; and memory storing instructions configured to instruct the at least one processor to: receive first data associated with prior communications or activities for a first user on a mobile device of the first user; receive a voice message for the first user; transcribe the voice message using the first data to provide a transcribed message; and send the transcribed message to the mobile device for display to the user.


In one embodiment, the first data comprises at least one person profile including a person profile for a caller that created the voice message. In one embodiment, the first data is received from a server, and the server stores a plurality of person profiles for users of mobile devices including the first user.


In this description, various functions and operations may be described as being performed by or caused by software code to simplify description. However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the code by a processor, such as a microprocessor. Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.


While some embodiments can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.


At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.


Routines executed to implement the embodiments may be implemented as part of an operating system, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface). The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.


A machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer to peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer to peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in entirety at a particular instance of time.


Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions.


The instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. However, propagated signals, such as carrier waves, infrared signals, digital signals, etc., are not tangible machine readable media and are not configured to store instructions.


In general, a tangible machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).


In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.


Although some of the drawings illustrate a number of operations in a particular order, operations which are not order dependent may be reordered and other operations may be combined or broken out. While some reorderings or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, so the alternatives presented here are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.


In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims
  • 1. A method, comprising: receiving first data associated with prior communications for a first user of a computing device, the first data comprising a plurality of person profiles including a person profile for a person referenced in a prior communication between the first user and a caller that creates a voice message for the first user; receiving, via a computing apparatus, the voice message; transcribing, via the computing apparatus, the voice message using the first data to provide a transcribed message; and sending the transcribed message to the computing device for providing to the first user.
  • 2. The method of claim 1, wherein the plurality of person profiles further includes a person profile for a person referenced in the voice message other than the first user or the caller.
  • 3. The method of claim 1, wherein the first data includes a predefined number of recent messages sent by the caller to the first user.
  • 4. The method of claim 1, wherein the method further comprises sending a person profile to the computing device for at least one person referenced in the transcribed message other than the first user or the caller.
  • 5. The method of claim 1, wherein the voice message and the first data are received from the computing device.
  • 6. The method of claim 1, wherein the first data is received from a server, and wherein the server stores the plurality of person profiles for users of computing devices including the first user.
  • 7. The method of claim 1, wherein the transcribing is performed using a speech recognition system.
  • 8. The method of claim 1, further comprising sending, to the computing device, a list of persons or messages for display to the first user, wherein each person or message in the list is referenced in the transcribed message.
  • 9. The method of claim 1, further comprising sending, to the computing device, a link to a communication referenced in the transcribed message.
  • 10. The method of claim 1, wherein the first data is associated with prior activities of the first user, the prior activities including manner of operation by the first user of the computing device.
  • 11. A non-transitory computer-readable storage medium storing computer-readable instructions, which when executed, cause a computing device of a first user to: send, using the computing device, first data to a computing apparatus, wherein the first data is associated with prior communications for the first user, and the first data comprises a plurality of person profiles, including a person profile for a person referenced in a prior communication between the first user and a caller that creates a voice message for the first user; send, using the computing device, the voice message to the computing apparatus; and receive, at the computing device, a transcribed message from the computing apparatus, wherein the computing apparatus transcribes the voice message using the first data to create the transcribed message.
  • 12. The storage medium of claim 11, wherein the plurality of person profiles further includes a person profile for a person referenced in the voice message other than the first user or the caller, and the instructions further cause the computing device to store the plurality of person profiles in a memory of the computing device.
  • 13. The storage medium of claim 11, wherein the instructions further cause the computing device to send a person profile to a server other than the computing apparatus, wherein the server is configured to store the plurality of person profiles for users of computing devices including the first user.
  • 14. The storage medium of claim 11, wherein the instructions further cause the computing device to receive person profiles for persons referenced in the transcribed message.
  • 15. The storage medium of claim 11, wherein the instructions further cause the computing device to present, on a display, at least one person or message referenced in the transcribed message.
  • 16. The storage medium of claim 11, wherein the computing apparatus is a speech recognition system.
  • 17. The storage medium of claim 11, wherein the plurality of person profiles further includes a person profile for the caller.
  • 18. A system, comprising: at least one processor; and memory storing instructions configured to instruct the at least one processor to: receive first data associated with prior communications for a user of a computing device, the first data comprising data regarding a person referenced in a prior communication between the user and a caller; receive a voice message created by the caller; transcribe the voice message using the first data to provide a transcribed message; and send the transcribed message to the computing device.
  • 19. The system of claim 18, wherein the first data is received from a server, and the server stores profiles for users of computing devices including the user.
  • 20. The system of claim 18, wherein the first data further includes data regarding the caller.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of and claims the benefit of U.S. Pat. No. 10,089,986, filed Jun. 20, 2017, which is a continuation of and claims the benefit of U.S. Pat. No. 9,685,158, filed Feb. 27, 2015, which is a continuation of and claims the benefit of U.S. Pat. No. 8,972,257, filed Jun. 20, 2012, which claims the benefit of U.S. Provisional Application Ser. No. 61/499,643, filed Jun. 21, 2011, entitled “Systems and Methods to Present Voice Message Information to a User of a Computing Device,” by J. Bonforte, all of which are hereby incorporated by reference herein in their entirety. The present application is related to U.S. patent application Ser. No. 12/792,698, filed Jun. 2, 2010, entitled “SELF POPULATING ADDRESS BOOK,” by Smith et al., which was also published as U.S. Patent Publication No. 2010/0306185 on Dec. 2, 2010, the entire contents of which application is incorporated by reference as if fully set forth herein.

US Referenced Citations (595)
Number Name Date Kind
5396647 Thompson et al. Mar 1995 A
5610915 Elliott et al. Mar 1997 A
5966714 Huang et al. Oct 1999 A
6020884 MacNaughton et al. Feb 2000 A
6285999 Page Sep 2001 B1
6385644 Devine et al. May 2002 B1
6405197 Gilmour Jun 2002 B2
6484196 Maurille Nov 2002 B1
6510453 Apfel et al. Jan 2003 B1
6560620 Ching May 2003 B1
6594654 Salam et al. Jul 2003 B1
6615348 Gibbs Sep 2003 B1
6714967 Horvitz Mar 2004 B1
6721748 Knight et al. Apr 2004 B1
6816850 Culliss Nov 2004 B2
6832245 Isaacs et al. Dec 2004 B1
6931419 Lindquist Aug 2005 B1
6952805 Tafoya et al. Oct 2005 B1
6965918 Arnold et al. Nov 2005 B1
6996777 Hiipakka Feb 2006 B2
7003724 Newman Feb 2006 B2
7058892 MacNaughton et al. Jun 2006 B1
7076533 Knox et al. Jul 2006 B1
7085745 Klug Aug 2006 B2
7103806 Horvitz Sep 2006 B1
7181518 Matsumoto et al. Feb 2007 B1
7185065 Holtzman et al. Feb 2007 B1
7187932 Barchi Mar 2007 B1
7246045 Rappaport et al. Jul 2007 B1
7272637 Himmelstein Sep 2007 B1
7289614 Twerdahl et al. Oct 2007 B1
7328242 McCarthy et al. Feb 2008 B1
7333976 Auerbach et al. Feb 2008 B1
7359894 Liebman et al. Apr 2008 B1
7383307 Kirkland et al. Jun 2008 B2
7444323 Martinez et al. Oct 2008 B2
7454464 Puthenkulam et al. Nov 2008 B2
7475109 Fletcher et al. Jan 2009 B1
7475113 Stolze Jan 2009 B2
7512788 Choi et al. Mar 2009 B2
7512814 Chen et al. Mar 2009 B2
7536384 Venkataraman et al. May 2009 B2
7539676 Aravamudan et al. May 2009 B2
7580363 Sorvari et al. Aug 2009 B2
7593995 He et al. Sep 2009 B1
7606860 Puthenkulam et al. Oct 2009 B2
7620407 Donald et al. Nov 2009 B1
7624103 Wiegering et al. Nov 2009 B2
7627598 Burke Dec 2009 B1
7634463 Katragadda et al. Dec 2009 B1
7639157 Whitley et al. Dec 2009 B1
7653695 Flury et al. Jan 2010 B2
7685144 Katragadda Mar 2010 B1
7692653 Petro et al. Apr 2010 B1
7698140 Bhardwaj et al. Apr 2010 B2
7702730 Spataro et al. Apr 2010 B2
7707249 Spataro et al. Apr 2010 B2
7707509 Naono et al. Apr 2010 B2
7716140 Nielsen et al. May 2010 B1
7720916 Fisher et al. May 2010 B2
7724878 Timmins et al. May 2010 B2
7725492 Sittig et al. May 2010 B2
7743051 Kashyap et al. Jun 2010 B1
7752081 Calabria Jul 2010 B2
7756895 Emigh Jul 2010 B1
7756935 Gaucas Jul 2010 B2
7761436 Norton et al. Jul 2010 B2
7788260 Lunt et al. Aug 2010 B2
7805492 Thatcher et al. Sep 2010 B1
7818396 Dolin et al. Oct 2010 B2
7827208 Bosworth et al. Nov 2010 B2
7827265 Cheever et al. Nov 2010 B2
7831692 French et al. Nov 2010 B2
7836045 Schachter Nov 2010 B2
7836134 Pantalone Nov 2010 B2
7849141 Bellegarda et al. Dec 2010 B1
7849142 Clegg et al. Dec 2010 B2
7853602 Gorti et al. Dec 2010 B2
7853881 Aly Assal et al. Dec 2010 B1
7865562 Nesbitt et al. Jan 2011 B2
7870197 Lewis et al. Jan 2011 B2
7899806 Aravamudan et al. Mar 2011 B2
7899871 Kumar et al. Mar 2011 B1
7908647 Polis et al. Mar 2011 B1
7925690 Smith et al. Apr 2011 B2
7930430 Thatcher et al. Apr 2011 B2
7949611 Nielsen et al. May 2011 B1
7949627 Aravamudan et al. May 2011 B2
7970832 Perry, Jr. et al. Jun 2011 B2
7979569 Eisner et al. Jul 2011 B2
7991764 Rathod Aug 2011 B2
7996456 Gross Aug 2011 B2
8005806 Rupp et al. Aug 2011 B2
8032598 He et al. Oct 2011 B1
8055715 Bensky et al. Nov 2011 B2
8073928 Dolin et al. Dec 2011 B2
8086676 Palahnuk et al. Dec 2011 B2
8086968 McCaffrey et al. Dec 2011 B2
8112437 Katragadda et al. Feb 2012 B1
8140566 Boerries et al. Mar 2012 B2
8145791 Thatcher et al. Mar 2012 B2
8151358 Herold Apr 2012 B1
8161122 Sood et al. Apr 2012 B2
8200761 Tevanian Jun 2012 B1
8200808 Ishida Jun 2012 B2
8204897 Djabarov et al. Jun 2012 B1
8239197 Webb et al. Aug 2012 B2
8244848 Narayanan et al. Aug 2012 B1
8271025 Brisebois et al. Sep 2012 B2
8284783 Maufer et al. Oct 2012 B1
8291019 Rochelle et al. Oct 2012 B1
8296179 Rennison Oct 2012 B1
8316315 Portnoy et al. Nov 2012 B2
8363803 Gupta Jan 2013 B2
8365235 Hunt et al. Jan 2013 B2
8392409 Kashyap et al. Mar 2013 B1
8392836 Bau et al. Mar 2013 B1
8412174 Khosravi Apr 2013 B2
8423545 Cort et al. Apr 2013 B2
8433762 Wald et al. Apr 2013 B1
8443441 Stolfo et al. May 2013 B2
8447789 Geller May 2013 B2
8452745 Ramakrishna May 2013 B2
8463872 Pounds et al. Jun 2013 B2
8468168 Brezina et al. Jun 2013 B2
8495045 Wolf et al. Jul 2013 B2
8510389 Gurajada et al. Aug 2013 B1
8522257 Rupp et al. Aug 2013 B2
8549412 Brezina et al. Oct 2013 B2
8566306 Jones Oct 2013 B2
8600343 Brezina et al. Dec 2013 B2
8606335 Ozaki Dec 2013 B2
8620935 Rubin et al. Dec 2013 B2
8661002 Smith et al. Feb 2014 B2
8666035 Timmins et al. Mar 2014 B2
8694633 Mansfield et al. Apr 2014 B2
8745060 Brezina et al. Jun 2014 B2
8768291 Williams et al. Jul 2014 B2
8793625 Rhee et al. Jul 2014 B2
8818995 Guha Aug 2014 B1
8849816 Burba et al. Sep 2014 B2
8930463 Bonforte et al. Jan 2015 B2
8972257 Bonforte Mar 2015 B2
8984074 Monaco Mar 2015 B2
8990323 Hein et al. Mar 2015 B2
9009065 Reis et al. Apr 2015 B2
9020938 Cort et al. Apr 2015 B2
9058366 Brezina et al. Jun 2015 B2
9069825 Chang Jun 2015 B1
9087323 Hein et al. Jul 2015 B2
9159057 Monaco Oct 2015 B2
9195753 King et al. Nov 2015 B1
9195969 Bau et al. Nov 2015 B2
9235848 Gourley et al. Jan 2016 B1
9275118 Brezina et al. Mar 2016 B2
9275126 Smith et al. Mar 2016 B2
9298783 Brezina et al. Mar 2016 B2
9304621 Wakim et al. Apr 2016 B1
9501561 Rubin et al. Nov 2016 B2
9569529 Rubin et al. Feb 2017 B2
9584343 Brezina et al. Feb 2017 B2
9591086 Brezina et al. Mar 2017 B2
9594832 Rubin et al. Mar 2017 B2
9596308 Brezina et al. Mar 2017 B2
9685158 Bonforte Jun 2017 B2
9699258 Brezina et al. Jul 2017 B2
9716764 Brezina et al. Jul 2017 B2
9721228 Cort et al. Aug 2017 B2
9747583 Monaco Aug 2017 B2
9800679 Hein et al. Oct 2017 B2
9819765 Thatcher et al. Nov 2017 B2
9842144 Cort et al. Dec 2017 B2
9842145 Cort et al. Dec 2017 B2
9954963 Brezina et al. Apr 2018 B2
10089986 Bonforte Oct 2018 B2
20010037407 Dragulev et al. Nov 2001 A1
20010049628 Icho Dec 2001 A1
20020016818 Kirani et al. Feb 2002 A1
20020024536 Kahan et al. Feb 2002 A1
20020049751 Chen et al. Apr 2002 A1
20020054587 Baker et al. May 2002 A1
20020059402 Belanger May 2002 A1
20020059418 Bird et al. May 2002 A1
20020059425 Belfiore et al. May 2002 A1
20020073011 Brattain et al. Jun 2002 A1
20020073058 Kremer et al. Jun 2002 A1
20020076004 Brockenbrough et al. Jun 2002 A1
20020078090 Hwang et al. Jun 2002 A1
20020087647 Quine et al. Jul 2002 A1
20020091777 Schwartz Jul 2002 A1
20020103873 Ramanathan et al. Aug 2002 A1
20020103879 Mondragon Aug 2002 A1
20020107991 Maguire et al. Aug 2002 A1
20020116396 Somers et al. Aug 2002 A1
20020143871 Meyer et al. Oct 2002 A1
20020152216 Bouthors Oct 2002 A1
20020163539 Srinivasan Nov 2002 A1
20020194502 Sheth et al. Dec 2002 A1
20030028525 Santos et al. Feb 2003 A1
20030037116 Nolan et al. Feb 2003 A1
20030041030 Mansfield Feb 2003 A1
20030093483 Allen et al. May 2003 A1
20030114956 Cullen et al. Jun 2003 A1
20030120608 Pereyra Jun 2003 A1
20030142125 Salmimaa et al. Jul 2003 A1
20030167324 Farnham et al. Sep 2003 A1
20030195937 Kircher, Jr. et al. Oct 2003 A1
20030204439 Cullen, III Oct 2003 A1
20030220978 Rhodes Nov 2003 A1
20030220989 Tsuji et al. Nov 2003 A1
20030233419 Beringer Dec 2003 A1
20040002903 Stolfo et al. Jan 2004 A1
20040015547 Griffin et al. Jan 2004 A1
20040015554 Wilson Jan 2004 A1
20040034537 Gengarella et al. Feb 2004 A1
20040039630 Begole et al. Feb 2004 A1
20040056901 March et al. Mar 2004 A1
20040068545 Daniell et al. Apr 2004 A1
20040073616 Fellenstein et al. Apr 2004 A1
20040078443 Malik Apr 2004 A1
20040078444 Malik Apr 2004 A1
20040078445 Malik Apr 2004 A1
20040100497 Quillen et al. May 2004 A1
20040128355 Chao et al. Jul 2004 A1
20040128356 Bernstein et al. Jul 2004 A1
20040133561 Burke Jul 2004 A1
20040153504 Hutchinson et al. Aug 2004 A1
20040162878 Lewis et al. Aug 2004 A1
20040174964 Koch Sep 2004 A1
20040177048 Klug Sep 2004 A1
20040186851 Jhingan et al. Sep 2004 A1
20040202117 Wilson et al. Oct 2004 A1
20040205002 Layton Oct 2004 A1
20040210827 Burg et al. Oct 2004 A1
20040215726 Arning et al. Oct 2004 A1
20040260756 Forstall et al. Dec 2004 A1
20040268229 Paoli et al. Dec 2004 A1
20050015432 Cohen Jan 2005 A1
20050027779 Schinner Feb 2005 A1
20050038687 Galdes Feb 2005 A1
20050044152 Hardy et al. Feb 2005 A1
20050055409 Alsarraf et al. Mar 2005 A1
20050055639 Fogg Mar 2005 A1
20050060638 Mathew et al. Mar 2005 A1
20050076090 Thuerk Apr 2005 A1
20050080868 Malik Apr 2005 A1
20050091272 Smith et al. Apr 2005 A1
20050091314 Blagsvedt et al. Apr 2005 A1
20050102257 Onyon et al. May 2005 A1
20050102361 Winjum et al. May 2005 A1
20050108273 Brebner May 2005 A1
20050131888 Tafoya et al. Jun 2005 A1
20050138070 Huberman et al. Jun 2005 A1
20050138631 Bellotti et al. Jun 2005 A1
20050149620 Kirkland et al. Jul 2005 A1
20050159970 Buyukkokten et al. Jul 2005 A1
20050164704 Winsor Jul 2005 A1
20050165584 Boody et al. Jul 2005 A1
20050165893 Feinberg et al. Jul 2005 A1
20050188028 Brown, Jr. et al. Aug 2005 A1
20050198159 Kirsch Sep 2005 A1
20050198299 Beck et al. Sep 2005 A1
20050198305 Pezaris et al. Sep 2005 A1
20050203929 Hazarika et al. Sep 2005 A1
20050204009 Hazarika et al. Sep 2005 A1
20050213511 Reece, Jr. et al. Sep 2005 A1
20050216300 Appelman et al. Sep 2005 A1
20050222890 Cheng et al. Oct 2005 A1
20050228881 Reasor et al. Oct 2005 A1
20050228899 Wendkos et al. Oct 2005 A1
20050235224 Arend et al. Oct 2005 A1
20050278317 Gross et al. Dec 2005 A1
20060004713 Korte et al. Jan 2006 A1
20060004892 Lunt et al. Jan 2006 A1
20060004914 Kelly et al. Jan 2006 A1
20060015533 Wolf et al. Jan 2006 A1
20060020398 Vernon et al. Jan 2006 A1
20060031340 Mathew et al. Feb 2006 A1
20060031775 Sattler et al. Feb 2006 A1
20060047747 Erickson et al. Mar 2006 A1
20060053199 Pricken et al. Mar 2006 A1
20060056015 Nishiyama Mar 2006 A1
20060059151 Martinez et al. Mar 2006 A1
20060059238 Slater et al. Mar 2006 A1
20060064431 Kishore et al. Mar 2006 A1
20060064434 Gilbert et al. Mar 2006 A1
20060065733 Lee et al. Mar 2006 A1
20060074932 Fong et al. Apr 2006 A1
20060075046 Yozell-Epstein et al. Apr 2006 A1
20060083357 Howell et al. Apr 2006 A1
20060083358 Fong et al. Apr 2006 A1
20060085752 Beadle et al. Apr 2006 A1
20060095331 O'Malley et al. May 2006 A1
20060095502 Lewis et al. May 2006 A1
20060101285 Chen et al. May 2006 A1
20060101350 Scott May 2006 A1
20060123357 Okamura Jun 2006 A1
20060136494 Oh Jun 2006 A1
20060168073 Kogan et al. Jul 2006 A1
20060173824 Bensky et al. Aug 2006 A1
20060173961 Turski et al. Aug 2006 A1
20060179415 Cadiz et al. Aug 2006 A1
20060195361 Rosenberg Aug 2006 A1
20060195474 Cadiz et al. Aug 2006 A1
20060195785 Portnoy et al. Aug 2006 A1
20060217116 Cassett et al. Sep 2006 A1
20060218111 Cohen Sep 2006 A1
20060224675 Fox et al. Oct 2006 A1
20060242536 Yokokawa et al. Oct 2006 A1
20060242609 Potter et al. Oct 2006 A1
20060248151 Belakovskiy et al. Nov 2006 A1
20060256008 Rosenberg Nov 2006 A1
20060265460 Kiyohara Nov 2006 A1
20060271630 Bensky et al. Nov 2006 A1
20060281447 Lewis et al. Dec 2006 A1
20060282303 Hale et al. Dec 2006 A1
20070005702 Tokuda et al. Jan 2007 A1
20070005715 LeVasseur et al. Jan 2007 A1
20070005750 Lunt et al. Jan 2007 A1
20070011367 Scott et al. Jan 2007 A1
20070016647 Gupta et al. Jan 2007 A1
20070022447 Arseneau et al. Jan 2007 A1
20070038720 Reding et al. Feb 2007 A1
20070050455 Yach et al. Mar 2007 A1
20070060328 Zrike et al. Mar 2007 A1
20070071187 Apreutesei et al. Mar 2007 A1
20070083651 Ishida Apr 2007 A1
20070088687 Bromm et al. Apr 2007 A1
20070106780 Farnham et al. May 2007 A1
20070112761 Xu et al. May 2007 A1
20070115991 Ramani et al. May 2007 A1
20070118533 Ramer et al. May 2007 A1
20070123222 Cox et al. May 2007 A1
20070124432 Holtzman et al. May 2007 A1
20070129977 Forney Jun 2007 A1
20070130527 Kim Jun 2007 A1
20070135110 Athale et al. Jun 2007 A1
20070143414 Daigle Jun 2007 A1
20070153989 Howell et al. Jul 2007 A1
20070156732 Surendran et al. Jul 2007 A1
20070162432 Armstrong et al. Jul 2007 A1
20070174304 Shrufi et al. Jul 2007 A1
20070174432 Rhee et al. Jul 2007 A1
20070177717 Owens et al. Aug 2007 A1
20070185844 Schachter Aug 2007 A1
20070192490 Minhas Aug 2007 A1
20070198500 Lucovsky et al. Aug 2007 A1
20070203991 Fisher et al. Aug 2007 A1
20070208802 Barman et al. Sep 2007 A1
20070214141 Sittig et al. Sep 2007 A1
20070218900 Abhyanker Sep 2007 A1
20070244881 Cha et al. Oct 2007 A1
20070244977 Atkins Oct 2007 A1
20070250585 Ly et al. Oct 2007 A1
20070255794 Coutts Nov 2007 A1
20070271527 Paas et al. Nov 2007 A1
20070273517 Govind Nov 2007 A1
20070282956 Staats Dec 2007 A1
20070288578 Pantalone Dec 2007 A1
20070294281 Ward et al. Dec 2007 A1
20070294428 Guy et al. Dec 2007 A1
20080005247 Khoo Jan 2008 A9
20080005249 Hart Jan 2008 A1
20080031241 Toebes et al. Feb 2008 A1
20080037721 Yao et al. Feb 2008 A1
20080040370 Bosworth et al. Feb 2008 A1
20080040435 Buschi et al. Feb 2008 A1
20080040474 Zuckerberg et al. Feb 2008 A1
20080040475 Bosworth et al. Feb 2008 A1
20080055263 Lemay et al. Mar 2008 A1
20080056269 Madhani et al. Mar 2008 A1
20080065701 Lindstrom et al. Mar 2008 A1
20080071872 Gross Mar 2008 A1
20080077614 Roy Mar 2008 A1
20080104052 Ryan et al. May 2008 A1
20080113674 Baig May 2008 A1
20080114758 Rupp et al. May 2008 A1
20080119201 Kolber et al. May 2008 A1
20080120411 Eberle May 2008 A1
20080122796 Jobs et al. May 2008 A1
20080134081 Jeon et al. Jun 2008 A1
20080147639 Hartman et al. Jun 2008 A1
20080147810 Kumar et al. Jun 2008 A1
20080162347 Wagner Jul 2008 A1
20080162649 Lee et al. Jul 2008 A1
20080162651 Madnani Jul 2008 A1
20080163164 Chowdhary et al. Jul 2008 A1
20080170158 Jung et al. Jul 2008 A1
20080172362 Shacham et al. Jul 2008 A1
20080172464 Thattai et al. Jul 2008 A1
20080183832 Kirkland et al. Jul 2008 A1
20080189122 Coletrane et al. Aug 2008 A1
20080208812 Quoc et al. Aug 2008 A1
20080216092 Serlet Sep 2008 A1
20080220752 Forstall et al. Sep 2008 A1
20080222279 Cioffi et al. Sep 2008 A1
20080222546 Mudd et al. Sep 2008 A1
20080235353 Cheever et al. Sep 2008 A1
20080242277 Chen et al. Oct 2008 A1
20080270038 Partovi et al. Oct 2008 A1
20080270939 Mueller Oct 2008 A1
20080275748 John Nov 2008 A1
20080275865 Kretz et al. Nov 2008 A1
20080290987 Li Nov 2008 A1
20080293403 Quon et al. Nov 2008 A1
20080301166 Sugiyama et al. Dec 2008 A1
20080301175 Applebaum et al. Dec 2008 A1
20080301245 Estrada et al. Dec 2008 A1
20080307066 Amidon et al. Dec 2008 A1
20080319943 Fischer Dec 2008 A1
20090005076 Forstall et al. Jan 2009 A1
20090006366 Johnson et al. Jan 2009 A1
20090010353 She et al. Jan 2009 A1
20090029674 Brezina et al. Jan 2009 A1
20090030773 Kamhoot Jan 2009 A1
20090030872 Brezina et al. Jan 2009 A1
20090030919 Brezina et al. Jan 2009 A1
20090030927 Cases et al. Jan 2009 A1
20090030933 Brezina et al. Jan 2009 A1
20090030940 Brezina et al. Jan 2009 A1
20090031232 Brezina et al. Jan 2009 A1
20090031244 Brezina et al. Jan 2009 A1
20090031245 Brezina et al. Jan 2009 A1
20090037541 Wilson Feb 2009 A1
20090041224 Bychkov et al. Feb 2009 A1
20090048994 Applebaum et al. Feb 2009 A1
20090054091 van Wijk et al. Feb 2009 A1
20090070412 D'Angelo et al. Mar 2009 A1
20090076795 Bangalore et al. Mar 2009 A1
20090077026 Yanagihara Mar 2009 A1
20090082038 McKiou et al. Mar 2009 A1
20090083278 Zhao et al. Mar 2009 A1
20090100384 Louch Apr 2009 A1
20090106415 Brezina et al. Apr 2009 A1
20090106676 Brezina et al. Apr 2009 A1
20090111495 Sjolin et al. Apr 2009 A1
20090119678 Shih et al. May 2009 A1
20090138546 Cruzada May 2009 A1
20090150251 Zhitomirsky Jun 2009 A1
20090156170 Rossano et al. Jun 2009 A1
20090157717 Palahnuk et al. Jun 2009 A1
20090171930 Vaughan et al. Jul 2009 A1
20090171979 Lubarski et al. Jul 2009 A1
20090174680 Anzures et al. Jul 2009 A1
20090177754 Brezina et al. Jul 2009 A1
20090182788 Chung et al. Jul 2009 A1
20090191899 Wilson et al. Jul 2009 A1
20090198688 Venkataraman et al. Aug 2009 A1
20090209286 Bentley et al. Aug 2009 A1
20090213088 Hardy et al. Aug 2009 A1
20090217178 Niyogi et al. Aug 2009 A1
20090228555 Joviak et al. Sep 2009 A1
20090234815 Boerries et al. Sep 2009 A1
20090234925 Seippel, III et al. Sep 2009 A1
20090248415 Jablokov et al. Oct 2009 A1
20090249198 Davis et al. Oct 2009 A1
20090271370 Jagadish et al. Oct 2009 A1
20090271409 Ghosh Oct 2009 A1
20090299824 Barnes, Jr. Dec 2009 A1
20090300127 Du Dec 2009 A1
20090300546 Kwok et al. Dec 2009 A1
20090306981 Cromack et al. Dec 2009 A1
20090313573 Paek et al. Dec 2009 A1
20090319329 Aggarwal et al. Dec 2009 A1
20090328161 Puthenkulam et al. Dec 2009 A1
20100009332 Yaskin et al. Jan 2010 A1
20100015954 Yang Jan 2010 A1
20100030715 Eustice et al. Feb 2010 A1
20100036833 Yeung et al. Feb 2010 A1
20100049534 Whitnah et al. Feb 2010 A1
20100057858 Shen et al. Mar 2010 A1
20100057859 Shen et al. Mar 2010 A1
20100062753 Wen et al. Mar 2010 A1
20100070875 Turski et al. Mar 2010 A1
20100077041 Cowan et al. Mar 2010 A1
20100082693 Hugg et al. Apr 2010 A1
20100083182 Liu et al. Apr 2010 A1
20100088340 Muller et al. Apr 2010 A1
20100094869 Ebanks Apr 2010 A1
20100100899 Bradbury et al. Apr 2010 A1
20100121831 Lin et al. May 2010 A1
20100131447 Creutz et al. May 2010 A1
20100153832 Markus et al. Jun 2010 A1
20100158214 Gravino et al. Jun 2010 A1
20100161547 Carmel et al. Jun 2010 A1
20100161729 Leblanc et al. Jun 2010 A1
20100162171 Felt et al. Jun 2010 A1
20100164957 Lindsay et al. Jul 2010 A1
20100167700 Brock et al. Jul 2010 A1
20100169327 Lindsay et al. Jul 2010 A1
20100174784 Levey et al. Jul 2010 A1
20100185610 Lunt et al. Jul 2010 A1
20100191844 He et al. Jul 2010 A1
20100216509 Riemer et al. Aug 2010 A1
20100228560 Balasaygun et al. Sep 2010 A1
20100229096 Maiocco et al. Sep 2010 A1
20100229223 Shepard et al. Sep 2010 A1
20100235375 Sidhu et al. Sep 2010 A1
20100241579 Bassett et al. Sep 2010 A1
20100250682 Goldberg et al. Sep 2010 A1
20100275128 Ward et al. Oct 2010 A1
20100281535 Perry, Jr. et al. Nov 2010 A1
20100306185 Smith et al. Dec 2010 A1
20100312837 Bodapati et al. Dec 2010 A1
20100318614 Sager et al. Dec 2010 A1
20100330972 Angiolillo Dec 2010 A1
20110010423 Thatcher et al. Jan 2011 A1
20110035451 Smith et al. Feb 2011 A1
20110040726 Crosbie et al. Feb 2011 A1
20110072052 Skarin et al. Mar 2011 A1
20110078259 Rashad et al. Mar 2011 A1
20110086627 Khosravi Apr 2011 A1
20110087969 Hein et al. Apr 2011 A1
20110145192 Quintela et al. Jun 2011 A1
20110145219 Cierniak et al. Jun 2011 A1
20110173274 Sood Jul 2011 A1
20110173547 Lewis et al. Jul 2011 A1
20110191337 Cort et al. Aug 2011 A1
20110191340 Cort et al. Aug 2011 A1
20110191717 Cort et al. Aug 2011 A1
20110196802 Ellis et al. Aug 2011 A1
20110201275 Jabara et al. Aug 2011 A1
20110202532 Nakazawa et al. Aug 2011 A1
20110219317 Thatcher et al. Sep 2011 A1
20110225293 Rathod Sep 2011 A1
20110231407 Gupta et al. Sep 2011 A1
20110235790 Strope et al. Sep 2011 A1
20110252383 Miyashita Oct 2011 A1
20110276396 Rathod Nov 2011 A1
20110282905 Polis et al. Nov 2011 A1
20110291860 Ozaki et al. Dec 2011 A1
20110291933 Holzer et al. Dec 2011 A1
20110298701 Holzer et al. Dec 2011 A1
20120011204 Morin et al. Jan 2012 A1
20120017158 Maguire et al. Jan 2012 A1
20120036254 Onuma Feb 2012 A1
20120041907 Wang et al. Feb 2012 A1
20120054681 Cort et al. Mar 2012 A1
20120079023 Tejada-Gamero et al. Mar 2012 A1
20120084461 Athias et al. Apr 2012 A1
20120089678 Cort et al. Apr 2012 A1
20120089690 Hein et al. Apr 2012 A1
20120110080 Panyam et al. May 2012 A1
20120110096 Smarr et al. May 2012 A1
20120150970 Peterson et al. Jun 2012 A1
20120150978 Monaco et al. Jun 2012 A1
20120150979 Monaco Jun 2012 A1
20120166999 Thatcher et al. Jun 2012 A1
20120197871 Mandel et al. Aug 2012 A1
20120198348 Park Aug 2012 A1
20120246065 Yarvis et al. Sep 2012 A1
20120259834 Broder et al. Oct 2012 A1
20120271822 Schwendimann et al. Oct 2012 A1
20120278428 Harrison et al. Nov 2012 A1
20120330658 Bonforte Dec 2012 A1
20120330980 Rubin et al. Dec 2012 A1
20120331418 Bonforte Dec 2012 A1
20130007627 Monaco Jan 2013 A1
20130014021 Bau et al. Jan 2013 A1
20130080915 Lewis et al. Mar 2013 A1
20130091288 Shalunov et al. Apr 2013 A1
20130120444 Allyn et al. May 2013 A1
20130173712 Monjas Llorente et al. Jul 2013 A1
20130246931 Harris et al. Sep 2013 A1
20130260795 Papakipos et al. Oct 2013 A1
20140011481 Kho Jan 2014 A1
20140081914 Smith et al. Mar 2014 A1
20140081964 Rubin et al. Mar 2014 A1
20140087687 Brezina et al. Mar 2014 A1
20140089304 Rubin et al. Mar 2014 A1
20140089411 Rubin et al. Mar 2014 A1
20140095433 Cort et al. Apr 2014 A1
20140100861 Ledet Apr 2014 A1
20140115086 Chebiyyam Apr 2014 A1
20140156650 Jacobson Jun 2014 A1
20140207761 Brezina et al. Jul 2014 A1
20140214981 Mallet et al. Jul 2014 A1
20140280094 Brandstetter Sep 2014 A1
20140280097 Lee et al. Sep 2014 A1
20140287786 Bayraktar et al. Sep 2014 A1
20150074213 Monaco Mar 2015 A1
20150170650 Bonforte Jun 2015 A1
20150222719 Hein et al. Aug 2015 A1
20160070787 Brezina et al. Mar 2016 A1
20160147899 Smith et al. May 2016 A1
20160182661 Brezina et al. Jun 2016 A1
20170147699 Rubin et al. May 2017 A1
20170171124 Brezina et al. Jun 2017 A1
20170187663 Brezina et al. Jun 2017 A1
20170287483 Bonforte Oct 2017 A1
20170302749 Brezina et al. Oct 2017 A1
20170324821 Brezina et al. Nov 2017 A1
20170337514 Cort et al. Nov 2017 A1
20180046985 Monaco Feb 2018 A1
20180095970 Cort et al. Apr 2018 A1
Foreign Referenced Citations (15)
Number Date Country
101351818 Jan 2009 CN
0944002 Sep 1999 EP
944002 Sep 1999 EP
2003006116 Jan 2003 JP
2007249307 Sep 2007 JP
20060056015 May 2006 KR
1020090068819 Jun 2009 KR
1020090112257 Oct 2009 KR
1020090115239 Nov 2009 KR
1020020060386 Aug 2012 KR
2003098515 Nov 2003 WO
2007037875 Apr 2007 WO
2007143232 Dec 2007 WO
2012082886 Jun 2012 WO
2012082929 Jun 2012 WO
Non-Patent Literature Citations (33)
Entry
“OpenSocial Specification v0.9”, OpenSocial and Gadgets Specification Group, Apr. 2009.
“The Ultimate Guide for Everything Twitter”, Webdesigner Depot, archive.org webpage https://web.archive.org/web/20090325042115/http://www.webdesignerdepot.com/2009/03/the-ultimate-guide-for-everything-twitter/ from Mar. 25, 2009.
Android-Tips.com, “Android Tips & Tricks: How to Import Contacts into Android Phone,” located at http://android-tips.com/how-to-import-contacts-into-android/, Nov. 17, 2008 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
Bernstein, Michael S. et al., “Enhancing Directed Content Sharing on the Web,” Proceedings of the 28th International Conference on Human Factors in Computing Systems, Atlanta, GA, Apr. 10-15, 2010, pp. 971-980.
Carvalho, Vitor R. et al., “Ranking Users for Intelligent Message Addressing,” Proceedings of the 30th European Conference on Information Retrieval, Glasgow, England, Mar. 30-Apr. 3, 2008, pp. 321-333.
Culotta, Aron et al., “Extracting Social Networks and Contact Information from Email and the Web,” Proceedings of the First Conference on Email and Anti-Spam (CEAS), Mountain View, CA, Jul. 30-31, 2004 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
Elsayed, Tamer et al., “Personal Name Resolution in Email: A Heuristic Approach,” University of Maryland Technical Report No. TR-LAMP-150, Mar. 17, 2008.
Epstein, “Harnessing User Data to Improve Facebook Features”, Doctoral dissertation, Boston College, May 12, 2010.
European Patent Application No. 11849271.9, Extended Search Report, dated Apr. 3, 2014.
European Patent Application No. 12801970.0, Extended Search Report, dated Oct. 23, 2014.
European Patent Application No. 12801998.1, Extended Search Report, dated Feb. 10, 2015.
European Patent Application No. 10797483.4, extended European Search Report, dated Dec. 20, 2016.
European Patent Application No. 10783783, Extended European Search Report, dated Mar. 24, 2014.
Fitzpatrick, Brad, “AddressBooker,” Github Social Coding, located at http://addressbooker.appspot.com/, Nov. 28, 2008 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
Google Inc. “OpenSocial Tutorial,” located at http://code.google.com/apis/opensocial/articles/tutorial/tutorial-0.8.html, Aug. 2008.
Google Inc., “Automatic Updating of Contacts,” Gmail help forum, located at http://74.125.4.16/support/forum/p/gmail/thread?tid=03f7b692150d9242&hl=en, Apr. 27, 2009 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
Hillebrand, Tim, “Plaxo: The Smart Auto Update Address Book,” Smart Phone Mag, located at http://www.smartphonemag.com/cms/blogs/9/plaxo_the_smart_auto_update_address_book, Nov. 6, 2006 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
Hannon et al., “Recommending Twitter Users to Follow Using Content and Collaborative Filtering Approaches”, RecSys2010, Sep. 26-30, 2010, Barcelona, Spain.
International Patent Application PCT/US10/34782, International Search Report and Written Opinion, dated Dec. 22, 2010.
International Patent Application PCT/US10/35405, International Search Report and Written Opinion, dated Jan. 3, 2011.
International Patent Application PCT/US10/52081, International Search Report and Written Opinion, dated May 20, 2011.
International Patent Application PCT/US10/56560, International Search Report and Written Opinion, dated Jun. 21, 2011.
International Patent Application PCT/US11/64958, International Search Report and Written Opinion, dated Jul. 31, 2012.
International Patent Application PCT/US12/043523, International Search Report and Written Opinion, dated Nov. 28, 2012.
International Patent Application PCT/US2011/064892, International Search Report and Written Opinion, dated Aug. 22, 2012.
International Patent Application PCT/US2012/043507, International Search Report and Written Opinion, dated Jan. 3, 2013.
Microsoft Corporation, “About AutoComplete Name Suggesting,” Microsoft Outlook 2003 help forum, located at http://office.microsoft.com/en-us/outlook/HP063766471033.aspx, 2003.
Oberhaus, Kristin, “Look for Cues: Targeting Without Personally Identifiable Information,” W3i, LLC blog entry located at http://blog.w3i.com/2009/09/03/looking-for-cues-targeting-without-personally-identifiable-information/, Sep. 3, 2009.
OpenSocial Foundation, “Social Application Tutorial (v0.9),” located at http://wiki.opensocial.org/index.php?title=Social_Application_Tutorial, accessed Oct. 8, 2010.
PCWorld Communications, Inc., “Your Contacts are Forever: Self-Updating Address Book,” located at http://www.pcworld.com/article/48192/your_contacts_are_forever_selfupdating_address_book.html, May 1, 2001 (document provided includes third-party comments submitted under the USPTO PeerToPatent program).
U.S. Appl. No. 61/407,018, filed Oct. 27, 2010.
W3i, LLC, “Advertiser Feedback System (AFS),” company product description, Sep. 22, 2009.
Wikimedia Foundation, Inc., “Machine Learning,” Wikipedia encyclopedia entry located at http://en.wikipedia.org/wiki/Machine_learning, Jan. 30, 2011.
Related Publications (1)
Number Date Country
20190035400 A1 Jan 2019 US
Provisional Applications (1)
Number Date Country
61499643 Jun 2011 US
Continuations (3)
Number Date Country
Parent 15627524 Jun 2017 US
Child 16148042 US
Parent 14634111 Feb 2015 US
Child 15627524 US
Parent 13528693 Jun 2012 US
Child 14634111 US