All of the material in this patent document is subject to copyright protection under the copyright laws of the United States and of other countries. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Both text messaging and instant messaging are forms of personal communication that have grown in popularity and use over the last decade.
In this respect, “text messaging” refers to the sending and receiving of text messages (sometimes abbreviated as “SMSes”) via wireless telecommunication systems using a Short Message Service (sometimes abbreviated as “SMS”). The sending and receiving of such text messages is well known and commonly performed using mobile client devices, such as smart phones or PDAs. Common applications of SMS include person-to-person messaging. However, SMSes are now also used to interact with automated systems, such as to order products and services from a mobile client device or to participate in contests from a mobile client device, for example by voting for contestants in American Idol competitions.
In contrast to text messaging, “instant messaging” (sometimes abbreviated as “IM”) is a form of “real-time” communication between two or more people that is based on the transmission of text. The text is conveyed over a network such as the Internet. Instant messaging requires an IM client that connects to an IM service. The IM client commonly is installed on a computer such as a laptop or desktop. However, IM clients are now available for use on mobile client devices. Because IM is considered “real-time,” communications back and forth between users of IM clients are sometimes deemed a “conversation,” just as if the people were speaking directly to one another. The present invention has applicability both in text messaging and in instant messaging and, except where context clearly implies otherwise, aspects and features of the present invention apply in the context of both (a) SMS systems, methods, applications, and implementations and (b) IM systems, methods, applications, and implementations.
More recently, Automatic Speech Recognition (“ASR”) systems, which convert spoken audio into text, have been applied to text messaging and instant messaging. As used herein, the term “speech recognition” refers to the process of converting a speech (audio) signal to a sequence of words or a representation thereof (message strings), by means of an algorithm implemented as a computer program. Speech recognition applications that have emerged over the last few years include voice dialing (e.g., “Call home”), call routing (e.g., “I would like to make a collect call”), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g., a radiology report), and content-based spoken audio searching (e.g., finding a podcast where particular words were spoken).
As their accuracy has improved, ASR systems have become commonplace in recent years. For example, ASR systems have found wide application in the customer service centers of companies, which offer middleware and solutions for contact centers, for example answering and routing calls to decrease costs for airlines, banks, etc. To accomplish this, companies such as IBM and Nuance create Interactive Voice Response (IVR) systems that answer the calls and then use ASR (Automatic Speech Recognition) paired with TTS (Text-To-Speech) software to decode what a caller is saying and communicate back to the caller.
The application of ASR systems to text messaging and instant messaging has been more recent. Text messaging and instant messaging usually involve the input of a textual message by a sender who presses letters and/or numbers associated with the sender's mobile phone or other mobile device. As recognized for example in the aforementioned, commonly-assigned U.S. patent application Ser. No. 11/697,074, it can be advantageous to make text messaging and instant messaging far easier for an end user by allowing the user to dictate his or her message rather than requiring the user to type it into his or her phone. In certain circumstances, such as when a user is driving a vehicle, typing a text message may not be possible and/or convenient, and may even be unsafe. On the other hand, text messages can be advantageous to a message receiver as compared to voicemail, as the receiver actually sees the message content in a written format rather than having to rely on an auditory signal.
Now or in the future, users can or will be able to use mobile client devices to interface with many web services via an IM client and/or SMSes. It is believed, for example, that users can or will interact with web services using text messages and/or instant messages such as those provided by Amazon, Facebook, and MySpace. This may be accomplished, for example, using either manually-typed text messages and/or instant messages or such messages that are transcribed from speech using an ASR engine.
Many such web services promote the establishment of user profiles in order to support “recommendation engines” and/or ad targeting. Currently, such web services require users to manually set up user profiles, which is usually done upon first establishing user accounts. Although convenient when first establishing the accounts, maintenance of the data in the user profiles, such as user preferences, requires that users manually log in to the user accounts and modify and save changes to user preferences, as desired. Unfortunately, many users perform such manual action irregularly or not at all, and consequently user preferences and other data stored in user profiles tend to become outdated over time as user tastes and preferences change. As a result, these web services experience degradation over time in their ability to deliver relevant ads, recommendations, and suggestions to users, which can decrease the potential revenue per user generated from direct or indirect promotions.
Aspects and features of the present invention are believed to further enable and facilitate the use and acceptance of text messaging and instant messaging with mobile client devices. In particular, inventive aspects and features of the invention relate to parsing and/or filtering of message strings (text of instant messages or text messages) that are either manually typed, transcribed from speech, or part of a streamed web services query, in order to identify keywords, phrases, or fragments based on which user preferences of user profiles are dynamically updated.
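By way of illustration only, the parsing and filtering of message strings for profile information might be sketched as follows; the keyword vocabulary, field names, and function names here are hypothetical assumptions and do not form part of the disclosure.

```python
# Hypothetical sketch of a profile filter: scan a message string for
# known interest keywords and turn matches into user-preference updates.
KEYWORD_CATEGORIES = {
    "coldplay": ("favorite_bands", "Coldplay"),
    "u2": ("favorite_bands", "U2"),
    "star wars": ("favorite_movies", "Star Wars"),
}

def parse_profile_info(message_string):
    """Return a list of (preference_field, value) pairs found in the message."""
    text = message_string.lower()
    return [pair for keyword, pair in KEYWORD_CATEGORIES.items() if keyword in text]

def update_profile(profile, message_string):
    """Dynamically update a user-profile dict from one message string."""
    for field, value in parse_profile_info(message_string):
        profile.setdefault(field, [])
        if value not in profile[field]:
            profile[field].append(value)  # only add new preferences
    return profile
```

In such a sketch, each outgoing text message or instant message would be passed through `update_profile` before (or after) transmission, so that the profile evolves as the user's messages change, without any manual login.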
One or more steps of inventive aspects and features of methods of the invention may be performed in client and/or server side processing.
The present invention includes many aspects and features. Moreover, while many aspects and features relate to, and are described in, the context of providing profile information to a web service, the present invention is not limited to use only in such field, as will become apparent from the following summaries and detailed descriptions of aspects, features, and one or more embodiments of the present invention.
Accordingly, one aspect of the present invention relates to a method of providing profile information, derived from an utterance, from a mobile communication device to a web service. An exemplary such method includes the steps of receiving, at the mobile communication device, audio data representing an utterance; transcribing the audio data to text; processing the transcribed text, including parsing the text for profile information appropriate for use at one or more web services; and communicating, to the web service, the profile information parsed from the transcribed text. Furthermore, in this aspect of the invention, the processing step may be performed by a profile filter; the method may further comprise providing an interface to a user for manual user editing of the transcribed text; the transcription step may be performed at the mobile communication device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; the method may further comprise delivering ad impressions to a user based on the processed text; and the method may further comprise communicating the transcribed text, as a text-based message, from the mobile communication device to a recipient. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
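The steps of this aspect can be illustrated with a minimal sketch; the `transcribe` and `parse_for_profile_info` stubs below are hypothetical placeholders for an ASR engine and a profile filter, respectively, and the "I love" pattern is an invented example.

```python
def transcribe(audio_data):
    # Placeholder for an automatic speech recognition engine; a real
    # implementation would decode the audio signal into text.
    return audio_data.get("transcript", "")

def parse_for_profile_info(text):
    # Placeholder profile filter: extract the phrase following "i love".
    info = []
    lower = text.lower()
    if "i love " in lower:
        info.append(lower.split("i love ", 1)[1].strip(" .!"))
    return info

def process_utterance(audio_data, send_to_web_service):
    """Receive audio, transcribe it, parse profile info, and send it on."""
    text = transcribe(audio_data)
    profile_info = parse_for_profile_info(text)
    send_to_web_service(profile_info)  # communicate parsed info to the web service
    return text, profile_info
```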
Another aspect of the invention relates to a method of providing profile information, derived from an utterance, from a mobile communication device to a web service. An exemplary such method includes transcribing audio data, received as an utterance at the mobile communication device, to text; providing an interface to a user for manual user editing of the transcribed text; processing the edited text, including parsing the text for profile information appropriate for use at one or more web services; and communicating, to the web service, the profile information parsed from the transcribed text. Furthermore, in this aspect of the invention, the processing step may be performed by a profile filter; the transcription step may be performed at the mobile communication device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; the method may further comprise delivering ad impressions to a user based on the processed text; and the method may further comprise communicating the transcribed text, as a text-based message, from the mobile communication device to the recipient. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
Another aspect of the invention relates to a method of providing profile information, derived from an utterance, from a mobile communication device to a web service. An exemplary such method includes receiving, at the mobile communication device, audio data representing an utterance that is to be sent from the mobile communication device to a recipient; transcribing the utterance to text; parsing the transcribed text to identify relevant profile information for input to a web service; and communicating the transcribed text, as a text-based message, from the mobile communication device to the recipient. Furthermore, in this aspect of the invention, the parsing step may be performed by a profile filter; the method may further comprise providing an interface to a user for manual user editing of the transcribed text; the transcription step may be performed at the mobile communication device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; the method may further comprise delivering ad impressions to a user based on the parsed text; and the method may further comprise communicating, to the web service, the profile information parsed from the transcribed text. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
Another aspect of the invention relates to a method of providing profile information, derived from an utterance, from a mobile communication device to a web service. An exemplary such method includes receiving, at the mobile communication device, audio data representing an utterance that is to be sent from the mobile communication device to a recipient; transcribing the utterance to text; parsing the transcribed text to identify relevant profile information for input to a web service; and storing, in a profile information index, the profile information parsed from the transcribed text. Furthermore, in this aspect of the invention, the parsing step may be performed by a profile filter; the method may further comprise providing an interface to a user for manual user editing of the transcribed text; the transcription step may be performed at the mobile communication device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; the method may further comprise delivering ad impressions to a user based on the parsed text; the method may further comprise communicating, to the web service, the profile information parsed from the transcribed text; and the method may further comprise communicating the transcribed text, as a text-based message, from the mobile communication device to the recipient. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
Another aspect of the invention relates to a method of providing profile information, derived from a message string, from a mobile communication device to a web service. An exemplary such method includes receiving, at the mobile communication device, input representing a text-based message that is to be sent from the mobile communication device to a recipient; producing a message string from the input; parsing the message string to identify relevant profile information for input to a web service; communicating, to the web service, the profile information parsed from the message string; and communicating the message string, as a text-based message, from the mobile communication device to the recipient. Furthermore, in this aspect of the invention, the input may be audio data representing an utterance and the producing step includes transcribing the utterance to text; the parsing step may be performed by a profile filter; the method may further comprise providing an interface to a user for manual user editing of the transcribed text; the transcription step may be performed at the mobile communication device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; and the method may further comprise delivering ad impressions to a user based on the parsed text. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
Another aspect of the invention relates to a method of providing profile information, derived from an instant message, from a client device to a web service. An exemplary such method includes receiving, at the client device, input representing an instant message that is to be sent from the client device to a recipient; producing a message string from the input; parsing the message string to identify relevant profile information for input to a web service; communicating, to the web service, the profile information parsed from the message string; and communicating the message string, as an instant message, from the client device to the recipient. Furthermore, in this aspect of the invention, the input may be audio data representing an utterance and the producing step includes transcribing the utterance to text; the parsing step may be performed by a profile filter; the method may further comprise providing an interface to a user for manual user editing of the transcribed text; the transcription step may be performed at the client device; the transcription step may be performed by a separate automatic speech recognition system; the audio data may be a voicemail; and the method may further comprise delivering ad impressions to a user based on the parsed text. In variations of this aspect, the recipient may be a cell phone; the recipient may be a smart phone; the recipient may be a PDA; the recipient may be a tablet notebook; the recipient may be a desktop computer; the recipient may be a laptop computer; the recipient may be a web service; the text-based message may be a text message, communicated using Short Message Service; and the text-based message may be an instant message, communicated via an instant message service.
Still another aspect of the invention relates to a method of dynamically providing profile information, derived from message strings, to a web service. An exemplary such method includes establishing a user account configured to interface with a user profile at a web service; thereafter, repeatedly receiving message strings at a profile filter, each message string being representative of a text-based message to be communicated to a recipient; processing each message string, including parsing each message string for profile information appropriate for use by the user profile at the web service; and communicating, to the web service, the profile information parsed from the message strings.
Still yet another aspect of the invention relates to a system for providing profile information, derived from message strings, to a web service. An exemplary such system includes a mobile communication device; an automatic speech recognition engine adapted to transcribe audio data, received as an utterance at the mobile communication device, to text; a user account configured to interface with a user profile at a web service; and a profile filter adapted to parse the transcribed text, according to the configured user account, for profile information appropriate for use at the web service. Furthermore, in this aspect of the invention, the system may further comprise a profile information index, adapted to store profile information, for the user account.
In accordance with another aspect of the present invention, a system is disclosed for parsing and/or filtering message strings of text messages and/or instant messages in order to identify keywords, phrases, or fragments as a function of which user preferences of user profiles are dynamically updated. In accordance with yet another aspect of the present invention, a method is disclosed for parsing and/or filtering message strings of text messages or instant messages in order to identify keywords, phrases, or fragments as a function of which user preferences of user profiles are dynamically updated. In accordance with still yet another aspect of the present invention, software may be provided for parsing and/or filtering message strings of text messages or instant messages in order to identify keywords, phrases, or fragments as a function of which user preferences of user profiles are dynamically updated, as disclosed.
In features of these aspects, the user profiles are associated with user accounts of web services and/or social networking sites; an automatic speech recognition system generates the message strings from audio dictated by a user using a mobile device; and/or the parsing and/or filtering is performed by client side software and/or server side software.
In another feature of these aspects, users can grant, to a contact (e.g., a friend, family member, or associate), access to the user preferences of that user's profile such that the contact can query that user's profile for user profile data of known fields in the user preferences. In further features, the known fields include favorite bands and movies; the query by the contact is performed by sending a message string including an identification of the user and a known field; and/or the query by the contact is performed by sending a text message including an identification of the user and a known field.
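A hypothetical sketch of such a contact query follows, assuming an invented message format of the form `query <user> <field>` and using in-memory dicts as stand-ins for stored profiles and access grants; none of these names are part of the disclosure.

```python
# Stand-ins for stored user profiles and access grants.
PROFILES = {
    "alice": {"favorite_bands": ["U2"], "favorite_movies": ["Amelie"]},
}
GRANTS = {"alice": {"bob"}}  # alice has granted profile access to bob

def query_profile(sender, message_string):
    """Parse 'query <user> <field>' and return the field if access was granted."""
    parts = message_string.split()
    if len(parts) != 3 or parts[0] != "query":
        return None  # not a recognized query message
    user, field = parts[1], parts[2]
    if sender not in GRANTS.get(user, set()):
        return None  # this contact has not been granted access
    return PROFILES.get(user, {}).get(field)
```

In this sketch the query itself arrives as an ordinary message string (a text message or instant message), consistent with the feature described above.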
In another feature of these aspects, ad impressions may be delivered to a user based on the parsing and/or filtering of one or more message strings of text messages and/or instant messages of the user. In further features, ad impressions are delivered to a user based at least in part on data of the user maintained in the user profile; and/or an ad impression that is delivered is presented as a text message or an instant message. In still further features, such an ad impression is delivered to a mobile device of an author of a message string and/or presented to an author of a message string prior to sending of the message string as an instant message or text message; and an author of a message string is provided with an option of forwarding an ad impression to a recipient of the message string prior to sending of the message string as an instant message or text message.
In accordance with other aspects of the present invention, an ad impression is delivered to a mobile device for presentation to a user of the mobile device as disclosed herein; a method is provided for delivering an ad impression to a mobile device for presentation to a user of the mobile device as disclosed herein; and a method is provided for granting, by a user to a contact (e.g., a friend, family member, or associate), access to user preferences of that user maintained in a user profile of that user, and querying, by the contact, that user's profile for user profile data of known fields in the user preferences. In features of this latter aspect, the known fields include favorite bands and movies; the query by the contact is performed by sending a message string including an identification of the user and a known field; and/or the query by the contact is performed by sending a text message including an identification of the user and a known field.
In features of these aspects, the user profile may be dynamically updated based on parsing and/or filtering message strings of text messages and/or instant messages authored by the user; and/or the user profile is static.
In addition to the aforementioned aspects and features of the present invention, it should be noted that the present invention further encompasses the various possible combinations and subcombinations of such aspects and features.
Further aspects, features, embodiments, and advantages of the present invention will become apparent from the following detailed description with reference to the drawings, wherein:
As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art (“Ordinary Artisan”) that the present invention has broad utility and application. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the present invention. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure of the present invention. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present invention.
Accordingly, while the present invention is described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present invention, and is made merely for the purposes of providing a full and enabling disclosure of the present invention. The detailed disclosure herein of one or more embodiments is not intended, nor is to be construed, to limit the scope of patent protection afforded the present invention, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection afforded the present invention be defined by reading into any claim a limitation found herein that does not explicitly appear in the claim itself.
Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present invention. Accordingly, it is intended that the scope of patent protection afforded the present invention is to be defined by the appended claims rather than the description set forth herein.
Additionally, it is important to note that each term used herein refers to that which the Ordinary Artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein—as understood by the Ordinary Artisan based on the contextual use of such term—differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the Ordinary Artisan should prevail.
Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. Thus, reference to “a picnic basket having an apple” describes “a picnic basket having at least one apple” as well as “a picnic basket having apples.” In contrast, reference to “a picnic basket having a single apple” describes “a picnic basket having only one apple.”
When used herein to join a list of items, “or” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Thus, reference to “a picnic basket having cheese or crackers” describes “a picnic basket having cheese without crackers”, “a picnic basket having crackers without cheese”, and “a picnic basket having both cheese and crackers.” Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.” Thus, reference to “a picnic basket having cheese and crackers” describes “a picnic basket having cheese, wherein the picnic basket further has crackers,” as well as describes “a picnic basket having crackers, wherein the picnic basket further has cheese.”
Referring now to the drawings, in which like numerals represent like components throughout the several views, the preferred embodiments of the present invention are next described. The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
It will be appreciated that the illustrations of
More particularly, and as described, for example, in the aforementioned U.S. Patent Application Pub. No. US 2007/0239837,
A first transceiver tower 130A is positioned between the mobile phone 12 (or the user 32 of the mobile phone 12) and the mobile communication service provider 140, for receiving an audio message (V1), a text message (T3) and/or a verified text message (V/T1) from one of the mobile phone 12 and the mobile communication service provider 140 and transmitting it (V2, T4, V/T2) to the other of the mobile phone 12 and the mobile communication service provider 140. A second transceiver tower 130B is positioned between the mobile communication service provider 140 and mobile devices 170, generally defined as receiving devices 14 equipped to communicate wirelessly via mobile communication service provider 140, for receiving a verified text message (V/T3) from the mobile communication service provider 140 and transmitting it (V5 and T5) to the mobile devices 170. In at least some embodiments, the mobile devices 170 are adapted for receiving a text message converted from an audio message created in the mobile phone 12. Additionally, in at least some embodiments, the mobile devices 170 are also capable of receiving an audio message from the mobile phone 12. The mobile devices 170 include, but are not limited to, a pager, a palm PC, a mobile phone, or the like.
The system 10 also includes software, as disclosed below in more detail, installed in the mobile phone 12 and the backend server 160 for causing the mobile phone 12 and/or the backend server 160 to perform the following functions. The first step is to initialize the mobile phone 12 to establish communication between the mobile phone 12 and the backend server 160, which includes initializing a desired application from the mobile phone 12 and logging into a user account in the backend server 160 from the mobile phone 12. Then, the user 32 presses and holds one of the buttons of the mobile phone 12 and speaks an utterance, thus generating an audio message, V1. At this stage, the audio message V1 is recorded in the mobile phone 12. By releasing the button, the recorded audio message V1 is sent to the backend server 160 through the mobile communication service provider 140.
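The press-and-hold interaction described above might be modeled as follows; the `RecorderClient` class and its use of a plain list as a stand-in for transmission to the backend server 160 are illustrative assumptions only.

```python
class RecorderClient:
    """Sketch of the press-and-hold record flow: pressing a button starts
    recording, and releasing it sends the recorded audio message (V1) to
    a backend (modeled here as a plain list)."""

    def __init__(self, backend):
        self.backend = backend
        self.recording = None  # None means the button is not held

    def press(self):
        self.recording = []  # start a new recording buffer

    def speak(self, chunk):
        if self.recording is not None:
            self.recording.append(chunk)  # buffer audio while the button is held

    def release(self):
        audio = b"".join(self.recording)
        self.recording = None
        self.backend.append(audio)  # send the recorded message on release
        return audio
```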
In the exemplary embodiment of the present invention as shown in
The backend server 160 then converts the audio message V4 into a text message, T1, and/or a digital signal, D1, in the backend server 160 by means of a speech recognition algorithm including a grammar algorithm and/or a transcription algorithm. The text message T1 and the digital signal D1 correspond to two different formats of the audio message V4. The text message T1 and/or the digital signal D1 are sent back to the Internet 150 that outputs them into a text message T2 and a digital signal D2, respectively.
The digital signal D2 is transmitted to a digital receiver 180, generally defined as a receiving device 14 equipped to communicate with the Internet and capable of receiving the digital signal D2. In at least some embodiments, the digital receiver 180 is adapted for receiving a digital signal converted from an audio message created in the mobile phone 12. Additionally, in at least some embodiments, the digital receiver 180 is also capable of receiving an audio message from the mobile phone 12. A conventional computer is one example of a digital receiver 180. In this context, a digital signal D2 may represent, for example, an email or instant message.
It should be understood that, depending upon the configuration of the backend server 160 and software installed on the mobile phone 12, and potentially based upon the system set up or preferences of the user 32, the digital signal D2 can either be transmitted directly from the backend server 160 or it can be provided back to the mobile phone 12 for review and acceptance by the user 32 before it is sent on to the digital receiver 180.
The text message T1 is sent to the mobile communication service provider 140 that outputs it (T1) into a text message T1. The output text message T1 is then transmitted to the first transceiver tower 130A. The first transceiver tower 130A then transmits it (T1) to the mobile phone 12 in the form of a text message T1. It is noted that the substantive content of all the text messages T1-T1 may be identical, which are the corresponding text form of the audio messages V1-V4.
Upon receiving the text message T1, the user 32 verifies it and sends the verified text message V/T1 to the first transceiver tower 130A that in turn, transmits it to the mobile communication service provider 140 in the form of a verified text V/T1. The verified text V/T1 is transmitted to the second transceiver tower 130B in the form of a verified text V/T1 from the mobile communication service provider 140. Then, the transceiver tower 130B transmits the verified text V/T1 to the mobile devices 170.
In at least one implementation, the audio message is simultaneously transmitted to the backend server 160 from the mobile phone 12, when the user 32 speaks to the mobile phone 12. In this circumstance, it is preferred that no audio message is recorded in the mobile phone 12, although it is possible that an audio message could be both transmitted and recorded.
Such a system may be utilized to convert an audio message into a text message. In at least one implementation, this may be accomplished by first initializing a transmitting device so that the transmitting device is capable of communicating with a backend server 160. Second, a user 32 speaks to or into the client device so as to create an audio message stream. The audio message can be recorded and then transmitted to the backend server 160, or the audio message can be simultaneously transmitted to the backend server 160 through a client-server communication protocol. Streaming may be accomplished according to processes described elsewhere herein and, in particular, in
Still further, in at least one implementation, one or both types of client device 12,14 may be located through a global positioning system (GPS), and a listing of locations of a target of interest, proximate to the position of the client device 12,14, may be presented in the converted text message.
When the first user 32 speaks an utterance 36 into the transmitting device 12, the recorded speech audio is sent to the ASR system 18, as described previously. In the example of
Furthermore, in converting speech to text, speech transcription performance indications may be provided to the receiving user 34 in accordance with the disclosure of the aforementioned U.S. patent application Ser. No. 12/197,213.
Additionally, in the context of SMS and/or IM messaging, the ASR system preferably makes use of both statistical language models (SLMs) for returning results from the audio data, and finite grammars used to post-process the text results, in accordance with the disclosure of the aforementioned U.S. patent application Ser. No. 12/198,112. This is believed to result in messages that are formatted in a way that looks more typical of how a human would have typed the message using a mobile device.
It will be appreciated that automated transcription of recorded utterances 36 is useful in other environments and applications as well. For example, in another system (not separately illustrated), a user speaks an utterance 36 into a device as a voicemail, and the recorded speech audio is sent to the ASR system 18. Other applications to which the teachings of the present invention may be applicable will be apparent to the Ordinary Artisan.
At step 510, one or more accounts are established for interfacing to user profiles established at the various web services. Such accounts may be established at the backend server 160/ASR system 18, the user's client device 12,14, or both. Accounts may be designated in any of a variety of ways. For example, a user may maintain one account for text messages and one for IMs, or may maintain a single unified account for both types of messages.
With one or more accounts established, the account or accounts are next configured at step 515 to interface with the user profile at each web service. In one embodiment, such configuration may be effected by the user by selecting one or more web services from a list of available web services displayed on the client device 12,14, while in another embodiment, such configuration may be effected by the user by using a browser on the client device 12,14 to access the web service and select an option for such configuration from the web service. Furthermore, in at least one embodiment, each web service makes use of a standard protocol by which one or both of the backend server 160/ASR system 18 and the user's device 12,14 may communicate with the web service to update the user profile. In another embodiment, a browser on the client device 12,14 may be utilized to access the web service and download a protocol specific to that web service. Preferably, the various configurations are organized and managed in one or more user accounts that correspond, for example, to the client device 12,14.
At step 520, preferences may be established for the configuration. These may, for example, be established directly via a user interface at the client device 12,14 or indirectly at the web service via a browser on the client device 12,14 or via a browser on a separate device. Preferences may include types of filters or the like to be employed as part of a “profile filter” described below, groups of web service profiles to be updated, message types (e.g., text messages, IMs, other messages, or the like, or a combination thereof) or utterance types to be considered, and the like. In at least one embodiment, default preferences are provided and utilized unless and until the user chooses to update the preferences.
Once the user's account or accounts are configured and appropriate preferences have been established, the method may be used to examine message strings for relevant information as shown at step 525. In conjunction with this method, the backend server 160/ASR system 18 may further include a profile filter for processing the text results thereof. Specifically, as a transcribed text result is produced by the ASR system 18, whether the result is a message for communication to one or more other users and/or to one or more web services, or is some other type of transcription, the transcribed text result is parsed in order to identify keywords, fragments, or phrases that may represent relevant personal preference information. Such an identification process may include, for example, keyword or grammar lookups, natural language understanding, semantic analysis, or other techniques in order to derive interestingness for further processing. The filter may constitute, at least in part, one or more of those filters found in the disclosure of one or more of the patent applications incorporated by reference herein.
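As a rough illustration of this parsing step, the keyword-lookup variant mentioned above can be sketched as follows. The class and method names, and the lexicon contents, are hypothetical; the actual filter may instead employ grammar lookups, natural language understanding, or semantic analysis.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Minimal sketch of a profile-filter keyword pass (hypothetical names).
public class ProfileFilter {
    // Keywords the filter treats as relevant personal preference information.
    private final List<String> lexicon;

    public ProfileFilter(List<String> lexicon) {
        this.lexicon = lexicon;
    }

    // Parse a transcribed text result and return any lexicon entries it contains.
    public List<String> extract(String transcribedText) {
        String lower = transcribedText.toLowerCase(Locale.ROOT);
        List<String> hits = new ArrayList<>();
        for (String keyword : lexicon) {
            if (lower.contains(keyword.toLowerCase(Locale.ROOT))) {
                hits.add(keyword);
            }
        }
        return hits;
    }
}
```

The identified keywords would then feed the profile-update step described at step 530.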
In such case, the identification process also may include audio fingerprinting or audio watermarking, which may involve placing human-inaudible audio artifacts in an audio stream that can carry identification or configuration information. Audio fingerprinting or audio watermarking may help the backend server 160/ASR system 18 select the type of noise suppression done or may help it select from a given acoustic model (for example, by providing an indication as to what accent an individual is most likely to have). This may be particularly useful for client-less applications such as voicemail, where the chipset can tag these things, which are eventually picked up by the backend server 160/ASR system 18 after the audio traverses the normal carrier audio factories. It may be desirable to have hidden parameters that would normally be passed if the audio data originated from a corresponding application on a client device.
Alternatively, a profile filter may be implemented on the client device 12,14, whether or not an ASR engine is present in the device 12,14. Still further, it will be appreciated that, in the context of text messaging, a profile filter may be implemented at the mobile communication service provider 140, and in the context of instant messaging, a profile filter may be implemented at an IM service provider (not specifically illustrated).
A separate ASR system 18 provides a convenient platform at which the profile filter may be disposed. However, a profile filter may additionally or alternatively be disposed at a transmitting device 12. In this arrangement, after transcription results are returned by an ASR engine (which may be part of an ASR system 18, may be included in the transmitting device 12, or may be included in the mobile communication service provider 140 or IM service provider) to the transmitting device 12, the transmitting user 32 can use a keyboard, keypad or other user input device on the transmitting device 12 to manually edit the transcription results before transmission. Alternatively, if the transcription results are particularly inaccurate, the user may choose to enter the entire intended message using such user input device on the transmitting device 12. In either case, the manually-edited or -created message may then be processed by a profile filter on the transmitting device 12.
In at least one embodiment, a profile filter may be implemented at a receiving device 14, such that incoming text messages, IMs and other message strings may be processed in a manner similar to that of transcribed utterances or outgoing message strings.
Once the profile filter or the like has processed the message string, web service user profiles that are linked to the user account(s) can then be updated dynamically at step 530 as a function of the keywords, fragments, or phrases identified at step 525.
It is believed that such dynamic personalization will alleviate or even completely replace the need for users to manually update much of the profile information contained in user profiles linked to such user accounts, and that the accuracy of web service targeting and recommendation engines can be dynamically improved based on user text messaging, instant messaging and other message strings. Such information instead can be updated on the fly by the users simply linking their user profiles at such web services to their client device user account(s). For example, a user's preferences at social networking sites such as Facebook and MySpace can be dynamically updated based on that user's message strings without requiring that user to log into the user's account at each site or to modify and save the data in the user profile for the account. Similarly, user profiles associated with web services using recommendation engines, such as that utilized by Amazon, can be dynamically updated based on that user's message strings without requiring that user to manually update the profiles. Thus, as a result of the present invention, static profiles can be avoided.
In the illustrative example of
In addition to the foregoing dynamic updating of the user's profile information, ad impressions further can be targeted to the user based on the identified keyword and phrase, such as ad impressions relating to movie rentals for “Snakes on a Plane” or movie times of a local theater for showings of “Snakes on a Plane.” The advertisements may be pushed either prior to message strings actually being sent to recipients or to web services, or thereafter, as applicable.
The advertising that is pushed to a user's mobile device preferably comprises an ad impression that is displayed to the user in the form of an ad bubble. The ad impression elements may contain text, graphics, videos, and/or audio and may be downloaded from a server infrastructure or may already be resident within the mobile device and accessed directly therefrom. Preferably, each ad impression is designed to be as unobtrusive as possible to the user and allows the user to view or hear the advertisement or take some further action regarding the advertisement, if and as desired by the user, which may include opening a separate mobile browser with additional content relevant to the advertisement.
The ad impression may be delivered only to the author of the message string. Alternatively, the ad impression may be delivered both to the author of the message string and to the intended recipient of the message string, especially where the message string is intended to be sent to the mobile device of another user. Moreover, if the ad impression is sent to either of, but not both of, the author and intended recipient, then such person may be provided with the option of conveniently forwarding the ad impression to the other person if desired, whether by text message, instant message, email, hyperlink, or injection of the ad impression into a message string itself.
In taking further action with regard to an ad impression that is presented to a user, if desired, such user having seen or heard the ad impression may manually click on a displayed advertisement or portion thereof resulting in, for example, the launch of a mobile browser. The mobile browser may then allow the user to either complete a purchase or find relevant information associated with the advertisement. Moreover, rather than manually clicking on the displayed advertisement, the user may speak a keyword as a “voice click,” thereby resulting in the further action being taken. Such use of “voice click” may be in accordance with the disclosure of the aforementioned U.S. patent application Ser. No. 12/198,116, which is hereby incorporated herein by reference.
In the dynamic updating of the user's profile information at one or more web services, use may be made of one or more indexes for storing, in association with the particular user involved, some or all of the profile information that has been parsed from the message string. The index or indexes may include databases, grammars, language models, or the like. As profile information is identified, it may be stored in the appropriate index. If no index exists for the particular user, then it may be created automatically as profile information for the user is gathered.
In some embodiments, the index or indexes are stored at the backend server 160/ASR system 18 and updated directly by the profile filter or other element of the system 18. In at least one embodiment, corresponding indexes are maintained on the client devices 12,14 and synchronized at appropriate times with the system index. Synchronization may be accomplished by transmitting, from the client device 12,14 to the system 18, a delta model representing the differences between the new client device indexes (as updated most recently with profile information) and the last-synchronized information in the client device indexes. Use of delta models enables time and bandwidth to be conserved in the synchronization process. Still further, in at least one other embodiment, the indexes are maintained only on the client devices 12,14.
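The delta-model synchronization described above can be sketched as follows. The class and method names are hypothetical, and only additions to the index are shown; removals would be handled analogously with a second difference set.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of delta-model index synchronization (hypothetical names): only the
// difference between the freshly updated client index and the last-synchronized
// copy is transmitted, conserving time and bandwidth.
public class IndexSync {
    // Entries added to the client index since the last synchronization.
    public static Set<String> delta(Set<String> current, Set<String> lastSynced) {
        Set<String> added = new HashSet<>(current);
        added.removeAll(lastSynced);
        return added;
    }

    // Applying the transmitted delta on the system side reproduces the client index.
    public static Set<String> apply(Set<String> systemIndex, Set<String> delta) {
        Set<String> merged = new HashSet<>(systemIndex);
        merged.addAll(delta);
        return merged;
    }
}
```
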
In some embodiments, the index or indexes may be used as a specific interface point for the web services where the user maintains profiles to be updated according to one or more of the methods and systems disclosed herein. More particularly, updated profile information may be placed in the index or indexes, and a separate process may be used to provide profile information from the index or indexes to the web services. These two separate processes may occur synchronously or asynchronously.
Further, the index(es) may be updated to include static profile information as well as the dynamic profile information derived as described herein.
In some embodiments, the index or indexes may be separately queried by one or more users. Any of a variety of means may be utilized to establish which users are to be given access to some or all of a particular user's profile information in the index(es). In one embodiment, any user (or corresponding user device) in a particular user's contact list, as stored in the particular user's client device 12,14, may be permitted to query the index(es). In particular, in accordance with one or more methods of the present invention, a user's contacts are allowed to query the user's preferences whereby they ask for areas of known content, such as the user's favorite bands or movies. As noted previously, the preferences/profile information could include both dynamic profile information and static profile information. This blend of static and dynamic profile data could also be utilized to target ads and/or promotions.
In an example of the foregoing, the first user 32 in
One commercial implementation of the foregoing principles is the Yap® and Yap9™ service (collectively, “the Yap service”), available from Yap Inc. of Charlotte, N.C. The Yap service includes one or more web applications and a client device application. The Yap web application is a J2EE application built using Java 5. It is designed to be deployed on an application server like IBM WebSphere Application Server or an equivalent J2EE application server. It is designed to be platform neutral, meaning the server hardware and OS can be anything supported by the web application server (e.g., Windows, Linux, Mac OS X).
The Yap web application includes a plurality of servlets. As used herein, the term “servlet” refers to an object that receives a request and generates a response based on the request. Usually, a servlet is a small Java program that runs within a web server. Servlets receive and respond to requests from web clients, usually across HTTP (the HyperText Transfer Protocol) and/or HTTPS, its secure variant. Currently, the Yap web application includes nine servlets: Correct, Debug, Install, Login, Notify, Ping, Results, Submit, and TTS. Each servlet is described below in the order typically encountered.
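The request/response contract of a servlet can be illustrated in plain Java as follows. The names here are hypothetical; the actual Yap servlets would extend the J2EE HttpServlet class and run inside the web application container rather than this simplified interface.

```java
// Plain-Java sketch of the servlet contract (hypothetical names): an object
// that receives a request and generates a response based on the request.
public class ServletSketch {
    public interface Servlet {
        String service(String request);
    }

    // A Ping-like servlet: any request yields a simple success response.
    public static class PingServlet implements Servlet {
        public String service(String request) {
            return "200 OK";
        }
    }

    // The container's role, reduced to its essence: route a request to a servlet.
    public static String dispatch(Servlet servlet, String request) {
        return servlet.service(request);
    }
}
```
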
The communication protocol used for all messages between the Yap client and Yap server applications is HTTP or HTTPS. Using these standard web protocols allows the Yap web application to fit well in a web application container. From the application server's point of view, the Yap client midlet is indistinguishable from a typical web browser. This aspect of the design is intentional: it convinces the web application server that the Yap client midlet is actually a web browser, which allows the client to use features of the J2EE web programming model such as session management and HTTPS security. It is also an important feature of the client, as the MIDP specification requires that clients be allowed to communicate over HTTP.
More specifically, the Yap client uses the POST method and custom headers to pass values to the server. The body of the HTTP message in most cases is irrelevant, the exception being when the client submits audio data to the server, in which case the body contains the binary audio data. The server responds with an HTTP code indicating the success or failure of the request, and with data in the body corresponding to the request being made. Preferably, the server does not depend on custom header messages being delivered to the client, as the carriers can, and usually do, strip out unknown header values.
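As a sketch only, such a request might be composed as follows. The header names used here are illustrative assumptions, since the actual header names used by the Yap client are not specified in this description.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of composing a Submit-style request (hypothetical header names):
// values travel in custom HTTP headers, and the body is empty except when
// binary audio data is submitted.
public class YapRequest {
    public static Map<String, String> submitHeaders(
            String sessionId, int yap9Button, int yap9Screen, String audioFormat) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Yap-Session", sessionId);             // assumed header name
        headers.put("Yap9-Button", String.valueOf(yap9Button)); // assumed header name
        headers.put("Yap9-Screen", String.valueOf(yap9Screen)); // assumed header name
        headers.put("Audio-Format", audioFormat);          // assumed header name
        return headers;
    }
}
```
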
The Yap client is operated via a user interface (UI), known as “Yap9,” which is well suited for implementing methods of converting an audio message into a text message and messaging in mobile environments. Yap9 is a combined UI for SMS and web services (WS) that makes use of the buttons or keys of the client device by assigning a function to each button (sometimes referred to as a “Yap9” button or key). Execution of such functions is carried out by “Yaplets.” This process, and the usage of such buttons, are described elsewhere herein and, in particular, in
Usage Process—Install:
Installation of the Yap client device application is described in the aforementioned U.S. Patent Application Pub. No. US 2007/0239837 in a subsection titled “Install Process” of a section titled “System Architecture.”
Usage Process—Notify:
When a Yap client has been installed, or when the install fails or is canceled by the user, the phone sends the Notify servlet a message with a short description. This can be used for tracking purposes and to help diagnose any install problems.
Usage Process—Login:
When the Yap midlet is opened, the first step is to create a new session by logging into the Yap web application using the Login servlet. Preferably, however, multiple login servers exist, so as a preliminary step, a request is sent to find a server to log in to. Exemplary protocol details for such a request can be seen in
After receiving this response, a login request is sent. Exemplary protocol details for such a request can be seen in
Sessions are typically maintained using client-side cookies; however, the Yap client cannot rely on the set-cookie header successfully returning to it, because the carrier may remove that header from the HTTP response. The solution to this problem is the technique of URL rewriting: the session ID is extracted from the session API and returned to the client in the body of the response. This is called the “Yap Cookie” and is used in every subsequent request from the client. The Yap Cookie looks like this:
All subsequent requests from the client simply append this cookie to the end of the request, and the session is maintained:
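The URL-rewriting step can be sketched as follows, using the “;jsessionid=” form shown for the Logout request elsewhere herein. The class and method names are hypothetical.

```java
// Sketch of the URL-rewriting technique: the session ID returned in the
// response body (the "Yap Cookie") is appended to every subsequent request
// path, so the session survives even if carriers strip the set-cookie header.
public class YapCookie {
    public static String appendSessionId(String path, String sessionId) {
        return path + ";jsessionid=" + sessionId;
    }
}
```

For example, a logout request for session 1234 would be rewritten as “/Yap/Logout;jsessionid=1234”.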
Usage Process—Submit:
Once a session ID has been received, audio data may be submitted. The user presses and holds one of the Yap9 buttons, speaks aloud, and releases the pressed button. The speech is recorded, and the recorded speech is then sent in the body of a request to the Submit servlet, which returns a unique receipt that the client can use later to identify this utterance. Exemplary protocol details for such a request can be seen in
One of the header values sent to the server during the login process is the format in which the device records. That value is stored in the session so that the Submit servlet knows how to convert the audio into the format required by the ASR engine. This conversion is done in a separate thread, as the process can take some time to complete.
The Yap9 button and Yap9 screen numbers are passed to the Submit server in the HTTP request header. These values are used to lookup a user-defined preference of what each button is assigned to. For example, the 1 button may be used to transcribe audio for an SMS message, while the 2 button is designated for a grammar based recognition to be used in a web services location based search. The Submit servlet determines the appropriate “Yaplet” to use. When the engine has finished transcribing the audio or matching it against a grammar, the results are stored in a hash table in the session.
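The button-to-Yaplet lookup performed by the Submit servlet can be sketched as follows. The class name and the example assignments are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Submit servlet's button lookup (hypothetical names): each
// Yap9 button number maps to the user-assigned "Yaplet" that should handle
// the recorded audio.
public class YapletRegistry {
    private final Map<Integer, String> assignments = new HashMap<>();

    public void assign(int yap9Button, String yaplet) {
        assignments.put(yap9Button, yaplet);
    }

    // Resolve the Yaplet for a pressed button, e.g. button 1 for SMS
    // transcription, button 2 for a grammar-based location search.
    public String resolve(int yap9Button) {
        return assignments.getOrDefault(yap9Button, "unassigned");
    }
}
```

The registry contents would come from the user preferences stored on the server, as described under “User Preferences” below.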
In the case of transcribed audio for an SMS text message, a number of filters can be applied to the text returned from the ASR engine. Such filters may include, but are not limited to, those shown in Table 3.
Notably, after all of the filters are applied, both the filtered text and original text are returned to the client so that if text to speech is enabled for the user, the original unfiltered text can be used to generate the TTS audio.
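The filter pass, returning both the filtered and the original text, can be sketched as follows. The names are hypothetical, and the whitespace filters used in the usage example are illustrative stand-ins for the actual filters of Table 3.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of applying the SMS text filters in sequence (hypothetical names).
// Both the filtered text and the original are returned, so that the original
// unfiltered text remains available for generating the TTS audio.
public class FilterChain {
    public static String[] apply(String original, List<UnaryOperator<String>> filters) {
        String filtered = original;
        for (UnaryOperator<String> filter : filters) {
            filtered = filter.apply(filtered);
        }
        return new String[] { filtered, original };
    }
}
```
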
Usage Process—Results:
The client retrieves the results of the audio by taking the receipt returned from the Submit servlet and submitting it as a request to the Results servlet. Exemplary protocol details for such a request can be seen in
Usage Process—TTS:
The user may choose to have the results read back via text to speech. This is an option the user can disable to save network bandwidth, but it adds value in situations where looking at the screen is not desirable, such as while driving. If TTS is used, the TTS string is extracted from the results and sent via an HTTP request to the TTS servlet. Exemplary protocol details for such a request can be seen in
Usage Process—Correct:
As a means of tracking accuracy and improving future SMS based language models, if the user makes a correction to transcribed text on the phone via the keypad before sending the message, the corrected text is submitted to the Correct servlet along with the receipt for the request. This information is stored on the server for later use in analyzing accuracy and compiling a database of typical SMS messages. Exemplary protocol details for such a submission can be seen in
Usage Process—Ping:
Typically, web sessions will timeout after a certain amount of inactivity. The Ping servlet can be used to send a quick message from the client to keep the session alive. Exemplary protocol details for such a message can be seen in
Usage Process—Debug:
Used mainly for development purposes, the Debug servlet sends logging messages from the client to a debug log on the server. Exemplary protocol details can be seen in
Usage Process—Logout:
To logout from the Yap server, an HTTP logout request needs to be issued to the server. An exemplary such request would take the form: “/Yap/Logout;jsessionid=1234”, where 1234 is the session ID.
User Preferences:
In at least one embodiment, the Yap website has a section where the user can log in and customize their Yap client preferences. This allows them to choose from available Yaplets and assign them to Yap9 keys on their phone. The user preferences are stored and maintained on the server and are accessible from the Yap web application. This frees the Yap client from having to know about all of the different back-end Yaplets. It just records the audio, submits it to the server along with the Yap9 key and Yap9 screen used for the recording, and waits for the results. The server handles all of the details of what the user actually wants to have happen with the audio.
The client needs to know what type of format to utilize when presenting the results to the user. This is accomplished through a code in the Results object. The majority of requests fall into one of two categories: sending an SMS message, or displaying the results of a web services query in a list format. Notably, although these two are the most common, the Yap architecture supports the addition of new formats.
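The format dispatch on the Results object can be sketched as follows. The specific format codes and presentation strings are illustrative assumptions; only the two common categories and the extensibility to new formats come from the description above.

```java
// Sketch of the Results-object format code (hypothetical names and codes):
// a code in the returned object tells the client whether to present an SMS
// compose screen or a list of web services results.
public class ResultsFormat {
    public static final int SMS = 0;     // assumed code: send an SMS message
    public static final int WS_LIST = 1; // assumed code: display WS query results as a list

    public static String present(int formatCode, String payload) {
        switch (formatCode) {
            case SMS:     return "sms:" + payload;
            case WS_LIST: return "list:" + payload;
            default:      return "unknown:" + payload; // new formats may be added
        }
    }
}
```
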
Based on the foregoing description, it will be readily understood by those persons skilled in the art that the present invention is susceptible of broad utility and application. Many embodiments and adaptations of the present invention other than those specifically described herein, as well as many variations, modifications, and equivalent arrangements, will be apparent from or reasonably suggested by the present invention and the foregoing descriptions thereof, without departing from the substance or scope of the present invention.
Accordingly, while the present invention has been described herein in detail in relation to one or more preferred embodiments, it is to be understood that this disclosure is only illustrative and exemplary of the present invention and is made merely for the purpose of providing a full and enabling disclosure of the invention. The foregoing disclosure is not intended to be construed to limit the present invention or otherwise exclude any such other embodiments, adaptations, variations, modifications or equivalent arrangements, the present invention being limited only by the claims appended hereto and the equivalents thereof.
The present application is a nonprovisional patent application of, and claims priority under 35 U.S.C. § 119(e) to, each of the following: (1) U.S. provisional patent application Ser. No. 60/972,851, filed Sep. 17, 2007 and titled “SYSTEM AND METHOD FOR DELIVERING MOBILE ADVERTISING WITHIN A THREADED SMS OR IM CHAT CONVERSATION ON A MOBILE DEVICE CLIENT”;(2) U.S. provisional patent application Ser. No. 60/972,853, filed Sep. 17, 2007 and titled “METHOD AND SYSTEM FOR DYNAMIC PERSONALIZATION AND QUERYING OF USER PROFILES BASED ON SMS/IM CHAT MESSAGING ON A MOBILE DEVICE”;(3) U.S. provisional patent application Ser. No. 60/972,854, filed Sep. 17, 2007 and titled “LOCATION, TIME & SEASON AWARE MOBILE ADVERTISING DELIVERY”;(4) U.S. provisional patent application Ser. No. 60/972,936, filed Sep. 17, 2007 and titled “DELIVERING TARGETED ADVERTISING TO MOBILE DEVICE FOR PRESENTATION WITHIN SMSes OR IM CONVERSATIONS”;(5) U.S. provisional patent application Ser. No. 60/972,943, filed Sep. 17, 2007 and titled “DYNAMIC PERSONALIZATION AND QUERYING OF USER PROFILES BASED ON SMSes AND IM CONVERSATIONS”; and(6) U.S. provisional patent application Ser. No. 60/972,944, filed Sep. 17, 2007 and titled “LOCATION, TIME, AND SEASON AWARE ADVERTISING DELIVERY TO AND PRESENTATION ON MOBILE DEVICE WITHIN SMSes OR IM CONVERSATIONS OR USER INTERFACE THEREOF”. Each of the foregoing patent applications from which priority is claimed is hereby incorporated herein by reference in its entirety. Additionally, U.S. Patent Application Publication No. US 2007/0239837 is incorporated herein by reference, and each of the following patent applications, and any corresponding patent application publications thereof, are incorporated herein by reference: U.S. nonprovisional patent application Ser. No. 12/197,213, filed Aug. 22, 2008 and titled “CONTINUOUS SPEECH TRANSCRIPTION PERFORMANCE INDICATION”; U.S. nonprovisional patent application Ser. No. 12/198,112, filed Aug. 
25, 2008 and titled “FILTERING TRANSCRIPTIONS OF UTTERANCES;” U.S. nonprovisional patent application Ser. No. 12/198,116, filed Aug. 25, 2008 and titled “FACILITATING PRESENTATION BY MOBILE DEVICE OF ADDITIONAL CONTENT FOR A WORD OR PHRASE UPON UTTERANCE THEREOF”; U.S. nonprovisional patent application Ser. No. 12/197,227, filed Aug. 22, 2008 and titled “TRANSCRIBING AND MATCHING MOBILE DEVICE UTTERANCES TO KEYWORDS TAKEN FROM MOBILE DEVICE MESSAGES AND ASSOCIATED WITH WEB ADDRESSES”; and U.S. nonprovisional patent application Ser. No. 12/212,645, filed Sep. 17, 2008 and titled “FACILITATING PRESENTATION OF ADS RELATING TO WORDS OF A MESSAGE.” Finally, the disclosure of provisional application 60/789,837 is contained in APPENDIX A attached hereto and, likewise, is incorporated herein in its entirety by reference and is intended to provide background and technical information with regard to the systems and environments of the inventions of the current provisional patent application. Similarly, the disclosure of the brochure of APPENDIX B is incorporated herein in its entirety by reference.
7657424 | Bennett | Feb 2010 | B2 |
7668718 | Kahn et al. | Feb 2010 | B2 |
7672841 | Bennett | Mar 2010 | B2 |
7680661 | Co et al. | Mar 2010 | B2 |
7685509 | Clark et al. | Mar 2010 | B1 |
7689415 | Jochumson | Mar 2010 | B1 |
7702508 | Bennett | Apr 2010 | B2 |
7707163 | Anzalone et al. | Apr 2010 | B2 |
7716058 | Roth et al. | May 2010 | B2 |
7725307 | Bennett | May 2010 | B2 |
7725321 | Bennett | May 2010 | B2 |
7729904 | Bennett | Jun 2010 | B2 |
7729912 | Bacchiani et al. | Jun 2010 | B1 |
7747437 | Verhasselt et al. | Jun 2010 | B2 |
7757162 | Barrus et al. | Jul 2010 | B2 |
7769764 | Ramer et al. | Aug 2010 | B2 |
7796980 | McKinney et al. | Sep 2010 | B1 |
7809574 | Roth et al. | Oct 2010 | B2 |
7822610 | Burns et al. | Oct 2010 | B2 |
7852993 | Ju et al. | Dec 2010 | B2 |
7890329 | Wu et al. | Feb 2011 | B2 |
7890586 | McNamara et al. | Feb 2011 | B1 |
7899670 | Young et al. | Mar 2011 | B1 |
7899671 | Cooper et al. | Mar 2011 | B2 |
7904301 | Densham et al. | Mar 2011 | B2 |
7907705 | Huff et al. | Mar 2011 | B1 |
7908141 | Belknap | Mar 2011 | B2 |
7908273 | DiMaria et al. | Mar 2011 | B2 |
7925716 | Zhang et al. | Apr 2011 | B2 |
7949529 | Weider et al. | May 2011 | B2 |
7957975 | Burns et al. | Jun 2011 | B2 |
7970610 | Downey | Jun 2011 | B2 |
8010358 | Chen | Aug 2011 | B2 |
8027836 | Baker et al. | Sep 2011 | B2 |
8032372 | Zimmerman et al. | Oct 2011 | B1 |
8050918 | Ghasemi et al. | Nov 2011 | B2 |
8069047 | Cross et al. | Nov 2011 | B2 |
8073700 | Jaramillo et al. | Dec 2011 | B2 |
8106285 | Gerl et al. | Jan 2012 | B2 |
8117268 | Jablokov et al. | Feb 2012 | B2 |
8121838 | Kobal et al. | Feb 2012 | B2 |
8126120 | Stifelman et al. | Feb 2012 | B2 |
8135578 | Hébert | Mar 2012 | B2 |
8140632 | Jablokov et al. | Mar 2012 | B1 |
8145485 | Brown | Mar 2012 | B2 |
8145493 | Cross, Jr. et al. | Mar 2012 | B2 |
8209184 | Dragosh et al. | Jun 2012 | B1 |
8229743 | Carter | Jul 2012 | B2 |
8296139 | Da Palma et al. | Oct 2012 | B2 |
8296377 | Jablokov et al. | Oct 2012 | B1 |
8301454 | Paden | Oct 2012 | B2 |
8311825 | Chen | Nov 2012 | B2 |
8326636 | White | Dec 2012 | B2 |
8335829 | Jablokov et al. | Dec 2012 | B1 |
8335830 | Jablokov et al. | Dec 2012 | B2 |
8352261 | Terrell, II et al. | Jan 2013 | B2 |
8352264 | White | Jan 2013 | B2 |
8355920 | Gopinath et al. | Jan 2013 | B2 |
8380511 | Cave et al. | Feb 2013 | B2 |
8401850 | Jochumson | Mar 2013 | B1 |
8433574 | Jablokov et al. | Mar 2013 | B2 |
8417530 | Hayes | Apr 2013 | B1 |
8498872 | White et al. | Jul 2013 | B2 |
8510094 | Chin et al. | Aug 2013 | B2 |
8510109 | Terrell, II et al. | Aug 2013 | B2 |
8543396 | Terrell, II et al. | Sep 2013 | B2 |
8589164 | Mengibar et al. | Nov 2013 | B1 |
8611871 | Terrell, II | Dec 2013 | B2 |
8670977 | Saraclar et al. | Mar 2014 | B2 |
8793122 | White et al. | Jul 2014 | B2 |
8898065 | Newman et al. | Nov 2014 | B2 |
9009055 | Jablokov et al. | Apr 2015 | B1 |
9053489 | Jablokov et al. | Jun 2015 | B2 |
9093061 | Secker-Walker et al. | Jul 2015 | B1 |
9099087 | Adams et al. | Aug 2015 | B2 |
9330401 | Terrell, II | May 2016 | B2 |
9369581 | Hirschberg et al. | Jun 2016 | B2 |
9384735 | White et al. | Jul 2016 | B2 |
9436951 | Jablokov et al. | Sep 2016 | B1 |
9542944 | Jablokov et al. | Jan 2017 | B2 |
9583107 | Terrell, II et al. | Feb 2017 | B2 |
20010047294 | Rothschild | Nov 2001 | A1 |
20010056350 | Calderone | Dec 2001 | A1 |
20010056369 | Takayama et al. | Dec 2001 | A1 |
20020016712 | Geurts et al. | Feb 2002 | A1 |
20020029101 | Larson et al. | Mar 2002 | A1 |
20020035474 | Alpdemir | Mar 2002 | A1 |
20020052781 | Aufricht et al. | May 2002 | A1 |
20020087330 | Lee et al. | Jul 2002 | A1 |
20020091570 | Sakagawa | Jul 2002 | A1 |
20020161579 | Saindon et al. | Oct 2002 | A1 |
20020165719 | Wang et al. | Nov 2002 | A1 |
20020165773 | Natsuno et al. | Nov 2002 | A1 |
20030008661 | Joyce et al. | Jan 2003 | A1 |
20030050778 | Nguyen et al. | Jan 2003 | A1 |
20030028601 | Rowe | Feb 2003 | A1 |
20030093315 | Sato | May 2003 | A1 |
20030101054 | Davis et al. | May 2003 | A1 |
20030105630 | MacGinitie et al. | Jun 2003 | A1 |
20030115060 | Junqua et al. | Jun 2003 | A1 |
20030125955 | Arnold et al. | Jul 2003 | A1 |
20030126216 | Avila et al. | Jul 2003 | A1 |
20030139922 | Hoffmann et al. | Jul 2003 | A1 |
20030144906 | Fujimoto et al. | Jul 2003 | A1 |
20030149566 | Levin et al. | Aug 2003 | A1 |
20030182113 | Huang | Sep 2003 | A1 |
20030187643 | Van Thong et al. | Oct 2003 | A1 |
20030191639 | Mazza | Oct 2003 | A1 |
20030200086 | Kawazoe et al. | Oct 2003 | A1 |
20030200093 | Lewis et al. | Oct 2003 | A1 |
20030212554 | Vatland | Nov 2003 | A1 |
20030220792 | Kobayashi et al. | Nov 2003 | A1 |
20030220798 | Schmid et al. | Nov 2003 | A1 |
20030223556 | Ju et al. | Dec 2003 | A1 |
20040005877 | Vaananen | Jan 2004 | A1 |
20040015547 | Griffin et al. | Jan 2004 | A1 |
20040019488 | Portillo | Jan 2004 | A1 |
20040059632 | Kang et al. | Mar 2004 | A1 |
20040059708 | Dean et al. | Mar 2004 | A1 |
20040059712 | Dean et al. | Mar 2004 | A1 |
20040107107 | Lenir et al. | Jun 2004 | A1 |
20040133655 | Yen et al. | Jul 2004 | A1 |
20040151358 | Yanagita et al. | Aug 2004 | A1 |
20040176906 | Matsubara et al. | Sep 2004 | A1 |
20040193420 | Kennewick et al. | Sep 2004 | A1 |
20040199595 | Banister et al. | Oct 2004 | A1 |
20050004799 | Lyudovyk | Jan 2005 | A1 |
20050010641 | Staack | Jan 2005 | A1 |
20050021344 | Davis et al. | Jan 2005 | A1 |
20050027538 | Halonen et al. | Feb 2005 | A1 |
20050080786 | Fish et al. | Apr 2005 | A1 |
20050101355 | Hon et al. | May 2005 | A1 |
20050102142 | Soufflet et al. | May 2005 | A1 |
20050149326 | Hogengout et al. | Jul 2005 | A1 |
20050154587 | Funari et al. | Jul 2005 | A1 |
20050165609 | Zuberec et al. | Jul 2005 | A1 |
20050177376 | Cooper et al. | Aug 2005 | A1 |
20050182628 | Choi | Aug 2005 | A1 |
20050187768 | Godden | Aug 2005 | A1 |
20050188029 | Asikainen et al. | Aug 2005 | A1 |
20050197145 | Chae et al. | Sep 2005 | A1 |
20050197840 | Wang et al. | Sep 2005 | A1 |
20050203751 | Stevens et al. | Sep 2005 | A1 |
20050209868 | Wan et al. | Sep 2005 | A1 |
20050239495 | Bayne | Oct 2005 | A1 |
20050240406 | Carroll | Oct 2005 | A1 |
20050261907 | Smolenski et al. | Nov 2005 | A1 |
20050266884 | Marriott et al. | Dec 2005 | A1 |
20050288926 | Benco et al. | Dec 2005 | A1 |
20060004570 | Ju et al. | Jan 2006 | A1 |
20060009974 | Junqua et al. | Jan 2006 | A1 |
20060143007 | Koh et al. | Jan 2006 | A1 |
20060052127 | Wolter | Mar 2006 | A1 |
20060053016 | Falcon et al. | Mar 2006 | A1 |
20060074895 | Belknap | Apr 2006 | A1 |
20060075055 | Littlefield | Apr 2006 | A1 |
20060111907 | Mowatt et al. | May 2006 | A1 |
20060122834 | Bennett | Jun 2006 | A1 |
20060129455 | Shah | Jun 2006 | A1 |
20060149558 | Kahn et al. | Jul 2006 | A1 |
20060149630 | Elliott et al. | Jul 2006 | A1 |
20060159507 | Jaweth et al. | Jul 2006 | A1 |
20060161429 | Falcon et al. | Jul 2006 | A1 |
20060195318 | Stanglmayr | Aug 2006 | A1 |
20060195541 | Ju et al. | Aug 2006 | A1 |
20060217159 | Watson | Sep 2006 | A1 |
20060235684 | Chang | Oct 2006 | A1 |
20060235695 | Thrift et al. | Oct 2006 | A1 |
20070005368 | Chutorash et al. | Jan 2007 | A1 |
20070005795 | Gonzalez | Jan 2007 | A1 |
20070033005 | Cristo et al. | Feb 2007 | A1 |
20070038451 | Cogne et al. | Feb 2007 | A1 |
20070038740 | Steeves | Feb 2007 | A1 |
20070038923 | Patel | Feb 2007 | A1 |
20070043569 | Potter et al. | Feb 2007 | A1 |
20070061146 | Jaramillo et al. | Mar 2007 | A1 |
20070061148 | Cross et al. | Mar 2007 | A1 |
20070061300 | Ramer et al. | Mar 2007 | A1 |
20070079383 | Gopalakrishnan | Apr 2007 | A1 |
20070086773 | Ramsten et al. | Apr 2007 | A1 |
20070106506 | Ma et al. | May 2007 | A1 |
20070106507 | Charoenruengkit et al. | May 2007 | A1 |
20070115845 | Hochwarth et al. | May 2007 | A1 |
20070118374 | Wise et al. | May 2007 | A1 |
20070118426 | Barnes, Jr. | May 2007 | A1 |
20070118592 | Bachenberg | May 2007 | A1 |
20070123222 | Cox et al. | May 2007 | A1 |
20070133769 | Da Palma et al. | Jun 2007 | A1 |
20070133771 | Stifelman et al. | Jun 2007 | A1 |
20070150275 | Garner et al. | Jun 2007 | A1 |
20070156400 | Wheeler | Jul 2007 | A1 |
20070180718 | Fourquin et al. | Aug 2007 | A1 |
20070233487 | Cohen et al. | Oct 2007 | A1 |
20070233488 | Carus et al. | Oct 2007 | A1 |
20070239837 | Jablokov et al. | Oct 2007 | A1 |
20070255794 | Coutts | Nov 2007 | A1 |
20080016142 | Schneider | Jan 2008 | A1 |
20080037720 | Thomson et al. | Feb 2008 | A1 |
20080040683 | Walsh | Feb 2008 | A1 |
20080052073 | Goto et al. | Feb 2008 | A1 |
20080052075 | He et al. | Feb 2008 | A1 |
20080063154 | Tamari | Mar 2008 | A1 |
20080063155 | Doulton | Mar 2008 | A1 |
20080065481 | Immorlica et al. | Mar 2008 | A1 |
20080065737 | Burke et al. | Mar 2008 | A1 |
20080077406 | Ganong, III | Mar 2008 | A1 |
20080091426 | Rempel et al. | Apr 2008 | A1 |
20080092168 | Logan | Apr 2008 | A1 |
20080120375 | Levy | May 2008 | A1 |
20080133232 | Doulton | Jun 2008 | A1 |
20080147404 | Liu et al. | Jun 2008 | A1 |
20080154600 | Tian et al. | Jun 2008 | A1 |
20080154870 | Evermann et al. | Jun 2008 | A1 |
20080155060 | Weber et al. | Jun 2008 | A1 |
20080172781 | Popowich et al. | Jul 2008 | A1 |
20080177551 | Schalk | Jul 2008 | A1 |
20080195588 | Kim et al. | Aug 2008 | A1 |
20080198898 | Skakkebaek et al. | Aug 2008 | A1 |
20080198980 | Skakkebaek et al. | Aug 2008 | A1 |
20080198981 | Skakkebaek et al. | Aug 2008 | A1 |
20080200153 | Fitzpatrick et al. | Aug 2008 | A1 |
20080201139 | Yu et al. | Aug 2008 | A1 |
20080208582 | Gallino | Aug 2008 | A1 |
20080208590 | Cross, Jr. et al. | Aug 2008 | A1 |
20080221897 | Cerra et al. | Sep 2008 | A1 |
20080243500 | Bisani et al. | Oct 2008 | A1 |
20080243504 | Poi | Oct 2008 | A1 |
20080261564 | Logan | Oct 2008 | A1 |
20080275864 | Kim et al. | Nov 2008 | A1 |
20080275873 | Bosarge et al. | Nov 2008 | A1 |
20080301250 | Hardy | Dec 2008 | A1 |
20080313039 | Altberg et al. | Dec 2008 | A1 |
20080317219 | Manzardo | Dec 2008 | A1 |
20090006194 | Sridharan et al. | Jan 2009 | A1 |
20090012793 | Dao et al. | Jan 2009 | A1 |
20090037255 | Chiu et al. | Feb 2009 | A1 |
20090043855 | Bookstaff et al. | Feb 2009 | A1 |
20090055175 | Terrell, II et al. | Feb 2009 | A1 |
20090055179 | Cho et al. | Feb 2009 | A1 |
20090055538 | Conradt | Feb 2009 | A1 |
20090063151 | Arrowood et al. | Mar 2009 | A1 |
20090063268 | Burgess et al. | Mar 2009 | A1 |
20090076821 | Brenner et al. | Mar 2009 | A1 |
20090076917 | Jablokov et al. | Mar 2009 | A1 |
20090077493 | Hempel et al. | Mar 2009 | A1 |
20090086958 | Altberg et al. | Apr 2009 | A1 |
20090100050 | Erol et al. | Apr 2009 | A1 |
20090117922 | Bell | May 2009 | A1 |
20090124272 | White et al. | May 2009 | A1 |
20090125299 | Wang | May 2009 | A1 |
20090141875 | Demmitt et al. | Jun 2009 | A1 |
20090150156 | Kennewick et al. | Jun 2009 | A1 |
20090150405 | Grouf et al. | Jun 2009 | A1 |
20090157401 | Bennett | Jun 2009 | A1 |
20090163187 | Terrell, II | Jun 2009 | A1 |
20090170478 | Doulton | Jul 2009 | A1 |
20090182559 | Gerl et al. | Jul 2009 | A1 |
20090182560 | White | Jul 2009 | A1 |
20090199101 | Cross et al. | Aug 2009 | A1 |
20090204410 | Mozer et al. | Aug 2009 | A1 |
20090210214 | Qian et al. | Aug 2009 | A1 |
20090228274 | Terrell, II et al. | Sep 2009 | A1 |
20090240488 | White et al. | Sep 2009 | A1 |
20090248415 | Jablokov et al. | Oct 2009 | A1 |
20090271194 | Davis et al. | Oct 2009 | A1 |
20090276215 | Hager | Nov 2009 | A1 |
20090282363 | Jhaveri et al. | Nov 2009 | A1 |
20090307090 | Gupta et al. | Dec 2009 | A1 |
20090312040 | Gupta et al. | Dec 2009 | A1 |
20090319187 | Deeming et al. | Dec 2009 | A1 |
20100017294 | Mancarella et al. | Jan 2010 | A1 |
20100049525 | Paden | Feb 2010 | A1 |
20100058200 | Jablokov et al. | Mar 2010 | A1 |
20100121629 | Cohen | May 2010 | A1 |
20100145700 | Kennewick et al. | Jun 2010 | A1 |
20100146077 | Davies et al. | Jun 2010 | A1 |
20100180202 | Del Valle Lopez | Jul 2010 | A1 |
20100182325 | Cederwall et al. | Jul 2010 | A1 |
20100191619 | Dicker et al. | Jul 2010 | A1 |
20100223056 | Kadirkamanathan | Sep 2010 | A1 |
20100268726 | Gorodyansky et al. | Oct 2010 | A1 |
20100278453 | King | Nov 2010 | A1 |
20100279667 | Wehrs et al. | Nov 2010 | A1 |
20100286901 | Geelen et al. | Nov 2010 | A1 |
20100293242 | Buchheit et al. | Nov 2010 | A1 |
20100312619 | Ala-Pietila et al. | Dec 2010 | A1 |
20100312640 | Haldeman et al. | Dec 2010 | A1 |
20110029876 | Slotznick et al. | Feb 2011 | A1 |
20110040629 | Chiu et al. | Feb 2011 | A1 |
20110054900 | Phillips et al. | Mar 2011 | A1 |
20110064207 | Chiu et al. | Mar 2011 | A1 |
20110144973 | Bocchieri et al. | Jun 2011 | A1 |
20110161072 | Terao et al. | Jun 2011 | A1 |
20110161276 | Krumm et al. | Jun 2011 | A1 |
20110047452 | Ativanichayaphong et al. | Dec 2011 | A1 |
20110296374 | Wu et al. | Dec 2011 | A1 |
20110313764 | Bacchiani et al. | Dec 2011 | A1 |
20120022875 | Cross et al. | Jan 2012 | A1 |
20120046950 | Jaramillo et al. | Feb 2012 | A1 |
20120059653 | Adams et al. | Mar 2012 | A1 |
20120095831 | Aaltonen et al. | Apr 2012 | A1 |
20120166202 | Carriere et al. | Jun 2012 | A1 |
20120259729 | Linden et al. | Oct 2012 | A1 |
20120324391 | Tocci | Dec 2012 | A1 |
20130041667 | Longe et al. | Feb 2013 | A1 |
20130158994 | Jaramillo et al. | Jun 2013 | A1 |
20130211815 | Seligman et al. | Aug 2013 | A1 |
20130226894 | Venkataraman et al. | Aug 2013 | A1 |
20130281007 | Edge et al. | Oct 2013 | A1 |
20140136199 | Hager | May 2014 | A1 |
20150255067 | White et al. | Sep 2015 | A1 |
20170004831 | White et al. | Jan 2017 | A1 |
Number | Date | Country |
---|---|---|
1274222 | Jan 2003 | EP |
2006101528 | Sep 2006 | WO |
Entry |
---|
Allauzen, C., et al., A Generalized Composition Algorithm for Weighted Finite-State Transducers, Interspeech, Brighton, U.K., Sep. 2009, pp. 1203-1206. |
Bisani, M., et al., Automatic Editing in a Back-End Speech-to-Text System, 2008, 7 pages. |
Board of Patent Appeals and Interferences Answer in U.S. Appl. No. 12/352,442 dated May 15, 2012. |
Brown, E., et al., Capitalization Recovery for Text, Springer-Verlag Berlin Heidelberg, 2002, 12 pages. |
Desilets, A., et al., Extracting Keyphrases From Spoken Audio Documents, Springer-Verlag Berlin Heidelberg, 2002, 15 pages. |
Glaser, M., et al., Web-Based Telephony Bridges for the Deaf, proceedings of the South African Telecommunications Networks & Applications Conference (2001), Wild Coast Sun, South Africa, 5 pages. |
Gotoh, Y., et al., Sentence Boundary Detection in Broadcast Speech Transcripts, Proceedings of the ISCA Workshop, 2000, 8 pages. |
Hori, T., et al., Efficient WFST-Based One-Pass Decoding With On-The-Fly Hypothesis Rescoring in Extremely Large Vocabulary Continuous Speech Recognition, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 4, May 2007, pp. 1352-1365. |
Huang, J., et al., Extracting Caller Information from Voicemail, IBM T.J. Watson Research Center, 2002, pp. 67-77. |
Huang, J., et al., Maximum Entropy Model for Punctuation Annotation From Speech, in ICSLP 2002, pp. 917-920. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s) dated Jun. 4, 2010. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), dated Dec. 6, 2010. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), dated Feb. 14, 2012. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), dated Mar. 17, 2011. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), dated Nov. 24, 2009. |
International Search Report and Written Opinion International Patent Application No. PCT/US2007/008621, dated Nov. 13, 2007. |
J2EE Application Overview, publicly available on http://www.orionserver.com/docs/j2eeoverview.html since Mar. 1, 2001. Retrieved on Oct. 26, 2007. |
Justo, R., et al., Phrase Classes in Two-Level Language Models for ASR, Springer-Verlag London Limited, 2008, 11 pages. |
Kimura, K., et al., 1992, Association-based natural language processing with neural networks, in proceedings of the 7th annual meeting of the Association for Computational Linguistics, pp. 223-231. |
Lewis, J., et al., SoftBridge: An Architecture for Building IP-Based Bridges Over the Digital Divide, Proceedings of the South African Telecommunications Networks & Applications Conference (SATNAC 2002), Drakensberg, South Africa, 5 pages. |
Li, X., et al., Time based language models, CIKM '03 Proceedings of the twelfth international conference on Information and knowledge management, pp. 469-475, 2003. |
Office Action in Canadian Application No. 2648617 dated Feb. 27, 2014. |
Ries, K., Segmenting conversations by topic, initiative, and style, Springer-Verlag Berlin Heidelberg, 2002, 16 pages. |
Schalkwyk, J., et al., Speech Recognition with Dynamic Grammars Using Finite-State Transducers, Eurospeech 2003-Geneva, pp. 1969-1972. |
Shriberg, E., et al., Prosody-based automatic segmentation of speech into sentences and topics, 2000, 31 pages. |
Soltau, H., and G. Saon, Dynamic Network Decoding Revisited, Automatic Speech Recognition and Understanding, 2009, IEEE Workshop, pp. 276-281. |
Stent, A., et al., Geo-Centric Language Models for Local Business Voice Search, Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL, pp. 386-396, 2009. |
Thomae, M., et al., Hierarchical Language Models for One-Stage Speech Interpretation, in Interspeech, 2005, pp. 3425-3428. |
David H. Kemsley, et al., A Survey of Neural Network Research and Fielded Applications, 1992, in International Journal of Neural Networks: Research and Applications, vol. 2, No. 2/3/4, pp. 123-133. Accessed on Oct. 25, 2007 at http://citeseer.ist.psu.edu/cache/papers/cs/25638/ftp:zSzzSzaxon.cs.byu.eduzSzpubzSzpaperszSzkemsley_92.pdf/kemsley92survey.pdf. |
Transl8it! translation engine, publicly available on http://www.transl8it.com since May 30, 2002. Retrieved on Oct. 26, 2007. |
vBulletin Community Forum, thread posted on Mar. 5, 2004. Page retrieved on Oct. 26, 2007 from http://www.vbulletin.com/forum/showthread.php?t=96976. |
“International Search Report” and “Written Opinion of the International Search Authority” (Korean Intellectual Property Office) in Yap, Inc. International Patent Application Serial No. PCT/US2007/008621 corresponding to current U.S. patent application, dated Nov. 13, 2007, 8 pages. |
Fielding, et al., Hypertext Transfer Protocol—HTTP/1.1, RFC 2616, Network Working Group, sections 7, 9.5, 14.30, 12 pages total. |
Marshall, James, HTTP Made Really Easy, Aug. 15, 1997, retrieved from http://www.jmarshall.com/easy/http/ on Jul. 25, 2008, 15 pages total. |
Knudsen, Jonathan, Session Handling in MIDP, Jan. 2002, retrieved from http://developers.sun.com/mobility/midp/articles/sessions/ on Jul. 25, 2008, 7 pages total. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), submitted by Applicant on Jul. 21, 2009. |
Huang, J., Zweig, G., Padmanabhan, M., 2002, Extracting caller information from voicemail, Springer-Verlag Berlin Heidelberg, 11 pages. |
Information Disclosure Statement (IDS) Letter Regarding Common Patent Application(s), dated Jul. 21, 2011. |
Number | Date | Country | |
---|---|---|---|
20090083032 A1 | Mar 2009 | US |
Number | Date | Country | |
---|---|---|---|
60972851 | Sep 2007 | US | |
60972853 | Sep 2007 | US | |
60972854 | Sep 2007 | US | |
60972936 | Sep 2007 | US | |
60972943 | Sep 2007 | US | |
60972944 | Sep 2007 | US |