The subject matter described herein relates to systems and methods for automatically recognizing and acting on the content of voicemail messages.
Most modern cellular telephone carriers offer a voicemail service. Conventional voicemail, however, is cumbersome to use, especially when a user needs to extract and act on information contained in a voicemail message, because the user must perform numerous steps to deal with the message. For example, if a caller leaves a voicemail message asking the user to email the caller a copy of a presentation that the user gave the day before, the user must first listen to the voicemail message; open an email application; locate the email address of the caller; attach the presentation to the email; and then send the email to the caller. This process is inefficient and time consuming. Accordingly, it would be desirable to have an automated system perform as many of these tasks as possible.
According to some implementations, there is provided a method of operating a digital assistant. The method occurs at a device having one or more processors and memory, such as at a mobile telephone. A recorded voice message is provided from a caller to a user. For example, a caller leaves a voicemail message for the user of the mobile device. In some embodiments, the recorded voicemail is first converted from speech to text.
A proposed action to be performed by the user is then extracted from the voice message. For example, the voicemail may state “this is John, call me at 650.987.0987 at 9 am tomorrow.” Here, the action is to call John.
At least one action parameter for undertaking the action is determined. Using the same example, the at least one action parameter includes (i) the telephone number 650.987.0987 and (ii) 9 am the following morning. The at least one action parameter may be extracted from the voicemail message or it may be determined by other means. For example, the caller's telephone number may be obtained from caller identification, or by looking up the caller's telephone number in the user's contact book.
Finally, the user is presented with a prompt to facilitate undertaking the action using the at least one action parameter. For example, the user may be given the option to set a reminder to call John the following morning at 9 am.
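By way of illustration only, the following minimal Python sketch shows this flow; the helper names (extract_proposed_action, build_prompt) and the single regular expression are illustrative assumptions rather than the digital assistant's actual implementation, which would use far richer natural-language understanding.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedAction:
    verb: str          # e.g., "call"
    parameters: dict   # e.g., {"phone": "650.987.0987", "time": "9 am tomorrow"}

def extract_proposed_action(transcript: str) -> Optional[ProposedAction]:
    """Tiny illustration: detect a 'call me at <number> [at <time>]' request."""
    match = re.search(r"call me at ([\d.\-]+)(?: at (.+?))?[.!]?$", transcript, re.IGNORECASE)
    if match:
        return ProposedAction(verb="call",
                              parameters={"phone": match.group(1),
                                          "time": match.group(2) or "unspecified"})
    return None

def build_prompt(action: ProposedAction) -> str:
    """Build the prompt presented to the user, e.g., an offer to set a reminder."""
    if action.verb == "call":
        return (f"Set a reminder to call {action.parameters['phone']} "
                f"at {action.parameters['time']}?")
    return "No supported action found."

transcript = "this is John, call me at 650.987.0987 at 9 am tomorrow"
action = extract_proposed_action(transcript)
if action:
    print(build_prompt(action))  # -> Set a reminder to call 650.987.0987 at 9 am tomorrow?
```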
Some implementations provide a non-transitory computer-readable storage medium storing one or more programs for execution by one or more processors of a device. The one or more programs comprise instructions for performing the methods described herein.
Finally, some implementations provide a mobile or cellular telephone that includes a processor and memory coupled to the processor. The memory includes instructions for performing the methods described herein.
In some implementations, many or all of these steps occur automatically without user intervention.
The automatic processing of incoming voicemail messages realizes one or more of the following potential advantages. First, it reduces or eliminates the need for the user to remember, write down, or type in contact details left by callers in voicemail messages. Second, it provides a useful and convenient mechanism for users to process and respond to incoming voicemail messages. Accordingly, automatic processing of incoming voicemail messages saves the user time and effort, and greatly improves the efficiency of responding to or acting on information contained in received voicemail messages.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
In some implementations, a communication device is a physical device implemented as hardware and configured to operate a software program. In some alternative implementations, a communication device is a virtual device that is implemented as a software application or module that is configured to establish a connection with another communication device. As examples, the communication devices 110 and 120 can be, or can be implemented as software in, a mobile phone, personal digital assistant, portable computer, desktop computer, or other electronic communication device. Examples of communication channels 100 include Internet Protocol-based networks, cellular telephone networks, satellite networks, and other communication networks. Note that one or more other communication devices, in addition to the communication devices 110 and 120, can be connected over the communication channel 100.
The communication devices 110 and 120 can communicate in real-time or near real-time over the communication channel 100. For example, a real-time communication session, such as a phone conversation, can be conducted using two or more communication devices. In addition, a communication session can be established using voice-over Internet Protocol full duplex communications. The communication devices 110 and 120 can be implemented to permit full duplex conversations that include any electronically assisted communication mechanism or channel, e.g., over a mobile broadband network. The bidirectional nature of the communication devices 110 and 120 can enable two or more users to simultaneously exchange voice information during a communication session, e.g., a phone call. Voice information corresponds to the voice communication, e.g., conversation, between the parties to the communication session.
A communication device can include a communication module. In
Each of the components 230, 240, 250, and 300 can be interconnected, for example, using a data communication link 260. The communication module 200 can receive input 210 and produce output 220. The received input 210 can be audio data, e.g., in the form of digital or analog audio signals. For example, the communication module 200 can receive voice information input 210 encoded in a digital audio signal. The output 220 can include audio data, visual data, textual data, or any combination thereof. The output 220 can be displayed graphically on a display screen or in a user interface provided by a software application running on the communication device. For example, the communication module 200 can generate textual data output corresponding to the received digital audio signals and can display the textual data output in a display, e.g., a touch screen display of a smart phone. In some implementations, one or more of the communication module components 230, 240, 250, and 300 are located external to the communication device in which the communication module is located. The recording unit 230 records audio data. The audio data can include both received and transmitted voice information.
The recording unit 230 can be implemented to record a communication session between two or more communication devices. For example, the recording unit 230 can record a portion, or the entirety, of a phone conversation between two users communicating with mobile phones.
The recording unit 230 can be configured, e.g., by setting user preferences, to record voice information originating from one or more participants, e.g., callers using different communication devices, of a communication session. In some implementations, user preferences are used to select one or more particular participants for which voice information is recorded by the recording unit 230.
As an example, the recording unit 230 can be configured to record only one side of the phone conversation. The recording unit 230 can be configured to capture voice information spoken only by a first caller on a far end of a phone call and not by a second caller on a near end. The first caller on the far end is a caller using a first communication device that is exchanging voice information during a communication session with a second caller using a second communication device. The second caller on the near end is a caller using the second communication device in which the recording unit 230 is located. Alternatively, the recording unit 230 can capture voice information spoken only by the second caller on the near end.
In some implementations, the recording unit 230 automatically records the communication session. In some implementations, the recording unit 230 records the communication session in response to user input. For example, the recording unit 230 can continuously record one or more sides of a conversation in response to a user pressing a hardware button, a virtual button, or a soft record button, or issuing a voice command.
In these and other implementations, the communication module 200 can provide a notification to each participant indicating which voice information is being recorded. The notification can be a visual notification displayed in a display of the communication module of each participant, or an audio notification played by the communication module of each participant. In
In some implementations, the recording unit 230 determines an identifier that indicates a date and time, e.g., a time stamp, associated with the recorded audio data. In addition, the recording unit 230 can associate the recorded audio data with one or more other identifiers. Examples of identifiers include an identifier for a particular communication session, a particular communication device, or a particular user of a communication device, from which the recorded audio data was derived. The identifiers can be used to identify particular recorded audio data for processing.
The storage unit 240 can be implemented to store data, e.g., the recorded audio data. The storage unit 240 can receive audio data captured by the recording unit 230. For example, the storage unit 240 can store audio data and information associated with the audio data, e.g., the identifiers described above. The storage unit 240 can be implemented as a local storage device or local memory cache. In some implementations, the storage unit 240 is located external to both the communication module 200 and the communication device 120. For example, the storage unit 240 can reside in a server, e.g., a network device, located remotely from the communication device 120. Audio data stored at the storage unit 240 can be played back. Additionally, audio data stored at the storage unit 240 can be transcoded into textual data and can be provided as output 220.
The recognizer unit 250 can be implemented to automatically identify terms, e.g., identify without further user intervention one or more words, in the audio data received from a remote source, such as the communication device 110. In some implementations, the recognizer unit 250 uses conventional techniques and one or more language models to identify key words in the audio data based on linguistics, e.g., part of speech or subject-verb-object word order (e.g., identifying declarative sentences). The recognizer unit 250 provides the key words as input to an application or service external to the communication module. As an example, the following conversation may occur:
The recognizer unit 250 can identify the key words “dinner”, “eight”, and “sushi”. Furthermore, the recognizer unit 250 can work with a location based service to determine a geographical location of one or more of the communication devices being used by the users in the communication session. The recognizer unit 250 can determine, based on the detected key words, that a restaurant reservation service (e.g., a web application that makes restaurant reservations) may be useful for the user.
In some implementations, the recognizer unit 250 sends the input to a suggestion service external to the communication device that makes this type of determination. In some implementations, pattern matching can be used to identify the terms. An example pattern for a term representing a city, state, and zip code is “City, State NNNNN”, where N is a digit. An example pattern for a term representing an address is “X Y Drive”, where X is a number and Y is one or more words associated with the name of the drive. An example pattern for a term representing a phone number is “NNN NNN NNNN”, where N is a digit. Other patterns are possible.
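A hedged sketch of this kind of pattern matching is shown below; the regular expressions only approximate the three example patterns and are not the recognizer unit 250's actual patterns.

```python
import re
from typing import List, Tuple

# Illustrative approximations of the patterns described above; a real
# recognizer would likely use richer grammars or language models.
PATTERNS = {
    "city_state_zip": re.compile(r"\b[A-Z][a-z]+, [A-Z]{2} \d{5}\b"),  # "City, State NNNNN"
    "street_address": re.compile(r"\b\d+ (?:[A-Z][a-z]+ )+Drive\b"),   # "X Y Drive"
    "phone_number":   re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),    # "NNN NNN NNNN"
}

def match_terms(text: str) -> List[Tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the transcript."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits

print(match_terms("Send it to 1 Infinite Drive, Cupertino, CA 95014 or call 650 987 0987."))
```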
In some implementations, the communication module 200 provides a notification to the user that the particular key words were detected and provides suggestions on how to act on the key words. For example, the communication module can provide visual feedback in the screen of the communication device that asks, “Would you like to make dinner reservations at 8:00 pm at a sushi restaurant?”. In some implementations, the communication module 200 automatically provides, e.g., without further user intervention, the key words to an application or service external to the communication module. In the example, the communication module may work with a restaurant reservation service to generate a request for the reservation. In particular, the communication module may initiate, at the restaurant reservation service, a request to search for sushi restaurants with reservations available at 8:00 pm in a predetermined proximity to the geographical location (e.g., within 10 miles).
Other implementations are possible. For example, the recognizer unit 250 can send the input to applications or services local or external to the communication device, e.g., email applications, web browsers, and work with the local applications or services to provide a suggested operation or automatically initiate a subsequent action, e.g., generate a draft email, request a particular web page.
In some implementations, the recognizer unit 250 can identify the terms as being commands, e.g., voice commands, or target information, e.g., information upon which a command operates or performs an action. Upon detecting a command and target information, the recognizer unit 250 can provide the command and target information as output 220 (e.g., audible, visual, textual output) indicating to the user of the communication device that the command and target information were detected, and request instructions from the user whether to store the command and target information in an information log.
The commands and target information can be detected by the recognizer unit 250 using various techniques. In some implementations, the recognizer unit 250 identifies commands by comparing terms in the audio data to a collection of terms specified as being commands, e.g., in a dictionary of commands. In some implementations, the recognizer unit 250 uses conventional techniques and one or more language models to identify commands and target information based on linguistics, e.g., part of speech, subject-verb-object word order (e.g., identifying declarative sentences). In these and other implementations, pattern matching can also be used to identify commands and target information. For example, a predetermined number of tokens, e.g., characters or words that follow a detected command can be identified as being target information.
As an example, the recognizer unit 250 can be configured to identify, in the audio data received from the remote source, the term “phone number” as being a command and the next ten numerals following the words “phone number” as being target information. Upon identifying the term “phone number,” the recognizer unit 250 can be implemented to produce any of audible, visual, and textual output 220, indicating that the ten numerals associated with the words “phone number” have been recognized. The audio data from the remote source can be monitored by the recognizer unit 250 during any portion of the communication session. For example, the recognizer unit 250 can be implemented to continuously monitor spoken voice information transmitted from one or more communication devices during a phone conversation.
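The following sketch illustrates one way a command dictionary and a fixed token window could yield a command and its target information; the dictionary contents and the ten-digit collection rule follow the example above, while the function and variable names are illustrative assumptions.

```python
# Illustrative sketch, not the actual recognizer: a command dictionary plus a
# fixed token window used to pick up target information after a command.
COMMAND_DICTIONARY = {
    "phone number": 10,   # take the next ten digits as target information
    "email": 1,           # take the next token as target information
}

def detect_command_and_target(tokens):
    """Scan a token stream for a known command and collect its target tokens."""
    lowered = [t.lower() for t in tokens]
    for command, window in COMMAND_DICTIONARY.items():
        cmd_tokens = command.split()
        for i in range(len(lowered) - len(cmd_tokens) + 1):
            if lowered[i:i + len(cmd_tokens)] == cmd_tokens:
                rest = tokens[i + len(cmd_tokens):]
                if command == "phone number":
                    digits = [t for t in rest if t.isdigit()]
                    target = "".join(digits)[:window]
                else:
                    target = rest[0] if rest else ""
                return command, target
    return None, None

tokens = "you can reach me my phone number is 6 5 0 9 8 7 0 9 8 7 thanks".split()
print(detect_command_and_target(tokens))  # -> ('phone number', '6509870987')
```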
In some implementations, the recognizer unit 250 can detect key words in the audio data and send portions of the audio data associated with the detected key words to a recognizer service external to the communication device, e.g., a recognizer service located on a server device. The key words can be specified, for example, in a dictionary of key words. The portion of the audio data can be defined, for example, based on an amount of time before the key word occurs in the corresponding audio and an amount of time after the key word occurs, e.g., a portion of audio data that corresponds to the audio from seconds before the key word occurs to seconds after the key word occurs. The recognizer service can determine commands and target information and provide the commands and target information to the recognizer unit 250.
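A minimal sketch of clipping the portion of audio around a detected key word and handing it to an external recognizer service appears below; the five-second margins and the send_to_recognizer_service stub are placeholders, since the specification leaves both the window sizes and the service interface unspecified.

```python
from dataclasses import dataclass

@dataclass
class KeywordHit:
    keyword: str
    offset_s: float   # time at which the key word occurs in the recorded audio

def clip_window(audio_samples, sample_rate, hit, before_s=5.0, after_s=5.0):
    """Cut the portion of audio around a detected key word.

    The 5-second margins are placeholders; the specification only says
    'seconds before' and 'seconds after' without giving values.
    """
    start = max(0, int((hit.offset_s - before_s) * sample_rate))
    end = min(len(audio_samples), int((hit.offset_s + after_s) * sample_rate))
    return audio_samples[start:end]

def send_to_recognizer_service(clip):
    """Stub for the external recognizer service; returns (command, target)."""
    return ("phone number", "6509870987")

# Example: 60 seconds of dummy 8 kHz audio with a key word detected at t=12.3 s.
sample_rate = 8000
audio = [0] * (60 * sample_rate)
hit = KeywordHit(keyword="phone number", offset_s=12.3)
command, target = send_to_recognizer_service(clip_window(audio, sample_rate, hit))
print(command, target)
```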
In some implementations, the recognizer unit 250 can generate an information log based on the identified terms, e.g., identified commands, target information, and key words. In some implementations, the information log is a list or queue of information items (e.g., commands and target information) recognized during a communication session. When an information item is detected, the item of information can be added to the information log.
Additional information also can be associated with the item of information, such as a time stamp and/or an indication of the item's source, e.g., an identifier of a user or a communication device. The information log can be displayed, e.g., in a user interface of a communication device.
In some implementations, the communication module 200 provides a notification to a user of the communication device, e.g., a tone or haptic feedback, when a new information item is added to the information log. Once added to the information log, an item of information can be acted on. For example, a phone number recognized during a communication session and added to the information log can be dialed during the communication session, such as to initiate a three-way call. Also, an e-mail address can be accessed to generate a message or message template during the communication session.
The information log also can be accessed after the corresponding communication session ends. For example, a recognized telephone number can be used to initiate a new communication session or an item of contact information can be used to generate a new contact or update an existing contact. One or more items of information included in the information log also can be altered, including through editing and deleting. For example, the spelling of a recognized name can be corrected.
The information log can be stored to permit subsequent retrieval and processing. For example, a link to the information log corresponding to a communication session can be included in a call history list or a file structure, such as a folder or directory. In some implementations, an audio recording of the communication session can be accessed in conjunction with the information log, such as for verification of one or more recognized information items. In addition, a time stamp associated with an information item can be used to access the corresponding portion of the audio recording, permitting the information item to be compared with the corresponding recorded audio.
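One possible in-memory representation of such an information log is sketched below; the class and field names are illustrative assumptions rather than a prescribed data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationItem:
    kind: str                  # e.g., "phone_number", "email_address"
    value: str
    timestamp_s: float         # offset into the recorded audio (time stamp)
    source: str                # identifier of the user or device it came from

@dataclass
class InformationLog:
    session_id: str
    items: List[InformationItem] = field(default_factory=list)

    def add(self, item: InformationItem) -> None:
        """Append a recognized item; a real device would also notify the user."""
        self.items.append(item)

    def audio_offset(self, item: InformationItem) -> float:
        """Time stamp used to jump to the matching portion of the recording."""
        return item.timestamp_s

    def edit(self, index: int, new_value: str) -> None:
        """Correct a recognized item, e.g., fix the spelling of a name."""
        self.items[index].value = new_value

log = InformationLog(session_id="call-2012-05-14-001")
log.add(InformationItem("phone_number", "650 987 0987", timestamp_s=42.0, source="far-end caller"))
log.edit(0, "650-987-0987")
print(log.items)
```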
In some implementations, the recognizer unit 250 automatically stores the identified terms at the storage unit 240. In some implementations, the recognizer unit 250 stores the recognized words and phrases when a user responds to the audible, visual or textual output 220. In some implementations, the user responds to the output 220 with a response, such as by issuing a voice command or by pressing a hardware button, a virtual button, or a soft button to store the recognized words. Alternatively, the user can respond with a gesture, such as by holding the communication device 120 and making a pointing gesture, or with motion, such as by shaking the communication device 120.
The recognizer unit 250 can be implemented to receive audio data as the audio data is input 210 into the communication module 200. The recognizer unit 250 also can receive audio data captured by the recording unit 230. Additionally, the recognizer unit 250 can receive audio data stored at the storage unit 240. In some implementations, the recognizer unit 250 uses a hidden Markov model for speech recognition.
The data communication link 260 can be implemented as a system bus or a signal line. Audio data and information associated with the audio data can be transmitted on the data communication link 260. The voice command controller 300 can be implemented to receive one or more commands. The one or more commands can be received from a user operating the communication device in which the voice command controller 300 is located.
In some implementations, the voice command controller 300 differentiates between voice information associated with a phone conversation and one or more voice commands spoken into a microphone operatively coupled to a communication device in which the voice command controller 300 is installed. The voice command controller can be implemented to recognize voice commands spoken by a caller on the near end, e.g., the originating source, from the real-time voice information transmitted during a communication session.
The voice command controller 300 also can be implemented to ignore voice commands spoken by a caller on the far end, e.g., the secondary source, of the phone conversation. The voice command controller 300 includes a detection device 310. The detection device 310 can be implemented to parse one or more voice commands included in audio data received from the local source (i.e., the device user) during the communication session. The one or more voice commands can be received during a connected and active communication session. The voice command controller 300 can receive the one or more voice commands without causing the communication device to switch from a conversational mode to a command mode. In some implementations, the detection device 310 filters out ambient noise during the communication session.
The detection device 310 can be programmed to recognize pre-defined key words and phrases associated with the one or more voice commands. The pre-defined key words and phrases can include words and/or phrases defined by either or both of the manufacturer and one or more device users. For example, the pre-defined key word “phone” can be programmed such that when the detection device 310 detects the key word “phone,” the detection device 310 recognizes that the key word is associated with a command and informs the voice command controller 300 that one or more actions corresponding to the command should be taken. Actions performed by the voice command controller 300 can include generating audible, visual or textual data corresponding to the received audio data. For example, the voice command controller 300 can output textual data corresponding to the ten digits in the audio data that follow the key word “phone,” in a similar manner as described above with respect to the recognizer unit 250.
The detection device 310 can include a detection filter that recognizes the differences between a voice at the near end of the phone conversation, the local source, and a voice at a far end, a remote source. For example, the detection filter can include speech recognition software based on a hidden Markov model that can distinguish between one or more voices during a communication session. In some implementations, audio signals are detected without the detection filter. For example, audio signals received from the near end can be received through a microphone operatively coupled to the communication device and can be routed to the communication module 200.
In some implementations, a dictation recognition system (e.g., a parser) included in the detection device 310 interprets text from a phone conversation. The dictation recognition system can include a text post-processor, or data detector that is configured to parse through the generated text to obtain useful textual information, e.g., target information. Examples of useful textual information include phone numbers, email addresses, dates and home addresses. In some implementations, the useful textual information is highlighted, or otherwise enhanced, such that a user can perform one or more actions on the textual information. For example, a user can click on a phone number that was recognized and highlighted by a data detector, to call the party associated with the phone number.
In some implementations, the detection device 310 can detect and extract useful information from a live or automated conversation and can store the information in an information log. For example, information such as a physical address, an email address, a phone number, a date, and a uniform resource locator can be detected and inserted into the information log. The information log can be implemented as a list or queue of information items recognized during a communication session. For example, the information log can be configured to include information items associated with a list of pre-defined or programmed words and phrases that are detected and identified by the detection device 310 in the course of a communication session. When an item of information is detected, e.g. a phone number, the item of information can be inserted into the information log. Additional information also can be associated with the item of information, such as a time stamp and/or an indication of the item's source. The information log can be displayed, e.g., in a user interface display of a device, such as an interactive device.
The device also can be configured to output a signal, such as a tone or haptic feedback, when a new information item is added to the information log. Each information item can also be associated with an identifier that identifies a particular user or communication device from which the information item was derived. Once added to the information log, an item of information can be acted on, such as through a voice command or tactile input. For example, a phone number recognized during a communication session and added to the information log can be dialed during the communication session, such as to initiate a three-way call. Also, an e-mail address can be accessed to generate a message or message template during the communication session.
The information log also can be accessed after the corresponding communication session ends. For example, a recognized telephone number can be used to initiate a new communication session or an item of contact information can be used to generate a new contact or update an existing contact. One or more items of information included in the information log also can be altered, including through editing and deleting. For example, the spelling of a recognized name can be corrected. A user can also associate particular commands with one or more items of target information.
Further, the information log can be stored to permit subsequent retrieval and processing. For example, a link to the information log corresponding to a communication session can be included in a call history list or a file structure, such as a folder or directory.
In some implementations, an audio recording of the communication session is accessed in conjunction with the information log, such as for verification of one or more recognized information items. In addition, a time stamp associated with an information item can be used to access the corresponding portion of the audio recording, permitting the information item to be compared with the corresponding recorded audio.
The detection device 310 can be implemented to process the one or more voice commands concurrent with the phone conversation. The one or more voice commands also can be recorded and time stamped by the detection device 310 for later execution. The recorded time stamped voice commands can be stored and displayed in a command list in, e.g., a user interface display. The detection device 310 also can record and time stamp the detected key words associated with the one or more voice commands. The recorded time stamped key words further can be stored and displayed in an information log. In some implementations, the information log and the command list can be integrated.
The voice command controller 300 can receive input from an input unit 320. The input unit 320 can be implemented to provide one or more types of input to the voice command controller 300. The input received from the input unit 320 can include one or more of: voice input 322; tactile input 324; gesture input 326; and motion input 328. The voice input 322 can include one or more voice commands directing the voice command controller 300 to perform one or more actions corresponding to the one or more voice commands.
For example, the voice input 322 can include a command to the voice command controller 300 to prepare an electronic message for dissemination to a particular person. Upon receipt of the command, the voice command controller 300 can be implemented to generate a shell electronic message to a particular contact named as a part of the command. For example, in response to a command to prepare an email for “Greg,” the voice command controller 300 can generate an email addressed to Greg.
The voice input 322 also can include a command to initiate dictation, e.g., to generate an information log that is not associated with a particular communication session. For example, the voice command controller 300 can be implemented to transcribe and record Greg's email address as Greg's email address is dictated into the phone. The voice command controller 300 also can be implemented to read and recite stored information. For example, during a phone call with “Bob,” the near end user can provide voice input 322 commanding the voice command controller 300 to “recite Greg's phone number”; in response to receiving the voice input 322, the voice command controller 300 can produce output 330 reciting Greg's phone number that is audible to Bob, the near end user, or both.
The tactile input 324, gesture input 326 and motion input 328 can be implemented as physical inputs. The physical inputs can be used in conjunction with the voice input 322 to differentiate the one or more voice commands, e.g., commands, from the real-time voice information, including target information. The physical inputs can be received before, concurrently with, or after the voice input 322, i.e., one or more voice commands, is received. For example, as a user speaks one or more voice commands into the communication device, the user also can press a button located on the communication device to indicate that the spoken words are distinct from regular voice information associated with the phone conversation and should be treated as a command.
Tactile input 324, such as pressing a hardware, virtual or soft button, also can be used to determine whether one or more voice commands should be treated as a string of commands, or distinguished as separate individual commands. Gesture input 326, such as gesturing with one or more fingers while holding the communication device in the gesturing hand, also can be used to indicate that spoken words should be treated as a command, in addition to determining the difference between a string of commands and separate individual commands.
Additionally, motion input 328, such as moving or shaking the communication device, also can be used to indicate that spoken words should be treated as a command, as well as determining the difference between a string of commands and separate individual commands. In some implementations, the voice input 322, as well as the physical inputs, can cause one or more processors at the voice command controller 300 to generate a new file corresponding to the received input. In some implementations, the physical inputs can be the sole input instructing the voice command controller 300 to perform the one or more actions corresponding to the received input.
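The following sketch illustrates how a physical input could act as a flag that distinguishes command speech from ordinary conversation; the event names and the simple boolean gate are assumptions made for illustration only.

```python
# Illustrative sketch: a push-to-command flag raised by tactile, gesture, or
# motion input tells the controller to treat concurrent speech as a command.
class VoiceCommandGate:
    def __init__(self):
        self.command_mode = False
        self.commands = []

    def on_physical_input(self, kind: str, pressed: bool) -> None:
        """kind is 'tactile', 'gesture', or 'motion'; pressed marks command speech."""
        self.command_mode = pressed

    def on_speech(self, utterance: str) -> None:
        """Route speech either to the command list or to the ordinary conversation."""
        if self.command_mode:
            self.commands.append(utterance)
        # otherwise the utterance is ordinary voice information and passes through

gate = VoiceCommandGate()
gate.on_speech("let's have dinner at eight")           # conversation, ignored
gate.on_physical_input("tactile", pressed=True)        # user presses a button
gate.on_speech("phone, remember that street address")  # treated as a command
gate.on_physical_input("tactile", pressed=False)
print(gate.commands)
```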
The voice command controller 300 can produce output at an output unit 330. The output unit 330 can be implemented to provide one or more types of output from the voice command controller 300. The output from the output unit 330 can include textual data corresponding to the received audio data. The textual data can be displayed on a display screen of, e.g., the communication device 120 depicted in
Output from the voice command controller 300 can be stored in a storage unit 340. In some implementations, the storage unit 340 can be integrated physically and/or logically with the storage unit 240. In other implementations, the storage unit 340 can be both physically and logically separate from the storage unit 240. The storage unit 340 can be implemented to store information associated with the one or more actions taken by the voice command controller 300. For example, in response to receiving voice input 322 from a user directing the voice command controller 300 to “remember the phone number” recited by the caller on the far end, the voice command controller 300 can produce textual data corresponding to the phone number at the output unit 330 and also can store the phone number at the storage unit 340. The storage unit 340 can be implemented as the storage unit 240 in the communication module 200 depicted in
Voice information, including one or more voice commands, can be received during a real-time full duplex phone conversation or from a voicemail left for the user by a caller (as described above in relation to
In some implementations, the one or more voice commands can be received after the communication session has ended. For example, at the conclusion of a communication session, a user can instruct the communication device to take an action based on information received during the communication session. In some implementations, the one or more voice commands can be accompanied by tactile, gesture or motion input. The tactile, gesture and motion input can be associated with the one or more voice commands and can be used to differentiate the one or more voice commands from other portions of the phone conversation. The accompanying input also can be received by the communication device during an active communication session or after the communication session has ended.
Voice information received during the real-time full duplex phone conversation (or voicemail) can be recorded (410). The voice information can be received by the communication device. The voice information can be encoded in digital audio data. The recording can occur automatically, or in response to input initiating the recording. In some implementations, the voice information can be continuously monitored during a real-time communication session, including a bidirectional communication session. The voice information also can be recorded continuously for the duration of the real-time full duplex phone conversation.
A source of the one or more voice commands (or keywords) can be determined (415). A speech recognition algorithm, such as a hidden Markov model, implemented in a detection device can filter voice information in audio data to determine the source of the one or more voice commands. In some implementations, the source can be the caller operating the communication device, e.g., the originating source.
The one or more voice commands (or keywords) can be parsed from the audio data received from the source (420). Two or more users operating communication devices can participate in the communication session. For example, the communication session can include a telephone conversation between two or more users operating smart phones. Audio data can include any and all voice information exchanged during the telephone conversation. The one or more voice commands can be detected by a detection module in the communication device.
In some implementations, the detection module can be located external from the communication device. The detection module can be implemented to identify the one or more voice commands in the voice information received from the source during the real-time full duplex phone conversation. For example, the detection module can identify key words and phrases associated with the one or more voice commands, such as “phone, remember that street address,” from the remainder of the telephone conversation. The detection module can extract the information associated with the one or more voice commands and can manage the extracted information differently than the received audio data.
One or more actions based on the one or more voice commands (or keywords) can be performed (425). The one or more voice commands can cause a processing module (see, e.g.,
Information associated with the one or more actions can be stored (430). The information associated with the one or more actions can be stored in a storage unit located within or outside the communication device. For example, the storage unit can be implemented as a local storage device or local memory cache within the communication device. In some implementations, the information can be stored in a particular location of the storage unit based on the one or more commands. For example, in response to a voice command directing the communication device to “store the street address in my contacts folder,” the processing module can store the audio data corresponding to the street address in the contacts folder portion of the storage unit. In some implementations, physical commands can be used to direct the communication device to perform one or more actions. For example, a user can interact with, e.g., touch or press, a command button in a communication device user interface to store the street address in the contacts folder.
Information associated with the one or more actions can be displayed (435). For example, the generated textual data corresponding to the voice information recorded during the real-time full duplex phone conversation can be displayed in, e.g., a user interface of a data processing apparatus (e.g., a smart phone, an interactive device, or other electronic devices with display components). The information associated with the one or more actions also can include the corresponding voice commands and key words. In some implementations, the information can be presented in an information log.
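A skeleton of this record–determine–parse–act–store–display flow is sketched below; every function body is a placeholder standing in for the processing described above, not working recognition logic.

```python
# Skeleton of the flow described above (record -> determine source -> parse ->
# act -> store -> display). All function bodies are placeholders.
def record_voice_information(session):                       # step 410
    return session["audio"]

def determine_command_source(audio):                          # step 415
    return "near-end"  # e.g., via a speaker-dependent recognition model

def parse_voice_commands(audio, source):                      # step 420
    return [("remember", "street address: 1 Infinite Loop")]

def perform_actions(commands):                                # step 425
    return [{"command": c, "result": t} for c, t in commands]

def store_action_information(actions, storage):               # step 430
    storage.extend(actions)

def display_action_information(actions):                      # step 435
    for a in actions:
        print(f"{a['command']}: {a['result']}")

storage = []
session = {"audio": b"..."}
audio = record_voice_information(session)
actions = perform_actions(parse_voice_commands(audio, determine_command_source(audio)))
store_action_information(actions, storage)
display_action_information(actions)
```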
Sensors, devices, and subsystems can be coupled to the peripherals interface 506 to facilitate multiple functionalities. For example, a motion sensor 510, a light sensor 512, and a proximity sensor 514 can be coupled to the peripherals interface 506 to facilitate orientation, lighting, and proximity functions. A location processor 515 (e.g., GPS receiver) can be connected to the peripherals interface 506 to provide geopositioning. A magnetic compass integrated circuit 516 can also be connected to the peripherals interface 506 to provide orientation (e.g., to determine the direction of due North).
A camera subsystem 520 and an optical sensor 522, e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips.
Communication functions can be facilitated through one or more wireless communication subsystems 524, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 524 can depend on the communication network(s) over which the interactive device 500 is intended to operate. For example, an interactive device 500 can include communication subsystems 524 designed to operate over a wireless network, such as a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth™ network, or a wired network. In particular, the wireless communication subsystems 524 may include hosting protocols such that the device 500 may be configured as a base station for other wireless devices.
An audio subsystem 526 can be coupled to a speaker 528 and a microphone 530 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. The I/O subsystem 540 can include a touch screen controller 542 and/or other input controller(s) 544. The touch-screen controller 542 can be coupled to a touch screen 546. The touch screen 546 and touch screen controller 542 can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 546.
The other input controller(s) 544 can be coupled to other input/control devices 548, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the speaker 528 and/or the microphone 530.
In one implementation, a pressing of the button for a first duration may disengage a lock of the touch screen 546; and a pressing of the button for a second duration that is longer than the first duration may turn power to the interactive device 500 on or off. The user may be able to customize a functionality of one or more of the buttons. The touch screen 546 can, for example, also be used to implement virtual or soft buttons and/or a keyboard.
In some implementations, the interactive device 500 can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, the interactive device 500 can include the functionality of an MP3 player.
The memory interface 502 can be coupled to memory 550. The memory 550 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 550 can store an operating system 552, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system 552 may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system 552 can be a kernel (e.g., UNIX kernel). The memory 550 may also store communication instructions 554 to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers.
The memory 550 may include graphical user interface instructions 556 to facilitate graphic user interface processing; sensor processing instructions 558 to facilitate sensor-related processing and functions; phone instructions 560 to facilitate phone-related processes and functions; electronic messaging instructions 562 to facilitate electronic messaging related processes and functions; web browsing instructions 564 to facilitate web browsing-related processes and functions; media processing instructions 566 to facilitate media processing-related processes and functions; GPS Navigation instructions 568 to facilitate GPS and navigation-related processes and instructions; camera instructions 570 to facilitate camera-related processes and functions; interactive game instructions 572 to facilitate interactive gaming; calibration instructions 574 to facilitate calibrating interactive devices; speech recognition instructions 576 to facilitate recognizing speech; voice command instructions 578 to facilitate detecting and distinguishing voice commands or keywords, as described in reference to
In some implementations, the voicemail messages 579 are stored locally in memory 550, while in other implementations, voicemail pointers are stored in memory 550, where the pointers point to voicemail messages stored on a remote server. In some implementations, the voicemail messages 579 are audio recordings of voicemail messages left for the user of the device by one or more callers. In other implementations, the voicemail messages 579 are text files of audio messages that have been converted from speech to text by the speech recognition instructions 576. In some implementations, the voice commands or keywords detected by the voice command instructions 578 are an action and one or more associated action parameters as described in further detail in relation to
The memory 550 may also store other software instructions (not shown), such as web video instructions to facilitate web video-related processes and functions; and/or web shopping instructions to facilitate web shopping-related processes and functions. In some implementations, the media processing instructions 566 are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. An activation record and International Mobile Equipment Identity (IMEI) or similar hardware identifier can also be stored in memory 550.
Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. The memory 550 can include additional instructions or fewer instructions. Furthermore, various functions of the interactive device 500 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
In some implementations, each call in a call log, e.g., the logs for dialed calls, received calls, and missed calls, that has an associated information log can have a selectable interface element (e.g., a virtual button such as a chevron (“>>”) rendered next to the call). A user selection of the selectable interface element causes the respective information log to be displayed in the communication device 620. As an example, the information log can be displayed in a pop-up window that is superimposed over the call log.
Associations between target information and particular commands can be indicated by aligning a particular command with associated target information. In
As described above, these same systems and methods can be applied to recorded information, like voicemail messages. The systems and methods may be implemented on the device itself, on a remote server, or on a combination of the device and a remote server. Further details of such a system are also described in U.S. Provisional Application Ser. No. 61/646,831, filed May 14, 2012, which is incorporated by reference herein.
In some implementations, the recorded voice messages are then converted (704) from speech to text. In some implementations, this conversion occurs automatically without user intervention as soon as the voicemail message is received at the device, while in other implementations, this occurs at any other suitable time, e.g., when the device has processing cycles to spare.
Thereafter, a number of steps occur automatically without user intervention. First, a proposed action to be performed by the user is extracted (706) from the voice message. In some implementations, the voice command instructions 578 of
Second, at least one action parameter for undertaking the action is determined (708). The action parameters are any parameters that are necessary or optional for performing or undertaking the action. For example, in the above example, the action parameters are the caller's email address and “yesterday's presentation.” Both of these parameters may be required for performing or undertaking the action of sending via email a copy of yesterday's presentation to the caller. In some implementations, the one or more action parameters are also extracted from the voice message.
Finally, the user of the device is presented (710) with a prompt to facilitate undertaking the action using the at least one action parameter. Completing the example above, the voice command instructions 578 of
In those implementations where the action is to call or send a text message to the caller or another person (e.g., “Dave, mom asked that you call her tonight”), a telephone number is required. If the telephone number is provided in the voicemail message, then that number may be used to call the caller/person. If a number is not provided (see example above), then the number (action parameter) is first obtained from the user's contact or address book. For example, if the voicemail is to call “mom” and no number is provided, then a search is performed (712) of the user's contact book for an entry matching “mom,” “mother,” etc. The same method can be performed for any other contact details, such as an email address, physical address, alternative phone numbers, etc. Similarly, any other action parameter may be looked up in the same way. For example, a URL, calendar entry, application identifier, online video, etc., may all be looked up based on another action parameter extracted or inferred from the message (e.g., “look at today's WALL STREET JOURNAL” may initiate a search for a URL associated with “WALL STREET JOURNAL”). For example, if the voice mail says “Check out the XYZ website for Linda's new profile” without specifying the URL of the website, the URL of the XYZ website is looked up and displayed to the user in a user interface element (e.g., a hyperlink) for accessing the website from the voicemail interface.
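A hedged sketch of this kind of contact-book lookup for a missing action parameter is shown below; the contact schema, the alias matching, and the sample data are illustrative assumptions, not the device's actual contact store.

```python
# Illustrative contact lookup for an action parameter missing from the
# voicemail, e.g., finding a phone number for "mom".
CONTACTS = [
    {"name": "Mother", "aliases": ["mom", "mum"], "phone": "703-555-0112",
     "email": "mother@example.com"},
    {"name": "John Goodman", "aliases": [], "phone": "650-987-0987",
     "email": "john@example.com"},
]

def lookup_contact(reference: str):
    """Find a contact whose name or alias matches the name heard in the voicemail."""
    ref = reference.lower()
    for contact in CONTACTS:
        if ref in contact["name"].lower() or ref in (a.lower() for a in contact["aliases"]):
            return contact
    return None

contact = lookup_contact("mom")
if contact:
    print(f"Call {contact['name']} at {contact['phone']}?")
```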
In some implementations, a source telephone number of the caller may be obtained from automatic caller identification, performing a reverse lookup etc. In other implementations, an existing contact is identified in a contact list or book associated with the user based on at least one of a source telephone number from which the recorded voice message originated and a name extracted from the recorded voice message.
In the implementations where the action is to send an email, the at least one action parameter is an email address of the caller, and the prompt presents the user with an option to send an email message to the email address. For example, the voicemail message may say “Dave, please can you email me at mark@newco.com to let me know if you are coming for dinner.” The at least one action parameter is the email address (mark@newco.com) of the caller. If the email address is not given by the caller, e.g., “Dave, please can you email me to let me know if you are coming for dinner,” then the email address is obtained by first determining the name of the caller from caller identification (or any other means), and thereafter looking up the person's name in the user's contact book to locate an address. A prompt is then presented to the user with the option to email the caller. For example, in a voice mail retrieval user interface, the user is requested to confirm that he wants an email prepared to the caller's email address. Upon confirmation by the user, a draft email is presented to the user, where the email includes the email address as a destination address (e.g., pre-populated into the “to” field).
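As a rough illustration, the sketch below prepares such a draft with the destination address pre-populated (and, as noted further below, a transcript can be placed in the body); it uses Python's standard email.message module purely as a stand-in for the device's mail application, and the subject line is an assumption.

```python
from email.message import EmailMessage

def draft_reply_email(to_address: str, transcript: str) -> EmailMessage:
    """Prepare a draft email with the 'to' field pre-populated and, optionally,
    a transcript of the voicemail in the body so the user can see the question."""
    draft = EmailMessage()
    draft["To"] = to_address          # destination taken from the voicemail or contact book
    draft["Subject"] = "Re: your voicemail"   # placeholder subject
    draft.set_content(f"(Voicemail transcript)\n{transcript}\n\n")
    return draft

draft = draft_reply_email("mark@newco.com",
                          "Dave, please can you email me to let me know if you are coming for dinner.")
print(draft["To"])
```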
In some implementations, a prompt presents the user with an option to store an email address extracted from the recorded voice message in the user's contact book (or update an existing contact entry). If the identity of the person leaving the voicemail message can be ascertained from the source phone number, or the voice mail message, the device optionally supplements existing contact information of the contact based on the email address left in the voice mail. In another implementation, the prompt provides the user with the option to store any other contact detail extracted from the voicemail message in the user's contact book. For example, where Mr. Smith calls from his office phone and says “This is Kevin Smith, please call me at my cell 650-888-5889,” and the device finds an existing contact “K. Smith” in the user's contact list with an office phone number different from the number left in the voicemail message, the device offers to store the number “650-888-5889” as an additional contact number for the contact “K. Smith.”
In some implementations, a transcript of the recorded voice message is also included in the body of the message, so that the user can easily see what they need to respond to, e.g., a question from the caller.
In implementations where a caller has left a voicemail about a previous email sent to the user, and where the caller requests the user to write back, the user is presented with the option to prepare a reply email to the previously received incoming email mentioned in the recorded voice message. Upon user confirmation, a draft reply email to the incoming email mentioned in the recorded voice message is presented to the user.
In those implementations where the action is to send the caller certain information in a text message, e.g., an SMS message, the at least one action parameter is a telephone number or email address of the caller. Here, the prompt presents the user with an option to send a text message to the telephone number or email address.
In some implementations where contact details are mentioned in the voicemail and other contact details exist for the same person in the user's contact book, the user may be presented with (i) only one or the other of the contact details, or (ii) the option to respond using one of multiple contact details. For example, if a caller leaves a callback number that is different from the source phone number, the device presents a user interface element to call either the callback number extracted from the voicemail message or the source phone number. In some implementations, the user interface element includes a “CALL” or “SEND” button or icon followed by the person's name or contact details. In another example where a caller has left a callback number that is different from the source phone number logged for the voicemail message, the device presents a user interface element to call the callback number extracted from the voicemail message, rather than the source phone number for the voice mail message. In some implementations, a determination is first made that the source phone number is a masked phone number (e.g., a company's main phone number) when choosing not to display an option to call the source telephone number.
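The following sketch illustrates one way the call options could be assembled, including a masked-number check; the heuristic of treating a company's main line as masked, along with the sample numbers other than the caller's cell number above, is an assumption made for illustration, as the specification does not define how masking is detected.

```python
def callback_options(source_number, callback_number, company_main_numbers=frozenset()):
    """Decide which call options to present for a voicemail.

    The masked-number heuristic (a source number that appears in a list of
    company main lines) is only an illustration.
    """
    options = []
    if callback_number:
        options.append(("CALL", callback_number))
    source_is_masked = source_number in company_main_numbers
    if source_number and source_number != callback_number and not source_is_masked:
        options.append(("CALL", source_number))
    return options

print(callback_options(source_number="650-555-0100",
                       callback_number="650-888-5889",
                       company_main_numbers={"650-555-0100"}))
# -> [('CALL', '650-888-5889')]  (the masked source number is not offered)
```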
In some implementations, the prompt to the user is a speech prompt. In these implementations, the prompt is first generated as text. The prompt is then converted from text to speech, after which it is played to the user.
In the implementations where the action is to visit an online application store, the at least one parameter is a name of an application. Here, the prompt presents the user with an option to visit a page associated with the application at the online application store.
In the implementations where the action is to watch an online video, the at least one action parameter is a name of an online video. The prompt presents the user with an option to watch the online video. In some implementations, the device determines the correct video portal directly from the voice mail message. In some implementations, the device searches for the video mentioned in the message on one or more major or preferred online video portals beforehand, and presents the video from a suitable source that has been identified. In some implementations, the device merely takes the user to a default video portal and enters the search for the user. The user can then browse through the search results that are returned. For example, after the user has viewed the video, the device presents an option for the user to call back the caller to discuss his/her opinions of the video. In some embodiments, the device determines the telephone number associated with the caller based on the contact list of the user, or the source phone number of the voice mail message.
In some implementations, instead of calling the caller, the device also allows the user to contact the caller via a text or email message.
In implementations where the action is to meet at a specified geographic location, the at least one action parameter comprises a name or an address of the geographic location. In some implementations, presenting the prompt further comprises presenting an option to the user to provide navigation to the specified geographic location. In some implementations, presenting the prompt further comprises presenting the user with an option to store the specified geographic location as a reminder or calendar entry. In some implementations, the at least one action parameter also includes a time period and the prompt presents the user with an option to store a reminder or calendar entry for meeting at the specified geographic location at the time period. For example, a reminder for “meet me at Pizza Hut in Cupertino in an hour” is created for the user.
In some implementations, the action is to perform a task at a later time, and the at least one action parameter is an action and a time for the task. Here, the prompt presents the user with an option to store a reminder to perform the task at the time. For example, if the voicemail message says, "This is mom, please call me tonight," the device prepares a reminder to call a number associated with "mom" at 8 pm that night. The time of 8 pm may be chosen as a default for "tonight," or a time at which the user normally calls mom in the evening may be used instead. In another example, if the caller left a message at 4:30 pm saying "meet me at Pizza Hut in Cupertino in an hour" and the user did not look at the device until 6:30 pm, the device offers an option to call the caller immediately, without setting a reminder.
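As an illustrative sketch under the assumptions stated in this paragraph (an 8 pm default for "tonight" and an optional habitual evening-call time), the two decisions might look like this:

```python
# Illustrative sketch: resolving a vague time such as "tonight" to a concrete
# reminder time, and choosing between a reminder and an immediate-call option
# when the stated time has already passed. The 8 pm default and the habitual
# evening-call lookup are assumptions.

from datetime import datetime, time

def resolve_tonight(received_at, habitual_evening_call=None):
    # Prefer a time the user habitually places evening calls; otherwise 8 pm.
    evening = habitual_evening_call or time(20, 0)
    return datetime.combine(received_at.date(), evening)

def reminder_or_call_now(task_time, now):
    return "offer_call_now" if task_time <= now else "set_reminder"


if __name__ == "__main__":
    received = datetime(2013, 3, 14, 16, 30)
    print(resolve_tonight(received))                               # 2013-03-14 20:00:00
    print(reminder_or_call_now(datetime(2013, 3, 14, 17, 30),
                               datetime(2013, 3, 14, 18, 30)))     # offer_call_now
```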
It may also be determined whether the recorded voice message requires immediate attention from the user, based on the action and the at least one action parameter. If it is determined that the recorded voice message requires immediate user attention, the prompt is immediately presented to the user. However, if it is determined that the recorded voice message does not require immediate user attention, the prompt is presented to the user at the time that the user accesses the recorded voice message. For example, if the caller left a message at 4:30 pm saying "meet me at Pizza Hut in Cupertino in an hour" and the device detects shortly thereafter that the user has not yet checked his voicemail, the device proactively presents a prompt for the user to review the voicemail message, and optionally provides the user directions to the location of the meeting.
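A minimal sketch of such an urgency check follows (the two-hour window is an assumed threshold for the sketch, not part of the method):

```python
# Illustrative sketch of the urgency check: a message is treated as requiring
# immediate attention when its action must occur within a short upcoming
# window. The two-hour window is an assumed threshold.

from datetime import datetime, timedelta

def requires_immediate_attention(action_time, now, window=timedelta(hours=2)):
    return action_time is not None and now <= action_time <= now + window

def deliver_prompt(prompt, action_time, now, present_now, present_on_access):
    if requires_immediate_attention(action_time, now):
        present_now(prompt)             # proactively surface the prompt
    else:
        present_on_access(prompt)       # defer until the user opens the voicemail


if __name__ == "__main__":
    now = datetime(2013, 3, 14, 16, 45)
    deliver_prompt("Meet at Pizza Hut in Cupertino at 5:30 pm",
                   datetime(2013, 3, 14, 17, 30), now,
                   present_now=lambda p: print("NOW:", p),
                   present_on_access=lambda p: print("ON ACCESS:", p))
```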
In implementations where a time is provided in a voicemail message, the system may first determine the address of the caller from the user's contact book, and then determine the appropriate time taking the respective time zones into account. For example, the voicemail may state "this is John Goodman, call me at work at 1 pm." Here, the system determines that John Goodman lives in California while the user lives in Virginia, and offers to set a reminder to call John Goodman at 4 pm EST (1 pm PST) the following day.
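For illustration, the time-zone conversion in that example can be expressed with the standard zoneinfo module (Python 3.9+); the zone identifiers below correspond to the California/Virginia example and are not otherwise part of the disclosure:

```python
# Illustrative sketch of the time-zone adjustment: a time the caller states in
# his own local zone (inferred from his address) is converted to the user's
# local zone before the reminder is offered.

from datetime import datetime
from zoneinfo import ZoneInfo

def to_user_local(caller_local_dt, caller_zone, user_zone):
    aware = caller_local_dt.replace(tzinfo=ZoneInfo(caller_zone))
    return aware.astimezone(ZoneInfo(user_zone))


if __name__ == "__main__":
    # "call me at work at 1 pm" -- caller in California, user in Virginia
    stated = datetime(2013, 3, 15, 13, 0)
    print(to_user_local(stated, "America/Los_Angeles", "America/New_York"))
    # 1 pm Pacific corresponds to 4 pm Eastern (a three-hour difference)
```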
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a data processing apparatus, or programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device, a keyboard, and a pointing device. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Although implementations have been described in detail above, other modifications are possible. For example, the flow diagrams depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flow diagrams, and other components may be added to, or removed from, the described systems. Accordingly, various modifications may be made to the disclosed implementations and still be within the scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 61/783,984, filed on Mar. 14, 2013, entitled SYSTEM AND METHOD FOR PROCESSING VOICEMAIL, which is hereby incorporated by reference in its entirety for all purposes. This application is related to U.S. patent application Ser. No. 12/794,650, and U.S. Provisional Patent Application No. 61/184,717, entitled SMART DEDUCTION OF VOICE COMMANDS, filed Jun. 5, 2009, which are both hereby incorporated by reference in their entireties and for all purposes.
2001-125896 | May 2001 | JP |
2002-14954 | Jan 2002 | JP |
2002-24212 | Jan 2002 | JP |
2003-517158 | May 2003 | JP |
2004-152063 | May 2004 | JP |
2007-4633 | Jan 2007 | JP |
2008-236448 | Oct 2008 | JP |
2008-271481 | Nov 2008 | JP |
2009-36999 | Feb 2009 | JP |
2009-294913 | Dec 2009 | JP |
10-0757496 | Sep 2007 | KR |
10-0776800 | Nov 2007 | KR |
10-0801227 | Feb 2008 | KR |
10-0810500 | Mar 2008 | KR |
10-2008-0109322 | Dec 2008 | KR |
10-2009-0086805 | Aug 2009 | KR |
10-0920267 | Oct 2009 | KR |
10-2010-0119519 | Nov 2010 | KR |
10-1032792 | May 2011 | KR |
10-2011-0113414 | Oct 2011 | KR |
1014847 | Oct 2001 | NL |
1995002221 | Jan 1995 | WO |
1997010586 | Mar 1997 | WO |
1997026612 | Jul 1997 | WO |
1998041956 | Sep 1998 | WO |
1999001834 | Jan 1999 | WO |
1999008238 | Feb 1999 | WO |
1999056227 | Nov 1999 | WO |
2000029964 | May 2000 | WO |
2000060435 | Oct 2000 | WO |
2000060435 | Apr 2001 | WO |
2001035391 | May 2001 | WO |
2002073603 | Sep 2002 | WO |
2004008801 | Jan 2004 | WO
2006129967 | Dec 2006 | WO |
2007080559 | Jul 2007 | WO |
2008085742 | Jul 2008 | WO |
2008109835 | Sep 2008 | WO |
2010075623 | Jul 2010 | WO |
2011088053 | Jul 2011 | WO |
2011133543 | Oct 2011 | WO |
2012167168 | Dec 2012 | WO |
Entry |
---|
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2010/037378, dated Aug. 25, 2010, 14 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/040571, dated Nov. 16, 2012, 14 pages. |
Extended European Search Report and Search Opinion received for European Patent Application No. 12185276.8, dated Dec. 18, 2012, 4 pages. |
Extended European Search Report received for European Patent Application No. 12186663.6, dated Jul. 16, 2013, 6 pages. |
“Top 10 Best Practices for Voice User Interface Design” available at <http://www.developer.com/voice/article.php/1567051/Top-10-Best-Practices-for-Voice-UserInterface-Design.htm>, Nov. 1, 2002, 4 pages. |
Apple Computer, “Knowledge Navigator”, published by Apple Computer no later than 2008, as depicted in “Exemplary Screenshots from video entitled “Knowledge Navigator””, 2008, 7 pages. |
Bellegarda, Jerome R., “Latent Semantic Mapping”, IEEE Signal Processing Magazine, vol. 22, No. 5, Sep. 2005, pp. 70-80. |
Car Working Group, “Hands-Free Profile 1.5 HFP1.5_SPEC”, Bluetooth Doc, available at <www.bluetooth.org>, Nov. 25, 2005, 93 pages. |
Cohen et al., “Voice User Interface Design”, Excerpts from Chapter 1 and Chapter 10, 2004, 36 pages. |
“Mel Scale”, Wikipedia the Free Encyclopedia, Last modified on Oct. 13, 2009 and retrieved on Jul. 28, 2010, Available online at <http://en.wikipedia.org/wiki/Mel_scale>, 2 pages. |
“Minimum Phase”, Wikipedia the free Encyclopedia, Last modified on Jan. 12, 2010 and retrieved on Jul. 28, 2010, Available online at <http://en.wikipedia.org/wiki/Minimum_phase>, 8 pages. |
Busemann et al., “Natural Language Dialogue Service for Appointment Scheduling Agents”, Technical Report RR-97-02, Deutsches Forschungszentrum fur Kunstliche Intelligenz GmbH, 1997, 8 pages. |
Acero et al., “Environmental Robustness in Automatic Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'90), Apr. 1990, 4 pages. |
Acero et al., “Robust Speech Recognition by Normalization of the Acoustic Space”, International Conference on Acoustics, Speech and Signal Processing, 1991, 4 pages. |
Agnas et al., “Spoken Language Translator: First-Year Report”, SICS (ISSN 0283-3638), SRI and Telia Research AB, Jan. 1994, 161 pages. |
Ahlbom et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques”, IEEE International Conference of Acoustics, Speech and Signal Processing (ICASSP'87), vol. 12, Apr. 1987, 4 pages. |
Aikawa et al., “Speech Recognition Using Time-Warping Neural Networks”, Proceedings of the 1991, IEEE Workshop on Neural Networks for Signal Processing, 1991, 10 pages. |
Alfred App, “Alfred”, Available online at <http://www.alfredapp.com/>, retrieved on Feb. 8, 2012, 5 pages. |
Allen, J., “Natural Language Understanding”, 2nd Edition, The Benjamin/Cummings Publishing Company, Inc., 1995, 671 pages. |
Alshawi et al., “CLARE: A Contextual Reasoning and Co-operative Response Framework for the Core Language Engine”, SRI International, Cambridge Computer Science Research Centre, Cambridge, Dec. 1992, 273 pages. |
Alshawi et al., “Declarative Derivation of Database Queries from Meaning Representations”, Proceedings of the BANKAI Workshop on Intelligent Information Access, Oct. 1991, 12 pages. |
Alshawi et al., “Logical Forms in the Core Language Engine”, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 1989, pp. 25-32. |
Alshawi et al., “Overview of the Core Language Engine”, Proceedings of Future Generation Computing Systems, Tokyo, 13 pages. |
Alshawi, H., “Translation and Monotonic Interpretation/Generation”, SRI International, Cambridge Computer Science Research Centre, Cambridge, Available online at <http://www.cam.sri.com/tr/crc024/paper.ps.Z>, Jul. 1992, 18 pages. |
Ambite et al., “Design and Implementation of the CALO Query Manager”, American Association for Artificial Intelligence, 2006, 8 pages. |
Ambite et al., “Integration of Heterogeneous Knowledge Sources in the CALO Query Manager”, The 4th International Conference on Ontologies, Databases and Applications of Semantics (ODBASE), 2005, 18 pages. |
Anastasakos et al., “Duration Modeling in Large Vocabulary Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'95), May 1995, pp. 628-631. |
Anderson et al., “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics”, Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, 1967, 12 pages. |
Ansari et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach”, IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, pp. 60-62. |
Anthony et al., “Supervised Adaption for Signature Verification System”, IBM Technical Disclosure, Jun. 1, 1978, 3 pages. |
Appelt et al., “Fastus: A Finite-State Processor for Information Extraction from Real-world Text”, Proceedings of IJCAI, 1993, 8 pages. |
Appelt et al., “SRI International Fastus System MUC-6 Test Results and Analysis”, SRI International, Menlo Park, California, 1995, 12 pages. |
Apple Computer, “Guide Maker User's Guide”, Apple Computer, Inc., Apr. 27, 1994, 8 pages. |
Apple Computer, “Introduction to Apple Guide”, Apple Computer, Inc., Apr. 28, 1994, 20 pages. |
Archbold et al., “A Team User's Guide”, SRI International, Dec. 21, 1981, 70 pages. |
Asanovic et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks”, Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, www.ICSI.Berkeley.EDU, 1991, 7 pages. |
Atal et al., “Efficient Coding of LPC Parameters by Temporal Decomposition”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'83), Apr. 1983, 4 pages. |
Bahl et al., “A Maximum Likelihood Approach to Continuous Speech Recognition”, IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages. |
Bahl et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, No. 7, Jul. 1989, 8 pages. |
Bahl et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System”, Proceeding of International Conference on Acoustics, Speech and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, 4 pages. |
Bahl et al., “Large Vocabulary Natural Language Continuous Speech Recognition”, Proceedings of 1989 International Conference on Acoustics, Speech and Signal Processing, vol. 1, May, 1989, 6 pages. |
Bahl et al., “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages. |
Bahl et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models”, Proceeding of International Conference on Acoustics, Speech and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, 8 pages. |
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective”, A Thesis Submitted for the Degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages. |
Bear et al., “A System for Labeling Self-Repairs in Speech”, SRI International, Feb. 22, 1993, 9 pages. |
Bear et al., “Detection and Correction of Repairs in Human-Computer Dialog”, SRI International, May 1992, 11 pages. |
Bear et al., “Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog”, Proceedings of the 30th Annual Meeting on Association for Computational Linguistics (ACL), 1992, 8 pages. |
Bear et al., “Using Information Extraction to Improve Document Retrieval”, SRI International, Menlo Park, California, 1998, 11 pages. |
Belaid et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages. |
Bellegarda, Jerome R., “Exploiting both Local and Global Constraints for Multi-Span Statistical Language Modeling”, Proceeding of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'98), vol. 2, May 1998, 5 pages. |
Bellegarda et al., “A Latent Semantic Analysis Framework for Large-Span Language Modeling”, 5th European Conference on Speech, Communication and Technology (EUROSPEECH'97), Sep. 1997, 4 pages. |
Bellegarda et al., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages. |
Bellegarda et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96), vol. 1, 1996, 4 pages. |
Bellegarda et al., “Experiments Using Data Augmentation for Speaker Adaptation”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'95), May 1995, 4 pages. |
Bellegarda, Jerome R., “Exploiting Latent Semantic Information in Statistical Language Modeling”, Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 18 pages. |
Bellegarda, Jerome R., “Interaction-Driven Speech Input—A Data-Driven Approach to the Capture of both Local and Global Language Constraints”, Available online at <http://old.sigchi.org/bulletin/1998.2/bellegarda.html>, 1992, 7 pages. |
Bellegarda, Jerome R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages. |
Bellegarda et al., “On-Line Handwriting Recognition using Statistical Mixtures”, Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris, France, Jul. 1993, 11 pages. |
Appelt et al., “SRI: Description of the JV-FASTUS System used for MUC-5”, SRI International, Artificial Intelligence Center, 1993, 19 pages. |
Zue et al., “The Voyager Speech Understanding System: Preliminary Development and Evaluation”, Proceedings of IEEE, International Conference on Acoustics, Speech and Signal Processing, 1990, 4 pages. |
Zue, Victor W., “Toward Systems that Understand Spoken Language”, ARPA Strategic Computing Institute, Feb. 1994, 9 pages. |
Lyons et al., “Augmenting Conversations Using Dual-Purpose Speech”, available at <http://research.nokia.com/files/2004-LYONS-UIST04-DPS.pdf>, 2004, 10 pages. |
Martin et al., “The Open Agent Architecture: A Framework for Building Distributed Software Systems”, Applied Artificial Intelligence: An International Journal, vol. 13, No. 1-2, Jan.-Mar. 1999, available at <http://adam.cheyer.com/papers/oaa.pdf>. |
Schnelle, D., “Context Aware Voice User Interfaces for Workflow Support”, Dissertation paper, Aug. 27, 2007, 254 pages. |
Bellegarda et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task”, Signal Processing VII: Theories and Applications, European Association for Signal Processing, 1994, 4 pages. |
Bellegarda et al., “The Metamorphic Algorithm: a Speaker Mapping Approach to Data Augmentation”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages. |
Belvin et al., “Development of the HRL Route Navigation Dialogue System”, Proceedings of the First International Conference on Human Language Technology Research, Paper, 2001, 5 pages. |
Berry et al., “PTIME: Personalized Assistance for Calendaring”, ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Jul. 2011, pp. 1-22. |
Berry et al., “Task Management under Change and Uncertainty Constraint Solving Experience with the CALO Project”, Proceedings of CP'05 Workshop on Constraint Solving under Change, 2005, 5 pages. |
Black et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis”, Proceedings of Eurospeech, vol. 2, 1997, 4 pages. |
Blair et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System”, Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages. |
Bobrow et al., “Knowledge Representation for Syntactic/Semantic Processing”, From: AAA-80 Proceedings, Copyright 1980, AAAI, 1980, 8 pages. |
Bouchou et al., “Using Transducers in Natural Language Database Query”, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, Jun. 1999, 17 pages. |
Bratt et al., “The SRI Telephone-Based ATIS System”, Proceedings of ARPA Workshop on Spoken Language Technology, 1995, 3 pages. |
Briner, L. L., “Identifying Keywords in Text Data Processing”, In Zelkowitz, Marvin V., Ed, Directions and Challenges, 15th Annual Technical Symposium, Gaithersburg, Maryland, Jun. 17, 1976, 7 pages. |
Bulyko et al., “Error-Correction Detection and Response Generation in a Spoken Dialogue System”, Speech Communication, vol. 45, 2005, pp. 271-288. |
Bulyko et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis”, Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages. |
Burke et al., “Question Answering from Frequently Asked Question Files”, AI Magazine, vol. 18, No. 2, 1997, 10 pages. |
Burns et al., “Development of a Web-Based Intelligent Agent for the Fashion Selection and Purchasing Process via Electronic Commerce”, Proceedings of the Americas Conference on Information System (AMCIS), Dec. 31, 1998, 4 pages. |
Bussey et al., “Service Architecture, Prototype Description and Network Implications of a Personalized Information Grazing Service”, INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Available online at <http://slrohall.com/publications/>, Jun. 1990, 8 pages. |
Bussler et al., “Web Service Execution Environment (WSMX)”, Retrieved from Internet on Sep. 17, 2012, Available online at <http://www.w3.org/Submission/WSMX>, Jun. 3, 2005, 29 pages. |
Butcher, Mike, “EVI Arrives in Town to go Toe-to-Toe with Siri”, TechCrunch, Jan. 23, 2012, 2 pages. |
Buzo et al., “Speech Coding Based Upon Vector Quantization”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, 13 pages. |
Caminero-Gil et al., “Data-Driven Discourse Modeling for Semantic Interpretation”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, May 1996, 6 pages. |
Carter, D., “Lexical Acquisition in the Core Language Engine”, Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, 1989, 8 pages. |
Carter et al., “The Speech-Language Interface in the Spoken Language Translator”, SRI International, Nov. 23, 1994, 9 pages. |
Cawley, Gavin C. “The Application of Neural Networks to Phonetic Modelling”, PhD. Thesis, University of Essex, Mar. 1996, 13 pages. |
Chai et al., “Comparative Evaluation of a Natural Language Dialog Based System and a Menu Driven System for Information Access: A Case Study”, Proceedings of the International Conference on Multimedia Information Retrieval (RIAO), Paris, Apr. 2000, 11 pages. |
Chang et al., “A Segment-Based Speech Recognition System for Isolated Mandarin Syllables”, Proceedings TENCON '93, IEEE Region 10 Conference on Computer, Communication, Control and Power Engineering, vol. 3, Oct. 1993, 6 pages. |
Chen, Yi, “Multimedia Siri Finds and Plays Whatever You Ask for”, PSFK Report, Feb. 9, 2012, 9 pages. |
Cheyer, Adam, “A Perspective on AI & Agent Technologies for SCM”, VerticalNet Presentation, 2001, 22 pages. |
Cheyer, Adam, “About Adam Cheyer”, Available online at <http://www.adam.cheyer.com/about.html>, retrieved on Sep. 17, 2012, 2 pages. |
Cheyer et al., “Multimodal Maps: An Agent-Based Approach”, International Conference on Co-operative Multimodal Communication, 1995, 15 pages. |
Cheyer et al., “Spoken Language and Multimodal Applications for Electronic Realities”, Virtual Reality, vol. 3, 1999, pp. 1-15. |
Cheyer et al., “The Open Agent Architecture”, Autonomous Agents and Multi-Agent Systems, vol. 4, Mar. 1, 2001, 6 pages. |
Cheyer et al., “The Open Agent Architecture: Building Communities of Distributed Software Agents”, Artificial Intelligence Center, SRI International, Power Point Presentation, Available online at <http://www.ai.sri.com/~oaa/>, retrieved on Feb. 21, 1998, 25 pages. |
Codd, E. F., “Databases: Improving Usability and Responsiveness—How About Recently”, Copyright 1978, Academic Press, Inc., 1978, 28 pages. |
Cohen et al., “An Open Agent Architecture”, Available Online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.480>, 1994, 8 pages. |
Coles et al., “Chemistry Question-Answering”, SRI International, Jun. 1969, 15 pages. |
Coles et al., “Techniques for Information Retrieval Using an Inferential Question-Answering System with Natural-Language Input”, SRI International, Nov. 1972, 198 Pages. |
Coles et al., “The Application of Theorem Proving to Information Retrieval”, SRI International, Jan. 1971, 21 pages. |
Conklin, Jeff, “Hypertext: An Introduction and Survey”, Computer Magazine, Sep. 1987, 25 pages. |
Connolly et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, No. 6, Jun. 1989, 13 pages. |
Constantinides et al., “A Schema Based Approach to Dialog Control”, Proceedings of the International Conference on Spoken Language Processing, 1998, 4 pages. |
Cox et al., “Speech and Language Processing for Next-Millennium Communications Services”, Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 24 pages. |
Craig et al., “Deacon: Direct English Access and Control”, AFIPS Conference Proceedings, vol. 19, San Francisco, Nov. 1966, 18 pages. |
Cutkosky et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems”, Journal & Magazines, Computer, vol. 26, No. 1, Jan. 1993, 14 pages. |
Dar et al., “DTL's DataSpot: Database Exploration Using Plain Language”, Proceedings of the 24th VLDB Conference, New York, 1998, 5 pages. |
Davis et al., “A Personal Handheld Multi-Modal Shopping Assistant”, IEEE, 2006, 9 pages. |
Decker et al., “Designing Behaviors for Information Agents”, The Robotics Institute, Carnegie-Mellon University, Paper, Jul. 1996, 15 pages. |
Decker et al., “Matchmaking and Brokering”, The Robotics Institute, Carnegie-Mellon University, Paper, May 1996, 19 pages. |
Deerwester et al., “Indexing by Latent Semantic Analysis”, Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages. |
Deller, Jr. et al., “Discrete-Time Processing of Speech Signals”, Prentice Hall, ISBN: 0-02-328301-7, 1987, 14 pages. |
Digital Equipment Corporation, “Open Vms Software Overview”, Software Manual, Dec. 1995, 159 pages. |
Jelinek, F., “Self-Organized Language Modeling for Speech Recognition”, Readings in Speech Recognition, Edited by Alex Waibel and Kai-Fu Lee, Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 1990, 63 pages. |
Jennings et al., “A Personal News Service Based on a User Model Neural Network”, IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, 12 pages. |
Ji et al., “A Method for Chinese Syllables Recognition Based upon Sub-syllable Hidden Markov Model”, 1994 International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 4 pages. |
Johnson, Julia Ann., “A Data Management Strategy for Transportable Natural Language Interfaces”, Doctoral Thesis Submitted to the Department of Computer Science, University of British Columbia, Canada, Jun. 1989, 285 pages. |
Jones, J., “Speech Recognition for Cyclone”, Apple Computer, Inc., E.R.S. Revision 2.9, Sep. 10, 1992, 93 pages. |
Julia et al., “Http://www.speech.sri.com/demos/atis.html”, Proceedings of AAAI, Spring Symposium, 1997, 5 pages. |
Julia et al., “Un Editeur Interactif De Tableaux Dessines a Main Levee (An Interactive Editor for Hand-Sketched Tables)”, Traitement du Signal, vol. 12, No. 6, 1995, pp. 619-626. |
Kahn et al., “CoABS Grid Scalability Experiments”, Autonomous Agents and Multi-Agent Systems, vol. 7, 2003, pp. 171-178. |
Kamel et al., “A Graph Based Knowledge Retrieval System”, IEEE International Conference on Systems, Man and Cybernetics, 1990, pp. 269-275. |
Karp, P. D., “A Generic Knowledge-Base Access Protocol”, Available online at <http://lecture.cs.buu.ac.th/~f50353/Document/gfp.pdf>, May 12, 1994, 66 pages. |
Katz, Boris, “A Three-Step Procedure for Language Generation”, Massachusetts Institute of Technology, A.I. Memo No. 599, Dec. 1980, pp. 1-40. |
Katz, Boris, “Annotating the World Wide Web Using Natural Language”, Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet, 1997, 7 pages. |
Katz, S. M., “Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages. |
Katz et al., “Exploiting Lexical Regularities in Designing Natural Language Systems”, Proceedings of the 12th International Conference on Computational Linguistics, 1988, pp. 1-22. |
Katz et al., “REXTOR: A System for Generating Relations from Natural Language”, Proceedings of the ACL Workshop on Natural Language Processing and Information Retrieval (NLP&IR), Oct. 2000, 11 pages. |
Katz, Boris, “Using English for Indexing and Retrieving”, Proceedings of the 1st RIAO Conference on User-Oriented Content-Based Text and Image Handling, 1988, pp. 314-332. |
Kitano, H., “PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System”, Computer, vol. 24, No. 6, Jun. 1991, 13 pages. |
Klabbers et al., “Reducing Audible Spectral Discontinuities”, IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages. |
Klatt et al., “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence”, Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages. |
Knownav, “Knowledge Navigator”, YouTube Video available at <http://www.youtube.com/watch?v=QRH8eimU_20>, Apr. 29, 2008, 1 page. |
Kominek et al., “Impact of Durational Outlier Removal from Unit Selection Catalogs”, 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages. |
Konolige, Kurt, “A Framework for a Portable Natural-Language Interface to Large Data Bases”, SRI International, Technical Note 197, Oct. 12, 1979, 54 pages. |
Kubala et al., “Speaker Adaptation from a Speaker-Independent Training Corpus”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'90), Apr. 1990, 4 pages. |
Kubala et al., “The Hub and Spoke Paradigm for CSR Evaluation”, Proceedings of the Spoken Language Technology Workshop, Mar. 1994, 9 pages. |
Laird et al., “SOAR: An Architecture for General Intelligence”, Artificial Intelligence, vol. 33, 1987, pp. 1-64. |
Langley et al., “A Design for the Icarus Architecture”, SIGART Bulletin, vol. 2, No. 4, 1991, pp. 104-109. |
Larks, “Intelligent Software Agents”, Available online at <http://www.cs.cmu.edu/~softagents/larks.html>, retrieved on Mar. 15, 2013, 2 pages. |
Lee et al., “A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary”, International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 1990, 5 pages. |
Lee et al., “Golden Mandarin (II)—An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary”, IEEE International Conference of Acoustics, Speech and Signal Processing, vol. 2, 1993, 4 pages. |
Lee et al., “Golden Mandarin (II)—An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions”, International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 5 pages. |
Lee, K. F., “Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The SPHINX System”, Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, Apr. 1988, 195 pages. |
Lee et al., “System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters”, International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, No. 3 & 4, Nov. 1991, 16 pages. |
Lemon et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments”, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, Sep. 2004, pp. 241-267. |
Leong et al., “CASIS: A Context-Aware Speech Interface System”, Proceedings of the 10th International Conference on Intelligent User Interfaces, Jan. 2005, pp. 231-238. |
Lieberman et al., “Out of Context: Computer Systems that Adapt to, and Learn from, Context”, IBM Systems Journal, vol. 39, No. 3 & 4, 2000, pp. 617-632. |
Lin et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History”, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272>, 1999, 4 pages. |
Lin et al., “A New Framework for Recognition of Mandarin Syllables with Tones Using Sub-syllabic Units”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-93), Apr. 1993, 4 pages. |
Linde et al., “An Algorithm for Vector Quantizer Design”, IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages. |
Liu et al., “Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering”, IEEE International Conference of Acoustics, Speech and Signal Processing, ICASSP-92, Mar. 1992, 4 pages. |
Logan et al., “Mel Frequency Cepstral Co-efficients for Music Modeling”, International Symposium on Music Information Retrieval, 2000, 2 pages. |
Lowerre, B. T., “The Harpy Speech Recognition System”, Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages. |
Maghbouleh, Arman, “An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations”, Revised Version of a Paper Presented at the Computational Phonology in Speech Technology Workshop, 1996 Annual Meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages. |
Markel et al., “Linear Prediction of Speech”, Springer-Verlag, Berlin, Heidelberg, New York, 1976, 12 pages. |
Martin et al., “Building and Using Practical Agent Applications”, SRI International, PAAM Tutorial, 1998, 78 pages. |
Martin et al., “Building Distributed Software Systems with the Open Agent Architecture”, Proceedings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Mar. 1998, pp. 355-376. |
Martin et al., “Development Tools for the Open Agent Architecture”, Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Apr. 1996, pp. 1-17. |
Martin et al., Information Brokering in an Agent Architecture, Proceedings of the Second International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Apr. 1997, pp. 1-20. |
Martin et al., “Transportability and Generality in a Natural-Language Interface System”, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Technical Note 293, Aug. 1983, 21 pages. |
Matiasek et al., “Tamic-P: A System for NL Access to Social Insurance Database”, 4th International Conference on Applications of Natural Language to Information Systems, Jun. 1999, 7 pages. |
McGuire et al., “Shade: Technology for Knowledge-Based Collaborative Engineering”, Journal of Concurrent Engineering Applications and Research (CERA), 1993, 18 pages. |
Rabiner et al., “Fundamentals of Speech Recognition”, AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 1993, 17 pages. |
Rabiner et al., “Note on the Properties of a Vector Quantizer for LPC Coefficients”, Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, 9 pages. |
Ratcliffe, M., “ClearAccess 2.0 Allows SQL Searches Off-Line (Structured Query Language) (ClearAccess Corp. Preparing New Version of Data-Access Application with Simplified User Interface, New Features) (Product Announcement)”, MacWeek, vol. 6, No. 41, Nov. 16, 1992, 2 pages. |
Ravishankar, Mosur K., “Efficient Algorithms for Speech Recognition”, Doctoral Thesis Submitted to School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, May 15, 1996, 146 pages. |
Rayner, M., “Abductive Equivalential Translation and its Application to Natural Language Database Interfacing”, Dissertation Paper, SRI International, Sep. 1993, 162 pages. |
Rayner et al., “Adapting the Core Language Engine to French and Spanish”, Cornell University Library, Available online at <http://arxiv.org/abs/cmp-lg/9605015>, May 10, 1996, 9 pages. |
Rayner et al., “Deriving Database Queries from Logical Forms by Abductive Definition Expansion”, Proceedings of the Third Conference on Applied Natural Language Processing, ANLC, 1992, 8 pages. |
Rayner, Manny, “Linguistic Domain Theories: Natural-Language Database Interfacing from First Principles”, SRI International, Cambridge, 1993, 11 pages. |
Rayner et al., “Spoken Language Translation with Mid-90's Technology: A Case Study”, Eurospeech, ISCA, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8608>, 1993, 4 pages. |
Remde et al., “SuperBook: An Automatic Tool for Information Exploration-Hypertext?”, In Proceedings of Hypertext, 87 Papers, Nov. 1987, 14 pages. |
Reynolds, C. F., “On-Line Reviews: A New Application of the HICOM Conferencing System”, IEEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages. |
Rice et al., “Monthly Program: Nov. 14, 1995”, The San Francisco Bay Area Chapter of ACM SIGCHI, Available online at <http://www.baychi.org/calendar/19951114>, Nov. 14, 1995, 2 pages. |
Rice et al., “Using the Web Instead of a Window System”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI'96, 1996, pp. 1-14. |
Rigoll, G., “Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'89), May 1989, 4 pages. |
Riley, M. D., “Tree-Based Modelling of Segmental Durations”, Talking Machines Theories, Models and Designs, Elsevier Science Publishers B.V., North-Holland, ISBN: 0-444-89115-3, 1992, 15 pages. |
Rivlin et al., “Maestro: Conductor of Multimedia Analysis Technologies”, SRI International, 1999, 7 pages. |
Rivoira et al., “Syntax and Semantics in a Word-Sequence Recognition System”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'79), Apr. 1979, 5 pages. |
Roddy et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces”, VerticalNet Solutions, White Paper, Jun. 15, 2000, 23 pages. |
Rosenfeld, R., “A Maximum Entropy Approach to Adaptive Statistical Language Modelling”, Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages. |
Roszkiewicz, A., “Extending your Apple”, Back Talk-Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, 5 pages. |
Rudnicky et al., “Creating Natural Dialogs in the Carnegie Mellon Communicator System”, Proceedings of Eurospeech, vol. 4, 1999, pp. 1531-1534. |
Russell et al., “Artificial Intelligence, A Modern Approach”, Prentice Hall, Inc., 1995, 121 pages. |
Sacerdoti et al., “A Ladder User's Guide (Revised)”, SRI International Artificial Intelligence Center, Mar. 1980, 39 pages. |
Sagalowicz, D., “AD-Ladder User's Guide”, SRI International, Sep. 1980, 42 pages. |
Sakoe et al., “Dynamic Programming Algorithm Optimization for Spoken Word Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-26, No. 1, Feb. 1978, 8 pages. |
Salton et al., “On the Application of Syntactic Methodologies in Automatic Text Analysis”, Information Processing and Management, vol. 26, No. 1, Great Britain, 1990, 22 pages. |
Sameshima et al., “Authorization with Security Attributes and Privilege Delegation Access control beyond the ACL”, Computer Communications, vol. 20, 1997, 9 pages. |
San-Segundo et al., “Confidence Measures for Dialogue Management in the CU Communicator System”, Proceedings of Acoustics, Speech and Signal Processing (ICASSP'00), Jun. 2000, 4 pages. |
Sato, H., “A Data Model, Knowledge Base and Natural Language Processing for Sharing a Large Statistical Database”, Statistical and Scientific Database Management, Lecture Notes in Computer Science, vol. 339, 1989, 20 pages. |
Savoy, J., “Searching Information in Hypertext Systems Using Multiple Sources of Evidence”, International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1996, 15 pages. |
Scagliola, C., “Language Models and Search Algorithms for Real-Time Speech Recognition”, International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages. |
Schmandt et al., “Augmenting a Window System with Speech Input”, IEEE Computer Society, Computer, vol. 23, No. 8, Aug. 1990, 8 pages. |
Schütze, H., “Dimensions of Meaning”, Proceedings of Supercomputing'92 Conference, Nov. 1992, 10 pages. |
Seneff et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains”, Proceedings of Fourth International Conference on Spoken Language, vol. 2, 1996, 4 pages. |
Sharoff et al., “Register-Domain Separation as a Methodology for Development of Natural Language Interfaces to Databases”, Proceedings of Human-Computer Interaction (INTERACT'99), 1999, 7 pages. |
Sheth et al., “Evolving Agents for Personalized Information Filtering”, Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1993, 9 pages. |
Sheth et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships”, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, Oct. 13, 2002, pp. 1-38. |
Shikano et al., “Speaker Adaptation through Vector Quantization”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages. |
Shimazu et al., “CAPIT: Natural Language Interface Design Tool with Keyword Analyzer and Case-Based Parser”, NEC Research & Development, vol. 33, No. 4, Oct. 1992, 11 pages. |
Shinkle, L., “Team User's Guide”, SRI International, Artificial Intelligence Center, Nov. 1984, 78 pages. |
Shklar et al., “InfoHarness: Use of Automatically Generated Metadata for Search and Retrieval of Heterogeneous Information”, Proceedings of CAiSE'95, Finland, 1995, 14 pages. |
Sigurdsson et al., “Mel Frequency Cepstral Co-efficients: An Evaluation of Robustness of MP3 Encoded Music”, Proceedings of the 7th International Conference on Music Information Retrieval, 2006, 4 pages. |
Silverman et al., “Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 1999, 5 pages. |
Simonite, Tom, “One Easy Way to Make Siri Smarter”, Technology Review, Oct. 18, 2011, 2 pages. |
Singh, N., “Unifying Heterogeneous Information Models”, Communications of the ACM, 1998, 13 pages. |
SRI International, “The Open Agent Architecture TM 1.0 Distribution”, Open Agent Architecture (OAA), 1999, 2 pages. |
SRI2009, “SRI Speech: Products: Software Development Kits: EduSpeak”, Available online at <http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak.shtml>, 2009, 2 pages. |
Starr et al., “Knowledge-Intensive Query Processing”, Proceedings of the 5th KRDB Workshop, Seattle, May 31, 1998, 6 pages. |
Stent et al., “The CommandTalk Spoken Dialogue System”, SRI International, 1999, pp. 183-190. |
Stern et al., “Multiple Approaches to Robust Speech Recognition”, Proceedings of Speech and Natural Language Workshop, 1992, 6 pages. |
Stickel, Mark E., “A Nonclausal Connection-Graph Resolution Theorem-Proving Program”, Proceedings of AAAI'82, 1982, 5 pages. |
Sugumaran, V., “A Distributed Intelligent Agent-Based Spatial Decision Support System”, Proceedings of the Americas Conference on Information systems (AMCIS), Dec. 31, 1998, 4 pages. |
Sycara et al., “Coordination of Multiple Intelligent Software Agents”, International Journal of Cooperative Information Systems (IJCIS), vol. 5, No. 2 & 3, 1996, 31 pages. |
Sycara et al., “Distributed Intelligent Agents”, IEEE Expert, vol. 11, No. 6, Dec. 1996, 32 pages. |
Sycara et al., “Dynamic Service Matchmaking among Agents in Open Information Environments”, SIGMOD Record, 1999, 7 pages. |
Sycara et al., “The RETSINA MAS Infrastructure”, Autonomous Agents and Multi-Agent Systems, vol. 7, 2003, 20 pages. |
Tenenbaum et al., “Data Structures Using Pascal”, Prentice-Hall, Inc., 1981, 34 pages. |
Textndrive, “Text'nDrive App Demo—Listen and Reply to your Messages by Voice while Driving!”, YouTube Video available at <http://www.youtube.com/watch?v=WaGfzoHsAMw>, Apr. 27, 2010, 1 page. |
Tofel, Kevin C., “SpeakToIt: A Personal Assistant for Older iPhones, iPads”, Apple News, Tips and Reviews, Feb. 9, 2012, 7 pages. |
Tsai et al., “Attributed Grammar—A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition”, IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages. |
Tucker, Joshua, “Too Lazy to Grab Your TV Remote? Use Siri Instead”, Engadget, Nov. 30, 2011, 8 pages. |
Tur et al., “The CALO Meeting Assistant System”, IEEE Transactions on Audio, Speech and Language Processing, vol. 18, No. 6, Aug. 2010, pp. 1601-1611. |
Tur et al., “The CALO Meeting Speech Recognition and Understanding System”, Proc. IEEE Spoken Language Technology Workshop, 2008, 4 pages. |
Tyson et al., “Domain-Independent Task Specification in the TACITUS Natural Language System”, SRI International, Artificial Intelligence Center, May 1990, 16 pages. |
Udell, J., “Computer Telephony”, BYTE, vol. 19, No. 7, Jul. 1994, 9 pages. |
Van Santen, J. P.H., “Contextual Effects on Vowel Duration”, Journal Speech Communication, vol. 11, No. 6, Dec. 1992, pp. 513-546. |
Vepa et al., “New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis”, Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 2002, 4 pages. |
Verschelde, Jan, “MATLAB Lecture 8. Special Matrices in MATLAB”, UIC, Dept. of Math, Stat. & CS, MCS 320, Introduction to Symbolic Computation, 2007, 4 pages. |
Vingron, Martin, “Near-Optimal Sequence Alignment”, Current Opinion in Structural Biology, vol. 6, No. 3, 1996, pp. 346-352. |
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store”, Press Release, Dec. 3, 2008, 2 pages. |
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar”, YouTube Video, Available online at <http://www.youtube.com/watch?v=Vqs8XfXxgz4>, Oct. 2010, 2 pages. |
Voiceassist, “Send Text, Listen to and Send E-Mail by Voice”, YouTube Video, Available online at <http://www.youtube.com/watch?v=0tEU61nHHA4>, Jul. 30, 2009, 1 page. |
VoiceontheGo, “Voice on the Go (BlackBerry)”, YouTube Video, available online at <http://www.youtube.com/watch?v=pJqpWgQS98w>, Jul. 27, 2009, 1 page. |
Wahlster et al., “Smartkom: Multimodal Communication with a Life-Like Character”, Eurospeech—Scandinavia, 7th European Conference on Speech Communication and Technology, 2001, 5 pages. |
Waldinger et al., “Deductive Question Answering from Multiple Resources”, New Directions in Question Answering, Published by AAAI, Menlo Park, 2003, 22 pages. |
Walker et al., “Natural Language Access to Medical Text”, SRI International, Artificial Intelligence Center, Mar. 1981, 23 pages. |
Waltz, D., “An English Language Question Answering System for a Large Relational Database”, ACM, vol. 21, No. 7, 1978, 14 pages. |
Ward et al., “A Class Based Language Model for Speech Recognition”, IEEE, 1996, 3 pages. |
Ward et al., “Recent Improvements in the CMU Spoken Language Understanding System”, ARPA Human Language Technology Workshop, 1994, 4 pages. |
Ward, Wayne, “The CMU Air Travel Information Service: Understanding Spontaneous Speech”, Proceedings of the Workshop on Speech and Natural Language, HLT '90, 1990, pp. 127-129. |
Warren et al., “An Efficient Easily Adaptable System for Interpreting Natural Language Queries”, American Journal of Computational Linguistics, vol. 8, No. 3-4, 1982, 11 pages. |
Weizenbaum, J., “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine”, Communications of the ACM, vol. 9, No. 1, Jan. 1966, 10 pages. |
Werner et al., “Prosodic Aspects of Speech”, Universite de Lausanne, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art and Future Challenges, 1994, 18 pages. |
Winiwarter et al., “Adaptive Natural Language Interfaces to FAQ Knowledge Bases”, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, Jun. 1999, 22 pages. |
Wolff, M., “Post Structuralism and the Artful Database: Some Theoretical Considerations”, Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages. |
Wu, M., “Digital Speech Processing and Coding”, Multimedia Signal Processing, Lecture-2 Course Presentation, University of Maryland, College Park, 2003, 8 pages. |
Wu et al., “KDA: A Knowledge-Based Database Assistant”, Proceedings of the Fifth International Conference on Data Engineering (IEEE Cat. No. 89CH2695-5), 1989, 8 pages. |
Wu, M., “Speech Recognition, Synthesis, and H.C.I.”, Multimedia Signal Processing, Lecture-3 Course Presentation, University of Maryland, College Park, 2003, 11 pages. |
Wyle, M. F., “A Wide Area Network Information Filter”, Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 1991, 6 pages. |
Yang et al., “Smart Sight: A Tourist Assistant System”, Proceedings of Third International Symposium on Wearable Computers, 1999, 6 pages. |
Yankelovich et al., “Intermedia: The Concept and the Construction of a Seamless Information Environment”, Computer Magazine, IEEE, Jan. 1988, 16 pages. |
Yoon et al., “Letter-to-Sound Rules for Korean”, Department of Linguistics, The Ohio State University, 2002, 4 pages. |
Zeng et al., “Cooperative Intelligent Software Agents”, The Robotics Institute, Carnegie-Mellon University, Mar. 1995, 13 pages. |
Zhao, Y., “An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, pp. 380-394. |
Zhao et al., “Intelligent Agents for Flexible Workflow Systems”, Proceedings of the Americas Conference on Information Systems (AMCIS), Oct. 1998, 4 pages. |
Zovato et al., “Towards Emotional Speech Synthesis: A Rule based Approach”, Proceedings of 5th Isca Speech Synthesis Workshop—Pittsburgh, 2004, pp. 219-220. |
Zue, Victor, “Conversational Interfaces: Advances and Challenges”, Spoken Language System Group, Sep. 1997, 10 pages. |
Zue et al., “From Interface to Content: Translingual Access and Delivery of On-Line Information”, Eurospeech, 1997, 4 pages. |
Zue et al., “Jupiter: A Telephone-Based Conversational Interface for Weather Information”, IEEE Transactions on Speech and Audio Processing, Jan. 2000, 13 pages. |
Zue et al., “Pegasus: A Spoken Dialogue Interface for On-Line Air Travel Planning”, Speech Communication, vol. 15, 1994, 10 pages. |
Meng et al., “Wheels: A Conversational System in the Automobile Classified Domain”, Proceedings of Fourth International Conference on Spoken Language, ICSLP 96, vol. 1, Oct. 1996, 4 pages. |
Michos et al., “Towards an Adaptive Natural Language Interface to Command Languages”, Natural Language Engineering, vol. 2, No. 3, 1996, pp. 191-209. |
Milstead et al., “Metadata: Cataloging by Any Other Name”, Available online at <http://www.iicm.tugraz.at/thesis/cguetl_diss/literatur/Kapitel06/References/Milstead_et_al._1999/metadata.html>, Jan. 1999, 18 pages. |
Milward et al., “D2.2: Dynamic Multimodal Interface Reconfiguration, Talk and Look: Tools for Ambient Linguistic Knowledge”, Available online at <http://www.ihmc.us/users/nblaylock/Pubs/Files/talk_d2.2.pdf>, Aug. 8, 2006, 69 pages. |
Minker et al., “Hidden Understanding Models for Machine Translation”, Proceedings of ETRW on Interactive Dialogue in Multi-Modal Systems, Jun. 1999, pp. 1-4. |
Mitra et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies”, Advances in Database Technology, Lecture Notes in Computer Science, vol. 1777, 2000, pp. 1-15. |
Modi et al., “CMRadar: A Personal Assistant Agent for Calendar Management”, AAAI, Intelligent Systems Demonstrations, 2004, pp. 1020-1021. |
Moore et al., “Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS”, SRI International, Artificial Intelligence Center, 1995, 4 pages. |
Moore, Robert C., “Handling Complex Queries in a Distributed Data Base”, SRI International, Technical Note 170, Oct. 8, 1979, 38 pages. |
Moore, Robert C., “Practical Natural-Language Processing by Computer”, SRI International, Technical Note 251, Oct. 1981, 34 pages. |
Moore et al., “SRI's Experience with the ATIS Evaluation”, Proceedings of the Workshop on Speech and Natural Language, Jun. 1990, pp. 147-148. |
Moore et al., “The Information Warfare Advisor: An Architecture for Interacting with Intelligent Agents Across the Web”, Proceedings of Americas Conference on Information Systems (AMCIS), Dec. 31, 1998, pp. 186-188. |
Moore, Robert C., “The Role of Logic in Knowledge Representation and Commonsense Reasoning”, SRI International, Technical Note 264, Jun. 1982, 19 pages. |
Moore, Robert C., “Using Natural-Language Knowledge Sources in Speech Recognition”, SRI International, Artificial Intelligence Center, Jan. 1999, pp. 1-24. |
Moran et al., “Intelligent Agent-Based User Interfaces”, Proceedings of International Workshop on Human Interface Technology, Oct. 1995, pp. 1-4. |
Moran et al., “Multimodal User Interfaces in the Open Agent Architecture”, International Conference on Intelligent User Interfaces (IUI97), 1997, 8 pages. |
Moran, Douglas B., “Quantifier Scoping in the SRI Core Language Engine”, Proceedings of the 26th Annual Meeting on Association for Computational Linguistics, 1988, pp. 33-40. |
Morgan, B., “Business Objects (Business Objects for Windows) Business Objects Inc.”, DBMS, vol. 5, No. 10, Sep. 1992, 3 pages. |
Motro, Amihai, “Flex: A Tolerant and Cooperative User Interface to Databases”, IEEE Transactions on Knowledge and Data Engineering, vol. 2, No. 2, Jun. 1990, pp. 231-246. |
Mountford et al., “Talking and Listening to Computers”, The Art of Human-Computer Interface Design, Apple Computer, Inc., Addison-Wesley Publishing Company, Inc., 1990, 17 pages. |
Mozer, Michael C., “An Intelligent Environment must be Adaptive”, IEEE Intelligent Systems, 1999, pp. 11-13. |
Muhlhauser, Max, “Context Aware Voice User Interfaces for Workflow Support”, 2007, 254 pages. |
Murty et al., “Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition”, IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages. |
Murveit et al., “Integrating Natural Language Constraints into HMM-Based Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing, Apr. 1990, 5 pages. |
Murveit et al., “Speech Recognition in SRI's Resource Management and ATIS Systems”, Proceedings of the Workshop on Speech and Natural Language, 1991, pp. 94-100. |
Nakagawa et al., “Speaker Recognition by Combining MFCC and Phase Information”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 2010, 4 pages. |
Naone, Erica, “TR10: Intelligent Software Assistant”, Technology Review, Mar.-Apr. 2009, 2 pages. |
Neches et al., “Enabling Technology for Knowledge Sharing”, Fall, 1991, pp. 37-56. |
Niesler et al., “A Variable-Length Category-Based N-Gram Language Model”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96), vol. 1, May 1996, 6 pages. |
Noth et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, pp. 519-532. |
Odubiyi et al., “SAIRE—A Scalable Agent-Based Information Retrieval Engine”, Proceedings of the First International Conference on Autonomous Agents, 1997, 12 pages. |
Owei et al., “Natural Language Query Filtration in the Conceptual Query Language”, IEEE, 1997, pp. 539-549. |
Pannu et al., “A Learning Personal Agent for Text Filtering and Notification”, Proceedings of the International Conference of Knowledge Based Systems, 1996, pp. 1-11. |
Papadimitriou et al., “Latent Semantic Indexing: A Probabilistic Analysis”, Available online at <http://citeseerx.ist.psu.edu/messages/downloadsexceeded.html>, Nov. 14, 1997, 21 pages. |
Parsons, T. W., “Voice and Speech Processing”, Pitch and Formant Estimation, McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 1987, 15 pages. |
Parsons, T. W., “Voice and Speech Processing”, Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 1987, 5 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012637, dated Apr. 10, 1995, 7 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012666, dated Mar. 1, 1995, 5 pages. |
International Search Report received for PCT Patent Application No. PCT/US1993/012666, dated Nov. 9, 1994, 8 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1994/011011, dated Feb. 28, 1996, 4 pages. |
International Search Report received for PCT Patent Application No. PCT/US1994/011011, dated Feb. 8, 1995, 7 pages. |
Written Opinion received for PCT Patent Application No. PCT/US1994/011011, dated Aug. 21, 1995, 4 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1995/008369, dated Oct. 9, 1996, 4 pages. |
International Search Report received for PCT Patent Application No. PCT/US1995/008369, dated Nov. 8, 1995, 6 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2011/020861, dated Nov. 29, 2011, 12 pages. |
Pereira, Fernando, “Logic for Natural Language Analysis”, SRI International, Technical Note 275, Jan. 1983, 194 pages. |
Perrault et al., “Natural-Language Interfaces”, SRI International, Technical Note 393, Aug. 22, 1986, 48 pages. |
Phoenix Solutions, Inc., “Declaration of Christopher Schmandt Regarding the MIT Galaxy System”, West Interactive Corp., A Delaware Corporation, Document 40, Jul. 2, 2010, 162 pages. |
Picone, J., “Continuous Speech Recognition using Hidden Markov Models”, IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages. |
Pulman et al., “Clare: A Combined Language and Reasoning Engine”, Proceedings of JFIT Conference, Available online at <http://www.cam.sri.com/tr/crc042/paper.ps.Z>, 1993, 8 pages. |
Domingue et al., “Web Service Modeling Ontology (WSMO)—An Ontology for Semantic Web Services”, Position Paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, Jun. 2005, 6 pages. |
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers”, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398>, 2001, 4 pages. |
Dowding et al., “Gemini: A Natural Language System for Spoken-Language Understanding”, Proceedings of the Thirty-First Annual Meeting of the Association for Computational Linguistics, 1993, 8 pages. |
Dowding et al., “Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser”, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 1994, 7 pages. |
Elio et al., “On Abstract Task Models and Conversation Policies”, Proc. Workshop on Specifying and Implementing Conversation Policies, Autonomous Agents'99 Conference, 1999, pp. 1-10. |
Epstein et al., “Natural Language Access to a Melanoma Data Base”, SRI International, Sep. 1978, 7 pages. |
Ericsson et al., “Software Illustrating a Unified Approach to Multimodality and Multilinguality in the In-Home Domain”, Talk and Look: Tools for Ambient Linguistic Knowledge, Dec. 2006, 127 pages. |
Evi, “Meet Evi: The One Mobile Application that Provides Solutions for your Everyday Problems”, Feb. 2012, 3 pages. |
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results”, List of Publications Manually Reviewed for the Search of U.S. Pat. No. 7,177,798, Mar. 22, 2013, 1 page. |
Feigenbaum et al., “Computer-Assisted Semantic Annotation of Scientific Life Works”, Oct. 15, 2007, 22 pages. |
Ferguson et al., “TRIPS: An Integrated Intelligent Problem-Solving Assistant”, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) and Tenth Conference on Innovative Applications of Artificial Intelligence (IAAI-98), 1998, 7 pages. |
Fikes et al., “A Network-Based Knowledge Representation and its Natural Deduction System”, SRI International, Jul. 1977, 43 pages. |
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook”, Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages. |
Gamback et al., “The Swedish Core Language Engine”, NOTEX Conference, 1992, 17 pages. |
Gannes, Liz, “Alfred App Gives Personalized Restaurant Recommendations”, AllThingsD, Jul. 18, 2011, pp. 1-3. |
Gautier et al., “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering”, CiteSeerx, 1993, pp. 89-97. |
Gervasio et al., “Active Preference Learning for Personalized Calendar Scheduling Assistance”, CiteSeerx, Proceedings of IUI'05, Jan. 2005, pp. 90-97. |
Glass, Alyssa, “Explaining Preference Learning”, CiteSeerx, 2006, pp. 1-5. |
Glass et al., “Multilingual Language Generation Across Multiple Domains”, International Conference on Spoken Language Processing, Japan, Sep. 1994, 5 pages. |
Glass et al., “Multilingual Spoken-Language Understanding in the Mit Voyager System”, Available online at <http://groups.csail.mit.edu/sls/publications/1995/speechcomm95-voyager.pdf>, Aug. 1995, 29 pages. |
Goddeau et al., “A Form-Based Dialogue Manager for Spoken Language Applications”, Available online at <http://phasedance.com/pdf!icslp96.pdf>, Oct. 1996, 4 pages. |
Goddeau et al., “Galaxy: A Human-Language Interface to On-Line Travel Information”, International Conference on Spoken Language Processing, Yokohama, 1994, pp. 707-710. |
Goldberg et al., “Using Collaborative Filtering to Weave an Information Tapestry”, Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages. |
Gorin et al., “On Adaptive Acquisition of Language”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'90), vol. 1, Apr. 1990, 5 pages. |
Gotoh et al., “Document Space Models Using Latent Semantic Analysis”, In Proceedings of Eurospeech, 1997, 4 pages. |
Gray, R. M., “Vector Quantization”, IEEE ASSP Magazine, Apr. 1984, 26 pages. |
Green, C., “The Application of Theorem Proving to Question-Answering Systems”, SRI Stanford Research Institute, Artificial Intelligence Group, Jun. 1969, 169 pages. |
Gregg et al., “DSS Access on the WWW: An Intelligent Agent Prototype”, Proceedings of the Americas Conference on Information Systems, Association for Information Systems, 1998, 3 pages. |
Grishman et al., “Computational Linguistics: An Introduction”, Cambridge University Press, 1986, 172 pages. |
Grosz et al., “Dialogic: A Core Natural-Language Processing System”, SRI International, Nov. 1982, 17 pages. |
Grosz et al., “Research on Natural-Language Processing at SRI”, SRI International, Nov. 1981, 21 pages. |
Grosz, B., “Team: A Transportable Natural-Language Interface System”, Proceedings of the First Conference on Applied Natural Language Processing, 1983, 7 pages. |
Grosz et al., “TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces”, Artificial Intelligence, vol. 32, 1987, 71 pages. |
Gruber, Tom, “(Avoiding) the Travesty of the Commons”, Presentation at NPUC, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006, 52 pages. |
Gruber, Tom, “2021: Mass Collaboration and the Really New Economy”, TNTY Futures, vol. 1, No. 6, Available online at <http://tomgruber.org/writing/tnty2001.htm>, Aug. 2001, 5 pages. |
Gruber, Thomas R., “A Translation Approach to Portable Ontology Specifications”, Knowledge Acquisition, vol. 5, No. 2, Jun. 1993, pp. 199-220. |
Gruber et al., “An Ontology for Engineering Mathematics”, Fourth International Conference on Principles of Knowledge Representation and Reasoning, Available online at <http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html>, 1994, pp. 1-22. |
Gruber, Thomas R., “Automated Knowledge Acquisition for Strategic Knowledge”, Machine Learning, vol. 4, 1989, pp. 293-336. |
Gruber, Tom, “Big Think Small Screen: How Semantic Computing in the Cloud will Revolutionize the Consumer Experience on the Phone”, Keynote Presentation at Web 3.0 Conference, Jan. 2010, 41 pages. |
Gruber, Tom, “Collaborating Around Shared Content on the WWW, W3C Workshop on WWW and Collaboration”, Available online at <http://wwww3.org/Collaboration/Workshop/Proceedings/P9.html>, Sep. 1995, 1 page. |
Gruber, Tom, “Collective Knowledge Systems: Where the Social Web Meets the Semantic Web”, Web Semantics: Science, Services and Agents on the World Wide Web, 2007, pp. 1-19. |
Gruber, Tom, “Despite Our Best Efforts, Ontologies are not the Problem”, AAAI Spring Symposium, Available online at <http://tomgruber.org/writing/aaai-ss08.htm>, Mar. 2008, pp. 1-40. |
Gruber, Tom, “Enterprise Collaboration Management with Intraspect”, Intraspect Technical White Paper, Jul. 2001, pp. 1-24. |
Gruber, Tom, “Every Ontology is a Treaty—A Social Agreement—Among People with Some Common Motive in Sharing”, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, No. 3, 2004, pp. 1-5. |
Gruber et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm”, Knowledge Systems Laboratory, Technical Report KSL 92-59, Dec. 1991, Updated Feb. 1993, 24 pages. |
Gruber, Tom, “Helping Organizations Collaborate, Communicate, and Learn”, Presentation to NASA Ames Research, Available online at <http://tomgruber.org/writing/organizational-intelligence-talk.htm>, Mar.-Oct. 2003, 30 pages. |
Gruber, Tom, “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience”, Presentation at Semantic Technologies Conference, Available online at <http://tomgruber.org/writing/semtech08.htm>, May 20, 2008, pp. 1-40. |
Gruber, Thomas R., “Interactive Acquisition of Justifications: Learning “Why” by Being Told “What””, Knowledge Systems Laboratory, Technical Report KSL 91-17, Original Oct. 1990, Revised Feb. 1991, 24 pages. |
Gruber, Tom, “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing”, Proceedings of the International CIDOC CRM Symposium, Available online at <http://tomgruber.org/writing/cidoc-ontology.htm>, Mar. 26, 2003, 21 pages. |
Gruber et al., “Machine-Generated Explanations of Engineering Models: A Compositional Modeling Approach”, Proceedings of International Joint Conference on Artificial Intelligence, 1993, 7 pages. |
Gruber et al., “NIKE: A National Infrastructure for Knowledge Exchange”, A Whitepaper Advocating and ATP Initiative on Technologies for Lifelong Learning, Oct. 1994, pp. 1-10. |
Gruber, Tom, “Ontologies, Web 2.0 and Beyond”, Ontology Summit, Available online at <http://tomgruber.org/writing/ontolog-social-web-keynote.htm>, Apr. 2007, 17 pages. |
Gruber, Tom, “Ontology of Folksonomy: A Mash-Up of Apples and Oranges”, Int'l Journal on Semantic Web & Information Systems, vol. 3, No. 2, 2007, 7 pages. |
Gruber, Tom, “Siri, A Virtual Personal Assistant—Bringing Intelligence to the Interface”, Semantic Technologies Conference, Jun. 16, 2009, 21 pages. |
Gruber, Tom, “TagOntology”, Presentation to Tag Camp, Oct. 29, 2005, 20 pages. |
Gruber et al., “Toward a Knowledge Medium for Collaborative Product Development”, Proceedings of the Second International Conference on Artificial Intelligence in Design, Jun. 1992, pp. 1-19. |
Gruber, Thomas R., “Toward Principles for the Design of Ontologies used for Knowledge Sharing?”, International Journal of Human-Computer Studies, vol. 43, No. 5-6, Nov. 1995, pp. 907-928. |
Gruber, Tom, “Where the Social Web Meets the Semantic Web”, Presentation at the 5th International Semantic Web Conference, Nov. 2006, 38 pages. |
Guida et al., “NLI: A Robust Interface for Natural Language Person-Machine Communication”, International Journal of Man-Machine Studies, vol. 17, 1982, 17 pages. |
Guzzoni et al., “A Unified Platform for Building Intelligent Web Interaction Assistants”, Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 2006, 4 pages. |
Guzzoni et al., “Active, A Platform for Building Intelligent Operating Rooms”, Surgetica 2007 Computer-Aided Medical Interventions: Tools and Applications, 2007, pp. 191-198. |
Guzzoni et al., “Active, A platform for Building Intelligent Software”, Computational Intelligence, Available online at <http://www.informatik.uni-trier.del-ley/pers/hd/g/Guzzoni:Didier >, 2006, 5 pages. |
Guzzoni et al., “Active, A Tool for Building Intelligent User Interfaces”, ASC 2007, Palma de Mallorca, Aug. 2007, 6 pages. |
Guzzoni, D., “Active: A Unified Platform for Building Intelligent Assistant Applications”, Oct. 25, 2007, 262 pages. |
Guzzoni et al., “Many Robots Make Short Work”, AAAI Robot Contest, SRI International, 1996, 9 pages. |
Guzzoni et al., “Modeling Human-Agent Interaction with Active Ontologies”, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 2007, 8 pages. |
Haas et al., “An Approach to Acquiring and Applying Knowledge”, SRI international, Nov. 1980, 22 pages. |
Hadidi et al., “Student's Acceptance of Web-Based Course Offerings: An Empirical Assessment”, Proceedings of the Americas Conference on Information Systems(AMCIS), 1998, 4 pages. |
Hardwar, Devindra, “Driving App Waze Builds its own Siri for Hands-Free Voice Control”, Available online at <http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/>, retrieved on Feb. 9, 2012, 4 pages. |
Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform”, In Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages. |
Hawkins et al., “Hierarchical Temporal Memory: Concepts, Theory and Terminology”, Numenta, Inc., Mar. 27, 2007, 20 pages. |
He et al., “Personal Security Agent: KQML-Based PKI”, The Robotics Institute, Carnegie-Mellon University, Paper, 1997, 14 pages. |
Helm et al., “Building Visual Language Parsers”, Proceedings of CHI'91, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1991, 8 pages. |
Hendrix et al., “Developing a Natural Language Interface to Complex Data”, ACM Transactions on Database Systems, vol. 3, No. 2, Jun. 1978, pp. 105-147. |
Hendrix, Gary G., “Human Engineering for Applied Natural Language Processing”, SRI International, Technical Note 139, Feb. 1977, 27 pages. |
Hendrix, Gary G., “Klaus: A System for Managing Information and Computational Resources”, SRI International, Technical Note 230, Oct. 1980, 34 pages. |
Hendrix, Gary G., “Lifer: A Natural Language Interface Facility”, SRI Stanford Research Institute, Technical Note 135, Dec. 1976, 9 pages. |
Hendrix, Gary G., “Natural-Language Interface”, American Journal of Computational Linguistics, vol. 8, No. 2, Apr.-Jun. 1982, pp. 56-61. |
Hendrix, Gary G., “The Lifer Manual: A Guide to Building Practical Natural Language Interfaces”, SRI International, Technical Note 138, Feb. 1977, 76 pages. |
Hendrix et al., “Transportable Natural-Language Interfaces to Databases”, SRI International, Technical Note 228, Apr. 30, 1981, 18 pages. |
Hermansky, H., “Perceptual Linear Predictive (PLP) Analysis of Speech”, Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages. |
Hermansky, H., “Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing”, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'93), Apr. 1993, 4 pages. |
Hirschman et al., “Multi-Site Data Collection and Evaluation in Spoken Language Understanding”, Proceedings of the Workshop on Human Language Technology, 1993, pp. 19-24. |
Hobbs et al., “Fastus: A System for Extracting Information from Natural-Language Text”, SRI International, Technical Note 519, Nov. 19, 1992, 26 pages. |
Hobbs et al., “Fastus: Extracting Information from Natural-Language Texts”, SRI International, 1992, pp. 1-22. |
Hobbs, Jerry R., “Sublanguage and Knowledge”, SRI International, Technical Note 329, Jun. 1984, 30 pages. |
Hodjat et al., “Iterative Statistical Language Model Generation for use with an Agent-Oriented Natural Language Interface”, Proceedings of HCI International, vol. 4, 2003, pp. 1422-1426. |
Hoehfeld et al., “Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm”, IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages. |
Holmes, J. N., “Speech Synthesis and Recognition-Stochastic Models for Word Recognition”, Published by Chapman & Hall, London, ISBN 0 412 534304, 1998, 7 pages. |
Hon et al., “CMU Robust Vocabulary—Independent Speech Recognition System”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-91), Apr. 1991, 4 pages. |
Huang et al., “The SPHINX-II Speech Recognition System: An Overview”, Computer, Speech and Language, vol. 7, No. 2, 1993, 14 pages. |
IBM, “Integrated Audio-Graphics User Interface”, IBM Technical Disclosure Bulletin, vol. 33, No. 11, Apr. 1991, 4 pages. |
IBM, “Speech Editor”, IBM Technical Disclosure Bulletin, vol. 29, No. 10, Mar. 10, 1987, 3 pages. |
IBM, “Speech Recognition with Hidden Markov Models of Speech Waveforms”, IBM Technical Disclosure Bulletin, vol. 34, No. 1, Jun. 1991, 10 pages. |
Intraspect Software, “The Intraspect Knowledge Management Solution: Technical Overview”, Available online at <http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf>, 1998, 18 pages. |
Iowegian International, “FIR Filter Properties, DSPGuru, Digital Signal Processing Central”, Available online at <http://www.dspguru.com/dsp/faq/fir/properties> retrieved on Jul. 28, 2010, 6 pages. |
Issar et al., “CMU's Robust Spoken Language Understanding System”, Proceedings of Eurospeech, 1993, 4 pages. |
Issar, Sunil, “Estimation of Language Models for New Spoken Language Applications”, Proceedings of 4th International Conference on Spoken language Processing, Oct. 1996, 4 pages. |
Jacobs et al., “Scisor: Extracting Information from On-Line News”, Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages. |
Janas, Jurgen M., “The Semantics-Based Natural Language Interface to Relational Databases”, Chapter 6, Cooperative Interfaces to Information Systems, 1986, pp. 143-188. |
Goliath, “2004 Chrysler Pacifica: U-Connect Hands-Free Communication System. (The Best and Brightest of 2004) (Brief Article)”, Automotive Industries, Sep. 2003, 1 pages. |
Massy, Kevin, “2007 Lexus GS 450H, 4Dr Sedan (3.5L, 6cyl Gas/Electric Hybrid CVT)”, ZDNet Reviews, Reviewed on Aug. 3, 2006, 10 pages. |
“All Music”, Available online at <http://www.allmusic.com/cg/amg.dll?p=amg&sql=32:amg/info_pages/a_about.html>, retrieved on Mar. 19, 2007, 2 pages. |
“BluePhoneElite: About”, Available online at <http://www.reelintelligence.com/BluePhoneElite>, retrieved on Sep. 25, 2006, 2 pages. |
“BluePhoneElite: Features”, Available online at <http://www.reelintelligence.com/BluePhoneElite/features.shtml>, retrieved on Sep. 25, 2006, 2 pages. |
“Digital Audio in the New Era”, Electronic Design and Application, No. 6, Jun. 30, 2003, 3 pages. |
“Interactive Voice”, Available online at <http://www.helloivee.com/company/>, retrieved on Feb. 10, 2014, 2 pages. |
“Meet Ivee, Your Wi-Fi Voice Activated Assistant”, Available online at <http://www.helloivee.com/>, retrieved from on Feb. 10, 2014, 8 pages. |
“Mobile Speech Solutions, Mobile Accessibility”, SVOX AG Product Information Sheet, Available online at <http://www.svox.com/site/bra840604/con782768/mob965831936.aSQ?osLang=1>, 1 page. |
Wireless Ground, “N200 Hands-Free Bluetooth Car Kit”, Available on line at <www.wirelessground.com>, retrieved on Mar. 19, 2007, 3 pages. |
“PhatNoise”, Voice Index on Tap, Kenwood Music Keg, Available online at <http://www.phatnoise.com/kenwood/kenwoodssamail.html>, retrieved on Jul. 13, 2006, 1 pages. |
“What is Fuzzy Logic?”, Available online at <http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq-doc-2.html>, retrieved on Mar. 19, 2007, 5 pages. |
“Windows XP: A Big Surprise!—Experiencing Amazement from Windows XP”, New Computer, No. 2, Feb. 28, 2002, 8 pages. |
Aikawa et al., “Generation for Multilingual MT”, Available online at <http://mtarchive.info/MTS-2001-Aikawa.pdf>, retrieved on Sep. 18, 2001, 6 pages. |
Anhui USTC IFL Ytek Co. Ltd., “Flytek Research Center Information Datasheet”, Available online at <http://www.iflttek.com/english/Research_htm>, retrieved on Oct. 15, 2004, 3 pages. |
Anonymous, “Speaker Recognition”, Wikipedia, The Free Enclyclopedia, Nov. 2, 2010, 4 pages. |
Applebaum et al., “Enhancing the Discrimination of Speaker Independent Hidden Markov Models with Corrective Training”, International Conference on Acoustics, Speech, and Signal Processing, May 23, 1989, pp. 302-305. |
Bellegarda et al., “Tied Mixture Continuous Parameter Modeling for Speech Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, No. 12, Dec. 1990, pp. 2033-2045. |
Borden IV, G.R., “An Aural User Interface for Ubiquitous Computing”, Proceedings of the 6th International Symposium on Wearable Computers, IEEE, 2002, 2 pages. |
Brain, Marshall, “How MP3 Files Work”, Available online at <http://computerhowstuffworks.com/mp31.htm>, retrieved on Mar. 19, 2007, 4 pages. |
Chang et al., “Discriminative Training of Dynamic Programming Based Speech Recognizers”, IEEE Transactions on Speech and Audio Processing, vol. 1, No. 2, Apr. 1993, pp. 135-143. |
Cheyer et al., “Demonstration Video of Multimodal Maps Using an Agent Architecture”, Published by SRI International no later than 1996, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Multimodal Maps Using an Agent Architecture”, 1996, 6 pages. |
Cheyer et al., “Demonstration Video of Multimodal Maps Using an Open-Agent Architecture”, Published by SRI International no later than 1996, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Multimodal Maps Using an Open-Agent Architecture”, 6 pages. |
Cheyer, A., “Demonstration Video of Vanguard Mobile Portal”, Published by SRI International no later than 2004, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Vanguard Mobile Portal”, 2004, 10 pages. |
Choi et al., “Acoustic and Visual Signal Based Context Awareness System for Mobile Application”, IEEE Transactions on Consumer Electronics, vol. 57, No. 2, May 2011, pp. 738-746. |
Dusan et al., “Multimodal Interaction on PDA's Integrating Speech and Pen Inputs”, Eurospeech Geneva, 2003, 4 pages. |
Kickstarter, “Ivee Sleek: Wi-Fi Voice-Activated Assistant”, Available online at <https://www.kickstarter.com/discover/categories/hardware?ref=category>, retrieved on Feb. 10, 2014, 13 pages. |
Lamel et al., “Generation and Synthesis of Broadcast Messages”, Proceedings of ESCA-NATO Workshop: Applications of Speech Technology, Sep. 10, 1993, 4 pages. |
Macsimum News, “Apple Files Patent for an Audio Interface for the iPod”, Available online at <http://www.macsimumnews.com/index.php/archive/apple_files_patent_for_an_audio_interface_for_the_ipod>, retrieved on May 4, 2006, 8 pages. |
Navigli, Roberto, “Word Sense Disambiguation: A Survey”, ACM Computing Surveys, vol. 41, No. 2, Article 10, Feb. 2009, 70 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2004/016519, dated Nov. 3, 2005, 16 pages. |
Partial International Search Report and Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2004/016519, dated Aug. 4, 2005, 6 pages. |
International Search Report received for PCT Patent Application No. PCT/US2011/037014, dated Oct. 4, 2011, 6 pages. |
Invitation to Pay Additional Search Fees received for PCT Application No. PCT/US2011/037014, dated Aug. 2, 2011, 6 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2012/029810, dated Oct. 3, 2013, 9 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/029810, dated Aug. 17, 2012, 11 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/043098, dated Nov. 14, 2012, 9 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/056382, dated Dec. 20, 2012, 11 pages. |
Gong et al., “Guidelines for Handheld Mobile Device Interface Design”, Proceedings of DSI 2004 Annual Meeting, 2004, pp. 3751-3756. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2013/040971, dated Nov. 12, 2013, 11 pages. |
Quazza et al., “Actor: A Multilingual Unit-Selection Speech Synthesis System”, Proceedings of 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Jan. 1, 2001, 6 pages. |
Ricker, T., “Apple Patents Auciio User Interface”, Engadget, Available online at <http://www.engadget.com/2006/05/04/apple-patents-audio-user-interface>, May 4, 2006, 6 pages. |
Santaholma, M., “Grammar Sharing Techniques for Rule-Based Multilingual NLP Systems”, Proceedings of the 16th Nordic Conference of Computational Linguistics, NODALIDA 2007, May 25, 2007, 8 pages. |
Taylor et al., “Speech Synthesis by Phonological Structure Matching”, International Speech Communication Association, vol. 2, Section 3, 1999, 4 pages. |
Xu, “Speech-Based Interactive Games for Language Learning: Reading, Translation and Question-Answering”, Computational Linguistics and Chinese Language Processing, vol. 14, No. 2, Jun. 2009, pp. 133-160. |
Yunker, John, “Beyond Borders: Web Globalization Strategies”, New Riders, Aug. 22, 2002, 11 pages. |
Combined Search Report and Examination Report under Sections 17 and 18(3) received for GB Patent Application No. 1009318.5, dated Oct. 8, 2010, 5 pages. |
Combined Search Report and Examination Report under Sections 17 and 18(3) received for GB Patent Application No. 1217449.6, dated Jan. 17, 2013, 6 pages. |
Horvitz et al., “Handsfree Decision Support: Toward a Non-invasive Human-Computer Interface”, Proceedings of the Symposium on Computer Applications in Medical Care, IEEE Computer Society Press, 1995, p. 955. |
Horvitz et al., “In Pursuit of Effective Handsfree Decision Support: Coupling Bayesian Inference, Speech Understanding, and User Models”, 1995, 8 pages. |
Number | Date | Country
20140273979 A1 | Sep. 2014 | US

Number | Date | Country
61783984 | Mar. 2013 | US