The disclosed implementations relate generally to digital assistants, and more specifically, to a method and system for a voice trigger for a digital assistant.
Recently, voice-based digital assistants, such as Apple's SIRI, have been introduced into the marketplace to handle various tasks such as web searching and navigation. One advantage of such voice-based digital assistants is that users can interact with a device in a hands-free manner without handling or even looking at the device. Hands-free operation can be particularly beneficial when a person cannot or should not physically handle a device, such as when they are driving. However, to initiate the voice-based assistant, users typically must press a button or select an icon on a touch screen. This tactile input detracts from the hands-free experience. Accordingly, it would be advantageous to provide a method and system of activating a voice-based digital assistant (or other speech-based service) using a voice input or signal, and not a tactile input.
Activating a voice-based assistant using a voice input requires monitoring an audio channel to detect the voice input. This monitoring consumes electrical power, which is a limited resource on handheld or portable devices that rely on batteries and on which such voice-based digital assistants often run. Thus, it would be beneficial to provide an energy-efficient voice trigger that can be used to initiate voice- and/or speech-based services on a device.
Accordingly, there is a need for a low-power voice trigger that can provide “always-listening” voice trigger functionality without excessively consuming limited power resources.
The implementations described below provide systems and methods for initiating a voice-based assistant using a voice trigger at an electronic device. Interactions with a voice-based digital assistant (or other speech-based services, such as a speech-to-text transcription service) often begin when a user presses an affordance (e.g., a button or icon) on a device in order to activate the digital assistant, followed by the device providing some indication to the user that the digital assistant is active and listening, such as a light, a sound (e.g., a beep), or a vocalized output (e.g., “what can I do for you?”). As described herein, voice triggers can also be implemented so that they are activated in response to a specific, predetermined word, phrase, or sound, and without requiring a physical interaction by the user. For example, a user may be able to activate a SIRI digital assistant on an iPHONE (both provided by Apple Inc., the assignee of the present application) by reciting the phrase “Hey, SIRI.” In response, the device outputs a beep, sound, or speech output (e.g., “what can I do for you?”) indicating to the user that the listening mode is active. Accordingly, the user can initiate an interaction with the digital assistant without having to physically touch the device that provides the digital assistant functionality.
One technique for initiating a speech-based service with a voice trigger is to have the speech-based service continuously listen for a predetermined trigger word, phrase, or sound (any of which may be referred to herein as “the trigger sound”). However, continuously operating the speech-based service (e.g., the voice-based digital assistant) requires substantial audio processing and battery power. In order to reduce the power consumed by providing voice trigger functionality, several techniques may be employed. In some implementations, the main processor of an electronic device (i.e., an “application processor”) is kept in a low-power or un-powered state while one or more sound detectors that use less power (e.g., because they do not rely on the application processor) remain active. (When it is in a low-power or un-powered state, an application processor or any other processor, program, or module may be described as inactive or in a standby mode.) For example, a low-power sound detector is used to monitor an audio channel for a trigger sound even when the application processor is inactive. This sound detector is sometimes referred to herein as a trigger sound detector. In some implementations, it is configured to detect particular sounds, phonemes, and/or words. The trigger sound detector (including hardware and/or software components) is designed to recognize specific words, sounds, or phrases, but is generally not capable of or optimized for providing full speech-to-text functionality, as such tasks require greater computational and power resources. Thus, in some implementations, the trigger sound detector recognizes whether a voice input includes a predefined pattern (e.g., a sonic pattern matching the words “Hey, SIRI”), but is not able to (or is not configured to) convert the voice input into text or recognize a significant amount of other words. Once the trigger sound has been detected, the digital assistant is brought out of a standby mode so that the user can provide a voice command.
In some implementations, the trigger sound detector is configured to detect several different trigger sounds, such as a set of words, phrases, sounds, and/or combinations thereof. The user can then use any of those sounds to initiate the speech-based service. In one example, a voice trigger is preconfigured to respond to the phrases “Hey, SIRI,” “Wake up, SIRI,” “Invoke my digital assistant,” or “Hello, HAL, do you read me, HAL?” In some implementations, the user must select one of the preconfigured trigger sounds as the sole trigger sound. In some implementations, the user selects a subset of the preconfigured trigger sounds, so that the user can initiate the speech-based service with different trigger sounds. In some implementations, all of the preconfigured trigger sounds remain valid trigger sounds.
In some implementations, another sound detector is used so that even the trigger sound detector can be kept in a low- or no-power mode for much of the time. For example, a different type of sound detector (e.g., one that uses less power than the trigger sound detector) is used to monitor an audio channel to determine whether the sound input corresponds to a certain type of sound. Sounds are categorized as different “types” based on certain identifiable characteristics of the sounds. For example, sounds that are of the type “human voice” have certain spectral content, periodicity, fundamental frequencies, etc. Other types of sounds (e.g., whistles, hand claps, etc.) have different characteristics. Sounds of different types are identified using audio and/or signal processing techniques, as described herein.
This sound detector is sometimes referred to herein as a “sound-type detector.” For example, if a predetermined trigger phrase is “Hey, SIRI”, the sound-type detector determines whether the input likely corresponds to human speech. If the trigger sound is a non-voiced sound, such as a whistle, the sound-type detector determines whether a sound input likely corresponds to a whistle. When the appropriate type of sound is detected, the sound-type detector initiates the trigger sound detector to further process and/or analyze the sound. And because the sound-type detector requires less power than the trigger sound detector (e.g., because it uses circuitry with lower power demands and/or more efficient audio processing algorithms than the trigger-sound detector), the voice trigger functionality consumes even less power than with a trigger sound detector alone.
In some implementations, yet another sound detector is used so that both the sound-type detector and the trigger sound detector described above can be kept in a low- or no-power mode for much of the time. For example, a sound detector that uses less power than the sound-type detector is used to monitor an audio channel to determine whether a sound input satisfies a predetermined condition, such as an amplitude (e.g., volume) threshold. This sound detector may be referred to herein as a noise detector. When the noise detector detects a sound that satisfies the predetermined threshold, the noise detector initiates the sound-type detector to further process and/or analyze the sound. And because the noise detector requires less power than either the sound-type detector or the trigger sound detector (e.g., because it uses circuitry with lower power demands and/or more efficient audio processing algorithms), the voice trigger functionality consumes even less power than the combination of the sound-type detector and the trigger sound detector without the noise detector.
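For illustration only, the following is a minimal Python sketch of the cascade described above, in which each detector wakes the next, more power-hungry stage only when its own test passes. The function names, the RMS threshold, and the voice-band heuristic are illustrative assumptions rather than details of any particular implementation.

```python
import numpy as np

def noise_detector(frame: np.ndarray, threshold: float = 0.02) -> bool:
    """Cheapest stage: does the frame exceed a simple amplitude threshold?"""
    return np.sqrt(np.mean(frame ** 2)) >= threshold  # RMS amplitude

def sound_type_detector(frame: np.ndarray, sample_rate: int = 16000) -> bool:
    """Middle stage: does the frame look like a human voice (coarse spectral test)?"""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    voice_band = spectrum[(freqs >= 85) & (freqs <= 3000)].sum()
    return voice_band > 0.5 * spectrum.sum()  # placeholder heuristic

def trigger_sound_detector(frame: np.ndarray) -> bool:
    """Most expensive stage: does the audio match the trigger phrase?
    A real detector would use a phoneme/keyword model; this is a stub."""
    return False  # assume no match in this sketch

def initiate_speech_based_service() -> None:
    print("Waking the application processor and starting the digital assistant...")

def process_frame(frame: np.ndarray) -> None:
    if not noise_detector(frame):
        return                      # sound-type detector stays asleep
    if not sound_type_detector(frame):
        return                      # trigger sound detector stays asleep
    if trigger_sound_detector(frame):
        initiate_speech_based_service()
```

In an actual device, the earlier stages would typically run on low-power audio circuitry or a DSP rather than on the application processor.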
In some implementations, any one or more of the sound detectors described above are operated according to a duty cycle, where they are cycled between “on” and “off” states. This further helps to reduce power consumption of the voice trigger. For example, in some implementations, the noise detector is “on” (i.e., actively monitoring an audio channel) for 10 milliseconds, and “off” for the following 90 milliseconds. This way, the noise detector is “off” 90% of the time, while still providing effectively continuous noise detection functionality. In some implementations, the on and off durations for the sound detectors are selected so that all of the detectors are activated while the trigger sound is still being input. For example, for a trigger phrase of “Hey, SIRI,” the sound detectors may be configured so that no matter where in the duty cycle(s) the trigger phrase begins, the trigger sound detector is activated in time to analyze a sufficient amount of the input. For example, the trigger sound detector will be activated in time to receive, process, and analyze the sounds “ay SIRI,” which is enough to determine that the sound matches the trigger phrase. In some implementations, sound inputs are stored in memory as they are received and passed to an upstream detector so that a larger portion of the sound input can be analyzed. Accordingly, even if the trigger sound detector is not initiated until after a trigger phrase has been uttered, it can still analyze the entire recorded trigger phrase.
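As a rough sketch of the duty cycling just described, assuming the 10 millisecond on-time and 90 millisecond off-time from the example; read_audio, detect, and wake_next_stage are hypothetical callables standing in for the audio channel, the detector's test, and the hand-off to the next stage:

```python
import time

ON_TIME_S = 0.010    # detector actively monitors the audio channel (10 ms)
OFF_TIME_S = 0.090   # detector is powered down (90 ms) -> "off" 90% of the time

def run_duty_cycled_detector(read_audio, detect, wake_next_stage):
    """Cycle a detector between "on" and "off" states as described above."""
    while True:
        frame = read_audio(ON_TIME_S)      # listen during the on-time window
        if detect(frame):
            wake_next_stage()              # hand off to the next detector
        time.sleep(OFF_TIME_S)             # remain idle during the off-time
```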
Some implementations provide a method for operating a voice trigger. The method is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors. The method includes receiving a sound input. The method further includes determining whether at least a portion of the sound input corresponds to a predetermined type of sound. The method further includes, upon a determination that at least a portion of the sound input corresponds to the predetermined type, determining whether the sound input includes predetermined content. The method further includes, upon a determination that the sound input includes the predetermined content, initiating a speech-based service. In some implementations, the speech-based service is a voice-based digital assistant. In some implementations, the speech-based service is a dictation service.
In some implementations, determining whether the sound input corresponds to a predetermined type of sound is performed by a first sound detector, and determining whether the sound input includes predetermined content is performed by a second sound detector. In some implementations, the first sound detector consumes less power while operating than the second sound detector. In some implementations, the first sound detector performs frequency-domain analysis of the sound input. In some implementations, determining whether the sound input corresponds to the predetermined type of sound is performed upon a determination that the sound input satisfies a predetermined condition (e.g., as determined by a third sound detector, discussed below).
In some implementations, the first sound detector periodically monitors an audio channel according to a duty cycle. In some implementations, the duty cycle includes an on-time of about 20 milliseconds, and an off-time of about 100 milliseconds.
In some implementations, the predetermined type is a human voice and the predetermined content is one or more words. In some implementations, determining whether at least a portion of the sound input corresponds to the predetermined type of sound includes determining whether at least a portion of the sound input includes frequencies characteristic of a human voice.
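One simple way to test whether a sound input includes frequencies characteristic of a human voice is to look for a periodic component whose fundamental frequency falls in a typical voice range. The following Python sketch uses autocorrelation for that purpose; the frequency bounds and the energy ratio are assumed values for illustration, not taken from the description above.

```python
import numpy as np

def has_voice_pitch(frame: np.ndarray, sample_rate: int = 16000,
                    f0_min: float = 85.0, f0_max: float = 300.0) -> bool:
    """Return True if the frame contains a periodic component whose fundamental
    frequency falls within an assumed human-voice range."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / f0_max)          # smallest lag of interest
    lag_max = int(sample_rate / f0_min)          # largest lag of interest
    if lag_max >= len(corr) or corr[0] == 0:
        return False
    peak_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    # Require the periodic peak to carry a meaningful share of the energy.
    return corr[peak_lag] > 0.3 * corr[0]
```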
In some implementations, the second sound detector is initiated in response to a determination by the first sound detector that the sound input corresponds to the predetermined type. In some implementations, the second sound detector is operated for at least a predetermined amount of time after a determination by the first sound detector that the sound input corresponds to the predetermined type. In some implementations, the predetermined amount of time corresponds to a duration of the predetermined content.
In some implementations, the predetermined content is one or more predetermined phonemes. In some implementations, the one or more predetermined phonemes constitute at least one word.
In some implementations, the method includes, prior to determining whether the sound input corresponds to a predetermined type of sound, determining whether the sound input satisfies a predetermined condition. In some implementations, the predetermined condition is an amplitude threshold. In some implementations, determining whether the sound input satisfies a predetermined condition is performed by a third sound detector, wherein the third sound detector consumes less power while operating than the first sound detector. In some implementations, the third sound detector periodically monitors an audio channel according to a duty cycle. In some implementations, the duty cycle includes an on-time of about 20 milliseconds, and an off-time of about 500 milliseconds. In some implementations, the third sound detector performs time-domain analysis of the sound input.
In some implementations, the method includes storing at least a portion of the sound input in memory, and providing the portion of the sound input to the speech-based service once the speech-based service is initiated. In some implementations, the portion of the sound input is stored in memory using direct memory access.
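The buffering behavior described above might be sketched as a simple ring buffer that retains recent audio so that the full trigger phrase (and any following speech) can be handed to the speech-based service once it starts. The class below is purely illustrative; an actual implementation would use the DMA-backed memory buffer mentioned above.

```python
from collections import deque

import numpy as np

class SoundInputBuffer:
    """Hypothetical ring buffer standing in for the DMA-backed memory buffer:
    recent audio is retained so it can be handed to the speech-based service
    (or to another detector) once it is initiated."""

    def __init__(self, max_frames: int = 200):
        self.frames = deque(maxlen=max_frames)   # oldest frames are dropped

    def append(self, frame: np.ndarray) -> None:
        self.frames.append(frame)

    def drain(self) -> np.ndarray:
        """Return everything recorded so far, e.g., the full trigger phrase."""
        audio = np.concatenate(list(self.frames)) if self.frames else np.empty(0)
        self.frames.clear()
        return audio
```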
In some implementations, the method includes determining whether the sound input corresponds to a voice of a particular user. In some implementations, the speech-based service is initiated upon a determination that the sound input includes the predetermined content and that the sound input corresponds to the voice of the particular user. In some implementations, the speech-based service is initiated in a limited access mode upon a determination that the sound input includes the predetermined content and that the sound input does not correspond to the voice of the particular user. In some implementations, the method includes, upon a determination that the sound input corresponds to the voice of the particular user, outputting a voice prompt including a name of the particular user.
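The speaker-dependent behavior described above can be summarized by a small dispatch routine. In this hypothetical sketch, has_trigger_content and matches_owner_voice stand in for the content and voice checks, and the user name "Alex" is a placeholder used only for illustration.

```python
def start_assistant(mode: str, greet_by_name: bool = False) -> str:
    # "Alex" is a placeholder user name for illustration only.
    greeting = "What can I do for you, Alex?" if greet_by_name else "What can I do for you?"
    print(greeting)
    return mode

def initiate_service(sound_input, has_trigger_content, matches_owner_voice):
    """Hypothetical dispatch following the logic described above."""
    if not has_trigger_content(sound_input):
        return None
    if matches_owner_voice(sound_input):
        return start_assistant(mode="full", greet_by_name=True)
    return start_assistant(mode="limited_access")   # e.g., no access to personal data
```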
In some implementations, determining whether the sound input includes predetermined content includes comparing a representation of the sound input to a reference representation, and determining that the sound input includes the predetermined content when the representation of the sound input matches the reference representation. In some implementations, a match is determined if the representation of the sound input matches the reference representation to a predetermined confidence. In some implementations, the method includes receiving a plurality of sound inputs including the sound input; and iteratively adjusting the reference representation, using respective ones of the plurality of sound inputs, in response to determining that the respective sound inputs include the predetermined content.
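As an illustrative sketch of matching to a predetermined confidence and iteratively adjusting the reference representation, the code below compares fixed-length feature vectors with cosine similarity and nudges the reference toward accepted inputs. The similarity measure, the threshold, and the adaptation rate are assumptions made for the example only.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.85   # assumed value; the text only says "predetermined"

def matches_reference(input_repr: np.ndarray, reference: np.ndarray) -> bool:
    """Compare a representation of the sound input to the reference representation,
    using cosine similarity as a stand-in for whatever measure a real system uses."""
    score = np.dot(input_repr, reference) / (
        np.linalg.norm(input_repr) * np.linalg.norm(reference))
    return score >= CONFIDENCE_THRESHOLD

def adapt_reference(reference: np.ndarray, accepted_inputs: list[np.ndarray],
                    rate: float = 0.1) -> np.ndarray:
    """Iteratively nudge the reference toward inputs that were accepted as the
    trigger, so that it tracks the user's voice over time."""
    for repr_ in accepted_inputs:
        reference = (1.0 - rate) * reference + rate * repr_
    return reference
```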
In some implementations, the method includes determining whether the electronic device is in a predetermined orientation, and upon a determination that the electronic device is in the predetermined orientation, activating a predetermined mode of the voice trigger. In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing down, and the predetermined mode is a standby mode. In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing up, and the predetermined mode is a listening mode.
Some implementations provide a method for operating a voice trigger. The method is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors. The method includes operating a voice trigger in a first mode. The method further includes determining whether the electronic device is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded. The method further includes, upon a determination that the electronic device is in a substantially enclosed space, switching the voice trigger to a second mode. In some implementations, the second mode is a standby mode.
Some implementations provide a method for operating a voice trigger. The method is performed at an electronic device including one or more processors and memory storing instructions for execution by the one or more processors. The method includes determining whether the electronic device is in a predetermined orientation, and, upon a determination that the electronic device is in the predetermined orientation, activating a predetermined mode of a voice trigger. In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing down, and the predetermined mode is a standby mode. In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing up, and the predetermined mode is a listening mode.
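A compact sketch of the orientation-dependent mode selection described above, assuming a conventional accelerometer z-axis reading (about +1 g with the display facing up and the device lying flat, about -1 g facing down); the tolerance value is illustrative:

```python
def select_voice_trigger_mode(accel_z_g: float, horizontal_tolerance: float = 0.2) -> str:
    """Map device orientation to a voice trigger mode, per the behavior above."""
    if accel_z_g > 1.0 - horizontal_tolerance:
        return "listening"   # screen substantially horizontal, facing up
    if accel_z_g < -1.0 + horizontal_tolerance:
        return "standby"     # screen substantially horizontal, facing down
    return "unchanged"       # not in a predetermined orientation
```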
In accordance with some implementations, an electronic device includes a sound receiving unit configured to receive sound input; and a processing unit coupled to the sound receiving unit. The processing unit is configured to determine whether at least a portion of the sound input corresponds to a predetermined type of sound; upon a determination that at least a portion of the sound input corresponds to the predetermined type, determine whether the sound input includes predetermined content; and upon a determination that the sound input includes the predetermined content, initiate a speech-based service. In some implementations, the processing unit is further configured to, prior to determining whether the sound input corresponds to a predetermined type of sound, determine whether the sound input satisfies a predetermined condition. In some implementations, the processing unit is further configured to determine whether the sound input corresponds to a voice of a particular user.
In accordance with some implementations, an electronic device includes a voice trigger unit configured to operate a voice trigger in a first mode of a plurality of modes; and a processing unit coupled to the voice trigger unit. In some implementations, the processing unit is configured to: determine whether the electronic device is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded; and upon a determination that the electronic device is in a substantially enclosed space, switch the voice trigger to a second mode. In some implementations, the processing unit is configured to determine whether the electronic device is in a predetermined orientation; and upon a determination that the electronic device is in the predetermined orientation, activate a predetermined mode of a voice trigger.
In accordance with some implementations, a computer-readable storage medium (e.g., a non-transitory computer readable storage medium) is provided, the computer-readable storage medium storing one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions for performing any of the methods described herein.
In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises means for performing any of the methods described herein.
In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises a processing unit configured to perform any of the methods described herein.
In accordance with some implementations, an electronic device (e.g., a portable electronic device) is provided that comprises one or more processors and memory storing one or more programs for execution by the one or more processors, the one or more programs including instructions for performing any of the methods described herein.
In accordance with some implementations, an information processing apparatus for use in an electronic device is provided, the information processing apparatus comprising means for performing any of the methods described herein.
Like reference numerals refer to corresponding parts throughout the drawings.
Specifically, once initiated, a digital assistant system is capable of accepting a user request at least partially in the form of a natural language command, request, statement, narrative, and/or inquiry. Typically, the user request seeks either an informational answer or performance of a task by the digital assistant system. A satisfactory response to the user request is generally either provision of the requested informational answer, performance of the requested task, or a combination of the two. For example, a user may ask the digital assistant system a question, such as “Where am I right now?” Based on the user's current location, the digital assistant may answer, “You are in Central Park near the west gate.” The user may also request the performance of a task, for example, by stating “Please invite my friends to my girlfriend's birthday party next week.” In response, the digital assistant may acknowledge the request by generating a voice output, “Yes, right away,” and then send a suitable calendar invite from the user's email address to each of the user's friends listed in the user's electronic address book or contact list. There are numerous other ways of interacting with a digital assistant to request information or performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms (e.g., as text, alerts, music, videos, animations, etc.).
As shown in
In some implementations, the DA server 106 includes a client-facing I/O interface 112, one or more processing modules 114, data and models 116, an I/O interface to external services 118, a photo and tag database 130, and a photo-tag module 132. The client-facing I/O interface facilitates the client-facing input and output processing for the digital assistant server 106. The one or more processing modules 114 utilize the data and models 116 to determine the user's intent based on natural language input and perform task execution based on the deduced user intent. Photo and tag database 130 stores fingerprints of digital photographs, and, optionally digital photographs themselves, as well as tags associated with the digital photographs. Photo-tag module 132 creates tags, stores tags in association with photographs and/or fingerprints, automatically tags photographs, and links tags to locations within photographs.
In some implementations, the DA server 106 communicates with external services 120 (e.g., navigation service(s) 122-1, messaging service(s) 122-2, information service(s) 122-3, calendar service 122-4, telephony service 122-5, photo service(s) 122-6, etc.) through the network(s) 110 for task completion or information acquisition. The I/O interface to the external services 118 facilitates such communications.
Examples of the user device 104 include, but are not limited to, a handheld computer, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a cellular telephone, a smartphone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, a remote control, or a combination of any two or more of these data processing devices or any other suitable data processing devices. More details on the user device 104 are provided in reference to an exemplary user device 104 shown in
Examples of the communication network(s) 110 include local area networks (LAN) and wide area networks (WAN), e.g., the Internet. The communication network(s) 110 may be implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
The server system 108 can be implemented on at least one data processing apparatus and/or a distributed network of computers. In some implementations, the server system 108 also employs various virtual devices and/or services of third party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 108.
Although the digital assistant system shown in
For example, in some implementations, a motion sensor 210 (e.g., an accelerometer), a light sensor 212, a GPS receiver 213, a temperature sensor, and a proximity sensor 214 are coupled to the peripherals interface 206 to facilitate orientation, light, and proximity sensing functions. In some implementations, other sensors 216, such as a biometric sensor, barometer, and the like, are connected to the peripherals interface 206, to facilitate related functionalities.
In some implementations, the user device 104 includes a camera subsystem 220 coupled to the peripherals interface 206. In some implementations, an optical sensor 222 of the camera subsystem 220 facilitates camera functions, such as taking photographs and recording video clips. In some implementations, the user device 104 includes one or more wired and/or wireless communication subsystems 224 that provide communication functions. The communication subsystems 224 typically include various communication ports, radio frequency receivers and transmitters, and/or optical (e.g., infrared) receivers and transmitters. In some implementations, the user device 104 includes an audio subsystem 226 coupled to one or more speakers 228 and one or more microphones 230 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In some implementations, the audio subsystem 226 is coupled to a voice trigger system 400. In some implementations, the voice trigger system 400 and/or the audio subsystem 226 includes low-power audio circuitry and/or programs (i.e., including hardware and/or software) for receiving and/or analyzing sound inputs, including, for example, one or more analog-to-digital converters, digital signal processors (DSPs), sound detectors, memory buffers, codecs, and the like. In some implementations, the low-power audio circuitry (alone or in addition to other components of the user device 104) provides voice (or sound) trigger functionality for one or more aspects of the user device 104, such as a voice-based digital assistant or other speech-based service. In some implementations, the low-power audio circuitry provides voice trigger functionality even when other components of the user device 104 are shut down and/or in a standby mode, such as the processor(s) 204, I/O subsystem 240, memory 250, and the like. The voice trigger system 400 is described in further detail with respect to
In some implementations, an I/O subsystem 240 is also coupled to the peripherals interface 206. In some implementations, the user device 104 includes a touch screen 246, and the I/O subsystem 240 includes a touch screen controller 242 coupled to the touch screen 246. When the user device 104 includes the touch screen 246 and the touch screen controller 242, the touch screen 246 and the touch screen controller 242 are typically configured to, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, such as capacitive, resistive, infrared, surface acoustic wave technologies, proximity sensor arrays, and the like. In some implementations, the user device 104 includes a display that does not include a touch-sensitive surface. In some implementations, the user device 104 includes a separate touch-sensitive surface. In some implementations, the user device 104 includes other input controller(s) 244. When the user device 104 includes the other input controller(s) 244, the other input controller(s) 244 are typically coupled to other input/control devices 248, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus.
The memory interface 202 is coupled to memory 250. In some implementations, memory 250 includes a non-transitory computer readable medium, such as high-speed random access memory and/or non-volatile memory (e.g., one or more magnetic disk storage devices, one or more flash memory devices, one or more optical storage devices, and/or other non-volatile solid-state memory devices).
In some implementations, memory 250 stores an operating system 252, a communications module 254, a graphical user interface module 256, a sensor processing module 258, a phone module 260, and applications 262, or a subset or superset thereof. The operating system 252 includes instructions for handling basic system services and for performing hardware dependent tasks. The communications module 254 facilitates communicating with one or more additional devices, one or more computers and/or one or more servers. The graphical user interface module 256 facilitates graphic user interface processing. The sensor processing module 258 facilitates sensor-related processing and functions (e.g., processing voice input received with the one or more microphones 230). The phone module 260 facilitates phone-related processes and functions. The application module 262 facilitates various functionalities of user applications, such as electronic-messaging, web browsing, media processing, navigation, imaging and/or other processes and functions. In some implementations, the user device 104 stores in memory 250 one or more software applications 270-1 and 270-2 each associated with at least one of the external service providers.
As described above, in some implementations, memory 250 also stores client-side digital assistant instructions (e.g., in a digital assistant client module 264) and various user data 266 (e.g., user-specific vocabulary data, preference data, and/or other data such as the user's electronic address book or contact list, to-do lists, shopping lists, etc.) to provide the client-side functionalities of the digital assistant.
In various implementations, the digital assistant client module 264 is capable of accepting voice input, text input, touch input, and/or gestural input through various user interfaces (e.g., the I/O subsystem 240) of the user device 104. The digital assistant client module 264 is also capable of providing output in audio, visual, and/or tactile forms. For example, output can be provided as voice, sound, alerts, text messages, menus, graphics, videos, animations, vibrations, and/or combinations of two or more of the above. During operation, the digital assistant client module 264 communicates with the digital assistant server (e.g., the digital assistant server 106,
In some implementations, the digital assistant client module 264 utilizes various sensors, subsystems and peripheral devices to gather additional information from the surrounding environment of the user device 104 to establish a context associated with a user input. In some implementations, the digital assistant client module 264 provides the context information or a subset thereof with the user input to the digital assistant server (e.g., the digital assistant server 106,
In some implementations, the context information that can accompany the user input includes sensor information, e.g., lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, etc. In some implementations, the context information also includes the physical state of the device, e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion patterns, cellular signal strength, etc. In some implementations, information related to the software state of the user device 104, e.g., running processes, installed programs, past and present network activities, background services, error logs, resource usage, etc., is also provided to the digital assistant server (e.g., the digital assistant server 106,
In some implementations, the DA client module 264 selectively provides information (e.g., at least a portion of the user data 266) stored on the user device 104 in response to requests from the digital assistant server. In some implementations, the digital assistant client module 264 also elicits additional input from the user via a natural language dialogue or other user interfaces upon request by the digital assistant server 106 (
In some implementations, memory 250 may include additional instructions or fewer instructions. Furthermore, various functions of the user device 104 may be implemented in hardware and/or in firmware, including in one or more signal processing and/or application specific integrated circuits, and the user device 104, thus, need not include all modules and applications illustrated in
The digital assistant system 300 includes memory 302, one or more processors 304, an input/output (I/O) interface 306, and a network communications interface 308. These components communicate with one another over one or more communication buses or signal lines 310.
In some implementations, memory 302 includes a non-transitory computer readable medium, such as high-speed random access memory and/or a non-volatile computer readable storage medium (e.g., one or more magnetic disk storage devices, one or more flash memory devices, one or more optical storage devices, and/or other non-volatile solid-state memory devices).
The I/O interface 306 couples input/output devices 316 of the digital assistant system 300, such as displays, keyboards, touch screens, and microphones, to the user interface module 322. The I/O interface 306, in conjunction with the user interface module 322, receives user inputs (e.g., voice input, keyboard inputs, touch inputs, etc.) and processes them accordingly. In some implementations, when the digital assistant is implemented on a standalone user device, the digital assistant system 300 includes any of the components and I/O and communication interfaces described with respect to the user device 104 in
In some implementations, the network communications interface 308 includes wired communication port(s) 312 and/or wireless transmission and reception circuitry 314. The wired communication port(s) receive and send communication signals via one or more wired interfaces, e.g., Ethernet, Universal Serial Bus (USB), FIREWIRE, etc. The wireless circuitry 314 typically receives and sends RF signals and/or optical signals from/to communications networks and other communications devices. The wireless communications may use any of a plurality of communications standards, protocols and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. The network communications interface 308 enables communication between the digital assistant system 300 and networks, such as the Internet, an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices.
In some implementations, the non-transitory computer readable storage medium of memory 302 stores programs, modules, instructions, and data structures including all or a subset of: an operating system 318, a communications module 320, a user interface module 322, one or more applications 324, and a digital assistant module 326. The one or more processors 304 execute these programs, modules, and instructions, and read/write from/to the data structures.
The operating system 318 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communications between various hardware, firmware, and software components.
The communications module 320 facilitates communications between the digital assistant system 300 and other devices over the network communications interface 308. For example, the communication module 320 may communicate with the communications module 254 of the device 104 shown in
In some implementations, the user interface module 322 receives commands and/or inputs from a user via the I/O interface 306 (e.g., from a keyboard, touch screen, and/or microphone), and provides user interface objects on a display.
The applications 324 include programs and/or modules that are configured to be executed by the one or more processors 304. For example, if the digital assistant system is implemented on a standalone user device, the applications 324 may include user applications, such as games, a calendar application, a navigation application, or an email application. If the digital assistant system 300 is implemented on a server farm, the applications 324 may include resource management applications, diagnostic applications, or scheduling applications, for example.
Memory 302 also stores the digital assistant module (or the server portion of a digital assistant) 326. In some implementations, the digital assistant module 326 includes the following sub-modules, or a subset or superset thereof: an input/output processing module 328, a speech-to-text (STT) processing module 330, a natural language processing module 332, a dialogue flow processing module 334, a task flow processing module 336, a service processing module 338, and a photo module 132. Each of these processing modules has access to one or more of the following data and models of the digital assistant 326, or a subset or superset thereof: ontology 360, vocabulary index 344, user data 348, categorization module 349, disambiguation module 350, task flow models 354, service models 356, photo tagging module 358, search module 360, and local tag/photo storage 362.
In some implementations, using the processing modules (e.g., the input/output processing module 328, the STT processing module 330, the natural language processing module 332, the dialogue flow processing module 334, the task flow processing module 336, and/or the service processing module 338), data, and models implemented in the digital assistant module 326, the digital assistant system 300 performs at least some of the following: identifying a user's intent expressed in a natural language input received from the user; actively eliciting and obtaining information needed to fully deduce the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining the task flow for fulfilling the deduced intent; and executing the task flow to fulfill the deduced intent. In some implementations, the digital assistant also takes appropriate actions when a satisfactory response was not or could not be provided to the user for various reasons.
In some implementations, as discussed below, the digital assistant system 300 identifies, from a natural language input, a user's intent to tag a digital photograph, and processes the natural language input so as to tag the digital photograph with appropriate information. In some implementations, the digital assistant system 300 performs other tasks related to photographs as well, such as searching for digital photographs using natural language input, auto-tagging photographs, and the like.
As shown in
In some implementations, the speech-to-text processing module 330 receives speech input (e.g., a user utterance captured in a voice recording) through the I/O processing module 328. In some implementations, the speech-to-text processing module 330 uses various acoustic and language models to recognize the speech input as a sequence of phonemes, and ultimately, a sequence of words or tokens written in one or more languages. The speech-to-text processing module 330 is implemented using any suitable speech recognition techniques, acoustic models, and language models, such as Hidden Markov Models, Dynamic Time Warping (DTW)-based speech recognition, and other statistical and/or analytical techniques. In some implementations, the speech-to-text processing can be performed at least partially by a third party service or on the user's device. Once the speech-to-text processing module 330 obtains the result of the speech-to-text processing (e.g., a sequence of words or tokens), it passes the result to the natural language processing module 332 for intent deduction.
The natural language processing module 332 (“natural language processor”) of the digital assistant 326 takes the sequence of words or tokens (“token sequence”) generated by the speech-to-text processing module 330, and attempts to associate the token sequence with one or more “actionable intents” recognized by the digital assistant. As used herein, an “actionable intent” represents a task that can be performed by the digital assistant 326 and/or the digital assistant system 300 (
In some implementations, in addition to the sequence of words or tokens obtained from the speech-to-text processing module 330, the natural language processor 332 also receives context information associated with the user request (e.g., from the I/O processing module 328). The natural language processor 332 optionally uses the context information to clarify, supplement, and/or further define the information contained in the token sequence received from the speech-to-text processing module 330. The context information includes, for example, user preferences, hardware and/or software states of the user device, sensor information collected before, during, or shortly after the user request, prior interactions (e.g., dialogue) between the digital assistant and the user, and the like.
In some implementations, the natural language processing is based on an ontology 360. The ontology 360 is a hierarchical structure containing a plurality of nodes, each node representing either an “actionable intent” or a “property” relevant to one or more of the “actionable intents” or other “properties.” As noted above, an “actionable intent” represents a task that the digital assistant system 300 is capable of performing (e.g., a task that is “actionable” or can be acted on). A “property” represents a parameter associated with an actionable intent or a sub-aspect of another property. A linkage between an actionable intent node and a property node in the ontology 360 defines how a parameter represented by the property node pertains to the task represented by the actionable intent node.
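The node-and-linkage structure described above can be sketched as a small graph data type. The Python fragment below is illustrative only; the "restaurant reservation" intent and its property names follow the examples used later in this description.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One ontology node: either an actionable intent or a property."""
    name: str
    kind: str                               # "intent" or "property"
    linked: list["Node"] = field(default_factory=list)

def link(intent: Node, prop: Node) -> None:
    """A linkage records how the property parameterizes the intent's task."""
    intent.linked.append(prop)
    prop.linked.append(intent)

# Illustrative fragment of a "restaurant reservation" domain.
reserve = Node("restaurant reservation", "intent")
for prop_name in ("restaurant", "date/time", "party size"):
    link(reserve, Node(prop_name, "property"))
```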
In some implementations, the ontology 360 is made up of actionable intent nodes and property nodes. Within the ontology 360, each actionable intent node is linked to one or more property nodes either directly or through one or more intermediate property nodes. Similarly, each property node is linked to one or more actionable intent nodes either directly or through one or more intermediate property nodes. For example, the ontology 360 shown in
An actionable intent node, along with its linked concept nodes, may be described as a “domain.” In the present discussion, each domain is associated with a respective actionable intent, and refers to the group of nodes (and the relationships therebetween) associated with the particular actionable intent. For example, the ontology 360 shown in
While
In some implementations, the ontology 360 includes all the domains (and hence actionable intents) that the digital assistant is capable of understanding and acting upon. In some implementations, the ontology 360 may be modified, such as by adding or removing domains or nodes, or by modifying relationships between the nodes within the ontology 360.
In some implementations, nodes associated with multiple related actionable intents may be clustered under a “super domain” in the ontology 360. For example, a “travel” super-domain may include a cluster of property nodes and actionable intent nodes related to travels. The actionable intent nodes related to travels may include “airline reservation,” “hotel reservation,” “car rental,” “get directions,” “find points of interest,” and so on. The actionable intent nodes under the same super domain (e.g., the “travels” super domain) may have many property nodes in common. For example, the actionable intent nodes for “airline reservation,” “hotel reservation,” “car rental,” “get directions,” “find points of interest” may share one or more of the property nodes “start location,” “destination,” “departure date/time,” “arrival date/time,” and “party size.”
In some implementations, each node in the ontology 360 is associated with a set of words and/or phrases that are relevant to the property or actionable intent represented by the node. The respective set of words and/or phrases associated with each node is the so-called “vocabulary” associated with the node. The respective set of words and/or phrases associated with each node can be stored in the vocabulary index 344 (
In some implementations, the natural language processor 332 shown in
In some implementations, the digital assistant system 300 also stores names of specific entities in the vocabulary index 344, so that when one of these names is detected in the user request, the natural language processor 332 will be able to recognize that the name refers to a specific instance of a property or sub-property in the ontology. In some implementations, the names of specific entities are names of businesses, restaurants, people, movies, and the like. In some implementations, the digital assistant system 300 can search and identify specific entity names from other data sources, such as the user's address book or contact list, a movies database, a musicians database, and/or a restaurant database. In some implementations, when the natural language processor 332 identifies that a word in the token sequence is a name of a specific entity (such as a name in the user's address book or contact list), that word is given additional significance in selecting the actionable intent within the ontology for the user request.
For example, when the words “Mr. Santo” are recognized from the user request, and the last name “Santo” is found in the vocabulary index 344 as one of the contacts in the user's contact list, then it is likely that the user request corresponds to a “send a message” or “initiate a phone call” domain. For another example, when the words “ABC Café” are found in the user request, and the term “ABC Café” is found in the vocabulary index 344 as the name of a particular restaurant in the user's city, then it is likely that the user request corresponds to a “restaurant reservation” domain.
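The following sketch illustrates how a vocabulary index and recognized entity names (such as a contact's surname) could bias domain selection, as in the "Mr. Santo" example above. The index contents, the contact list, and the weighting are hypothetical.

```python
# Hypothetical vocabulary index: words/phrases -> domains they may trigger.
VOCABULARY_INDEX = {
    "reservation": ["restaurant reservation"],
    "table": ["restaurant reservation"],
    "message": ["send a message"],
    "call": ["initiate a phone call"],
}
CONTACT_SURNAMES = {"santo"}          # from the user's contact list (illustrative)
ENTITY_NAME_WEIGHT = 3                # extra significance for specific entity names

def score_domains(token_sequence: list[str]) -> dict[str, int]:
    """Count how strongly each domain is "triggered" by the token sequence."""
    scores: dict[str, int] = {}
    for token in token_sequence:
        word = token.lower()
        for domain in VOCABULARY_INDEX.get(word, []):
            scores[domain] = scores.get(domain, 0) + 1
        if word in CONTACT_SURNAMES:   # a recognized entity name biases messaging/calling
            for domain in ("send a message", "initiate a phone call"):
                scores[domain] = scores.get(domain, 0) + ENTITY_NAME_WEIGHT
    return scores

# score_domains(["Send", "Mr", "Santo", "a", "message"]) favors "send a message".
```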
User data 348 includes user-specific information, such as user-specific vocabulary, user preferences, user address, user's default and secondary languages, user's contact list, and other short-term or long-term information for each user. The natural language processor 332 can use the user-specific information to supplement the information contained in the user input to further define the user intent. For example, for a user request “invite my friends to my birthday party,” the natural language processor 332 is able to access user data 348 to determine who the “friends” are and when and where the “birthday party” would be held, rather than requiring the user to provide such information explicitly in his/her request.
In some implementations, natural language processor 332 includes categorization module 349. In some implementations, the categorization module 349 determines whether each of the one or more terms in a text string (e.g., corresponding to a speech input associated with a digital photograph) is one of an entity, an activity, or a location, as discussed in greater detail below. In some implementations, the categorization module 349 classifies each term of the one or more terms as one of an entity, an activity, or a location.
Once the natural language processor 332 identifies an actionable intent (or domain) based on the user request, the natural language processor 332 generates a structured query to represent the identified actionable intent. In some implementations, the structured query includes parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say “Make me a dinner reservation at a sushi place at 7.” In this case, the natural language processor 332 may be able to correctly identify the actionable intent to be “restaurant reservation” based on the user input. According to the ontology, a structured query for a “restaurant reservation” domain may include parameters such as {Cuisine}, {Time}, {Date}, {Party Size}, and the like. Based on the information contained in the user's utterance, the natural language processor 332 may generate a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {Cuisine=“Sushi”} and {Time=“7 pm”}. However, in this example, the user's utterance contains insufficient information to complete the structured query associated with the domain. Therefore, other necessary parameters such as {Party Size} and {Date} are not specified in the structured query based on the information currently available. In some implementations, the natural language processor 332 populates some parameters of the structured query with received context information. For example, if the user requested a sushi restaurant “near me,” the natural language processor 332 may populate a {location} parameter in the structured query with GPS coordinates from the user device 104.
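For the dinner-reservation example above, a toy version of the structured-query generation step might look like the following; the keyword and regular-expression matching merely stands in for the ontology-driven natural language processing and is not meant to reflect the actual technique.

```python
import re

def build_structured_query(utterance: str) -> dict:
    """Produce a partial structured query for the "restaurant reservation"
    domain from the example utterance; unfilled parameters remain None."""
    query = {"intent": "restaurant reservation",
             "Cuisine": None, "Time": None, "Date": None, "Party Size": None}
    if "sushi" in utterance.lower():
        query["Cuisine"] = "Sushi"
    match = re.search(r"\bat (\d{1,2})\b", utterance)
    if match:
        query["Time"] = f"{match.group(1)} pm"   # "pm" assumed, as in the example
    return query

# build_structured_query("Make me a dinner reservation at a sushi place at 7")
# -> {'intent': 'restaurant reservation', 'Cuisine': 'Sushi', 'Time': '7 pm',
#     'Date': None, 'Party Size': None}
```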
In some implementations, the natural language processor 332 passes the structured query (including any completed parameters) to the task flow processing module 336 (“task flow processor”). The task flow processor 336 is configured to perform one or more of: receiving the structured query from the natural language processor 332, completing the structured query, and performing the actions required to “complete” the user's ultimate request. In some implementations, the various procedures necessary to complete these tasks are provided in task flow models 354. In some implementations, the task flow models 354 include procedures for obtaining additional information from the user, and task flows for performing actions associated with the actionable intent.
As described above, in order to complete a structured query, the task flow processor 336 may need to initiate additional dialogue with the user in order to obtain additional information, and/or disambiguate potentially ambiguous utterances. When such interactions are necessary, the task flow processor 336 invokes the dialogue processing module 334 (“dialogue processor”) to engage in a dialogue with the user. In some implementations, the dialogue processing module 334 determines how (and/or when) to ask the user for the additional information, and receives and processes the user responses. In some implementations, the questions are provided to and answers are received from the users through the I/O processing module 328. For example, the dialogue processing module 334 presents dialogue output to the user via audio and/or visual output, and receives input from the user via spoken or physical (e.g., touch gesture) responses. Continuing with the example above, when the task flow processor 336 invokes the dialogue processor 334 to determine the “party size” and “date” information for the structured query associated with the domain “restaurant reservation,” the dialogue processor 334 generates questions such as “For how many people?” and “On which day?” to pass to the user. Once answers are received from the user, the dialogue processing module 334 populates the structured query with the missing information, or passes the information to the task flow processor 336 to complete the missing information from the structured query.
In some cases, the task flow processor 336 may receive a structured query that has one or more ambiguous properties. For example, a structured query for the “send a message” domain may indicate that the intended recipient is “Bob,” and the user may have multiple contacts named “Bob.” The task flow processor 336 will request that the dialogue processor 334 disambiguate this property of the structured query. In turn, the dialogue processor 334 may ask the user “Which Bob?”, and display (or read) a list of contacts named “Bob” from which the user may choose.
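A minimal sketch of how the dialogue processor might fill in missing parameters and disambiguate a recipient such as "Bob"; the parameter names and the ask callable (which poses a question to the user and returns the reply) are illustrative.

```python
def complete_query(query: dict, contacts: list[str], ask) -> dict:
    """Fill in missing parameters and disambiguate the recipient by asking the user."""
    for param, value in list(query.items()):
        if param == "intent" or value is not None:
            continue
        query[param] = ask(f"What {param.lower()} would you like?")   # e.g., "On which day?"
    recipient = query.get("Recipient")
    if recipient:
        candidates = [c for c in contacts if recipient.lower() in c.lower()]
        if len(candidates) > 1:                       # several contacts named "Bob"
            query["Recipient"] = ask(f"Which {recipient}? " + ", ".join(candidates))
    return query
```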
In some implementations, dialogue processor 334 includes disambiguation module 350. In some implementations, disambiguation module 350 disambiguates one or more ambiguous terms (e.g., one or more ambiguous terms in a text string corresponding to a speech input associated with a digital photograph). In some implementations, disambiguation module 350 identifies that a first term of the one or more terms has multiple candidate meanings, prompts a user for additional information about the first term, receives the additional information from the user in response to the prompt, and identifies the entity, activity, or location associated with the first term in accordance with the additional information.
In some implementations, disambiguation module 350 disambiguates pronouns. In such implementations, disambiguation module 350 identifies one of the one or more terms as a pronoun and determines a noun to which the pronoun refers. In some implementations, disambiguation module 350 determines a noun to which the pronoun refers by using a contact list associated with a user of the electronic device. Alternatively, or in addition, disambiguation module 350 determines a noun to which the pronoun refers as a name of an entity, an activity, or a location identified in a previous speech input associated with a previously tagged digital photograph. Alternatively, or in addition, disambiguation module 350 determines a noun to which the pronoun refers as a name of a person identified based on a previous speech input associated with a previously tagged digital photograph.
In some implementations, disambiguation module 350 accesses information obtained from one or more sensors (e.g., proximity sensor 214, light sensor 212, GPS receiver 213, temperature sensor 215, and motion sensor 210) of a handheld electronic device (e.g., user device 104) for determining a meaning of one or more of the terms. In some implementations, disambiguation module 350 identifies two terms each associated with one of an entity, an activity, or a location. For example, a first of the two terms refers to a person, and a second of the two terms refers to a location. In some implementations, disambiguation module 350 identifies three terms each associated with one of an entity, an activity, or a location.
Once the task flow processor 336 has completed the structured query for an actionable intent, the task flow processor 336 proceeds to perform the ultimate task associated with the actionable intent. Accordingly, the task flow processor 336 executes the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent of “restaurant reservation” may include steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as: {restaurant reservation, restaurant=ABC Café, date=3/12/2012, time=7 pm, party size=5}, the task flow processor 336 may perform the steps of: (1) logging onto a server of the ABC Café or a restaurant reservation system that is configured to accept reservations for multiple restaurants, such as the ABC Café, (2) entering the date, time, and party size information in a form on the website, (3) submitting the form, and (4) making a calendar entry for the reservation in the user's calendar. In another example, described in greater detail below, the task flow processor 336 executes steps and instructions associated with tagging or searching for digital photographs in response to a voice input, e.g., in conjunction with photo module 132.
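Using the completed structured query from the example above, the four task flow steps could be sketched as follows. The helper functions are placeholders; a real task flow would call the reservation system's and the calendar application's actual interfaces.

```python
def log_on(restaurant: str) -> dict:
    print(f"(1) logging on to the reservation system for {restaurant}")
    return {"restaurant": restaurant}

def enter_details(session: dict, date: str, time: str, party_size: int) -> None:
    print(f"(2) entering {date} {time}, party of {party_size}")

def submit_form(session: dict) -> None:
    print("(3) submitting the reservation form")

def add_calendar_entry(restaurant: str, date: str, time: str) -> None:
    print(f"(4) adding calendar entry: dinner at {restaurant}, {date} {time}")

def execute_restaurant_reservation(query: dict) -> None:
    """Run the task flow steps for a completed 'restaurant reservation' query."""
    session = log_on(query["restaurant"])
    enter_details(session, query["date"], query["time"], query["party size"])
    submit_form(session)
    add_calendar_entry(query["restaurant"], query["date"], query["time"])

execute_restaurant_reservation({"restaurant": "ABC Café", "date": "3/12/2012",
                                "time": "7 pm", "party size": 5})
```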
In some implementations, the task flow processor 336 employs the assistance of a service processing module 338 (“service processor”) to complete a task requested in the user input or to provide an informational answer requested in the user input. For example, the service processor 338 can act on behalf of the task flow processor 336 to make a phone call, set a calendar entry, invoke a map search, invoke or interact with other user applications installed on the user device, and invoke or interact with third party services (e.g., a restaurant reservation portal, a social networking website or service, a banking portal, etc.). In some implementations, the protocols and application programming interfaces (APIs) required by each service can be specified by a respective service model among the service models 356. The service processor 338 accesses the appropriate service model for a service and generates requests for the service in accordance with the protocols and APIs required by the service according to the service model.
For example, if a restaurant has enabled an online reservation service, the restaurant can submit a service model specifying the necessary parameters for making a reservation and the APIs for communicating the values of the necessary parameters to the online reservation service. When requested by the task flow processor 336, the service processor 338 can establish a network connection with the online reservation service using the web address stored in the service models 356, and send the necessary parameters of the reservation (e.g., time, date, party size) to the online reservation interface in a format according to the API of the online reservation service.
In some implementations, the natural language processor 332, dialogue processor 334, and task flow processor 336 are used collectively and iteratively to deduce and define the user's intent, obtain information to further clarify and refine the user intent, and finally generate a response (e.g., provide an output to the user, or complete a task) to fulfill the user's intent.
In some implementations, after all of the tasks needed to fulfill the user's request have been performed, the digital assistant 326 formulates a confirmation response, and sends the response back to the user through the I/O processing module 328. If the user request seeks an informational answer, the confirmation response presents the requested information to the user. In some implementations, the digital assistant also requests the user to indicate whether the user is satisfied with the response produced by the digital assistant 326.
Attention is now directed to
In some implementations, the voice trigger system 400 includes a noise detector 402, a sound-type detector 404, a trigger sound detector 406, a speech-based service 408, and an audio subsystem 226, each coupled to an audio bus 401. In some implementations, more or fewer of these modules are used. The sound detectors 402, 404, and 406 may be referred to as modules, and may include hardware (e.g., circuitry, memory, processors, etc.), software (e.g., programs, software-on-a-chip, firmware, etc.), and/or any combinations thereof for performing the functionality described herein. In some implementations, the sound detectors are communicatively, programmatically, physically, and/or operationally coupled to one another (e.g., via a communications bus), as illustrated in
In some implementations, the audio subsystem 226 includes a codec 410, an audio digital signal processor (DSP) 412, and a memory buffer 414. In some implementations, the audio subsystem 226 is coupled to one or more microphones 230 (
In some implementations, the speech-based service 408 is a voice-based digital assistant, and corresponds to one or more components or functionalities of the digital assistant system described above with reference to
In some implementations, the noise detector 402 monitors an audio channel to determine whether a sound input from the audio subsystem 226 satisfies a predetermined condition, such as an amplitude threshold. The audio channel corresponds to a stream of audio information received by one or more sound pickup devices, such as the one or more microphones 230 (
In some implementations, the predetermined condition is whether the sound input is above a certain volume for a predetermined amount of time. In some implementations, the noise detector uses time-domain analysis of the sound input, which requires relatively little computational and battery resources as compared to other types of analysis (e.g., as performed by the sound-type detector 404, the trigger word detector 406, and/or the speech-based service 408). In some implementations, other types of signal processing and/or audio analysis are used, including, for example, frequency-domain analysis. If the noise detector 402 determines that the sound input satisfies the predetermined condition, it initiates an upstream sound detector, such as the sound-type detector 404 (e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector). In some implementations, the upstream sound detector is initiated in response to other conditions being satisfied. For example, in some implementations, the upstream sound detector is initiated in response to determining that the device is not being stored in an enclosed space (e.g., based on a light detector detecting a threshold level of light).
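A minimal sketch of such a time-domain check follows, assuming 16-bit PCM samples at a 16 kHz sampling rate; the threshold values and function name are illustrative assumptions rather than parameters of any described implementation.

```python
# Illustrative, hypothetical values; a real device would tune these for its microphone.
AMPLITUDE_THRESHOLD = 1000       # minimum sample magnitude (16-bit PCM scale)
MIN_DURATION_SAMPLES = 1600      # e.g., 100 ms at a 16 kHz sampling rate

def satisfies_noise_condition(samples):
    """Time-domain check: is the sound input above the amplitude threshold for at
    least the required number of consecutive samples?"""
    run = 0
    for s in samples:
        if abs(s) >= AMPLITUDE_THRESHOLD:
            run += 1
            if run >= MIN_DURATION_SAMPLES:
                return True      # loud enough, long enough: initiate the upstream detector
        else:
            run = 0
    return False
```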
The sound-type detector 404 monitors the audio channel to determine whether a sound input corresponds to a certain type of sound, such as sound that is characteristic of a human voice, whistle, clap, etc. The type of sound that the sound-type detector 404 is configured to recognize will correspond to the particular trigger sound(s) that the voice trigger is configured to recognize. In implementations where the trigger sound is a spoken word or phrase, the sound-type detector 404 includes a “voice activity detector” (VAD). In some implementations, the sound-type detector 404 uses frequency-domain analysis of the sound input. For example, the sound-type detector 404 generates a spectrogram of a received sound input (e.g., using a Fourier transform), and analyzes the spectral components of the sound input to determine whether the sound input is likely to correspond to a particular type or category of sounds (e.g., human speech). Thus, in implementations where the trigger sound is a spoken word or phrase, if the audio channel is picking up ambient sound (e.g., traffic noise) but not human speech, the VAD will not initiate the trigger sound detector 406.
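As a rough illustration of this kind of frequency-domain check, the sketch below estimates how much of a frame's spectral energy falls within a band typical of human speech; the band limits and the 0.6 energy ratio are illustrative assumptions, not values taken from the sound-type detector 404.

```python
import numpy as np

def is_probably_speech(frame, sample_rate=16000):
    """Crude frequency-domain voice-activity check: does most of the frame's
    energy fall in a band typical of human speech (here ~85-3000 Hz)?"""
    frame = np.asarray(frame, dtype=float)
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    voice_band = (freqs >= 85) & (freqs <= 3000)
    return spectrum[voice_band].sum() / (spectrum.sum() + 1e-9) > 0.6  # illustrative ratio
```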
In some implementations, the sound-type detector 404 remains active for as long as predetermined conditions of any downstream sound detector (e.g., the noise detector 402) are satisfied. For example, in some implementations, the sound-type detector 404 remains active as long as the sound input includes sound above a predetermined amplitude threshold (as determined by the noise detector 402), and is deactivated when the sound drops below the predetermined threshold. In some implementations, once initiated, the sound-type detector 404 remains active until a condition is met, such as the expiration of a timer (e.g., for 1, 2, 5, or 10 seconds, or any other appropriate duration), the expiration of a certain number of on/off cycles of the sound-type detector 404, or the occurrence of an event (e.g., the amplitude of the sound falls below a second threshold, as determined by the noise detector 402 and/or the sound-type detector 404).
As mentioned above, if the sound-type detector 404 determines that the sound input corresponds to a predetermined type of sound, it initiates an upstream sound detector (e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector), such as the trigger sound detector 406.
The trigger sound detector 406 is configured to determine whether a sound input includes at least part of certain predetermined content (e.g., at least part of the trigger word, phrase, or sound). In some implementations, the trigger sound detector 406 compares a representation of the sound input (an “input representation”) to one or more reference representations of the trigger word. If the input representation matches at least one of the one or more reference representations with an acceptable confidence, the trigger sound detector 406 initiates the speech-based service 408 (e.g., by providing a control signal to initiate one or more processing routines, and/or by providing power to the upstream sound detector). In some implementations, the input representation and the one or more reference representations are spectrograms (or mathematical representations thereof), which represent how the spectral density of a signal varies with time. In some implementations, the representations are other types of audio signatures or voiceprints. In some implementations, initiating the speech-based service 408 includes bringing one or more circuits, programs, and/or processors out of a standby mode, and invoking the speech-based service. The speech-based service is then ready to provide more comprehensive speech recognition, speech-to-text processing, and/or natural language processing.
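The comparison of an input representation to one or more reference representations could be pictured, in greatly simplified form, with the sketch below, which correlates spectrogram representations and accepts a match above a confidence threshold. A practical detector would also time-align the representations (e.g., with dynamic time warping); the frame sizes and the 0.8 confidence value are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160):
    """Magnitude spectrogram: one FFT per overlapping, windowed frame."""
    signal = np.asarray(signal, dtype=float)
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    return np.array([np.abs(np.fft.rfft(f)) for f in frames])

def matches_trigger(input_repr, reference_reprs, confidence=0.8):
    """Compare an input spectrogram against reference spectrograms of the trigger
    phrase using normalized correlation; accept if any match exceeds the threshold."""
    for ref in reference_reprs:
        n = min(len(input_repr), len(ref))
        a, b = input_repr[:n].ravel(), ref[:n].ravel()
        score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if score >= confidence:
            return True             # initiate the speech-based service
    return False
```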
In some implementations, the voice trigger system 400 includes voice authentication functionality, so that it can determine if a sound input corresponds to a voice of a particular person, such as an owner/user of the device. For example, in some implementations, the sound-type detector 404 uses a voiceprinting technique to determine that the sound input was uttered by an authorized user. Voice authentication and voiceprinting are described in more detail in U.S. patent application Ser. No. 13/053,144, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety. In some implementations, voice authentication is included in any of the sound detectors described herein (e.g., the noise detector 402, the sound-type detector 404, the trigger sound detector 406, and/or the speech-based service 408). In some implementations, voice authentication is implemented as a separate module from the sound detectors listed above (e.g., as voice authentication module 428,
In some implementations, the trigger sound detector 406 remains active for as long as conditions of any downstream sound detector(s) (e.g., the noise detector 402 and/or the sound-type detector 404) are satisfied. For example, in some implementations, the trigger sound detector 406 remains active as long as the sound input includes sound above a predetermined threshold (as detected by the noise detector 402). In some implementations, it remains active as long as the sound input includes sound of a certain type (as detected by the sound-type detector 404). In some implementations, it remains active as long as both the foregoing conditions are met.
In some implementations, once initiated, the trigger sound detector 406 remains active until a condition is met, such as the expiration of a timer (e.g., for 1, 2, 5, or 10 seconds, or any other appropriate duration), the expiration of a certain number of on/off cycles of the trigger sound detector 406, or the occurrence of an event (e.g., the amplitude of the sound falls below a second threshold).
In some implementations, when one sound detector initiates another detector, both sound detectors remain active. However, the sound detectors may be active or inactive at various times, and it is not necessary that all of the downstream sound detectors (e.g., those with lower power and/or sophistication) be active (or that their respective conditions are met) in order for upstream sound detectors to be active. For example, in some implementations, after the noise detector 402 and the sound-type detector 404 determine that their respective conditions are met, and the trigger sound detector 406 is initiated, one or both of the noise detector 402 and the sound-type detector 404 are deactivated and/or enter a standby mode while the trigger sound detector 406 operates. In other implementations, both the noise detector 402 and the sound-type detector 404 (or one or the other) stay active while the trigger sound detector 406 operates. In various implementations, different combinations of the sound detectors are active at different times, and whether one is active or inactive may depend on the state of other sound detectors, or may be independent of the state of other sound detectors.
While
Moreover, different combinations of sound detectors may be used at different times. For example, the particular combination of sound detectors and how they interact may depend on one or more conditions, such as the context or operating state of a device. As a specific example, if a device is plugged in (and thus not relying exclusively on battery power), the trigger sound detector 406 is active, while the noise detector 402 and the sound-type detector 404 remain inactive. In another example, if the device is in a pocket or backpack, all sound detectors are inactive.
By cascading sound detectors as described above, where the detectors that require more power are invoked only when necessary by detectors that require lower power, power efficient voice triggering functionality can be provided. As described above, additional power efficiency is achieved by operating one or more of the sound detectors according to a duty cycle. For example, in some implementations, the noise detector 402 operates according to a duty cycle so that it performs effectively continuous noise detection, even though the noise detector is off for at least part of the time. In some implementations, the noise detector 402 is on for 10 milliseconds and off for 90 milliseconds. In some implementations, the noise detector 402 is on for 20 milliseconds and off for 500 milliseconds. Other on and off durations are also possible.
In some implementations, if the noise detector 402 detects a noise during its “on” interval, the noise detector 402 will remain on in order to further process and/or analyze the sound input. For example, the noise detector 402 may be configured to initiate an upstream sound detector if it detects sound above a predetermined amplitude for a predetermined amount of time (e.g., 100 milliseconds). Thus, if the noise detector 402 detects sound above a predetermined amplitude during its 10 millisecond “on” interval, it will not immediately enter the “off” interval. Instead, the noise detector 402 remains active and continues to process the sound input to determine whether it exceeds the threshold for the full predetermined duration (e.g., 100 milliseconds).
In some implementations, the sound-type detector 404 operates according to a duty cycle. In some implementations, the sound-type detector 404 is on for 20 milliseconds and off for 100 milliseconds. Other on and off durations are also possible. In some implementations, the sound-type detector 404 is able to determine whether a sound input corresponds to a predetermined type of sound within the “on” interval of its duty cycle. Thus, the sound-type detector 404 will initiate the trigger sound detector 406 (or any other upstream sound detector) if the sound-type detector 404 determines, during its “on” interval, that the sound is of a certain type. Alternatively, in some implementations, if the sound-type detector 404 detects, during the “on” interval, sound that may correspond to the predetermined type, the detector will not immediately enter the “off” interval. Instead, the sound-type detector 404 remains active and continues to process the sound input and determine whether it corresponds to the predetermined type of sound. In some implementations, if the sound detector determines that the predetermined type of sound has been detected, it initiates the trigger sound detector 406 to further process the sound input and determine if the trigger sound has been detected.
Similar to the noise detector 402 and the sound-type detector 404, in some implementations, the trigger sound detector 406 operates according to a duty cycle. In some implementations, the trigger sound detector 406 is on for 50 milliseconds and off for 50 milliseconds. Other on and off durations are also possible. If the trigger sound detector 406 detects, during its “on” interval, that there is sound that may correspond to a trigger sound, the detector will not immediately enter the “off” interval. Instead, the trigger sound detector 406 remains active and continues to process the sound input and determine whether it includes the trigger sound. In some implementations, if such a sound is detected, the trigger sound detector 406 remains active to process the audio for a predetermined duration, such as 1, 2, 5, or 10 seconds, or any other appropriate duration. In some implementations, the duration is selected based on the length of the particular trigger word or sound that it is configured to detect. For example, if the trigger phrase is “Hey, SIRI,” the trigger word detector is operated for about 2 seconds to determine whether the sound input includes that phrase.
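To make the duty-cycle behavior described in the preceding paragraphs concrete, the following generic sketch listens during the “on” interval, sleeps during the “off” interval, and keeps processing when it hears a candidate sound; the interval lengths and helper callables are illustrative stand-ins, not the actual detectors.

```python
import time

ON_MS, OFF_MS = 10, 90          # illustrative noise-detector duty cycle

def duty_cycle_loop(read_frame, is_loud, handle_candidate):
    """Generic duty-cycled monitor: sample audio during the 'on' interval; if a
    candidate sound is heard, skip the 'off' interval and keep analyzing,
    otherwise sleep for the 'off' interval (approximating a powered-down state)."""
    while True:
        frame = read_frame(ON_MS)           # audio captured while "on"
        if is_loud(frame):
            handle_candidate(frame)         # keep processing / initiate an upstream detector
        else:
            time.sleep(OFF_MS / 1000.0)     # "off" portion of the cycle

# Example wiring with stand-in callables (purely illustrative):
if __name__ == "__main__":
    import random
    read_frame = lambda ms: [random.randint(-200, 200) for _ in range(ms * 16)]
    is_loud = lambda frame: max(abs(s) for s in frame) > 150
    handle_candidate = lambda frame: print("candidate sound; staying active")
    # duty_cycle_loop(read_frame, is_loud, handle_candidate)   # runs forever; not invoked here
```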
In some implementations, some of the sound detectors are operated according to a duty cycle, while others operate continuously when active. For example, in some implementations, only the first sound detector is operated according to a duty cycle (e.g., the noise detector 402 in
In some implementations, the voice trigger includes noise, echo, and/or sound cancellation functionality (referred to collectively as noise cancellation). In some implementations, noise cancellation is performed by the audio subsystem 226 (e.g., by the audio DSP 412). Noise cancellation reduces or removes unwanted noise or sounds from the sound input prior to it being processed by the sound detectors. In some cases, the unwanted noise is background noise from the user's environment, such as a fan or the clicking from a keyboard. In some implementations, the unwanted noise is any sound above, below, or at predetermined amplitudes or frequencies. For example, in some implementations, sound above the typical human vocal range (e.g., 3,000 Hz) is filtered out or removed from the signal. In some implementations, multiple microphones (e.g., the microphones 230) are used to help determine what components of received sound should be reduced and/or removed. For example, in some implementations, the audio subsystem 226 uses beam forming techniques to identify sounds or portions of sound inputs that appear to originate from a single point in space (e.g., a user's mouth). The audio subsystem 226 then focuses on this sound by removing from the sound input sounds that are received equally by all microphones (e.g., ambient sound that does not appear to originate from any particular direction).
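As one simplified illustration of removing sound above the typical vocal range, the sketch below zeroes spectral components above a cutoff frequency; a real audio DSP would use a proper filter design (and beam forming is not shown), so this is only an illustrative stand-in.

```python
import numpy as np

def remove_above(signal, cutoff_hz=3000, sample_rate=16000):
    """Frequency-domain stand-in for a low-pass filter: zero out components above
    the cutoff (e.g., above the typical human vocal range) and resynthesize."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, n=len(signal))
```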
In some implementations, the DSP 412 is configured to cancel or remove from the sound input sounds that are being output by the device on which the digital assistant is operating. For example, if the audio subsystem 226 is outputting music, radio, a podcast, a voice output, or any other audio content (e.g., via the speaker 228), the DSP 412 removes any of the outputted sound that was picked up by a microphone and included in the sound input. Thus, the sound input is free of the outputted audio (or at least contains less of the outputted audio). Accordingly, the sound input that is provided to the sound detectors will be cleaner, and the triggers more accurate. Aspects of noise cancellation are described in more detail in U.S. Pat. No. 7,272,224, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety.
In some implementations, different sound detectors require that the sound input be filtered and/or preprocessed in different ways. For example, in some implementations, the noise detector 402 is configured to analyze a time-domain audio signal between 60 and 20,000 Hz, and the sound-type detector is configured to perform frequency-domain analysis of audio between 60 and 3,000 Hz. Thus, in some implementations, the audio DSP 412 (and/or other audio DSPs of the device 104) preprocesses received audio according to the respective needs of the sound detectors. In some implementations, on the other hand, the sound detectors are configured to filter and/or preprocess the audio from the audio subsystem 226 according to their specific needs. In such cases, the audio DSP 412 may still perform noise cancellation prior to providing the sound input to the sound detectors.
In some implementations, the context of the electronic device is used to help determine whether and how to operate the voice trigger. For example, it may be unlikely that users will invoke a speech-based service, such as a voice-based digital assistant, when the device is stored in their pocket, purse, or backpack. Also, it may be unlikely that users will invoke a speech-based service when they are at a loud rock concert. For some users, it is unlikely that they will invoke a speech-based service at certain times of the day (e.g., late at night). On the other hand, there are also contexts in which it is more likely that a user will invoke a speech-based service using a voice trigger. For example, some users will be more likely to use a voice trigger when they are driving, when they are alone, when they are at work, or the like. Various techniques are used to determine the context of a device. In various implementations, the device uses information from any one or more of the following components or information sources to determine the context of a device: GPS receivers, light sensors, microphones, proximity sensors, orientation sensors, inertial sensors, cameras, communications circuitry and/or antennas, charging and/or power circuitry, switch positions, temperature sensors, compasses, accelerometers, calendars, user preferences, etc.
The context of the device can then be used to adjust how and whether the voice trigger operates. For example, in certain contexts, the voice trigger will be deactivated (or operated in a different mode) as long as that context is maintained. For example, in some implementations, the voice trigger is deactivated when the phone is in a predetermined orientation (e.g., lying face-down on a surface), during predetermined time periods (e.g., between 10:00 PM and 8:00 AM), when the phone is in a “silent” or a “do not disturb” mode (e.g., based on a switch position, mode setting, or user preference), when the device is in a substantially enclosed space (e.g., a pocket, bag, purse, drawer, or glove box), when the device is near other devices that have a voice trigger and/or speech-based services (e.g., based on proximity sensors, acoustic/wireless/infrared communications), and the like. In some implementations, instead of being deactivated, the voice trigger system 400 is operated in a low-power mode (e.g., by operating the noise detector 402 according to a duty cycle with a 10 millisecond “on” interval and a 5 second “off” interval). In some implementations, an audio channel is monitored more infrequently when the voice trigger system 400 is operated in a low-power mode. In some implementations, a voice trigger uses a different sound detector or combination of sound detectors when it is in a low-power mode than when it is in a normal mode. (The voice trigger may be capable of numerous different modes or operating states, each of which may use a different amount of power, and different implementations will use them according to their specific designs.)
On the other hand, when the device is in some other contexts, the voice trigger will be activated (or operated in a different mode) so long as that context is maintained. For example, in some implementations, the voice trigger remains active while it is plugged into a power source, when the phone is in a predetermined orientation (e.g., lying face-up on a surface), during predetermined time periods (e.g., between 8:00 AM and 10:00 PM), when the device is travelling and/or in a car (e.g., based on GPS signals, BLUETOOTH connection or docking with a vehicle, etc.), and the like. Aspects of detecting when a device is in a vehicle are described in more detail in U.S. Provisional Patent Application No. 61/657,744, assigned to the assignee of the instant application, which is hereby incorporated by reference in its entirety. Several specific examples of how to determine certain contexts are provided below. In various embodiments, different techniques and/or information sources are used to detect these and other contexts.
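One way to picture the context-dependent behavior described in the last two paragraphs is a simple mapping from detected contexts to voice trigger modes, as sketched below; the context names, mode names, and the most-restrictive-wins policy are illustrative assumptions rather than a description of any particular implementation.

```python
# Hypothetical mapping from detected device contexts to voice trigger modes.
CONTEXT_MODES = {
    "face_down_on_surface": "standby",
    "do_not_disturb":       "standby",
    "stored_in_enclosure":  "low_power",     # e.g., 10 ms on / 5 s off duty cycle
    "late_night":           "low_power",
    "plugged_in":           "normal",
    "in_vehicle":           "high_sensitivity",
}

def select_trigger_mode(contexts):
    """Pick the most restrictive mode that applies; default to normal operation."""
    priority = ["standby", "low_power", "normal", "high_sensitivity"]
    applicable = [CONTEXT_MODES[c] for c in contexts if c in CONTEXT_MODES]
    for mode in priority:
        if mode in applicable:
            return mode
    return "normal"

print(select_trigger_mode(["late_night", "plugged_in"]))   # -> "low_power"
```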
As noted above, whether or not the voice trigger system 400 is active (e.g., listening) can depend on the physical orientation of a device. In some implementations, the voice trigger is active when the device is placed “face-up” on a surface (e.g., with the display and/or touchscreen surface visible), and/or is inactive when it is “face-down.” This provides a user with an easy way to activate and/or deactivate the voice trigger without requiring manipulation of settings menus, switches, or buttons. In some implementations, the device detects whether it is face-up or face-down on a surface using light sensors (e.g., based on the difference in incident light on a front and a back face of the device 104), proximity sensors, magnetic sensors, accelerometers, gyroscopes, tilt sensors, cameras, and the like.
In some implementations, other operating modes, settings, parameters, or preferences are affected by the orientation and/or position of the device. In some implementations, the particular trigger sound, word, or phrase that the voice trigger listens for depends on the orientation and/or position of the device. For example, in some implementations, the voice trigger listens for a first trigger word, phrase, or sound when the device is in one orientation (e.g., lying face-up on a surface), and a different trigger word, phrase, or sound when the device is in another orientation (e.g., lying face-down). In some implementations, the trigger phrase for a face-down orientation is longer and/or more complex than for a face-up orientation. Thus, a user can place a device face-down when they are around other people or in a noisy environment so that the voice trigger can still be operational while also reducing false accepts, which may be more frequent for shorter or simpler trigger words. As a specific example, a face-up trigger phrase may be “Hey, SIRI,” while a face-down trigger phrase may be “Hey, SIRI, this is Andrew, please wake up.” The longer trigger phrase also provides a larger voice sample for the sound detectors and/or voice authenticators to process and/or analyze, thus increasing the accuracy of the voice trigger and decreasing false accepts.
In some implementations, the device 104 detects whether it is in a vehicle (e.g., a car). A voice trigger is particularly beneficial for invoking a speech-based service when the user is in a vehicle, as it helps reduce the physical interactions that are necessary to operate the device and/or the speech based service. Indeed, one of the benefits of a voice-based digital assistant is that it can be used to perform tasks where looking at and touching a device would be impractical or unsafe. Thus, the voice trigger may be used when the device is in a vehicle so that the user does not have to touch the device in order to invoke the digital assistant. In some implementations, the device determines that it is in a vehicle by detecting that it has been connected to and/or paired with a vehicle, such as through BLUETOOTH communications (or other wireless communications) or through a docking connector or cable. In some implementations, the device determines that it is in a vehicle by determining the device's location and/or speed (e.g., using GPS receivers, accelerometers, and/or gyroscopes). If it is determined that the device is likely in a vehicle, because it is travelling above 20 miles per hour and is determined to be travelling along a road, for example, then the voice trigger remains active and/or in a high-power or more sensitive state.
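A heuristic of the kind described above might be sketched as follows; the 20 mile-per-hour figure comes from the example in this paragraph, while the function and parameter names are hypothetical.

```python
def likely_in_vehicle(speed_mph, paired_with_car_bluetooth, on_road):
    """Heuristic: treat the device as in a vehicle if it is paired/docked with a
    car, or if it is moving above 20 mph along a road."""
    return paired_with_car_bluetooth or (speed_mph > 20 and on_road)

# If True, the voice trigger remains active and/or in a more sensitive state.
print(likely_in_vehicle(speed_mph=35, paired_with_car_bluetooth=False, on_road=True))
```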
In some implementations, the device detects whether the device is stored (e.g., in a pocket, purse, bag, a drawer, or the like) by determining whether it is in a substantially enclosed space. In some implementations, the device uses light sensors (e.g., dedicated ambient light sensors and/or cameras) to determine that it is stored. For example, in some implementations, the device is likely being stored if light sensors detect little or no light. In some implementations, the time of day and/or location of the device are also considered. For example, if the light sensors detect low light levels when high light levels would be expected (e.g., during the day), the device may be in storage and the voice trigger system 400 not needed. Thus, the voice trigger system 400 will be placed in a low-power or standby state.
In some implementations, the difference in light detected by sensors located on opposite faces of a device can be used to determine its position, and hence whether or not it is stored. Specifically, users are likely to attempt to activate a voice trigger when the device is resting on a table or surface rather than when it is being stored in a pocket or bag. But when a device is lying face-down (or face-up) on a surface such as a table or desk, one surface of the device will be occluded so that little or no light reaches that surface, while the other surface will be exposed to ambient light. Thus, if light sensors on the front and back face of a device detect significantly different light levels, the device determines that it is not being stored. On the other hand, if light sensors on opposite faces detect the same or similar light levels, the device determines that it is being stored in a substantially enclosed space. Also, if the light sensors both detect a low light level during the daytime (or at a time when the device would expect to be in a bright environment), the device determines with greater confidence that it is being stored.
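One possible combination of these light-sensor cues is sketched below; the similarity and darkness thresholds are illustrative assumptions.

```python
def probably_stored(front_lux, back_lux, expected_ambient_lux=100.0):
    """Compare light sensors on opposite faces: similar and low readings on both
    faces suggest the device is enclosed; a large difference suggests one face is
    exposed to ambient light (e.g., lying on a desk)."""
    similar = abs(front_lux - back_lux) < 0.2 * max(front_lux, back_lux, 1.0)
    both_dark = max(front_lux, back_lux) < 0.1 * expected_ambient_lux
    return similar and both_dark

print(probably_stored(front_lux=2.0, back_lux=1.8))     # True: likely in a pocket or bag
print(probably_stored(front_lux=300.0, back_lux=1.0))   # False: one face exposed to light
```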
In some implementations, other techniques are used (instead of or in addition to light sensors) to determine whether the device is stored. For example, in some implementations, the device emits one or more sounds (e.g., tones, clicks, pings, etc.) from a speaker or transducer (e.g., speaker 228), and monitors one or more microphones or transducers (e.g., microphone 230) to detect echoes of the emitted sound(s). (In some implementations, the device emits inaudible signals, such as sound outside of the human hearing range.) From the echoes, the device determines characteristics of the surrounding environment. For example, a relatively large environment (e.g., a room or a vehicle) will reflect the sound differently than a relatively small, enclosed environment (e.g., a pocket, purse, bag, a drawer, or the like).
In some implementations, the voice trigger system 400 operates differently if it is near other devices (such as other devices that have voice triggers and/or speech-based services) than if it is not near other devices. This may be useful, for example, to shut down or decrease the sensitivity of the voice trigger system 400 when many devices are close together so that if one person utters a trigger word, other surrounding devices are not triggered as well. In some implementations, a device determines proximity to other devices using RFID, near-field communications, infrared/acoustic signals, or the like.
As noted above, voice triggers are particularly useful when a device is being operated in a hands-free mode, such as when the user is driving. In such cases, users often use external audio systems, such as wired or wireless headsets, watches with speakers and/or microphones, a vehicle's built-in microphones and speakers, etc., to free themselves from having to hold a device near their face to make a call or dictate text inputs. For example, wireless headsets and vehicle audio systems may connect to an electronic device using BLUETOOTH communications, or any other appropriate wireless communication. However, it may be inefficient for a voice trigger to monitor audio received via a wireless audio accessory because of the power required to maintain an open audio channel with the wireless accessory. In particular, a wireless headset may hold enough charge in its battery to provide a few hours of continuous talk-time, and it is therefore preferable to reserve the battery for when the headset is needed for actual communication, instead of using it to simply monitor ambient audio and wait for a possible trigger sound. Moreover, wired external headset accessories may require significantly more power than on-board microphones alone, and keeping the headset microphone active will deplete the device's battery charge. This is especially true considering that the ambient audio received by the wireless or wired headset will typically consist mostly of silence or irrelevant sounds. Thus, in some implementations, the voice trigger system 400 monitors audio from the microphone 230 on the device even when the device is coupled to an external microphone (wired or wireless). Then, when the voice trigger detects the trigger word, the device initializes an active audio link with the external microphone in order to receive subsequent sound inputs (such as a command to a voice-based digital assistant) via the external microphone rather than the on-device microphone 230.
When certain conditions are met, though, an active communication link can be maintained between an external audio system 416 (which may be communicatively coupled to the device 104 via wires or wirelessly) and the device so that the voice trigger system 400 can listen for a trigger sound via the external audio system 416 instead of (or in addition to) the on-device microphone 230. For example, in some implementations, characteristics of the motion of the electronic device and/or the external audio system 416 (e.g., as determined by accelerometers, gyroscopes, etc. on the respective devices) are used to determine whether the voice trigger system 400 should monitor ambient sound using the on-device microphone 230 or an external microphone 418. Specifically, the difference between the motion of the device and the external audio system 416 provides information about whether the external audio system 416 is actually in use. For example, if both the device and a wireless headset are moving (or not moving) substantially identically, it may be determined that the headset is not in use or is not being worn. This may occur, for example, because both devices are near to each other and idle (e.g., sitting on a table or stored in a pocket, bag, purse, drawer, etc.). Accordingly, under these conditions, the voice trigger system 400 monitors the on-device microphone, because it is unlikely that the headset is actually being used. If there is a difference in motion between the wireless headset and the device, however, it is determined that the headset is being worn by a user. These conditions may occur, for example, because the device has been set down (e.g., on a surface or in a bag), while the headset is being worn on the user's head (which will likely move at least a small amount, even when the wearer is relatively still). Under these conditions, because it is likely that the headset is being worn, the voice trigger system 400 maintains an active communication link and monitors the microphone 418 of the headset instead of (or in addition to) the on-device microphone 230. And because this technique focuses on the difference in the motion of the device and the headset, motion that is common to both devices can be canceled out. This may be useful, for example, when a user is using a headset in a moving vehicle, where the device (e.g., a cellular phone) is resting in a cup holder, empty seat, or in the user's pocket, and the headset is worn on the user's head. Once the motion that is common to both devices is cancelled out (e.g., the vehicle's motion), the relative motion of the headset as compared to the device (if any) can be determined in order to determine whether the headset is likely in use (or, whether the headset is not being worn). While the above discussion refers to wireless headsets, similar techniques are applied to wired headsets as well.
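The motion comparison described above can be sketched roughly as follows, where subtracting the device's acceleration trace from the headset's trace cancels motion common to both (e.g., a moving vehicle); the threshold and names are illustrative assumptions.

```python
import numpy as np

def headset_in_use(device_accel, headset_accel, threshold=0.3):
    """Subtract the device's acceleration trace from the headset's trace so that
    motion common to both (e.g., a moving vehicle) cancels out; significant
    residual motion suggests the headset is being worn. Both arguments are
    equal-length sequences of acceleration magnitudes."""
    residual = np.asarray(headset_accel, float) - np.asarray(device_accel, float)
    return float(np.std(residual)) > threshold

# residual ~ 0   -> devices idle together: monitor the on-device microphone 230
# residual large -> headset probably worn: monitor the external microphone 418
```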
Because people's voices vary greatly, it may be necessary or beneficial to tune a voice trigger to improve its accuracy in recognizing the voice of a particular user. Also, people's voices may change over time, for example, because of illnesses, natural voice changes relating to aging or hormonal changes, and the like. Thus, in some implementations, the voice trigger system 400 is able to adapt its voice and/or sound recognition profiles for a particular user or group of users.
As described above, sound detectors (e.g., the sound-type detector 404 and/or the trigger sound detector 406) may be configured to compare a representation of a sound input (e.g., the sound or utterance provided by a user) to one or more reference representations. For example, if an input representation matches the reference representation to a predetermined confidence level, the sound detector will determine that the sound input corresponds to a predetermined type of sound (e.g., the sound-type detector 404), or that the sound input includes predetermined content (e.g., the trigger sound detector 406). In order to tune the voice trigger system 400, in some implementations, the device adjusts the reference representation to which the input representation is compared. In some implementations, the reference representation is adjusted (or created) as part of a voice enrollment or “training” procedure, where a user utters the trigger sound several times so that the device can adjust (or create) the reference representation. The device can then create a reference representation using that person's actual voice.
In some implementations, the device uses trigger sounds that are received under normal use conditions to adjust the reference representation. For example, after a successful voice triggering event (e.g., where the sound input was found to satisfy all of the triggering criteria) the device will use information from the sound input to adjust and/or tune the reference representation. In some implementations, only sound inputs that were determined to satisfy all or some of the triggering criteria with a certain confidence level are used to adjust the reference representation. Thus, when the voice trigger is less confident that a sound input corresponds to or includes a trigger sound, that voice input may be ignored for the purposes of adjusting the reference representation. On the other hand, in some implementations, sound inputs that satisfied the voice trigger system 400 to a lower confidence are used to adjust the reference representation.
In some implementations, the device 104 iteratively adjusts the reference representation (using these or other techniques) as more and more sound inputs are received so that slight changes in a user's voice over time can be accommodated. For example, in some implementations, the device 104 (and/or associated devices or services) adjusts the reference representation after each successful triggering event. In some implementations, the device 104 analyzes the sound input associated with each successful triggering event and determines if the reference representations should be adjusted based on that input (e.g., if certain conditions are met), and only adjusts the reference representation if it is appropriate to do so. In some implementations, the device 104 maintains a moving average of the reference representation over time.
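For example, an exponential moving-average adjustment of the reference representation might look like the sketch below; the weighting and confidence values are illustrative assumptions, not parameters of any described implementation.

```python
import numpy as np

def adapt_reference(reference, new_input, weight=0.1, confidence=1.0, min_confidence=0.9):
    """After a successful, high-confidence triggering event, nudge the stored
    reference representation toward the new input (an exponential moving average)
    so the trigger tracks gradual changes in the user's voice."""
    reference = np.asarray(reference, dtype=float)
    if confidence < min_confidence:
        return reference                       # low-confidence inputs are ignored
    new_input = np.asarray(new_input, dtype=float)
    return (1.0 - weight) * reference + weight * new_input
```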
In some implementations, the voice trigger system 400 detects sounds that do not satisfy one or more of the triggering criteria (e.g., as determined by one or more of the sound detectors), but that may actually be attempts by an authorized user to do so. For example, voice trigger system 400 may be configured to respond to a trigger phrase such as “Hey, SIRI”, but if a user's voice has changed (e.g., due to sickness, age, accent/inflection changes, etc.), the voice trigger system 400 may not recognize the user's attempt to activate the device. (This may also occur when the voice trigger system 400 has not been properly tuned for that user's particular voice, such as when the voice trigger system 400 is set to default conditions and/or the user has not performed an initialization or training procedure to customize the voice trigger system 400 for his or her voice.) If the voice trigger system 400 does not respond to the user's first attempt to activate the voice trigger, the user is likely to repeat the trigger phrase. The device detects that these repeated sound inputs are similar to one another, and/or that they are similar to the trigger phrase (though not similar enough to cause the voice trigger system 400 to activate the speech-based service). If such conditions are met, the device determines that the sound inputs correspond to valid attempts to activate the voice trigger system 400. Accordingly, in some implementations, the voice trigger system 400 uses those received sound inputs to adjust one or more aspects of the voice trigger system 400 so that similar utterances by the user will be accepted as valid triggers in the future. In some implementations, these sound inputs are used to adapt the voice trigger system 400 only if certain conditions or combinations of conditions are met. For example, in some implementations, the sound inputs are used to adapt the voice trigger system 400 when a predetermined number of sound inputs are received in succession (e.g., 2, 3, 4, 5, or any other appropriate number), when the sound inputs are sufficiently similar to the reference representation, when the sound inputs are sufficiently similar to each other, when the sound inputs are close together (e.g., when they are received within a predetermined time period and/or at or near a predetermined interval), and/or any combination of these or other conditions.
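One way to express such a combination of conditions is sketched below; the counts, similarity thresholds, and time window are illustrative assumptions, and similarity() is a placeholder for whatever comparison the sound detectors use.

```python
def looks_like_repeated_attempts(inputs, reference, similarity, min_count=3,
                                 to_each_other=0.8, to_reference=0.6,
                                 max_gap_seconds=5.0):
    """Decide whether a burst of rejected sound inputs was really the user retrying
    the trigger phrase: several inputs close together in time, similar to one
    another, and moderately similar to the reference. `inputs` is a list of
    (timestamp, representation) in ascending time order; `similarity(a, b)`
    returns a score in [0, 1]."""
    if len(inputs) < min_count:
        return False
    times = [t for t, _ in inputs]
    if any(t2 - t1 > max_gap_seconds for t1, t2 in zip(times, times[1:])):
        return False
    reps = [r for _, r in inputs]
    close_to_each_other = all(similarity(a, b) >= to_each_other
                              for a, b in zip(reps, reps[1:]))
    close_to_reference = all(similarity(r, reference) >= to_reference for r in reps)
    return close_to_each_other and close_to_reference
```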
In some cases, the voice trigger system 400 may detect one or more sound inputs that do not satisfy one or more of the triggering criteria, followed by a manual initiation of the speech-based service (e.g., by pressing a button or icon). In some implementations, the voice trigger system 400 determines that, because the speech-based service was initiated shortly after the sound inputs were received, the sound inputs actually corresponded to failed voice triggering attempts. Accordingly, the voice trigger system 400 uses those received sound inputs to adjust one or more aspects of the voice trigger system 400 so that utterances by the user will be accepted as valid triggers in the future, as described above.
While the adaptation techniques described above refer to adjusting a reference representation, other aspects of the trigger sound detecting techniques may be adjusted in the same or similar manner in addition to or instead of adjusting the reference representation. For example, in some implementations, the device adjusts how sound inputs are filtered and/or what filters are applied to sound inputs, such as to focus on and/or eliminate certain frequencies or ranges of frequencies of a sound input. In some implementations, the device adjusts an algorithm that is used to compare the input representation with the reference representation. For example, in some implementations, one or more terms of a mathematical function used to determine the difference between an input representation and a reference representation are changed, added, or removed, or a different mathematical function is substituted.
In some implementations, adaptation techniques such as those described above require more resources than the voice trigger system 400 is able to or is configured to provide. In particular, the sound detectors may not have, or have access to, the amount or the types of processors, data, or memory that are necessary to perform the iterative adaptation of a reference representation and/or a sound detection algorithm (or any other appropriate aspect of the voice trigger system 400). Thus, in some implementations, one or more of the above described adaptation techniques are performed by a more powerful processor, such as an application processor (e.g., the processor(s) 204), or by a different device (e.g., the server system 108). However, the voice trigger system 400 is designed to operate even when the application processor is in a standby mode. Thus, the sound inputs which are to be used to adapt the voice trigger system 400 are received when the application processor is not active and cannot process the sound input. Accordingly, in some implementations, the sound input is stored by the device so that it can be further processed and/or analyzed after it is received. In some implementations, the sound input is stored in the memory buffer 414 of the audio subsystem 226. In some implementations, the sound input is stored in system memory (e.g., memory 250,
In some implementations, the electronic device determines whether the sound input satisfies a predetermined condition (504). In some implementations, the electronic device applies time-domain analysis to the sound input to determine whether the sound input satisfies the predetermined condition. For example, the electronic device analyzes the sound input over a time period in order to determine whether the sound amplitude reaches a predetermined level. In some implementations, the threshold is satisfied if the amplitude (e.g., the volume) of the sound input meets and/or exceeds a predetermined threshold. In some implementations, it is satisfied if the sound input meets and/or exceeds a predetermined threshold for a predetermined amount of time. As discussed in more detail below, in some implementations, determining whether the sound input satisfies the predetermined condition (504) is performed by a third sound detector (e.g., the noise detector 402). (The term “third sound detector” is used here to differentiate this sound detector from other sound detectors (e.g., the first and second sound detectors discussed below), and does not necessarily indicate any operational position or order of the sound detectors.)
The electronic device determines whether the sound input corresponds to a predetermined type of sound (506). As noted above, sounds are categorized as different “types” based on certain identifiable characteristics of the sounds. Determining whether the sound input corresponds to a predetermined type includes determining whether the sound input includes or exhibits the characteristics of a particular type. In some implementations, the predetermined type of sound is a human voice. In such implementations, determining whether the sound input corresponds to a human voice includes determining whether the sound input includes frequencies characteristic of a human voice (508). As discussed in more detail below, in some implementations, determining whether the sound input corresponds to a predetermined type of sound (506) is performed by a first sound detector (e.g., the sound-type detector 404).
Upon a determination that the sound input corresponds to the predetermined type of sound, the electronic device determines whether the sound input includes predetermined content (510). In some implementations, the predetermined content corresponds to one or more predetermined phonemes (512). In some implementations, the one or more predetermined phonemes constitute at least one word. In some implementations, the predetermined content is a sound (e.g., a whistle, click, or clap). In some implementations, as discussed below, determining whether the sound input includes predetermined content (510) is performed by a second sound detector (e.g., the trigger sound detector 406).
Upon a determination that the sound input includes the predetermined content, the electronic device initiates a speech-based service (514). In some implementations, the speech-based service is a voice-based digital assistant, as described in detail above. In some implementations, the speech-based service is a dictation service in which speech inputs are converted into text and included in and/or displayed in a text input field (e.g., of an email, text message, word processing or note-taking application, etc.). In implementations where the speech-based service is a voice-based digital assistant, once the voice-based digital assistant is initiated, a prompt is issued to the user (e.g., a sound or a speech prompt) indicating that the user may provide a voice input and/or command to the digital assistant. In some implementations, initiating the voice-based digital assistant includes activating an application processor (e.g., the processor(s) 204,
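The overall flow of steps (504) through (514) can be pictured with the sketch below, in which each stage runs only if the previous, lower-power stage was satisfied; the detector callables are trivial stand-ins for the modules described above, not their actual implementations.

```python
def method_500(sound_input, noise_detector, sound_type_detector,
               trigger_sound_detector, speech_based_service):
    """Illustrative flow of steps (504)-(514): each stage runs only if the
    previous, cheaper stage was satisfied. The detector arguments are
    placeholders for the modules described above."""
    if not noise_detector(sound_input):          # (504) amplitude/duration condition
        return
    if not sound_type_detector(sound_input):     # (506)/(508) e.g., human voice?
        return
    if not trigger_sound_detector(sound_input):  # (510)/(512) predetermined content?
        return
    speech_based_service(sound_input)            # (514) initiate the assistant

# Example wiring with trivial stand-ins:
method_500("hey siri",
           noise_detector=lambda s: len(s) > 0,
           sound_type_detector=lambda s: any(c.isalpha() for c in s),
           trigger_sound_detector=lambda s: "hey siri" in s.lower(),
           speech_based_service=lambda s: print("assistant activated"))
```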
In some implementations, the electronic device determines whether the sound input corresponds to a voice of a particular user (516). For example, one or more voice authentication techniques are applied to the sound input to determine whether it corresponds to the voice of an authorized user of the device. Voice authentication techniques are described in greater detail above. In some implementations, voice authentication is performed by one of the sound detectors (e.g., the trigger sound detector 406). In some implementations, voice authentication is performed by a dedicated voice authentication module (including any appropriate hardware and/or software).
In some implementations, the speech-based service is initiated in response to a determination that the sound input includes the predetermined content and the sound input corresponds to the voice of the particular user. Thus, for example, the speech-based service (e.g., a voice-based digital assistant) will only be initiated when the trigger word or phrase is spoken by an authorized user. This reduces the possibility that the service can be invoked by an unauthorized user, and may be particularly useful when multiple electronic devices are in close proximity, as one user's utterance of a trigger sound will not activate another user's voice trigger.
In some implementations, where the speech-based service is a voice-based digital assistant, in response to determining that the sound input includes the predetermined content but does not correspond to the voice of the particular user, the voice-based digital assistant is initiated in a limited access mode. In some implementations, the limited access mode allows the digital assistant to access only a subset of the data, services, and/or functionality that the digital assistant can otherwise provide. In some implementations, the limited access mode corresponds to a write-only mode (e.g., so that an unauthorized user of the digital assistant cannot access data from calendars, task lists, contacts, photographs, emails, text messages, etc.). In some implementations, the limited access mode corresponds to a sandboxed instance of a speech-based service, so that the speech-based service will not read from or write to a user's data, such as user data 266 on the device 104 (
In some implementations, in response to a determination that the sound input includes the predetermined content and the sound input corresponds to the voice of the particular user, the voice-based digital assistant outputs a prompt including a name of the particular user. For example, when a particular user is identified via voice authentication, the voice-based digital assistant may output a prompt such as “What can I help you with, Peter?”, instead of a more generic prompt such as a tone, beep, or non-personalized voice prompt.
As noted above, in some implementations, a first sound detector determines whether the sound input corresponds to a predetermined type of sound (at step 506), and a second sound detector determines whether the sound input includes the predetermined content (at step 510). In some implementations, the first sound detector consumes less power while operating than the second sound detector, for example, because the first sound detector uses a less processor-intensive technique than the second sound detector. In some implementations, the first sound detector is the sound-type detector 404, and the second sound detector is the trigger sound detector 406, both of which are discussed above with respect to
In some implementations, the first and/or the second sound detector performs frequency-domain analysis of the sound input. For example, these sound detectors perform a Laplace, Z-, or Fourier transform to generate a frequency spectrum or to determine the spectral density of the sound input or a portion thereof. In some implementations, the first sound detector is a voice-activity detector that is configured to determine whether the sound input includes frequencies that are characteristic of a human voice (or other features, aspects, or properties of the sound input that are characteristic of a human voice).
In some implementations, the second sound detector is off or inactive until the first sound detector detects a sound input of the predetermined type. Accordingly, in some implementations, the method 500 includes initiating the second sound detector in response to determining that the sound input corresponds to the predetermined type. (In other implementations, the second sound detector is initiated in response to other conditions, or is continuously operated regardless of a determination from the first sound detector.) In some implementations, initiating the second sound detector includes activating hardware and/or software (including, for example, circuits, processors, programs, memory, etc.).
In some implementations, the second sound detector is operated (e.g., is active and is monitoring an audio channel) for at least a predetermined amount of time after it is initiated. For example, when the first sound detector determines that the sound input corresponds to a predetermined type (e.g., includes a human voice), the second sound detector is operated in order to determine if the sound input also includes the predetermined content (e.g., the trigger word). In some implementations, the predetermined amount of time corresponds to a duration of the predetermined content. Thus, if the predetermined content is the phrase “Hey, SIRI,” the predetermined amount of time will be long enough to determine if that phrase was uttered (e.g., 1 or 2 seconds, or any other appropriate duration). If the predetermined content is longer, such as the phrase “Hey, SIRI, please wake up and help me out,” the predetermined time will be longer (e.g., 5 seconds, or another appropriate duration). In some implementations, the second sound detector operates as long as the first sound detector detects sound corresponding to the predetermined type. In such implementations, for example, as long as the first sound detector detects human speech in a sound input, the second sound detector will process the sound input to determine if it includes the predetermined content.
As noted above, in some implementations, a third sound detector (e.g., the noise detector 402) determines whether the sound input satisfies a predetermined condition (at step 504). In some implementations, the third sound detector consumes less power while operating than the first sound detector. In some implementations, the third sound detector periodically monitors an audio channel according to a duty cycle, as discussed above with respect to
Similar to the discussion above with respect to initiating the second sound detector (e.g., a trigger sound detector 406) in response to a determination by the first sound detector (e.g., the sound-type detector 404), in some implementations, the first sound detector is initiated in response to a determination by the third sound detector (e.g., the noise detector 402). For example, in some implementations, the sound-type detector 404 is initiated in response to a determination by the noise detector 402 that the sound input satisfies a predetermined condition (e.g., is above a certain volume for a sufficient duration). In some implementations, initiating the first sound detector includes activating hardware and/or software (including, for example, circuits, processors, programs, memory, etc.). In other implementations, the first sound detector is initiated in response to other conditions, or is continuously operated.
In some implementations, the device stores at least a portion of the sound input in memory (518). In some implementations, the memory is the buffer 414 of the audio subsystem 226 (
In various implementations, steps (516)-(520) are performed at different positions within the method 500. For example, in some implementations, one or more of steps (516)-(520) are performed between steps (502) and (504), between steps (510) and (514), or at any other appropriate position.
Upon a determination that the electronic device is in the predetermined orientation, the electronic device activates a predetermined mode of a voice trigger (604). In some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing down, and the predetermined mode is a standby mode (606). For example, in some implementations, if a smartphone or tablet is placed on a table or desk so that the screen is facing down, the voice trigger is placed in a standby mode (e.g., turned off) to prevent inadvertent activation of the voice trigger.
On the other hand, in some implementations, the predetermined orientation corresponds to a display screen of the device being substantially horizontal and facing up, and the predetermined mode is a listening mode (608). Thus, for example, if a smartphone or tablet is placed on a table or desk so that the screen is facing up, the voice trigger is placed in a listening mode so that it can respond to the user when it detects the trigger.
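A compact way to express this face-down/face-up policy is shown below; the gravity-vector reading and the threshold are assumptions for illustration, not values from the disclosure.

```python
def voice_trigger_mode_for_orientation(gravity_z, flat_threshold=0.9):
    """Map device orientation to a voice trigger mode.

    gravity_z: normalized gravity along the screen normal
               (+1.0 ~ screen facing up, -1.0 ~ screen facing down).
    """
    if gravity_z <= -flat_threshold:
        return "standby"     # substantially horizontal, screen facing down
    if gravity_z >= flat_threshold:
        return "listening"   # substantially horizontal, screen facing up
    return "unchanged"       # not in a predetermined orientation


assert voice_trigger_mode_for_orientation(-0.97) == "standby"
assert voice_trigger_mode_for_orientation(0.95) == "listening"
assert voice_trigger_mode_for_orientation(0.2) == "unchanged"
```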
The electronic device determines whether it is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded (704). In some implementations, a substantially enclosed space includes a pocket, purse, bag, drawer, glovebox, briefcase, or the like.
As described above, in some implementations, a device detects that a microphone is occluded by emitting one or more sounds (e.g., tones, clicks, pings, etc.) from a speaker or transducer, and monitoring one or more microphones or transducers to detect echoes of the emitted sound(s). For example, a relatively large environment (e.g., a room or a vehicle) will reflect the sound differently than a relatively small, substantially enclosed environment (e.g., a purse or pocket). Thus, if the device detects that the microphone (or the speaker that emitted the sounds) is occluded based on the echoes (or lack thereof), the device determines that it is in a substantially enclosed space. In some implementations, the device detects that a microphone is occluded by detecting that the microphone is picking up a sound characteristic of an enclosed space. For example, when a device is in a pocket, the microphone may detect a characteristic rustling noise due to the microphone coming into contact with, or into close proximity to, the fabric of the pocket.
In some implementations, a device detects that a camera is occluded based on the level of light received by a sensor, or by determining whether it can achieve a focused image. For example, if a camera sensor detects a low level of light during a time when a high level of light would be expected (e.g., during daylight hours), then the device determines that the camera is occluded, and that the device is in a substantially enclosed space. As another example, the camera may attempt to achieve an in-focus image on its sensor. Usually, this will be difficult if the camera is in an extremely dark place (e.g., a pocket or backpack), or if it is too close to the object on which it is attempting to focus (e.g., the inside of a purse or backpack). Thus, if the camera is unable to achieve an in-focus image, it determines that the device is in a substantially enclosed space.
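Combining the microphone and camera cues described above might look like the following sketch; all thresholds and helper names are hypothetical and are used only to illustrate the decision logic.

```python
def microphone_occluded(echo_decay_ms, rustling_score,
                        small_space_decay_ms=5.0, rustle_threshold=0.7):
    """Echoes that die off almost immediately, or strong fabric rustle, suggest occlusion."""
    return echo_decay_ms < small_space_decay_ms or rustling_score > rustle_threshold


def camera_occluded(lux, focus_achieved, expected_daylight, dark_lux=5.0):
    """Very low light when light is expected, or failure to focus, suggests occlusion."""
    unexpectedly_dark = expected_daylight and lux < dark_lux
    return unexpectedly_dark or not focus_achieved


def in_substantially_enclosed_space(mic_occluded, cam_occluded):
    # The method only requires that one or more of the sensors be occluded.
    return mic_occluded or cam_occluded


mic = microphone_occluded(echo_decay_ms=2.0, rustling_score=0.1)
cam = camera_occluded(lux=1.0, focus_achieved=False, expected_daylight=True)
print(in_substantially_enclosed_space(mic, cam))   # True -> e.g., a pocket or purse
```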
Upon a determination that the electronic device is in a substantially enclosed space, the electronic device switches the voice trigger to a second mode (706). In some implementations, the second mode is a standby mode (708). In some implementations, when in the standby mode, the voice trigger system 400 will continue to monitor ambient audio, but will not respond to received sounds regardless of whether they would otherwise trigger the voice trigger system 400. In some implementations, in the standby mode, the voice trigger system 400 is deactivated, and does not process audio to detect trigger sounds. In some implementations, the second mode includes operating one or more sound detectors of a voice trigger system 400 according to a different duty cycle than the first mode. In some implementations, the second mode includes operating a different combination of sound detectors than the first mode.
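The second-mode variants listed here could be represented as alternative configurations applied to the voice trigger system; the sketch below is a hypothetical representation of those alternatives, not the disclosed design.

```python
from dataclasses import dataclass


@dataclass
class VoiceTriggerConfig:
    respond_to_triggers: bool    # False = monitor ambient audio but never fire
    processing_enabled: bool     # False = deactivated; no audio processed at all
    duty_cycle_period_s: float   # monitoring cadence for duty-cycled detectors
    detectors: tuple             # which sound detectors are operated


FIRST_MODE = VoiceTriggerConfig(True, True, 0.5, ("noise", "sound-type", "trigger"))

# Possible second modes when the device is in a substantially enclosed space:
STANDBY_STILL_MONITORING = VoiceTriggerConfig(False, True, 0.5, ("noise",))
STANDBY_DEACTIVATED = VoiceTriggerConfig(False, False, 0.0, ())
DIFFERENT_DUTY_CYCLE = VoiceTriggerConfig(True, True, 2.0, ("noise", "trigger"))
```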
In some implementations, the second mode corresponds to a more sensitive monitoring mode, so that the voice trigger system 400 can detect and respond to a trigger sound even though it is in a substantially enclosed space.
In some implementations, once the voice trigger is switched to the second mode, the device periodically determines whether the electronic device is still in a substantially enclosed space by detecting whether one or more of a microphone and a camera of the electronic device is occluded (e.g., using any of the techniques described above with respect to step (704)). If the device remains in a substantially enclosed space, the voice trigger system 400 will be kept in the second mode. In some implementations, if the device is removed from a substantially enclosed space, the electronic device will return the voice trigger to the first mode.
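The periodic re-check could be a simple polling loop, as in the sketch below; the polling interval, bounded loop, and helper names are assumptions for illustration.

```python
import time


def maintain_enclosure_mode(is_enclosed, set_mode,
                            poll_interval_s=1.0, max_checks=5):
    """Periodically re-test whether the device is still in a substantially
    enclosed space and restore the first mode once it is not."""
    for _ in range(max_checks):      # bounded only to keep this sketch finite
        time.sleep(poll_interval_s)
        if is_enclosed():
            set_mode("second")       # e.g., remain in standby / adjusted duty cycle
        else:
            set_mode("first")        # e.g., return to the normal listening mode
            return


readings = iter([True, True, False])
maintain_enclosure_mode(lambda: next(readings, False),
                        lambda mode: print("voice trigger mode:", mode),
                        poll_interval_s=0.01)
```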
In accordance with some implementations, an electronic device includes a processing unit 806. The processing unit 806 is configured to: determine whether at least a portion of the sound input corresponds to a predetermined type of sound (e.g., with the sound type detecting unit 810); upon a determination that at least a portion of the sound input corresponds to the predetermined type, determine whether the sound input includes predetermined content (e.g., with the trigger sound detecting unit 812); and upon a determination that the sound input includes the predetermined content, initiate a speech-based service (e.g., with the service initiating unit 814).
In some implementations, the processing unit 806 is also configured to, prior to determining whether the sound input corresponds to a predetermined type of sound, determine whether the sound input satisfies a predetermined condition (e.g., with the noise detecting unit 808). In some implementations, the processing unit 806 is also configured to determine whether the sound input corresponds to a voice of a particular user (e.g., with the voice authenticating unit 816).
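Read as software, the functional units recited above amount to a processing unit that composes the detecting units; the following composition is a hypothetical illustration under assumed names, not the claimed apparatus.

```python
class ProcessingUnit:
    """Composes the detecting units recited for the electronic device."""

    def __init__(self, noise_unit, sound_type_unit, trigger_unit,
                 service_unit, voice_auth_unit=None):
        self.noise_unit = noise_unit              # cf. noise detecting unit 808
        self.sound_type_unit = sound_type_unit    # cf. sound type detecting unit 810
        self.trigger_unit = trigger_unit          # cf. trigger sound detecting unit 812
        self.service_unit = service_unit          # cf. service initiating unit 814
        self.voice_auth_unit = voice_auth_unit    # cf. voice authenticating unit 816

    def handle(self, sound_input):
        if not self.noise_unit(sound_input):
            return
        if not self.sound_type_unit(sound_input):
            return
        if self.voice_auth_unit and not self.voice_auth_unit(sound_input):
            return
        if self.trigger_unit(sound_input):
            self.service_unit()


pu = ProcessingUnit(
    noise_unit=lambda s: s["loud"],
    sound_type_unit=lambda s: s["voice"],
    trigger_unit=lambda s: s["trigger"],
    service_unit=lambda: print("speech-based service initiated"),
    voice_auth_unit=lambda s: s["authorized"],
)
pu.handle({"loud": True, "voice": True, "trigger": True, "authorized": True})
```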
In accordance with some implementations, an electronic device includes a processing unit 906. In some implementations, the processing unit 906 is configured to: determine whether the electronic device is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded (e.g., with the environment detecting unit 908); and upon a determination that the electronic device is in a substantially enclosed space, switch the voice trigger to a second mode (e.g., with the mode switching unit 910).
In some implementations, the processing unit is configured to: determine whether the electronic device is in a predetermined orientation (e.g., with the environment detecting unit 908); and upon a determination that the electronic device is in the predetermined orientation, activate a predetermined mode of a voice trigger (e.g., with the mode switching unit 910).
In accordance with some implementations, an electronic device includes a processing unit 1006. The processing unit 1006 is configured to: determine whether the electronic device is in a substantially enclosed space by detecting that one or more of a microphone and a camera of the electronic device is occluded (e.g., with the environment detecting unit 1008); and upon a determination that the electronic device is in a substantially enclosed space, switch the voice trigger to a second mode (e.g., with the mode switching unit 1010).
The foregoing description has, for purposes of explanation, been given with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosed implementations to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles and practical applications of the disclosed ideas, to thereby enable others skilled in the art to best utilize them with various modifications as are suited to the particular use contemplated.
It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first sound detector could be termed a second sound detector, and, similarly, a second sound detector could be termed a first sound detector, without changing the meaning of the description, so long as all occurrences of the “first sound detector” are renamed consistently and all occurrences of the “second sound detector” are renamed consistently. The first sound detector and the second sound detector are both sound detectors, but they are not the same sound detector.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “upon a determination that” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
This Application is a continuation of U.S. application Ser. No. 14/175,864, filed on Feb. 7, 2014, entitled VOICE TRIGGER FOR A DIGITAL ASSISTANT, which claims the benefit of U.S. Provisional Application No. 61/762,260, filed on Feb. 7, 2013, entitled VOICE TRIGGER FOR A DIGITAL ASSISTANT. The contents of each of these applications are hereby incorporated by reference in their entireties for all purposes.
20100225599 | Danielsson et al. | Sep 2010 | A1 |
20100225809 | Connors et al. | Sep 2010 | A1 |
20100227642 | Kim et al. | Sep 2010 | A1 |
20100228540 | Bennett | Sep 2010 | A1 |
20100228549 | Herman et al. | Sep 2010 | A1 |
20100228691 | Yang et al. | Sep 2010 | A1 |
20100229082 | Karmarkar et al. | Sep 2010 | A1 |
20100229100 | Miller et al. | Sep 2010 | A1 |
20100231474 | Yamagajo et al. | Sep 2010 | A1 |
20100235167 | Bourdon | Sep 2010 | A1 |
20100235341 | Bennett | Sep 2010 | A1 |
20100235729 | Kocienda et al. | Sep 2010 | A1 |
20100235732 | Bergman | Sep 2010 | A1 |
20100235770 | Ording et al. | Sep 2010 | A1 |
20100235780 | Westerman et al. | Sep 2010 | A1 |
20100241418 | Maeda et al. | Sep 2010 | A1 |
20100250542 | Fujimaki | Sep 2010 | A1 |
20100250599 | Schmidt et al. | Sep 2010 | A1 |
20100255858 | Juhasz | Oct 2010 | A1 |
20100257160 | Cao | Oct 2010 | A1 |
20100257478 | Longe et al. | Oct 2010 | A1 |
20100262599 | Nitz | Oct 2010 | A1 |
20100263015 | Pandey et al. | Oct 2010 | A1 |
20100268537 | Al-Telmissani | Oct 2010 | A1 |
20100268539 | Xu et al. | Oct 2010 | A1 |
20100269040 | Lee | Oct 2010 | A1 |
20100274753 | Liberty et al. | Oct 2010 | A1 |
20100277579 | Cho et al. | Nov 2010 | A1 |
20100278320 | Arsenault et al. | Nov 2010 | A1 |
20100278453 | King | Nov 2010 | A1 |
20100280983 | Cho et al. | Nov 2010 | A1 |
20100281034 | Petrou et al. | Nov 2010 | A1 |
20100286984 | Wandinger et al. | Nov 2010 | A1 |
20100286985 | Kennewick et al. | Nov 2010 | A1 |
20100287514 | Cragun et al. | Nov 2010 | A1 |
20100290632 | Lin | Nov 2010 | A1 |
20100293460 | Budelli | Nov 2010 | A1 |
20100295645 | Falldin et al. | Nov 2010 | A1 |
20100299133 | Kopparapu et al. | Nov 2010 | A1 |
20100299138 | Kim | Nov 2010 | A1 |
20100299142 | Freeman et al. | Nov 2010 | A1 |
20100302056 | Dutton et al. | Dec 2010 | A1 |
20100304342 | Zilber | Dec 2010 | A1 |
20100304705 | Hursey et al. | Dec 2010 | A1 |
20100305807 | Basir et al. | Dec 2010 | A1 |
20100305947 | Schwarz et al. | Dec 2010 | A1 |
20100312547 | Van Os et al. | Dec 2010 | A1 |
20100312566 | Odinak et al. | Dec 2010 | A1 |
20100318366 | Sullivan et al. | Dec 2010 | A1 |
20100318576 | Kim | Dec 2010 | A1 |
20100322438 | Siotis | Dec 2010 | A1 |
20100324709 | Starmen | Dec 2010 | A1 |
20100324895 | Kurzweil et al. | Dec 2010 | A1 |
20100324896 | Attwater et al. | Dec 2010 | A1 |
20100324905 | Kurzweil et al. | Dec 2010 | A1 |
20100325131 | Dumais et al. | Dec 2010 | A1 |
20100325158 | Oral et al. | Dec 2010 | A1 |
20100325573 | Estrada et al. | Dec 2010 | A1 |
20100325588 | Reddy et al. | Dec 2010 | A1 |
20100330908 | Maddern et al. | Dec 2010 | A1 |
20100332003 | Yaguez | Dec 2010 | A1 |
20100332220 | Hursey et al. | Dec 2010 | A1 |
20100332224 | Mäkelä et al. | Dec 2010 | A1 |
20100332235 | David | Dec 2010 | A1 |
20100332236 | Tan | Dec 2010 | A1 |
20100332280 | Bradley et al. | Dec 2010 | A1 |
20100332348 | Cao | Dec 2010 | A1 |
20100332428 | Mchenry et al. | Dec 2010 | A1 |
20100332976 | Fux et al. | Dec 2010 | A1 |
20100333030 | Johns | Dec 2010 | A1 |
20100333163 | Daly | Dec 2010 | A1 |
20110002487 | Panther et al. | Jan 2011 | A1 |
20110004475 | Bellegarda | Jan 2011 | A1 |
20110006876 | Moberg et al. | Jan 2011 | A1 |
20110009107 | Guba et al. | Jan 2011 | A1 |
20110010178 | Lee et al. | Jan 2011 | A1 |
20110010644 | Merrill et al. | Jan 2011 | A1 |
20110015928 | Odell et al. | Jan 2011 | A1 |
20110016150 | Engstrom et al. | Jan 2011 | A1 |
20110016421 | Krupka et al. | Jan 2011 | A1 |
20110018695 | Bells et al. | Jan 2011 | A1 |
20110021211 | Ohki | Jan 2011 | A1 |
20110021213 | Carr | Jan 2011 | A1 |
20110022292 | Shen et al. | Jan 2011 | A1 |
20110022388 | Wu et al. | Jan 2011 | A1 |
20110022393 | Waller et al. | Jan 2011 | A1 |
20110022394 | Wide et al. | Jan 2011 | A1 |
20110022472 | Zon et al. | Jan 2011 | A1 |
20110022952 | Wu et al. | Jan 2011 | A1 |
20110029616 | Wang et al. | Feb 2011 | A1 |
20110030067 | Wilson | Feb 2011 | A1 |
20110033064 | Johnson et al. | Feb 2011 | A1 |
20110034183 | Haag et al. | Feb 2011 | A1 |
20110035144 | Okamoto et al. | Feb 2011 | A1 |
20110035434 | Lockwood | Feb 2011 | A1 |
20110038489 | Visser et al. | Feb 2011 | A1 |
20110039584 | Merrett | Feb 2011 | A1 |
20110040707 | Theisen et al. | Feb 2011 | A1 |
20110045841 | Kuhlke et al. | Feb 2011 | A1 |
20110047072 | Ciurea | Feb 2011 | A1 |
20110047149 | Vaananen | Feb 2011 | A1 |
20110047161 | Myaeng et al. | Feb 2011 | A1 |
20110047266 | Yu et al. | Feb 2011 | A1 |
20110047605 | Sontag et al. | Feb 2011 | A1 |
20110050591 | Kim et al. | Mar 2011 | A1 |
20110050592 | Kim et al. | Mar 2011 | A1 |
20110054647 | Chipchase | Mar 2011 | A1 |
20110054894 | Phillips et al. | Mar 2011 | A1 |
20110054901 | Qin et al. | Mar 2011 | A1 |
20110055256 | Phillips et al. | Mar 2011 | A1 |
20110060584 | Ferrucci et al. | Mar 2011 | A1 |
20110060587 | Phillips et al. | Mar 2011 | A1 |
20110060589 | Weinberg et al. | Mar 2011 | A1 |
20110060807 | Martin et al. | Mar 2011 | A1 |
20110064387 | Mendeloff et al. | Mar 2011 | A1 |
20110065456 | Brennan et al. | Mar 2011 | A1 |
20110066366 | Ellanti et al. | Mar 2011 | A1 |
20110066436 | Bezar | Mar 2011 | A1 |
20110066468 | Huang et al. | Mar 2011 | A1 |
20110066634 | Phillips et al. | Mar 2011 | A1 |
20110072033 | White et al. | Mar 2011 | A1 |
20110072492 | Mohler et al. | Mar 2011 | A1 |
20110076994 | Kim et al. | Mar 2011 | A1 |
20110077943 | Miki et al. | Mar 2011 | A1 |
20110080260 | Wang et al. | Apr 2011 | A1 |
20110081889 | Gao et al. | Apr 2011 | A1 |
20110082688 | Kim et al. | Apr 2011 | A1 |
20110083079 | Farrell et al. | Apr 2011 | A1 |
20110087491 | Wittenstein et al. | Apr 2011 | A1 |
20110087685 | Lin et al. | Apr 2011 | A1 |
20110090078 | Kim et al. | Apr 2011 | A1 |
20110092187 | Miller | Apr 2011 | A1 |
20110093261 | Angott | Apr 2011 | A1 |
20110093265 | Stent et al. | Apr 2011 | A1 |
20110093271 | Bernard et al. | Apr 2011 | A1 |
20110093272 | Isobe et al. | Apr 2011 | A1 |
20110099000 | Rai et al. | Apr 2011 | A1 |
20110103682 | Chidlovskii et al. | May 2011 | A1 |
20110105097 | Tadayon et al. | May 2011 | A1 |
20110106736 | Aharonson et al. | May 2011 | A1 |
20110106892 | Nelson et al. | May 2011 | A1 |
20110110502 | Daye et al. | May 2011 | A1 |
20110111724 | Baptiste | May 2011 | A1 |
20110112825 | Bellegarda | May 2011 | A1 |
20110112827 | Kennewick et al. | May 2011 | A1 |
20110112837 | Kurki-Suonio et al. | May 2011 | A1 |
20110112838 | Adibi | May 2011 | A1 |
20110112921 | Kennewick et al. | May 2011 | A1 |
20110116610 | Shaw et al. | May 2011 | A1 |
20110119049 | Ylonen | May 2011 | A1 |
20110119051 | Li et al. | May 2011 | A1 |
20110119623 | Kim | May 2011 | A1 |
20110119715 | Chang et al. | May 2011 | A1 |
20110123004 | Chang et al. | May 2011 | A1 |
20110125498 | Pickering et al. | May 2011 | A1 |
20110125540 | Jang et al. | May 2011 | A1 |
20110125701 | Nair et al. | May 2011 | A1 |
20110130958 | Stahl et al. | Jun 2011 | A1 |
20110131036 | DiCristo et al. | Jun 2011 | A1 |
20110131038 | Oyaizu et al. | Jun 2011 | A1 |
20110131045 | Cristo et al. | Jun 2011 | A1 |
20110137636 | Srihari et al. | Jun 2011 | A1 |
20110137664 | Kho et al. | Jun 2011 | A1 |
20110141141 | Kankainen | Jun 2011 | A1 |
20110143726 | de Silva | Jun 2011 | A1 |
20110143811 | Rodriguez | Jun 2011 | A1 |
20110144857 | Wingrove et al. | Jun 2011 | A1 |
20110144901 | Wang | Jun 2011 | A1 |
20110144973 | Bocchieri et al. | Jun 2011 | A1 |
20110144999 | Jang et al. | Jun 2011 | A1 |
20110145718 | Ketola et al. | Jun 2011 | A1 |
20110151830 | Blanda et al. | Jun 2011 | A1 |
20110153209 | Geelen | Jun 2011 | A1 |
20110153322 | Kwak et al. | Jun 2011 | A1 |
20110153324 | Ballinger et al. | Jun 2011 | A1 |
20110153329 | Moorer | Jun 2011 | A1 |
20110153330 | Yazdani et al. | Jun 2011 | A1 |
20110153373 | Dantzig et al. | Jun 2011 | A1 |
20110154193 | Creutz et al. | Jun 2011 | A1 |
20110157029 | Tseng | Jun 2011 | A1 |
20110161072 | Terao et al. | Jun 2011 | A1 |
20110161076 | Davis et al. | Jun 2011 | A1 |
20110161079 | Gruhn et al. | Jun 2011 | A1 |
20110161309 | Lung et al. | Jun 2011 | A1 |
20110161852 | Vainio et al. | Jun 2011 | A1 |
20110166851 | LeBeau et al. | Jul 2011 | A1 |
20110167350 | Hoellwarth | Jul 2011 | A1 |
20110175810 | Markovic et al. | Jul 2011 | A1 |
20110178804 | Inoue et al. | Jul 2011 | A1 |
20110179002 | Dumitru et al. | Jul 2011 | A1 |
20110179372 | Moore et al. | Jul 2011 | A1 |
20110183627 | Ueda et al. | Jul 2011 | A1 |
20110183650 | Mckee et al. | Jul 2011 | A1 |
20110184721 | Subramanian et al. | Jul 2011 | A1 |
20110184730 | LeBeau et al. | Jul 2011 | A1 |
20110184736 | Slotznick | Jul 2011 | A1 |
20110184737 | Nakano et al. | Jul 2011 | A1 |
20110184768 | Norton et al. | Jul 2011 | A1 |
20110185288 | Gupta et al. | Jul 2011 | A1 |
20110191108 | Friedlander | Aug 2011 | A1 |
20110191271 | Baker et al. | Aug 2011 | A1 |
20110191344 | Jin et al. | Aug 2011 | A1 |
20110195758 | Damale et al. | Aug 2011 | A1 |
20110196670 | Dang et al. | Aug 2011 | A1 |
20110197128 | Assadollahi et al. | Aug 2011 | A1 |
20110199312 | Okuta | Aug 2011 | A1 |
20110201385 | Higginbotham et al. | Aug 2011 | A1 |
20110201387 | Paek et al. | Aug 2011 | A1 |
20110202526 | Lee et al. | Aug 2011 | A1 |
20110205149 | Tom et al. | Aug 2011 | A1 |
20110208511 | Sikstrom et al. | Aug 2011 | A1 |
20110208524 | Haughay | Aug 2011 | A1 |
20110209088 | Hinckley et al. | Aug 2011 | A1 |
20110212717 | Rhoads et al. | Sep 2011 | A1 |
20110216093 | Griffin | Sep 2011 | A1 |
20110218806 | Alewine et al. | Sep 2011 | A1 |
20110218855 | Cao et al. | Sep 2011 | A1 |
20110219018 | Bailey et al. | Sep 2011 | A1 |
20110223893 | Lau et al. | Sep 2011 | A1 |
20110224972 | Millett et al. | Sep 2011 | A1 |
20110228913 | Cochinwala et al. | Sep 2011 | A1 |
20110231182 | Weider et al. | Sep 2011 | A1 |
20110231184 | Kerr | Sep 2011 | A1 |
20110231188 | Kennewick et al. | Sep 2011 | A1 |
20110231218 | Tovar | Sep 2011 | A1 |
20110231432 | Sate et al. | Sep 2011 | A1 |
20110231474 | Locker et al. | Sep 2011 | A1 |
20110238191 | Kristjansson et al. | Sep 2011 | A1 |
20110238407 | Kent | Sep 2011 | A1 |
20110238408 | Larcheveque et al. | Sep 2011 | A1 |
20110238676 | Liu et al. | Sep 2011 | A1 |
20110239111 | Grover | Sep 2011 | A1 |
20110242007 | Gray et al. | Oct 2011 | A1 |
20110244888 | Ohki | Oct 2011 | A1 |
20110246471 | Rakib et al. | Oct 2011 | A1 |
20110249144 | Chang | Oct 2011 | A1 |
20110250570 | Mack et al. | Oct 2011 | A1 |
20110257966 | Rychlik | Oct 2011 | A1 |
20110258188 | Abdalmageed et al. | Oct 2011 | A1 |
20110260829 | Lee | Oct 2011 | A1 |
20110260861 | Singh et al. | Oct 2011 | A1 |
20110264643 | Cao | Oct 2011 | A1 |
20110264999 | Bells et al. | Oct 2011 | A1 |
20110274303 | Filson et al. | Nov 2011 | A1 |
20110276595 | Kirkland et al. | Nov 2011 | A1 |
20110276598 | Kozempel | Nov 2011 | A1 |
20110276944 | Bergman et al. | Nov 2011 | A1 |
20110279368 | Klein et al. | Nov 2011 | A1 |
20110282663 | Talwar et al. | Nov 2011 | A1 |
20110282888 | Koperski et al. | Nov 2011 | A1 |
20110282906 | Wong | Nov 2011 | A1 |
20110283189 | McCarty | Nov 2011 | A1 |
20110288852 | Dymetman et al. | Nov 2011 | A1 |
20110288855 | Roy | Nov 2011 | A1 |
20110288861 | Kurzweil et al. | Nov 2011 | A1 |
20110288863 | Rasmussen | Nov 2011 | A1 |
20110288866 | Rasmussen | Nov 2011 | A1 |
20110298585 | Barry | Dec 2011 | A1 |
20110301943 | Patch | Dec 2011 | A1 |
20110302162 | Xiao et al. | Dec 2011 | A1 |
20110302645 | Headley | Dec 2011 | A1 |
20110306426 | Novak et al. | Dec 2011 | A1 |
20110307241 | Weibel et al. | Dec 2011 | A1 |
20110307254 | Hunt et al. | Dec 2011 | A1 |
20110307491 | Fisk et al. | Dec 2011 | A1 |
20110307810 | Hilerio et al. | Dec 2011 | A1 |
20110313775 | Laligand et al. | Dec 2011 | A1 |
20110313803 | Friend et al. | Dec 2011 | A1 |
20110314003 | Ju et al. | Dec 2011 | A1 |
20110314032 | Bennett et al. | Dec 2011 | A1 |
20110314404 | Kotler et al. | Dec 2011 | A1 |
20110314539 | Horton | Dec 2011 | A1 |
20110320187 | Motik et al. | Dec 2011 | A1 |
20120002820 | Leichter | Jan 2012 | A1 |
20120005602 | Anttila et al. | Jan 2012 | A1 |
20120008754 | Mukherjee et al. | Jan 2012 | A1 |
20120010886 | Razavilar | Jan 2012 | A1 |
20120011138 | Dunning et al. | Jan 2012 | A1 |
20120013609 | Reponen et al. | Jan 2012 | A1 |
20120015629 | Olsen et al. | Jan 2012 | A1 |
20120016658 | Wu et al. | Jan 2012 | A1 |
20120016678 | Gruber et al. | Jan 2012 | A1 |
20120019400 | Patel et al. | Jan 2012 | A1 |
20120020490 | Leichter | Jan 2012 | A1 |
20120022787 | LeBeau et al. | Jan 2012 | A1 |
20120022857 | Baldwin et al. | Jan 2012 | A1 |
20120022860 | Lloyd et al. | Jan 2012 | A1 |
20120022868 | LeBeau et al. | Jan 2012 | A1 |
20120022869 | Lloyd et al. | Jan 2012 | A1 |
20120022870 | Kristjansson et al. | Jan 2012 | A1 |
20120022872 | Gruber et al. | Jan 2012 | A1 |
20120022874 | Lloyd et al. | Jan 2012 | A1 |
20120022876 | LeBeau et al. | Jan 2012 | A1 |
20120022967 | Bachman et al. | Jan 2012 | A1 |
20120023088 | Cheng et al. | Jan 2012 | A1 |
20120023095 | Wadycki et al. | Jan 2012 | A1 |
20120023462 | Rosing et al. | Jan 2012 | A1 |
20120029661 | Jones et al. | Feb 2012 | A1 |
20120029910 | Medlock et al. | Feb 2012 | A1 |
20120034904 | LeBeau et al. | Feb 2012 | A1 |
20120035907 | Lebeau et al. | Feb 2012 | A1 |
20120035908 | Lebeau et al. | Feb 2012 | A1 |
20120035924 | Jitkoff et al. | Feb 2012 | A1 |
20120035925 | Friend et al. | Feb 2012 | A1 |
20120035926 | Ambler | Feb 2012 | A1 |
20120035931 | LeBeau et al. | Feb 2012 | A1 |
20120035932 | Jitkoff et al. | Feb 2012 | A1 |
20120035935 | Park et al. | Feb 2012 | A1 |
20120036556 | LeBeau et al. | Feb 2012 | A1 |
20120039539 | Boiman et al. | Feb 2012 | A1 |
20120041752 | Wang et al. | Feb 2012 | A1 |
20120042014 | Desai et al. | Feb 2012 | A1 |
20120042343 | Laligand et al. | Feb 2012 | A1 |
20120053815 | Montanari et al. | Mar 2012 | A1 |
20120053829 | Agarwal et al. | Mar 2012 | A1 |
20120053945 | Gupta et al. | Mar 2012 | A1 |
20120056815 | Mehra | Mar 2012 | A1 |
20120059655 | Cartales | Mar 2012 | A1 |
20120059813 | Sejnoha et al. | Mar 2012 | A1 |
20120062473 | Xiao et al. | Mar 2012 | A1 |
20120066212 | Jennings | Mar 2012 | A1 |
20120066581 | Spalink | Mar 2012 | A1 |
20120075054 | Ge et al. | Mar 2012 | A1 |
20120077479 | Sabotta et al. | Mar 2012 | A1 |
20120078611 | Soltani et al. | Mar 2012 | A1 |
20120078624 | Yook et al. | Mar 2012 | A1 |
20120078627 | Wagner | Mar 2012 | A1 |
20120078635 | Rothkopf et al. | Mar 2012 | A1 |
20120078747 | Chakrabarti et al. | Mar 2012 | A1 |
20120082317 | Pance et al. | Apr 2012 | A1 |
20120083286 | Kim et al. | Apr 2012 | A1 |
20120084086 | Gilbert et al. | Apr 2012 | A1 |
20120084634 | Wong et al. | Apr 2012 | A1 |
20120088219 | Briscoe et al. | Apr 2012 | A1 |
20120089331 | Schmidt et al. | Apr 2012 | A1 |
20120101823 | Weng et al. | Apr 2012 | A1 |
20120108166 | Hymel | May 2012 | A1 |
20120108221 | Thomas et al. | May 2012 | A1 |
20120110456 | Larco et al. | May 2012 | A1 |
20120116770 | Chen et al. | May 2012 | A1 |
20120117499 | Mori et al. | May 2012 | A1 |
20120124126 | Alcazar et al. | May 2012 | A1 |
20120128322 | Shaffer et al. | May 2012 | A1 |
20120130709 | Bocchieri et al. | May 2012 | A1 |
20120136572 | Norton | May 2012 | A1 |
20120136649 | Freising et al. | May 2012 | A1 |
20120136855 | Ni et al. | May 2012 | A1 |
20120136985 | Popescu et al. | May 2012 | A1 |
20120137367 | Dupont et al. | May 2012 | A1 |
20120149342 | Cohen et al. | Jun 2012 | A1 |
20120149394 | Singh et al. | Jun 2012 | A1 |
20120150544 | McLoughlin et al. | Jun 2012 | A1 |
20120150580 | Norton | Jun 2012 | A1 |
20120158293 | Burnham | Jun 2012 | A1 |
20120158399 | Tremblay et al. | Jun 2012 | A1 |
20120158422 | Burnham et al. | Jun 2012 | A1 |
20120159380 | Kocienda et al. | Jun 2012 | A1 |
20120163710 | Skaff et al. | Jun 2012 | A1 |
20120166196 | Ju et al. | Jun 2012 | A1 |
20120166959 | Hilerio et al. | Jun 2012 | A1 |
20120173222 | Wang et al. | Jul 2012 | A1 |
20120173244 | Kwak et al. | Jul 2012 | A1 |
20120173464 | Tur et al. | Jul 2012 | A1 |
20120174121 | Treat et al. | Jul 2012 | A1 |
20120179457 | Newman et al. | Jul 2012 | A1 |
20120179467 | Williams | Jul 2012 | A1 |
20120185237 | Gajic et al. | Jul 2012 | A1 |
20120185480 | Ni et al. | Jul 2012 | A1 |
20120185781 | Guzman et al. | Jul 2012 | A1 |
20120191461 | Lin et al. | Jul 2012 | A1 |
20120192096 | Bowman et al. | Jul 2012 | A1 |
20120197743 | Grigg et al. | Aug 2012 | A1 |
20120197995 | Caruso | Aug 2012 | A1 |
20120197998 | Kessel et al. | Aug 2012 | A1 |
20120201362 | Crossan et al. | Aug 2012 | A1 |
20120209654 | Romagnino et al. | Aug 2012 | A1 |
20120209853 | Desai et al. | Aug 2012 | A1 |
20120209874 | Wong et al. | Aug 2012 | A1 |
20120210266 | Jiang et al. | Aug 2012 | A1 |
20120214141 | Raya et al. | Aug 2012 | A1 |
20120214517 | Singh et al. | Aug 2012 | A1 |
20120215762 | Hall et al. | Aug 2012 | A1 |
20120221339 | Wang et al. | Aug 2012 | A1 |
20120221552 | Reponen et al. | Aug 2012 | A1 |
20120223889 | Medlock et al. | Sep 2012 | A1 |
20120223936 | Aughey et al. | Sep 2012 | A1 |
20120232885 | Barbosa et al. | Sep 2012 | A1 |
20120232886 | Capuozzo et al. | Sep 2012 | A1 |
20120232906 | Lindahl et al. | Sep 2012 | A1 |
20120233207 | Mohajer | Sep 2012 | A1 |
20120233266 | Hassan et al. | Sep 2012 | A1 |
20120239661 | Giblin | Sep 2012 | A1 |
20120239761 | Linner et al. | Sep 2012 | A1 |
20120242482 | Elumalai et al. | Sep 2012 | A1 |
20120245719 | Story, Jr. et al. | Sep 2012 | A1 |
20120245939 | Braho et al. | Sep 2012 | A1 |
20120245941 | Cheyer | Sep 2012 | A1 |
20120245944 | Gruber et al. | Sep 2012 | A1 |
20120246064 | Balkow | Sep 2012 | A1 |
20120250858 | Iqbal et al. | Oct 2012 | A1 |
20120252367 | Gaglio et al. | Oct 2012 | A1 |
20120252540 | Kirigaya | Oct 2012 | A1 |
20120253785 | Hamid et al. | Oct 2012 | A1 |
20120253791 | Heck et al. | Oct 2012 | A1 |
20120254143 | Varma et al. | Oct 2012 | A1 |
20120254152 | Park et al. | Oct 2012 | A1 |
20120254290 | Naaman | Oct 2012 | A1 |
20120259615 | Morin et al. | Oct 2012 | A1 |
20120262296 | Bezar | Oct 2012 | A1 |
20120265482 | Grokop | Oct 2012 | A1 |
20120265528 | Gruber et al. | Oct 2012 | A1 |
20120265535 | Bryant-Rich et al. | Oct 2012 | A1 |
20120265806 | Blanchflower et al. | Oct 2012 | A1 |
20120271625 | Bernard | Oct 2012 | A1 |
20120271634 | Lenke | Oct 2012 | A1 |
20120271635 | Ljolje | Oct 2012 | A1 |
20120271640 | Basir | Oct 2012 | A1 |
20120271676 | Aravamudan et al. | Oct 2012 | A1 |
20120275377 | Lehane et al. | Nov 2012 | A1 |
20120278744 | Kozitsyn | Nov 2012 | A1 |
20120284015 | Drewes | Nov 2012 | A1 |
20120284027 | Mallett et al. | Nov 2012 | A1 |
20120290291 | Shelley et al. | Nov 2012 | A1 |
20120290300 | Lee et al. | Nov 2012 | A1 |
20120295708 | Hernandez-Abrego et al. | Nov 2012 | A1 |
20120296638 | Patwa | Nov 2012 | A1 |
20120296649 | Bansal et al. | Nov 2012 | A1 |
20120296654 | Hendrickson et al. | Nov 2012 | A1 |
20120296891 | Rangan | Nov 2012 | A1 |
20120297348 | Santoro | Nov 2012 | A1 |
20120303369 | Brush et al. | Nov 2012 | A1 |
20120303371 | Labsky et al. | Nov 2012 | A1 |
20120304124 | Chen et al. | Nov 2012 | A1 |
20120309363 | Gruber et al. | Dec 2012 | A1 |
20120310642 | Cao et al. | Dec 2012 | A1 |
20120310649 | Cannistraro et al. | Dec 2012 | A1 |
20120310652 | O'Sullivan | Dec 2012 | A1 |
20120310922 | Johnson et al. | Dec 2012 | A1 |
20120311478 | Van Os et al. | Dec 2012 | A1 |
20120311583 | Gruber et al. | Dec 2012 | A1 |
20120311584 | Gruber et al. | Dec 2012 | A1 |
20120311585 | Gruber et al. | Dec 2012 | A1 |
20120316862 | Sultan et al. | Dec 2012 | A1 |
20120316875 | Nyquist et al. | Dec 2012 | A1 |
20120316878 | Singleton et al. | Dec 2012 | A1 |
20120317194 | Tian | Dec 2012 | A1 |
20120317498 | Logan et al. | Dec 2012 | A1 |
20120321112 | Schubert et al. | Dec 2012 | A1 |
20120324391 | Tocci et al. | Dec 2012 | A1 |
20120327009 | Fleizach | Dec 2012 | A1 |
20120329529 | van der Raadt | Dec 2012 | A1 |
20120330660 | Jaiswal | Dec 2012 | A1 |
20120330661 | Lindahl | Dec 2012 | A1 |
20120330990 | Chen et al. | Dec 2012 | A1 |
20130002716 | Walker et al. | Jan 2013 | A1 |
20130005405 | Prociw | Jan 2013 | A1 |
20130006633 | Grokop et al. | Jan 2013 | A1 |
20130006637 | Kanevsky et al. | Jan 2013 | A1 |
20130006638 | Lindahl | Jan 2013 | A1 |
20130007648 | Gamon et al. | Jan 2013 | A1 |
20130009858 | Lacey | Jan 2013 | A1 |
20130010575 | He et al. | Jan 2013 | A1 |
20130013313 | Shechtman et al. | Jan 2013 | A1 |
20130013319 | Grant et al. | Jan 2013 | A1 |
20130014026 | Beringer et al. | Jan 2013 | A1 |
20130018659 | Chi | Jan 2013 | A1 |
20130024576 | Dishneau et al. | Jan 2013 | A1 |
20130027875 | Zhu et al. | Jan 2013 | A1 |
20130030787 | Cancedda et al. | Jan 2013 | A1 |
20130030789 | Dalce | Jan 2013 | A1 |
20130030804 | Zavaliagko et al. | Jan 2013 | A1 |
20130030815 | Madhvanath et al. | Jan 2013 | A1 |
20130030913 | Zhu et al. | Jan 2013 | A1 |
20130030955 | David | Jan 2013 | A1 |
20130031162 | Willis et al. | Jan 2013 | A1 |
20130031476 | Coin et al. | Jan 2013 | A1 |
20130033643 | Kim et al. | Feb 2013 | A1 |
20130035086 | Chardon et al. | Feb 2013 | A1 |
20130035942 | Kim et al. | Feb 2013 | A1 |
20130035961 | Yegnanarayanan | Feb 2013 | A1 |
20130041647 | Ramerth et al. | Feb 2013 | A1 |
20130041654 | Walker et al. | Feb 2013 | A1 |
20130041661 | Lee et al. | Feb 2013 | A1 |
20130041665 | Jang et al. | Feb 2013 | A1 |
20130041667 | Longe et al. | Feb 2013 | A1 |
20130041968 | Cohen et al. | Feb 2013 | A1 |
20130046544 | Kay et al. | Feb 2013 | A1 |
20130050089 | Neels et al. | Feb 2013 | A1 |
20130054550 | Bolohan | Feb 2013 | A1 |
20130054609 | Rajput et al. | Feb 2013 | A1 |
20130054613 | Bishop | Feb 2013 | A1 |
20130054675 | Jenkins et al. | Feb 2013 | A1 |
20130054706 | Graham et al. | Feb 2013 | A1 |
20130055099 | Yao et al. | Feb 2013 | A1 |
20130055147 | Vasudev et al. | Feb 2013 | A1 |
20130063611 | Papakipos | Mar 2013 | A1 |
20130066832 | Sheehan et al. | Mar 2013 | A1 |
20130067307 | Tian et al. | Mar 2013 | A1 |
20130073286 | Bastea-Forte et al. | Mar 2013 | A1 |
20130073346 | Chun et al. | Mar 2013 | A1 |
20130078930 | Chen | Mar 2013 | A1 |
20130080152 | Brun et al. | Mar 2013 | A1 |
20130080162 | Chang et al. | Mar 2013 | A1 |
20130080167 | Mozer | Mar 2013 | A1 |
20130080177 | Chen | Mar 2013 | A1 |
20130080251 | Dempski | Mar 2013 | A1 |
20130082967 | Hillis et al. | Apr 2013 | A1 |
20130085755 | Bringert et al. | Apr 2013 | A1 |
20130085761 | Bringert et al. | Apr 2013 | A1 |
20130090921 | Liu et al. | Apr 2013 | A1 |
20130091090 | Spivack et al. | Apr 2013 | A1 |
20130095805 | Lebeau et al. | Apr 2013 | A1 |
20130096909 | Brun et al. | Apr 2013 | A1 |
20130096917 | Edgar et al. | Apr 2013 | A1 |
20130097566 | Berglund | Apr 2013 | A1 |
20130097682 | Zeljkovic et al. | Apr 2013 | A1 |
20130100268 | Mihailidis et al. | Apr 2013 | A1 |
20130103391 | Millmore et al. | Apr 2013 | A1 |
20130103405 | Namba et al. | Apr 2013 | A1 |
20130106742 | Lee et al. | May 2013 | A1 |
20130110505 | Gruber et al. | May 2013 | A1 |
20130110515 | Guzzoni et al. | May 2013 | A1 |
20130110518 | Gruber et al. | May 2013 | A1 |
20130110519 | Cheyer et al. | May 2013 | A1 |
20130110520 | Cheyer et al. | May 2013 | A1 |
20130110943 | Menon et al. | May 2013 | A1 |
20130111330 | Staikos et al. | May 2013 | A1 |
20130111348 | Gruber et al. | May 2013 | A1 |
20130111365 | Chen et al. | May 2013 | A1 |
20130111487 | Cheyer et al. | May 2013 | A1 |
20130111581 | Griffin et al. | May 2013 | A1 |
20130115927 | Gruber et al. | May 2013 | A1 |
20130117022 | Chen et al. | May 2013 | A1 |
20130124189 | Baldwin et al. | May 2013 | A1 |
20130132084 | Stonehocker et al. | May 2013 | A1 |
20130132089 | Fanty et al. | May 2013 | A1 |
20130132871 | Zeng et al. | May 2013 | A1 |
20130141551 | Kim | Jun 2013 | A1 |
20130142317 | Reynolds | Jun 2013 | A1 |
20130142345 | Waldmann | Jun 2013 | A1 |
20130144594 | Bangalore et al. | Jun 2013 | A1 |
20130144616 | Bangalore et al. | Jun 2013 | A1 |
20130151339 | Kim et al. | Jun 2013 | A1 |
20130152092 | Yadgar et al. | Jun 2013 | A1 |
20130154811 | Ferren et al. | Jun 2013 | A1 |
20130157629 | Lee et al. | Jun 2013 | A1 |
20130158977 | Senior | Jun 2013 | A1 |
20130159847 | Banke et al. | Jun 2013 | A1 |
20130165232 | Nelson et al. | Jun 2013 | A1 |
20130166303 | Chang et al. | Jun 2013 | A1 |
20130166442 | Nakajima et al. | Jun 2013 | A1 |
20130170738 | Capuozzo et al. | Jul 2013 | A1 |
20130172022 | Seymour et al. | Jul 2013 | A1 |
20130173258 | Liu et al. | Jul 2013 | A1 |
20130174034 | Brown et al. | Jul 2013 | A1 |
20130176244 | Yamamoto et al. | Jul 2013 | A1 |
20130176592 | Sasaki | Jul 2013 | A1 |
20130179172 | Nakamura et al. | Jul 2013 | A1 |
20130179440 | Gordon | Jul 2013 | A1 |
20130183944 | Mozer et al. | Jul 2013 | A1 |
20130185059 | Riccardi et al. | Jul 2013 | A1 |
20130185074 | Gruber et al. | Jul 2013 | A1 |
20130185081 | Cheyer et al. | Jul 2013 | A1 |
20130185336 | Singh et al. | Jul 2013 | A1 |
20130187850 | Schulz et al. | Jul 2013 | A1 |
20130187857 | Griffin et al. | Jul 2013 | A1 |
20130191117 | Atti et al. | Jul 2013 | A1 |
20130197911 | Wei et al. | Aug 2013 | A1 |
20130204813 | Master et al. | Aug 2013 | A1 |
20130204897 | McDougall | Aug 2013 | A1 |
20130207898 | Sullivan et al. | Aug 2013 | A1 |
20130218553 | Fujii et al. | Aug 2013 | A1 |
20130218560 | Hsiao et al. | Aug 2013 | A1 |
20130219333 | Palwe et al. | Aug 2013 | A1 |
20130222249 | Pasquero et al. | Aug 2013 | A1 |
20130225128 | Gomar | Aug 2013 | A1 |
20130226935 | Bai et al. | Aug 2013 | A1 |
20130231917 | Naik | Sep 2013 | A1 |
20130234947 | Kristensson et al. | Sep 2013 | A1 |
20130235987 | Arroniz-Escobar et al. | Sep 2013 | A1 |
20130238326 | Kim et al. | Sep 2013 | A1 |
20130238647 | Thompson | Sep 2013 | A1 |
20130244615 | Miller et al. | Sep 2013 | A1 |
20130246048 | Nagase et al. | Sep 2013 | A1 |
20130246050 | Yu et al. | Sep 2013 | A1 |
20130246329 | Pasquero et al. | Sep 2013 | A1 |
20130253911 | Petri et al. | Sep 2013 | A1 |
20130253912 | Medlock et al. | Sep 2013 | A1 |
20130268263 | Park et al. | Oct 2013 | A1 |
20130275117 | Winer | Oct 2013 | A1 |
20130275138 | Gruber et al. | Oct 2013 | A1 |
20130275164 | Gruber et al. | Oct 2013 | A1 |
20130275199 | Proctor, Jr. et al. | Oct 2013 | A1 |
20130275625 | Taivalsaari et al. | Oct 2013 | A1 |
20130275875 | Gruber et al. | Oct 2013 | A1 |
20130275899 | Schubert et al. | Oct 2013 | A1 |
20130282709 | Zhu et al. | Oct 2013 | A1 |
20130283168 | Brown et al. | Oct 2013 | A1 |
20130285913 | Griffin et al. | Oct 2013 | A1 |
20130289991 | Eshwar et al. | Oct 2013 | A1 |
20130289993 | Rao et al. | Oct 2013 | A1 |
20130289994 | Newman et al. | Oct 2013 | A1 |
20130291015 | Pan | Oct 2013 | A1 |
20130297317 | Lee et al. | Nov 2013 | A1 |
20130297319 | Kim | Nov 2013 | A1 |
20130297348 | Cardoza et al. | Nov 2013 | A1 |
20130300645 | Fedorov | Nov 2013 | A1 |
20130300648 | Kim et al. | Nov 2013 | A1 |
20130303106 | Martin | Nov 2013 | A1 |
20130304479 | Teller et al. | Nov 2013 | A1 |
20130304758 | Gruber et al. | Nov 2013 | A1 |
20130304815 | Puente et al. | Nov 2013 | A1 |
20130305119 | Kern et al. | Nov 2013 | A1 |
20130307855 | Lamb et al. | Nov 2013 | A1 |
20130307997 | O'Keefe et al. | Nov 2013 | A1 |
20130308922 | Sano et al. | Nov 2013 | A1 |
20130311184 | Badavne et al. | Nov 2013 | A1 |
20130311997 | Gruber et al. | Nov 2013 | A1 |
20130316746 | Miller et al. | Nov 2013 | A1 |
20130322634 | Bennett et al. | Dec 2013 | A1 |
20130325436 | Wang et al. | Dec 2013 | A1 |
20130325443 | Begeja et al. | Dec 2013 | A1 |
20130325447 | Levien et al. | Dec 2013 | A1 |
20130325448 | Levien et al. | Dec 2013 | A1 |
20130325480 | Lee | Dec 2013 | A1 |
20130325481 | Van Os et al. | Dec 2013 | A1 |
20130325484 | Chakladar et al. | Dec 2013 | A1 |
20130325967 | Parks et al. | Dec 2013 | A1 |
20130325970 | Roberts et al. | Dec 2013 | A1 |
20130325979 | Mansfield et al. | Dec 2013 | A1 |
20130328809 | Smith | Dec 2013 | A1 |
20130329023 | Suplee, III et al. | Dec 2013 | A1 |
20130331127 | Sabatelli et al. | Dec 2013 | A1 |
20130332159 | Federighi et al. | Dec 2013 | A1 |
20130332162 | Keen | Dec 2013 | A1 |
20130332164 | Nalk | Dec 2013 | A1 |
20130332168 | Kim et al. | Dec 2013 | A1 |
20130332172 | Prakash et al. | Dec 2013 | A1 |
20130332400 | González | Dec 2013 | A1 |
20130339256 | Shroff | Dec 2013 | A1 |
20130346068 | Solem et al. | Dec 2013 | A1 |
20130346347 | Patterson et al. | Dec 2013 | A1 |
20140006012 | Zhou et al. | Jan 2014 | A1 |
20140006025 | Krishnan et al. | Jan 2014 | A1 |
20140006027 | Kim et al. | Jan 2014 | A1 |
20140006030 | Fleizach et al. | Jan 2014 | A1 |
20140006153 | Thangam et al. | Jan 2014 | A1 |
20140012574 | Pasupalak et al. | Jan 2014 | A1 |
20140012580 | Ganong et al. | Jan 2014 | A1 |
20140012586 | Rubin et al. | Jan 2014 | A1 |
20140019116 | Lundberg et al. | Jan 2014 | A1 |
20140019133 | Bao et al. | Jan 2014 | A1 |
20140019460 | Sambrani et al. | Jan 2014 | A1 |
20140028735 | Williams et al. | Jan 2014 | A1 |
20140032453 | Eustice et al. | Jan 2014 | A1 |
20140033071 | Gruber et al. | Jan 2014 | A1 |
20140035823 | Khoe et al. | Feb 2014 | A1 |
20140039888 | Taubman et al. | Feb 2014 | A1 |
20140039893 | Weiner Steven | Feb 2014 | A1 |
20140039894 | Shostak | Feb 2014 | A1 |
20140040274 | Aravamudan et al. | Feb 2014 | A1 |
20140040748 | Lemay et al. | Feb 2014 | A1 |
20140040801 | Patel et al. | Feb 2014 | A1 |
20140040918 | Li et al. | Feb 2014 | A1 |
20140046934 | Zhou et al. | Feb 2014 | A1 |
20140047001 | Phillips et al. | Feb 2014 | A1 |
20140052680 | Nitz et al. | Feb 2014 | A1 |
20140052791 | Chakra et al. | Feb 2014 | A1 |
20140053082 | Park et al. | Feb 2014 | A1 |
20140053210 | Cheong et al. | Feb 2014 | A1 |
20140057610 | Olincy et al. | Feb 2014 | A1 |
20140059030 | Hakkani-Tur et al. | Feb 2014 | A1 |
20140067361 | Nikoulina et al. | Mar 2014 | A1 |
20140067371 | Liensberger | Mar 2014 | A1 |
20140067402 | Kim | Mar 2014 | A1 |
20140068751 | Last et al. | Mar 2014 | A1 |
20140074454 | Brown et al. | Mar 2014 | A1 |
20140074466 | Sharifi et al. | Mar 2014 | A1 |
20140074470 | Jansche et al. | Mar 2014 | A1 |
20140074472 | Lin et al. | Mar 2014 | A1 |
20140074483 | Van Os | Mar 2014 | A1 |
20140074815 | Plimton | Mar 2014 | A1 |
20140078065 | Akkok et al. | Mar 2014 | A1 |
20140079195 | Srivastava et al. | Mar 2014 | A1 |
20140080428 | Rhoads et al. | Mar 2014 | A1 |
20140081619 | Solntseva et al. | Mar 2014 | A1 |
20140081633 | Badaskar et al. | Mar 2014 | A1 |
20140082501 | Bae et al. | Mar 2014 | A1 |
20140086458 | Rogers et al. | Mar 2014 | A1 |
20140087711 | Geyer et al. | Mar 2014 | A1 |
20140088961 | Woodward et al. | Mar 2014 | A1 |
20140095171 | Lynch et al. | Apr 2014 | A1 |
20140095172 | Cabaco et al. | Apr 2014 | A1 |
20140095173 | Lynch et al. | Apr 2014 | A1 |
20140096209 | Saraf et al. | Apr 2014 | A1 |
20140098247 | Rao et al. | Apr 2014 | A1 |
20140108017 | Mason et al. | Apr 2014 | A1 |
20140114554 | Lagassey | Apr 2014 | A1 |
20140118155 | Bowers et al. | May 2014 | A1 |
20140122059 | Patel et al. | May 2014 | A1 |
20140122086 | Kapur et al. | May 2014 | A1 |
20140122136 | Jayanthi | May 2014 | A1 |
20140122153 | Truitt | May 2014 | A1 |
20140134983 | Jung et al. | May 2014 | A1 |
20140135036 | Bonanni et al. | May 2014 | A1 |
20140136187 | Wolverton et al. | May 2014 | A1 |
20140136195 | Abdossalami et al. | May 2014 | A1 |
20140136212 | Kwon et al. | May 2014 | A1 |
20140136946 | Matas | May 2014 | A1 |
20140136987 | Rodriguez | May 2014 | A1 |
20140142922 | Liang et al. | May 2014 | A1 |
20140142923 | Jones et al. | May 2014 | A1 |
20140142935 | Lindahl et al. | May 2014 | A1 |
20140143550 | Ganong, III et al. | May 2014 | A1 |
20140143721 | Suzuki et al. | May 2014 | A1 |
20140146200 | Scott et al. | May 2014 | A1 |
20140152577 | Yuen et al. | Jun 2014 | A1 |
20140153709 | Byrd et al. | Jun 2014 | A1 |
20140155031 | Lee et al. | Jun 2014 | A1 |
20140156262 | Yuen et al. | Jun 2014 | A1 |
20140156279 | Okamoto et al. | Jun 2014 | A1 |
20140157422 | Livshits et al. | Jun 2014 | A1 |
20140163951 | Nikoulina et al. | Jun 2014 | A1 |
20140163953 | Parikh | Jun 2014 | A1 |
20140163954 | Joshi et al. | Jun 2014 | A1 |
20140163976 | Park et al. | Jun 2014 | A1 |
20140163977 | Hoffmeister et al. | Jun 2014 | A1 |
20140163981 | Cook et al. | Jun 2014 | A1 |
20140163995 | Burns et al. | Jun 2014 | A1 |
20140164476 | Thomson | Jun 2014 | A1 |
20140164532 | Lynch et al. | Jun 2014 | A1 |
20140164533 | Lynch et al. | Jun 2014 | A1 |
20140169795 | Clough | Jun 2014 | A1 |
20140172878 | Clark et al. | Jun 2014 | A1 |
20140173460 | Kim | Jun 2014 | A1 |
20140180499 | Cooper et al. | Jun 2014 | A1 |
20140180689 | Kim et al. | Jun 2014 | A1 |
20140180697 | Torok et al. | Jun 2014 | A1 |
20140181865 | Koganei | Jun 2014 | A1 |
20140188477 | Zhang | Jul 2014 | A1 |
20140195226 | Yun et al. | Jul 2014 | A1 |
20140195230 | Han et al. | Jul 2014 | A1 |
20140195233 | Bapat | Jul 2014 | A1 |
20140195244 | Cha et al. | Jul 2014 | A1 |
20140195251 | Zeinstra et al. | Jul 2014 | A1 |
20140195252 | Gruber et al. | Jul 2014 | A1 |
20140198048 | Unruh et al. | Jul 2014 | A1 |
20140203939 | Harrington et al. | Jul 2014 | A1 |
20140205076 | Kumar et al. | Jul 2014 | A1 |
20140207439 | Venkatapathy et al. | Jul 2014 | A1 |
20140207446 | Klein et al. | Jul 2014 | A1 |
20140207466 | Smadi et al. | Jul 2014 | A1 |
20140207468 | Bartnik | Jul 2014 | A1 |
20140207582 | Flinn et al. | Jul 2014 | A1 |
20140214429 | Pantel | Jul 2014 | A1 |
20140214537 | Yoo et al. | Jul 2014 | A1 |
20140218372 | Missig et al. | Aug 2014 | A1 |
20140222436 | Binder et al. | Aug 2014 | A1 |
20140222678 | Sheets et al. | Aug 2014 | A1 |
20140223377 | Shaw et al. | Aug 2014 | A1 |
20140223481 | Fundament | Aug 2014 | A1 |
20140230055 | Boehl | Aug 2014 | A1 |
20140232656 | Pasquero et al. | Aug 2014 | A1 |
20140236595 | Gray | Aug 2014 | A1 |
20140236986 | Guzman | Aug 2014 | A1 |
20140237042 | Ahmed et al. | Aug 2014 | A1 |
20140244248 | Arisoy et al. | Aug 2014 | A1 |
20140244249 | Mohamed et al. | Aug 2014 | A1 |
20140244254 | Ju et al. | Aug 2014 | A1 |
20140244257 | Colibro et al. | Aug 2014 | A1 |
20140244258 | Song et al. | Aug 2014 | A1 |
20140244263 | Pontual et al. | Aug 2014 | A1 |
20140244266 | Brown et al. | Aug 2014 | A1 |
20140244268 | Abdelsamie et al. | Aug 2014 | A1 |
20140244271 | Lindahl | Aug 2014 | A1 |
20140244712 | Walters et al. | Aug 2014 | A1 |
20140245140 | Brown et al. | Aug 2014 | A1 |
20140247383 | Dave et al. | Sep 2014 | A1 |
20140247926 | Gainsboro et al. | Sep 2014 | A1 |
20140249817 | Hart et al. | Sep 2014 | A1 |
20140249821 | Kennewick et al. | Sep 2014 | A1 |
20140250046 | Winn et al. | Sep 2014 | A1 |
20140257809 | Goel et al. | Sep 2014 | A1 |
20140257815 | Zhao et al. | Sep 2014 | A1 |
20140257902 | Moore et al. | Sep 2014 | A1 |
20140258857 | Dykstra-Erickson et al. | Sep 2014 | A1 |
20140267022 | Kim | Sep 2014 | A1 |
20140267599 | Drouin et al. | Sep 2014 | A1 |
20140272821 | Pitschel et al. | Sep 2014 | A1 |
20140274005 | Luna et al. | Sep 2014 | A1 |
20140274203 | Ganong et al. | Sep 2014 | A1 |
20140274211 | Sejnoha et al. | Sep 2014 | A1 |
20140278343 | Tran | Sep 2014 | A1 |
20140278349 | Grieves et al. | Sep 2014 | A1 |
20140278379 | Coccaro et al. | Sep 2014 | A1 |
20140278390 | Kingsbury et al. | Sep 2014 | A1 |
20140278391 | Braho et al. | Sep 2014 | A1 |
20140278394 | Bastyr et al. | Sep 2014 | A1 |
20140278406 | Tsumura et al. | Sep 2014 | A1 |
20140278413 | Pitschel et al. | Sep 2014 | A1 |
20140278429 | Ganong, III | Sep 2014 | A1 |
20140278435 | Ganong et al. | Sep 2014 | A1 |
20140278436 | Khanna et al. | Sep 2014 | A1 |
20140278443 | Gunn et al. | Sep 2014 | A1 |
20140278513 | Prakash et al. | Sep 2014 | A1 |
20140280138 | Li et al. | Sep 2014 | A1 |
20140280292 | Skinder | Sep 2014 | A1 |
20140280353 | Delaney et al. | Sep 2014 | A1 |
20140280450 | Luna | Sep 2014 | A1 |
20140281983 | Xian et al. | Sep 2014 | A1 |
20140282003 | Gruber et al. | Sep 2014 | A1 |
20140282007 | Fleizach | Sep 2014 | A1 |
20140282045 | Ayanam et al. | Sep 2014 | A1 |
20140282201 | Pasquero et al. | Sep 2014 | A1 |
20140282203 | Pasquero et al. | Sep 2014 | A1 |
20140282586 | Shear et al. | Sep 2014 | A1 |
20140282743 | Howard et al. | Sep 2014 | A1 |
20140288990 | Moore et al. | Sep 2014 | A1 |
20140289508 | Wang | Sep 2014 | A1 |
20140297267 | Spencer et al. | Oct 2014 | A1 |
20140297281 | Togawa et al. | Oct 2014 | A1 |
20140297284 | Gruber et al. | Oct 2014 | A1 |
20140297288 | Yu et al. | Oct 2014 | A1 |
20140304605 | Ohmura et al. | Oct 2014 | A1 |
20140309996 | Zhang | Oct 2014 | A1 |
20140310001 | Kalns et al. | Oct 2014 | A1 |
20140310002 | Nitz et al. | Oct 2014 | A1 |
20140316585 | Boesveld et al. | Oct 2014 | A1 |
20140317030 | Shen et al. | Oct 2014 | A1 |
20140317502 | Brown et al. | Oct 2014 | A1 |
20140324884 | Lindahl et al. | Oct 2014 | A1 |
20140330569 | Kolavennu et al. | Nov 2014 | A1 |
20140337048 | Brown et al. | Nov 2014 | A1 |
20140337266 | Wolverton et al. | Nov 2014 | A1 |
20140337371 | Li | Nov 2014 | A1 |
20140337438 | Govande et al. | Nov 2014 | A1 |
20140337751 | Lim et al. | Nov 2014 | A1 |
20140337814 | Kalns et al. | Nov 2014 | A1 |
20140342762 | Hajdu et al. | Nov 2014 | A1 |
20140344627 | Schaub et al. | Nov 2014 | A1 |
20140344687 | Durham et al. | Nov 2014 | A1 |
20140350924 | Zurek et al. | Nov 2014 | A1 |
20140350933 | Bak et al. | Nov 2014 | A1 |
20140351741 | Medlock et al. | Nov 2014 | A1 |
20140351760 | Skory et al. | Nov 2014 | A1 |
20140358519 | Mirkin et al. | Dec 2014 | A1 |
20140358523 | Sheth et al. | Dec 2014 | A1 |
20140361973 | Raux et al. | Dec 2014 | A1 |
20140365209 | Evermann | Dec 2014 | A1 |
20140365214 | Bayley | Dec 2014 | A1 |
20140365216 | Gruber et al. | Dec 2014 | A1 |
20140365226 | Sinha | Dec 2014 | A1 |
20140365227 | Cash et al. | Dec 2014 | A1 |
20140365407 | Brown et al. | Dec 2014 | A1 |
20140365880 | Bellegarda | Dec 2014 | A1 |
20140365885 | Carson et al. | Dec 2014 | A1 |
20140365895 | Paulson et al. | Dec 2014 | A1 |
20140365922 | Yang | Dec 2014 | A1 |
20140370817 | Luna | Dec 2014 | A1 |
20140370841 | Roberts et al. | Dec 2014 | A1 |
20140372112 | Xue et al. | Dec 2014 | A1 |
20140372356 | Bilal et al. | Dec 2014 | A1 |
20140372931 | Zhai et al. | Dec 2014 | A1 |
20140379334 | Fry | Dec 2014 | A1 |
20140379341 | Seo et al. | Dec 2014 | A1 |
20140380285 | Gabel et al. | Dec 2014 | A1 |
20150003797 | Schmidt | Jan 2015 | A1 |
20150006148 | Goldszmit et al. | Jan 2015 | A1 |
20150006157 | Andrade Silva et al. | Jan 2015 | A1 |
20150006176 | Pogue et al. | Jan 2015 | A1 |
20150006178 | Peng et al. | Jan 2015 | A1 |
20150006184 | Marti et al. | Jan 2015 | A1 |
20150006199 | Snider et al. | Jan 2015 | A1 |
20150012271 | Peng et al. | Jan 2015 | A1 |
20150019219 | Tzirkel-hancock et al. | Jan 2015 | A1 |
20150019221 | Lee et al. | Jan 2015 | A1 |
20150019974 | Doi et al. | Jan 2015 | A1 |
20150031416 | Wells et al. | Jan 2015 | A1 |
20150033219 | Breiner et al. | Jan 2015 | A1 |
20150033275 | Natani et al. | Jan 2015 | A1 |
20150039292 | Suleman et al. | Feb 2015 | A1 |
20150039299 | Weinstein et al. | Feb 2015 | A1 |
20150039305 | Huang | Feb 2015 | A1 |
20150040012 | Faaborg et al. | Feb 2015 | A1 |
20150045003 | Vora et al. | Feb 2015 | A1 |
20150045068 | Soffer et al. | Feb 2015 | A1 |
20150046537 | Rakib | Feb 2015 | A1 |
20150050633 | Christmas et al. | Feb 2015 | A1 |
20150053779 | Adamek et al. | Feb 2015 | A1 |
20150058013 | Pakhomov et al. | Feb 2015 | A1 |
20150058018 | Georges et al. | Feb 2015 | A1 |
20150058785 | Ookawara | Feb 2015 | A1 |
20150065200 | Namgung et al. | Mar 2015 | A1 |
20150066494 | Salvador et al. | Mar 2015 | A1 |
20150066496 | Deoras et al. | Mar 2015 | A1 |
20150066506 | Romano et al. | Mar 2015 | A1 |
20150066516 | Nishikawa et al. | Mar 2015 | A1 |
20150067485 | Kim et al. | Mar 2015 | A1 |
20150067822 | Randall | Mar 2015 | A1 |
20150071121 | Patil et al. | Mar 2015 | A1 |
20150073788 | Allauzen et al. | Mar 2015 | A1 |
20150073804 | Senior et al. | Mar 2015 | A1 |
20150074524 | Nicholson et al. | Mar 2015 | A1 |
20150074615 | Han et al. | Mar 2015 | A1 |
20150082229 | Ouyang et al. | Mar 2015 | A1 |
20150088511 | Bharadwaj et al. | Mar 2015 | A1 |
20150088514 | Typrin | Mar 2015 | A1 |
20150088522 | Hendrickson et al. | Mar 2015 | A1 |
20150088523 | Schuster | Mar 2015 | A1 |
20150095031 | Conkie et al. | Apr 2015 | A1 |
20150095278 | Flinn et al. | Apr 2015 | A1 |
20150100316 | Williams et al. | Apr 2015 | A1 |
20150100537 | Grieves et al. | Apr 2015 | A1 |
20150100983 | Pan | Apr 2015 | A1 |
20150106093 | Weeks et al. | Apr 2015 | A1 |
20150113407 | Hoffert et al. | Apr 2015 | A1 |
20150120723 | Deshmukh et al. | Apr 2015 | A1 |
20150121216 | Brown et al. | Apr 2015 | A1 |
20150127348 | Follis | May 2015 | A1 |
20150127350 | Agiomyrgiannakis | May 2015 | A1 |
20150133109 | Freeman et al. | May 2015 | A1 |
20150134334 | Sachidanandam et al. | May 2015 | A1 |
20150135085 | Shoham et al. | May 2015 | A1 |
20150135123 | Carr et al. | May 2015 | A1 |
20150142420 | Sarikaya et al. | May 2015 | A1 |
20150142438 | Dai et al. | May 2015 | A1 |
20150142447 | Kennewick et al. | May 2015 | A1 |
20150142851 | Gupta et al. | May 2015 | A1 |
20150148013 | Baldwin et al. | May 2015 | A1 |
20150149177 | Kalns et al. | May 2015 | A1 |
20150149182 | Kalns et al. | May 2015 | A1 |
20150149354 | Mccoy | May 2015 | A1 |
20150149469 | Xu et al. | May 2015 | A1 |
20150154185 | Waibel | Jun 2015 | A1 |
20150154976 | Mutagi | Jun 2015 | A1 |
20150160855 | Bi | Jun 2015 | A1 |
20150161370 | North et al. | Jun 2015 | A1 |
20150161521 | Shah et al. | Jun 2015 | A1 |
20150161989 | Hsu et al. | Jun 2015 | A1 |
20150162001 | Kar et al. | Jun 2015 | A1 |
20150163558 | Wheatley | Jun 2015 | A1 |
20150169284 | Quast et al. | Jun 2015 | A1 |
20150169336 | Harper et al. | Jun 2015 | A1 |
20150170664 | Doherty et al. | Jun 2015 | A1 |
20150172463 | Quast et al. | Jun 2015 | A1 |
20150178388 | Winnemoeller et al. | Jun 2015 | A1 |
20150179176 | Ryu et al. | Jun 2015 | A1 |
20150185964 | Stout | Jul 2015 | A1 |
20150186012 | Coleman et al. | Jul 2015 | A1 |
20150186110 | Kannan | Jul 2015 | A1 |
20150186154 | Brown et al. | Jul 2015 | A1 |
20150186155 | Brown et al. | Jul 2015 | A1 |
20150186156 | Brown et al. | Jul 2015 | A1 |
20150186351 | Hicks et al. | Jul 2015 | A1 |
20150187355 | Parkinson et al. | Jul 2015 | A1 |
20150187369 | Dadu et al. | Jul 2015 | A1 |
20150189362 | Lee et al. | Jul 2015 | A1 |
20150193379 | Mehta | Jul 2015 | A1 |
20150193391 | Khvostichenko et al. | Jul 2015 | A1 |
20150193392 | Greenblatt et al. | Jul 2015 | A1 |
20150194152 | Katuri et al. | Jul 2015 | A1 |
20150195379 | Zhang et al. | Jul 2015 | A1 |
20150195606 | McDevitt | Jul 2015 | A1 |
20150199077 | Zuger et al. | Jul 2015 | A1 |
20150199960 | Huo et al. | Jul 2015 | A1 |
20150199965 | Leak et al. | Jul 2015 | A1 |
20150199967 | Reddy et al. | Jul 2015 | A1 |
20150201064 | Bells et al. | Jul 2015 | A1 |
20150205858 | Xie et al. | Jul 2015 | A1 |
20150208226 | Kuusilinna et al. | Jul 2015 | A1 |
20150212791 | Kumar et al. | Jul 2015 | A1 |
20150213796 | Waltermann et al. | Jul 2015 | A1 |
20150220507 | Mohajer et al. | Aug 2015 | A1 |
20150221304 | Stewart | Aug 2015 | A1 |
20150221307 | Shah et al. | Aug 2015 | A1 |
20150227505 | Morimoto | Aug 2015 | A1 |
20150227633 | Shapira | Aug 2015 | A1 |
20150228281 | Raniere | Aug 2015 | A1 |
20150230095 | Smith et al. | Aug 2015 | A1 |
20150234636 | Barnes, Jr. | Aug 2015 | A1 |
20150234800 | Patrick et al. | Aug 2015 | A1 |
20150242091 | Lu et al. | Aug 2015 | A1 |
20150243278 | Kibre et al. | Aug 2015 | A1 |
20150243283 | Halash et al. | Aug 2015 | A1 |
20150245154 | Dadu et al. | Aug 2015 | A1 |
20150248651 | Akutagawa et al. | Sep 2015 | A1 |
20150248886 | Sarikaya et al. | Sep 2015 | A1 |
20150254057 | Klein et al. | Sep 2015 | A1 |
20150254058 | Klein et al. | Sep 2015 | A1 |
20150254333 | Fife et al. | Sep 2015 | A1 |
20150255071 | Chiba | Sep 2015 | A1 |
20150256873 | Klein et al. | Sep 2015 | A1 |
20150261496 | Faaborg et al. | Sep 2015 | A1 |
20150261850 | Mittal | Sep 2015 | A1 |
20150269139 | McAteer et al. | Sep 2015 | A1 |
20150277574 | Jain et al. | Oct 2015 | A1 |
20150278370 | Stratvert et al. | Oct 2015 | A1 |
20150279358 | Kingsbury et al. | Oct 2015 | A1 |
20150279360 | Mengibar et al. | Oct 2015 | A1 |
20150281380 | Wang et al. | Oct 2015 | A1 |
20150286627 | Chang et al. | Oct 2015 | A1 |
20150287401 | Lee et al. | Oct 2015 | A1 |
20150287409 | Jang | Oct 2015 | A1 |
20150288629 | Choi et al. | Oct 2015 | A1 |
20150294086 | Kare et al. | Oct 2015 | A1 |
20150294377 | Chow | Oct 2015 | A1 |
20150294516 | Chiang | Oct 2015 | A1 |
20150295915 | Xiu | Oct 2015 | A1 |
20150302855 | Kim et al. | Oct 2015 | A1 |
20150302856 | Kim et al. | Oct 2015 | A1 |
20150302857 | Yamada | Oct 2015 | A1 |
20150309997 | Lee et al. | Oct 2015 | A1 |
20150310858 | Li et al. | Oct 2015 | A1 |
20150310862 | Dauphin et al. | Oct 2015 | A1 |
20150310879 | Buchanan et al. | Oct 2015 | A1 |
20150312182 | Langholz | Oct 2015 | A1 |
20150317069 | Clements et al. | Nov 2015 | A1 |
20150317310 | Eiche et al. | Nov 2015 | A1 |
20150324041 | Varley et al. | Nov 2015 | A1 |
20150324334 | Lee et al. | Nov 2015 | A1 |
20150331664 | Osawa et al. | Nov 2015 | A1 |
20150331711 | Huang et al. | Nov 2015 | A1 |
20150332667 | Mason | Nov 2015 | A1 |
20150339049 | Kasemset et al. | Nov 2015 | A1 |
20150339391 | Kang et al. | Nov 2015 | A1 |
20150340033 | Di Fabbrizio et al. | Nov 2015 | A1 |
20150340040 | Mun et al. | Nov 2015 | A1 |
20150340042 | Sejnoha et al. | Nov 2015 | A1 |
20150341717 | Song et al. | Nov 2015 | A1 |
20150347086 | Liedholm et al. | Dec 2015 | A1 |
20150347381 | Bellegarda | Dec 2015 | A1 |
20150347382 | Dolfing et al. | Dec 2015 | A1 |
20150347385 | Flor et al. | Dec 2015 | A1 |
20150347393 | Futrell et al. | Dec 2015 | A1 |
20150347733 | Tsou et al. | Dec 2015 | A1 |
20150347985 | Gross et al. | Dec 2015 | A1 |
20150348547 | Paulik et al. | Dec 2015 | A1 |
20150348548 | Piernot et al. | Dec 2015 | A1 |
20150348549 | Giuli et al. | Dec 2015 | A1 |
20150348551 | Gruber et al. | Dec 2015 | A1 |
20150348554 | Orr et al. | Dec 2015 | A1 |
20150350031 | Burks et al. | Dec 2015 | A1 |
20150352999 | Bando et al. | Dec 2015 | A1 |
20150355879 | Beckhardt et al. | Dec 2015 | A1 |
20150363587 | Ahn et al. | Dec 2015 | A1 |
20150364140 | Thorn | Dec 2015 | A1 |
20150370531 | Faaborg | Dec 2015 | A1 |
20150370780 | Wang et al. | Dec 2015 | A1 |
20150371639 | Foerster et al. | Dec 2015 | A1 |
20150371665 | Naik et al. | Dec 2015 | A1 |
20150373183 | Woolsey et al. | Dec 2015 | A1 |
20150379993 | Subhojit et al. | Dec 2015 | A1 |
20150382047 | Napolitano et al. | Dec 2015 | A1 |
20150382079 | Lister et al. | Dec 2015 | A1 |
20160004690 | Bangalore et al. | Jan 2016 | A1 |
20160005320 | deCharms et al. | Jan 2016 | A1 |
20160012038 | Edwards et al. | Jan 2016 | A1 |
20160014476 | Caliendo, Jr. et al. | Jan 2016 | A1 |
20160018900 | Tu et al. | Jan 2016 | A1 |
20160019886 | Hong | Jan 2016 | A1 |
20160026258 | Ou et al. | Jan 2016 | A1 |
20160027431 | Kurzweil et al. | Jan 2016 | A1 |
20160028666 | Li | Jan 2016 | A1 |
20160029316 | Mohan et al. | Jan 2016 | A1 |
20160034042 | Joo | Feb 2016 | A1 |
20160034811 | Paulik et al. | Feb 2016 | A1 |
20160042735 | Vibbert et al. | Feb 2016 | A1 |
20160042748 | Jain et al. | Feb 2016 | A1 |
20160048666 | Dey et al. | Feb 2016 | A1 |
20160055422 | Li | Feb 2016 | A1 |
20160062605 | Agarwal et al. | Mar 2016 | A1 |
20160063998 | Krishnamoorthy et al. | Mar 2016 | A1 |
20160070581 | Soon-Shiong | Mar 2016 | A1 |
20160071516 | Lee et al. | Mar 2016 | A1 |
20160071521 | Haughey | Mar 2016 | A1 |
20160072940 | Cronin | Mar 2016 | A1 |
20160077794 | Kim et al. | Mar 2016 | A1 |
20160078860 | Paulik et al. | Mar 2016 | A1 |
20160080165 | Ehsani et al. | Mar 2016 | A1 |
20160086116 | Rao et al. | Mar 2016 | A1 |
20160091967 | Prokofieva et al. | Mar 2016 | A1 |
20160092447 | Venkataraman et al. | Mar 2016 | A1 |
20160092766 | Sainath et al. | Mar 2016 | A1 |
20160093291 | Kim | Mar 2016 | A1 |
20160093298 | Naik et al. | Mar 2016 | A1 |
20160093301 | Bellegarda et al. | Mar 2016 | A1 |
20160093304 | Kim et al. | Mar 2016 | A1 |
20160094700 | Lee et al. | Mar 2016 | A1 |
20160094979 | Naik et al. | Mar 2016 | A1 |
20160098991 | Luo et al. | Apr 2016 | A1 |
20160104486 | Penilla et al. | Apr 2016 | A1 |
20160111091 | Bakish | Apr 2016 | A1 |
20160117386 | Ajmera et al. | Apr 2016 | A1 |
20160118048 | Heide | Apr 2016 | A1 |
20160119338 | Cheyer | Apr 2016 | A1 |
20160125048 | Hamada | May 2016 | A1 |
20160125071 | Gabbai | May 2016 | A1 |
20160132484 | Nauze et al. | May 2016 | A1 |
20160132488 | Clark et al. | May 2016 | A1 |
20160133254 | Vogel et al. | May 2016 | A1 |
20160139662 | Dabhade | May 2016 | A1 |
20160140951 | Agiomyrgiannakis et al. | May 2016 | A1 |
20160147725 | Patten et al. | May 2016 | A1 |
20160148610 | Kennewick, Jr. et al. | May 2016 | A1 |
20160155442 | Kannan et al. | Jun 2016 | A1 |
20160155443 | Khan et al. | Jun 2016 | A1 |
20160162456 | Munro et al. | Jun 2016 | A1 |
20160163311 | Crook et al. | Jun 2016 | A1 |
20160163312 | Naik et al. | Jun 2016 | A1 |
20160170966 | Kolo | Jun 2016 | A1 |
20160173578 | Sharma et al. | Jun 2016 | A1 |
20160173960 | Snibbe et al. | Jun 2016 | A1 |
20160179462 | Bjorkengren | Jun 2016 | A1 |
20160180844 | Vanblon et al. | Jun 2016 | A1 |
20160182410 | Janakiraman et al. | Jun 2016 | A1 |
20160182709 | Kim et al. | Jun 2016 | A1 |
20160188181 | Smith | Jun 2016 | A1 |
20160188738 | Gruber et al. | Jun 2016 | A1 |
20160189717 | Kannan et al. | Jun 2016 | A1 |
20160198319 | Huang et al. | Jul 2016 | A1 |
20160210981 | Lee | Jul 2016 | A1 |
20160212488 | Os et al. | Jul 2016 | A1 |
20160217784 | Gelfenbeyn et al. | Jul 2016 | A1 |
20160224540 | Stewart et al. | Aug 2016 | A1 |
20160224774 | Pender | Aug 2016 | A1 |
20160225372 | Cheung et al. | Aug 2016 | A1 |
20160240187 | Fleizach et al. | Aug 2016 | A1 |
20160247061 | Trask et al. | Aug 2016 | A1 |
20160253312 | Rhodes | Sep 2016 | A1 |
20160253528 | Gao et al. | Sep 2016 | A1 |
20160259623 | Sumner et al. | Sep 2016 | A1 |
20160259656 | Sumner et al. | Sep 2016 | A1 |
20160259779 | Labsk et al. | Sep 2016 | A1 |
20160260431 | Newendorp et al. | Sep 2016 | A1 |
20160260433 | Sumner et al. | Sep 2016 | A1 |
20160260434 | Gelfenbeyn et al. | Sep 2016 | A1 |
20160260436 | Lemay et al. | Sep 2016 | A1 |
20160266871 | Schmid et al. | Sep 2016 | A1 |
20160267904 | Biadsy et al. | Sep 2016 | A1 |
20160275941 | Bellegarda et al. | Sep 2016 | A1 |
20160275947 | Li et al. | Sep 2016 | A1 |
20160282956 | Ouyang et al. | Sep 2016 | A1 |
20160284005 | Daniel et al. | Sep 2016 | A1 |
20160284199 | Dotan-Cohen et al. | Sep 2016 | A1 |
20160286045 | Shaltiel et al. | Sep 2016 | A1 |
20160293168 | Chen | Oct 2016 | A1 |
20160299685 | Zhai et al. | Oct 2016 | A1 |
20160299882 | Hegerty et al. | Oct 2016 | A1 |
20160299883 | Zhu et al. | Oct 2016 | A1 |
20160307566 | Bellegarda | Oct 2016 | A1 |
20160313906 | Kilchenko et al. | Oct 2016 | A1 |
20160314788 | Jitkoff et al. | Oct 2016 | A1 |
20160314792 | Alvarez et al. | Oct 2016 | A1 |
20160321261 | Spasojevic et al. | Nov 2016 | A1 |
20160322045 | Hatfeild et al. | Nov 2016 | A1 |
20160322050 | Wang et al. | Nov 2016 | A1 |
20160328205 | Agrawal et al. | Nov 2016 | A1 |
20160328893 | Cordova et al. | Nov 2016 | A1 |
20160336007 | Hanazawa | Nov 2016 | A1 |
20160336010 | Lindahl | Nov 2016 | A1 |
20160336024 | Choi et al. | Nov 2016 | A1 |
20160337299 | Lane et al. | Nov 2016 | A1 |
20160337301 | Rollins et al. | Nov 2016 | A1 |
20160342685 | Basu et al. | Nov 2016 | A1 |
20160342781 | Jeon | Nov 2016 | A1 |
20160351190 | Binder et al. | Dec 2016 | A1 |
20160352567 | Robbins et al. | Dec 2016 | A1 |
20160357304 | Hatori et al. | Dec 2016 | A1 |
20160357728 | Bellegarda et al. | Dec 2016 | A1 |
20160357861 | Carlhian et al. | Dec 2016 | A1 |
20160357870 | Hentschel et al. | Dec 2016 | A1 |
20160358598 | Williams et al. | Dec 2016 | A1 |
20160358600 | Nallasamy et al. | Dec 2016 | A1 |
20160358619 | Ramprashad et al. | Dec 2016 | A1 |
20160359771 | Sridhar | Dec 2016 | A1 |
20160360039 | Sanghavi et al. | Dec 2016 | A1 |
20160360336 | Gross et al. | Dec 2016 | A1 |
20160360382 | Gross et al. | Dec 2016 | A1 |
20160364378 | Futrell et al. | Dec 2016 | A1 |
20160365101 | Foy et al. | Dec 2016 | A1 |
20160371250 | Rhodes | Dec 2016 | A1 |
20160372112 | Miller et al. | Dec 2016 | A1 |
20160378747 | Orr et al. | Dec 2016 | A1 |
20160379091 | Lin et al. | Dec 2016 | A1 |
20160379626 | Deisher et al. | Dec 2016 | A1 |
20160379633 | Lehman et al. | Dec 2016 | A1 |
20160379641 | Liu et al. | Dec 2016 | A1 |
20170004824 | Yoo et al. | Jan 2017 | A1 |
20170011303 | Annapureddy et al. | Jan 2017 | A1 |
20170011742 | Jing et al. | Jan 2017 | A1 |
20170018271 | Khan et al. | Jan 2017 | A1 |
20170019987 | Dragone et al. | Jan 2017 | A1 |
20170025124 | Mixter et al. | Jan 2017 | A1 |
20170031576 | Saoji et al. | Feb 2017 | A1 |
20170032783 | Lord et al. | Feb 2017 | A1 |
20170032791 | Elson et al. | Feb 2017 | A1 |
20170040002 | Basson et al. | Feb 2017 | A1 |
20170055895 | Des Jardins et al. | Mar 2017 | A1 |
20170060853 | Lee et al. | Mar 2017 | A1 |
20170068423 | Napolitano et al. | Mar 2017 | A1 |
20170068513 | Stasior et al. | Mar 2017 | A1 |
20170068550 | Zeitlin | Mar 2017 | A1 |
20170068670 | Orr et al. | Mar 2017 | A1 |
20170076720 | Gopalan et al. | Mar 2017 | A1 |
20170076721 | Bargetzi et al. | Mar 2017 | A1 |
20170083179 | Gruber et al. | Mar 2017 | A1 |
20170083285 | Meyers et al. | Mar 2017 | A1 |
20170090569 | Levesque | Mar 2017 | A1 |
20170091168 | Bellegarda et al. | Mar 2017 | A1 |
20170092270 | Newendorp et al. | Mar 2017 | A1 |
20170092278 | Evermann et al. | Mar 2017 | A1 |
20170102915 | Kuscher et al. | Apr 2017 | A1 |
20170103749 | Zhao et al. | Apr 2017 | A1 |
20170105190 | Logan et al. | Apr 2017 | A1 |
20170116177 | Walia | Apr 2017 | A1 |
20170116989 | Yadgar et al. | Apr 2017 | A1 |
20170124190 | Wang et al. | May 2017 | A1 |
20170125016 | Wang | May 2017 | A1 |
20170127124 | Wilson et al. | May 2017 | A9 |
20170131778 | Iyer | May 2017 | A1 |
20170132199 | Vescovi et al. | May 2017 | A1 |
20170140644 | Hwang et al. | May 2017 | A1 |
20170154033 | Lee | Jun 2017 | A1 |
20170154055 | Dimson et al. | Jun 2017 | A1 |
20170161018 | Lemay et al. | Jun 2017 | A1 |
20170161268 | Badaskar | Jun 2017 | A1 |
20170169818 | VanBlon et al. | Jun 2017 | A1 |
20170169819 | Mese et al. | Jun 2017 | A1 |
20170178619 | Naik et al. | Jun 2017 | A1 |
20170178626 | Gruber et al. | Jun 2017 | A1 |
20170180499 | Gelfenbeyn et al. | Jun 2017 | A1 |
20170185375 | Martel et al. | Jun 2017 | A1 |
20170185581 | Bojja et al. | Jun 2017 | A1 |
20170186429 | Giuli et al. | Jun 2017 | A1 |
20170193083 | Bhatt et al. | Jul 2017 | A1 |
20170199874 | Patel et al. | Jul 2017 | A1 |
20170200066 | Wang et al. | Jul 2017 | A1 |
20170221486 | Kurata et al. | Aug 2017 | A1 |
20170227935 | Su et al. | Aug 2017 | A1 |
20170228382 | Haviv et al. | Aug 2017 | A1 |
20170230709 | Van Os et al. | Aug 2017 | A1 |
20170242653 | Lang et al. | Aug 2017 | A1 |
20170243468 | Dotan-Cohen et al. | Aug 2017 | A1 |
20170256256 | Wang et al. | Sep 2017 | A1 |
20170263247 | Kang et al. | Sep 2017 | A1 |
20170263248 | Gruber et al. | Sep 2017 | A1 |
20170263249 | Akbacak et al. | Sep 2017 | A1 |
20170264451 | Yu et al. | Sep 2017 | A1 |
20170278514 | Mathias et al. | Sep 2017 | A1 |
20170285915 | Napolitano et al. | Oct 2017 | A1 |
20170286397 | Gonzalez | Oct 2017 | A1 |
20170295446 | Thagadur Shivappa | Oct 2017 | A1 |
20170316775 | Le et al. | Nov 2017 | A1 |
20170316782 | Haughay et al. | Nov 2017 | A1 |
20170323637 | Naik | Nov 2017 | A1 |
20170345411 | Raitio et al. | Nov 2017 | A1 |
20170346949 | Sanghavi et al. | Nov 2017 | A1 |
20170352346 | Paulik et al. | Dec 2017 | A1 |
20170352350 | Booker et al. | Dec 2017 | A1 |
20170357478 | Piersol et al. | Dec 2017 | A1 |
20170357632 | Pagallo et al. | Dec 2017 | A1 |
20170357633 | Wang et al. | Dec 2017 | A1 |
20170357637 | Nell et al. | Dec 2017 | A1 |
20170357640 | Bellegarda et al. | Dec 2017 | A1 |
20170357716 | Bellegarda et al. | Dec 2017 | A1 |
20170358300 | Laurens et al. | Dec 2017 | A1 |
20170358301 | Raitio et al. | Dec 2017 | A1 |
20170358302 | Orr et al. | Dec 2017 | A1 |
20170358303 | Walker, II et al. | Dec 2017 | A1 |
20170358304 | Castillo et al. | Dec 2017 | A1 |
20170358305 | Kudurshian et al. | Dec 2017 | A1 |
20170371885 | Aggarwal et al. | Dec 2017 | A1 |
20180007060 | Leblang et al. | Jan 2018 | A1 |
20180007538 | Naik et al. | Jan 2018 | A1 |
20180012596 | Piernot et al. | Jan 2018 | A1 |
20180033431 | Newendorp et al. | Feb 2018 | A1 |
20180054505 | Hart et al. | Feb 2018 | A1 |
20180060312 | Won | Mar 2018 | A1 |
20180063624 | Boesen | Mar 2018 | A1 |
20180067914 | Chen et al. | Mar 2018 | A1 |
20180090143 | Saddler et al. | Mar 2018 | A1 |
20180107945 | Gao et al. | Apr 2018 | A1 |
20180108346 | Paulik et al. | Apr 2018 | A1 |
20180130470 | Lemay et al. | May 2018 | A1 |
20180137856 | Gilbert | May 2018 | A1 |
20180137857 | Zhou et al. | May 2018 | A1 |
20180144748 | Leong | May 2018 | A1 |
20180190273 | Karimli et al. | Jul 2018 | A1 |
20180196683 | Radebaugh et al. | Jul 2018 | A1 |
20180213448 | Segal et al. | Jul 2018 | A1 |
20180218735 | Hunt et al. | Aug 2018 | A1 |
20180232203 | Gelfenbeyn et al. | Aug 2018 | A1 |
20180276197 | Nell et al. | Sep 2018 | A1 |
20180308485 | Kudurshian et al. | Oct 2018 | A1 |
20180308486 | Saddler et al. | Oct 2018 | A1 |
20180322112 | Bellegarda et al. | Nov 2018 | A1 |
20180329677 | Gruber et al. | Nov 2018 | A1 |
20180329957 | Frazzingaro et al. | Nov 2018 | A1 |
20180329982 | Patel et al. | Nov 2018 | A1 |
20180330714 | Paulik et al. | Nov 2018 | A1 |
20180330722 | Newendorp et al. | Nov 2018 | A1 |
20180330723 | Acero et al. | Nov 2018 | A1 |
20180330730 | Garg et al. | Nov 2018 | A1 |
20180330731 | Zeitlin et al. | Nov 2018 | A1 |
20180330737 | Paulik et al. | Nov 2018 | A1 |
20180332118 | Phipps et al. | Nov 2018 | A1 |
20180336275 | Graham et al. | Nov 2018 | A1 |
20180336892 | Kim et al. | Nov 2018 | A1 |
20180336894 | Graham et al. | Nov 2018 | A1 |
20180336905 | Kim et al. | Nov 2018 | A1 |
20180373487 | Gruber et al. | Dec 2018 | A1 |
20190014450 | Gruber et al. | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
2015203483 | Jul 2015 | AU |
2694314 | Aug 2010 | CA |
2792412 | Jul 2011 | CA |
2666438 | Jun 2013 | CA |
101416471 | Apr 2009 | CN |
101427244 | May 2009 | CN |
101448340 | Jun 2009 | CN |
101453498 | Jun 2009 | CN |
101459722 | Jun 2009 | CN |
101499156 | Aug 2009 | CN |
101500041 | Aug 2009 | CN |
101515952 | Aug 2009 | CN |
101535983 | Sep 2009 | CN |
101547396 | Sep 2009 | CN |
101557432 | Oct 2009 | CN |
101567167 | Oct 2009 | CN |
101601088 | Dec 2009 | CN |
101604521 | Dec 2009 | CN |
101632316 | Jan 2010 | CN |
101636736 | Jan 2010 | CN |
101673544 | Mar 2010 | CN |
101751387 | Jun 2010 | CN |
101833286 | Sep 2010 | CN |
101847405 | Sep 2010 | CN |
101894547 | Nov 2010 | CN |
101930789 | Dec 2010 | CN |
101939740 | Jan 2011 | CN |
101951553 | Jan 2011 | CN |
102137193 | Jul 2011 | CN |
102160043 | Aug 2011 | CN |
102201235 | Sep 2011 | CN |
102246136 | Nov 2011 | CN |
202035047 | Nov 2011 | CN |
102282609 | Dec 2011 | CN |
202092650 | Dec 2011 | CN |
102340590 | Feb 2012 | CN |
102368256 | Mar 2012 | CN |
102405463 | Apr 2012 | CN |
102498457 | Jun 2012 | CN |
102629246 | Aug 2012 | CN |
102682769 | Sep 2012 | CN |
102682771 | Sep 2012 | CN |
102685295 | Sep 2012 | CN |
102693725 | Sep 2012 | CN |
102694909 | Sep 2012 | CN |
102792320 | Nov 2012 | CN |
102801853 | Nov 2012 | CN |
102870065 | Jan 2013 | CN |
102917004 | Feb 2013 | CN |
102918493 | Feb 2013 | CN |
103035240 | Apr 2013 | CN |
103038728 | Apr 2013 | CN |
103093334 | May 2013 | CN |
103135916 | Jun 2013 | CN |
103209369 | Jul 2013 | CN |
103365279 | Oct 2013 | CN |
103744761 | Apr 2014 | CN |
103795850 | May 2014 | CN |
103930945 | Jul 2014 | CN |
104038621 | Sep 2014 | CN |
104090652 | Oct 2014 | CN |
104144377 | Nov 2014 | CN |
104281259 | Jan 2015 | CN |
104284257 | Jan 2015 | CN |
104335234 | Feb 2015 | CN |
104423625 | Mar 2015 | CN |
104463552 | Mar 2015 | CN |
104516522 | Apr 2015 | CN |
104854583 | Aug 2015 | CN |
104951077 | Sep 2015 | CN |
105100356 | Nov 2015 | CN |
105247511 | Jan 2016 | CN |
105264524 | Jan 2016 | CN |
105471705 | Apr 2016 | CN |
107919123 | Apr 2018 | CN |
102008024258 | Nov 2009 | DE |
202016008226 | May 2017 | DE |
1909263 | Jan 2009 | EP |
1335620 | Mar 2009 | EP |
2069895 | Jun 2009 | EP |
2081185 | Jul 2009 | EP |
2094032 | Aug 2009 | EP |
2096840 | Sep 2009 | EP |
2107553 | Oct 2009 | EP |
2109295 | Oct 2009 | EP |
1720375 | Jul 2010 | EP |
2205010 | Jul 2010 | EP |
2309491 | Apr 2011 | EP |
2329348 | Jun 2011 | EP |
2339576 | Jun 2011 | EP |
2400373 | Dec 2011 | EP |
2431842 | Mar 2012 | EP |
2523188 | Nov 2012 | EP |
2551784 | Jan 2013 | EP |
2555536 | Feb 2013 | EP |
2575128 | Apr 2013 | EP |
2632129 | Aug 2013 | EP |
2669889 | Dec 2013 | EP |
2683175 | Jan 2014 | EP |
2733598 | May 2014 | EP |
2760015 | Jul 2014 | EP |
2801890 | Nov 2014 | EP |
2801972 | Nov 2014 | EP |
2824564 | Jan 2015 | EP |
2849177 | Mar 2015 | EP |
2930715 | Oct 2015 | EP |
2938022 | Oct 2015 | EP |
2940556 | Nov 2015 | EP |
2950307 | Dec 2015 | EP |
3035329 | Jun 2016 | EP |
3224708 | Oct 2017 | EP |
3246916 | Nov 2017 | EP |
3300074 | Mar 2018 | EP |
2983065 | Aug 2018 | EP |
2445436 | Jul 2008 | GB |
2-146099 | Jun 1990 | JP |
09-062293 | Mar 1997 | JP |
10-312194 | Nov 1998 | JP |
11-288296 | Oct 1999 | JP |
2000-29661 | Jan 2000 | JP |
2003-44090 | Feb 2003 | JP |
2003-255991 | Sep 2003 | JP |
2003-298687 | Oct 2003 | JP |
2004-056226 | Feb 2004 | JP |
2004-101901 | Apr 2004 | JP |
2004-310034 | Nov 2004 | JP |
3726448 | Dec 2005 | JP |
2007-272773 | Oct 2007 | JP |
2009-2850 | Jan 2009 | JP |
2009-503623 | Jan 2009 | JP |
2009-36999 | Feb 2009 | JP |
2009-505142 | Feb 2009 | JP |
2009-47920 | Mar 2009 | JP |
2009-069062 | Apr 2009 | JP |
2009-98490 | May 2009 | JP |
2009-110300 | May 2009 | JP |
2009-116841 | May 2009 | JP |
2009-134409 | Jun 2009 | JP |
2009-140444 | Jun 2009 | JP |
2009-169470 | Jul 2009 | JP |
2009-186989 | Aug 2009 | JP |
2009-193448 | Aug 2009 | JP |
2009-193457 | Aug 2009 | JP |
2009-193532 | Aug 2009 | JP |
2009-205367 | Sep 2009 | JP |
2009-223840 | Oct 2009 | JP |
2009-294913 | Dec 2009 | JP |
2009-294946 | Dec 2009 | JP |
2009-543166 | Dec 2009 | JP |
2010-66519 | Mar 2010 | JP |
2010-78979 | Apr 2010 | JP |
2010-108378 | May 2010 | JP |
2010-518475 | May 2010 | JP |
2010-518526 | May 2010 | JP |
2010-146347 | Jul 2010 | JP |
2010-157207 | Jul 2010 | JP |
2010-166478 | Jul 2010 | JP |
2010-205111 | Sep 2010 | JP |
2010-224236 | Oct 2010 | JP |
4563106 | Oct 2010 | JP |
2010-535377 | Nov 2010 | JP |
2010-287063 | Dec 2010 | JP |
2011-33874 | Feb 2011 | JP |
2011-41026 | Feb 2011 | JP |
2011-45005 | Mar 2011 | JP |
2011-59659 | Mar 2011 | JP |
2011-81541 | Apr 2011 | JP |
2011-525045 | Sep 2011 | JP |
2011-238022 | Nov 2011 | JP |
2011-250027 | Dec 2011 | JP |
2012-014394 | Jan 2012 | JP |
2012-33997 | Feb 2012 | JP |
2012-508530 | Apr 2012 | JP |
2012-089020 | May 2012 | JP |
2012-116442 | Jun 2012 | JP |
2012-142744 | Jul 2012 | JP |
2012-147063 | Aug 2012 | JP |
2012-518847 | Aug 2012 | JP |
2013-37688 | Feb 2013 | JP |
2013-511214 | Mar 2013 | JP |
2013-65284 | Apr 2013 | JP |
2013-73240 | Apr 2013 | JP |
2013-513315 | Apr 2013 | JP |
2013-080476 | May 2013 | JP |
2013-517566 | May 2013 | JP |
2013-134430 | Jul 2013 | JP |
2013140520 | Jul 2013 | JP |
2013-527947 | Jul 2013 | JP |
2013-528012 | Jul 2013 | JP |
2013-156349 | Aug 2013 | JP |
2013-200423 | Oct 2013 | JP |
2013-205999 | Oct 2013 | JP |
2013-238936 | Nov 2013 | JP |
2014-10688 | Jan 2014 | JP |
2014-026629 | Feb 2014 | JP |
2014-60600 | Apr 2014 | JP |
2014-72586 | Apr 2014 | JP |
2014-077969 | May 2014 | JP |
2014109889 | Jun 2014 | JP |
2014-124332 | Jul 2014 | JP |
2014-126600 | Jul 2014 | JP |
2014-145842 | Aug 2014 | JP |
2014-150323 | Aug 2014 | JP |
2014-222514 | Nov 2014 | JP |
2015-8001 | Jan 2015 | JP |
2015-18365 | Jan 2015 | JP |
2015-501022 | Jan 2015 | JP |
2015-41845 | Mar 2015 | JP |
2015-94848 | May 2015 | JP |
2015-519675 | Jul 2015 | JP |
2015-524974 | Aug 2015 | JP |
2015-526776 | Sep 2015 | JP |
2015-528140 | Sep 2015 | JP |
2015-528918 | Oct 2015 | JP |
2016-508007 | Mar 2016 | JP |
2016-119615 | Jun 2016 | JP |
2016-151928 | Aug 2016 | JP |
2017-19331 | Jan 2017 | JP |
10-2006-0068393 | Jun 2006 | KR |
10-2006-0084455 | Jul 2006 | KR |
10-0702645 | Apr 2007 | KR |
10-2009-0001716 | Jan 2009 | KR |
10-2009-0028464 | Mar 2009 | KR |
10-2009-0030117 | Mar 2009 | KR |
10-2009-0086805 | Aug 2009 | KR |
10-0920267 | Oct 2009 | KR |
10-2009-0122944 | Dec 2009 | KR |
10-2009-0127961 | Dec 2009 | KR |
10-2009-0129192 | Dec 2009 | KR |
10-2010-0015958 | Feb 2010 | KR |
10-2010-0048571 | May 2010 | KR |
10-2010-0053149 | May 2010 | KR |
10-2010-0119519 | Nov 2010 | KR |
10-2011-0043644 | Apr 2011 | KR |
10-1032792 | May 2011 | KR |
10-2011-0068490 | Jun 2011 | KR |
10-2011-0072847 | Jun 2011 | KR |
10-2011-0086492 | Jul 2011 | KR |
10-2011-0100620 | Sep 2011 | KR |
10-2011-0113414 | Oct 2011 | KR |
10-2011-0115134 | Oct 2011 | KR |
10-2012-0020164 | Mar 2012 | KR |
10-2012-0031722 | Apr 2012 | KR |
10-1178310 | Aug 2012 | KR |
10-2012-0120316 | Nov 2012 | KR |
10-2012-0137435 | Dec 2012 | KR |
10-2012-0137440 | Dec 2012 | KR |
10-2012-0138826 | Dec 2012 | KR |
10-2012-0139827 | Dec 2012 | KR |
10-1193668 | Dec 2012 | KR |
10-2013-0035983 | Apr 2013 | KR |
10-2013-0108563 | Oct 2013 | KR |
10-1334342 | Nov 2013 | KR |
10-2013-0131252 | Dec 2013 | KR |
10-2013-0133629 | Dec 2013 | KR |
10-2014-0031283 | Mar 2014 | KR |
10-2014-0033574 | Mar 2014 | KR |
10-2014-0147557 | Dec 2014 | KR |
10-2015-0013631 | Feb 2015 | KR |
10-2015-0038375 | Apr 2015 | KR |
10-2015-0043512 | Apr 2015 | KR |
10-2016-0004351 | Jan 2016 | KR |
10-2016-0010523 | Jan 2016 | KR |
10-2016-0040279 | Apr 2016 | KR |
10-2017-0036805 | Apr 2017 | KR |
2349970 | Mar 2009 | RU |
2353068 | Apr 2009 | RU |
2364917 | Aug 2009 | RU |
M348993 | Jan 2009 | TW |
200943903 | Oct 2009 | TW |
201018258 | May 2010 | TW |
201027515 | Jul 2010 | TW |
201028996 | Aug 2010 | TW |
201110108 | Mar 2011 | TW |
2011-42823 | Dec 2011 | TW |
201227715 | Jul 2012 | TW |
201245989 | Nov 2012 | TW |
201312548 | Mar 2013 | TW |
2009009240 | Jan 2009 | WO |
2009016631 | Feb 2009 | WO |
2009017280 | Feb 2009 | WO |
2009034686 | Mar 2009 | WO |
2009075912 | Jun 2009 | WO |
2009104126 | Aug 2009 | WO |
2009156438 | Dec 2009 | WO |
2009156978 | Dec 2009 | WO |
2010013369 | Feb 2010 | WO |
2010054373 | May 2010 | WO |
2010075623 | Jul 2010 | WO |
2010100937 | Sep 2010 | WO |
2010141802 | Dec 2010 | WO |
2011057346 | May 2011 | WO |
2011060106 | May 2011 | WO |
2011088053 | Jul 2011 | WO |
2011093025 | Aug 2011 | WO |
2011116309 | Sep 2011 | WO |
2011133543 | Oct 2011 | WO |
2011150730 | Dec 2011 | WO |
2011163350 | Dec 2011 | WO |
2011088053 | Jan 2012 | WO |
2012019020 | Feb 2012 | WO |
2012019637 | Feb 2012 | WO |
2012129231 | Sep 2012 | WO |
2012135157 | Oct 2012 | WO |
2012154317 | Nov 2012 | WO |
2012155079 | Nov 2012 | WO |
2012167168 | Dec 2012 | WO |
2013009578 | Jan 2013 | WO |
2013022135 | Feb 2013 | WO |
2013022223 | Feb 2013 | WO |
2013048880 | Apr 2013 | WO |
2013049358 | Apr 2013 | WO |
2013163113 | Oct 2013 | WO |
2013169842 | Nov 2013 | WO |
2013173504 | Nov 2013 | WO |
2013173511 | Nov 2013 | WO |
2013176847 | Nov 2013 | WO |
2013184953 | Dec 2013 | WO |
2013184990 | Dec 2013 | WO |
2014003138 | Jan 2014 | WO |
2014021967 | Feb 2014 | WO |
2014022148 | Feb 2014 | WO |
2014028797 | Feb 2014 | WO |
2014031505 | Feb 2014 | WO |
2014047047 | Mar 2014 | WO |
2014066352 | May 2014 | WO |
2014070872 | May 2014 | WO |
2014078965 | May 2014 | WO |
2014096506 | Jun 2014 | WO |
2014124332 | Aug 2014 | WO |
2014137074 | Sep 2014 | WO |
2014138604 | Sep 2014 | WO |
2014143959 | Sep 2014 | WO |
2014144579 | Sep 2014 | WO |
2014159581 | Oct 2014 | WO |
2014197336 | Dec 2014 | WO |
2014200728 | Dec 2014 | WO |
2014204659 | Dec 2014 | WO |
2015018440 | Feb 2015 | WO |
2015029379 | Mar 2015 | WO |
2015030796 | Mar 2015 | WO |
2015041892 | Mar 2015 | WO |
2015084659 | Jun 2015 | WO |
2015092943 | Jun 2015 | WO |
2015094169 | Jun 2015 | WO |
2015094369 | Jun 2015 | WO |
2015099939 | Jul 2015 | WO |
2015116151 | Aug 2015 | WO |
2015151133 | Oct 2015 | WO |
2015153310 | Oct 2015 | WO |
2015157013 | Oct 2015 | WO |
2015183401 | Dec 2015 | WO |
2015183699 | Dec 2015 | WO |
2015184186 | Dec 2015 | WO |
2015200207 | Dec 2015 | WO |
2016027933 | Feb 2016 | WO |
2016028946 | Feb 2016 | WO |
2016033257 | Mar 2016 | WO |
2016054230 | Apr 2016 | WO |
2016057268 | Apr 2016 | WO |
2016075081 | May 2016 | WO |
2016085775 | Jun 2016 | WO |
2016100139 | Jun 2016 | WO |
2016111881 | Jul 2016 | WO |
2016144840 | Sep 2016 | WO |
2016144982 | Sep 2016 | WO |
2016175354 | Nov 2016 | WO |
2016190950 | Dec 2016 | WO |
2016209444 | Dec 2016 | WO |
2017044260 | Mar 2017 | WO |
2017044629 | Mar 2017 | WO |
2017053311 | Mar 2017 | WO |
Entry |
---|
Adium, “AboutAdium—Adium X—Trac”, available at <http://web.archive.org/web/20070819113247/http://trac.adiumx.com/wiki/AboutAdium>, retrieved on Nov. 25, 2011, 2 pages. |
Alfred App, “Alfred”, available at <http://www.alfredapp.com/>, retrieved on Feb. 8, 2012, 5 pages. |
Apple, “VoiceOver”, available at <http://www.apple.com/accessibility/voiceover/>, May 19, 2014, 3 pages. |
Berry et al., “PTIME: Personalized Assistance for Calendaring”, ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Jul. 2011, pp. 1-22. |
Bocchieri et al., “Use of Geographical Meta-Data in ASR Language and Acoustic Models”, IEEE International Conference on Acoustics Speech and Signal Processing, 2010, pp. 5118-5121. |
Butcher, Mike, “EVI Arrives in Town to go Toe-to-Toe with Siri”, TechCrunch, Jan. 23, 2012, 2 pages. |
Chen, Yi, “Multimedia Siri Finds and Plays Whatever You Ask For”, PSFK Report, Feb. 9, 2012, 9 pages. |
Cheyer, Adam, “About Adam Cheyer”, available at <http://www.adam.cheyer.com/about.html>, retrieved on Sep. 17, 2012, 2 pages. |
Choi et al., “Acoustic and Visual Signal based Context Awareness System for Mobile Application”, IEEE Transactions on Consumer Electronics, vol. 57, No. 2, May 2011, pp. 738-746. |
Evi, “Meet Evi: The One Mobile Application that Provides Solutions for your Everyday Problems”, Feb. 2012, 3 pages. |
Examiner's Pre-Review Report received for Japanese Patent Application No. 2015-557147, dated Mar. 1, 2018, 4 pages. |
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results”, List of Publications Manually Reviewed for the Search of U.S. Pat. No. 7,177,798, Mar. 22, 2013, 1 page. |
Final Office Action received for U.S. Appl. No. 14/175,864, dated Feb. 23, 2018, 26 pages. |
Findlater et al., “Beyond QWERTY: Augmenting Touch-Screen Keyboards with Multi-Touch Gestures for Non-Alphanumeric Input”, CHI '12, Austin, Texas, USA, May 5-10, 2012, 4 pages. |
Gannes, Liz, “Alfred App Gives Personalized Restaurant Recommendations”, AllThingsD, Jul. 18, 2011, pp. 1-3. |
Gruber, Thomas R., et al., U.S. Appl. No. 61/657,744, filed Jun. 9, 2012, titled “Automatically Adapting User Interfaces for Hands-Free Interaction”, 40 pages. |
Gruber, Tom, “Big Think Small Screen: How Semantic Computing in the Cloud will Revolutionize the Consumer Experience on the Phone”, Keynote Presentation at Web 3.0 Conference, Jan. 2010, 41 pages. |
Gruber, Tom, “Siri, A Virtual Personal Assistant-Bringing Intelligence to the Interface”, Semantic Technologies Conference, Jun. 16, 2009, 21 pages. |
Guim, Mark, “How to Set a Person-Based Reminder with Cortana”, available at <http://www.wpcentral.com/how-to-person-based-reminder-cortana>, Apr. 26, 2014, 15 pages. |
Hardwar, Devindra, “Driving App Waze Builds its own Siri for Hands-Free Voice Control”, Available online at <http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/>, retrieved on Feb. 9, 2012, 4 pages. |
Interactive Voice, available at <http://www.helloivee.com/company/>, retrieved on Feb. 10, 2014, 2 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2014/015418, dated Aug. 20, 2015, 12 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2014/015418, dated Aug. 26, 2014, 17 pages. |
Invitation to pay additional fees received for the PCT Patent Application No. PCT/US2014/015418, dated May 26, 2014, 5 pages. |
Iowegian International, “FIR Filter Properties, DSPGuru, Digital Signal Processing Central”, available at <http://www.dspguru.com/dsp/faq/fir/properties> retrieved on Jul. 28, 2010, 6 pages. |
Kickstarter, “Ivee Sleek: Wi-Fi Voice-Activated Assistant”, available at <https://www.kickstarter.com/projects/ivee/ivee-sleek-wi-fi-voice-activated-assistant>, retrieved on Feb. 10, 2014, 13 pages. |
Meet Ivee, Your Wi-Fi Voice Activated Assistant, available at <http://www.helloivee.com/>, retrieved on Feb. 10, 2014, 8 pages. |
Mel Scale, Wikipedia the Free Encyclopedia, Last modified on Oct. 13, 2009 and retrieved on Jul. 28, 2010, available at <http://en.wikipedia.org/wiki/Mel_scale>, 2 pages. |
Microsoft, “Turn on and Use Magnifier”, available at <http://www.microsoft.com/windowsxp/using/accessibility/magnifierturnon.mspx>, retrieved on Jun. 6, 2009. |
Miller, Chance, “Google Keyboard Updated with New Personalized Suggestions Feature”, available at <http://9to5google.com/2014/03/19/google-keyboard-updated-with-new-personalized-suggestions-feature/>, Mar. 19, 2014, 4 pages. |
Minimum Phase, Wikipedia the free Encyclopedia, Last modified on Jan. 12, 2010 and retrieved on Jul. 28, 2010, available at <http://en.wikipedia.org/wiki/Minimum_phase>, 8 pages. |
Mobile Speech Solutions, Mobile Accessibility, SVOX AG Product Information Sheet, available at <http://www.svox.com/site/bra840604/con782768/mob965831936.aSQ?osLang=1>, Sep. 27, 2012, 1 page. |
My Cool Aids, “What's New”, available at <http://www.mycoolaids.com/>, 2012, 1 page. |
Myers, Brad A., “Shortcutter for Palm”, available at <http://www.cs.cmu.edu/˜pebbles/v5/shortcutter/palm/index.html>, retrieved on Jun. 18, 2014, 10 pages. |
Naone, Erica, “TR10: Intelligent Software Assistant”, Technology Review, Mar.-Apr. 2009, 2 pages. |
Navigli, Roberto, “Word Sense Disambiguation: A Survey”, ACM Computing Surveys, vol. 41, No. 2, Feb. 2009, 69 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/175,864, dated Jul. 7, 2017, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/175,864, dated Jun. 22, 2016, 17 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/175,864, dated Mar. 15, 2017, 21 pages. |
Non-Final Office Action received for U.S. Appl. No. 14/175,864, dated Sep. 24, 2015, 17 pages. |
Notice of Allowance received for U.S. Appl. No. 14/175,864, dated Jul. 2, 2018, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 14/175,864, dated Oct. 24, 2018, 8 pages. |
Office Action received for Australian Patent Application No. 2014214676, dated Aug. 2, 2017, 4 pages. |
Office Action received for Australian Patent Application No. 2014214676, dated Aug. 3, 2016, 3 pages. |
Office Action received for Australian Patent Application No. 2015101078, dated Jan. 25, 2016, 8 pages. |
Office Action received for Australian Patent Application No. 2015101078, dated Oct. 9, 2015, 3 pages. |
Office Action received for Australian Patent Application No. 2017210578, dated Mar. 16, 2018, 3 pages. |
Office Action received for Australian Patent Application No. 2017210578, dated Nov. 23, 2018, 5 pages. |
Office Action received for Chinese Patent Application No. 201480007349.6, dated Jun. 30, 2017, 16 pages. |
Office Action received for Chinese Patent Application No. 201480007349.6, dated Sep. 3, 2018, 12 pages. |
Office Action received for European Patent Application No. 14707872.9, dated May 12, 2016, 3 pages. |
Office Action received for Japanese Patent Application No. 2015-557147, dated Oct. 14, 2016, 7 pages. |
Office Action received for Japanese Patent Application No. 2015-557147, dated Sep. 1, 2017, 7 pages. |
Office Action received for Japanese Patent Application No. 2017-250005, dated Oct. 26, 2018, 7 pages. |
Office Action received for Korean Patent Application No. 10-2015-7021438, dated Feb. 24, 2017, 11 pages. |
Office Action received for Korean Patent Application No. 10-2015-7021438, dated Mar. 21, 2018, 14 pages. |
Office Action received for Korean Patent Application No. 10-2015-7021438, dated May 23, 2016, 11 pages. |
Office Action received for Korean Patent Application No. 10-2015-7021438, dated Nov. 20, 2017, 6 pages. |
Office Action received for Korean Patent Application No. 10-2016-7029691, dated Apr. 27, 2018, 10 pages. |
Office Action received for Korean Patent Application No. 10-2016-7029691, dated Dec. 27, 2017, 6 pages. |
Office Action received for Korean Patent Application No. 10-2016-7029691, dated Feb. 13, 2017, 9 pages. |
Office Action received for Korean Patent Application No. 10-2018-7017535, dated Sep. 25, 2018, 11 pages. |
Phoenix Solutions, Inc., “Declaration of Christopher Schmandt Regarding the MIT Galaxy System”, West Interactive Corp., A Delaware Corporation, Document 40, Jul. 2, 2010, 162 pages. |
Sarawagi, Sunita, “CRF Package Page”, available at <http://crf.sourceforge.net/>, retrieved on Apr. 6, 2011, 2 pages. |
Simonite, Tom, “One Easy Way to Make Siri Smarter”, Technology Review, Oct. 18, 2011, 2 pages. |
Speaker Recognition, Wikipedia, The Free Encyclopedia, Nov. 2, 2010, 4 pages. |
SRI, “SRI Speech: Products: Software Development Kits: EduSpeak”, available at <http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak.shtml>, retrieved on Jun. 20, 2013, 2 pages. |
Stent et al., “Geo-Centric Language Models for Local Business Voice Search”, AT&T Labs—Research, 2009, pp. 389-396. |
Sullivan, Danny, “How Google Instant's Autocomplete Suggestions Work”, available at <http://searchengineland.com/how-google-instant-autocomplete-suggestions-work-62592>, Apr. 6, 2011, 12 pages. |
Textndrive, “Text'nDrive App Demo-Listen and Reply to your Messages by Voice while Driving!”, YouTube Video available at <http://www.youtube.com/watch?v=WaGfzoHsAMw>, Apr. 27, 2010, 1 page. |
Tofel, Kevin C., “SpeakToIt: A Personal Assistant for Older iPhones, iPads”, Apple News, Tips and Reviews, Feb. 9, 2012, 7 pages. |
Tucker, Joshua, “Too Lazy to Grab Your TV Remote? Use Siri Instead”, Engadget, Nov. 30, 2011, 8 pages. |
Tur et al., “The CALO Meeting Assistant System”, IEEE Transactions on Audio, Speech and Language Processing, vol. 18, No. 6, Aug. 2010, pp. 1601-1611. |
Vlingo Incar, “Distracted Driving Solution with Vlingo InCar”, YouTube Video, Available online at <http://www.youtube.com/watch?v=Vgs8XfXxgz4>, Oct. 2010, 2 pages. |
Voiceassist, “Send Text, Listen to and Send E-Mail by Voice”, YouTube Video, Available online at <http://www.youtube.com/watch?v=0tEU61nHHA4>, Jul. 30, 2009, 1 page. |
Voiceonthego, “Voice on the Go (BlackBerry)”, YouTube Video, available online at <http://www.youtube.com/watch?v=pJqpWgQS98w>, Jul. 27, 2009, 1 page. |
Wikipedia, “Acoustic Model”, available at <http://en.wikipedia.org/wiki/AcousticModel>, retrieved on Sep. 14, 2011, 2 pages. |
Wikipedia, “Language Model”, available at <http://en.wikipedia.org/wiki/Language_model>, retrieved on Sep. 14, 2011, 4 pages. |
Wikipedia, “Speech Recognition”, available at <http://en.wikipedia.org/wiki/Speech_recognition>, retrieved on Sep. 14, 2011, 12 pages. |
Wilson, Mark, “New iPod Shuffle Moves Buttons to Headphones, Adds Text to Speech”, available at <http://gizmodo.com/5167946/new-ipod-shuffle-moves-buttons-to-headphones-adds-text-to-speech>, Mar. 11, 2009, 12 pages. |
Xu et al., “Speech-Based Interactive Games for Language Learning: Reading, Translation, and Question-Answering”, Computational Linguistics and Chinese Language Processing, vol. 14, No. 2, Jun. 2009, pp. 133-160. |
Zainab, “Google Input Tools Shows Onscreen Keyboard in Multiple Languages [Chrome]”, available at <http://www.addictivetips.com/internet-tips/google-input-tools-shows-multiple-language-onscreen-keyboards-chrome/>, Jan. 3, 2012, 3 pages. |
Zhang et al., “Research of Text Classification Model Based on Latent Semantic Analysis and Improved HS-SVM”, Intelligent Systems and Applications (ISA), 2010 2nd International Workshop, May 22-23, 2010, 5 pages. |
“Alexa, Turn Up the Heat!”, Smartthings Samsung [online], Available online at https://web.archive.org/web/20160329142041/https://blog.smartthings.com/news/smartthingsupdates/alexa-turn-up-the-heat/, Mar. 3, 2016, 3 pages. |
Anania, Peter, “Amazon Echo with Home Automation (Smartthings)”, Available online at https://www.youtube.com/watch?v=LMW6aXmsWNE, Dec. 20, 2015, 1 page. |
Api.AI, “Android App Review—Speaktoit Assistant”, Available at <https://www.youtube.com/watch?v=myE498nnyw>, Mar. 30, 2011, 3 pages. |
Asakura et al., “What LG thinks; How the TV should be in the Living Room”, HiVi, vol. 31, No. 7 (Jul. 2013), Stereo Sound Publishing, Inc., Jun. 17, 2013, pp. 68-71 (Official Copy Only). (See Communication under 37 CFR § 1.98(a) (3)). |
Ashbrook, Daniel L., “Enabling Mobile Microinteractions”, Retrieved from the Internet: URL: “http://danielashbrook.com/wp-content/uploads/2012/06/2009-Ashbrook-Thesis.pdf”, May 2010, 186 pages. |
Ashingtondctech & Gaming, “SwipeStatusBar—Reveal the Status Bar in a Fullscreen App”, Online Available at: <https://www.youtube.com/watch?v=wA_tT9IAreQ>, Jul. 1, 2013, 3 pages. |
“Ask Alexa—Things That Are Smart Wiki”, Available online at <URL:http://thingsthataresmart.wiki/index.php?title=Ask_Alexa&oldid=4283>, [retrieved from internet on Aug. 2, 2017], Jun. 8, 2016, pp. 1-31. |
Bertulucci, Jeff, “Google Adds Voice Search to Chrome Browser”, PC World, Jun. 14, 2011, 5 pages. |
Cambria et al., “Jumping NLP Curves: A Review of Natural Language Processing Research”, IEEE Computational Intelligence Magazine, 2014, vol. 9, May 2014, pp. 48-57. |
Caraballo et al., “Language Identification Based on a Discriminative Text Categorization Technique”, Iberspeech 2012—VII Jornadas En Tecnologia Del Habla and Iii Iberiansl Tech Workshop, Nov. 21, 2012, pp. 1-10. |
Castleos, “Whole House Voice Control Demonstration”, available online at: https://www.youtube.com/watch?v=9SRCoxrZ_W4, Jun. 2, 2012, 26 pages. |
Colt, Sam, “Here's One Way Apple's Smartwatch Could Be Better Than Anything Else”, Business Insider, Aug. 21, 2014, pp. 1-4. |
Deedeevuu, “Amazon Echo Alarm Feature”, Available online at https://www.youtube.com/watch?v=fdjU8eRLk7c, Feb. 16, 2015, 1 page. |
“DIRECTV™ Voice”, Now Part of the DIRECTV Mobile App for Phones, Sep. 18, 2013, 5 pages. |
EARTHLING1984, “Samsung Galaxy Smart Stay Feature Explained”, Available online at:—“https://www.youtube.com/watch?v=RpjBNtSjupl”, May 29, 2013, 1 page. |
Filipowicz, Luke, “How to use the Quick Type Keyboard in iOS 8”, available online at <https://www.imore.com/comment/568232>, Oct. 11, 2014, pp. 1-17. |
Finkel et al., “Joint Parsing and Named Entity Recognition”, Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL, Jun. 2009, pp. 326-334. |
“Galaxy S7: How to Adjust Screen Timeout & Lock Screen Timeout”, Available online at:—“https://www.youtube.com/watch?v=n6e1WKUS2ww”, Jun. 9, 2016, 1 page. |
Gomez et al., “Mouth Gesture and Voice Command Based Robot Command Interface”, IEEE International Conference on Robotics and Automation, May 12-17, 2009, pp. 333-338. |
Google Developers, “Voice Search in Your App”, Available online at:—https://www.youtube.com/watch?v=PS1FbB5gWEI, Nov. 12, 2014, 1 page. |
Gruber, Thomas R., et al., U.S. Appl. No. 61/186,414, filed Jun. 12, 2009, titled “System and Method for Semantic Auto-Completion”, 13 pages (Copy Not Attached). |
Guay, Matthew, “Location-Driven Productivity with Task Ave”, available at <http://iphone.appstorm.net/reviews/productivity/location-driven-productivity-with-task-ave/>, Feb. 19, 2011, 7 pages. |
Hashimoto, Yoshiyuki, “Simple Guide for iPhone Siri, Which Can Be Operated with Your Voice”, Shuwa System Co., Ltd., vol. 1, Jul. 5, 2012, pp. 8, 130, 131. |
Headset Button Controller v7.3 APK Full APP Download for Android, Blackberry, iPhone, Jan. 27, 2014, 11 pages. |
Hear voice from Google translate, Available on URL:https://www.youtube.com/watch?v=18AvMhFgD28, Jan. 28, 2011, 1 page. |
“Hey Google: How to Create a Shopping List with Your Google Assistant”, Available online at:-https://www.youtube.com/watch?v=w9NCsElax1Y, May 25, 2018, 1 page. |
“How to Enable Google Assistant on Galaxy S7 and other Android Phones (No Root)”, Available online at:—“https://www.youtube.com/watch?v=HeklQbWyksE”, Mar. 20, 2017, 1 page. |
“How to Use Ok Google Assistant Even Phone is Locked”, Available online at:-“https://www.youtube.com/watch?v=9B_gP4j_SP8”, Mar. 12, 2018, 1 page. |
id3.org, “id3v2.4.0-Frames”, available at <http://id3.org/id3v2.4.0-frames?action=print>, retrieved on Jan. 22, 2015, 41 pages. |
iNEWS and Tech, “How to Use the QuickType Keyboard in iOS 8”, Available online at:—“http://www.inewsandtech.com/how-to-use-the-quicktype-keyboard-in-ios-8/”, Sep. 17, 2014, 6 pages. |
IOS 8 Release, “Quick Type Keyboard on iOS 8 Makes Typing Easier”, Retrieved from the Internet: URL:https://www.youtube.com/watch?v=0CidLR4fhVU, [Retrieved on Nov. 23, 2018], XP054978896, Jun. 3, 2014, 1 page. |
Jawaid et al., “Machine Translation with Significant Word Reordering and Rich Target-Side Morphology”, WDS'11 Proceedings of Contributed Papers, Part I, 2011, pp. 161-166. |
Jiang et al., “A Syllable-based Name Transliteration System”, Proc. of the 2009 Named Entities Workshop, Aug. 7, 2009, pp. 96-99. |
Jonsson et al., “Proximity-based Reminders Using Bluetooth”, 2014 IEEE International Conference on Pervasive Computing and Communications Demonstrations, 2014, pp. 151-153. |
Jouvet et al., “Evaluating Grapheme-to-phoneme Converters in Automatic Speech Recognition Context”, IEEE, 2012, pp. 4821-4824. |
Karn, Ujjwal, “An Intuitive Explanation of Convolutional Neural Networks”, The Data Science Blog, Aug. 11, 2016, 23 pages. |
Kazmucha, Allyson, “How to Send Map Locations Using iMessage”, iMore.com, Available at <http://www.imore.com/how-use-imessage-share-your-location-your-iphone>, Aug. 2, 2012, 6 pages. |
Lewis, Cameron, “Task Ave for iPhone Review”, Mac Life, Available at <http://www.maclife.com/article/reviews/task_ave_iphone_review>, Mar. 3, 2011, 5 pages. |
Liou et al., “Autoencoder for Words”, Neurocomputing, vol. 139, Sep. 2014, pp. 84-96. |
Majerus, Wesley, “Cell Phone Accessibility for your Blind Child”, Retrieved from the Internet <URL:https://web.archive.org/web/20100210001100/https://nfb.org/images/nfb/publications/fr/fr28/3/fr280314.htm>, 2010, pp. 1-5. |
Marketing Land, “Amazon Echo: Play Music”, Online Available at: <https://www.youtube.com/watch?v=A7V5NPbsXi4>, Apr. 27, 2015, 3 pages. |
Mhatre et al., “Donna Interactive Chat-bot acting as a Personal Assistant”, International Journal of Computer Applications (0975-8887), vol. 140, No. 10, Apr. 2016, 6 pages. |
Mikolov et al., “Linguistic Regularities in Continuous Space Word Representations”, Proceedings of NAACL-HLT, Jun. 9-14, 2013, pp. 746-751. |
Morrison, Jonathan, “iPhone 5 Siri Demo”, Online Available at <https://www.youtube.com/watch?v=_wHWwG5lhWc>, Sep. 21, 2012, 3 pages. |
Nakamura, Satoshi, “Overcoming the Language Barrier with Speech Translation Technology, Science & Technology Trends”, Quarterly Review No. 31, Apr. 2009, pp. 36-49. |
Nakazawa et al., “Detection and Labeling of Significant Scenes from TV program based on Twitter Analysis”, Proceedings of the 3rd Forum on Data Engineering and Information Management (DEIM 2011 proceedings), IEICE Data Engineering Technical Group. Available online at: http://db-event.jpn.org/deim2011/proceedings/pdf/f5-6.pdf, Feb. 28, 2011, 10 pages. (See Communication under 37 CFR § 1.98(a) (3)). |
NDTV, “Sony SmartWatch 2 Launched in India for Rs. 14,990”, available at <http://gadgets.ndtv.com/others/news/sony-smartwatch-2-launched-in-india-for-rs-14990-420319>, Sep. 18, 2013, 4 pages. |
Ng, Simon, “Google's Task List Now Comes to Iphone”, SimonBlog, Available at <http://www.simonblog.com/2009/02/04/googles-task-list-now-comes-to-iphone/>, Feb. 4, 2009, 3 pages. |
Nozawa, Naoki et al., “iPhone 4S Perfect Manual”, vol. 1, First Edition, Nov. 11, 2011, 5 pages. (See Communication under 37 CFR § 1.98(a) (3)). |
Office Action received for European Patent Application No. 14707872.9, dated Nov. 29, 2018, 6 pages. |
Okuno et al., “System for Japanese Input Method based on the Internet”, Technical Report of Information Processing Society of Japan, Natural Language Processing, Japan, Information Processing Society of Japan, vol. 2009, No. 36, Mar. 18, 2009, 8 pages. (See Communication under 37 CFR § 1.98(a) (3)). |
Osxdaily, “Get a List of Siri Commands Directly from Siri”, Available at <http://osxdaily.com/2013/02/05/list-siri-commands/>, Feb. 5, 2013, 15 pages. |
Pan et al., “Natural Language Aided Visual Query Building for Complex Data Access”, In proceeding of: Proceedings of the Twenty-Second Conference on Innovative Applications of Artificial Intelligence, XP055114607, Jul. 11, 2010, pp. 1821-1826. |
Pathak et al., “Privacy-preserving Speech Processing: Cryptographic and String-matching Frameworks Show Promise”, In: IEEE signal processing magazine, retrieved from <http://www.merl.com/publications/docs/TR2013-063.pdf>, Feb. 13, 2013, 16 pages. |
Patra et al., “A Kernel-Based Approach for Biomedical Named Entity Recognition”, Scientific World Journal, vol. 2013, 2013, pp. 1-7. |
Pennington et al., “GloVe: Global Vectors for Word Representation”, Proceedings of the Conference on Empirical Methods Natural Language Processing (EMNLP), Oct. 25-29, 2014, pp. 1532-1543. |
Perlow, Jason, “Alexa Loop Mode With Playlist for Sleep Noise”, Online Available at: <https://www.youtube.com/watch?v=nSkSuXziJSg>, Apr. 11, 2016, 3 pages. |
Powell, Josh, “Now You See Me . . . Show/Hide Performance”, available at http://www.learningjguery.com/2010/05/now-you-see-me-showhide-performance, May 4, 2010, 3 pages. |
Rios, Mafe, “New bar search for Facebook”, Youtube, available at “https://www.youtube.com/watch?v=vwgN1WbvCas”, Jul. 19, 2013, 2 pages. |
Routines, “SmartThings Support”, Available online at <https://web.archive.org/web/20151207165701/https://support.smartthings.com/hc/en-us/articles/205380034-Routines>, 2015, 2 pages. |
Samsung Support, “Create a Quick Command in Bixby to Launch Custom Settings by at your Command”, Retrieved from internet: https://www.facebook.com/samsungsupport/videos/10154746303151213, Nov. 13, 2017, 1 page. |
Samsung, “SGH-a885 Series—Portable Quad-Band Mobile Phone-User Manual”, Retrieved from the Internet: URL: “http://web.archive.org/web/20100106113758/http://www.comparecellular.com/images/phones/userguide1896.pdF”, Jan. 1, 2009, 144 pages. |
Seehafer, Brent, “Activate google assistant on Galaxy S7 with screen off”, Available online at:—“https://productforums.google.com/forum/#!topic/websearch/lp3gIGBHLVI”, Mar. 8, 2017, 4 pages. |
Selfridge et al., “Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant”, International Conference on Multimodal Interaction, ACM, Nov. 9, 2015, pp. 381-382. |
“SmartThings +Amazon Echo”, Smartthings Samsung [online], Available online at <https://web.archive.org/web/20160509231428/https://blog.smartthings.com/featured/alexa-turn-on-my-smartthings/>, Aug. 21, 2015, 3 pages. |
Spivack, Nova, “Sneak Preview of Siri—Part Two—Technical Foundations—Interview with Tom Gruber, CTO of Siri”, Online Available at <https://web.archive.org/web/20100114234454/http://www.twine.com/item/12vhy39k4-22m/interview-with-tom-gruber-of-siri>, Jan. 14, 2010, 5 pages. |
Sundaram et al., “Latent Perceptual Mapping with Data-Driven Variable-Length Acoustic Units for Template-Based Speech Recognition”, ICASSP 2012, Mar. 2012, pp. 4125-4128. |
Sundermeyer et al., “From Feedforward to Recurrent LSTM Neural Networks for Language Modeling”, IEEE Transactions to Audio, Speech, and Language Processing, 2015, vol. 23, Mar. 2015, pp. 517-529. |
Sundermeyer et al., “LSTM Neural Networks for Language Modeling”, Interspeech 2012, ISCA's 13 Annual Conference, Sep. 9-13, 2012, pp. 194-197. |
Tanaka, Tatsuo, “Next Generation IT Channel Strategy Through “Experience Technology””, Intellectual Resource Creation, Japan, Nomura Research Institute Ltd. vol. 19, No. 1, Dec. 20, 2010, 17 pages. (See Communication under 37 CFR § 1.98(a) (3)). |
“The world of Virtual Assistants—more SemTech . . . ”, End of Business as Usual—Glenn's External blog, Online Available at <https://web.archive.org/web/20091101840940/http://glennas.wordpress.com/2009/10/17/the-world-of-virtual-assistants-more-semtech/>, Oct. 17, 2009, 5 pages. |
Vodafone Deutschland, “Samsung Galaxy S3 Tastatur Spracheingabe”, Available online at—“https://www.youtube.com/watch?v=6k0d6Gr8uFE”, Aug. 22, 2012, 1 page. |
X.AI, “How it Works”, May 2016, 6 pages. |
Xiang et al., “Correcting Phoneme Recognition Errors in Learning Word Pronunciation through Speech Interaction”, Speech Communication, vol. 55, No. 1, Jan. 1, 2013, pp. 190-203. |
Xu, Yuhong, “Policy optimization of dialogue management in spoken dialogue system for out-of-domain utterances”, 2016 International Conference on Asian Language Processing (IALP), IEEE, Nov. 21, 2016, pp. 10-13. |
Yan et al., “A Scalable Approach to Using DNN-Derived Features in GMM-HMM Based Acoustic Modeling for LVCSR”, InInterspeech, 2013, pp. 104-108. |
Yates, Michael C., “How can I exit Google Assistant after i'm finished with it”, Available online at:—“https://productforums.google.com/forum/#!msg/phone-by-google/faECnR2RJwA/gKNtOkQgAQAJ”, Jan. 11, 2016, 2 pages. |
Young et al., “The Hidden Information State model: A practical framework for POMDP-based spoken dialogue management”, Computer Speech & Language, vol. 24, Issue 2, 2010, pp. 150-174. |
Zangerle et al., “Recommending #-Tag in Twitter”, Proceedings of the Workshop On Semantic Adaptive Social Web, 2011, pp. 1-12. |
Zhong et al., “JustSpeak: Enabling Universal Voice Control on Android”, W4A'14, Proceedings of the 11th Web for All Conference, No. 36, Apr. 7-9, 2014, 8 pages. |
Office Action received for Japanese Patent Application No. 2017-250005, dated Jul. 5, 2019, 4 pages (2 pages of English Translation). |
Notice of Acceptance received for Australian Patent Application No. 2017210578, dated Mar. 13, 2019, 3 pages. |
Decision of the Board of Appeal received for Japanese Patent Application No. 2015-557147, dated Jan. 21, 2019, 27 pages (3 pages of English Translation). |
Office Action received for Indian Patent Application No. 4245/CHENP/2015, dated Feb. 28, 2019, 7 pages. |
Office Action received for Korean Patent Application No. 10-2016-7029691, dated Mar. 28, 2019, 12 pages (6 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Chinese Patent Application No. 201480007349.6, dated Apr. 10, 2019, 10 pages (4 pages of English Translation and 6 pages of Official Copy). |
Office Action received for Korean Patent Application No. 10-2018-7017535, dated Apr. 26, 2019, 7 pages (3 pages of English Translation and 4 pages of Official Copy). |
Office Action received for European Patent Application No. 14707872.9, dated May 29, 2019, 7 pages. |
Office Action received for German Patent Application No. 112014000709.9, dated Sep. 19, 2019, 9 pages (3 pages of English Translation and 6 pages of Official Copy). |
Result of Consultation received for European Patent Application No. 14707872.9, dated Nov. 25, 2019, 3 pages. |
Office Action received for Chinese Patent Application No. 201480007349.6, dated Aug. 26, 2019, 10 pages (4 pages of English Translation and 6 pages of Official Copy). |
Notice of Allowance received for Japanese Patent Application No. 2017-250005, dated Aug. 2, 2019, 4 pages (1 page of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2016-7029691, dated Feb. 26, 2020, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for Korean Patent Application No. 10-2018-7017535, dated Feb. 26, 2020, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Number | Date | Country | |
---|---|---|---|
20190122692 A1 | Apr 2019 | US |
Number | Date | Country | |
---|---|---|---|
61762260 | Feb 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14175864 | Feb 2014 | US |
Child | 16222249 | | US