The present invention relates to a method and apparatus for recognition of human speech, and more particularly, to a method and apparatus to distinguish user speech which is the desired focus of machine-interpretation from extraneous background speech.
In modern production environments, it is increasingly desirable for human operators to be able to record data and to control electronic devices in a “hands-free” mode, typically via speech control. This typically entails the use of portable electronic voice-processing devices which can detect human speech, interpret the speech, and process the speech to recognize words, to record data, and/or to control nearby electronic systems.
Voice-driven systems typically include at least one microphone and at least one processor-based device (e.g., computer system) which is operated in response to human voice or spoken input, for instance spoken commands and/or spoken information.
There are numerous applications in which voice-driven systems may be employed. For instance, there are many applications where it is advantageous for a user to have their hands free to perform tasks other than operating a keyboard, keypad, mouse, trackball or other user input device. An example of one such application is a warehouse, where a user may need to handle items such as boxes while concurrently interacting with a processor-based device. Another example application is a courier or delivery person, who may be handling parcels or driving a vehicle while concurrently interacting with a processor-based device. An example of a further such application is a medical care provider, who may be using their hands during the performance of therapeutic or diagnostic medical services, while concurrently interacting with a processor-based device. There are of course numerous other examples of applications.
In many of these exemplary applications it is also advantageous or even necessary for the user to be mobile. For applications in which mobility is desirable, the user may wear a headset and a portable processor-based device (referred to below in this document as the speech recognition device 106, 300, or SRD). The headset typically includes at least one loud-speaker and/or microphone. The portable processor-based device typically takes the form of a wearable computer system. The headset is communicatively coupled to the portable processor-based device, for instance via a coiled wire or a wireless connection, for example, a Bluetooth connection. In some embodiments, the portable processor-based device may be incorporated directly into the headset.
In some applications, the portable processor-based device may in turn be communicatively coupled to a host or backend computer system (e.g., server computer). In many applications, two or more portable processor-based devices (clients) may be communicatively coupled to the host or backend computer system/server.
The server may function as a centralized computer system providing computing and data-processing functions to various users via respective portable processor-based devices and headsets. Such may, for example, be advantageously employed in an inventory management system in which a central or server computer system performs tracking and management; a plurality of users each wearing respective portable computer systems and headsets interface with the central or server computer system.
This client (headset)/server approach allows the user(s) to receive audible instructions and/or information from the server of the voice driven system. For instance, the user may receive voice instructions from the server; ask questions of the server; provide the server with reports on the progress of assigned tasks; report working conditions, such as inventory shortages or damaged goods or parcels; and/or receive directions, such as location information specifying locations for picking up or delivering goods.
Background Sounds
Voice driven systems are often utilized in noisy environments where various extraneous sounds interfere with voice or spoken input. For example, in a warehouse or logistics center environment, extraneous sounds are often prevalent, including, for instance: public address announcements; conversations from persons whose speech is not intended as input (that is, persons other than the user of the voice driven system); the movement of boxes or pallets; and noise from the operation of lift vehicles (e.g., forklifts), motors, compressors, and other nearby machinery. To be effective, voice driven systems need to distinguish between voice or speech intended as input and extraneous background sounds, including unwanted voices, which may otherwise be erroneously interpreted as desired speech from a headset-wearing user.
Sounds or noise associated with public address (PA) systems are particularly difficult to address. Public address systems are intentionally loud, so that announcements can be heard above other extraneous noise in the ambient environment. Therefore, it is very likely that a headset microphone will pick up such sounds. Additionally, public address system announcements are not unintelligible noise, but rather are typically human voice or speech, thereby having many of the same aural qualities as voice or spoken input.
Therefore, there exists a need for a system and method for addressing extraneous sounds including background speech and PA system speech, in order to prevent those extraneous sounds from interfering with the desired operation of the voice driven systems.
Accordingly, in one aspect, the present system and method solves the problem by preparing, in advance of field-use, a voice-data model which is created in a training environment, where the training environment exhibits both desired user speech and unwanted background sounds, including unwanted speech from persons other than a user, and also unwanted speech from a PA system.
The speech recognition system is trained or otherwise programmed to identify wanted user speech which may be spoken concurrently with the background sounds. In an embodiment, during the pre-field-use phase the training or programming is accomplished in part by having persons who are training listeners audit the pre-recorded sounds, and having the training listeners identify the desired user speech, a process referred to as “tagging”. Tagging may also entail having the training listeners identify background speech from persons other than the user, background speech from PA system sounds, and other environmental noises. In an embodiment, during the pre-field-use phase the training or programming is further accomplished in part by training a processor-based learning system to duplicate the assessments made by the human listeners.
In the following description, certain specific details are set forth in order to provide a thorough understanding of various embodiments. However, one skilled in the art will understand that the invention may be practiced without these details. In other instances, well-known structures associated with voice recognition systems and speech recognition devices have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.
Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open sense, that is as “including, but not limited to.”
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The headings provided herein are for convenience only and do not interpret the scope or meaning of the claimed invention.
Electronic System for Voice Processing
The present system and method embraces electronic devices designed to interpret human speech and language, and to operate in response to human speech, also known as voice-driven systems, speech-driven systems, or spoken-language recognition systems.
In particular, the speech driven system 102 includes a headset 104 and a processor-based speech recognition device 106. In use, the user typically wears the headset 104, and optionally wears the processor-based speech recognition device 106. The processor-based speech recognition device 106 is communicatively coupled, either directly or indirectly (that is, via either wired or wireless coupling), with the headset 104. For example, the processor-based speech recognition device 106 and headset 104 may be wirelessly communicatively coupled via one or more radios (e.g., transmitters, receivers, transceivers) as indicated by radio frequency signal 108. Alternatively, the processor-based speech recognition device 106 and headset 104 may be communicatively coupled via one or more cables, for instance one or more wire or optical cables (not shown).
Optionally, the speech driven system 102 may also include one or more backend computer systems 110 (only one shown), which may include or be communicatively coupled to one or more data stores stored on one or more non-transitory computer- or processor-readable media 111. The backend computer system(s) 110 is or are communicatively coupled to one or more processor-based speech recognition devices 106. For example, a wireless networking system may include one or more antennas 112 (only one shown) positioned about a work environment. Antenna 112 can provide wireless communications (for example, by radio frequency signal 109) between the one or more processor-based speech recognition devices 106 and the one or more backend computer system(s) 110.
The user 100 may engage in various activities which may require the use of the user's hands, for instance to handle goods or packages 114. Alternatively, the activities may not require use of the user's hands; however, hands-free operation may be more comfortable or otherwise advantageous to the user 100.
The headset 104 may include a headband 116, one or more loud-speakers or headphones 118 (only one visible in
The circuitry (not shown in
The processor-based speech recognition device 106 may be portable or stationary. For example, the processor-based speech recognition device 106 may be worn by the user 100, for instance on a belt as illustrated in
Alternatively, the processor-based speech recognition device 106 may be manually carried or otherwise transported, for instance on a vehicle (e.g., fork lift, tug). Alternatively or additionally, the processor-based speech recognition device 106 may be stationary. Such implementations may employ a plurality of antennas positioned throughout a work environment and/or sufficiently more powerful communications devices, for instance WiFi radios.
The circuitry (not shown in
The headset 104 and processor-based speech recognition device 106 permit various users 100 to communicate with one or more backend computer systems 110 (e.g., server computer systems). In use, the processor-based speech recognition device 106 receives digital instructions from the backend computer system 110 and converts those instructions to audio, which is provided to the user 100 via loud-speakers 118 of the headset 104. The user 100 provides spoken input via the microphone 120 of the headset, which the processor-based speech recognition device 106 may convert to a digital format (e.g., words, text, or encoding symbolic of words and text) to be transferred to the backend computer system 110.
The backend computer system(s) 110 may be part of a larger system for sending and receiving information regarding the activities and tasks to be performed by the user(s) 100. The backend computer system(s) 110 may execute one or more system software routines, programs or packages for handling particular tasks. Tasks may, for example, include tasks related to inventory and warehouse management.
In an alternative embodiment of the present system and method, the backend computer system(s) 110 may implement some, or all, of the functionality otherwise described herein as being associated with the processor-based speech recognition device 106.
The backend computer system/server 110 may be any targeted computer or automated device, and may be located anywhere with respect to the user and the various components. For instance, the backend computer system 110 will typically be located remotely from the user, such as in another room or facility.
However, the backend computer system 110 may be located locally with the user, for instance carried or worn by the user or carried by a vehicle operated by the user. In some implementations, the backend computer system 110 may be combined with the processor-based speech recognition device 106.
In an alternative embodiment, the headset 104 and the speech recognition device (SRD) 106 may be connected and may communicate via a wired connection, such as a coiled cable.
Headset
The headset 200 includes a microphone 202, and may include one or more secondary microphones (not shown). The microphone 202 is operable as a transducer to convert acoustic energy (e.g., sounds, such as voice or other sounds) to analog signals (e.g., voltages, currents) that have respective signal levels. The headset 200 preferably includes one or more loudspeakers 206a, 206b (two shown, collectively 206). Each of the loud-speakers 206 is operable as a transducer to convert analog signals (e.g., voltages, currents) that have respective signal levels into acoustic energy (e.g., sounds, such as recorded or artificially generated spoken syllables, words or phrases or utterances).
The microphone(s) 202 is (are) positioned or configured (e.g., directional and oriented) to primarily capture speech or utterances by the user 100. However, the microphone 202 may also capture background speech from other users in the work environment, as well as background speech from PA systems. In this document, background speech will be understood to include both speech from persons other than the user 100 and Public Address (PA) system speech.
The microphone 202 may be positioned such that when the headset 104 (
With respect to PA systems, background speech from a PA system may be amplified, and so may be picked up by the microphone 202 as being approximately as loud as the user speech. However, due to various factors, such as emanation from a remote loud-speaker, frequency band limitations of the PA system, echoes, and other effects, remote speech from a PA system may have different acoustic qualities at the microphone 202, as compared to the acoustic qualities of user speech.
In other words, user speech or other utterances by the user 100 are likely to have different acoustic signatures than background speech from other persons at some distance from the user 100, and also different acoustic signatures from sounds from a PA system. In one embodiment, the present system and method may rely, in part or in whole, on signal processing techniques, as applied to such acoustic differences, to distinguish user speech from background speech.
In an alternative embodiment, some implementations of the present system and method may employ additional secondary microphones (not shown), for example two or more secondary microphones, to help distinguish user speech from background speech.
The headset 200 may include one or more audio coder/decoders (CODECs). For example, the headset 200 may include an audio CODEC 208 coupled to the microphone(s) 202 to process analog signals from the microphone 202 and produce digital signals representative of the analog signals. The CODEC 208 or another audio CODEC (not shown) may be coupled to the one or more loud-speakers 206 to produce analog drive signals from digital signals in order to drive the loudspeakers 206. Suitable audio CODECs may for example include the audio CODEC commercially available from Philips under the identifier UDA 1341 and other similar audio CODECs.
The headset 200 may include one or more buffers 210. The buffer(s) 210 may temporarily store or hold signals. The buffer 210 is illustrated as positioned relatively downstream of the CODEC 208 in a signal flow from the microphone 202.
The headset 200 includes a control subsystem 212. The control subsystem 212 may, for example, include one or more controllers 214, one or more sets of companion circuitry 216, and one or more non-transitory computer- or processor-readable storage media such as non-volatile memory 218 and volatile memory 220.
The controller(s) 214 may take a variety of forms, for instance one or more microcontrollers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), programmable gate arrays (PGAs), graphics processing units (GPUs) and/or programmable logic controllers (PLCs). The controller(s) 214 may, for example, take the form of a processor commercially available from CSR under the identifier BlueCore5 Multimedia. The BlueCore5 Multimedia does not require companion circuitry 216. Alternatively, the controller(s) 214 may take the form of a processor commercially available from Intel under the identifier SA-1110. Optional companion circuitry 216 may take the form of one or more digital, or optionally analog, circuits, which may, or may not, be in the form of one or more integrated circuits. The companion circuitry 216 may, for example, take the form of a companion chip commercially available from Intel under the identifier SA-1111. The controller(s) 214 may function as a main processor, with the companion circuitry functioning as a co-processor to handle specific tasks. In some implementations, the companion circuitry 216 may take the form of one or more DSPs or GPUs.
Non-volatile memory 218 may take a variety of forms, for example one or more read only memories (ROMs), one or more writeable memories, for instance EEPROM, and/or one or more FLASH memories. The volatile memory 220 may take a variety of forms, for example one or more random access memories (RAM), including static random access memory (SRAM) and/or dynamic random access memory (DRAM), for instance synchronous DRAM (SDRAM). The various controllers 214, companion circuits 216, non-volatile memories 218 and/or volatile memories 220 may be communicatively coupled via one or more buses 222 (only one shown), for instance instructions buses, data buses, address buses, power buses, etc.
The controllers 214 and/or companion circuitry 216 may execute instructions stored in or by the non-volatile memories 218 and/or volatile memories 220. The controllers 214 and/or companion circuitry 216 may employ data, values, or other information stored in or by the volatile memories 220 and/or nonvolatile memories 218.
In an embodiment of the present system and method, the control subsystem 212 may incorporate audio filtering circuitry or implement audio filtering by way of a general purpose processor which processes suitable instructions stored in non-volatile memory 218 or volatile memory 220. Audio filtering may, for example, implement signal processing or data comparisons as described further herein to distinguish acceptable user speech from background speech. Audio filtering may rely upon a comparison of frames of speech provided from microphone 202, via CODEC 208 and buffer 210, with previously-established speech samples stored in non-volatile memory 218 or volatile memory 220.
In an alternative embodiment of the present system and method, some or all audio filtering, speech-processing, and speech-comparisons may instead be accomplished via circuitry on the speech recognition device 106 (
As described further herein below, in an embodiment of the present system and method, the sound signal from the microphone 202 is passed to the processor-based speech recognition device 106 (
The headset 200 optionally includes one or more radios 224 (only one shown) and associated antennas 226 (only one shown) operable to wirelessly communicatively couple the headset 200 to the processor-based speech recognition device 106 and/or backend computer system 110. The radio 224 and antenna 226 may take a variety of forms, for example a wireless transmitter, wireless receiver, or wireless transceiver. In an embodiment where the headset 104, 200 and SRD 106, 300 are connected by a wired connection, radio 224 may not be required, or may be required only to communicate with the backend computer system 110.
The radio 224 and antenna 226 may, for instance, be a radio suitable for short range communications, for example compatible or compliant with the Bluetooth protocol, which allows bi-directional communications (e.g., transmit, receive). Alternatively, the radio 224 and antenna 226 may take other forms, such as those compliant with one or more variants of the IEEE 802.11 protocols (e.g., 802.11n protocol, 802.11ac protocol). The radio 224 and antenna 226 may, for example, take the form of an RF communications card, received via a connector, for instance a PCMCIA slot, to couple the RF communications card to the controller 214. RF communications cards are commercially available from a large number of vendors. The range of the radio 224 and antenna 226 should be sufficient to ensure wireless communications in the expected work environment, for instance wireless communications with a processor-based speech recognition device 106 worn by the same user who wears the headset 200.
In an alternative embodiment, some or all of the electronic circuitry described above as being part of the headset 104, 200 may instead be placed on the SRD 106, 300. The circuitry of the SRD 106, 300 is discussed further immediately below.
Processor-Based Speech Recognition Device
The processor-based speech recognition device 300 may include one or more controllers, for example a microprocessor 302 and DSP 304. While illustrated as a microprocessor 302 and a DSP 304, the controller(s) may take a variety of forms, for instance one or more microcontrollers, ASICs, PGAs, GPUs, and/or PLCs.
The processor-based speech recognition device 300 may include one or more non-transitory computer- or processor-readable storage media such as non-volatile memory 306 and volatile memory 308. Non-volatile memory 306 may take a variety of forms, for example one or more read-only memories (ROMs), one or more writeable memories, for instance EEPROM, and/or one or more FLASH memories. The volatile memory 308 may take a variety of forms, for example one or more random access memories (RAM) including static and/or dynamic random access memories. The various controllers 302, 304 and memories 306, 308 may be communicatively coupled via one or more buses 310 (only one shown), for instance instructions buses, data buses, address buses, power buses, etc.
The controllers 302, 304 may execute instructions stored in or by the memories 306, 308. The controllers 302, 304 may employ data, values, or other information stored in or by the memories 306, 308. The memories 306, 308 may, for example, store instructions which implement the methods described further below herein to distinguish user speech from background speech, as in exemplary methods 400 and 600 (see
The processor-based speech recognition device 300 optionally includes one or more radios 312 and associated antennas 314 (only one shown) operable to wirelessly communicatively couple the processor-based speech recognition device 300, 106 to the headset 200, 104. Such radio 312 and antenna 314 may be particularly suited to relatively short-range communications (e.g., 1 meter, 3 meters, 10 meters). The radio 312 and antenna 314 may take a variety of forms, for example a wireless transmitter, wireless receiver, or wireless transceiver. The radio 312 and antenna 314 may, for instance, be a radio suitable for short range communications, for example compatible or compliant with the Bluetooth protocol. The range of the radio 312 and antenna 314 should be sufficient to ensure wireless communications in the expected work environment, for instance wireless communications with a processor-based headset 104, 200.
The processor-based speech recognition device 300 optionally includes one or more radios 316 and associated antennas 318 (only one shown) operable to wirelessly communicatively couple the processor-based speech recognition device 300, 106 to the backend computer system/server 110 (
General Speech Analysis Considerations
Note that the terms frames and fragments are used interchangeably throughout this specification to indicate information associated with a segment of audio. Also note that frames or fragments for the purposes of classification into user speech and background speech do not necessarily need to correlate one to one to frames or fragments generated for purposes of feature generation for other aspects of speech recognition, e.g., speech detection, training, decoding, or general background noise removal. They may have many different parameters, such as using different frame rates, amounts of overlap, number of samples, etc.
A speech recognition system attempts to map spoken human speech to known language vocabulary. To do so, a voice system will, among other operational elements, typically compare (i) received real-time speech against (ii) a stored audio template, also referred to as an audio characterization model ACM, of previously captured/analyzed voice samples. Such an audio template is derived from a collection of voice training samples and other training samples referred to, for the present system and method, as the training corpus TC.
In general, speech recognition involves several stages. Presented here is an exemplary general process for real-time speech interpretation.
(1) Conversion of received sound to digital signal—Audio waves emanating from a human speaker, as well as nearby sounds from other sources, are converted to an analog electrical signal. This may be done for example by a microphone 120, 202 in a headset 104, 200. The analog electrical signal is then digitized, i.e., converted to binary 1's and 0's. This may be accomplished for example by the CODEC 208 of the headset 104, 200, or by the processor 302 of the speech recognition device 106, 300.
(2) Division of digitized sound into frames—The digitized sound is divided into frames, that is, segments of suitable length for analysis to identify speech. The length of segments may be geared to identify specific phonemes (sound units, such as a vowel sound or a consonant sound), or words or phrases.
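By way of illustration only, the following Python sketch shows one way such framing may be performed; the 25 ms frame length, 10 ms hop, and 16 kHz sample rate are illustrative assumptions rather than values required by the present system and method.

```python
import numpy as np

def split_into_frames(samples, sample_rate, frame_ms=25, hop_ms=10):
    """Split a 1-D array of digitized audio into overlapping frames."""
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per frame
    hop_len = int(sample_rate * hop_ms / 1000)       # samples between frame starts
    frames = [samples[start:start + frame_len]
              for start in range(0, len(samples) - frame_len + 1, hop_len)]
    return np.array(frames)

# Example: one second of a synthetic 440 Hz tone sampled at 16 kHz
rate = 16000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t)
frames = split_into_frames(signal, rate)
print(frames.shape)   # (number_of_frames, samples_per_frame)
```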
NOTE: Further processing stages identified immediately below may be performed, for example, by the microprocessor 302 or digital signal processor 304 of the speech recognition device 106, 300, possibly based on instructions stored in non-volatile memory 306 or volatile memory 308. In an alternative embodiment, these tasks may be performed in whole or part by elements of headset 104, 200, or server 110.
(3) Conversion to frequency domain—The frames of the received, digitized audio signal are typically converted from the time domain to the frequency domain. This is accomplished for example via a Fourier transform or Fast Fourier transform, or similar processing.
(4) Conversion to secondary representation (state vectors)—In an embodiment, a frequency domain representation may be converted to other mathematical representations better suited for further processing. For example, while the frequency domain representation may be substantially continuous, various forms of concise representations may encapsulate the essential or key elements of the frequency domain representation. For example, amplitudes at various specific frequencies may be captured, or amplitudes of only the peak frequencies may be captured. Various other mathematical encapsulations are possible as well. The resulting mathematical characterization of the audio frames is sometimes referred to as “state vectors”.
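By way of illustration only, the following Python sketch shows one minimal way a frame might be reduced to such a concise representation: the frame is windowed, converted to the frequency domain (stage (3) above), and reduced to the locations and amplitudes of its strongest spectral peaks. The Hamming window and the choice of eight peaks are illustrative assumptions.

```python
import numpy as np

def frame_to_state_vector(frame, num_peaks=8):
    """Reduce one audio frame to a concise vector of peak locations and amplitudes."""
    windowed = frame * np.hamming(len(frame))      # taper frame edges before the FFT
    spectrum = np.abs(np.fft.rfft(windowed))       # magnitude spectrum (stage (3))
    peak_bins = np.sort(np.argsort(spectrum)[-num_peaks:])   # strongest frequency bins
    # Concise representation: bin indices followed by their amplitudes
    return np.concatenate([peak_bins.astype(float), spectrum[peak_bins]])

# Example: a 25 ms frame (400 samples at 16 kHz) containing a 440 Hz tone
frame = np.sin(2 * np.pi * 440 * np.arange(400) / 16000)
print(frame_to_state_vector(frame))
```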
(5) Normalizations and other supplemental signal processing—One of the challenges inherent in voice recognition is that human voices differ in their harmonics and speech patterns; for example, the exact same word spoken by two different persons may sound dramatically different in a variety of respects, such as pitch, loudness, and duration, as well as variations due to age, accents, etc. To help compensate for this, voice systems typically attempt to normalize diverse samples of the same speech to similar mathematical representations. Thus, normalizations attempt to ensure that, for example, human vowel sounds (such as “ah”, “eh”, or “oh”) coming from different speakers will all have a substantially similar mathematical representation, common to all speakers, during processing. The process of converting digitized speech samples from different speakers to a partially or substantially similar form is referred to as “normalization.” A variety of established methods for this are known in the art.
In embodiments of the present system and method, one exemplary method of normalization is Vocal Tract Length Normalization (VTLN), which applies compensations for the varied pitches of the human voice (including, but not limited to, the typical differences between male and female voices). In alternative embodiments of the present system and method, another system of normalization which may be employed is Maximum Likelihood Linear Regression (MLLR), which adapts parameters within the stored template data to be a closer match to a currently received sound signal.
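By way of illustration only, the following Python sketch conveys the basic idea behind VTLN-style normalization: the frequency axis of a magnitude spectrum is rescaled by a speaker-specific warp factor so that spectra from speakers with different vocal tract lengths align more closely. Practical VTLN implementations typically use piecewise-linear or bilinear warping and estimate the warp factor by maximum likelihood; the simple linear warp and the example factor below are assumptions made for illustration.

```python
import numpy as np

def warp_spectrum(spectrum, alpha):
    """Rescale the frequency axis of a magnitude spectrum by a warp factor alpha."""
    bins = np.arange(len(spectrum))
    warped_bins = bins / alpha            # where each original bin lands after warping
    # Resample the warped spectrum back onto the original bin positions
    return np.interp(bins, warped_bins, spectrum, left=0.0, right=0.0)

# Example: a lone spectral peak at bin 100 (e.g., a formant sitting roughly 10% too high)
spectrum = np.zeros(257)
spectrum[100] = 1.0
normalized = warp_spectrum(spectrum, alpha=1.1)
print(np.argmax(spectrum), np.argmax(normalized))   # peak moves from bin 100 down to about bin 91
```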
Other signal conversions may be employed as well at various stages. For example, various frequency bands may be either boosted or suppressed.
(6) Comparison of received voice signal against the template—The processed, received voice signal is compared against a template of pre-processed, stored voice signals also referred to as an audio characterization model ACM. A favorable comparison is indicative of a user voice, which is accepted by the speech driven system 102; an unfavorable comparison is indicative of a background voice (or possibly a user voice which is corrupted by extraneous background sounds), and which is thereby rejected by the voice driven system 102.
Audio Characterization Model and Training Process
The audio characterization model ACM typically includes stored mathematical representations of human voices expressing certain words, for example storing the state vectors described above. The audio characterization model ACM also contains data which matches the stored audio signals to specific textual representations, i.e., textual transcriptions of the spoken words. The audio signal representations (state vectors) and textual transcriptions may be vowel or consonant sounds, whole words, phrases or sentence fragments, or even whole sentences. The comparison discussed above determines if the received voice signal is a match for a voice signal in the audio characterization model ACM (the stored audio template).
In an embodiment of the present system and method, a training corpus is prepared during a training phase which occurs in time prior to the release/use of the speech driven system 102 for field-use in factories, warehouses, or other industrial environments. The ACM (that is, the template) is prepared from the training corpus. Thus the audio characterization model ACM may be understood in part as a preset or pre-determined representation of correlations between audio signal representations (state vectors, etc.) and associated text, such as syllables, words, and/or phrases.
Training environment: In an embodiment of the present system and method, the audio vocabulary of the training corpus is initially recorded in a training environment which is the same as, or which mimics, an industrial environment in which the speech recognition device 106, 300 may be used. In this way, the audio samples in the training corpus are likely to be representative of audio samples which will be obtained from actual device users 100 during field operations. For example, if the speech recognition device 106, 300 is to be used in factory and warehouse settings, then the audio samples collected for training purposes may be collected in an actual factory or warehouse setting, or an environment designed to mimic such settings.
In one embodiment, the use of a field-realistic setting to record training sounds includes an environment which may have one or more of the following audio or acoustic aspects:
It will be understood by persons skilled in the art that, as detected by the microphone 120, background voices (for example, those from a PA system or from roving persons) will have audio qualities which are distinctive from the audio qualities of a user 100 whose mouth is in immediate proximity to the microphone 120. Physiologically-based differences in the voices (between the user 100 and roving persons) also result in audio quality differences. It is a feature of the present system and method to distinguish a user voice from a background voice emitted from a PA system or from a roving person.
In an alternative embodiment, a training corpus may be obtained in an audio environment which is not the same as the field environment, for example, in a sound studio.
Training vocabulary: In an embodiment of the present system and method, the speech recognition device 106, 300 may be expected to be used principally in conjunction with a specific or well-defined vocabulary of terms. For example, it may be anticipated that users of the speech recognition device will principally speak terms associated with certain manufacturing processes or with certain warehouse procedures.
For example, the vocabulary may entail the use of digits or numbers, the names of certain specific procedures, and/or certain specific signal words or confirmation words for known tasks. In an embodiment, and in such cases, the vocabulary for the training corpus (and so, ultimately, for the audio characterization model ACM) may be principally confined to words, terms, or phrases which are expected/anticipated to be used by the users 100 of the speech recognition device 106, 300.
In an alternative embodiment, the vocabulary for the training corpus may be a substantially more extended vocabulary, including terms and phrases of broader general usage apart from the most commonly expected terms for the particular field environment.
“Training users” and generalized user audio training: In an embodiment of the present system and method, the training corpus is representative of selected word sounds or word phrases, as they may potentially be spoken by many different individuals. This may include individuals of different genders, different ages, different ethnic groups, persons with varying accents, and in general people whose widely varying physiologies may result in a broad array of distinctive vocal qualities, even when voicing the same word or phrase.
In an embodiment, this may entail that during the training phase, multiple different persons, referred to here as training users, are employed to help create the training corpus. In an embodiment, each such person (that is, each training user) is present in the training environment (not necessarily at the same time). It is noted that, while the training users could be the same as some people who will use the speech driven system 102 in the field, more typically the training users are not actual users 100. In an embodiment of the present system and method, the training users may be selected to represent various typical users or generic users 100. In an alternative embodiment, the training users may be selected to be representative of certain expected sub-populations of typical users 100, for example male users or female users.
During training, each training user dons a headset 104 with microphone 120 (see
In this way, and for a single word or phoneme (e.g., “one”, “two”, “confirmed”, “stored”, etc.), multiple redundant samples may be gathered from each training user. Each such audio sample may sometimes be collected with just the word and no background sounds, and at other times with varying elements of background sounds (PA sounds, other roving speakers, machine sounds) from the training environment. In addition, the same training samples, provided by the multiple different training speakers, result in redundancy in terms of having multiple voice samples of the same text.
Combined user voices and background sounds: As will be apparent from the above description, the collected voice samples from training users may include background sounds. In an embodiment of the present system and method, training users may be deliberately directed to utter some voice samples when little or no background sounds are present; and to speak other voice samples (including possibly redundant training vocabulary) when background sounds are present. As a result, the training corpus may include the same training vocabulary both with and without background sounds.
Digitization and normalization of the training voice samples: After collection of the voice samples in the training environment, the voice samples are digitized and combined into the training corpus, for example in the form of their raw audio spectrum. The audio characterization model ACM may contain the same audio samples in compressed forms, for example in the form of state vectors, or in other signal representations.
The process of integrating the audio samples into the audio characterization model ACM may include various forms of mathematical processing of the samples such as vocal tract length normalization (VTLN) and/or maximum likelihood linear regression (MLLR). In an embodiment of the present system and method, the result is that within the audio characterization model ACM, the digitized samples of a single training word (e.g., “one,” “two,” “three”, etc.) may be normalized to represent a single, standardized user voice. In an alternative embodiment, within the training corpus a digitized sample of a single training word may be given multiple digital representations for multiple types of voices (for example, one representation for a generic female voice and one representation for a generic male voice).
In an embodiment of the present system and method, the audio characterization model ACM may include both one or more discrete samples of a given training word without background sounds; and one or more samples of the given training word with background sounds. In an alternative embodiment, the training corpus combines all instances of a training word into a single sample.
Transcribing and Marking the Training Corpus: In an embodiment of the present system and method, the digitized training corpus must be transcribed and tagged. This process entails having a training listener (or multiple training listeners) listen to the corpus. Each training listener is assigned to transcribe (via a keyboard action or mouse interaction with a computer, or similar) recognizable words or phrases, such as by typing the text of the recognizable words or phrases.
Further pre-field-use processing then includes combining the transcribed text digitally into the audio characterization model ACM. In this way, the finalized audio characterization model ACM includes both digital representations of recognizable audio; and, along with the digital representations, text of the associated word(s).
Acceptable and Unacceptable Articulations of Words: In addition, the training listener may be tasked to provide additional flags for sounds within the training corpus. This is also referred to as tagging. The present system and method pertains to distinguishing speech of a user of a voice recognition system from other speech which is background speech. In an embodiment of the present system and method, and during the training process, the training listener may flag (tag):
Finalized Audio Characterization Model ACM: The finalized audio characterization model ACM includes multiple elements, which may include for example and without limitation:
The digital representations of training audio may include representations of user speech samples which were detected along with simultaneous background sounds, such as PA system voices, roving person voices, and other background sounds. These digital representations may therefore be indicative of user speech combined with expected background sounds for the industrial environment.
Audio Feature Extraction and Quantization: Sound Characterizations, Audio Characterization Models and Rejection Threshold: In an embodiment of the present system and method, a further stage of the pre-field-use process includes establishing a sound characterization for each speech audio sample in the audio characterization model. The sound characterization is indicative of a standardized sound quality of each audio sample, and in an embodiment may be derived via one or more mathematical algorithms from the spectrum of the audio sample. For example, the sound characterization may be based on a VTLN of each speech sample. In an alternative embodiment, the sound characterization may be based on an MLLR of each speech sample. The collection of sound characterizations and related threshold data (discussed below) constitute the audio characterization model ACM for the audio environment.
In an alternative embodiment, the sound characterization may be based on one or more formants, such as the lower order (1st, 2nd and/or 3rd) formants, of each speech sample; it may be based on raw values of the formants, normalized values, spacing between formants, or other related calculations. (Speech formants are the spectral peaks of a sound and/or the resonances associated with those spectral peaks.)
In an alternative embodiment, the sound characterization for each speech audio sample is not determined during pre-field-use processing; rather, spectral data is stored directly in the audio characterization model, and the sound characterizations in the model are calculated by the speech recognition device 106 during run-time, that is, during field use.
In an embodiment of the present system and method, a final stage of the pre-field-use process may include establishing a rejection threshold. In an embodiment, the rejection threshold may be a specific mathematical value which distinguishes acceptable user speech from unacceptable user speech.
In an embodiment of the present system and method employing a neural network or other trained learning system, a final stage of the pre-field-use process may entail classifying the audio as one of “User Speech”, “Background Speech”, or “PA Speech”, with the possibility of also including an “Environment Noise” classification.
In an embodiment of the present system and method, in field use, received vocalizations in the field (for example, in a factory or warehouse) may be processed by either headset 104, 200 or by speech recognition device 106, 300 to obtain a real-time sound characterization of the received vocalization. The sound characterization for the received speech may be compared against a sound characterization stored in the audio characterization model ACM. If the difference between the two values is less than the rejection threshold, the received vocalization is construed to be user speech, and is accepted. If the difference between the two values is greater than the rejection threshold, the received vocalization is construed to not be user speech and is rejected.
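By way of illustration only, the following Python sketch shows the accept/reject decision reduced to such a comparison; the Euclidean distance measure, the example characterization vectors, and the numeric threshold are illustrative assumptions, as an actual deployment would use whatever sound characterizations and rejection threshold were established during pre-field-use processing.

```python
import numpy as np

def accept_vocalization(received, stored, rejection_threshold):
    """Accept the received vocalization as user speech if it is close enough to the model."""
    difference = np.linalg.norm(np.asarray(received) - np.asarray(stored))
    return difference < rejection_threshold      # True -> user speech, False -> rejected

# Hypothetical sound characterizations and rejection threshold
stored_characterization = [0.42, 0.17, 0.88]
received_user = [0.40, 0.20, 0.85]
received_background = [0.10, 0.75, 0.30]
rejection_threshold = 0.2

print(accept_vocalization(received_user, stored_characterization, rejection_threshold))        # True
print(accept_vocalization(received_background, stored_characterization, rejection_threshold))  # False
```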
In an embodiment of the present system and method, for speech in the audio characterization model ACM, explicit sound characterization values and an explicit rejection threshold may be established during the pre-field-use processing.
Implicit Audio Characterizations Models, Implicit Rejection Threshold and Learning System Training: In an alternative embodiment of the present system and method, implicit values are used to characterize the training vocalizations, and for the rejection threshold.
In one exemplary embodiment, a machine learning system is trained to distinguish acceptable user speech from user speech that is not acceptable due to excessive background noise. The learning system may be hardware or software-based, and may include for example and without limitation: a neural network system, a support vector machine, or an inductive logic system. Other machine learning systems may be employed as well.
In one embodiment of the present system and method, machine learning such as neural network training may occur concurrently with transcription (as discussed above). In such an embodiment, the human trainer may be the same person or persons as the training listener. In an alternative embodiment, neural network training or other learning system training may be a separate, pre-field-use process.
In general, machine learning entails presenting the learning system with data samples, and conclusions based on the data samples. The learning system then defines rules or other data structures which can substantially reproduce the same conclusions based on the same data samples.
For example, a neural network system may be presented with the training corpus, and asked to classify a newly presented voice sample against the training corpus as being either acceptable or unacceptable. The training process relies on a human trainer to define, in fact, the correct assessment (for example, the speech as being acceptable or unacceptable).
The machine learning system generates a provisional hypothesis that the newly presented voice sample is either acceptable or unacceptable. The machine learning system presents the hypothesis (acceptable or not). The human trainer provides feedback to the learning system, either confirming the hypothesis (for example, that a voice sample which was predicted as being acceptable is in fact acceptable), or rejecting the hypothesis (for example, indicating that a voice sample which was predicted as being acceptable was, in fact, not acceptable).
Responsive to the feedback from the human trainer, the learning system modifies its internal data structures according to a suitable training/learning algorithm. For example, in the case of a neural network, the learning system modifies adaptive weights of neural links. Over time, with enough training, the result is a network which can substantially reproduce the desired outcomes as defined by the human trainer. For example, the learning system learns to distinguish acceptable user speech from unacceptable user speech, as it would be determined by the human trainer.
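By way of illustration only, the following Python sketch shows feedback-driven weight adaptation of the kind described above, using a minimal single-layer model in which labels standing in for the human trainer's feedback drive gradient updates of adaptive weights; the fabricated features and labels, and the single-layer form, are illustrative assumptions, and a fielded system may use a multi-layer neural network or another learning system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated 2-D sound characterizations with trainer-supplied labels
# (1 = acceptable user speech, 0 = unacceptable / background-corrupted speech)
features = rng.normal(size=(200, 2)) + np.array([[2.0, 2.0]] * 100 + [[-2.0, -2.0]] * 100)
labels = np.array([1.0] * 100 + [0.0] * 100)

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(200):
    # Hypothesis: predicted probability that each sample is acceptable
    predictions = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    # Trainer feedback: gap between the hypothesis and the trainer's label
    error = predictions - labels
    # Adapt the weights responsive to that feedback (gradient descent on log-loss)
    weights -= learning_rate * features.T @ error / len(labels)
    bias -= learning_rate * error.mean()

final = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
print("training accuracy:", ((final > 0.5) == labels).mean())
```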
As described above, a learning system therefore is trained to distinguish a level of difference between (i) a newly presented user voice sample and (ii) a training voice sample stored within the audio characterization model ACM. Beyond a certain level of difference, the user voice sample is deemed unacceptable. Therefore, in the finalized model of the neural network, there is at least an implicit rejection threshold beyond which received audio vocalizations are not acceptable. Similarly, the learning system establishes an implicit sound characterization for sound samples stored in the audio characterization model ACM. In field use, at least an implicit sound characterization is formed for received audio as well.
Further below in this document, reference will generally be made to sound characterizations, rejection thresholds, and more generally to the audio characterization model ACM which characterizes acceptable speech for the audio environment. It will be understood that the sound characterizations and rejection thresholds may be either explicit values, or may be implicitly stored as distributed parameters or data structures in a learning system such as a suitably trained neural network.
Exemplary Pre-Field-Use Creation of Learning Corpus, Audio Characterization Model, and Rejection Threshold Via Learning Algorithm
Some of the steps shown in method 400 may be performed in different orders, or in parallel. The order of presentation below is for convenience only, and should not be construed as limiting. The method may be performed in part or in whole in a training environment, as described above.
The method may begin at step 405. In step 405, training users are prompted to voice speech samples, for example, to recite samples of a limited vocabulary which is expected to be employed in field-use of the speech driven system 102. The training users are typically wearing headsets 104, 200, with microphones suitably placed near their mouths, as described above. The speech samples are collected from the microphones 120, 202 of the headsets, and may be stored on a computer system as audio files (or in one integrated audio file) for further processing.
In an embodiment of the present system and method, user speech samples may be collected with no background sounds present. In an alternative embodiment, user speech samples may be collected when background sounds are also present (that is, audible and concurrent in time). In an alternative embodiment, user speech samples may be collected both with and without background sounds being present.
In step 407, audio samples are collected, via the user headsets 104, 200 as worn by the training users, of background sounds in the training environment. The background sounds emanate from other persons in the training environment, that is, persons other than the training user wearing the headset. The background voices may be recorded from persons at varying distances and in varying positions in relation to the user and the headset 104, 200.
In steps 410a and 410b (collectively, 410), the recordings of the audio samples are transcribed by the training listeners, including both transcriptions of user voice samples and transcriptions of background voice samples.
In step 415, some or all of the audio user speech samples may be mixed with audio of background speech samples. In this way, audio representations may be created which were not actually heard in the training environment. For example, a single training word spoken by users (for example, a word such as “three” or “selected”) can be mixed with multiple different background sounds.
In an embodiment, multiple such samples may be created with, for example and without limitation: a single user word mixed with individual different background sounds; a single user word mixed with multiple concurrent background sounds; and a single user word mixed with background sounds at different relative sound levels between the user sound and the background sound. In this way, many realistic samples of user speech with background speech can be created from a limited initial set of samples.
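By way of illustration only, the following Python sketch mixes a clean user-speech sample with a background sample at a chosen signal-to-noise ratio, so that one clean recording can yield several training examples at different relative levels; the synthetic signals and the SNR values are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(user_speech, background, snr_db):
    """Mix a user-speech sample with a background sample at a target SNR (in dB)."""
    background = background[:len(user_speech)]             # align lengths
    speech_power = np.mean(user_speech ** 2)
    noise_power = np.mean(background ** 2)
    # Scale the background so speech_power / scaled_noise_power equals the target SNR
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return user_speech + scale * background

rate = 16000
t = np.arange(rate) / rate
user_word = np.sin(2 * np.pi * 300 * t)             # stand-in for a recorded user word
pa_speech = 0.5 * np.sin(2 * np.pi * 800 * t)       # stand-in for PA background speech

# One clean recording becomes several training samples at different relative levels
for snr in (20, 10, 0):
    mixed = mix_at_snr(user_word, pa_speech, snr)
    print(snr, "dB SNR, peak amplitude", round(float(np.max(np.abs(mixed))), 2))
```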
In step 420, the present system and method calculates sound characterizations for the various combined user-voice/background-speech audio samples. As per the discussion above, the sound characterizations may be VTLNs of each sample, an MLLR of each sample, or other mathematical characterizations. In an embodiment of the present system and method, the VTLN values for users can be refined over time; as additional user voice samples are collected during field use (see
In step 425, the present system and method collects the sound characterizations for the different phonemes or words, and tags them based on the transcriptions, thereby creating part of the training corpus TC. The tags may include the text representation associated with the audio or other text indicator (that is, an indicator of meaning) associated with the audio; and the tag may also include the determined quality indicator of “acceptable” or “unacceptable”. For some audio samples, the training listener may be unable to determine the spoken words, in which case only a quality tag or quality indicator may be provided.
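By way of illustration only, one tagged entry of the training corpus TC might be represented by a record such as the following; the field names and values are assumptions made for illustration rather than a format defined by the present system and method.

```python
# Hypothetical structure of one tagged, transcribed entry in the training corpus TC
corpus_entry = {
    "sample_id": "tc-000123",
    "characterization": [0.42, 0.17, 0.88],  # e.g., VTLN- or MLLR-based sound characterization
    "transcription": "three",                # text indicator supplied by the training listener
    "source": "user",                        # "user", "background", or "pa_system"
    "quality": "acceptable",                 # quality indicator: "acceptable" or "unacceptable"
}

# A sample whose words could not be determined carries only a quality tag
unintelligible_entry = {
    "sample_id": "tc-000124",
    "characterization": [0.05, 0.91, 0.33],
    "transcription": None,
    "source": "user",
    "quality": "unacceptable",
}

training_corpus = [corpus_entry, unintelligible_entry]
print(len(training_corpus), "tagged samples")
```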
Returning to step 405, where the speech samples were collected from training users, in step 430, the present system and method calculates sound characterizations for the various user voice samples. As per the discussion above, the sound characterizations may be VTLNs of each sample, an MLLR of each sample, or other characterizations.
In an embodiment of the present system and method, in step 430 it is possible to use VTLN factors calculated given various examples for a given user in place of the current VTLN factor for each example. This increases the number of examples by the number of pre-calculated VTLN factors used.
In step 435, the present system and method collects the sound characterizations for the different user phonemes or words, and tags them based on the transcriptions, thereby creating part of the training corpus TC. The tags may include the text representation associated with the audio or other text indicator (that is, an indicator of meaning) associated with the audio; and the tag may also include the determined quality indicator of “acceptable” or “unacceptable”.
In an embodiment where the user voice samples are collected without background sounds, it is expected that most or all of the user voice samples will be of sufficient clarity to be acceptable and to be transcribed. However, any such user voice samples which are not sufficiently clear may be tagged as unacceptable. In an embodiment where some or all user voice samples are collected with background sounds present, it is expected that at least some audio samples will not be fully intelligible, in which case only a quality tag or quality indicator may be provided.
The training corpus TC consists of all the tagged, transcribed audio samples, possibly with suitable condensation (for example, multiple samples of the same word or the same phoneme may be condensed to one representation).
In step 440, the present system and method determines a suitable audio characterization model ACM and Rejection Threshold RT. The audio characterization model ACM includes sound characterizations for multiple words, phrases, and/or phonemes. In an embodiment, and as described above, this may entail training a learning system to distinguish acceptable voice samples from unacceptable voice samples, and thereby having the learning system establish a suitable rejection threshold (either explicit or implicit).
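By way of illustration only, where an explicit rejection threshold RT is desired, it might be derived from the tagged corpus as in the following Python sketch, which scores every tagged sample by its distance to the model and selects the cutoff that best separates acceptable from unacceptable samples; the example scores and the brute-force search are illustrative assumptions, and a learning system may instead encode the threshold implicitly, as noted above.

```python
import numpy as np

# Hypothetical distances-to-model computed for tagged training samples
acceptable_scores = np.array([0.05, 0.08, 0.11, 0.14, 0.18])     # tagged "acceptable"
unacceptable_scores = np.array([0.26, 0.31, 0.35, 0.42, 0.55])   # tagged "unacceptable"

def choose_rejection_threshold(acceptable, unacceptable):
    """Pick the cutoff distance that misclassifies the fewest tagged samples."""
    best_threshold, best_errors = None, None
    for threshold in np.sort(np.concatenate([acceptable, unacceptable])):
        # Accept below the threshold, reject at or above it
        errors = np.sum(acceptable >= threshold) + np.sum(unacceptable < threshold)
        if best_errors is None or errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

rt = choose_rejection_threshold(acceptable_scores, unacceptable_scores)
print("rejection threshold RT =", rt)   # samples with distance below RT are accepted
```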
Audio Environment in Field-Use
The audio environment may include activity prompts 507 or other prompts which are provided to the user 100 from the SRD 106, 300 (sometimes via an application running on the SRD, and other times originating from server 110), and which the user 100 hears via headphones 118 of headset 104. These prompts may include instructions for field activity, such as selecting certain items from specified locations or bins in a warehouse, or may include prompts for certain manufacturing activities such as stages in assembly of some hardware. The prompts may include prompts to the user to speak certain words specifically for audio-training purposes (see for example step 602 of method 600,
In response to prompts, the user 100 may either speak certain expected, prompted responses 505a, or perform certain activities, or both. For example, a prompt 507 may tell a user to pick an item from a bin numbered “A991”. In response, the user may be expected to recite back the words “‘A’ nine nine one”, then actually pick the item from a bin numbered “A991”, and then recite some confirmation phrase such as “picked” or “selected.”
Hinted user speech and non-hinted user speech—In general, the user's speech 505 comprises both: (i) some speech in response to prompts, where a specific response or specific choices of response to the prompt is/are expected from the user, and which is referred to as hinted speech 505a; and (ii) all other user speech 505b, which is non-hinted user speech. Some prompted responses are hinted, which occurs in specific parts of the user's workflow (for example, when the user is prompted to select a particular part from storage). Hinted speech will have an expected value for a reply, which is typically a dynamic value that is associated with the task at hand (for example, a part number).
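By way of illustration only, the following Python sketch shows how a recognized reply might be checked against the dynamic expected value (the hint text) for the task at hand; the function and vocabulary are hypothetical.

```python
def normalize(text):
    """Lower-case and collapse whitespace so spoken variants compare consistently."""
    return " ".join(text.lower().split())

def matches_hint(recognized_words, expected_response):
    """Return True when the recognized reply matches the hinted (expected) response."""
    return normalize(recognized_words) == normalize(expected_response)

# The prompt asked the user to pick from bin "A991"; the hinted reply is its spoken form
hint_text = "a nine nine one"
print(matches_hint("A nine nine one", hint_text))    # True  -> hinted response confirmed
print(matches_hint("a nine eight one", hint_text))   # False -> not the expected reply
```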
Non-hinted speech 505b may include general conversation which the user engages in with other persons, as well as some requests or other data provided to the server 110 by the user 100. All user speech 505 is detected by microphone 120 of headset 104.
Speech detected by microphone 120, 202 in the field will also typically include background speech 510 from other persons in the area, and PA system speech 515.
Stored data on speech recognition device: In an embodiment of the present system and method, all collected sounds (user speech 505, background speech 510, and PA system speech 515, as well as other background sounds, not illustrated) are transmitted or passed from headset 104 as audio samples 520 to the speech recognition device (SRD) 106, 300.
In an embodiment of the present system and method, the SRD 106, 300 is pre-programmed with, pre-configured with, and/or stores both a suitable audio characterization model ACM and/or training corpus TC for the current industrial environment, and a suitable rejection threshold RT.
In an embodiment, SRD 106, 300 is also pre-programmed with, pre-configured with, and/or stores a vocabulary of hint text HT expected to be used in the field. In an alternative embodiment, some or all of the audio characterization model ACM, training corpus TC, rejection threshold RT, and/or Hint Text HT may be prepared or stored on server 110 (for example, at the factory or warehouse where the SRD is to be used).
Exemplary Method of Field-Use of the Speech Recognition Device
The method 600 begins with step 602, which entails training the speech recognition device (SRD) 106, 300 to be operable with a particular user 100, by recognizing the voice of the user 100. In an embodiment, step 602 is a field-use step, but is a one-time step and is performed preliminary to the main use of the SRD 106, 300 to support user activity in the field.
In an embodiment, the training of the speech recognition device may entail prompting the user to speak specific, expected words, typically a limited vocabulary of words which the user will employ in the course of work. These prompted words may include digits, numbers, letters of the alphabet, and certain key words which may be commonly used in a given setting, for example, “Ready”, “Okay”, “Check”, “Found”, “Identified”, “Loaded”, “Stored”, “Completed”, or other words which may indicate a status of a warehouse or factory activity. In an embodiment, the prompted words may include some or all of the words expected to be used as hint words 505a during field use (see the discussion of hinted speech 505a above).
The user is prompted to speak these words (typically one word at a time), and the user then speaks the prompted words in reply. The SRD 106, 300 records the user replies and digitizes them. In an embodiment, the SRD 106, 300 may calculate for each word a set of state vectors and/or sound characterizations, employing calculations in a manner similar to that of pre-field-use processing (see for example method 400, described above).
In an embodiment of the present system and method, state vectors and/or sound characterizations obtained during training step 602 may be used to modify the current user characterization and/or the rejection threshold RT which is stored on the SRD 106, 300.
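The following sketch illustrates, under assumptions made solely for this example, how characterizations gathered during training step 602 might be blended with the stored values to refine the user characterization and the rejection threshold RT; the blending weight and the single-factor characterization are not mandated by the present system and method.

```python
# Sketch (assumed formula, not the specification's required one): blend the
# stored user center/threshold with characterizations from training step 602.
def update_user_model(stored_center: float, stored_rt: float,
                      training_warps: list[float],
                      weight: float = 0.3) -> tuple[float, float]:
    """Blend the pre-stored center/threshold with the new user's training replies."""
    new_center = sum(training_warps) / len(training_warps)
    spread = max(abs(w - new_center) for w in training_warps)
    center = (1 - weight) * stored_center + weight * new_center
    rt = (1 - weight) * stored_rt + weight * max(spread, 0.01)  # keep RT non-zero
    return center, rt

center, rt = update_user_model(1.00, 0.05, [1.03, 1.02, 1.04, 1.03])
print(center, rt)
```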
Routine or on-going field-use of the SRD may commence with step 605. In step 605, the headset 104 functions interactively with the user 100 and the larger audio environment 500. Prompts 507 for user activity may be provided via headphones 118 of headset 104. The prompts 507 may originate, for example, on server 110. Responsive to the prompts 507, the user 100 may engage in various appropriate activities, such as for example picking stock items from prompted locations, moving stock items to prompted locations, or other activities. Responsive to the prompts and to their own activity, the user may also speak various words 505. These user-spoken words 505 or phrases may for example confirm recognition of a prompt, or may confirm completion of a task, or may confirm identification of a location or object.
The user spoken words 505 are detected by microphone 120, 200 of headset 104. Microphone 120, 200 may also detect background speech 510 from other persons present in the environment, as well as PA system speech 515, and other background sounds.
In an embodiment, step 605 may entail creation of digitized audio samples, packets, or frames 520, which may include user speech 505, background speech 510, PA system speech 515, and other sounds, either concurrently or serially. In an embodiment, the digitized audio samples 520 are passed from headset 104 to SRD 106.
In step 610 the SRD 106 calculates a suitable sound characterization 690 of the audio sample 520. Suitable sound characterizations are those which are comparable to those stored in the training corpus TC. For example, suitable sound characterizations may include VTLNs of an audio sample 520, or MLLRs of an audio sample 520. Other sound characterizations, suitable to match those of training corpus TC, may be employed as well.
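For illustration only, the sketch below uses a crude spectral-centroid ratio as a stand-in for a proper VTLN or MLLR estimate; the reference value and the stand-in metric are assumptions, and a practical system would compute a characterization matching whatever representation is stored in the training corpus TC.

```python
# Step 610 sketch: compute a sound characterization 690 for one audio frame
# that can be compared against the training corpus TC. The centroid-ratio
# "warp factor" below is an illustrative stand-in, not real VTLN/MLLR.
import numpy as np

REFERENCE_CENTROID_HZ = 1500.0  # assumed corpus-wide reference value

def characterize_frame(samples: np.ndarray, sample_rate: int) -> float:
    """Return a single warp-like factor for one digitized audio frame 520."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return centroid / REFERENCE_CENTROID_HZ

frame = np.random.randn(1024)  # stand-in for a real digitized audio frame
print(characterize_frame(frame, 16000))
```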
Comparison of Received Sound with Hint Text: In one embodiment of the present system and method, in step 615 the method compares the received audio sample 520, and/or the sound characterization of the received audio sample 690, against stored sound characterizations of the hint text HT.
In step 620, a determination is made if the received audio sample 520, 690 matches any of the words in the hint text HT. If the received audio sample 520, 690 matches hint text HT, then it is presumed that the audio sample 520, 690 comes from a valid user 100, and further processing with respect to possible background speech may be skipped. In this event, the method proceeds to step 625 (along the path marked “Yes” in the figure), where the received audio sample 520, 690 is accepted.
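A minimal sketch of the steps 615/620 decision appears below. For brevity it reduces the comparison to matching a decoded word string against the hint vocabulary; the decoder, the hint list contents, and the string-level comparison are assumptions, whereas the specification above contemplates comparing sound characterizations of the hint text.

```python
# Steps 615/620 sketch: if the received sample matches hint text HT, accept
# immediately (step 625); otherwise fall through to step 635.
HINT_TEXT = {"a nine nine one", "picked", "selected", "ready", "okay"}

def matches_hint_text(decoded_words: str) -> bool:
    """Return True when the sample matches hint text HT."""
    return decoded_words.strip().lower() in HINT_TEXT

sample = "Picked"
if matches_hint_text(sample):
    print("accept as user speech (step 625)")
else:
    print("compare against audio characterization model (step 635)")
```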
In an embodiment of the present system and method, the SRD 106, 300 may also at times be running in a field training state or a noise sampling state. In such a training or noise-sampling state, at step 620 (and whether or not the received audio sample 520 matched the hint text HT) the method would automatically accept the user speech; the method would then automatically proceed to steps 625 and 630, discussed immediately below.
In an embodiment of the present system and method, from step 625 the method may proceed to step 630. In step 630, the present system and method uses calculated sound characterization 690 to improve the stored user sound characterizations and/or the audio characterization model ACM. For example, the state vectors which characterize user speech in the training corpus TC may be refined based on actual speech from actual users in the field. In an embodiment, this refinement of the training corpus TC may occur in the field in substantially real-time, with the sound characterizations in the training corpus TC being updated in real-time.
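One simple way to realize such real-time refinement, offered here only as a sketch under assumptions (the smoothing factor and the exponential-moving-average update are not required by the present system and method), is to move each stored state vector a small step toward every newly accepted field sample:

```python
# Step 630 sketch: refine a stored user sound characterization with an
# accepted field sample. The smoothing factor alpha is an assumption.
import numpy as np

def refine_characterization(stored_vector: np.ndarray,
                            accepted_vector: np.ndarray,
                            alpha: float = 0.1) -> np.ndarray:
    """Move the stored characterization slightly toward the accepted sample,
    so the model tracks the actual user's speech over time."""
    return (1.0 - alpha) * stored_vector + alpha * accepted_vector

stored = np.array([1.00, 0.52, 0.31])
new = np.array([1.04, 0.50, 0.35])
print(refine_characterization(stored, new))
```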
In an embodiment of the present system and method, each time the user 100 powers up the SRD 106, 300, the system starts over in building the characterization for that specific user. In an alternative embodiment, the SRD 106, 300 may persist the user characterization across power cycles (for example, storing the characterization in memory 306), so it is not necessary to start over each time. This specific characterization would not be stored in the audio characterization model (ACM).
In an alternative embodiment, user sound characterizations 690 collected in the field may be stored on server 110 or another suitable storage medium; the collected user sound characterizations 690 may then be processed en masse, using methods the same or similar to those of method 400, described above.
Returning now to step 620, it may be the case that the received audio sample 520 and/or the sound characterization 690 of the received speech does not match any of the vocabulary in the stored list of hint text HT. In that event, the method continues with step 635 (along the path marked “No” in the figure).
In step 635, the present system and method compares the sound characterization 690 of the received audio sample against sound characterizations in the audio characterization model ACM. The comparison searches for an acceptably close match, but also determines a quality level of the match. In step 640, a determination is made as to whether the match is of acceptable quality. If the match is of acceptable quality (the path marked “Yes”), then in step 645 the speech is accepted as user speech. If the match is not of acceptable quality (the path marked “No”), then in step 650 the speech is rejected as not being user speech. As described above, in an embodiment such a determination may be made by a suitably trained learning system, such as a neural network system trained as described above in this document.
Shown in the figure is a supplemental example SE which illustrates one possible particular embodiment of steps 635 through 650. In step 635E (corresponding to step 635), a difference value DV is calculated as the absolute value of the difference between the VTLN of the received audio sample 690 and the VTLN factor of a suitable audio example in the training corpus TC.
In step 640E (corresponding to step 640), a determination is made as to whether the difference value DV is less than the rejection threshold RT. If the match is of acceptable quality (so that the difference value DV is less than the rejection threshold RT), then in step 645E the speech is accepted as user speech. If the match is not of acceptable quality (so that the difference value DV is greater than the rejection threshold RT), then in step 650E the speech is rejected as not being user speech.
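The supplemental example SE can be rendered directly in a few lines, shown below with illustrative numeric values (the specific VTLN factors and threshold are assumptions for this example only):

```python
# Supplemental example SE (steps 635E-650E): compute the difference value DV
# and compare it against the rejection threshold RT.
def accept_sample(sample_vtln: float, corpus_vtln: float, rt: float) -> bool:
    dv = abs(sample_vtln - corpus_vtln)   # step 635E
    return dv < rt                        # step 640E: True -> 645E, False -> 650E

print(accept_sample(1.02, 1.00, 0.05))  # DV = 0.02 < RT  -> accepted as user speech
print(accept_sample(1.12, 1.00, 0.05))  # DV = 0.12 >= RT -> rejected
```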
As will be appreciated by persons skilled in the art, once an audio sample has been approved as acceptable speech, the meaning of the audio sample may be determined (based for example on transcription data in the training corpus TC). Based on the meaning of the received audio sample, suitable further actions may be taken by the speech driven system 102.
In an alternative embodiment of the present system and method, steps 615 and 620 (pertaining to the hint text comparison) may be omitted, along with omission of steps 625 and 630. In such an embodiment, control may pass directly from step 610 to step 635, 635E.
In an embodiment of the present system and method, a comparison is made between a real-time speech sample and pre-established sound samples. The pre-established sound samples are indicative of acceptable user vocalizations, and also indicative of unacceptable vocalizations—that is, vocalizations which are due to background voices, PA systems, or due to user vocalizations but which may be unintelligible due to concurrent background sounds.
A suitable metric is defined to analytically or numerically characterize the sameness or difference of the real-time speech sample against the pre-established sound samples.
The level of closeness or difference between a real-time sound sample and the stored sound samples is determined with relation to a suitable threshold value.
Audio Comparison Matrix: In an embodiment, to distinguish user speech from background speech, the present system and method may employ a stored, audio-derived data structure which incorporates sound data as a basis for comparisons. In one embodiment, the audio data structure may be a sound matrix, or an array of sound characterizations. Some cells in the audio matrix tend to characterize sounds which are valid user voice sounds, while other audio matrix cells tend to characterize sounds which are background voice sounds.
In real-time, newly recorded sounds are compared against cells in the audio matrix. Incoming vocalizations which compare favorably with valid user vocalizations in the sound matrix are considered to be acceptable user speech; incoming vocalizations which do not compare favorably with valid vocalizations in the matrix are rejected.
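The sketch below illustrates one possible realization of such a matrix comparison under stated assumptions: the cell contents, the user/background labels, and the distance cutoff are invented for this example, and a Euclidean nearest-cell lookup stands in for whatever comparison a given embodiment employs.

```python
# Audio comparison matrix sketch: each cell stores a sound characterization;
# parallel labels mark which cells characterize valid user voice sounds.
import numpy as np

matrix_cells = np.array([[1.00, 0.50], [1.02, 0.48], [0.86, 0.70], [1.14, 0.65]])
cell_is_user = np.array([True, True, False, False])
MAX_DISTANCE = 0.10  # assumed cutoff for "compares favorably"

def accept(incoming: np.ndarray) -> bool:
    """Accept the incoming characterization if its nearest cell is a user
    cell and lies within the allowed distance."""
    distances = np.linalg.norm(matrix_cells - incoming, axis=1)
    nearest = int(np.argmin(distances))
    return bool(cell_is_user[nearest]) and distances[nearest] < MAX_DISTANCE

print(accept(np.array([1.01, 0.49])))  # near a user cell -> accepted
print(accept(np.array([0.88, 0.69])))  # nearest cell is background -> rejected
```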
In some embodiments, the number of different available, stored voice characterizations may be too large to store in a matrix or array; or the voice characterizations may blend together to a degree that does not lend itself to storing them as discrete, one-characterization-per-cell entries. Instead, other comparison methods, based on mathematically continuous representations of voice characterizations, may be employed.
Thus, in various embodiments, signal matching and comparison methods may employ other data structures than a matrix of sound characterizations to make a comparison. A variety of signal processing techniques and artificial intelligence techniques, including neural networks and other learning system techniques, may be used to compare real-time field vocalizations against data stored in distributed or other forms in the learning system.
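As one hedged illustration of such a learning-system comparison, the tiny feed-forward scorer below maps a characterization vector to a probability of being valid user speech; the network size and the weight values are placeholders, and in practice the weights would be learned from the tagged training corpus TC rather than fixed as shown.

```python
# Sketch of a learning-system comparator: a small feed-forward scorer with
# placeholder (untrained) weights, used here only to show the data flow.
import numpy as np

W1 = np.array([[2.0, -1.0], [-1.5, 2.5]])  # hidden-layer weights (assumed)
b1 = np.array([0.1, -0.2])
w2 = np.array([1.8, -2.2])                 # output weights (assumed)
b2 = 0.05

def user_speech_probability(characterization: np.ndarray) -> float:
    """Score a sound characterization; values near 1.0 indicate user speech."""
    hidden = np.tanh(W1 @ characterization + b1)
    logit = float(w2 @ hidden + b2)
    return 1.0 / (1.0 + np.exp(-logit))

score = user_speech_probability(np.array([1.01, 0.49]))
print("accept" if score > 0.5 else "reject", score)
```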
In further embodiments, labeled A1 through A10, the present system and method may also be characterized as:
Persons skilled in the relevant arts will recognize that various elements of embodiments A1 through A10 can be combined with each other, as well as combined with elements of other embodiments disclosed throughout this application, to create still further embodiments consistent with the present system and method.
To supplement the present disclosure, this application incorporates entirely by reference the following commonly assigned patents, patent application publications, and patent applications:
U.S. patent application Ser. No. 14/740,320 for TACTILE SWITCH FOR A MOBILE ELECTRONIC DEVICE filed Jun. 16, 2015 (Bandringa);
In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term “and/or” includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flow charts, schematics, exemplary data structures, and examples. Insofar as such block diagrams, flow charts, schematics, exemplary data structures, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, schematics, exemplary data structures, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.
In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.
In addition, those skilled in the art will appreciate that the control mechanisms taught herein are capable of being distributed as a program product in a variety of tangible forms, and that an illustrative embodiment applies equally regardless of the particular type of tangible instruction bearing media used to actually carry out the distribution. Examples of tangible instruction bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, flash drives, and computer memory.
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the present systems and methods in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims, but should be construed to include all voice-recognition systems that read in accordance with the claims. Accordingly, the invention is not limited by the disclosure, but instead its scope is to be determined entirely by the following claims.
This application is a continuation of and claims priority to U.S. application Ser. No. 17/449,213, filed Sep. 28, 2021, titled “DISTINGUISHING USER SPEECH FROM BACKGROUND SPEECH IN SPEECH-DENSE ENVIRONMENTS,” which is a continuation of and claims priority to U.S. application Ser. No. 16/695,555, filed Nov. 26, 2019, titled “DISTINGUISHING USER SPEECH FROM BACKGROUND SPEECH IN SPEECH-DENSE ENVIRONMENTS,” (now U.S. Pat. No. 11,158,336), which is a continuation of and claims priority to U.S. application Ser. No. 15/220,584, filed Jul. 27, 2016, titled “DISTINGUISHING USER SPEECH FROM BACKGROUND SPEECH IN SPEECH-DENSE ENVIRONMENTS,” (now U.S. Pat. No. 10,714,121), the contents of which are incorporated by reference herein in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4882757 | Fisher et al. | Nov 1989 | A |
4928302 | Kaneuchi et al. | May 1990 | A |
4959864 | Van et al. | Sep 1990 | A |
4977598 | Doddington et al. | Dec 1990 | A |
5127043 | Hunt et al. | Jun 1992 | A |
5127055 | Larkey | Jun 1992 | A |
5230023 | Nakano | Jul 1993 | A |
5297194 | Hunt et al. | Mar 1994 | A |
5349645 | Zhao | Sep 1994 | A |
5428707 | Gould et al. | Jun 1995 | A |
5457768 | Tsuboi et al. | Oct 1995 | A |
5465317 | Epstein | Nov 1995 | A |
5488652 | Bielby et al. | Jan 1996 | A |
5566272 | Brems et al. | Oct 1996 | A |
5602960 | Hon et al. | Feb 1997 | A |
5625748 | McDonough et al. | Apr 1997 | A |
5640485 | Ranta | Jun 1997 | A |
5644680 | Bielby et al. | Jul 1997 | A |
5651094 | Takagi et al. | Jul 1997 | A |
5684925 | Morin et al. | Nov 1997 | A |
5710864 | Juang et al. | Jan 1998 | A |
5717826 | Setlur et al. | Feb 1998 | A |
5737489 | Chou et al. | Apr 1998 | A |
5737724 | Atal et al. | Apr 1998 | A |
5742928 | Suzuki | Apr 1998 | A |
5774837 | Yeldener et al. | Jun 1998 | A |
5774841 | Salazar et al. | Jun 1998 | A |
5774858 | Taubkin et al. | Jun 1998 | A |
5787387 | Aguilar | Jul 1998 | A |
5797123 | Chou et al. | Aug 1998 | A |
5799273 | Mitchell et al. | Aug 1998 | A |
5832430 | Lleida et al. | Nov 1998 | A |
5839103 | Mammone et al. | Nov 1998 | A |
5842163 | Weintraub | Nov 1998 | A |
5870706 | Alshawi | Feb 1999 | A |
5890108 | Yeldener | Mar 1999 | A |
5893057 | Fujimoto et al. | Apr 1999 | A |
5893059 | Raman | Apr 1999 | A |
5893902 | Transue et al. | Apr 1999 | A |
5895447 | Ittycheriah et al. | Apr 1999 | A |
5899972 | Miyazawa et al. | May 1999 | A |
5946658 | Miyazawa et al. | Aug 1999 | A |
5960447 | Holt et al. | Sep 1999 | A |
5970450 | Hattori | Oct 1999 | A |
6003002 | Netsch | Dec 1999 | A |
6006183 | Lai et al. | Dec 1999 | A |
6073096 | Gao et al. | Jun 2000 | A |
6076057 | Narayanan et al. | Jun 2000 | A |
6088669 | Maes | Jul 2000 | A |
6094632 | Hattori | Jul 2000 | A |
6101467 | Bartosik | Aug 2000 | A |
6122612 | Goldberg | Sep 2000 | A |
6151574 | Lee et al. | Nov 2000 | A |
6182038 | Balakrishnan et al. | Jan 2001 | B1 |
6192343 | Morgan et al. | Feb 2001 | B1 |
6205426 | Nguyen et al. | Mar 2001 | B1 |
6230129 | Morin et al. | May 2001 | B1 |
6230138 | Everhart | May 2001 | B1 |
6233555 | Parthasarathy et al. | May 2001 | B1 |
6233559 | Balakrishnan | May 2001 | B1 |
6243713 | Nelson et al. | Jun 2001 | B1 |
6246980 | Glorion et al. | Jun 2001 | B1 |
6292782 | Weideman | Sep 2001 | B1 |
6330536 | Parthasarathy et al. | Dec 2001 | B1 |
6351730 | Chen | Feb 2002 | B2 |
6374212 | Phillips et al. | Apr 2002 | B2 |
6374220 | Kao | Apr 2002 | B1 |
6374221 | Haimi-Cohen | Apr 2002 | B1 |
6374227 | Ye | Apr 2002 | B1 |
6377662 | Hunt et al. | Apr 2002 | B1 |
6377949 | Gilmour | Apr 2002 | B1 |
6397179 | Crespo et al. | May 2002 | B2 |
6397180 | Jaramillo et al. | May 2002 | B1 |
6421640 | Dolfing et al. | Jul 2002 | B1 |
6438519 | Campbell et al. | Aug 2002 | B1 |
6438520 | Curt et al. | Aug 2002 | B1 |
6456973 | Fado et al. | Sep 2002 | B1 |
6487532 | Schoofs et al. | Nov 2002 | B1 |
6496800 | Kong et al. | Dec 2002 | B1 |
6505155 | Vanbuskirk et al. | Jan 2003 | B1 |
6507816 | Ortega | Jan 2003 | B2 |
6526380 | Thelen et al. | Feb 2003 | B1 |
6539078 | Hunt et al. | Mar 2003 | B1 |
6542866 | Jiang et al. | Apr 2003 | B1 |
6567775 | Maali et al. | May 2003 | B1 |
6571210 | Hon et al. | May 2003 | B2 |
6581036 | Varney, Jr. | Jun 2003 | B1 |
6587824 | Everhart et al. | Jul 2003 | B1 |
6594629 | Basu et al. | Jul 2003 | B1 |
6598017 | Yamamoto et al. | Jul 2003 | B1 |
6606598 | Holthouse et al. | Aug 2003 | B1 |
6629072 | Thelen et al. | Sep 2003 | B1 |
6662163 | Albayrak et al. | Dec 2003 | B1 |
6675142 | Ortega et al. | Jan 2004 | B2 |
6701293 | Bennett et al. | Mar 2004 | B2 |
6725199 | Brittan et al. | Apr 2004 | B2 |
6732074 | Kuroda | May 2004 | B1 |
6735562 | Zhang et al. | May 2004 | B1 |
6754627 | Woodward | Jun 2004 | B2 |
6766295 | Murveit et al. | Jul 2004 | B1 |
6799162 | Goronzy et al. | Sep 2004 | B1 |
6813491 | McKinney | Nov 2004 | B1 |
6829577 | Gleason | Dec 2004 | B1 |
6832224 | Gilmour | Dec 2004 | B2 |
6832725 | Gardiner et al. | Dec 2004 | B2 |
6834265 | Balasuriya | Dec 2004 | B2 |
6839667 | Reich | Jan 2005 | B2 |
6856956 | Thrasher et al. | Feb 2005 | B2 |
6868381 | Peters et al. | Mar 2005 | B1 |
6868385 | Gerson | Mar 2005 | B1 |
6871177 | Hovell et al. | Mar 2005 | B1 |
6876968 | Veprek | Apr 2005 | B2 |
6876987 | Bahler et al. | Apr 2005 | B2 |
6879956 | Honda et al. | Apr 2005 | B1 |
6882972 | Kompe et al. | Apr 2005 | B2 |
6910012 | Hartley et al. | Jun 2005 | B2 |
6917918 | Rockenbeck et al. | Jul 2005 | B2 |
6922466 | Peterson et al. | Jul 2005 | B1 |
6922669 | Schalk et al. | Jul 2005 | B2 |
6941264 | Konopka et al. | Sep 2005 | B2 |
6961700 | Mitchell et al. | Nov 2005 | B2 |
6961702 | Dobler et al. | Nov 2005 | B2 |
6985859 | Morin | Jan 2006 | B2 |
6988068 | Fado et al. | Jan 2006 | B2 |
6999931 | Zhou | Feb 2006 | B2 |
7010489 | Lewis et al. | Mar 2006 | B1 |
7031918 | Hwang | Apr 2006 | B2 |
7035800 | Tapper | Apr 2006 | B2 |
7039166 | Peterson et al. | May 2006 | B1 |
7050550 | Steinbiss et al. | May 2006 | B2 |
7058575 | Zhou | Jun 2006 | B2 |
7062435 | Tzirkel-Hancock et al. | Jun 2006 | B2 |
7062441 | Townshend | Jun 2006 | B1 |
7065488 | Yajima et al. | Jun 2006 | B2 |
7069513 | Damiba | Jun 2006 | B2 |
7072750 | Pi et al. | Jul 2006 | B2 |
7072836 | Shao | Jul 2006 | B2 |
7103542 | Doyle | Sep 2006 | B2 |
7103543 | Hernandez-Abrego et al. | Sep 2006 | B2 |
7128266 | Zhu et al. | Oct 2006 | B2 |
7159783 | Walczyk et al. | Jan 2007 | B2 |
7203644 | Anderson et al. | Apr 2007 | B2 |
7203651 | Baruch et al. | Apr 2007 | B2 |
7216148 | Matsunami et al. | May 2007 | B2 |
7225127 | Lucke | May 2007 | B2 |
7240010 | Papadimitriou et al. | Jul 2007 | B2 |
7266494 | Droppo et al. | Sep 2007 | B2 |
7272556 | Aguilar et al. | Sep 2007 | B1 |
7305340 | Rosen et al. | Dec 2007 | B1 |
7319960 | Riis et al. | Jan 2008 | B2 |
7386454 | Gopinath et al. | Jun 2008 | B2 |
7392186 | Duan et al. | Jun 2008 | B2 |
7401019 | Seide et al. | Jul 2008 | B2 |
7406413 | Geppert et al. | Jul 2008 | B2 |
7413127 | Ehrhart et al. | Aug 2008 | B2 |
7430509 | Jost et al. | Sep 2008 | B2 |
7454340 | Sakai et al. | Nov 2008 | B2 |
7457745 | Kadambe et al. | Nov 2008 | B2 |
7493258 | Kibkalo et al. | Feb 2009 | B2 |
7542907 | Epstein et al. | Jun 2009 | B2 |
7565282 | Carus et al. | Jul 2009 | B2 |
7609669 | Sweeney et al. | Oct 2009 | B2 |
7684984 | Kemp | Mar 2010 | B2 |
7726575 | Wang et al. | Jun 2010 | B2 |
7805412 | Gibson et al. | Sep 2010 | B1 |
7813771 | Escott | Oct 2010 | B2 |
7827032 | Braho et al. | Nov 2010 | B2 |
7865362 | Braho et al. | Jan 2011 | B2 |
7885419 | Wahl et al. | Feb 2011 | B2 |
7895039 | Braho et al. | Feb 2011 | B2 |
7949533 | Braho et al. | May 2011 | B2 |
7983912 | Hirakawa et al. | Jul 2011 | B2 |
8200495 | Braho et al. | Jun 2012 | B2 |
8255219 | Braho et al. | Aug 2012 | B2 |
8294969 | Plesko | Oct 2012 | B2 |
8317105 | Kotlarsky et al. | Nov 2012 | B2 |
8322622 | Liu | Dec 2012 | B2 |
8366005 | Kotlarsky et al. | Feb 2013 | B2 |
8371507 | Haggerty et al. | Feb 2013 | B2 |
8374870 | Braho et al. | Feb 2013 | B2 |
8376233 | Horn et al. | Feb 2013 | B2 |
8381979 | Franz | Feb 2013 | B2 |
8390909 | Plesko | Mar 2013 | B2 |
8408464 | Zhu et al. | Apr 2013 | B2 |
8408468 | Van et al. | Apr 2013 | B2 |
8408469 | Good | Apr 2013 | B2 |
8424768 | Rueblinger et al. | Apr 2013 | B2 |
8448863 | Xian et al. | May 2013 | B2 |
8457013 | Essinger et al. | Jun 2013 | B2 |
8459557 | Havens et al. | Jun 2013 | B2 |
8469272 | Kearney | Jun 2013 | B2 |
8474712 | Kearney et al. | Jul 2013 | B2 |
8479992 | Kotlarsky et al. | Jul 2013 | B2 |
8490877 | Kearney | Jul 2013 | B2 |
8517271 | Kotlarsky et al. | Aug 2013 | B2 |
8523076 | Good | Sep 2013 | B2 |
8528818 | Ehrhart et al. | Sep 2013 | B2 |
8532282 | Bracey | Sep 2013 | B2 |
8544737 | Gomez et al. | Oct 2013 | B2 |
8548420 | Grunow et al. | Oct 2013 | B2 |
8550335 | Samek et al. | Oct 2013 | B2 |
8550354 | Gannon et al. | Oct 2013 | B2 |
8550357 | Kearney | Oct 2013 | B2 |
8556174 | Kosecki et al. | Oct 2013 | B2 |
8556176 | Van et al. | Oct 2013 | B2 |
8556177 | Hussey et al. | Oct 2013 | B2 |
8559767 | Barber et al. | Oct 2013 | B2 |
8561895 | Gomez et al. | Oct 2013 | B2 |
8561903 | Sauerwein, Jr. | Oct 2013 | B2 |
8561905 | Edmonds et al. | Oct 2013 | B2 |
8565107 | Pease et al. | Oct 2013 | B2 |
8571307 | Li et al. | Oct 2013 | B2 |
8579200 | Samek et al. | Nov 2013 | B2 |
8583924 | Caballero et al. | Nov 2013 | B2 |
8584945 | Wang et al. | Nov 2013 | B2 |
8587595 | Wang | Nov 2013 | B2 |
8587697 | Hussey et al. | Nov 2013 | B2 |
8588869 | Sauerwein et al. | Nov 2013 | B2 |
8590789 | Nahill et al. | Nov 2013 | B2 |
8596539 | Havens et al. | Dec 2013 | B2 |
8596542 | Havens et al. | Dec 2013 | B2 |
8596543 | Havens et al. | Dec 2013 | B2 |
8599271 | Havens et al. | Dec 2013 | B2 |
8599957 | Peake et al. | Dec 2013 | B2 |
8600158 | Li et al. | Dec 2013 | B2 |
8600167 | Showering | Dec 2013 | B2 |
8602309 | Longacre et al. | Dec 2013 | B2 |
8608053 | Meier et al. | Dec 2013 | B2 |
8608071 | Liu et al. | Dec 2013 | B2 |
8611309 | Wang et al. | Dec 2013 | B2 |
8615487 | Gomez et al. | Dec 2013 | B2 |
8621123 | Caballero | Dec 2013 | B2 |
8622303 | Meier et al. | Jan 2014 | B2 |
8628013 | Ding | Jan 2014 | B2 |
8628015 | Wang et al. | Jan 2014 | B2 |
8628016 | Winegar | Jan 2014 | B2 |
8629926 | Wang | Jan 2014 | B2 |
8630491 | Longacre et al. | Jan 2014 | B2 |
8635309 | Berthiaume et al. | Jan 2014 | B2 |
8636200 | Kearney | Jan 2014 | B2 |
8636212 | Nahill et al. | Jan 2014 | B2 |
8636215 | Ding et al. | Jan 2014 | B2 |
8636224 | Wang | Jan 2014 | B2 |
8638806 | Wang et al. | Jan 2014 | B2 |
8640958 | Lu et al. | Feb 2014 | B2 |
8640960 | Wang et al. | Feb 2014 | B2 |
8643717 | Li et al. | Feb 2014 | B2 |
8644489 | Noble et al. | Feb 2014 | B1 |
8646692 | Meier et al. | Feb 2014 | B2 |
8646694 | Wang et al. | Feb 2014 | B2 |
8657200 | Ren et al. | Feb 2014 | B2 |
8659397 | Vargo et al. | Feb 2014 | B2 |
8668149 | Good | Mar 2014 | B2 |
8678285 | Kearney | Mar 2014 | B2 |
8678286 | Smith et al. | Mar 2014 | B2 |
8682077 | Longacre, Jr. | Mar 2014 | B1 |
D702237 | Oberpriller et al. | Apr 2014 | S |
8687282 | Feng et al. | Apr 2014 | B2 |
8692927 | Pease et al. | Apr 2014 | B2 |
8695880 | Bremer et al. | Apr 2014 | B2 |
8698949 | Grunow et al. | Apr 2014 | B2 |
8702000 | Barber et al. | Apr 2014 | B2 |
8717494 | Gannon | May 2014 | B2 |
8720783 | Biss et al. | May 2014 | B2 |
8723804 | Fletcher et al. | May 2014 | B2 |
8723904 | Marty et al. | May 2014 | B2 |
8727223 | Wang | May 2014 | B2 |
8740082 | Wilz, Sr. | Jun 2014 | B2 |
8740085 | Furlong et al. | Jun 2014 | B2 |
8746563 | Hennick et al. | Jun 2014 | B2 |
8750445 | Peake et al. | Jun 2014 | B2 |
8752766 | Xian et al. | Jun 2014 | B2 |
8756059 | Braho et al. | Jun 2014 | B2 |
8757495 | Qu et al. | Jun 2014 | B2 |
8760563 | Koziol et al. | Jun 2014 | B2 |
8763909 | Reed et al. | Jul 2014 | B2 |
8777108 | Coyle | Jul 2014 | B2 |
8777109 | Oberpriller et al. | Jul 2014 | B2 |
8779898 | Havens et al. | Jul 2014 | B2 |
8781520 | Payne et al. | Jul 2014 | B2 |
8783573 | Havens et al. | Jul 2014 | B2 |
8789757 | Barten | Jul 2014 | B2 |
8789758 | Hawley et al. | Jul 2014 | B2 |
8789759 | Xian et al. | Jul 2014 | B2 |
8794520 | Wang et al. | Aug 2014 | B2 |
8794522 | Ehrhart | Aug 2014 | B2 |
8794525 | Amundsen et al. | Aug 2014 | B2 |
8794526 | Wang et al. | Aug 2014 | B2 |
8798367 | Ellis | Aug 2014 | B2 |
8807431 | Wang et al. | Aug 2014 | B2 |
8807432 | Van et al. | Aug 2014 | B2 |
8820630 | Qu et al. | Sep 2014 | B2 |
8822848 | Meagher | Sep 2014 | B2 |
8824692 | Sheerin et al. | Sep 2014 | B2 |
8824696 | Braho | Sep 2014 | B2 |
8842849 | Wahl et al. | Sep 2014 | B2 |
8844822 | Kotlarsky et al. | Sep 2014 | B2 |
8844823 | Fritz et al. | Sep 2014 | B2 |
8849019 | Li et al. | Sep 2014 | B2 |
D716285 | Chaney et al. | Oct 2014 | S |
8851383 | Yeakley et al. | Oct 2014 | B2 |
8854633 | Laffargue et al. | Oct 2014 | B2 |
8866963 | Grunow et al. | Oct 2014 | B2 |
8868421 | Braho et al. | Oct 2014 | B2 |
8868519 | Maloy et al. | Oct 2014 | B2 |
8868802 | Barten | Oct 2014 | B2 |
8868803 | Caballero | Oct 2014 | B2 |
8870074 | Gannon | Oct 2014 | B1 |
8879639 | Sauerwein, Jr. | Nov 2014 | B2 |
8880426 | Smith | Nov 2014 | B2 |
8881983 | Havens et al. | Nov 2014 | B2 |
8881987 | Wang | Nov 2014 | B2 |
8903172 | Smith | Dec 2014 | B2 |
8908995 | Benos et al. | Dec 2014 | B2 |
8910870 | Li et al. | Dec 2014 | B2 |
8910875 | Ren et al. | Dec 2014 | B2 |
8914290 | Hendrickson et al. | Dec 2014 | B2 |
8914788 | Pettinelli et al. | Dec 2014 | B2 |
8915439 | Feng et al. | Dec 2014 | B2 |
8915444 | Havens et al. | Dec 2014 | B2 |
8916789 | Woodburn | Dec 2014 | B2 |
8918250 | Hollifield | Dec 2014 | B2 |
8918564 | Caballero | Dec 2014 | B2 |
8925818 | Kosecki et al. | Jan 2015 | B2 |
8939374 | Jovanovski et al. | Jan 2015 | B2 |
8942480 | Duane | Jan 2015 | B2 |
8944313 | Williams et al. | Feb 2015 | B2 |
8944327 | Meier et al. | Feb 2015 | B2 |
8944332 | Harding et al. | Feb 2015 | B2 |
8950678 | Germaine et al. | Feb 2015 | B2 |
D723560 | Zhou et al. | Mar 2015 | S |
8967468 | Gomez et al. | Mar 2015 | B2 |
8971346 | Sevier | Mar 2015 | B2 |
8976030 | Cunningham et al. | Mar 2015 | B2 |
8976368 | El et al. | Mar 2015 | B2 |
8978981 | Guan | Mar 2015 | B2 |
8978983 | Bremer et al. | Mar 2015 | B2 |
8978984 | Hennick et al. | Mar 2015 | B2 |
8985456 | Zhu et al. | Mar 2015 | B2 |
8985457 | Soule et al. | Mar 2015 | B2 |
8985459 | Kearney et al. | Mar 2015 | B2 |
8985461 | Gelay et al. | Mar 2015 | B2 |
8988578 | Showering | Mar 2015 | B2 |
8988590 | Gillet et al. | Mar 2015 | B2 |
8991704 | Hopper et al. | Mar 2015 | B2 |
8996194 | Davis et al. | Mar 2015 | B2 |
8996384 | Funyak et al. | Mar 2015 | B2 |
8998091 | Edmonds et al. | Apr 2015 | B2 |
9002641 | Showering | Apr 2015 | B2 |
9007368 | Laffargue et al. | Apr 2015 | B2 |
9010641 | Qu et al. | Apr 2015 | B2 |
9015513 | Murawski et al. | Apr 2015 | B2 |
9016576 | Brady et al. | Apr 2015 | B2 |
D730357 | Fitch et al. | May 2015 | S |
9022288 | Nahill et al. | May 2015 | B2 |
9030964 | Essinger et al. | May 2015 | B2 |
9033240 | Smith et al. | May 2015 | B2 |
9033242 | Gillet et al. | May 2015 | B2 |
9036054 | Koziol et al. | May 2015 | B2 |
9037344 | Chamberlin | May 2015 | B2 |
9038911 | Xian et al. | May 2015 | B2 |
9038915 | Smith | May 2015 | B2 |
D730901 | Oberpriller et al. | Jun 2015 | S |
D730902 | Fitch et al. | Jun 2015 | S |
D733112 | Chaney et al. | Jun 2015 | S |
9047098 | Barten | Jun 2015 | B2 |
9047359 | Caballero et al. | Jun 2015 | B2 |
9047420 | Caballero | Jun 2015 | B2 |
9047525 | Barber et al. | Jun 2015 | B2 |
9047531 | Showering et al. | Jun 2015 | B2 |
9047865 | Aguilar et al. | Jun 2015 | B2 |
9049640 | Wang et al. | Jun 2015 | B2 |
9053055 | Caballero | Jun 2015 | B2 |
9053378 | Hou et al. | Jun 2015 | B1 |
9053380 | Xian et al. | Jun 2015 | B2 |
9057641 | Amundsen et al. | Jun 2015 | B2 |
9058526 | Powilleit | Jun 2015 | B2 |
9064165 | Havens et al. | Jun 2015 | B2 |
9064167 | Xian et al. | Jun 2015 | B2 |
9064168 | Todeschini et al. | Jun 2015 | B2 |
9064254 | Todeschini et al. | Jun 2015 | B2 |
9066032 | Wang | Jun 2015 | B2 |
9070032 | Corcoran | Jun 2015 | B2 |
D734339 | Zhou et al. | Jul 2015 | S |
D734751 | Oberpriller et al. | Jul 2015 | S |
9082023 | Feng et al. | Jul 2015 | B2 |
9135913 | Toru | Sep 2015 | B2 |
9224022 | Ackley et al. | Dec 2015 | B2 |
9224027 | Van et al. | Dec 2015 | B2 |
D747321 | Ondon et al. | Jan 2016 | S |
9230140 | Ackley | Jan 2016 | B1 |
9250712 | Todeschini | Feb 2016 | B1 |
9258033 | Showering | Feb 2016 | B2 |
9261398 | Amundsen et al. | Feb 2016 | B2 |
9262633 | Todeschini et al. | Feb 2016 | B1 |
9262664 | Soule et al. | Feb 2016 | B2 |
9274806 | Barten | Mar 2016 | B2 |
9282501 | Wang et al. | Mar 2016 | B2 |
9292969 | Affargue et al. | Mar 2016 | B2 |
9298667 | Caballero | Mar 2016 | B2 |
9310609 | Rueblinger et al. | Apr 2016 | B2 |
9319548 | Showering et al. | Apr 2016 | B2 |
D757009 | Oberpriller et al. | May 2016 | S |
9342724 | McCloskey et al. | May 2016 | B2 |
9342827 | Smith | May 2016 | B2 |
9355294 | Smith et al. | May 2016 | B2 |
9367722 | Xian et al. | Jun 2016 | B2 |
9375945 | Bowles | Jun 2016 | B1 |
D760719 | Zhou et al. | Jul 2016 | S |
9390596 | Todeschini | Jul 2016 | B1 |
9396375 | Qu et al. | Jul 2016 | B2 |
9398008 | Todeschini et al. | Jul 2016 | B2 |
D762604 | Fitch et al. | Aug 2016 | S |
D762647 | Fitch et al. | Aug 2016 | S |
9407840 | Wang | Aug 2016 | B2 |
9412242 | Van et al. | Aug 2016 | B2 |
9418252 | Nahill et al. | Aug 2016 | B2 |
D766244 | Zhou et al. | Sep 2016 | S |
9443123 | Hejl | Sep 2016 | B2 |
9443222 | Singel et al. | Sep 2016 | B2 |
9448610 | Davis et al. | Sep 2016 | B2 |
9478113 | Xie et al. | Oct 2016 | B2 |
D771631 | Fitch et al. | Nov 2016 | S |
9507974 | Todeschini | Nov 2016 | B1 |
D777166 | Bidwell et al. | Jan 2017 | S |
9582696 | Barber et al. | Feb 2017 | B2 |
D783601 | Schulte et al. | Apr 2017 | S |
9616749 | Chamberlin | Apr 2017 | B2 |
9618993 | Murawski et al. | Apr 2017 | B2 |
D785617 | Bidwell et al. | May 2017 | S |
D785636 | Oberpriller et al. | May 2017 | S |
D790505 | Vargo et al. | Jun 2017 | S |
D790546 | Zhou et al. | Jun 2017 | S |
D790553 | Fitch et al. | Jun 2017 | S |
9697818 | Hendrickson et al. | Jul 2017 | B2 |
9715614 | Todeschini et al. | Jul 2017 | B2 |
9728188 | Rosen et al. | Aug 2017 | B1 |
9734493 | Gomez et al. | Aug 2017 | B2 |
9786101 | Ackley | Oct 2017 | B2 |
9813799 | Gecawicz et al. | Nov 2017 | B2 |
9857167 | Jovanovski et al. | Jan 2018 | B2 |
9891612 | Charpentier et al. | Feb 2018 | B2 |
9891912 | Balakrishnan et al. | Feb 2018 | B2 |
9892876 | Bandringa | Feb 2018 | B2 |
9954871 | Hussey et al. | Apr 2018 | B2 |
9978088 | Pape | May 2018 | B2 |
10007112 | Fitch et al. | Jun 2018 | B2 |
10019334 | Caballero et al. | Jul 2018 | B2 |
10021043 | Sevier | Jul 2018 | B2 |
10038716 | Todeschini et al. | Jul 2018 | B2 |
10066982 | Ackley et al. | Sep 2018 | B2 |
10327158 | Wang et al. | Jun 2019 | B2 |
10360728 | Venkatesha et al. | Jul 2019 | B2 |
10401436 | Young et al. | Sep 2019 | B2 |
10410029 | Powilleit | Sep 2019 | B2 |
10685643 | Hendrickson et al. | Jun 2020 | B2 |
10714121 | Hardek | Jul 2020 | B2 |
10732226 | Kohtz et al. | Aug 2020 | B2 |
10909490 | Raj et al. | Feb 2021 | B2 |
11158336 | Hardek | Oct 2021 | B2 |
20020007273 | Chen | Jan 2002 | A1 |
20020054101 | Beatty | May 2002 | A1 |
20020128838 | Veprek | Sep 2002 | A1 |
20020129139 | Ramesh | Sep 2002 | A1 |
20020138274 | Sharma et al. | Sep 2002 | A1 |
20020143540 | Malayath et al. | Oct 2002 | A1 |
20020145516 | Moskowitz et al. | Oct 2002 | A1 |
20020152071 | Chaiken et al. | Oct 2002 | A1 |
20020178004 | Chang et al. | Nov 2002 | A1 |
20020178074 | Bloom | Nov 2002 | A1 |
20020184027 | Brittan et al. | Dec 2002 | A1 |
20020184029 | Brittan et al. | Dec 2002 | A1 |
20020198712 | Hinde et al. | Dec 2002 | A1 |
20030023438 | Schramm et al. | Jan 2003 | A1 |
20030061049 | Erten | Mar 2003 | A1 |
20030120486 | Brittan et al. | Jun 2003 | A1 |
20030141990 | Coon | Jul 2003 | A1 |
20030191639 | Mazza | Oct 2003 | A1 |
20030220791 | Toyama | Nov 2003 | A1 |
20040181461 | Raiyani et al. | Sep 2004 | A1 |
20040181467 | Raiyani et al. | Sep 2004 | A1 |
20040193422 | Fado et al. | Sep 2004 | A1 |
20040215457 | Meyer | Oct 2004 | A1 |
20040230420 | Kadambe et al. | Nov 2004 | A1 |
20040242160 | Ichikawa et al. | Dec 2004 | A1 |
20050044129 | McCormack et al. | Feb 2005 | A1 |
20050049873 | Bartur et al. | Mar 2005 | A1 |
20050055205 | Jersak et al. | Mar 2005 | A1 |
20050070337 | Byford et al. | Mar 2005 | A1 |
20050071158 | Byford | Mar 2005 | A1 |
20050071161 | Shen | Mar 2005 | A1 |
20050080627 | Hennebert et al. | Apr 2005 | A1 |
20050177369 | Stoimenov et al. | Aug 2005 | A1 |
20060235739 | Levis et al. | Oct 2006 | A1 |
20070063048 | Havens et al. | Mar 2007 | A1 |
20070080930 | Logan et al. | Apr 2007 | A1 |
20070184881 | Wahl et al. | Aug 2007 | A1 |
20080052068 | Aguilar et al. | Feb 2008 | A1 |
20080185432 | Caballero et al. | Aug 2008 | A1 |
20080280653 | Ma et al. | Nov 2008 | A1 |
20090006164 | Kaiser et al. | Jan 2009 | A1 |
20090099849 | Iwasawa | Apr 2009 | A1 |
20090134221 | Zhu et al. | May 2009 | A1 |
20090164902 | Cohen et al. | Jun 2009 | A1 |
20090192705 | Golding et al. | Jul 2009 | A1 |
20100057465 | Kirsch et al. | Mar 2010 | A1 |
20100177076 | Essinger et al. | Jul 2010 | A1 |
20100177080 | Essinger et al. | Jul 2010 | A1 |
20100177707 | Essinger et al. | Jul 2010 | A1 |
20100177749 | Essinger et al. | Jul 2010 | A1 |
20100226505 | Kimura | Sep 2010 | A1 |
20100250243 | Schalk et al. | Sep 2010 | A1 |
20100265880 | Rautiola et al. | Oct 2010 | A1 |
20110029312 | Braho et al. | Feb 2011 | A1 |
20110029313 | Braho et al. | Feb 2011 | A1 |
20110093269 | Braho et al. | Apr 2011 | A1 |
20110119623 | Kim | May 2011 | A1 |
20110169999 | Grunow et al. | Jul 2011 | A1 |
20110202554 | Powilleit et al. | Aug 2011 | A1 |
20110208521 | McClain | Aug 2011 | A1 |
20110237287 | Klein et al. | Sep 2011 | A1 |
20110282668 | Stefan et al. | Nov 2011 | A1 |
20120111946 | Golant | May 2012 | A1 |
20120168511 | Kotlarsky et al. | Jul 2012 | A1 |
20120168512 | Kotlarsky et al. | Jul 2012 | A1 |
20120193423 | Samek | Aug 2012 | A1 |
20120197678 | Ristock et al. | Aug 2012 | A1 |
20120203647 | Smith | Aug 2012 | A1 |
20120223141 | Good et al. | Sep 2012 | A1 |
20120228382 | Havens et al. | Sep 2012 | A1 |
20120248188 | Kearney | Oct 2012 | A1 |
20120253548 | Davidson | Oct 2012 | A1 |
20120316962 | Rathod | Dec 2012 | A1 |
20130043312 | Van Horn | Feb 2013 | A1 |
20130075168 | Amundsen et al. | Mar 2013 | A1 |
20130080173 | Talwar et al. | Mar 2013 | A1 |
20130082104 | Kearney et al. | Apr 2013 | A1 |
20130090089 | Rivere | Apr 2013 | A1 |
20130175341 | Kearney et al. | Jul 2013 | A1 |
20130175343 | Good | Jul 2013 | A1 |
20130257744 | Daghigh et al. | Oct 2013 | A1 |
20130257759 | Daghigh | Oct 2013 | A1 |
20130270346 | Xian et al. | Oct 2013 | A1 |
20130287258 | Kearney | Oct 2013 | A1 |
20130292475 | Kotlarsky et al. | Nov 2013 | A1 |
20130292477 | Hennick et al. | Nov 2013 | A1 |
20130293539 | Hunt et al. | Nov 2013 | A1 |
20130293540 | Laffargue et al. | Nov 2013 | A1 |
20130306728 | Thuries et al. | Nov 2013 | A1 |
20130306731 | Pedrao | Nov 2013 | A1 |
20130307964 | Bremer et al. | Nov 2013 | A1 |
20130308625 | Park et al. | Nov 2013 | A1 |
20130313324 | Koziol et al. | Nov 2013 | A1 |
20130313325 | Wilz et al. | Nov 2013 | A1 |
20130325763 | Cantor et al. | Dec 2013 | A1 |
20130342717 | Havens et al. | Dec 2013 | A1 |
20140001267 | Giordano et al. | Jan 2014 | A1 |
20140002828 | Laffargue et al. | Jan 2014 | A1 |
20140008439 | Wang | Jan 2014 | A1 |
20140025584 | Liu et al. | Jan 2014 | A1 |
20140034734 | Sauerwein, Jr. | Feb 2014 | A1 |
20140036848 | Pease et al. | Feb 2014 | A1 |
20140039693 | Havens et al. | Feb 2014 | A1 |
20140042814 | Kather et al. | Feb 2014 | A1 |
20140049120 | Kohtz et al. | Feb 2014 | A1 |
20140049635 | Laffargue et al. | Feb 2014 | A1 |
20140058801 | Deodhar et al. | Feb 2014 | A1 |
20140061306 | Wu et al. | Mar 2014 | A1 |
20140063289 | Hussey et al. | Mar 2014 | A1 |
20140066136 | Sauerwein et al. | Mar 2014 | A1 |
20140067692 | Ye et al. | Mar 2014 | A1 |
20140070005 | Nahill et al. | Mar 2014 | A1 |
20140071840 | Venancio | Mar 2014 | A1 |
20140074746 | Wang | Mar 2014 | A1 |
20140076974 | Havens et al. | Mar 2014 | A1 |
20140078341 | Havens et al. | Mar 2014 | A1 |
20140078342 | Li et al. | Mar 2014 | A1 |
20140078345 | Showering | Mar 2014 | A1 |
20140097249 | Gomez et al. | Apr 2014 | A1 |
20140098792 | Wang et al. | Apr 2014 | A1 |
20140100774 | Showering | Apr 2014 | A1 |
20140100813 | Showering | Apr 2014 | A1 |
20140103115 | Meier et al. | Apr 2014 | A1 |
20140104413 | McCloskey et al. | Apr 2014 | A1 |
20140104414 | McCloskey et al. | Apr 2014 | A1 |
20140104416 | Giordano et al. | Apr 2014 | A1 |
20140104451 | Todeschini et al. | Apr 2014 | A1 |
20140106594 | Skvoretz | Apr 2014 | A1 |
20140106725 | Sauerwein, Jr. | Apr 2014 | A1 |
20140108010 | Maltseff et al. | Apr 2014 | A1 |
20140108402 | Gomez et al. | Apr 2014 | A1 |
20140108682 | Caballero | Apr 2014 | A1 |
20140110485 | Toa et al. | Apr 2014 | A1 |
20140114530 | Fitch et al. | Apr 2014 | A1 |
20140124577 | Wang et al. | May 2014 | A1 |
20140124579 | Ding | May 2014 | A1 |
20140125842 | Winegar | May 2014 | A1 |
20140125853 | Wang | May 2014 | A1 |
20140125999 | Longacre et al. | May 2014 | A1 |
20140129378 | Richardson | May 2014 | A1 |
20140131438 | Kearney | May 2014 | A1 |
20140131441 | Nahill et al. | May 2014 | A1 |
20140131443 | Smith | May 2014 | A1 |
20140131444 | Wang | May 2014 | A1 |
20140131445 | Ding et al. | May 2014 | A1 |
20140131448 | Xian et al. | May 2014 | A1 |
20140133379 | Wang et al. | May 2014 | A1 |
20140136208 | Maltseff et al. | May 2014 | A1 |
20140140585 | Wang | May 2014 | A1 |
20140151453 | Meier et al. | Jun 2014 | A1 |
20140152882 | Samek et al. | Jun 2014 | A1 |
20140158770 | Sevier et al. | Jun 2014 | A1 |
20140159869 | Zumsteg et al. | Jun 2014 | A1 |
20140166755 | Liu et al. | Jun 2014 | A1 |
20140166757 | Smith | Jun 2014 | A1 |
20140166759 | Liu et al. | Jun 2014 | A1 |
20140168787 | Wang et al. | Jun 2014 | A1 |
20140175165 | Havens et al. | Jun 2014 | A1 |
20140175172 | Jovanovski et al. | Jun 2014 | A1 |
20140191644 | Chaney | Jul 2014 | A1 |
20140191913 | Ge et al. | Jul 2014 | A1 |
20140195290 | Plost et al. | Jul 2014 | A1 |
20140197238 | Liu et al. | Jul 2014 | A1 |
20140197239 | Havens et al. | Jul 2014 | A1 |
20140197304 | Feng et al. | Jul 2014 | A1 |
20140203087 | Smith et al. | Jul 2014 | A1 |
20140204268 | Grunow et al. | Jul 2014 | A1 |
20140214631 | Hansen | Jul 2014 | A1 |
20140217166 | Berthiaume et al. | Aug 2014 | A1 |
20140217180 | Liu | Aug 2014 | A1 |
20140231500 | Ehrhart et al. | Aug 2014 | A1 |
20140232930 | Anderson | Aug 2014 | A1 |
20140247315 | Marty et al. | Sep 2014 | A1 |
20140263493 | Amurgis et al. | Sep 2014 | A1 |
20140263645 | Smith et al. | Sep 2014 | A1 |
20140267609 | Franck | Sep 2014 | A1 |
20140270196 | Braho et al. | Sep 2014 | A1 |
20140270229 | Braho | Sep 2014 | A1 |
20140278387 | Digregorio | Sep 2014 | A1 |
20140278391 | Braho et al. | Sep 2014 | A1 |
20140282210 | Bianconi | Sep 2014 | A1 |
20140284384 | Lu et al. | Sep 2014 | A1 |
20140288933 | Braho et al. | Sep 2014 | A1 |
20140297058 | Barker et al. | Oct 2014 | A1 |
20140299665 | Barber et al. | Oct 2014 | A1 |
20140312121 | Lu et al. | Oct 2014 | A1 |
20140319220 | Coyle | Oct 2014 | A1 |
20140319221 | Oberpriller et al. | Oct 2014 | A1 |
20140326787 | Barten | Nov 2014 | A1 |
20140330606 | Paget et al. | Nov 2014 | A1 |
20140332590 | Wang et al. | Nov 2014 | A1 |
20140344943 | Todeschini et al. | Nov 2014 | A1 |
20140346233 | Liu et al. | Nov 2014 | A1 |
20140351317 | Smith et al. | Nov 2014 | A1 |
20140353373 | Van et al. | Dec 2014 | A1 |
20140361073 | Qu et al. | Dec 2014 | A1 |
20140361082 | Xian et al. | Dec 2014 | A1 |
20140362184 | Jovanovski et al. | Dec 2014 | A1 |
20140363015 | Braho | Dec 2014 | A1 |
20140369511 | Sheerin et al. | Dec 2014 | A1 |
20140374483 | Lu | Dec 2014 | A1 |
20140374485 | Xian et al. | Dec 2014 | A1 |
20150001301 | Ouyang | Jan 2015 | A1 |
20150001304 | Todeschini | Jan 2015 | A1 |
20150003673 | Fletcher | Jan 2015 | A1 |
20150009338 | Laffargue et al. | Jan 2015 | A1 |
20150009610 | London et al. | Jan 2015 | A1 |
20150014416 | Kotlarsky et al. | Jan 2015 | A1 |
20150021397 | Rueblinger et al. | Jan 2015 | A1 |
20150028102 | Ren et al. | Jan 2015 | A1 |
20150028103 | Jiang | Jan 2015 | A1 |
20150028104 | Ma et al. | Jan 2015 | A1 |
20150029002 | Yeakley et al. | Jan 2015 | A1 |
20150032709 | Maloy et al. | Jan 2015 | A1 |
20150039309 | Braho et al. | Feb 2015 | A1 |
20150039878 | Barten | Feb 2015 | A1 |
20150040378 | Saber et al. | Feb 2015 | A1 |
20150048168 | Fritz et al. | Feb 2015 | A1 |
20150049347 | Laffargue et al. | Feb 2015 | A1 |
20150051992 | Smith | Feb 2015 | A1 |
20150053766 | Havens et al. | Feb 2015 | A1 |
20150053768 | Wang et al. | Feb 2015 | A1 |
20150053769 | Thuries et al. | Feb 2015 | A1 |
20150060544 | Feng et al. | Mar 2015 | A1 |
20150062366 | Liu et al. | Mar 2015 | A1 |
20150063215 | Wang | Mar 2015 | A1 |
20150063676 | Lloyd et al. | Mar 2015 | A1 |
20150069130 | Gannon | Mar 2015 | A1 |
20150071819 | Todeschini | Mar 2015 | A1 |
20150083800 | Li et al. | Mar 2015 | A1 |
20150086114 | Todeschini | Mar 2015 | A1 |
20150088522 | Hendrickson et al. | Mar 2015 | A1 |
20150096872 | Woodburn | Apr 2015 | A1 |
20150099557 | Pettinelli et al. | Apr 2015 | A1 |
20150100196 | Hollifield | Apr 2015 | A1 |
20150102109 | Huck | Apr 2015 | A1 |
20150115035 | Meier et al. | Apr 2015 | A1 |
20150127791 | Kosecki et al. | May 2015 | A1 |
20150128116 | Chen et al. | May 2015 | A1 |
20150129659 | Feng et al. | May 2015 | A1 |
20150133047 | Smith et al. | May 2015 | A1 |
20150134470 | Hejl et al. | May 2015 | A1 |
20150136851 | Harding et al. | May 2015 | A1 |
20150136854 | Lu et al. | May 2015 | A1 |
20150142492 | Kumar | May 2015 | A1 |
20150144692 | Hejl | May 2015 | A1 |
20150144698 | Teng et al. | May 2015 | A1 |
20150144701 | Xian et al. | May 2015 | A1 |
20150149946 | Benos et al. | May 2015 | A1 |
20150161429 | Xian | Jun 2015 | A1 |
20150169925 | Chen et al. | Jun 2015 | A1 |
20150169929 | Williams et al. | Jun 2015 | A1 |
20150178523 | Gelay et al. | Jun 2015 | A1 |
20150178534 | Jovanovski et al. | Jun 2015 | A1 |
20150178535 | Bremer et al. | Jun 2015 | A1 |
20150178536 | Hennick et al. | Jun 2015 | A1 |
20150178537 | El et al. | Jun 2015 | A1 |
20150181093 | Zhu et al. | Jun 2015 | A1 |
20150181109 | Gillet et al. | Jun 2015 | A1 |
20150186703 | Chen et al. | Jul 2015 | A1 |
20150193268 | Layton et al. | Jul 2015 | A1 |
20150193644 | Kearney et al. | Jul 2015 | A1 |
20150193645 | Colavito et al. | Jul 2015 | A1 |
20150199957 | Funyak et al. | Jul 2015 | A1 |
20150204671 | Showering | Jul 2015 | A1 |
20150210199 | Payne | Jul 2015 | A1 |
20150220753 | Zhu et al. | Aug 2015 | A1 |
20150236984 | Sevier | Aug 2015 | A1 |
20150254485 | Feng et al. | Sep 2015 | A1 |
20150261643 | Caballero et al. | Sep 2015 | A1 |
20150302859 | Aguilar et al. | Oct 2015 | A1 |
20150312780 | Wang et al. | Oct 2015 | A1 |
20150324623 | Powilleit | Nov 2015 | A1 |
20150327012 | Bian et al. | Nov 2015 | A1 |
20160014251 | Hejl | Jan 2016 | A1 |
20160040982 | Li et al. | Feb 2016 | A1 |
20160042241 | Todeschini | Feb 2016 | A1 |
20160057230 | Todeschini et al. | Feb 2016 | A1 |
20160092805 | Geisler et al. | Mar 2016 | A1 |
20160109219 | Ackley et al. | Apr 2016 | A1 |
20160109220 | Laffargue et al. | Apr 2016 | A1 |
20160109224 | Thuries et al. | Apr 2016 | A1 |
20160112631 | Ackley et al. | Apr 2016 | A1 |
20160112643 | Laffargue et al. | Apr 2016 | A1 |
20160117627 | Raj et al. | Apr 2016 | A1 |
20160124516 | Schoon et al. | May 2016 | A1 |
20160125217 | Todeschini | May 2016 | A1 |
20160125342 | Miller et al. | May 2016 | A1 |
20160125873 | Braho et al. | May 2016 | A1 |
20160133253 | Braho et al. | May 2016 | A1 |
20160171720 | Todeschini | Jun 2016 | A1 |
20160178479 | Goldsmith | Jun 2016 | A1 |
20160180678 | Ackley et al. | Jun 2016 | A1 |
20160189087 | Morton et al. | Jun 2016 | A1 |
20160227912 | Oberpriller et al. | Aug 2016 | A1 |
20160232891 | Pecorari | Aug 2016 | A1 |
20160253023 | Aoyama | Sep 2016 | A1 |
20160292477 | Bidwell | Oct 2016 | A1 |
20160294779 | Yeakley et al. | Oct 2016 | A1 |
20160306769 | Kohtz et al. | Oct 2016 | A1 |
20160314276 | Wilz et al. | Oct 2016 | A1 |
20160314294 | Kubler et al. | Oct 2016 | A1 |
20160377414 | Thuries et al. | Dec 2016 | A1 |
20170011735 | Kim et al. | Jan 2017 | A1 |
20170060320 | Li | Mar 2017 | A1 |
20170069288 | Kanishima et al. | Mar 2017 | A1 |
20170076720 | Gopalan et al. | Mar 2017 | A1 |
20170200108 | Au et al. | Jul 2017 | A1 |
20180091654 | Miller et al. | Mar 2018 | A1 |
20180204128 | Avrahami et al. | Jul 2018 | A1 |
20190114572 | Gold et al. | Apr 2019 | A1 |
20190124388 | Schwartz | Apr 2019 | A1 |
20190250882 | Swansey et al. | Aug 2019 | A1 |
20190354911 | Alaniz et al. | Nov 2019 | A1 |
20190370721 | Issac | Dec 2019 | A1 |
20200265828 | Hendrickson et al. | Aug 2020 | A1 |
20200311650 | Xu et al. | Oct 2020 | A1 |
20200342420 | Zatta et al. | Oct 2020 | A1 |
20210117901 | Raj et al. | Apr 2021 | A1 |
20220013137 | Hardek | Jan 2022 | A1 |
Number | Date | Country |
---|---|---|
3005795 | Feb 1996 | AU |
9404098 | Apr 1999 | AU |
3372199 | Oct 1999 | AU |
0867857 2 | Sep 1998 | EP |
0905677 | Mar 1999 | EP |
1011094 | Jun 2000 | EP |
1377000 | Jan 2004 | EP |
3009968 | Apr 2016 | EP |
63-179398 | Jul 1988 | JP |
64-004798 | Jan 1989 | JP |
04-296799 | Oct 1992 | JP |
06-059828 | Mar 1994 | JP |
06-095828 | Apr 1994 | JP |
06-130985 | May 1994 | JP |
06-161489 | Jun 1994 | JP |
07-013591 | Jan 1995 | JP |
07-199985 | Aug 1995 | JP |
11-175096 | Jul 1999 | JP |
2000-181482 | Jun 2000 | JP |
2001-042886 | Feb 2001 | JP |
2001-343992 | Dec 2001 | JP |
2001-343994 | Dec 2001 | JP |
2002-328696 | Nov 2002 | JP |
2003-177779 | Jun 2003 | JP |
2004-126413 | Apr 2004 | JP |
2004-334228 | Nov 2004 | JP |
2005-173157 | Jun 2005 | JP |
2005-331882 | Dec 2005 | JP |
2006-058390 | Mar 2006 | JP |
9602050 | Jan 1996 | WO |
9916050 | Apr 1999 | WO |
9950828 | Oct 1999 | WO |
0211121 | Feb 2002 | WO |
2005119193 | Dec 2005 | WO |
2006031752 | Mar 2006 | WO |
2013163789 | Nov 2013 | WO |
2013173985 | Nov 2013 | WO |
2014019130 | Feb 2014 | WO |
2014110495 | Jul 2014 | WO |
Entry |
---|
US 8,548,242 B1, 10/2013, Longacre (withdrawn) |
US 8,616,454 B2, 12/2013, Havens et al. (withdrawn) |
D. Barchiesi, D. Giannoulis, D. Stowell and M. D. Plumbley, “Acoustic Scene Classification: Classifying environments from the sounds they produce,” in IEEE Signal Processing Magazine, vol. 32, No. 3, pp. 16-34, May 2015, doi: 10.1109/MSP.2014.2326181. (Year: 2015) (Year: 2015). |
A. Gupta, N. Patel and S. Khan, “Automatic speech recognition technique for voice command,” 2014 International Conference on Science Engineering and Management Research (ICSEMR), 2014, pp. 1-5, doi: 10.1109/ICSEMR.2014.7043641. (Year: 2014) (Year: 2014). |
D. Barchiesi, D. Giannoulis, D. Stowell and M. D. Plumbley, “Acoustic Scene Classification: Classifying environments from the sounds they produce,” in IEEE Signal Processing Magazine, vol. 32, No. 3, pp. 16-34, May 2015, doi: 10.1109/MSP.2014.2326181. (Year: 2015) (Year: 2015 (Year: 2015). |
A. Gupta, N. Patel and S. Khan, “Automatic speech recognition technique for voice command,” 2014 International Conference on Science Engineering and Management Research (ICSEMR), 2014, pp. 1-5, doi: 10.1109/ICSEMR.2014.7043641. (Year: 2014) (Year: 2014) (Year: 2014). |
Voxware, Inc., “Voxware VMS, Because nothing short of the best will do,” Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VMS-w.pdf> on May 26, 2023, 2 pages. |
Voxware, Inc., “Voxware VoiceLogistics, Voice Solutions for Logistics Excellence,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191653/http://www.voxware.com/media/pdf/Product_Literature_VoiceLogistics_03.pdf> on May 26, 2023, 5 pages. |
Voxware, Inc., “Voxware VoxConnect, Make Integrating Voice and WMS Fast and Fluid,” Brochure, Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VoxConnect-w.pdf> on May 25, 2023, 2 pages. |
Voxware, Inc., “Voxware VoxPilot, Get 10-15% more productivity and drive critical decisions with insights from VoxPilot,” Copyright 2019, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2019/01/Voxware-VoxPilot-w.pdf> on May 26, 2023, 2 pages. |
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Jury Trial Demanded: First Amended Complaint for Declaratory Judgment of No Patent Infringement, Patent Invalidity, and Patent Unenforceability, Violation of Antitrust Laws, Deceptive Trade Practices, Unfair Competition, and Tortious Interference with Prospective Business Relations, Apr. 26, 2023, 66 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA). |
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Demand for Jury Trial: Defendants Answer, Defenses, and Counterclaims, Mar. 29, 2023, 43 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA). |
Voxware, Inc., v. Honeywell International Inc., Hand Held Products, Inc., Intermec Inc., and Vocollect, Inc., Jury Trial Demanded: Complaint for Declaratory Judgment of No Patent Infringement, Patent Invalidity, and Patent Unenforceability, Violation of Antitrust Laws, Deceptive Trade Practices, Unfair Competition, and Tortious Interference with Prospective Business Relations, Jan. 17, 2023, 44 pages, In the U.S. District Court for the District of Delaware, C.A. No. 23-052 (RGA). |
Voxware.com, “Voice Directed Picking Software for Warehouses”, retrieved from the Internet at: <https://www.voxware.com/voxware-vms/> on May 25, 2023, 11 pages. |
Worldwide Testing Services (Taiwan) Co., Ltd., Registration No. W6D21808-18305-FCC, FCC ID: SC6BTH430, External Photos, Appendix pp. 2-5, retrieved from the Internet at: <https://fccid.io/SC6BTH430/External-Photos/External-Photos-4007084.pdf> on May 25, 2023, 4 pages. |
Y. Muthusamy, R. Agarwal, Yifan Gong and V. Viswanathan, “Speech-enabled information retrieval in the automobile environment,” 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258), 1999, pp. 2259-2262 vol. 4. (Year: 1999). |
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 28, 2021 for U.S. Appl. No. 16/695,555, 9 page(s). |
Notice of Allowance and Fees Due (PTOL-85) dated Jun. 29, 2023 for U.S. Appl. No. 16/869,228, 10 page(s). |
Notice of Allowance and Fees Due (PTOL-85) dated Mar. 1, 2017 for U.S. Appl. No. 14/561,648, 9 page(s). |
Notice of Allowance and Fees Due (PTOL-85) dated Mar. 11, 2020 for U.S. Appl. No. 15/220,584, 9 page(s). |
Notice of Allowance and Fees Due (PTOL-85) dated May 20, 2020 for U.S. Appl. No. 15/635,326. |
Notice of Allowance and Fees Due (PTOL-85) dated Sep. 4, 2019 for U.S. Appl. No. 15/220,584, 9 page(s). |
Notice of Allowance and Fees Due (PTOL-85) dated Sep. 23, 2020 for U.S. Appl. No. 14/880,482. |
Office Action in related European Application No. 15189657.8 dated May 12, 2017, pp. 1-6. |
Osamu Segawa, Kazuya Takeda, An Information Retrieval System for Telephone Dialogue in Load Dispatch Center, IEEJ Trans, EIS, Sep. 1, 2005, vol. 125, No. 9, pp. 1438-1443. (Abstract Only). |
Result of Consultation (Interview Summary) received for EP Application No. 15189657.8, dated Nov. 19, 2018, 4 pages. |
Roberts, Mike, et al., “Intellestra: Measuring What Matters Most,” Voxware Webinar, dated Jun. 22, 2016, retrieved from the Internet at <https://vimeo.com/195626331> on May 26, 2023, 4 pages. |
Search Report and Written Opinion in counterpart European Application No. 15189657.8 dated Feb. 5, 2016, pp. 1-7. |
Silke Goronzy, Krzysztof Marasek, Ralf Kompe, Semi-Supervised Speaker Adaptation, in Proceedings of the Sony Research Forum 2000, vol. 1, Tokyo, Japan, 2000. |
Smith, Ronnie W., An Evaluation of Strategies for Selective Utterance Verification for Spoken Natural Language Dialog, Proc. Fifth Conference on Applied Natural Language Processing (ANLP), 1997, 41-48. |
Summons to attend Oral Proceedings for European Application No. 15189657.9, dated Jan. 3, 2019, 2 pages. |
Summons to attend Oral Proceedings pursuant to Rule 115(1) EPC received for EP Application No. 15189657.8, dated Jul. 6, 2018, 11 pages. |
T. B. Martin, “Practical applications of voice input to machines,” in Proceedings of the IEEE, vol. 64, No. 4, pp. 487-501, Apr. 1976, doi: 10.1109/PROC.1976.10157. (Year: 1976). |
T. Kuhn, A. Jameel, M. Stumpfle and A. Haddadi, “Hybrid in-car speech recognition for mobile multimedia applications,” 1999 IEEE 49th Vehicular Technology Conference (Cat. No. 99CH36363), 1999, pp. 2009-2013 vol. 3. (Year: 1999). |
U.S. Patent Application for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed Feb. 7, 2012 (Feng et al.), U.S. Appl. No. 13/367,978, abandoned. |
U.S. Patent Application for Indicia Reader filed Apr. 1, 2015 (Huck), U.S. Appl. No. 14/676,109, abandoned. |
U.S. Patent Application for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages; now abandoned, U.S. Appl. No. 14/446,391. |
U.S. Patent Application for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages; now abandoned, U.S. Appl. No. 14/283,282. |
U.S. Patent Application for “Systems and Methods for Dynamically Improving User Intelligibility of Synthesized Speech in a Work Environment”, Unpublished (Filing Date Jun. 2, 2023), (James Hendrickson, Inventor), (Vocollect, Inc.), U.S. Appl. No. 18/328,189. |
U.S. Patent Application for “Systems and Methods for Worker Resource Management”, Unpublished (Filing Date Jun. 1, 2023), (Mohit Raj, Inventor), (Vocollect, Inc., Assignee), U.S. Appl. No. 18/327,673. |
Voxware Inc., “Voxware Headsets, Portfolio, Features & Specifications,” Brochure, Sep. 2011, retrieved from the Internet at <http://webshop.advania.se/pdf/9FEB1CF7-2B40-4A63-8644-471F2D282B65.pdf> on May 25, 2023, 4 pages. |
Voxware, “People . . . Power . . . Performance,” Product Literature, captured Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191729/http://www.voxware.com/media/pdf/Product_Literature_Company_02.pdf> on May 26, 2023, 3 pages. |
Voxware, “The Cascading Benefits of Multimodal Automation in Distribution Centers,” retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2020/12/Voxware-Cascading-Benefits.pdf> on May 26, 2023, 14 pages. |
Voxware, “Voice in the Warehouse: The Hidden Decision, Making the Open and Shut Case”, White Paper, Copyright 2008, retrieved from the Internet at: <https://www.voxware.com/wp-content/uploads/2016/11/Voice_in_the_Warehouse-The_Hidden_Decision.pdf> on May 25, 2023, 3 pages. |
Voxware, “Voice-Directed Results, VoiceLogistics Helps Dunkin' Donuts Deliver,” Case Study, captured on Oct. 15, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20061015223800/http://www.voxware.com/fileadmin/Download_Center/Case_Studies/VoiceLogistics_Helps_Dunkin_Donuts_Deliver.pdf> on May 26, 2023, 3 pages. |
Voxware, “VoiceLogistics Results, Reed Boardall Doesn't Leave Customers Out in the Cold!,” Case Study, captured on Oct. 15, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20061015223031/http://www.voxware.com/fileadmin/Download_Center/Case_Studies/Reed_Boardall_Doesn_t_Leave_Customers_in_the_Cold.pdf> on May 26, 2023, 3 pages. |
Voxware, “VoxConnect, Greatly simplify the integration of your voice solution,” retrieved from the Internet at <https://www.voxware.com/voxware-vms/voxconnect/> on May 26, 2023, 4 pages. |
Voxware, “VoxPilot, Supply Chain Analytics,” retrieved from the Internet at <https://www.voxware.com/supply-chain-analytics/> on May 26, 2023, 8 pages. |
Voxware, “Voxware Intellestra provides real-time view of data across supply chain,” Press Release, dated Apr. 14, 2015, retrieved from the Internet at <https://www.fleetowner.com/refrigerated-transporter/cold-storage-logistics/article/21229403/voxware-intellestra-provides-realtime-view-of-data-across-entire-supply-chain> on May 26, 2023, 2 pages. |
Voxware, “Voxware Intellestra, What if supply chain managers could see the future?”, Brochure, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2017/04/Voxware-Intellestra-w.pdf> on May 26, 2023, 2 pages. |
Voxware, “Why Cloud VMS, All of voice's benefits with a faster ROI: Cloud VMS,” retrieved from the Internet at <https://www.voxware.com/voxware-vms/why-cloud-vms/> on May 26, 2023, 4 pages. |
Voxware, Inc., “4-Bay Smart Charger,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191719/http://www.voxware.com/media/pdf/Smart_Charger_01.pdf> on May 26, 2023, 3 pages. |
Voxware, Inc., “Bluetooth Modular Headset, Single-Ear (Mono) BT HD, BTH430 Quick Start Guide v.1” retrieved from the Internet at <https://usermanual.wiki/Voxware/BTH430/pdf> on May 25, 2023, 12 pages. |
Voxware, Inc., “Certified Client Devices for Voxware VMS Voice Solutions,” Product Sheets, Effective Feb. 2012, retrieved from the Internet at <https://docplayer.net/43814384-Certified-client-devices-for-voxware-vms-voice-solutions-effective-february-2012.html> on May 26, 2023, 30 pages. |
Voxware, Inc., “Dispelling Myths About Voice in the Warehouse: Maximizing Choice and Control Across the 4 Key Components of Every Voice Solution”, White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Dispelling_Myths.pdf> on May 25, 2023, 6 pages. |
Voxware, Inc., “Innovative Voice Solutions Powered by Voxware, Broadening the Role of Voice in Supply Chain Operations,” Product Literature, Copyright 2005, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191628/http://www.voxware.com/media/pdf/VoxBrowserVoxManager_02.pdf> on May 26, 2023, 5 pages. |
Voxware, Inc., “Intellestra BI & Analytics,” Product Sheet, Copyright 2015, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/12/Voxware_Intellestra_Product_Overview.pdf> on May 26, 2023, 1 page. |
Voxware, Inc., “Is Your Voice Solution Engineered For Change?”, White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_Engineered_For_Change.pdf> on May 25, 2023, 9 pages. |
Voxware, Inc., “MX3X—VoiceLogistics on a Versatile Platform”, Product Literature, Copyright 2004, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191822/http://www.voxware.com/media/pdf/LXE_MX3X_01.pdf> on May 26, 2023, 2 pages. |
Voxware, Inc., “Optimizing Work Performance, Voice-Directed Operations in the Warehouse,” White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_OptimizingWorkerPerformance.pdf> on May 25, 2023, 6 pages. |
Voxware, Inc., “VLS-410 >>Wireless Voice Recognition<<,” Product Literature, Copyright 2004, captured on Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191604/http://www.voxware.com/media/pdf/VLS-410_05.pdf> on May 26, 2023, 3 pages. |
Voxware, Inc., “Voice in the Cloud: Opportunity for Warehouse Optimization,” White Paper, Copyright 2012, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Vox_whitepaper_VoiceCloud.pdf> on May 26, 2023, 7 pages. |
Voxware, Inc., “Voice in the Warehouse: Does the Recognizer Matter? Why You Need 99.9% Recognition Accuracy,” White Paper, Copyright 2010, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/WhitePaper_Recognizer.pdf> on May 25, 2023, 7 pages. |
Voxware, Inc., “VoiceLogistics, Technology Architecture,” Product Literature, Copyright 2003, captured Mar. 14, 2006 by the Internet Archive WayBack Machine, retrieved from the Internet at <https://web.archive.org/web/20060314191745/http://www.voxware.com/media/pdf/Product_Literature_VLS_Architechture_02.pdf> on May 26, 2023, 5 pages. |
Voxware, Inc., “VoxPilot, Active Decision Support for Warehouse Voice,” Brochure, Copyright 2012, retrieved from the Internet at <https://voxware.com/wp-content/uploads/2016/11/Solutions_VoxApp_VoxPilot_2.pdf> on May 26, 2023, 2 pages. |
Voxware, Inc., “Voxware Integrated Speech Engine Adapts to Your Workforce and Your Warehouse,” Brochure, Copyright 2021, retrieved from the Internet at <https://www.voxware.com/wp-content/uploads/2016/11/Vox_product_VISE_Recognition_Engine.pdf> on May 25, 2023, 2 pages. |
U.S. Appl. No. 17/449,213, filed Sep. 28, 2021, 2022-0013137. |
U.S. Appl. No. 16/695,555, filed Nov. 26, 2019, U.S. Pat. No. 11,158,336. |
U.S. Appl. No. 15/220,584, filed Jul. 27, 2016, U.S. Pat. No. 10,714,121. |
U.S. Patent Application for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages, U.S. Appl. No. 14/277,337, abandoned. |
A. Gupta, N. Patel and S. Khan, “Automatic speech recognition technique for voice command,” 2014 International Conference on Science Engineering and Management Research (ICSEMR), 2014, pp. 1-5, doi: 10.1109/ICSEMR.2014.7043641. (Year: 2014). |
A. L. Kun, W. T. Miller and W. H. Lenharth, “Evaluating the user interfaces of an integrated system of in-car electronic devices,” Proceedings. 2005 IEEE Intelligent Transportation Systems, 2005., 2005, pp. 953-958. (Year: 2005). |
A. L. Kun, W. T. Miller, A. Pelhe and R. L. Lynch, “A software architecture supporting in-car speech interaction,” IEEE Intelligent Vehicles Symposium, 2004, 2004, pp. 471-476. (Year: 2004). |
Abel Womack, “Voxware announces sales partnership with Buton eBusiness Solutions”, retrieved from the Internet at <https://www.abelwomack.com/voxware-announces-sales-partnership-with-buton-ebusiness-solutions/> on May 26, 2023, 2 pages. |
Advisory Action (PTOL-303) Mailed on Oct. 18, 2022 for U.S. Appl. No. 17/111,164, 3 page(s). |
Annex to the communication Mailed on Jan. 3, 2019 for EP Application No. 15189657, 1 page(s). |
Annex to the communication Mailed on Jul. 6, 2018 for EP Application No. 15189657, 6 page(s). |
Annex to the communication Mailed on Nov. 19, 2018 for EP Application No. 15189657, 2 page(s). |
Applicant Initiated Interview Summary (PTOL-413) Mailed on Jun. 15, 2020 for U.S. Appl. No. 15/220,584. |
Chengyi Zheng and Yonghong Yan, “Improving Speaker Adaptation by Adjusting the Adaptation Data Set”; 2000 IEEE International Symposium on Intelligent Signal Processing and Communication Systems, Nov. 5-8, 2000. |
Christensen, “Speaker Adaptation of Hidden Markov Models using Maximum Likelihood Linear Regression”, Thesis, Aalborg University, Apr. 1996. |
D. Barchiesi, D. Giannoulis, D. Stowell and M. D. Plumbley, “Acoustic Scene Classification: Classifying environments from the sounds they produce,” in IEEE Signal Processing Magazine, vol. 32, No. 3, pp. 16-34, May 2015, doi: 10.1109/MSP.2014.2326181. (Year: 2015). |
DC Velocity Staff, “Voxware shows Intellestra supply chain analytics tool”, dated Apr. 6, 2016, retrieved from the Internet at <https://www.dcvelocity.com/articles/31486-voxware-shows-intellestra-supply-chain-analytics-tool> on May 26, 2023, 7 pages. |
Decision to Refuse European Application No. 15189657.8, dated Jan. 3, 2019, 10 pages. |
Decision to Refuse European Application No. 15189657.8, dated Jul. 6, 2018, 2 pages. |
E. Erzin, Y. Yemez, A. M. Tekalp, A. Ercil, H. Erdogan and H. Abut, “Multimodal person recognition for human-vehicle interaction,” in IEEE MultiMedia, vol. 13, No. 2, pp. 18-31, Apr.-Jun. 2006. (Year: 2006). |
Examiner initiated interview summary (PTOL-413B) Mailed on Apr. 11, 2017 for U.S. Appl. No. 14/561,648, 1 page(s). |
Examiner initiated interview summary (PTOL-413B) Mailed on Sep. 14, 2018 for U.S. Appl. No. 15/220,584, 1 page(s). |
Examiner Interview Summary Record (PTOL-413) Mailed on Mar. 26, 2021 for U.S. Appl. No. 16/695,555. |
Examiner Interview Summary Record (PTOL-413) Mailed on Oct. 18, 2022 for U.S. Appl. No. 17/111,164, 1 page(s). |
Final Rejection Mailed on Apr. 13, 2023 for U.S. Appl. No. 16/869,228, 45 page(s). |
Final Rejection Mailed on Aug. 7, 2019 for U.S. Appl. No. 15/635,326, 37 page(s). |
Final Rejection Mailed on Jul. 25, 2022 for U.S. Appl. No. 17/111,164, 22 page(s). |
Final Rejection Mailed on Jun. 5, 2019 for U.S. Appl. No. 15/220,584, 14 page(s). |
Final Rejection Mailed on May 7, 2020 for U.S. Appl. No. 14/880,482. |
Final Rejection Mailed on May 30, 2019 for U.S. Appl. No. 14/880,482. |
Jie Yi, Kei Miki, Takashi Yazu, Study of Speaker Independent Continuous Speech Recognition, Oki Electric Research and Development, Oki Electric Industry Co., Ltd., Apr. 1, 1995, vol. 62, No. 2, pp. 7-12. |
Kellner, A., et al., Strategies for Name Recognition in Automatic Directory Assistance Systems, Interactive Voice Technology for Telecommunications Applications, IVTTA '98 Proceedings, 1998 IEEE 4th Workshop, Sep. 29, 1998. Submitted previously in related application prosecution. |
Marc Glassman, Inc. Deploys Vocollect Voice on Psion Teklogix Workabout Pro; HighJump WMS Supports Open Voice Platform, PR Newswire [New York], Jan. 8, 2007. (Year: 2007). |
Material Handling Wholesaler, “Buton and Voxware announce value-added reseller agreement,” retrieved from the Internet at <https://www.mhwmag.com/shifting-gears/buton-and-voxware-announce-value-added-reseller-agreement/> on May 26, 2023, 4 pages. |
Minutes of the Oral Proceeding before the Examining Division received for EP Application No. 15189657.8, dated Jan. 3, 2019, 16 pages. |
Mokbel, “Online Adaptation of HMMs to Real-Life Conditions: A Unified Framework”, IEEE Trans. on Speech and Audio Processing, May 2001. |
Non-Final Rejection Mailed on Feb. 4, 2022 for U.S. Appl. No. 17/111,164, 21 page(s). |
Non-Final Rejection Mailed on Jan. 18, 2023 for U.S. Appl. No. 17/111,164. |
Non-Final Rejection Mailed on Mar. 1, 2019 for U.S. Appl. No. 15/220,584, 12 page(s). |
Non-Final Rejection Mailed on Mar. 21, 2019 for U.S. Appl. No. 15/635,326, 31 page(s). |
Non-Final Rejection Mailed on Mar. 26, 2021 for U.S. Appl. No. 16/695,555. |
Non-Final Rejection Mailed on Nov. 1, 2018 for U.S. Appl. No. 14/880,482. |
Non-Final Rejection Mailed on Nov. 1, 2019 for U.S. Appl. No. 15/635,326, 8 page(s). |
Non-Final Rejection Mailed on Nov. 14, 2019 for U.S. Appl. No. 14/880,482. |
Non-Final Rejection Mailed on Oct. 4, 2021 for U.S. Appl. No. 17/111,164, 19 page(s). |
Non-Final Rejection Mailed on Oct. 14, 2022 for U.S. Appl. No. 16/869,228, 42 page(s). |
Non-Final Rejection Mailed on Oct. 31, 2022 for U.S. Appl. No. 17/449,213, 5 page(s). |
Non-Final Rejection Mailed on Sep. 8, 2016 for U.S. Appl. No. 14/561,648, 20 page(s). |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Apr. 11, 2017 for U.S. Appl. No. 14/561,648. |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Aug. 15, 2014 for U.S. Appl. No. 13/474,921. |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Feb. 10, 2020 for U.S. Appl. No. 15/635,326. |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Feb. 28, 2023 for U.S. Appl. No. 17/449,213. |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Jun. 15, 2020 for U.S. Appl. No. 15/220,584. |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Jun. 20, 2023 for U.S. Appl. No. 17/449,213, 10 page(s). |
J. Odell and K. Mukerjee, “Architecture, User Interface, and Enabling Technology in Windows Vista's Speech Systems,” in IEEE Transactions on Computers, vol. 56, No. 9, pp. 1156-1168, Sep. 2007, doi: 10.1109/TC.2007.1065. (Year: 2007). |
Lukowicz, Paul, et al. “Wearit@work: Toward real-world industrial wearable computing.” IEEE Pervasive Computing 6.4 (Oct.-Dec. 2007): pp. 8-13. (Year: 2007). |
Non-Final Rejection Mailed on Aug. 17, 2023 for U.S. Appl. No. 18/327,673, 25 page(s). |
Non-Final Rejection Mailed on Aug. 17, 2023 for U.S. Appl. No. 18/328,189, 14 page(s). |
Roger G. Byford, “Voice System Technologies and Architecture”, A White Paper by Roger G. Byford, CTO, Vocollect, published May 10, 2003, retrieved from the Internet Archive Wayback Machine at <https://web.archive.org/web/20030510234253/http://www.vocollect.com/productsNoiceTechWP.pdf>, 16 pages. (Year: 2003). |
S. Furui, “Speech recognition technology in the ubiquitous/wearable computing environment,” 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), Istanbul, Turkey, Jun. 5-9, 2000, pp. 3735-3738 vol.6, doi: 10.1109/ICASSP.2000.860214. (Year: 2000). |
V. Stanford, “Wearable computing goes live in industry,” in IEEE Pervasive Computing, vol. 1, No. 4, pp. 14-19, Oct.-Dec. 2002, doi: 10.1109/MPRV.2002.1158274. (Year: 2002). |
W. Kurschl, S. Mitsch, R. Prokop and J. Schoenboeck, “Gulliver-A Framework for Building Smart Speech-Based Applications,” 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07), Waikoloa, HI, USA, Jan. 2007, 8 pages, doi: 10.1109/HICSS.2007.243. (Year: 2007). |
Exhibit 16—U.S. Pat. No. 6,662,163 (“Albayrak”), Initial Invalidity Chart for U.S. Pat. No. 8,914,290 (the “'290 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 53 pages. |
Exhibit 17—2012 Vocollect Voice Solutions Brochure in view of 2012 VoiceArtisan Brochure, in further view of August 2013 VoiceConsole 5.0 Implementation Guide, and in further view of 2011 VoiceConsole Brochure, Initial Invalidity Chart for U.S. Pat. No. 10,909,490 (the “'490 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 72 pages. |
Exhibit 18—Vocollect's Pre-Oct. 15, 2013 Vocollect Voice Solution, Initial Invalidity Chart for U.S. Pat. No. 10,909,490 (the “'490 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 76 pages. |
Exhibit 21—Vocollect's Pre-Feb. 4, 2004 Talkman Management System, Initial Invalidity Chart for U.S. Pat. No. 11,158,336 (the “'336 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 85 pages. |
Exhibit 22—the Talkman T2 Manual, Initial Invalidity Chart for U.S. Pat. No. 11,158,336 (the “'336 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 86 pages. |
Exhibit VOX001914—Voxware VLS-410 Wireless Voice Recognition, brochure, copyright 2004, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 2 pages. |
Exhibit VOX001917—Voxbeans User Manual, Version 1, Sep. 3, 2004, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 146 pages. |
Exhibit VOX002498—Appendix L: Manual, Talkman System, FCC: Part 15.247, FCC ID: MQOTT600-40300, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 187 pages. |
Exhibit VOX002692—SEC Form 10-K for Voxware, Inc., Fiscal Year Ended Jun. 30, 2001, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 66 pages. |
Exhibit VOX002833—Vocollect by Honeywell, Vocollect VoiceConsole, brochure, copyright 2011, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 2 pages. |
Exhibit VOX002835—Vocollect (Intermec), Vocollect VoiceArtisan, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 6 pages. |
Exhibit VOX002908—Appendix K: Manual, Vocollect Hardware Documentation, Model No. HBT1000-01, Aug. 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 77 pages. |
Exhibit VOX002985—Vocollect Voice Solutions, Transforming Workflow Performance with Best Practice Optimization, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 8 pages. |
Exhibit VOX002993—Vocollect VoiceConsole 5.0 Implementation Guide, Aug. 2013, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 118 pages. |
Final Rejection Mailed on Aug. 30, 2023 for U.S. Appl. No. 17/111,164, 28 page(s). |
Non-Final Office Action (Letter Restarting Period for Response) Mailed on Aug. 25, 2023 for U.S. Appl. No. 18/327,673, 26 page(s). |
Notice of Allowance and Fees Due (PTOL-85) Mailed on Sep. 6, 2023 for U.S. Appl. No. 18/328,189, 9 page(s). |
Voxware, Voxware Integrated Speech Engine (VISE), Adapts to Your Workforce and Your Warehouse, brochure, copyright 2012, Plaintiff's Initial Invalidity Contentions, Aug. 29, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 2 pages. |
Examiner Interview Summary Record (PTOL-413) Mailed on Dec. 18, 2023 for U.S. Appl. No. 17/111,164, 2 page(s). |
Exhibit 20—Albayrak in view of 2004 Voxbeans Manual, in further view of the 2012 VISE Brochure, Initial Invalidity Chart for U.S. Pat. No. 11,158,336 (the “'336 Patent”), Plaintiff's Initial Invalidity Contentions, Aug. 26, 2023, Voxware, Inc., v. Honeywell International Inc. et. al., C.A. No. 23-052-RGA (D. Del), 73 pages. |
Final Rejection Mailed on Dec. 14, 2023 for U.S. Appl. No. 18/327,673, 31 page(s). |
Requirement for Information under 37 CFR § 1.105 Mailed on Jan. 16, 2024 for U.S. Appl. No. 17/111,164, 4 page(s). |
Advisory Action (PTOL-303) Mailed on Mar. 12, 2024 for U.S. Appl. No. 18/327,673, 3 page(s). |
Examiner Interview Summary Record (PTOL-413) Mailed on Mar. 12, 2024 for U.S. Appl. No. 18/327,673, 1 page(s). |
Non-Final Rejection Mailed on May 20, 2024 for U.S. Appl. No. 18/327,673, 14 page(s). |
Number | Date | Country
---|---|---
20230317101 A1 | Oct 2023 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 17449213 | Sep 2021 | US
Child | 18328034 | | US
Parent | 16695555 | Nov 2019 | US
Child | 17449213 | | US
Parent | 15220584 | Jul 2016 | US
Child | 16695555 | | US