The application generally relates to digital audio signal processing and, more specifically, to noise suppression utilizing a machine-learning framework.
An automatic speech processing engine, including, but not limited to, an automatic speech recognition (ASR) engine, in an audio device may be used to recognize spoken words, or phonemes within the words, in order to identify spoken commands by a user. Conventional automatic speech processing is sensitive to noise present in audio signals that include user speech. Various noise reduction or noise suppression pre-processing techniques may therefore offer significant benefits to the operation of an automatic speech processing engine. For example, a modified frequency-domain representation of an audio signal may be used to compute speech-recognition features without having to perform any transformation to the time domain. In other examples, automatic speech processing techniques may be performed in the frequency domain and may include applying a real, positive gain mask to the frequency-domain representation of the audio signal before converting the signal back to a time-domain signal, which may then be fed to the automatic speech processing engine.
The gain mask may be computed to attenuate the audio signal such that background noise is decreased or eliminated to an extent, while the desired speech is preserved to an extent. Conventional noise suppression techniques may include dynamic noise power estimation to derive a local signal-to-noise ratio (SNR), which may then be used to derive the gain mask using either a formula (e.g., spectral subtraction, Wiener filter, and the like) or a data-driven approach (e.g., table lookup). The gain mask obtained in this manner may not be an optimal mask because an estimated SNR is often inaccurate, and the reconstructed time-domain signal may be very different from the clean speech signal.
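By way of illustration only, the following minimal sketch (not drawn from the disclosure itself; the function name, the noise floor, and the spectral-subtraction SNR estimate are assumptions) shows how such a conventional SNR-driven gain mask might be computed per frequency tap:

```python
import numpy as np

def wiener_gain(noisy_power, noise_power_est, floor=0.05):
    """Conventional per-tap gain from an estimated local SNR (a sketch).

    noisy_power: |X(f)|^2 of one spectral frame of the noisy signal.
    noise_power_est: running per-tap estimate of the noise power.
    floor: lower bound on the gain, limiting over-attenuation.
    """
    # Estimate the local SNR by power spectral subtraction.
    snr = np.maximum(noisy_power - noise_power_est, 0.0) / (noise_power_est + 1e-12)
    gain = snr / (1.0 + snr)        # Wiener rule
    return np.maximum(gain, floor)  # real, positive gain mask
```

As the preceding paragraph notes, the quality of such a mask is bounded by the accuracy of the noise power (and hence SNR) estimate.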
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The aspects of the present disclosure provide noise suppression techniques applicable to digital audio pre-processing for automatic speech processing systems, including but not limited to automatic speech recognition (ASR) systems. The principle of the noise suppression lies in the use of a machine-learning framework trained on cues pertaining to clean and noisy speech signals. According to exemplary embodiments, the present technology may utilize a plurality of predefined clean speech signals and a plurality of predefined noise signals to train at least one machine-learning technique and map synthetically generated noisy speech signals to the cues of clean speech signals and noise signals. The trained machine-learning technique may further be used to process and decompose real audio signals into clean speech and noise signals by extracting and analyzing cues of the real audio signal. The cues may be used to dynamically generate an appropriate gain mask, which may precisely eliminate the noise components from the real audio signal. The audio signal pre-processed in such a manner may then be applied to an automatic speech processing engine for corresponding interpretation or processing. In other aspects of the present disclosure, the machine-learning technique may enable extracting cues associated with clean automatic speech processing features, which may be used directly by the automatic speech processing engine.
According to one or more embodiments of the present disclosure, there is provided a computer-implemented method for noise suppression. The method may comprise the operations of receiving, by a first processor communicatively coupled with a first memory, first noisy speech, the first noisy speech obtained using two or more microphones. The method may further include extracting, by the first processor, one or more first cues from the first noisy speech, the first cues including cues associated with noise suppression and automatic speech processing. The automatic speech processing may be one or more of automatic speech recognition, language recognition, keyword recognition, speech confirmation, emotion detection, voice sensing, and speaker recognition. The method may further include creating clean automatic speech processing features using a mapping and the extracted one or more first cues, the clean automatic speech processing features being for use in automatic speech processing. The machine-learning technique may include one or more of a neural network, regression tree, a non-linear transform, a linear transform, and a Gaussian Mixture Model (GMM).
According to one or more embodiments of the present disclosure, there is provided yet another computer-implemented method for noise suppression. The method may include the operations of receiving, by a second processor communicatively coupled with a second memory, clean speech and noise; and producing, by the second processor, second noisy speech using the clean speech and the noise. The method may further include extracting, by the second processor, one or more second cues from the second noisy speech, the one or more second cues including cues associated with noise suppression and noisy automatic speech processing; and extracting clean automatic speech processing cues from the clean speech. The process may include generating, by the second processor, a mapping from the one or more second cues (the noise suppression cues and noisy automatic speech processing cues) to the clean automatic speech processing cues, the generating including using at least one second machine-learning technique.
The clean speech and noise may each be obtained using at least two microphones, the one or more first and second cues each including at least one of inter-microphone level difference (ILD) cues and inter-microphone phase difference (IPD) cues. The automatic speech processing may comprise one or more of automatic speech recognition, language recognition, keyword recognition, speech confirmation, emotion detection, voice sensing, and speaker recognition. The cues may further include at least one of energy at channel cues, voice activity detection (VAD) cues, spatial cues, frequency cues, Wiener gain mask estimates, pitch-based cues, periodicity-based cues, noise estimates, and context cues. The machine-learning technique may include one or more of a neural network, a regression tree, a non-linear transform, a linear transform, and a Gaussian Mixture Model (GMM).
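By way of illustration only, a minimal sketch of how ILD and IPD cues might be computed per frequency tap from the complex STFT frames of two microphones (the function name and the epsilon regularizer are assumptions, not part of the disclosure):

```python
import numpy as np

def ild_ipd_cues(X1, X2, eps=1e-12):
    """Per-tap inter-microphone level and phase difference cues (a sketch).

    X1, X2: complex STFT frames from the primary and secondary microphones.
    Returns (ILD in dB, IPD in radians), one value per frequency tap.
    """
    ild = 10.0 * np.log10((np.abs(X1) ** 2 + eps) / (np.abs(X2) ** 2 + eps))
    ipd = np.angle(X1 * np.conj(X2))  # phase of the cross-spectrum
    return ild, ipd
```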
According to one or more embodiments of the present disclosure, there is provided a system for noise suppression. An example system may include a first frequency analysis module configured to receive first noisy speech, the first noisy speech being obtained using at least two microphones; a first cue extraction module configured to extract one or more first cues from the first noisy speech, the first cues including cues associated with noise suppression and automatic speech processing; and a modification module configured to create clean automatic speech processing features using a mapping and the extracted one or more first cues, the clean automatic speech processing features being for use in automatic speech processing.
According to some embodiments, the method may include receiving, by a processor communicatively coupled with a memory, clean speech and noise, the clean speech and noise each obtained using at least two microphones; producing, by the processor, noisy speech using the clean speech and the noise; extracting, by the processor, one or more cues from the noisy speech, the cues being associated with at least two microphones; and determining, by the processor, a mapping between the cues and one or more gain coefficients using the clean speech and the noisy speech, the determining including at least one machine-learning technique.
Embodiments described herein may be practiced on any device that is configured to receive and/or provide audio, such as, but not limited to, personal computers (PCs), tablet computers, phablet computers, mobile devices, cellular phones, phone handsets, headsets, media devices, and systems for teleconferencing applications.
Other example embodiments of the disclosure and aspects will become apparent from the following description taken in conjunction with the following drawings.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
Various aspects of the subject matter disclosed herein are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspects may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to facilitate describing one or more aspects.
The techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of processors or other specially designed application-specific integrated circuits, programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of processor-executable instructions residing on a non-transitory storage medium such as a disk drive or a processor-readable medium. The methods may be implemented in software that is cloud-based.
In general, the techniques of the embodiments disclosed herein provide for digital methods for audio signal pre-processing involving noise suppression appropriate for further use in various automatic speech processing systems. The disclosed methods for noise suppression employ one or more machine-learning algorithms for mapping cues between predetermined, reference noise signals/clean speech signals and noisy speech signals. The mapping data may be used in dynamic calculation of an appropriate gain mask estimate suitable for noise suppression.
In order to obtain a better estimate of the gain mask, embodiments of the present disclosure may use various cues extracted at various places in a noise suppression (NS) system. In addition to an estimated SNR, additional cues such as an ILD, IPD, coherence, and other intermediate features extracted by blocks upstream of the gain mask generation may be used. Cues extracted from previous or following spectral frames, as well as from adjacent frequency taps, may also be used.
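As a hypothetical sketch of the last point (the context window sizes and the edge-clamping behavior are assumptions), cues from previous/following frames and adjacent frequency taps might be stacked into one feature vector per time-frequency point:

```python
import numpy as np

def stack_context(cues, n_frames=2, n_taps=1):
    """Augment per-frame, per-tap cues with temporal and spectral context.

    cues: array of shape (frames, taps). For each tap at each frame, the
    output concatenates the cue values of the surrounding frames and the
    adjacent frequency taps (edges are clamped).
    """
    frames, taps = cues.shape
    out = []
    for dt in range(-n_frames, n_frames + 1):      # previous/following frames
        t_idx = np.clip(np.arange(frames) + dt, 0, frames - 1)
        for dk in range(-n_taps, n_taps + 1):      # adjacent frequency taps
            k_idx = np.clip(np.arange(taps) + dk, 0, taps - 1)
            out.append(cues[t_idx][:, k_idx])
    return np.stack(out, axis=-1)  # (frames, taps, context size)
```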
The set of cues may then be used in a machine-learning framework, along with the “oracle” ideal gain mask (e.g., which may be extracted when the clean speech is available), to derive a mapping between the cues and the mask. The mapping may be implemented, for example, as one or more machine-learning algorithms including a non-linear transformation, linear transformation, statistical algorithms, neural networks, regression tree methods, GMMs, heuristic algorithms, support vector machine algorithms, k-nearest neighbor algorithms, and so forth. The mapping may be learned from a training database, and one such mapping may exist per frequency domain tap or per group of frequency domain taps.
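By way of illustration of this training step, a minimal sketch using one regression tree per frequency tap (the oracle-mask definition as a clipped magnitude ratio, the library, and the tree depth are assumptions; the disclosure equally contemplates neural networks, GMMs, and the other mappers listed above):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_per_tap_mappers(cue_vectors, clean_mag, noisy_mag):
    """Learn one cue-to-gain mapping per frequency tap (a sketch).

    cue_vectors: (frames, taps, n_cues) cues from the synthetic noisy speech.
    clean_mag, noisy_mag: (frames, taps) STFT magnitudes; the "oracle" ideal
    gain can be formed here because the clean speech is available.
    """
    oracle_gain = np.clip(clean_mag / (noisy_mag + 1e-12), 0.0, 1.0)
    mappers = []
    for k in range(cue_vectors.shape[1]):  # one mapper per frequency tap
        tree = DecisionTreeRegressor(max_depth=8)
        tree.fit(cue_vectors[:, k, :], oracle_gain[:, k])
        mappers.append(tree)
    return mappers
```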
During this processing, the extracted cues may be fed to the mapper, and the gain mask may be provided by the output of the mapper and applied to the noisy signal, yielding a “de-noised” spectral representation of the signal. From the spectral representation, the time-domain signal may be reconstructed and provided to the ASR engine. In further embodiments, automatic speech processing-specific cues may be derived from the spectral representation of the signal. The automatic speech processing cues may include, but are not limited to, automatic speech recognition, language recognition, keyword recognition, speech confirmation, emotion detection, voice sensing, and speaker recognition cues. The cues may be provided to the automatic speech processing engine directly, e.g., bypassing the automatic speech processing engine's front end. Although descriptions may be included by way of example to automatic speech recognition (ASR) and features thereof to help describe certain embodiments, various embodiments are not so limited and may include other automatic speech processing and features thereof.
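A hypothetical sketch of this run-time path, continuing the per-tap mappers from the training sketch above (the names are assumptions; reconstruction itself would be an inverse STFT with overlap-add):

```python
import numpy as np

def denoise_frame(mappers, frame_cues, noisy_spectrum):
    """Map cues to a gain mask and apply it to one noisy STFT frame (a sketch).

    frame_cues: (taps, n_cues) cue vectors extracted from this frame.
    noisy_spectrum: complex STFT frame of the noisy speech.
    Returns the masked, de-noised complex spectrum.
    """
    gain = np.array([m.predict(frame_cues[k:k + 1])[0]
                     for k, m in enumerate(mappers)])
    return gain * noisy_spectrum  # an inverse STFT then yields the time signal
```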
Other embodiments of the present disclosure may include working directly in the automatic speech processing feature domain, e.g., the ASR feature domain. During the training phase, available NS cues may be produced (as discussed above), and the ASR cues may be extracted from both the clean and the noisy signals. The training phase may then learn an optimal mapping scheme that transforms the NS cues and noisy ASR cues into clean ASR features. In other words, instead of learning a mapping from the NS cues to a gain mask, the mapping may be learned directly from the NS cues and noisy ASR cues to the clean ASR cues. During normal processing of the input audio signal, the NS cues and noisy ASR cues may be provided to the mapper, which produces clean ASR cues, which in turn may be used by the ASR engine.
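By way of illustration, a minimal sketch of such a feature-domain mapper (the choice of a small neural network and the stacked input layout are assumptions; any of the machine-learning techniques listed above could serve):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_feature_mapper(ns_cues, noisy_asr_feats, clean_asr_feats):
    """Learn a direct mapping to clean ASR features (a sketch).

    ns_cues: (frames, n_cues) noise-suppression cues from the noisy signal.
    noisy_asr_feats, clean_asr_feats: (frames, n_feats) ASR features (e.g.,
    log-mel or cepstral) from the noisy and the parallel clean recordings.
    """
    X = np.hstack([ns_cues, noisy_asr_feats])
    mapper = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500)
    mapper.fit(X, clean_asr_feats)  # bypasses the gain-mask stage entirely
    return mapper
```

At run time, `mapper.predict` applied to the stacked NS cues and noisy ASR cues would yield clean ASR features for the ASR engine.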
In various embodiments of the present disclosure, the optimal gain mask may be derived from a series of cues extracted from the input noisy signal in a data-driven or machine-learning approach. The training process for these techniques may select the cues that provide substantial information to produce a more accurate approximation of the ideal gain mask. Furthermore, in the case of the use of regression trees as machine-learning techniques, substantially informative features may be dynamically selected at run time when the tree is traversed.
These and other embodiments will now be described in greater detail with respect to various embodiments and with reference to the accompanying drawings.
Example System Implementation
The primary microphone 106 and secondary microphone 108 may be omnidirectional microphones. Alternatively, embodiments may utilize other forms of microphones or acoustic sensors, such as directional microphones.
While the microphones 106 and 108 receive sound (i.e., audio signals) from the audio source 102, the microphones 106 and 108 also pick up noise 110. Although the noise 110 is shown coming from a single location in
Some embodiments may utilize level differences (e.g., energy differences) between the audio signals received by the two microphones 106 and 108. Because the primary microphone 106 is much closer to the audio source 102 than the secondary microphone 108 in a close-talk use case, the intensity level is higher for the primary microphone 106, resulting in a larger energy level received by the primary microphone 106 during a speech/voice segment, for example.
The level difference may then be used to discriminate speech and noise in the time-frequency domain. Further embodiments may use a combination of energy level differences and time delays to discriminate speech. Based on such inter-microphone differences, speech signal extraction or speech enhancement may be performed.
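As a hypothetical illustration of this inter-microphone level cue (the 6 dB threshold is an assumption chosen only for the sketch), a crude per-tap speech/noise indicator for the close-talk case might be:

```python
import numpy as np

def ild_speech_indicator(primary_power, secondary_power, threshold_db=6.0):
    """Flag taps whose primary-microphone energy dominates (a sketch).

    In a close-talk use case, taps where the primary microphone is much
    louder than the secondary are attributed to near-field speech.
    """
    ild_db = 10.0 * np.log10((primary_power + 1e-12) /
                             (secondary_power + 1e-12))
    return ild_db > threshold_db  # boolean per-tap speech indicator
```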
The processor 202 may execute instructions and modules stored in a memory (not illustrated in
The exemplary receiver 200 is an acoustic sensor configured to receive a signal from, or transmit a signal to, a communications network. Hence, the receiver 200 may be used as a transmitter in addition to a receiver. In some embodiments, the receiver 200 may include an antenna device. The signal may then be forwarded to the audio processing system 210 to reduce noise using the techniques described herein, and an audio signal may be provided to the output device 206. The present technology may be used in the transmit path and/or receive path of the audio device 104.
The audio processing system 210 is configured to receive the audio signals from an acoustic source via the primary microphone 106 and secondary microphone 108 and process the audio signals. Processing may include performing noise reduction within an audio signal. The audio processing system 210 is discussed in more detail below. The primary and secondary microphones 106, 108 may be spaced a distance apart in order to allow for detecting an energy level difference, time difference, or phase difference between the audio signals received by the microphones. The audio signals received by primary microphone 106 and secondary microphone 108 may be converted into electrical signals (i.e., a primary electrical signal and a secondary electrical signal). The electrical signals may themselves be converted by an analog-to-digital converter (not shown) into digital signals for processing, in accordance with some embodiments.
In order to differentiate the audio signals for clarity purposes, the audio signal received by the primary microphone 106 is herein referred to as the primary audio signal, while the audio signal received by the secondary microphone 108 is herein referred to as the secondary audio signal. The primary audio signal and the secondary audio signal may be processed by the audio processing system 210 to produce a signal with an improved signal-to-noise ratio. It should be noted that embodiments of the technology described herein may be practiced utilizing only the primary microphone 106.
The output device 206 is any device that provides an audio output to the user. For example, the output device 206 may include a speaker, an earpiece of a headset or handset, or a speaker on a conference device.
Noise Suppression by Estimating Gain Mask
In operation, the audio processing system 210 may receive input audio signals including one or more time-domain input signals from the primary microphone 106 and the secondary microphone 108. The input audio signals, when combined by the frequency analysis module 310, may represent noisy speech to be pre-processed before being applied to the ASR engine 340. The frequency analysis module 310 may be used to combine the signals from the primary microphone 106 and the secondary microphone 108 and optionally transform them into the frequency domain for further noise suppression pre-processing.
Further, the noisy speech signal may be fed to the FE module 350, which is used for extraction of one or more cues from the noisy speech. As discussed, these cues may refer to at least one of ILD cues, IPD cues, energy at channel cues, VAD cues, spatial cues, frequency cues, Wiener gain mask estimates, pitch-based cues, periodicity-based cues, noise estimates, context cues, and so forth. The cues may further be fed to the MG module 360 for performing a mapping operation and determining an appropriate gain mask or gain mask estimate based thereon. The MG module 360 may include a mapper (not shown), which employs one or more machine-learning techniques. The mapper may use tables or sets of predetermined reference cues of noise and cues of clean speech stored in the memory to map predefined cues with newly extracted ones in a dynamic, regular manner. As a result of the mapping, the mapper may associate the extracted cues with predefined cues of clean speech and/or predefined noise so as to calculate gain factors or a gain mask for further input signal processing. In particular, the MOD module 380 applies the gain factors or gain mask to the noisy signal to perform noise suppression. The resulting signal with noise-suppressed characteristics may then be fed to the Recon module 330 and the ASR engine 340, or directly to the ASR engine 340.
Training System
As follows from this figure, a frequency analysis module 450 and/or combination module 460 of the training system 410 may receive predetermined reference clean speech signals and predetermined reference noise signals from the clean speech database 420 and the noise database 430, respectively. These reference clean speech and noise signals may be combined by a combination module 460 of the training system 410 into “synthetic” noisy speech signals. The synthetic noisy speech signals may then be processed, and one or more cues may be extracted therefrom, by a Frequency Extractor (FE) module 470 of the training system 410. As discussed, these cues may refer to at least one of ILD cues, IPD cues, energy at channel cues, VAD cues, spatial cues, frequency cues, Wiener gain mask estimates, pitch-based cues, periodicity-based cues, noise estimates, context cues, and so forth.
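By way of illustration, a minimal sketch of forming such a “synthetic” noisy mixture at a prescribed SNR (the function name and the length-matching by tiling are assumptions):

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Combine reference clean speech and reference noise into a synthetic
    noisy speech signal at a target signal-to-noise ratio (a sketch)."""
    noise = np.resize(noise, clean.shape)  # tile/truncate to match lengths
    clean_pow = np.mean(clean ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that clean_pow / (scale^2 * noise_pow) = 10^(snr/10).
    scale = np.sqrt(clean_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise
```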
With continuing reference to
Example Operation Principles
The method 500 may commence in operation 510 with the frequency analysis module 450 receiving reference clean speech and reference noise from the databases 420 and 430, respectively, or from one or more microphones (e.g., the primary microphone 106 and the secondary microphone 108). At operation 520, the combination module 460 may generate noisy speech using the clean speech and the noise as received by the frequency analysis module 450. At operation 530, the FE module 470 extracts NS cues from the noisy speech and an oracle gain from the clean speech. At operation 540, the learning module 480 may determine/generate a mapping from the NS cues to the oracle gain using one or more machine-learning techniques.
The method 600 may commence in operation 610 with the frequency analysis module 310 receiving noisy speech from the primary microphone 106 and the secondary microphone 108 (e.g., the inputs from both microphones may be combined into a single signal and transformed from the time domain to the frequency domain). At this operation, the memory 370 may also provide or receive appropriate mapping data generated during a training process of at least one machine-learning technique as discussed above, for example, with reference to
Further, at operation 620, the FE module 350 extracts one or more cues from the noisy speech as received by the frequency analysis module 310. The cues may refer to at least one of ILD cues, IPD cues, energy at channel cues, VAD cues, spatial cues, frequency cues, Wiener gain mask estimates, pitch-based cues, periodicity-based cues, noise estimates, context cues, and so forth. At operation 630, the MG module 360 determines a gain mask from the cues using the mapping and one or more selected machine-learning algorithms. At operation 640, the MOD module 380 applies the gain mask (e.g., a set of gain coefficients in the frequency domain) to the noisy speech so as to suppress unwanted noise levels. At operation 650, the Recon module 330 may reconstruct the noise-suppressed speech signal and optionally transform it from the frequency domain into the time domain.
The method 700 may commence in operation 710 with the frequency analysis module 450 receiving predetermined reference clean speech from the clean speech database 420 and predetermined reference noise from the noise database 430. At operation 720, the combination module 460 may generate noisy speech using the clean speech and the noise received by the frequency analysis module 450. At operation 730, the FE module 470 may extract noisy automatic speech processing cues and NS cues from the noisy speech, and clean ASR cues from the clean speech. The automatic speech processing cues may include, but are not limited to, automatic speech recognition, language recognition, keyword recognition, speech confirmation, emotion detection, voice sensing, or speaker recognition cues. At operation 740, the learning module 480 may determine/generate a mapping from the noisy automatic speech processing cues and NS cues to the clean automatic speech processing cues; the mapping may optionally be stored in the memory 370 of
The method 800 may commence in operation 810 with the frequency analysis module 310 receiving noisy speech from the primary microphone 106 and the secondary microphone 108, and with the memory 370 providing or receiving mapping data generated at a training process of at least one machine-learning technique as discussed above, for example, with reference to
Further, at operation 820, the FE module 350 extracts NS and automatic speech processing cues from the input noisy speech. At operation 830, the MOD module 380 may apply the mapping to produce clean automatic speech processing features. The automatic speech processing features may be, but are not limited to, automatic speech recognition, language recognition, keyword recognition, speech confirmation, emotion detection, voice sensing, or speaker recognition features. In one example for ASR, at operation 840, the clean automatic speech processing features are fed into the ASR engine 340 for speech recognition. In this method, the ASR engine 340 may generate clean speech signals based on the clean automatic speech processing (e.g., ASR) features without a need to reconstruct the noisy input signal.
In some embodiments, the processing of the noise suppression for speech processing based on machine-learning mask estimation may be cloud-based.
Example Computer System
In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a PC, a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 900 includes a processor or multiple processors 910 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a memory 920, a static mass storage 930, and a portable storage device 940, which communicate with each other via a bus 990. The computer system 900 may further include a graphics display unit 970 (e.g., a liquid crystal display (LCD), a touchscreen, and the like). The computer system 900 may also include input devices 960 (e.g., a physical and/or virtual keyboard, keypad, cursor control device, mouse, touchpad, touchscreen, and the like), output devices 950 (e.g., speakers), and peripherals 980 (e.g., a speaker, one or more microphones, a printer, a modem, a communication device, a network adapter, a router, a radio, and the like). The computer system 900 may further include a data encryption module (not shown) to encrypt data.
The memory 920 and/or mass storage 930 include a computer-readable medium on which is stored one or more sets of instructions and data structures (e.g., instructions) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory 920 and/or within the processors 910 during execution thereof by the computer system 900. The memory 920 and the processors 910 may also constitute machine-readable media. The instructions may further be transmitted or received over a wired and/or wireless network (not shown) via a network interface device (e.g., peripherals 980). While the computer-readable medium is discussed herein in an example embodiment as a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories and optical and magnetic media. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
In some embodiments, the computing system 900 may be implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computing system 900 may itself include a cloud-based computing environment, where the functionalities of the computing system 900 are executed in a distributed fashion. Thus, the computing system 900, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computing device 200, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
While the present embodiments have been described in connection with a series of embodiments, these descriptions are not intended to limit the scope of the subject matter to the particular forms set forth herein. It will be further understood that the methods are not necessarily limited to the discrete components described. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the subject matter as disclosed herein and defined by the appended claims and otherwise appreciated by one of ordinary skill in the art.
This non-provisional patent application claims priority to U.S. provisional patent application No. 61/709,908, filed Oct. 4, 2012, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3976863 | Engel | Aug 1976 | A |
3978287 | Fletcher et al. | Aug 1976 | A |
4137510 | Iwahara | Jan 1979 | A |
4433604 | Ott | Feb 1984 | A |
4516259 | Yato et al. | May 1985 | A |
4535473 | Sakata | Aug 1985 | A |
4536844 | Lyon | Aug 1985 | A |
4581758 | Coker et al. | Apr 1986 | A |
4628529 | Borth et al. | Dec 1986 | A |
4630304 | Borth et al. | Dec 1986 | A |
4649505 | Zinser, Jr. et al. | Mar 1987 | A |
4658426 | Chabries et al. | Apr 1987 | A |
4674125 | Carlson et al. | Jun 1987 | A |
4718104 | Anderson | Jan 1988 | A |
4811404 | Vilmur et al. | Mar 1989 | A |
4812996 | Stubbs | Mar 1989 | A |
4864620 | Bialick | Sep 1989 | A |
4920508 | Yassaie et al. | Apr 1990 | A |
4991166 | Julstrom | Feb 1991 | A |
5027410 | Williamson et al. | Jun 1991 | A |
5054085 | Meisel et al. | Oct 1991 | A |
5058419 | Nordstrom et al. | Oct 1991 | A |
5099738 | Hotz | Mar 1992 | A |
5115404 | Lo et al. | May 1992 | A |
5119711 | Bell et al. | Jun 1992 | A |
5142961 | Paroutaud | Sep 1992 | A |
5150413 | Nakatani et al. | Sep 1992 | A |
5175769 | Hejna, Jr. et al. | Dec 1992 | A |
5177482 | Cideciyan et al. | Jan 1993 | A |
5187776 | Yanker | Feb 1993 | A |
5208864 | Kaneda | May 1993 | A |
5210366 | Sykes, Jr. | May 1993 | A |
5216423 | Mukherjee | Jun 1993 | A |
5222251 | Roney, IV et al. | Jun 1993 | A |
5224170 | Waite, Jr. | Jun 1993 | A |
5230022 | Sakata | Jul 1993 | A |
5319736 | Hunt | Jun 1994 | A |
5323459 | Hirano | Jun 1994 | A |
5341432 | Suzuki et al. | Aug 1994 | A |
5381473 | Andrea et al. | Jan 1995 | A |
5381512 | Holton et al. | Jan 1995 | A |
5400409 | Linhard | Mar 1995 | A |
5402493 | Goldstein | Mar 1995 | A |
5402496 | Soli et al. | Mar 1995 | A |
5406635 | Jarvinen | Apr 1995 | A |
5416847 | Boze | May 1995 | A |
5471195 | Rickman | Nov 1995 | A |
5473759 | Slaney et al. | Dec 1995 | A |
5479564 | Vogten et al. | Dec 1995 | A |
5502663 | Lyon | Mar 1996 | A |
5544250 | Urbanski | Aug 1996 | A |
5546458 | Iwami | Aug 1996 | A |
5550924 | Helf et al. | Aug 1996 | A |
5574824 | Slyh et al. | Nov 1996 | A |
5590241 | Park et al. | Dec 1996 | A |
5602962 | Kellermann | Feb 1997 | A |
5625697 | Bowen et al. | Apr 1997 | A |
5633631 | Teckman | May 1997 | A |
5675778 | Jones | Oct 1997 | A |
5694474 | Ngo et al. | Dec 1997 | A |
5706395 | Arslan et al. | Jan 1998 | A |
5717829 | Takagi | Feb 1998 | A |
5729612 | Abel et al. | Mar 1998 | A |
5732189 | Johnston et al. | Mar 1998 | A |
5749064 | Pawate et al. | May 1998 | A |
5754665 | Hosoi | May 1998 | A |
5757937 | Itoh et al. | May 1998 | A |
5774837 | Yeldener et al. | Jun 1998 | A |
5777658 | Kerr et al. | Jul 1998 | A |
5792971 | Timis et al. | Aug 1998 | A |
5796819 | Romesburg | Aug 1998 | A |
5806025 | Vis et al. | Sep 1998 | A |
5809463 | Gupta et al. | Sep 1998 | A |
5819215 | Dobson et al. | Oct 1998 | A |
5839101 | Vahatalo et al. | Nov 1998 | A |
5845243 | Smart et al. | Dec 1998 | A |
5887032 | Cioffi | Mar 1999 | A |
5917921 | Sasaki et al. | Jun 1999 | A |
5920840 | Satyamurti et al. | Jul 1999 | A |
5933495 | Oh | Aug 1999 | A |
5943429 | Handel | Aug 1999 | A |
5978824 | Ikeda | Nov 1999 | A |
5983139 | Zierhofer | Nov 1999 | A |
5990405 | Auten et al. | Nov 1999 | A |
6002776 | Bhadkamkar et al. | Dec 1999 | A |
6011853 | Koski et al. | Jan 2000 | A |
6061456 | Andrea et al. | May 2000 | A |
6072881 | Linder | Jun 2000 | A |
6084916 | Ott | Jul 2000 | A |
6092126 | Rossum | Jul 2000 | A |
6097820 | Turner | Aug 2000 | A |
6098038 | Hermansky et al. | Aug 2000 | A |
6108626 | Cellario et al. | Aug 2000 | A |
6122384 | Mauro | Sep 2000 | A |
6122610 | Isabelle | Sep 2000 | A |
6125175 | Goldberg et al. | Sep 2000 | A |
6134524 | Peters et al. | Oct 2000 | A |
6137349 | Menkhoff et al. | Oct 2000 | A |
6140809 | Doi | Oct 2000 | A |
6144937 | Ali | Nov 2000 | A |
6173255 | Wilson et al. | Jan 2001 | B1 |
6188797 | Moledina et al. | Feb 2001 | B1 |
6205421 | Morii | Mar 2001 | B1 |
6205422 | Gu et al. | Mar 2001 | B1 |
6208671 | Paulos et al. | Mar 2001 | B1 |
6216103 | Wu et al. | Apr 2001 | B1 |
6222927 | Feng et al. | Apr 2001 | B1 |
6223090 | Brungart | Apr 2001 | B1 |
6263307 | Arslan et al. | Jul 2001 | B1 |
6266633 | Higgins et al. | Jul 2001 | B1 |
6317501 | Matsuo | Nov 2001 | B1 |
6321193 | Nystrom et al. | Nov 2001 | B1 |
6324235 | Savell et al. | Nov 2001 | B1 |
6327370 | Killion et al. | Dec 2001 | B1 |
6339706 | Tillgren et al. | Jan 2002 | B1 |
6339758 | Kanazawa et al. | Jan 2002 | B1 |
6343267 | Kuhn et al. | Jan 2002 | B1 |
6355869 | Mitton | Mar 2002 | B1 |
6363345 | Marash et al. | Mar 2002 | B1 |
6381469 | Wojick | Apr 2002 | B1 |
6381570 | Li et al. | Apr 2002 | B2 |
6389142 | Hagen et al. | May 2002 | B1 |
6411930 | Burges | Jun 2002 | B1 |
6424938 | Johansson et al. | Jul 2002 | B1 |
6430295 | Handel et al. | Aug 2002 | B1 |
6434417 | Lovett | Aug 2002 | B1 |
6449586 | Hoshuyama | Sep 2002 | B1 |
6453284 | Paschall | Sep 2002 | B1 |
6453289 | Ertem et al. | Sep 2002 | B1 |
6456209 | Savari | Sep 2002 | B1 |
6469732 | Chang et al. | Oct 2002 | B1 |
6477489 | Lockwood et al. | Nov 2002 | B1 |
6480610 | Fang et al. | Nov 2002 | B1 |
6487257 | Gustafsson et al. | Nov 2002 | B1 |
6496795 | Malvar | Dec 2002 | B1 |
6513004 | Rigazio et al. | Jan 2003 | B1 |
6516066 | Hayashi | Feb 2003 | B2 |
6516136 | Lee | Feb 2003 | B1 |
6526140 | Marchok et al. | Feb 2003 | B1 |
6529606 | Jackson, Jr. II et al. | Mar 2003 | B1 |
6531970 | McLaughlin et al. | Mar 2003 | B2 |
6549630 | Bobisuthi | Apr 2003 | B1 |
6584203 | Elko et al. | Jun 2003 | B2 |
6615170 | Liu et al. | Sep 2003 | B1 |
6647067 | Hjelm et al. | Nov 2003 | B1 |
6683938 | Henderson | Jan 2004 | B1 |
6717991 | Gustafsson et al. | Apr 2004 | B1 |
6718309 | Selly | Apr 2004 | B1 |
6738482 | Jaber | May 2004 | B1 |
6745155 | Andringa et al. | Jun 2004 | B1 |
6760450 | Matsuo | Jul 2004 | B2 |
6768979 | Menendez-Pidal et al. | Jul 2004 | B1 |
6778954 | Kim et al. | Aug 2004 | B1 |
6782363 | Lee et al. | Aug 2004 | B2 |
6785381 | Gartner et al. | Aug 2004 | B2 |
6792118 | Watts | Sep 2004 | B2 |
6795558 | Matsuo | Sep 2004 | B2 |
6798886 | Smith et al. | Sep 2004 | B1 |
6804203 | Benyassine et al. | Oct 2004 | B1 |
6804651 | Juric et al. | Oct 2004 | B2 |
6810273 | Mattila et al. | Oct 2004 | B1 |
6859508 | Koyama et al. | Feb 2005 | B1 |
6882736 | Dickel et al. | Apr 2005 | B2 |
6915257 | Heikkinen et al. | Jul 2005 | B2 |
6915264 | Baumgarte | Jul 2005 | B2 |
6917688 | Yu et al. | Jul 2005 | B2 |
6934387 | Kim | Aug 2005 | B1 |
6978159 | Feng et al. | Dec 2005 | B2 |
6982377 | Sakurai et al. | Jan 2006 | B2 |
6990196 | Zeng et al. | Jan 2006 | B2 |
7010134 | Jensen | Mar 2006 | B2 |
7016507 | Brennan | Mar 2006 | B1 |
7020605 | Gao | Mar 2006 | B2 |
RE39080 | Johnston | Apr 2006 | E |
7031478 | Belt et al. | Apr 2006 | B2 |
7035666 | Silberfenig et al. | Apr 2006 | B2 |
7042934 | Zamir | May 2006 | B2 |
7050388 | Kim et al. | May 2006 | B2 |
7054452 | Ukita | May 2006 | B2 |
7054808 | Yoshida | May 2006 | B2 |
7058572 | Nemer | Jun 2006 | B1 |
7065485 | Chong-White et al. | Jun 2006 | B1 |
7065486 | Thyssen | Jun 2006 | B1 |
7072834 | Zhou | Jul 2006 | B2 |
7076315 | Watts | Jul 2006 | B1 |
7092529 | Yu et al. | Aug 2006 | B2 |
7092882 | Arrowood et al. | Aug 2006 | B2 |
7099821 | Visser et al. | Aug 2006 | B2 |
7110554 | Brennan et al. | Sep 2006 | B2 |
7127072 | Rademacher et al. | Oct 2006 | B2 |
7142677 | Gonopolskiy et al. | Nov 2006 | B2 |
7146013 | Saito et al. | Dec 2006 | B1 |
7146316 | Alves | Dec 2006 | B2 |
7155019 | Hou | Dec 2006 | B2 |
7165026 | Acero et al. | Jan 2007 | B2 |
7171008 | Elko | Jan 2007 | B2 |
7171246 | Mattila et al. | Jan 2007 | B2 |
7174022 | Zhang et al. | Feb 2007 | B1 |
7190665 | Warke et al. | Mar 2007 | B2 |
7190775 | Rambo | Mar 2007 | B2 |
7206418 | Yang et al. | Apr 2007 | B2 |
7209567 | Kozel et al. | Apr 2007 | B1 |
7221622 | Matsuo et al. | May 2007 | B2 |
7225001 | Eriksson et al. | May 2007 | B1 |
7242762 | He et al. | Jul 2007 | B2 |
7245767 | Moreno et al. | Jul 2007 | B2 |
7246058 | Burnett | Jul 2007 | B2 |
7254242 | Ise et al. | Aug 2007 | B2 |
7254535 | Kushner et al. | Aug 2007 | B2 |
7289554 | Alloin | Oct 2007 | B2 |
7289955 | Deng et al. | Oct 2007 | B2 |
7327985 | Morfitt, III et al. | Feb 2008 | B2 |
7330138 | Mallinson et al. | Feb 2008 | B2 |
7339503 | Elenes | Mar 2008 | B1 |
7359520 | Brennan et al. | Apr 2008 | B2 |
7376558 | Gemello et al. | May 2008 | B2 |
7383179 | Alves et al. | Jun 2008 | B2 |
7395298 | Debes et al. | Jul 2008 | B2 |
7412379 | Taori et al. | Aug 2008 | B2 |
7433907 | Nagai et al. | Oct 2008 | B2 |
7436333 | Forman et al. | Oct 2008 | B2 |
7469208 | Kincaid | Dec 2008 | B1 |
7516067 | Seltzer et al. | Apr 2009 | B2 |
7555434 | Nomura et al. | Jun 2009 | B2 |
7561627 | Chow et al. | Jul 2009 | B2 |
7562140 | Clemm et al. | Jul 2009 | B2 |
7574352 | Quatieri, Jr. | Aug 2009 | B2 |
7577084 | Tang et al. | Aug 2009 | B2 |
7617099 | Yang et al. | Nov 2009 | B2 |
7617282 | Han | Nov 2009 | B2 |
7657038 | Doclo et al. | Feb 2010 | B2 |
7664640 | Webber | Feb 2010 | B2 |
7725314 | Wu et al. | May 2010 | B2 |
7764752 | Langberg et al. | Jul 2010 | B2 |
7777658 | Nguyen et al. | Aug 2010 | B2 |
7783032 | Abutalebi et al. | Aug 2010 | B2 |
7783481 | Endo et al. | Aug 2010 | B2 |
7791508 | Wegener | Sep 2010 | B2 |
7895036 | Hetherington et al. | Feb 2011 | B2 |
7912567 | Chhatwal et al. | Mar 2011 | B2 |
7925502 | Droppo et al. | Apr 2011 | B2 |
7949522 | Hetherington et al. | May 2011 | B2 |
7953596 | Pinto | May 2011 | B2 |
8010355 | Rahbar | Aug 2011 | B2 |
8032364 | Watts | Oct 2011 | B1 |
8046219 | Zurek et al. | Oct 2011 | B2 |
8081878 | Zhang et al. | Dec 2011 | B1 |
8098812 | Fadili et al. | Jan 2012 | B2 |
8103011 | Mohammad et al. | Jan 2012 | B2 |
8107656 | Dreβler et al. | Jan 2012 | B2 |
8126159 | Goose et al. | Feb 2012 | B2 |
8140331 | Lou | Mar 2012 | B2 |
8143620 | Malinowski et al. | Mar 2012 | B1 |
8150065 | Solbach et al. | Apr 2012 | B2 |
8155953 | Park et al. | Apr 2012 | B2 |
8175291 | Chan et al. | May 2012 | B2 |
8180064 | Avendano et al. | May 2012 | B1 |
8184818 | Ishiguro | May 2012 | B2 |
8189429 | Chen et al. | May 2012 | B2 |
8194880 | Avendano | Jun 2012 | B2 |
8194882 | Every et al. | Jun 2012 | B2 |
8204252 | Avendano | Jun 2012 | B1 |
8204253 | Solbach | Jun 2012 | B1 |
8223988 | Wang et al. | Jul 2012 | B2 |
8280731 | Yu | Oct 2012 | B2 |
8345890 | Avendano et al. | Jan 2013 | B2 |
8359195 | Li | Jan 2013 | B2 |
8363850 | Amada | Jan 2013 | B2 |
8369973 | Risbo | Feb 2013 | B2 |
8378871 | Bapat | Feb 2013 | B1 |
8447596 | Avendano et al. | May 2013 | B2 |
8467891 | Huang et al. | Jun 2013 | B2 |
8473285 | Every et al. | Jun 2013 | B2 |
8488805 | Santos et al. | Jul 2013 | B1 |
8494193 | Zhang et al. | Jul 2013 | B2 |
8521530 | Every et al. | Aug 2013 | B1 |
8538035 | Every et al. | Sep 2013 | B2 |
8606249 | Goodwin | Dec 2013 | B1 |
8639516 | Lindahl et al. | Jan 2014 | B2 |
8682006 | Laroche et al. | Mar 2014 | B1 |
8705759 | Wolff et al. | Apr 2014 | B2 |
8718290 | Murgia et al. | May 2014 | B2 |
8737188 | Murgia et al. | May 2014 | B1 |
8737532 | Green et al. | May 2014 | B2 |
8744844 | Klein | Jun 2014 | B2 |
8750526 | Santos et al. | Jun 2014 | B1 |
8762144 | Cho et al. | Jun 2014 | B2 |
8774423 | Solbach | Jul 2014 | B1 |
8781137 | Goodwin | Jul 2014 | B1 |
8804865 | Elenes et al. | Aug 2014 | B2 |
8867759 | Avendano et al. | Oct 2014 | B2 |
8880396 | Laroche et al. | Nov 2014 | B1 |
8886525 | Klein | Nov 2014 | B2 |
8949120 | Every et al. | Feb 2015 | B1 |
8949266 | Phillips et al. | Feb 2015 | B2 |
8965942 | Rossum et al. | Feb 2015 | B1 |
9008329 | Mandel et al. | Apr 2015 | B1 |
9049282 | Murgia et al. | Jun 2015 | B1 |
9076456 | Avendano et al. | Jul 2015 | B1 |
9143857 | Every et al. | Sep 2015 | B2 |
9185487 | Solbach et al. | Nov 2015 | B2 |
9197974 | Clark et al. | Nov 2015 | B1 |
9236874 | Rossum | Jan 2016 | B1 |
9343056 | Goodwin | May 2016 | B1 |
20010016020 | Gustafsson et al. | Aug 2001 | A1 |
20010031053 | Feng et al. | Oct 2001 | A1 |
20010044719 | Casey | Nov 2001 | A1 |
20010053228 | Jones | Dec 2001 | A1 |
20020002455 | Accardi et al. | Jan 2002 | A1 |
20020009203 | Erten | Jan 2002 | A1 |
20020041693 | Matsuo | Apr 2002 | A1 |
20020080980 | Matsuo | Jun 2002 | A1 |
20020106092 | Matsuo | Aug 2002 | A1 |
20020116187 | Erten | Aug 2002 | A1 |
20020133334 | Coorman et al. | Sep 2002 | A1 |
20020138263 | Deligne et al. | Sep 2002 | A1 |
20020147595 | Baumgarte | Oct 2002 | A1 |
20020156624 | Gigi | Oct 2002 | A1 |
20020160751 | Sun et al. | Oct 2002 | A1 |
20020176589 | Buck et al. | Nov 2002 | A1 |
20020177995 | Walker | Nov 2002 | A1 |
20020194159 | Kamath et al. | Dec 2002 | A1 |
20030014248 | Vetter | Jan 2003 | A1 |
20030026437 | Janse et al. | Feb 2003 | A1 |
20030033140 | Taori et al. | Feb 2003 | A1 |
20030038736 | Becker et al. | Feb 2003 | A1 |
20030039369 | Bullen | Feb 2003 | A1 |
20030040908 | Yang et al. | Feb 2003 | A1 |
20030056220 | Thornton et al. | Mar 2003 | A1 |
20030061032 | Gonopolskiy | Mar 2003 | A1 |
20030063759 | Brennan et al. | Apr 2003 | A1 |
20030072382 | Raleigh et al. | Apr 2003 | A1 |
20030072460 | Gonopolskiy et al. | Apr 2003 | A1 |
20030095667 | Watts | May 2003 | A1 |
20030099345 | Gartner et al. | May 2003 | A1 |
20030099370 | Moore | May 2003 | A1 |
20030101048 | Liu | May 2003 | A1 |
20030103632 | Goubran et al. | Jun 2003 | A1 |
20030118200 | Beaucoup et al. | Jun 2003 | A1 |
20030128851 | Furuta | Jul 2003 | A1 |
20030138116 | Jones et al. | Jul 2003 | A1 |
20030147538 | Elko | Aug 2003 | A1 |
20030169891 | Ryan et al. | Sep 2003 | A1 |
20030177006 | Ichikawa et al. | Sep 2003 | A1 |
20030191641 | Acero et al. | Oct 2003 | A1 |
20030228023 | Burnett et al. | Dec 2003 | A1 |
20040001450 | He et al. | Jan 2004 | A1 |
20040013276 | Ellis et al. | Jan 2004 | A1 |
20040015348 | McArthur et al. | Jan 2004 | A1 |
20040042616 | Matsuo | Mar 2004 | A1 |
20040047464 | Yu et al. | Mar 2004 | A1 |
20040078199 | Kremer et al. | Apr 2004 | A1 |
20040102967 | Furuta et al. | May 2004 | A1 |
20040125965 | Alberth, Jr. et al. | Jul 2004 | A1 |
20040131178 | Shahaf et al. | Jul 2004 | A1 |
20040133421 | Burnett et al. | Jul 2004 | A1 |
20040148166 | Zheng | Jul 2004 | A1 |
20040165736 | Hetherington et al. | Aug 2004 | A1 |
20040185804 | Kanamori et al. | Sep 2004 | A1 |
20040196989 | Friedman et al. | Oct 2004 | A1 |
20040263636 | Cutler et al. | Dec 2004 | A1 |
20050008179 | Quinn | Jan 2005 | A1 |
20050025263 | Wu | Feb 2005 | A1 |
20050027520 | Mattila et al. | Feb 2005 | A1 |
20050049857 | Seltzer et al. | Mar 2005 | A1 |
20050049864 | Kaltenmeier et al. | Mar 2005 | A1 |
20050060142 | Visser et al. | Mar 2005 | A1 |
20050066279 | LeBarton et al. | Mar 2005 | A1 |
20050069162 | Haykin et al. | Mar 2005 | A1 |
20050075866 | Widrow | Apr 2005 | A1 |
20050114123 | Lukac et al. | May 2005 | A1 |
20050114128 | Hetherington et al. | May 2005 | A1 |
20050152559 | Gierl et al. | Jul 2005 | A1 |
20050152563 | Amada et al. | Jul 2005 | A1 |
20050185813 | Sinclair et al. | Aug 2005 | A1 |
20050203735 | Ichikawa | Sep 2005 | A1 |
20050213778 | Buck et al. | Sep 2005 | A1 |
20050216259 | Watts | Sep 2005 | A1 |
20050228518 | Watts | Oct 2005 | A1 |
20050238238 | Xu et al. | Oct 2005 | A1 |
20050240399 | Makinen | Oct 2005 | A1 |
20050261894 | Balan et al. | Nov 2005 | A1 |
20050276423 | Aubauer et al. | Dec 2005 | A1 |
20050288923 | Kok | Dec 2005 | A1 |
20060053007 | Niemisto | Mar 2006 | A1 |
20060058998 | Yamamoto et al. | Mar 2006 | A1 |
20060072768 | Schwartz et al. | Apr 2006 | A1 |
20060074646 | Alves et al. | Apr 2006 | A1 |
20060098809 | Nongpiur et al. | May 2006 | A1 |
20060120537 | Burnett et al. | Jun 2006 | A1 |
20060122832 | Takiguchi et al. | Jun 2006 | A1 |
20060133621 | Chen et al. | Jun 2006 | A1 |
20060136201 | Landron et al. | Jun 2006 | A1 |
20060149535 | Choi et al. | Jul 2006 | A1 |
20060153391 | Hooley et al. | Jul 2006 | A1 |
20060160581 | Beaugeant et al. | Jul 2006 | A1 |
20060165202 | Thomas et al. | Jul 2006 | A1 |
20060184363 | McCree et al. | Aug 2006 | A1 |
20060206320 | Li | Sep 2006 | A1 |
20060222184 | Buck et al. | Oct 2006 | A1 |
20060224382 | Taneda | Oct 2006 | A1 |
20070021958 | Visser et al. | Jan 2007 | A1 |
20070027685 | Arakawa et al. | Feb 2007 | A1 |
20070033020 | (Kelleher) Francois et al. | Feb 2007 | A1 |
20070033032 | Schubert et al. | Feb 2007 | A1 |
20070041589 | Patel et al. | Feb 2007 | A1 |
20070055508 | Zhao et al. | Mar 2007 | A1 |
20070071206 | Gainsboro et al. | Mar 2007 | A1 |
20070078649 | Hetherington et al. | Apr 2007 | A1 |
20070094031 | Chen | Apr 2007 | A1 |
20070110263 | Brox | May 2007 | A1 |
20070116300 | Chen | May 2007 | A1 |
20070127668 | Ahya et al. | Jun 2007 | A1 |
20070136059 | Gadbois | Jun 2007 | A1 |
20070150268 | Acero et al. | Jun 2007 | A1 |
20070154031 | Avendano et al. | Jul 2007 | A1 |
20070165879 | Deng et al. | Jul 2007 | A1 |
20070195968 | Jaber | Aug 2007 | A1 |
20070211064 | Buck | Sep 2007 | A1 |
20070230712 | Belt et al. | Oct 2007 | A1 |
20070230913 | Ichimura | Oct 2007 | A1 |
20070237339 | Konchitsky | Oct 2007 | A1 |
20070276656 | Solbach et al. | Nov 2007 | A1 |
20070294263 | Punj et al. | Dec 2007 | A1 |
20080019548 | Avendano | Jan 2008 | A1 |
20080033723 | Jang et al. | Feb 2008 | A1 |
20080059163 | Ding et al. | Mar 2008 | A1 |
20080071540 | Nakano et al. | Mar 2008 | A1 |
20080140391 | Yen et al. | Jun 2008 | A1 |
20080152157 | Lin et al. | Jun 2008 | A1 |
20080159507 | Virolainen et al. | Jul 2008 | A1 |
20080160977 | Ahmaniemi et al. | Jul 2008 | A1 |
20080170703 | Zivney | Jul 2008 | A1 |
20080192955 | Merks | Aug 2008 | A1 |
20080201138 | Visser et al. | Aug 2008 | A1 |
20080228474 | Huang et al. | Sep 2008 | A1 |
20080228478 | Hetherington et al. | Sep 2008 | A1 |
20080233934 | Diethorn | Sep 2008 | A1 |
20080259731 | Happonen | Oct 2008 | A1 |
20080260175 | Elko | Oct 2008 | A1 |
20080273476 | Cohen et al. | Nov 2008 | A1 |
20080298571 | Kurtz et al. | Dec 2008 | A1 |
20080304677 | Abolfathi et al. | Dec 2008 | A1 |
20080317259 | Zhang et al. | Dec 2008 | A1 |
20080317261 | Yoshida et al. | Dec 2008 | A1 |
20090012783 | Klein | Jan 2009 | A1 |
20090012786 | Zhang et al. | Jan 2009 | A1 |
20090034755 | Short et al. | Feb 2009 | A1 |
20090063142 | Sukkar | Mar 2009 | A1 |
20090089054 | Wang et al. | Apr 2009 | A1 |
20090116652 | Kirkeby et al. | May 2009 | A1 |
20090129610 | Kim et al. | May 2009 | A1 |
20090141908 | Jeong et al. | Jun 2009 | A1 |
20090144053 | Tamura et al. | Jun 2009 | A1 |
20090147942 | Culter | Jun 2009 | A1 |
20090150149 | Culter et al. | Jun 2009 | A1 |
20090154717 | Hoshuyama | Jun 2009 | A1 |
20090164905 | Ko | Jun 2009 | A1 |
20090177464 | Gao et al. | Jul 2009 | A1 |
20090220107 | Every et al. | Sep 2009 | A1 |
20090240497 | Usher et al. | Sep 2009 | A1 |
20090245335 | Fang | Oct 2009 | A1 |
20090245444 | Fang | Oct 2009 | A1 |
20090253418 | Makinen | Oct 2009 | A1 |
20090264114 | Virolainen et al. | Oct 2009 | A1 |
20090271187 | Yen et al. | Oct 2009 | A1 |
20090292536 | Hetherington et al. | Nov 2009 | A1 |
20090323925 | Sweeney et al. | Dec 2009 | A1 |
20090323981 | Cutler | Dec 2009 | A1 |
20090323982 | Solbach et al. | Dec 2009 | A1 |
20100017205 | Visser et al. | Jan 2010 | A1 |
20100027799 | Romesburg et al. | Feb 2010 | A1 |
20100036659 | Haulick et al. | Feb 2010 | A1 |
20100082339 | Konchitsky et al. | Apr 2010 | A1 |
20100092007 | Sun | Apr 2010 | A1 |
20100094622 | Cardillo et al. | Apr 2010 | A1 |
20100103776 | Chan | Apr 2010 | A1 |
20100105447 | Sibbald et al. | Apr 2010 | A1 |
20100128123 | DiPoala | May 2010 | A1 |
20100130198 | Kannappan et al. | May 2010 | A1 |
20100138220 | Matsumoto et al. | Jun 2010 | A1 |
20100166199 | Seydoux | Jul 2010 | A1 |
20100177916 | Gerkmann et al. | Jul 2010 | A1 |
20100215184 | Buck et al. | Aug 2010 | A1 |
20100278352 | Petit et al. | Nov 2010 | A1 |
20100282045 | Chen et al. | Nov 2010 | A1 |
20100290615 | Takahashi | Nov 2010 | A1 |
20100303298 | Marks et al. | Dec 2010 | A1 |
20100309774 | Astrom | Dec 2010 | A1 |
20100315482 | Rosenfeld et al. | Dec 2010 | A1 |
20110019833 | Kuech et al. | Jan 2011 | A1 |
20110026734 | Hetherington et al. | Feb 2011 | A1 |
20110035213 | Malenovsky et al. | Feb 2011 | A1 |
20110060587 | Phillips et al. | Mar 2011 | A1 |
20110081026 | Ramakrishnan et al. | Apr 2011 | A1 |
20110091047 | Konchitsky et al. | Apr 2011 | A1 |
20110101654 | Cech | May 2011 | A1 |
20110123019 | Gowreesunker et al. | May 2011 | A1 |
20110178800 | Watts | Jul 2011 | A1 |
20110182436 | Murgia et al. | Jul 2011 | A1 |
20110261150 | Goyal et al. | Oct 2011 | A1 |
20110286605 | Furuta et al. | Nov 2011 | A1 |
20110300806 | Lindahl et al. | Dec 2011 | A1 |
20110305345 | Bouchard et al. | Dec 2011 | A1 |
20120010881 | Avendano et al. | Jan 2012 | A1 |
20120027217 | Jun et al. | Feb 2012 | A1 |
20120027218 | Every et al. | Feb 2012 | A1 |
20120050582 | Seshadri et al. | Mar 2012 | A1 |
20120062729 | Hart et al. | Mar 2012 | A1 |
20120063609 | Triki et al. | Mar 2012 | A1 |
20120087514 | Williams et al. | Apr 2012 | A1 |
20120093341 | Kim et al. | Apr 2012 | A1 |
20120116758 | Murgia et al. | May 2012 | A1 |
20120121096 | Chen et al. | May 2012 | A1 |
20120133728 | Lee | May 2012 | A1 |
20120140917 | Nicholson et al. | Jun 2012 | A1 |
20120143363 | Liu et al. | Jun 2012 | A1 |
20120179461 | Every et al. | Jul 2012 | A1 |
20120179462 | Klein | Jul 2012 | A1 |
20120182429 | Forutanpour et al. | Jul 2012 | A1 |
20120197898 | Pandey et al. | Aug 2012 | A1 |
20120220347 | Davidson | Aug 2012 | A1 |
20120237037 | Ninan et al. | Sep 2012 | A1 |
20120249785 | Sudo et al. | Oct 2012 | A1 |
20120250871 | Lu et al. | Oct 2012 | A1 |
20130011111 | Abraham et al. | Jan 2013 | A1 |
20130024190 | Fairey | Jan 2013 | A1 |
20130034243 | Yermeche et al. | Feb 2013 | A1 |
20130051543 | McDysan et al. | Feb 2013 | A1 |
20130096914 | Avendano et al. | Apr 2013 | A1 |
20130182857 | Namba et al. | Jul 2013 | A1 |
20130196715 | Hansson et al. | Aug 2013 | A1 |
20130231925 | Avendano et al. | Sep 2013 | A1 |
20130251170 | Every et al. | Sep 2013 | A1 |
20130268280 | Del Galdo et al. | Oct 2013 | A1 |
20130318613 | Archer | Nov 2013 | A1 |
20140032470 | McCarthy | Jan 2014 | A1 |
20140039888 | Taubman et al. | Feb 2014 | A1 |
20140098964 | Rosca et al. | Apr 2014 | A1 |
20140108020 | Sharma et al. | Apr 2014 | A1 |
20140112496 | Murgia et al. | Apr 2014 | A1 |
20140142958 | Sharma et al. | May 2014 | A1 |
20140241702 | Solbach et al. | Aug 2014 | A1 |
20140337016 | Herbig et al. | Nov 2014 | A1 |
20150025881 | Carlos et al. | Jan 2015 | A1 |
20150030163 | Sokolov | Jan 2015 | A1 |
20150100311 | Kar et al. | Apr 2015 | A1 |
20160027451 | Solbach et al. | Jan 2016 | A1 |
20160063997 | Nemala et al. | Mar 2016 | A1 |
20160066089 | Klein | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
0756437 | Jan 1997 | EP |
1232496 | Aug 2002 | EP |
1474755 | Nov 2004 | EP |
20080428 | Jul 2008 | FI |
20100431 | Dec 2010 | FI |
20125812 | Oct 2012 | FI |
20135038 | Apr 2013 | FI |
124716 | Dec 2014 | FI |
62110349 | May 1987 | JP |
4184400 | Jul 1992 | JP |
5053587 | Mar 1993 | JP |
6269083 | Sep 1994 | JP |
H07248793 | Sep 1995 | JP |
H10-313497 | Nov 1998 | JP |
H11-249693 | Sep 1999 | JP |
2001159899 | Jun 2001 | JP |
2002366200 | Dec 2002 | JP |
2002542689 | Dec 2002 | JP |
2003514473 | Apr 2003 | JP |
2003271191 | Sep 2003 | JP |
2004187283 | Jul 2004 | JP |
2005110127 | Apr 2005 | JP |
2005518118 | Jun 2005 | JP |
2005195955 | Jul 2005 | JP |
2006094522 | Apr 2006 | JP |
2006337415 | Dec 2006 | JP |
2007006525 | Jan 2007 | JP |
2008015443 | Jan 2008 | JP |
2008135933 | Jun 2008 | JP |
2009522942 | Jun 2009 | JP |
2010532879 | Oct 2010 | JP |
2011527025 | Oct 2011 | JP |
5007442 | Jun 2012 | JP |
2013517531 | May 2013 | JP |
2013534651 | Sep 2013 | JP |
5762956 | Jun 2015 | JP |
1020080092404 | Oct 2008 | KR |
1020100041741 | Apr 2010 | KR |
1020110038024 | Apr 2011 | KR |
1020120116442 | Oct 2012 | KR |
101210313 | Dec 2012 | KR |
1020130117750 | Oct 2013 | KR |
101461141 | Nov 2014 | KR |
101610656 | Apr 2016 | KR |
526468 | Apr 2003 | TW |
200305854 | Nov 2003 | TW |
200629240 | Aug 2006 | TW |
I279776 | Apr 2007 | TW |
200910793 | Mar 2009 | TW |
201009817 | Mar 2010 | TW |
201214418 | Apr 2012 | TW |
I463817 | Dec 2014 | TW |
I465121 | Dec 2014 | TW |
201513099 | Apr 2015 | TW |
I488179 | Jun 2015 | TW |
WO0137265 | May 2001 | WO |
WO0141504 | Jun 2001 | WO |
WO0156328 | Aug 2001 | WO |
WO0174118 | Oct 2001 | WO |
WO03043374 | May 2003 | WO |
WO03069499 | Aug 2003 | WO |
WO2006027707 | Mar 2006 | WO |
WO2007001068 | Jan 2007 | WO |
WO2007049644 | May 2007 | WO |
WO2007081916 | Jul 2007 | WO |
WO2008045476 | Apr 2008 | WO |
WO2008101198 | Aug 2008 | WO |
WO2009008998 | Jan 2009 | WO |
WO2010005493 | Jan 2010 | WO |
WO2011091068 | Jul 2011 | WO |
WO2011129725 | Oct 2011 | WO |
WO2012009047 | Jan 2012 | WO |
WO2012097016 | Jul 2012 | WO |
WO2014063099 | Apr 2014 | WO |
WO2014131054 | Aug 2014 | WO |
WO2015010129 | Jan 2015 | WO |
WO2016033364 | Mar 2016 | WO |
Entry |
---|
Allen, Jont B. “Short Term Spectral Analysis, Synthesis, and Modification by Discrete Fourier Transform”, IEEE Transactions on Acoustics, Speech, and Signal Processing. vol. ASSP-25, No. 3, Jun. 1977. pp. 235-238. |
Allen, Jont B. et al., “A Unified Approach to Short-Time Fourier Analysis and Synthesis”, Proceedings of the IEEE vol. 65, No. 11, Nov. 1977. pp. 1558-1564. |
Avendano, Carlos, “Frequency-Domain Source Identification and Manipulation in Stereo Mixes for Enhancement, Suppression and Re-Panning Applications,” 2003 IEEE Workshop on Application of Signal Processing to Audio and Acoustics, Oct. 19-22, pp. 55-58, New Paltz, New York, USA. |
Boll, Steven F. “Suppression of Acoustic Noise in Speech using Spectral Subtraction”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-27, No. 2, Apr. 1979, pp. 113-120. |
Boll, Steven F. et al., “Suppression of Acoustic Noise in Speech Using Two Microphone Adaptive Noise Cancellation”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-28, No. 6, Dec. 1980, pp. 752-753. |
Boll, Steven F. “Suppression of Acoustic Noise in Speech Using Spectral Subtraction”, Dept. of Computer Science, University of Utah, Salt Lake City, Utah, Apr. 1979, pp. 18-19. |
Chen, Jingdong et al., “New Insights into the Noise Reduction Wiener Filter”, IEEE Transactions on Audio, Speech, and Language Processing. vol. 14, No. 4, Jul. 2006, pp. 1218-1234. |
Cohen, Israel et al., “Microphone Array Post-Filtering for Non-Stationary Noise Suppression”, IEEE International Conference on Acoustics, Speech, and Signal Processing, May 2002, pp. 1-4. |
Cohen, Israel, “Multichannel Post-Filtering in Nonstationary Noise Environments”, IEEE Transactions on Signal Processing, vol. 52, No. 5, May 2004, pp. 1149-1160. |
Dahl, Mattias et al., “Simultaneous Echo Cancellation and Car Noise Suppression Employing a Microphone Array”, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 21-24, pp. 239-242. |
Elko, Gary W., “Chapter 2: Differential Microphone Arrays”, “Audio Signal Processing for Next-Generation Multimedia Communication Systems”, 2004, pp. 12-65, Kluwer Academic Publishers, Norwell, Massachusetts, USA. |
“Ent 172.” Instructional Module. Prince George's Community College Department of Engineering Technology. Accessed: Oct. 15, 2011. Subsection: “Polar and Rectangular Notation”. <http://academic.ppgcc.edu/ent/ent172_instr_mod.html>. |
Fuchs, Martin et al., “Noise Suppression for Automotive Applications Based on Directional Information”, 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, May 17-21, pp. 237-240. |
Fulghum, D. P. et al., “LPC Voice Digitizer with Background Noise Suppression”, 1979 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 220-223. |
Goubran, R.A. et al., “Acoustic Noise Suppression Using Regressive Adaptive Filtering”, 1990 IEEE 40th Vehicular Technology Conference, May 6-9, pp. 48-53. |
Graupe, Daniel et al., “Blind Adaptive Filtering of Speech from Noise of Unknown Spectrum Using a Virtual Feedback Configuration”, IEEE Transactions on Speech and Audio Processing, Mar. 2000, vol. 8, No. 2, pp. 146-158. |
Haykin, Simon et al., “Appendix A.2 Complex Numbers.” Signals and Systems. 2nd Ed. 2003. p. 764. |
Hermansky, Hynek “Should Recognizers Have Ears?”, In Proc. ESCA Tutorial and Research Workshop on Robust Speech Recognition for Unknown Communication Channels, pp. 1-10, France 1997. |
Hohmann, V. “Frequency Analysis and Synthesis Using a Gammatone Filterbank”, ACTA Acustica United with Acustica, 2002, vol. 88, pp. 433-442. |
Jeffress, Lloyd A. et al., “A Place Theory of Sound Localization,” Journal of Comparative and Physiological Psychology, 1948, vol. 41, p. 35-39. |
Jeong, Hyuk et al., “Implementation of a New Algorithm Using the STFT with Variable Frequency Resolution for the Time-Frequency Auditory Model”, J. Audio Eng. Soc., Apr. 1999, vol. 47, No. 4, pp. 240-251. |
Kates, James M. “A Time-Domain Digital Cochlear Model”, IEEE Transactions on Signal Processing, Dec. 1991, vol. 39, No. 12, pp. 2573-2592. |
Kato et al., “Noise Suppression with High Speech Quality Based on Weighted Noise Estimation and MMSE STSA” Proc. IWAENC [Online] 2001, pp. 183-186. |
Lazzaro, John et al., “A Silicon Model of Auditory Localization,” Neural Computation Spring 1989, vol. 1, pp. 47-57, Massachusetts Institute of Technology. |
Lippmann, Richard P. “Speech Recognition by Machines and Humans”, Speech Communication, Jul. 1997, vol. 22, No. 1, pp. 1-15. |
Liu, Chen et al., “A Two-Microphone Dual Delay-Line Approach for Extraction of a Speech Sound in the Presence of Multiple Interferers”, Journal of the Acoustical Society of America, vol. 110, No. 6, Dec. 2001, pp. 3218-3231. |
Martin, Rainer et al., “Combined Acoustic Echo Cancellation, Dereverberation and Noise Reduction: A Two Microphone Approach”, Annales des Telecommunications/Annals of Telecommunications, vol. 49, No. 7-8, Jul.-Aug. 1994, pp. 429-438. |
Martin, Rainer “Spectral Subtraction Based on Minimum Statistics”, in Proceedings of the European Signal Processing Conference (EUSIPCO), 1994, pp. 1182-1185. |
Mitra, Sanjit K. Digital Signal Processing: A Computer-Based Approach. 2nd Ed. 2001. pp. 131-133. |
Mizumachi, Mitsunori et al., “Noise Reduction by Paired-Microphones Using Spectral Subtraction”, 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, May 12-15. pp. 1001-1004. |
Moonen, Marc et al., “Multi-Microphone Signal Enhancement Techniques for Noise Suppression and Dereverberation,” http://www.esat.kuleuven.ac.be/sista/yearreport97//node37.html, accessed on Apr. 21, 1998. |
Watts, Lloyd, Narrative of Prior Disclosure of Audio Display on Feb. 15, 2000 and May 31, 2000. |
Cosi, Piero et al., (1996), “Lyon's Auditory Model Inversion: a Tool for Sound Separation and Speech Enhancement,” Proceedings of ESCA Workshop on ‘The Auditory Basis of Speech Perception,’ Keele University, Keele (UK), Jul. 15-19, 1996, pp. 194-197. |
Parra, Lucas et al., “Convolutive Blind Separation of Non-Stationary Sources”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 3, May 2000, pp. 320-327. |
Rabiner, Lawrence R. et al., “Digital Processing of Speech Signals”, (Prentice-Hall Series in Signal Processing). Upper Saddle River, NJ: Prentice Hall, 1978. |
Weiss, Ron et al., “Estimating Single-Channel Source Separation Masks: Relevance Vector Machine Classifiers vs. Pitch-Based Masking”, Workshop on Statistical and Perceptual Audio Processing, 2006. |
Schimmel, Steven et al., “Coherent Envelope Detection for Modulation Filtering of Speech,” 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, No. 7, pp. 221-224. |
Slaney, Malcolm, “Lyon's Cochlear Model”, Advanced Technology Group, Apple Technical Report #13, Apple Computer, Inc., 1988, pp. 1-79. |
Slaney, Malcolm, et al., “Auditory Model Inversion for Sound Separation,” 1994 IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 19-22, vol. 2, pp. 77-80. |
Slaney, Malcolm. “An Introduction to Auditory Model Inversion”, Interval Technical Report IRC 1994-014, http://coweb.ecn.purdue.edu/~maclom/interval/1994-014/, Sep. 1994, accessed on Jul. 6, 2010. |
Solbach, Ludger “An Architecture for Robust Partial Tracking and Onset Localization in Single Channel Audio Signal Mixes”, Technical University Hamburg-Harburg, 1998. |
Soon et al., “Low Distortion Speech Enhancement” Proc. Inst. Elect. Eng. [Online] 2000, vol. 147, pp. 247-253. |
Stahl, V. et al., “Quantile Based Noise Estimation for Spectral Subtraction and Wiener Filtering,” 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Jun. 5-9, vol. 3, pp. 1875-1878. |
Syntrillium Software Corporation, “Cool Edit User's Manual”, 1996, pp. 1-74. |
Tashev, Ivan et al., “Microphone Array for Headset with Spatial Noise Suppressor”, http://research.microsoft.com/users/ivantash/Documents/Tashev_MAforHeadset_HSCMA_05.pdf. (4 pages). |
Tchorz, Jurgen et al., “SNR Estimation Based on Amplitude Modulation Analysis with Applications to Noise Suppression”, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 3, May 2003, pp. 184-192. |
Valin, Jean-Marc et al., “Enhanced Robot Audition Based on Microphone Array Source Separation with Post-Filter”, Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 28-Oct. 2, 2004, Sendai, Japan. pp. 2123-2128. |
Watts, Lloyd, “Robust Hearing Systems for Intelligent Machines,” Applied Neurosystems Corporation, 2001, pp. 1-5. |
Widrow, B. et al., “Adaptive Antenna Systems,” Proceedings of the IEEE, vol. 55, No. 12, pp. 2143-2159, Dec. 1967. |
Yoo, Heejong et al., “Continuous-Time Audio Noise Suppression and Real-Time Implementation”, 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, May 13-17, pp. IV3980-IV3983. |
Non-Final Office Action, Oct. 27, 2003, U.S. Appl. No. 09/534,682, filed Mar. 24, 2000. |
Non-Final Office Action, Feb. 10, 2004, U.S. Appl. No. 09/534,682, filed Mar. 24, 2000. |
Final Office Action, Dec. 17, 2004, U.S. Appl. No. 09/534,682, filed Mar. 24, 2000. |
Non-Final Office Action, Apr. 20, 2005, U.S. Appl. No. 09/534,682, filed Mar. 24, 2000. |
Notice of Allowance, Oct. 26, 2005, U.S. Appl. No. 09/534,682, filed Mar. 24, 2000. |
Non-Final Office Action, May 3, 2005, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Final Office Action, Oct. 19, 2005, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Advisory Action, Jan. 20, 2006, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Non-Final Office Action, May 17, 2006, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Non-Final Office Action, Nov. 16, 2006, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Final Office Action, Jun. 15, 2007, U.S. Appl. No. 09/993,442, filed Nov. 13, 2001. |
Non-Final Office Action, Oct. 8, 2003, U.S. Appl. No. 10/004,141, filed Nov. 14, 2001. |
Notice of Allowance, Feb. 24, 2004, U.S. Appl. No. 10/004,141, filed Nov. 14, 2001. |
Non-Final Office Action, May 9, 2003, U.S. Appl. No. 10/074,991, filed Feb. 13, 2002. |
Notice of Allowance, Jun. 4, 2003, U.S. Appl. No. 10/074,991, filed Feb. 13, 2002. |
Non-Final Office Action, Jun. 26, 2006, U.S. Appl. No. 10/074,991, filed Feb. 13, 2002. |
Final Office Action, Feb. 23, 2007, U.S. Appl. No. 10/074,991, filed Feb. 13, 2002. |
Non-Final Office Action, Oct. 6, 2005, U.S. Appl. No. 10/177,049, filed Jun. 21, 2002. |
Final Office Action, Mar. 28, 2006, U.S. Appl. No. 10/177,049, filed Jun. 21, 2002. |
Advisory Action, Jun. 19, 2006, U.S. Appl. No. 10/177,049, filed Jun. 21, 2002. |
Non-Final Office Action, Dec. 13, 2006, U.S. Appl. No. 10/613,224, filed Jul. 3, 2003. |
Non-Final Office Action, Jun. 13, 2007, U.S. Appl. No. 10/613,224, filed Jul. 3, 2003. |
Non-Final Office Action, Jun. 13, 2006, U.S. Appl. No. 10/840,201, filed May 5, 2004. |
Non-Final Office Action, Mar. 30, 2010, U.S. Appl. No. 11/343,524, filed Jan. 30, 2006. |
Non-Final Office Action, Sep. 13, 2010, U.S. Appl. No. 11/343,524, filed Jan. 30, 2006. |
Final Office Action, Mar. 30, 2011, U.S. Appl. No. 11/343,524, filed Jan. 30, 2006. |
Final Office Action, May 21, 2012, U.S. Appl. No. 11/343,524, filed Jan. 30, 2006. |
Notice of Allowance, Oct. 9, 2012, U.S. Appl. No. 11/343,524, filed Jan. 30, 2006. |
Non-Final Office Action, Aug. 5, 2008, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, Jan. 21, 2009, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Final Office Action, Sep. 3, 2009, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, May 10, 2011, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Final Office Action, Oct. 24, 2011, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Notice of Allowance, Feb. 13, 2012, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, Apr. 7, 2011, U.S. Appl. No. 11/699,732, filed Jan. 29, 2007. |
Final Office Action, Dec. 6, 2011, U.S. Appl. No. 11/699,732, filed Jan. 29, 2007. |
Advisory Action, Feb. 14, 2012, U.S. Appl. No. 11/699,732, filed Jan. 29, 2007. |
Notice of Allowance, Mar. 15, 2012, U.S. Appl. No. 11/699,732, filed Jan. 29, 2007. |
Non-Final Office Action, Aug. 18, 2010, U.S. Appl. No. 11/825,563, filed Jul. 6, 2007. |
Final Office Action, Apr. 28, 2011, U.S. Appl. No. 11/825,563, filed Jul. 6, 2007. |
Non-Final Office Action, Apr. 24, 2013, U.S. Appl. No. 11/825,563, filed Jul. 6, 2007. |
Final Office Action, Dec. 30, 2013, U.S. Appl. No. 11/825,563, filed Jul. 6, 2007. |
Notice of Allowance, Mar. 25, 2014, U.S. Appl. No. 11/825,563, filed Jul. 6, 2007. |
Non-Final Office Action, Oct. 3, 2011, U.S. Appl. No. 12/004,788, filed Dec. 21, 2007. |
Notice of Allowance, Feb. 23, 2012, U.S. Appl. No. 12/004,788, filed Dec. 21, 2007. |
Non-Final Office Action, Sep. 14, 2011, U.S. Appl. No. 12/004,897, filed Dec. 21, 2007. |
Notice of Allowance, Jan. 27, 2012, U.S. Appl. No. 12/004,897, filed Dec. 21, 2007. |
Non-Final Office Action, Jul. 28, 2011, U.S. Appl. No. 12/072,931, filed Feb. 29, 2008. |
Notice of Allowance, Mar. 1, 2012, U.S. Appl. No. 12/072,931, filed Feb. 29, 2008. |
Notice of Allowance, Mar. 1, 2012, U.S. Appl. No. 12/080,115, filed Mar. 31, 2008. |
Non-Final Office Action, Nov. 14, 2011, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Final Office Action, Apr. 24, 2012, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Advisory Action, Jul. 3, 2012, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Non-Final Office Action, Mar. 11, 2014, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Final Office Action, Jul. 11, 2014, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Non-Final Office Action, Dec. 8, 2014, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Notice of Allowance, Jul. 7, 2015, U.S. Appl. No. 12/215,980, filed Jun. 30, 2008. |
Non-Final Office Action, Jul. 13, 2011, U.S. Appl. No. 12/217,076, filed Jun. 30, 2008. |
Final Office Action, Nov. 16, 2011, U.S. Appl. No. 12/217,076, filed Jun. 30, 2008. |
Non-Final Office Action, Mar. 14, 2012, U.S. Appl. No. 12/217,076, filed Jun. 30, 2008. |
Final Office Action, Sep. 19, 2012, U.S. Appl. No. 12/217,076, filed Jun. 30, 2008. |
Notice of Allowance, Apr. 15, 2013, U.S. Appl. No. 12/217,076, filed Jun. 30, 2008. |
Non-Final Office Action, Sep. 1, 2011, U.S. Appl. No. 12/286,909, filed Oct. 2, 2008. |
Notice of Allowance, Feb. 28, 2012, U.S. Appl. No. 12/286,909, filed Oct. 2, 2008. |
Non-Final Office Action, Nov. 15, 2011, U.S. Appl. No. 12/286,995, filed Oct. 2, 2008. |
Final Office Action, Apr. 10, 2012, U.S. Appl. No. 12/286,995, filed Oct. 2, 2008. |
Notice of Allowance, Mar. 13, 2014, U.S. Appl. No. 12/286,995, filed Oct. 2, 2008. |
Non-Final Office Action, Dec. 28, 2011, U.S. Appl. No. 12/288,228, filed Oct. 16, 2008. |
Non-Final Office Action, Dec. 30, 2011, U.S. Appl. No. 12/422,917, filed Apr. 13, 2009. |
Final Office Action, May 14, 2012, U.S. Appl. No. 12/422,917, filed Apr. 13, 2009. |
Advisory Action, Jul. 27, 2012, U.S. Appl. No. 12/422,917, filed Apr. 13, 2009. |
Notice of Allowance, Sep. 11, 2014, U.S. Appl. No. 12/422,917, filed Apr. 13, 2009. |
Non-Final Office Action, Jun. 20, 2012, U.S. Appl. No. 12/649,121, filed Dec. 29, 2009. |
Final Office Action, Nov. 28, 2012, U.S. Appl. No. 12/649,121, filed Dec. 29, 2009. |
Advisory Action, Feb. 19, 2013, U.S. Appl. No. 12/649,121, filed Dec. 29, 2009. |
Notice of Allowance, Mar. 19, 2013, U.S. Appl. No. 12/649,121, filed Dec. 29, 2009. |
Non-Final Office Action, Feb. 19, 2013, U.S. Appl. No. 12/944,659, filed Nov. 11, 2010. |
Notice of Allowance, May 25, 2011, U.S. Appl. No. 13/016,916, filed Jan. 28, 2011. |
Notice of Allowance, Aug. 4, 2011, U.S. Appl. No. 13/016,916, filed Jan. 28, 2011. |
Non-Final Office Action, Nov. 22, 2013, U.S. Appl. No. 13/363,362, filed Jan. 31, 2012. |
Final Office Action, Sep. 12, 2014, U.S. Appl. No. 13/363,362, filed Jan. 31, 2012. |
Non-Final Office Action, Oct. 28, 2015, U.S. Appl. No. 13/363,362, filed Jan. 31, 2012. |
Non-Final Office Action, Dec. 4, 2013, U.S. Appl. No. 13/396,568, filed Feb. 14, 2012. |
Final Office Action, Sep. 23, 2014, U.S. Appl. No. 13/396,568, filed Feb. 14, 2012. |
Non-Final Office Action, Nov. 5, 2015, U.S. Appl. No. 13/396,568, filed Feb. 14, 2012. |
Non-Final Office Action, Sep. 17, 2013, U.S. Appl. No. 13/397,597, filed Feb. 15, 2012. |
Final Office Action, Apr. 1, 2014, U.S. Appl. No. 13/397,597, filed Feb. 15, 2012. |
Non-Final Office Action, Nov. 21, 2014, U.S. Appl. No. 13/397,597, filed Feb. 15, 2012. |
Non-Final Office Action, Jun. 7, 2012, U.S. Appl. No. 13/426,436, filed Mar. 21, 2012. |
Final Office Action, Dec. 31, 2012, U.S. Appl. No. 13/426,436, filed Mar. 21, 2012. |
Non-Final Office Action, Sep. 12, 2013, U.S. Appl. No. 13/426,436, filed Mar. 21, 2012. |
Notice of Allowance, Jul. 16, 2014, U.S. Appl. No. 13/426,436, filed Mar. 21, 2012. |
Non-Final Office Action, Jul. 15, 2014, U.S. Appl. No. 13/432,490, filed Mar. 28, 2012. |
Notice of Allowance, Apr. 3, 2015, U.S. Appl. No. 13/432,490, filed Mar. 28, 2012. |
Notice of Allowance, Oct. 17, 2012, U.S. Appl. No. 13/565,751, filed Aug. 2, 2012. |
Non-Final Office Action, Jan. 9, 2012, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Non-Final Office Action, Dec. 28, 2012, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Non-Final Office Action, Mar. 7, 2013, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Final Office Action, Apr. 29, 2013, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Non-Final Office Action, Nov. 27, 2013, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Notice of Allowance, Jan. 30, 2014, U.S. Appl. No. 13/664,299, filed Oct. 30, 2012. |
Non-Final Office Action, Jun. 4, 2013, U.S. Appl. No. 13/705,132, filed Dec. 4, 2012. |
Final Office Action, Dec. 19, 2013, U.S. Appl. No. 13/705,132, filed Dec. 4, 2012. |
Notice of Allowance, Jun. 19, 2014, U.S. Appl. No. 13/705,132, filed Dec. 4, 2012. |
Non-Final Office Action, May 21, 2015, U.S. Appl. No. 14/189,817, filed Feb. 25, 2014. |
Final Office Action, Dec. 15, 2015, U.S. Appl. No. 14/189,817, filed Feb. 25, 2014. |
Notice of Allowance, Oct. 7, 2014, U.S. Appl. No. 14/207,096, filed Mar. 12, 2014. |
Non-Final Office Action, Oct. 28, 2015, U.S. Appl. No. 14/216,567, filed Mar. 17, 2014. |
Non-Final Office Action, Jul. 10, 2014, U.S. Appl. No. 14/279,092, filed May 15, 2014. |
Notice of Allowance, Jan. 29, 2015, U.S. Appl. No. 14/279,092, filed May 15, 2014. |
Non-Final Office Action, Feb. 27, 2015, U.S. Appl. No. 14/336,934, filed Jul. 21, 2014. |
Notice of Allowance, Aug. 28, 2015, U.S. Appl. No. 14/336,934, filed Jul. 21, 2014. |
International Search Report dated Jun. 8, 2001 in Patent Cooperation Treaty Application No. PCT/US2001/008372. |
International Search Report dated Apr. 3, 2003 in Patent Cooperation Treaty Application No. PCT/US2002/036946. |
International Search Report dated May 29, 2003 in Patent Cooperation Treaty Application No. PCT/US2003/004124. |
International Search Report and Written Opinion dated Oct. 19, 2007 in Patent Cooperation Treaty Application No. PCT/US2007/000463. |
International Search Report and Written Opinion dated Apr. 9, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/021654. |
International Search Report and Written Opinion dated Sep. 16, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/012628. |
International Search Report and Written Opinion dated Oct. 1, 2008 in Patent Cooperation Treaty Application No. PCT/US2008/008249. |
International Search Report and Written Opinion dated Aug. 27, 2009 in Patent Cooperation Treaty Application No. PCT/US2009/003813. |
Dahl, Mattias et al., “Acoustic Echo and Noise Cancelling Using Microphone Arrays”, International Symposium on Signal Processing and its Applications, ISSPA, Gold Coast, Australia, Aug. 25-30, 1996, pp. 379-382. |
Demol, M. et al., “Efficient Non-Uniform Time-Scaling of Speech With WSOLA for CALL Applications”, Proceedings of InSTIL/ICALL2004 - NLP and Speech Technologies in Advanced Language Learning Systems, Venice, Jun. 17-19, 2004. |
Laroche, Jean. “Time and Pitch Scale Modification of Audio Signals”, in “Applications of Digital Signal Processing to Audio and Acoustics”, The Kluwer International Series in Engineering and Computer Science, vol. 437, pp. 279-309, 2002. |
Moulines, Eric et al., “Non-Parametric Techniques for Pitch-Scale and Time-Scale Modification of Speech”, Speech Communication, vol. 16, pp. 175-205, 1995. |
Verhelst, Werner, “Overlap-Add Methods for Time-Scaling of Speech”, Speech Communication vol. 30, pp. 207-221, 2000. |
Bach et al., “Learning Spectral Clustering, With Application to Speech Separation,” Journal of Machine Learning Research, 2006. |
Mokbel et al., 1995, IEEE Transactions on Speech and Audio Processing, vol. 3, No. 5, Sep. 1995, pp. 346-356. |
Office Action mailed Oct. 14, 2013 in Taiwanese Patent Application 097125481, filed Jul. 4, 2008. |
Office Action mailed Oct. 29, 2013 in Japanese Patent Application 2011-516313, filed Jun. 26, 2009. |
Office Action mailed Dec. 20, 2013 in Taiwanese Patent Application 096146144, filed Dec. 4, 2007. |
Office Action mailed Dec. 9, 2013 in Finnish Patent Application 20100431, filed Jun. 26, 2009. |
Office Action mailed Jan. 20, 2014 in Finnish Patent Application 20100001, filed Jul. 3, 2008. |
Office Action mailed Mar. 10, 2014 in Taiwanese Patent Application 097125481, filed Jul. 4, 2008. |
Bai et al., “Upmixing and Downmixing Two-channel Stereo Audio for Consumer Electronics”. IEEE Transactions on Consumer Electronics [Online] 2007, vol. 53, Issue 3, pp. 1011-1019. |
Jo et al., “Crosstalk cancellation for spatial sound reproduction in portable devices with stereo loudspeakers”. Communications in Computer and Information Science [Online] 2011, vol. 266, pp. 114-123. |
Nongpiur et al., “NEXT cancellation system with improved convergence rate and tracking performance”. IEEE Proceedings - Communications [Online] 2005, vol. 152, Issue 3, pp. 378-384. |
Ahmed et al., “Blind Crosstalk Cancellation for DMT Systems” IEEE - Emergent Technologies Technical Committee. Sep. 2002. pp. 1-5. |
Allowance mailed May 21, 2014 in Finnish Patent Application 20100001, filed Jan. 4, 2010. |
Office Action mailed May 2, 2014 in Taiwanese Patent Application 098121933, filed Jun. 29, 2009. |
Office Action mailed Apr. 15, 2014 in Japanese Patent Application 2010-514871, filed Jul. 3, 2008. |
Elhilali et al., “A cocktail party with a cortical twist: How cortical mechanisms contribute to sound segregation,” J. Acoust. Soc. Am., Dec. 2008; 124(6): 3751-3771. |
Jin et al., “HMM-Based Multipitch Tracking for Noisy and Reverberant Speech.” |
Kawahara, H., et al., “TANDEM-STRAIGHT: A temporally stable power spectral representation for periodic signals and applications to interference-free spectrum, F0, and aperiodicity estimation,” IEEE ICASSP 2008. |
Office Action mailed Jun. 27, 2014 in Korean Patent Application No. 10-2010-7000194, filed Jan. 6, 2010. |
Office Action mailed Jun. 18, 2014 in Finnish Patent Application No. 20080428, filed Jul. 4, 2008. |
International Search Report & Written Opinion dated Jul. 15, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/018443, filed Feb. 25, 2014. |
Notice of Allowance dated Aug. 26, 2014 in Taiwanese Application No. 096146144, filed Dec. 4, 2007. |
Notice of Allowance dated Sep. 16, 2014 in Korean Application No. 10-2010-7000194, filed Jul. 3, 2008. |
Notice of Allowance dated Sep. 29, 2014 in Taiwanese Application No. 097125481, filed Jul. 4, 2008. |
Notice of Allowance dated Oct. 10, 2014 in Finnish Application No. 20100001, filed Jul. 3, 2008. |
International Search Report & Written Opinion dated Nov. 12, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/047458, filed Jul. 21, 2014. |
Office Action mailed Oct. 28, 2014 in Japanese Patent Application No. 2011-516313, filed Dec. 27, 2012. |
Purnhagen, Heiko, “Low Complexity Parametric Stereo Coding in MPEG-4,” Proc. of the 7th Int. Conference on Digital Audio Effects (DAFx'04), Naples, Italy, Oct. 5-8, 2004. |
Chang, Chun-Ming et al., “Voltage-Mode Multifunction Filter with Single Input and Three Outputs Using Two Compound Current Conveyors,” IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 46, No. 11, Nov. 1999. |
Notice of Allowance mailed Feb. 10, 2015 in Taiwanese Patent Application No. 098121933, filed Jun. 29, 2009. |
Office Action mailed Jan. 30, 2015 in Finnish Patent Application No. 20080623, filed May 24, 2007. |
Office Action mailed Mar. 24, 2015 in Japanese Patent Application No. 2011-516313, filed Jun. 26, 2009. |
Office Action mailed Apr. 16, 2015 in Korean Patent Application No. 10-2011-7000440, filed Jun. 26, 2009. |
Notice of Allowance mailed Jun. 2, 2015 in Japanese Patent Application 2011-516313, filed Jun. 26, 2009. |
Office Action mailed Jun. 4, 2015 in Finnish Patent Application 20080428, filed Jan. 5, 2007. |
Office Action mailed Jun. 9, 2015 in Japanese Patent Application 2014-165477, filed Jul. 3, 2008. |
Notice of Allowance mailed Aug. 13, 2015 in Finnish Patent Application 20080623, filed May 24, 2007. |
International Search Report & Written Opinion dated Nov. 27, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/047263, filed Aug. 27, 2015. |
International Search Report and Written Opinion dated Sep. 1, 2011 in Patent Cooperation Treaty Application No. PCT/US11/37250. |
Fazel et al., “An overview of statistical pattern recognition techniques for speaker verification,” IEEE, May 2011. |
Sundaram et al., “Discriminating Two Types of Noise Sources Using Cortical Representation and Dimension Reduction Technique,” IEEE, 2007. |
Togneri et al., “A Comparison of the LBG, LVQ, MLP, SOM and GMM Algorithms for Vector Quantisation and Clustering Analysis,” University of Western Australia, 1992. |
Klautau et al., “Discriminative Gaussian Mixture Models: A Comparison with Kernel Classifiers,” ICML, 2003. |
International Search Report & Written Opinion dated Mar. 18, 2014 in Patent Cooperation Treaty Application No. PCT/US2013/065752, filed Oct. 18, 2013. |
Kim et al., “Improving Speech Intelligibility in Noise Using Environment-Optimized Algorithms,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 8, Nov. 2010, pp. 2080-2090. |
Sharma et al., “Rotational Linear Discriminant Analysis Technique for Dimensionality Reduction,” IEEE Transactions on Knowledge and Data Engineering, vol. 20, No. 10, Oct. 2008, pp. 1336-1347. |
Temko et al., “Classification of Acoustic Events Using SVM-Based Clustering Schemes,” Pattern Recognition 39, No. 4, 2006, pp. 682-694. |
Office Action mailed Jun. 17, 2015 in Japanese Patent Application 2013-519682, filed May 19, 2011. |
Notice of Allowance dated Feb. 24, 2016 in Korean Application No. 10-2011-7000440, filed Jun. 26, 2009. |
Hu et al., “Robust Speaker's Location Detection in a Vehicle Environment Using GMM Models,” IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 36, No. 2, Apr. 2006, pp. 403-412. |
Laroche, Jean et al., “Noise Suppression Assisted Automatic Speech Recognition”, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010. |
Goodwin, Michael M. et al., “Key Click Suppression”, U.S. Appl. No. 14/745,176, filed Jun. 19, 2015. |
Non-Final Office Action, Aug. 1, 2012, U.S. Appl. No. 12/860,043, filed Aug. 20, 2010. |
Notice of Allowance, Jan. 18, 2013, U.S. Appl. No. 12/860,043, filed Aug. 22, 2010. |
Non-Final Office Action, Aug. 17, 2012, U.S. Appl. No. 12/868,622, filed Aug. 25, 2010. |
Final Office Action, Feb. 22, 2013, U.S. Appl. No. 12/868,622, filed Aug. 25, 2010. |
Advisory Action, May 14, 2013, U.S. Appl. No. 12/868,622, filed Aug. 25, 2010. |
Notice of Allowance, May 1, 2014, U.S. Appl. No. 12/868,622, filed Aug. 25, 2010. |
Non-Final Office Action, Jun. 26, 2013, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010. |
Non-Final Office Action, Jul. 21, 2014, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010. |
Non-Final Office Action, May 20, 2015, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010. |
Final Office Action, Jan. 12, 2016, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010. |
Non-Final Office Action, May 13, 2014, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010. |
Final Office Action, Feb. 10, 2015, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010. |
Non-Final Office Action, Nov. 3, 2015, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010. |
Final Office Action, May 18, 2016, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010. |
Non-Final Office Action, Jan. 2, 2013, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010. |
Final Office Action, May 7, 2013, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010. |
Non-Final Office Action, Jul. 31, 2014, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010. |
Non-Final Office Action, May 15, 2015, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010. |
Notice of Allowance, Oct. 3, 2013, U.S. Appl. No. 13/157,238, filed Jun. 9, 2011. |
Final Office Action, May 5, 2016, U.S. Appl. No. 13/363,362, filed Jan. 31, 2012. |
Non-Final Office Action, Jan. 31, 2013, U.S. Appl. No. 13/414,121, filed Mar. 7, 2012. |
Notice of Allowance, Jul. 29, 2013, U.S. Appl. No. 13/414,121, filed Mar. 7, 2012. |
Non-Final Office Action, May 11, 2012, U.S. Appl. No. 13/424,189, filed Mar. 19, 2012. |
Final Office Action, Sep. 4, 2012, U.S. Appl. No. 13/424,189, filed Mar. 19, 2012. |
Final Office Action, Nov. 28, 2012, U.S. Appl. No. 13/424,189, filed Mar. 19, 2012. |
Notice of Allowance, Mar. 7, 2013, U.S. Appl. No. 13/424,189, filed Mar. 19, 2012. |
Non-Final Office Action, Nov. 7, 2012, U.S. Appl. No. 13/492,780, filed Jun. 8, 2012. |
Non-Final Office Action, May 8, 2013, U.S. Appl. No. 13/492,780, filed Jun. 8, 2012. |
Final Office Action, Oct. 23, 2013, U.S. Appl. No. 13/492,780, filed Jun. 8, 2012. |
Notice of Allowance, Nov. 24, 2014, U.S. Appl. No. 13/492,780, filed Jun. 8, 2012. |
Non-Final Office Action, Oct. 8, 2013, U.S. Appl. No. 13/734,208, filed Jan. 4, 2013. |
Notice of Allowance, Jan. 31, 2014, U.S. Appl. No. 13/734,208, filed Jan. 4, 2013. |
Non-Final Office Action, May 28, 2013, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013. |
Non-Final Office Action, Dec. 13, 2013, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013. |
Final Office Action, Apr. 9, 2014, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013. |
Non-Final Office Action, Sep. 29, 2014, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013. |
Notice of Allowance, Jul. 15, 2015, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013. |
Non-Final Office Action, May 23, 2014, U.S. Appl. No. 13/859,186, filed Apr. 9, 2013. |
Final Office Action, Dec. 3, 2014, U.S. Appl. No. 13/859,186, filed Apr. 9, 2013. |
Non-Final Office Action, Jul. 7, 2015, U.S. Appl. No. 13/859,186, filed Apr. 9, 2013. |
Final Office Action, Feb. 2, 2016, U.S. Appl. No. 13/859,186, filed Apr. 9, 2013. |
Notice of Allowance, Apr. 28, 2016, U.S. Appl. No. 13/859,186, filed Apr. 9, 2013. |
Non-Final Office Action, Apr. 17, 2015, U.S. Appl. No. 13/888,796, filed May 7, 2013. |
Notice of Allowance, May 20, 2015, U.S. Appl. No. 13/888,796, filed May 7, 2013. |
Non-Final Office Action, Jul. 15, 2015, U.S. Appl. No. 14/058,059, filed Oct. 18, 2013. |
Non-Final Office Action, Jun. 26, 2015, U.S. Appl. No. 14/262,489, filed Apr. 25, 2014. |
Notice of Allowance, Jan. 28, 2016, U.S. Appl. No. 14/313,883, filed Jun. 24, 2014. |
Non-Final Office Action, May 6, 2016, U.S. Appl. No. 14/495,550, filed Sep. 24, 2014. |
Non-Final Office Action, Jun. 10, 2015, U.S. Appl. No. 14/628,109, filed Feb. 20, 2015. |
Final Office Action, Mar. 16, 2016, U.S. Appl. No. 14/628,109, filed Feb. 20, 2015. |
Non-Final Office Action, Apr. 8, 2016, U.S. Appl. No. 14/838,133, filed Aug. 27, 2015. |
Non-Final Office Action, May 31, 2016, U.S. Appl. No. 14/874,329, filed Oct. 2, 2015. |
Final Office Action, Jun. 17, 2016, U.S. Appl. No. 13/396,568, filed Feb. 14, 2012. |
Number | Date | Country |
---|---|---|
61709908 | Oct 2012 | US |