The present invention relates generally to audio processing, and, more specifically, to systems and methods for stereo separation and directional suppression with omni-directional microphones.
Recording stereo audio with a mobile device, such as a smartphone or tablet computer, may be useful for making videos of concerts, performances, and other events. Typical stereo recording devices are designed either with a large separation between microphones or with precisely angled directional microphones, exploiting the acoustic properties of the directional microphones to capture stereo effects. Mobile devices, however, are limited in size and, therefore, the distance between microphones is significantly smaller than the minimum distance required for optimal omni-directional microphone stereo separation. Using directional microphones is not practical due to the size limitations of mobile devices and may increase the overall costs associated with the mobile devices. Additionally, due to the limited space for placing directional microphones, a user of the mobile device can be a dominant source for the directional microphones, often interfering with target sound sources.
Another aspect of recording stereo audio using a mobile device is the problem of capturing acoustically representative signals to be used in subsequent processing. Traditional microphones used for mobile devices may not be able to handle the high sound pressure conditions in which stereo recording is performed, such as a performance, a concert, or a windy environment. As a result, signals generated by the microphones can become distorted upon reaching their acoustic overload point (AOP).
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Provided are systems and methods for stereo separation and directional suppression with omni-directional microphones. An example method includes receiving at least a first audio signal and a second audio signal. The first audio signal can represent sound captured by a first microphone associated with a first location. The second audio signal can represent sound captured by a second microphone associated with a second location. The first microphone and the second microphone can include omni-directional microphones. The method can include generating a first channel signal of a stereo audio signal by forming, based on the at least first audio signal and second audio signal, a first beam at the first location. The method can also include generating a second channel signal of the stereo audio signal by forming, based on the at least first audio signal and second audio signal, a second beam at the second location.
In some embodiments, a distance between the first microphone and the second microphone is limited by a size of a mobile device. In certain embodiments, the first microphone is located at the top of the mobile device and the second microphone is located at the bottom of the mobile device. In other embodiments, the first and second microphones (and additional microphones, if any) may be located differently, including but not limited to, the microphones being located along a side of the device, e.g., separated along the side of a tablet having microphones on the side.
In some embodiments, directions of the first beam and the second beam are fixed relative to a line between the first location and the second location. In some embodiments, the method further includes receiving at least one other acoustic signal. The other acoustic signal can be captured by another microphone associated with another location. The other microphone includes an omni-directional microphone. In some embodiments, forming the first beam and the second beam is further based on the other acoustic signal. In some embodiments, the other microphone is located off the line between the first microphone and the second microphone.
In some embodiments, forming the first beam includes reducing signal energy of acoustic signal components associated with sources outside the first beam. Forming the second beam can include reducing signal energy of acoustic signal components associated with further sources off the second beam. In certain embodiments, reducing signal energy is performed by a subtractive suppression. In some embodiments, the first microphone and the second microphone include microphones having an acoustic overload point (AOP) greater than a pre-determined sound pressure level. In certain embodiments, the pre-determined sound pressure level is 120 decibels.
According to another example embodiment of the present disclosure, the steps of the method for stereo separation and directional suppression with omni-directional microphones are stored on a machine-readable medium comprising instructions, which when implemented by one or more processors perform the recited steps.
Other example embodiments of the disclosure and aspects will become apparent from the following description taken in conjunction with the following drawings.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
The technology disclosed herein relates to systems and methods for stereo separation and directional suppression with omni-directional microphones. Embodiments of the present technology may be practiced with audio devices operable at least to capture and process acoustic signals. In some embodiments, the audio devices may be hand-held devices, such as wired and/or wireless remote controls, notebook computers, tablet computers, phablets, smart phones, personal digital assistants, media players, mobile telephones, and the like. The audio devices can have radio frequency (RF) receivers, transmitters and transceivers; wired and/or wireless telecommunications and/or networking devices; amplifiers; audio and/or video players; encoders; decoders; speakers; inputs; outputs; storage devices; and user input devices. Audio devices may have input devices such as buttons, switches, keys, keyboards, trackballs, sliders, touch screens, one or more microphones, gyroscopes, accelerometers, global positioning system (GPS) receivers, and the like. The audio devices may have outputs, such as LED indicators, video displays, touchscreens, speakers, and the like.
In various embodiments, the audio devices operate in stationary and portable environments. Stationary environments can include residential and commercial buildings or structures, and the like. For example, stationary environments can include concert halls, living rooms, bedrooms, home theaters, conference rooms, auditoriums, business premises, and the like. Portable environments can include moving vehicles, moving persons or other transportation means, and the like.
According to an example embodiment, a method for stereo separation and directional suppression includes receiving at least a first audio signal and a second audio signal. The first audio signal can represent sound captured by a first microphone associated with a first location. The second audio signal can represent sound captured by a second microphone associated with a second location. The first microphone and the second microphone can comprise omni-directional microphones. The example method includes generating a first stereo signal by forming, based on the at least first audio signal and second audio signal, a first beam at the first location. The method can further include generating a second stereo signal by forming, based on the at least first audio signal and second audio signal, a second beam at the second location.
The primary microphone 106a and the secondary microphone 106b of the audio device 104 may comprise omni-directional microphones. In some embodiments, the primary microphone 106a is located at the bottom of the audio device 104 and, accordingly, may be referred to as the bottom microphone. Similarly, in some embodiments, the secondary microphone 106b is located at the top of the audio device 104 and, accordingly, may be referred to as the top microphone. In other embodiments, the first and second microphones (and additional microphones, if any) may be located differently, including but not limited to, the microphones being located along a side of the device, e.g., separated along the side of a tablet having microphones on the side.
Some embodiments of the present disclosure utilize level differences (e.g., energy differences), phase differences, and differences in arrival times between the acoustic signals received by the two microphones 106a and 106b. Because the primary microphone 106a is closer to the audio source 112 than the secondary microphone 106b, the intensity level of the audio signal from audio source 112 (represented graphically by 122, which may also include noise in addition to desired sounds) is higher for the primary microphone 106a, resulting in a larger energy level received by the primary microphone 106a. Similarly, because the secondary microphone 106b is closer to the audio source 116 than the primary microphone 106a, the intensity level of the audio signal from audio source 116 (represented graphically by 126, which may also include noise in addition to desired sounds) is higher for the secondary microphone 106b, resulting in a larger energy level received by the secondary microphone 106b. On the other hand, the intensity level of the audio signal from audio source 114 (represented graphically by 124, which may also include noise in addition to desired sounds) could be higher for either of the two microphones 106a and 106b, depending on, for example, its location within cones 108a and 108b.
The level differences can be used to discriminate between speech and noise in the time-frequency domain. Some embodiments may use a combination of energy level differences and differences in arrival times to discriminate between acoustic signals coming from different directions. In some embodiments, a combination of energy level differences and phase differences is used for directional audio capture.
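To make the cues concrete, the following is a minimal sketch, not taken from the disclosure, of how per-band level differences and phase differences between two microphone signals might be computed; the function names, frame sizes, and the use of a short-time Fourier transform as a generic sub-band analysis are illustrative assumptions.

```python
# Minimal sketch (not from the disclosure): per-band level and phase
# differences between two omni-directional microphone signals, using an
# STFT as a generic sub-band analysis.
import numpy as np

def stft(x, frame=256, hop=128):
    """Return complex STFT frames of a 1-D signal (Hann window)."""
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=-1)          # shape: (frames, bins)

def level_and_phase_cues(x_primary, x_secondary):
    """Per time-frequency bin: level difference (dB) and phase difference (rad)."""
    X1, X2 = stft(x_primary), stft(x_secondary)
    eps = 1e-12
    ild = 20.0 * np.log10((np.abs(X1) + eps) / (np.abs(X2) + eps))
    ipd = np.angle(X1 * np.conj(X2))              # wrapped to [-pi, pi]
    return ild, ipd

# Example: a source closer to the primary microphone yields a positive level difference.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
x1 = 1.0 * tone                                   # louder at the primary mic
x2 = 0.5 * tone
ild, ipd = level_and_phase_cues(x1, x2)
print(round(float(np.median(ild)), 1), "dB median level difference")
```

In this toy case the median level difference is about 6 dB, which a time-frequency mask or beam-selection stage could then exploit to favor one direction over another.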
Various example embodiments of the present technology utilize level differences (e.g., energy differences), phase differences, and differences in arrival times for stereo separation and directional suppression of acoustic signals captured by microphones 106a and 106b. As shown in
Processor 220 may execute instructions and modules stored in a memory (not illustrated in
The example receiver 210 can be a sensor configured to receive a signal from a communications network. In some embodiments, the receiver 210 may include an antenna device. The signal may then be forwarded to the audio processing system 230 for noise reduction and other processing using the techniques described herein. The audio processing system 230 may provide a processed signal to the output device 240 for providing audio output(s) to the user. The present technology may be used in one or both of the transmitting and receiving paths of the audio device 104.
The audio processing system 230 can be configured to receive acoustic signals that represent sound from acoustic source(s) via the primary microphone 106a and secondary microphone 106b and process the acoustic signals. The processing may include performing noise reduction for an acoustic signal. The example audio processing system 230 is discussed in more detail below. The primary and secondary microphones 106a, 106b may be spaced a distance apart in order to allow for detecting an energy level difference, time-of-arrival difference, or phase difference between them. The acoustic signals received by the primary microphone 106a and the secondary microphone 106b may be converted into electrical signals (e.g., a primary electrical signal and a secondary electrical signal). The electrical signals may, in turn, be converted by an analog-to-digital converter (not shown) into digital signals that represent the captured sound, for processing in accordance with some embodiments.
The output device 240 can include any device which provides an audio output to the user. For example, the output device 240 may include a loudspeaker, an earpiece of a headset or handset, or a memory where the output is stored for video/audio extraction at a later time, e.g., for transfer to a computer, video disc, or other media for later use.
In various embodiments, where the primary and secondary microphones include omni-directional microphones that are closely spaced (e.g., 1-2 cm apart), a beamforming technique may be used to simulate forward-facing and backward-facing directional microphones. The energy level difference may then be used to discriminate between speech and noise in the time-frequency domain, which may be used in noise reduction.
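The following is a hedged sketch, not the disclosed implementation, of how two closely spaced omni-directional microphones can be combined into forward- and backward-facing patterns with a delay-and-subtract (differential) beamformer; the spacing D, speed of sound C, and frame parameters are assumptions for illustration.

```python
# Hedged sketch: simulating forward- and backward-facing directional patterns
# from two closely spaced omni mics via frequency-domain delay-and-subtract.
import numpy as np

C = 343.0            # speed of sound, m/s (assumed)
D = 0.015            # microphone spacing, m (~1.5 cm, assumed)

def differential_beams(x_front, x_back, fs, frame=512, hop=256):
    """Return (forward_beam, backward_beam) time-domain signals."""
    window = np.hanning(frame)
    tau = D / C                                 # inter-microphone travel time
    n_frames = 1 + (len(x_front) - frame) // hop
    out_f = np.zeros(len(x_front))
    out_b = np.zeros(len(x_front))
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    delay = np.exp(-2j * np.pi * freqs * tau)   # fractional delay per bin
    for i in range(n_frames):
        s = slice(i * hop, i * hop + frame)
        X1 = np.fft.rfft(x_front[s] * window)
        X2 = np.fft.rfft(x_back[s] * window)
        fwd = X1 - delay * X2                   # null toward the back
        bwd = X2 - delay * X1                   # null toward the front
        out_f[s] += np.fft.irfft(fwd, frame)    # overlap-add (50% hop, Hann)
        out_b[s] += np.fft.irfft(bwd, frame)
    return out_f, out_b
```

Because the microphones are so close together, the subtraction attenuates low frequencies (roughly 6 dB per octave), so a low-frequency equalization stage is typically added after such a differential beamformer.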
FCT 302 and 304 may receive acoustic signals from the audio device microphones and convert the acoustic signals into frequency-range sub-band signals. In some embodiments, FCT 302 and 304 are implemented as one or more modules operable to generate one or more sub-band signals for each received microphone signal. FCT 302 and 304 can receive an acoustic signal representing sound from each microphone included in audio device 104. These acoustic signals are illustrated as signals X1-XI, wherein X1 represents the primary microphone signal and Xi represents the rest (e.g., N−1) of the microphone signals. In some embodiments, the audio processing system 230 of
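As a rough stand-in for this analysis stage (the Fast Cochlea Transform itself is a proprietary cochlea-model analysis and is not reproduced here), the sketch below splits a microphone signal into sub-band signals with a generic band-pass filterbank; the band edges and filter order are assumptions.

```python
# Hedged stand-in for the sub-band analysis stage: a simple band-pass
# filterbank producing one sub-band time signal per band.
import numpy as np
from scipy.signal import butter, sosfilt

def subband_signals(x, fs, edges_hz=(100, 300, 700, 1500, 3000, 6000)):
    """Return a list of sub-band time signals, one per adjacent band in edges_hz."""
    bands = []
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(2, [lo, hi], btype='bandpass', fs=fs, output='sos')
        bands.append(sosfilt(sos, x))
    return bands

# Each microphone signal X1..XN would be analyzed the same way, yielding one
# set of sub-band signals per microphone for the downstream beamformer.
```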
In some embodiments, beamformer 310 receives frequency sub-band signals as well as a zoom indication signal. The zoom indication signal can be received from zoom control 350. The zoom indication signal can be generated in response to user input, analysis of a primary microphone signal or other acoustic signals received by audio device 104, a video zoom feature selection, or some other data. In operation, beamformer 310 receives the sub-band signals, processes them to identify which signals are within a particular area to enhance (or "zoom"), and provides data for the selected signals as output to multiplicative gain expansion module 320. The output may include sub-band signals for the audio source within the area to enhance. Beamformer 310 can also provide a gain factor to multiplicative gain expansion 320. The gain factor may indicate whether multiplicative gain expansion 320 should apply additional gain or reduction to the signals received from beamformer 310. In some embodiments, the gain factor is generated as an energy ratio based on the received microphone signals and components. The gain indication output by beamformer 310 may be a ratio of energy in the energy component of the primary microphone reduced by beamformer 310 to output energy of beamformer 310. Accordingly, the gain may include a boost or cancellation gain expansion factor. An example gain factor is discussed in more detail below.
Beamformer 310 can be implemented as a null processing noise subtraction (NPNS) module, a multiplicative module, or a combination of these modules. When an NPNS module is used with the microphone signals to generate a beam and achieve beamforming, the beam is focused by narrowing the constraints of alpha (α) and gamma (γ). Accordingly, a beam may be manipulated by providing a protective range for the preferred direction. Exemplary beamformer 310 modules are further described in U.S. patent application Ser. No. 14/957,447, entitled “Directional Audio Capture,” and U.S. patent application Ser. No. 12/896,725, entitled “Audio Zoom” (issued as U.S. Pat. No. 9,210,503 on Dec. 8, 2015), the disclosures of which are incorporated herein by reference in their entirety. Additional techniques for reducing undesired audio components of a signal are discussed in U.S. patent application Ser. No. 12/693,998, entitled “Adaptive Noise Reduction Using Level Cues” (issued as U.S. Pat. No. 8,718,290 on May 6, 2014), the disclosure of which is incorporated herein by reference in its entirety.
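The exact NPNS formulation is given in the incorporated applications; the sketch below is only a generic adaptive subtractive null-former in that spirit, with a per-bin cancellation coefficient whose magnitude is clamped as a stand-in for the α/γ "protective range" constraints. The function name, step size, and clamp value are illustrative assumptions.

```python
# Hedged illustration (generic, not the NPNS formulation itself): subtractive
# null forming with one adaptive complex coefficient per sub-band, clamped so
# the null cannot swing into the preferred (protected) direction.
import numpy as np

def null_forming(P, S, alpha_max=1.5, mu=0.1):
    """P, S: complex sub-band frames (frames x bins) from the primary and
    secondary microphones. Returns the primary signal with the
    secondary-correlated (off-beam) component subtracted."""
    n_frames, n_bins = P.shape
    alpha = np.zeros(n_bins, dtype=complex)       # per-bin cancellation coefficient
    out = np.empty_like(P)
    for t in range(n_frames):
        e = P[t] - alpha * S[t]                   # subtract scaled interference estimate
        out[t] = e
        power = np.abs(S[t]) ** 2 + 1e-12
        alpha = alpha + mu * np.conj(S[t]) * e / power   # NLMS-style update
        mag = np.abs(alpha)
        too_big = mag > alpha_max                 # enforce the protective range
        alpha[too_big] *= alpha_max / mag[too_big]
    return out
```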
Multiplicative gain expansion module 320 can receive sub-band signals associated with audio sources within the selected beam, the gain factor from beamformer 310, and the zoom indicator signal. Multiplicative gain expansion module 320 can apply a multiplicative gain based on the gain factor received. In effect, multiplicative gain expansion module 320 can filter the beamformer signal provided by beamformer 310.
The gain factor may be implemented as one of several different energy ratios. For example, the energy ratio may be the ratio of a noise-reduced signal to a primary acoustic signal received from the primary microphone, the ratio of a noise-reduced signal to a detected noise component within the primary microphone signal, the ratio of a noise-reduced signal to a secondary acoustic signal, or the ratio of a noise-reduced signal to an inter-level difference between a primary signal and a further signal. The gain factor may be an indication of signal strength in a target direction versus all other directions. In other words, the gain factor may be indicative of multiplicative expansions and whether these additional expansions should be performed by multiplicative gain expansion 320. Multiplicative gain expansion 320 can output the modified signal and provide the signal to reverb 330 (also referred to herein as reverb (de-reverb) 330).
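One way to read this stage is sketched below: a gain factor computed as an energy ratio between the beamformer output and the primary microphone signal, then used to further attenuate bins the beamformer already attenuated. The exact mapping from ratio to gain is implementation-specific; the names, floor, and expansion exponent here are assumptions.

```python
# Hedged illustration: an energy-ratio gain factor and its use for
# multiplicative gain expansion of the beamformed sub-band signal.
import numpy as np

def energy_ratio_gain(beam_out, primary, floor_db=-30.0):
    """Per-bin ratio of beamformer output energy to primary-mic energy (<= 1)."""
    eps = 1e-12
    ratio = (np.abs(beam_out) ** 2 + eps) / (np.abs(primary) ** 2 + eps)
    return np.clip(ratio, 10.0 ** (floor_db / 10.0), 1.0)

def multiplicative_expansion(beam_out, primary, expansion=0.5):
    """Deepen the suppression of off-beam components.
    expansion = 0 leaves the beamformer output unchanged; larger values
    further attenuate bins where the beamformer removed energy."""
    gain = energy_ratio_gain(beam_out, primary) ** expansion
    return beam_out * gain
```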
Reverb 330 can receive the sub-band signals output by multiplicative gain expansion 320, as well as the microphone signals also received by beamformer 310, and perform reverberation (or dereverberation) of the sub-band signal output by multiplicative gain expansion 320. Reverb 330 may adjust a ratio of direct energy to remaining energy within a signal based on the zoom control indicator provided by zoom control 350. After adjusting the reverberation of the received signal, reverb 330 can provide the modified signal to a mixing component, e.g., mixer 340.
The mixer 340 can receive the reverberation adjusted signal and mix the signal with the signal from the primary microphone. In some embodiments, mixer 340 increases the energy of the signal appropriately when audio is present in the frame and decreases the energy when there is little audio energy present in the frame.
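A minimal sketch of such a mixer is shown below; the frame length, energy threshold, and mixing weights are assumptions chosen only to illustrate emphasizing the processed signal in frames where audio energy is present.

```python
# Hedged sketch: frame-energy-gated mixing of the processed signal with the
# primary microphone signal.
import numpy as np

def mix_with_primary(processed, primary, frame=256, threshold=1e-4):
    """Blend processed and primary signals frame by frame, favoring the
    processed signal when the frame contains appreciable audio energy."""
    out = np.copy(primary)
    for start in range(0, min(len(processed), len(primary)) - frame + 1, frame):
        s = slice(start, start + frame)
        energy = np.mean(processed[s] ** 2)
        w = 0.9 if energy > threshold else 0.3    # assumed weights, illustrative
        out[s] = w * processed[s] + (1.0 - w) * primary[s]
    return out
```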
In various embodiments, SDE module 408 is operable to localize a source of sound. The SDE module 408 is operable to generate cues based on the correlation of phase plots between different microphone inputs. Based on the correlation of the phase plots, the SDE module 408 is operable to compute a vector of salience estimates at different angles. Based on the salience estimates, the SDE module 408 can determine a direction of the source: a peak in the vector of salience estimates indicates a source in the corresponding direction, while diffuse (non-directional) sources are represented by poor salience estimates at all angles. The SDE module 408 can rely upon the cues (estimates of salience) to improve the performance of a directional audio solution, which is carried out by the analysis module 406, signal modifier 412, and zoom control 410. In some embodiments, the signal modifier 412 includes modules analogous or similar to beamformer 310, multiplicative gain expansion module 320, reverb module 330, and mixer module 340 as shown for audio system 230 in
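The following is a hedged sketch, not the SDE algorithm of the incorporated application, of a salience vector computed from the correlation between the observed inter-microphone phase and the phase expected for each candidate angle; the speed of sound, microphone spacing, and angle grid are assumptions.

```python
# Hedged sketch: a per-angle salience vector from inter-microphone phase.
import numpy as np

C = 343.0   # speed of sound, m/s (assumed)
D = 0.10    # distance between the two microphones, m (assumed)

def salience_vector(X1, X2, fs, frame, angles_deg=np.arange(0, 181, 5)):
    """X1, X2: complex STFT frames (frames x bins) of the two microphones.
    Returns one salience value per candidate angle (0 = along the mic axis)."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    cross = X1 * np.conj(X2)
    cross = cross / (np.abs(cross) + 1e-12)           # keep only the phase
    scores = []
    for ang in np.deg2rad(angles_deg):
        tau = D * np.cos(ang) / C                     # hypothesized delay of mic 2 vs. mic 1
        steer = np.exp(-2j * np.pi * freqs * tau)     # phase expected for that delay
        scores.append(np.mean(np.real(cross * steer)))
    return np.array(scores)   # peaked for a directional source, flat for diffuse sound
```

A pronounced peak corresponds to a directional source at that angle, whereas uniformly low scores indicate diffuse sound. With only two microphones the estimate is ambiguous between the front and back half-planes, which is one motivation for the third microphone discussed below.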
In some embodiments, estimates of salience are used to localize the angle of the source in the range of 0 to 360 degrees in a plane parallel to the ground, when, for example, the audio device 104 is placed on a table top. The estimates of salience can be used to attenuate or amplify the signals at different angles as required by the customer. The characterization of these modes may be driven by an SDE salience parameter. Example AZA and SDE subsystems are described further in U.S. patent application Ser. No. 14/957,447, entitled “Directional Audio Capture,” the disclosure of which is incorporated herein by reference in its entirety.
In various embodiments of the present disclosure, the coordinate system 710 used in AZA is rotated to provide stereo separation and directional suppression of received acoustic signals.
According to various embodiments of the present disclosure, at least two channels of a stereo signal (also referred to herein as left and right channel stereo (audio) signals, and a left stereo signal and a right stereo signal) are generated based on acoustic signals captured by two or more omni-directional microphones. In some embodiments, the omni-directional microphones include the primary microphone 106a and the secondary microphone 106b. As shown in
According to some embodiments of the present disclosure, NPNS module 600 (in the example in
In certain embodiments, only two omni-directional microphones 106a and 106b are used for stereo separation. Using two omni-directional microphones 106a and 106b, one on each end of the audio device, a clear separation between the left side and the right side can be achieved. For example, the secondary microphone 106b is closer to the audio source 920 (at the right in the example in
In some embodiments, an appropriately-placed third microphone can be used to improve differentiation of the scene (audio device camera's view) direction from the direction behind the audio device. Using a third microphone (for example, the tertiary microphone 106c shown in
In some embodiments, the microphones 106a, 106b, and 106c include high-AOP microphones. The high-AOP microphones can provide robust inputs for beamforming in loud environments, for example, concerts. Sound levels at some concerts can exceed 120 dB, with peak levels considerably higher. Traditional omni-directional microphones may saturate at these sound levels, making it impossible to recover any signal captured by the microphone. High-AOP microphones are designed for a higher overload point than traditional microphones and, therefore, are capable of capturing an accurate signal in significantly louder environments. Combining high-AOP microphone technology with the methods for stereo separation and directional suppression using omni-directional microphones (e.g., using high-AOP omni-directional microphones for the combination) according to various embodiments of the present disclosure can enable users to capture video that provides a much more realistic representation of their experience during, for example, a concert.
In block 1120, a first stereo signal (e.g., a first channel signal of a stereo audio signal) can be generated by forming a first beam at the first location, based on the first audio signal and the second audio signal. In block 1130, a second stereo signal (e.g., a second channel signal of the stereo audio signal) can be generated by forming a second beam at the second location based on the first audio signal and the second audio signal.
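To tie the blocks together, the following is a hedged end-to-end sketch of the method (whole-signal FFT used for brevity; the microphone spacing, speed of sound, and function name are assumptions): each stereo channel is produced by a beam formed at one microphone location that suppresses sound arriving from the direction of the other microphone.

```python
# Hedged end-to-end sketch of the two-channel generation of blocks 1120-1130.
import numpy as np

C, D = 343.0, 0.14   # speed of sound (m/s) and inter-microphone spacing (m), assumed

def stereo_from_two_omni(x_first, x_second, fs):
    """Return an (n_samples, 2) array: channel 0 from a beam formed at the
    first microphone's location, channel 1 from a beam formed at the second's."""
    n = min(len(x_first), len(x_second))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    delay = np.exp(-2j * np.pi * freqs * (D / C))    # propagation delay between mics
    X1 = np.fft.rfft(x_first[:n])
    X2 = np.fft.rfft(x_second[:n])
    ch1 = np.fft.irfft(X1 - delay * X2, n)           # beam at the first location
    ch2 = np.fft.irfft(X2 - delay * X1, n)           # beam at the second location
    return np.stack([ch1, ch2], axis=1)
```

Which beam feeds the left channel and which the right depends on the orientation of the device while recording video; a frame-based, sub-band version of the same idea is sketched earlier in this description.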
Mass data storage 1230, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit(s) 1210. Mass data storage 1230 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 1220.
Portable storage device 1240 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 1200 of
User input devices 1260 can provide a portion of a user interface. User input devices 1260 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 1260 can also include a touchscreen. Additionally, the computer system 1200 as shown in
Graphics display system 1270 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 1270 is configurable to receive textual and graphical information and process the information for output to the display device.
Peripheral devices 1280 may include any type of computer support device to add additional functionality to the computer system.
The processing for various embodiments may be implemented in software that is cloud-based. In some embodiments, the computer system 1200 is implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computer system 1200 may itself include a cloud-based computing environment, where the functionalities of the computer system 1200 are executed in a distributed fashion. Thus, the computer system 1200, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computer system 1200, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
The present technology is described above with reference to example embodiments. Therefore, other variations upon the example embodiments are intended to be covered by the present disclosure.
Number | Name | Date | Kind |
---|---|---|---|
4137510 | Iwahara | Jan 1979 | A |
4969203 | Herman | Nov 1990 | A |
5204906 | Nohara et al. | Apr 1993 | A |
5224170 | Waite, Jr. | Jun 1993 | A |
5230022 | Sakata | Jul 1993 | A |
5400409 | Linhard | Mar 1995 | A |
5440751 | Santeler et al. | Aug 1995 | A |
5544346 | Amini et al. | Aug 1996 | A |
5555306 | Gerzon | Sep 1996 | A |
5583784 | Kapust et al. | Dec 1996 | A |
5598505 | Austin et al. | Jan 1997 | A |
5682463 | Allen et al. | Oct 1997 | A |
5796850 | Shiono et al. | Aug 1998 | A |
5806025 | Vis et al. | Sep 1998 | A |
5937070 | Todter et al. | Aug 1999 | A |
5956674 | Smyth et al. | Sep 1999 | A |
5974379 | Hatanaka et al. | Oct 1999 | A |
5974380 | Smyth et al. | Oct 1999 | A |
5978567 | Rebane et al. | Nov 1999 | A |
5978824 | Ikeda | Nov 1999 | A |
6104993 | Ashley | Aug 2000 | A |
6188769 | Jot et al. | Feb 2001 | B1 |
6202047 | Ephraim et al. | Mar 2001 | B1 |
6226616 | You et al. | May 2001 | B1 |
6236731 | Brennan et al. | May 2001 | B1 |
6240386 | Thyssen et al. | May 2001 | B1 |
6263307 | Arslan et al. | Jul 2001 | B1 |
6377637 | Berdugo | Apr 2002 | B1 |
6421388 | Parizhsky et al. | Jul 2002 | B1 |
6477489 | Lockwood et al. | Nov 2002 | B1 |
6490556 | Graumann et al. | Dec 2002 | B1 |
6496795 | Malvar | Dec 2002 | B1 |
6584438 | Manjunath et al. | Jun 2003 | B1 |
6772117 | Laurila et al. | Aug 2004 | B1 |
6810273 | Mattila et al. | Oct 2004 | B1 |
6862567 | Gao | Mar 2005 | B1 |
6907045 | Robinson et al. | Jun 2005 | B1 |
7054809 | Gao | May 2006 | B1 |
7058574 | Taniguchi et al. | Jun 2006 | B2 |
7254242 | Ise et al. | Aug 2007 | B2 |
7283956 | Ashley et al. | Oct 2007 | B2 |
7366658 | Moogi et al. | Apr 2008 | B2 |
7383179 | Alves et al. | Jun 2008 | B2 |
7433907 | Nagai et al. | Oct 2008 | B2 |
7472059 | Huang | Dec 2008 | B2 |
7548791 | Johnston | Jun 2009 | B1 |
7555434 | Nomura et al. | Jun 2009 | B2 |
7590250 | Ellis et al. | Sep 2009 | B2 |
7617099 | Yang et al. | Nov 2009 | B2 |
7657427 | Jelinek | Feb 2010 | B2 |
7899565 | Johnston | Mar 2011 | B1 |
8032369 | Manjunath et al. | Oct 2011 | B2 |
8036767 | Soulodre | Oct 2011 | B2 |
8046219 | Zurek et al. | Oct 2011 | B2 |
8060363 | Ramo et al. | Nov 2011 | B2 |
8098844 | Elko | Jan 2012 | B2 |
8150065 | Solbach et al. | Apr 2012 | B2 |
8194880 | Avendano | Jun 2012 | B2 |
8194882 | Every et al. | Jun 2012 | B2 |
8195454 | Muesch | Jun 2012 | B2 |
8204253 | Solbach | Jun 2012 | B1 |
8233352 | Beaucoup | Jul 2012 | B2 |
8311817 | Murgia et al. | Nov 2012 | B2 |
8345890 | Avendano et al. | Jan 2013 | B2 |
8473287 | Every et al. | Jun 2013 | B2 |
8615394 | Avendano et al. | Dec 2013 | B1 |
8694522 | Pance | Apr 2014 | B1 |
8744844 | Klein | Jun 2014 | B2 |
8774423 | Solbach | Jul 2014 | B1 |
8831937 | Murgia et al. | Sep 2014 | B2 |
8880396 | Laroche et al. | Nov 2014 | B1 |
8908882 | Goodwin et al. | Dec 2014 | B2 |
8934641 | Avendano et al. | Jan 2015 | B2 |
8989401 | Ojanpera | Mar 2015 | B2 |
9094496 | Teutsch | Jul 2015 | B2 |
9185487 | Solbach et al. | Nov 2015 | B2 |
9197974 | Clark et al. | Nov 2015 | B1 |
9210503 | Avendano et al. | Dec 2015 | B2 |
9247192 | Lee et al. | Jan 2016 | B2 |
9330669 | Stonehocker et al. | May 2016 | B2 |
20010041976 | Taniguchi et al. | Nov 2001 | A1 |
20020097884 | Cairns | Jul 2002 | A1 |
20030023430 | Wang et al. | Jan 2003 | A1 |
20030228019 | Eichler et al. | Dec 2003 | A1 |
20040066940 | Amir | Apr 2004 | A1 |
20040083110 | Wang | Apr 2004 | A1 |
20040133421 | Burnett et al. | Jul 2004 | A1 |
20040165736 | Hetherington et al. | Aug 2004 | A1 |
20050008169 | Muren et al. | Jan 2005 | A1 |
20050008179 | Quinn | Jan 2005 | A1 |
20050043959 | Stemerdink et al. | Feb 2005 | A1 |
20050080616 | Leung et al. | Apr 2005 | A1 |
20050096904 | Taniguchi et al. | May 2005 | A1 |
20050143989 | Jelinek | Jun 2005 | A1 |
20050249292 | Zhu | Nov 2005 | A1 |
20050261896 | Schuijers et al. | Nov 2005 | A1 |
20050276363 | Joublin et al. | Dec 2005 | A1 |
20050281410 | Grosvenor et al. | Dec 2005 | A1 |
20050283544 | Yee | Dec 2005 | A1 |
20060100868 | Hetherington et al. | May 2006 | A1 |
20060136203 | Ichikawa | Jun 2006 | A1 |
20060198542 | Benjelloun Touimi et al. | Sep 2006 | A1 |
20060242071 | Stebbings | Oct 2006 | A1 |
20060270468 | Hui et al. | Nov 2006 | A1 |
20060293882 | Giesbrecht et al. | Dec 2006 | A1 |
20070025562 | Zalewski et al. | Feb 2007 | A1 |
20070033494 | Wenger et al. | Feb 2007 | A1 |
20070038440 | Sung et al. | Feb 2007 | A1 |
20070058822 | Ozawa | Mar 2007 | A1 |
20070067166 | Pan et al. | Mar 2007 | A1 |
20070088544 | Acero et al. | Apr 2007 | A1 |
20070100612 | Ekstrand et al. | May 2007 | A1 |
20070136056 | Moogi et al. | Jun 2007 | A1 |
20070136059 | Gadbois | Jun 2007 | A1 |
20070150268 | Acero et al. | Jun 2007 | A1 |
20070154031 | Avendano et al. | Jul 2007 | A1 |
20070198254 | Goto et al. | Aug 2007 | A1 |
20070237271 | Pessoa et al. | Oct 2007 | A1 |
20070244695 | Manjunath et al. | Oct 2007 | A1 |
20070253574 | Soulodre | Nov 2007 | A1 |
20070276656 | Solbach et al. | Nov 2007 | A1 |
20070282604 | Gartner et al. | Dec 2007 | A1 |
20070287490 | Green et al. | Dec 2007 | A1 |
20080019548 | Avendano | Jan 2008 | A1 |
20080069366 | Soulodre | Mar 2008 | A1 |
20080101626 | Samadani | May 2008 | A1 |
20080111734 | Fam et al. | May 2008 | A1 |
20080117901 | Klammer | May 2008 | A1 |
20080118082 | Seltzer et al. | May 2008 | A1 |
20080140396 | Grosse-Schulte et al. | Jun 2008 | A1 |
20080192956 | Kazama | Aug 2008 | A1 |
20080195384 | Jabri et al. | Aug 2008 | A1 |
20080208575 | Laaksonen et al. | Aug 2008 | A1 |
20080212795 | Goodwin et al. | Sep 2008 | A1 |
20080247567 | Kjolerbakken et al. | Oct 2008 | A1 |
20080310646 | Amada | Dec 2008 | A1 |
20080317261 | Yoshida et al. | Dec 2008 | A1 |
20090012783 | Klein | Jan 2009 | A1 |
20090012784 | Murgia et al. | Jan 2009 | A1 |
20090018828 | Nakadai et al. | Jan 2009 | A1 |
20090048824 | Amada | Feb 2009 | A1 |
20090060222 | Jeong et al. | Mar 2009 | A1 |
20090070118 | Den Brinker et al. | Mar 2009 | A1 |
20090086986 | Schmidt et al. | Apr 2009 | A1 |
20090106021 | Zurek et al. | Apr 2009 | A1 |
20090112579 | Li et al. | Apr 2009 | A1 |
20090119096 | Gerl et al. | May 2009 | A1 |
20090119099 | Lee et al. | May 2009 | A1 |
20090144053 | Tamura et al. | Jun 2009 | A1 |
20090144058 | Sorin | Jun 2009 | A1 |
20090192790 | El-Maleh et al. | Jul 2009 | A1 |
20090204413 | Sintes et al. | Aug 2009 | A1 |
20090216526 | Schmidt et al. | Aug 2009 | A1 |
20090226005 | Acero et al. | Sep 2009 | A1 |
20090226010 | Schnell et al. | Sep 2009 | A1 |
20090228272 | Herbig et al. | Sep 2009 | A1 |
20090257609 | Gerkmann et al. | Oct 2009 | A1 |
20090262969 | Short et al. | Oct 2009 | A1 |
20090287481 | Paranjpe et al. | Nov 2009 | A1 |
20090292536 | Hetherington et al. | Nov 2009 | A1 |
20090303350 | Terada | Dec 2009 | A1 |
20090323982 | Solbach et al. | Dec 2009 | A1 |
20100004929 | Baik | Jan 2010 | A1 |
20100033427 | Marks et al. | Feb 2010 | A1 |
20100094643 | Avendano et al. | Apr 2010 | A1 |
20100211385 | Sehlstedt | Aug 2010 | A1 |
20100228545 | Ito et al. | Sep 2010 | A1 |
20100245624 | Beaucoup | Sep 2010 | A1 |
20100280824 | Petit et al. | Nov 2010 | A1 |
20100296668 | Lee et al. | Nov 2010 | A1 |
20110038486 | Beaucoup | Feb 2011 | A1 |
20110038557 | Closset et al. | Feb 2011 | A1 |
20110044324 | Li et al. | Feb 2011 | A1 |
20110058676 | Visser | Mar 2011 | A1 |
20110075857 | Aoyagi | Mar 2011 | A1 |
20110081024 | Soulodre | Apr 2011 | A1 |
20110107367 | Georgis et al. | May 2011 | A1 |
20110129095 | Avendano et al. | Jun 2011 | A1 |
20110137646 | Ahgren et al. | Jun 2011 | A1 |
20110142257 | Goodwin et al. | Jun 2011 | A1 |
20110182436 | Murgia et al. | Jul 2011 | A1 |
20110184732 | Godavarti | Jul 2011 | A1 |
20110184734 | Wang et al. | Jul 2011 | A1 |
20110191101 | Uhle et al. | Aug 2011 | A1 |
20110208520 | Lee | Aug 2011 | A1 |
20110257965 | Hardwick | Oct 2011 | A1 |
20110257967 | Every et al. | Oct 2011 | A1 |
20110264449 | Sehlstedt | Oct 2011 | A1 |
20120013768 | Zurek et al. | Jan 2012 | A1 |
20120019689 | Zurek et al. | Jan 2012 | A1 |
20120076316 | Zhu | Mar 2012 | A1 |
20120116758 | Murgia et al. | May 2012 | A1 |
20120123775 | Murgia et al. | May 2012 | A1 |
20120209611 | Furuta et al. | Aug 2012 | A1 |
20120257778 | Hall et al. | Oct 2012 | A1 |
20130272511 | Bouzid et al. | Oct 2013 | A1 |
20130289988 | Fry | Oct 2013 | A1 |
20130289996 | Fry | Oct 2013 | A1 |
20130322461 | Poulsen | Dec 2013 | A1 |
20130332156 | Tackin et al. | Dec 2013 | A1 |
20130343549 | Vemireddy | Dec 2013 | A1 |
20140003622 | Ikizyan et al. | Jan 2014 | A1 |
20140126726 | Heiman et al. | May 2014 | A1 |
20140241529 | Lee | Aug 2014 | A1 |
20140350926 | Schuster et al. | Nov 2014 | A1 |
20140379338 | Fry | Dec 2014 | A1 |
20150025881 | Carlos et al. | Jan 2015 | A1 |
20150078555 | Zhang et al. | Mar 2015 | A1 |
20150078606 | Zhang et al. | Mar 2015 | A1 |
20150088499 | White et al. | Mar 2015 | A1 |
20150112672 | Giacobello et al. | Apr 2015 | A1 |
20150139428 | Reining | May 2015 | A1 |
20150206528 | Wilson et al. | Jul 2015 | A1 |
20150208165 | Volk et al. | Jul 2015 | A1 |
20150237470 | Mayor et al. | Aug 2015 | A1 |
20150277847 | Yliaho | Oct 2015 | A1 |
20150364137 | Katuri et al. | Dec 2015 | A1 |
20160037245 | Harrington | Feb 2016 | A1 |
20160061934 | Woodruff et al. | Mar 2016 | A1 |
20160078880 | Avendano et al. | Mar 2016 | A1 |
20160093307 | Warren et al. | Mar 2016 | A1 |
20160094910 | Vallabhan et al. | Mar 2016 | A1 |
20160133269 | Dusan et al. | May 2016 | A1 |
20160162469 | Santos | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
105474311 | Apr 2016 | CN |
112014003337 | Mar 2016 | DE |
1081685 | Mar 2001 | EP |
20080623 | Nov 2008 | FI |
20110428 | Dec 2011 | FI |
20125600 | Jun 2012 | FI |
123080 | Oct 2012 | FI |
H05172865 | Jul 1993 | JP |
H05300419 | Nov 1993 | JP |
H07336793 | Dec 1995 | JP |
2004053895 | Feb 2004 | JP |
2004531767 | Oct 2004 | JP |
2004533155 | Oct 2004 | JP |
2005148274 | Jun 2005 | JP |
2005518118 | Jun 2005 | JP |
2005309096 | Nov 2005 | JP |
2006515490 | May 2006 | JP |
2007201818 | Aug 2007 | JP |
2008518257 | May 2008 | JP |
2008542798 | Nov 2008 | JP |
2009037042 | Feb 2009 | JP |
2009538450 | Nov 2009 | JP |
2012514233 | Jun 2012 | JP |
5081903 | Nov 2012 | JP |
2013513306 | Apr 2013 | JP |
2013527479 | Jun 2013 | JP |
5718251 | May 2015 | JP |
5855571 | Feb 2016 | JP |
1020060024498 | Mar 2006 | KR |
1020070068270 | Jun 2007 | KR |
101050379 | Dec 2008 | KR |
1020080109048 | Dec 2008 | KR |
1020090013221 | Feb 2009 | KR |
1020110111409 | Oct 2011 | KR |
1020120094892 | Aug 2012 | KR |
1020120101457 | Sep 2012 | KR |
101294634 | Aug 2013 | KR |
101610662 | Apr 2016 | KR |
519615 | Feb 2003 | TW |
200847133 | Dec 2008 | TW |
201113873 | Apr 2011 | TW |
201143475 | Dec 2011 | TW |
I421858 | Jan 2014 | TW |
201513099 | Apr 2015 | TW |
WO0207061 | Jan 2002 | WO |
WO02080362 | Oct 2002 | WO |
WO02103676 | Dec 2002 | WO |
WO03069499 | Aug 2003 | WO |
WO2004010415 | Jan 2004 | WO |
WO2005086138 | Sep 2005 | WO |
WO2007140003 | Dec 2007 | WO |
WO2008034221 | Mar 2008 | WO |
WO2010077361 | Jul 2010 | WO |
WO2011002489 | Jan 2011 | WO |
WO2011068901 | Jun 2011 | WO |
WO2012094422 | Jul 2012 | WO |
WO2015010129 | Jan 2015 | WO |
WO2016040885 | Mar 2016 | WO |
WO2016049566 | Mar 2016 | WO |
WO2016094418 | Jun 2016 | WO |
WO2016109103 | Jul 2016 | WO |
Entry |
---|
Boll, Steven F “Suppression of Acoustic Noise in Speech using Spectral Subtraction”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-27, No. 2, Apr. 1979, pp. 113-120. |
“ENT 172.” Instructional Module. Prince George's Community College Department of Engineering Technology. Accessed: Oct. 15, 2011. Subsection: “Polar and Rectangular Notation”. <http://academic.ppgcc.edu/ent/ent172—instr—mod.html>. |
Fulghum, D. P. et al., “LPC Voice Digitizer with Background Noise Suppression”, 1979 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 220-223. |
Haykin, Simon et al., “Appendix A.2 Complex Numbers.” Signals and Systems. 2nd Ed. 2003. p. 764. |
Hohmann, V. “Frequency Analysis and Synthesis Using a Gammatone Filterbank”, ACTA Acustica United with Acustica, 2002, vol. 88, pp. 433-442. |
Martin, Rainer “Spectral Subtraction Based on Minimum Statistics”, in Proceedings Europe. Signal Processing Conf., 1994, pp. 1182-1185. |
Mitra, Sanjit K. Digital Signal Processing: a Computer-based Approach. 2nd Ed. 2001. pp. 131-133. |
Cosi, Piero et al., (1996), “Lyon's Auditory Model Inversion: a Tool for Sound Separation and Speech Enhancement,” Proceedings of ESCA Workshop on ‘The Auditory Basis of Speech Perception,’ Keele University, Keele (UK), Jul. 15-19, 1996, pp. 194-197. |
Rabiner, Lawrence R. et al., “Digital Processing of Speech Signals”, (Prentice-Hall Series in Signal Processing). Upper Saddle River, NJ: Prentice Hall, 1978. |
Schimmel, Steven et al., “Coherent Envelope Detection for Modulation Filtering of Speech,” 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, No. 7, pp. 221-224. |
Slaney, Malcom, et al., “Auditory Model Inversion for Sound Separation,” 1994 IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 19-22, vol. 2, pp. 77-80. |
Slaney, Malcom. “An Introduction to Auditory Model Inversion”, Interval Technical Report IRC 1994-014, http://coweb.ecn.purdue.edu/˜maclom/interval/1994-014/, Sep. 1994, accessed on Jul. 6, 2010. |
Solbach, Ludger “An Architecture for Robust Partial Tracking and Onset Localization in Single Channel Audio Signal Mixes”, Technical University Hamburg-Harburg, 1998. |
International Search Report and Written Opinion dated Sep. 16, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/012628. |
International Search Report and Written Opinion dated May 20, 2010 in Patent Cooperation Treaty Application No. PCT/US2009/006754. |
Fast Cochlea Transform, US Trademark Reg. No. 2,875,755 (Aug. 17, 2004). |
3GPP2 “Enhanced Variable Rate Codec, Speech Service Options 3, 68, 70, and 73 for Wideband Spread Spectrum Digital Systems”, May 2009, pp. 1-308. |
3GPP2 “Selectable Mode Vocoder (SMV) Service Option for Wideband Spread Spectrum Communication Systems”, Jan. 2004, pp. 1-231. |
3GPP2 “Source-Controlled Variable-Rate Multimode Wideband Speech Codec (VMR-WB) Service Option 62 for Spread Spectrum Systems”, Jun. 11, 2004, pp. 1-164. |
3GPP “3GPP Specification 26.071 Mandatory Speech CODEC Speech Processing Functions; AMR Speech Codec; General Description”, http://www.3gpp.org/ftp/Specs/html-info/26071.htm, accessed on Jan. 25, 2012. |
3GPP “3GPP Specification 26.094 Mandatory Speech Codec Speech Processing Functions; Adaptive Multi-Rate (AMR) Speech Codec; Voice Activity Detector (VAD)”, http://www.3gpp.org/ftp/Specs/html-info/26094.htm, accessed on Jan. 25, 2012. |
3GPP “3GPP Specification 26.171 Speech Codec Speech Processing Functions; Adaptive Multi-Rate—Wideband (AMR-WB) Speech Codec; General Description”, http://www.3gpp.org/ftp/Specs/html-info26171.htm, accessed on Jan. 25, 2012. |
3GPP “3GPP Specification 26.194 Speech Codec Speech Processing Functions; Adaptive Multi-Rate—Wideband (AMR-WB) Speech Codec; Voice Activity Detector (VAD)” http://www.3gpp.org/ftp/Specs/html-info26194.htm, accessed on Jan. 25, 2012. |
International Telecommunication Union “Coding of Speech at 8 kbit/s Using Conjugate-Structure Algebraic-code-excited Linear-prediction (CS-ACELP)”, Mar. 19, 1996, pp. 1-39. |
International Telecommunication Union “Coding of Speech at 8 kbit/s Using Conjugate Structure Algebraic-code-excited Linear-prediction (CS-ACELP) Annex B: A Silence Compression Scheme for G.729 Optimized for Terminals Conforming to Recommendation V.70”, Nov. 8, 1996, pp. 1-23. |
International Search Report and Written Opinion dated Aug. 19, 2010 in Patent Cooperation Treaty Application No. PCT/US2010/001786. |
International Search Report and Written Opinion dated Feb. 7, 2011 in Patent Cooperation Treaty Application No. PCT/US2010/058600, filed Dec. 1, 2010. |
Cisco, “Understanding How Digital T1 CAS (Robbed Bit Signaling) Works in IOS Gateways”, Jan. 17, 2007, http://www.cisco.com/image/gif/paws/22444/t1-cas-ios.pdf, accessed on Apr. 3, 2012. |
Jelinek et al., “Noise Reduction Method for Wideband Speech Coding” Proc. Eusipco, Vienna, Austria, Sep. 2004, pp. 1959-1962. |
Widjaja et al., “Application of Differential Microphone Array for IS-127 EVRC Rate Determination Algorithm”, Interspeech 2009, 10th Annual Conference of the International Speech Communication Association, Brighton, United Kingdom Sep. 6-10, 2009, pp. 1123-1126. |
Sugiyama et al., “Single-Microphone Noise Suppression for 3G Handsets Based on Weighted Noise Estimation” in Benesty et al., “Speech Enhancement”, 2005, pp. 115-133, Springer Berlin Heidelberg. |
Watts, “Real-Time, High-Resolution Simulation of the Auditory Pathway, with Application to Cell-Phone Noise Reduction” Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), May 30-Jun. 2, 2010, pp. 3821-3824. |
3GPP Minimum Performance Specification for the Enhanced Variable rate Codec, Speech Service Option 3 and 68 or Wideband Spread Spectrum Digital Systems, Jul. 2007, pp. 1-83. |
Ramakrishnan, 2000. Reconstruction of Incomplete Spectrograms for robust speech recognition. PhD thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania. |
Kim et al., “Missing-Feature Reconstruction by Leveraging Temporal Spectral Correlation for Robust Speech Recognition in Background Noise Conditions, ”Audio, Speech, and Language Processing, IEEE Transactions on, vol. 18, No. 8 pp. 2111-2120, Nov. 2010. |
Cooke et al.,“Robust Automatic Speech Recognition with Missing and Unreliable Acoustic data,” Speech Commun., vol. 34, No. 3, pp. 267-285, 2001. |
Liu et al., “Efficient cepstral normalization for robust speech recognition.” Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics, 1993. |
Yoshizawa et al., “Cepstral gain normalization for noise robust speech recognition.” Acoustics, Speech, and Signal Processing, 2004. Proceedings, (ICASSP04), IEEE International Conference on vol. 1 IEEE, 2004. |
Office Action dated Apr. 8, 2014 in Japan Patent Application 2011-544416, filed Dec. 30, 2009. |
Elhilali et al.,“A cocktail party with a cortical twist: How cortical mechanisms contribute to sound segregation.” J. Acoust. Soc. Am., vol. 124, No. 6, Dec. 2008; 124(6): 3751-3771). |
Jin et al., “HMM-Based Multipitch Tracking for Noisy and Reverberant Speech.” Jul. 2011. |
Kawahara, W., et al., “Tandem-Straight: A temporally stable power spectral representation for periodic signals and applications to interference-free spectrum, F0, and aperiodicity estimation.” IEEE ICASSP 2008. |
Lu et al. “A Robust Audio Classification and Segmentation Method.” Microsoft Research, 2001, pp. 203, 206, and 207. |
Office Action dated Aug. 26, 2014 in Japan Application No. 2012-542167, filed Dec. 1, 2010. |
International Search Report & Written Opinion dated Nov. 12, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/047458, filed Jul. 21, 2014. |
Office Action dated Oct. 31, 2014 in Finland Patent Application No. 20125600, filed Jun. 1, 2012. |
Krini, Mohamed et al., “Model-Based Speech Enhancement,” in Speech and Audio Processing in Adverse Environments; Signals and Communication Technology, edited by Hansler et al., 2008, Chapter 4, pp. 89-134. |
Office Action dated Dec. 9, 2014 in Japan Patent Application No. 2012-518521, filed Jun. 21, 2010. |
Office Action dated Dec. 10, 2014 in Taiwan Patent Application No. 099121290, filed Jun. 29, 2010. |
Purnhagen, Heiko, “Low Complexity Parametric Stereo Coding in MPEG-4,” Proc. of the 7th Int. Conference on Digital Audio Effects (DAFx'04), Naples, Italy, Oct. 5-8, 2004. |
Chang, Chun-Ming et al., “Voltage-Mode Multifunction Filter with Single Input and Three Outputs Using Two Compound Current Conveyors” IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 46, No. 11, Nov. 1999. |
Nayebi et al., “Low delay FIR filter banks: design and evaluation” IEEE Transactions on Signal Processing, vol. 42, No. 1, pp. 24-31, Jan. 1994. |
Notice of Allowance dated Feb. 17, 2015 in Japan Patent Application No. 2011-544416, filed Dec. 30, 2009. |
Office Action dated Jan. 30, 2015 in Finland Patent Application No. 20080623, filed May 24, 2007. |
Office Action dated Mar. 27, 2015 in Korean Patent Application No. 10-2011-7016591, filed Dec. 30, 2009. |
Office Action dated Jul. 21, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010. |
Notice of Allowance dated Aug. 13, 2015 in Finnish Patent Application 20080623, filed May 24, 2007. |
Office Action dated Sep. 29, 2015 in Finland Patent Application No. 20125600, filed Dec. 1, 2010. |
Office Action dated Oct. 15, 2015 in Korean Patent Application 10-2011-7016591. |
Allowance dated Nov. 17, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010. |
International Search Report & Written Opinion dated Dec. 14, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/049816, filed Sep. 11, 2015. |
International Search Report & Written Opinion dated Dec. 22, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/052433, filed Sep. 25, 2015. |
Notice of Allowance dated Jan. 14, 2016 in South Korean Patent Application No. 10-2011-7016591 filed Jul. 15, 2011. |
International Search Report & Written Opinion dated Feb. 12, 2016 in Patent Cooperation Treaty Application No. PCT/US2015/064523, filed Dec. 8, 2015. |
International Search Report & Written Opinion dated Feb. 11, 2016 in Patent Cooperation Treaty Application No. PCT/US2015/063519, filed Dec. 2, 2015. |
Klein, David, “Noise-Robust Multi-Lingual Keyword Spotting with a Deep Neural Network Based Architecture”, U.S. Appl. No. 14/614,348, filed Feb. 4, 2015. |
Vitus, Deborah Kathleen et al., “Method for Modeling User Possession of Mobile Device for User Authentication Framework”, U.S. Appl. No. 14/548,207, filed Nov. 19, 2014. |
Murgia, Carlo, “Selection of System Parameters Based on Non-Acoustic Sensor Information”, U.S. Appl. No. 14/331,205, filed Jul. 14, 2014. |
Goodwin, Michael M. et al., “Key Click Suppression”, U.S. Appl. No. 14/745,176, filed Jun. 19, 2015. |
Office Action dated May 17, 2016 in Korean Patent Application 1020127001822 filed Jun. 21, 2010. |
Lauber, Pierre et al., “Error Concealment for Compressed Digital Audio,” Audio Engineering Society, 2001. |
Non-Final Office Action, dated Aug. 5, 2008, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, dated Jan. 21, 2009, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Final Office Action, dated Sep. 3, 2009, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, dated May 10, 2011, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Final Office Action, dated Oct. 24, 2011, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, dated Dec. 6, 2011, U.S. Appl. No. 12/319,107, filed Dec. 31, 2008. |
Notice of Allowance, dated Feb. 13, 2012, U.S. Appl. No. 11/441,675, filed May 25, 2006. |
Non-Final Office Action, dated Feb. 14, 2012, U.S. Appl. No. 13/295,981, filed Nov. 14, 2011. |
Non-Final Office Action, dated Feb. 21, 2012, U.S. Appl. No. 13/288,858, filed Nov. 3, 2011. |
Final Office Action, dated Apr. 16, 2012, U.S. Appl. No. 12/319,107, filed Dec. 31, 2008. |
Advisory Action, dated Jun. 28, 2012, U.S. Appl. No. 12/319,107, filed Dec. 31, 2008. |
Final Office Action, dated Jul. 9, 2012, U.S. Appl. No. 13/295,981, filed Nov. 14, 2011. |
Final Office Action, dated Jul. 17, 2012, U.S. Appl. No. 13/295,981, filed Nov. 14, 2011. |
Non-Final Office Action, dated Aug. 28, 2012, U.S. Appl. No. 12/860,515, filed Aug. 20, 2010. |
Notice of Allowance, dated Sep. 10, 2012, U.S. Appl. No. 13/288,858, filed Nov. 3, 2011. |
Advisory Action, dated Sep. 24, 2012, U.S. Appl. No. 13/295,981, filed Nov. 14, 2011. |
Non-Final Office Action, dated Oct. 2, 2012, U.S. Appl. No. 12/906,009, filed Oct. 15, 2010. |
Non-Final Office Action, dated Oct. 11, 2012, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010. |
Non-Final Office Action, dated Dec. 10, 2012, U.S. Appl. No. 12/493,927, filed Jun. 29, 2009. |
Final Office Action, dated Mar. 11, 2013, U.S. Appl. No. 12/860,515, filed Aug. 20, 2010. |
Non-Final Office Action, dated Apr. 24, 2013, U.S. Appl. No. 13/012,517, filed Jan. 24, 2011. |
Non-Final Office Action, dated May 10, 2013, U.S. Appl. No. 13/751,907, filed Jan. 28, 2013. |
Final Office Action, dated May 14, 2013, U.S. Appl. No. 12/493,927, filed Jun. 29, 2009. |
Final Office Action, dated May 22, 2013, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010. |
Non-Final Office Action, dated Jul. 2, 2013, U.S. Appl. No. 12/906,009, filed Oct. 15, 2010. |
Non-Final Office Action, dated Jul. 31, 2013, U.S. Appl. No. 13/009,732, filed Jan. 19, 2011. |
Non-Final Office Action, dated Aug. 28, 2013, U.S. Appl. No. 12/860,515, filed Aug. 20, 2010. |
Notice of Allowance, dated Sep. 17, 2013, U.S. Appl. No. 13/751,907, filed Jan. 28, 2013. |
Final Office Action, dated Dec. 3, 2013, U.S. Appl. No. 13/012,517, filed Jan. 24, 2011. |
Non-Final Office Action, dated Jan. 3, 2014, U.S. Appl. No. 12/319,107, filed Dec. 31, 2008. |
Non-Final Office Action, dated Jan. 9, 2014, U.S. Appl. No. 12/493,927, filed Jun. 29, 2009. |
Non-Final Office Action, dated Jan. 30, 2014, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010. |
Final Office Action, dated May 7, 2014, U.S. Appl. No. 12/906,009, filed Oct. 15, 2010. |
Notice of Allowance, dated May 9, 2014, U.S. Appl. No. 13/295,981, filed Nov. 14, 2011. |
Notice of Allowance, dated Jun. 18, 2014, U.S. Appl. No. 12/860,515, filed Aug. 20, 2010. |
Notice of Allowance, dated Aug. 20, 2014, U.S. Appl. No. 12/493,927, filed Jun. 29, 2009. |
Notice of Allowance, dated Aug. 25, 2014, U.S. Appl. No. 12/319,107, filed Dec. 31, 2008. |
Non-Final Office Action, dated Nov. 19, 2014, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010. |
Non-Final Office Action, dated Nov. 19, 2014, U.S. Appl. No. 13/012,517, filed Jan. 24, 2011. |
Final Office Action, dated Dec. 16, 2014, U.S. Appl. No. 13/009,732, filed Jan. 19, 2011. |
Non-Final Office Action, dated Apr. 21, 2015, U.S. Appl. No. 12/906,009, filed Oct. 15, 2010. |
Final Office Action, dated Jun. 17, 2015, U.S. Appl. No. 13/012,517, filed Jan. 24, 2011. |
Notice of Allowance, dated Jul. 30, 2015, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010. |
Non-Final Office Action, dated Dec. 28, 2015, U.S. Appl. No. 14/081,723, filed Nov. 15, 2013. |
Non-Final Office Action, dated Feb. 1, 2016, U.S. Appl. No. 14/335,850, filed Jul. 18, 2014. |
Non-Final Office Action, dated Jun. 22, 2016, U.S. Appl. No. 13/012,517, filed Jan. 24, 2011. |
Non-Final Office Action, dated Jun. 24, 2016, U.S. Appl. No. 14/962,931, filed Dec. 8, 2015. |
International Search Report and Written Opinion, PCT/US2017/030220, Knowles Electronics LLC (14 pages) dated Aug. 30, 2017. |