This application claims the benefit of Australian Provisional Patent Application No. 2014901429 filed 17 Apr. 2014, which is incorporated herein by reference.
The present invention relates to the digital processing of signals from microphones or other such transducers, and in particular relates to a device and method for mixing signals from multiple such transducers in order to achieve a desired function, while retaining spatial or directional cues in the signals.
Natural human hearing provides stereo perception whereby a listener can discriminate the direction from which a sound originates. This listening ability arises because the time of arrival of an acoustic signal at each respective ear of the listener depends on the angle of incidence of the acoustic signal. The amplitude of the acoustic signal at each respective ear of the listener can also depend on the angle of incidence of the acoustic signal. The difference between the time of arrival of the acoustic signal at each respective ear of the listener, and the amplitude of the acoustic signal at each respective ear of the listener, are examples of binaural cues which enrich the hearing perception of the listener and can enable certain tasks or effects. However, when acoustic sound is processed by a digital signal processing device and delivered to each respective ear of the user by a speaker, such binaural cues are often lost.
Processing signals from microphones in consumer electronic devices such as smartphones, hearing aids, headsets and the like presents a range of design problems. There are usually multiple microphones to consider, including one or more microphones on the body of the device and one or more external microphones such as headset or hands-free car kit microphones. In smartphones these microphones can be used not only to capture speech for phone calls, but also for recording voice notes. In the case of devices with a camera, one or more microphones may be used to enable recording of an audio track to accompany video captured by the camera. Increasingly, more than one microphone is being provided on the body of the device, for example to improve noise cancellation as is addressed in GB2484722 (Wolfson Microelectronics).
The device hardware associated with the microphones should provide for sufficient microphone inputs, preferably with individually adjustable gains, and flexible internal routing to cover all usage scenarios, which can be numerous in the case of a smartphone with an applications processor. Telephony functions should include a “side tone” so that the user can hear their own voice, and acoustic echo cancellation. Jack insertion detection should be provided to enable seamless switching between internal and external microphones when a headset or external microphone is plugged in or disconnected.
Wind noise detection and reduction is a particularly difficult problem in such devices. Wind noise is defined herein as a microphone signal generated by turbulence in an air stream flowing past the microphone ports, as opposed to the sound of wind blowing past other objects in the far field, such as the rustle of leaves as wind blows past a tree. Wind noise can be objectionable to the user and/or can mask other signals of interest. It is desirable that digital signal processing devices be configured to take steps to ameliorate the deleterious effects of wind noise upon signal quality. One such approach is described in International Patent Publication No. WO 2015/003220 by the present applicant, the content of which is incorporated herein by reference. This approach involves mixing the signals from at least two microphones so that the signal suffering from the least wind noise is preferentially used for further processing. Such mixing is applied at low frequencies (e.g. below a cutoff in the range of about 3 to 8 kHz), with higher frequencies being retained in separate channels. Other applications may require subband mixing at mid- and/or high frequencies in the audio range. However, these and other methods of microphone signal mixing can corrupt the binaural cues being delivered to the listener.
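The low-band mixing described above can be sketched as follows. The linear blending rule, the per-band wind-noise estimates and the function name are illustrative assumptions for this sketch, not the specific method of WO 2015/003220:

```python
import numpy as np

def mix_low_bands(band1, band2, noise1, noise2):
    """Blend the low-frequency subband samples of two microphones so that
    the microphone with the lower wind-noise estimate dominates the mix.
    `noise1`/`noise2` are assumed per-band wind-noise estimates; the
    linear blending rule is an illustrative choice."""
    band1 = np.asarray(band1, dtype=float)
    band2 = np.asarray(band2, dtype=float)
    # Weight toward mic 1 when its wind-noise estimate is lower.
    w = noise2 / (noise1 + noise2 + 1e-12)
    mixed = w * band1 + (1.0 - w) * band2
    # Both output channels carry the same mixed low band; higher
    # subbands would be retained in their separate channels.
    return mixed, mixed
```

With `noise1 = 0` the weight collapses to 1 and the mix is simply the first microphone's band, matching the "preferentially used" behaviour described above.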
Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is solely for the purpose of providing a context for the present invention. It is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention as it existed before the priority date of each claim of this application.
Throughout this specification the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
In this specification, a statement that an element may be “at least one of” a list of options is to be understood that the element may be any one of the listed options, or may be any combination of two or more of the listed options.
According to a first aspect the present invention provides a method of mixing microphone signals, the method comprising:
obtaining first and second microphone signals from respective first and second microphones;
in at least one affected subband, mixing the first and second microphone signals to produce first and second mixed signals;
processing at least one reference subband of the first and second microphone signals in order to identify a binaural cue between the first and second microphone signals, the reference subband being distinct from the or each affected subband; and
modifying the affected subband in the first and second mixed signals in order to re-emphasize the identified binaural cue.
According to a second aspect the present invention provides a device for mixing microphone signals, the device comprising:
first and second inputs for receiving respective first and second microphone signals from respective first and second microphones; and
a digital signal processor configured to, in at least one affected subband, mix the first and second microphone signals to produce first and second mixed signals; the digital signal processor further configured to process at least one reference subband of the first and second microphone signals in order to identify a binaural cue between the first and second microphone signals, the reference subband being distinct from the or each affected subband; and the digital signal processor further configured to modify the affected subband in the first and second mixed signals in order to re-emphasize the identified binaural cue.
According to a third aspect the present invention provides a non-transitory computer readable medium for mixing microphone signals, comprising instructions which, when executed by one or more processors, cause performance of the following:
obtaining first and second microphone signals from respective first and second microphones;
in at least one affected subband, mixing the first and second microphone signals to produce first and second mixed signals;
processing at least one reference subband of the first and second microphone signals in order to identify a binaural cue between the first and second microphone signals, the reference subband being distinct from the or each affected subband; and
modifying the affected subband in the first and second mixed signals in order to re-emphasize the identified binaural cue.
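The steps common to all three aspects operate on distinct "affected" and "reference" subbands of each microphone signal. As a sketch of how such subbands might be obtained, here is a single-frame FFT split; a real device would use a running filterbank, and the function name and parameters are illustrative:

```python
import numpy as np

def split_subbands(x, fs, band_lo, band_hi):
    """Split a signal into an 'affected' band [band_lo, band_hi) Hz and
    the complementary 'reference' band, via a single FFT frame.
    A sketch only; the two bands sum back to the original signal."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    affected_mask = (freqs >= band_lo) & (freqs < band_hi)
    affected = np.fft.irfft(np.where(affected_mask, X, 0), n=len(x))
    reference = np.fft.irfft(np.where(affected_mask, 0, X), n=len(x))
    return affected, reference
```

Mixing would then be applied only to the affected band, while the reference band is analysed unmodified for the binaural cue.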
In some embodiments, identifying the binaural cue may comprise analysing the reference subband in the first and second signals in order to identify a level, magnitude or power difference between the first and second signals in the reference subband. In such embodiments, modifying the affected subband in the first and second mixed signals may comprise applying respective first and second emphasis gains to the first and second mixed signals in the or each affected subband, the first and second emphasis gains being selected to correspond to the identified level, magnitude or power difference between the first and second signals in the reference subband.
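A minimal sketch of this level-difference re-emphasis, assuming a power-based ILD estimate and a symmetric split of the cue across the two channels (both choices are illustrative, not mandated by the embodiments above):

```python
import numpy as np

def emphasis_gains(ref1, ref2):
    """Estimate the level difference between the two microphones in a
    reference subband and return per-channel emphasis gains that
    re-impose it on the mixed signals."""
    p1 = np.mean(np.square(ref1)) + 1e-12
    p2 = np.mean(np.square(ref2)) + 1e-12
    ild = np.sqrt(p1 / p2)      # amplitude ratio, mic 1 over mic 2
    g = np.sqrt(ild)            # split the cue evenly across channels
    return g, 1.0 / g

def apply_emphasis(mix1, mix2, g1, g2):
    # Scale the affected subband of each mixed channel.
    return np.asarray(mix1) * g1, np.asarray(mix2) * g2
```

If the reference band of mic 1 is twice the level of mic 2, the ratio of the two emphasis gains is 2, so the affected band of the mixed signals regains the same 2:1 level cue.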
In some embodiments, identifying the binaural cue may comprise analysing the reference subband in the first and second signals in order to identify a time difference between the first and second microphone signals. In such embodiments, modifying the affected subband in the first and second mixed signals may comprise applying an emphasis delay to completely or partly restore the identified time difference to the first and second mixed signals in the or each affected subband.
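A sketch of the time-difference case, assuming a cross-correlation peak as the delay estimator (the estimator, sign convention and function names are illustrative):

```python
import numpy as np

def estimate_itd(ref1, ref2):
    """Estimate the inter-channel time difference, in samples, from a
    reference subband via the cross-correlation peak. A positive lag
    means the mic 1 signal arrives `lag` samples later."""
    corr = np.correlate(ref1, ref2, mode="full")
    return int(np.argmax(corr)) - (len(ref2) - 1)

def apply_emphasis_delay(mix1, mix2, lag):
    """Delay the earlier mixed channel so the affected subband carries
    the same time-of-arrival cue as the reference subband."""
    mix1 = np.asarray(mix1, dtype=float)
    mix2 = np.asarray(mix2, dtype=float)
    if lag > 0:     # channel 1 arrives later: delay channel 1's band
        mix1 = np.concatenate([np.zeros(lag), mix1[:-lag]])
    elif lag < 0:   # channel 2 arrives later: delay channel 2's band
        mix2 = np.concatenate([np.zeros(-lag), mix2[:lag]])
    return mix1, mix2
```

A fractional-sample delay (e.g. via interpolation or a phase ramp) would be needed to "partly restore" the cue; the integer shift above shows only the whole-sample case.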
In some embodiments, the binaural cue comprises both a delay between the microphone signals and a signal level difference between the microphone signals, whereby both emphasis gains and an emphasis delay are applied to the first and second mixed signals in the or each affected subband.
In some embodiments the mixing may comprise mixing the signals from at least two microphones, in low frequency subbands, so that the signal which is suffering from least wind noise in each of the low frequency subbands is preferentially used in that subband for further processing in both of the mixed signals.
In other embodiments, the mixing may comprise mixing the signals from at least two microphones, in middle-to-high frequency subbands, so that the signal which is suffering from least lens focus motor noise in each of the affected subbands is preferentially used in that subband for further processing in both of the mixed signals.
An example of the invention will now be described with reference to the accompanying drawings.
Focus noise in video recording, being the noise of the auto-focus motor of the video camera's lens, is a situation in which subband mixing between multiple microphone signals may be applied, for example between about 4 kHz and 12 kHz. The following description uses subband signal mixing to ameliorate focus noise as an example; however, it is to be appreciated that other embodiments of the present invention may be applied to low frequency subband mixing to address wind noise, for example.
Gj=(1−aj)*(ILDj−1)+1
The gain Gj is one (0 dB gain) if the mixing ratio aj is 1 (no mixing), or if ILDj is 1 (i.e. the mic1 and mic2 signals are of the same level). The calculation of Gj in other embodiments can take different forms, such as:
Gj=(1−aj)^2*(ILDj−1)+1;
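The first form of Gj can be checked numerically (the parameter values below are illustrative):

```python
def emphasis_gain(a_j, ild_j):
    """Gj = (1 - aj)*(ILDj - 1) + 1, the first form given above.
    a_j is the per-subband mixing ratio (1 means no mixing) and
    ild_j the inter-microphone level ratio from the reference band."""
    return (1.0 - a_j) * (ild_j - 1.0) + 1.0

# Sanity checks matching the text: Gj is 1 when aj = 1 or when ILDj = 1.
print(emphasis_gain(1.0, 2.0))  # 1.0
print(emphasis_gain(0.5, 1.0))  # 1.0
# Partial mixing with a 2:1 level cue gives a gain between 1 and ILDj.
print(emphasis_gain(0.5, 2.0))  # 1.5
```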
In alternative embodiments similar to
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
Number | Date | Country | Kind |
---|---|---|---|
2014901429 | Apr 2014 | AU | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/AU2015/050182 | Apr. 17, 2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/157827 | Oct. 22, 2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5371802 | McDonald | Dec 1994 | A |
8473287 | Every et al. | Jun 2013 | B2 |
20020041695 | Luo | Apr 2002 | A1 |
20090304188 | Mejia et al. | Dec 2009 | A1 |
20100280824 | Petit | Nov 2010 | A1 |
20110129105 | Choi | Jun 2011 | A1 |
20130010972 | Ma | Jan 2013 | A1 |
20140161271 | Teranishi | Jun 2014 | A1 |
20140226842 | Shenoy | Aug 2014 | A1 |
20160155453 | Harvey | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
2015003220 | Jan 2015 | WO |
Entry |
---|
Welker, Daniel P., et al. “Microphone-array hearing aids with binaural output. II. A two-microphone adaptive system.” IEEE Transactions on Speech and Audio Processing 5.6 (1997): 543-551. |
F. L. Wightman and D. J. Kistler, “The dominant role of low-frequency interaural time differences in sound localization,” J. Acoust. Soc. Amer., vol. 91, pp. 1648-1661, Mar. 1991. |
International Search Report and Written Opinion of the International Searching Authority, International Application No. PCT/AU2015/050182, dated Jun. 2, 2015. |
Australian Patent Office International-Type Search Report, National Application No. 2014901429, dated Nov. 18, 2014. |
Wikipedia, “Sound localization”, https://en.wikipedia.org/wiki/Sound_localization, retrieved Oct. 30, 2017. |
Number | Date | Country | |
---|---|---|---|
20170041707 A1 | Feb 2017 | US |