This disclosure relates to differential microphone arrays and, in particular, to constructing a first-order differential microphone array (FODMA) with steerable differential beamformers.
A differential microphone array (DMA) uses signal processing techniques to obtain a directional response to a source sound signal based on differentials of pairs of the source signals received by microphones of the array. DMAs may contain an array of microphone sensors that are responsive to the spatial derivatives of the acoustic pressure field generated by the sound source. The microphones of the DMA may be arranged on a common planar platform according to the microphone array’s geometry (e.g., linear, circular, or other array geometries).
The DMA may be communicatively coupled to a processing device (e.g., a digital signal processor (DSP) or a central processing unit (CPU)) that includes circuits programmed to implement a beamformer to calculate an estimate of the sound source. A beamformer is a spatial filter that uses the multiple versions of the sound signal captured by the microphones in the microphone array to identify the sound source according to certain optimization rules. A beampattern reflects the sensitivity of the beamformer to a plane wave impinging on the DMA from a particular angular direction. DMAs combined with proper beamforming algorithms have been widely used in speech communication and human-machine interface systems to extract the speech signals of interest from unwanted noise and interference.
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
DMAs may measure the derivatives (at different orders) of the sound signals captured by each microphone, where the collection of the sound signals forms an acoustic field associated with the microphone array. For example, a first-order DMA beamformer, formed using the difference between a pair of microphones (either adjacent or non-adjacent), may measure the first-order derivative of the acoustic pressure field. A second-order DMA beamformer may be formed using the difference between two first-order differences of first-order DMAs. The second-order DMA may measure the second-order derivatives of the acoustic pressure field by using at least three microphones. Generally, an Nth-order DMA beamformer may measure the Nth-order derivatives of the acoustic pressure field by using at least N + 1 microphones.
A beampattern of a DMA can be quantified in one aspect by the directivity factor (DF), which measures the capacity of the beampattern to maximize the ratio of its sensitivity in the look direction to its sensitivity averaged over the whole space. The look direction is the impinging angle from which the desired sound source arrives. The DF of a DMA beampattern may increase with the order of the DMA. However, a higher-order DMA can be very sensitive to noise generated by the hardware elements of each microphone of the DMA itself, where the sensitivity is measured according to a white noise gain (WNG). The design of a beamformer for the DMA may focus on finding an optimal beamforming filter under some criteria (e.g., beampattern, DF, WNG, etc.) for a specified array geometry (e.g., linear, circular, square, etc.).
First-order differential microphone arrays (FODMAs), which combine a small-spacing uniform linear array and a first-order differential beamformer, have been used in a wide range of applications for sound and speech signal acquisition. In applications such as hearing aids and Bluetooth headsets, the direction of the sound source may be assumed and beamformer steering is not needed. However, in many other applications, such as smart TVs, smartphones, tablets, etc., a steerable beamformer may be desired because the sound may not impinge along the endfire direction. For example, an LDMA may be mounted along the bottom side of a smart TV with voice recognition capabilities in order to form a beampattern along the broadside of the smart TV. Therefore, it would be useful to be able to steer the beamformer for such an LDMA in order to maximize signal acquisition (e.g., of a user's voice) and noise reduction.
The present disclosure provides an approach to the design of a linear differential microphone array (LDMA) with steerable beamformers. The approach described herein includes dividing the target beampattern into a sum of two sub-beampatterns, e.g., a cardioid and a dipole, where the summation is controlled by the steering angle. Two sub-beamformers are constructed: the first is similar to a traditional beamformer and is used to achieve the cardioid sub-beampattern, while the second is designed to filter the squared observation signals and is used to approximate the dipole sub-beampattern. The design of the second sub-beamformer focuses on the estimation of the spectral amplitude of the signal of interest while de-emphasizing the spectral phase, which is commonly accepted in speech enhancement and noise reduction.
For simplicity of explanation, methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all presented acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In an implementation, the methods may be performed by a hardware processor associated with the LDMA 300 of
Referring to
In an implementation, a uniform linear array composed of M microphones may be used to capture a signal of interest, e.g., LDMA 300 of
where X(ω) is the signal of interest (also referred to as the desired signal) received at the first microphone, Xm(ω) and Vm(ω) are, respectively, the speech and additive noise signals received at the mth microphone, j is the imaginary unit with j2 = -1, ω = 2πƒ is the angular frequency, ƒ > 0 denotes the temporal frequency, τ0 = δ/c, δ is the microphone spacing, c is the speed of sound in air, which is generally assumed to be 340 m/s, and θ is the source incidence angle. In DMAs, it is assumed that the spacing δ is much smaller than the smallest acoustic wavelength of the frequency band of interest such that ωτ0 ≪ 2π. For example, in the simulations and experiments described below, values of δ = 1 cm and δ = 1.1 cm are used for the spacing of the FODMA microphones. Since cos θ is an even function, the beampatterns of linear arrays are symmetric with respect to the line that connects all the sensors. Therefore, in the following description, the range of θ may be limited to [0, π].
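The phase relationship in the signal model above can be illustrated with a short numerical sketch. The function name and the parameter values below (1 kHz, 1 cm spacing) are illustrative, not part of the disclosure:

```python
import numpy as np

def steering_vector(omega, theta, M, delta=0.01, c=340.0):
    """Phase (steering) vector of a uniform linear array.

    Element m carries the phase delay exp(-1j * m * omega * tau0 * cos(theta)),
    with tau0 = delta / c, matching the signal model above.
    """
    tau0 = delta / c             # inter-element propagation delay tau0 = delta/c
    m = np.arange(M)             # microphone indices 0 .. M-1 (microphone 1 is the reference)
    return np.exp(-1j * m * omega * tau0 * np.cos(theta))

# Example: M = 3 microphones, delta = 1 cm, f = 1 kHz, endfire incidence (theta = 0).
d = steering_vector(2 * np.pi * 1000.0, 0.0, M=3)
```

Each entry has unit magnitude; only its phase encodes the propagation delay to that microphone.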
Traditionally, beamforming is achieved by applying a linear spatial filter, h(ω), to the microphone observation signals, i.e.,
where
is the observation signal vector, v(ω) is the noise signal vector defined analogously to the observation signal vector y(ω),
is a phase vector, the superscripts * and H denote, respectively, the complex-conjugate and transpose-conjugate operators, ω̅ = ωτ0 cos θ, T is the transpose operator, and Z(ω) is an estimate of X(ω). An objective of beamforming is to determine an optimal filter under certain criteria so that Z(ω) is a good estimate of X(ω).
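As an illustration of the filtering operation Z(ω) = hH(ω)y(ω), the toy example below applies a simple distortionless filter (a delay-and-sum choice, used here only for illustration and not the beamformer of this disclosure) to a synthetic observation vector:

```python
import numpy as np

# Hypothetical 3-element example: build y = d(omega, cos theta) * X + v,
# then apply a spatial filter h as Z = h^H y.
rng = np.random.default_rng(0)
M = 3
omega, tau0, theta = 2 * np.pi * 1000.0, 0.01 / 340.0, 0.0
d = np.exp(-1j * np.arange(M) * omega * tau0 * np.cos(theta))  # phase vector

X = 1.0 + 0.5j                                    # desired signal at microphone 1
v = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # small sensor noise
y = d * X + v                                     # observation signal vector

h = d / M                                         # delay-and-sum filter: h^H d = 1 (distortionless)
Z = np.vdot(h, y)                                 # Z = h^H y (np.vdot conjugates its first argument)
```

Because hH d = 1, the estimate Z equals X plus a small residual noise term hH v.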
At 104, the processing device may specify a target beampattern for the FODMA at a steering angle θ.
With linear microphone arrays and the traditional beamforming approach, as described above at (2), the beampattern of an FODMA may lack steering flexibility, i.e., its main lobe may be difficult to steer to directions other than the linear endfire directions. In one implementation, to steer the main lobe to any direction in the range of θ ∈ [0, π], the target frequency-independent beampattern of the FODMA may be expressed as:
where α0, α1, and α2 are real coefficients that determine the shape of the target beampattern for the FODMA.
At 106, the processing device may decompose the target beampattern into a first sub-beampattern and a second sub-beampattern based on the steering angle θ.
The target beampattern for the FODMA may be decomposed into two sub-beampatterns B1,1(θ) + B1,2(θ) wherein:
which are a first-order cosine (cardioid) pattern and a first-order sinusoidal (dipole) pattern, respectively. If α2 = 0, this target beampattern degenerates to one particular case in equation (2) above. Based on the properties of a Fourier series expansion, any first-order beampattern, which is continuous in [0, 2π], may be represented by target beampattern (5). At the main lobe (or desired steering) direction θ = θd, the target beampattern should be distortionless, i.e., B1(θd) = 1. Therefore, the following two conditions are satisfied:
Given the target beampattern in equation (5) above, the problem of differential beamforming becomes one of finding the beamforming filter, h(ω) in (2), so that the resulting beampattern resembles the target beampattern.
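The decomposition and the distortionless constraint can be checked numerically. Since equations (5)-(8) are not reproduced in this text, the explicit forms below, B1,1(θ) = α0 + α1 cos θ and B1,2(θ) = α2 sin θ, are an assumption consistent with the description, and the coefficient choice is one illustrative solution of B1(θd) = 1:

```python
import numpy as np

# Assumed forms (not reproduced above, inferred from the description):
#   B1(theta)  = alpha0 + alpha1*cos(theta) + alpha2*sin(theta)   (target)
#   B11(theta) = alpha0 + alpha1*cos(theta)                       (cardioid term)
#   B12(theta) = alpha2*sin(theta)                                (dipole term)
theta_d = np.pi / 4                                # desired steering direction
alpha0 = alpha1 = alpha2 = np.sqrt(2.0) - 1.0      # one choice giving B1(theta_d) = 1

def B11(theta): return alpha0 + alpha1 * np.cos(theta)
def B12(theta): return alpha2 * np.sin(theta)
def B1(theta):  return B11(theta) + B12(theta)

# Distortionless constraint at the look direction:
assert abs(B1(theta_d) - 1.0) < 1e-9
```

At any angle the target pattern is exactly the sum of the two sub-beampatterns, which is the property the two sub-beamformers are designed to exploit.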
At 108, the processing device may generate a first sub-beamformer and a second sub-beamformer to each filter signals from microphones of the FODMA, where the first sub-beamformer is associated with the first sub-beampattern, and the second sub-beamformer is associated with the second sub-beampattern.
The processing device may generate the two sub-beamformers h1(ω) and h2(ω), the outputs of which may be denoted as:
where M1 ≤ M, M2 ≤ M, and h1(ω) and h2(ω) are defined similarly to h(ω),
v1(ω) is defined analogously to y1(ω),
is defined similarly to
⊙ denotes the Hadamard product (element-wise product),
are the two phase vectors, and d2(ω, cos θ) is defined analogously to d1(ω, cos θ).
At 110, the processing device may generate the steerable beamformer based on the first sub-beamformer and the second sub-beamformer.
Given Z1(ω) and Z2(ω), the estimate of the desired signal, X(ω), may be obtained as:
wherein ϕ1(ω) is the spectral phase of the output of the sub-beamformer h1(ω) (the original noisy phase or an estimate of the phase of the clean speech spectrum may also be used). The choice of spectral phase generally has little impact on the quality of the estimated signal. Based on equations (9) and (10) above, the beampatterns of the two sub-beamformers may be defined as:
Equation (17) used to define the beampattern for the second sub-beamformer (e.g., h2(ω)), is based on equation (10) above which filters squared signals from the observation signal vector (e.g.,
(ω)). In an implementation, the cross term in (10) may be neglected, which should not affect the validity of the beampattern because the signal of interest and any noise signals are assumed to be uncorrelated.
Therefore, the overall beampattern of the designed beamformer is:
Given the above formulation, the beamforming in an implementation of this disclosure includes the construction of the filters h1(ω) and h2(ω) (e.g., the first and second sub-beamformers) in an optimal way such that their combination (e.g., the steerable beamformer for the FODMA) results in a beampattern Bd(θ), e.g., (18) above, which resembles the target beampattern given in equation (5) above.
The two sub-beamformers h1(ω) and h2(ω) may be determined according to the null-constrained method, which is widely used in the design of differential beamformers. Based on M1 ≥ 2, h1(ω) may be constructed using the following linear system of:
wherein
The minimum-norm solution of equation (19) may be expressed as:
Then, based on M2 ≥ 3, h2(ω) may be constructed using the following linear system of:
wherein
The minimum-norm solution of equation (23) may be expressed as:
In the particular case of M1 = 2 and M2 = 3, from (22) and (26):
wherein “DI” denotes the “direct inverse”.
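A minimum-norm, null-constrained construction can be sketched numerically. The exact constraint sets of equations (19) and (23) are not reproduced above, so the distortionless/null constraint pair below is an assumed illustration. For M1 = 2 the constraint matrix is square, and the minimum-norm solution reduces to a direct inverse, consistent with the "DI" case:

```python
import numpy as np

def min_norm_filter(A, b):
    """Minimum-norm solution of the underdetermined system A h = b,
    i.e. h = A^H (A A^H)^{-1} b (the pseudoinverse solution)."""
    return A.conj().T @ np.linalg.solve(A @ A.conj().T, b)

# Hypothetical null-constrained design for M1 = 2 microphones:
# distortionless response at theta_d and a null at theta_null.
M1, delta, c = 2, 0.01, 340.0
omega, tau0 = 2 * np.pi * 1000.0, delta / c
theta_d, theta_null = 0.0, np.pi        # endfire look direction, rear null (cardioid-like)

def d_vec(theta):
    """Phase vector d(omega, cos theta) for the M1-element subarray."""
    return np.exp(-1j * np.arange(M1) * omega * tau0 * np.cos(theta))

# Rows are d^H(theta), so A @ h = [d^H(theta_d) h, d^H(theta_null) h].
A = np.vstack([d_vec(theta_d).conj(), d_vec(theta_null).conj()])
b = np.array([1.0 + 0j, 0.0 + 0j])      # distortionless = 1, null = 0
h1 = min_norm_filter(A, b)

# Verify the two constraints.
assert abs(np.vdot(d_vec(theta_d), h1) - 1.0) < 1e-9
assert abs(np.vdot(d_vec(theta_null), h1)) < 1e-9
```

The same `min_norm_filter` applies unchanged when the constraint matrix has fewer rows than the number of microphones, which is the genuinely underdetermined (minimum-norm) case.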
At 112, the processing device may end the execution of operations to construct an FODMA with a steerable beamformer.
Referring to
As noted above, with respect to
Traditionally, beamforming is achieved by applying a linear spatial filter, h(ω), to the microphone observation signals, i.e., equations (2), (3) and (4) above. As noted above, an objective of beamforming is to determine the optimal filter, h(ω), so that the filtered signals from the microphones of the FODMA match the signals of interest from the sound source (e.g., a human voice).
At 204, a plurality (M) of microphones may be organized on a substantially planar platform, the plurality of microphones comprising a first subset (M1) of microphones and a second subset (M2) of microphones.
As described more fully below with respect to
At 206, a processing device may construct a first sub-beamformer based on the first sub-set (M1) of microphones and a target beampattern at a steering angle θ, wherein the first sub-beamformer is characterized according to a first-order cosine (cardioid) first sub-beampattern.
With linear microphone arrays and the traditional beamforming approach, as described above at (2), the beampattern of an FODMA may lack steering flexibility, i.e., its main lobe may be difficult to steer to directions other than the linear endfire directions. As noted above, in one implementation, to steer the main lobe to any direction in the range of θ ∈ [0, π], the target frequency-independent beampattern of the FODMA may be expressed according to (5), where α0, α1, and α2 are real coefficients that determine the shape of the target beampattern for the FODMA.
As described above, the target beampattern for the FODMA may be decomposed into two sub-beampatterns B1,1(θ) + B1,2(θ) according to (6) and (7), which are a first-order cosine (cardioid) pattern and a first-order sinusoidal (dipole) pattern, respectively.
The processing device may generate the two sub-beamformers h1(ω) and h2(ω), the output of the first sub-beamformer may be denoted as shown above at (9):
where M1 is a subset of M, h1(ω) is defined similarly to h(ω),
as noted at (11), v1(ω) is defined analogously to y1(ω), and
as described at (13) is a phase vector.
At 208, the processing device may construct a second sub-beamformer based on the second sub-set (M2) of the microphones and the target beampattern at the steering angle θ, wherein the second sub-beamformer is characterized according to a first-order sinusoidal (dipole) second sub-beampattern.
As described above, the target beampattern for the FODMA may be decomposed into two sub-beampatterns B1,1(θ) + B1,2(θ) according to (6) and (7), which are a first-order cosine (cardioid) pattern and a first-order sinusoidal (dipole) pattern, respectively.
The processing device may generate the two sub-beamformers h1(ω) and h2(ω), the output of the second sub-beamformer may be denoted as shown above at (10):
where M2 is a subset of M, h2(ω) is defined similarly to h(ω),
as noted at (12),
is defined similarly to
(ω), ⊙ denotes the Hadamard product (element-wise product),
as described at (14) is a phase vector, and d2(ω, cos θ) may be defined analogously to d1(ω,cosθ).
At 210, the processing device may generate the steerable beamformer based on the first sub-beamformer and the second sub-beamformer.
Given Z1(ω) and Z2(ω), the estimate of the desired signal, X(ω), may be obtained as described above at (15). The beampatterns of the two sub-beamformers may be defined as shown at (16) and (17) and therefore, the overall beampattern of the designed beamformer is:
as shown at (18) above. Given the above formulation, the beamforming in an implementation of this disclosure includes the construction of the filters h1(ω) and h2(ω) (e.g., the first and second sub-beamformers) in an optimal way so that their combination (e.g., the steerable beamformer) results in a beampattern Bd(θ), e.g., (18) above, which resembles the target beampattern given in equation (5) above.
At 212, the processing device may end the execution of operations to construct an FODMA with a steerable beamformer.
FODMA 300 may include uniformly distributed microphones (1, 2, ..., m, ..., M) that are arranged according to a linear array geometry on a common planar platform. The locations of these microphones may be specified with respect to a reference point (e.g., microphone 1). The coordinates of the microphones (2, ..., m, ..., M) of FODMA 300 may be specified by a distance mδ, with m = 1, 2, ..., M − 1, which denotes the spacing between the (m + 1)th microphone of the FODMA 300 and the specified reference point, microphone 1 of the FODMA 300, which is at distance 0 from itself. Accordingly, the vector p = [0, δ, 2δ, ..., mδ, ..., (M − 1)δ]T may be used to denote an array geometry 302 of the microphones (1, 2, ..., m, ..., M) of FODMA 300, where T is the transpose operator. It may be assumed that the maximum distance between two adjacent microphones (e.g., δmax) will be smaller than the wavelength λ of an impinging sound wave.
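The geometry vector p and the small-spacing assumption can be written out directly (the microphone count, spacing, and maximum frequency below are illustrative values, not taken from the disclosure):

```python
import numpy as np

# Geometry vector p = [0, delta, 2*delta, ..., (M-1)*delta]^T,
# with microphone 1 at the origin as the reference point.
M, delta = 4, 0.01                  # 4 microphones, 1 cm spacing (illustrative)
p = delta * np.arange(M)            # coordinates along the array axis, in meters

# Sanity check of the small-spacing assumption: the adjacent spacing
# should be smaller than the shortest wavelength of interest.
c, f_max = 340.0, 8000.0            # speed of sound, highest frequency of interest
wavelength_min = c / f_max          # 0.0425 m at 8 kHz
assert np.max(np.diff(p)) < wavelength_min
```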
As noted above with respect to the output of the two sub-beamformers h1(ω) and h2(ω) (see equations (9) and (10) above), signals from a set of microphones are used for each beamformer respectively, with h1(ω) using microphones 1 to M1 and h2(ω) using microphones 1 to M2, where max{M1, M2} ≤ M. Accordingly, the two sub-beamformers h1(ω) and h2(ω) may either use all of the M microphone sensors of FODMA 300 or a subset (e.g., subarray 304) of the M microphone sensors.
For an effective or valid target beampattern, the coefficients in equation (5) above should satisfy the condition in (8) above. In order to determine the coefficients α0, α1, and α2, considering the cases:
such that the target beampattern B1(θ) may be decomposed as:
with B1,1(θ) ≥ 0 and B1,2(θ) ≥ 0. Based on the conditions in (29) above being satisfied, it may be determined that for any value of α1: B1,1(α1, θ) = B1,1(-α1,π - θ).
Furthermore, taking the derivative of equation (5) above with respect to θ and equating the result to zero, we obtain:
Combining conditions (8) and (31) it may be determined that:
The directivity factor (DF) of B1(θ) may then be calculated as:
which increases as the value of α0 decreases. Substituting equations (31) and (32) into (33), it may be shown that the DF depends not only on the coefficients α0 and α1, but also on the steering angle θd.
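The DF can be evaluated numerically. Since equation (33) is not reproduced above, the integral form below (peak sensitivity over the spherically averaged sensitivity of an axisymmetric pattern) is one common definition, assumed here for illustration:

```python
import numpy as np

def directivity_factor(B, theta_d, n=200000):
    """Numerically evaluate a common DF definition for an axisymmetric
    beampattern B(theta): |B(theta_d)|^2 divided by the spherical average
    of |B(theta)|^2, i.e. (1/2) * integral over [0, pi] of |B|^2 sin(theta)."""
    theta = (np.arange(n) + 0.5) * np.pi / n                      # midpoint grid on [0, pi]
    avg = 0.5 * np.sum(np.abs(B(theta))**2 * np.sin(theta)) * np.pi / n
    return np.abs(B(theta_d))**2 / avg

# Classical sanity checks: both the dipole cos(theta) and the cardioid
# 0.5 + 0.5*cos(theta) have DF = 3 under this definition.
df_dipole = directivity_factor(np.cos, 0.0)
df_cardioid = directivity_factor(lambda th: 0.5 + 0.5 * np.cos(th), 0.0)
```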
Graph 400A of
Graph 400B of
Based on the results shown in graphs 400A and 400B of
For example, if θd = π/4, then α0 = α1 = α2 = √2 − 1, and G1 = 2.51 dB. In such a case, B1,1(θ) is a scaled cardioid and B1,2(θ) is a scaled dipole along the direction π/2.
It should be noted that the aforementioned decomposition of an FODMA beampattern may be generalized to higher orders. Based on the multistage structure in the construction of DMAs, the response of a general Nth-order DMA is equal to the product of N FODMAs' responses:
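The multistage product structure described above can be sketched numerically. The generic first-order form α + (1 − α) cos θ below is an illustrative stand-in; the two-dipole example yields the second-order pattern cos²θ:

```python
import numpy as np

def first_order(alpha):
    """Generic first-order pattern alpha + (1 - alpha)*cos(theta) (illustrative)."""
    return lambda th: alpha + (1.0 - alpha) * np.cos(th)

# An Nth-order response as the product of N first-order responses:
# here, two dipole stages (alpha = 0) multiply to cos(theta)**2.
stages = [first_order(0.0), first_order(0.0)]

def second_order(th):
    out = np.ones_like(np.asarray(th, dtype=float))
    for stage in stages:
        out = out * stage(th)       # multiply the stage responses
    return out

theta = np.linspace(0.0, np.pi, 5)
assert np.allclose(second_order(theta), np.cos(theta)**2)
```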
For the purpose of studying the performance of the method described herein, a uniform linear array consisting of 3 microphones (e.g., M = 3 in FODMA 300 of
It can be seen from graph 500B that for a particular value of θd, the value of DF is almost constant over the studied frequency range. This property may be very important for processing wideband signals such as speech.
The frequency independence of the designed beampattern is further verified by graph 500C, where the designed beampattern is frequency invariant.
The distance between the designed beampattern and the target beampattern may be computed according to:
The results are plotted in graph 500D with conditions: M1 = 2, M2 = 3, and δ = 1 cm. It may be readily seen that the difference between the designed beampattern and the target beampattern is very small in graph 500D.
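The distance between the designed and target beampatterns can be evaluated over an angular grid. The exact measure in the equation above is not reproduced, so the mean squared error below is an illustrative stand-in:

```python
import numpy as np

def beampattern_mse(B_designed, B_target, n=3600):
    """Mean squared distance between two beampatterns over theta in [0, pi].
    This MSE over a uniform angular grid is an illustrative stand-in for the
    distance measure referenced above, which is not reproduced here."""
    theta = np.linspace(0.0, np.pi, n)
    err = np.abs(B_designed(theta)) - np.abs(B_target(theta))
    return np.mean(err**2)

# Identical patterns have zero distance; a scaled copy does not.
target = lambda th: 0.5 + 0.5 * np.cos(th)
dist_same = beampattern_mse(target, target)
dist_off = beampattern_mse(lambda th: 0.9 * target(th), target)
```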
In another simulation (
As noted above,
To further verify the performance of the methods described herein, a uniform linear array consisting of three microphones was used, with a uniform microphone spacing δ of 1.1 cm. The described beamforming algorithm was coded into the DSP of the designed FODMA system. The system was then tested on top of a rotating platform in an anechoic chamber. A loudspeaker was placed at the same height as the FODMA to simulate a sound source. The platform rotated clockwise at intervals of 5°. The beampattern was obtained by measuring the FODMA array gain at each angle based on the reference input signal (e.g., the loudspeaker signal) and the beamforming output. The results at two different steering angles and frequencies are plotted in
It is clear from graphs 700A and 700B that the measured beampatterns (solid lines) are close to the target beampatterns (dashed lines), although there are some differences, which may be caused by several factors, such as measurement errors.
In alternative implementations, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. The machine may be an onboard vehicle system, wearable device, personal computer (PC), a tablet PC, a hybrid tablet, a personal digital assistant (PDA), a mobile telephone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Similarly, the term “processor-based system” shall be taken to include any set of one or more machines that are controlled by or operated by a processor (e.g., a computer) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein.
Example computer system 800 includes at least one processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.), a main memory 804 and a static memory 806, which communicate with each other via a link 808 (e.g., bus). The computer system 800 may further include a video display unit 810, an alphanumeric input device 812 (e.g., a keyboard), and a user interface (UI) navigation device 814 (e.g., a mouse). In one implementation, the display device 810, input device 812 and UI navigation device 814 are incorporated into a touch screen display. The computer system 800 may additionally include a storage device 816 (e.g., a drive unit), a signal generation device 818 (e.g., a speaker), a network interface device 820, and one or more sensors 822, such as a global positioning system (GPS) sensor, compass, accelerometer, gyrometer, magnetometer, or other sensor.
The storage device 816 includes a machine-readable medium 824 on which is stored one or more sets of data structures and instructions 826 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 826 may also reside, completely or at least partially, within the main memory 804, static memory 806, and/or within the processor 802 during execution thereof by the computer system 800, with the main memory 804, static memory 806, and the processor 802 also constituting machine-readable media.
While the machine-readable medium 824 is illustrated in an example implementation to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 826. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. Specific examples of machine-readable media include volatile or non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 826 may further be transmitted or received over a communications network 828 using a transmission medium via the network interface device 820 utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks). Input/output controllers 830 may receive input and output requests from the central processor 802, and then send device-specific control signals to the devices they control (e.g., display device 810). The input/output controllers 830 may also manage the data flow to and from the computer system 800. This may free the central processor 802 from involvement with the details of controlling each input/output device.
In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “segmenting”, “analyzing”, “determining”, “enabling”, “identifying,” “modifying” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system’s registers and memories into other data represented as physical quantities within the computer system memories or other such information storage, transmission or display devices.
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an implementation” or “one implementation” throughout is not intended to mean the same implementation unless described as such.
Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.”
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2021/076435 | 2/10/2021 | WO |