Computing devices routinely employ techniques to identify the words spoken by a human user based on the various qualities of a received audio input. Such techniques are called speech recognition or automatic speech recognition (ASR). Speech recognition combined with natural language processing (NLP) techniques may allow a user to control a computing device to perform tasks based on the user's spoken commands. ASR and NLP may together be considered types of speech processing.
Speech processing and other sound detection and processing have become important parts of a number of different computing systems. For certain speech processing systems it may be desirable to know (or at least to estimate or infer) where the speaker is relative to the sensor(s)/microphone(s) that are detecting the speech. Knowing the relative direction that the speech will be arriving from may enable various direction-specific features or capabilities that may improve the output of the system. To infer the position of the speaker, multiple sensors/microphones may be used. Knowing the relative positions of the individual sensors/microphones and the speed at which sound travels, the time differences between the arrival of speech at the different sensors/microphones may be used to infer the position or movement of the speaker relative to the sensors/microphones.
One particular sensor arrangement that may be used with such systems is a spherical sensor array.
As can be appreciated, to enable more precise and accurate calculations of the direction of the incoming speech 180 it is desirable to account for noise that may be present in the observed sound. Being able to account for such noise may lead to better results. However, precisely separating the noise from the speech is difficult. To overcome this difficulty, certain assumptions may be made about the noise. First, a system 102 may assume that a noise component of a detected audio signal is spherically isotropic diffuse noise, namely noise that is evenly distributed across the sphere of the array 160. Using this assumption, the system 102 may determine (122) first entry values for a first noise covariance matrix (described below) based on the noise being assumed to be diffuse and based on the positions of the sensors 140 on the surface of the array 160. This first noise covariance matrix may be predetermined, that is, determined prior to detection of audio by the system 102. The system 102 may then detect (124) audio using the spherical array 160. The audio may include both a speech component (such as the incoming speech 180) and a noise component. The audio may be converted into an audio signal by the sensors of the array. The system 102 may then estimate (126) a noise intensity using an intensity of the audio signal. The detected signal intensity may be represented by a pressure detected by the sensors of the array, by a volume detected by the sensors of the array, or by some other observed value. The system 102 may then determine (128) second entry values for a second noise covariance matrix using the estimated noise intensity and the first noise covariance matrix. The second entry values may be the first entry values multiplied by an estimated value for the noise intensity, as described below. The system 102 may then estimate (130) the direction of arrival of the speech 180 using a Kalman filter (described below) and the second noise covariance matrix. Steps 122-130 may be performed by device 100 or by another device (not illustrated) of system 102, for example a device configured to predetermine the first covariance matrix and provide that first covariance matrix to device 100. Thus, the system uses two components, the speech from the desired speaker and a diffuse noise component, and based on those two components estimates what pressure signals should be observed at each of the sensors on the spherical array. Then the system may estimate a direction of the speaker that makes what was observed at the sensors most closely match the model for the diffuse noise and the speech. Further operations of the system, and a detailed explanation of steps 122-130, are described below.
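As a high-level illustration, the flow of steps 122-130 might look as follows in Python. This is a toy sketch: all names are hypothetical, and the stubbed identity covariance merely stands in for the spherical-array noise model developed below.

```python
import numpy as np

S = 32                                      # number of sensors on the array

# Step 122: first noise covariance matrix, precomputed without any
# intensity factor (stubbed here as identity; see the spherical model below).
V_bar = np.eye(S)

alpha_hat = 1.0                             # running diffuse-noise intensity estimate
beta = 0.95                                 # weighting factor, 0 < beta < 1

for y_k in np.random.randn(10, S):          # step 124: detected audio frames (stub)
    resid = y_k                             # stub: observed minus model-predicted data
    inst = resid @ np.linalg.solve(V_bar, resid) / S
    alpha_hat = beta * alpha_hat + (1 - beta) * inst   # step 126: noise intensity
    V_k = alpha_hat * V_bar                 # step 128: second noise covariance matrix
    # Step 130: V_k is supplied to the Kalman filter to estimate the
    # direction of arrival (see the state estimation section below).
```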
State Inference and Observation Expression
As discussed above, the pressure may be measured at particular sensors on a sensor array. These pressure values are observable. From these observables, the system may infer the speaker direction. The direction of arrival of the desired speech at time k, also known as the state the system is attempting to infer, may be denoted by xk. Equation 1 shows the inference of the state xk:
$$x_k = x_{k-1} + u_{k-1} \qquad (1)$$
where u is a noise term. Thus, the direction of the speaker at time k (xk) can be inferred based on the direction of the speaker (xk-1) at the previous time k−1 and the noise (uk-1) at the previous time k−1.
The observation at time k may be denoted by yk. Thus yk represents, based on the current state (i.e., the current direction of arrival of the desired speech), what the system is expected to observe (i.e., what the system should measure at the sensors). Equation 2 shows the observation yk:
$$y_k = H_k(x_k) + v_k \qquad (2)$$
where Hk(xk) is the known, nonlinear observation functional and v is a noise term. The observation term yk is a vector of the actual outputs from the individual sensors/microphones (for example, the 32 sensors of an example spherical array).
Equations 1 and 2 may govern an extended Kalman filter (EKF) used to predict the state xk. Generally, Kalman filtering is an algorithm that uses a series of measurements observed over time, containing random variations (such as noise) and other inaccuracies, and produces estimates of unknown variables (such as the direction of arrival of speech, xk). An extended Kalman filter is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. Equations 1 and 2 may be used by a Kalman filter or EKF to estimate the direction of arrival of speech.
In general, a Kalman filter may operate by determining how much weight to give to previous measurements versus model predictions when arriving at future estimates. That weight may be generally referred to as a Kalman Gain. Generally, a high gain places more weight on observed measurements, while a low gain places more weight on model predictions. One way of judging the effectiveness of the Kalman filter is to measure the error. In the situation of noise arriving at a spherical array discussed above, the error may be the difference between the actual observed measurements (y) and the predicted observation Hk(xk). Thus, the weighted square error ε at a particular time k and estimated position of incoming speech (θ, φ) (discussed below) may be expressed as:
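A plausible form of equation 3, with ŷk,l(θ, φ) denoting the predicted observation for candidate direction (θ, φ), is:

$$\varepsilon_k(\theta,\varphi)=\sum_{l=0}^{L-1}\left(\mathbf{y}_{k,l}-\hat{\mathbf{y}}_{k,l}(\theta,\varphi)\right)^{H}\mathbf{V}_k^{-1}\left(\mathbf{y}_{k,l}-\hat{\mathbf{y}}_{k,l}(\theta,\varphi)\right)\qquad(3)$$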
where yk,l is the sensor output from the spherical array at subband sample l and time k. As estimation is done in the frequency domain, each subband sample l is determined by taking a short time signal from each sensor and transforming it using a fast Fourier transform (FFT) to convert it to a subband sample in the frequency domain. The total sensor output may include a total of L subband samples. Thus the error may be measured across the subband samples l from subband sample 0 to subband sample L−1. The error ε is based on particular sensor locations on the array, as measured by angular values θ and φ, where θs is the polar angle (i.e., inclination) and φs is the azimuth of the location of an individual sensor s relative to the center of the spherical array.
As shown in Equations 1 and 2, both the state inference xk and the expected observation yk depend on noise terms, with u representing the noise in Equation 1 and v representing the noise in Equation 2. The noise terms u and v are assumed to be statistically independent, zero mean white Gaussian random vector processes. These noise terms represent real-world randomness that may be experienced when observing sound. Such randomness may be referred to as a stochastic process. Thus u represents the random movement of the speaker and v represents the random audio signal noise that may be observed by the sensors. Accounting for such randomness will allow the system to better infer the direction of the arrival of the desired speech.
Due to the random nature of the noise terms, however, it is difficult to precisely determine what the noise is from one moment in time (e.g., k−1) to the next (e.g., k). However, certain aspects of the noise may be predicted. For example, the noise u of the movement of the speaker is likely to be within a certain distance from one moment of time to the next (e.g., k−1 to k). Similarly, the expected noise v in the observed sound at different sensors may be estimated. Thus the system may assume covariance matrices of the noise terms, and use those covariance matrices when determining the state x using the Kalman filter. The covariance matrices are models of the randomness of the noise. Covariance is a measurement of how two random variables change together. Thus, covariance matrix Vk is a representation of how much the noise variables represented in vk change across the sensors of the spherical array. Uk and Vk may be expressed as:
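In the standard formulation these are the second moments of the zero-mean noise processes; plausibly:

$$\mathbf{U}_k = E\left\{\mathbf{u}_k\,\mathbf{u}_k^{H}\right\} \qquad (4)$$

$$\mathbf{V}_k = E\left\{\mathbf{v}_k\,\mathbf{v}_k^{H}\right\} \qquad (5)$$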
where E{·} denotes expectation. Using equation 5, the weighted square error of equation 3 may thus be expressed as:
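Substituting the observation model of equation 2, a plausible form of equation 6 is:

$$\varepsilon_k(\theta,\varphi)=\sum_{l=0}^{L-1}\left(\mathbf{y}_{k,l}-\mathbf{H}_{k,l}(\mathbf{x}_k)\right)^{H}\mathbf{V}_k^{-1}\left(\mathbf{y}_{k,l}-\mathbf{H}_{k,l}(\mathbf{x}_k)\right)\qquad(6)$$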
A goal of the Kalman filter is to choose the time sequence {xk} so as to minimize the summation over time of the weighted least square error:
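A plausible statement of this objective is:

$$\min_{\{\mathbf{x}_k\}}\;\sum_{k}\varepsilon_k(\theta_k,\varphi_k)$$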
under the constraint that xk=(θk, φk) (namely the expected direction of arrival of the desired sound expressed in terms of θ and φ) changes gradually over time. How quickly xk changes may be governed by covariance matrix Uk. Thus, based on an estimated direction of arrival of sound (x), the sound observed/measured at locations on the sphere (y) should match the sound expected at the locations on the sphere (H). If those two sound measurements do not sufficiently match (leading to high error) the state xk of the Kalman filter may be adjusted to reduce the error.
As can be seen in equation 6, the error is based on the covariance matrix Vk. The covariance matrix Vk is a square Hermitian matrix with S×S parameters, where S represents the number of sensors on the spherical array. To fully calculate Vk would involve individually calculating the covariance between each pair of sensors on the spherical array and then populating Vk with those values. This is a difficult, if not impossible, task given that the precise values of the noise are unknown. As a result, in previous work Vk was assumed to be diagonal and constant for ease of calculation. This assumption, however, is not correct and may lead to undesired performance due to the difference between the assumed Vk and its correct value.
However, the system may be improved by assuming Vk to be a full matrix and non-constant (i.e., varying with time), which more closely resembles real-world conditions. As noted, determining values for a full matrix Vk is a non-trivial problem. In the case of a spherical sensor array, however, the system may make use of the spherical properties of the sensor array, specifically certain known behaviors of sound as it impacts a sphere, to arrive at an improved estimate of Vk, which in turn improves the Kalman filter results.
Offered is an improved method to determine Vk based on certain assumptions about the observation noise vk and the configuration of the spherical array. In particular, a first set of values for a first noise covariance matrix V may be determined by assuming that the random variation modeled by the observation noise vk is due to spherically isotropic (SI), diffuse noise, namely noise that may be assumed to be arriving from all directions approximately uniformly at the spherical sensor array. For illustration, the hum of an air conditioner running in a room may be a diffuse noise (assuming the listener is not close to the output of the air conditioner itself). The first noise covariance matrix V may thus be determined ahead of time, i.e., prior to detecting sound with the spherical array at time k. A noise intensity value (that will be used to determine Vk) may then be estimated based on the data observed by the sensors at time k, and that intensity value used to scale the first noise covariance matrix V to arrive at an estimated (and second) noise covariance matrix Vk that is used by the Kalman filter for state estimation. The various characteristics of sound impacting a sphere, and how those may be used to determine the values for the first noise covariance matrix V, are explained below. That discussion is followed by a description of how the intensity value for Vk may be estimated and how the estimated (and second) noise covariance matrix Vk may be used by a Kalman filter to infer the state x, namely the direction of arrival of speech.
Estimating Observed Noise Covariance Based on Spherical Properties of the Sensor Array
To estimate the direction of arrival of speech, the observed sound at various points on the sphere is taken into account. The behavior of a plane wave (such as sound) impinging on a rigid sphere (such as the spherical sensor array) may be described by an array manifold vector. The array manifold vector includes a representation of the value of the pressure from the sound measured at each sensor in the spherical array. When a sound wave hits a spherical surface, the sound wave scatters. The array manifold vector describes the effect of the sound wave scattering on the spherical surface. The pressure of the sound, G, may be different for each sensor location Ωs, where s indexes the individual sensors. Thus, if there are S sensors in the spherical array, the sensors are represented by Ω0, Ω1, . . . ΩS-1. The actual position Ωs of each sensor/microphone on the sphere is measured by the polar angle θs and azimuth φs of the respective sensor/microphone relative to the defined axes of a spherical coordinate system for the array.
The direction of arrival of the speech is also represented in the array manifold vector. As noted above, the state xk that the Kalman filter is attempting to determine is this same direction of arrival of the speech. In the array manifold vector the direction of arrival is represented by Ω; thus, xk=Ω. Ω is also measured by angular values θ and φ, representing the direction of the arrival of the speech relative to the center of the spherical sensor array.
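A plausible form of the array manifold vector (equation 8), stacking the scattering response G at each of the S sensor positions for a candidate arrival direction Ω, is:

$$\mathbf{g}(f\alpha;\Omega)=\begin{bmatrix}G(f\alpha;\Omega_{0};\Omega)&G(f\alpha;\Omega_{1};\Omega)&\cdots&G(f\alpha;\Omega_{S-1};\Omega)\end{bmatrix}^{T}\qquad(8)$$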
where α is the radius of the sphere, f is the wave number of the incoming sound (i.e., 2π/λ, where λ is the wavelength), Ωs is the position on the sphere of sensor s, Ω is the expected position of arrival of the sound on the sphere, and G is the wave that results from the sound scattering on the rigid sphere. G may be given by the equation:
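A plausible form of G (equation 8A), following the standard rigid-sphere scattering solution (e.g., Williams, 1999), with jn and hn the spherical Bessel and Hankel functions, primes denoting derivatives, and γ the angle between the sensor position Ωs and the arrival direction Ω, is:

$$G(f\alpha,\gamma)=\sum_{n=0}^{\infty}i^{n}(2n+1)\left[j_{n}(f\alpha)-\frac{j_{n}'(f\alpha)}{h_{n}'(f\alpha)}\,h_{n}(f\alpha)\right]P_{n}(\cos\gamma)\qquad(8A)$$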
G represents the sound pressure field for a plane sound wave impinging on a rigid sphere. Further, for a particular Kalman filter state xk (where xk=Ω=(θ,φ)), the observation function providing the predicted observation ŷk,l can thus be expressed as:
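One plausible form pairs the manifold vector with the subband speech component and an assumed subband phase term:

$$\hat{\mathbf{y}}_{k,l}=\mathbf{g}_{k,l}(\theta,\varphi)\,B_{k,l}\,e^{i\omega_{l}\tau}\qquad(9)$$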
where Bk,l is the l-th subband component of the desired speech at time step k and the exponential factor is a subband phase term.
Certain characteristics of a plane sound wave impacting a rigid spherical surface (such as that of a spherical sensor array) laid out above are based on various previous work described in the publications listed at the end of this disclosure, which are herein incorporated by reference in their entireties.
As mentioned above, the first noise covariance matrix V may be determined by assuming that the random variation modeled by the observation noise v is due to spherically isotropic, diffuse noise. Spherically isotropic (SI) noise is, by definition, uniform sound arriving from all directions with a uniform power density of σSI². This is explained further in G. W. Elko, "Spatial coherence functions for differential sensors in isotropic noise fields," in Microphone Arrays, M. Brandstein and D. Ward, Eds. Heidelberg, Germany: Springer Verlag, 2001, ch. 4, which is hereby incorporated by reference in its entirety. The directions of arrival of the plane wave are represented by Ω, where, as noted above, Ω is measured by polar angle θ and azimuth φ, both relative to the center of the sphere, allowing Ω to also be expressed as (θ, φ). Thus, for a rigid sphere with a radius of α and a wave number f, the inter-sensor covariance matrix of SI noise on a spherical sensor array may be expressed as:
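A plausible form of equation 10 integrates the outer product of the manifold vector over all arrival directions Ω = (θ, φ), weighted by the uniform power density:

$$\boldsymbol{\Sigma}(f\alpha)=\frac{\sigma_{SI}^{2}}{4\pi}\int_{0}^{2\pi}\!\!\int_{0}^{\pi}\mathbf{g}(f\alpha;\Omega)\,\mathbf{g}^{H}(f\alpha;\Omega)\,\sin\theta\,d\theta\,d\varphi\qquad(10)$$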
If the array manifold vector of equation 8 is substituted into equation 10, and the expectation and integration are performed, the (s,s′)th component of Σ(fα) (namely the covariance between points Ωs and Ωs′ on the sphere of radius α for a sound wave of wave number f) may be expressed as:
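A plausible form of the result, writing bn(fα) for the order-n modal coefficient implied by the scattering function G (the normalization here is an assumption):

$$\Sigma_{s,s'}(f\alpha)=\sigma_{SI}^{2}\sum_{n=0}^{\infty}\left|b_{n}(f\alpha)\right|^{2}\sum_{m=-n}^{n}Y_{n}^{m}(\Omega_{s})\,Y_{n}^{m*}(\Omega_{s'})\qquad(11)$$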
where Yn^m is the spherical harmonic of order n and degree m (as explained in the Driscoll and Healy (1994) reference discussed above), and bn(fα) is the modal coefficient of order n for a rigid sphere of radius α.
Further taking into account the spherical nature of the sensor array, equation 11 may be refined to allow for improved estimation of the covariance matrix Vk, thus resulting in improved estimation of the arrival of the detected speech. The addition theorem for the pressure function showing the expansion of spherical harmonics on the sphere can be expressed as:
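The standard addition theorem relates the spherical harmonics at two points to the Legendre polynomial Pn of their angular offset γ:

$$\sum_{m=-n}^{n}Y_{n}^{m}(\Omega_{s})\,Y_{n}^{m*}(\Omega_{s'})=\frac{2n+1}{4\pi}\,P_{n}(\cos\gamma)\qquad(12)$$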
where γ is the angular offset between two points Ωs=(θs, φs) and Ωs′=(θs′, φs′) on the sphere of radius α.
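Substituting equation 12 into equation 11 gives a plausible form of equation 13:

$$\Sigma_{s,s'}(f\alpha)=\frac{\sigma_{SI}^{2}}{4\pi}\sum_{n=0}^{\infty}(2n+1)\left|b_{n}(f\alpha)\right|^{2}P_{n}(\cos\gamma_{s,s'})\qquad(13)$$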
Equation 13 shows the relation of covariance between various points on a sphere. Namely, equation 13 shows that the covariance between any two points depends on their angular separation γ, the radius of the sphere α, and the wave number f (which essentially normalizes frequency). The self-variances of the outputs of all sensors/microphones on the spherical array are equal. Equation 13 does not depend on the direction of arrival of the noise because the noise is assumed to be diffuse, i.e., arriving from all directions simultaneously with equal intensity. As shown in equation 13, each individual entry of the noise covariance matrix may depend on the radius of the sphere of the array (α), the wave number (f), the angular offset (γ) between the sensor locations whose covariance is represented by the particular matrix entry, and the power density σSI². What equation 13 does not supply, however, is the actual noise intensity itself. The noise intensity is the expectation of the square of the pressure field arriving at each sensor, namely the power density σSI². Because the power density σSI² is initially unknown, it may be initially removed from the estimated noise covariance matrix and then replaced when a value is known (as discussed below). Thus, a first noise covariance matrix may be determined (as described above in reference to step 122), using equation 13 (with σSI² removed), where the first noise covariance matrix includes first values for the individual matrix entries and the first values do not depend on the power density or intensity of incoming sound.
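As an illustration, a minimal Python sketch of the intensity-free first covariance matrix might look as follows. All names are hypothetical; the rigid-sphere modal coefficient follows the standard scattering solution assumed in equation 8A above, and the σSI² factor is deliberately omitted, matching the text.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def mode_coefficient(n, fa):
    # b_n for a rigid sphere: j_n(fa) - (j_n'(fa) / h_n'(fa)) h_n(fa),
    # with h_n the spherical Hankel function of the first kind.
    jn = spherical_jn(n, fa)
    jn_d = spherical_jn(n, fa, derivative=True)
    hn = jn + 1j * spherical_yn(n, fa)
    hn_d = jn_d + 1j * spherical_yn(n, fa, derivative=True)
    return jn - (jn_d / hn_d) * hn

def diffuse_covariance(sensor_dirs, fa, n_max=30):
    """First noise covariance matrix of spherically isotropic noise,
    up to the (initially unknown) power density sigma_SI**2.

    sensor_dirs: (S, 2) array of (theta, phi) sensor positions.
    fa: wave number times sphere radius.
    """
    th, ph = sensor_dirs[:, 0], sensor_dirs[:, 1]
    # Unit vectors of the sensor positions; cos(gamma) for every sensor pair.
    xyz = np.stack([np.sin(th) * np.cos(ph),
                    np.sin(th) * np.sin(ph),
                    np.cos(th)], axis=1)
    cos_gamma = np.clip(xyz @ xyz.T, -1.0, 1.0)
    V = np.zeros_like(cos_gamma)
    for n in range(n_max + 1):
        bn = mode_coefficient(n, fa)
        V += (2 * n + 1) * np.abs(bn) ** 2 * eval_legendre(n, cos_gamma)
    return V / (4.0 * np.pi)
```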
With the first noise covariance matrix (and the individual first values of the matrix) determined, the noise intensity may be incorporated later when incoming speech is detected by the sensor array. As detailed below, the noise intensity may be estimated using the observed sound, including the intensity of the observed sound (as described above in reference to step 124) thus completing the estimate for a covariance matrix of the spherical array. The intensity of the observed sound may be based on pressure, volume, or other sound characteristics measured by the sensors of the array.
First, consider the sensor self-variance implied by the model. The predicted speech component of the observation at subband l may be written as μk,l(Ω)=gk,l(Ω)Bk,l multiplied by the subband phase term, i.e., the array manifold vector scaled by the desired speech component. The likelihood of the observed sensor outputs may then be expressed in terms of a scalar noise intensity α, where α is defined to be the power density σSI² of the spherically isotropic diffuse noise.
Summing the log-likelihood over K samples in a training set, taking the derivative with respect to α, and equating the result to zero yields the maximum likelihood estimate of α, namely:
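A plausible form of this maximum likelihood estimate (equation 16), consistent with the per-sample term of equation 17 below and writing V̄ for the intensity-free first noise covariance matrix:

$$\hat{\alpha}=\frac{1}{KS}\sum_{k=0}^{K-1}\sum_{l=0}^{L-1}\left(\mathbf{y}_{k,l}-\boldsymbol{\mu}_{k,l}\right)^{H}\bar{\mathbf{V}}^{-1}\left(\mathbf{y}_{k,l}-\boldsymbol{\mu}_{k,l}\right)\qquad(16)$$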
where α̂ is the estimated value of the intensity α, also called the variance. A further assumption may be made that the instantaneous power of the diffuse noise varies, but only slowly. Thus a running estimate of α at sample K may be expressed as:
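Consistent with the description that follows, equation 17 plausibly takes the form:

$$\hat{\alpha}(K)=\beta\,\hat{\alpha}(K-1)+\frac{1-\beta}{S}\sum_{l=0}^{L-1}\left(\mathbf{y}_{K,l}-\boldsymbol{\mu}_{K,l}\right)^{H}\bar{\mathbf{V}}^{-1}\left(\mathbf{y}_{K,l}-\boldsymbol{\mu}_{K,l}\right)\qquad(17)$$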
Thus equation 17 shows that α̂ (the estimated value of α) at a particular time (K) depends on the value of α̂ at the previous time (K−1), adjusted by a weighting factor β, where 0<β<1, and S is the number of sensors of the array. The weighting factor may be adjusted depending on how heavily the system should weight previous samples. The value yK,l represents the observed data at subband l and the value μK,l represents the predicted data, based on the model of speech impinging on the spherical surface, at subband l. As can be appreciated from equation 17, if β is set to 1, the current estimate α̂(K) would be equal to the previous estimate α̂(K−1). If β is set to 0, the current estimate α̂(K) would be equal to the second half of equation 17, which compares the new set of observed data (i.e., the observed signal intensity) to the predicted data. Thus, the estimated intensity may be based on a weighted previous intensity (βα̂(K−1)) plus a weighted observed signal intensity.
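A minimal Python sketch of this running update (function and variable names are hypothetical; the V̄-weighted residual follows the reconstruction of equation 17 above):

```python
import numpy as np

def update_intensity(alpha_prev, y, mu, V_bar_inv, beta=0.95):
    """One running update of the diffuse-noise intensity estimate.

    y, mu: (L, S) complex arrays of observed and predicted subband data.
    V_bar_inv: inverse of the (S, S) intensity-free first covariance matrix.
    beta: weighting factor, 0 < beta < 1.
    """
    S = y.shape[1]
    resid = y - mu
    # Instantaneous observed-vs-predicted mismatch, normalized by sensor count.
    inst = np.real(np.einsum('ls,st,lt->', resid.conj(), V_bar_inv, resid)) / S
    # Weighted previous estimate plus weighted observed signal intensity.
    return beta * alpha_prev + (1.0 - beta) * inst
```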
The estimate α̂ may be used as the estimated intensity of the diffuse noise, as referenced above in step 126. This intensity estimate α̂ may be used to adjust the first values of the first noise covariance matrix (described above in reference to equations 11-13) to estimate the second (and ultimate) covariance matrix Vk, as described above in reference to step 128. Each of the first values of the first noise covariance matrix may be multiplied by the intensity estimate α̂ to arrive at the second values of the second noise covariance matrix. The new estimated (i.e., second) covariance matrix Vk may now be used by the Kalman filter to estimate the direction of the arrival of speech, as explained below.
Estimating Speech Direction
The offered estimates for the covariance matrix Vk may be used with a number of different techniques for estimating the arrival direction of speech using a Kalman filter. One example of estimating speech direction is discussed here, but others may be used.
As discussed above, a Kalman filter may be configured to reduce a squared-error metric to more accurately predict the arrival direction of the speech. The squared-error metric at time step k may be expressed as:
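A plausible form of the metric (equation 18), consistent with equations 3 and 6 above:

$$\varepsilon_k(\theta,\varphi)=\sum_{l=0}^{L-1}\left(\mathbf{y}_{k,l}-B_{k,l}\,\mathbf{g}_{k,l}(\theta,\varphi)\right)^{H}\mathbf{V}_k^{-1}\left(\mathbf{y}_{k,l}-B_{k,l}\,\mathbf{g}_{k,l}(\theta,\varphi)\right)\qquad(18)$$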
where gk,l(θ, φ), the array manifold vector, models the sensor outputs for sphere location (θ, φ) at time k and subband l, namely:
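Plausibly, mirroring equation 8 with the arrival direction written explicitly:

$$\mathbf{g}_{k,l}(\theta,\varphi)=\begin{bmatrix}G(f\alpha;\theta_{0},\varphi_{0};\theta,\varphi)&\cdots&G(f\alpha;\theta_{S-1},\varphi_{S-1};\theta,\varphi)\end{bmatrix}^{T}\qquad(19)$$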
where θs and φs with subscripts represent the location of sensors on the array, θ and φ without subscripts represent the expected direction of arrival of speech, and G is described above in Equation 8A.
The maximum likelihood estimate of Bk,l is given by:
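A standard form for this estimate, obtained by minimizing equation 18 over Bk,l (the notation here is an assumption):

$$\hat{B}_{k,l}=\frac{\mathbf{g}_{k,l}^{H}(\theta,\varphi)\,\mathbf{V}_{k}^{-1}\,\mathbf{y}_{k,l}}{\mathbf{g}_{k,l}^{H}(\theta,\varphi)\,\mathbf{V}_{k}^{-1}\,\mathbf{g}_{k,l}(\theta,\varphi)}\qquad(20)$$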
An extended Kalman filter may (a) estimate the scale factors Bk,l as given in equation 20. This estimate may be used to (b) update the state estimates (θ̂k, φ̂k) of the Kalman filter. The system may then perform an iterative update for each time step (by repeating steps (a) and (b)), as in the iterated extended Kalman filter described in §4.3.3 of M. Wölfel and J. McDonough, Distant Speech Recognition, London: Wiley, 2009.
State Estimation Using Kalman Filter and Estimated Noise Covariance
As mentioned above, the Kalman filter operates to determine how much previous observations should be weighted when estimating the desired state x, namely the direction of arrival of the speech detected by the spherical array. The Kalman filter may operate in a number of ways using the estimated noise covariance Vk described above. An extended Kalman filter (EKF) is used below to illustrate the Kalman filter operation.
The operation of the filter may be described starting from the innovation computation.
By definition, the innovation at time k, sk, is the difference between the current observation yk and the predicted observation ŷk|k−1=Hk(x̂k|k−1), namely

$$\mathbf{s}_{k}=\mathbf{y}_{k}-\mathbf{H}_{k}(\hat{\mathbf{x}}_{k|k-1})$$

as shown at node 602. That is, the innovation represents the difference between the data actually observed and the data that the system expected to observe, given the estimated state/direction of arrival of speech. Substituting the observation model of equation 2 and linearizing Hk about the predicted state estimate, sk may be expressed as

$$\mathbf{s}_{k}=\mathbf{H}_{k}(\mathbf{x}_{k})+\mathbf{v}_{k}-\mathbf{H}_{k}(\hat{\mathbf{x}}_{k|k-1})\approx\mathbf{C}_{k}\,\boldsymbol{\epsilon}_{k|k-1}+\mathbf{v}_{k}$$

where εk|k−1=xk−x̂k|k−1 is the predicted state estimate error at time k, using all data up to time k−1, and Ck is the linearization of Hk(x) about x=x̂k|k−1, i.e., the Jacobian of Hk evaluated at the predicted state estimate.
εk|k−1 is orthogonal to uk and vk. Thus, the covariance matrix S of the innovation sequence s can be expressed as:
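Given that orthogonality, the innovation covariance plausibly takes the standard EKF form:

$$\mathbf{S}_{k}=\mathbf{C}_{k}\,\mathbf{K}_{k|k-1}\,\mathbf{C}_{k}^{H}+\mathbf{V}_{k}\qquad(23)$$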
where the predicted state estimation error covariance matrix is defined as:
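Consistent with the error term defined above:

$$\mathbf{K}_{k|k-1}=E\left\{\boldsymbol{\epsilon}_{k|k-1}\,\boldsymbol{\epsilon}_{k|k-1}^{H}\right\}$$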
As can be seen in equation 23, the covariance matrix Sk depends on the covariance matrix Vk, which may be estimated as described above. In particular, first values for a first covariance matrix V may be determined by assuming that the noise is spherically isotropic. The first values may be adjusted by an estimated noise intensity α̂ (equation 17) to determine second values for the second covariance matrix Vk. Thus the second noise covariance matrix Vk may be used to improve the operation of the Kalman filter. Continuing the description of the Kalman filter, the Kalman Gain Gk can be calculated as:
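The standard gain expression is:

$$\mathbf{G}_{k}=\mathbf{K}_{k|k-1}\,\mathbf{C}_{k}^{H}\,\mathbf{S}_{k}^{-1}$$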
According to the Riccati equation, Kk|k−1 can be sequentially updated as:
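For the random-walk state model of equation 1 (identity state transition), the prediction step plausibly reduces to:

$$\mathbf{K}_{k|k-1}=\mathbf{K}_{k-1}+\mathbf{U}_{k-1}$$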
The matrix Kk may be obtained through the recursion

$$\mathbf{K}_{k}=\left(\mathbf{I}-\mathbf{G}_{k}\,\mathbf{C}_{k}\right)\mathbf{K}_{k|k-1}$$

Kk may be interpreted as the covariance matrix of the filtered state estimate error, such that

$$\mathbf{K}_{k}=E\left\{\boldsymbol{\epsilon}_{k|k}\,\boldsymbol{\epsilon}_{k|k}^{H}\right\}$$

where εk|k=xk−x̂k|k is the filtered state estimate error at time k.
The filtered state estimate is given by

$$\hat{\mathbf{x}}_{k|k}=\hat{\mathbf{x}}_{k|k-1}+\mathbf{G}_{k}\,\mathbf{s}_{k}$$

as shown in node 604. Thus, the new estimated state x̂k|k is based on the predicted state estimate x̂k|k−1 plus an adjustment equal to the Kalman Gain Gk multiplied by the innovation sk. And as noted, the Kalman Gain Gk is based on the estimated noise covariance Vk determined as explained above. Thus, during estimation of the current state (i.e., the estimated direction of arrival of speech), the weight given to the current observed data is based on the estimated noise covariance Vk, thus estimating the direction of arrival of speech as described above in reference to step 130.
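A compact Python sketch of one such measurement update using the estimated covariance Vk (all names are hypothetical; h and jac_h stand in for the observation functional Hk and its linearization Ck):

```python
import numpy as np

def ekf_update(x_pred, K_pred, y, h, jac_h, V_k):
    """One EKF measurement update using the estimated noise covariance V_k."""
    C = jac_h(x_pred)                            # linearization C_k about the prediction
    s = y - h(x_pred)                            # innovation s_k (node 602)
    S = C @ K_pred @ C.conj().T + V_k            # innovation covariance (equation 23)
    G = K_pred @ C.conj().T @ np.linalg.inv(S)   # Kalman gain G_k
    x_filt = x_pred + np.real(G @ s)             # filtered state (node 604); real part
                                                 # kept since (theta, phi) is real-valued
    K_filt = np.real(K_pred - G @ C @ K_pred)    # filtered error covariance K_k
    return x_filt, K_filt
```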
The device 100 may include, among other things a spherical sensor array 160. The sensors 140 of the array 160 may include pressure sensors, microphones, or other sensors 140 capable of detecting sound. The sensors 140 will provide the observed data described above to estimate the direction of arrival of speech.
The device 100 may also include other input components, such as a video input device, for example camera(s) 714. The video input may be used to enhance the speaker tracking. The device 100 may also include a video output device for displaying images, such as display 716. The video output device may be a display of any suitable technology, such as a liquid crystal display, an organic light emitting diode display, electronic paper, an electrochromic display, or other suitable component(s).
The device 100 may include an address/data bus 724 for conveying data among components of the device 100. Each component within the device 100 may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus 724.
The device 100 may include one or more controllers/processors 704, each of which may include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory 706 for storing data and instructions. The memory 706 may include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive (MRAM) and/or other types of memory. The device 100 may also include a data storage component 708, for storing data and controller/processor-executable instructions (e.g., instructions to perform the algorithms described above).
Computer instructions for operating the device 100 and its various components may be executed by the controller(s)/processor(s) 704, using the memory 706 as temporary "working" storage at runtime. The computer instructions, and a table storing the weighting factor β, may be stored in a non-transitory manner in non-volatile memory 706, storage 708, or an external device. Alternatively, some or all of the executable instructions may be embedded in hardware or firmware in addition to or instead of software.
The device 100 includes input/output device interfaces 702. A variety of components may be connected through the input/output device interfaces 702, such as the array 160, the camera(s) 714 and the display 716. The input/output device interfaces 702 may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt or other connection protocol. The input/output device interfaces 702 may also include a connection to one or more networks 799 via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc.
The covariance determination module 720 performs the processes and calculations disclosed above in determining/estimating the first values of the first noise covariance matrix, the noise intensity, and the second values of the second noise covariance matrix. The covariance determination module 720 may receive data from the array 160. The covariance determination module 720 may also receive a predetermined first noise covariance matrix from a different device (not pictured).
The Kalman filter module 730 performs processes and calculations disclosed above in estimating the direction of arrival of the speech.
The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, set top boxes, audio detectors for accepting spoken commands, other devices, etc.
The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of speech processing, microphone arrays, and source localization should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk and/or other media. In addition, portions of the Kalman filter module 730 may be implemented in hardware, such as arithmetic logic to apply the various vector and matrix transforms.
As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
Other Publications

Arfken, et al. Mathematical Methods for Physicists. Boston: Elsevier, 2005.

Benesty. Adaptive Eigenvalue Decomposition Algorithm for Passive Acoustic Source Localization. The Journal of the Acoustical Society of America, vol. 107, no. 1, pp. 384-391, Jan. 2000.

Carter. Time Delay Estimation for Passive Sonar Signal Processing. IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 29, no. 3, pp. 463-470, 1981.

Driscoll, et al. Computing Fourier Transforms and Convolutions on the 2-Sphere. Advances in Applied Mathematics, vol. 15, no. 2, pp. 202-250, 1994.

Dunster. Legendre and Related Functions. NIST Handbook of Mathematical Functions, pp. 351-381, 2010.

Elko. Spatial Coherence Functions for Differential Microphones in Isotropic Noise Fields. Microphone Arrays, pp. 61-85. Springer Berlin Heidelberg, 2001.

Fisher, et al. Near-Field Spherical Microphone Array Processing with Radial Filtering. IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 2, pp. 256-265, 2011.

Klee, et al. Kalman Filters for Time Delay of Arrival-Based Source Localization. Proceedings of Eurospeech, 2005.

Kumatani, et al. Microphone Array Processing for Distant Speech Recognition: From Close-Talking Microphones to Far-Field Sensors. IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 127-140, 2012.

McDonough, et al. Speaker Tracking with Spherical Microphone Arrays. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3981-3985, 2013.

McDonough, et al. Microphone Arrays for Distant Speech Recognition: Spherical Arrays. Proc. APSIPA Conference, Hollywood, CA, Dec. 2012.

Meyer, et al. A Highly Scalable Spherical Microphone Array Based on an Orthonormal Decomposition of the Soundfield. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, pp. II-1781 to II-1784, 2002.

Olver, et al. NIST Handbook of Mathematical Functions. Cambridge University Press, 2010.

Rafaely, et al. Spherical Microphone Array Beamforming. Speech Processing in Modern Communication, pp. 281-305. Springer Berlin Heidelberg, 2010.

Sun, et al. Robust Localization of Multiple Sources in Reverberant Environments Using EB-ESPRIT with Spherical Microphone Arrays. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 117-120, 2011.

Teutsch, et al. Acoustic Source Detection and Localization Based on Wavefield Decomposition Using Circular Microphone Arrays. The Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2724-2736, 2006.

Teutsch, et al. Detection and Localization of Multiple Wideband Acoustic Sources Based on Wavefield Decomposition Using Spherical Apertures. IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 5276-5279, 2008.

Williams. Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography. Academic Press, 1999.