The present disclosure is generally related to signal processing.
Deployment of long aperture arrays using large numbers of sensor elements has been enabled by the evolution of sensor technology, telemetry technology, and digital processing technology. Such arrays hold promise for benefits in terms of theoretical processing gain and spatial resolution. Long aperture arrays have been employed across a range of applications, including acoustic and radar applications. Adaptive beamforming techniques have been applied to long aperture arrays to increase a system's ability to perform well in complicated signal and interference environments. For many adaptive beamforming techniques, increasing the number of sensor elements of the array works against a desire to adapt quickly. Additionally, multipath environments introduce challenges that may cause substantial performance loss for certain adaptive beamforming techniques.
Many approaches for covariance estimation for adaptive beamforming have been proposed. Many adaptive beamforming techniques use sensed data to estimate a covariance matrix and use the covariance matrix for spectral estimation. However, snapshot deficient operation and correlated signal and interference environments continue to be difficult to handle for certain adaptive beamforming techniques.
An adaptive beamforming technique is disclosed in which a spatial spectrum is estimated based on sensed data. The estimated spatial spectrum is used to estimate a covariance matrix. The estimated covariance matrix may be used for adaptive beamforming.
In a particular embodiment, a computer-implemented method includes receiving sensed data from sensors of a sensor array, where data from each sensor is descriptive of waveform phenomena detected at the sensor. The method also includes determining an estimated spatial spectrum of the waveform phenomena based at least partially on the sensed data. The method further includes determining an estimated covariance matrix of the waveform phenomena based on the estimated spatial spectrum. The method includes determining adaptive beamforming weights using the estimated covariance matrix.
In another particular embodiment, a non-transitory computer-readable medium includes instructions that, when executed by a processor, cause the processor to receive sensed data from sensors of a sensor array, where data from each sensor is descriptive of waveform phenomena detected at the sensor. The instructions also cause the processor to determine an estimated spatial spectrum of the waveform phenomena based at least partially on the sensed data. The instructions further cause the processor to determine an estimated covariance matrix of the waveform phenomena based on the estimated spatial spectrum. The instructions also cause the processor to determine adaptive beamforming weights using the estimated covariance matrix.
In another particular embodiment, a system includes a sensor array including multiple sensors and a signal processor coupled to the sensor array. The signal processor is configured to receive sensed data from the sensors of the sensor array, where data from each sensor is descriptive of waveform phenomena detected at the sensor. The signal processor is also configured to determine an estimated spatial spectrum of the waveform phenomena based at least partially on the sensed data. The signal processor is further configured to determine an estimated covariance matrix of the waveform phenomena based on the estimated spatial spectrum. The signal processor is configured to determine adaptive beamforming weights using the estimated covariance matrix.
The features, functions, and advantages that have been described can be achieved independently in various embodiments or may be combined in yet other embodiments, further details of which are disclosed with reference to the following description and drawings.
In a particular embodiment, a covariance matrix for a sensor array responsive to waveform phenomena (also referred to herein as a space-time process) may be determined based on locations of individual sensor elements of the sensor array, directional response and noise of the individual sensor elements, and a spatial spectrum of the waveform phenomena. In this embodiment, the covariance matrix may have a particular structure that can be exploited to improve adaptive beamformer performance both in terms of snapshot deficient operation and in terms of robustness against correlated signal and interference environments. Such performance improvements may be particularly beneficial for large aperture arrays employing large numbers of sensor elements when operating in non-stationary and multi-path environments.
In a particular embodiment, the covariance matrix is determined using structured covariance techniques referred to herein as a covariance from spatial spectrum (CSS) method. In disclosed CSS methods, the spatial spectrum of the sensor array observing the waveform phenomena is estimated, and spatial spectrum to covariance transforms are employed to estimate the covariance matrix. Performance predictions indicate that the disclosed structured covariance techniques can provide near optimal performance for passive signal detection and recovery with very few snapshots (e.g., with a single snapshot or with fewer snapshots than interferers, as described further below).
In a particular embodiment, multi-taper spectral estimation with harmonic analysis is used to estimate the spatial spectrum. Other spectral estimation techniques, such as classical power spectral estimation techniques, may be used in other embodiments. The spatial spectrum estimation techniques disclosed may be extended to support arbitrary array geometry and non-ideal array manifold response. The non-ideal array manifold response may include random or non-deterministic variations in gain, phase, and directionality in the sensors, deterministic positional errors (e.g., the bending experienced by an underwater towed array), or both non-deterministic and deterministic errors.
The array processor 102 may be configured to receive sensed data from the sensors 111-114 of the sensor array 110. Data from each sensor 111-114 may be descriptive of waveform phenomena detected at the sensor 111-114. The array processor 102 may be configured to perform structured covariance estimation from spatial spectra for adaptive beamforming. For example, the array processor 102 may determine an estimated spatial spectrum of the waveform phenomena based at least partially on the sensed data. The array processor 102 may determine an estimated covariance matrix of the waveform phenomena based on the estimated spatial spectrum. The array processor 102 may also determine adaptive beamforming weights using the estimated covariance matrix. The array processor 102 or one or more other system components 106 may be configured to apply the adaptive beamforming weights to the sensed data to reconstruct a time-domain signal from the waveform phenomena.
In a particular embodiment, the array processor 102 uses a structured covariance determination method, which incorporates some a priori knowledge or constraints. For example, the array processor 102 may impose a priori knowledge or constraints, such as an assumption that a space-time process being observed is stationary in time, space, or both (e.g., a source of the waveform phenomena is stationary), or geometric information or constraints associated with the sensors 111-114 of the sensor array 110. Such a priori knowledge or constraints may reduce a number of unknown quantities to estimate or a size of an allowable solution space. Thus, structured covariance determination methods may converge to a final solution with little data or reduced data in some scenarios.
The array processor 102 performs one or more operations using the output data from the sensor array 110. For example, the array processor 102 may determine whether a signal is present or not for a particular range and/or direction of arrival (which may be referred to as a “detection problem”). In another example, the array processor 102 may use the output data to reconstruct one or more components of a time-domain waveform (which may be referred to as a “beamforming problem”). Knowledge of temporal characteristics of signals in the sampled environment may be exploited when such characteristics are known, as may be the case, for example, in communications or active radar or sonar processing. Alternately, the signal may be unknown. In a particular embodiment, the array processor 102 uses multi-taper spectral estimation in a manner that combines non-parametric spectral estimation and harmonic analysis.
Adaptive beamforming may be used to suppress noise and interference components detected in the sampled environment and to enhance response to desired signals. The array processor 102 may determine a vector of beamforming weights (referred to herein as a weighting vector, w) for use in adaptive beamforming. The weighting vector may be used to combine the sensor array 110 outputs at a particular time, x_m, to produce a scalar output, y_m = w^H x_m. When the covariance matrix, R = E{x_m x_m^H}, for the sensor array outputs is known (or estimated), an optimal or near-optimal set of beamforming weights, w_opt, may be determined as w_opt = γ R^{−1} s, where s is a steering vector and γ is a normalization factor determined via an optimization process. The covariance matrix R may not be known a priori and may therefore be estimated. When the noise and interference are Gaussian, a sample covariance matrix, R̂_SCM, may be used as an unconstrained maximum likelihood (ML) estimate of R. The sample covariance matrix may be determined as:

R̂_SCM = (1/M) Σ_{m=1}^{M} x_m x_m^H (1)
Eqn. (1) is valid over a duration that the sampled environment can be considered stationary. Nonstationarity in the sampled environment may arise in both sonar and radar applications and may be a limiting factor in the ability to estimate R from the output data of the sensor array 110. In the case of sonar, a dominant noise mechanism in cluttered environments may be shipping. A combination of continuous motion of sound sources relative to the sensor array 110 at short ranges and corresponding small resolution cells of the sensor array 110 may establish a time scale over which a short-term stationarity assumption is valid. In space-time adaptive processing (STAP) for radar, a resolution of range-Doppler cells may be susceptible to nonstationarity in the form of heterogeneous clutter, caused by ground clutter varying over the surface of the earth. Nonstationary conditions may be addressed by using a limited number of snapshots of data from the sensor array 110. A snapshot refers to a set of samples including no more than one sample from each sensor 111-114 of the sensor array 110. Snapshot deficient operation refers to a condition in which a limited amount of snapshot data is available for processing. Snapshot deficient operation may be a concern when the stationarity assumption applies only over a limited number of snapshots and that limited number of snapshots is insufficient for the algorithm to converge. In addition to concerns caused by snapshot deficient operation due to nonstationarity, correlated signal and interference can also be problematic in adaptive beamforming. Correlated interference may arise in multipath or smart jamming scenarios, resulting in signal cancellation effects. Applications that have no a priori knowledge of the inbound signal at the desired direction of arrival, such as passive sonar, may be subject to this effect.
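As an illustration of the sample covariance matrix of Eqn. (1) and of snapshot deficient operation, the following Python sketch forms R̂_SCM from M snapshots; the array size, snapshot count, and random data are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

def sample_covariance(snapshots):
    """Sample covariance matrix (Eqn. (1)) from an N x M array of snapshots,
    where column m is the array output x_m at time index m."""
    n_sensors, n_snapshots = snapshots.shape
    return snapshots @ snapshots.conj().T / n_snapshots

# Illustration of snapshot-deficient operation: with M < N the sample
# covariance has rank at most M and cannot be inverted directly, which is one
# motivation for the structured (CSS) estimates described herein.
rng = np.random.default_rng(0)
N, M = 64, 16  # more sensors than snapshots (illustrative values)
X = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
R_scm = sample_covariance(X)
print(np.linalg.matrix_rank(R_scm))  # at most M, so R_scm is singular here
```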
In a particular embodiment, covariance from spatial spectrum (CSS) processing for uniform linear and regularly spaced array geometries may be performed using power spectral estimation techniques (such as windowed averaged periodogram techniques) and fast Fourier transform (FFT) processing. However, such techniques may estimate the covariance with bias. Additionally, discrete or point source interference, depending on its spatial separation from the steering vector, may degrade performance. Additional processing may be performed to detect, estimate, and subtract discrete components from such sources from available snapshot data prior to determining a final covariance matrix estimate. Such processing enables operation with negligible normalized signal to interference and noise ratio (SINR) loss under a large range of conditions.
The signal processing system 100 may also include one or more other components coupled to the array processor 102. For example, the one or more other system components 106 may include devices that use the covariance matrix, the beamforming weights, the sensed data, the time-domain signal, or other data available to or generated by the array processor 102.
Under a narrowband assumption, the effect of the plane wave 202 as it propagates across the array 110 in space may be approximated by a phase shift, and the array manifold response vector may be represented as:
Each array element 111-114 may be sampled simultaneously at a time index m to produce a snapshot, x_m, where data elements of the snapshot are individual sensor outputs. M snapshots may be produced for processing, one for each time index m. The snapshot x_m may include components that derive from different sources. For example, the snapshot x_m may include a contribution from point sources, which may be described as V a_m, where V is a matrix of array manifold responses and a_m is a vector of point source amplitudes. When only a single point source is present, the contribution from the point source may be described as v a_m, where v is the array manifold response and a_m is a scalar value of the individual point source amplitude. Both the vector of point source amplitudes, a_m, and the scalar point source amplitude, a_m, may be modeled as complex Gaussian. The snapshot x_m may also include a contribution from background or environmental noise, which may be described as n_{b,m}. The snapshot x_m may also include a contribution from internal noise of the sensors, which may be described as n_{w,m}. Thus, the snapshot x_m may be a zero mean complex Gaussian random vector, since it may be a linear combination of uncorrelated, zero mean, complex Gaussian random vectors.
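The snapshot model described above can be simulated directly. The following Python sketch draws snapshots of the form x_m = V a_m + n_{w,m} for a half-wavelength uniform linear array; the directions, powers, and array size are illustrative assumptions, and the spatially spread background term n_{b,m} is omitted for brevity.

```python
import numpy as np

def ula_manifold(n_sensors, sin_theta):
    """Manifold response of a half-wavelength-spaced ULA for direction sin(theta)."""
    n = np.arange(n_sensors)
    return np.exp(-1j * np.pi * n * sin_theta)

rng = np.random.default_rng(1)
N, M, K = 32, 100, 2  # sensors, snapshots, point sources (illustrative)
V = np.column_stack([ula_manifold(N, u) for u in (0.0, 0.3)])  # manifold matrix
source_power = np.array([10.0, 5.0])
sigma_w2 = 1.0  # sensor (white) noise power

# Complex Gaussian point source amplitudes a_m and sensor noise n_{w,m}.
a = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) * np.sqrt(source_power[:, None] / 2)
n_w = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) * np.sqrt(sigma_w2 / 2)
X = V @ a + n_w  # columns are the snapshots x_m
```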
For the N element sensor array 110 of omni-directional sensors 111-114 at locations p_n, n = 1, 2, . . . , N, observing waveform phenomena (e.g., the plane wave 202) that can be described as a function f(t, p) with corresponding covariance function R_f(Δp), the covariance matrix for the array outputs, ignoring sensor noise components, may include:
R_f = ((R_f(p_r − p_c)))_{r,c}
which can be expressed in matrix form using the frequency-wavenumber spectrum, Pf(k), and array manifold response vector, v(k).
R_f = (2π)^{−C} ∫···∫ P_f(k) v(k) v^H(k) dk
where C is the dimension of k. The waveform phenomena may also be interpreted another way. A wavenumber restriction, |k| = 2π/λ, implies that plane waves 202 observed by the sensor array 110 correspond to angles of arrival that may physically propagate to the sensor array 110. The waveform phenomena in this case may be modeled as the sum of uncorrelated plane waves that are distributed according to a normalized directional distribution, G_f(ω, θ, φ), across all angles of arrival to the sensor array 110, where ω is a radian frequency, θ is an angle from vertical in a spherical coordinate system, and φ is an azimuth angle in the spherical coordinate system.
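To make the directional-distribution-to-covariance transform described above concrete, the following Python sketch numerically approximates the corresponding integral for a half-wavelength uniform linear array restricted to azimuth only; the flat distribution, angular grid, and array size are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

def css_covariance_from_spectrum(positions_wavelengths, angles, distribution):
    """Approximate R_f as a sum of uncorrelated plane waves weighted by a
    directional distribution: R_f ~= sum_i G(theta_i) v(theta_i) v(theta_i)^H dtheta."""
    d_angle = angles[1] - angles[0]
    n = len(positions_wavelengths)
    R = np.zeros((n, n), dtype=complex)
    for g, theta in zip(distribution, angles):
        v = np.exp(-2j * np.pi * positions_wavelengths * np.sin(theta))
        R += g * np.outer(v, v.conj()) * d_angle
    return R

p = 0.5 * np.arange(16)  # half-wavelength ULA element positions, in wavelengths
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
G = np.ones_like(angles)  # assumed flat directional distribution
G /= np.trapz(G, angles)  # normalize like a density
R_css = css_covariance_from_spectrum(p, angles, G)
```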
At 304, one or more estimated point source interference components may be subtracted from the sensed data. For example, a harmonic analysis technique may be used to subtract the point source interference components.
At 306, an estimated spatial spectrum of the waveform phenomena may be determined based at least partially on the sensed data. For example, the estimated spatial spectrum may include a frequency-wavenumber spectrum of the waveform phenomena or a direction-of-arrival spectrum of the waveform phenomena. In a particular embodiment, the estimated spatial spectrum is determined by applying windowed, averaged periodogram analysis to the sensed data, at 308. In another particular embodiment, the estimated spatial spectrum is determined by applying multi-taper spectral estimation to the sensed data, at 310.
At 312, an estimated covariance matrix of the waveform phenomena may be determined based on the estimated spatial spectrum. In a particular embodiment, a source of the waveform phenomena may be treated as stationary for purposes of estimating the covariance matrix. The estimated covariance matrix may be determined based on information regarding locations of the sensors of the sensor array. In a particular embodiment, the estimated covariance matrix converges with fewer snapshots than interferers.
In a particular embodiment, the estimated covariance matrix is determined as a combination of a first portion (e.g., a visible region) and a second portion (e.g., a virtual region). The first portion may correspond to one or more physically propagating waves of the waveform phenomena. The second portion may be associated with sensor noise. In this embodiment, the method may include determining an estimate of the sensor noise based at least partially on the second portion.
At 314, adaptive beamforming weights may be determined using the estimated covariance matrix. At 316, the adaptive beamforming weights may be applied to the sensed data to reconstruct a time-domain signal from the waveform phenomena. Thus, the covariance matrix may be determined in a manner that improves adaptive beamformer performance in terms of snapshot deficient operation, in terms of robustness against correlated signal and interference environments, and in terms of rate of convergence to a solution. Such performance improvements may be particularly beneficial for large aperture arrays employing large numbers of sensor elements when operating in non-stationary and multi-path environments.
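A high-level sketch of the method just described (estimate the spatial spectrum, transform it into a covariance estimate, determine weights, and beamform) might look like the following Python outline; the helper functions passed in (estimate_spatial_spectrum and spectrum_to_covariance) are hypothetical placeholders for the operations described above, not functions defined by this disclosure.

```python
import numpy as np

def css_adaptive_beamform(snapshots, steering_vector,
                          estimate_spatial_spectrum, spectrum_to_covariance):
    """Hypothetical CSS pipeline: spectrum -> covariance -> weights -> beamform."""
    # Estimate the spatial spectrum from the sensed data (cf. 306-310).
    angles, spectrum = estimate_spatial_spectrum(snapshots)
    # Transform the estimated spectrum into a structured covariance (cf. 312).
    R_hat = spectrum_to_covariance(angles, spectrum)
    # MVDR-style weights from the estimated covariance (cf. 314).
    Rinv_s = np.linalg.solve(R_hat, steering_vector)
    w = Rinv_s / (steering_vector.conj() @ Rinv_s)
    # Apply the weights to reconstruct a time-domain output (cf. 316).
    y = w.conj() @ snapshots
    return w, y
```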
At 404, harmonic analysis is applied with the multi-taper spectral estimation to detect and subtract discrete components from the sensed data. Applying the multi-taper spectral estimation with the harmonic analysis may include detecting discrete components of the waveform phenomena using outputs of the multi-taper spectral estimation, at 406. At 408, estimated discrete component parameters corresponding to each detected discrete component may be determined. At 410, an estimated array error vector corresponding to each detected discrete component may be determined. At 412, a contribution of each detected discrete component may be subtracted from the sensed data to generate a residual continuous component. For example, the estimated array error vector may correspond to a linear minimum mean square error estimate of non-ideal response error in the sensed data. At 414, an estimated continuous component covariance matrix may be determined based on the residual continuous component.
In a particular embodiment, the estimated covariance matrix is determined as a numeric combination of the estimated continuous component covariance matrix and an estimated discrete component covariance for each detected discrete component. In this embodiment, the estimated discrete component covariance for a particular detected discrete component is determined based on the estimated array error vector corresponding to the particular detected discrete component and based on the estimated parameters of the particular detected discrete component.
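One simple way to realize the subtraction of a detected discrete component, assuming its estimated array manifold vector is available and leaving aside the array error vector estimation described above, is a per-snapshot least squares amplitude estimate followed by subtraction, as in this Python sketch.

```python
import numpy as np

def subtract_discrete_component(snapshots, v_hat):
    """Estimate the per-snapshot amplitude of one detected discrete component
    (manifold estimate v_hat) by least squares and subtract its contribution."""
    a_hat = (v_hat.conj() @ snapshots) / (v_hat.conj() @ v_hat)  # length-M amplitudes
    residual = snapshots - np.outer(v_hat, a_hat)  # residual continuous component
    return residual, a_hat
```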
At 512, a continuous background spectrum may be determined or estimated based on the sample data with discrete components removed. At 514, a covariance matrix may be estimated based on the continuous background spectrum.
Thus, a covariance matrix for a sensor array observing a stationary space-time process (also referred to herein as a waveform phenomena) may be determined based on individual sensor element locations, directional response and noise of those individual sensor elements, and a spatial spectrum of the space-time process. In a particular embodiment, the covariance matrix may have a particular structure that can be exploited, improving adaptive beamformer performance both in terms of snapshot deficient operation and robustness against correlated signal and interference environments. The description below provides additional detail regarding determination of a covariance matrix and determination of beamforming weights based on the covariance matrix.
Structured Covariance Techniques
Structured covariance methods incorporate some a priori knowledge of a problem space to add additional constraints to the problem. These constraints may include, for instance, a Toeplitz structure or block Toeplitz structure within the covariance matrix. The a priori knowledge may be based on established models, such as the space-time process observed being stationary, or the geometry of the array elements. These constraints reduce the number of unknown quantities to estimate or the size of the allowable solution space, and may allow convergence to a solution with very little data in some scenarios. Thus, they offer the potential for meaningful adaptive processor performance with lower snapshot requirements than sample covariance methods and their reduced rank derivatives. Structured covariance algorithms may be applied to both the snapshot deficient and correlated signal and interference problems.
Structured Covariance Based on Spatial Spectrum
An observed environment may be modeled as a spatially and temporally stationary space-time random process in many applications in array signal processing. This model of the space-time process can be described using a Cramér spectral representation theorem. This theorem provides a description of a random process as a sum of uncorrelated plane waves, distributed across a visible region of an array as a function of azimuth and elevation or across the visible region in a frequency-wavenumber domain. These domains are related to each other, and each properly describes the space-time process observed. A theory of second order characterizations of space-time random processes shows that either of these descriptions (or representations) can be related to the space-time covariance of the process at the output of the array. This is analogous to the relationship between power spectral density and auto-correlation in the case of time series analysis.
Because the space-time covariance can be related to either of these representations, embodiments disclosed herein consider estimating these spectral quantities first in order to compute the covariance at the output of the array. A process that uses estimated spectral quantities to estimate covariance is referred to herein as covariance from spatial spectrum (CSS).
A spectral representation model may include uncorrelated plane wave components. When encountering correlated signal and interference scenarios, processors may degrade in performance due to signal cancellation effects. Constraining the estimated covariance to a stationary model may reduce the contribution of the correlation within the estimated covariance matrix, mitigating signal cancellation. The stationary model may also impart a structure to the covariance matrix. Structured covariance matrix techniques may converge more rapidly than traditional sample covariance matrix techniques and derivative techniques. While there may be no simple closed form solution for a maximum likelihood (ML) estimate for a structured covariance for the general problem of an unknown number of signals in non-white noise, a naturally intuitive interpretation of the process in the azimuth-elevation or frequency-wavenumber domains may be used to address the problem.
Spectral representation theorem in the context of angle of arrival to the array considers the space-time process itself. Sensor noise may be modeled as an uncorrelated noise component within each sensor and may be a part of the covariance observed by an adaptive processor. CSS processing for uniform linear and regularly spaced array geometries may use power spectral estimation techniques and Fast-Fourier Transform (FFT) processing.
The impact of discrete, or point source, interference and its spatial separation from the steering vector, s, may degrade performance. Additional steps may be used to detect, estimate, and subtract this type of discrete component from available snapshot data prior to final covariance estimation. This allows operation with negligible normalized SINR loss under a large range of conditions. Multi-taper spectral estimation techniques, such as Thomson's multi-taper spectral estimation as described in Thomson, D. J., “Spectrum estimation and harmonic analysis,” Proceedings of the IEEE, vol. 70 (1982), 1055-1096, which combine the convenience of non-parametric spectral estimation algorithms with harmonic analysis techniques, work well in this application. Performance assessment of the final technique was performed through simulation of various interference and noise environments, with excellent normalized SINR loss performance obtained below the M = 2K snapshot threshold associated with sample covariance type algorithms. Extensions to support arbitrary array geometry are disclosed.
A correlated signal and interference environment generally cannot be modeled as a stationary space-time process. The correlated components perturb the Toeplitz structure of the covariance matrix in the uniform linear array (ULA) case. The expected value of the estimated covariance for this problem, while constrained to an uncorrelated plane waves model, contains a bias term related to the original correlated data. The impact of this bias term becomes negligible as array length increases. Simulation comparing sample covariance based techniques and CSS techniques, using an effective SINR metric, demonstrated successful mitigation of the signal cancellation effect by the CSS techniques. The CSS techniques mitigate the signal cancellation effect without loss in effective aperture and provide the same low snapshot support requirements as the stationary space-time process case. The CSS techniques provide an increase in effective sample size, similar to redundancy averaging, while maintaining a positive definite covariance matrix estimate, and differ from covariance matrix taper (CMT) techniques in their ability to mitigate the effects of correlated signal and interference in the data.
The CSS techniques are structured covariance techniques, which assume an ideal array manifold response. A real-world array manifold response may be non-ideal due to random variation in gain, phase, and directionality in the sensors. Non-ideal response may also result from deterministic positional errors, e.g., the bending experienced by an underwater towed array. Performance impact for both random and deterministic array manifold response errors was simulated. In a particular embodiment, the CSS techniques are extended to account for non-ideal response.
Notation
The following notational conventions are used herein. Italicized lower case letters (e.g., x) denote scalar variables, and x* is the complex conjugate of x. A vector of dimension P×1 is represented as a lowercase letter with a single underline (e.g., k) and is complex or real-valued as indicated. The transpose of a vector x is denoted as x^T, and x^H is the conjugate transpose of x. A matrix of dimension P×Q is denoted with an underlined uppercase letter (e.g., X) and is complex or real-valued as indicated. A compact notation is sometimes used to denote a matrix X = ((x_rc))_rc and to indicate that the matrix X includes elements (rows, columns) which are individually denoted as x_rc. For example, this notation may be used to denote values when the values are a function of position indices, e.g., x = ((cos[ωp]))_p is a vector of P elements including the values [cos(ω), cos(2ω), cos(3ω), •••]^T. Similar notation may be used for vectors or higher dimensional matrices. The notation diag(x_1, x_2, •••) describes a diagonal matrix including entries x_1, x_2, ••• on the main diagonal.
The expectation operator is denoted as E{x}. When describing random variables, the notation x ~ f_x(x) indicates that the random variable x is distributed according to the probability density function f_x(x). Gaussian random variables are denoted with specific notation for the appropriate density function. The P-variate Gaussian random variable, x ∈ R^P, with mean m = E{x} and covariance R_x = E{(x−m)(x−m)^T} is represented using the notation x ~ N(m, R_x). Similarly, a P-variate complex Gaussian random variable, x ∈ C^P, with mean m = E{x} and covariance R_x = E{(x−m)(x−m)^H} is represented using the notation x ~ CN(m, R_x).
It is assumed that all complex Gaussian random variables are circularly symmetric of the type described by Goodman. The acronym i.i.d. stands for independent and identically distributed.
Data Models and Assumptions
For ease of description, it is assumed herein that a sensed or observed waveform phenomena (also referred to herein as a space-time process) is narrowband and propagates in a homogeneous medium with velocity c and temporal frequency f = c/λ. The space-time process is also assumed to include plane waves that are solutions to a homogeneous wave equation. This places a restriction on the wavenumber k such that |k| = 2π/λ; said another way, plane waves correspond to physically propagating angles of arrival to the sensor array. The space-time process may typically be considered to be spatially wide sense stationary (WSS) and zero mean. The spatially wide sense stationary property implies that the spatial covariance is a function of difference in position only. Additionally, for a WSS process, the plane wave components making up the process are uncorrelated. This restriction is relaxed when dealing with the special condition of correlated signal and interference, as this special condition violates the spatially stationary model. Snapshots are assumed independent over a time index, m. Several formulations of narrowband bandwidth and sampling period may be used to reasonably support this assumption.
General Model
The general model of an N element array with arbitrary sensor positions subject to an incident plane wave is shown in
In the output of the sensor array observing a space-time process, a snapshot model may include three terms:

x_m = Σ_{k=1}^{K} v_k a_k(m) + n_{b,m} + n_{w,m} (2.1)

Each of the individual terms is described below. The space-time process may include two types of sources. The first type corresponds to point sources in the environment. These arrive at the sensor array 110 as discrete plane waves 202. K such sources may exist. Each source has a given direction of arrival u_k, with a corresponding array manifold response vector, v_k, as well as a source amplitude at each sampling instant, a_k(m). Combining the array manifold responses, V = [v_1, v_2, •••, v_K], and point source amplitudes, a_m = ((a_k(m)))_k, Eqn. (2.1) can be written more succinctly as

x_m = V a_m + n_{b,m} + n_{w,m} (2.2)
The discrete source amplitudes, a_m, are assumed to be zero mean complex Gaussian, a_m ~ CN(0, R_a). This is a standard model for passive sonar reception of far field discrete sources. The point sources may be correlated with each other or not. Specifying this attribute is a distinguishing feature between models. For a spatially stationary space-time process, the sources are uncorrelated by definition and R_a = diag(σ_1², σ_2², . . . , σ_K²).
The second component of the space-time process is the background or environmental noise. This noise is spread spatially and has a smooth, continuous distribution across some region of angle of arrival to the sensor array. The background noise at each snapshot, n_{b,m}, is modeled as a zero mean complex Gaussian random vector and is uncorrelated with the discrete components of the space-time process.
Each sensor 111-114 also produces an internal noise, which is uncorrelated with internal noise of the other sensors and is independent across snapshots. This noise, n_{w,m}, is modeled as a zero mean, spatially white complex Gaussian random vector.
The sensor noise is uncorrelated with other components of the space-time process.
Because it is a linear combination of uncorrelated, zero mean, complex Gaussian random vectors, the snapshot x_m is a zero mean, complex Gaussian random vector, whose distribution is given by Eqn. (2.3).
Eqn. (2.3) is a model of general plane waves in non-white noise, and can represent line component, continuous, or mixed spectra type of processes.
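For reference, a minimal reconstruction of Eqn. (2.3) under the component models above, with the background-noise covariance written as R_b and the sensor-noise power as σ_w² (illustrative symbols, not fixed by the preceding text), is x_m ~ CN(0, V R_a V^H + R_b + σ_w² I).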
Space-Time Random Processes
Second (2nd) order characterizations of WSS space-time random processes may be used to develop linear array processors (such as the array processor 102 described above).
Cramér spectral representation provides a description of a stochastic process in terms of an orthogonal process in a transform domain. The spectral representation can be used to develop the more familiar second order characterizations of the process and their relationships, rather than introducing them by definition. It may become clear from the spectral representation how a process may be characterized by one of the following: i) a smooth, continuous power spectral density, ii) a discrete (or line component) spectrum, or iii) a mixed spectrum, containing both smooth and line components. This formulation may be useful in developing techniques to deal with mixed spectra when they are encountered, which may be beneficial for good adaptive beamformer performance.
Second Order Characterization of Space-Time Random Processes
A zero mean, WSS complex space-time process, {f(t, p)}, is defined for −∞<t<∞, and over some dimensionality of Cartesian space, C, typically 3 so that p=[px, py, pz]T. The following relationships define second order central moments of the process.
Second Order Characterizations
Space-time covariance between two points Δp = p_1 − p_2 and times τ = t_1 − t_2 is defined as

R_f(τ, Δp) = E{f(t, p) f*(t − τ, p − Δp)} (2.4)
The temporal frequency spectrum-spatial correlation function, also referred to as the cross spectral density, is related to Rf (τ, Δp) by a Fourier transform of a time lag variable.
The frequency-wavenumber spectrum is related to Sf (ω, Δp) by a Fourier transform of the Cartesian spatial coordinate Δp, with note of the reverse sign convention of the complex exponential. The wavenumber vector k has dimension C similar to Δp.
Because of the Fourier transform pair relationships, the following inverse transforms may also be used.
These results may be specialized for the narrowband, independent snapshot model. For a narrowband or monochromatic process at ω_o, P_f(ω, k) = P_f(k)·2πδ(ω − ω_o). This simplifies the relationship between the covariance and frequency-wavenumber spectrum.
Further, under the assumption of independent snapshots, the covariance is zero for τ≠0, further simplifying the relationship to
This shows that the narrowband, independent snapshot problem is principally a spatial problem; the temporally related aspects need not be considered.
Matrix Representation
Consider an N element array of omni-directional sensors at locations p_n, n = 1, 2, . . . , N, observing a process represented as {f(t, p)}. The covariance matrix for the array outputs, ignoring sensor noise components, is a matrix including elements

R_f = ((R_f(p_r − p_c)))_{r,c} (2.12)
which can be expressed in matrix form succinctly using the frequency-wavenumber spectrum and array manifold response vector, v(k).
Directional Distribution of Plane Waves
The space-time process may also be interpreted another way. The wavenumber restriction, |k| = 2π/λ, implies that plane waves observed by the array correspond to angles of arrival that may physically propagate to the array, i.e., 0≦θ≦π and 0≦φ≦2π in spherical coordinates. The stationary space-time process in this case may be modeled as the sum of uncorrelated plane waves that are distributed according to a directional distribution, G_f(ω, θ, φ), across all angles of arrival to the array. G_f(ω, θ, φ) is similar to a probability density, and integrates to unity over all angles of arrival.
The covariance may be related to the directional distribution. Consider two sensors in the field at locations p_1, p_2. Each sensor has a respective frequency and directional response, H_i(ω, θ, φ), i = 1, 2. The difference in position, Δp = p_1 − p_2, can be described in Cartesian or spherical coordinates.
The cross-spectral density between two sensors may be expressed in spherical coordinates as
The function Sf (ω) in front of the integral translates the relative levels specified by the directional distribution, Gf (·), to the absolute levels observed at the array. The cross-spectral density and space-time covariance are a Fourier transform pair.
For a narrowband process, Sf (ω, s, γ, ζ)=Sf (s, γ, ζ)·2πδ(ω−ωo). Computing the space-time covariance at τ=0, the covariance and cross spectral density are seen to be the same.
Using Eqns. (2.16) and (2.18) with Hi(ω, θ, φ)=1, i=1, 2, the covariance, Rf(s, γ, ζ) between two omnidirectional sensors in a given noise field, Gf (ω, θ, φ), can be found. This is particularly convenient if the noise field has a known form that can be conveniently represented in spherical or cylindrical harmonics. The covariance matrix for an array of sensors is populated by values of this function, where Δp=pr−pc.
R_f = ((R_f(s, γ, ζ)))_{r,c} (2.19)
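As a numerical illustration of this spectrum-to-covariance relationship for an assumed isotropic (uniform over all angles of arrival) noise field and omnidirectional sensors, the following Python sketch integrates uncorrelated plane waves over the sphere; the result can be checked against the well known form sin(2πd/λ)/(2πd/λ) for sensor separation d. The separation and grid sizes are illustrative assumptions.

```python
import numpy as np

def isotropic_covariance(delta_p, wavelength, n_theta=360, n_phi=720):
    """Spatial covariance between two omnidirectional sensors separated by
    delta_p (a 3-vector) in a uniform (isotropic) noise field, by numerical
    integration of uncorrelated plane waves over all arrival angles."""
    theta = np.linspace(0.0, np.pi, n_theta)    # angle from vertical
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)  # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    k0 = 2.0 * np.pi / wavelength
    # Unit arrival direction a_r(theta, phi) dotted with the sensor separation.
    ar_dot_dp = (np.sin(T) * np.cos(P) * delta_p[0]
                 + np.sin(T) * np.sin(P) * delta_p[1]
                 + np.cos(T) * delta_p[2])
    G = 1.0 / (4.0 * np.pi)  # uniform directional density over the sphere
    integrand = G * np.exp(1j * k0 * ar_dot_dp) * np.sin(T)
    return np.trapz(np.trapz(integrand, phi, axis=1), theta)

d, lam = 0.7, 1.0  # separation and wavelength in the same (arbitrary) units
R_12 = isotropic_covariance(np.array([d, 0.0, 0.0]), lam)
print(R_12.real, np.sinc(2.0 * d / lam))  # the two values should agree closely
```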
Spectral Representation
Spectral representation theorem for stationary random processes, e.g., Cramér spectral representation, provides a mechanism for describing the properties of a WSS temporal or space-time random process in the respective transform domain, e.g., frequency or frequency-wavenumber. With this representation, the quantities of interest for a physically generated process have intuitive interpretation and representation. For example, distribution of power may be related to the space-time random process as a function of frequency (temporal or spatial). Definitions and a description of properties of spectral representation are provided below. The discussion below also explains how spectral representation can be used to develop the relationships between the more common second order moment characterizations of random processes. The spectral representation and its properties for a continuous time random process are introduced, followed by a discussion of properties for a discrete time random process. Extensions for WSS space-time random processes, in light of model assumptions, follow. In the following discussion, Ω denotes a continuous time frequency variable (in radians/sec) and ω denotes a discrete time frequency variable (in radians/sample).
Continuous Time
Let {f (t)} be a zero mean, complex-valued stationary random process defined over time−∞<t<∞. Additionally, let {f (t)} be stochastically continuous, a mild regularity condition that holds when the autocorrelation function, R (τ), is continuous at the origin. Then by the Cramér spectral representation theorem, there exists an orthogonal process, {Z (Ω)}, such that for all t, a realization of the process, f(t), can be expressed as
where equality is in the mean square sense and Eqn. (2.20) is a Riemann-Stieltjes integral. The orthogonal process is zero mean
E{dZ(Ω)}=0 (2.21)
and its corresponding covariance
implies that disjoint frequencies are uncorrelated. The function S(I)(Ω), known as the integrated spectrum of {f (t)}, is a bounded, non-decreasing function. If S(I)(Ω) is differentiable, then:
and the covariance then becomes
E{dZ(Ω)dZ*(Ω)}=S(Ω)dΩ (2.24)
The function S(Ω) is referred to as the power spectral density function. The integrated spectrum S^(I)(Ω) plays a role similar to a random variable cumulative distribution function, and many of the results for distribution functions may be applied to it. In analogy to the Lebesgue decomposition theorem for distribution functions, the integrated spectrum can be written as a sum of three components

S^(I)(Ω) = S_1^(I)(Ω) + S_2^(I)(Ω) + S_3^(I)(Ω) (2.25)

S_1^(I)(Ω) is continuous, meaning its derivative exists for almost all Ω such that S_1^(I)(Ω) = ∫_{−∞}^{Ω} S(Ω′) dΩ′. A process with an integrated spectrum with only this term, S^(I)(Ω) = S_1^(I)(Ω), has a smooth background spectral component. White or colored noise processes (AR, MA, ARMA) are of this type, which is referred to herein as a process with a purely continuous spectrum.
S2(I) (Ω) is a step function with jumps of size pl at frequencies Ωl, l=1, 2, . . . . A process with an integrated spectrum with only this term, S(I)(Ω)=S2(I)(Ω), has a purely discrete spectrum or line spectrum. A harmonic random process,
where Θl is uniform [−π, π] and Al is a real-valued random variable, has this type of spectrum.
S3(I)(Ω) is a continuous singular function. This type of pathologic function is not of practical use for spectral estimation, and may be assumed to be identically equal to 0 herein.
From this classification of spectra, and the decomposition described by Eqn. (2.25), three types of stationary random processes may be encountered: first, those with a purely continuous spectrum; second, those with a purely discrete spectrum; and last, those which are a combination of both types, referred to herein as mixed spectrum processes.
The autocorrelation function, R (τ), and power spectral density, S (Ω), of the process can be related through the spectral representation and the properties of the orthogonal process. Starting with the definition of the autocorrelation function
R(τ)=E{f(t)f*(t−τ)} (2.26)
and replacing f(t) with its spectral representation from Eqn. (2.20)
Using the uncorrelated property of the covariance at disjoint frequencies from Eqn. (2.22), Eqn. (2.27) simplifies to
where the result of the last integral assumes S(I)(Ω) is differentiable. Because R (τ) is an integrable, deterministic function, Eqn. (2.28) indicates that R (τ) and S (Ω) form a Fourier transform pair, such that
Discrete Time
The spectral representation also applies to discrete time random processes, {f[n]}, with modifications explained below. Given a stationary, discrete time random process, {f[n]}, −∞<n<∞, a realization of the process, f[n], has a spectral representation using the orthogonal process, {Z(ω)}, given by
where equality is in the mean square sense. The limits of integration are restricted to ±π to reflect the unambiguous range of the discrete time frequency. The orthogonal process is zero mean, E {dZ (ω)}=0, with covariance
where the bounded, non-decreasing function S(I)(ω) is the integrated spectrum of {f[n]}. The covariance implies disjoint frequencies are uncorrelated. If S(I)(ω) is differentiable, then:
and the covariance of the increments becomes
E{dZ(ω)dZ*(ω)}=S(ω)dω (2.33)
The function S(ω) is referred to as the power spectral density function. Following the same procedure as in the continuous time case, the autocorrelation, R[l] = E{f[n] f*[n−l]}, and integrated spectrum are related via the spectral representation by
where the final expression assumes S(I) (ω) is differentiable. The power spectral density and autocorrelation form a Fourier transform pair such that
where equality is in the mean square sense.
Sampling and Aliasing
A sampled WSS continuous time random process {fc (t)}, −∞<t<∞, with uniform sampling period T, produces a WSS discrete random process {fd [n]}, −∞<n<∞. The subscripts c and d are used to reinforce the distinction between the continuous and discrete time domains. As a stationary discrete time random process, fd [n]=fc (t0+nT) has a spectral representation
where the orthogonal process {Zd (ω)} has the spectral representation properties described above. The autocorrelation and power spectral density form a Fourier transform pair.
The discrete time process second order characterizations, Rd [l] and Sd (ω), may be related to the continuous time counterparts, Rc(τ) and Sc (Ω). The discrete time autocorrelation function samples its continuous time counterpart. This follows directly from its definition
Two approaches may be taken to determine the form of the discrete time process power spectral density. The first considers both Rc (τ) and Sc (Ω) as deterministic functions, and utilizes the properties of the Fourier transform and the continuous time sampling indicated in Eqn. (2.38) to arrive at
The second approach utilizes the spectral representation directly to arrive at the same result.
The relationship in Eqn. (2.39) can be used to determine parameters for the sampling period, T, such that the resultant discrete spectrum is an accurate representation of the underlying continuous spectrum, i.e., the Nyquist sampling criterion. However, consider an alternate interpretation of Eqn. (2.39). Even if the Nyquist sampling criterion is not met, an estimate of the discrete spectrum, although it may be an aliased version of the true spectrum, can be used to determine correct values for the covariance at the sample points indicated by Eqn. (2.38). This may be insufficient to completely reconstruct the underlying spectrum, but it enables estimation of covariance matrix values for adaptive processing.
In practice a finite number of samples are available for processing. This is the effect of a finite duration observation window and has a direct impact on the ability to estimate the discrete spectrum, which may be addressed via formulation of a multitaper spectral estimator, as explained below.
These concepts may be applied directly to uniform linear array processing problems. For regular arrays, such as arrays based on a multiple of a fixed spacing (e.g., minimum redundancy arrays), additional complexity is introduced due to the impact of the equivalent windowing function on the ability to estimate the discrete spectrum. Arbitrary array design may also have this issue. Additionally, arbitrary array design may add potentially unusual spectral combinations due to the non-uniform spacing compared to the uniform structure described by Eqn. (2.39).
Continuous Space-Time
The spectral representation extends to the case of a spatially and temporally stationary multidimensional random process, {f (t, p)}. Given a realization of the process, f (t, p), there exists an orthogonal process, {Z (ω, k)}, such that for all t, p
The orthogonal process, {Z(ω, k)}, is zero mean E {dZ(ω, k)}=0, and uncorrelated across disjoint frequency-wavenumber bands.
where it is assumed that the integrated frequency-wavenumber spectrum, Px(I) (ω, k), is differentiable.
The relationships between covariance, cross spectral density, and frequency-wavenumber spectrum may be derived from the multidimensional spectral representation in Eqn. (2.40).
Directional Distribution of Plane Waves
As an alternative to the frequency-wavenumber domain in Eqn. (2.40), one may define the orthogonal process across all angles of arrival on a sphere.
where
where k0=2π/λ, and ar (θ, φ) is a unit vector in the radial direction. The orthogonal process, {dZ(ω0, θ, φ)}, defines an integrated spectrum, S (ω) G(I) (ω, θ, φ), where G (·) is used to be consistent with the discussion above of the second order characterization of space-time random processes. The function S (ω) scales the relative levels defined in G (·) to the absolute levels seen at the array. Assuming G(I) (ω, θ, φ) is differentiable
The cross spectral density is defined by
Relating Eqn. (2.45) to Eqn. (2.42), and using Eqn. (2.44):
One may compare Eqn. (2.46) to the earlier expression in Eqn. (2.16), which also included directional response of the individual sensor elements and expanded the a_r^T Δp terms in spherical coordinates. By defining the directional distribution, G_f(ω, θ, φ), and requiring disjoint regions in angle space to be uncorrelated, as in Eqn. (2.44), this shows how the spectral representation underlies the model of the stationary space-time process as the sum of uncorrelated plane waves distributed over all directions of arrival to the array.
Optimal Beamforming
The following discussion reviews optimal beamforming techniques given observation of the interference and noise environment, and the related problem when the desired signal is also present in the data. The normalized signal to interference and noise ratio (SINR) loss metric is a measure of the decrease in output SINR of an implemented beamformer compared to an optimal processor. The normalized SINR loss metric may be used to assess the performance of a given adaptive beamforming algorithm. This metric and its application are described below.
Minimum Variance Distortionless Response (MVDR)
Given a series of snapshots, x_m ∈ C^N with E{x_m x_m^H} = R_x, an adaptive processor may use a weighted linear combination of the sensor outputs to produce a scalar output signal

y_m = w^H x_m (2.47)
The processor should pass the desired direction of arrival, specified by steering vector s, undistorted. This constraint may be expressed as
w^H s = 1 (2.48)
Expected output power of the processor is
E{|y_m|^2} = E{|w^H x_m|^2} = w^H R_x w (2.49)
The design criterion is to minimize the expected output power, subject to the distortionless constraint. This criterion is equivalent to maximizing the output SINR. The optimization problem is then
arg min_w w^H R_x w  s.t.  w^H s = 1 (2.50)
Using the method of Lagrange multipliers, the constrained cost function to be minimized is then
J(w) = w^H R_x w + λ(w^H s − 1) + λ*(s^H w − 1) (2.51)
The cost function is quadratic in w. Taking the complex gradient with respect to w, and setting to zero
the optimal set of weights is
w_opt = −λ R_x^{−1} s (2.53)
Using Eqn. (2.53) in Eqn. (2.48) to solve for the Lagrange multiplier gives λ = −(s^H R_x^{−1} s)^{−1}, and combining the two produces the final weight vector

w_opt = R_x^{−1} s / (s^H R_x^{−1} s) (2.54)
When snapshot data used to estimate Rx contains only a noise and interference environment, this processor is referred to as a minimum variance distortionless response (MVDR). In the event the desired signal is also present in the snapshot data, the same solution for the weight vector results; however, such a solution is sometimes referred to as a minimum power distortionless response (MPDR) to indicate the difference in the observed data. In practice, the distinction makes a significant difference in terms of the required snapshot support to achieve good performance.
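A direct Python rendering of the weight vector of Eqn. (2.54), applicable whether R_x is a sample covariance or a CSS estimate, is sketched below; a linear solve is used in place of an explicit matrix inverse.

```python
import numpy as np

def mvdr_weights(R, s):
    """MVDR/MPDR weights w = R^{-1} s / (s^H R^{-1} s) for covariance R and
    steering vector s (cf. Eqn. (2.54))."""
    Rinv_s = np.linalg.solve(R, s)
    return Rinv_s / (s.conj() @ Rinv_s)
```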
Normalized SINR Loss
For cases involving uncorrelated signal and interference, a standard metric for performance of an adaptive beamformer is degradation in the output signal to interference and noise ratio (SINR) compared to that obtainable with an optimal processor. The normalized SINR loss, ξ, is defined as
The subscript o designates true quantities or optimal values, while the subscript a designates the actual or estimated values. For convenience, normalized SINR loss can be expressed on a dB scale, as ξ_dB = −10 log_10(ξ). In this way, ξ_dB = 1 implies an output SINR that is 1 dB lower than obtainable by an optimal processor. For the optimal processor, SINR is computed as
while for an implemented processor, SINR may be computed as
The general expression for ξ, not assuming a particular form for the weights, is therefore
For an adaptive beamformer with weights designed using the minimum variance distortionless response (MVDR) criteria, using Eqn. (2.54) in Eqn. (2.56), the SINR for the optimal processor is
SINR_o = s^H R_o^{−1} s (2.59)
while using Eqn. (2.54) in Eqn. (2.57) yields the SINR for an implemented processor
Using Eqn. (2.59) and Eqn. (2.60) in Eqn. (2.55), the expression for ξ becomes
Eqn. (2.61) is a general expression for ξ when beamformer weights are found via MVDR, but it does not give any insight into performance as it relates to the quantities estimated for the underlying model. The matrix inverse operations also make it difficult to follow directly how model parameters influence the performance, except under some simplifying assumptions.
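As a sketch of this metric, the following Python function forms MVDR-style weights from an estimated covariance and from the true covariance and evaluates the normalized SINR loss in dB; the covariances and steering vector supplied to it are whatever quantities are of interest.

```python
import numpy as np

def normalized_sinr_loss_db(R_true, R_est, s):
    """Normalized SINR loss (in dB) of MVDR weights designed from R_est,
    evaluated against the true covariance R_true and steering vector s."""
    w_opt = np.linalg.solve(R_true, s)  # proportional to the optimal weights
    w_est = np.linalg.solve(R_est, s)   # proportional to the implemented weights

    def sinr(w):
        # |w^H s|^2 / (w^H R_true w); any common signal power scaling cancels
        # in the ratio below, so it is omitted here.
        return np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R_true @ w)

    xi = sinr(w_est) / sinr(w_opt)
    return -10.0 * np.log10(np.real(xi))
```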
Spectral Estimation Techniques
In a particular embodiment, covariance for adaptive beamforming is estimated by first estimating the spatial or wavenumber spectrum. Thus, techniques that require an estimate of the covariance a priori, such as Capon's MVDR or MUSIC, are not usable for this purpose. Two main techniques for spectral estimation based upon the data are described: windowed, averaged periodogram (a non-parametric technique) and multitaper spectral estimation.
Classical (Nonparametric) Spectral Estimation
A windowed, averaged periodogram is a technique for spectral estimation for uniform sampled data series. By applying a predetermined fixed window function or taper, w=((w[n]))n, to the data, the behavior of the spectral estimate can be controlled with regard to frequency resolution and spectral leakage, e.g., sidelobe suppression. These quantities may be traded off against each other.
For simplicity, a single dimension time series or uniform linear array processing may be assumed. For either the time series or array processing problem the procedure is identical once the snapshots have been established. Time series processing may consider a contiguous collection of NTOTAL samples. This collection may then be subdivided into M snapshots of N samples each. Within the time series, the snapshots may be specified such that there is overlap of samples between adjacent snapshots. For the array processing application, each snapshot represents the simultaneous sampling of each of the N array elements. In either application, once obtained, each snapshot, xm, is multiplied element-by-element with the windowing function, w [n].
y_m = ((x_m[n] w[n]))_n = x_m ⊙ w (2.62)
where ⊙ is the element-by-element or Hadamard product. This windowed snapshot data may then be Fourier transformed, e.g., using efficient fast Fourier transform (FFT) algorithms
A value of N_FFT may be selected to more finely sample the underlying spectrum, e.g., zero-padding, but the fundamental “resolution” of the transform is constant based on the number of available samples, N. A final estimated spectrum is the average of the magnitude squared outputs of Eqn. (2.63):
The frequency domain may be referred to as the transform domain of the time series, although for array processing the wavenumber is the spatial frequency, and once normalized by their respective sample period or sensor separation the two are equivalent. The averaged, modified periodogram processing shown here using the FFT provides a fixed resolution and sidelobe (leakage) performance across the frequency domain, based on the characteristics of the selected window. The array problem encounters the non-linear mapping between physical angle of arrival and wavenumber. If a fixed response across angle space is desired, the window function becomes a function of angle of arrival, or w_l = ((w_l[n]))_n, such that Eqn. (2.63) becomes
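A minimal Python sketch of the windowed, averaged periodogram procedure described above (windowing and transforming each snapshot, then averaging the magnitude squared outputs) for a uniform linear array is shown below; the Hann window and FFT size are illustrative choices, and the angle-dependent window variant is not included.

```python
import numpy as np

def averaged_periodogram(snapshots, n_fft=256):
    """Windowed, averaged periodogram over M snapshots (columns of `snapshots`)
    for a uniform linear array; returns normalized wavenumber bins and spectrum."""
    n_sensors, n_snapshots = snapshots.shape
    window = np.hanning(n_sensors)
    tapered = snapshots * window[:, None]     # apply the taper to each snapshot
    Y = np.fft.fft(tapered, n=n_fft, axis=0)  # transform each tapered snapshot
    S_hat = np.mean(np.abs(Y) ** 2, axis=1)   # average magnitude squared outputs
    k = np.fft.fftfreq(n_fft)                 # cycles per sensor spacing
    return np.fft.fftshift(k), np.fft.fftshift(S_hat)
```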
Multitaper Spectral Estimation (MTSE)
A multitaper algorithm formulates the spectral estimation problem as one of estimating second order moments of a spectral representation of a process. The following discussion considers uniform sampled time series. Application of the method to uniform linear array processing in wavenumber space is immediate. For arbitrary array geometries or operation in angle space, the concepts are the same though a specialized multitaper design may be used and processing may be more computationally intensive.
Spectral Estimation
In the following discussion the temporal “centering” term, e^{j2πf(N−1)/2}, has been omitted to simplify the discussion, and relationships are shown for M > 1 available snapshots. Given a stationary discrete random process, {x[n]}, −∞<n<∞, a realization of the process, x[n], may have a spectral representation
where the covariance of the orthogonal increment process defines the power spectral density.
E{dZ(f)dZ*(f)}=S(f)df (2.67)
The problem of spectral estimation is to estimate the covariance of this process. However, dZ (f) is not observable directly from the available, limited samples x [n], 0≦n<N. While the impact of this data limiting operation (or projection onto a finite number of samples) is obvious in the time domain, its effect on the spectral representation of the process is less immediate. Taking the Fourier transform of the samples
and inserting Eqn. (2.66) into Eqn. (2.68) gives what may be referred to as the fundamental equation of spectral estimation
This result is a linear Fredholm integral equation of the first kind, and cannot be solved explicitly for dZ (v). This is in line with the inability to reconstruct the entire realization of the process, x [n], −∞<n<∞, from the limited sample observation. Eqn. (2.69) can be solved approximately, for a local region (fo−W, fo+W) using an eigenfunction expansion of the kernel
and a local least squares error criterion. The eigenfunction equation is given by
where 0<W<½ is a design choice and N is the number of available samples. There are N solutions to Eqn. (2.70), indexed by the subscript d. The eigenvalues, 0<λd (N, W)<1, give a measure of the concentration of the eigenfunction Qd (N, W, f) within a desired region, [−W, W]. For this particular problem, the solutions to Eqn. (2.70) are known. The Qd (N, W, f) are the discrete prolate spheroidal wave functions (DPSWF), which are related to qd (N, W, n), the discrete prolate spheroidal sequences (DPSS) by a Fourier transform
where εd=1 for d even, j for d odd. These sequences are also known as the Slepian sequences. There are approximately 2 NW significant eigenvalues for these functions. Defining the Fourier transform of the windowed samples, ym(d)(f), as the dth eigencoefficients of the data
The dth eigenspectrum, Ŝd(f), may be computed by averaging the magnitude squared of ym(d) (f) over all snapshots
Due to the orthogonality of Qd (N, W, f) over the interval [−W, W], for locally near-flat spectra the eigenspectra are approximately uncorrelated. Averaging them improves the overall variance of the final estimate. Before considering a method for combining the eigenspectra, one might be tempted to use all N available eigenfunctions to increase the amount of averaging, but this is not recommended.
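As an illustration of the processing in Eqns. (2.71) through (2.73), a minimal numerical sketch is given below in Python using NumPy and SciPy. The function name, argument names, and the default choices of NW and D are illustrative assumptions and are not part of the estimation method itself.

import numpy as np
from scipy.signal.windows import dpss

def mtse_eigenspectra(x, NW=2.5, D=None, NFFT=None):
    # x: complex snapshot data, shape (M, N); returns the D eigenspectra,
    # each averaged (magnitude squared) over the M snapshots.
    M, N = x.shape
    D = D if D is not None else int(2 * NW)          # rule of thumb: about 2NW usable tapers
    NFFT = NFFT if NFFT is not None else N
    tapers, ratios = dpss(N, NW, Kmax=D, return_ratios=True)   # DPSS (Slepian) sequences q_d[n]
    # Eigencoefficients y_m^(d)(f): Fourier transform of each tapered snapshot (cf. Eqn. (2.72))
    Y = np.fft.fft(x[:, None, :] * tapers[None, :, :], n=NFFT, axis=-1)   # shape (M, D, NFFT)
    # Eigenspectra: average |y_m^(d)(f)|^2 over snapshots (cf. Eqn. (2.73))
    S_d = np.mean(np.abs(Y) ** 2, axis=0)            # shape (D, NFFT)
    return S_d, ratios

A simple fixed-weight multitaper estimate is the average of the rows of S_d; the adaptive combination discussed below generally performs better for non-flat spectra.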
Looking at Multiple Tapers
To develop a better understanding of the difference between using a multitaper technique and a single taper classical technique, a simple example may be considered. For this example, consider three approaches to estimating the spectrum using a windowed technique. Averaging multiple, uncorrelated estimates improves estimation accuracy. This can be accomplished using non-overlapping Hann windows. Alternatively, a multitaper design uses overlapping but orthogonal windows. Both achieve improvement due to averaging uncorrelated estimates, but one would expect the multitaper design to perform better overall as it incorporates more of the sample data in each estimate. In order to improve the resolution using the Hann window one may increase the length of the window (at the expense of providing fewer uncorrelated estimates). In the extreme, a single Hann window has better resolution than the multitaper but achieves no improvement due to averaging. Alternate formulations of the Hann based approach may be used, such as 50% overlap, etc., but in general the multitaper has better performance in terms of frequency resolution and overall improvement in variance due to averaging.
There is a limit to the number of tapers that may be applied meaningfully, based on N and the selected W. As a rule of thumb, there are 2 NW significant eigenvalues (sometimes more conservatively estimated as 2 NW−1), indicating that 2 NW tapers are highly concentrated in the region [−W, W]. For example, the first few eigenfunctions may have a mainlobe concentrated largely within [−W, W], but for higher numbered windows the mainlobe may be mostly outside the desired region, and the sidelobe level may increase substantially. This implies that power estimates based on these higher numbered windows may be heavily influenced by frequency content outside the area of interest. This effect may be referred to as a broadband bias, which may be undesirable, in particular for high dynamic range non-flat spectra. Limiting the number of employed windows such that D≦2 NW provides some robustness against broadband bias automatically. Improvement may also be achieved by appropriately combining the individual eigenspectra as discussed further below.
Combining Eigenspectra
With N given, and W and D specified, a multitaper method may compute individual eigenspectra, Ŝd (f), using Eqn. (2.73). For an assumed flat spectrum, there is a fixed optimal weighting scheme for combining the individual Ŝd (f); however, this is of limited use. If one had a priori knowledge that the spectrum was white, altogether different estimation techniques could be applied. For a non-flat spectrum, adaptive weighting schemes may reduce the contribution of broadband bias while maintaining estimation accuracy. An iterative method may be used to determine an eigenspectra weighting function, hd (f), for non-white noise. Begin with an initial estimate of the spectrum using a flat-spectrum fixed weighting
where the superscript indexing is introduced to indicate the appropriate iteration. Estimating the variance of the process as
the following iterative procedure may be performed.
Typically, only 3 iterations of Eqn. (2.75) followed by Eqn. (2.76) are used for convergence.
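The exact forms of Eqns. (2.74) through (2.76) are not reproduced above, but the commonly used Thomson-style adaptive weighting follows the same structure: an initial fixed-weight estimate, a weight update based on the current spectrum estimate and the taper concentrations λd, and a re-estimate. The sketch below assumes the eigenspectra and concentration ratios produced by the earlier sketch and may differ in detail from the referenced equations.

import numpy as np

def adaptive_multitaper_combine(S_d, ratios, n_iter=3):
    # S_d: eigenspectra, shape (D, NFFT); ratios: concentrations lambda_d, shape (D,)
    lam = ratios[:, None]
    S = S_d[:2].mean(axis=0)                 # initial estimate from the two most concentrated tapers
    sigma2 = S.mean()                        # crude estimate of the process variance
    for _ in range(n_iter):                  # typically only about 3 iterations are needed
        # |d_k(f)|^2 weights: large where the local spectrum dominates the broadband bias term
        d2 = lam * S[None, :] ** 2 / (lam * S[None, :] + (1.0 - lam) * sigma2) ** 2
        S = np.sum(d2 * S_d, axis=0) / np.sum(d2, axis=0)
        sigma2 = S.mean()
    return S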
Free Parameter Expansion
The formulation for multitaper spectral estimation develops an estimate of the power spectral density at f0 by considering a region [fo−W, fo+W]. This estimate is valid for any f in this region. For this reason, for a specific Ŝf (fo′), there remains a question of which fo in the range fo′−W≦fo≦fo′+W is most appropriate as all are valid. This choice may be referred to as a free parameter expansion of f0. The final Ŝf(fo′) should be a weighted average of these estimates, typically over a range |fo′−fo|≦0.8 W. In practice, the eigencoefficients in Eqn. (2.72) may be computed using FFT techniques for efficiency. Additional eigencoefficients for free parameter expansion may be generated by continuing to use the FFT with additional zero-padding. As used herein, the scalar multiplier NFPE refers to a value such that the full zero-padded FFT size is NFFT=N×NFPE. Thus, NFPE indicates the number of sampled “between points” available for free parameter expansion averaging.
Harmonic Analysis
As explained above, a discrete random process may have a mixed spectrum, such that it includes two independent processes—one with a continuous spectrum and one with a discrete spectrum. In terms of its spectral representation, a realization of a mixed spectrum stationary discrete random process {x [n] } may have spectral representation
where dZ1 (f) corresponds to the continuous spectrum process and has increments in continuum over −½≦f≦½, while dZ2 (f) corresponds to the line spectrum process and has increments only at the discrete locations of the frequencies in the harmonic process, fk for k=1, 2, . . . , K. The line components (impulses) in the spectrum, which are caused by a portion of the process being a harmonic random process, cause difficulties for both classical and multitaper techniques. This is a result of the modulation property of the Fourier transform: windowing of the data in the time domain results in convolution in the frequency domain. If a line component has large SNR, the result is unintended spectral leakage across frequency. A harmonic analysis approach may be applied to deal with this phenomenon.
At each frequency fo, the multiple tapers define a region in the frequency domain, [fo−W, fo+W]. The continuous spectrum portion of the process, {dZ1 (f)}, is non-zero throughout the region [fo−W,fo+W]. For now, assume a single line component may exist in this region. If a line component exists at fo, the increment process {dZ2(f)} is only non-zero at fo within [fo−W, fo+W]. Each realization of the process, dZ2 (f), provides a complex valued constant at fo. An analysis of variance (ANOVA) test may be used to detect the presence of the potential line component. For a single snapshot, this detection problem is termed the constant false alarm rate (CFAR) matched subspace detector. Within the region [fo−W, fo+W], a subspace located at fo only may be defined. A vector, qo, may be defined that includes the mean value of each of the tapers
and use this to form a projection matrix, Pq, for the subspace
Pq=qo(qoHqo)−1qoH (2.79)
The null projector for the subspace, Pq⊥,
Pq⊥=I−Pq (2.80)
defines “everywhere else” in the region [fo−W, fo+W]. Forming a vector of the eigencoefficients, ym (fo)
ym(fo)=[ym(1)(f0),ym(2)(f0), . . . ,ym(D)(f0)]T (2.81)
a detection statistic may be formed by taking a ratio of the power of the eigencoefficients in the fo subspace to the power of the eigencoefficients outside that subspace. Formally, the detection statistic F (fo) may be computed as
and compared to an appropriate threshold, γTH.
The importance of the detection of the line components in the spectrum is that they may be identified, and after estimation of their unknown parameters (amplitude, frequency, and phase) subtracted from the original data. The residual sample data may then be subject to the spectral estimation algorithm, now with line components removed. The final spectral estimate may numerically “add” the line components to the continuous spectrum, with appropriate scaling for SNR and estimation accuracy.
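A minimal sketch of the detection statistic of Eqns. (2.78) through (2.82) is given below. It assumes the eigencoefficient array produced by the earlier sketch; the scale factor normally applied to make F(fo) an exact F-distributed variate is omitted, so the threshold γTH would have to be set accordingly.

import numpy as np

def line_component_statistic(Y, tapers):
    # Y: eigencoefficients, shape (M, D, NFFT); tapers: DPSS tapers, shape (D, N)
    D = tapers.shape[0]
    q = tapers.mean(axis=1)[:, None]                 # q_o: mean value of each taper, Eqn. (2.78)
    qtq = np.vdot(q, q).real                         # scalar q_o^H q_o
    Pq = (q @ q.conj().T) / qtq                      # projection onto span{q_o}, Eqn. (2.79)
    Pq_perp = np.eye(D) - Pq                         # null projector, Eqn. (2.80)
    # Power of the eigencoefficient vectors y_m(fo) inside and outside the subspace
    num = np.einsum('mdf,de,mef->f', Y.conj(), Pq, Y).real
    den = np.einsum('mdf,de,mef->f', Y.conj(), Pq_perp, Y).real
    return num / np.maximum(den, np.finfo(float).tiny)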
Structured Covariance Matrices Based on Frequency Wavenumber Spectral Estimation
The discussion below explains how the frequency-wavenumber spectrum can be used as a basis for covariance matrix estimation for WSS space-time processes. First, the sensitivity of a model based adaptive beamformer to errors in estimates of the model parameters is considered. Performance of these processors with respect to the estimation of individual model parameters, such as interferer angle-of-arrival or interferer to noise ratio (INR), is investigated. Second, relationships between the model for the array processing problem and the desired covariance are considered. In addition to the contribution from physically propagating waves seen by the array, contributions due to the sensor noise component may be included to maintain robustness. Thus, a model may be defined that includes both of these elements, and that produces the proper covariance matrix estimate after transformation to the space-time domain. This leads to an approach for covariance estimation from spatial spectrum (CSS) for uniform arrays, and guides subsequent discussion of CSS for arbitrary arrays. Concentrating on the uniform linear array case, the classical power spectral estimation techniques discussed above are applied to the problem. Expected values for the resulting covariance are developed and used to predict normalized SINR loss performance. These predictions indicate that some form of harmonic analysis to mitigate the effects of line components in the spectrum may be used. This allows the processor to maintain operation in the low normalized SINR loss region for a wide range of conditions.
Sensitivity of a Model Based Beamformer
In the following discussion, performance of a model-based adaptive beamformer is assessed relative to an optimal beamformer. Analysis is performed to determine performance sensitivity to estimation errors for the individual model components. This analysis gives insight into which parameters within the model matter most in terms of impact to beamformer performance, without specifying the exact form of the processor. Bounds are developed indicating the estimation accuracy needed to achieve an acceptable normalized SINR loss, ξ.
Single Plane Wave in Spatially White Noise
Consider the simple case of a single plane wave interferer in spatially white noise. This provides a basic understanding of sensitivity to error in estimation of the interferer INR or wavenumber, and develops a basis for more complicated models. The model for the covariance matrix for this problem is
Ro=σn2vovoH+σw2I (3.1)
where vo=((e−jkopn))n is the array manifold (steering) vector of the interferer, and:
σw2, the variance of the uncorrelated sensor noise;
σn2, the variance of the plane wave interferer; and
ko, the wavenumber of the plane wave interferer.
Optimal beamformer weights for a desired wavenumber, ks, specifying steering vector, s, may be determined using Eqn. (2.54). Given its simple structure, there is a closed form expression for Ro−1, obtained using the matrix inversion lemma.
The projection matrix for the subspace spanned by a single vector, v, is defined as Pv=v(vHv)−1vH which can be rearranged as vHvPv=vvH. Defining the quantity
the matrix inverse is represented as
Ro−1=σw−2[I−βoPvo] (3.3)
Eqn. (3.3), which is similar to a projection matrix on the null space of vo, is a weighted subtraction of the projection onto the range space of vo. βo is a measure of the interferer to noise ratio, so the operation of the optimal beamformer can be interpreted as projecting out the interferer based on its relative strength against the spatially white noise. The complete expression for the weight vector is
Note that the denominator is a scalar, and does not affect the shape of the beam pattern other than as a gain.
In a particular embodiment, a model based adaptive processor may know the form of the covariance, estimate the unknown parameters, and use the estimates to determine the adaptive weights using Eqn. (3.4). As used below, the subscript o is used to denote a true or optimal quantity, while the subscript a denotes an estimated quantity. The estimated quantities reflect the true value and the estimation error, σw,a2=σw2+Δσw2, σn,a2=σn2+Δσn2, ka=ko+Δk. The estimated covariance matrix, Ra, and the corresponding weight vector, wa, are
Ra=σn,a2vavaH+σw,a2I (3.5)
Ra−1=σw,a−2[I−βaPva]
The weight vector does not depend on the absolute values of σw,a2 and σn,a2, only their ratio. Thus, the following quantities may be defined:
where 0<ΔINR<1 indicates an underestimate, and ΔINR>1 indicates an overestimate. The following discussion considers the sensitivity of ξ to under/over-estimation of INR, and whether there is a range of Δk that is also tolerant such that ξ is near unity.
Over/Under-Estimate of INR, ΔINR≠1
For this analysis let Δk=0, so that Pva=Pvo.
Four scenarios resulting in two different approximations for performance are described in Table 3.1. An additional condition is considered in the upper right case of Table 3.1, that (INR)−1(ΔINR)−1≦1, or ΔINR≧(INR)−1. This puts a limit on the amount of underestimation considered and eliminates the situation of gross underestimation of INR when the interferer is strong. As seen in Table 3.1, the two approximations to consider are: Case 1, βa≈1, and Case 2, βa≈0.
Case 1. Overestimated or Slightly Underestimated INR, βa≈1
The general expression for the SINR loss is approximately
The performance depends on the number of elements, N, the INR, and the wavenumber separation. For a uniform linear array with element spacing d, with broadside as the desired steering vector, s, Eqn. (3.10) becomes
Defining the normalized frequency as the ratio of the operational frequency to the design frequency of the array, fnorm=f/fo, f≦fo, the expression can be written in terms of the angle of incidence to the array, θ. Note the normalized wavenumber kod=−πfnorm cos θ.
As θ moves away from the mainlobe at broadside, this expression is ≈1.
Case 2. Largely Underestimated INR, βa≈0
When INR is largely underestimated, the adaptive beamformer reduces to the conventional beamformer. Normalized SINR loss is highly dependent on INR, as the processor takes no action to specifically suppress the interferer. The expression for SINR loss for a uniform linear array in this case is approximately
The normalized SINR loss for the conventional (non-adaptive) beamformer is:
Large underestimation of the INR results in performance equivalent to the non-adaptive beamformer, effectively failing to take any corrective action in the weight determination to null the interference. The normalized SINR loss for this case shows the strong dependence between SINR loss and INR. This is expected for a non-adaptive processor: the larger the interferer, the worse the performance.
Wavenumber Offset, Δk≠0
Now consider the impact of Δk≠0 on the performance. Assume that ΔINR=1, i.e., there is no error in estimating the INR, so that βa=βo. The expression for the approximate SINR loss in this case is
This expression may be used to determine a Δk that achieves an acceptable SINR loss, ξOK. This can then be compared to the Cramér Rao bound to see how reasonable it is in terms of the imagined model based processor. This results in the general expression for the single interferer in uncorrelated white noise case.
Assumptions about the array geometry may be used to further simplify this expression. For a uniform linear array (ULA), performance may be analyzed in terms of Δk, with an assumed desired steering vector corresponding to broadside, ks=0. Due to the choice of ks, the interferer wavenumber ko is also the separation in wavenumber between the two. The following expressions can be derived through analysis of normalized SINR loss of a model based adaptive beamformer as a function of estimation accuracies of the model parameters (e.g., wavenumber and interferer to noise ratio (INR)):
Eqn. (3.17) includes oscillations that indicate there are areas in kd space that are more tolerant to the estimation error, Δk. These correspond to nulls in the conventional beam pattern that provide sufficient attenuation against the interference. A smoother bound for the expression that eliminates oscillations and neatly spans the lower values may be determined, resulting in a final smoothed expression:
While Eqn. (3.17) produces oscillations, Eqn. (3.19) smoothly bounds the bottom values. To be conservative, Eqn. (3.19) may be used, although Eqn. (3.19) may be too conservative as the separation between the interferer and the desired steering vector approaches zero. This corresponds to the interferer residing in the main lobe of the beamformer.
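The sensitivity analysis above can also be evaluated numerically. The sketch below computes the normalized SINR loss ξ for the single plane wave in white noise model of Eqns. (3.1) through (3.6), with a specified INR scaling error ΔINR and wavenumber offset Δk applied to the assumed covariance; the function and parameter names, the phase reference at the first element, and the unit sensor noise power are illustrative assumptions.

import numpy as np

def ula_manifold(N, k):
    # Manifold vector ((e^{-j k n}))_n for a unit-spaced ULA; k is the normalized wavenumber (k d)
    return np.exp(-1j * k * np.arange(N))

def normalized_sinr_loss(N, ko, INR, delta_INR=1.0, delta_k=0.0, ks=0.0):
    s  = ula_manifold(N, ks)                 # desired steering vector
    vo = ula_manifold(N, ko)                 # true interferer manifold vector
    va = ula_manifold(N, ko + delta_k)       # assumed (estimated) interferer manifold vector
    sw2 = 1.0                                # sensor noise power used as the reference level
    Ro = INR * np.outer(vo, vo.conj()) + sw2 * np.eye(N)               # true covariance, Eqn. (3.1)
    Ra = INR * delta_INR * np.outer(va, va.conj()) + sw2 * np.eye(N)   # assumed covariance, Eqn. (3.5)
    wa = np.linalg.solve(Ra, s)
    wa /= (s.conj() @ wa)                    # distortionless normalization, so |wa^H s| = 1
    sinr_a   = 1.0 / np.real(wa.conj() @ Ro @ wa)
    sinr_opt = np.real(s.conj() @ np.linalg.solve(Ro, s))
    return sinr_a / sinr_opt

# Example: 32-element ULA, interferer at normalized wavenumber 0.3, INR = 20 dB,
# with the INR underestimated by a factor of 2 and a small wavenumber offset:
# xi = normalized_sinr_loss(32, ko=0.3, INR=100.0, delta_INR=0.5, delta_k=0.02)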
Comparison to the Cramér Rao Bound
The spatial frequency/direction of arrival estimation accuracy specified by Eqn. (3.19) results in a prescribed amount of normalized SINR loss, ξOK. This accuracy can be compared to the Cramér Rao (CR) bound for the case of a single plane wave in noise. The CR bound for a given number of snapshots, M, is
Model for Covariance
Estimating the covariance matrix may begin by considering a model that incorporates the components that make up the covariance at the output of an N element array for a narrowband, stationary space-time process, f (t, Δp). Using the spectral representation theorem, the space-time process can be represented as a sum of uncorrelated plane waves distributed as function of angle of arrival to the array, G(θ, φ), or wavenumber, G(k). The corresponding wavenumber spectrum for the process is proportional to this normalized distribution, Pf (k)=αG(k), where α accounts for scaling the relative levels defined by G(k) to the absolute power level seen at the array.
As explained above, a stationary random process may include two uncorrelated components, one corresponding to a continuous spectrum process, and another corresponding to a discrete spectrum, i.e., harmonic, process. Independent, white sensor noise adds a third component to the array output covariance. The mth snapshot containing these components is
where vk is the array manifold response vector at spatial frequency kk. The covariance for this model includes three uncorrelated parts based on these components
Rx=VRaVH+Rb+Rw (3.22)
where
and spatial stationarity requires the plane waves be uncorrelated, so that Ra=diag(σ12, σ22, . . . , σK2).
Thus,
Alternatively, grouping the terms for the space-time process, Rf=VRaVH+Rb, separately from the sensor noise component:
Rx=Rf+Rw (3.24)
The matrix Rf includes all terms that correspond to physical propagating waves and can be decomposed via eigendecomposition
Depending on the particular form of the space-time process, f (t, Δp), and the array geometry, Rf may have rank Nf<N. In that event, some of the eigenvalues will be zero valued.
Regardless of the rank of Rf, the N eigenvectors Qf form a complete orthonormal set for the space CN×N. Using (3.25) in Eqn. (3.24), with Qf QfH=I the matrix Rx can be expressed
As evident in Eqn. (3.27), the white noise contribution from Rw guarantees that all the eigenvalues are non-zero, so that the overall covariance, Rx, is full rank.
Estimating Visible Space Covariance, {circumflex over (R)}vs
Consider the covariance associated with the space-time process only, Rf. As explained above, the transform relationship between the frequency-wavenumber spectrum, Pf(k), and the space-time covariance, Rf, is
Rf=(2π)−C∫ . . . ∫vsPf(k)v(k)vH(k)dk (3.28)
where C in this expression is the dimension of the wavenumber used, C=1, 2 or 3. In Eqn. (3.28), the range of integration is restricted to the visible region for the array, corresponding to physical propagating waves arriving at the array with some azimuth, 0≦φ≦2π, and elevation, 0≦θ≦π. For a given estimate of the visible region frequency-wavenumber spectrum, {circumflex over (P)}vs(k), the corresponding covariance of the space-time process can be determined using Eqn. (3.28)
{circumflex over (R)}vs=(2π)−C∫ . . . ∫vs{circumflex over (P)}vs(k)v(k)vH(k)dk (3.29)
Spectral estimation techniques used to form the estimate {circumflex over (P)}vs (k) may not be able to distinguish between the contribution of the space-time process, f (t, Δp), and the sensor noise component apparent within the visible space. Any basis projection or steered beam power measurement technique will see both the content from f (t, Δp) and the sensor noise.
{circumflex over (P)}vs(k)={circumflex over (P)}f(k)+{circumflex over (σ)}w2 (3.30)
{circumflex over (R)}vs from Eqn. (3.29) will contain the sum of both.
{circumflex over (R)}vs={circumflex over (R)}f+{circumflex over (σ)}w2(2π)−C∫ . . . ∫vsv(k)vH(k)dk (3.31)
Observe from Eqn. (3.31) that the contribution due to the sensor noise component, when viewed only across the visible region, appears as an additional isotropic noise in the environment.
Visible and Virtual Space
For certain array geometries, or when operating below the design frequency, there may be a significant additional virtual space in addition to the visible space available to the array. To define virtual space, the subspace spanned by the columns of a matrix A may be denoted as span (A)≡“A”. As explained above, with a sensor noise component present, Rx ∈ CN×N and is full rank, thus “Rx”=“CN×N”. The visible space for a particular geometry and operational frequency may only occupy a subspace of CN×N. “Rx” then includes two subspaces, one corresponding to a visible region, indicated with subscript vs, and one corresponding to a virtual region, indicated with subscript vr.
<Rx>=<Rvs>+<Rvr> (3.32)
Wavenumbers in the virtual space do not correspond to physical propagating waves. Use of Eqn. (3.29) directly as an estimate of covariance with failure to account for the sensor noise component within the virtual region subspace may lead to poor sidelobe behavior within the virtual region, and adaptive beamformers developed using this covariance alone may suffer an overall loss in SINR.
Regularly and Uniformly Spaced Array Geometry
A regularly spaced array geometry is described by an interelement spacing that is a multiple of a fixed quantity, d, in Cartesian coordinates. Uniform arrays are a special case of this, with the interelement spacing simply the constant d. In terms of spatial sampling, the uniform linear array is analogous to a uniformly sampled time series, so the results for stationary processes and for Fourier transform pairs and properties apply directly. The discussion below concentrates on the uniform linear array, with some discussion of the implications of regularly spaced arrays. Extensions to higher dimension processing are envisioned.
The narrowband space-time process, f (t, Δp), at frequency ω includes plane waves propagating in a homogeneous medium with velocity c. These waves are solutions to the homogeneous wave equation, and are constrained in wavenumber such that | k |=ω/c=2π/λ. This indicates that the frequency-wavenumber spectrum, Pf(ω, k), for this process exists on the surface of a sphere in wavenumber space with radius |k|.
Consider an N element uniform linear array with design frequency ωo (spacing d=λo/2) and sensor elements at locations on the z axis, pn=(n−1)d for n=1, . . . , N. With no ability to resolve spatial components in the kx or ky direction, the frequency-wavenumber spectrum, Pf(ω, k), may be projected down onto the kz axis. After projection the spectrum maintains the strict bandlimiting to the range |kz|≦2π/λ. Using sampling of random processes, as explained above:
where Δp=ld for integer l. For operation below the array design frequency, ω<ωo, the wavenumber spectrum Pf (kz) is non-zero only over the range corresponding to the visible region of the array, |kz|≦2π/λ, and the range of integration may be reduced to [−2π/λ, 2π/λ]. Now consider the uncorrelated sensor noise component
Rw(Δp)=σw2δ(Δp) (3.34)
Even though it does not correspond to a component of a physically propagating space-time process, a wavenumber-spectrum and covariance Fourier transform pair can be defined:
The difference between Pf (kz) and Pw (kz) is that Pw (kz) is non-zero over the entire interval, |kz|≦2π/λo. The covariance for the output of the array of interest is the sum of the two
Rx(Δp)=Rf(Δp)+Rw(Δp) (3.37)
The two wavenumber spectra, one for the space-time process and the other for the sensor noise component, can be added to produce a composite spectrum
Px(kz)=Pf(kz)+Pw(kz),|kz|≦2π/λo (3.38)
so that the covariance is related via
For convenience, the expressions can be converted to normalized wavenumber, ψ=−kzd space. Defining the ψ spectrum as
the covariance sequence Rl,x (l) and ψ spectrum are related
Pψ,x (ψ) may be estimated over the entire range, |ψ|≦π, from the snapshot data, xm, using classical power spectral estimation techniques. By accepting a fixed resolution in ψ space (which is non-uniform in angle space, θ) this can be done efficiently with FFT based processing. Using the estimate {circumflex over (P)}ψ,x (ψ) in Eqn. (3.41)
As long as the limits of integration in Eqn. (3.42) are over the entire range, ∫−ππ, the covariance estimate will contain the appropriate components for both the space-time process and the uncorrelated sensor noise.
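A minimal sketch of Eqn. (3.42) is given below: the estimated ψ-spectrum, sampled on a uniform grid over the full region of support, is inverse transformed to covariance lags that populate a Hermitian Toeplitz matrix. The grid handling and the e^{+jψl} sign convention are assumptions consistent with the transform pair above.

import numpy as np
from scipy.linalg import toeplitz

def css_covariance_from_psi_spectrum(P_psi, N):
    # P_psi: real, non-negative spectrum estimate on L uniform samples over [-pi, pi)
    # N    : number of array elements; returns an N x N Hermitian Toeplitz estimate
    L = len(P_psi)
    psi = -np.pi + 2.0 * np.pi * np.arange(L) / L
    lags = np.arange(N)
    # R[l] = (1/2pi) * integral of P(psi) e^{j psi l} d psi, approximated by a Riemann sum
    R_lags = (P_psi[None, :] * np.exp(1j * np.outer(lags, psi))).sum(axis=1) / L
    # Entry (r, c) = R[r - c]; the conjugate lags fill the upper triangle
    return toeplitz(R_lags, R_lags.conj())

Because the summation spans the entire region |ψ|≦π, a spectrum of the form of Eqn. (3.46) yields a covariance estimate containing both the space-time process component and the expected sensor noise contribution.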
The process itself may be bandlimited, and if the sensor noise component is estimated in the transform domain, it should have the appropriate form such that it equates to {circumflex over (σ)}w2I within the final covariance matrix estimate.
Positive Definiteness
In order for the estimated covariance to have value for adaptive beamforming, the covariance should be Hermitian, {circumflex over (R)}x={circumflex over (R)}xH, and invertible. More specifically, the eigenvalues of {circumflex over (R)}x should all be real valued and greater than zero, or more simply, the matrix {circumflex over (R)}x should be positive definite. For arbitrary vector x, not equal to the null vector (xHx≠0), the matrix A is positive semi-definite if
xHAx≧0 (3.43)
and is indicated notationally as A≧0. The matrix A is positive definite if
xHAx>0 (3.44)
and is indicated notationally as A>0.
For simplicity of explanation, consider a regularly spaced linear array (though higher dimension regular arrays may follow similarly). From Eqn. (3.42), the covariance matrix estimate is based on the estimate of the wavenumber spectrum, given in normalized wavenumber space, ψ=−kd
The ψ-spectrum estimates, {circumflex over (P)}ψ,x (ψ), may be restricted to be real-valued and greater than zero. This has implications for the choice of algorithm but is a reasonable restriction for a power spectral estimator when the observed process has a white noise component. With this restriction, {circumflex over (P)}ψ,x (ψ) may be expressed in the form
{circumflex over (P)}ψ,x(ψ)={circumflex over (P)}ψ,f(ψ)+{circumflex over (σ)}w2 (3.46)
where {circumflex over (P)}ψ,f(ψ) is the estimate of the space-time process spectrum, {circumflex over (P)}ψ,f(ψ)≧0, and {circumflex over (σ)}w2 is the estimate of the sensor noise, {circumflex over (σ)}w2>0. Now
The first term is
Both quantities in the integral in Eqn. (3.48), {circumflex over (P)}ψ,f(ψ) and |xHvψ(ψ)|2, are real-valued and greater than or equal to zero, therefore {circumflex over (R)}f≧0. The second term may be evaluated as
The integral in Eqn. (3.29) can be reduced to
so that
xH{circumflex over (R)}wx={circumflex over (σ)}w2xHx>0 (3.51)
and {circumflex over (R)}w is positive definite. The sum of a positive semidefinite matrix and a positive definite matrix is positive definite, so {circumflex over (R)}x>0 when {circumflex over (P)}ψ,f(ψ) is real-valued and greater than or equal to zero and {circumflex over (σ)}w2>0.
Performance when Using Classical PSD Techniques
The analysis of performance of estimating covariance from spatial spectrum (CSS) may begin by considering estimates of the wavenumber spectrum found using classical spectral estimation techniques. For ease of explanation, a uniform linear array with spacing d=λo/2 is considered. For a given fixed window function (or taper), w=((w[n]))n, the windowed snapshot data is
ym=((xm[n]w[n]))n=xm⊙w (3.52)
The windowed data may be used to form an averaged windowed periodogram estimate of the spectrum. Writing the array manifold response vector as vk(kz)=((e−jkzpn))n:
The final spectral estimate is the averaged, magnitude squared value of the Fourier transforms
{circumflex over (P)}ψ(ψ) is periodic in ψ with period 2π. The range |ψ|≦π may be referred to as the region of support. The visible region of the array, when operating at frequency f=c/λ, (f≦fo), is restricted to the range |ψ|≦π(λo/λ). As explained above, the remainder outside the visible region is referred to as the virtual region. The fixed window function, w, provides a fixed resolution, e.g., “bin width”, across ψ space. This allows {circumflex over (P)}ψ(ψ) to be computed efficiently at several equally spaced locations throughout the supported region using FFT techniques. It is also possible to have the window function vary as a function of ψ, represented as wψ. In this way, one can design for fixed resolution in angle space. This results in non-uniform “bin width” in ψ space. Multi-taper spectral estimation techniques lend themselves to this method of design. FFT techniques may not be directly applicable, though, when the window function is not fixed, so increased computational cost may be associated with the approach.
An alternate perspective on the estimated {circumflex over (P)}ψ(ψ) spectrum may be considered in terms of an auto-correlation sequence {circumflex over (ρ)}y[n] defined by the windowed sensor outputs. Based on the Fourier transform property
The sample autocorrelation per snapshot is
where the sequence ym[n] is non-zero only for indices in the range [0, N−1], and is zero elsewhere. As a convention, {circumflex over (ρ)}[n] may be used to represent a sample autocorrelation computed from the data itself, while R[n] is reserved to indicate an auto-correlation based on the ensemble, E {·}. The overall sample autocorrelation is the average over all snapshots
Using Eqns. (3.55), (3.56), and (3.57), in Eqn. (3.54):
The estimated power spectrum and auto-correlation sequence are a Fourier transform pair, with corresponding inverse transform relationship
The covariance matrix for the array is formed from the {circumflex over (ρ)}y [n] values in a Toeplitz structure
{circumflex over (R)}y=(({circumflex over (ρ)}y[r−c]))r,c (3.60)
Expressed directly in matrix notation based on Eqn. (3.59), this can also be expressed as
It is also useful when comparing related techniques to understand how the formation of {circumflex over (R)}y relates to the operations used in the traditional sample covariance matrix. Eqn. (3.57) is equivalent in result to Eqn. (3.59), but operates directly on the snapshot data in space-time domain. First, define a windowed sample covariance matrix
The classical sample covariance matrix,
uses Eqn. (3.62) with w=1, the all-ones vector. Showing the entries in the matrix in Eqn. (3.62) explicitly:
Comparing Eqn. (3.56) to Eqn. (3.63) shows that {circumflex over (ρ)}y,m [n] is the sum down each of the diagonals in the inner matrix in Eqn. (3.63), where the diagonals correspond to the numbered index n as main, sub, or super diagonal according to:
By averaging over multiple snapshots, therefore, {circumflex over (ρ)}y [n] is the sum down the diagonals of Rw,SCM. The covariance matrix {circumflex over (R)}y is then populated with entries from {circumflex over (ρ)}y [n]. Going forward as a convention, this operation is referred to herein as a diagonal-sum-replace (DSR), with a notation indicating the operation as
{circumflex over (R)}y=DSR(Rw,SCM) (3.65)
The DSR operation acts in a linear fashion for addition of matrices A, B ε CN×N
DSR(A+B)=DSR(A)+DSR(B) (3.66)
as well as in regards to the expectation operator
E{DSR(A)}=DSR(E{A}) (3.67)
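A sketch of the diagonal-sum-replace operation of Eqn. (3.65) is given below. It sums the entries down each diagonal of a (typically Hermitian) windowed sample covariance matrix and rebuilds a Toeplitz matrix from those sums; the function name is illustrative.

import numpy as np
from scipy.linalg import toeplitz

def dsr(A):
    # A: N x N matrix (e.g., a windowed sample covariance matrix)
    N = A.shape[0]
    sub = np.array([np.trace(A, offset=-n) for n in range(N)])   # sums of the sub-diagonals, rho[n]
    sup = np.array([np.trace(A, offset=n) for n in range(N)])    # sums of the super-diagonals, rho[-n]
    return toeplitz(sub, sup)                                    # entry (r, c) = rho[r - c]

With this helper, the estimate of Eqn. (3.65) is dsr(R_w_SCM), and the linearity and expectation properties of Eqns. (3.66) and (3.67) follow directly from the linearity of the diagonal sums.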
Expected Value-Stationary Random Process
The expected value of the covariance, {circumflex over (R)}y, for the WSS space-time process is considered below. From Eqn. (3.58), the expected value of {circumflex over (P)}y [ψ] is related to the sample autocorrelation, {circumflex over (ρ)}y [n],
The expected value satisfies E {{circumflex over (ρ)}y}=E{{circumflex over (ρ)}y,m}.
where Rx[n] is the ensemble covariance. The remaining summation term is the sample autocorrelation of the window,
The final result for the expectation is then
E{{circumflex over (ρ)}y[n]}=Rx[n]ρw[n] (3.70)
From Eqn. (3.70), the expected value of the covariance matrix is
E{{circumflex over (R)}y}=Rx⊙Rw (3.71)
where Rw=DSR(wwH)=((ρw[r−c]))r,c. Looking at the result in the frequency domain, using Eqn. (3.70) in Eqn. (3.68) results in:
This can be expressed in the ψ domain as the convolution of the power pattern of the window, Cw(ψ)=|W(ψ)|2=F (ρw[n]), with the underlying model spectrum, Px,ψ (ψ)
Performance Based on Expected Value
The result for the expected value of the covariance of Eqn. (3.71) can be used to assess performance of the algorithm. For a given window function (a.k.a. taper), w, first determine the matrix Rw=DSR (wwH). For each particular problem of interest, e.g., single plane wave in uncorrelated noise, form the known model ensemble covariance, Rx. Using these in the normalized SINR loss expression, adaptive beamformer performance using CSS can be analyzed.
Because of the Hadamard product nature of the relationship several things become apparent:
however, this property does not give any further insight into performance, since the matrix product, Rx⊙Rw, is biased compared to the ensemble covariance Rx.
Prototype Power Pattern
To assist in the normalized SINR loss analysis, a “prototype” window function may be defined. Its power pattern, Cw(ψ), may be defined by an ideal bandlimited portion, which is useful for identifying the impact of mainlobe width, and a constant offset portion, which is useful for identifying the impact of sidelobes or spectral leakage. Given definitions for these regions, the analysis is more straightforward and the two factors may be considered individually. Classically defined windows incorporate both features together, in some trade-off related to their design, making the analysis of individual window functions less insightful. The prototype window is defined herein using subscripts as follows: lb denotes local bias and relates to the mainlobe region; bb denotes broadband bias and corresponds to the sidelobe levels. Cw(ψ) is a periodic function with period 2π, and is defined explicitly over the region of support, |ψ|≦π, as
Cw(ψ)=Clb(ψ)+Cbb(ψ) (3.75)
The mainlobe is defined by an ideal bandlimited function
and the sidelobe level is defined by a constant
Cbb(ψ)=Abb, |ψ|≦π (3.77)
The scale factors, Alb and Abb, may be constrained according to the normalization
By specifying two of the three parameters, {Alb, Abb, ψlb}, e.g., the latter two, and varying them separately, their respective influence on performance can be examined. This provides a general feel for the behavior of normalized SINR loss using classical PSD techniques. As explained above, E{{circumflex over (R)}y}=Rx⊙Rw. Thus, it is sufficient to find Rw for the prototype window. For the definition in Eqn. (3.75), there is a closed form solution based on the parameters. Starting with the Fourier transform relationship
substituting in the definition for Cw(ψ) showing the explicit local and broadband bias terms
The solution includes two terms, ρw[n]=ρlb[n]+ρbb[n]. The first term, ρlb[n], is:
where sinc (x)≡sin (πx)/(πx). The second term, ρbb[n], is
The complete autocorrelation sequence, ρw[n], is the sum of the two
ρw[n]=Alb(ψlb/π)sinc([ψlb/π]n)+Abbδ[n] (3.83)
Expressed in matrix form, where Rlb,bb=((ρlb,bb[r−c]))r,c:
Rw=Rlb+Rbb (3.84)
The resultant expected value of the estimated covariance matrix is then
E{{circumflex over (R)}y}=Rx⊙[Rlb+Rbb] (3.85)
Observe in Eqn. (3.85) that the broadband bias term, Rbb, is a diagonal matrix with constant diagonal Abb. The overall effect of the broadband bias term is the same as a diagonal loading in that it increases the main diagonal of the estimated covariance matrix. This is accomplished as a multiplicative, not additive, effect.
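For reference, the closed form of Eqns. (3.83) and (3.84) can be evaluated numerically as sketched below. The normalization ρw[0]=Alb(ψlb/π)+Abb=1 is assumed here in place of the unstated detail of Eqn. (3.78), and the names are illustrative.

import numpy as np
from scipy.linalg import toeplitz

def prototype_window_Rw(N, psi_lb, Abb):
    # psi_lb: mainlobe (local bias) half-width in psi space; Abb: broadband bias (sidelobe) level
    Alb = (1.0 - Abb) * np.pi / psi_lb            # assumed normalization, rho_w[0] = 1
    n = np.arange(N)
    rho_lb = Alb * (psi_lb / np.pi) * np.sinc((psi_lb / np.pi) * n)   # local bias term; np.sinc(x) = sin(pi x)/(pi x)
    rho_bb = Abb * (n == 0)                                           # broadband bias term, Abb * delta[n]
    return toeplitz(rho_lb + rho_bb)              # Rw = Rlb + Rbb, Eqn. (3.84)

The expected covariance of Eqn. (3.85) is then the element-wise (Hadamard) product of the model covariance Rx with this Rw.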
The prototype window described by Eqn. (3.75) is not realizable with any array of finite length, so it is not possible to determine a set of coefficients, w, that would result in a power pattern of this type. This does not prevent analysis, however, since it is assumed only that the estimated spectrum, {circumflex over (P)}y(ψ)=Cw(ψ)⊛Px,ψ(ψ), is available, and not how it was computed. The approach is to understand performance issues when using classical techniques, and to show where they work well and where they do not. These results can then guide development to extend the range of situations allowing useful operation.
Prototype Window Normalized SINR Loss
Eqn. (3.84) may be used in the expression for SINR loss of Eqn. (3.74). The Hadamard product structure within the expression of Eqn. (3.74) prevents a more compact form. Strictly speaking, the value of ξ found using Eqn. (3.74) is not a random variable but a constant, since the expected value of the estimated covariance, E{{circumflex over (R)}y}, is being used. To understand the statistical behavior of ξ, one may insert the computational form of the estimated covariance given in Eqn. (3.65) into Eqn. (3.74). This is also not easily reducible to a more compact form.
The conclusion is that for this type of structured covariance matrix estimation algorithm, the normalized SINR loss cannot be simplified into an expression that does not involve the particular covariance, Rx, for the problem. This is unlike sample covariance matrix methods, where the performance can be derived in closed form as a random variable and shown to be a function of the number of elements and snapshots only. Performance of this structured covariance method depends on the problem. Eqn. (3.74) can still be used to understand SINR loss performance, but the scenarios analyzed must be specified.
By specifying the parameters of the prototype window function ([Alb or Abb], ψlb) for particular types of interference problems, Eqn. (3.74) may be used to predict performance. For example, the single plane wave interferer in uncorrelated noise case may be considered with a signal of interest not present. This case highlights some of the features of the approach.
Single Plane Wave Interferer
The impact of using CSS for the single plane wave in uncorrelated noise case may be analyzed using Eqn. (3.74). Normalized SINR loss may be computed as a function of distance between the desired signal direction of arrival and the interferer location in ψ-space, Δψ=(ψs−ψo). For each Δψ the difference between the per element INR and a fixed sidelobe level, Abb, may be varied. This may be done, for example, for three values of ψlb, set to multiples of the mainlobe half width of the array,
By increasing ψlb, an indication of performance when using wider mainlobe windows or when operating below the design frequency may be obtained.
Simulation results indicate that for CSS covariance matrix estimation based on classical power spectral based methods, near optimal SINR loss performance can be achieved when the interference has sufficient separation from the desired signal, or in the event the interferer is near the desired signal, its power per sensor element is substantially below the sidelobe level of the window used.
Analysis of a simple single plane wave interferer in white noise showed that the adaptive processor is relatively insensitive to estimation error of INR. Performance is more affected by wavenumber estimation accuracy, in direct relation to the accurate placement of nulls, but acceptable performance is achievable in few or even one snapshot. The CR bound is proportional to M−1 and N−2, so a larger number of sensors is beneficial. This is in contrast to the closed form performance of sample covariance matrix techniques, where performance does not depend on the number of sensors, just the number of interferers and snapshots (with use of diagonal loading). Covariance matrix estimates developed using classical power spectral estimation techniques were seen to be biased. The normalized SINR loss performance indicated that to broaden the conditions under which useful performance can be achieved, an additional step of harmonic analysis may be used, including the detection and subtraction of line components from the data.
Structured Covariance Estimation with Multi-Taper Spectral Estimation (MTSE)
The performance of adaptive beamforming using covariance matrix estimates based on the wavenumber spectrum using classical spectral estimation techniques, e.g., power spectral density (PSD) estimation, is described above. The expected value of the estimated covariance when using PSD is biased, E {RPSD}=Rx⊙Rw, so in general adaptive beamformers based on the CSS covariance do not converge to the optimal solution. However, analysis of the SINR loss performance for the uniform linear array case, as a function of interferer to desired signal spacing, Δψ, window characteristics, and INR showed that performance is within a few tenths of a dB of optimal under some conditions.
To maintain good normalized SINR loss performance, the interferer should have sufficient separation from the desired signal, in proportion to the window mainlobe width, Δψ≧2ψlb to 3ψlb, with an interferer to noise ratio such that (INRPE+Abb)≦10 log10(N) (dB). To continue to achieve good performance with smaller separation, the condition (INRPE+Abb)<<0 (dB) may be used. Neither of these conditions can be guaranteed in practice. These concerns motivate use of multi-taper spectral estimation (MTSE), e.g., Thomson's MTSE, in forming the estimate of the frequency-wavenumber spectra instead of the classical techniques. This is for at least three reasons:
Covariance from Spatial Spectra (CSS) with MTSE
The discussion below provides a procedural outline for using MTSE with harmonic analysis to estimate the frequency-wavenumber spectra used to form an estimate of the covariance matrix at the output of the array. The discussion considers an N element uniform linear array; extension to more general geometries is considered further below. The process has a mixed spectrum, with K point source signals having independent, random complex-valued amplitudes,
M snapshots are available for processing. The snapshot model is
An estimate for the frequency-wavenumber spectra, for both process and sensor noise, {circumflex over (P)}x(ψ), may be formed using MTSE and used to compute an estimate of the covariance.
The integral in Eqn. (4.2) may be implemented using a numerical summation
Alternatively, for a uniform linear array one can take advantage of the frequency-wavenumber spectrum and covariance being single dimension, and relate them by the 1-D inverse Fourier transform
and populate the covariance matrix as {circumflex over (R)}x=(({circumflex over (R)}x[r−c]))r,c. Fast Fourier transform techniques may be a particularly efficient implementation when the value of L permits their use.
Number of Tapers, D
To exploit the efficiencies of FFT based processing when using MTSE to estimate the spectrum, {circumflex over (P)}x(ψ), the spectral estimation may maintain a fixed resolution in normalized wavenumber space, ψ=−kzd. This allows a single set of tapers to be used. A width of the analysis region, W, may be chosen. Typical choices are NW=1.5, 2.0, 2.5. The number of significant tapers supported by a particular width, W, is D=2 NW−1. The case of NW=1 results in a single usable taper and reverts to a standard classical PSD method; no harmonic analysis may be used in this scenario. As a practical matter, D=2 NW may be selected, resulting in a lower concentration for the last taper. This increases the number of basis vectors used in the harmonic analysis detection statistic. The D tapers may be designed according to an appropriate eigenvalue or generalized eigenvalue problem. For the ULA case, the resultant tapers are the discrete prolate spheroidal sequences
qd(n)=dpss(N,NW),d=1, . . . ,D (4.5)
FFT and Zero-Padding
Snapshot data may be windowed and fast Fourier transformed to produce the MTSE eigencoefficients
for the set of points l=0, . . . , NFFT−1. The FFT size, NFFT, may be specified in Eqn. (4.6) independently from the number of array elements, N. The nominal set of points would be NFFT=N, with a greater number of points, NFFT>N, generated using the zero-padding technique (NFFT<N is possible using polyphase techniques but is not likely a case of interest here). The zero-pad operation may be useful for several reasons. First, the detection process within harmonic analysis performs a subtraction of discrete harmonic (point source) components in the data. This is done by estimating the unknown line component parameters: wavenumber, amplitude, and phase. Many algorithms exist for estimating parameters of sinusoids in noise. Assuming that multiple interferers are sufficiently separated, these parameters may be conveniently estimated using FFT techniques. The precision to which this can be accomplished, without additional techniques such as curve fitting between FFT bins, is directly proportional to the fineness of the FFT spacing in wavenumber space. The zero-padding operation is an efficient method for increasing this fineness. The zero-padding is also useful in the smoothing operation referred to as free parameter expansion, FPE.
Discrete Line Component Processing (Harmonic Analysis)
The harmonic analysis algorithm operates on the eigencoefficients, Ym,d(l), to determine the presence of discrete line components. A detection statistic is computed as the ratio of the power in the line component subspace to the power outside that subspace in the region fo−W<f≦fo+W.
The choice of threshold, γTH, can be determined using a Neyman-Pearson criterion assuming Gaussian noise statistics. Practically, it may also be useful to define a minimum limit allowable for detection, e.g., γmin(dB)=10 log10(γmin), such that
γTH=max(γNP,γmin) (4.8)
with γmin(dB)=3 dB typically. This reduces excessive false detections and has minimal impact on overall performance, since the harmonic analysis is used primarily to eliminate high powered, not low powered, discrete interference. This test is valid for a single line component present within the analysis region, fo−W<f≦fo+W. If the interference environment is dense with respect to the array resolution, additional tests such as a double F line test may be appropriate.
For a detected line component, it may be assumed that the wavenumber remains constant across all snapshots. For the kth line component the wavenumber, ψk, is used to estimate the remaining parameters per snapshot and form the overall covariance matrix. With sufficient zero-padding, ψk can be estimated as
The remaining parameters may be estimated per snapshot using matched filter techniques. Defining the reference sinusoid waveform
sref,k=((ejψkn))n
the complex-amplitude and interferer power are estimated as
While not required for the processing, the interferer to noise ratio for each detected discrete component may be estimated. This is useful when generating a composite spectrum for visualization. A composite spectrum may be generated based on the estimated continuous background spectrum of the residual snapshot data (post harmonic analysis), with numerical insertion of the discrete components. The insertion technique may use knowledge of the INR to properly represent the uncertainty of a particular estimate.
The influence of the K detected components may be removed from the snapshot data to produce the residual data snapshots, xb,m. This can be accomplished with one of two methods.
Method 1. Coherent Subtraction
Method 2. Null Projection
xb,m=PK⊥xm (4.16)
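Both removal methods can be sketched compactly as below, assuming the detected wavenumbers ψk and the reference sinusoid convention of Eqn. (4.10); the per-snapshot matched-filter amplitude estimate corresponds to the coherent subtraction path, and the projection corresponds to Eqn. (4.16). Names are illustrative.

import numpy as np

def coherent_subtraction(X, psi_hat):
    # Method 1: estimate the complex amplitude of each detected component per snapshot
    # by matched filtering, then subtract it. X: snapshots, shape (M, N).
    M, N = X.shape
    Xb = X.astype(complex).copy()
    for psi_k in psi_hat:
        s_ref = np.exp(1j * psi_k * np.arange(N))              # reference sinusoid, Eqn. (4.10)
        a_hat = Xb @ s_ref.conj() / (s_ref.conj() @ s_ref)     # per-snapshot amplitude estimate
        Xb -= np.outer(a_hat, s_ref)
    return Xb

def null_projection(X, psi_hat):
    # Method 2: project each snapshot onto the orthogonal complement of the detected
    # components, x_b,m = P_K-perp x_m, Eqn. (4.16).
    N = X.shape[1]
    V = np.stack([np.exp(1j * p * np.arange(N)) for p in psi_hat], axis=1)   # (N, K)
    P_perp = np.eye(N) - V @ np.linalg.pinv(V)
    return X @ P_perp.T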
Continuous Background Spectrum
Once harmonic analysis is complete the residual snapshot data may be used to compute a final smooth, continuous background spectrum. The snapshot data may be windowed and fast Fourier transformed to produce eigencoefficients
The eigencoefficients may be used to produce the individual eigenspectra.
The individual eigenspectra may be linearly combined according to a set of weights
The weights, hd(k), may be fixed (e.g., for an underlying white spectrum) or may be determined adaptively.
Covariance Matrix Computation
The final estimate of the covariance matrix may be formed using line component and continuous background spectrum products.
{circumflex over (R)}a=diag(σ12,σ22, . . . ,σK2) (4.20)
{circumflex over (V)}=[sref,1,sref,2, . . . ,sref,K] (4.21)
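The assembly of the final estimate can be sketched as below. It assumes the detected wavenumbers and powers from the harmonic analysis step, the background ψ-spectrum of the residual data, and the css_covariance_from_psi_spectrum() helper sketched earlier for the inverse transform of Eqn. (4.4); all names are illustrative.

import numpy as np

def assemble_css_covariance(psi_hat, sigma2_hat, P_background, N):
    # Discrete line components: V_hat R_hat_a V_hat^H, per Eqns. (4.20) and (4.21)
    n = np.arange(N)
    V = np.stack([np.exp(1j * p * n) for p in psi_hat], axis=1)    # (N, K)
    Ra = np.diag(sigma2_hat)                                       # Eqn. (4.20)
    R_lines = V @ Ra @ V.conj().T
    # Continuous background covariance from the residual-data spectrum (cf. Eqn. (4.4))
    R_background = css_covariance_from_psi_spectrum(P_background, N)
    return R_lines + R_background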
Composite Spectrum Generation
In a particular embodiment, the wavenumber spectrum, {circumflex over (P)}x (ψ), may be visualized directly in addition to forming the covariance matrix. The smooth continuous component is the direct output from the MTSE processing of the residual snapshot data, xb,m, yielding {circumflex over (P)}x,b(ψ). The discrete components previously estimated and subtracted are then added back into the numerical spectrum. Two points that may be considered in this context are: 1) numerical power should be conserved, and 2) better resolution or performance than is available with the data should not be implied.
Unbiased Spectral Estimate in White Noise
In this type of spectral estimation, each point in the spectral estimate is scaled such that it is an unbiased estimate of the noise power for a white noise input. In terms of classical PSD techniques, this implies that the window function has been scaled such that wHw=1.0. MTSE tapers may be scaled in accordance with this approach. Plane wave or discrete sinusoidal components experience a processing gain due to the coherent gain of the window function. The maximum gain may be obtained using
and equates to 10 log10(N) dB. Thus, a snapshot,
will produce an expected value at any power spectral estimate, E {{circumflex over (P)} (ψ)}=σ2, while a snapshot corresponding to a plane wave component, xm=Aovψ(ψo) will produce a value of E {{circumflex over (P)}(ψo)}≦NAo2. The coherent gain of the MTSE tapers can be found by computing
Because the choice of coherent gain is somewhat arbitrary for the re-inserted spectral component, it may be convenient to use the maximum theoretical processing gain, CG=10 log10 (N), such that the composite spectrum looks similar, on a relative scale, to that obtained using MVDR techniques.
Normalized SINR loss performance was assessed via simulation for a number of interference and noise scenarios. Performance was seen to converge with very few snapshots to near optimal, and in many cases with fewer snapshots than interferers. These simulation results are significantly better than what is achievable using diagonal loading or comparable reduced rank techniques. The simulations considered line component, spatially spread, and mixed spectra conditions. It was observed that performance could be improved by increasing the estimation accuracy of the harmonic analysis step to provide better cancellation of the line components within the data.
Correlated Signal and Interference
One of the properties of a wide sense stationary narrowband space-time process (as considered above) is that it can be represented as a sum of uncorrelated plane waves, distributed across all directions of arrival to the array. This model does not describe every situation that may be encountered. An example of another case of interest is when there is correlation between two or more plane wave components observed by the array. This may occur in situations of multipath or smart jamming. Under these conditions the covariance is a function of absolute position, not relative position, and the process is not wide sense stationary. Failure to account for the correlation within the data can lead to signal cancellation, and an overall loss in output SNR.
In a particular embodiment, the snapshot data may include both the signal of interest and interference correlated with it. Wavenumber or spatial spectra provide no correlation information. Covariance from spatial spectrum (CSS) may provide a level of robustness against the effects of correlated signal and interference on performance. CSS may be biased in two ways. A first bias is similar to the bias discussed above for the wide sense stationary process case, due to the method of spectral estimation. The second bias is specific to the correlation within the data. Performance may be assessed in comparison to CSS operating on uncorrelated data, as well as with the effective SINR metric, an appropriate measure of output SINR in the correlated signal and interference scenario. Using simulations, the bias attributable to the correlation component was found to have negligible impact on performance.
Covariance for Correlated Signals
In the case where the point source signals are correlated the space-time process is not spatially stationary. Referring back to the Cramér spectral representation of the stationary space-time process, the correlation violates the condition that disjoint regions in wavenumber space be uncorrelated. In terms of the covariance, the effect is seen as a dependence on absolute as well as relative position. This is visible when examining the covariance matrix based on the model for the snapshot data, xm.
where the background noise component and sensor noise component are combined together,
The covariance matrix is
E{xmxmH}=Rx=VRaVH+Rn (5.2)
where Ra=E {amamH}. At least in this context, Ra is not a diagonal matrix. The off diagonal terms represent the cross-correlation between the plane waves. Ra can be expressed as a combination of a diagonal matrix and an off-diagonal matrix.
Ra=Ra,U+Ra,C (5.3)
The subscript U is used to reinforce that the diagonal matrix relates to uncorrelated plane waves, while the subscript C is used to reinforce that the off-diagonal matrix corresponds to the terms representing the correlation. Thus,
Rx=VRa,UVH+Rn+VRa,CVH (5.4)
The first two terms of Eqn. (5.4) correspond to the covariance for the stationary process model. This portion of the overall covariance is referred to herein as:
Rx,U=VRa,UVH+Rn (5.5)
The remaining portion, due to the off-diagonal entries of the matrix Ra, is referred to in a similar fashion.
Rx,C=VRa,CVH (5.6)
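For concreteness, the decomposition of Eqns. (5.3) through (5.6) can be constructed directly, as sketched below for two correlated plane waves observed by a uniform linear array; the parameter names and the e^{jψn} manifold convention are assumptions.

import numpy as np

def correlated_two_source_covariance(N, psi1, psi2, p1, p2, zeta, sigma_n2):
    # p1, p2: source powers; zeta = E{a1 a2*} describes the correlation between them
    n = np.arange(N)
    V = np.stack([np.exp(1j * psi1 * n), np.exp(1j * psi2 * n)], axis=1)   # (N, 2)
    Ra_U = np.diag([p1, p2]).astype(complex)               # diagonal (uncorrelated) part
    Ra_C = np.array([[0.0, zeta], [np.conj(zeta), 0.0]])   # off-diagonal (correlation) part
    Rn = sigma_n2 * np.eye(N)
    Rx_U = V @ Ra_U @ V.conj().T + Rn                      # Eqn. (5.5)
    Rx_C = V @ Ra_C @ V.conj().T                           # Eqn. (5.6)
    return Rx_U + Rx_C                                     # Eqn. (5.4)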
Expected Value
To proceed in analyzing the expected value of the covariance matrix estimate, CSS with classical power spectral estimation may be used. As explained above, for a stationary process:
where the subscript U is used to reinforce that the process is stationary and that Ra,U is a diagonal matrix. The CSS covariance matrix estimate may be found from the windowed (tapered) snapshots, ym=xm⊙w, as
with expected value E {{circumflex over (R)}y}=Rx,U Rw. Alternatively, the windowed snapshot model for this case may be written as:
so that the tapered snapshots are distributed as
From Eqn. (5.8), DSR linearity properties and Eqn. (5.10):
E{{circumflex over (R)}y}=DSR(E{ymymH})=DSR(Rx,U⊙wwH)
The two expressions for the expectation are equivalent. Thus,
Rx,U⊙Rw=DSR(Rx,U⊙wwH)
The plane wave components may be correlated, so that
Eqn. (5.8) may be used, so that the expected value of the estimated covariance is
E{{circumflex over (R)}y}=DSR((Rx,U+Rx,C)⊙wwH)
Using the DSR properties, the expected value includes two terms
E{{circumflex over (R)}y}=DSR(Rx,U⊙wwH)+DSR(Rx,C⊙wwH)
From Eqn. (5.11), the first term is the CSS with classical spectral estimation covariance as if the process were in fact stationary, so the final result is
E{{circumflex over (R)}y}=Rx,U⊙Rw+DSR(Rx,C⊙wwH) (5.15)
Eqn. (5.15) implies that for a correlated signal and interference problem, the CSS with classical spectral estimation technique produces a covariance matrix estimate that is an estimate of the covariance as if the process were uncorrelated, Rx,U⊙Rw, plus an additional bias term, DSR(Rx,C⊙wwH).
CSS Performance with Correlated Signal and Interference
Correlated signal and interference introduces the potential for signal cancellation for some adaptive beamforming algorithms. A minimum variance distortionless response (MVDR) beamformer may be derived to be optimal when observing uncorrelated noise and interference only, i.e., a spatially stationary space-time process. The MVDR approach can be extended to the case when both the desired signal and interference are present in the snapshot data, the so-called minimum power distortionless response (MPDR) beamformer. MPDR attempts to minimize output power while constrained to be distortionless in the direction of the desired signal, wMPDR∝Rx−1s. This allows the desired signal through due to the distortionless constraint, but in the event of correlated interference the processor may use the interferer to destructively cancel the desired signal in the overall attempt to minimize output power.
Relative Contribution of Correlated and Uncorrelated Components
Questions that may be of interest for adaptive beamformers based upon the covariance from spatial spectrum estimate, referred to as {circumflex over (R)}CSS, include: 1) does {circumflex over (R)}CSS convey any information regarding the correlation component in the data if it exists, and 2) how well do beamformers based upon {circumflex over (R)}CSS perform compared to an adaptive processor using a covariance for the data where there is no correlation present? As explained above, the expected value of {circumflex over (R)}CSS contains the covariance if the data were uncorrelated plus an additional term.
The influence of the second term can be considered a bias. By considering the case of two interferers with no noise component, explicit expressions for the bias term show that for redundancy averaging the bias is not guaranteed to go to zero even if the array length is extended infinitely. Using the ratio of the Frobenius norm squared, ∥·∥F2, of the bias component to the unbiased component, as the array length is extended to infinity, the ratio goes to zero. This implies that while the bias term exists, its impact as measured by the relative power indicated by ∥·∥F2 may become insignificant as the array length increases.
A similar line of analysis may be followed using a two-tone scenario with no noise. The signals may have angles of arrival denoted as ψ1,ψ2 with corresponding array manifold response vectors, v1, v2, and respective variances σ12, σ22. The correlation between the two may be described by a magnitude and phase as ζ=E{a1(m)a2*(m)}=Aζejφζ. The ensemble covariance may include correlated and uncorrelated terms.
Of interest is whether the ratio of ∥·∥F2 shows that the bias term is insignificant within E {{circumflex over (R)}CSS}, compared to the same ratio for the ensemble covariance,
When Eqn. (5.18) is valid, it indicates that CSS has substantially diminished the contribution of the correlated component, as measured using ∥·∥F2. Consider the problem for an N element uniform linear array. The uncorrelated and correlated components for the ensemble covariance are
where Δψ=ψ2−ψ1. The expressions for the components of E {{circumflex over (R)}CSS} use the following property. For a Hermitian, Toeplitz matrix,
The Frobenius norm squared of A is
This yields a simplified expression for the ∥·∥F2 of the uncorrelated component of E {{circumflex over (R)}CSS}.
where ρw[n] is the sample autocorrelation of the taper used, w. The correlated component cannot be similarly reduced because of the summation term in the sample autocorrelation, ρCSS,C[n], with the simplest expression for arbitrary w given as
where
Normalized SINR Loss w.r.t. Uncorrelated Ensemble Covariance
Normalized SINR loss provides a useful performance metric for practical array lengths. Because MVDR is not designed for the correlated signal and interference case, normalized SINR loss calculations based on the ensemble covariance containing the correlation may be inappropriate. For example, they do not predict the detrimental effects of the signal cancellation described earlier. The normalized SINR loss may be used to understand performance, with the ensemble covariance for the uncorrelated data scenario, Rx,U, used as the reference covariance. This effectively compares the performance of CSS with the optimal beamformer as if there were no correlation contained in the data. The subscript RC is used to indicate that this is the normalized SINR loss for the correlated data case.
As a second measure of performance in addition to ξR
The difference between the performance measures of Eqn. (5.28) and Eqn. (5.29) is
ΔξdB=ξRC
where ξdB=−10 log10ξ.
Simulation was performed for a case of an N=32 uniform linear array observing correlated signal and interference. The simulation determined a predicted normalized SINR loss (dB) of CSS with classical spectral estimation as a function of separation between the sources and the correlation angle, with the correlation magnitude fixed at ζ=1. Performance was determined with respect to an optimal processor using an ensemble covariance where the signal and interference were uncorrelated. Such a processor may be the best possible when a goal is to reduce the impact of the correlation in the data. The simulated performance was within 0.2 dB almost everywhere, except when the sources were very close. Thus, the simulations indicate that CSS may perform nearly identically whether the signal and interference are correlated or not.
Results of the simulations indicate that while E{{circumflex over (R)}CSS} has a bias component when correlated signal and interference are present, the impact of the bias component is negligible, as performance is nearly identical to the uncorrelated signal and interference case. Covariance estimates based on the frequency-wavenumber spectrum have a decorrelating effect when signal and interference are correlated, and reduce the potential for signal cancellation to negatively impact performance. These results indicate that CSS techniques should perform consistently in the presence of correlated signal and interference, regardless of correlation coefficient or angle. Also, the performance should be in line with CSS performance as if the data were uncorrelated.
Redundancy Averaging
Redundancy averaging may be used to address a correlated signal and interference problem. Redundancy averaging takes advantage of the multiple available estimates of the space-time correlation at a given spatial lag by averaging them, and then generates a covariance matrix using the averaged values. For the uniform line array, this amounts to replacing diagonals in the sample covariance matrix with the average diagonal values. For redundancy averaging, an averaged diagonal value may be defined for the nth diagonal for the mth snapshot, xm=((xm[n]))n
where it is assumed that the sequence xm[β]=0 for β<0, β≧N. The sample autocorrelation for each snapshot, xm, is
Eqn. (5.31) applies a window sample autocorrelation, ρw[n], to the data sample autocorrelation, where
With these, the redundancy averaged values of Eqn. (5.31) can be written as
ρRA,m[n]=ρx,m[n]ρw[n] (5.34)
The final values may be averaged over all snapshots,
The covariance matrix for redundancy averaging is then formed from the sequence as
RRA=((ρRA[r−c]))r,c (5.35)
In terms of the DSR operation,
RRA=DSR(RSCM)TRA (5.36)
where
TRA=((ρw[r−c]))r,c (5.37)
By way of comparison to the CSS technique disclosed herein, the DSR operation in Eqn. (5.8), which is related to the CSS technique, is defined as a sum, not an average. The particular window sample autocorrelation, ρw[n]=(N−|n|)−1, used for redundancy averaging results in an unbiased expected value, E{ρRA[n]}=Rx[n] and E{RRA}=Rx, if the data is uncorrelated. However, it also results in RRA being an indefinite matrix. This is a familiar result in the context of time series analysis, as Eqn. (5.31) is the form of the unbiased estimator for the autocorrelation of a sequence. While unbiased, its form in Eqn. (5.34) shows the sequence is a product of two functions. The Fourier transform of this product is the estimate of the power spectral density. It is the convolution
where {circumflex over (P)}x,ψ(ψ)=F(ρx[n]) and Cw(ψ)=F(ρw[n]). For the particular ρw[n] used in redundancy averaging, Cw(ψ) is not strictly greater than zero, and as a result portions of {circumflex over (P)}RA(ψ) may become negative valued. This is an invalid condition for a power spectral density. In its matrix form, this condition presents itself by making the covariance RRA indefinite. This is undesirable, and is of particular concern for the redundancy averaging approach.
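As an illustration of the redundancy averaging operation described by Eqns. (5.31) through (5.35), the following Python/NumPy sketch averages the entries along each diagonal of the sample covariance matrix of a uniform line array and rebuilds a Hermitian Toeplitz estimate; the synthetic white-noise snapshots in the example are an assumption used only to exercise the function.

```python
import numpy as np
from scipy.linalg import toeplitz

def redundancy_averaged_covariance(X):
    """Redundancy averaging for a uniform line array (sketch of Eqns. (5.31)-(5.35)).

    X is an N x M matrix of snapshots.  Each diagonal (spatial lag n) of the
    sample covariance matrix is replaced by the average of its N - n entries,
    and the averaged lag values populate a Hermitian Toeplitz matrix.
    """
    N, M = X.shape
    R_scm = (X @ X.conj().T) / M
    rho = np.array([np.diagonal(R_scm, offset=-n).mean() for n in range(N)])
    return toeplitz(rho)          # second argument defaults to conj(rho): Hermitian Toeplitz

# Example: white-noise snapshots, for which the result should be close to the identity
rng = np.random.default_rng(0)
X = (rng.standard_normal((8, 400)) + 1j * rng.standard_normal((8, 400))) / np.sqrt(2)
print(np.round(np.abs(redundancy_averaged_covariance(X)), 2))
```

Averaging over the N − n available entries at lag n corresponds to the (N−|n|)−1 window sample autocorrelation noted above, so the indefiniteness concern raised for redundancy averaging applies to this sketch as well.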
Covariance Matrix Tapers
The covariance matrix taper (CMT) technique provides a measure of robustness to sample covariance matrix processing by modifying the sample covariance matrix with a taper matrix, TCMT, according to
RCMT=RSCM⊙TCMT (5.39)
The taper matrix, TCMT, is designed specifically to impart null widening properties, diagonal loading, or other features. The taper matrix is positive semi-definite, TCMT≧0, and Hermitian, TCMT=TCMTH. Additionally, TCMT may be a normalized diagonally homogeneous (NDH) matrix, meaning it is constant down its main diagonal. Because it is based on the sample covariance matrix, CMT does not attempt to address the correlated signal and interference condition. Additional processing, such as spatial smoothing, is necessary to mitigate signal cancellation.
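The following Python/NumPy sketch applies Eqn. (5.39) to a sample covariance matrix. The specific sinc-kernel taper used here is only one commonly cited null-widening choice (a Mailloux/Zatman style taper) and is an assumption of this example rather than a taper specified by the present disclosure; the parameter delta controls the amount of widening.

```python
import numpy as np

def cmt_covariance(R_scm, delta):
    """Covariance matrix taper, Eqn. (5.39): R_CMT = R_SCM (Hadamard product) T_CMT.

    The taper T[r, c] = sinc(delta * (r - c)) is Hermitian, constant along its
    main diagonal, and positive semi-definite for 0 <= delta <= 1, which imparts
    a null-widening effect when applied element-wise to the sample covariance.
    """
    n = np.arange(R_scm.shape[0])
    T = np.sinc(delta * (n[:, None] - n[None, :]))
    return R_scm * T                               # element-wise (Hadamard) product

# Example: taper a sample covariance matrix formed from synthetic snapshots
rng = np.random.default_rng(1)
X = (rng.standard_normal((10, 100)) + 1j * rng.standard_normal((10, 100))) / np.sqrt(2)
R_cmt = cmt_covariance((X @ X.conj().T) / X.shape[1], delta=0.1)
```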
Comparison Summary
Thus, the CSS technique disclosed herein provides benefits found in RA and CMT as well as additional benefits. The CSS technique uses the DSR operation, which is beneficial for correlated signal and interference. As additional positive attributes, CSS maintains positive definiteness with reasonable restrictions on the choice of window function and resultant power spectral density estimate. The DSR processing provides additional data averaging, or alternatively an increase in effective sample size, over CMT, which uses only the sample covariance matrix.
Non-Ideal Array Manifold Response
Certain CSS methods disclosed herein assume an underlying structure in which the observed narrowband space-time process includes sums of physically propagating plane waves. For an array of ideal omnidirectional sensors, the array manifold response takes on a form based on the complex exponential, v(k)=((exp[−jkTpn]))n. Real-world sensors and arrays may exhibit perturbations to this ideal response that alter the form of the encountered covariance from its assumed structure. For a uniform linear array, the ideal manifold response results in a Toeplitz covariance matrix. With array manifold response error, this Toeplitz structure may no longer hold. This is similar to the effect encountered when investigating the correlated signal and interference scenario. The ML estimate of an unstructured covariance is the sample covariance matrix, and the algorithms that build upon it are most applicable if the underlying problem is unstructured.
Non-ideal array manifold responses and their impact on structured covariance beamformer performance are discussed below. A technique to mitigate the impact based on data already available via CSS with MTSE processing is described. This is done by estimating the array manifold response corresponding to detectable line components in the spectrum, since these typically dominate the overall performance, and incorporating this discrete set of non-ideal response vectors into the covariance.
This approach differs from techniques that concentrate on estimating the steering vector (such as a principal eigenvector of a clutter covariance matrix for radar) or that concentrate on estimates of the actual sensor positions (such as for towed arrays). Such techniques may use GPS data for the tow vessel and a water pulley model for the array, and algorithms to optimize the “focus” or sharpness of the wavenumber spectrum or observations of broadband signals in the environment. As a narrowband processing algorithm operating directly on the snapshot data, the CSS technique disclosed herein may be valuable in concert with legacy techniques as the array displacement becomes significant, in particular in light of the similarity between the circular bow array deformity and observed array behavior during turning maneuvers.
Types of Non-Ideal Array Manifold Responses
Random Errors
Random errors in array manifold response may be considered zero mean perturbations from the nominal array response values. These may arise from non-ideal sensor gain or phase, manufacturing precision, or other component tolerances. Two approaches for describing these effects are described. The first provides a random error term for each physical quantity related to each sensor, namely, position, amplitude response, and phase response.
Each error term, Δi, is specified as a Gaussian random variable,
The “actual” value of a quantity, x, may be denoted using a subscript a (e.g., xa). The effective position of the nth sensor element with nominal position pn is
pn,a=pn+[Δx,Δy,Δz]T (6.1)
and the overall array manifold response is
va=(([1+ΔA]exp[jΔθ]exp[−j(kTpn,a)]))n (6.2)
This provides complete specification of the potential random errors, (Δx, Δy, Δz, ΔA, Δθ), and is useful for analysis of performance impacts in relation to each of the individual quantities. A simpler model reduces the random error contribution to a multiplicative effect on the amplitude and phase only. Define the vector
The non-ideal array manifold response is
va=v⊙h=chv+vg (6.4)
The constant ch is chosen such that
vaHva=vHv (6.5)
The ratio σg2/ch2 is a single metric that provides a measure of the difference between the ideal and actual array manifold responses. Expressed in dB as 10 log10(σg2/ch2), this value gives some indication of "how far down" the perturbation components are from the nominal response. The simpler model may be useful when the array response varies as a function of angle of arrival, while the more explicit model may be more appropriate for analyzing operation below the design frequency, where there is a non-zero virtual region and isotropic noise component.
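A small Python/NumPy sketch of the simpler multiplicative error model of Eqns. (6.3) through (6.5) follows. The per-element perturbation g is assumed to be zero-mean complex Gaussian with variance σg2, and ch is computed so that the norm constraint of Eqn. (6.5) holds approximately (the zero-mean cross term between v and vg is neglected); the array size and error level in the example are illustrative.

```python
import numpy as np

def perturbed_manifold(v, sigma_g, rng):
    """Simpler random-error model: v_a = c_h v + v_g, with v_g = v (Hadamard) g.

    g is a zero-mean complex Gaussian vector with per-element variance
    sigma_g**2; c_h is chosen so that v_a^H v_a is approximately v^H v
    (the zero-mean cross term is neglected; see Eqn. (6.5)).
    """
    N = v.size
    g = sigma_g * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    v_g = v * g
    c_h = np.sqrt(max(np.vdot(v, v).real - np.vdot(v_g, v_g).real, 0.0) / np.vdot(v, v).real)
    return c_h * v + v_g, c_h

# Example: 32-element ideal response at broadside with errors roughly 20 dB below nominal
rng = np.random.default_rng(2)
v = np.ones(32, dtype=complex)
sigma_g = 0.1
v_a, c_h = perturbed_manifold(v, sigma_g, rng)
print(10 * np.log10(sigma_g**2 / c_h**2))          # the (sigma_g^2 / c_h^2) metric, in dB
```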
Deterministic Errors
In addition to random errors, arrays may experience more deterministic types of array manifold response perturbation. For example, deformation of the array may cause a non-zero mean, non-random disturbance in the positions of the elements. With underwater acoustic arrays, such deformation can occur due to hydrodynamics for towed arrays in motion. Two types of positional errors for linear arrays, circular and partial circular bows, are considered below.
Impact to Structured Covariance Matrix ABF Performance
The technique of computing covariance based on estimates of the wavenumber spectrum may be influenced by factors that would impact the ability to accurately estimate that spectrum, or how accurately that spectrum (based on complex exponentials) represents the true underlying situation. Using the simple model described above, the actual array manifold response vector is va=chv+vg. The true array manifold response, v, is scaled and the additional bias term is vg=v⊙g. Because v(k)=((e−jkTpn))n,
so that the bias in any given realization is a complex Gaussian random vector. Consider a single interferer in uncorrelated white noise.
xm=vaa(m)+nm (6.7)
The covariance for this case for a given instance of vg is
E{xmxmH|vg}=σ2(|ch|2vvH+chvvgH+ch*vgvH+vgvgH)+Rn (6.8)
In any given instance of this scenario, the error is unknown but non-random, and will produce a particular wavenumber spectrum. The covariance over the ensemble of error vectors, vg, is
E{xmxmH}=σ2|ch|2vvH+Rn+σ2σg2I (6.9)
The ensemble power spectrum for this case appears as the original with the line component scaled by |ch|2, with an elevated noise floor corresponding to the σ2σg2I term. In a given realization though, it is a single vector, vg, causing the elevated sidelobes and not an ensemble of plane waves from all directions. As the ratio σg2/ch2 increases, the array manifold error grows and the rise in the perceived noise floor becomes clear.
The normalized SINR loss performance of structured covariance techniques may degrade due to random errors and may be proportional to the difference between the sidelobe levels due to the errors and the true noise floor. The expected increase in normalized SINR loss is equal to the rise in the noise floor.
ΔξdB≈max(0,INRdB+(σg2/ch2)dB) (6.10)
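For example, under Eqn. (6.10) an interferer at INRdB=30 dB with array manifold errors at (σg2/ch2)dB=−40 dB gives ΔξdB≈max(0, 30−40)=0 dB of additional loss, whereas errors at −20 dB give ΔξdB≈max(0, 30−20)=10 dB; these numerical values are purely illustrative.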
Deterministic array manifold response error also causes an impact to the estimated wavenumber spectrum. This impact is a function of both the positional displacement and the angle of arrival of the discrete point source. Due to the symmetry of the array bend, the sidelobe structure resulting from the circular bow is expected to be symmetric in wavenumber space. The circular bow distortion causes the single discrete source to appear as a symmetric, spatially spread interference.
Mitigation Techniques
Non-ideal array manifold response may be expressed in a snapshot model by adding an error vector (or calibration error), uk, to the ideal response vector, vk, in building the snapshots.
The impact of combinations of array manifold response errors and the background noise, nm, is not considered, since the dominant source of performance degradation is assumed to be caused by the large INR line components. Because the error vectors are additive to the ideal response vector, the snapshots include a set of terms corresponding to the ideal array manifold response and the additional error terms.
The ideal array manifold response drives the structure in the CSS algorithms. The case considered here is where the array manifold response errors exist, but are small in magnitude compared to the true array manifold response. This situation is referred to herein as “partially structured”, to reflect that the ideal array manifold response is still somewhat discernible in the data. The MTSE harmonic analysis still provides the estimates of the observable plane wave components, provided they maintain a sufficient SNR over the apparent raised noise floor such that line component detection is possible.
The random error case may be considered as a starting point to develop an algorithm to deal with random array manifold perturbation. In a particular embodiment, an optimal algorithm to deal with deterministic array manifold errors may estimate the error model parameters (H for a circular bow, H and L2 for a partial circular bow, or the parabolic parameters) and may use that information to assist in estimating the array manifold errors. The discussion below further describes a particular technique developed for random errors and considers to what extent the two types of non-random errors considered above may be processed effectively.
MMSE Unbiased Linear Estimate of Calibration Errors
A minimum mean squared error, linear unbiased estimate of calibration errors is explained below. For ease of explanation, there are assumed to be K line components, and M≧K snapshots observed. A snapshot model containing non-ideal array manifold responses, vk,a=vk+uk, provides a starting point. The collection of snapshots, m=1, . . . , M can be arranged as a matrix
X=[x1,x2, . . . ,xM]=(V+U)A+N (6.13)
where the individual matrices similarly include the original vectors, V=[v1, v2, . . . , vK], U=[u1, u2, . . . , uK], A=[a1, a2, . . . , aM], and N=[n1, n2, . . . , nM]. Here, a priori knowledge of the ideal array manifold responses, V, and signal amplitudes, A, may be assumed. In practice, the estimates provided by the MTSE harmonic analysis processing may be used for these values. The matrix A may be represented in an additional way, with each row representing the amplitude time series for the corresponding point source signal.
αkT=[ak(1),ak(2), . . . ,ak(M)] (6.14)
A=[α1,α2, . . . ,αK]T (6.15)
The array calibration errors U may be estimated given V and A. An initial step may be performed to subtract out the known line components.
Y=X−VA=UA+N (6.16)
Additional calculations may be performed subject to the following assumptions:
The array calibration errors, uk, are non-random but unknown and are different for each source, k=1 . . . K.
The noise terms are zero mean and independent of the array manifold responses and signal amplitudes, with covariance, E {nm nmH}=Rn.
The snapshots for different sample indices, m, are independent.
rank(A)=K, which is discussed further below. The rank-deficient situation, rank(A)<K, may occur when two signals are perfectly correlated with one another (the magnitude of the correlation coefficient is unity), or when the number of interferers exceeds the number of snapshots, K>M.
In a particular embodiment, a linear processor can be used to estimate the array manifold response error vectors from the observations. For each error vector (or for all ûk simultaneously)
{circumflex over (u)}k=Ywk,{circumflex over (U)}=YW (6.17)
where the matrices are W=[w1, w2, . . . , wK] and Û=[û1, û2, . . . , ûK]. The processor may be unbiased, so that E {ûk}=uk, E{Û}=U. Expanding out the expectation
E{{circumflex over (u)}k}=E{Ywk}=UAwk (6.18)
This implies that to be unbiased UAwk=uk, or
Awk=ek, orAW=I (6.19)
where ek is the elementary vector which is all zeros with a 1 in the kth position. This places K constraints on each weight vector, wk. Written out explicitly using a row definition of A
αjTwk=δ[j−k] (6.20)
where δ[j−k] is a Kronecker delta. In a particular embodiment, the estimation error variance may be minimized at the output for each filter, wk, given by
σw,k2=E[(uk−{circumflex over (u)}k)H(uk−{circumflex over (u)}k)] (6.21)
Using Eqn. (6.17) in Eqn. (6.21), the expression for the variance reduces to
σw,k2=wkHE[NHN]wk (6.22)
To find E {NH N} recall that N=[n1, n2, . . . , nM], where the individual nm are i.i.d. with covariance, Rn.
Inserting Eqn. (6.23) into Eqn. (6.22) gives the noise output power.
σw,k2=tr(Rn)wkHwk (6.24)
The constrained optimization problem may be set up to find the individual weights, wk.
The following cost function may be minimized using the method of Lagrange multipliers.
Taking the gradient w.r.t. wk and setting to 0
yields the following expression
Defining the vector λk and matrix Λ respectively,
λk=[λk,1,λk,2, . . . ,λk,K],Λ=[λ1T,λ2T, . . . ,λKT]T (6.29)
then Eqn. (6.28) can be written as
The entire solution can be expressed for all k simultaneously as
Inserting Eqn. (6.31) into the constraint Eqn. (6.19), and solving for the constraint matrix, Λ
Λ=−tr(Rn)(AAH)−1 (6.32)
The earlier assumption that rank(A)=K implies that (AAH)−1 exists. Inserting Eqn. (6.32) back into Eqn. (6.31) produces the overall solution for the weights.
W=AH(AAH)−1 (6.33)
which is the Moore-Penrose pseudo-inverse of A. Using Eqn. (6.33) in Eqn. (6.17), the minimum variance unbiased linear estimate of the non-ideal array manifold response error vectors is
{circumflex over (U)}=YW=YAH(AAH)−1 (6.34)
In situations where M<K, rank(A)<K and the inverse (AAH)−1 does not exist. In this situation, the processing may be restricted to estimating a subset of the array manifold error vectors, ûk, corresponding to the M largest sources. In practice, there may be issues with the conditioning of the matrix AAH, and a value less than M may work better numerically. Performance may degrade in these scenarios compared to an optimal adaptive beamformer because there is not enough snapshot information to estimate all the random error vectors from the data.
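A Python/NumPy sketch of the estimator of Eqns. (6.16), (6.33), and (6.34) follows. The pseudo-inverse cutoff rcond and the synthetic example values are assumptions of this sketch, included to reflect the conditioning concerns noted above; in practice V and A would come from the MTSE harmonic analysis estimates.

```python
import numpy as np

def estimate_calibration_errors(X, V, A, rcond=1e-6):
    """Minimum variance unbiased linear estimate U_hat = (X - V A) A^H (A A^H)^{-1}.

    X is N x M snapshots, V is N x K ideal manifold responses, and A is the
    K x M amplitude time series.  np.linalg.pinv realizes the Moore-Penrose
    pseudo-inverse of Eqn. (6.33); rcond guards against poor conditioning.
    """
    Y = X - V @ A                                  # remove the known line components, Eqn. (6.16)
    return Y @ np.linalg.pinv(A, rcond=rcond)      # U_hat = Y A^H (A A^H)^{-1}, Eqn. (6.34)

# Synthetic check: K = 2 sources, M = 50 snapshots, N = 16 sensors, small calibration errors
rng = np.random.default_rng(3)
N, K, M = 16, 2, 50
V = np.exp(-1j * np.outer(np.arange(N), [0.5, 1.2]))
U = 0.05 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
A = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
X = (V + U) @ A + 0.01 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
U_hat = estimate_calibration_errors(X, V, A)
print(np.linalg.norm(U_hat - U) / np.linalg.norm(U))   # small relative error expected
```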
Maximum Likelihood Estimate of Calibration Errors
In the discussion above, the minimum mean square error linear unbiased estimate of the array manifold response error vectors was determined. This used knowledge of the number of point source signals, K, the signal amplitudes, A, and the ideal array manifold response vectors, V. The snapshots were assumed to be independent. The solution did not require definition of the statistics of the noise term, nm, other than to state that the noise was independent of the other quantities in the data. In the discussion below, the maximum likelihood estimation of the array manifold response error vectors is derived. The same assumptions are used as were used above. Additionally, the noise terms are specified as complex Gaussian,
The individual snapshots
xm=(V+U)am+nm (6.35)
are complex Gaussian random vectors, with non-zero mean, mm=[V+U]am
Grouping all snapshots, X=[x1, x2, . . . , xM] and amplitude values, A=[a1, a2, . . . , aM], the complex Gaussian random matrix X has non-zero mean M=(V+U) A. With the snapshots being independent, the probability density for X is
Eqn. (6.37) can be expressed using the trace operator, tr( ).
fx(X)=π−NMdet(Rn)−Mexp(−tr[(X-M)HRn−1(X-M)]) (6.38)
For the available snapshot data, the likelihood function for a given estimate of the array manifold errors is
fX|Û(X|{circumflex over (U)})=π−NMdet(Rn)−Mexp(−tr[(X-M({circumflex over (U)}))HRn−1(X-M({circumflex over (U)}))]) (6.39)
The maximum likelihood estimate of the array manifold error vectors, ÛML, is a matrix that maximizes fX|Û(X|Û).
Beginning with the log-likelihood function, LX|Û(X|Û)=ln fX|Û(X|Û).
where the constants C1, C2 are not functions of Û. Because {circumflex over (U)}ML is a maximum of LX|Û(X|Û),
Expanding out terms in Eqn. (6.41), and applying the partial derivative through the tr ( ) function
{circumflex over (U)}ML=XAH(AAH)−1−V (6.43)
Alternatively, starting with
Y=X−VA (6.44)
and applying the same procedure
{circumflex over (U)}ML=YAH(AAH)−1 (6.45)
Eqn. (6.45) matches the result of Eqn. (6.34). It is interesting to note that the covariance of the noise term nm does not have an impact on the final solution. This is particularly useful, since this covariance is not known at this point in the processing. Also, it establishes that problems should not be encountered if the underlying noise is non-white, due to either spatially spread interference or the combination of visible/virtual space when operating below the design frequency.
Non-white noise may still cause issues in situations where the spectral dynamic range of the noise is very large. This could be the case when operating below the design frequency, where the isotropic noise component is much larger than the sensor noise component. The apparent increase in noise floor due to the effects of array manifold response errors accumulated by the spatially spread isotropic noise process may mask the true sensor noise components.
Procedure
The discussion below describes a procedure that uses the results of the derivation of Eqns. (6.45) and (6.34) to supplement the CSS with MTSE approach with additional non-ideal array manifold response error vector information. The dominant sources of normalized SINR loss are the error vectors associated with the high INR point sources in the data. The term partially structured is used to indicate that some of the underlying structure remains observable. Fundamentally, being partially structured implies that the high INR line components in the spectrum can still be detected via the MTSE harmonic analysis process. In general, this indicates some amount of positive SNR of the line component above the raised noise floor caused by the array manifold errors.
The following procedure may be used to incorporate non-ideal array manifold response error vector information into an overall covariance matrix estimate; an illustrative sketch of this procedure follows the listed items.
1. Use MTSE harmonic analysis to detect and estimate the number of line components, K, and their parameters, vk, and ak(m).
2. Use Eqns. (6.44) and (6.45) to estimate array calibration error vectors, uk, for these line components.
3. Subtract the discrete components from the snapshot data to form the residual, Xres=X−(V+U)A.
4. Optionally, items 1, 2 and 3 may be iterated.
5. Estimate the final residual “continuous” background spectrum, {circumflex over (P)}res(·), and form RMTSE,res.
6. Form an overall estimate of the covariance matrix
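An illustrative Python/NumPy sketch of the procedure is given below. The routines mtse_harmonic_analysis and estimate_background_covariance are hypothetical placeholders standing in for the MTSE harmonic analysis and residual background CSS processing described elsewhere herein, and the combination of terms in the last line is one plausible assembly of item 6 rather than a verbatim restatement of the equation referenced above.

```python
import numpy as np

def css_with_error_vectors(X, mtse_harmonic_analysis, estimate_background_covariance):
    """Sketch of the listed procedure; the two callables are hypothetical placeholders.

    mtse_harmonic_analysis(X) -> (V, A): ideal responses and amplitude series for the
    detected line components.  estimate_background_covariance(X_res) -> R_res: CSS
    covariance estimate of the residual continuous background spectrum.
    """
    V, A = mtse_harmonic_analysis(X)                 # item 1: detect/estimate line components
    U_hat = (X - V @ A) @ np.linalg.pinv(A)          # item 2: calibration errors, Eqn. (6.45)
    X_res = X - (V + U_hat) @ A                      # item 3: remove the discrete components
    # Item 4 (optional): repeat items 1-3 using the refined estimates.
    R_res = estimate_background_covariance(X_res)    # item 5: residual background covariance
    V_a = V + U_hat                                  # non-ideal responses for the detected lines
    R_a = (A @ A.conj().T) / A.shape[1]              # line-component power (and cross-power) estimate
    return V_a @ R_a @ V_a.conj().T + R_res          # item 6: overall covariance estimate (one plausible form)
```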
Comparison-MTSE and MVDR Spectra
The error vector processing is a technique for adapting the CSS with MTSE approach to handle non-ideal array manifold response, where the response retains sufficient "likeness" to the ideal that the parameters of the large discrete components in the spectrum can be estimated. Using this information, the error vector(s) can be estimated and used in the formulation of the final estimate of covariance. Simulations that have been performed indicate that this processing is effective and that the overall normalized SINR loss is equivalent to reduced rank processing of the data using MWF. Array manifold error processing may not be a replacement for unconstrained covariance and reduced rank techniques in all circumstances, but it offers a way to extend the basic CSS with MTSE processing based on available data products to address the issue of non-ideal response.
An additional byproduct of this processing is a final MTSE wavenumber spectrum with the influence of the error vectors for the large discretes removed. One can qualitatively compare the utility in this final estimate of the spectrum to MVDR spectra computed using the ensemble covariance matrix.
The techniques described above may be used to estimate the error vectors associated with the strongest INR line components in the data using the detection and estimation parameters available from harmonic analysis. Through simulations, this technique was seen to improve performance in the face of non-ideal array manifold response, and normalized SINR loss was seen to be comparable to reduced rank techniques provided the harmonic analysis could reliably detect the line components. As a beneficial byproduct, MTSE could be made to produce an estimate of the wavenumber spectrum as if the array manifold had been restored to ideal.
Extensions for Arbitrary Geometry
Arrays with regular element spacing lend themselves to a convenient simultaneous estimation of visible space components of the space-time process and virtual space sensor noise using classical spectral estimation techniques and efficient FFT computation. For arbitrary array geometries, proper estimation of the virtual space sensor noise may require more effort. The following discussion explains an approach for doing this based upon analyzing the covariance for isotropic noise. Methods to extend the disclosed CSS with MTSE techniques for application to arbitrary arrays are also explained below.
Sensor Noise with Arbitrary Array Geometry
The covariance matrix of interest, Rx, includes two subspaces corresponding to visible and virtual regions.
<Rx>=<Rvs>+<Rvr> (7.1)
It is assumed that the subspaces <Rvs> and <Rvr> are approximately orthogonal, although due to the array geometry the transition between the two may not be sharp. The visible region subspace is approximately the subspace defined by the covariance associated with 3D isotropic noise.
<Rvs>≈<Riso> (7.2)
This is convenient because the 3D isotropic noise is specified in terms of angle of arrival to the array. This has an intuitive physical interpretation regardless of array geometry. Additionally, from Eqn. (3.31), the contribution of the sensor noise component appeared as a 3D isotropic noise component when restricting attention to the visible region. This may be useful later when considering the positive definiteness of the estimated covariance.
Because of the spatial stationarity of the 3D isotropic noise, the covariance is a function of the difference in position, Δp. This difference may be expressed in Cartesian coordinates {Δpx, Δpy, Δpz} or spherical coordinates {s, γ, ζ}.
The covariance between two omnidirectional sensors in 3D isotropic noise has a form
The covariance matrix for an array of sensors is populated by values of this function where the relative position is Δp=pr−pc.
Riso=((Riso(Δp)))r,c=((Riso(s)))r,c (7.5)
Eqn. (7.5) may be used to compute Riso, and to perform an eigendecomposition of the matrix. The decomposition is ordered according to decreasing eigenvalues.
The eigenvalues for this problem, λn, are a measure of the concentration of each eigenvector within the visible region subspace. The eigenvectors may be divided into two sets, those mostly contained in the visible region subspace and those mostly contained in the virtual region subspace. The boundary may be apparent from inspection of the eigenvalues.
Denoting the number of eigenvectors determined to be in the visible space as Nvs, the explicit makeup of Riso can be shown.
The visible region subspace is approximately the subspace spanned by the first Nvs eigenvectors.
<Rvs>≈<q0,q1, . . . ,qNvs-1>
The eigenvectors, Qiso, make a complete orthonormal set for <CN×N>. The covariance matrix Rx is full rank, so <Rx>=<CN×N>=<Qiso>. Thus,
<Rvr>≈<qNvs,qNvs+1, . . . ,qN-1>
Grouping the appropriate eigenvectors together for the visible and virtual region subspaces,
Qvs=[q0,q1, . . . ,qNvs-1],Qvr=[qNvs,qNvs+1, . . . ,qN-1] (7.9)
projection matrices for those subspaces may be formed
Pvs=QvsQvsH,Pvr=QvrQvrH=I-QvsQvsH (7.11)
Let Nvr=N−Nvs. The sensor noise power component within the virtual region subspace can be estimated from the available snapshots as
From the form of the projection matrix, Pvr, the sensor noise power estimate in Eqn. (7.12) is equivalent to
Eqn. (7.13) shows that {circumflex over (σ)}w2 is the average power across each of the orthonormal basis vectors, qn, in the subspace. The overall estimate for the covariance Rx then uses this estimate as
{circumflex over (R)}x={circumflex over (R)}vs+{circumflex over (σ)}w2Pvr (7.14)
Use of the projection matrix Pvr in Eqn. (7.14) avoids double counting the sensor noise component measured in the visible region subspace and contained in {circumflex over (R)}vs. If the sensor noise is significantly below the continuous background noise component of the space-time process throughout the visible region, one could use the simpler expression
{circumflex over (R)}x={circumflex over (R)}vs+{circumflex over (σ)}w,vr2I (7.15)
at the expense of double counting the sensor noise in the visible region. In either case some representation of the sensor noise in the virtual region may be used.
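The following Python/NumPy sketch walks through Eqns. (7.5), (7.11), (7.12), and (7.14) for an arbitrary geometry. The sinc form sin(ks)/(ks) is assumed for the 3D isotropic noise spatial correlation of Eqn. (7.4), and the eigenvalue cutoff used to separate the visible and virtual subspaces is an illustrative stand-in for the inspection-based selection described above.

```python
import numpy as np

def isotropic_covariance(positions, wavelength):
    """R_iso of Eqn. (7.5), assuming the sinc spatial correlation sin(k s)/(k s)."""
    k = 2.0 * np.pi / wavelength
    s = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return np.sinc(k * s / np.pi)                    # np.sinc(x) = sin(pi x)/(pi x); equals 1 at s = 0

def virtual_region_projection(R_iso, cutoff=1e-3):
    """Eigendecomposition of R_iso and the virtual-region projection of Eqn. (7.11)."""
    lam, Q = np.linalg.eigh(R_iso)
    order = np.argsort(lam)[::-1]                    # decreasing eigenvalue order
    lam, Q = lam[order], Q[:, order]
    N_vs = int(np.sum(lam > cutoff * lam[0]))        # illustrative boundary for the visible subspace
    Q_vr = Q[:, N_vs:]
    return Q_vr @ Q_vr.conj().T, R_iso.shape[0] - N_vs

def add_virtual_region_noise(R_vs, X, P_vr, N_vr):
    """Eqns. (7.12) and (7.14): estimate sensor noise power in the virtual region, then form R_x."""
    R_scm = (X @ X.conj().T) / X.shape[1]
    sigma_w2 = np.real(np.trace(P_vr @ R_scm)) / max(N_vr, 1)   # guard for the degenerate N_vr = 0 case
    return R_vs + sigma_w2 * P_vr

# Example: an irregular 12-element line-like array with roughly half-wavelength spacing
rng = np.random.default_rng(4)
pos = np.column_stack([np.cumsum(rng.uniform(0.4, 0.6, 12)), np.zeros(12), np.zeros(12)])
P_vr, N_vr = virtual_region_projection(isotropic_covariance(pos, wavelength=1.0))
```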
Positive Definiteness
The estimated covariance must be positive definite to be meaningful for array processing. For regularly spaced arrays, the estimated covariance is positive definite when the estimated frequency-wavenumber spectrum, {circumflex over (P)}(k)>0, and the estimated sensor noise, {circumflex over (σ)}w2>0, as explained above. The estimated covariance is also positive definite for arbitrary geometry arrays, where the covariance is estimated as the sum of the visible space covariance, {circumflex over (R)}vs, and the sensor noise in the virtual space.
Method 1
Eqn. (7.14) provides an accurate method for accounting for the sensor noise in the covariance matrix.
{circumflex over (R)}x={circumflex over (R)}vs+σw2Pvr (7.16)
The estimate for the covariance corresponding to the visible region of the array is
The estimate of the wavenumber spectrum in the visible region may be assumed to be greater than zero, {circumflex over (P)}vs(k)>0, which enables expressing it as including an estimate of the wavenumber spectrum of the observed process, {circumflex over (P)}f(k)≧0, and an estimate of the sensor noise seen in the visible region, {circumflex over (σ)}w,vs2>0.
{circumflex over (P)}vs(k)={circumflex over (P)}f(k)+{circumflex over (σ)}w,vs2 (7.18)
Using Eqns. (7.18) and (7.17) in the quadratic expression for positive definiteness
xH{circumflex over (R)}xx=xH((2π)−C∫ . . . ∫vs[{circumflex over (P)}f(k)+{circumflex over (σ)}w,vs2]v(k)vH(k)dk+{circumflex over (σ)}w2Pvr)x (7.19)
Carrying out the integration, this simplifies to
xH{circumflex over (R)}xx=xH{circumflex over (R)}fx+xH({circumflex over (σ)}w,vs2Riso+σw2Pvr)x (7.20)
The covariance estimate for the space-time process is positive semi-definite, {circumflex over (R)}f≧0. This follows from
xH{circumflex over (R)}fx=(2π)−C∫ . . . ∫vs{circumflex over (P)}f(k)|xHv(k)|2dk (7.21)
where {circumflex over (P)}f(k)≧0 and |xH v(k)|2≧0. For the terms relating to the noise estimates, Riso may be replaced with its eigendecomposition.
The matrix Pvr may also be defined in terms of the eigendecomposition of Riso
Define the combined eigenvalues, λn′, as
By the selection methods outlined above, the eigenvalues λn for 0≦n<Nvs are greater than zero, such that all λn′ are real-valued and greater than zero. Defining Λ′=diag (λ0′, λ1′, . . . , λ′N-1)
xH({circumflex over (σ)}w,vs2Riso+σw2Pvr)x=xHQisoΛ′QisoHx (7.25)
The matrix Qiso is unitary, therefore |QisoHx|2=|x|2≠0 for |x|2≠0. With positive, real-valued entries on the diagonal of Λ′
xH({circumflex over (σ)}w,vs2Riso+{circumflex over (σ)}w2Pvr)x>0 (7.26)
The overall matrix {circumflex over (R)}x is the sum of a positive semidefinite matrix, {circumflex over (R)}f, and a positive definite matrix, {circumflex over (σ)}w,vs2Riso+{circumflex over (σ)}w2Pvr, and therefore is positive definite, {circumflex over (R)}x>0.
Method 2
Another method for an arbitrary array geometry is
{circumflex over (R)}x={circumflex over (R)}vs+{circumflex over (σ)}w,vs2I (7.27)
As explained above, {circumflex over (R)}vs=({circumflex over (R)}f+{circumflex over (σ)}w,vs2Riso)≧0. The matrix I>0 and so {circumflex over (R)}x>0.
Covariance from Spatial Spectra (CSS) with MTSE
Design of Multi-Tapers
For arbitrary array geometry, the design of the multiple tapers for spectral estimation may be done by dividing the visible region into a search grid, where each grid point defines a region of analysis. For convenience, the angle domain, (θ, φ), is used herein, although operation may alternately be specified in wavenumber. In general form, the search grid covers a sphere, 0≦θ≦π, 0≦φ≦2π, although the array characteristics may be exploited to reduce this. For example, a planar array in an x−y plane cannot measure wavenumber in the z direction. Because of this, its search grid may be restricted to the hemisphere, 0≦θ≦π/2, 0≦φ≦2π, as the lower hemisphere is identical due to the ambiguity and provides no additional information.
One approach is to design a set of tapers at each grid location, (θo±Δθ, φo±Δφ), based on an eigendecomposition of the matrix
where v(θ,φ) is the array manifold response vector. The multi-tapers are selected as the eigenvectors corresponding to the D largest eigenvalues Rθ
wθ
Alternatively, one may design a single set of tapers, wo,d, perhaps at broadside for the array, and steer the main response axis (MRA) of this fixed set of weights through angle space.
wθ
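As a sketch of the grid-based taper design just described, the Python/NumPy code below forms the region matrix by numerically integrating v(θ,φ)vH(θ,φ) over (θo±Δθ, φo±Δφ) and keeps the eigenvectors associated with the D largest eigenvalues as tapers. The explicit form of the region matrix is not reproduced above, so the sin(θ) solid-angle weighting and the grid resolution used here are assumptions of this example.

```python
import numpy as np

def manifold(positions, theta, phi, wavelength):
    """Ideal omnidirectional response v = ((exp[-j k^T p_n]))_n for arrival angle (theta, phi)."""
    k = (2.0 * np.pi / wavelength) * np.array([np.sin(theta) * np.cos(phi),
                                               np.sin(theta) * np.sin(phi),
                                               np.cos(theta)])
    return np.exp(-1j * positions @ k)

def design_tapers(positions, theta0, phi0, d_theta, d_phi, D, wavelength, n_grid=15):
    """Multi-tapers at one grid point: D dominant eigenvectors of the region-integrated v v^H."""
    N = positions.shape[0]
    R_region = np.zeros((N, N), dtype=complex)
    for theta in np.linspace(theta0 - d_theta, theta0 + d_theta, n_grid):
        for phi in np.linspace(phi0 - d_phi, phi0 + d_phi, n_grid):
            v = manifold(positions, theta, phi, wavelength)
            R_region += np.sin(theta) * np.outer(v, v.conj())    # assumed solid-angle weighting
    lam, Q = np.linalg.eigh(R_region)
    return Q[:, np.argsort(lam)[::-1][:D]]                       # taper set for this (theta0, phi0)
```

Steering a single broadside-designed taper set, as in the alternative described above, would typically replace the per-grid-point eigendecomposition with an element-wise phase shift of the fixed weights.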
Discrete Line Component Processing (Harmonic Analysis)
Defining the Multi-Taper Weight Matrix,
W(θo,Φo)=[wθ
the eigencoefficients are computed for each snapshot, xm, as
ym(θo,Φo)=WH(θo,Φo)xm (7.32)
The eigencoefficient output for a line component at (θo, φo) is given by
qθ
where 1 is the all-ones vector. This vector defines the subspaces used for computing a detection statistic.
The detection statistic may be computed as before
K is the number of detected line components. Each has its parameters estimated, ({circumflex over (θ)}k, {circumflex over (Φ)}k, âk(m)), and these are used to subtract the line component from the data. The projection or subtraction methods described above may be used. The variance for each line component may be estimated as
and may be used in the formation of the covariance matrix. This process may be iterated to successively process multiple line components in the data. The final residual snapshot data, xb,m, may include the smooth, continuous background content.
Background/Continuous Spectrum
Once harmonic analysis is complete the residual snapshot data, xb,m, may be used to compute a final smooth, continuous background spectrum. The eigencoefficients are computed
yb,m(θo,φo)=WH(θo,φo)xb,m=((yb,m(d)(θo,φo)))d (7.37)
The eigencoefficients are used to produce the individual eigenspectra.
{circumflex over (P)}b(d)(θo,φo)=|yb,m(d)(θo,φo)|2 (7.38)
The individual eigenspectra are then linearly combined according to a set of weights
The weights, hd(k), may be fixed, which may be optimal for an underlying white spectrum, or may be determined adaptively.
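A brief Python/NumPy sketch of Eqns. (7.37) and (7.38) with a fixed combination of the eigenspectra follows; the equal weighting across tapers and the averaging across snapshots are assumptions of this example, corresponding to the simple fixed-weight choice mentioned above.

```python
import numpy as np

def background_spectrum_point(W, X_res):
    """Continuous background spectrum estimate at one (theta_o, phi_o) grid point.

    W is the N x D multi-taper weight matrix for the grid point, and X_res is
    the N x M residual snapshot matrix; the D eigenspectra are combined with
    equal weights and averaged across snapshots.
    """
    Y = W.conj().T @ X_res             # D x M eigencoefficients, Eqn. (7.37)
    eigenspectra = np.abs(Y) ** 2      # individual eigenspectra, Eqn. (7.38)
    return float(eigenspectra.mean())  # fixed (uniform) combination
```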
Estimate Sensor Noise in Virtual Region
As described above, analyzing the covariance matrix of isotropic noise to determine an appropriate subspace for the virtual region for the array may provide a dimension, Nvr, and a projection matrix for that subspace, Pvr. The sensor noise may be estimated from the residual snapshot data, xb,m, as
Covariance Matrix Estimate
The final estimate of the covariance matrix may be formed from the estimated line components, the covariance from spatial spectrum of the residual continuous background, and the sensor noise component as
where
{circumflex over (R)}a=diag({circumflex over (σ)}12,{circumflex over (σ)}22, . . . ,{circumflex over (σ)}K2) (7.42)
and
{circumflex over (V)}=[v({circumflex over (θ)}1,{circumflex over (Φ)}1),v({circumflex over (θ)}2,{circumflex over (Φ)}2), . . . ,v({circumflex over (θ)}K,{circumflex over (Φ)}K)] (7.43)
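The final assembly may be sketched in Python/NumPy as follows. The exact combination in the equation referenced above is not reproduced here, so the form below (detected line components per Eqns. (7.42) and (7.43), plus the residual background covariance, plus the virtual-region sensor noise term of Eqn. (7.14)) is one plausible reading rather than a verbatim restatement.

```python
import numpy as np

def assemble_covariance(V_hat, sigma2_lines, R_background, sigma_w2, P_vr):
    """One plausible assembly of the final covariance estimate (sketch).

    V_hat collects the manifold responses of the detected line components,
    Eqn. (7.43); sigma2_lines holds their estimated variances, Eqn. (7.42);
    R_background is the CSS covariance of the residual continuous spectrum;
    and sigma_w2 * P_vr accounts for sensor noise in the virtual region.
    """
    R_a = np.diag(sigma2_lines)                                          # Eqn. (7.42)
    return V_hat @ R_a @ V_hat.conj().T + R_background + sigma_w2 * P_vr
```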
Thus, the CSS techniques disclosed herein are applicable to arrays with arbitrary geometry. In a particular embodiment, the sensor noise component in virtual space may be estimated and accounted for. A method for doing so based on the covariance matrix of 3D isotropic noise is explained above. Uniform circular arrays maintain a certain structure, and are not strictly speaking arbitrary, but do result in a covariance matrix which is not Toeplitz, and may be addressed using the same methods. Normalized SINR loss performance and average MTSE spectra have been assessed via simulation. The simulations indicate that performance is generally very close to optimal with little snapshot support, with some losses encountered near line components due to mismatch, and for weak INR line components at very low snapshot support due to inconsistent detection within harmonic analysis.
The system memory 630 may include volatile memory devices (e.g., random access memory (RAM) devices), nonvolatile memory devices (e.g., read-only memory (ROM) devices, programmable read-only memory, and flash memory), or both. The system memory 630 typically includes an operating system 632, which may include a basic input/output system for booting the computing device 610 as well as a full operating system to enable the computing device 610 to interact with users, other programs, and other devices. The system memory 630 also typically includes one or more application programs 634 and program data 636. For example, the application programs 634 and program data 636 may enable the computing device 610 to perform one or more aspects of signal processing, as described above. To illustrate, the application programs 634 may include instructions that, when executed by the processor 620, cause the processor 620 to receive sensed data from sensors of a sensor array (such as the sensor array 110 of
The processor 620 may also communicate with one or more storage devices 640. For example, the one or more storage devices 640 may include nonvolatile storage devices, such as magnetic disks, optical disks, or flash memory devices. The storage devices 640 may include both removable and non-removable memory devices. The storage devices 640 may be configured to store an operating system, applications, and program data. In a particular embodiment, the system memory 630, the storage devices 640, or both, include tangible, non-transitory computer-readable media.
The processor 620 may also communicate with one or more input/output interfaces 650 that enable the computing device 610 to communicate with one or more input/output devices 670 to facilitate user interaction. The input/output interfaces 650 may include serial interfaces (e.g., universal serial bus (USB) interfaces or IEEE 1394 interfaces), parallel interfaces, display adapters, audio adapters, and other interfaces. The input/output devices 670 may include keyboards, pointing devices, displays, speakers, microphones, touch screens, and other devices.
The processor 620 may communicate with other computer systems 680 via the one or more network interfaces 660. The one or more network interfaces 660 may include wired Ethernet interfaces, IEEE 802.11 wireless interfaces, Bluetooth communication interfaces, or other network interfaces. The other computer systems 680 may include host computers, servers, workstations, and other computing devices, such as the other system components 106 of
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. For example, method steps may be performed in a different order than is shown in the illustrations or one or more method steps may be omitted. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar results may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the description.
In the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, the claimed subject matter may be directed to less than all of the features of any of the disclosed embodiments.
The present application claims priority to U.S. Provisional Application No. 61/408,456 filed Oct. 29, 2010, which is incorporated by reference herein in its entirety.