Aspects of various embodiments are directed to radar apparatuses/systems and related methods.
In certain radar signaling applications, including but not limited to automotive and autonomous vehicle applications, high spatial resolution may be desirable for detecting and distinguishing objects which are perceived as being located at similar distances and/or moving at similar velocities. For instance, it may be useful to discern directional characteristics of radar reflections from two or more objects that are closely spaced, to accurately identify information such as location and velocity of the objects.
Virtual antenna arrays have been used to mitigate ambiguity issues with regard to apparent replicas in discerned reflections as indicated, for example, by the amplitudes of corresponding signals as perceived in the spatial resolution spectrum (e.g., amplitudes of main lobes or “grating lobes”). But even with many advancements in configurations and algorithms involving virtual antenna arrays, radar-based detection systems continue to be susceptible to ambiguities and in many instances yield less-than-optimal or less-than-desirable spatial resolution. Among these advancements, virtual antenna arrays have been used with multiple-input multiple-output (MIMO) antennas to achieve a higher spatial resolution, but such approaches can be challenging to implement successfully, particularly in rapidly-changing environments such as those involving automobiles travelling at relatively high speeds.
These and other matters have presented challenges to efficiencies of radar implementations, for a variety of applications.
Various example embodiments are directed to issues such as those addressed above and/or others which may become apparent from the following disclosure concerning radar devices and systems in which objects are detected by sensing and processing reflections of radar signals for discerning location information and related information including, as examples, distance, angle-of-arrival and/or speed information.
In certain example embodiments, aspects of the present disclosure are directed to radar-based processing circuitry, and/or use of such circuitry, configured to solve a sparse array AoA (angle of arrival) estimation problem in which it may be beneficial to recognize and overcome ambiguities for an accurate AoA estimation while also accounting for data processing throughput and computation resources. More specific aspects of the present disclosure are directed to overcoming the estimation problem by carrying out a set of steps which help to account for measurement errors and noise by iteratively updating measurement-error and noise parameters, and with the set of steps using a matrix-based model in which each of the possible spectrum support vectors is drawn from a long-tail (or Cauchy-like) distribution, for example, as may be used in known Sparse Bayesian models in automatic relevance determination methodologies.
In more specific example embodiments, the present disclosure is directed to a method and/or an apparatus involving a radar system having a logic circuit and an array (e.g., in which at least one uniform sparse linear array may be embedded) for processing radar reflection signals. Various steps or actions carried out by the radar logic circuitry include generating output data indicative of the reflection signals' amplitudes, and discerning angle-of-arrival information for the output data by correlating the output data with an iteratively-refined estimate of a sparse spectrum support vector (“support vector”). The estimate approach may include: assessing at least one most probable spectrum support vector from among a plurality of most probable spectrum support vectors modeled as random values in a matrix drawn from a long-tail distribution that is controlled as a function of a scaling parameter; and updating a set of parameters including a covariance estimate, the scaling parameter, and a noise variance parameter which is associated with a measurement error for said at least one most probable spectrum support vector from a previous iteration.
In other more specific examples, the above examples may involve one or more of the following aspects (e.g., such aspects being used alone and/or in any of a variety of combinations). The sparse spatial frequency support vector may be processed as a random variable using a matrix-based model, with the matrix-based model processed by Cholesky decomposition with each iterative update, so as to reduce computational burdens. The long-tail distribution may be a Cauchy distribution, and the set of parameters may further include a covariance estimate. Further, for each iterative update, certain of said at least one most probable spectrum support vector may be pruned, namely those having respective amplitudes which are insignificant relative to a statistical expectation of the at least one most probable spectrum support vector associated with a preceding iteration.
In further examples and also related to the above aspects, the computer processing circuitry may convert a modeled set of said at least one most probable spectrum support vector to a tractable Gaussian model of said at least one most probable spectrum support vector, and may apply a Laplace approximation for providing said tractable Gaussian model of said at least one most probable spectrum support vector.
In yet other specific examples, the steps may be carried out sequentially, without inversion of a matrix in the matrix-based model, with the update of the statistical expectation of the support vector following the update of the covariance estimate of the support vector, and the update of the noise variance parameter following the update of the statistical expectation of the support vector. Further, the set of parameters may include a noise variance parameter, and a precision vector associated with a random variable T such that the conditional probability of the support vector in a current iterative update, given T, is a joint Gaussian distribution, and the conditional probability of T itself is a Gamma distribution with multiple parameters chosen to promote sparse outcomes for the iteratively-refined estimate.
In the above examples and/or other specific example embodiments, further aspects are as follows. The iterative updating of the parameters may be carried out over an increasing iteration count which stops upon reaching or satisfying threshold criteria which may be a function of the multiple parameters and/or a function of a measurement error (e.g., having a Gaussian distribution). In response to the threshold criteria, resultant data may be generated to provide the discerned angle-of-arrival information as an output. Also, the measurement error may correspond to an error probability given the constraint of the support vector after its most recent iterative update. Further, to increase the accuracy, the array may have at least two embedded arrays, each of which is associated with a unique antenna-element spacing from among a set of unique co-prime antenna-element spacings.
The above discussion/summary is not intended to describe each embodiment or every implementation of the present disclosure. The figures and detailed description that follow also exemplify various embodiments.
Various example embodiments may be more completely understood in consideration of the following detailed description in connection with the accompanying drawings, in which:
While various embodiments discussed herein are amenable to modifications and alternative forms, aspects thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the disclosure to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure including aspects defined in the claims. In addition, the term “example” as used throughout this application is only by way of illustration, and not limitation.
Aspects of the present disclosure are believed to be applicable to a variety of different types of apparatuses, systems and methods involving radar systems and related communications. In certain implementations, aspects of the present disclosure have been shown to be beneficial when used in the context of automotive radar in environments susceptible to the presence of multiple objects within a relatively small region. While not necessarily so limited, various aspects may be appreciated through the following discussion of non-limiting examples which use exemplary contexts.
Accordingly, in the following description various specific details are set forth to describe specific examples presented herein. It should be apparent to one skilled in the art, however, that one or more other examples and/or variations of these examples may be practiced without all the specific details given below. In other instances, well known features have not been described in detail so as not to obscure the description of the examples herein. For ease of illustration, the same reference numerals may be used in different diagrams to refer to the same elements or additional instances of the same element. Also, although aspects and features may in some cases be described in individual figures, it will be appreciated that features from one figure or embodiment can be combined with features of another figure or embodiment even though the combination is not explicitly shown or explicitly described as a combination.
In a particular embodiment, a radar-based system or radar-detection circuit may include a radar circuit front-end with signal transmission circuitry to transmit radar signals and with signal reception circuitry to receive, in response, reflection signals as reflections from objects which may be targeted by the radar-detection circuit or system. In processing of data corresponding to the reflection signals, logic or computer-processing circuitry solves for a sparse array AoA (angle-of-arrival) estimation problem in which ambiguities may be recognized and overcome for an accurate AoA estimation. For a more accurate estimation, the circuitry should also account for measurement errors and noise, while also respecting data-processing throughput and computation-resource goals associated with practicable designs.
In a more specific example, aspects of the present disclosure are directed to overcoming the estimation problem by carrying out a set of steps which help to account for such measurement errors and noise by iteratively updating measurement-error and noise parameters, and by using a matrix-based model in which each of the possible spectrum support vectors is drawn from a distinct distribution, for example, as may be used in known Sparse Bayesian models in automatic relevance determination methodologies.
In a particular embodiment, a radar-based system or radar-detection circuit may include a sparse array, whether a multi-input multi-output (MIMO) or other type of array, embedded with one or multiple uniform sparse linear arrays, to process the reflection-related signals. From the sparse array, output data is presented as measurement vectors, indicative of signal magnitudes associated with the reflection signals, to another module for discerning angle-of-arrival (AoA) information.
The logic or computer processing circuitry associated with this AoA module determines or estimates the AoA information by correlating the output data with at least one spatial frequency support vector indicative of a correlation peak for the output data. For example, in one specific example of a method according to the present disclosure, the determination and/or estimation may be realized by carrying out a set of steps in connection with a matrix-based probabilities computation which help to account for measurement errors and noise by iteratively updating measurement-error and noise parameters, and with the set of steps using a matrix-based model in which each of the possible spectrum support vectors is drawn from a long-tail (e.g., Cauchy-like) distribution.
In a related more-specific example of the present disclosure, the long-tail distribution is processed or controlled as a function of a scaling parameter, with the scaling parameter being updated along with each iteration (e.g., along with the iterative updating of one or more other parameters). Such iterative refinement leads to a report, as output data generated from such processing, corresponding to an iteratively-refined estimate of a sparse spectrum support vector (“support vector”). The approach may more specifically include: assessing at least one most probable spectrum support vector from among a plurality of most probable spectrum support vectors modeled as random values in a matrix drawn from the long-tail distribution; and updating a set of parameters including a covariance estimate, the scaling parameter, and a noise variance parameter which is associated with a measurement error for said at least one most probable spectrum support vector from a previous iteration.
To help offset burdens in connection with processing of the matrix-based computations, in specific examples the above type of approach may be further enhanced by including, with each iterative update, an automatic pruning effort to eliminate certain of the less-probable support vectors from among the many most probable spectrum support vectors. These are selected as the support vectors having amplitudes which are insignificant relative to a statistical expectation of the support vector in a preceding iteration. The statistical expectation among a plurality of support vectors may be, for example, an average or a median vector or another middle-ground selection taken from within a limited range such as the mean or median plus and/or minus seven percent, as illustrated in the sketch below.
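By way of brief illustration only, the following sketch shows one way such a pruning rule might be expressed in code; the retention fraction and the use of the mean as the statistical expectation are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

def prune_supports(c_prev, support_indices, min_fraction=0.05):
    """Keep only supports whose amplitudes are significant relative to a
    statistical expectation (here, the mean amplitude) of the previous estimate.
    The 5% retention fraction is an illustrative assumption."""
    expectation = np.mean(np.abs(c_prev))   # e.g., mean; a median also fits the description
    keep = np.abs(c_prev) >= min_fraction * expectation
    return c_prev[keep], support_indices[keep]
```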
Certain more particular aspects of the present disclosure are directed to such use and/or design of the AoA module in response to such output data from a sparse array which, as will become apparent, may be implemented in any of a variety of different manners, depending on the design goals and applications. Accordingly, given that the data flow and related processing operations in such devices and systems is perceived as being performed in connection with the sparse array first, in the following discussion certain optional designs of the sparse array are first addressed and then the discussion herein shifts to such particular aspects involving use and/or design of the AoA module.
Among various exemplary designs consistent with the present disclosure, one specific design for the sparse array has it arranged to include a plurality of embedded sparse linear arrays, with each such array being associated with a unique antenna-element spacing from among a set of unique co-prime antenna-element spacings. As will become apparent, such co-prime spacings refer to numeric value assignments of spacings between antenna elements, wherein two such values are coprime (or co-prime) if the only positive integer (factor) that divides both of them is 1; therefore, the values are coprime if any prime number that divides one does not divide the other. As a method in use, such a radar-based circuit or system transmits radar signals and, in response, receives reflection signals as reflections from targeted objects which may be in a particular field of view. The sparse (virtual) array provides processing of data corresponding to the reflections by using at least two embedded (e.g., MIMO-embedded) sparse linear arrays, each being associated with one such unique antenna-element spacing. In other designs for the sparse array, there may be a single embedded uniform sparse linear array and/or multiple such arrays, each being associated with a unique antenna-element spacing which may or may not be selected from among a set of unique co-prime antenna-element spacings.
These unique co-prime antenna-element spacings may be selected to cause respective unique grating lobe centers along a spatially discrete sampling spectrum, so as to facilitate differentiating lobe centers from side lobes, as shown in experiments relating to the present disclosure. In this context, each sparse linear array may have a different detectable amplitude due to the associated grating lobe centers not coinciding, thereby mitigating ambiguity among side lobes adjacent to the grating lobe centers. In certain more specific examples also consistent with such examples of the present disclosure, the grating lobe center of one such sparse linear array is coincident with a null of the grating lobe center of another of the sparse linear arrays, thereby helping to distinguish the grating lobe center and mitigate ambiguous measurements and analyses.
In various more specific examples, the sparse array may include various numbers of such sparse linear arrays (e.g., two, three, several or more such sparse linear arrays). In each such example, there is a respective spacing value associated with each of the sparse linear arrays and, collectively, these respective spacing values form a co-prime relationship. For example, where the array includes two sparse linear arrays, there are two corresponding spacing values that form a co-prime relationship, namely a co-prime pair.
In other specific examples, the present disclosure is directed to radar communication circuitry that operates with first and second (and, in some instances, more) uniform MIMO antenna arrays that are used together in a non-uniform arrangement, and with each such array being associated with a unique antenna-element spacing from among a set of unique co-prime antenna-element spacings that form a co-prime relationship (as in the case of a co-prime pair). The first uniform antenna array has transmitting antennas and receiving antennas in a first sparse arrangement, and the second uniform antenna array has transmitting antennas and receiving antennas in a different sparse arrangement. The radar communication circuitry operates with the first and second antenna arrays to transmit radar signals utilizing the transmitting antennas in the first and second arrays, and to receive reflections of the transmitted radar signals from an object utilizing the receiving antennas in the first and second arrays. Directional characteristics of the object relative to the antennas are determined by comparing the reflections received by the first array with the reflections received by the second array during a common time period. Such a time period may correspond to a particular instance in time (e.g., voltages concurrently measured at feed points of the receiving antennas), or a time period corresponding to multiple waveforms. The sparse array antennas may be spaced apart from one another within a vehicle with the radar communication circuitry being configured to ascertain the directional characteristics relative to the vehicle and the object as the vehicle is moving through a dynamic environment. Estimates of the DOA may be obtained and combined to determine an accurate DOA for multiple objects.
The reflections may be compared in a variety of manners. In some implementations, a reflection detected by the first array that overlaps with a reflection detected by the second array is identified and used for determining DOA. Correspondingly, reflections detected by the first array may be offset in angle relative to reflections detected by the second array. The reflections may also be compared during respective instances in time and used together to ascertain the directional characteristics of the object. Further, time and/or space averaging may be utilized to provide an averaged comparison over time and/or space (e.g., after covariance matrix spatial smoothing).
In accordance with the present disclosure,
More specifically, in the example depicted in
After processing via the sparse (MIMO virtual) array via its sparse linear arrays, each with unique co-prime antenna-element spacing values, the module 134 may provide an output to circuitry/interface 140 for further processing. As an example, the circuitry/interface 140 may be configured with circuitry to provide data useful for generating high-resolution radar images as used by drive-scene perception processors for various purposes; these may include one or more of target detection, classification, tracking, fusion, semantic segmentation, path prediction and planning, and/or actuation control processes which are part of an advanced driver assistance system (ADAS), vehicle control, and autonomous driving (AD) system onboard a vehicle. In certain specific examples, the drive scene perception processors may be internal or external (as indicated with the dotted lines at 140) to the integrated radar system or circuit.
The example depicted in
The antenna array block 156 also has respectively arranged receive antenna elements for receiving reflections and presenting corresponding signals to respective amplifiers which provide outputs for subsequent front-end processing. As is conventional, this front-end processing may include mixing (summing or multiplying) with the respective outputs of the conditioning-amplification circuits, high-pass filtering, further amplification followed by low-pass filtering, and finally analog-to-digital conversion for presenting corresponding digital versions (e.g., samples) of the front end's processed analog signals to logic circuitry 160.
The logic circuitry 160 in this example is shown to include a radar controller for providing the above-discussed control/signal bus, and a receive-signal processing CPU or module including three to five functional submodules. In this particular example, the first three of these functional submodules as well as the last such submodule (which is an AoA estimation module as discussed with
The fourth submodule in this particular example is a MIMO co-prime array module which, as discussed above, may be implemented using at least two MIMO-embedded sparse linear arrays, each being associated with one such unique antenna-element spacing, such as with values that manifest a co-prime relationship.
Consistent with the logic circuitry 160,
In such examples using MIMO for a co-prime array, as in the module of the logic circuitry 160, an advantageous aspect concerns the suppression of spurious sidelobes as perceived in the spatial resolution spectrum in which the amplitudes of main lobes or “grating lobes” are sought to be distinguished and detected. Spurious sidelobes are suppressed by designing the MIMO co-prime array module as a composite array including at least two uniform linear arrays (ULAs) with co-prime spacings. By using co-prime spacing, ambiguities caused by the sidelobes are naturally suppressed. The suppression grows stronger when the composite ULA is extended to larger sizes by adding additional MIMO-based transmitters via each additional ULA, as the suppression of spurious sidelobes may otherwise be limited by the size of the two composite ULAs. In cases where higher suppression is desirable to achieve better target dynamic range, further processing may be implemented.
In experimentation/simulation efforts leading to aspects of the present disclosure, comparisons of a 46-element uniform linear array (ULA) and a 16-element sparse array (SPA) of 46-element aperture have shown that the SPA and ULA have similar aperture parameters but that the spatial undersampling of the SPA results in many ambiguous spurious sidelobes, and that further reducing the amplitudes of the spurious sidelobes results in a significant reduction of targets (or objects) being falsely identified and/or located. In such a spatial resolution spectrum, the amplitude peaks in the spectrum correspond to detected targets.
A more specific example of the present disclosure is directed to further mitigating the spurious sidelobe issues by setting up the issues using probability theories having related probability solutions. Using the sparsity constraint imposed upon the angular spectrum, such issues are known as L-1 norm minimization problems. Well-known techniques such as Orthogonal Matching Pursuit (OMP) may be used for resolving the sparse angular spectrum; however, the performance is impacted by the sensitivity to array geometry and support selection, sensitivity to angle quantization, and/or the growing burden of least-squares (LS) computation as more targets are found. Alternatively, and as a further aspect of the present disclosure, such performance may be improved by mitigating the angle quantization problem to a large degree by carrying out a set of steps which, as noted above, help to account for measurement errors and noise by iteratively updating measurement-error and noise parameters. These steps may use a matrix-based model in which each of the possible spectrum support vectors is drawn from a distinct distribution, for example, as may be used in known Sparse Bayesian models in automatic relevance determination methodologies.
Before further discussing these steps, the discussion first explains how such a sparse array may be used, according to various optional aspects of the present disclosure, to develop and generate the output data used by the AoA estimation module (e.g., 134 of
Optionally, the above-described sparse array may be constructed so that sidelobe suppression results from a repeatable (e.g., extendable via MIMO Tx) antenna geometry. The constructed sparse (e.g., MIMO virtual) array consists of 2 embedded ULAs, each with a unique element spacing. First, the two element spacing values are selected such that they are co-prime numbers (that is, their greatest common factor (GCF) is 1 and their lowest common multiple (LCM) is their product). Secondly, the co-prime pair is selected such that the two composite ULAs result in an array of a (sparse) aperture of size equal to the LCM and with a number of antenna elements equal to the number of physical Rx antenna elements plus 1. If such an array is found, the composite-ULA array can then be repeated every LCM elements by placing the MIMO Tx's LCM elements apart.
For example, for a system of 8 physical Rx antennas and 2 Tx MIMO antennas, a co-prime pair {4, 5} is selected to form the composite-ULA sparse array based on the following arrangement. This is shown in the table below:
The LCM of the {4, 5} co-prime numbers is 20, so, by placing MIMO Tx (transmit, as opposed to Rx for receive) antennas at {0, 20, 40, . . . } element positions (i.e., integer multiples of the LCM), the two ULAs can be naturally extended to form a larger composite-ULA sparse array. This requires careful selection of the co-prime pair. The case of a 2-Tx {4, 5} co-prime sparse array can be constructed based on the following arrangement, where the locations of the Tx antennas are marked with ‘T’, the locations of the Rx antennas are marked with ‘R’, and the constructed MIMO virtual antennas' locations are marked with ‘V’. The virtual array may consist of 2 embedded ULAs of 4- and 5-element spacings, both with the same (sparse array) aperture size of 36 elements, as below.
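Because the arrangement table itself is not reproduced here, the following short sketch (with an assumed Rx layout that is consistent with the stated rules, and with positions expressed in half-wavelength element units) illustrates how the {4, 5} co-prime MIMO virtual array positions may be constructed; the specific Rx positions shown are an assumption rather than a reproduction of the table.

```python
from math import gcd

spacings = (4, 5)                                  # co-prime element spacings (GCF is 1)
lcm = spacings[0] * spacings[1] // gcd(*spacings)  # LCM = 20 for the {4, 5} pair

# Assumed physical Rx layout: the union of a 4-spaced and a 5-spaced ULA
# within one LCM period (8 physical Rx elements).
rx = sorted({0, 4, 8, 12, 16} | {0, 5, 10, 15})
tx = [0, lcm]                                      # 2 MIMO Tx placed LCM elements apart

# MIMO virtual element positions are the pairwise sums of Tx and Rx positions.
virtual = sorted({t + r for t in tx for r in rx})
print("virtual elements:", virtual)
print("count:", len(virtual), "aperture:", max(virtual))   # 16 elements, 36-element aperture
```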
Assuming half-wavelength element spacing, for a filled ULA the grating lobe occurs in the angle spectrum outside the +/−90° field of view (FOV) so no ambiguity occurs. On the other hand, for the 4-element spacing ULA and the 5-element spacing ULA, grating lobes occur within the +/−90° FOV, causing ambiguous sidelobes. The use of co-prime element spacings, however, effectively reduces the amplitude level of the ambiguous sidelobes because the centers of the grating lobes of the two co-prime ULAs do not coincide until many repeats of the spatially discretely sampled spectrums. Because the centers of the grating lobes from the two ULAs do not overlap, the composite grating lobes have a lowered amplitude level due to the limited lobe width. Further, not only do the centers of the grating lobes not overlap, the center of the grating lobe of the first ULA coincides with a null of the second ULA such that it is guaranteed that the power from the two ULAs does not coherently add up in the composite array. This directly results in the suppression of the grating lobes in the composite array. As more MIMO Tx's are employed to extend the ULAs, the lobe width is further reduced such that the composite grating lobe levels are further reduced. Thus, aspects of the present disclosure teach use of a sparse MIMO array construction method that reliably reduces the ambiguous sidelobes (or composite grating lobes of the co-prime ULAs) and whose sidelobe suppression performance scales with the number of MIMO Tx's employed. Note that when more MIMO Tx's are employed, the overlap of the grating lobes further decreases. The co-prime pair guarantees a suppression level of roughly 50%. Additional suppression can be achieved by further incorporating additional co-prime ULA(s). For example, {3, 4, 5} is a co-prime triplet which suppresses grating lobes to roughly 30% of their original level, and {3, 4, 5, 7} is a co-prime quadruplet which suppresses grating lobes to roughly 25% of their original level, etc. The percentage of suppression corresponds to the ratio of the number of elements of a co-prime ULA to the total number of elements in the composite array.
Such aspects of the present disclosure may be further understood by way of further specific (non-limiting) examples through which reference is again made to the spatial resolution spectrum but, in these examples, with spatial frequency plots being normalized. These specific examples are shown in three pairs of figures identified as:
In
In
In other examples, relative to the example of
One may also compute individual co-prime ULA angle spectrums and detect angle-domain targets separately for each spectrum. In such an approach, only targets detected consistently in all co-prime ULA spectrums may be declared as being a valid target detection. In such an embodiment, which is consistent with the present disclosure, the individual co-prime ULAs' AoA spectrums are first produced and targets are identified as peaks above a predetermined threshold. Next, detected targets are checked to determine whether they are present in the same angle bin in all co-prime ULAs' spectrums. If a target is consistently detected in the same angle bin of all spectrums, it is declared. Otherwise, it is considered a false detection and discarded.
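The following is a minimal sketch of the consistency check just described, assuming that each co-prime ULA's AoA spectrum has already been computed on a common angle grid; the spectrum values, threshold, and bin indices used in the usage example are illustrative only.

```python
import numpy as np

def consistent_detections(spectra, threshold):
    """Return angle-bin indices detected in *all* co-prime ULA spectra.

    spectra: list of 1-D magnitude spectra, all on the same angle grid.
    threshold: detection threshold applied to each spectrum (illustrative).
    """
    # Bins exceeding the threshold in each individual spectrum.
    per_array = [set(np.flatnonzero(s > threshold)) for s in spectra]
    # A target is declared only where every spectrum agrees on the angle bin;
    # bins flagged in only some spectra are treated as (ambiguous) false detections.
    return sorted(set.intersection(*per_array))

# Illustrative use with two co-prime ULA spectra on a 256-bin angle grid.
rng = np.random.default_rng(0)
spec_a = rng.rayleigh(0.1, 256); spec_b = rng.rayleigh(0.1, 256)
spec_a[[40, 120]] += 1.0          # true targets appear in both spectra
spec_b[[40, 120]] += 1.0
spec_a[200] += 1.0                # ambiguous sidelobe appears in only one spectrum
print(consistent_detections([spec_a, spec_b], threshold=0.6))   # -> [40, 120]
```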
In general, conventional processing for AoA estimation effectively corresponds to random spatial sampling and this leads to a sparse array design. It can be proven that the maximum spurious sidelobe level is proportional to the coherence, so it follows that designing a matrix A that has low coherence leads to low spurious sidelobes, and vice versa. This demonstrates that by employing the extendable MIMO co-prime array approach of the present disclosure, reduced coherence can be achieved, and sparse recovery of targets can be obtained using greedy algorithms. In this context, such above-described MIMO array aspects are complemented by addressing the sparse spectral signal linear regression problem.
More specifically, to process the output of a sparse array, standard beamforming or Fourier spectral analysis based processing suffers due to the non-uniform spatial sampling which violates the Nyquist sampling rules. As a result, high spurious angle sidelobes will be present along with the true target beams. To mitigate the spurious sidelobes, one may impose sparsity constraints on the angle spectrum outputs and solve the problem accordingly. One class of algorithms, based on so-called greedy algorithms, originally developed for solving underdetermined linear problems, can be used for estimating the sparse spectrum output.
As is known, the greedy algorithm starts by modelling the angle estimation problem as a linear regression problem, that is, by modelling the array output measurement vector x as a product of an array steering matrix A and a spatial frequency support amplitude vector c plus noise e, where each column of A is a steering vector of the array steered to a support spatial frequency (f_1, f_2, . . . , f_M) in normalized units (between 0 and 1) upon which one desires to evaluate the amplitude of a target, with the spatial sampling positions (t_1, t_2, . . . , t_N) in normalized integer units. To achieve high angular resolution, a large number of supports can be established, thereby dividing up the 0˜2π radian frequency spectrum, resulting in a fine grid and a “wide” A matrix (that is, the number of columns, which corresponds to the number of supports, is much greater than the number of rows, which corresponds to the number of array outputs or measurements). Since A is a wide matrix, this implies that the number of unknowns (vector c) is greater than the number of knowns (vector x), and the solving of the equation x = Ac + e is an under-determined linear regression problem, where x and e are N×1 vectors, A is an N×M matrix, and c is an M×1 vector. This is seen below as:
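A reconstruction of the referenced matrix form, consistent with the definitions above (each entry of A being a complex exponential evaluated at support spatial frequency f_m and spatial sampling position t_n), is for example:

$$
x = A\,c + e,
\qquad
A =
\begin{bmatrix}
e^{j2\pi f_1 t_1} & e^{j2\pi f_2 t_1} & \cdots & e^{j2\pi f_M t_1}\\
e^{j2\pi f_1 t_2} & e^{j2\pi f_2 t_2} & \cdots & e^{j2\pi f_M t_2}\\
\vdots & \vdots & \ddots & \vdots\\
e^{j2\pi f_1 t_N} & e^{j2\pi f_2 t_N} & \cdots & e^{j2\pi f_M t_N}
\end{bmatrix},
$$

with x and e being N×1 vectors and c being an M×1 vector.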
Next, the greedy algorithm identifies one or more most probable supports and, assuming one such most probable support, measures this support's most probable amplitude; this is followed by cancellation of its contribution to the array output measurement vector to obtain a residual array measurement vector r. Based on the residual measurement vector, the process repeats until all supports are found or a stop criteria is met.
The identification of the most probable support (without loss of generality, assume one support is to be selected at a time) is performed by correlating the columns of A with the measurement vector, and the support frequency that leads to the highest correlation is selected. The correlation vector, y, can be directly computed by y = A^H x for the first iteration, where A^H denotes the transpose-conjugate (i.e., Hermitian transpose) of A. In general, for the k-th iteration, the correlation output is computed as y = A^H r_k, where r_k is the residual measurement vector computed in the (k−1)-th iteration and r_1 = x. The found support of the k-th iteration is then added to a solution support set s ∈ {i_1, i_2, . . . , i_k}.
The amplitude of the found support and the residual measurement vector can be obtained in any of a variety of ways. One known method, Matching Pursuit (MP), involves an iterative search through which the correlator peak is found, and the amplitude is simply selected as the correlator peak's amplitude. In another known method, Orthogonal Matching Pursuit (OMP), a least-squares (LS) fitted solution is selected as the amplitude. The LS fit is based on solving a new equation x = A_s c_s in the LS sense, where A_s consists of the columns of A of the selected support set and the elements of c_s are a subset of the elements of c for the selected supports. Once the amplitudes are found, the residual measurement vector is updated by r_{k+1} = x − A_s ĉ_s, where ĉ_s is the LS solution of c_s. One solution to the LS problem is simply the pseudo-inverse, from which ĉ_s is solved by ĉ_s = (A_s^H A_s)^{-1} A_s^H x (for a square or narrow matrix A_s) or ĉ_s = A_s^H (A_s A_s^H)^{-1} x (for a wide matrix A_s).
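As an illustrative sketch only (not a verbatim implementation from this disclosure), the OMP variant described above may be expressed along the following lines; the function name, the stop tolerance, and the use of a library least-squares solver in place of an explicit pseudo-inverse are assumptions.

```python
import numpy as np

def omp(A, x, max_supports, residual_tol=1e-6):
    """Orthogonal Matching Pursuit sketch for x ≈ A @ c with a sparse c."""
    N, M = A.shape
    support, r = [], x.copy()                     # r_1 = x
    cs = np.zeros(0, dtype=complex)
    for _ in range(max_supports):
        y = A.conj().T @ r                        # correlate columns of A with the residual
        i = int(np.argmax(np.abs(y)))             # support with the highest correlation
        if i not in support:
            support.append(i)
        As = A[:, support]
        cs, *_ = np.linalg.lstsq(As, x, rcond=None)   # LS-fitted amplitudes (pseudo-inverse)
        r = x - As @ cs                           # residual measurement vector
        if np.linalg.norm(r) < residual_tol:      # illustrative stop criterion
            break
    c = np.zeros(M, dtype=complex)
    c[support] = cs
    return c
```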
The MP and OMP methods can be used to reconstruct the sparse spectrum c if a certain property of A is met. One such widely used property is the Coherence, μ(A), defined by the equation

$$\mu(A) = \max_{i \neq j} \frac{\left|A_i^H A_j\right|}{\|A_i\|\,\|A_j\|},$$

where A_i and A_j are the i-th and the j-th columns of A, respectively.
According to the known theory, unique sparse reconstruction is guaranteed if

$$K < \frac{1}{2}\left(1 + \frac{1}{\mu(A)}\right),$$

where K denotes the number of detectable targets (i.e., the number of supports with amplitude above the noise level). So, the lower the Coherence, the larger the value of K that is possible. Note that unique reconstruction may still be possible if such a condition is not met, only that it cannot be guaranteed based on the known theory.
In order to achieve high angular resolution, many supports, much more numerous than the number of measurements (i.e., N << M), are modelled and estimated. This naturally leads to very high Coherence, which in turn results in a small K, or few recoverable target amplitudes. One way to reduce the Coherence is by randomizing the spatial sampling of the steering vectors. For example, one may create an N′×1 steering vector where N′ > N, and randomly (following any sub-Gaussian or Gaussian probability distribution) delete samples to obtain an N×1 vector. The resulting matrix is called a Random Fourier matrix. In such a matrix A, the entries take the form e^{j2π f_m t_n}, where {t_1, t_2, . . . , t_N} are N integers randomly selected from {0, 1, . . . , N′}.
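A small sketch, under illustrative dimensions, of building such a Random Fourier matrix and numerically evaluating its Coherence as the maximum normalized inner product between distinct columns (the sizes and random seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N_full, N, M = 64, 16, 256                       # illustrative sizes with N << M

# Random spatial sampling: keep N of the N_full candidate positions.
t = np.sort(rng.choice(N_full, size=N, replace=False))
f = np.arange(M) / M                             # support spatial frequencies on [0, 1)

A = np.exp(2j * np.pi * np.outer(t, f))          # N x M Random Fourier matrix

# Coherence mu(A): maximum magnitude of normalized inner products of distinct columns.
norms = np.linalg.norm(A, axis=0)
G = (A.conj().T @ A) / np.outer(norms, norms)
np.fill_diagonal(G, 0)
print("coherence mu(A) =", np.abs(G).max())
```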
In general, the random spatial sampling requirement leads to the sparse array design, and it can be proven that the maximum spurious sidelobe level is proportional to the Coherence, so designing a matrix A that has low Coherence leads to low spurious sidelobes, and vice versa. This demonstrates that by employing the extendable MIMO co-prime array approach of the present disclosure previously introduced, reduced Coherence can be achieved, and sparse recovery of targets can be obtained using greedy algorithms.
One problem with a greedy algorithm arises from the quantized supports on which target amplitudes are evaluated. Given finite quantization, which is necessary to keep Coherence low, it is not possible to always have signals coincide exactly with the spatial frequency of the supports. When the actual spatial frequency misaligns with any of the supports, it may not be possible to cancel the target signal in its entirety in the residual measurement vector and, as a result, neighboring supports are to be selected in order to cancel the signal in later iteration(s). The resulting solution becomes non-sparse and the sparse recovery performance, and thus the resolution performance, is degraded.
Returning now to the AoA estimation determination and the use of the iteratively-executed steps or actions to account for measurement error and noise, another aspect of the present disclosure involves an initialized array steering matrix used to model the angle estimation problem, and for which a solution may be provided through a sparse learning method which has a pruning action carried out in connection with each iteration to rule out supports that are of insignificant amplitude based on previous estimation of the spectrum amplitudes (e.g., as estimated in one or more of the immediately preceding iterations). According to examples of the present disclosure, aspects of the sparse learning methodology are best understood using certain probability theories which are common to Bayesian Linear Regression (BLR) approaches, as discussed below.
In BLR, the problem of finding a sparse c is modeled as the problem of finding the most probable values of c given the measurement x, corrupted by random noise ϵ. In other words, c is estimated by finding the values of c that maximize the posterior probability p(c|x), which can be cast into a simpler problem based on Bayes' theorem, following the max a posteriori (MAP) estimator approach shown as follows:

$$\hat{c} = \arg\max_c\, p(c \mid x) = \arg\max_c \frac{p(x \mid c)\, p(c)}{p(x)} = \arg\max_c \big[\ln p(x \mid c) + \ln p(c)\big].$$
In order to find a solution to the above problem, one may establish some prior knowledge on the probability distributions p(c) and p(x|c). For AoA estimation problems of radar systems, the conditional probability p(x|c) carries the physical meaning of array measurement noise, which can be modeled as a joint distribution of i.i.d. zero-mean Gaussian random variables. As to the selection of the distribution of p(c), there are a variety of ways to model it such that the resulting estimate of c is sparse, and Sparse Bayesian Learning (SBL) is an example.
SBL models p(c) by introducing a latent random variable τ such that the conditional probability p(c|τ) is a joint Gaussian distribution, and by further assuming that p(τ) itself is a Gamma distribution with parameters {α, β} whose values are chosen by the model designer. In the context of SBL, it is favorable to set {α, β}→{0, 0} such that the resulting p(c) has a long-tail distribution, having a general form of p(c_i) ∝ 1/|c_i|, that promotes sparse solutions (i.e., zero, i.e., noise, is the most probable value with or without the presence of outliers, i.e., target signals). The exact model of the SBL is provided below, with the measurement error being modeled as Gaussian:

$$p(x \mid c) = \mathcal{N}\!\left(x;\, A c,\; \sigma_n^2 I\right).$$
The a priori distribution is modeled as a marginal distribution of the following form:

$$p(c) = \int p(c \mid \tau)\, p(\tau)\, d\tau,$$

such that the elements of c follow a Student's t distribution, which tends to the form p(c_i) ∝ 1/|c_i| as α→0, β→0.
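For reference, one standard (real-valued) form of this hierarchy, consistent with the description above and with Tipping-style Sparse Bayesian Learning, may be sketched as follows (the exact forms adopted in a given implementation may differ):

$$
p(c \mid \tau) = \prod_{i=1}^{M} \mathcal{N}\!\left(c_i;\, 0,\; \tau_i^{-1}\right),
\qquad
p(\tau_i) = \mathrm{Gamma}(\tau_i;\, \alpha, \beta),
$$

so that each marginal satisfies $p(c_i) \propto \left(\beta + c_i^2/2\right)^{-(\alpha+1/2)}$, a Student's t density which indeed tends to $1/|c_i|$ as $\alpha \to 0,\ \beta \to 0$.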
With reference to the above relationships, ĉ = arg max_c [ln p(x|c) + ln p(c)] may be solved using the above definitions. One may compute the derivative of ln p(x|c) + ln p(c) with respect to c and set it to zero such that c can be found, with the distribution parameters σ_n^2 and τ also found through maximizing p(x; τ, σ_n^2).
In certain more specific examples, and while detailed derivations may be known, ĉ may be iteratively found by sequentially updating equations as below, with the initial values of ĉ set according to an FFT beamforming result, σ̂_n^2 set to a value close to the noise variance, and τ̂_i set to suitable identical values such that it can neither be neglected in Ω nor dominate Ω.
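While the specific update equations referenced in this description are not reproduced here, one representative set of sequential updates consistent with the quantities Ω, ĉ, σ̂_n^2 and τ̂ discussed above and below, and following standard SBL/EM derivations, is for example:

$$
\Omega = \left(\hat{\sigma}_n^{-2} A^H A + \operatorname{diag}(\hat{\tau})\right)^{-1},
\qquad
\hat{c} = \hat{\sigma}_n^{-2}\, \Omega\, A^H x,
$$
$$
\hat{\tau}_i \leftarrow \frac{1 + 2\alpha}{|\hat{c}_i|^2 + \Omega_{ii} + 2\beta},
\qquad
\hat{\sigma}_n^2 \leftarrow \frac{\|x - A\hat{c}\|^2 + \hat{\sigma}_n^2 \sum_i \left(1 - \hat{\tau}_i\,\Omega_{ii}\right)}{N}.
$$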
Disadvantages in using SBL are well known and they include its computational performance. As an example, SBL requires an inversion step in the Ω update and this inversion often leads to numerical problems when σ̂_n^2 tends towards small values. When this occurs, the low-rank A^H A term dominates the expression, which results in a rank-deficiency problem for the matrix inverse. Secondly, the M×M matrix to be inverted can be very large, and this results in the computation efficiency being correspondingly low.
Certain aspects of the present disclosure may be used to mitigate such disadvantages of SBL, and one, as mentioned above, is the pruning action carried out in connection with each iteration to rule out supports that are of insignificant amplitude. This may be based on one or more previous estimations of the spectrum amplitudes. The effect is a result which decreases the size of the problem monotonically with each new iteration. In turn, this reduces the computation burden and also reduces the sensitivity to the rank-deficiency problem.
Further, in a typical implementation according to the present disclosure, there is no matrix inversion step as such. Rather, instead of a matrix inversion step as above, Cholesky decomposition is used to take advantage of the structure of the underlying matrix to be inverted such that the speed increases and the computation is more robust against numerical issues. The enhanced solution is described in the below equations, where ĉ_p is the amplitude after the pruning and A_p is the corresponding steering vector matrix of the pruned supports. Matrix U is the upper triangular matrix based on the Cholesky decomposition.
$$\Omega_p = U^{-1}\,(U^H)^{-1}$$
As an illustration in accordance with yet another specific example, one approach consistent with these above equations and the present disclosure permits related operations to be implemented by logic circuitry such as in the AoA-related module shown at the lower right of
For purposes of understanding some of the terminology, the flow in this approach may pertain to the above type of steering vector matrix A_p of full supports. In practice, the entire matrix need not be precomputed and stored, and can be generated on the fly and/or on demand. More specifically, this flow may begin with initialization or resetting (e.g., to zero) of a count variable “p” for tracking the iterations or times involved with the flow until a report or output is generated from the iterative refinement. The initialization may involve updating certain vector-related parameters which, in this example, are: the noise variance parameter, the precision vector, and the output support amplitude vector ĉ_p (the amplitude after the pruning), which is initialized to A_p^H x. As should be apparent, these vector-related parameters to be updated refer to the variables, terms and mathematical relationships as discussed above in connection with the related aspects of the present disclosure.
Next, several actions are performed in this example before the generation of an output or report of a determination of the AoA, and each is as reflected in connection with the above equations. In the first of these actions, vector supports may be pruned as indicated above. The objective matrix may be simplified via a Cholesky decomposition, as shown in connection with the above equations. Next, the logic circuitry may update the covariance of the output support amplitude vector as indicated above. The logic circuitry may then update the support amplitude vector ĉ_p as indicated and illustrated, and then compute the normalized residual r_p based on an absolute value associated with the above updates and processing (corresponding to the expression in the numerator on the right side of the above noise variance equation). At this juncture, with the residual vector r_p reflecting measurement error taken from a Gaussian distribution, r_p may be stored for a comparison step in connection with the determination associated with deciding whether the residual vector has been sufficiently reduced relative to a specified criteria, such as being below a minimum residual threshold r_TH, or whether a maximum iteration count threshold for the above-noted count “p” has been incremented so as to be equal to p_max. This decision is processed to assess whether a stop criteria is realized (in this example, the thresholds p_max and r_TH). If either of these conditions is met, the logic or computer circuitry effects a report as noted above; otherwise, the noise variance parameter is updated as indicated in the above noise variance equation. Finally, before incrementing the count p and returning to the initial pruning action for the next iteration, the precision vector, as yet another parameter, is updated as indicated in the last of the above equations.
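The following is a minimal Python sketch of an iteration loop having the general shape just described (prune, Cholesky-factorize, update the covariance and support-amplitude estimates, test the residual against the stop criteria, then update the noise-variance and precision parameters). The pruning threshold, the initialization choices, and the SBL-style update formulas shown are representative assumptions rather than a verbatim reproduction of the equations referenced above.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def pruned_sparse_learning(A, x, p_max=50, r_th=1e-3, prune_frac=0.01):
    """Illustrative pruned, Cholesky-based sparse-learning AoA loop (not verbatim)."""
    N, M = A.shape
    keep = np.arange(M)                          # surviving (unpruned) supports
    c = A.conj().T @ x                           # initialize amplitudes to A_p^H x
    sigma2 = 1e-2 * np.linalg.norm(x) ** 2 / N   # assumed initial noise variance
    tau = np.full(M, 1.0)                        # identical initial precisions
    for p in range(p_max):
        # Prune supports whose amplitude is insignificant vs. the previous estimate.
        mask = np.abs(c) > prune_frac * np.mean(np.abs(c))
        keep, c, tau = keep[mask], c[mask], tau[mask]
        Ap = A[:, keep]
        # Cholesky factorization replaces an explicit matrix inversion.
        B = Ap.conj().T @ Ap / sigma2 + np.diag(tau)
        U = cholesky(B, lower=False)             # B = U^H U, with U upper triangular
        rhs = Ap.conj().T @ x / sigma2
        c = solve_triangular(U, solve_triangular(U.conj().T, rhs, lower=True))
        # Posterior variances: diagonal of Omega_p = U^{-1} (U^H)^{-1}.
        Uinv = solve_triangular(U, np.eye(len(keep), dtype=complex))
        omega_diag = np.sum(np.abs(Uinv) ** 2, axis=1)
        r = np.linalg.norm(x - Ap @ c)           # residual norm for the stop check
        if r < r_th:                             # stop criteria: residual or p_max
            break
        sigma2 = (r ** 2 + sigma2 * np.sum(1 - tau * omega_diag)) / N
        tau = 1.0 / (np.abs(c) ** 2 + omega_diag)   # precision-vector update
    return keep, c
```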
In example experimental and/or simulation-based implementations consistent with the above aspects of the present disclosure, AoA estimation results have been obtained for two types of virtual (e.g., MIMO) arrays: a 16-element {4, 5} co-prime sparse array and a 16-element uniform linear array (ULA). The results show that the targets may be resolved using either array configuration. When the sparse array is used, better resolution performance is usually achieved. When the ULA is used, some performance loss is observed; however, performance is not lost entirely as with greedy-algorithm methods. This demonstrates that such aspects of the present disclosure result in superior sparse spectral signal reconstruction. From these implementations, such results also show that the spectral peaks are generally wider than those of other greedy algorithms and, depending on the greedy algorithm, this may be due to the probabilistic modelling of the spectrum amplitudes.
Using this above-type pruning approach in a simulated example in which 6 targets are present, and with the number of supports being pruned for the case of pruning-type sparse learning, the outcome shows a monotonic reduction from 256 supports to 26. This means that the size of the matrix to be inverted (for the case of the SBL approach) or Cholesky decomposed (for the case of pruning-type sparse learning) can differ by as much as 10 times, resulting in a difference in computation of as much as 1000 times (based on O{n^3}) in the final iterations. The use of Cholesky decomposition (sometimes QR decomposition) can be more efficient (depending on the implementation) and it also reduces the sensitivity to rank deficiency, so in general the robustness of the present disclosure is improved.
Accordingly, such a pruning aspect of the present disclosure may be beneficial as a pruning-type sparse learning method applicable for processing output data, indicative of reflection signals, passed from a sparse array. It is appreciated that such an array may be in any of a variety of different forms, such as those disclosed above. In each instance, the logic (or processing) circuitry receives the output data as being indicative of signal magnitude (e.g., in a spectrum support vector) of the reflection signals via the sparse array, and then discerns angle-of-arrival information for the output data by performing certain steps in an iterative manner for implementation of a sparse learning method which includes pruning, for each iterative update, certain of the plurality of spectrum-related support vectors having respective amplitudes which are insignificant relative to the statistical expectation of the support vector in a preceding iteration.
In certain more-specific examples according to the present disclosure, these steps include updating a set of support-vector parameters including a covariance estimate of the support vector and a statistical expectation of the support vector over a plurality of spectrum-related support vectors (e.g., a mean) and, in certain more specific aspects, the above-noted set of parameters to be updated with each iteration (e.g., associated with previous values of the support vector) may also include a noise variance associated with the most recent refinement of the support vector, and a scaling parameter such as τ in the form of a precision vector as exemplified above. Further, and as applicable to each such example, to further reduce computational burdens, which may be significant for many computer-circuit architectures, certain of the possible support vectors may be pruned relative to the statistical expectation and the matrix-based model may also be processed by Cholesky decomposition with each iterative update.
As a variation from the above specific approach, the present disclosure provides an alternative circuit-based method that may also apply a Cholesky decomposition. In this alternative method, however, a convergence of certain of the updated parameters, including a scaling parameter, may be used to provide further advantages. Such advantages include, as examples, reductions in the space required for updating the above-noted parameters and increased processing speeds for the computations. In this alternative method, the one or more most probable spectrum support vectors (from among a plurality of most probable spectrum support vectors) are modeled as random values in a matrix drawn from a long-tail distribution (e.g., Cauchy or other extended tail distribution), and the distribution is controlled as a function of the scaling parameter.
Such sparse learning algorithms have a large number of intrinsic parameters to be handled by the algorithms. This means that for a problem size of s supports, there exist s τ parameters (known as the precisions) and a σ_n^2 parameter (known as the noise variance) that are to be computed and updated in each iteration. The optimization automatically adapts and finds the best values of these s+1 internal parameters, so the tuning of these parameters is not done manually, which is a significant advantage of SBL and pSBL algorithms. The convergence of these parameters, however, may be slow because of the sheer dimension of the optimization problem. The problem is somewhat mitigated by the support pruning implemented in pSBL, which reduces the number of supports from M to s, so the dimension of the problem is already reduced. Given a limited number of measurements, intuitively, the fewer the number of parameters required to be estimated, the more robust the algorithm can be. So, by further reducing the dimension of the parameter space, more robust performance can be achieved.
Further reduction of the internal parameter dimension can be achieved by replacing the a priori probability distribution, p(c), employed in the BLR model, with a different distribution. In this new class of algorithm, the prior distribution is modelled as a Cauchy distribution (instead of a Student's t with {α, β}→{0,0}). Further, the entire population of support amplitudes is modelled as random values drawn from a single Cauchy distribution (versus pSBL, in which each support amplitude is drawn from a distinct distribution), such that the distribution is controlled by only one internal “scaling” parameter, τ. Given that the Cauchy distribution is also a long-tail distribution, it can be used to reliably model sparse spectral signals.
Such a Cauchy-prior strategy has been employed in one prior-art algorithm. The prior-art algorithm is, however, not complete because it fails to achieve automatic tuning of the internal parameters τ and σ_n^2. In that approach, the measurement error is modeled as Gaussian (influenced by the parameter σ_n^2) and the a priori distribution is modeled as a Cauchy distribution (also influenced by the parameter τ).
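Consistent with that description, the two densities may be written in, for example, the following representative forms (the exact normalizations used in the referenced prior-art algorithm are not reproduced here, and a complex circular-Gaussian measurement model is assumed):

$$
p(x \mid c) = \frac{1}{(\pi \sigma_n^2)^{N}}
\exp\!\left(-\frac{\|x - A c\|^2}{\sigma_n^2}\right),
\qquad
p(c) \propto \prod_{i=1}^{M} \frac{1}{1 + \tau\,|c_i|^2}.
$$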
The solution of c is found via MAP by maximizing the posterior probability, p(c|x), transformed by Bayes' theorem into the following, more tractable, form:

$$\hat{c} = \arg\max_c \big[\ln p(x \mid c) + \ln p(c)\big].$$
The solution to the above problem may be derived in a straightforward fashion and may be known. Aspects of the above problem, as applied in the above-characterized radar context of the present disclosure, may be expressed in summary form via the following few equations.
Compute diagonal loading matrix: $Q = \mathrm{diag}\{[\,1+\tau|c_1|^2,\; 1+\tau|c_2|^2,\; \dots,\; 1+\tau|c_M|^2\,]\}$
The solution of ĉ is a function of itself, so it can be iteratively solved. Note that the parameters τ and σ_n^2 are manually defined as constants and, consequently, the performance is sensitive to the selected values of these constants. Proper selection is required to ensure convergence to a reasonable solution. In general, the value of τ usually may be selected such that it is greater than one over the target's spectral amplitude, and σ_n^2 should be selected close to the noise variance.
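To make the fixed-point character explicit, setting the derivative of the MAP objective to zero under the representative densities above (and using the diagonal loading matrix Q just defined) gives, as a sketch, a stationarity condition of the form:

$$
\frac{1}{\sigma_n^2}\, A^H\!\left(x - A\hat{c}\right) = \tau\, Q^{-1}\, \hat{c}
\quad\Longrightarrow\quad
\hat{c} = \left(A^H A + \sigma_n^2\, \tau\, Q^{-1}\right)^{-1} A^H x,
$$

where Q itself depends on ĉ, so the expression may be iterated from an initial estimate (e.g., an FFT beamforming result).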
The manual selection of the parameters {τ, σ_n^2} is due to the lack of analytical or closed-form solutions. Usually, in BLR problems, the selection of the prior distribution follows conjugate-prior distributions such that an analytical solution can be found more easily. The Cauchy-Gaussian pair as used in this approach, however, does not constitute a conjugate-prior distribution pair, and no analytical solution for τ and σ_n^2 is known to the best of our knowledge. To achieve an optimal solution more reliably, an efficient update strategy for the parameters is warranted.
In accordance with this approach, this aspect of the present disclosure is directed to a closed-form analytical solution for the intrinsic parameters {τ, σ_n^2}, which solution may involve the following processing actions according to an example solution of the present disclosure. First, by employing a Laplace approximation, the intractable Cauchy-type prior model is converted to a tractable Gaussian-type model. Next, by applying the Expectation-Maximization (EM) method, an intractable optimization problem is converted into one that iteratively optimizes (increases) its lower bound such that, as a result, an analytical solution for {τ, σ_n^2} may be readily found.
According to a more specific example of the present disclosure, the logic circuitry may receive the amplitude-related output data and perform AoA processing via such an analytical solution (e.g., via Cauchy or long-tail Bayesian linear regression) with automatically tuned parameters by using the following equations and (iteratively updating) set of parameters:
Compute diagonal loading matrix: $Q = \mathrm{diag}\{[\,1+\tau|c_1|^2,\; 1+\tau|c_2|^2,\; \dots,\; 1+\tau|c_M|^2\,]\}$
As discussed above, in a more specific example embodiment, support pruning may be added and/or Cholesky decomposition may be used to reduce the computation burden and increase the robustness of the solution. With both of these aspects, support pruning and Cholesky decomposition, the above equations and the updated set of parameters may be as below:
$$Q_p = \mathrm{diag}\{[\,1+\hat{\tau}\,|\hat{c}_{p,1}|^2,\; 1+\hat{\tau}\,|\hat{c}_{p,2}|^2,\; \dots\,]\}$$
$$\Omega_p = U^{-1}\,(U^H)^{-1}$$
Again, these operations may be implemented by logic circuitry such as in the AoA-related module shown at the lower right of
In one such specific example, such processing may be carried out as follows, starting with block 610 of
The next several blocks of
In block 660, a decision is made based on whether the residual vector has been reduced below a minimum residual threshold r_TH or whether a maximum iteration count threshold for the count p is equal to p_max. This decision is processed to assess whether a stop criteria is realized (in this example, the thresholds p_max and r_TH). If either of these conditions is met, flow proceeds from block 660 to block 665 where the logic or computer circuitry effects a report as noted above; otherwise, flow proceeds from block 660 to block 670 where the noise variance parameter is updated as indicated above and illustrated herein.
From block 670, flow proceeds to block 680 where another parameter, the precision vector, is updated as indicated above and illustrated herein via block 680. Next, at block 690, the count p is incremented and flow returns to block 620, with supports being pruned for the next iteration, in the flow shown in this example of
In example experimental and/or simulation-based implementations, consistent with the above aspects of the present disclosure, AoA estimation results have been obtained for different types of sparse (e.g., MIMO) arrays and while using variations of the methodology disclosed above in connection with
In particular examples disclosed herein by way of
Accordingly, while the previously-described PSL-type method has been shown to provide significant improvements over prior sparse Bayesian learning algorithms, the
In yet other examples involving a realistic target scene, simulations have been conducted for testing different ones of the high-resolution AoA estimation algorithms as discussed and disclosed herein. In one such scenario, 5 sedan automobile targets are placed between 70 m and 90 m from the radar under test for testing targets of realistic physical structure. Three arcs of reference corner reflectors are placed at 65 m, 95 m, and 105 m radii for testing a point-target scenario with uniform angle spacing. The radar under test consists of a 16-element linear array (in the azimuthal direction) whose elements are arranged as a uniform linear array as well as a {4,5} coprime array. Distinctive image features can be observed between the different algorithms. The highest-resolution imaging performance can be achieved by the above pruning types of methodology (e.g., previously-described PSL-type and
Based on the above discussion, yet further exemplary variations (among a variety of others) in accordance with the present disclosure are noteworthy. For instance,
Various other examples in accordance with the present disclosure employ any of various combinations of examples and aspects as disclosed hereinabove and also, consistent with the present disclosure, related examples and aspects as disclosed in concurrently-filed U.S. patent application Ser. No. 17/185,084 (Dkt. No. 82284414US01_NXPS.1551PA) concerning the previously-described PSL-type methods; Ser. No. 17/185,040 (Dkt. No. 82284396US01_NXPS.1550PA) concerning the sparse array and coprime element spacings; and Ser. No. 17/185,115 (Dkt. No. 82284405US01_NXPS.1553PA) concerning updating/refining a correlation of processing relative to upper-side and lower-side spectrum support vectors relative to a spatial frequency support vector. Relative to the instant U.S. patent application at the time of its filing, each of these other three applications: is by the same inventors, has the same assignee, and is incorporated by reference in its entirety and specifically for the circuit-based methodology disclosed in connection with operations associated with the commonly-illustrated sparse (e.g., MIMO) array and/or for the exemplary disclosure of the AoA determinations or estimations (e.g., as in
Terms to exemplify orientation, such as upper/lower, left/right, top/bottom and above/below, may be used herein to refer to relative positions of elements as shown in the figures. It should be understood that the terminology is used for notational convenience only and that in actual use the disclosed structures may be oriented different from the orientation shown in the figures. Thus, the terms should not be construed in a limiting manner.
As examples, the Specification describes and/or illustrates aspects useful for implementing the claimed disclosure by way of various circuits or circuitry which may be illustrated as or using terms such as blocks, modules, device, system, unit, controller, etc. and/or other circuit-type depictions. Such circuits or circuitry are used together with other elements to exemplify how certain embodiments may be carried out in the form of structures, steps, functions, operations, activities, etc. As examples, wherein such circuits or circuitry may correspond to logic circuitry (which may refer to or include a code-programmed/configured CPU), in one example the logic circuitry may carry out a process or method (sometimes “algorithm”) by performing such activities and/or steps associated with the above-discussed functionalities. In other examples, the logic circuitry may carry out a process or method by performing these same activities/operations and, in addition, other activities/operations.
For example, in certain of the above-discussed embodiments, one or more modules are discrete logic circuits or programmable logic circuits configured and arranged for implementing these operations/activities, as may be carried out in the approaches shown in the signal/data flow of
Based upon the above discussion and illustrations, those skilled in the art will readily recognize that various modifications and changes may be made to the various embodiments without strictly following the exemplary embodiments and applications illustrated and described herein. For example, methods as exemplified in the Figures may involve steps carried out in various orders, with one or more aspects of the embodiments herein retained, or may involve fewer or more steps. Such modifications do not depart from the true spirit and scope of various aspects of the disclosure, including aspects set forth in the claims.