The invention generally relates to array processing and, more particularly, to beamforming for a moving radio frequency (RF) source.
Array processing has played a key role in many modern applications such as radar, sonar, radio astronomy, communications, and seismology. For example, antenna arrays are essential components in radar systems, while hydrophone arrays are widely used in sonar systems.
Beamforming, also known as spatial filtering, is a technique used in array processing to receive a signal radiating from a specific direction and suppress signals arriving from other directions. It is well known in the literature that beamforming performance degrades severely in the presence of steering vector errors, which arise from improper modeling, miscalibration, pointing errors, and source motion. Hence, robust and adaptive techniques have been proposed to enhance beamforming performance. These methods include diagonal loading, multiple linear constraints, eigenspace projection, and robust Capon beamformers (RCBs) that use ellipsoidal uncertainty sets for the steering vector.
New problems arise when beamforming for moving sources. Work on this problem has thus far primarily considered acoustic applications. Little effort has been directed at improving beamforming for moving radio frequency (RF) sources, and acoustic solutions do not necessarily translate to RF solutions. Acoustic signals are wideband with no characteristic wavelength, so time delays must be obtained by waveform interpolation. Most RF signals of interest, on the other hand, are narrowband with a well-defined nominal wavelength, and time delay can be compensated by a phase shift.
One example acoustic beamforming method is presented in W. Chen and X. Huang, “Wavelet-based beamforming for high-speed rotating acoustic source,” IEEE Access, vol. 6, pp. 10231-10239, 2018. It is a wavelet-based beamforming method for rotating sources. Acoustic images are produced in the time-frequency domain by directly incorporating the wavelet transform and the Doppler effect into Green's function. Beamforming is achieved as a simple inversion in the time-frequency domain.
For RF applications, a Bayesian beamforming approach for a moving target is presented in Q. Nengfeng, B. Ming, H. Xiaoqing, T. Zhuanxia, and G. Luyang, “Moving target beamforming based on bayesian method,” in 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP). IEEE, 2015, pp. 393-397. In this technique, the optimal beamformer coefficients are the sum of minimum variance distortionless response (MVDR) beamformers at the estimated directions of arrival (DOAs), weighted by the a posteriori probability density function (PDF) of these DOAs. A particle filter is used to find this a posteriori PDF.
A sliding-window modified loaded sample matrix inversion (LSMI) beamforming algorithm for high-speed mobile sources was proposed in V. V. Zaharov, “Smart antenna beamforming algorithm for mobile communications with high speed moving sources,” in 2008 IEEE Radio and Wireless Symposium. IEEE, 2008, pp. 279-282. The algorithm is based on recursive vector updating rather than matrix updating. For a sliding window of size K and an antenna array of size N, the algorithm requires updating K vectors of size N instead of K matrices of size N×N. However, this method uses fixed diagonal loading, which is not effective for moving-source beamforming.
Another proposed beamforming method is given in I. S. Yetik and A. Nehorai, “Beamforming using the fractional fourier transform,” IEEE Transactions on Signal Processing, vol. 51, no. 6, pp. 1663-1668, 2003. This technique is based on the fractional Fourier transform (FrFT). The beamformer involves optimum FrFT order selection based on a search for the strongest amplitude peak. However, this search-based method is computationally inefficient and impractical.
Other methods aim at reducing computational complexity in beamforming. For example, reduced-rank methods are used to provide fast convergence and reduce the computational complexity of adaptive beamformers. These methods are based on projecting data onto a low-rank subspace. One work proposes a Krylov-subspace-based reduced-dimension robust Capon beamformer. S. D. Somasundaram, N. H. Parsons, P. Li, and R. C. de Lamare, “Reduced-dimension robust capon beamforming using krylov-subspace techniques,” IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 1, pp. 270-289, January 2015. Sensitivity to signal-of-interest (SOI) and array steering vector (ASV) errors is the main disadvantage of the Krylov-subspace methods.
This disclosure provides a robust beamforming method for situations in which the steering vector is not known precisely due to source movement and direction-of-arrival (DOA) estimation errors. An exemplary beamformer is based on loaded minimum variance distortionless response (MVDR), with data collection achieved by sliding the observation window to allow a new snapshot to enter and an old one to leave. This sliding-window process results in only a slight change in the regularization (diagonal loading) parameter from one window to the next. Unlike the approach in the Zaharov paper identified in the Background section above, which utilizes a fixed regularization parameter, present embodiments exploit the sliding-window setting to persistently update the regularization parameter, as explained in the Detailed Description section below.
The frequencies, bandwidths, wave shapes, and directivities of received signals may be determined by the reception aperture elements 20a of reception transducer 20 and reception unit 32. The reception unit 32 includes receivers 32a with plural channels, AD convertors 32b, and memories (or storage devices, storage media) 32c. Arrival of waves at the reception aperture elements 20a generates the reception signals within the instrument. Reception signal characteristics may be determined by the geometries of the reception aperture elements 20a (thickness, or aperture size and shape) and their materials. Under the direction of control unit 34, the reception transducer 20 and/or reception unit 32 may perform processing that controls for reception of particular frequencies, bandwidths, wave shapes, and directivities of received signals. Desired parameters may be set automatically via the control unit 34 or set using an input device 40.
Control signals (e.g., trigger signals) sent from the control unit 34 in the instrument body 30 may command the start of AD conversions at AD convertors 32b of the respective channels. According to the command signals, analog signals of respective channels are converted to digital signals which are stored in storage devices or storage media 32c. In some embodiments, one frame of received signals may be stored at a time and processed by the DSP unit 33 according to the discussion below. Control unit 34 may also change the transmission aperture position, the transmission effective aperture width, or the transmission steering directions.
Generally, the reception channel number is the number of communication lines used to send the waves (signals) received by the respective reception aperture elements 20a to the reception unit 32 for performing one beamforming operation. Reception channels can be formed in various ways. Generally, to perform each beamforming operation, the received signals generated by the plural reception aperture elements 20a are applied with different delays. That is, the reception unit 32 is equipped with analogue or digital delay patterns, and the delay patterns that realize particular reception focusing, steering directions, etc., can be selected by an operator using the input device 40.
The digital signal processing (DSP) unit 33 is configured to perform beamforming processes with respect to the reception signals generated by transducer 20 and reception unit 32. The DSP unit 33 may also perform other processes such as a Hilbert transform, spectral frequency division, and superposition. The reception unit 32 may include the DSP unit 33, or the DSP unit 33 may include the reception unit 32. The control unit 34 may control the DSP unit 33 and other units by sending command signals. Alternatively, the control unit 34 may include the DSP unit 33, or else the DSP unit 33 may include the control unit 34.
The digital signal processing unit 33 may comprise one or more devices, calculators, PLDs (Programmable Logic Devices), FPGAs (Field-Programmable Gate Arrays), DSPs (Digital Signal Processors), GPUs (Graphical Processing Units), processors, and/or microprocessors.
Following is an explanation of an exemplary procedure and sub-procedures performed by, e.g., an instrument as depicted in the figures.
A. Signal Model and MVDR Beamformer
Consider a moving source with angular motion described by the following equation:
θ_t = θ_o + ωt,   (1)
where θ_t (rad) is the position of the source at time t, θ_o (rad) is its initial position, and ω (rad/sec) is the angular velocity. Also consider a uniform linear array (ULA) of N elements receiving a signal from this moving source and Q signals from static interference sources. The N×1 complex array observation vector can be modeled as
y_t = β_t a(θ_t) s_t + v_t,   (2)
where a(θ_t) is the steering vector of the desired source s_t, and the term v_t is the sum of the interference signals, s_i[t], multiplied by their corresponding steering vectors, a_i[t], plus additive white Gaussian noise, n[t]. The variability in signal amplitude due to distance change is modeled using the scalar β_t. Throughout this disclosure, without loss of generality, the following exponential model for amplitude change is used:
β_t = e^{−αt},   (3)
where α ∈ ℝ and α ≥ 0.
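For illustration only, the following Python sketch generates one array observation according to Equations (1)-(3). It is not reference code for any embodiment; the half-wavelength element spacing, the complex Gaussian source and interference waveforms, and all parameter names are illustrative assumptions.

```python
import numpy as np

def steering_vector(theta, N):
    """ULA steering vector, assuming half-wavelength element spacing (theta in radians)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * n * np.sin(theta))

def snapshot(t, theta_o, omega, alpha, N, interferer_doas, snr_db, inr_db, rng):
    """One observation y_t = beta_t * a(theta_t) * s_t + v_t (Equation (2))."""
    theta_t = theta_o + omega * t                        # Equation (1): source position
    beta_t = np.exp(-alpha * t)                          # Equation (3): amplitude decay
    s_t = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
    y = beta_t * 10 ** (snr_db / 20.0) * steering_vector(theta_t, N) * s_t
    for doa in interferer_doas:                          # Q static interference sources
        s_i = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
        y = y + 10 ** (inr_db / 20.0) * steering_vector(doa, N) * s_i
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
    return y + noise                                     # v_t = interference + noise
```

For example, `snapshot(0.0, 0.0, 1e5, 1e-3, 10, np.deg2rad([-30, 60, 100, 120]), 10, 10, np.random.default_rng(0))` would return one 10-element observation under these assumptions.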
The beamformer output at time t is given by
x_t = w_t^H y_t,   (4)
where w_t ∈ ℂ^N is the vector of beamformer weighting coefficients at time t. For an MVDR beamformer, the weighting vector is found by solving the following optimization problem:
min_{w_t} w_t^H R_t w_t  subject to  w_t^H a_t = 1,   (5)
where R_t = E[y_t y_t^H] is the covariance matrix, and a_t = a(θ_t) ∈ ℂ^N is the steering vector of the desired signal. The solution of the above optimization problem of Equation (5) is given by
w_t = R_t^{−1} a_t / (a_t^H R_t^{−1} a_t).   (6)
In practical applications, R_t is unavailable; hence, it is replaced with an estimate R̂_t that is given by
R̂_t = (1/K) Σ_{k=0}^{K−1} y_{t−k} y_{t−k}^H,   (7)
where K is the number of snapshots in the observation window.
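The sliding-window covariance estimate of Equation (7) and the MVDR weights of Equation (6) (or their loaded form, Equation (13) below) may be computed, for example, as in the following minimal sketch; the windowing convention and function names are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def sample_covariance(window):
    """Equation (7): R_hat = (1/K) * sum_k y_k y_k^H over the K snapshots in the window."""
    Y = np.asarray(window)               # shape (K, N), one snapshot per row
    return Y.T @ Y.conj() / Y.shape[0]   # N x N Hermitian estimate

def mvdr_weights(R, a, loading=0.0):
    """Equation (6) when loading = 0; a positive loading gives the loaded form of Equation (13)."""
    R_loaded = R + loading * np.eye(R.shape[0])
    Ria = np.linalg.solve(R_loaded, a)   # R^{-1} a
    return Ria / (a.conj() @ Ria)        # normalize so that w^H a = 1
```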
B. Generalized Sidelobe Canceller (GSC)
An alternative formulation of Equation (5) is called the generalized sidelobe canceller (GSC). The GSC may be obtained by decomposing the weight vector w_t as follows:
w_t = w_q[t] − B_t w_a[t],   (8)
where w_q[t] = a_t/N is a quiescent weight vector and B_t ∈ ℂ^{N×(N−1)} is a blocking matrix that is orthogonal to a_t and is chosen such that B_t^H B_t = I. By substituting the decomposed w_t of Equation (8) into Equation (5) and replacing R_t with R̂_t, the problem is reformulated as the following unconstrained least squares optimization:
min_{w_a[t]} (w_q[t] − B_t w_a[t])^H R̂_t (w_q[t] − B_t w_a[t]),   (9)
or
min_{w_a[t]} ||b_t − A_t w_a[t]||²,   (10)
where A_t = R̂_t^{1/2} B_t
and b_t = R̂_t^{1/2} w_q[t].
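As one illustrative construction (not the only valid one), a quiescent vector and a blocking matrix satisfying B_t^H a_t = 0 and B_t^H B_t = I can be obtained from the orthogonal complement of a_t, as in the following sketch.

```python
import numpy as np

def gsc_matrices(a):
    """Quiescent vector w_q = a/N and one valid blocking matrix B with B^H a = 0 and B^H B = I."""
    N = a.shape[0]
    w_q = a / N
    # Full SVD of a (as an N x 1 matrix): columns 2..N of U span the orthogonal complement of a.
    U, _, _ = np.linalg.svd(a.reshape(-1, 1), full_matrices=True)
    B = U[:, 1:]                          # N x (N-1), orthonormal columns orthogonal to a
    return w_q, B
```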
The minimization of Equation (10) corresponds to the following linear regression model:
b_t = A_t w_a[t] + z_t,   (11)
where z_t ∈ ℂ^N is an error vector. Since A_t is normally ill-conditioned and b_t is noisy, it is preferable to apply regularization when estimating w_a[t]. The regularized least squares (RLS) problem is stated as follows:
min_{w_a[t]} ||b_t − A_t w_a[t]||² + γ_t ||w_a[t]||²,   (12)
where γ_t ≥ 0 is the regularization parameter.
After choosing a proper value for γ_t, it is usable in the loaded version of Equation (6), which is given by
w_t = (R̂_t + γ_t I)^{−1} a_t / (a_t^H (R̂_t + γ_t I)^{−1} a_t).   (13)
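The closed-form solution of the regularized least squares problem of Equation (12), and the resulting GSC weights of Equation (8), may be computed as in the following illustrative sketch. The sketch assumes the A_t and b_t definitions given after Equation (10) and takes a precomputed blocking matrix B (for example, from the preceding sketch).

```python
import numpy as np

def gsc_rls_weights(R_hat, a, B, gamma):
    """GSC weights of Equation (8) with w_a obtained from the RLS problem of Equation (12),
    using A = R_hat^{1/2} B and b = R_hat^{1/2} w_q as written after Equation (10)."""
    N = a.shape[0]
    w_q = a / N                                          # quiescent weights
    s, L = np.linalg.eigh(R_hat)                         # EVD of the Hermitian sample covariance
    R_half = (L * np.sqrt(np.clip(s, 0.0, None))) @ L.conj().T   # Hermitian square root
    A = R_half @ B
    b = R_half @ w_q
    # Closed-form RLS solution: w_a = (A^H A + gamma I)^{-1} A^H b
    w_a = np.linalg.solve(A.conj().T @ A + gamma * np.eye(B.shape[1]), A.conj().T @ b)
    return w_q - B @ w_a                                 # Equation (8)
```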
C. Exemplary Method
In this subsection, the subscript t is dropped for simplicity of notation. The regularization parameter, γ, is obtained by solving the following equation:
where tr(·) denotes the matrix trace, I is the identity matrix, and U ∈ ℂ^{N×N} and Σ = diag(σ_1, σ_2, . . . , σ_{N−1}, 0), with σ_1 > σ_2 > . . . > σ_{N−1}, are obtained from the following singular value decomposition (SVD) of A:
A = UΣV^H,   (15)
where V is a unitary matrix of right singular vectors.
Equation (14) is known as the bounded perturbation regularization (BPR) equation. The regularization parameter values that solve Equation (14) minimize the mean squared error (MSE) of the RLS solution.
The introduction of the regularization term in Equation (12) provides stability against the ill-conditioning of the matrix A. To reap the full benefit of regularization, the regularization parameter γ must be adjusted carefully. The regularization parameter may be set independently at each time point; however, applying such methods repetitively at each time point unnecessarily increases the computational complexity of the system. Therefore, the following description presents a method to effectively adjust the regularization parameter needed in Equation (12) with significantly reduced computational complexity.
Newton's method, also known as the Newton-Raphson method, is a well-known technique for finding a root of a function. Starting from an initial guess γ_0 for the root of Equation (14), the following iterations are carried out:
γ_{i+1} = γ_i − f(γ_i)/f′(γ_i),   (16)
where f′(γ_i) is the derivative of the function. The iterations stop when |f(γ_{i+1})| < ϵ.
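Because Equation (14) is solved numerically, the Newton-Raphson iteration of Equation (16) may be implemented generically as in the following sketch; the BPR function f and its derivative f′ are passed in as callables and are not reproduced here.

```python
def newton_root(f, f_prime, gamma0, tol=1e-8, max_iter=100):
    """Newton-Raphson iteration of Equation (16): gamma <- gamma - f(gamma) / f'(gamma)."""
    gamma = gamma0
    for _ in range(max_iter):
        gamma = gamma - f(gamma) / f_prime(gamma)
        if abs(f(gamma)) < tol:           # stopping rule |f(gamma_{i+1})| < eps
            break
    return gamma
```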
The present exemplary technique focuses on a situation that guarantees the existence of a unique solution for Equation (14). It is shown that a unique root exists in the interval (−σ_N, ∞) if the following sufficient condition is met:
N tr(Σ² U^H b b^H U) > tr(Σ²) tr(U^H b b^H U).   (17)
If f(γ) has a positive root, γ^+, and condition (17) is satisfied, the following two results are valid:
1) f(γ) is negative in the interval [0, γ^+), i.e., f(γ) ≤ 0 for γ ∈ [0, γ^+];
2) f(γ) is an increasing function in the interval [0, γ^+], i.e., f′(γ) ≥ 0 for γ ∈ [0, γ^+].
Thus, using an initial value γ_0 in the interval (0, γ^+], Equation (16) produces a progressively increasing estimate of γ. Convergence of Equation (16) occurs when γ_{i+1} → γ^+; thus, f(γ_i) → 0 and γ_{i+1} → γ_i.
In the moving-source beamforming scenario, data is collected by a sliding window that allows only the newest snapshot to enter and the oldest one to leave the window. Hence, only a slight change in γ^+ is expected for each new sliding window. However, restarting each new sliding window from γ_0 = δ, where δ is a small positive value, would increase the number of iterations required for f(γ) to converge to its positive root.
Assume that the root for the current sliding window is γ_o^+. The root for the next sliding window is either γ_r^+, where γ_r^+ > γ_o^+, or γ_1^+, where γ_1^+ < γ_o^+.
If the next sliding window shifts f_o(γ) to the left, it produces the dashed curve 303, f_1(γ). In this case, testing f_1′(γ) at γ_o^+ reveals a negative sign, which implies that γ_o^+ is not in the interval (0, γ_1^+]. For this scenario, bisect the interval (0, γ_o^+) and test the sign at γ_o^+/2. If the result is a positive sign, γ_o^+/2 is usable as an initialization value for finding the root of f_1(γ). Otherwise, repeat the bisecting and testing process until a positive sign is obtained. Algorithm 1 summarizes the proposed initialization method:
Algorithm 1: Moving-source Beamformer (BPR-MSB)
Input: U, Σ, a, N
Output: γ^+
Initialization: γ_o = 1×10^{−4}
1: Solve Equation (14) using Newton's method.
For each new snapshot:
2: γ_i = γ^+
3: while f′(γ_i) < 0 do
4: γ_i = γ_i/2
5: if f′(γ_i) > 0 then
6: break
7: end if
8: end while
9: Solve Equation (14) using Newton's method, starting from γ_i.
10: return γ^+
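The warm-start logic of Algorithm 1 may be sketched as follows; this illustrative code reuses the newton_root() helper from the earlier sketch and treats f and f′ for the current window as supplied callables.

```python
def bpr_msb_gamma(f, f_prime, prev_root=None, delta=1e-4, tol=1e-8):
    """Warm-started root search in the spirit of Algorithm 1 (BPR-MSB): seed with the previous
    window's root and halve it until f'(gamma) > 0, so Newton's method starts inside (0, gamma+]."""
    gamma = delta if prev_root is None else prev_root   # initialization / line 2 of Algorithm 1
    while f_prime(gamma) < 0:                           # lines 3-8: halve toward zero
        gamma = gamma / 2.0
    return newton_root(f, f_prime, gamma, tol=tol)      # lines 1 and 9: Newton's method
```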
To further reduce the complexity, notice that the SVD of A is needed to find the root γ^+ of Equation (14). However, repeating the SVD for every window increases the complexity of the system. A different approach may be used to carry out the calculations efficiently. As explained earlier, A is calculated from the blocking matrix, B, and the estimated covariance matrix, R̂. Since R̂ is a positive semidefinite matrix, its eigenvalue decomposition (EVD) is usable as follows:
R̂ = L S L^H,   (18)
where L ∈ ℂ^{N×N} and S = diag(s_1, s_2, . . . , s_N), with s_1 > s_2 > . . . > s_N. Choose a matrix, M, such that B = LM^H, or equivalently
M^H = L^H B.   (19)
Now we can write A differently as follows:
Comparing with Equation (15), notice
For each sliding window, R̂ is modified by adding one rank-one matrix (for the entering snapshot) and subtracting another (for the leaving snapshot). This allows a recursive algorithm to be used to compute the eigenvalues and eigenvectors of R̂ and to use them directly in Equation (14), reducing the complexity by an order of magnitude, from O(N³) to O(N²). A suitable recursive algorithm for this purpose is described in K.-B. Yu, “Recursive updating the eigenvalue decomposition of a covariance matrix,” IEEE Transactions on Signal Processing, vol. 39, no. 5, pp. 1136-1145, 1991.
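For illustration, the sliding-window modification of R̂ amounts to one rank-one update and one rank-one downdate, as in the following sketch; the recursive EVD of Yu (1991), which tracks the eigen-decomposition through these rank-one modifications, is not reproduced here.

```python
import numpy as np

def slide_covariance(R_hat, y_new, y_old, K):
    """Sliding-window covariance update: add a rank-one term for the entering snapshot and
    subtract one for the leaving snapshot, per Equation (7) with a window of K snapshots."""
    return R_hat + (np.outer(y_new, y_new.conj()) - np.outer(y_old, y_old.conj())) / K
```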
The signal-to-interference-plus-noise ratio (SINR) is considered for performance evaluation and is calculated as follows:
SINR[t] = σ_s²[t] |w_t^H a_o[t]|² / (w_t^H R_{i+n}[t] w_t),   (22)
where σ_s²[t] is the moving source signal power at time t, a_o[t] is the actual steering vector of the desired signal at time t, and R_{i+n}[t] is the interference-plus-noise covariance matrix.
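The SINR of Equation (22) may be evaluated, for example, as in the following illustrative sketch, assuming the true steering vector a_o[t], the signal power σ_s²[t], and the interference-plus-noise covariance R_{i+n}[t] are available in the simulation.

```python
import numpy as np

def output_sinr(w, a_true, sigma_s2, R_in):
    """Equation (22): output SINR of beamformer w (linear scale; apply 10*log10 for dB)."""
    signal = sigma_s2 * np.abs(w.conj() @ a_true) ** 2     # desired-signal power at the output
    interference_plus_noise = np.real(w.conj() @ R_in @ w) # interference-plus-noise power
    return signal / interference_plus_noise
```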
Assume a ULA of N=10 elements that receives an RF signal transmitted by a moving source that starts at 0° and stops at 60°. The speed of the source is set high (ω = 10^5 rad/sec) to reduce simulation time. During the movement of the source, the signal's amplitude suffers up to 20% attenuation. There are four interference signals (Q=4) located at fixed positions [−30°, 60°, 100°, 120°] with an interference-to-noise ratio (INR) of 10 dB. The snapshots are collected in a sliding window of size K=10. Assume that the DOA of the source coinciding with each snapshot is known with a uniformly distributed uncertainty in the interval [−1°, 1°]. The desired signal and the interference signals are randomly generated Gaussian data. This Example considers two cases, with a signal-to-noise ratio (SNR) equal to 10 dB and 20 dB at θ=0°. The signal power decays progressively in time.
Both the present disclosure's method and the fixed-initialization method dynamically update their diagonal loading during the movement, whereas the LSMI method uses a fixed diagonal loading of γ_FL = 10 dB. It can be seen that both the present disclosure's method and the fixed-initialization method noticeably outperform LSMI in the range (0°, 20°) and around 45°.
The results show the benefit of continuously updating the diagonal loading parameter.
Some embodiments of the present invention may be a system, a device, a method, and/or a computer program product. A system, device, or computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing one or more processors to carry out aspects of the present invention, e.g., processes or parts of processes or a combination of processes described herein.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Processes described herein, or steps thereof, may be embodied in computer readable program instructions which may be paired with or downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions and in various combinations.
These computer readable program instructions may be provided to one or more processors of one or more general purpose computers, special purpose computers, or other programmable data processing apparatuses to produce a machine or system, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
While the invention has been described herein in connection with exemplary embodiments and features, one skilled in the art will recognize that the invention is not limited by the disclosure and that various changes and modifications may be made without departing from the scope of the invention as defined by the appended claims.
Acknowledgement
The inventors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number (2020-055) and King Abdulaziz University, DSR Saudi Arabia.
Number | Name | Date | Kind |
---|---|---|---|
6482160 | Stergiopoulos | Nov 2002 | B1 |
7084812 | Xin | Aug 2006 | B2 |
7536029 | Choi et al. | May 2009 | B2 |
8754810 | Guo et al. | Jun 2014 | B2 |
9312929 | Forenza et al. | Apr 2016 | B2 |
9338551 | Thyssen et al. | May 2016 | B2 |
9502021 | Kleijn | Nov 2016 | B1 |
9584909 | Heusdens et al. | Feb 2017 | B2 |
9800316 | Woodsum | Oct 2017 | B2 |
9952307 | Gan | Apr 2018 | B2 |
10141993 | Lee et al. | Nov 2018 | B2 |
20020152253 | Ricks | Oct 2002 | A1 |
20050254347 | Beaucoup | Nov 2005 | A1 |
20080181174 | Cho | Jul 2008 | A1 |
20100232531 | Nam | Sep 2010 | A1 |
20200191943 | Wu | Jun 2020 | A1 |
20210109232 | Kassas | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
105681972 | Jun 2016 | CN |
106782590 | Oct 2020 | CN |
Entry |
---|
W. Chen and X. Huang, “Wavelet-based beamforming for high-speed rotating acoustic source,” IEEE Access, vol. 6, pp. 10231-10239, 2018. |
Q. Nengfeng, B. Ming, H. Xiaoqing, T. Zhuanxia, and G. Luyang, “Moving target beamforming based on bayesian method,” in 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP). IEEE, 2015, pp. 393-397. |
V. V. Zaharov, “Smart antenna beamforming algorithm for mobile communications with high speed moving sources,” in 2008 IEEE Radio and Wireless Symposium. IEEE, 2008, pp. 279-282. |
I. S. Yetik and A. Nehorai, “Beamforming using the fractional fourier transform,” IEEE Transactions on Signal Processing, vol. 51, No. 6, pp. 1663-1668, 2003. |
S. D. Somasundaram, N. H. Parsons, P. Li and R. C. De Lamare, “Reduced-dimension robust capon beamforming using krylov-subspace techniques,” IEEE Transactions on Aerospace and Electronic Systems, vol. 51, No. 1, pp. 270-289, Jan. 2015. |
K.-B. Yu, “Recursive updating the eigenvalue decomposition of a covariance matrix,” IEEE Transactions on Signal Processing, vol. 39, No. 5, pp. 1136-1145, 1991. |
T. Ballal, M. A. Suliman, and T. Y. Al-Naffouri, “Bounded perturbation regularization for linear least squares estimation,” IEEE Access, vol. 5, pp. 27551-27562, 2017. |