The present invention relates to processing techniques, and more particularly, but not exclusively, relates to focusing synthetic aperture radar.
Environmental monitoring, earth-resource mapping, and military systems are applications that frequently benefit from broad-area imaging at high resolutions. Sometimes such imagery is desired even when there is inclement weather or during night as well as day. Synthetic Aperture Radar (SAR) provides such a capability. SAR systems take advantage of the long-range propagation characteristics of radar signals and the complex information processing capability of modern digital electronics to provide high resolution imagery. SAR frequently complements photographic and other imaging approaches because time-of-day and atmospheric condition constraints are relatively minimal, and further because of the unique signature provided by some targets of interest to radar frequencies.
SAR technology has provided terrain structural information to geologists for mineral exploration, oil spill boundaries on water to environmentalists, sea state and ice hazard maps to navigators, and reconnaissance and targeting information to military operations. There are many other applications or potential applications. Some of these, particularly civilian, have not yet been adequately explored because lower cost electronics are just beginning to make SAR technology economical for smaller scale uses.
Unfortunately, standard SAR systems are susceptible to phase errors that can adversely impact a resulting image. In synthetic aperture radar imaging, demodulation timing errors at the radar receiver due to signal delays resulting from inaccurate range measurements or signal propagation effects sometimes produce unknown phase errors in the imaging data. As a consequence of these errors, the resulting synthetic aperture radar images can be improperly focused. To address such shortcomings, autofocusing schemes have arisen that rely on a particular image model, such as Phase Gradient Autofocus (PGA) approaches, and/or optimization based on one or more particular image metrics, such as entropy, powers of image intensity, or knowledge of point scatterers, to name a few. Unfortunately, the restoration tends to be inaccurate when the underlying scene is poorly described by the assumed image model. Also, implementation of these schemes often involves iterative calculations that tend to significantly consume processing resources. Thus, there is a need for further contributions in this area of technology.
One embodiment of the present invention includes a unique processing technique. Other embodiments include unique apparatus, devices, systems, and methods for focusing synthetic aperture radar. Further embodiments, forms, objects, features, advantages, aspects, and benefits of the present application shall become apparent from the detailed description and figures included herewith.
While the present invention can take many different forms, for the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Any alterations and further modifications of the described embodiments, and any further applications of the principles of the invention as described herein are contemplated as would normally occur to one skilled in the art to which the invention relates.
One embodiment of the present invention is directed to a technique of synthetic aperture radar (SAR) autofocus that is non-iterative. In this embodiment the multichannel redundancy of the defocusing operation has been utilized to create a linear subspace, where the unknown perfectly-focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly-focused image is determined directly through a linear algebraic formulation by invoking an additional image support condition. This approach has been found to be computationally efficient and robust, and generally does not require prior assumptions about the SAR scene like those used in existing methods. As an optional feature of this embodiment, the vector-space formulation of the data facilitates incorporation of sharpness metric optimization within the image restoration framework as a regularization term.
Image processing device 40 communicates with radar transmitter/receiver equipment 60. Equipment 60 is operatively coupled to radar antenna device 70. Equipment 60 and antenna device 70 operate to selectively provide electromagnetic energy in the radar range under control of processing device 40. The transmitter and receiver of equipment 60 may be separate units or at least partially combined. For terrain interrogation, typically SAR systems include at least a single radar antenna attached to the side of aircraft 24. During flight, a single pulse from the antenna tends to be rather broad (several degrees), and often illuminates the terrain from directly beneath aircraft 24 out to the horizon. However, if the terrain is approximately flat, the time at which the radar echoes return facilitates the determination of different distances from the aircraft flight track. While distinguishing points along the track of aircraft 24 can be difficult with a small antenna, if the amplitude and phase of the signal returning from a given portion of the terrain are recorded, and a series of pulses is emitted as aircraft 24 travels, then the results from these pulses can be combined. In effect, this series of observations can be combined just as if they had all been made simultaneously from a very large “virtual” antenna—resulting in a synthetic aperture much larger than the length of the antenna (and typically much larger than the platform 22).
Device 40 can be comprised of one or more components of any type suitable to process the signals received from equipment 60 and provide desired output signals. Such components may include digital circuitry, analog circuitry, or a combination of both. As illustrated, processor 42 is of a programmable type with operating logic 52 provided in the form of executable program instructions stored in memory 50. Alternatively or additionally, processor 42 and/or operating logic 52 are at least partially defined by hardwired logic or other hardware. Device 40 can further include multiple processors, Arithmetic-Logic Units (ALUs), Central Processing Units (CPUs), or the like. For forms of device 40 with multiple processing units, distributed, pipelined, and/or parallel processing can be utilized as appropriate. Device 40 includes signal conditioners, signal format converters (such as analog-to-digital and digital-to-analog converters), limiters, clamps, filters, power supplies, power converters, communication interfaces, operator interfaces, computer networking, and the like as needed to perform various operations described herein. Device 40 may be dedicated to performance of just the operations described herein or may be utilized in one or more additional applications. Moreover, device 40 may be completely carried with platform 22 and/or at least a portion of device 40 may be remote from platform 22 at a ground station or the like, with pertinent data being downloaded or otherwise communicated to the remote station as desired.
Memory 50 can be of a solid-state variety, electromagnetic variety, optical variety, or a combination of these forms. Furthermore, memory 50 can be volatile, nonvolatile, or a mixture of these types. Some or all of memory 50 can be of a portable type, such as a disk, tape, memory stick, cartridge, or the like. Memory 50 can be at least partially integrated with processor 42 and/or may be in the form of one or more components or units.
Device 40 includes input/output (I/O) devices 56, such as one or more input devices (for example, a keyboard, a mouse or other pointing device, or a voice recognition input subsystem) and one or more output devices (for example, an operator display of the Cathode Ray Tube (CRT), Liquid Crystal Display (LCD), plasma, or Organic Light Emitting Diode (OLED) type, or a printer). Other I/O devices 56 can be included, such as loudspeakers, electronic wired or wireless communication subsystems, and the like.
Processing device 40 is structured to combine the series of observations provided by the SAR pulses and returns via equipment 60 and antenna device 70. SAR data is typically organized in terms of range (cross-track) and azimuth (along-track), where the “track” is the direction of travel of platform 22; the data can be retained in data store 54 of memory 50. This data is typically converted from the time domain to the frequency domain via Fourier transformation or another technique. The phase data of the frequency-domain form of the data may be discarded in some of the more basic implementations, using only the magnitude data for image generation. The basic operation of a synthetic aperture radar system can be enhanced in various ways to collect more information. Most of these methods use the same basic principle of combining many pulses to form a synthetic aperture, but they may involve additional antennas and/or additional processing. Nonlimiting examples of these enhancements include polarimetry that exploits the polarization of interrogating radar signals and/or target materials, interferometry that can be used to improve resolution and/or provide additional mapping information, ultra-wideband techniques that can be used to enhance interrogation penetration, Doppler beam sharpening to improve resolution, and pulse compression techniques.
It should be appreciated that the phase error of the SAR data can be modeled as varying along only one dimension in the Fourier domain. The following mathematical model relates the phase-corrupted Fourier imaging data G̃ to the ideal data G through the one-dimensional phase error function φ_e as in expression (1) that follows:

$$\tilde{G}[k,n] = G[k,n]\,e^{j\phi_e[k]}, \tag{1}$$

where the row index k = 0, 1, . . . , M−1 corresponds to the cross-range frequency index and the column index n = 0, 1, . . . , N−1 corresponds to the range (spatial-domain) coordinate. The SAR image g̃ is formed by applying an inverse one-dimensional (1-D) Fourier transformation, such as an inverse Discrete Fourier Transform (DFT), to each column of G̃: g̃[m, n] = DFT_k^{-1}{G̃[k, n]}. Because the phase error φ_e can be represented as a 1-D function of the index k, defocusing of each column of g̃ can be modeled by applying the same blurring kernel b[m] (where b[m] = DFT_k^{-1}{e^{jφ_e[k]}}) as set forth in expression (2):
$$\tilde{g}[m,n] = g[m,n] \circledast_M b[m], \tag{2}$$

where ⊛_M denotes M-point circular convolution and g is the perfectly-focused image.
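As a minimal numerical illustration of this model (a hypothetical numpy sketch, not part of the embodiment; the array sizes and random test data are arbitrary assumptions), the Fourier-domain view of expression (1) and the circular-convolution view of expression (2) can be checked against each other:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 32                                   # arbitrary image dimensions
g = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # stand-in focused image
phi_e = rng.uniform(-np.pi, np.pi, size=M)      # 1-D phase error across index k

# Expression (1): corrupt the Fourier data along the cross-range frequency index k.
G = np.fft.fft(g, axis=0)
G_tilde = G * np.exp(1j * phi_e)[:, None]
g_tilde = np.fft.ifft(G_tilde, axis=0)

# Expression (2): every column is circularly convolved with the same kernel
# b = DFT^{-1}{exp(j*phi_e)}; circular convolution is a pointwise product of DFTs.
b = np.fft.ifft(np.exp(1j * phi_e))
g_tilde_conv = np.fft.ifft(np.fft.fft(g, axis=0) * np.fft.fft(b)[:, None], axis=0)
assert np.allclose(g_tilde, g_tilde_conv)       # the two views agree
```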
Procedure 120 continues with operation 126 in which a subspace is determined that includes the ideal, focused image. For this determination, let the column vector b ∈ C^M be composed of the values b[m], m = 0, 1, . . . , M−1, representative of the blurring kernel, and let column n of g[m, n], representing a particular range coordinate of a SAR image, be denoted by the vector g[n] ∈ C^M. Let vec{g} ∈ C^{MN} be composed of the concatenated columns g[n], n = 0, 1, . . . , N−1, and let the notation {A}_Ω refer to the matrix formed from a subset of the rows of A, where Ω is a set of row indices. Further, C{b} ∈ C^{M×M} refers to the circulant matrix formed from the vector b as defined by the following expression (3):

$$C\{b\} = \begin{bmatrix} b[0] & b[M-1] & \cdots & b[1] \\ b[1] & b[0] & \cdots & b[2] \\ \vdots & \vdots & \ddots & \vdots \\ b[M-1] & b[M-2] & \cdots & b[0] \end{bmatrix}, \tag{3}$$

so that C{b}x = b ⊛_M x for any x ∈ C^M.
Given this notation, it should be appreciated that SAR autofocus aims to restore a perfectly-focused image g given the defocused image g̃ and any assumptions about the characteristics of the underlying scene. Using expressions (1) and (2), the defocusing relationship in the spatial domain can be represented by expression (4) as follows:

$$\tilde{g}[n] = C\{b\}\,g[n] = F^H D\{e^{j\phi_e}\}\,F\,g[n], \quad n = 0, 1, \ldots, N-1, \tag{4}$$

where F ∈ C^{M×M} is the unitary 1-D DFT matrix with entries F[k, m] = (1/√M) e^{−j2πkm/M}, F^H is the Hermitian transpose of F and represents the inverse DFT, and D{e^{jφ_e}} is the diagonal matrix with the entries of e^{jφ_e[k]}, k = 0, 1, . . . , M−1, along its diagonal. Applying an estimate φ̂ of the phase error to the defocused image produces the restoration of expression (5):

$$\hat{g}(\hat{\phi})[n] = F^H D\{e^{-j\hat{\phi}}\}\,F\,\tilde{g}[n] = C\{f_A\}\,\tilde{g}[n], \tag{5}$$

where f_A is an all-pass correction filter (the filter whose DFT is e^{−jφ̂[k]}), noting that ĝ(φ_e) = g. Theoretically, the estimated phase error φ̂ can be applied directly to the corrupt imaging data G̃ to restore the focused image according to expression (6) that follows:
$$\hat{g}[m,n] = \mathrm{DFT}_k^{-1}\big\{\tilde{G}[k,n]\,e^{-j\hat{\phi}[k]}\big\}. \tag{6}$$
However, solving for the desired image in this manner typically leads to iterative schemes that evaluate some measure of quality in the spatial domain and then perturb the estimate of the phase error function in a manner that increases the image focus. In at least some applications, a more direct, non-iterative approach is desired in which a focusing operator f is directly determined to restore the image. From this focusing operator, it is generally straightforward to obtain the phase error estimate φ̂.
In one such approach, a linear subspace characterization of the focused image g is used, which allows the focusing operator to be computed using a linear algebraic formulation. This subspace is spanned by a basis constructed from the given defocused image g̃. To determine such a subspace, the relationship set forth in expression (5) is generalized to include all correction filters f ∈ C^M, not just the subset of all-pass correction filters f_A. As a result, for a given defocused image g̃, an M-dimensional subspace is obtained that includes the focused image g, as provided in expression (7) that follows:
$$\hat{g}(f) = C\{f\}\,\tilde{g}, \tag{7}$$

where ĝ(f) denotes the restoration formed by applying the focus operator f. This subspace characterization explicitly captures the multichannel condition of SAR autofocus based on the model that each column of the image is defocused by the same blurring kernel. To produce a basis expansion for the subspace in terms of g̃, the standard basis {e_k}, k = 0, 1, . . . , M−1, for C^M is selected (i.e., e_k[m] = 1 if m = k and 0 otherwise), and the correction filter is expressed as provided in expression (8a) that follows:

$$f = \sum_{k=0}^{M-1} f[k]\,e_k. \tag{8a}$$
By generalizing to all f ∈ C^M, a linear framework arises that may not result from initial application of the all-pass condition. Using the linearity property of circular convolution, expression (8b) results:

$$C\{f\}\,\tilde{g} = \sum_{k=0}^{M-1} f[k]\,C\{e_k\}\,\tilde{g}. \tag{8b}$$
From this relationship, any image ĝ in the subspace can be expressed in terms of a basis expansion as set forth in expression (9) as follows:

$$\hat{g}(f) = \sum_{k=0}^{M-1} f[k]\,\varphi^{[k]}(\tilde{g}), \tag{9}$$

where expression (10) defines:

$$\varphi^{[k]}(\tilde{g}) = C\{e_k\}\,\tilde{g}. \tag{10}$$
Because g̃ is given, the basis functions of expression (10) are known for the M-dimensional subspace containing the unknown focused image g. In matrix form, expression (9) can be written as expression (11):

$$\mathrm{vec}\{\hat{g}(f)\} = \Phi(\tilde{g})\,f, \tag{11}$$

where expression (12) defines:

$$\Phi(\tilde{g}) = \big[\mathrm{vec}\{\varphi^{[0]}(\tilde{g})\},\; \mathrm{vec}\{\varphi^{[1]}(\tilde{g})\},\; \ldots,\; \mathrm{vec}\{\varphi^{[M-1]}(\tilde{g})\}\big] \in \mathbb{C}^{MN\times M}, \tag{12}$$

and is designated the basis matrix. The unknown perfectly-focused image is represented in terms of the basis expansion in expression (9) as follows in expression (13):

$$\mathrm{vec}\{g\} = \Phi(\tilde{g})\,f^*, \tag{13}$$

where f* is the true correction filter satisfying ĝ(f*) = g. For expression (13), the matrix Φ(g̃) is known, but g and f* are unknown.
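The basis matrix of expression (12) lends itself to a direct construction, because C{e_k} applied to an image simply circularly shifts each of its columns by k samples. The following is a minimal numpy sketch (the helper name basis_matrix and the Fortran-order vec convention are assumptions for illustration):

```python
import numpy as np

def basis_matrix(g_tilde):
    """Sketch of expression (12): column k of Phi(g_tilde) is
    vec{ C{e_k} g_tilde }, i.e. the defocused image with every column
    circularly shifted by k samples, stacked column-wise (Fortran order)."""
    M, N = g_tilde.shape
    cols = [np.roll(g_tilde, k, axis=0).ravel(order='F') for k in range(M)]
    return np.stack(cols, axis=1)               # shape (M*N, M)
```

With this construction, basis_matrix(g̃) @ f reproduces vec{ĝ(f)} of expression (11) for any filter f, since C{f} = Σ_k f[k] C{e_k} per expression (8b).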
From the subspace definition determined in operation 126, procedure 120 continues with operation 128 in which an image support constraint is imposed. By imposing an image support constraint on the focused image g, the linear system in expression (13) can be constrained sufficiently to solve for the unknown correction filter f*. This constraint assumes that g is approximately zero-valued over a particular set of low-return pixels, as represented by expression (14):

$$g[m,n] = \begin{cases} \xi[m,n], & (m,n) \in \text{the low-return region} \\ g'[m,n], & \text{otherwise,} \end{cases} \tag{14}$$

where ξ[m, n] are low-return pixels (such that |ξ[m, n]| ≈ 0) and g′[m, n] are unknown nonzero pixels. Letting Ω denote the set of indices of vec{g} corresponding to the low-return pixels allows the constraint values to be collected as the vector ξ = {vec{g}}_Ω.
From operation 128, conditional 130 is reached, which tests whether an acceptable support constraint is available. If the test of conditional 130 is true (yes), then procedure 120 proceeds to operation 132. With a zero or near-zero image support constraint, operation 132 provides a direct solution. Applying the spatially-limited constraint of expression (14) to the multichannel framework of expression (13), expression (15) results as follows:

$$\{\Phi(\tilde{g})\}_\Omega\, f^* = \xi, \tag{15}$$

where ξ = {vec{g}}_Ω is the vector of low-return constraint values and {Φ(g̃)}_Ω are the rows of Φ(g̃) that correspond to the low-return constraints. Because |ξ| ≈ 0, expression (15) is approximated by the homogeneous linear system of expression (16):

$$\{\Phi(\tilde{g})\}_\Omega\, f = 0. \tag{16}$$

For this MultiChannel Autofocus (MCA) approach of determining the correction filter, define Φ_Ω(g̃) = {Φ(g̃)}_Ω to be the MCA matrix formed using the constraint set, and suppose Φ_Ω(g̃) has rank M−1. The solution f̂ for expression (16) can then be obtained by determining the unique vector spanning the nullspace of Φ_Ω(g̃) as set forth in expression (17):

$$\hat{f} = \mathrm{Null}\big(\Phi_\Omega(\tilde{g})\big) = \alpha f^*, \tag{17}$$

where α is an arbitrary complex constant. To eliminate the scaling by α, only the Fourier phase of f̂ is used to correct the defocused image according to expression (6), as set forth in expression (18):

$$\hat{\phi}[k] = -\angle\big(\mathrm{DFT}_m\{\hat{f}[m]\}\big). \tag{18}$$

Accordingly, the all-pass condition on f̂ is enforced to determine a unique solution from expression (17).
When the test of conditional 130 is false (no), procedure 120 branches to operation 134. This negative outcome may result, for example, when |ξ[m, n]| ≠ 0 in expression (14) or when the defocused image is contaminated by additive noise, such that the MCA matrix has full column rank. In this case, f̂ cannot be obtained as the null vector of Φ_Ω(g̃). Accordingly, operation 134 applies a Singular Value Decomposition (SVD) process to Φ_Ω(g̃), determining a unique vector that produces the minimum-gain solution (in the l2-sense). The SVD is represented by expression (19) as follows:

$$\Phi_\Omega(\tilde{g}) = \tilde{U}\tilde{\Sigma}\tilde{V}^H, \tag{19}$$

where Σ̃ = diag(σ1, σ2, . . . , σM) is a diagonal matrix of the singular values satisfying σ1 ≥ σ2 ≥ . . . ≥ σM ≥ 0. Because f is an all-pass filter, ∥f∥2 = 1 results. Although it cannot be assumed that the pixels in the low-return region are exactly zero, it is reasonable to require the low-return region to have minimum energy subject to ∥f∥2 = 1. A solution f̂ satisfying expression (20):

$$\hat{f} = \arg\min_{\|f\|_2 = 1} \big\|\Phi_\Omega(\tilde{g})\,f\big\|_2 \tag{20}$$

is given by f̂ = Ṽ_[M], which is the right singular vector corresponding to the smallest singular value of Φ_Ω(g̃), as set forth in G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, 1996, which is hereby incorporated by reference.
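Operations 132 and 134 can both be realized by taking the minimum right singular vector, which coincides with the null vector in the exact rank-(M−1) case. A hypothetical numpy sketch follows (reusing the basis_matrix sketch above; omega holds the vec-indices of the low-return constraint set Ω, and the function names are illustrative):

```python
import numpy as np

def mca_phase_estimate(g_tilde, omega):
    """Sketch of expressions (16)-(20): estimate the phase error from the
    rows of Phi(g_tilde) indexed by the low-return constraint set omega."""
    Phi_omega = basis_matrix(g_tilde)[omega, :]
    # The minimum right singular vector spans the nullspace when Phi_omega
    # has rank M-1 (operation 132) and gives the minimum-gain solution of
    # expression (20) otherwise (operation 134).
    _, _, Vh = np.linalg.svd(Phi_omega)
    f_hat = Vh[-1].conj()
    # Expression (18): keep only the Fourier phase, discarding the scale alpha.
    return -np.angle(np.fft.fft(f_hat))

def apply_phase_correction(g_tilde, phi_hat):
    """Expression (6): remove the estimated phase error in the Fourier domain."""
    G_tilde = np.fft.fft(g_tilde, axis=0)
    return np.fft.ifft(G_tilde * np.exp(-1j * phi_hat)[:, None], axis=0)
```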
From operations 132 and 134, procedure 120 continues with operation 136. In operation 136, the restored SAR image is determined for further use or processing as desired. From operation 136, conditional 140 is reached. Conditional 140 tests whether to continue execution of procedure 120 by acquiring and processing another image. If the test of conditional 140 is true (yes), then procedure 120 returns to operation 122 to repeat various operations and conditionals as appropriate. If the test of conditional 140 is false (no), then procedure 120 halts.
With respect to procedure 120, it should be appreciated that while both the channel responses (i.e., focused image columns) and the input (i.e., blurring kernel) are unknown, it is desired to reconstruct the channel responses from the available output signals (i.e., defocused image columns), analogous to a Blind Multichannel Deconvolution (BMD) approach. In contrast to standard BMD, the filter operator of procedure 120 is described by circular convolution, as opposed to standard discrete-time convolution, and the channel responses g[n], n = 0, 1, . . . , N−1, of procedure 120 are not short-support FIR filters, instead having support over the entire signal length. It should be observed that procedure 120 directly solves for a common focusing operator f (i.e., the inverse of the blurring kernel b) through explicit characterization of the multichannel condition of the SAR autofocus problem by constructing a low-dimensional subspace where the focused image resides. This subspace characterization provides a linear framework through which the focusing operator can be directly determined by constraining a small portion of the focused image to be zero-valued or to correspond to a region of low return. This constraint facilitates solving for the focusing operator from a linear system of equations in a noniterative fashion. In certain implementations, the constraint on the underlying image may be enforced approximately by acquiring Fourier-domain data that are sufficiently oversampled in the cross-range dimension so that the coverage of the image extends beyond the brightly illuminated portion of the scene determined by the antenna pattern, as further described in C. V. Jakowatz, Jr., D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and P. A. Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach, Kluwer Academic Publishers, Boston, 1996.
The MCA approach is typically found to be computationally efficient, and robust in the presence of noise and deviations from the image support assumption. In addition, performance of procedure 120 does not generally depend on the nature of the phase error. It should also be appreciated that general properties of Φ_Ω(g̃) resulting in the solution of procedure 120 follow from the observation that the circulant blurring matrix C{b} is unitary. This result is arrived at using expression (4), where all the eigenvalues of C{b} are observed to have unit magnitude, and the fact that the DFT matrix F is unitary, as follows in expression (21):

$$C\{b\}\,C^H\{b\} = F^H D\{e^{j\phi_e}\}\,F\,F^H D\{e^{-j\phi_e}\}\,F = I. \tag{21}$$
The basis matrix Φ(g̃) has an alternative structure obtained by rewriting expression (7) for a single column as set forth in expression (22):

$$\hat{g}_{[n]}(f) = f \circledast_M \tilde{g}[n] = C\{\tilde{g}[n]\}\,f. \tag{22}$$

Comparing with expression (11), where the left side of the equation is formed by stacking the column vectors ĝ_[n](f), and using expression (22), expression (23) results:

$$\Phi(\tilde{g}) = \begin{bmatrix} C\{\tilde{g}[0]\} \\ C\{\tilde{g}[1]\} \\ \vdots \\ C\{\tilde{g}[N-1]\} \end{bmatrix}. \tag{23}$$
Analogous to expression (12), let Φ(g) be the basis matrix formed from the perfectly-focused image g, i.e., Φ(g) is formed by using g instead of g̃ in expression (12). Likewise, Φ_Ω(g) = {Φ(g)}_Ω is the MCA matrix formed from the perfectly-focused image. From the unitary property of C{b}, the following proposition results (equivalence of singular values): suppose that g̃ = C{b}g; then Φ_Ω(g̃) = Φ_Ω(g)C{b}, and the singular values of Φ_Ω(g) and Φ_Ω(g̃) are identical. Proof for this proposition follows from the assumption:

$$\tilde{g}[n] = b \circledast_M g[n].$$

Therefore, C{g̃[n]} = C{g[n]}C{b}, and from expression (23), expression (24) results:

$$\Phi(\tilde{g}) = \Phi(g)\,C\{b\}, \tag{24}$$

which implies:

$$\{\Phi(\tilde{g})\}_\Omega = \{\Phi(g)\}_\Omega\,C\{b\}.$$

As a result, it follows that:

$$\Phi_\Omega(\tilde{g})\,\Phi_\Omega^H(\tilde{g}) = \Phi_\Omega(g)\,C\{b\}\,C^H\{b\}\,\Phi_\Omega^H(g) = \Phi_\Omega(g)\,\Phi_\Omega^H(g),$$

thus Φ_Ω(g) and Φ_Ω(g̃) have the same singular values. From this proposition, the SVDs of the MCA matrices for g and g̃ can be written as Φ_Ω(g) = UΣV^H and Φ_Ω(g̃) = ŨΣṼ^H, respectively, with a common Σ.
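This proposition is easy to exercise numerically. The following hypothetical check (again reusing the basis_matrix sketch, with arbitrary test sizes and an arbitrary constraint set) confirms that the constrained matrices built from g and g̃ share singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 8
g = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
# A unit-modulus DFT makes C{b} unitary, per expression (21).
b = np.fft.ifft(np.exp(1j * rng.uniform(-np.pi, np.pi, M)))
g_tilde = np.fft.ifft(np.fft.fft(g, axis=0) * np.fft.fft(b)[:, None], axis=0)

omega = np.arange(3 * N)            # an arbitrary fixed set of row indices of Phi
s_g = np.linalg.svd(basis_matrix(g)[omega, :], compute_uv=False)
s_gt = np.linalg.svd(basis_matrix(g_tilde)[omega, :], compute_uv=False)
assert np.allclose(s_g, s_gt)       # identical singular values, per the proposition
```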
The following result demonstrates that the MCA restoration obtained through Φ_Ω(g̃) and g̃ is the same as the restoration obtained using Φ_Ω(g) and g.
Another proposition is directed to equivalence of restorations: suppose that Φ_Ω(g) (or equivalently Φ_Ω(g̃)) has a distinct smallest singular value; then applying the MCA correction filters V_[M] and Ṽ_[M] to g and g̃, respectively, produces the same restoration in absolute value; i.e., expression (25) results:

$$\big|C\{\tilde{V}_{[M]}\}\,\tilde{g}\big| = \big|C\{V_{[M]}\}\,g\big|. \tag{25}$$
Proof for this proposition: expressing Φ_Ω(g̃) = Φ_Ω(g)C{b} in terms of the SVDs of Φ_Ω(g) and Φ_Ω(g̃) results in expression (26):

$$\Phi_\Omega(\tilde{g}) = \tilde{U}\Sigma\tilde{V}^H = U\Sigma V^H C\{b\}. \tag{26}$$

Because of the assumption in the proposition, the right singular vector corresponding to the smallest singular value of Φ_Ω(g̃) is uniquely determined to within a constant scalar factor β of absolute value one, as described in G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, 1996. Expression (27) results:

$$\tilde{V}_{[M]}^H = \beta\, V_{[M]}^H\, C\{b\}, \tag{27}$$

where |β| = 1. Taking the Hermitian transpose of both sides of expression (27) produces:

$$\tilde{V}_{[M]} = \beta^*\, C^H\{b\}\, V_{[M]}.$$

Using the unitary property of C{b} (and noting that β* = β^{−1} because |β| = 1), expression (28) results:

$$V_{[M]} = \beta\, C\{b\}\,\tilde{V}_{[M]}. \tag{28}$$

It follows that:

$$C\{V_{[M]}\}\,g = \beta\, C\{\tilde{V}_{[M]}\}\,C\{b\}\,g = \beta\, C\{\tilde{V}_{[M]}\}\,\tilde{g},$$

and thus C{V_[M]}g and C{Ṽ_[M]}g̃ have the same absolute value because |β| = 1. This proposition demonstrates that applying MCA to the perfectly-focused image or to any defocused image described by expression (4) produces the same restored image (with respect to display of image magnitude), such that the restoration formed using the MCA approach does not depend on the phase error function. Instead, the MCA restoration depends on g and the selection of low-return constraints (i.e., the pixels in g designated to be low-return). It also follows from this proposition that it is sufficient to examine the perfectly-focused image to determine the conditions under which unique restorations are possible using MCA.
In one case of interest, Ω corresponds to a set of low-return rows L = {l1, l2, . . . , lR}. The consideration of row constraints matches a practical case of interest where the attenuation due to the antenna pattern is used to satisfy the low-return pixel assumption, with the image model of expression (29):

$$g[m,n] = \begin{cases} \xi[m,n], & m \in L \\ g'[m,n], & m \notin L, \end{cases} \tag{29}$$

where |ξ[m, n]| ≈ 0. In this case, Φ_Ω(g) has a structure that can be exploited for efficient computation in the typical case. This form also allows the necessary conditions for a unique correction filter to be precisely determined.
To explicitly construct the MCA matrix in this case, expression (7) results in expression (30) as follows:

$$g^T = \tilde{g}^T\, C^T\{f^*\}, \tag{30}$$
where T denotes the transpose operator. Transposing the images represents the low-return rows of g as column vectors, which leads to a relationship in the form of expression (16) in which Φ_Ω(g̃) is explicitly defined. Accordingly, expression (31) results:
$$C^T\{f\} = \big[f_F,\; C\{e_1\}f_F,\; \ldots,\; C\{e_{M-1}\}f_F\big], \tag{31}$$

where C{e_l} is the l-component circulant-shift matrix, and expression (32) defines:

$$f_F[m] = f[\langle -m \rangle_M], \quad m = 0, 1, \ldots, M-1, \tag{32}$$

a flipped version of the correction filter (⟨n⟩_M denotes n modulo M). Using expressions (30) and (31), the l-th row of g is defined by expression (33) as:

$$(g^T)_{[l]} = \tilde{g}^T\, C\{e_l\}\, f_F^*. \tag{33}$$
Note that multiplication with the matrix C{e_l} in the expression above results in an l-component left circular shift along each row of g̃^T. The relationship in expression (33) shows how the MCA matrix Φ_Ω(g̃) can be constructed given the image support constraint in expression (29). For the low-return rows satisfying (g^T)_[l_j] ≈ 0, the relation of expression (34) is set forth as follows:

$$(g^T)_{[l_j]} = \tilde{g}^T\, C\{e_{l_j}\}\, f_F^* \approx 0, \tag{34}$$

for j = 1, 2, . . . , R. Applying expression (34) for all of the low-return rows simultaneously results in expression (35):

$$\Phi_L(\tilde{g})\, f_F^* \approx 0, \qquad \Phi_L(\tilde{g}) = \begin{bmatrix} \tilde{g}^T C\{e_{l_1}\} \\ \tilde{g}^T C\{e_{l_2}\} \\ \vdots \\ \tilde{g}^T C\{e_{l_R}\} \end{bmatrix}, \tag{35}$$

where (with abuse of notation) Φ_L(g̃) ∈ C^{NR×M} is the MCA matrix for the row constraint set L. In this case, Φ_L plays the same role as Φ_Ω in the general case. Thus, the MCA matrix is formed by stacking shifted versions of the transposed defocused image, where the shifts correspond to the locations of the low-return rows in the perfectly-focused image. Determining the null vector (or minimum right singular vector) of Φ_L(g̃) as defined in expression (35) produces a flipped version of the correction filter. The correction filter f can be obtained by appropriately shifting the elements of f_F according to expression (32). The reason for considering the flipped form in expression (35) is that it provides a structure well-suited to efficiently computing f if desired.
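Putting expressions (32) through (35) together, the row-constraint form of MCA admits a compact implementation. The following numpy sketch (the function name and the use of a plain dense SVD are illustrative assumptions; expression (40) below offers a cheaper route for large images) stacks the shifted transposed image, recovers the flipped filter, and applies the phase correction of expressions (18) and (6):

```python
import numpy as np

def mca_autofocus_rows(g_tilde, low_rows):
    """Sketch of row-constraint MCA: g_tilde is the (M, N) defocused image,
    low_rows holds the indices l_1..l_R of assumed low-return rows."""
    M, N = g_tilde.shape
    # Expression (35): stack g_tilde^T C{e_l}, i.e. an l-sample left
    # circular shift along each row of g_tilde^T, for every low-return row l.
    Phi_L = np.vstack([np.roll(g_tilde.T, -l, axis=1) for l in low_rows])
    # Null vector / minimum right singular vector gives the flipped filter f_F.
    _, _, Vh = np.linalg.svd(Phi_L)
    f_F = Vh[-1].conj()
    f = f_F[(-np.arange(M)) % M]            # undo the flip of expression (32)
    phi_hat = -np.angle(np.fft.fft(f))      # expression (18)
    G_tilde = np.fft.fft(g_tilde, axis=0)
    return np.fft.ifft(G_tilde * np.exp(-1j * phi_hat)[:, None], axis=0)  # expression (6)
```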
To determine necessary conditions for a unique and correct solution of the MCA expression (16), the model in expression (29) is restricted to low-return rows that are identically zero: ξ[m, n] = 0. From the previous propositions, the conditions for a unique solution to expression (16) can be determined using Φ_L(g) in place of Φ_L(g̃). This in turn is equivalent to requiring Φ_L(g) to be a rank M−1 matrix.
As a further proposition, consider the image model g[m, n] = 0 for m ∈ L and g[m, n] = g′[m, n] for m ∉ L; then a necessary condition for MCA to produce a unique and correct solution to the autofocus problem is set forth in expression (36):

$$\mathrm{rank}(g') \ge \frac{M-1}{R}. \tag{36}$$

As proof, note that:

$$\mathrm{rank}\big(\tilde{g}^T C\{e_{l_j}\}\big) = \mathrm{rank}(\tilde{g}) = \mathrm{rank}(g')$$

because C{e_{l_j}} is invertible; stacking the R blocks of Φ_L(g̃) per expression (35) therefore gives:

$$\mathrm{rank}\big(\Phi_L(\tilde{g})\big) \le R\,\mathrm{rank}(g').$$
Therefore, a necessary condition for rank(Φ_L(g)) = M−1 is rank(g′) ≥ (M−1)/R. Furthermore, note that the identity filter f_id = [1, 0, . . . , 0]^T is always a solution to expression (16) for g as defined in the proposition statement: Φ_L(g)f_id = 0, because applying f_id to g returns the same image g, where all the pixels in the low-return region are zero by assumption. Thus, the unique solution for expression (16) is also the correct solution to the autofocus problem. Noting that M = R + L, where L here denotes the number of rows in the region of support (ROS) of g′, and using the condition of expression (36), the minimum number of zero-return rows R required to achieve a unique solution as a function of the rank of g′ is set forth by expression (37):

$$R \ge \frac{M-1}{\mathrm{rank}(g')}. \tag{37}$$

The condition rank(g′) = min(L, N) usually holds, with the exception of degenerate cases where the rows or columns of g′ are linearly dependent. Because rank(g′) ≤ min(L, N), expression (37) implies expression (38) as follows:

$$R \ge \frac{M-1}{\min(L, N)}. \tag{38}$$

The condition in expression (38) provides a rule for determining the minimum R (the minimum number of low-return rows required) as a function of the dimensions of the ROS in the general case where ξ[m, n] ≠ 0.
Due to the structure of Φ_L(g̃), it is possible to efficiently compute the minimum right singular vector solution in expression (20) even when the formation of the MCA matrix according to expression (35) involves many low-return rows, so that Φ_L(g̃) has NR rows by M columns. As an example, for a 1000 by 1000 pixel image with 100 low-return rows, Φ_L(g̃) is a 100000×1000 matrix. In such a case, it is often not practical to construct and decompose such a large matrix directly. However, the right singular vectors of Φ_L(g̃) can be determined by solving for the eigenvectors of the matrix of expression (39) that follows:

$$B_L(\tilde{g}) = \Phi_L^H(\tilde{g})\,\Phi_L(\tilde{g}). \tag{39}$$
Without exploiting the structure of the MCA matrix, forming B_L(g̃) ∈ C^{M×M} and computing its eigenvectors requires O(NRM²) operations. Using expression (35), the matrix product of expression (39) can be set forth as in expression (40):

$$B_L(\tilde{g}) = \sum_{j=1}^{R} C^T\{e_{l_j}\}\;\tilde{g}^*\,\tilde{g}^T\;C\{e_{l_j}\}, \tag{40}$$

where g̃* = (g̃^T)^H (i.e., all of the entries of g̃ are conjugated). Let H(g̃) = g̃* g̃^T. The effect of C^T{e_{l_j}} and C{e_{l_j}} in expression (40) is simply to circularly shift the rows and columns of H(g̃) by l_j components; thus H(g̃) need be computed only once, at a cost of O(NM²) operations, and B_L(g̃) can be assembled from R shifted copies of H(g̃), substantially reducing the computation.
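The shift structure can be exercised directly. Below is a hypothetical numpy sketch of the reduced-cost product in expression (40) (the function name and looping strategy are illustrative):

```python
import numpy as np

def B_L_fast(g_tilde, low_rows):
    """Sketch of expression (40): compute H = g_tilde^* g_tilde^T once,
    then accumulate doubly shifted copies, one per low-return row."""
    H = g_tilde.conj() @ g_tilde.T              # O(N M^2), done once
    B = np.zeros_like(H)
    for l in low_rows:
        # C^T{e_l} H C{e_l}: circularly shift rows and columns of H by l.
        B += np.roll(np.roll(H, -l, axis=0), -l, axis=1)
    return B                                    # equals Phi_L^H Phi_L
```

The eigenvector of B_L(g̃) associated with its smallest eigenvalue then matches the minimum right singular vector of Φ_L(g̃).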
As an option, the vector-space framework of the MCA approach allows sharpness-metric optimization to be incorporated as a regularization procedure. The use of sharpness metrics can improve the solution when multiple singular values of Φ_Ω(g̃) are close to zero. Such a condition can occur if the focused SAR image is very sparse (effectively low rank). In addition, metric optimization can be beneficial in cases where the low-return assumption |ξ[m, n]| ≈ 0 holds only weakly, or where additive noise with large variance is present. In these nonideal scenarios, the MCA framework provides an approximate reduced-dimension solution subspace, where the optimization may be performed over a small set of parameters.
Suppose that instead of knowing that the image pixels in the low-return region are exactly zero, it is assumed that expression (41) applies:

$$\big\|\{\mathrm{vec}\{g\}\}_\Omega\big\|_2^2 \le c \tag{41}$$

for some specified constant c. Then, the MCA condition can be represented by expression (42) as follows:

$$\big\|\Phi_\Omega(\tilde{g})\,f\big\|_2^2 \le c\,\|f\|_2^2. \tag{42}$$
The true correction filter f* satisfies expression (42). The goal of using sharpness optimization is to determine the best f (in the sense of producing an image with maximum sharpness) that satisfies expression (42). To derive a reduced-dimension subspace for performing the optimization, where expression (42) holds for all f in the subspace, first determine σ_{M−K+1}, defined as the largest singular value of Φ_Ω(g̃) satisfying σ_k² ≤ c. Then express f in terms of the basis formed from the right singular vectors of Φ_Ω(g̃) corresponding to the K smallest singular values, i.e., expression (43) applies:

$$f = \sum_{k=M-K+1}^{M} \upsilon_k\,\tilde{V}_{[k]}, \tag{43}$$

where υ_k is a basis coefficient corresponding to the basis vector Ṽ_[k]. To demonstrate that every element of the K-dimensional subspace in expression (43) satisfies expression (42), define:

$$S_K^* = \mathrm{span}\big\{\tilde{V}_{[M-K+1]},\; \tilde{V}_{[M-K+2]},\; \ldots,\; \tilde{V}_{[M]}\big\},$$

and note that expression (44) applies as follows:

$$\big\|\Phi_\Omega(\tilde{g})\,f\big\|_2^2 = \big\|\tilde{\Sigma}\upsilon\big\|_2^2 = \sum_{k=M-K+1}^{M} \sigma_k^2\,|\upsilon_k|^2 \;\le\; \sigma_{M-K+1}^2\,\|f\|_2^2 \;\le\; c\,\|f\|_2^2, \tag{44}$$

where υ = Ṽ^H f. In the second equality, the unitary property of Ṽ is used to obtain ∥f∥2 = ∥υ∥2, and also f = Ṽυ, from which it is observed that f ∈ S_K* implies υ1 = υ2 = . . . = υ_{M−K} = 0.
It should be appreciated that the indicated subspace does not contain all f satisfying expression (42); however, it provides an optimal K-dimensional subspace in the following sense: for any subspace S_K with dim(S_K) = K, expression (45) applies as follows:

$$\max_{f \in S_K,\; \|f\|_2=1} \big\|\Phi_\Omega(\tilde{g})\,f\big\|_2 \;\ge\; \max_{f \in S_K^*,\; \|f\|_2=1} \big\|\Phi_\Omega(\tilde{g})\,f\big\|_2. \tag{45}$$

Accordingly, this subspace is a preferred K-dimensional subspace in that every element is feasible (i.e., satisfies expression (42)), and among all K-dimensional subspaces it minimizes the maximum energy in the low-return region. Substituting the basis expansion of expression (43) for f into expression (7) allows g to be expressed in terms of an approximate reduced-dimension basis as represented by expression (46):

$$g_d = \sum_{k=1}^{K} d_k\,\psi^{[k]}, \tag{46}$$
where expression (47) defines:
$$\psi^{[k]} = C\{\tilde{V}_{[M-K+k]}\}\,\tilde{g}, \tag{47}$$

d_k = υ_{M−K+k}, and g_d is the image parameterized by the basis coefficients d = [d1, d2, . . . , dK]^T. To obtain the best ĝ that satisfies the data-consistency condition, a particular sharpness metric is optimized over the coefficients d, where the number of coefficients K ≪ M.
To perform metric optimization, define the metric objective function C: C^K → R as the mapping from the basis coefficients d = [d1, d2, . . . , dK]^T to a sharpness cost as set forth by expression (48):

$$C(d) = \sum_{m,n} S\big(\bar{I}_d[m,n]\big), \tag{48}$$

where I_d[m, n] = |g_d[m, n]|² is the intensity of each pixel, Ī_d[m, n] = I_d[m, n]/γ is the normalized intensity (γ being a normalizing constant, such as the total intensity), and S(·) is an image sharpness metric operating on the normalized intensity of each pixel. An example of a commonly used sharpness metric in SAR is the image entropy:

$$S_H\big(\bar{I}_d[m,n]\big) = -\bar{I}_d[m,n]\,\ln \bar{I}_d[m,n];$$

further, a gradient-based search can be used to determine a local minimizer of C(d), as described in D. G. Luenberger, Linear and Nonlinear Programming, Kluwer Academic Publishers, Boston, 2003. The k-th element of the gradient ∇_d C(d) is determined using expression (49), which follows from applying the chain rule to expression (48) and in which * denotes the complex conjugate. It should be appreciated that expression (49) can be applied to a variety of sharpness metrics. Considering the entropy example, the derivative of the sharpness metric is ∂S_H(Ī_d[m,n])/∂Ī_d[m,n] = −(1 + ln Ī_d[m,n]).
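As a concrete illustration of expression (48) with the entropy metric (a minimal numpy sketch; normalizing by the total intensity is the assumption noted above):

```python
import numpy as np

def entropy_cost(g_d):
    """Sketch of expression (48) with the entropy metric S_H:
    C(d) = -sum( I_bar * ln I_bar ) over the normalized intensities."""
    I = np.abs(g_d) ** 2                        # pixel intensities I_d[m, n]
    I_bar = I / I.sum()                         # normalized intensity
    return -np.sum(I_bar * np.log(I_bar + 1e-30))   # small floor avoids log(0)
```

A sharper, more focused image concentrates its intensity in fewer pixels and therefore yields a lower entropy cost, which is why a gradient-based search over the K coefficients d seeks a local minimizer of this function.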
In applying procedure 120, one way to satisfy the image support assumption used in MCA is to exploit the SAR antenna pattern. In spotlight-mode SAR, the area of terrain that can be imaged depends on the antenna footprint, i.e., the illuminated portion of the scene corresponding to the projection of the antenna main beam onto the ground plane. There is low return from features outside of the antenna footprint. The fact that the SAR image is essentially spatially limited, due to the profile of the antenna beam pattern, suggests that the autofocus technique can be applied in spotlight-mode SAR imaging with a sufficiently high sampling rate.
The amount of area represented in a SAR image, the image Field Of View (FOV), is determined by how densely the analog Fourier transform is sampled. As the density of the sampling is increased, the FOV of the image increases. For a spatially-limited scene, there is a sampling density at which the image coverage equals the support of the scene (determined by the width of the antenna footprint). If the Fourier transform is sampled above this rate, the FOV of the image extends beyond the finite support of the scene, and the result resembles a zero-padded or zero-extended image. By selecting the Fourier-domain sampling density such that the FOV of the SAR image extends beyond the brightly illuminated portion of the scene, the focused digital image is (effectively) spatially limited, allowing the use of the autofocus approach of procedure 120.
$$w(x) = \mathrm{sinc}^2\big(W_x^{-1}\,x\big), \tag{50}$$

where expression (51) defines the main-lobe width parameter W_x in terms of the radar wavelength λ0, the range R0 from the radar platform to the center of the scene, and the length D of the antenna aperture; x is the cross-range coordinate. Near the nulls of the antenna pattern at x = ±W_x, the attenuation will be very large, producing low-return rows in the focused SAR image consistent with expression (29). Using the model in expression (50), the Fourier-domain sampling density should be large enough that the FOV of the SAR image is equal to or greater than the width of the main lobe of the sinc window: X ≥ 2W_x. In spotlight-mode SAR, the Fourier-domain sampling density in the cross-range dimension is determined by the pulse repetition frequency (PRF) of the radar. For a radar platform moving with constant velocity, increasing the PRF decreases the angular interval between pulses (i.e., the angular increment between successive look angles), thus increasing the cross-range Fourier-domain sampling density and FOV. Alternatively, keeping the PRF constant and decreasing the platform velocity also increases the cross-range Fourier-domain sampling density, an effect that occurs in airborne SAR when the aircraft is flying into a headwind. In many cases, the platform velocity and PRF are such that the image FOV is approximately equal to the main-lobe width defined by expression (50). In such cases, the final images are typically cropped to half the main-lobe width of the sinc window because the edge of the processed image will suffer from some amount of aliasing. Per procedure 120, the additional information from the discarded portions of the image can be used for SAR image autofocus.
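For instance, under the sinc-squared window model of expression (50), candidate low-return rows can be designated where the modeled attenuation is strong. A hypothetical sketch follows (the 30 dB threshold and the normalized cross-range grid are arbitrary choices; numpy's normalized sinc has its first nulls at ±1, matching x = ±W_x):

```python
import numpy as np

M = 512
x = np.linspace(-1.2, 1.2, M)        # cross-range coordinate in units of W_x (FOV > 2 W_x)
w = np.sinc(x) ** 2                  # antenna window model of expression (50)
low_rows = np.flatnonzero(w < 1e-3)  # rows attenuated by ~30 dB or more near the nulls
```

These indices could then serve as the low_rows argument of the row-constraint MCA sketch given earlier.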
Another instance where the image support assumption can be exploited is when prior knowledge of low-return features in the SAR image is available. Examples of such features include smooth bodies of water, roads, and shadowy regions. If the image defocusing is not very severe, then low-return regions can be estimated using the defocused image. Inverse SAR (ISAR) provides a further application for MCA. In ISAR images, pixels outside of the support of the imaged object (e.g., aircraft, satellites) correspond to a region of zero return. Thus, given an estimate of the object support, MCA can be applied.
The following experimental examples are provided for illustrative purposes and are not intended to limit scope of the inventions of the present application or otherwise be restrictive in character.
where the “noise” in SNR_out refers to the error in the magnitude of the reconstructed image ĝ relative to the perfectly-focused image g, and should not be confused with additive noise (which is considered later).

To evaluate the robustness of the procedure 120 approach with respect to the low-return assumption, a series of experiments was performed using an idealized window function and additive noise with standard deviation σ_p.
Many other embodiments of the present application are envisioned. For example, in one embodiment, a technique includes: acquiring synthetic aperture radar data representative of a defocused form of an image, designating an image region as having a selected radar return characteristic, determining a focus operator as a function of the image region and a data subspace including a restored form of the image, and applying the focus operator to the data to generate information representative of the restored form of the image.
In another example, the embodiment includes a synthetic aperture radar interrogation platform comprising: means for traveling above ground, means for acquiring synthetic aperture radar data representative of a defocused form of an image, means for designating an image region as having a selected radar return characteristic, means for determining a focus operator as a function of the image region and a data subspace including a restored form of the image, and means for applying the focus operator to the data to generate information representative of the restored form of the image.
In still another example, a further embodiment of the present application includes: processing synthetic aperture radar data representative of a defocused image, defining an image processing constraint corresponding to an image region expected to have a low radar return, and focusing the defocused image as a function of the image processing constraint and the data.
A further example comprises: a synthetic aperture radar processing device including means for processing synthetic aperture radar data representative of a defocused image, means for defining an image processing constraint corresponding to an image region expected to have a low radar return, and means for focusing the defocused image as a function of the image processing constraint and the data.
Another example is directed to: a device carrying processor-executable operating logic to process synthetic aperture radar data representative of a defocused image that includes defining an image support constraint corresponding to an image region expected to have a low radar return and focusing the defocused image as a function of the image support constraint and a subspace including a focused form of the defocused image.
Any theory, mechanism of operation, proof, or finding stated herein is meant to further enhance understanding of the present invention and is not intended to make the present invention in any way dependent upon such theory, mechanism of operation, proof, or finding. It should be understood that while the use of the word preferable, preferably or preferred in the description above indicates that the feature so described may be more desirable, it nonetheless may not be necessary and embodiments lacking the same may be contemplated as within the scope of the invention, that scope being defined by the claims that follow. In reading the claims it is intended that when words such as “a,” “an,” “at least one,” “at least a portion” are used there is no intention to limit the claim to only one item unless specifically stated to the contrary in the claim. Further, when the language “at least a portion” and/or “a portion” is used the item may include a portion and/or the entire item unless specifically stated to the contrary. While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only selected embodiments have been shown and described and that all changes, modifications and equivalents that come within the spirit of the invention as defined herein or by any of the following claims are desired to be protected.
The present application claims the benefit of U.S. Provisional Patent Application No. 60/922,106, filed Apr. 6, 2007, which is hereby incorporated by reference herein.
The present invention was made with Government assistance under National Science Foundation (NSF) Grant Contract Number CCR 0430877. The Government has certain rights in this invention.