Microwave and millimeter wave wide-band three-dimensional (3-D) synthetic aperture radar (SAR)-based imaging techniques have demonstrated tremendous usefulness for nondestructive evaluation (NDE) applications for industrial, scientific, and medical imaging. For example, such techniques are particularly useful for 3-D imaging of low-contrast dielectric media and in security applications. Typically, measurements are performed by raster scanning a probe on a uniform 2-D grid. To achieve optimum resolution and image quality, however, a large quantity of measurements must be obtained to image even a small area. Unfortunately, conventional scanning techniques require a relatively long time to scan and obtain an image. For example, the time needed to perform the measurements typically ranges from tens of minutes to several hours depending on the size of the structure being imaged and the operating frequency. As a result, nondestructive testing of large and critical structures (e.g., aircraft, bridges, space vehicles, and the like) cannot utilize real-time imaging.
A method to form SAR images as quickly as possible is desired.
Briefly, aspects of the invention permit SAR images to be quickly generated while maintaining an acceptable level of resolution. One way to achieve this is to manually and nonuniformly sample wide-band reflection measurement data over the sample under test while simultaneously producing a SAR image from the data as it is gathered. This enables the production of complete SAR images using only a fraction of the conventionally required measurement data because the user may intelligently stop the measurement once an image is deemed satisfactory. Reducing the amount of measured data yields a commensurate time savings in data acquisition. To assist the user during the data acquisition process, a fast 3-D wide-band SAR algorithm is needed that produces 3-D SAR images in real-time, informing the user of the progress of the scan as it proceeds. Furthermore, a reconstruction algorithm is needed to post-process the data with the objective of optimization, resulting in high-quality images with, for example, considerably lower background noise/clutter.
In an aspect, a wideband synthetic aperture radar (SAR) imaging system includes a probe that has an aperture through which a signal, such as an electromagnetic signal, is transmitted incident to an object located in a medium of interest remotely from the probe. Also, the probe receives through the aperture a plurality of nonuniformly sampled reflected signals from the object as the probe moves in a measurement plane located a predetermined distance from the object. The system also includes a memory and a processor. The memory stores measurement data representative of the reflected signals collected by the probe and the processor executes a plurality of computer-executable instructions for a SAR-based reconstruction algorithm. The instructions include instructions for performing a spectral estimation based on the measurement data, instructions for transforming a frequency component of the spectral estimation as a function of the medium of interest, and instructions for obtaining a three-dimensional SAR image from the transformed spectral estimation data using Fourier transforms. The system further includes a display responsive to the processor for presenting the three-dimensional SAR image to a user.
A method embodying aspects of the invention generates a three-dimensional image of a specimen under test (SUT). The method includes transmitting, via a probe, a signal within a predetermined operating bandwidth and tracking nonuniform two-dimensional movement of the probe within a measurement plane remote from the SUT. In addition, the method includes receiving, via the probe, signals reflected from the SUT during the movement of the probe and storing reflection coefficient data based on the reflected signals at distributed measurement positions within the measurement plane by recording the signals at discrete frequencies throughout the operating bandwidth. In a further step, the method includes processing the stored data into a wideband, three-dimensional (3-D) synthetic aperture image by implementing a 3-D SAR algorithm. The method also includes displaying the 3-D SAR image to a user in real-time, further processing the 3-D SAR image to perform an objective optimization, and displaying the resulting 3-D SAR image, having reduced errors, to the user.
In another aspect, a wideband synthetic aperture radar (SAR) imaging system comprises a signal source, a transceiver antenna coupled to the signal source, a memory, a processor, and a display. The signal source generates a signal with a predetermined operating bandwidth that is transmitted through an aperture of the antenna. The transmitted signal is incident to an object located in a medium and the antenna receives a plurality of nonuniformly sampled reflected signals from the object through the aperture as the antenna moves nonuniformly in a plane located a predetermined distance from the object. The memory stores signal data comprising nonuniformly sampled reflected signals collected at the aperture and the processor executes a plurality of computer-executable instructions for a real-time, post-processing, reconstruction algorithm. The instructions include estimating a two-dimensional spatial spectrum based on the signal data to provide a uniformly sampled spectrum, estimating the uniformly sampled spectrum to remove or minimize image artifacts, reconstructing uniformly sampled data from nonuniformly sampled data to remove or minimize image artifacts, forming a SAR image of the object from the estimated uniform spectrum, dividing the reconstructed SAR image into a plurality of segments, applying an R-SAR transform to each of the segments, and filtering and reconstructing the data for each segment and summing each filtered segment. The display presents the three-dimensional SAR image in real-time to a user.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Other features will be in part apparent and in part pointed out hereinafter.
For a better understanding of the aforementioned aspects of the invention as well as additional aspects and embodiments thereof, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
In general, in one embodiment, a unique system for imaging a scene of interest (e.g., an object) is described. Reconstructing a complex signal from all targets within the scene of interest from nonuniform measurement samples occurs in two parts. In a first part of the imaging process, a two-dimensional (2-D) positioning system is utilized to monitor the movements of an imaging probe as the imaging probe performs a scan in a plane. A processor tracks the position of the probe as the probe scans and collects data. The scanning process is performed free-hand by a user (who is holding and using the imaging probe) and utilizes user feedback from the real-time SAR image formation as to whether to continue or terminate the scanning process.
A second part of the imaging process includes post-processing, where the collected, randomly positioned measurements are processed to reconstruct the 3-D SAR image with optimum resolution and signal-to-noise ratio (SNR). For example, PP2 forms an intermediate 3-D SAR image (i.e., the data is transformed temporarily from the data domain to the image domain) and then the SAR image is segmented. These segments are then individually transformed from the image domain back to the measurement domain, where each image segment is represented as partial data. This transform is performed using the unique 3-D Reverse-SAR (R-SAR) transform. The partial data are individually and optimally reconstructed according to their own spatial bandwidths using an optimization process, such as an error minimization process (e.g., minimizing residual error). The multi-band partial data segments are subsequently recombined and processed using a 3-D SAR processor to produce the final SAR image.
Table 1 provides a description of the different SAR processors that are discussed throughout this document. For example, the RT1 processor simplifies real-time data management by including a spectral estimation step consisting of a fast Fourier transform (FFT) of data that is digitally stored at the nearest uniform grid point. Alternatively, the RT2 processor performs an FFT on the raw sample points, without assigning the points to a uniform grid. Both processors take only a fraction of a second to produce and render a SAR image, enabling quick inspection of a specimen under test (SUT).
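By way of illustration, the following minimal numpy sketch contrasts the two real-time approaches for a single frequency. The function names, the argument conventions (pts as an N-by-2 array of (x, y) positions, d as the corresponding complex reflection values), and the direct nonuniform DFT sum standing in for the NUFFTH are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def rt1_spectrum(pts, d, dx, dy, nx, ny):
    """RT1-style estimate (hypothetical helper): assign each nonuniform sample to its
    nearest uniform grid point, accumulate, then take an ordinary 2-D FFT."""
    grid = np.zeros((nx, ny), dtype=complex)
    ix = np.clip(np.round(pts[:, 0] / dx).astype(int), 0, nx - 1)
    iy = np.clip(np.round(pts[:, 1] / dy).astype(int), 0, ny - 1)
    np.add.at(grid, (ix, iy), d)          # coincident samples accumulate on the grid
    return np.fft.fft2(grid)

def rt2_spectrum(pts, d, kx, ky):
    """RT2-style estimate (hypothetical helper): a direct nonuniform DFT of the raw
    sample points; an NUFFT^H would replace this sum for real-time speed."""
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    phase = np.exp(-1j * (KX[..., None] * pts[:, 0] + KY[..., None] * pts[:, 1]))
    return phase @ d                      # sum_n d_n * exp(-j(kx*x_n + ky*y_n))
```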
Improved spectral estimation techniques require further processing time but reduce undesirable aspects of SAR images inherent in processors such as RT1 and RT2, such as uneven brightness and image artifacts. The PP1 processor reduces these undesirable aspects by generating a spectral estimation that decreases the residual error between the original data and the inverse 2-D nonuniform fast Fourier transform (NUFFT) of the estimate of the spectrum. Because of this step, this processor is suitable for rendering SAR images of higher quality than those produced by the RT1 and RT2 processors.
A final method of spectral estimation, exemplified in processors PP2 and PP3, is suitable for providing vastly improved SAR images in situations where the objects to be scanned within the SUT lie at different distances from the measurement plane. PP2 renders an improved image by obtaining an intermediate SAR image and then comparing that image with the final processed result to reduce error. PP3, on the other hand, is suitable for producing the most improved SAR images. By comparing a component of the forward SAR algorithm with a component of the reverse SAR algorithm during spectral estimation, and assigning each sample of the spectrum its own bandwidth, this processor not only produces the most desirable SAR images when multiple objects lie at different distances from the measurement plane, but also performs this task with less computational complexity than previously known methods.
After spectral estimation 310, the next step involves determining the wavenumber (kz), which is related to a frequency, f, by the following dispersion relation abbreviated as kz←kz(f) 315:
where v is the speed of light in the medium. The term α equals one for bistatic measurements and two for monostatic (reflection) measurements. This results in nonuniform sampling of the image spectrum, D(kx,ky,kz) 320, along kz.
Finally, a 1-D inverse Fourier transform over range (z) 325 is performed, resulting in the partially processed image S(kx,ky,z) 330. Next, a 2-D inverse fast Fourier transform (FFT) over the spatial/cross-range coordinates (x,y) 335 results in a high-resolution volumetric image, s(x,y,z) 340. To be able to use the FFT along the range, and consequently speed up SAR image formation, typical implementations use Stolt interpolation (i.e., linear, spline, and the like) to generate a uniform sampling of the spectrum along the range prior to the 1-D inverse fast Fourier transform 325. However, in an embodiment, the Stolt interpolation may be replaced by the nonuniform FFT (NUFFT) to provide a faster and more accurate SAR image 345. More specifically, the 1-D inverse adjoint NUFFT (INUFFTH), where (H) represents the adjoint, is used. Thus, the 1-D INUFFTH transforms nonuniform frequency (kz) samples to uniform spatial (z) samples.
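To make the processing chain concrete, the following is a minimal numpy sketch of these steps under stated assumptions: the data are already on a uniform (x, y) grid, the dispersion relation is taken in its standard ω-k form kz = √((2απf/v)² − kx² − ky²) (consistent with the propagating-wave bandwidth Bp = 2απf/v used later), and a direct summation over the nonuniform kz samples replaces the 1-D INUFFTH. The function name and signature are illustrative only.

```python
import numpy as np

def omega_k_sar(d, dx, dy, freqs, z, v=3e8, alpha=2.0):
    """Sketch of the omega-k SAR steps for uniformly gridded data d(x, y, f).

    d     : complex array of shape (Nx, Ny, Nf)
    dx,dy : spatial sample spacing in the measurement plane [m]
    freqs : measurement frequencies [Hz]; z : array of range values [m]
    alpha : 1 for bistatic, 2 for monostatic (reflection) measurements
    """
    Nx, Ny, Nf = d.shape

    # spectral estimation: 2-D FFT over the measurement plane at every frequency
    D = np.fft.fft2(d, axes=(0, 1))
    kx = 2 * np.pi * np.fft.fftfreq(Nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(Ny, dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")

    # dispersion relation kz <- kz(f), assumed omega-k form
    k = alpha * 2 * np.pi * np.asarray(freqs) / v
    kz2 = k[None, None, :] ** 2 - KX[..., None] ** 2 - KY[..., None] ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    D = np.where(kz2 > 0, D, 0)            # suppress evanescent components

    # 1-D inverse transform over the nonuniform kz samples, evaluated at the z grid
    # (memory-heavy direct sum; an INUFFT^H would be used in practice)
    phase = np.exp(1j * kz[..., None] * np.asarray(z)[None, None, None, :])
    S = np.sum(D[..., None] * phase, axis=2)       # S(kx, ky, z)

    # 2-D inverse FFT over cross-range gives the volumetric image s(x, y, z)
    return np.fft.ifft2(S, axes=(0, 1))
```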
The spatial resolution (δx) of the final SAR image is highly dependent upon the imaging system. Particularly, the spatial resolution is dependent upon the aperture size (a), the beamwidth (σb), the distance from the target to the aperture (h), and the sampling of the scattered field.
For example, the first method (RT2) is a Fourier integration, which is a fast rudimentary (e.g., direct) spectral estimation technique and not a reconstruction technique. Each sample may optionally be weighted according to the partial area of the sample on the aperture. The partial area corresponding to every sample may be found from the polygons of a Voronoi diagram. Polygons exceeding the aperture are cropped precisely to the aperture. The data (d′) is sampled discretely and nonuniformly at Nxy points weighted by this partial area, an, when performing the nonuniform discrete Fourier transform (NDFT). In summation form the equation may be expressed as:
This summation operation may be performed rapidly and accurately by using the computationally efficient 2-D NUFFTH to transform nonuniform spatial samples to a uniformly sampled spatial spectrum. The algorithm is faster without computing the partial weights. For this reason, this method may be desired for real-time applications. Unfortunately, the spectral estimation degrades rapidly for low sample densities because the spectral estimation is bounded only by |kx|≤π/Δx and |ky|≤π/Δy (i.e., the spectrum is not sufficiently bounded). This may result in high levels of image artifacts. However, the resolution of the SAR image does not degrade, for the same reason. Therefore, in practice, some real-time imaging systems may benefit from this method by preserving the resolution and computational speed at the cost of increased image artifacts.
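As an illustration of the partial-area weighting described above, the sketch below approximates the Voronoi cell areas with scipy. The exact cropping of polygons to the aperture is replaced here by a crude vertex clip, and unbounded cells fall back to the mean bounded-cell area, so this is a rough stand-in rather than the precise procedure; the function name and conventions are assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_area_weights(pts, ax, ay):
    """Approximate partial-area weights a_n for nonuniform samples pts (N x 2) on an
    ax-by-ay aperture."""
    vor = Voronoi(pts)
    areas = np.full(len(pts), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                                    # unbounded cell: handled below
        poly = np.clip(vor.vertices[region], [0.0, 0.0], [ax, ay])
        try:
            areas[i] = ConvexHull(poly).volume          # 2-D hull "volume" is its area
        except Exception:
            pass                                        # degenerate cell (clipped to a line/point)
    areas[np.isnan(areas)] = np.nanmean(areas)          # fallback for unbounded/degenerate cells
    return areas
```

The resulting weights an would multiply the sample values in the summation form of the NDFT (or the NUFFTH input) before the spectrum is formed.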
r(x′,y′,f)=d′(x′,y′,f)−INDFT2D{D(kx,ky,f)}, (3)
where INDFT is the inverse NDFT and D is the low-pass filtered spectrum of the measured data (d′):
D(kx,ky,f)=NDFT2D{d′(x′,y′,f)}·FB(kx,ky), (4)
and where FB is a rectangular low-pass filter with spatial bandwidth (2B):
However, this filter may take any shape the user requires (i.e., circular, etc.). The NDFT transforms may be accelerated if the 2-D NDFT becomes the 2-D NUFFTH and the 2-D INDFT becomes the 2-D INUFFT. The error of the minimization process may be represented as the normalized energy of the residual:
Referring now to the inner loop of
Because the accurate method is highly sensitive to the initial estimate of the spatial bandwidth (Bo), different estimates of the spatial bandwidth may be used. If Bo is too large (specifically, if Bo is greater than π/δx), the minimization problem is underdetermined (i.e., the error minimization will terminate quickly), and reconstruction artifacts may result. Therefore, it is of paramount importance to choose the best Bo.
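The inner-loop iteration itself is not spelled out above; as one plausible realization, the sketch below uses a simple Landweber-style fixed-point update in which the band-limited spectrum estimate is refined by the filtered NDFT of the residual of equation (3), tracking the normalized residual energy as the error. Direct NDFT sums stand in for the NUFFT/INUFFT pair, and the function name, step size mu, and normalization are assumptions.

```python
import numpy as np

def pp1_reconstruct(pts, d, kx, ky, B, n_iter=50, tol=1e-3, mu=1.0):
    """Hypothetical sketch of a PP1-style estimation for one frequency: iteratively
    refine a low-pass-limited spectrum D so its inverse NDFT matches the nonuniform
    samples d, stopping when the normalized residual energy falls below tol."""
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    FB = (np.abs(KX) <= B) & (np.abs(KY) <= B)           # rectangular low-pass filter, bandwidth 2B
    fwd = np.exp(-1j * (KX[..., None] * pts[:, 0] + KY[..., None] * pts[:, 1]))  # NDFT kernel
    D = np.zeros_like(KX, dtype=complex)
    err = np.inf
    for _ in range(n_iter):
        d_hat = np.tensordot(fwd.conj(), D, axes=([0, 1], [0, 1]))  # inverse NDFT at sample points
        r = d - d_hat                                               # residual, eq. (3)
        err = np.sum(np.abs(r) ** 2) / np.sum(np.abs(d) ** 2)       # normalized residual energy
        if err < tol:
            break
        D = D + mu * FB * (fwd @ r) / len(d)                        # filtered update from the residual
    return D, err
```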
The sampled signal contains information from all scatterers in the scanning area. Scatterers are located at different coordinates. Therefore, each scatterer has its own distance from the aperture (h) and its own resolution δx, which is a strong function of h. Consequently, the spatial bandwidth necessary to accurately represent each scatterer is different. The preceding accurate spectral estimation/signal reconstruction method is formulated optimally for one scatterer in the scanning area, and not multiple scatterers. This is a significant problem because this method must increment only a single spatial bandwidth B until the termination condition is met. If a scanning area consists of scatterers near and far from the aperture, the final bandwidth B to satisfy the termination condition may be too large to correctly reconstruct the signal for the scatterers far from the aperture. In an embodiment, it is preferred to separate the signal contributions from scatterers at different h and then reconstruct that data separately, such that each scatterer has its own spatial bandwidth. Thus, the best signal reconstruction may be performed individually for all scatterers present.
The following describes the details of a general form of coarse multi-bandwidth reconstruction that finds the best reconstruction for all scatterers within a scanning area of interest from nonuniform samples.
Begin with nonuniform measurement of the scattered fields from scatterers—These measurements are inherently nonuniform and represented by d′(x′,y′,f) 600 where the primed coordinates indicate measurement samples. They may either be gathered manually as a user moves a probe over the scanning area or they may be gathered automatically (i.e., as an automated system moves a probe along a predetermined path or an array electronically switches between measuring antennas).
Make a preliminary signal reconstruction onto a high-density uniform grid—The current data d′(x′,y′,f) 600 must be processed to form the intermediate SAR image 605, which is subsequently segmented to divide the data. A bandwidth that preserves the largest spatial frequencies, and thus the best resolution, should be utilized. However, the SAR image 605 is only an intermediate image and will contain a high level of image artifacts for scatterers in image segments that are located far from the aperture. The spatial bandwidth of the filter used in the reconstruction process 610 will correspond to the propagating (e.g., non-attenuating) plane waves that are described by the spectrum Dp(kx,ky,f). Thus, the filter should be circular and is defined for every frequency, f, such that:
where Bp=2απf/v.
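A minimal sketch of this circular, frequency-dependent filter follows; the function name and grid conventions are assumed for illustration.

```python
import numpy as np

def propagating_filter(kx, ky, f, v=3e8, alpha=2.0):
    """Pass only propagating plane waves: sqrt(kx^2 + ky^2) <= Bp, with Bp = 2*alpha*pi*f/v."""
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    return (np.hypot(KX, KY) <= 2.0 * alpha * np.pi * f / v).astype(float)
```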
Form a SAR image of the scene from the preliminary signal reconstruction—The spectrum Dp(kx,ky,f) corresponds to the highly sampled reconstructed data dp(x,y,f) 615. This reconstructed data is then passed through a SAR processor to obtain the intermediate SAR image sp(x,y,z) 620. This SAR image is computed from −Zmax to +Zmax; however, Zmax must be chosen to facilitate the R-SAR transform 625. A detailed description of the R-SAR transform 625 is provided below. This SAR image 620 will have greater image artifacts for z values far from the aperture.
Divide the SAR image into segmented ranges—The SAR image sp(x,y,z) 620 is now divided into N segments 630. The segmentation is performed along the z axis; however, segmentation may also be performed automatically to extract individual scatterers (e.g., watershed segmentation). The thickness of the segments 630 may be chosen arbitrarily by the user; however, it is useful to divide the image according to the expected range resolution δz, where:
Consequently, segment 1 (s1) 635 is bounded between −δz≤z≤δz, segment 2 (s2) 640 is bounded between −2δz≤z≤−δz and δz≤z≤2δz, segment 3 (s3) is bounded between −3δz≤z≤−2δz and 2δz≤z≤3δz, etc. Thus, this type of segmentation is referred to as being coarse.
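For illustration, a short sketch of this coarse segmentation of the intermediate image along z is shown below; the array layout and function name are assumptions.

```python
import numpy as np

def coarse_segments(s_p, z, dz_res, n_segments):
    """Split the intermediate SAR image s_p(x, y, z) into coarse range segments of
    thickness dz_res, symmetric about z = 0: segment n covers (n-1)*dz <= |z| < n*dz."""
    segments = []
    for n in range(1, n_segments + 1):
        mask = (np.abs(z) >= (n - 1) * dz_res) & (np.abs(z) < n * dz_res)
        segments.append(s_p * mask[None, None, :])
    return segments
```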
Perform a highly accurate and fast 3-D Reverse SAR (R-SAR) for each segmented range—All segments 630 of the SAR image 605 correspond to their own reflection data. Thus, the R-SAR transform 625 may be applied to each segment index s 630, where the partial data ts 645 is the result of the R-SAR transform 625 of ss:
ts(x,y,f)=R−SAR{ss(x,y,z)}. (10)
However, the R-SAR transform 625 as defined retrieves the data at the high sampling density ts(x,y,f) 645. This must be transformed back to the original nonuniform sample locations d′s(x′,y′,f) 650 to facilitate another reconstruction attempt 655. To accomplish another reconstruction 655, one must first realize that ts(x,y,f) 645 and d′s(x′,y′,f) 650 have the same spectrum. Therefore, the spectrum 665 may be calculated by using the 2-D FFT 660:
Ts(kx,ky,f)=FFT2D{ts(x,y,f)} (11)
Consequently, the uniform spectrum can be sampled again onto the nonuniform original sample locations by using the NUFFT, which maps uniform samples onto a nonuniform spectrum, in contrast to the NUFFTH, which maps nonuniform samples onto a uniform spectrum. Thus, the inverse nonuniform FFT (INUFFT) 670 of the spectrum 665 yields the data at the original nonuniform sample locations, d′s(x′,y′,f) 650, as:
d′s(x′,y′,f)=INUFFT2D{Ts(kx,ky,f)} (12)
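A sketch of this resampling step for one frequency is shown below, with a direct inverse NDFT standing in for the 2-D INUFFT; the function name and conventions are assumptions.

```python
import numpy as np

def resample_to_nonuniform(t_s, dx, dy, pts):
    """Resample uniformly gridded segment data t_s(x, y) back onto the original
    nonuniform sample locations pts (N x 2), per eqs. (11)-(12): FFT to the uniform
    spectrum, then evaluate that spectrum at the nonuniform points."""
    nx, ny = t_s.shape
    T = np.fft.fft2(t_s)                                   # eq. (11): uniform spectrum
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    phase = np.exp(1j * (KX[..., None] * pts[:, 0] + KY[..., None] * pts[:, 1]))
    return np.tensordot(T, phase, axes=([0, 1], [0, 1])) / (nx * ny)   # eq. (12)
```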
Make another signal reconstruction for each segmented range and sum these—Now that the measured data 650 for every segment has been retrieved, the best signal reconstruction possible for every segment is determined. This is done by utilizing the rectangular filter described above and setting the initial spatial bandwidth 655, Bo=Bs where
After this reconstruction process is complete for all segments 675, the data of the segments may be summed 680 to form the final reconstructed data 685:
Make final SAR image—high quality, highly accurate, with minimal image artifacts—At last, the final SAR image 690 may be computed from this reconstructed data 685:
s(x,y,z)=SAR{d(x,y,f)} (15)
This results in a SAR image 690 with the fewest image artifacts; it is the best SAR image obtainable because the reconstruction of the data 655 has been optimized for each range segment 675.
The SAR and R-SAR (reverse SAR) algorithms form a transform pair, which enables the separation of data for scatterers located in different range segments. The SAR and R-SAR algorithms may also be used in compressive sensing (CS) techniques to recover the image from under-sampled data and enforce the measurement constraint.
The ω-k SAR algorithm is used to compute the wideband 3-D SAR images. The SAR algorithm is formulated to use the NUFFT, which, as mentioned above, is a fast and accurate approximation to the NDFT.
where equation (16) is the mathematical form of
where v is the speed of light in the medium being imaged, with dielectric constant εr, where
The variable z0 is the shift along the Z direction from the aperture to the top of the medium of interest. This is labeled in
The transform as expressed in equation (16) supports propagating through a medium that may or may not have the same properties as the medium being imaged. An obvious restriction is that the boundary between the first and second media must be parallel to the aperture. However, strong reflections may occur at the boundary, depending upon the relative dielectric contrast between the two media, and these reflections may mask the scatterers within the medium of interest. To reduce the influence of the boundary on the SAR image, the boundary reflection may be subtracted from the original measurement d(x,y,f) 715. Given a particular frequency of operation fm, the reflection from the dielectric boundary is contained in the measurements d(x,y,fm). Because the boundary is parallel to the aperture, the reflection from the dielectric boundary does not change as a function of location (x,y). Therefore, an estimate of the data d̂(x,y,fm) without the reflection from the dielectric boundary can be found by subtracting the mean of d(x,y,fm):
d̂(x,y,fm) = d(x,y,fm) − E[d(x,y,fm)]  ∀ m ∈ {1, 2, . . . , Nf}, (19)
where E[.] is the expectation operator.
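In code, equation (19) amounts to removing the (x, y) mean at every frequency; the brief sketch below assumes d is stored as an (Nx, Ny, Nf) complex array and that the function name is illustrative.

```python
import numpy as np

def subtract_boundary_reflection(d):
    """Estimate d-hat(x, y, f_m) per eq. (19): subtract, at each frequency, the mean over
    (x, y), which removes the reflection from a dielectric boundary parallel to the aperture."""
    return d - d.mean(axis=(0, 1), keepdims=True)
```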
R-SAR algorithms have been described in the art. However, the R-SAR algorithm 705 described here is unique because it is a 3-D, robust, and highly accurate algorithm. The R-SAR algorithm 705 is similar to the SAR algorithm 700, except that the R-SAR algorithm 705 operates in reverse and contains an additional repair step (e.g., “Truncation Repair” 725), which corrects for the Fourier truncation that occurred in the SAR algorithm by the Fourier transform 730 from kz to z. Truncation occurs because the SAR image 710 cannot be computed for an infinite range of z and because the SAR image 710 is nonperiodic, which is a result of the dispersion relation of equation (17) that generates nonuniform samples of kz. Uncorrected, the truncation error is the largest source of error when computing R-SAR.
“Truncation Repair” 725 quickly and accurately deconvolves the effect of the truncation along z from the estimate of the spectrum in kz 735. Because truncation error occurs only for the 1-D Fourier transforms, the nomenclature can be simplified from S(kx,ky,z) 740 to S(z), which is sampled at Nz uniform locations zn and may be vectorized as S. (S is used to simplify the following mathematical expressions). The uniform image step size Δz is chosen to be less than or equal to the range resolution δz such that:
for the range −Rmax≤zn≤Rmax, where Rmax is the maximum unambiguous range for the propagating wave along the Z axis from the measurement plane and is defined as:
where Δf is the frequency step size. Similarly, the notation for D(kx,ky,kz) 745 is reduced to the continuous function D(kz):
which is sampled at Nf nonuniform locations kzm, with the values vectorized as F, where δ(.) is the continuous Dirac delta function. Furthermore, if D(kz) is taken to be the 1-D discrete-time Fourier transform (DTFT) of S, and if S could extend from −∞ to +∞, then
S=IDTFT1D{D(kz)} (23)
and
D(kz)=DTFT1D{S}, (24)
where IDTFT is the inverse DTFT. However, as stated before, the SAR image cannot be computed for an infinite range. Therefore, only the spectrum from the truncated SAR image 735 is available:
D′(kz) = DTFT1D{m̂·S}, (25)
where the prime notation represents the spectral estimate after truncation 735 and the truncation 725 (i.e., window function or masking function) is represented as:
Knowing that multiplication in the z domain is equivalent to convolution in the kz domain, the following relationship holds:
where * denotes the convolution operation and M̂(kz) is the spectrum corresponding to m̂, given as:
The difference between D(kz) 745 and D′(kz) 735 is the truncation error, and the truncation error may be reduced by using known error minimization methods. However, these error minimization methods are computationally complex compared to simply solving for the truncation error by deconvolving M̂(kz) from D′(kz). One consequence of this deconvolution is that information at any frequency (f or kz) is independent of all other frequencies.
To deconvolve the effect of truncation efficiently, one may formulate the convolution in equation (27) in matrix form
D′ = M̂D (29)
such that
where for some row r and column c
M̂rc = M̂(kzr−kzc). (31)
If M̂ is invertible, the original signal D 745 can be recovered exactly by deconvolving the spectral representation of the window function from D′ 735:
D = M̂−1D′, (32)
which is referred to as “Truncation Repair” in
where T{.} is the truncation repair as implemented in equation (32).
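A compact sketch of this repair for a single (kx, ky) spectral line is shown below. The DTFT normalization of the window spectrum and the conditioning of M̂ are left as assumptions, and the three requirements discussed next must hold for the inverse to exist; the function name and argument conventions are illustrative.

```python
import numpy as np

def truncation_repair(D_prime, kz, z):
    """'Truncation Repair' per eqs. (29)-(32): build M_hat with entries
    M_hat(kz_r - kz_c), the spectrum of the rectangular truncation window over the
    uniform image samples z, then deconvolve by solving D = M_hat^{-1} D'."""
    diff = kz[:, None] - kz[None, :]                        # kz_r - kz_c on an Nf x Nf grid
    M_hat = np.exp(-1j * diff[..., None] * z).sum(axis=-1)  # DTFT of the window at each offset
    return np.linalg.solve(M_hat, D_prime)                  # deconvolution, eq. (32)
```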
The R-SAR transform 705 can only be performed successfully if M̂−1 exists and if the following three requirements are met: (a) the frequencies of measurement must be known, (b) the support functions cannot overlap, and (c) the sampling of the SAR image along z must satisfy the Nyquist rate. Each of the three requirements is discussed further below.
Frequencies of measurement must be known—The frequencies f used in the measurement d(x,y,f) 715 must be known so that the contributions of these frequencies in the SAR image 710 s(x,y,z) 750 can be determined. This is in contrast to the more general problem for which the frequencies of the system may be unknown. Therefore, the SAR imaging system must be well defined so that SAR 700 and R-SAR 705 algorithms form an accurate transform pair.
Support functions cannot overlap—The main lobes of the function M̂(kz) in equation (28), referred to by M̂rc in equation (31), must not overlap. Given that
it can be shown that the following condition must be met
where Rmax is defined in equation (21).
Sampling of the SAR image along z must satisfy the Nyquist rate—Satisfying the Nyquist rate prevents the occurrence of aliasing error in the 1-D NUFFT 730 of the R-SAR transform 705.
As discussed earlier, the accurate NUFFT-based SAR/R-SAR transform pair can be used to accurately separate contributions of scatterers in the measured data. Further, if the truncation error is not repaired, iterations of the SAR 700 and R-SAR 705 algorithms will diverge due to the cumulative error.
PP3, fine multi-bandwidth reconstruction, determines the best reconstruction for all scatterers within a scene of interest from nonuniform samples, and PP3 is similar to PP2. In an embodiment, PP3 can use a coarse filter along z to obtain results identical to PP2. However, SAR and R-SAR have been combined into the error minimization process, which is illustrated in
The following demonstrates exemplary performance of the algorithms using simulated data. A square aperture with dimensions ax=ay=10λ was used, consisting of antennas with a Gaussian half-power beamwidth of 120 degrees. These antennas measured the complex reflection coefficient for 31 uniformly sampled frequencies in Ku-band (e.g., 12.4-18 GHz). The measurement locations were selected randomly but not independently, such that a minimum distance (Δm) between antennas was maintained. For Nxy nonuniform measurement locations, this resulted in an average sample density of Δ, where:
Δ = √(ax·ay/Nxy). (38)
Three different values of Δm were selected to show the performance of the algorithm for different sampling densities (0.3λ, 0.5λ, and 0.7λ). Six point scatterers were placed in the scene to ideally scatter the signal back to the aperture from distances of 2.5λ, 5λ, 7.5λ, 10λ, 12.5λ, and 15λ. White Gaussian noise was injected into the nonuniform measurements to correspond to a signal-to-noise ratio (SNR) of 30 dB. The simulation was set up such that each scatterer had the same scattering coefficient. Consequently, the scattered signal attenuates as a function of distance (e.g., distant scatterers are weaker).
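The exact scheme used to draw the random but not independent locations is not specified; one simple possibility, sketched below, is dart throwing with a minimum-distance rejection rule, after which the average density Δ of equation (38) can be computed. The function name and parameters are assumptions.

```python
import numpy as np

def random_locations(ax, ay, n_target, d_min, seed=0):
    """Draw up to n_target points uniformly over an ax-by-ay aperture, rejecting
    candidates closer than d_min to any accepted point (simple dart throwing)."""
    rng = np.random.default_rng(seed)
    pts, attempts = [], 0
    while len(pts) < n_target and attempts < 200 * n_target:
        attempts += 1
        p = rng.uniform([0.0, 0.0], [ax, ay])
        if all(np.hypot(*(p - q)) >= d_min for q in pts):
            pts.append(p)
    pts = np.array(pts)
    avg_density = np.sqrt(ax * ay / len(pts))   # eq. (38)
    return pts, avg_density
```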
The same nonuniform data was processed into SAR images in multiple ways: (1) no reconstruction (RT2), (2) reconstruction using range segments with smooth transitions and thickness equal to the range resolution δz (PP2), and (3) reconstruction using the modified error minimization method (PP3). These were compared not only to each other but also to an image formed from noiseless, high-density measurements. To render the images so they are easy to interpret, the images were auto-scaled as a function of z to make all scatterers appear with the same brightness. Therefore, the image of more distant objects appears noisy as the scattered signal drops to the level of the noise or the error remaining after reconstruction.
Results for Δm=0.5λ (17% of proper sampling) and Δm=0.7λ (8% of proper sampling) are shown in
The following discussion is intended to provide a brief, general description of a suitable computing environment in which aspects of the invention may be implemented. Although not required, aspects of the invention are described in the general context of computer-executable instructions, such as program modules which perform particular tasks or implement particular abstract data types, being executed by computers in network environments or in distributed computing environments.
Those skilled in the art will appreciate that aspects of the invention may be practiced in network computing environments with many types of computer system configurations (like personal computers, tablets, mobile or hand-held devices, or multi-processor systems). Aspects of the invention may also be practiced in distributed computing environments, where tasks are performed by local and remote processing devices linked through a communications network. Examples of devices used in a distributed computing environment include program modules located in both local and remote memory storage devices.
An exemplary system for implementing aspects of the invention includes a general purpose computing device consisting of various system components including the system memory. The system memory includes random access memory (RAM) and read only memory (ROM).
The computing device may also include a magnetic hard disk drive for reading from and writing to a magnetic hard disk or a removable magnetic disk, or an optical disk drive for reading from or writing to a removable optical disk such as a CD-ROM or other optical media. These disk drives are connected to the system bus by specific interfaces. The drives and their associated computer-readable media provide nonvolatile storage of data for the computer. The exemplary environment described herein employs a magnetic hard disk, a removable magnetic disk, and a removable optical disk, but other types of computer readable media for storing data can be used.
Program code means comprising one or more program modules may be stored on the computer readable media storage previously mentioned. Various means of user input as well as various display devices are typically included. In an embodiment, the SAR images can be displayed in real-time on the monitor.
The computer may operate in a networked environment, which may include another personal computer or another common network node including many or all of the elements described above relative to the computer. Networking environments may connect computers locally (through a network interface) or wirelessly (through a modem, wireless link, or other means).
Preferably, the illustrated processes are embodied by computer-executable instructions stored in a memory, such as the hard disk drive, and executed by the computer.
The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention. In addition, it is contemplated that the Fourier references mentioned throughout this document are in one embodiment a nonuniform discrete Fourier Transform (NDFT), but may also include other Fourier methodologies as known to one skilled in the art to approximate the NDFT.
Embodiments of the invention may be implemented with computer-executable instructions. The computer-executable instructions may be organized into one or more computer-executable components or modules. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
When introducing elements of aspects of the invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
In view of the above, it will be seen that the several objects of the invention are achieved and other advantageous results attained.
Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.