The present application relates to the field of echo-location and echo-imaging systems, including radar, sonar, and lidar systems, medical ultrasound, and other imaging systems that use coherent electromagnetic or acoustic waves.
Existing echo-location and echo-imaging systems, including radar, sonar, and lidar systems, medical ultrasound, forward imaging systems (such as transmission imaging, scattering imaging, and diffraction imaging), and other imaging systems using coherent electromagnetic or acoustic waves, typically have a transmitter for emitting coherent waves that "illuminate" one or more targets. This transmitter may incorporate one or more radio frequency or microwave transmitters, infrared or optical lasers, or ultrasonic transducers.
The coherent waves are reflected by the one or more targets towards receiving and/or imaging apparatus, hereinafter receiver, that may, but need not, be collocated with the transmitter. These reflected waves are an echo or echoes.
It is desirable to determine the number, locations, and other qualities, such as speed, of targets, or to produce quality images, from information embedded in reflected waves and echoes. For example, a warship's crew may respond quite differently if it can be determined that echoes are being received from a single, large, transport aircraft instead of several small aircraft flying in a tight formation.
Radar, lidar, active sonar, and medical ultrasound systems may use round-trip "time-of-flight" information to determine distance from the receiving and/or imaging apparatus; they may also use the Doppler shift of echoes to determine target speed and the velocity of blood flow. It is also desirable to discriminate between, or image, targets based upon the direction, or angle, from which echoes are received—for which good angular resolution is required. The minimum angle that must separate two targets for the system to reliably determine that echoes are from two targets, and not one larger target, is the angular resolution of the system. Good angular resolution is of importance in medical imaging and sonar, as well as radar, since imaging of a large target is equivalent to studying many smaller, closely spaced, targets.
Classically, a limit for angular resolution of a receiving and/or imaging system is related to the wavelength of the waves and the aperture size, or the greatest distance between elements, of the receiver.
Resolution refers to the ability to distinguish closely spaced signal sources. The angular resolution of the classical sensor is given by the diffraction angle λ/D of the array aperture; the field of view is Nλ/D for N elements. To see this, consider a plane wave incident on a one-dimensional antenna array with N elements and aperture D, which we assume is the limiting aperture in the system. The signal received at the array aperture in angular space ψ from a point source far away has the form:
Where θ is the angle of incidence on the detector, k=2π/λ, λ is the wavelength, ω is the operating angular frequency, and d is the separation between the elements. A typical angular signal strength distribution is plotted in
If θ is small enough, we have:
Nd in equations (2) and (3) is called the numerical aperture (NA) and is the size of the array aperture D. The spot size (3) is called the point-spread function (PSF); it can be used as a conventional criterion to define a limit to the minimum angular separation below which two nearby objects cannot be distinguished as clearly providing two peaks, see
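The classical limit can be illustrated numerically. The following sketch (with illustrative parameters) computes the array factor of a uniform line array and locates the first null of the PSF, which falls at approximately λ/D from the peak:

```python
import numpy as np

def array_factor(theta, N, d, wavelength):
    """Normalized far-field intensity of an N-element uniform line array.

    Each element contributes a phasor exp(j*n*k*d*sin(theta)); the
    coherent sum over n gives the classical fringe pattern.
    """
    k = 2 * np.pi / wavelength
    n = np.arange(N)
    field = np.exp(1j * np.outer(np.sin(theta), n * k * d)).sum(axis=1)
    return np.abs(field) ** 2 / N ** 2

# Illustrative numbers: 32 half-wavelength-spaced elements.
wavelength = 1.0
N, d = 32, 0.5
D = N * d                       # aperture size
theta = np.linspace(-0.2, 0.2, 4001)
psf = array_factor(theta, N, d, wavelength)

# The first null (edge of the central lobe) sits near theta = lambda / D.
is_min = np.r_[False, (psf[1:-1] < psf[:-2]) & (psf[1:-1] < psf[2:]), False]
nulls = theta[is_min]
first_null = nulls[nulls > 0][0]
print(first_null, wavelength / D)
```

Here λ/D = 1/16 = 0.0625 rad, and the numerically located first null agrees to within the grid resolution.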
In the past two decades, parameter estimation has been an area of focus for applied statisticians and engineers. As applications expanded, interest in accurately estimating relevant temporal as well as spatial parameters grew. Sensor array signal processing emerged as an active area of research, centered on the ability to fuse, that is, to process, analyze, and/or synthesize, data collected at several sensors in order to carry out a given estimation task (space-time processing). This framework has the advantage of prior information about the data acquisition system (i.e., array geometry, sensor characteristics). The methods have proven useful for solving several real-world problems. One of the most notable is source location. It demonstrated the possibility that algorithms such as MUltiple SIgnal Classification (MUSIC), which uses the eigenvector decomposition method or signal subspace approach, might serve as superresolution algorithms useful for locating closely spaced multiple emitters (targets) with high resolution (smaller than the Rayleigh Limit).
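As a sketch of the signal-subspace approach mentioned here, the following implements a minimal MUSIC pseudospectrum for a uniform line array. The scenario, array geometry, and noise levels are hypothetical; the two sources are separated by roughly a third of the classical beamwidth λ/D:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: a 16-element, half-wavelength-spaced array observes
# two sources separated by well under the classical beamwidth lambda/D.
N, d, wavelength = 16, 0.5, 1.0
k = 2 * np.pi / wavelength
true_angles = np.array([0.0, 0.04])     # radians; beamwidth is 1/8 rad
snapshots = 400

n = np.arange(N)[:, None]
A = np.exp(1j * k * d * n * np.sin(true_angles))   # steering matrix (N x 2)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((N, snapshots))
               + 1j * rng.standard_normal((N, snapshots)))
X = A @ S + noise

# Sample covariance; its smallest N-2 eigenvectors span the noise subspace.
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)    # eigenvalues in ascending order
En = eigvecs[:, : N - 2]                # two sources assumed known a priori

def music_spectrum(theta):
    # Pseudospectrum peaks where the steering vector is orthogonal
    # to the noise subspace, i.e. at the source directions.
    a = np.exp(1j * k * d * np.arange(N) * np.sin(theta))
    return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2

grid = np.linspace(-0.1, 0.1, 2001)
spec = np.array([music_spectrum(t) for t in grid])
peak_idx = np.flatnonzero(np.r_[False, (spec[1:-1] > spec[:-2])
                                & (spec[1:-1] > spec[2:]), False])
est = np.sort(grid[peak_idx[np.argsort(spec[peak_idx])[-2:]]])
print(est)   # close to the true directions 0.0 and 0.04
```

The two largest pseudospectrum peaks land near the true directions even though the sources are well inside the diffraction limit, which is the sense in which MUSIC is a superresolution algorithm.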
However, the Cramer-Rao Bound principle, named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, expresses a lower bound on the variance of estimators of a deterministic parameter. It is the "best", in a minimum error variance sense (lower bound), that an estimator can achieve. In a statistical setting, assumptions can be made regarding statistical properties of the signal and/or noise.
In conclusion, the resolution obtained in the classical sense might, with MUSIC, be better than the Rayleigh Limit, but never better than the Cramer-Rao Bound.
Since the Rayleigh Limit has been known for many years, prior systems for improving the angular resolution of a system have often involved increasing the operating frequency, thereby decreasing the wavelength λ, or alternatively increasing the aperture size D. There are often practical limitations to either. Waves, whether sonic or electromagnetic, of differing wavelengths may propagate differently: for example, short radar wavelengths may be limited to line of sight, while atmospheric ionization may allow longer radar wavelengths to follow the earth's curvature, thereby allowing detection of targets at greater distances from the imaging system. Similarly, receivers having a large physical aperture size D may be unwieldy.
An imaging or echolocation system has a source of coherent waves, such as acoustic or electromagnetic waves, that are transmitted towards any target or targets of interest. Any waves reflected or echoed by the target or targets are received by a receiver having many sensor elements spaced across a surface. A reference signal of the same frequency as the received waves is derived from the received waves. At least one phase amplifier receives signals from at least one sensor element and amplifies phase differences between the reference signal and the received waves. In imaging systems, signals from the phase amplifier(s) enter image construction apparatus and are used for constructing an image; in echolocation systems, signals from the phase amplifiers are used to distinguish between and identify targets. In various embodiments, phase amplifiers may be implemented in analog or digital form.
Phase Difference Corresponds to Angle from Perpendicular
Arriving waves from the two targets 102, 104 in
The devices we propose exploit the fact that coherent detection on the focal plane converts a problem of spatial or angular resolution of a target into one of resolution in phase, and the fact that faster phase variation implies higher resolution. Our approach does this by adding to the classical sensor described above a quantum phase amplifier (QPA).
Suppose we could increase the phase differences of the incident plane wave by a scale factor g prior to detection: this would have the effect of increasing the fringe spatial frequency across the array. Then equation (1) would become:
immediately leading to the angular resolution (analogously to equation (3)),
The QPA does not increase the operating frequency, but introduces a phase shift in the incident field proportional to its local phase as compared to a reference phase φref, i.e., Δφ=(g−1)(φ−φref).
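A minimal numeric sketch of this transformation (the function name and values are illustrative only):

```python
def quantum_phase_amplify(phi, phi_ref, g):
    """Illustrative model of the QPA action: the phase offset from the
    reference is scaled by g, adding delta_phi = (g - 1) * (phi - phi_ref)."""
    return phi + (g - 1) * (phi - phi_ref)

# A phase 0.01 rad above the reference sits 0.04 rad above it after g = 4.
print(quantum_phase_amplify(0.01, 0.0, 4))
```

The operating frequency never appears in the transformation; only the offset from the reference phase is scaled.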
We can picture the effect of the QPA by referring to
The effect of the phase amplifier in the coherent imaging system is depicted in
To visualize the phase amplification process, we can look at its action on a coherent state in the phase plane whose coordinates are the real and imaginary parts of the electric field (
Note that both the phase and the phase noise have been amplified. However, phase amplification may preserve or improve the overall SNR, as mentioned above.
In order to build the phase amplifier for the frequency of interest, we must determine how to generate the squeeze state at this frequency.
An active phase amplifier or QPA 600 is illustrated in
A signal at QPA 600 input 602 is representable as cos(ωt+kdθ).
The input signal is applied to a first frequency doubler 604 that operates by mixing the input with itself, with output taken through a filter as the upper harmonic, giving cos(2ωt+2kdθ). A source 606 of a reference signal having frequency ω, the fundamental frequency of the signal arriving from the targets, is provided. The signal from the first frequency doubler 604 is mixed with the reference 606 signal at the second mixer 608, and the lower harmonic is selected by a filter. The filtered signal at the second mixer 608 output is cos(ωt+2kdθ). Phase differences from the reference to the input signal are now doubled.
The filtered signal at the second mixer 608 output is applied to a second frequency doubler 610 that operates by mixing the input with itself, with output taken through a filter as the upper harmonic, giving cos(2ωt+4kdθ). We then mix this signal with the reference 606 signal at the fourth mixer 612 and take the lower harmonic, giving the signal cos(ωt+4kdθ). We now have the 4× phase gain desired in this particular embodiment. Every two mixers complete one phase doubling operation; we call this one multiply. If there are M multiplies, we have a gain of 2^M. Necessary amplifiers and filters have been omitted from
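In complex (analytic-signal) notation the mixer chain can be sketched in a few lines: squaring models the frequency doubler with its upper-harmonic filter, and multiplying by the conjugate of the reference models the mixer with its lower-sideband filter. Names and parameters are illustrative:

```python
import numpy as np

def phase_double(z, ref):
    """One doubler/mixer stage in complex (analytic-signal) form.

    z   = exp(j*(w*t + phi)) : stage input
    ref = exp(j*w*t)         : reference at the fundamental
    Squaring doubles both frequency and phase (doubler plus filter);
    multiplying by conj(ref) mixes back down, keeping the lower sideband.
    """
    return z * z * np.conj(ref)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
w = 2 * np.pi * 50.0
phi = 0.05                          # phase offset carried by the echo
ref = np.exp(1j * w * t)
sig = np.exp(1j * (w * t + phi))

out = sig
for _ in range(2):                  # M = 2 multiplies -> gain 2**2 = 4
    out = phase_double(out, ref)

gain = np.angle(np.mean(out * np.conj(ref))) / phi
print(gain)  # ~4.0
```

After each stage the signal returns to the fundamental frequency ω while its phase offset from the reference has doubled, so M cascaded multiplies give the 2^M phase gain described above.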
In the embodiment of
A local reference source 706 is coupled to at least one sensor element 702. In order to prevent Doppler effects from affecting the QPA 708, this reference source 706 may be a local oscillator phase-locked to the echo as received by one predetermined sensor element 702 of sensor elements 702, or alternatively to a signal derived from an average of several sensor elements. In another embodiment, the reference signal source 706 buffers the echo received by one predetermined sensor element 702. In other embodiments, such as those where targets are stationary, the reference source may be tapped from the transmitter 704. The output of reference source 706 is applied as a common reference to the reference 606 (
Each sensor element 702 feeds one of identical QPAs 708 with phase gain g. The input signals at each of the QPAs 708 are effectively 1, e−j(ωt+kdθ), e−j(ωt+2kdθ), . . . , e−j(ωt+Nkdθ). The N outputs of the QPAs 708 are 1, e−j(ωt+kdgθ), e−j(ωt+2kdgθ), . . . , e−j(ωt+Nkdgθ).
The QPAs therefore operate as phase-difference amplifiers, amplifying a phase shift between reference 706 and the signals received through sensor elements 702.
Outputs from QPAs 708 feed a resolver and/or imager 710. Resolver and/or imager 710 uses conventional beam forming techniques or parameter estimating algorithms such as MUSIC to resolve any targets 712, or form images of any targets 712, that may be present. Resolver and/or imager 710 provides information to a display system 716 as known in the art. Resolver and/or imager 710 may act to resolve separate targets directly, or may act to form a narrow beam that may then be scanned by other apparatus to identify the targets.
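The element-level model above, in which each plane-wave component's per-element phase kdθ is amplified to kdgθ and the result is passed to a conventional beam former, can be sketched numerically. This is an idealized simulation with illustrative parameters, applying the gain to each plane-wave component separately as in the listed QPA outputs, not a model of any particular hardware:

```python
import numpy as np

# Idealized end-to-end sketch: element n carries phase n*k*d*theta per
# plane-wave component; the QPA bank scales that to n*k*d*gain*theta.
N, d, wavelength = 16, 0.5, 1.0
k = 2 * np.pi / wavelength
n = np.arange(N)
targets = np.array([-0.03, 0.03])   # separation 0.06 rad < beamwidth 0.125 rad

def resolved_peaks(gain):
    # Sum the (phase-amplified) plane-wave components, then scan a
    # conventional beamformer with matched, amplified steering vectors.
    snap = sum(np.exp(1j * k * d * n * gain * th) for th in targets)
    scan = np.linspace(-0.1, 0.1, 2001)
    power = np.abs(np.exp(-1j * k * d * np.outer(scan, n) * gain) @ snap) ** 2
    interior = (power[1:-1] > power[:-2]) & (power[1:-1] > power[2:])
    peaks = scan[1:-1][interior]
    return peaks[power[1:-1][interior] > 0.5 * power.max()]

print(len(resolved_peaks(1)))   # 1: the two targets merge classically
print(len(resolved_peaks(4)))   # 2: resolved after a 4x phase gain
```

Without amplification the two targets, separated by less than the beamwidth λ/D, merge into a single beam; with a phase gain of 4 the beam former reports two distinct peaks at the true directions.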
In an alternative embodiment, as illustrated in
An alternative embodiment of the phase amplifier, as a degenerate squeeze state generator, is illustrated in
It is desirable that only one field quadrature be amplified, while the other is deamplified. We see that for small angles θ˜X2/X1, the degenerate squeeze state generator provides gain to the phase and deamplifies the amplitude, i.e., it behaves like a quantum phase amplifier.
In the embodiment of phase amplifier 900 (
The alternate embodiment of
An alternate embodiment of the system 1300 is illustrated in
Digital signal processor 1312 implements reference signal recovery 1314, similar to the function of local reference 706 previously described with reference to
A first embodiment of the system of
In this embodiment, a lens with a refractive index less than 1 but greater than 0, such as may be constructed of an artificial material such as a metamaterial, is added as a covering or coating on the sensor array. With such a material, the refraction angle is bent away from the normal of the antenna array by the nature of the lens material, and effective phase amplification is achieved as the incident wavefront arrives at the sensor array behind the lens.
A material with a refractive index less than unity is referred to as a phase-advance material, since the phase change per unit length for a wave traveling in such a material is less than it would be if the wave were traveling in free space. This implementation generally requires such a phase-advance material for microwave or optical lens applications.
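As a sketch of the angular magnification such a lens could provide, Snell's law with an index n less than 1 bends rays away from the normal; at small angles the refracted angle is roughly 1/n times the incident angle, acting as an effective phase gain. The index value below is hypothetical:

```python
import math

def refracted_angle(theta_i, n):
    """Angle inside a phase-advance medium (index 0 < n < 1), from Snell's
    law sin(theta_i) = n * sin(theta_t); the ray bends away from the normal."""
    return math.asin(math.sin(theta_i) / n)

n = 0.25                      # hypothetical metamaterial index
theta_i = 0.01                # small incidence angle, radians
theta_t = refracted_angle(theta_i, n)
g_eff = theta_t / theta_i     # effective angular (phase) gain, ~1/n
print(g_eff)
```

For small angles the effective gain approaches 1/n, so the incident wavefront arrives at the array behind the lens with its angular structure magnified, which is the passive phase amplification described above.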
Metamaterials having microwave refractive index less than one have been demonstrated under laboratory conditions. Metamaterials are typically static assemblies of a particular geometry and material that can be tuned to provide desired properties. In optics and electromechanical applications, such as with RF and microwave signals, for example, lenses and gratings are typically constructed of homogeneous materials having particular shapes. As utilized in the embodiments disclosed herein, metamaterials depart from this conventional approach in that they can be non-homogeneously constructed devices that exhibit passive behavior normally associated with regular materials. In some applications, the metamaterials act like a band-pass filter, except according to the present embodiments, phase can be filtered, and not just frequency. By filtering phase components, significantly greater measurement resolution can be realized with respect to time, angle, and other measured components.
Whereas the active approach, described above, can be particularly advantageous for use with digital processing, RADAR, and ultrasound applications, this passive approach is seen by the present inventors to have significant advantages where light applications, such as LIDAR, are also present. One advantage of this passive approach is that it is capable of bypassing stringent requirements seen when dealing with “non-classical” light situations. This passive approach further allows for a more general implementation for various types of signals, including at least those described above.
The phase amplifier achieves Heisenberg resolution scaling, R˜1/Energy, or R˜1/N for N received photons per unit time. One way to see this is simply to consider equation (6), which shows R˜1/g. The maximum g value is just given by the mean photon (or phonon) number N, although phase noise limits this gain to a somewhat lower value. This implies R˜1/N. In the particle sense, the energy is carried by the particles; therefore, the energy is proportional to the particle number. Consequently, the resolution is proportional to 1/Energy, which is the Heisenberg scaling.
Suppose we wish to resolve two coherent-state plane waves whose propagation directions differ by an angle φ. This means that the photon states have mean phase values equal to, say, 0 and φ, the variances of which scale as δφ2∝1/N for N mean photons in the mode. The incident beams have angular Gaussian distributions, whose bandwidths are σφ. Since the angular separation between the two beams is φ, we define the resolution as proportional to the ratio of the angular beam width to the angular separation. These phase distributions are distinguishable if R˜σφ/φ˜(φ√N)−1 is small enough. After phase amplification, φ→gφ and σφ→√(g/N), so after amplification the resolution is R˜φ−1(gN)−1/2. But, again, the maximum g value is just given by the mean photon number N; this implies R˜1/N. Since the photon number has been squeezed g times, the energy is therefore reduced g times as well. If we want to improve the Signal-to-Noise Ratio (SNR) at a fixed number of post-amplification detected photons, we need to increase the transmitted power by a factor of g to achieve a g-fold improvement in resolution. This situation is completely analogous to the classical Heisenberg-like resolution attainable by increasing both power and frequency, except that we do not need to propagate shorter-wavelength photons to the target.
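The scaling argument above can be checked with simple arithmetic. The function and numbers below are illustrative only, encoding the stated relation R˜1/(φ√(gN)):

```python
import math

def resolution(phi, g, n_photons):
    """R ~ 1 / (phi * sqrt(g * N)): distinguishability of two phase
    distributions separated by phi after g-fold phase amplification
    (illustrative model of the text's scaling argument; smaller is better)."""
    return 1.0 / (phi * math.sqrt(g * n_photons))

phi, N = 0.01, 10_000
r_classical = resolution(phi, 1, N)     # shot-noise scaling, ~1/sqrt(N)
r_heisenberg = resolution(phi, N, N)    # g = N: Heisenberg scaling, ~1/N
print(r_classical / r_heisenberg)       # sqrt(N) = 100.0
```

Setting the gain to its maximum value g = N improves the figure of merit by a factor of √N, which is exactly the step from shot-noise scaling 1/√N to Heisenberg scaling 1/N.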
Gain g is a parameter in the QPA sensor, and the resolution enhancement is a factor of g. As described in the previous section, since the QPA deamplifies the photon number by the same scale factor g, the maximum allowable gain is given simply by the mean photon number received from the target for fixed SNR, and the maximum gain is the ratio of the pre- to post-amplification photon number. Therefore, the theoretical resolution improvement scales directly with the power transmitted to the target. Practically, one will be limited by the feasibility of attaining high gain amplification. In addition, the effect of phase noise due to atmospheric turbulence must also be considered, since it too will increase with gain (as it would for propagating a shorter wavelength).
While the foregoing has been particularly shown and described with reference to particular embodiments thereof, it will be understood by those skilled in the art that various other changes in form and details may be made without departing from the spirit and scope hereof. It is to be understood that various changes may be made in adapting the description to different embodiments without departing from the broader concepts disclosed herein and comprehended by the claims that follow.
The present application claims the benefit of priority to Provisional Application Ser. No. 60/976,318 filed Sep. 28, 2007, which is incorporated herein by reference.