1. Field of the Invention
This invention relates generally to the control of adaptive optics used in imaging systems.
2. Description of the Related Art
Adaptive optics (AO) elements such as deformable mirrors or liquid crystal spatial light modulators (LC-SLMs) allow rapid and precise control over an optical wavefront by shifting the phase of the incoming beam of light passing through the optical system. Adaptive optics elements are currently employed to correct uncontrolled wavefront errors arising from turbulent media (as in telescopic imaging) or from random media (as in microscopy and imaging of the human retina).
Conventionally, the adaptive optics element imparts the conjugate of the phase error in the wavefront so as to cancel out the wavefront error in a process known as phase conjugation. For example, an incoming aberrated wavefront reflects off a deformable mirror. The mirror's shape is controlled by an array of actuators so that the mirror cancels the aberrations in the incoming wavefront. The reflected wavefront then has no wavefront error.
In a typical application of adaptive optics, the actuators adjust the shape of the variable phase element in order to minimize the RMS wavefront error. When the incoming wavefront error is small, the adaptive optics element can completely correct the incoming beam. All adaptive optics elements, however, have a limited range of operation. There are physical limits to the amount by which, and the speed at which, a deformable surface can be deformed. For example, one commercial deformable mirror device is limited to a height deviation between neighboring pistons of 2 to 3 microns. This restriction on motion presents several problems when trying to correct severely aberrated wavefronts. LC-SLM devices, on the other hand, typically have slower response times and can correct only small phase errors. For all of these devices, the traditional control of AO systems based on minimizing the RMS wavefront error breaks down when encountering large wavefront errors that cannot be fully compensated by the AO element.
Thus, there is a need for AO controllers that can provide good correction, even when the AO element is not capable of compensating fully for wavefront errors.
The present invention overcomes the limitations of the prior art by controlling an AO element based on a post-processing performance metric, rather than based on the intermediate wavefront error. The performance metric is post-processing in the sense that it takes into account image processing applied to the captured images.
In one approach, an imaging system includes optics, an image capture device and image processing. The optics include adaptive optics. The optics form an image of an object. The image capture device captures the image formed by the optics. The image processing processes the captured images. The controller for the adaptive optics does not control the adaptive optics element using the conventional approach of minimizing wavefront error. Rather, it controls the adaptive optics based directly on a post-processing performance metric that accounts for propagation of the object through the optics, the image capture device and the image processing. For example, it might control the adaptive optics element with the goal of minimizing the mean square error between an ideal image and the actual digital image after image processing. This approach can result in an intermediate optical image that is worse in image quality than the conventional image formed when the adaptive optics is controlled to minimize wavefront error, but that yields a better image after image processing.
Other aspects of the invention include methods corresponding to the devices and systems described above.
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
In adaptive optics, some object or other source is used to provide feedback for the adaptive optics. This object will be referred to as the probe object or probe source. For example, the probe object may be a natural or synthetic guide star, or some other point source. It may also be an optical beam that interrogates the aberrations along the optical path for the imaging system 100. The object 150 being imaged may or may not be the same as the probe object.
The parameter space for the adaptive optics is also defined 220, either expressly or implicitly. For example, a certain deformable mirror may have N actuators that each may be moved throughout a certain range of travel, but subject to the constraint that the mirror surface itself will limit the relative positioning of adjacent actuators.
A post-processing performance metric 190 is also defined 230. The performance metric is post-processing in the sense that it is based on performance after image processing rather than before image processing. For example, measures of the wavefront error or spot size of the intermediate optical image produced by the optics alone may be conventional error metrics for the optics, but they are not post-processing performance metrics.
In many situations, the image 180 is determined based on propagation of the selected object through the imaging system 100 (including image processing 130). The propagation may be actual, simulated or modeled, or a combination of these. For example, actual propagation through the optics (including atmospheric turbulence) can be determined by observing the captured images and/or the adaptive optics. As an example of a mixed approach, aberrations due to atmospheric turbulence may be based on actual propagation through the atmosphere, but subsequent “propagation” through a lens system may be based on a model of the lens system constructed on a computer. The model may be based on measurements of the actual lens system. Alternatively, it can be based on the design documents for the lens system. In the modeling approach, the optics may be modeled, for example, by the modulation transfer function (MTF).
The control step 240 can be described as selecting the adaptive optics parameters θao that optimize the post-processing performance metric 190, possibly subject to certain constraints (e.g., limits on certain costs 170). Note that the control of the adaptive optics 115 takes into account subsequent image processing 130. In some cases, the adaptive optics 115 and image processing 130 may be adjusted together. For example, filter coefficients may be changed for different levels of turbulence and degrees of adaptive optics compensation.
A number of optimization algorithms can be used. For some linear cases, parameters may be solved for analytically or using known and well-behaved numerical methods. For more complicated cases, including certain nonlinear cases, techniques such as expectation maximization, gradient descent and linear programming can be used to search the design space.
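As a rough illustration of searching the design space against a post-processing metric, the following sketch uses a derivative-free Nelder-Mead search (a stand-in for the gradient-based or linear-programming methods mentioned above) over a vector of actuator commands. The function post_processing_mse, the actuator count and the toy objective are illustrative assumptions, not part of the original disclosure.

```python
# Minimal sketch (Python/NumPy/SciPy): derivative-free search over AO
# actuator settings against a post-processing performance metric.
import numpy as np
from scipy.optimize import minimize

n_actuators = 32                          # illustrative AO parameter count
theta0 = np.zeros(n_actuators)            # e.g., start from a flat mirror

def post_processing_mse(theta):
    # Placeholder objective: in practice this would propagate the probe
    # object through the optics (with AO setting `theta`), the detector
    # and the image processing, then score the restored image.
    return float(np.sum((theta - 0.1) ** 2))   # toy quadratic stand-in

# Nelder-Mead requires no gradients, which suits metrics obtained by
# simulation or by direct measurement of captured images.
result = minimize(post_processing_mse, theta0, method="Nelder-Mead",
                  options={"maxiter": 2000, "xatol": 1e-4})
theta_opt = result.x                      # AO setting minimizing the metric
```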
In step 330, the post-processing performance metric is determined based on the object to be imaged, the probe object, the image capture device and the optics. In this particular example, the performance metric is the root mean square error between a simulated image and an ideal image, as will be described in greater detail below. The simulated image is determined by simulating the propagation of an object through the optics (based on the OPD and corresponding MTF characterization, determined from the probe object), the image capture device and the image processing. Optionally, models of various components in the imaging system may be used to facilitate simulating the propagation of an object through the imaging system.
Step 330 may have self-contained loops or optimizations. In this example, the image processing is adjusted for each new OPD and this process may or may not be iterative. Step 330 outputs the post-processing performance metric, which is used in step 320 to iterate the adjustment of the adaptive optics element. Note that the adjustment of the image processing changes as the adjustment of the adaptive optics changes. Different adjustments to the image processing are used to compensate for different errors introduced by different settings of the adaptive optics element. Thus, in this example, the adaptive optics and the image processing are jointly adjusted based on the post-processing performance metric. For example, this process may generate adjusted tap weights for a linear filter, as well as mechanical adjustments to a deformable mirror.
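The loop structure described above can be sketched as follows. All of the helper functions here are hypothetical placeholders standing in for the OPD characterization (from the probe object), the filter design, and the metric evaluation of steps 320/330; they are not the original implementation.

```python
# Structural sketch of the joint adjustment of the AO setting and the
# image processing: for each candidate AO setting the filter is
# re-derived and the post-processing metric evaluated; the best pair
# of settings is retained.
import numpy as np

def characterize_opd(theta):
    # Placeholder: OPD/MTF characterization for AO setting `theta`.
    return np.array([theta])

def design_filter(opd):
    # Placeholder: e.g., recompute linear filter tap weights for this OPD.
    return 1.0 / (1.0 + float(np.abs(opd).sum()))

def post_processing_metric(opd, filt):
    # Placeholder: e.g., predicted RMS error of the restored image.
    return float(np.abs(opd).sum()) * filt

best = None
for theta in np.linspace(-1.0, 1.0, 21):        # candidate AO settings
    opd = characterize_opd(theta)
    filt = design_filter(opd)                   # inner adjustment (step 330)
    score = post_processing_metric(opd, filt)   # post-processing metric
    if best is None or score < best[0]:
        best = (score, theta, filt)             # outer adjustment (step 320)
```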
In one specific implementation, the captured image is modeled as a linear function of the ideal image:
y = H(Θ) s + n, (1)
where the operator H is a linear characterization of the optics and the image capture device, s is the image captured under ideal conditions (e.g., an ideal geometric projection of the original object) and n is the random noise associated with the two subsystems. Note that H is a function of Θ, and may be determined in part by use of the probe object. Eqn. 1 above is entirely analogous to Eqn. 10 in U.S. patent application Ser. No. 11/155,870, which contains a further description of the various quantities in the equation and their derivation and is incorporated herein by reference.
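For concreteness, a minimal simulation of the forward model of Eqn. 1 might look like the following, assuming a shift-invariant blur so that H acts as a two-dimensional convolution; the Gaussian PSF and noise level are illustrative assumptions.

```python
# Sketch of Eqn. 1, y = H(Theta) s + n, with H modeled as a circular
# convolution with a (hypothetical) Gaussian PSF plus additive noise.
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def forward_model(s, psf, noise_sigma=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Embed the PSF in a full-size array and center it at the origin so
    # that the Fourier-domain product implements circular convolution.
    kernel = np.zeros_like(s)
    k = psf.shape[0]
    kernel[:k, :k] = psf
    kernel = np.roll(kernel, (-(k // 2), -(k // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(s) * np.fft.fft2(kernel)))
    return blurred + rng.normal(0.0, noise_sigma, size=s.shape)

s = np.zeros((64, 64)); s[32, 32] = 1.0     # ideal point object
y = forward_model(s, gaussian_psf())        # simulated captured image
```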
The goal of the image processing is to provide an estimate ŝ that is as “close” as possible to the ideal image s. One form of image processing is linear filtering, which is generally simple to analyze formally and easy to implement in an actual system. In the linear framework, the original signal is estimated using a linear operator of the form:
ŝ = R y (2)
where R is a linear filter.
In this example, the minimum mean square error (MMSE) is used as the Lyapunov or target function, i.e., the image processing seeks the linear filter R that minimizes
ε_{n,s}[ ||s − R y||^2 ], (3)
where the subscript of the expectation operator ε represents an expectation taken over the random noise n and the (assumed) stationary random signal s. The MMSE filtering approach requires no assumptions about the statistical properties of the underlying signal or noise models other than their respective means and covariance structures. Under the assumption that the noise and the signal are uncorrelated, the ideal linear restoration matrix is given by
R = C_s H^T [ H C_s H^T + C_n ]^−1 (4)
where C_s and C_n represent the covariance matrices of the signal and the noise, respectively. The per-pixel MSE performance of such a system is predicted using
MSE(Θ, R) = (1/N) Tr[ (RH − I) C_s (RH − I)^T + R C_n R^T ]. (5)
where N is the number of pixels and Tr[ ] is the trace operator.
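A direct numerical rendering of Eqns. 4 and 5 for a small one-dimensional example might look like the following; the circulant blur matrix and the white signal and noise covariances are illustrative assumptions chosen so that H, C_s and C_n are ordinary matrices.

```python
# Sketch of Eqn. 4 (ideal linear restoration matrix) and Eqn. 5
# (predicted per-pixel MSE) for a small 1-D signal.
import numpy as np

N = 32                                    # number of pixels (small, 1-D)
kernel = np.array([0.25, 0.5, 0.25])      # simple 3-tap blur
H = np.zeros((N, N))
for i in range(N):
    H[i, (i - 1) % N] = kernel[0]
    H[i, i] = kernel[1]
    H[i, (i + 1) % N] = kernel[2]

Cs = np.eye(N)                            # assumed white signal covariance
Cn = (0.05 ** 2) * np.eye(N)              # assumed white noise covariance

# Eqn. 4: ideal linear restoration matrix.
R = Cs @ H.T @ np.linalg.inv(H @ Cs @ H.T + Cn)

# Eqn. 5: predicted per-pixel MSE for this (H, R) pair.
E = R @ H - np.eye(N)
mse = np.trace(E @ Cs @ E.T + R @ Cn @ R.T) / N
print("predicted per-pixel MSE:", mse)
```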
A variety of techniques exist for measuring the optical characteristics of a given optical system. One simple approach to estimating both the PSF and the noise characteristics involves repeated measurements of an ideal point object (also known as the star test) at several points across the image field. Averaging the Fourier transforms of these point images offers an estimate of the optical transfer function (OTF), and hence of the PSF. The probe object can be used to estimate random or unpredictable components of the PSF or OTF, for example the effects of atmospheric turbulence. Furthermore, the noise covariance matrices may also be estimated in flat or dark test regions, or by using other more sophisticated conventional approaches such as those described in Glenn Healey and Raghava Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(3):267-276, 1994, which is incorporated herein by reference.
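The star test and the dark/flat-field noise calibration can be sketched as follows; the synthetic point-source and dark-frame data are illustrative stand-ins for actual measurements.

```python
# Sketch: estimate the OTF by averaging spectra of repeated point-source
# (star test) captures, and the noise variance from repeated dark frames.
import numpy as np

rng = np.random.default_rng(0)

def estimate_otf(point_images):
    spectra = [np.fft.fft2(img) for img in point_images]
    otf = np.mean(spectra, axis=0)
    return otf / otf[0, 0]                # normalize to unity at DC

def estimate_noise_variance(dark_frames):
    # Per-pixel variance over repeated dark (or flat) exposures.
    return float(np.var(np.stack(dark_frames), axis=0).mean())

# Synthetic stand-ins for measured frames (64 x 64 pixels).
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
point_images = [truth + rng.normal(0, 0.01, truth.shape) for _ in range(10)]
dark_frames = [rng.normal(0, 0.01, (64, 64)) for _ in range(10)]

otf_estimate = estimate_otf(point_images)
noise_variance = estimate_noise_variance(dark_frames)
```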
Regardless of the approach for characterizing H and C_n, once these terms are characterized, the ideal set of optical compensators Θ and the image processing filter R can be chosen to minimize the predicted MSE of Eqn. 5.
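This selection can be sketched as a loop over candidate AO settings, computing the matched restoration matrix of Eqn. 4 and the predicted MSE of Eqn. 5 for each. The one-parameter AO model in characterize_H is an illustrative placeholder for the actual system characterization.

```python
# Sketch: choose the AO setting (and matched filter) that minimizes the
# predicted MSE of Eqn. 5.
import numpy as np

N = 32
Cs = np.eye(N)                             # assumed signal covariance
Cn = (0.05 ** 2) * np.eye(N)               # assumed noise covariance

def characterize_H(theta):
    # Placeholder: in practice H(theta) comes from the probe object and
    # a model of the optics and detector.  Here theta sets a 1-D blur.
    H = np.eye(N)
    for i in range(N):
        H[i, (i + 1) % N] = theta
        H[i, (i - 1) % N] = theta
    return H / (1.0 + 2.0 * theta)

def predicted_mse(H):
    R = Cs @ H.T @ np.linalg.inv(H @ Cs @ H.T + Cn)      # Eqn. 4
    E = R @ H - np.eye(N)
    return np.trace(E @ Cs @ E.T + R @ Cn @ R.T) / N, R  # Eqn. 5

candidates = np.linspace(0.0, 0.45, 10)    # toy one-parameter AO settings
scores = [(predicted_mse(characterize_H(t))[0], t) for t in candidates]
best_mse, best_theta = min(scores)
```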
Utilizing nonlinear restoration techniques widens the space of possible post-processing performance metrics. For instance, many nonlinear iterative restoration techniques are statistically motivated, such as Maximum Likelihood (ML) or Maximum A Posteriori (MAP) estimation. Such approaches have the benefit of being asymptotically unbiased with minimum error variance, which are stronger properties than MMSE.
For instance, assuming that the signal s is a deterministic yet unknown signal, the ML estimate of the signal satisfies
ŝ = arg max_s L(y|s), (6)
where L(y|s) is the statistical likelihood function for the observed data. Since it is assumed in this particular example that the additive noise in the signal model is Gaussian, the ML cost function reduces to the least squares (LS) objective function
||y − H(Θ) s||^2. (7)
For signals of large dimension (i.e., large numbers of pixels), it may become prohibitive to explicitly construct these matrices. Often, iterative methods are utilized to minimize Eqn. 7, eliminating the need to explicitly construct the matrices. In many situations, the operator H is rank-deficient, leading to unstable solutions. In such cases, additional information, such as object power spectral density information or object functional smoothness, can be used to constrain the space of solutions.
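An iterative solver of the kind referred to above can be sketched as simple gradient (Landweber-style) iterations; the small Tikhonov term lam stands in for the additional prior information used to stabilize a rank-deficient H, and the matrices and step size are illustrative assumptions.

```python
# Sketch: iterative least-squares minimization of
# (1/2)||y - H s||^2 + (lam/2)||s||^2 by gradient descent.
import numpy as np

def iterative_ls(H, y, lam=1e-3, n_iter=500):
    # Step size based on the largest singular value ensures convergence.
    step = 1.0 / (np.linalg.norm(H, 2) ** 2 + lam)
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ s - y) + lam * s     # gradient of the objective
        s = s - step * grad
    return s

# Toy example with a rank-deficient H (more unknowns than measurements).
rng = np.random.default_rng(0)
H = rng.normal(size=(40, 60))
s_true = rng.normal(size=60)
y = H @ s_true + rng.normal(0, 0.01, size=40)
s_hat = iterative_ls(H, y)
```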
When statistical prior information exists about the unknown signal, the MAP cost function becomes
||y − H(Θ) s||^2 + ψ C(s), (8)
where C(s) represents the prior information about the unknown signal and ψ represents a Lagrangian-type relative weighting between the data objective function and the prior information. Cost functions of this form may not permit analytic solutions. The Cramer-Rao inequality could be used to bound, as well as to predict asymptotically, the performance of the nonlinear estimator.
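A MAP-style cost of the form of Eqn. 8 can be sketched with a quadratic smoothness prior C(s) = ||D s||^2, where D is a finite-difference operator, minimized by gradient descent; the prior, the weight ψ and the problem sizes are illustrative assumptions.

```python
# Sketch: minimize (1/2)||y - H s||^2 + (psi/2)||D s||^2, a quadratic
# MAP-style cost with a first-difference smoothness prior.
import numpy as np

def map_estimate(H, y, psi=0.1, n_iter=500):
    n = H.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # circular first differences
    step = 1.0 / (np.linalg.norm(H, 2) ** 2 + psi * np.linalg.norm(D, 2) ** 2)
    s = np.zeros(n)
    for _ in range(n_iter):
        grad = H.T @ (H @ s - y) + psi * (D.T @ (D @ s))
        s = s - step * grad
    return s

# Toy example with a smooth-ish signal.
rng = np.random.default_rng(1)
H = rng.normal(size=(50, 50))
s_true = 0.1 * np.cumsum(rng.normal(size=50))
y = H @ s_true + rng.normal(0, 0.05, size=50)
s_map = map_estimate(H, y)
```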
The adjustment approach described above is now applied to a specific example using a simulated telescopic imaging system.
An adaptive optics element (in this case a LC-SLM) 415 is located directly after the primary refractive element 411 for the purpose of adjusting the incoming wavefront. The adaptive optics element is assumed to have the constraint that the difference in phase between neighboring regions in the AO element cannot exceed 3 microns or about 6 waves of separation in the visible spectrum, and the total phase cannot be adjusted by more than 5 microns or 10 waves of separation.
When the light collected by an imaging system has passed through large optical distances (e.g., in the case of astronomical or terrestrial telescopic imaging), random variations in the air's refractive index produce random wavefront disturbances which can significantly blur the captured images. Adaptive optics elements can be used to help correct these random fluctuations. However, in these cases, the optical aberrations typically are not well described by low-order polynomial functions. Rather, the random fluctuations of turbulent media contain both low-order and high-order wavefront error components, which are best described by Zernike polynomials. In this simulation, these random wavefront errors are simulated by drawing Gaussian random variables for the first 9 Zernike modes.
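The simulated turbulence can be sketched by drawing Gaussian coefficients for the first 9 (non-piston) Zernike modes and summing them into a wavefront map over the pupil; the mode normalization and coefficient scale are illustrative assumptions.

```python
# Sketch: random wavefront error built from Gaussian-weighted Zernike
# modes (tip/tilt, defocus, astigmatism, coma, trefoil), unnormalized.
import numpy as np

rng = np.random.default_rng(42)

n = 128
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
t = np.arctan2(yy, xx)
pupil = r <= 1.0

modes = [
    r * np.cos(t), r * np.sin(t),                      # tip, tilt
    2 * r ** 2 - 1,                                    # defocus
    r ** 2 * np.cos(2 * t), r ** 2 * np.sin(2 * t),    # astigmatism
    (3 * r ** 3 - 2 * r) * np.cos(t),                  # coma
    (3 * r ** 3 - 2 * r) * np.sin(t),
    r ** 3 * np.cos(3 * t), r ** 3 * np.sin(3 * t),    # trefoil
]

coeffs = rng.normal(0.0, 0.5, size=len(modes))         # waves, toy scale
wavefront = pupil * sum(c * z for c, z in zip(coeffs, modes))
```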
These random wavefront disturbances significantly reduce the contrast of the captured images.
Employing conventional AO control (i.e., attempting to minimize the RMS wavefront error) produces the wavefront error and MTF curves shown in the accompanying figures.
In an alternate approach, the image processing system combines multiple short-exposure images. The detector captures multiple images using different AO settings. The image processing combines these images to produce a single “composite” image. The effective MTF for the composite image is thus based on the individual MTFs for the different AO settings. In this case, the AO control optimizes the AO device for each short exposure such that the collection of short exposures together contains the maximal amount of information.
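One way to combine such short exposures is a multi-frame Wiener-style combination in the Fourier domain, weighting each frame by the conjugate of its own OTF. The OTFs, frames and regularization constant below are illustrative assumptions rather than the specific combination rule of the original system.

```python
# Sketch: combine short exposures captured with different AO settings
# (and thus different OTFs) into a single composite image.
import numpy as np

def combine_exposures(frames, otfs, nsr=1e-2):
    # frames: captured images; otfs: matching OTFs; nsr: noise-to-signal
    # regularization term.
    num = np.zeros(frames[0].shape, dtype=complex)
    den = np.full(frames[0].shape, nsr, dtype=complex)
    for y, otf in zip(frames, otfs):
        num += np.conj(otf) * np.fft.fft2(y)
        den += np.abs(otf) ** 2
    return np.real(np.fft.ifft2(num / den))

# Toy usage: two frames of a point object blurred by different low-pass OTFs.
fy, fx = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
otfs = [np.exp(-(fx ** 2 + fy ** 2) / (2 * w ** 2)).astype(complex)
        for w in (0.05, 0.15)]
s = np.zeros((64, 64)); s[32, 32] = 1.0
frames = [np.real(np.fft.ifft2(otf * np.fft.fft2(s))) for otf in otfs]
composite = combine_exposures(frames, otfs)
```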
The AO control approach described above can be implemented in many different ways.
Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed in detail above. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims. Therefore, the scope of the invention should be determined by the appended claims and their legal equivalents.