The present disclosure generally relates to optical and near-infrared atmospheric turbulence measurement systems.
Accurate characterization of atmospheric turbulence and its effects is extremely important for evaluating the performance of optical systems operating in real environments and for designing systems to mitigate turbulence effects. This analysis is relevant to laser communications, surveillance, and tactical applications. Irradiance-based methods such as the Scintillation Detection and Ranging (SCIDAR) technique have been used in the past by astronomers to obtain a low-resolution vertical profile of turbulence. [1, 2] The path-weighted refractive index structure constant, Cn2, is traditionally measured using scintillometers. [3] However, irradiance-based techniques are of limited use in the saturation regime and hence are not suitable for measurements over long paths through turbulence. Also, direct point estimates of Cn2 that are derived from such intermediate quantities as the velocity structure constant, Cv2, and the temperature structure constant, CT2, require measurements of wind speed and temperature at high temporal resolution. [4] Use of these methods requires physical sensors to be deployed in and around the region of interest.
The present innovation overcomes the foregoing problems and other shortcomings, drawbacks, and challenges of measuring turbulence for surveillance systems that are affected by turbulence, or for a tactical weapon engagement scenario where accuracy depends on turbulence monitoring. While the present innovation will be described in connection with certain embodiments, it will be understood that the invention is not limited to these embodiments. To the contrary, this invention includes all alternatives, modifications, and equivalents as may be included within the spirit and scope of the present invention.
Scintillometers are the gold standard for measuring turbulence. However, they have limited operational ranges since they suffer from saturation issues, and they require access to both ends of the path. Other turbulence measurement techniques such as the Differential Image Motion Monitor (DIMM), Scintillation Detection and Ranging (SCIDAR), Slope Detection and Ranging (SLODAR), and MZA Associates' Path Resolved Optical Profiler System (PROPS) require active sources at the target location. Some of these instruments require sophisticated instrumentation as well. The proposed technique is a low-cost approach to remotely characterize turbulence from a single site without deploying sensors or sources at the target location. MZA's Delayed Tilt Anisoplanatism (DELTA) is an imaging-based technique that uses the turbulence-induced motion of point features on a distant target to remotely profile turbulence; however, because point features are tracked, its operational range is limited. Our imaging technique is phase-based (hence free of saturation) and tracks the motion of extended patches rather than point features, which allows the technique to be applied over longer ranges. The measured tilt variances can also be linearly combined in such a way that the output mimics the output of any traditional turbulence measuring system. [5-8]
According to one aspect of the present innovation, a turbulence characterization system includes (i) an image capturing device that captures time-lapse images of a distant target (anywhere from a km to more than 100 km away), and (ii) a turbulence measurement system, comprising a processor, that is communicatively connected to the image capturing device to receive images. The processor tracks the relative motion due to atmospheric turbulence between some number of patches of definite size on each of these images using a sub-pixel accurate correlation technique. The processor computes the differential motion between all pairs of patches on each image; this differential motion is simply proportional to the differential wavefront tilt. The processor computes the differential tilt variance for each pair of patches from the collection of images and evaluates the theoretical weighting functions relating turbulence along the imaging path to the measured differential tilt variances. These weighting functions depend upon the size and separation of the patches being tracked as well as the path length and the size of the camera aperture. The processor determines a linear combination of these weighting functions which best matches the weighting function for some desired turbulence parameter (such as the isoplanatic patch size, the log-amplitude variance, Fried's coherence diameter, or the turbulence strength in a neighborhood around some specific range) and, finally, linearly combines the measured tilt variances to estimate the turbulence parameter of interest.
According to another aspect of the present innovation, a method includes capturing frames of images of a distant target (anywhere from a km to more than 100 km away) using an image capturing device. The method includes tracking the relative motion due to atmospheric turbulence between some number of patches of definite size on each of these images using a sub-pixel accurate correlation technique. The method includes computing the differential motion between all pairs of patches on each image; this differential motion is simply proportional to the differential wavefront tilt. The method includes computing differential tilt variances for each pair of patches from the collection of frames. The method includes computing theoretical weighting functions relating turbulence along the imaging path to the measured differential tilt variances. The method includes determining weights to linearly combine the weighting functions such that the combined weighting function closely resembles a desired weighting function that corresponds to the turbulence parameter of interest (such as the isoplanatic patch size, the log-amplitude variance, Fried's coherence diameter, or the turbulence strength in a neighborhood around some specific range). The method includes linearly combining the differential tilt variances using the determined weights to obtain the turbulence parameter of interest.
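As an illustration of the numerical core of this method, the sketch below (in Python, using only NumPy) takes as given the tracked differential patch motions and the sampled theoretical weighting functions and produces the turbulence parameter estimate; the array layouts and the function name are assumptions made for the sketch, not features of any particular embodiment.

```python
import numpy as np

def turbulence_parameter_from_shifts(diff_shifts, weighting_functions,
                                     desired_weighting):
    """Numerical core of the method, given measured and modeled inputs.

    diff_shifts         : (n_pairs, n_frames) differential patch motion,
                          proportional to differential wavefront tilt.
    weighting_functions : (n_ranges, n_pairs) theoretical path weighting
                          functions, one column per patch pair.
    desired_weighting   : (n_ranges,) weighting function of the desired
                          turbulence parameter, sampled at the same ranges.
    """
    # Differential tilt variance for every pair of patches.
    tilt_vars = np.var(diff_shifts, axis=1)
    # Least-squares weights (Moore-Penrose pseudoinverse) so that the
    # combined weighting function mimics the desired one.
    weights = np.linalg.pinv(weighting_functions) @ desired_weighting
    # The same weights applied to the measured variances estimate the
    # turbulence parameter of interest.
    return weights @ tilt_vars
```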
Additional objects, advantages, and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
The description of the illustrative embodiments can be read in conjunction with the accompanying figures. It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the figures presented herein, in which:
According to aspects of the present disclosure, a system and method provide improved remote turbulence measurement. The system includes an image capturing device that captures time-lapse images of a distant target (anywhere from a km to more than 100 km away). A processor of the system tracks relative motion due to atmospheric turbulence of some number of patches of definite size on each of these images using a sub-pixel accurate correlation technique. The processor computes differential tilt variances between every pair of patches from the image collection and evaluates the theoretical weighting functions relating turbulence along the path to the differential tilt variances. The processor determines weights to linearly combine the weighting functions such that the combined weighting function closely resembles the weighting function corresponding to a turbulence parameter of interest. The processor then combines the differential tilt variances using the determined weights to obtain the desired turbulence parameter.
In one aspect of the present disclosure, a method includes obtaining a sequence of images of a distant target (anywhere from a km to more than 100 km away) using a digital image capture device, or even digitizing film images. The method includes tracking the relative motion due to atmospheric turbulence between some number of patches of definite size on each of these images using a sub-pixel accurate correlation technique. The method includes computing the differential motion between all pairs of patches on each image; this differential motion is simply proportional to the differential wavefront tilt. The method includes computing the differential tilt variance for each pair of patches from the collection of images. The method includes evaluating the theoretical weighting functions relating turbulence along the imaging path to the measured differential tilt variances. These weighting functions depend upon the size and separation of the patches being tracked as well as the path length and the size of the camera aperture. An expression for these theoretical weighting functions is derived in the description section under the presumptions of geometric optics and turbulence that follows the Kolmogorov power spectrum. The method includes determining a linear combination of these weighting functions which best matches the weighting function for some desired turbulence parameter (such as the isoplanatic patch size, Fried's coherence diameter, the log-amplitude variance, or the turbulence strength in a neighborhood around some specific range).
The images are first cropped to isolate the region of interest. A reference image is chosen from the images collected, and a cross-correlation algorithm is used to estimate the image shift between each image and the reference image. A Gaussian window is applied to the images before the cross-correlation algorithm is run; this reduces the effects of the frame edges on the correlation result. A parabolic fit is applied to the correlation peak to provide a sub-pixel estimate of the shift between the images. The correlation with a reference image provides information about the slow motion during the course of the day due to refractive bending. This long-term drift information is used to adjust the tracking window, keeping it locked onto a feature throughout the collection. The cross-correlation algorithm is then applied to each image and its neighboring image to estimate the random motion of the feature due to turbulence. The shifts of two features are subtracted to get the differential signal. The differential mode of measurement eliminates the effects of common-mode disturbances, such as platform vibrations.
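A rough sketch of this tracking step, combining the Gaussian windowing, an FFT-based cross-correlation, and a parabolic fit around the correlation peak, is given below; the window width, the handling of patch extraction, and the function names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def gaussian_window(shape, sigma):
    """Centered 2-D Gaussian window used to suppress frame-edge effects."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

def subpixel_shift(patch_ref, patch, sigma=8.0):
    """Shift of `patch` relative to `patch_ref` from a windowed
    cross-correlation with a parabolic fit around the peak."""
    w = gaussian_window(patch_ref.shape, sigma)
    a = (patch_ref - patch_ref.mean()) * w
    b = (patch - patch.mean()) * w
    # Circular cross-correlation via FFTs; zero lag moved to the center.
    corr = np.fft.fftshift(
        np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(c_minus, c_0, c_plus):
        # Vertex of the parabola through three samples around the peak.
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    # Peak assumed to lie away from the array edge.
    dy = parabolic(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = parabolic(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return (py + dy - corr.shape[0] // 2, px + dx - corr.shape[1] // 2)

# Subtracting the shifts of two tracked features gives the differential
# signal, which removes common-mode disturbances such as platform vibration.
```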
B.1 Path Weighting Functions for Differential Patch-averaged Tilt Variance: The z-tilt over an aperture of diameter D, when viewing a source in the direction θ, can be expressed as [9]:
where λ is the wavelength, ϕ(r,θ) is the turbulence-induced wavefront distortion at aperture coordinate r, and W(r) is the aperture weighting function.
The mean correlation between tilts observed at the aperture due to two sources at viewing directions θ1 and θ2 can be written as,
where the angled brackets indicate ensemble averaging. Interchanging the order of integration and ensemble averaging results in,
Since ∫∫ dr dr′ W(r)W(r′) r·r′ = 0, terms which are functions of either r or r′, and not both, can be added without changing the result of the integration.
Hence, Eq. (4) becomes,
where Dϕ(r−r′, θ1−θ2) = ⟨[ϕ(r,θ1) − ϕ(r′,θ2)]²⟩ is the phase structure function.
For a spherical wave propagating through turbulence characterized by the Kolmogorov power spectrum, the wave structure function can be written as, [10]
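A standard form, consistent with the geometry stated below (an illustrative reconstruction rather than the exact expression of reference [10]), is

\[ D(\rho) \;=\; 2.91\, k^{2}\, \rho^{5/3} \int_{0}^{L} C_{n}^{2}(z)\,\Bigl(1 - \tfrac{z}{L}\Bigr)^{5/3} dz , \]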
where k = 2π/λ is the wave number and L is the path length. The receiver is located at z = 0 and the source is located at z = L. Assuming that the phase structure function is approximately equal to the wave structure function and substituting Eq. (6) in Eq. (5),
The integrations over r and r′ can be done using techniques outlined by Fried [11] and Winick et al. [12]. Applying those techniques, Eq. (7) reduces to
Regarding the novelty of the weighting functions, we have derived expressions for patch-averaged tilt variances and the corresponding weighting functions, whereas Whiteley et al. give expressions for weighting functions corresponding to the point-source case. A patch is essentially a collection of incoherent point sources. Using weighting functions for the patch-averaged tilt variance, rather than the tilt variance due to a point source, allows the technique to be applied over long ranges, where even a single pixel in the image corresponds to a patch and not a point. Tracking the motion of extended patches rather than points on a target (over long ranges or in strong turbulence conditions, it is not possible to track points on a non-cooperative target) and developing the weighting functions for the patch-averaged tilt variance described below are novel.
Since each pixel in the time-lapse imagery corresponds to a patch of finite size, not just a point on the target, the shift (or tilt) measured from the whole image, or even from a single pixel of the image, is not the tilt due to a single point source, but rather an average tilt due to several incoherent point sources spread over a patch. Since a Gaussian window is applied to the images before the cross-correlation is computed, the source patch is modeled as the same Gaussian to make the analysis consistent. The patch-averaged tilt, αp, is defined as,
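Schematically, and under the Gaussian-patch assumption just stated (an illustrative form; θ0 and σθ are assumed symbols denoting the patch center and the angular width set by the applied window),

\[ \alpha_{p} \;=\; \int d^{2}\theta\; P(\theta)\,\alpha(\theta), \qquad P(\theta) \;=\; \frac{1}{2\pi\sigma_{\theta}^{2}} \exp\!\left(-\frac{|\theta - \theta_{0}|^{2}}{2\sigma_{\theta}^{2}}\right). \]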
The correlation between two patch-averaged tilts αp1 and αp2 is
where Δθ is the angular separation between the two patches. The order of integration and ensemble averaging is interchanged to obtain the above expression. In the following analysis, the patches are assumed to be equal in size. The term within the angle brackets in Eq. (10) is simply the correlation of the tilts due to two point sources. Substituting Eq. (8) in Eq. (10),
Eq. (11) can be equivalently expressed as,
Changing the variables of integration, Eq. (12) reduces to,
Substituting the above result in Eq. (14), the expression for patch-averaged tilt correlation becomes,
The expression for the patch-averaged tilt variance, ⟨αp²⟩, can be obtained from Eq. (15) by setting Δθ = 0. The differential patch-averaged tilt variance can hence be expressed as:
is the path weighting function. No simpler form for the weighting function can be obtained, and hence in the present work, Eq. (17) is evaluated numerically for cases of interest.
B.2 Creating Arbitrary Weighting Functions from Linear Combinations of Path Weighting Functions: If the turbulence is presumed to be constant along the path, then the differential tilt variance associated with each pair of image patches would provide an estimate of Cn2. If this presumption is not made, a set of tilt variances from pairs of patches of different sizes and separations can be viewed as encoding the variation of Cn2 along the path. The members of a set of path weighting functions, obtained from a variety of differently sized and separated image patches, each characterize the turbulence along (almost) the same path, but each weights that path differently. This rich set of weighting functions can therefore be used as a basis set, in the sense that functions drawn from it can be linearly combined to reproduce the path weighting function associated with some atmospheric parameter of interest. To determine the appropriate linear combination, the Moore-Penrose pseudoinverse can be used to find the least-squares optimal approximation to the desired weighting function in terms of the available basis set. Here it is shown that the weighting functions for differential patch-averaged tilt variances can be used to reproduce the weighting functions for the spherical-wave r0 (Fried's coherence diameter) and for a scintillometer.
The spherical wave r0 is given by: [13]
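A standard expression, written with the camera at z = 0 and the target at z = L (an illustrative form consistent with the convention described below, not necessarily the exact expression of reference [13]), is

\[ r_{0} \;=\; \left[\, 0.423\, k^{2} \int_{0}^{L} C_{n}^{2}(z) \left(\frac{L - z}{L}\right)^{5/3} dz \right]^{-3/5} . \]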
This equation has been written so that the integration takes place from the camera to the object, i.e., the camera is at 0 and the target is at L as above. In this expression Cn2 is weighted by (L−z)^{5/3} along the path, so turbulence nearer the camera is weighted most heavily. This is the same weighting as that of the tilt variance due to a point source at the target. Since there is no beacon or point source at the target, no part of the image corresponds to a weighting function of this form. By sampling the weighting functions along the path and forming a matrix, M, whose columns are the individual sampled weighting functions (so that each row corresponds to a range at which the functions are sampled), the desired weights can be computed as W = M+F, where W is the vector of weights to be applied to the weighting functions, F is the desired weighting function sampled at the same ranges, and M+ is the pseudoinverse of M. The success of this operation can then be assessed by examining MW, which is the actual weighting function produced by attempting to match F with a linear combination of functions drawn from M.
The Moore-Penrose technique is used here to obtain the pseudo-inverse.
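A minimal numerical sketch of this step is shown below, assuming the weighting functions have already been sampled at a common set of ranges; numpy.linalg.pinv computes the Moore-Penrose pseudoinverse, and the array layout follows the convention above (one column per weighting function).

```python
import numpy as np

def combination_weights(M, F):
    """Least-squares weights for combining measured path weighting functions.

    M : (n_ranges, n_pairs) matrix whose columns are the sampled
        differential-tilt weighting functions.
    F : (n_ranges,) desired weighting function (e.g., that of a
        scintillometer or the spherical-wave r0), sampled at the same ranges.
    """
    W = np.linalg.pinv(M) @ F   # W = M+ F
    achieved = M @ W            # M W: how closely F is reproduced
    return W, achieved
```

The quality of the match can be judged by comparing the returned achieved curve against F before applying the weights W to the measured differential tilt variances.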
C. How the Invention Can Be Used: The following example demonstrates how the method developed here can be used to obtain an estimate of path-weighted Cn2, such as that provided by a scintillometer. [14] The experiment was conducted in February 2017. Images of the Dayton VA Medical Center were captured from a window at the University of Dayton's (UD) Intelligent Optics Laboratory.
Example images are shown in
The four patches tracked were each of 1/e diameter 16 cm (20 pixels). Hence there were two pairs of patches separated by 48 cm and two pairs of patches separated by 1.8 m at the target for each image. A 30-frame moving window, corresponding to twenty (20) minutes of imagery, was used to compute the variance of the differential motion. The horizontal and vertical variances were added to obtain the total variance. The tilt variances for the two different patch separations were multiplied by their corresponding weights (computed using the pseudo-inverse technique mentioned in the previous section) and added together to obtain a path-weighted estimate of Cn2. In essence, the path-weighting function for the Cn2 estimate was the weighting function shown in
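A schematic of this moving-window computation (array layouts and the conversion from pixel shifts to tilt angles are assumed for illustration) follows.

```python
import numpy as np

def path_weighted_estimate(diff_tilts, weights, window=30):
    """Moving-window, path-weighted turbulence estimate.

    diff_tilts : (n_pairs, n_frames, 2) differential tilts for each patch
                 pair, horizontal and vertical components.
    weights    : (n_pairs,) weights from the pseudo-inverse step.
    window     : frames per moving window (30 frames here, roughly
                 twenty minutes of imagery).
    """
    n_pairs, n_frames, _ = diff_tilts.shape
    estimates = []
    for start in range(n_frames - window + 1):
        block = diff_tilts[:, start:start + window, :]
        # Total variance = horizontal variance + vertical variance.
        tilt_var = block.var(axis=1).sum(axis=-1)   # (n_pairs,)
        estimates.append(weights @ tilt_var)        # weighted combination
    return np.array(estimates)
```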
According to meteorological observations [15], Feb. 14, 2017 was a clear day. As evident from
Alternatives: The derivation of the weighting functions given in section B contains a presumption that the source patch is well modeled by the Gaussian window applied to the images. This presumption can be eliminated by using a measurement of the light intensity within each source window. In cases where the Gaussian window does not correspond well with the actual source patch detail, this approach will improve the accuracy of the method. To accomplish this, the Gaussian patch function introduced in equation 9 is replaced by the known intensity distribution of the patch being tracked. The known intensity distribution could be taken from the image set used, from higher resolution images, or from model data. The integrations which follow are then computed numerically. An intermediate approach is to compute the width (i.e., standard deviation) of the intensity distribution within each windowed source patch on the x and y axes and to use these as the widths of the Gaussian windows used for the integrations, as sketched below. When these widths differ on x and y, the derivation which follows needs to be modified to account for the bivariate Gaussian. In the case of corners, edges, or small high-contrast features, the width on one or both axes will be smaller than that of the Gaussian window applied, and this correction will prevent the turbulence from being overestimated.
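A minimal sketch of the intermediate approach, assuming the patch has been background-subtracted so the intensity can be treated as a weight, is:

```python
import numpy as np

def patch_intensity_widths(patch):
    """Per-axis width (standard deviation) of the intensity distribution
    within a windowed source patch, usable as Gaussian window widths."""
    I = np.clip(patch.astype(float) - patch.min(), 0.0, None)
    total = I.sum()
    y, x = np.indices(I.shape)
    ybar = (I * y).sum() / total
    xbar = (I * x).sum() / total
    sig_y = np.sqrt((I * (y - ybar) ** 2).sum() / total)
    sig_x = np.sqrt((I * (x - xbar) ** 2).sum() / total)
    return sig_y, sig_x
```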
In one or more embodiments, ICD 1102 is a digital camera operating in the visible or near infrared. In one or more embodiments, ICD 1102 captures images at intervals no shorter than the atmospheric coherence time, with individual exposures shorter than the atmospheric coherence time. In one or more embodiments, the ICD 1102 turbulence measurement is used to (a) compensate for turbulence-degraded images in a surveillance system and (b) understand how turbulence is distributed along the path in a tactical engagement scenario.
In one or more embodiments, the turbulence measurement system uses the pseudoinverse technique to obtain the weights required to linearly combine the differential tilt variance weighting functions into a weighting function that approximates the weighting function of a desired turbulence parameter. The measured differential tilt variances from the sample images are linearly combined according to the computed weights to obtain the turbulence parameter of interest. The images are blurred and warped due to turbulence. The calculated turbulence parameter can be used to generate a model of the optical transfer function, which can be used to deconvolve the turbulence-degraded images and recover sharper images for surveillance.
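One simple way such a deconvolution might be realized is sketched below, assuming a long-exposure Kolmogorov atmospheric MTF model, exp[−3.44 (λν/r0)^{5/3}], and Wiener filtering with a scalar noise-to-signal ratio; the diffraction-limited optics OTF and a realistic noise model are omitted for brevity.

```python
import numpy as np

def long_exposure_atm_otf(shape, pixel_ifov, wavelength, r0):
    """Long-exposure atmospheric OTF modeled from Fried's coherence diameter.

    shape      : image shape (ny, nx)
    pixel_ifov : angular size of one pixel [rad]
    wavelength : imaging wavelength [m]
    r0         : estimated Fried coherence diameter [m]
    """
    fy = np.fft.fftfreq(shape[0], d=pixel_ifov)   # cycles per radian
    fx = np.fft.fftfreq(shape[1], d=pixel_ifov)
    nu = np.hypot(*np.meshgrid(fx, fy))
    return np.exp(-3.44 * (wavelength * nu / r0) ** (5.0 / 3.0))

def wiener_deconvolve(image, otf, nsr=1e-2):
    """Wiener deconvolution with a scalar noise-to-signal ratio."""
    G = np.fft.fft2(image)
    F_hat = np.conj(otf) * G / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```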
In one or more aspects of the present disclosure, a method includes obtaining a sequence of images of a distant target (anywhere from a km to more than 100 km away) using a digital image capture device, or even digitizing film images. The method includes tracking the relative motion due to atmospheric turbulence between some number of patches of definite size on each of these images using a sub-pixel accurate correlation technique. The method includes computing the differential motion between all pairs of patches on each image; this differential motion is simply proportional to the differential wavefront tilt. The method includes computing the differential tilt variance for each pair of patches from the collection of images. The method includes evaluating the theoretical weighting functions relating turbulence along the imaging path to the measured differential tilt variances. These weighting functions depend upon the size and separation of the patches being tracked as well as the path length and the size of the camera aperture. The method includes determining a linear combination of these weighting functions which best matches the weighting function for some desired turbulence parameter (such as the isoplanatic patch size, the log-amplitude variance, Fried's coherence diameter, or the turbulence strength in a neighborhood around some specific range). The method includes linearly combining the measured tilt variances to estimate the turbulence parameter of interest.
The following references cited above are hereby incorporated by reference in their entirety:
While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular system, device or component thereof to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiments disclosed for carrying out this disclosure, but that the disclosure will include all embodiments falling within the scope of the appended claims. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another.
In the preceding detailed description of exemplary embodiments of the disclosure, specific exemplary embodiments in which the disclosure may be practiced are described in sufficient detail to enable those skilled in the art to practice the disclosed embodiments. For example, specific details such as specific method orders, structures, elements, and connections have been presented herein. However, it is to be understood that the specific details presented need not be utilized to practice embodiments of the present disclosure. It is also to be understood that other embodiments may be utilized and that logical, architectural, programmatic, mechanical, electrical and other changes may be made without departing from general scope of the disclosure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and equivalents thereof.
References within the specification to “one embodiment,” “an embodiment,” “embodiments,” or “one or more embodiments” are intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of such phrases in various places within the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
It is understood that the use of specific component, device and/or parameter names and/or corresponding acronyms thereof, such as those of the executing utility, logic, and/or firmware described herein, are for example only and not meant to imply any limitations on the described embodiments. The embodiments may thus be described with different nomenclature and/or terminology utilized to describe the components, devices, parameters, methods and/or functions herein, without limitation. References to any specific protocol or proprietary name in describing one or more elements, features or concepts of the embodiments are provided solely as examples of one implementation, and such references do not limit the extension of the claimed embodiments to embodiments in which different element, feature, protocol, or concept names are utilized. Thus, each term utilized herein is to be given its broadest interpretation given the context in which that term is utilized.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the disclosure. The described embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
This application is a continuation-in-part under 35 U.S.C. § 120 to U.S. patent application Ser. No. 17/077,323 entitled “Estimation of Atmospheric Turbulence Parameters using Differential Motion of Extended Features in Time-lapse Imagery”, filed 9 Dec. 2020, which in turn claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/924,745 entitled “Estimation of Atmospheric Turbulence Parameters using Differential Motion of Extended Features in Time-lapse Imagery”, filed 23 Oct. 2019, the contents of which are incorporated herein by reference in their entirety.
The invention described herein was made by employees of the United States Government and may be manufactured and used by or for the Government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor.
Number | Date | Country
---|---|---
63/210,693 | Jun 2021 | US
62/924,745 | Oct 2019 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17/077,323 | Oct 2020 | US
Child | 17/704,558 | | US