The present invention generally relates to space research, and in particular, a system and method of asteroid detection, characterization, and imaging.
Near-Earth asteroids pose a potential danger to Earth because of the serious damage they can cause upon planetary impact. An asteroid collision, such as the Chicxulub impact, can release energy nearly 2 million times greater than that of the most powerful man-made explosive and can cause mass-extinction events. The secondary effects of an asteroid impact include global firestorms, altered climate, megatsunamis, earthquakes, volcanic eruptions, and acid rain. The Chicxulub asteroid is cited as the main cause of the Cretaceous-Paleogene mass extinction event that eradicated numerous animal and plant groups, including non-avian dinosaurs.
While the Chicxulub asteroid was large, roughly 10 km in diameter, even much smaller asteroids can cause significant damage. The Chelyabinsk meteor was caused by a near-Earth asteroid that exploded in the air over Chelyabinsk Oblast. The meteor was merely 20 meters in diameter, yet it produced a shockwave, injured 1,500 people seriously enough to need medical attention, damaged 7,200 buildings in six cities, and produced a hot cloud of dust and gas. It is estimated that its total kinetic energy before atmospheric entry was 20-30 times the energy released by the Hiroshima atomic bomb. The Chelyabinsk meteor was undetected before entering Earth's atmosphere and caused this level of damage without directly impacting the Earth's surface. It is therefore important to be able to accurately detect, characterize, and image near-Earth asteroids.
One way to observe asteroids is by using stellar occultation. This occurs when a star is hidden by an object such as an asteroid, moon, or planet that is passing between the observer and the star. Most conventional stellar occultation methods require the use of a telescope to measure the size and position of an asteroid and while many asteroids may be observed in this manner, these conventional methods are not effective in observing smaller near-Earth asteroids, like the Chelyabinsk meteor, that can still create serious damage. Furthermore, in conventional occultation methods many observers are needed to simultaneously observe the asteroid, noting the times that the asteroid blocks the star and when the star reappears, in order to accurately characterize it. By processing this data, observers are able to draw a set of parallel lines across a shadow region to determine the asteroid's size and shape.
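The chord geometry described above may be sketched numerically. In the following illustrative Python sketch, the station timings and shadow speed are hypothetical values chosen for demonstration only:

```python
# Illustrative sketch of the conventional multi-observer chord method.
# Each observer's disappearance/reappearance times define one chord across
# the shadow; chord length = shadow velocity * occultation duration.
# The station timings and shadow speed below are hypothetical.

shadow_velocity_m_s = 10_000.0

# (disappearance_s, reappearance_s) for three hypothetical stations
timings = [(0.00, 0.18), (0.01, 0.21), (0.03, 0.17)]

chords_m = [shadow_velocity_m_s * (t1 - t0) for t0, t1 in timings]
min_diameter_m = max(chords_m)  # the longest chord is a lower bound on the diameter
print(chords_m, min_diameter_m)
```

Plotting such parallel chords at each station's cross-track offset outlines the asteroid's silhouette, which is why conventional methods require many simultaneous observers.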
It is estimated that only 1% of the near-Earth asteroids in the 40-140 meter size range thought to exist have been detected, leaving about 285,000 near-Earth asteroids unknown. Many of these objects are too small and distant to cast the sharply defined shadows required for conventional occultation methods, thus making the observation, characterization, and imaging of these smaller asteroids improbable if not impossible. Conventional methodologies of stellar occultation are also cost-prohibitive, requiring sophisticated equipment and a coordinated team of observers to record data.
In accordance with one aspect of the present invention, a method of characterizing a near-Earth object includes collecting intensity time history data by at least one light collecting aperture positioned to observe a star, detecting a stellar occultation event, recording the intensity time history data, processing the intensity time history data, predicting at least one of a set of near-Earth object characteristics, and imaging the near-Earth celestial object. In accordance with at least one embodiment of the invention, an asteroid characterization and imaging system comprises at least one light collecting aperture positioned to collect intensity time history data and a data analysis unit configured to detect an occultation event and process the intensity time history data. In one or more embodiments of the invention, the intensity time history data represents light reaching the light collecting aperture from the star as a function of time. In one or more embodiments of the invention, observations made during a stellar occultation by an asteroid include determination of the asteroid's size, calculation of the distance between the light collecting aperture and the asteroid, prediction of trajectory, calculation of the asteroid's velocity, and imaging of the asteroid. In one or more embodiments of the invention, the image of the asteroid is derived by applying a shadow function, phase retrieval algorithm, and silhouette function to the intensity time history data.
These and other features and advantages of the present invention will be better understood by reading the following Detailed Description, taken together with the Drawings wherein:
The embodiment of
In accordance with one embodiment of the invention, observations made during a stellar occultation by an asteroid include detection of the asteroid, determination of the asteroid's size and shape, imaging of the asteroid, and prediction of trajectory. These characteristics are important to observe because they allow identification of asteroids or any other celestial objects, especially near-Earth objects. If the orbit of a near-Earth object crosses Earth's orbit, the resulting impact event could be catastrophic. Therefore, detection, characterization, and identification of the orbital paths of near-Earth objects like near-Earth asteroids may allow prediction of which objects pose a danger to the Earth and the magnitude of that danger. Once identified and characterized, these near-Earth objects may, in the future, be mined for mineral resources.
Known methods allow detection of large near-Earth objects, between 10,000 meters and 100,000 meters in diameter, which cast a sharp shadow when occulting a star. These near-Earth objects are within the radiative near field, or Fresnel zone, with Fresnel number > 1. However, smaller near-Earth objects, such as those with diameters between 40 meters and 140 meters, do not create such defined shadows when occulting a star. These smaller near-Earth objects cast shadows near or within the far field, or Fraunhofer region, at a distance of one astronomical unit (1 AU). Therefore, known methods cannot efficiently detect and characterize these types of objects.
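The regime distinction above can be made concrete with the Fresnel number. The short sketch below uses the mid-band wavelength and 1 AU distance from this description; the comparison objects are the sizes discussed above:

```python
# Fresnel number F = a^2 / (lambda * z): F > 1 indicates the sharp-shadow
# Fresnel zone; F << 1 indicates the diffracted Fraunhofer regime described
# above. Wavelength and distance follow the values used in this description.
WAVELENGTH_M = 0.5e-6   # mid-band wavelength
AU_M = 1.496e11         # one astronomical unit

def fresnel_number(radius_m, distance_m=AU_M, wavelength_m=WAVELENGTH_M):
    return radius_m ** 2 / (wavelength_m * distance_m)

print(fresnel_number(5_000.0))  # ~10 km diameter object: F well above 1
print(fresnel_number(20.0))     # 40 m diameter object: F well below 1
```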
In one embodiment of the invention, shown in
According to this embodiment of the invention, the star 107 is being occulted by an asteroid 105. Each of the light collecting apertures 103 continuously records intensity time history data. This intensity time history data represents the intensity of light reaching the light collecting aperture from the star as a function of time. By continuously recording the intensity of light from the star, the light collecting apertures measure the relative intensities before, during, and after the occultation of the star. In an embodiment of the invention, each light collecting aperture includes a reasonably high bandwidth (at least 10 MHz) avalanche photodiode run in Geiger mode, or a high bandwidth, high sensitivity photomultiplier, to measure the light intensity. In this embodiment, the light is passed through a band-limited filter centered at a 0.5 μm wavelength before its intensity is recorded. The intensity time history data is communicated to a receiving station 109. The receiving station 109 collects the recorded intensity time history data to characterize, image, and track the asteroid 105.
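The relative-intensity comparison described above may be sketched as a simple dip detector. The baseline length, threshold, and synthetic trace below are hypothetical illustration values, not parameters of the invention:

```python
# Minimal sketch of occultation detection from an intensity time history:
# normalize by the pre-event baseline and flag samples that dip below a
# threshold. Baseline length, threshold, and trace are hypothetical.
def detect_occultation(intensities, baseline_samples=10, threshold=0.5):
    """Return (start_index, end_index) of the first dip below threshold, or None."""
    baseline = sum(intensities[:baseline_samples]) / baseline_samples
    normalized = [i / baseline for i in intensities]
    in_dip = [k for k, v in enumerate(normalized) if v < threshold]
    if not in_dip:
        return None
    return in_dip[0], in_dip[-1]

# Synthetic trace: steady star, deep dip, recovery.
trace = [1.0] * 20 + [0.1] * 5 + [1.0] * 20
print(detect_occultation(trace))  # (20, 24)
```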
A second embodiment of the present invention, shown in
While
As in
In one or more embodiments of the invention, the light collecting apertures may send a signal to the receiving station when a stellar occultation is detected prior to sending the intensity time history data. In an embodiment of the invention, the light collecting apertures collect intensity time history data but do not record it until an occultation event is detected. In one or more embodiments of the invention, the light collecting apertures record the intensity time history data at programmed intervals. In an embodiment of the invention, the programmed intervals coincide with anticipated stellar occultation events.
The data analysis module 400 may include a communications module 410, an internal memory 420, an imaging module 430, a size calculation module 440, a distance calculation module 450, a velocity calculation module 460, and a trajectory calculation module 470.
The communications module 410 may be configured to receive intensity time history data from the light collecting apertures. In an embodiment of the invention, the communications module 410 may be further configured to receive a signal from the light collecting aperture indicating that an occultation event has been detected. In one or more embodiments of the invention, the communications module 410 is further configured to send intensity time history data, object characteristics, or imaging data of an occulting near-Earth object. In an embodiment of the invention, the communications module 410 may send intensity time history data, object characteristics, or imaging data of an occulting near-Earth object to a database or other data repository. In an embodiment of the invention, the communications module 410 may be configured to send a signal to the light collecting apertures to begin collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send a signal to the light collecting apertures to end collecting intensity time history data. In an embodiment of the invention, the communications module 410 may be configured to send an instruction to the light collecting apertures including a schedule of occultation events indicating start and end times for collecting intensity time history data.
The internal memory 420 of the data analysis module 400 may be configured to store intensity time history data of an occultation event received by the communications module. The internal memory 420 may be further configured to store processing data that may need to be accessed by the individual calculation modules including the imaging module, size calculation module 440, distance calculation module 450, velocity calculation module 460, and trajectory calculation module 470. The size of the occulting near-Earth object may be determined by the size calculation module 440. The distance of the occulting near-Earth object may be determined by the distance calculation module 450. The velocity of the occulting near-Earth object may be determined by the velocity calculation module 460. In an embodiment of the invention, the diameter of the array projected onto the direction of motion of the shadow of the occulting near-Earth object factoring in a time delay is used to calculate the velocity of the object. The trajectory of the occulting near-Earth object may be determined by the trajectory calculation module 470. In one or more embodiments of the invention, the trajectory of the occulting near-Earth object may be used to determine future occultation events.
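The velocity estimate described above, from the projected array extent and an observed time delay, may be sketched as follows. The separation, angle, and delay values are hypothetical illustration values:

```python
# Hedged sketch of the velocity calculation described above: the shadow sweeps
# across the array, so the aperture separation projected onto the shadow's
# direction of motion, divided by the observed time delay between the two
# apertures' dips, gives the shadow speed. All numbers are hypothetical.
import math

def shadow_speed(separation_m, motion_angle_rad, delay_s):
    """Speed from aperture separation projected onto the direction of motion."""
    projected = separation_m * math.cos(motion_angle_rad)
    return projected / delay_s

# Hypothetical: apertures 2 km apart, shadow moving 30 deg off the baseline,
# dips observed 0.173 s apart.
print(shadow_speed(2_000.0, math.radians(30.0), 0.173))
```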
Returning to
Note that for a circular array, as in
Using just this data, a number of significant parameters can be estimated. The intensity distribution is heavily diffracted, but well-known diffraction algorithms allow us to estimate the size, which may be performed by the size calculation module 440, and the distance, which may be performed by the distance calculation module 450, of the near-Earth object from the extent of the relatively deep shadow and the width of the striations that immediately surround it.
In one or more embodiments of the invention the size, distance, velocity, and trajectory of the occulting near-Earth object may be analyzed to indicate whether the occulting near-Earth object poses an impact threat to Earth. In an embodiment of the invention, the size, distance, velocity, and trajectory of the occulting near-Earth object may be sent via the communications module to a central impact alert system.
The imaging module 430 may be configured to run an algorithm to create an image of the occulting near-Earth object using intensity time history data. In one or more embodiments of the invention, the imaging module 430 creates an actual silhouette of the near-Earth object in the form of a pixellated image. In an exemplary embodiment of the invention, this pixellated image has roughly 10 to 20 pixels on a side. This implies that the shadow pattern must have a spatial resolution corresponding to 10-20 measurements across the cross-track direction and time history measurements (representing intensity variation in the along-track direction) during time intervals of at most 0.1 to 0.05 of the duration of the passage of the near-Earth object across the array. In one or more embodiments of the invention, the imaging module 430 characterizes the accuracy of the image using a signal-to-noise ratio.
The imaging algorithm is detailed below. In an embodiment of the invention, the algorithm for creating a sharp silhouette image of the occulting near-Earth object includes producing a shadow function, applying a phase retrieval algorithm to determine the complex field amplitude at position vector x=(x,y) on the plane normal to the line-of-sight at the array location, and applying an inversion technique using a silhouette function.
Shadow Function
The shadow function, the shadow pattern intensity relative to that of the unobstructed star, is the square of the magnitude of the complex field amplitude, U(x), at position vector x=(x,y) on the plane normal to the line-of-sight at the array location. This may be represented as:
The complex field amplitude U(x,y) at coordinates (x,y) on the plane normal to the line-of-sight at the array location is:
where λ is the mid-band wavelength,
If the star is treated as a point source, the complex field amplitude at position vector x=(x,y) is given by:
The imaging module 430 may calculate U(x) at position vector x=(x,y) on the plane normal to the line-of-sight at the array location and use it to determine a silhouette function.
In an alternate embodiment of the invention, if the light source is treated as an extended incoherent source, then the total shadow pattern is given by the convolution integral
where B(λ,θ′) is the normalized spectral radiance of the light from the source transmitted by a suitable gray band-limited filter with band-pass width, Δλ, and center band wavelength
The integral over the wavelength tends to smooth the fluctuations in
reducing the contrast of the striations bordering the shadow interior. To maintain an acceptable level of contrast, the spectral resolution of the filter, Rj=
In this form, the imaging module 430 is able to calculate the silhouette function using a phase retrieval algorithm.
Phase Retrieval Algorithm
The phase retrieval problem occurs widely in astronomy, crystallography and electron microscopy. In these applications, the images (intensity distributions) being reconstructed are typically some object, say a planet or molecular structure, with a dark background, thus having finite support. The objective is to reconstruct the image, typically pixellated, when only the magnitude of its Fourier transform is known. This is possible when the dark background furnishes sufficiently many image-domain constraints, i.e. the restriction that the background pixels are black. For the most part, phase retrieval is formulated to address discretized data, and the object is to determine a pixellated image given the modulus of the (discrete) Fourier transform.
In one embodiment of the invention, the imaging module 430 calculates |U(x,y)|2, while U(x,y) is the Fourier transform of a known function,
multiplied by Γ(θx,θy), or the silhouette function. The imaging module 430 exploits some special features that differ from the typical phase retrieval situation. For example, there are more image domain constraints here than in the traditional problem: All background pixels in Γ(θx,θy) are uniformly unity, while pixels within the silhouette are zero. Since all pixels are constrained, this leads to faster, more reliable convergence and much greater tolerance of measurement noise in the modulus data. Once U(x,y) is determined, one can invert
to obtain:
Borrowing the notations of Equations
and
we are given the data: J(x,y)=|Ũ(x,y)|2 where Ũ(x,y) is the counterpart of U(x,y) in
In phase retrieval applications, Ũ(x,y) is the optical coherence, not the complex field amplitude. Also far-field radiation can be assumed, in which case the exponential,
in
is approximately unity. Then Ũ(x,y) is given by the Fourier transform relation:
where {hacek over (Γ)}(θx,θy), the counterpart of the term
is the image intensity distribution, not the silhouette function. In phase retrieval, {hacek over (Γ)}(θx,θy) is a function that may assume any real, non-negative values. With relation Ũ(x,y), the challenge is to compute {hacek over (Γ)}(θx,θy) given knowledge of the data, J(x,y)=|Ũ(x,y)|2.
Since the phase of Ũ(x,y) is not available, a solution cannot exist unless there are additional constraints on {hacek over (Γ)}(θx,θy). These usually take the form of constraints on the value of {hacek over (Γ)}(θx,θy) (usually that it vanishes) over some region in the (θx,θy) plane. Many phase retrieval methods proceed by imposing a sequence of projections designed to converge to satisfaction of both the image-domain constraints (zero values of {hacek over (Γ)}(θx,θy) in a specified region) and the Fourier-domain constraints (the specified values of J(x,y)=|Ũ(x,y)|2). This involves starting with an estimate of {hacek over (Γ)}(θx,θy), nulling its values in the specified region, taking its Fourier transform, correcting the modulus in conformity with J(x,y)=|Ũ(x,y)|2, taking the inverse transform to obtain the next estimate of {hacek over (Γ)}(θx,θy), and repeating the process until (or if) acceptable convergence is attained. The algorithm, however, is very susceptible to local minima and frequently stagnates before convergence.
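The projection sequence described above may be sketched as an error-reduction loop. In the illustrative sketch below (numpy assumed), the toy target, support region, and iteration count are hypothetical and do not represent the invention's actual data:

```python
# Illustrative error-reduction iteration of the projection sequence described
# above (numpy assumed; toy target, support, and iteration count are
# hypothetical). Image domain: pixels outside a known support are nulled and
# negative values clipped; Fourier domain: the measured modulus is imposed.
import numpy as np

def error_reduction(measured_modulus, support, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    estimate = rng.random(measured_modulus.shape) * support
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Fourier-domain projection: keep the phase, impose the known modulus.
        spectrum = measured_modulus * np.exp(1j * np.angle(spectrum))
        estimate = np.real(np.fft.ifft2(spectrum))
        # Image-domain projection: null the constraint region, keep non-negative.
        estimate = np.clip(estimate, 0.0, None) * support
    return estimate

target = np.zeros((32, 32)); target[12:20, 12:20] = 1.0    # toy object
support = np.zeros((32, 32)); support[10:22, 10:22] = 1.0  # known support
recovered = error_reduction(np.abs(np.fft.fft2(target)), support)
print(recovered.shape)
```

As the text notes, this plain alternating-projection form can stagnate in local minima, which motivates the relaxed constraint step described next.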
In another embodiment of the invention, the image domain constraint imposition is modified through known methods so that instead of rigidly setting the values of {hacek over (Γ)}(θx,θy) in the constraint region to zero, a step is taken towards imposing the constraint based on the previous iterate. This offers enormous improvement with very infrequent incidence of stagnation.
The two methods above assume that the image domain constraint region is known. This work suggests that a priori knowledge of the (θx,θy) plane region wherein {hacek over (Γ)}(θx,θy)=0 may not be necessary. Successful phase retrieval appears possible as long as it is known a priori that the constraint region encompasses at least half the total extent of the image (half the number of pixels in the discrete case).
If the light source is treated as an extended incoherent source, then the Fourier transform pair may be defined as:
Then, taking the Fourier transform of both sides of the above equation and employing the convolution theorem, results in:
Solving for |Ups(
where in most instances one may employ a Gaussian approximation for Fu[B(
The restriction on the spectral resolution, while preserving the contrast in the shadow pattern may sometimes unduly impair the signal-to-noise ratio. In an embodiment of the invention, a battery of narrow band filters with contiguous, non-overlapping pass bands that span a wide wavelength range is used as the light collecting apertures. Each filter band has an associated photodetector and a high spectral resolution. To be specific, suppose that there are NC such wavelength channels, all of equal width, ΔλC, spanning the visible band, ΔλV=λV2−λV1. The mid-band wavelength of channel k is:
λCk=λV1+(2k−1)ΔλC/2
for k=1, 2, . . . , NC. Then
can be written:
Each channel may be processed independently by the imaging module 430 in a manner analogous to
and then the results may be combined to
achieve much improved signal-to-noise ratio while preserving adequate spectral resolution and contrast.
It should be noted that even when all constraints are satisfied, there can be ambiguities in the solution. The trivial ambiguities are those transformations of the image that do not affect the Fourier transform modulus. These comprise translations of the image in the picture plane and 180° rotations. Nontrivial ambiguities arise when the image is itself the convolution of two or more other images. This relatively rare phenomenon gives rise to multiple possible solutions.
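The trivial ambiguities noted above can be verified numerically. The sketch below (numpy assumed, with a random real test image) confirms that a circular translation and a 180° rotation leave the Fourier transform modulus unchanged:

```python
# Numerical check of the trivial ambiguities: a circular translation and a
# 180-degree rotation of a (real) image leave the modulus of its Fourier
# transform unchanged (numpy assumed).
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((16, 16))

mod = np.abs(np.fft.fft2(img))
mod_shift = np.abs(np.fft.fft2(np.roll(img, (3, 5), axis=(0, 1))))
mod_rot = np.abs(np.fft.fft2(img[::-1, ::-1]))  # 180-degree rotation

print(np.allclose(mod, mod_shift), np.allclose(mod, mod_rot))  # True True
```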
Silhouette Function
Generally, the imaging module 430 scans the pixels in the silhouette plane using a simple raster scan or some other scanning algorithm and for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data to generate an image of the near-Earth object.
Silhouette function calculation can be seen to be very similar to phase retrieval: data on the modulus of a quantity, U(x,y), is known, which is related via Fourier transformation to a known function
multiplied by the function of interest, Γ(θx,θy). Provided that the detection array is appropriately sized, a region of sufficient extent wherein Γ(θx,θy) vanishes is known. Additionally the silhouette function only assumes the values zero or unity. The silhouette function may be defined by:
In one or more embodiments, the silhouette function is represented as a pixellated image in the (xa,ya) plane, where each matrix element is either zero or unity. In principle, the search space is finite, and there is a finite-step algorithm that may converge to a solution for which all constraints are satisfied. With respect to ambiguities, note that by the conditions of the problem, the silhouette is known to be surrounded by uniform intensity, so translations may be eliminated by shifting the silhouette pattern in the picture frame. There is also a one-to-one relation between the orientation of the shadow pattern and the silhouette so that rotational ambiguity is not possible. Further, nontrivial ambiguities are unlikely because the convolution of silhouette functions with other silhouette functions must have values greater than unity.
Let {hacek over (Γ)}εM×M and JεM×M be the matrices of the trial values of the image and of the specified Fourier transform modulus, and denote the discrete Fourier transform by F( . . . ). The error metric is defined by:
The imaging module 430 conducts a raster scan of the M×M image plane. At each pixel, the color (0 or 1) is changed and the error metric is computed. If the error is reduced from the previous step, the new color is retained; otherwise the color change is rescinded. The raster scan is performed iteratively until the image is sharp enough to meet a threshold or the actual image is perfectly reproduced, except for a translation in the image plane.
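The raster-scan color-flip search above may be sketched as follows (numpy assumed). The 8×8 toy silhouette, pass count, and squared-modulus error metric are hypothetical simplifications of the error metric defined above:

```python
# Illustrative raster-scan color-flip search (numpy assumed; toy silhouette,
# pass count, and squared-error metric are hypothetical simplifications).
import numpy as np

def flip_scan(modulus_data, shape, n_passes=4):
    est = np.ones(shape)  # background pixels are uniformly unity
    def err(g):
        return np.sum((modulus_data - np.abs(np.fft.fft2(g))) ** 2)
    best = err(est)
    for _ in range(n_passes):
        for i in range(shape[0]):
            for j in range(shape[1]):
                est[i, j] = 1.0 - est[i, j]      # change the pixel's color
                trial = err(est)
                if trial < best:
                    best = trial                  # keep the change
                else:
                    est[i, j] = 1.0 - est[i, j]  # rescind it
    return est

truth = np.ones((8, 8)); truth[3:6, 2:5] = 0.0   # opaque 3x3 silhouette
recovered = flip_scan(np.abs(np.fft.fft2(truth)), truth.shape)
print(int((recovered == 0.0).sum()))
```

Because a flip is kept only when the error metric decreases, the error is non-increasing across the scan, mirroring the retain/rescind rule described above.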
From
the shadow intensity distribution is:
where
In other words, this is the reverse color version of the sharp shadow. The introduction of
can be carried out at once to yield:
Ψ(x,y)=|U|2
U=i−F∫−∞∞dφx∫−∞∞dφy
where the previous estimate of the near-Earth object radius, a, is used to define non-dimensional variables:
and where F is the corresponding Fresnel number.
With the above expressions, relating
Ψ(x,y)=|U|2
U=i−F∫−∞∞dφx∫−∞∞dφy
assuming
where: E(ξ)=C(ξ)+iS(ξ) and F′ is the corresponding Fresnel number. This calculation is not repeated for every coarse resolution pixel for which
In an embodiment of the invention, the imaging module 430 may use this method of mapping
and retain the value change if there is improvement, or rescind the change if not.
However, in another embodiment of the invention, the imaging module uses a procedure that avoids any errors in the construction of the shadow pattern by directly comparing results with the raw data. To do this we take one further processing step: For given estimate of
Alignment Method
In another embodiment of the invention, the time history data may be aligned by using the travel velocity to transform times to distances normal to the direction of travel so that phase retrieval may be applied directly without a complete reconstruction of the shadow function. This is the shadow pattern, i.e. the normalized intensity distribution over the plane parallel to the plane of the array that moves with the shadow. Using just this data, a number of significant parameters may be estimated. The intensity distribution is heavily diffracted, but the relations and some well-known diffraction results allow the data analysis module to estimate the size and distance of the near-Earth object from the extent of a relatively deep shadow and the width of the striations that immediately surround it.
First, let us consider the information on the near-Earth object distance implied by the shadow function. Referring to
recall that (φx,φy) are coordinates on a plane very close to the occulting object divided by
θx=xa/
In terms of
may be written:
While the field amplitude depends on the geometry of the silhouette function, the only other parameter is λ
where C( . . . ) and S( . . . ) are the Fresnel integrals. Equations
and
make it clear that the distance to the near-Earth object can be estimated from the width of the striations.
In an exemplary embodiment of the invention, the width of the first lobe in
that is above unit intensity is, in terms of the non-dimensional argument, 1.275. The width of the portion of the striations in an exemplary embodiment of the invention is approximately 130 m. Setting
gives:
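As a numerical check of this estimate, the sketch below assumes the standard knife-edge Fresnel variable x·(2/(λz))1/2, in which the first lobe above unit intensity spans 1.275, so that z=2x2/(1.2752λ):

```python
# Check of the striation-width distance estimate, assuming the standard
# knife-edge Fresnel variable x * sqrt(2 / (lambda * z)); the first lobe
# above unit intensity spans 1.275, so z = 2 x^2 / (1.275^2 * lambda).
AU_M = 1.496e11
WAVELENGTH_M = 0.5e-6  # mid-band wavelength

def distance_from_striation(width_m, lobe_width=1.275, wavelength_m=WAVELENGTH_M):
    return 2.0 * width_m ** 2 / (lobe_width ** 2 * wavelength_m)

z_m = distance_from_striation(130.0)  # 130 m striation width from the text
print(z_m / AU_M)  # roughly 0.28 AU
```

The result is consistent with the 0.28 AU distance used in the exemplary embodiment below.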
In an embodiment of the invention, the image may show an intensity hole. In an exemplary embodiment of the invention, the hole diameter is roughly 106 pixels, or approximately 2120 m on average. The imaging module 430 then applies:
where a denotes the average radius of the near-Earth object. Setting W=2120 m and solving for a yields a=20.1 m. Using these values and
In an embodiment of the invention, the imaging module 430 uses a procedure that avoids errors in the construction of the shadow pattern by directly comparing results with the raw data. To do this, the imaging module takes one further processing step: given an estimate of
where ⊙ denotes the Hadamard product. The summations are the squares of vector 2-norms induced in the space of Na×MS matrices. Given the above equations, Cor(
In summary, using the Alignment Method, data analysis module may use the aligned data and the shadow function alone to provide estimates of the apparent velocity of the near-Earth object, its approximate size and distance and Fresnel number of the shadow.
Photon Arrival Rates
In an exemplary embodiment of the invention, each light collecting aperture will measure intensity in the visible to near-infrared range. Even with very narrow band filtering of collected light, the response time of even fast avalanche photodiode detectors is much larger than the correlation time of the light. With this "slow detector" condition, the arrival time statistics of photon detections are fully characterized by the average number of photon arrivals per unit time. Further, the standard deviation of the number of photons entering a detector over a given time period is equal to the square root of the product of the average arrival rate and the time interval. The average number of photon detections divided by the standard deviation is the signal-to-noise ratio of the measured average intensity over the measurement interval.
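These Poisson statistics may be illustrated numerically. In the sketch below (numpy assumed), the detection rate and measurement interval are hypothetical:

```python
# Numerical illustration of the slow-detector Poisson statistics described
# above (numpy assumed; the detection rate and interval are hypothetical).
import numpy as np

rate_per_s = 1.0e6       # assumed average photon detection rate
interval_s = 1.0e-4      # measurement interval
mean_counts = rate_per_s * interval_s  # expected counts per interval (100)

rng = np.random.default_rng(0)
samples = rng.poisson(mean_counts, size=100_000)

# Standard deviation ~ sqrt(mean); SNR of the average intensity ~ sqrt(mean).
print(samples.mean(), samples.std(), samples.mean() / samples.std())
```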
As the basis for our estimate of average photon detection rate, the Planck radiation law for the spectral radiance per unit wavelength is used:
where λ is the wavelength, and T* is the photospheric temperature of the occulted star. Integrating over the appropriate solid angle, the radiance from a unit area of surface is πBλ(T*) (W·m−3). Then, dividing by hν, where ν=c/λ, the number of photons emitted per unit surface, per unit wavelength, per second is:
If nλ(T*) denotes the number of photons received per second, per square meter per unit wavelength then, from
we have:
where R* and d* are the stellar radius and distance, respectively.
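A hedged numerical sketch of this photon-rate expression follows. The physical constants are standard, and the solar check values (T*, R*, 1 AU) are for illustration only; the normalization follows the πBλ/hν reading above:

```python
# Hedged numerical sketch of the photon rate n_lambda(T*): Planck spectral
# radiance times pi, divided by the photon energy h*nu, scaled by (R*/d*)^2.
# Constants are standard; the solar check values are illustrative only.
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def photon_rate_per_m(wavelength_m, t_star_k, r_star_m, d_star_m):
    """Received photons per second, per m^2, per unit wavelength (m^-1)."""
    occupancy = 1.0 / (math.exp(H * C / (wavelength_m * KB * t_star_k)) - 1.0)
    emitted = 2.0 * math.pi * C / wavelength_m ** 4 * occupancy
    return emitted * (r_star_m / d_star_m) ** 2

# Sun seen from 1 AU at 0.5 um: a few 10^27 photons / s / m^2 / m.
print(photon_rate_per_m(0.5e-6, 5772.0, 6.96e8, 1.496e11))
```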
In one or more embodiments of the invention, the light is passed through the jth gray band-limited filter discussed above, which admits light in the wavelength range λε[λCj−1,λCj]. Then, nΔλ
where j=1, . . . , NC.
It is convenient to express this in terms of the stellar apparent magnitude, m*. In terms of solar parameters, this is given by:
m⊙=Solar magnitude=−26.73
d⊙=Solar distance=1.58×10−5 lyr
L⊙=Solar luminosity=3.846×1026 W
where, using Bλ(T*), the stellar luminosity is:
Applying
may be put in the form:
Substituting this into m* and solving for nΔλ
where:
and where L⊙, d⊙, and m⊙ are the solar luminosity, distance and apparent magnitude, respectively.
In one or more embodiments of the invention, the photon arrival processes within the various non-overlapping wavelength channels are mutually statistically independent Poisson processes. Thus the average number of photons, denoted by nΔλ received per second, per square meter in the entire wavelength band, λε[λV1, λV2], spanned by the NC channels is, by virtue of
where:
and where:
Note that intensity measurements determine only the magnitude, not the phase, of U(x,y). However, given that U(x,y) is essentially the Fourier transform of Γ(θx,θy), and given the constraints on the form of Γ(θx,θy), the phase can be determined by phase retrieval algorithms.
Moreover, the second central moment of the number of photon arrivals received per second, per square meter in the entire wavelength band is simply nΔλ. In an exemplary embodiment of the invention, the average arrival rate is a weak function of the temperature. In an embodiment of the invention, nΔλ increases slightly for the cooler stars, reflecting the fact that for a given radiance, the number of photons increases with wavelength. Hotter stars are relatively rare, while nearly 90% of stars are M-class. In an exemplary embodiment of the invention, when these results are averaged over the numerical distribution of spectral classes, the result is close to the case T*=3000 K, which is near the high end of temperatures for M stars. Thus, in this exemplary embodiment, it is reasonable to approximate all stars with the result for T*=3000 K and
Signal-to-Noise Ratio
To characterize the accuracy with which the silhouette may be determined the imaging module 430, in one or more embodiments of the invention, determines a signal-to-noise ratio that is reasonably convenient to calculate, yet is consistent with the phase retrieval algorithm, and relates to those critical factors in determining the accuracy of the silhouette function. In an embodiment of the invention, the imaging module 430 characterizes the error in the image of the near-Earth object as a function of the levels of noise in the intensity measurements.
In an alternate embodiment of the invention, the imaging module 430 uses intensity measurement statistics as a measure of silhouette estimation accuracy. As described above, the silhouette reconstruction method scans the pixels in the silhouette plane by some algorithm and, for each pixel, changes the value (from unity to zero or vice versa), computes the modified shadow pattern, and then determines the cross correlation with the shadow pattern data. If the cross-correlation increases, the value change is accepted; otherwise it is rescinded. This is a convex optimization problem, and as the measurement noise approaches zero, the silhouette errors also vanish. Therefore, the imaging module 430 may, in an embodiment of the invention, use the measurement noise statistics to define a signal-to-noise ratio, taking care that the "signal" part of the signal-to-noise ratio encompasses only that fraction of the intensity measurements that conveys the most pertinent information on the silhouette function.
In this embodiment of the invention, the imaging module 430 characterizes the silhouette accuracy via the accuracy of the shadow pattern measurements relevant to silhouette information. In an exemplary embodiment of the invention, the imaging module 430 determines that the final pixellated image of the asteroid will have Npix=20 pixels on a side. In this embodiment of the invention, the number of apertures distributed across the cross-track direction must be approximately Npix. The end-to-end extent of the array must be at least as large as the width of the shadow region, W, pertaining to the mid-band wavelength,
a=asteroid radius
The array need not be much larger than this because the Fresnel integral fluctuations far from this shadow region contain little information about the silhouette.
In an exemplary embodiment of the invention, at 0.28 AU a 40 m diameter asteroid has a shadow width of over 2 km. A typical value for the magnitude of the asteroid velocity perpendicular to the line of sight, V, is approximately 10 km/s, so the entire duration of the shadow passage is roughly W/V≈0.2 s. Further, for each pixel, the intensity measurement is the total number of photodetection events recorded by the photodetector in the time period Δy/V, where Δy=W/Npix is the extent of a pixel in the observation plane.
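The sampling geometry described above works out numerically as follows; this is a minimal sketch using the representative values from the text (W ≈ 2 km, V ≈ 10 km/s, Npix = 20), with variable names chosen for illustration.

```python
# Worked example of the occultation sampling geometry described in the text.
W = 2000.0      # shadow width in meters (~2 km for a 40 m asteroid at 0.28 AU)
V = 10_000.0    # cross-track shadow velocity in m/s
N_pix = 20      # pixels per side of the final silhouette image

dy = W / N_pix          # cross-track extent of one pixel in the observation plane
passage = W / V         # total duration of the shadow passage
dwell = dy / V          # integration time contributing to one pixel measurement

print(dy, passage, dwell)   # → 100.0 0.2 0.01
```

Each pixel measurement thus accumulates photodetection events over roughly 10 ms, one-twentieth of the total 0.2 s shadow passage.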
In this embodiment of the invention, each aperture is equipped with a bank of NC non-overlapping narrow band filters with associated photodetectors. Therefore,
This relation states that the shadow pattern produced by light centered at wavelength
In one or more embodiments of the invention, for each pixel, the data alignment process merely adds up all the photodetector measurements that contribute to the estimated intensity associated with that pixel. The pixel measurement is a sum of independent Poisson processes corresponding to all the light-collecting apertures and all the narrow-band filter channels in each light-collecting aperture whose along-axis tracks pass through the pixel during occultation. The main factor contributing to both the signal and the noise in the signal-to-noise ratio is the average of this quantity, evaluated for the un-occulted star. This is a function of the cross-track coordinate only. Using the above equations, we have:
This function has a continuous limit as NC→∞ and Δy→0, and is well approximated by that limit. In an embodiment of the invention, ρ(ypix) is a function of ypix divided by the aperture-array width, for NC=Npix=200 and for various values of the reciprocal of the spectral resolution,
In this embodiment of the invention, ρ(ypix) is somewhat above unity for a region within the aperture array, but then decreases abruptly. Thus it is desirable to make the cross-track extent of the aperture array somewhat larger than W by the factor
Under this condition, a conservative estimate of ρ(ypix) is unity.
The true “signal” is not the quantity in the above expression for the un-occulted average photodetections.
In this embodiment of the invention, the contrast arising from the fluctuations in the intensity is what contributes information about the silhouette. The estimated level of contrast may be expressed as a fraction, κ, of the un-occulted average photodetections given by the above equation. To estimate κ, the imaging module 430 considers that for the range of Fresnel numbers above ˜0.1, the principal fluctuations take the form of nested striations enclosing the deep shadow region of the diffracted silhouette. The extent of the dark spot enclosed by the striations indicates the overall size of the silhouette. The fluctuation of the striations themselves, combined with the estimated size, indicates the distance and Fresnel number. Generally, if the system has sufficient signal-to-noise ratio to accurately model the striations, that will suffice to determine the silhouette accurately. These striations approximate the knife-edge shadow pattern for a point source, given by:
I(ξ)=(1/2)[(C(ξ)+1/2)^2+(S(ξ)+1/2)^2],
where ξ is the distance along the axis perpendicular to the striation, and C(·) and S(·) are the Fresnel integrals.
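The knife-edge pattern referenced above can be evaluated numerically. The sketch below computes the Fresnel integrals C and S by direct Simpson quadrature and forms the standard point-source knife-edge intensity; function names are illustrative, not taken from the patent.

```python
import math

def fresnel_cs(v, n=2000):
    # Fresnel integrals C(v) and S(v) by composite Simpson quadrature:
    # C(v) = integral_0^v cos(pi t^2 / 2) dt, S(v) likewise with sin.
    sign = 1.0 if v >= 0 else -1.0
    v = abs(v)
    h = v / n
    c = s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)   # Simpson weights
        phase = math.pi * t * t / 2
        c += w * math.cos(phase)
        s += w * math.sin(phase)
    return sign * c * h / 3, sign * s * h / 3

def knife_edge_intensity(xi):
    # Relative intensity of the point-source knife-edge diffraction pattern
    # at Fresnel-scaled coordinate xi (xi < 0 lies in the geometric shadow).
    C, S = fresnel_cs(xi)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

print(round(knife_edge_intensity(0.0), 3))   # → 0.25 at the geometric edge
```

The result shows the familiar behavior: one quarter of the unobstructed intensity exactly at the geometric shadow edge, a deep monotonic falloff into the shadow, and oscillatory striations about unity on the illuminated side.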
In summary, using the above estimate of nΔλ, and the above estimates of ρ and κ, the imaging module 430 may calculate:
Average contrast measurement=ηκATN0(Δy/V), where N0=1.46×10^10.
In an embodiment of the invention, the imaging module 430 considers the noise statistics. The variance of the noise due to photon arrival statistics is ηATnΔλρ(ypix)Δy/V. To this must be added the photodetector dark count, which is denoted nDC and measured as an average number of photons per second. The noise variance due to the dark count for each pixel measurement, over all wavelength channels, is nDCNC(Δy/V)ρ(ypix). The noise due to the star and the dark count are independent, so:
The signal-to-noise ratio (SNR) of each pixel measurement of the shadow pattern is κ times the expression for the average number of un-occulted photodetections for a pixel centered at ypix, divided by the square root of the total noise variance.
Therefore the signal-to-noise ratio of each pixel calculated by the imaging module 430 is:
In an exemplary embodiment of the invention, at 0.3 AU distance, 0.4 m telescope diameter, quantum efficiency of 0.5, ten wavelength channels, nDC=4 phot/s, and a 12th magnitude star, the signal-to-noise ratio is approximately 9.1 for a 20×20 pixel silhouette image.
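The combination of signal and noise terms described above can be sketched as a small function. The parameter values in the usage line are illustrative only, not the patent's worked 9.1-SNR case, and the argument names are hypothetical.

```python
import math

def pixel_snr(kappa, n_star, n_dc, n_channels, dwell, rho=1.0):
    # n_star: average un-occulted photodetections for the pixel, i.e.
    # eta*AT*n_dlambda*rho*(dy/V) in the text's notation. Its shot-noise
    # variance equals n_star for a Poisson process; the dark-count variance
    # is n_dc*NC*(dy/V)*rho; the two add because star light and dark
    # counts are independent.
    variance = n_star + n_dc * n_channels * dwell * rho
    return kappa * n_star / math.sqrt(variance)

# Illustrative (hypothetical) values: 2% contrast on 1e4 detected photons,
# ten wavelength channels, 4 dark counts/s, 10 ms dwell per pixel.
print(round(pixel_snr(kappa=0.02, n_star=1.0e4, n_dc=4.0,
                      n_channels=10, dwell=0.01), 2))   # → 2.0
```

In the photon-limited regime (negligible dark count) the expression reduces to κ times the square root of the un-occulted photodetection count, which is why increasing the telescope aperture and the number of contributing channels directly improves the silhouette accuracy.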
While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Further embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims.
The present application claims the benefit of PCT Application No. PCT/US2015/044197, filed on Aug. 7, 2015, and provisional application No. 62/034,517, filed Aug. 7, 2014, both of which are incorporated herein by reference.