The present invention relates to a method of using an image sensor onboard a satellite or an aircraft, as well as to an image sensor and an image capture device adapted to implement such a method.
An increasing number of Earth observation or reconnaissance missions require images with very high resolution. These may include observation missions carried out from a satellite, which may be a low-orbit satellite, a geostationary satellite or a satellite on an intermediate circular or elliptical orbit. For example, a resolution finer than 1 meter may be required for images captured from a low-altitude satellite, and finer than 50 meters for images captured from a geostationary satellite. Under such imaging conditions, however, the resolution obtained is limited by the variations in the line of sight of the imaging system that occur during the exposure period used to capture each observation image. Such unintentional line-of-sight variations may have multiple causes, including vibrations generated by moving parts onboard the satellite, in particular its attitude control systems. These vibrations in turn generate high-frequency distortions of the imaging system, and these distortions further contribute to the line-of-sight variations. Professionals refer to these unintentional line-of-sight variations during the exposure of each image capture as “image jitter.”
Various methods have been proposed to characterize or measure image jitter. Most of them are based on high-frequency capture of data that characterize the line-of-sight variations during each exposure. To that end, a metrology device is added to the imaging system to act as an inertial or pseudo-inertial reference. However, devices based on gyroscopes or accelerometers cannot sense vibrations at frequencies as high as those that occur onboard a satellite, nor can they sense the contribution of the distortions of the imaging system itself to the line-of-sight variations.
Laser-based metrology devices have also been proposed as a pseudo-inertial reference. Images of a reference laser beam are captured and processed at high speed in order to characterize the vibrations and distortions of the imaging system during each observation image exposure. However, adding such a laser device to the imaging system makes the design and realization of the system more complex. Its cost is therefore increased, as is its weight, which is a significant handicap when the imaging system is intended to be carried onboard a satellite, particularly with respect to the cost of launching the satellite.
It has also been proposed to capture and process at high speed, during the exposure of each observation image, images of stars, which are used as fixed landmarks on account of their distance. A secondary image sensor is then dedicated to the capture of these images, independently of the main image sensor dedicated to the capture of the observation images. However, the general structure of the imaging system becomes even more complex when it combines these two systems, and the cost of the whole system is further increased.
It has notably been proposed to use image sensors dedicated to the detection of image jitter that are separate from the system dedicated to observation. Such image jitter sensors are designed to sense line-of-sight variations at a high frequency. However, they are still additional sensors that increase the total cost of the imaging system. Moreover, their performance can hardly be guaranteed, because it depends on the texture of each area that is imaged on these jitter sensors.
One of the objects of the present invention is therefore to provide a method for characterizing the image jitter that occurs during the capture of an observation image, in which the aforementioned drawbacks are limited or absent.
Specifically, a first object of the invention is to characterize the image jitter including its components at high frequencies.
A second object of the invention is to characterize the image jitter including the contributions that result from distortions of the observation imaging system.
A third object of the invention is to characterize the image jitter without significantly increasing the total weight and cost of the systems onboard the satellite or the aircraft, and without making the imaging system more complex.
A fourth object of the invention is to keep the entire photo-sensitive surface of the image sensor available for the function of capturing observation images.
Finally, a fifth object of the invention is to provide a characterization of the image jitter in the highest possible number of circumstances, especially even when some areas of the image that is formed on the photo-sensitive surface of the sensor exhibit a very low contrast.
In order to achieve these objects and others, the invention proposes a new method of using an image sensor onboard a satellite or an aircraft. The image sensor comprises a matrix of photodetectors that are arranged along lines and columns of this matrix and it further comprises a plurality of line decoders and a plurality of column decoders, an addressing circuit and a sequencer that is coupled to the matrix of photodetectors through the addressing circuit. In this way, an individual operation of each photodetector can be controlled according to accumulation, reading and reset steps.
According to a first characteristic of the invention, the method comprises a first image capture sequence, which is performed using the photodetectors of a first selection within the matrix, and which is repeated at a first frequency to capture a first series of images at this first frequency. This first image capture sequence comprises an accumulation step, a reading step and a reset step for each photodetector of the first selection. The first selection of photodetectors may correspond to all the photodetectors of the matrix.
According to a second characteristic of the invention, the method further comprises a second image capture sequence, which is performed using a second selection of photodetectors also within the matrix, and which is repeated at a second frequency to capture a second series of images at this second frequency. The second frequency is higher than the first frequency and the first selection comprises more photodetectors than the second selection, with a plurality of photodetectors that are common to both selections.
According to a third characteristic of the invention, the second image capture sequence does not comprise a reset step for the photodetectors that are common to both selections. In this way, an accumulation step for the photodetectors common to the first and second selections, which is in progress just before a reading step performed according to the second image capture sequence, continues just after that reading step.
Finally, according to a fourth characteristic of the invention, a plurality of images of the second series are captured with the photodetectors of the second selection while only one image of the first series is captured with the photodetectors of the first selection.
In this way, the invention proposes capturing images according to two overlapping sequences, using different selections of photodetectors. The first sequence, with the lower image capture frequency, is intended to provide the observation images, while the second, with the higher frequency, is dedicated to the characterization of the line-of-sight variations, that is, of the image jitter of the observation images.
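Purely as an illustration of how these two overlapping sequences relate in time, the following minimal sketch simulates a single photodetector that is common to both selections. All frequencies, flux values and names are hypothetical assumptions chosen to make the timing visible; they are not taken from the description.

```python
# Minimal, purely illustrative sketch of the two overlapping sequences for one
# photodetector common to the first and second selections. All values below
# (frequencies, photon flux, time step) are arbitrary assumptions.

F1 = 10.0      # first frequency: observation image captures per second (assumed)
F2 = 100.0     # second frequency: jitter image captures per second (assumed)
FLUX = 1000.0  # photo-generated signal, in electrons per second (assumed)

def simulate(duration_s: float, dt: float = 1e-3):
    charge = 0.0
    t = 0.0
    next_read_1 = 1.0 / F1
    next_read_2 = 1.0 / F2
    first_series, second_series = [], []
    while t < duration_s:
        charge += FLUX * dt                 # accumulation step runs continuously
        t += dt
        if t >= next_read_2:
            second_series.append(charge)    # reading step of the second sequence, no reset
            next_read_2 += 1.0 / F2
        if t >= next_read_1:
            first_series.append(charge)     # reading step of the first sequence
            charge = 0.0                    # reset step: only the first sequence has one
            next_read_1 += 1.0 / F1
    return first_series, second_series

obs_reads, jitter_reads = simulate(0.3)
# Several second-series readings are obtained for each first-series reading,
# and the accumulation of the observation image is never interrupted.
print(len(obs_reads), len(jitter_reads))    # roughly 3 and 30 with these assumptions
```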
The same image formation optic can easily be used for the images of the first series and for those of the second series, in particular because the same matrix of photodetectors is used for the two series. For this reason, the weight onboard the satellite or aircraft from which the observation images are captured is not increased. Furthermore, the design of the image formation optic does not need to be modified specially to allow characterizing the image jitter, so that neither the satellite launch cost nor the cost of the imaging system is significantly increased.
Moreover, because the images dedicated to the image jitter characterization and the observation images can be produced by the same optic and are captured by the same matrix of photodetectors, the image jitter that is detected comprises all the contributions: not only those whose causes are external to the imaging system, but also the contributions of the imaging system's own distortions.
Furthermore, the second image capture frequency is limited only by the maximum frequency at which the photodetectors of the second selection can be read without being reset. This second frequency can therefore be high, especially if the number of photodetectors in the second selection is not too large. For this reason, the method according to the invention allows sensing line-of-sight variations at high frequencies, using the images of the second series.
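As an order-of-magnitude illustration only, the achievable frequency scales inversely with the number of photodetectors to be read; every figure below is an assumption, not a value given in the description.

```python
# Order-of-magnitude illustration only; every number here is an assumption.
READOUT_RATE = 20e6            # pixels per second the readout chain can deliver (assumed)
FIRST_SELECTION = 4000 * 4000  # photodetectors of the main (observation) window (assumed)
SECOND_SELECTION = 100 * 100   # photodetectors of the secondary windows (assumed)

f1_max = READOUT_RATE / FIRST_SELECTION   # about 1.25 Hz for full observation images
f2_max = READOUT_RATE / SECOND_SELECTION  # about 2000 Hz for the small jitter windows
print(f1_max, f2_max)
```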
Particularly, the method according to the invention can be used to sense variations in the line-of-sight of the imaging system that comprises the image sensor. These variations are detected using a comparison of pattern positions within the images that are successively captured according to the second image capture sequence using the photodetectors from the second selection. High-frequency components of these line-of-sight variations can thus be detected. Variations in the line-of-sight can then be compensated for within the image-capturing instrument, especially by prompting appropriate movements of certain optical components, preferably in an analog manner.
According to a first possible use of a method according to the invention, the line-of-sight variations that are detected may be used to control a system for compensating for these line-of-sight variations. These line-of-sight variations may, preferably, be compensated for by moving at least one optical component of the imaging system that comprises the image sensor.
According to a second possible use of the invention onboard a satellite or an aircraft, the line-of-sight variations that are detected may be used to control an attitude control system of the satellite or aircraft.
The invention also proposes an image sensor suitable for being arranged onboard a satellite or an aircraft. This image sensor comprises the matrix of photodetectors, the line and column decoders of this matrix, the addressing circuit and the sequencer, the latter being adapted to control the first and second image capture sequences according to the method described above.
The sequencer may further be adapted to ensure that the second selection of photodetectors be comprised in the first selection, and/or that the photodetectors of the second selection be adjacent within at least one window in the matrix.
Finally, the invention proposes an image capture device that comprises such an image sensor and a module for detecting line-of-sight variations. In this device, the detection module is adapted to compare pattern positions within the images that are successively captured according to the second image capture sequence using the photodetectors of the second selection, and to detect these variations from a result of the comparison.
Other specificities and advantages of the present invention shall be revealed in the following descriptions of several non-limiting implementation examples, with reference to the appended drawings, in which:
Figure is a schematic representation of a structure of an image capture device, adapted to implement the invention;
Figure shows an example of a distribution of windows adapted for the invention inside a matrix of photodetectors;
a and 5a are two time-diagrams that show respectively two variants of a sequential image capture mode known from prior art; and
b and 5b correspond respectively to
As
The invention consists in a new use of the matrix of photodetectors of the image sensor 1, which allows detecting the variations of the line of sight D without it being necessary to add one or more additional sensors acting as an inertial or pseudo-inertial reference.
The invention is described in the context of the image capture mode that uses a matrix sensor of the so-called “starer” type, in which the image is held fixed on the sensor during the image capture period.
The matrix of photodetectors of the image sensor 1 comprises a plurality of adjacent lines and columns of photodetectors, for example several thousand photodetectors along each of the line and column directions. A main window is fixed within this matrix to capture the observation images. This main window may correspond to the entire matrix of photodetectors, but not necessarily. It constitutes the first selection of photodetectors inside the image sensor matrix, which was introduced earlier in the general description section.
According to the invention, at least one, and preferably a plurality of, secondary windows are also defined within the photodetector matrix. Each secondary window has a number of photodetectors that is less than, or much less than, that of the main window. Together, the secondary windows form the second selection of photodetectors within the matrix of the sensor 1.
It is not necessary that all the secondary windows be within the main window, but each of them shares common photodetectors with the main window. It can be considered that the secondary windows are limited to the shared photodetectors, so that the secondary windows may appear to be contained within the main window. In this way particularly, the second selection of photodetectors may be comprised in the first selection.
Preferably, each main or secondary window contains all the neighboring photodetectors in the matrix of the image sensor 1 that are inside a peripheral limit of this window. Specifically, the photodetectors of the second selection may thus be adjacent within the secondary window(s). Typically, each secondary window may contain a hundred times fewer photodetectors than the main window.
The operation of each photodetector therefore varies depending on whether this photodetector belongs to a secondary window or is situated in the main window outside the secondary windows.
The photodetectors of the main window that are outside the secondary windows are used in the usual manner, following consecutive accumulation (also called integration), reading and reset steps. This sequence of steps has been called the first sequence in the general section of this description. The observation images are therefore captured outside the secondary windows at a first frequency, at which this first sequence is repeated.
The photodetectors of the secondary windows are used according to a dual operating scheme.
On the one hand, they are used in accordance with the first image capture sequence, in a way that is identical to the main window photodetectors that are situated outside the secondary windows. The first image capture sequence, which produces observation images, is therefore performed and repeated at the first frequency for all the photodetectors of the main window. In this way, the observation images are complete within the entire main window. They are called first series of images, and they can be captured using one of the known modes of control of an image sensor matrix, especially the “snapshot mode,” the “rolling mode” or the “progressive scan mode”.
On the other hand, the photodetectors of the secondary windows are used in accordance with a second image capture sequence, which is repeated at a second frequency, higher than the first frequency.
The second image capture sequence for each photodetector of the secondary windows is performed at the same time as the first sequence, during the accumulation periods of that first sequence. It comprises a reading step of the photodetector, in order to capture the accumulation level reached at the time of this reading. However, so that the image capture according to the first sequence is not disturbed by that of the second sequence, the second sequence must not comprise a reset step for the photodetectors. Thanks to this absence of a reset step, the signal-to-noise ratio of the observation image data read according to the first image capture sequence is not degraded in the secondary windows with respect to its value outside these windows. Thus, a plurality of reading steps are performed successively for each photodetector of the secondary windows, according to the second image capture sequence, during one accumulation step performed according to the first capture sequence. This accumulation step is then followed by the reading step with reset of the first image capture sequence. In this way, in addition to their use for capturing the complete observation image, the photodetectors of each secondary window, that is the photodetectors of the second selection, simultaneously provide secondary images at the second frequency, which are called the second series of images.
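To illustrate the point about the signal-to-noise ratio, the short sketch below compares, for one photodetector of a secondary window, what the final reading of the first sequence would deliver with non-destructive intermediate readings, as described above, and with hypothetical destructive ones. The flux, accumulation period and number of readings are arbitrary assumptions.

```python
# Illustrative check, with arbitrary assumed values, of why the second sequence
# must not reset the photodetectors: without a reset, the final reading of the
# first sequence still delivers the full accumulated signal of the observation image.

FLUX = 1000.0   # photo-generated signal, electrons per second (assumed)
T_ACC = 0.1     # accumulation period of the first sequence, in seconds (assumed)
N_READS = 10    # intermediate readings of the second sequence per accumulation (assumed)

def final_first_sequence_read(destructive_intermediate_reads: bool) -> float:
    charge = 0.0
    dt = T_ACC / (N_READS + 1)          # accumulation slice between successive readings
    for _ in range(N_READS):
        charge += FLUX * dt             # accumulation continues between readings
        _ = charge                      # reading step of the second sequence
        if destructive_intermediate_reads:
            charge = 0.0                # what a reset would do; it is omitted here
    charge += FLUX * dt                 # accumulation up to the end of the period
    return charge                       # reading step, with reset, of the first sequence

print(final_first_sequence_read(False))  # ~100.0: full signal, observation image preserved
print(final_first_sequence_read(True))   # ~9.1: most of the signal would be lost
```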
It is therefore possible to detect variations in the line of sight D during each accumulation performed to capture an observation image, by comparing the positions of at least one pattern inside the images that are successively captured according to the second sequence, in at least some of the secondary windows. Optionally, an analysis of the image texture may also be performed, especially in order to select the pattern, in addition to the use of the pattern itself. Advantageously, the characteristics of the pattern or of the texture may be determined a priori in an Earth-based station before capturing an image, by processing images that were captured beforehand, especially with the same device. Such an application may be of interest for observing one and the same zone at different times, or for seeking the possible presence of moving elements inside a monitored area, for example. It is well known that pattern, image texture and contrast are distinct characteristics of an image.
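The description does not prescribe a particular comparison algorithm. As one possible sketch, under the assumption that plain phase correlation is acceptable for the secondary-window images, the translation between two successive images of the second series could be estimated as follows; the function and variable names are illustrative only.

```python
import numpy as np

def estimate_shift(img_a: np.ndarray, img_b: np.ndarray):
    """Estimate the integer-pixel translation (dy, dx) such that img_b is
    approximately img_a shifted by (dy, dx), using the peak of the phase
    correlation surface of the two secondary-window images."""
    spectrum = np.fft.fft2(img_b) * np.conj(np.fft.fft2(img_a))
    spectrum /= np.maximum(np.abs(spectrum), 1e-12)   # keep phase information only
    correlation = np.fft.ifft2(spectrum).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Convert circular peak coordinates to signed shifts
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)

# Hypothetical use on a list `second_series` of successive secondary-window images:
# the successive shifts sample the motion of the line of sight D at the second
# frequency, during a single observation image accumulation.
# shifts = [estimate_shift(second_series[i], second_series[i + 1])
#           for i in range(len(second_series) - 1)]
```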
To that end, the image capture device further comprises an image processing unit 20, which itself comprises a module 21 for the selection of windows and a module 22 for the detection of variations in the line-of-sight D, marked D-DETECTION. Several strategies may be implemented in turn by the module 21 to select, within the matrix 10, the secondary windows for which the sequencer 14 shall control the second image capture sequence.
According to a first possible strategy, at least one of the secondary windows that are used to capture images according to the second sequence is selected within the photodetector matrix 10 from an image that was captured beforehand according to the first sequence. In other words, a first image is first captured with all the photodetectors of the main window, and parts of this first image are sought to form the secondary windows that will be used subsequently for the second image capture sequence. The secondary windows are thus fixed for this image capture, or for the series of image captures that relate to one and the same observed zone. At least one of these secondary windows may be selected from the image captured beforehand according to one of the following criteria, or a combination of these criteria:
Criterion /i/, in a general manner, and criterion /ii/, in the specific case of an observation of the Earth's surface, ensure that the images that are captured later according to the second sequence in the secondary windows contain at least one pattern whose successive positions within these images can be compared with one another. Criterion /iii/ allows comparing the movements of patterns in different zones of the main window. It is therefore possible to derive therefrom a characterization of the movement of the imaging system during each observation image accumulation, and specifically of the variations of the line of sight D. In particular, it is possible to distinguish a rotation movement around the line of sight D from a transversal movement.
According to a second strategy for the selection of the secondary windows, a plurality of windows smaller than the main window are fixed a priori, and images are captured according to the second sequence within each of them. For example, a uniform distribution of small secondary windows inside the main window may be adopted. The first and second image capture sequences are then implemented as described above: the main window is used to capture the first series of images for observation purposes, and the smaller windows are used to capture the second series of images, one series per smaller window. Then, at least one of these smaller windows is selected, and the images of the second series that were captured with the selected window(s) are used to detect the variations of the line of sight D from the successive positions of the patterns in the selected window(s). In other words, the second image capture sequence is performed with more secondary windows than necessary, and a selection among these secondary windows is then performed to determine the movement of the imaging system. This a posteriori selection of the secondary window(s) can be performed using the same criteria as those quoted above for the first strategy.
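Whichever strategy is used, the selection ultimately amounts to ranking candidate windows of the photodetector matrix against criteria of this kind. The sketch below assumes, as one plausible criterion, local contrast measured by the standard deviation of each candidate window; the window size, the number of windows kept and all names are illustrative assumptions, not elements of the description.

```python
import numpy as np

def select_secondary_windows(observation_img: np.ndarray, win: int = 64, n_windows: int = 4):
    """Rank candidate secondary windows of size win x win, laid out on a regular
    grid over an observation image, by local contrast (standard deviation), and
    keep the n_windows best. Returns the (row, column) of the top-left corner of
    each selected window."""
    height, width = observation_img.shape
    candidates = []
    for row in range(0, height - win + 1, win):
        for col in range(0, width - win + 1, win):
            patch = observation_img[row:row + win, col:col + win]
            candidates.append((float(patch.std()), (row, col)))
    candidates.sort(key=lambda item: item[0], reverse=True)   # strongest contrast first
    return [position for _, position in candidates[:n_windows]]

# Hypothetical usage on an observation image captured according to the first sequence:
# windows = select_secondary_windows(first_series_image)
```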
Of course, other strategies for the selection of windows in the matrix 10 can be used instead of those that have just been described in detail.
In order to implement the invention onboard a satellite S or an aircraft, module 22 may be adapted to transmit data that represent line-of-sight D variations, to an attitude control system 30 of the satellite or aircraft, marked SCAO on
Such a jitter compensation at the level of the image-capturing instrument may be performed by correcting the line of sight in real time within the instrument. This correction may be done by moving:
These two examples of compensation are provided as non-limiting examples, and their implementations are known to professionals. Compared with jitter compensation methods that operate through processing of the image, those that compensate for the line-of-sight variations inside the image-capturing instrument can be analog. The latter provide a higher accuracy without requiring calculations, which is particularly advantageous for space applications. Indeed, space applications require the use of specific technologies to meet constraints that do not exist for Earth-based applications. Among these constraints specific to space applications are the limitation of the number of onboard components, and the requirement for manufacturing and qualification methods that are designed to provide very high reliability and that are therefore very costly.
Finally,
a and 5a are time-diagrams of the sequential capture mode as it exists in the prior art, in its two variants, with an accumulation period respectively shorter than, or equal to, the observation image capture period.
In accordance with the diagram of
b comprises the same additional steps Ra, for the reading of the portions of lines of the matrix 10 that belong to the secondary windows. These steps Ra may be performed again after the reading steps with the reset of the complete lines of the matrix 10.
Of course, the invention may be reproduced while altering secondary aspects with respect to the modes of implementation that have been described in detail above, while retaining at least some of the advantages that have been mentioned. In particular, it should be recalled that the selection criteria for the secondary windows, as well as the number of these windows, can be adapted to each observation mission to which the invention is applied.
The embodiments above are intended to be illustrative and not limiting. Additional embodiments may be within the claims. Although the present invention has been described with reference to particular embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
Various modifications to the invention may be apparent to one of skill in the art upon reading this disclosure. For example, persons of ordinary skill in the relevant art will recognize that the various features described for the different embodiments of the invention can be suitably combined, un-combined, and re-combined with other features, alone, or in different combinations, within the spirit of the invention. Likewise, the various features described above should all be regarded as example embodiments, rather than limitations to the scope or spirit of the invention. Therefore, the above is not contemplated to limit the scope of the present invention.
The present application is a National Phase entry of PCT Application No. PCT/FR2011/052813, filed Nov. 29, 2011, which claims priority from FR Application No. 10 04737, filed Dec. 6, 2010, said applications being hereby incorporated by reference herein in their entirety.