The disclosure relates to a method and device of measuring infrared spectral characteristics of a moving target.
An imaging infrared spectrometer can capture both image characteristics and infrared spectral characteristics of a target. However, it produces a large amount of ineffective infrared spectral data that cannot be processed in real time, and the infrared spectrum of the target is acquired mixed with the infrared spectrum of the background.
Disclosed is a multi-dimensional, multi-scale, accurate, and real-time method of measuring infrared spectral characteristics.
The disclosure provides a method of measuring infrared spectral characteristics of a moving target, the method comprising:
Step (1), establishing a multi-dimensional, multi-scale model of infrared spectral characteristics of an object-space target, can comprise:
(1.1) establishing a three-dimensional model for a target to be measured;
(1.2) from the three-dimensional model, determining a region of interest (ROI) section of the target to be measured, and performing material classification on the three-dimensional model of the section to determine a radiation source; and
(1.3) measuring infrared spectral characteristics of the radiation source, to obtain object-space infrared-spectral-characteristic distribution at a specific angle.
An object-space infrared spectrum radiation measurement characteristic of a target can be expressed as follows:
Frad(x, y, z, ω, S, T)  (1)
where Frad(·) indicates the radiation intensity of the target in the Earth coordinate system, with a position (x, y, z), a measurement angle ω, a scale S, and a time dimension T.
An image-space radiation measurement characteristic function for a target is described below:
for a point-like or speckle-like target, a radiation measurement characteristic frad1 is expressed as frad1 = frad(·), a function related to the measurement azimuth;
for a plane-like target, a radiation measurement characteristic frad2 is expressed as frad2 = frad(·), a function related to the measurement azimuth and the spatial distribution of the target, and varying with the distance as well as with the azimuth.
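As an illustration only (not part of the claimed method), the object-space characteristic of formula (1) can be thought of as a lookup over discrete measurement samples; the sample fields and the nearest-neighbour query below are assumptions of this sketch.

```python
# Minimal sketch (not part of the disclosure): storing object-space radiation
# samples Frad(x, y, z, omega, S, T) and querying the nearest one.
from dataclasses import dataclass
from typing import List

@dataclass
class RadiationSample:
    x: float          # position in the Earth coordinate system
    y: float
    z: float
    omega: float      # measurement angle
    scale: float      # scale S
    time: float       # time dimension T
    intensity: float  # measured radiation intensity Frad(...)

def nearest_intensity(samples: List[RadiationSample],
                      x: float, y: float, z: float,
                      omega: float, scale: float, time: float) -> float:
    """Return the intensity of the stored sample closest to the query point
    in (x, y, z, omega, S, T) space (unweighted squared Euclidean distance).
    `samples` is assumed non-empty."""
    def dist(s: RadiationSample) -> float:
        return ((s.x - x) ** 2 + (s.y - y) ** 2 + (s.z - z) ** 2 +
                (s.omega - omega) ** 2 + (s.scale - scale) ** 2 +
                (s.time - time) ** 2)
    return min(samples, key=dist).intensity
```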
Step (2), performing target detection and tracking on an actually measured infrared image, can comprise:
(2.1) performing multi-threaded operations on an input actually-measured infrared image, including:
a 1st thread: performing superpixel segmentation to obtain sky background, ground background, target regions, etc., and based on the segmentation result and measurement of the area and grayscale features of a target, identifying background regions and taking them as negative samples;
a 2nd thread: extracting full-image HOG features, and distinguishing background from the target regions by a sliding-window method, thus obtaining a suspected target and taking it as a positive sample;
a 3rd thread: using a fully convolutional neural network to detect an input image to obtain a target, and taking it as a positive sample;
(2.2) inputting the results obtained by the above threads into a pre-trained support-vector-machine (SVM) classifier, to obtain position information (x, y) of the target in the image;
(2.3) creating a Gaussian pyramid from the image of the target, to obtain multi-scale information of the image, and thereafter inputting it into the trained convolutional neural network (CNN) to obtain pixel differences (Δx, Δy) of respective ROIs of the target with respect to the center position of the target; and
(2.4) detecting position information of the target in two frames of images according to (2.2) and processing it, to obtain the target's frame differences as well as the moving target's direction information from the two frames; and, based on the target's pixel differences between the two frames obtained in (2.3), performing motion compensation for the target.
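The following sketch illustrates how the three parallel detection branches of step (2.1) and the SVM fusion of step (2.2) might be organized. The OpenCV, scikit-image, and scikit-learn calls, the 8-bit grayscale input, the stubbed detector, and the toy fused feature vector are all assumptions of this sketch, not the specific implementation of the disclosure.

```python
# Illustrative sketch only (assumed libraries: OpenCV, scikit-image, scikit-learn).
# `gray` is assumed to be an 8-bit grayscale infrared frame (2-D uint8 array).
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import cv2
from skimage.segmentation import slic
from sklearn.svm import SVC

def thread_superpixel(gray: np.ndarray) -> np.ndarray:
    # Thread 1: superpixel segmentation of the scene; background regions found
    # this way serve as negative samples for the classifier.
    return slic(gray, n_segments=200, compactness=0.1, channel_axis=None)

def thread_hog(gray: np.ndarray) -> np.ndarray:
    # Thread 2: HOG features over the image (resized to the default 64x128 HOG
    # window); sliding-window hits would serve as positive samples.
    return cv2.HOGDescriptor().compute(cv2.resize(gray, (64, 128)))

def thread_fcn(gray: np.ndarray) -> np.ndarray:
    # Thread 3: a fully convolutional detector would go here; stubbed as a
    # coarse threshold map so the sketch stays self-contained.
    return (gray > gray.mean()).astype(np.float32)

def detect(gray: np.ndarray, svm: SVC) -> int:
    # Step (2.2): fuse the three branches and let a pre-trained SVM decide.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(thread_superpixel, gray),
                   pool.submit(thread_hog, gray),
                   pool.submit(thread_fcn, gray)]
        seg, hog_feat, fcn_map = (f.result() for f in futures)
    # Toy fused feature vector; a real system would build one per candidate
    # region and return the target's image position (x, y).
    feature = np.array([[float(seg.max()), float(hog_feat.mean()), float(fcn_map.mean())]])
    return int(svm.predict(feature)[0])
```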
The disclosure also provides a measurement device for implementing the measurement method, the device comprising: an industrial computer, a rotary mirror, a beam splitter, a medium-wave lens, a long-wave lens, a non-imaging infrared spectrum measuring unit, and a long-wave infrared imaging unit; a control interface of the industrial computer is connected to the rotary mirror; the medium-wave lens is mounted on the non-imaging infrared spectrum measuring unit; an output end of the non-imaging infrared spectrum measuring unit is connected to an input end of the industrial computer; the long-wave lens is mounted on the long-wave infrared imaging unit; and an output end of the long-wave infrared imaging unit is connected to the input end of the industrial computer.
The rotary mirror is adapted to reflect light of a target of interest to the beam splitter; the beam splitter is adapted to divide the received reflected light into a medium-wave light beam and a long-wave light beam and transmit the light beams to the medium-wave lens and the long-wave lens, respectively, so that the light beams pass through the medium-wave lens and the long-wave lens and are transmitted to the non-imaging infrared spectrum measuring unit and the long-wave infrared imaging unit, respectively, for processing; the non-imaging infrared spectrum measuring unit is adapted to process the received medium wave into infrared spectral data and transmit it to the industrial computer; the long-wave infrared imaging unit is adapted to process the received long wave into image data and transmit it to the industrial computer; and the industrial computer is adapted to control steering of the rotary mirror, and to process the received infrared spectral data and image data to obtain multi-dimensional, multi-scale infrared spectral characteristics of the moving target.
The rotary mirror adopts four-framework servo control and comprises: a reflective mirror, an inner pitch framework, an inner azimuth framework, an outer pitch framework, and an outer azimuth framework, which are arranged sequentially from inside to outside.
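Purely for illustration, the signal path and the four-framework rotary mirror described above may be modelled as the following data structures; the class and field names are assumptions of this sketch, not firmware or software of the actual device.

```python
# Sketch of the device's signal path (component names follow the description
# above; the classes themselves are illustrative assumptions only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class RotaryMirror:
    # Four-framework servo: the reflective mirror sits inside the inner pitch
    # and inner azimuth frameworks, which sit inside the outer pitch and outer
    # azimuth frameworks.
    inner_pitch_deg: float = 0.0
    inner_azimuth_deg: float = 0.0
    outer_pitch_deg: float = 0.0
    outer_azimuth_deg: float = 0.0

@dataclass
class IndustrialComputer:
    spectra: List[list] = field(default_factory=list)  # from the non-imaging spectrum unit
    images: List[list] = field(default_factory=list)   # from the long-wave imaging unit

    def steer(self, mirror: RotaryMirror, azimuth_deg: float, pitch_deg: float) -> None:
        # Control interface: point the rotary mirror at the target of interest.
        mirror.outer_azimuth_deg, mirror.outer_pitch_deg = azimuth_deg, pitch_deg

    def ingest(self, spectrum: list, image: list) -> None:
        # Medium-wave branch -> infrared spectral data; long-wave branch -> image data.
        self.spectra.append(spectrum)
        self.images.append(image)
```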
The disclosure establishes a three-dimensional model in advance, and then combines and compares the three-dimensional model with an actually measured and processed infrared spectrum to establish a multi-dimensional, multi-scale infrared-spectral-characteristic expression for a target. This provides an accurate infrared-spectrum model for the target and an accurate positioning method for a target of interest, and identifies the positions of the respective components of the target of interest, thereby solving the problems in the prior art that existing infrared-spectral-characteristic expressions for a target are incomplete and inaccurate, and that the infrared spectrum cannot be measured or cannot be accurately measured.
To further illustrate, embodiments detailing a method and device of measuring infrared spectral characteristics of a moving target are described below. It should be noted that the following embodiments are intended to describe and not to limit the disclosure.
Referring to the accompanying drawings, the method comprises the following steps.
(1) establishing a multi-dimensional and multi-scale model with infrared spectral features of an object-space target, and extracting an object-space region of interest measurement model.
(1.1) Establishing a three-dimensional model for a target to be measured.
(1.2) Determining an ROI section of the target to be measured, and performing material classification on the three-dimensional model. For a stereo target, each point (x, y, z) of the target has infrared spectral characteristics; each point has a temperature, and the emissivity may differ from point to point. For a time-sensitive target, movement of the target causes the infrared spectral characteristics of the target to change. Taking an aircraft as an example, a model is established as shown in
(1.3) Acquiring data by measuring the infrared-spectral-characteristic distribution of the radiation sources of the aircraft, thus obtaining the infrared-spectral-characteristic distribution of the aircraft at a specific angle; by measuring the infrared spectrum of the target from different angles, the infrared spectral characteristics of the target at different angles are obtained, as shown in
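As a minimal sketch (not part of the disclosure), the measurements of step (1.3) could be collected into an angle-indexed distribution per radiation source; the dictionary layout, the source name, and the placeholder wavelength/radiance values are assumptions of this sketch.

```python
# Illustrative sketch: collecting measured spectral characteristics of the
# classified radiation sources into an angle-indexed distribution.
from typing import Dict, List, Tuple

# (radiation_source, measurement_angle_deg) -> list of (wavelength_um, radiance) pairs
AngularDistribution = Dict[Tuple[str, float], List[Tuple[float, float]]]

def add_measurement(dist: AngularDistribution, source: str, angle_deg: float,
                    spectrum: List[Tuple[float, float]]) -> None:
    """Record one measured infrared spectrum of one radiation source at one angle."""
    dist.setdefault((source, angle_deg), []).extend(spectrum)

distribution: AngularDistribution = {}
# Placeholder values only: e.g. a tail-nozzle spectrum measured from a 30 degree aspect.
add_measurement(distribution, "tail_nozzle", 30.0, [(3.8, 0.42), (4.2, 0.55)])
```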
An object-space radiation characteristic of the aircraft target may be expressed as below:
Frad(x, y, z, ω, S, T)  (1)
The above formula indicates that the aircraft target is three-dimensional, with different parts having different temperature distributions and radiation characteristics depending on their spatial positions, where Frad(·) indicates the radiation intensity of the target in the Earth coordinate system, with a position (x, y, z), a measurement angle ω, a scale S, and a time dimension T.
The image-space radiation measurement characteristic function for an aircraft target is described below.
For a point-like or speckle-like target, a radiation measurement characteristic frad1 is related to the measurement azimuth and is expressed as frad1 = frad(·).
For a plane-like target, a radiation measurement characteristic frad2 is related to the measurement azimuth and the spatial distribution of the target, and varies with the distance as well as with the azimuth; it is expressed as frad2 = frad(·).
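One possible, purely illustrative rule for deciding which image-space characteristic applies is the target's apparent pixel area; the pixel-area threshold below is an assumed parameter, not a value given in the disclosure.

```python
# Sketch only: choosing between the point/speckle-like characteristic frad1 and
# the plane-like characteristic frad2 by the target's apparent size in the image.
import numpy as np

def image_space_model(target_mask: np.ndarray, area_threshold_px: int = 9) -> str:
    """Return which image-space radiation model applies to the detected target."""
    area = int(np.count_nonzero(target_mask))   # apparent target area in pixels
    return "frad1 (point/speckle-like)" if area <= area_threshold_px else "frad2 (plane-like)"
```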
(2) Performing target detection on an actually measured infrared image, and identifying position information for each region of the target; tracking the target to obtain the target's pixel differences between two frames as well as the moving direction of the target; and then, according to the target's pixel differences between the two frames, controlling a two-axis four-framework servo to perform motion compensation for the target.
Activating multi-threaded operations on the input image to execute the following steps:
a 1st thread: performing superpixel segmentation to obtain sky background, ground background, target regions, etc., and based on the segmentation result and measurement of the area and grayscale features of a target, identifying background regions and taking them as negative samples.
a 2nd thread: extracting full-image HOG (Histogram of Oriented Gradients) features, and distinguishing background from the target regions by a sliding-window method, thus obtaining a suspected target and taking it as a positive sample.
a 3rd thread: using a fully convolutional neural network to detect the input image to obtain a target, and taking it as a positive sample.
Results obtained by the above threads are input into a pre-trained SVM classifier, to obtain position information (x, y) of the target in the image; some training samples of the SVM classifier are shown in
The following operations are performed on the target image: inputting the target image and creating a Gaussian pyramid from the image, to obtain multi-scale information of the image, and thereafter inputting it into the trained CNN to obtain pixel differences (Δx, Δy) of the respective ROIs of the target with respect to the center position of the target.
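For illustration only, the Gaussian pyramid fed to the trained CNN could be built with OpenCV's pyrDown as sketched below; the CNN itself is stubbed, and the number of pyramid levels is an assumed parameter.

```python
# Sketch only: Gaussian pyramid of the target image; the trained CNN that
# predicts per-ROI offsets is stubbed so the sketch stays self-contained.
from typing import List, Tuple
import cv2
import numpy as np

def gaussian_pyramid(target_img: np.ndarray, levels: int = 3) -> List[np.ndarray]:
    """Return the target image at `levels` successively halved scales."""
    pyramid = [target_img]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def roi_offsets(pyramid: List[np.ndarray]) -> List[Tuple[int, int]]:
    """Placeholder for the trained CNN that returns the pixel differences
    (delta_x, delta_y) of each ROI relative to the target centre."""
    return [(0, 0) for _ in pyramid]   # a real CNN would predict these offsets
```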
After the position of the target is detected, a target tracking module is activated to obtain the target's frame differences between two frames of images as well as the moving target's direction information.
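A minimal sketch of the frame differencing described above, assuming the target positions detected in two consecutive frames are available as image coordinates; the example positions are placeholders.

```python
# Sketch only: frame-to-frame pixel differences and moving direction of the
# target from its detected positions in two consecutive frames.
import math
from typing import Tuple

def frame_difference(pos_prev: Tuple[float, float],
                     pos_curr: Tuple[float, float]) -> Tuple[float, float, float]:
    """Return (delta_x, delta_y, direction_deg) between the two detections."""
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    direction_deg = math.degrees(math.atan2(dy, dx))  # moving direction in the image plane
    return dx, dy, direction_deg

# Placeholder example: the target moved from (120, 80) to (126, 77) between frames.
print(frame_difference((120, 80), (126, 77)))   # -> (6, -3, about -26.6 degrees)
```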
The servo of this device adopts a four-framework servo system, and a schematic structural diagram of its rotary mirror is as shown in
(3) Activating a servo inner-framework tracking system to scan the target.
After an image of the tracked target is successfully captured, the inner framework is controlled to point to each target of interest. According to the moving-direction information of the target, an offset motion of N pixels is performed in a direction shifted by 90° with respect to the moving direction; that is, the scanning direction is the same as the moving direction, and after each scan the scanning line is offset by N pixels perpendicular to the moving direction and then continues to scan along the moving direction. A spectrum measuring module is activated, and the distance information between the measuring device and the target, the measurement azimuth and elevation angle information, the scale information, and the time-dimension information are recorded and input into the multi-dimensional, multi-scale infrared-spectral-characteristic model obtained in (1) for the target, thus obtaining a final infrared-spectrum model for the target. A schematic diagram of the scanning is as shown in
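The scan geometry of step (3), scanning along the moving direction and stepping N pixels perpendicular to it after each scan, might be expressed as sketched below; the start point, N, and the number of scan lines are assumed inputs of the sketch.

```python
# Sketch only: generating the start points of successive scan lines. Each line
# runs along the target's moving direction; after each scan the start point is
# shifted by N pixels perpendicular (90 degrees) to that direction.
import math
from typing import List, Tuple

def scan_line_origins(start: Tuple[float, float], direction_deg: float,
                      n_pixels: int, num_lines: int) -> List[Tuple[float, float]]:
    """Return the start point of each successive scan line."""
    perp = math.radians(direction_deg + 90.0)        # unit vector perpendicular to motion
    ox, oy = math.cos(perp), math.sin(perp)
    return [(start[0] + i * n_pixels * ox, start[1] + i * n_pixels * oy)
            for i in range(num_lines)]

# Placeholder example: a target moving along +x, perpendicular step N = 5 pixels.
print(scan_line_origins((100.0, 60.0), 0.0, 5, 3))
# -> approximately [(100.0, 60.0), (100.0, 65.0), (100.0, 70.0)]
```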
The disclosure further provides a measurement device, that is, a target tracking and spectrum measurement device for implementing the above method and capable of combining image information and infrared spectrum information. The schematic diagram of the device is as shown in
It will be obvious to those skilled in the art that changes and modifications may be made, and therefore, the aim in the appended claims is to cover all such changes and modifications.
Number | Date | Country | Kind
---|---|---|---
201611268920.7 | Dec. 31, 2016 | CN | national
This application is a continuation-in-part of International Patent Application No. PCT/CN2017/077104 with an international filing date of Mar. 17, 2017, designating the United States, now pending, and further claims foreign priority benefits to Chinese Patent Application No. 201611268920.7 filed Dec. 31, 2016. The contents of all of the aforementioned applications, including any intervening amendments thereto, are incorporated herein by reference. Inquiries from the public to applicants or assignees concerning this document or the related applications should be directed to: Matthias Scholl P. C., Attn.: Dr. Matthias Scholl Esq., 245 First Street, 18th Floor, Cambridge, Mass. 02142.
Number | Date | Country
---|---|---
20190325586 A1 | Oct 2019 | US
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2017/077104 | Mar 2017 | US
Child | 16458219 |  | US