This invention relates to a method for entropy-based determination of object edge curves in a recorded image of a multispectral camera.
It is known that the average information content of an image is characterized by entropy. The entropy value provides information about the minimum number of bits required to store the pixels of an image. This value also provides information about whether a reduction in the required storage space can be achieved without any loss of information.
From the article by A. Shiozaki: “Edge Extraction Using Entropy Operator”; Academic Press Inc. 1986; Computer Vision, Graphics, and Image Processing, 1986, vol. 36, pages 1-9, it is known that entropy operators may be used for object edge detection in images.
Methods which utilize entropy in the processing of images recorded by air-to-surface means are known from the state of the art; edge trackers are one example. With an edge tracker, the image to be searched is examined for parallel edges. The search is terminated when the edge strength drops below a preselectable threshold. With this method it is possible to detect streets and roads. The disadvantage of this method is that it yields good results only for objects having a fixed, clearly outlined contour.
The problem addressed by the present invention is to provide a method for determining object edge curves.
This problem is solved with the features claimed. Advantageous embodiments are also claimed.
According to the invention, a method for detection and classification of objects in a recorded image includes converting the image recorded into a false color image, assigning a hue value from the HSV color space to each pixel in the false color image, such that the hue value corresponds to a hue angle H on a predetermined color circle, and classifying each pixel as one of an object pixel and a background pixel, such that the pixels whose hue values are within a predetermined value range are defined as the object pixels. An entropy profile is then calculated for the classified image, the entropy profile is differentiated, and the object edge curve is determined by a subsequent extreme value consideration.
In processing images recorded by air-to-surface means, a distinction is made between artificial and natural objects. Monochromatic objects can be defined by assigning a certain hue to them. Polychromatic objects can be defined as the sum of multiple monochromatic objects. The method according to the invention can be used in classification of monochromatic objects, i.e., those of a single color, as well as polychromatic objects, i.e., those of multiple colors.
The starting image material in modern multispectral cameras is in the form of RGB images (RGB = red, green, blue). In addition, many cameras have one further color channel available, which lies in the infrared spectrum and is referred to as IR. Since the true color has little informational value because of what is known as color mix-up (a green-painted automobile, for example, cannot be differentiated from a green field on which it is parked), the image must be transformed into another color space to enable this differentiation, namely the so-called HSV space (H = hue (color), S = saturation, V = value (brightness)). Several methods for this conversion are familiar to those skilled in the art; their results are equivalent.
Invariance of the hue with respect to fluctuations in brightness forms an important difference between the representation of color and hue: whereas the color changes with changing lighting conditions, the hue remains unchanged over a wide range, so that an object can be located again on the basis of its hue (H), even after a certain period of time has elapsed.
The present invention and advantageous embodiments are explained in greater detail below on the basis of the drawing figures.
With the method according to the invention, the image recorded is converted to a false color image in a first step. The recorded image can thus be regarded as a pixel array having a predefinable number of rows and columns.
In this false color image, a hue value from the HSV color space is then assigned to each pixel, where the hue value corresponds to a hue angle H on a predetermined color circle. Those skilled in the art are familiar with the representation of an HSV color space from http://de.wikipedia.org/wiki/HSV-Farbraum, for example.
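Merely by way of illustration, the hue angle H can be computed from the red, green and blue channels of the false color image using the standard hexcone formula. The array layout and the value range [0, 1] assumed in the following Python sketch are assumptions made for the example, not requirements of the method:

```python
import numpy as np

def hue_angle(rgb):
    """Return the hue angle H in degrees (0..360) for each pixel of an
    image given as a float array of shape (rows, cols, 3) with values
    in [0, 1].  Standard hexcone formula; saturation and value are not
    computed because only the hue is used for the classification."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)              # value (brightness)
    c = v - rgb.min(axis=-1)          # chroma; zero for gray pixels
    h = np.zeros_like(v)              # hue left at 0 where it is undefined
    r_max = (c > 0) & (v == r)
    g_max = (c > 0) & (v == g) & ~r_max
    b_max = (c > 0) & (v == b) & ~r_max & ~g_max
    h[r_max] = ((g - b)[r_max] / c[r_max]) % 6.0
    h[g_max] = (b - r)[g_max] / c[g_max] + 2.0
    h[b_max] = (r - g)[b_max] / c[b_max] + 4.0
    return 60.0 * h                   # hue angle on the color circle
```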
In a next step, the individual pixels are classified as object pixels and as background pixels. The pixels with hue values within a predetermined value range are defined as object pixels. Pixels whose hue values lie outside of this value range are defined as background pixels. The predetermined value range is obtained here from a database which has been trained, with regard to their hues, on the objects to be located.
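A minimal sketch of this classification step could look as follows; representing the value range as a single hue interval (hue_min, hue_max) in degrees, taken from such a database, is an assumption of the example:

```python
def classify_pixels(hue, hue_min, hue_max):
    """Return a boolean mask: True for object pixels whose hue angle lies
    within the predetermined value range, False for background pixels.
    Ranges that wrap around 360 degrees (e.g. reddish hues) are supported."""
    if hue_min <= hue_max:
        return (hue >= hue_min) & (hue <= hue_max)
    return (hue >= hue_min) | (hue <= hue_max)
```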
In the next process step, an entropy profile is calculated for the classified image.
The theoretical background of the method is briefly explained here on the basis of the Boltzmann definition of entropy: consider a pixel distribution of n pixels with nA object pixels and nB background pixels. The starting point is the Boltzmann definition of entropy
S=k ln Ω (1)
where k is the Boltzmann factor and Ω is the number of possible arrangements of the nA object pixels and nB background pixels.
The quantity sought is the number of different ways of arranging the nA object pixels and nB background pixels in a grid. The number of possibilities of distributing nA indistinguishable object pixels and nB indistinguishable background pixels among n = nA + nB sites is given by

Ω = n!/(nA!·nB!) (2)
Inserting equation (2) into equation (1) and applying the Stirling formula ln n!≈n ln n−n yields the entropy of mixing

S = k·(n ln n−nA ln nA−nB ln nB) (3)
Entropy is given in arbitrary units. The proportionality factor (Boltzmann constant) is expediently equated to one.
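Merely as an illustration of equation (3) with the Boltzmann constant set to one, the entropy of mixing can be written as a small function; the 5 x 5 window and the pixel counts used below are example values assumed for this sketch:

```python
import math

def mixing_entropy(n_a, n_b):
    """Entropy of mixing according to equation (3) with k = 1,
    using the convention 0 * ln 0 = 0."""
    def xlnx(x):
        return x * math.log(x) if x > 0 else 0.0
    return xlnx(n_a + n_b) - xlnx(n_a) - xlnx(n_b)

print(mixing_entropy(25, 0))    # 0.0   (only one type of pixel)
print(mixing_entropy(12, 13))   # about 17.31 (nearly equal numbers)
print(25 * math.log(2))         # about 17.33 (theoretical maximum n ln 2)
```

As the values show, the entropy vanishes when the window contains only one type of pixel and approaches its maximum of n ln 2 when object and background pixels are present in equal numbers.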
In an advantageous embodiment of the invention, an evaluation window A with a predefinable length and width is defined, where the size of the evaluation window is expediently coordinated with the expected object size. The entropy of mixing according to equation (3) is calculated for the pixels contained in the evaluation window and is assigned to the main pixel of the window.
In another advantageous process step, the evaluation window A is shifted by at least one pixel in the direction of the length and/or width of the evaluation window A. The entropy of mixing according to equation (3) is calculated again at this new position and assigned to the respective main pixel. These process steps are repeated until an entropy value has been assigned to each pixel of the recorded image to be analyzed.
In the successive calculation of the entropy of mixing, the value zero is obtained wherever there is only one type of pixel, i.e., with the evaluation window entirely outside of the object being sought or entirely inside of same. Values different from zero are obtained wherever at least one pixel of each sort (object pixel or background pixel) is contained in the evaluation window. This yields an entropy profile.
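A minimal sketch of this successive calculation, reusing the mixing_entropy function from the sketch above and assuming the classified image is available as a boolean array (True = object pixel), is given below; the simplified border handling is an assumption of the sketch, not part of the method:

```python
import numpy as np

def entropy_profile(object_mask, win_h, win_w):
    """Slide an evaluation window of win_h x win_w pixels over the
    classified image and assign the entropy of mixing of each window
    position to its central (main) pixel.  Pixels near the image border,
    whose window would leave the image, simply keep the value zero here."""
    rows, cols = object_mask.shape
    profile = np.zeros((rows, cols))
    n = win_h * win_w
    for i in range(rows - win_h + 1):
        for j in range(cols - win_w + 1):
            n_a = int(object_mask[i:i + win_h, j:j + win_w].sum())
            profile[i + win_h // 2, j + win_w // 2] = mixing_entropy(n_a, n - n_a)
    return profile
```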
According to the invention, a differentiation of the entropy profile with a subsequent extreme value consideration is performed. This makes it possible to determine an object edge curve. Along this object edge curve, the highest entropy differences are found where the number of object pixels and the number of background pixels in the evaluation window were the same in the entropy calculation according to equation (3). If the crest (extreme values) of the entropy height profile is projected onto the image recorded, this yields a measure of the size of the object.
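One possible way of carrying out the differentiation and the subsequent extreme value consideration on such an entropy profile is sketched below; the simple sign-change test used as the extreme value criterion is an assumption of this sketch rather than a requirement of the invention:

```python
import numpy as np

def edge_curve(profile):
    """Differentiate the entropy profile and mark its crest: a pixel is
    taken to lie on the object edge curve if the entropy there is positive
    and the derivative changes sign from positive to negative along the
    row or the column direction."""
    d_row, d_col = np.gradient(profile)
    crest = np.zeros(profile.shape, dtype=bool)
    crest[:, 1:-1] |= (d_col[:, :-2] > 0) & (d_col[:, 2:] < 0) & (profile[:, 1:-1] > 0)
    crest[1:-1, :] |= (d_row[:-2, :] > 0) & (d_row[2:, :] < 0) & (profile[1:-1, :] > 0)
    return crest
```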
Using the method according to the invention, it is possible to reliably differentiate and classify objects with blurred colors, both with and without fixed contours. The method is particularly suitable for high-resolution four-color multispectral cameras in which the blue component is omitted (false color camera). The proposed method is real-time capable and can be transferred to moving images for the purpose of target tracking. Another advantage of this method is that even camouflaged targets can be discovered.
Number | Date | Country | Kind |
---|---|---|---|
10 2009 009 572 | Feb 2009 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/DE2010/000147 | 2/9/2010 | WO | 00 | 10/27/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/097069 | 9/2/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5375177 | Vaidyanathan et al. | Dec 1994 | A |
5448652 | Vaidyanathan et al. | Sep 1995 | A |
6665439 | Takahashi | Dec 2003 | B1 |
6901163 | Pearce et al. | May 2005 | B1 |
6944331 | Schmidt et al. | Sep 2005 | B2 |
7671898 | Chiba | Mar 2010 | B2 |
20020041705 | Lin et al. | Apr 2002 | A1 |
20030035580 | Wang et al. | Feb 2003 | A1 |
20040037460 | Luo et al. | Feb 2004 | A1 |
20060268344 | Shiau | Nov 2006 | A1 |
Akira Shiozaki, "Edge Extraction Using Entropy Operator," Computer Vision, Graphics, and Image Processing, vol. 36, pp. 1-9, Academic Press, Inc., 1986.
H. D. Cheng et al., "Color Image Segmentation: Advances and Prospects," Pattern Recognition, Elsevier, vol. 34, no. 12, pp. 2259-2281, Dec. 1, 2001, XP004508355.
Wenzhan Dai et al., "An Image Edge Detection Algorithm Based on Local Entropy," IEEE International Conference on Integration Technology, pp. 418-420, Mar. 1, 2007, XP031127292.
Wen Furong et al., "An Novelty Color Image Edge Detector Based on Fast Entropy Threshold," Proceedings of IEEE Tencon'02, IEEE Region 10 Conference on Computers, Communications, Control and Power Engineering, vol. 1, pp. 511-514, Oct. 28-31, 2002, XP010628535.
S. Makrogiannis et al., "Scale Space Segmentation of Color Images Using Watersheds and Fuzzy Region Merging," Proc. 2001 International Conference on Image Processing, vol. 1, pp. 734-737, Oct. 7, 2001, XP010564964.
C. F. Sin et al., "Image Segmentation by Edge Pixel Classification with Maximum Entropy," Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, May 2-4, 2001, pp. 283-286, XP010544718.
German Office Action dated Oct. 2, 2009 (three pages).
International Search Report including English language translation dated Jun. 18, 2010 (eleven pages).
PCT/ISA/237 Form (eight pages).
Number | Date | Country
---|---|---
20120033877 A1 | Feb 2012 | US