1. Field of the Invention
The invention relates to a method for carrying out a dynamic range compression in traffic photography for representation having greater detail fidelity in images created in association with traffic monitoring installations.
2. Description of the Background Art
In traffic photography, for the most part images are created in which the driver in the vehicle cabin appears very dark, whereas the license plate appears strongly over-illuminated owing to its retroreflective property. A dynamic range compression can often be successfully applied in order to brighten up the dark regions in images with a high dynamic range, and to improve the visibility of low contrast details in said regions without destroying the information in the brighter regions. In this case, a copy of the image is converted into a contrast mask over which the image is superposed. In the process, brightness and contrast must be manually adjusted for each image so that the images thus corrected do not appear unnatural, or even like a photomontage.
When the images from the traffic monitoring installations come to be evaluated in the evaluation offices (backoffices) of the authorities or commissioned organizations, several thousand images require manual adjustment every day, something which entails a considerable extra outlay in time, but also an additional burden on the staff. In extreme cases, fines can become time-barred when a backlog arises in the processing of the images. Furthermore, the manual adjustment is open to the subjective influences of the respective person who is processing the image.
It is therefore an object of the present invention to provide an option for achieving a representation having greater detail fidelity of the dark regions in digitally obtained images of traffic photography, whilst precluding the differing subjective influences of the processing staff and without losing the information of the brighter regions in the process.
According to the invention, this object is achieved by a method for carrying out a dynamic range compression in traffic photography. Proceeding from a digitally provided original image [F] having the pixels xFi, subdivided into n columns s and m rows z, the first step is to create a grayscale image mask [G] having the grayscale values G(si1;zi2), wherein i1={1, . . . ,m} and i2={1, . . . ,n}. In the further method cycle, the arithmetic mean value X̄ over all the grayscale values G(si1;zi2) is firstly determined in accordance with

X̄ = (1/(n*m)) * Σ G(si1;zi2),

the sum running over all i1 and i2. A gain parameter p and a blurring factor b are then determined from the arithmetic mean value X̄.
The particular advantage of the inventive method consists in that the dark regions are automatically brightened up, particularly in the case of color images with a high dynamic range, in order to improve the visibility of low contrast details in said regions, but without destroying the information in the brighter regions. This is particularly advantageous in traffic photography, since the driver is mostly imaged very darkly, while the license plate is strongly over-illuminated, given its retroreflective properties. The method can be applied either in the camera directly after acquisition, such that all the original images are automatically brightened up, or with a time delay after the acquired original images have been stored on a storage medium. The time-delayed brightening up can be performed in the camera at an instant when no original images are being acquired, or on a separate computer unit, for example on a computer in a backoffice.
The starting point for the method can be both a monochrome grayscale image [F] and a color image [F] having the usual 3 channels (RGB). In the case of color images, the grayscale image mask with its grayscale values xGi is generated by converting the individual R, G and B values of each pixel into a brightness value.
During the final generation of the new color image [N], each pixel is then split up again into its R, G and B values.
In order to largely suppress rounding errors, it is advantageous for the computer, which is usually set to fixed point arithmetic, to be switched over to floating point arithmetic.
Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus, are not limitive of the present invention, and wherein:
Since all current computers support so-called floating point arithmetic, at the start of the method all the color values are converted from fixed point (integer) representation into floating point numbers, in order to keep rounding errors in the subsequent computational steps as low as possible.
For the conversion into a floating point number, each individual R, G and B value of each pixel is respectively divided by 65,536.0:

R = R/65,536.0, G = G/65,536.0, B = B/65,536.0
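The conversion described above, together with the reconversion performed at the end of the method, can be sketched as follows. This is a minimal sketch assuming the original image is stored as a 16-bit-per-channel RGB NumPy array (values 0..65535); the array layout is an assumption, not taken from the description.

```python
import numpy as np

# Sketch of the fixed-point-to-floating-point conversion, assuming a
# 16-bit-per-channel RGB image (values 0..65535) stored as a NumPy array.
def to_float(rgb_u16):
    """Divide every R, G and B value by 65,536.0 to obtain floats in [0, 1)."""
    return rgb_u16.astype(np.float64) / 65536.0

def to_int(rgb_f):
    """Reconvert the floats back to 16-bit integers by multiplying by 65,536.0
    (the final method step)."""
    return np.clip(rgb_f * 65536.0, 0, 65535).astype(np.uint16)
```

Because 65,536 is a power of two, the round trip through `to_float` and `to_int` is lossless for every representable 16-bit value.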
In the first step, a grayscale image mask [G] having the grayscale values xGi, wherein i=1, . . . , n*m, is firstly created from a digitally provided original image [F] having the pixels xFi with i=1, . . . , n*m. The starting point for the method can be both a monochrome grayscale image [F] and a color image [F] having the usual 3 channels (RGB).
In the case of a color image [F], there is an Ri-, Gi- and Bi-value for each pixel xFi. The Ri-, Gi- and Bi-values represent a value triplet for each pixel. By contrast, there is only a single grayscale value for each xFi in the case of a monochrome original image.
In the case of a monochrome original image [F], only a copy of the original image [F] is generated; it then corresponds to the grayscale image mask [G].
By contrast, in the case of a color original image the color image [F] is firstly converted into a monochrome image. For this purpose, a copy of the original image [F] is firstly generated from the color image [F] and subsequently converted into a monochrome image [G]. To this end, the grayscale image [G] with its grayscale values xGi is generated by converting the individual R, G and B values of each pixel into a brightness value, the calculated brightness values subsequently being assigned as grayscale values to the corresponding pixels xGi of the grayscale image [G] to be generated.
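The RGB-to-brightness conversion can be sketched as follows. The exact weighting used by the method is not reproduced in the text, so the standard Rec. 601 luma weights are assumed here purely for illustration.

```python
import numpy as np

# Hypothetical weighting: the description does not reproduce the exact
# conversion formula, so the common Rec. 601 luma weights are assumed.
def to_grayscale_mask(rgb_f):
    """Collapse an (m, n, 3) float RGB image [F] into an (m, n) mask [G]."""
    r, g, b = rgb_f[..., 0], rgb_f[..., 1], rgb_f[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```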
In the second step, the arithmetic mean value X̄ over all the grayscale values xGi, with i=1, . . . ,n*m, is determined from the grayscale image [G], which was copied from a monochrome original image [F] or generated from a color image [F]:

X̄ = (1/(n*m)) * Σ xGi, i=1, . . . ,n*m
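As a sketch, this reduction over all n*m grayscale values is a single mean over the mask:

```python
import numpy as np

# Arithmetic mean over all n*m grayscale values of the mask [G]; this single
# scalar later steers the gain parameter p and the blurring factor b.
def mean_gray(gray):
    return float(gray.mean())
```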
In the third step, a blurred grayscale image for later use is generated by applying a blurring filter. The blurring filter (for example a Gaussian filter) in this case reduces the difference in the grayscale values (contrast) between adjacent points. Since it is a lowpass filtering, small structures are lost, whereas large ones are retained.
A common blurring filter is a Gaussian filter, in the case of which, given a two-dimensional image xui=h(x,y), the grayscale value of each pixel of the contrast mask xui is described by the following formula:

h(x,y) = (1/(2*π*σ²)) * e^(−(x²+y²)/(2σ²))
With reference to the inventive method, this means that s is to be used for x, z for y and b for σ. Here, b is the blurring factor, which is calculated from the arithmetic mean value X̄ as

b = 4 * X̄,

the functional relationships being determined empirically from a plurality of image rows having different illumination situations. A two-dimensional filter H(x,y) can be separated into two one-dimensional filters H(x) and H(y) in order to reduce the outlay on computation. Consequently, said filter is applied, for example, only to the direct neighbors, on the one hand in the horizontal and on the other hand in the vertical, and thus reduces the outlay on computation by a multiple.
Consequently, each grayscale value xui with i=1, . . . , n*m results for each point of the contrast mask [U] through application of the Gaussian filter to each point xGi with i=1, . . . , n*m, with account being taken of the contrast values of the respective adjacent points of the grayscale image [G].
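The separable Gaussian blur described above can be sketched as follows. This is a hedged NumPy version; the three-sigma kernel radius is a common convention, not taken from the description.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """One-dimensional Gaussian H(x); the 2-D filter H(x,y) separates into
    H(x) and H(y), which reduces the outlay on computation."""
    radius = max(1, int(3.0 * sigma))          # 3-sigma cutoff (assumption)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()                         # normalize: blur preserves mean brightness

def contrast_mask(gray, b):
    """Blurred grayscale image [U]: separable Gaussian blur of [G] with sigma = b."""
    k = gaussian_kernel_1d(b)
    pad = len(k) // 2
    def conv(line):
        # edge-pad so the output keeps the original length
        return np.convolve(np.pad(line, pad, mode="edge"), k, mode="valid")
    rows = np.apply_along_axis(conv, 1, gray)  # horizontal pass
    return np.apply_along_axis(conv, 0, rows)  # vertical pass
```

Applying the one-dimensional kernel twice, once per axis, replaces the full two-dimensional convolution and is what makes the separation pay off for larger blurring factors.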
In the fourth step, the original image [F] is balanced against the blurred grayscale image [U] (contrast mask) pixel by pixel. The balancing is performed as a function of a gain parameter p, determined once for all computations of an image, and the pixel dependent value xui, both of which feature in the exponent specific to each pixel. The natural impression is retained in the newly generated image [N] owing to this type of balancing.
The first step in this regard is to calculate the gain parameter p from the arithmetic mean value X̄, the functional relationship likewise having been determined empirically.
Subsequently, the R, G and B values of the new color image [N] are determined from the Ri-, Gi- and Bi-values of each individual pixel, with i=1, . . . ,n*m, as follows:
for the case when xui < 0.5:

RNi = 1 − (1 − RFi)^V
GNi = 1 − (1 − GFi)^V
BNi = 1 − (1 − BFi)^V

for the case when xui ≥ 0.5:

RNi = RFi^V
GNi = GFi^V
BNi = BFi^V

the exponent V being calculated in both cases pixel by pixel from the values xui of the blurred grayscale image (contrast mask) and the gain parameter p, which has been calculated by applying the arithmetic mean value X̄.
The dark parts of the image are enhanced in this case without too great a change to the bright image parts.
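The two-case balancing above can be sketched as follows. Since the formula for the exponent V is not reproduced in the text, V is taken here as a precomputed per-pixel array; this interface is hypothetical.

```python
import numpy as np

# Pixel-by-pixel balancing of the original [F] against the contrast mask [U].
# The exponent V depends on x_ui and the gain parameter p; its formula is not
# reproduced in the description, so it is passed in precomputed (hypothetical).
def balance(rgb_f, u, v):
    """Apply the two-case power law from the description to each channel."""
    dark = (u < 0.5)[..., None]     # case x_ui < 0.5: brighten dark regions
    v3 = v[..., None]               # broadcast the per-pixel exponent over R, G, B
    brightened = 1.0 - (1.0 - rgb_f) ** v3
    dimmed = rgb_f ** v3
    return np.where(dark, brightened, dimmed)
```

With V = 1 everywhere, both branches reduce to the identity, which is a quick sanity check that dark and bright regions join without a seam.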
Finally, the floating point numbers of the R, G and B values are converted back into integers for each pixel by means of multiplication by 65,536.0:

R = R*65,536.0, G = G*65,536.0, B = B*65,536.0
The method cycles explained in more detail above are schematically illustrated in a self-explanatory fashion in the accompanying drawings.
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.
Number | Date | Country | Kind
---|---|---|---
10 2011 055 269.3 | Nov 2011 | DE | national
This nonprovisional application is a National Stage of International Application No. PCT/DE2012/100345, which was filed on Nov. 10, 2012, and which claims priority to German Patent Application No. 10 2011 055 269.3, which was filed in Germany on Nov. 11, 2011, and which are both herein incorporated by reference.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/DE2012/100345 | 11/10/2012 | WO | 00 | 5/12/2014