Image processing method and device

Information

  • Patent Application
  • Publication Number
    20050185859
  • Date Filed
    September 20, 2004
  • Date Published
    August 25, 2005
Abstract
The invention serves for reproducing, at a different scale, original images that are stored in the form of pixels. The degree of mutual overlap between the pixels of an output device and the pixels of the original image is determined and used to calculate the intensity values for the area elements (pixels) of the output device from the intensity values of the corresponding original pixels in the form of a weighted arithmetic mean. If the original image is reproduced in reduced form (conversion factor v > 1), the invention makes it possible to achieve a significant improvement of the signal/noise ratio of noisy original images.
Description

The invention pertains to an image processing method and device for reproducing original images that are stored in the form of pixels at a different scale.


Images frequently need to be reproduced on output devices, such as monitors and printers, at a different scale relative to the original image, i.e., in enlarged or reduced form. The original image information is represented by intensity values that are stored at addresses of an electronic storage medium, and the addresses of the image intensity values are assigned to certain area elements of the image that are also referred to as pixels. It is common practice to store the intensity values of the individual pixels point-by-point in a row, with the rows being stored successively, i.e., in the form of a matrix of columns and rows. In a 1:1 reproduction, each pixel of the output device reflects the intensity value of the associated pixel of the original image. In the known Nearest Neighbor Interpolation method, only the intensity values of those area elements which spatially lie closest to the pixels of the output device are utilized. When reproducing a reduced image with this interpolation method, for example at a scale of 1:2, only every fourth pixel of the original image would be reproduced, and the originally existing image information would not be utilized in its entirety.


This also applies, in principle, to known image processing methods of higher order, e.g., the bilinear interpolation or bicubic interpolation methods, in which the intensity values of the pixels of the output device are calculated in a distance-weighted fashion relative to the spatial position of the original pixels. One common aspect of all these methods is that the stored intensity values for the pixels of the original image are processed in the form of raster points, and that the two-dimensional size of the pixels that corresponds to their physical reality is not part of the image information utilized. However, it has been recognized that an improved reproduction, by comparison to this known state of the art, can be achieved if the intensity values of the pixels are not processed in a distance-weighted, but rather in an area-weighted fashion.


Consequently, the invention is based on the objective of disclosing a method of the initially mentioned type that makes it possible to easily achieve an optimal and complete utilization of the original image information for any arbitrary two-dimensional configuration of pixels and any conversion scale. This objective is attained with the characteristics disclosed in the independent claim.


When reproducing reduced images, the invention provides the additional advantage that the signal/noise ratio of the respectively assigned pixels of the output image is significantly increased when processing noisy original images.




The invention, as well as further embodiments thereof defined in the dependent claims, is described in greater detail below with reference to the figures, wherein



FIG. 1 shows image processing by means of the known Nearest Neighbor Interpolation method;



FIG. 2 shows the method according to the invention;



FIG. 3 shows a special variation of the method according to the invention, and



FIG. 4 shows a device for carrying out the method according to the invention.




In the Nearest Neighbor Interpolation method indicated in FIG. 1, the reference symbols S1,1 through S4,4 identify intensity values that are assigned to the square pixels of an original image section and stored at individual memory addresses. If the image is reduced by a scale of 1:2 on a monitor or printer that also outputs the image with square pixels, only the hatched intensity values S2,2, S2,4, S4,2, S4,4 would be reproduced, i.e., ¾ of the total image information available would not be taken into account.
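
For illustration only (not part of the patent text), the following Python sketch reproduces this 1:2 Nearest Neighbor reduction on a hypothetical 4×4 block of intensity values; the array contents and the slicing-based selection are assumptions made here.

```python
import numpy as np

# Hypothetical 4x4 block of stored intensity values, standing in for
# S1,1 ... S4,4 of FIG. 1 (row-major, as described for the storage scheme).
S = np.arange(1, 17, dtype=float).reshape(4, 4)

# Nearest Neighbor reduction at scale 1:2: every output pixel simply takes
# the intensity of the closest original pixel, i.e. only every second sample
# in each direction survives and 3/4 of the stored information is discarded.
S_nn = S[1::2, 1::2]      # corresponds to the hatched values S2,2, S2,4, S4,2, S4,4

print(S_nn)               # [[ 6.  8.] [14. 16.]]
```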


The method according to the invention is schematically illustrated in FIG. 2. In this example, the original image consists of adjoining honeycomb-shaped area elements (pixels), and the output device outputs square area elements with a side length v in the x-direction and in the y-direction. If $b_e$ and $h_e$ identify the width and the height of the original image and $b_a$ and $h_a$ identify the width and the height of the output device, the conversion factor for identical aspect ratios between the original and the output image is $v = v_x = b_e/b_a = v_y = h_e/h_a$, and a reproduction at the scale of 1:v is produced.


For example, the area element (pixel) in the upper left corner of the output device is initially discussed, for which $\tilde{i} = 1$ and $\tilde{j} = 1$ apply. This area element overlaps the honeycomb-shaped areas of the original image that are identified by the reference symbols A1 through A4, with the respective degrees of mutual overlap being defined as $g_1 = 1$, $g_2 = \Delta A_2/A_2$, $g_3 = \Delta A_3/A_3$ and $g_4 = \Delta A_4/A_4$. Stored intensity values $S_i$ are assigned to and correspond to the areas $A_i$ of the original image, and the intensity value $\tilde{S}_{\tilde{i},\tilde{j}}$ to be assigned to each individual pixel of the output device is determined as the weighted arithmetic mean of these intensity values $S_i$, wherein the respective degree of overlap $g_i$ is used as the weighting factor. In the discussed example, this means
$$
\tilde{S}_{\tilde{i}=1,\tilde{j}=1} = \frac{g_1 S_1 + g_2 S_2 + g_3 S_3 + g_4 S_4}{g_1 + g_2 + g_3 + g_4} = \frac{1}{\sum_{i=1}^{4} g_i}\Biggl(\sum_{i=1}^{4} g_i S_i\Biggr). \qquad \text{Equation (1)}
$$


The above-described step is analogously repeated for the remaining pixels of the output device. $\sum_i g_i = v^2$ always applies with respect to pixels of the output device that do not adjoin the edges, i.e., pixels that are entirely overlapped by area elements of the original image, where $v^2$ corresponds to the area of the output pixels.


If the original image consists of rectangular pixels and the output device also outputs rectangular pixels, the calculation of the signal intensities of the output pixels can be generally formulated in an algorithmic fashion as shown below:
$$
\tilde{S}_{\tilde{i},\tilde{j}} = \frac{1}{v^2}\Biggl(\sum_{i=1}^{n_y}\sum_{j=1}^{n_x} g_{\tilde{i},\tilde{j},i,j}\,S_{i,j}\Biggr), \qquad \text{Equation (2)}
$$

where $v$ is the conversion factor, $g_{\tilde{i},\tilde{j},i,j}$ is the area-related fraction of the $i,j$-th area element of the original image within the $\tilde{i},\tilde{j}$-th area element of the output device, and $n_x$, $n_y$ refer to the number of columns and rows in the original image.
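
As a non-authoritative illustration of this area-weighted averaging, the following Python sketch implements Equation (2) for square pixels with the same conversion factor v in both directions; only original cells with non-zero overlap are visited, which already anticipates the simplification of Equation (3) below. The function name area_weighted_reduce, the nested-loop structure and the use of NumPy are assumptions made here, not part of the patent.

```python
import numpy as np

def area_weighted_reduce(S, v):
    """Area-weighted reduction of image S by a conversion factor v > 1.

    Each output pixel corresponds to a v-by-v square of the original grid
    (measured in original-pixel units); its intensity is the overlap-weighted
    mean of the original intensities it covers, cf. Equations (2) and (3).
    """
    ny, nx = S.shape
    ny_out, nx_out = int(ny / v), int(nx / v)
    out = np.zeros((ny_out, nx_out))
    for i_out in range(ny_out):                  # output row index (0-based)
        y0, y1 = i_out * v, (i_out + 1) * v      # vertical interval limits
        for j_out in range(nx_out):              # output column index (0-based)
            x0, x1 = j_out * v, (j_out + 1) * v  # horizontal interval limits
            acc = 0.0
            for i in range(int(np.floor(y0)), min(ny, int(np.ceil(y1)))):
                gy = min(y1, i + 1) - max(y0, i)          # overlap with row i
                for j in range(int(np.floor(x0)), min(nx, int(np.ceil(x1)))):
                    gx = min(x1, j + 1) - max(x0, j)      # overlap with column j
                    acc += gy * gx * S[i, j]              # g_{i~,j~,i,j} * S_{i,j}
            out[i_out, j_out] = acc / (v * v)             # normalise by the pixel area v^2
    return out

# Example: the 4x4 block of FIG. 1 reduced at 1:2 - every output value is the
# mean of a 2x2 block, so all stored intensities contribute.
S = np.arange(1, 17, dtype=float).reshape(4, 4)
print(area_weighted_reduce(S, 2.0))   # [[ 3.5  5.5] [11.5 13.5]]
```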


When reproducing a reduced image (v>1), it can be demonstrated that the method according to the invention makes it possible to process noisy original images such that the signal/noise ratio of the output pixels is significantly increased, i.e., improved. If an image is reproduced that is uniformly exposed to a mean of $N$ X-ray photons per area element, in which each area element consequently has a signal intensity $S_i = N$, a statistical noise $R_i = \sqrt{N}$ and therefore a signal/noise ratio $S_i/R_i = \sqrt{N}$, and in which the pixels of the input image are identified by the index $i$ analogously to FIG. 2, the intensity of the output pixel, according to a generalization of Equation (1), becomes
$$
\tilde{S}_{\tilde{i},\tilde{j}} = \Biggl(\sum_i g_i S_i\Biggr) \Big/ \sum_i g_i = N
$$

and the corresponding signal/noise ratio, according to Gaussian error propagation, becomes

$$
\tilde{S}_{\tilde{i},\tilde{j}} \big/ \tilde{R}_{\tilde{i},\tilde{j}} = \sqrt{N}\,\Biggl(\sum_i g_i \Big/ \sqrt{\sum_i g_i^2}\Biggr).
$$

If $v > 1$, the value of the parenthetical term is always greater than one; it represents the factor by which the signal/noise ratio of the output image is improved by comparison to the original image. The following table lists these factors for different values of $v$ for the individual image processing methods.

Conversion factor v                  1.5   2   3   4   5   6   7   8
Method according to the invention    1.8   2   3   4   5   6   7   8
Nearest Neighbor method              1     1   1   1   1   1   1   1
Bilinear interpolation               1.6   2   1   2   1   2   1   2
Bicubic interpolation                1 ≦ factor ≦ 3


According to this table, only the bilinear interpolation method makes it possible to achieve an improvement of the signal/noise ratio for v=2 that is equivalent to that of the method according to the invention. In all other instances, the method according to the invention results in a less noisy reduction of the original images. This is of significant importance, e.g., with respect to medical X-ray images, because it enables physicians to make a more precise diagnosis.
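
The improvement factors listed in the table can be checked numerically. The following sketch is my own verification, assuming an interior output pixel whose weights are the outer product of the one-dimensional overlaps; for non-integral v the exact factor can depend on the position of the output pixel relative to the original grid, which is why only one representative position is evaluated.

```python
import numpy as np

def overlap_vector(v, start=0.0):
    """1-D overlaps of the output interval [start, start + v) with the unit
    cells of the original grid."""
    lo, hi = start, start + v
    cells = range(int(np.floor(lo)), int(np.ceil(hi)))
    return np.array([min(hi, c + 1) - max(lo, c) for c in cells])

def snr_gain(v):
    """Parenthetical factor sum(g) / sqrt(sum(g^2)) for one output pixel."""
    s = overlap_vector(v)        # overlaps in the column direction
    g = np.outer(s, s)           # two-dimensional weights g = s_i * z_j
    return g.sum() / np.sqrt((g ** 2).sum())

for v in (1.5, 2, 3, 4, 5):
    print(v, round(snr_gain(v), 2))   # 1.5 -> 1.8, integral v -> v
```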


If the original image and the output device both feature rectangular or square area elements whose widths and heights are proportional to one another, the calculation of the signal intensities can be simplified, by comparison to the calculation according to Equation (2), as follows:
$$
\tilde{S}_{\tilde{i},\tilde{j}} = \frac{1}{v^2}\Biggl(\sum_{i=\mathrm{INT}(v(\tilde{i}-1)+1)}^{\mathrm{RUND}(v\tilde{i})}\;\sum_{j=\mathrm{INT}(v(\tilde{j}-1)+1)}^{\mathrm{RUND}(v\tilde{j})} g_{\tilde{i},\tilde{j},i,j}\,S_{i,j}\Biggr). \qquad \text{Equation (3)}
$$


In this case, the sum of a smaller number of addends is formed. The function INT conventionally refers to rounding the respective argument down to the next lower integer value, while the function RUND used in the determination according to the invention is defined as RUND(a) = INT(a) + 1 − INT(1 + INT(a) − a), where a represents the respective argument; for integral arguments RUND(a) = a, while non-integral arguments are rounded up to the next higher integer.
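
A minimal sketch (my own reading, not patent code) of INT and RUND as defined above; it checks the interval limits that occur for v = 1.6 and the output pixel ĩ = 2 of FIG. 3.

```python
import math

def INT(a):
    """Round the argument down to the next lower integer (floor)."""
    return math.floor(a)

def RUND(a):
    """RUND(a) = INT(a) + 1 - INT(1 + INT(a) - a) as defined in the text;
    integral arguments are returned unchanged, all others are rounded up."""
    return INT(a) + 1 - INT(1 + INT(a) - a)

# Interval limits for v = 1.6 and the output pixel i~ = 2 (cf. FIG. 3):
v, i_out = 1.6, 2
print(INT(v * (i_out - 1) + 1))   # 2 -> first contributing row of the original image
print(RUND(v * i_out))            # 4 -> last contributing row of the original image
print(RUND(3.0), RUND(3.2))       # 3 4 -> acts like a ceiling for non-integral arguments
```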



FIG. 3 schematically shows the upper left corner region of an original image that is composed of square pixels whose side lengths each have the value 1. Square pixels of the output device, which are indicated by hatching and whose side lengths have the value 1.6 due to the conversion factor v = 1.6 used (reduction scale 1:v = 1:1.6), are assigned to these pixels in a scaled fashion. This means that each pixel of the output device overlaps several pixels of the original image to a certain degree; this degree of overlap is assigned to said pixels and is illustrated in the figure as the weighting factor for the corresponding areas.


The weighting factors shown were calculated by determining, from the reduction scale, the interval limits of the $i,j$-th area elements of the original image indicated on the left and at the top as well as of the $\tilde{i},\tilde{j}$-th area elements of the output device, respectively calculating the overlap in the row direction and the column direction, and forming the product thereof. This is described in greater detail below with reference to the pixel of the output device ($\tilde{i}=2$, $\tilde{j}=2$) that is bordered by a bold line in FIG. 3, and carried out analogously for all other area elements of the output device. At a reduction factor of $v = 1.6$, the interval limits of this pixel lie at $v\cdot(\tilde{i}-1) = v = 1.6$ and $v\cdot\tilde{i} = 2v = 3.2$ in the vertical direction and at $v\cdot(\tilde{j}-1) = v = 1.6$ and $v\cdot\tilde{j} = 2v = 3.2$ in the horizontal direction. The weighting factors are now calculated for the ranges $i = \mathrm{INT}(v\cdot(\tilde{i}-1)+1) = 2$ to $i = \mathrm{RUND}(v\cdot\tilde{i}) = 4$ and $j = \mathrm{INT}(v\cdot(\tilde{j}-1)+1) = 2$ to $j = \mathrm{RUND}(v\cdot\tilde{j}) = 4$. This means that only the weighting factors $g$ with the indices $i = 2, 3, 4$ and $j = 2, 3, 4$ need to be calculated in the chosen example. A column vector with elements $s_i$ and a row vector with elements $z_j$ are calculated that contain the overlap of this area element of the output device in the column direction and in the row direction, respectively. The first element of the column vector contains the value $s_{\mathrm{INT}(v\cdot(\tilde{i}-1)+1)} = s_2 = \mathrm{INT}(v\cdot(\tilde{i}-1)+1) - v\cdot(\tilde{i}-1)$, the last element contains the value $s_{\mathrm{RUND}(v\cdot\tilde{i})} = s_4 = v\cdot\tilde{i} - (\mathrm{RUND}(v\cdot\tilde{i})-1)$, and the element that lies in between has the value 1. Analogously, the first element of the row vector has the value $z_{\mathrm{INT}(v\cdot(\tilde{j}-1)+1)} = z_2 = \mathrm{INT}(v\cdot(\tilde{j}-1)+1) - v\cdot(\tilde{j}-1)$, the last element has the value $z_{\mathrm{RUND}(v\cdot\tilde{j})} = z_4 = v\cdot\tilde{j} - (\mathrm{RUND}(v\cdot\tilde{j})-1)$, and the element that lies in between has the value 1. This results in $s_2 = 2 - 1.6 = 0.4$, $s_3 = 1$, $s_4 = 3.2 - 3 = 0.2$ as well as $z_2 = 2 - 1.6 = 0.4$, $z_3 = 1$, $z_4 = 3.2 - 3 = 0.2$. The weighting factors resulting from these column and row factors are defined as $g_{\tilde{i},\tilde{j},i,j} = s_i z_j$, i.e., explicitly in numerical terms for the pixel in question:

$g_{2,2,2,2} = 0.4 \cdot 0.4 = 0.16$, $g_{2,2,2,3} = 0.4 \cdot 1 = 0.4$, $g_{2,2,2,4} = 0.4 \cdot 0.2 = 0.08$,
$g_{2,2,3,2} = 1 \cdot 0.4 = 0.4$, $g_{2,2,3,3} = 1 \cdot 1 = 1$, $g_{2,2,3,4} = 1 \cdot 0.2 = 0.2$,
$g_{2,2,4,2} = 0.2 \cdot 0.4 = 0.08$, $g_{2,2,4,3} = 0.2 \cdot 1 = 0.2$, $g_{2,2,4,4} = 0.2 \cdot 0.2 = 0.04$.


These weighting factors are then used for calculating the reproduced signal intensity $\tilde{S}_{\tilde{i}=2,\tilde{j}=2}$ for the corresponding pixel of the output device in accordance with Equation (3). The signal intensities of the remaining pixels of the output device are determined analogously.
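
The worked example can be retraced numerically. The sketch below (an illustration under my own assumptions, not the patent's code) computes the overlap vectors s and z for the bordered pixel and their products, reproducing the weighting factors listed above up to floating-point rounding.

```python
import numpy as np

v = 1.6                    # conversion factor (reduction scale 1:1.6)
i_t, j_t = 2, 2            # output pixel i~ = 2, j~ = 2 (1-based, as in FIG. 3)

def INT(a): return int(np.floor(a))
def RUND(a): return INT(a) + 1 - INT(1 + INT(a) - a)

def overlap(v, k_out, k):
    """Overlap of the k_out-th output interval with the k-th original cell."""
    return min(v * k_out, k) - max(v * (k_out - 1), k - 1)

# Column vector s (overlaps in the column direction) and row vector z
# (overlaps in the row direction) for the bordered output pixel.
s = {i: overlap(v, i_t, i) for i in range(INT(v * (i_t - 1) + 1), RUND(v * i_t) + 1)}
z = {j: overlap(v, j_t, j) for j in range(INT(v * (j_t - 1) + 1), RUND(v * j_t) + 1)}
print(s)   # approximately {2: 0.4, 3: 1.0, 4: 0.2}
print(z)   # approximately {2: 0.4, 3: 1.0, 4: 0.2}

# Weighting factors g_{2,2,i,j} = s_i * z_j, cf. the values listed above.
g = {(i, j): s[i] * z[j] for i in s for j in z}
print(round(g[2, 2], 2), round(g[3, 3], 2), round(g[4, 4], 2))   # 0.16 1.0 0.04
print(round(sum(g.values()), 2))                                 # 2.56 = v**2
```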


If the reduction factor is a rational number, i.e., $v = n/m$ with natural numbers $n$ and $m$, as is the case with $1.6 = 8/5$ in the example illustrated in FIG. 3, the weighting factors are respectively repeated after every $m$ rows or columns of the output device (i.e., after every $n$ rows or columns of the original image), and $g_{\tilde{i},\tilde{j},i,j} = g_{\tilde{i}+m,\tilde{j},i+n,j}$ as well as $g_{\tilde{i},\tilde{j},i,j} = g_{\tilde{i},\tilde{j}+m,i,j+n}$ apply. There also exist symmetries with respect to the horizontal straight line at $i = n/2$ and the vertical straight line at $j = n/2$, which respectively lead to the relations $g_{m+1-\tilde{i},\tilde{j},n+1-i,j} = g_{\tilde{i},\tilde{j},i,j}$ and $g_{\tilde{i},m+1-\tilde{j},i,n+1-j} = g_{\tilde{i},\tilde{j},i,j}$. In addition, $g_{\tilde{i},\tilde{j},i,j} = g_{\tilde{j},\tilde{i},j,i}$ always applies due to the symmetry relative to the diagonal.
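
The periodicity relation can be confirmed with exact rational arithmetic; the short sketch below is my own check for v = 8/5. The index-swap relation follows directly from the fact that every two-dimensional weight is the product of a row overlap and a column overlap computed with the same one-dimensional rule.

```python
from fractions import Fraction

def g1d(v, k_out, k):
    """1-D overlap of the k_out-th output interval with the k-th original cell
    (both indices 1-based), evaluated in exact rational arithmetic."""
    return max(Fraction(0), min(v * k_out, k) - max(v * (k_out - 1), k - 1))

v = Fraction(8, 5)                       # v = n/m = 1.6 as in FIG. 3
n, m = v.numerator, v.denominator        # n = 8, m = 5

# Periodicity: shifting the output index by m and the original index by n
# reproduces the same overlap, hence the same weighting factors g = s_i * z_j.
print(all(g1d(v, k_out, k) == g1d(v, k_out + m, k + n)
          for k_out in range(1, 2 * m + 1) for k in range(1, 2 * n + 1)))   # True
```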

Claims
  • 1. A method for processing original images that are stored in the form of pixels and need to be output at a different scale, characterized by the fact that all pixels of the original image are assigned to corresponding pixels of the output device in a scaled fashion, by the fact that the degree of mutual overlap ($g_i = \Delta A_i/A_i$) of the pixels is determined, and by the fact that the intensity values ($\tilde{S}_{\tilde{i},\tilde{j}}$) to be assigned to the individual pixels of the output device are determined and utilized in the form of the weighted arithmetic mean of the intensity values ($S_i$) of the respective original pixels, wherein the respective degrees of overlap are used as weighting factors ($g_i$) (FIG. 2).
  • 2. The method according to claim 1 with square or rectangular pixels of proportional width/height in the original image and the output device, characterized by the fact that for a conversion factor v, the intensity values of the output pixels are determined and utilized in accordance with the relation
    $$
    \tilde{S}_{\tilde{i},\tilde{j}} = \frac{1}{v^2}\Biggl(\sum_{i=\mathrm{INT}(v(\tilde{i}-1)+1)}^{\mathrm{RUND}(v\tilde{i})}\;\sum_{j=\mathrm{INT}(v(\tilde{j}-1)+1)}^{\mathrm{RUND}(v\tilde{j})} g_{\tilde{i},\tilde{j},i,j}\,S_{i,j}\Biggr).
    $$
  • 3. A device for carrying out the method according to claim 1 or 2, with a memory (2) that contains the intensity values of the original image, a central processor unit (1) that accesses said memory and calculates the signal intensities for an output device (4), and an input device (3) for inputting the conversion factor.
  • 4. The device according to claim 3, characterized by an intermediate memory (5) for storing the weighting factors.
Priority Claims (1)
Number Date Country Kind
103 45 278 8 Sep 2003 DE national