Tone scale adjustment of digital images

Information

  • Patent Application
  • Publication Number
    20030058464
  • Date Filed
    August 22, 2002
  • Date Published
    March 27, 2003
Abstract
A method of processing data from a digital image to enhance the neutral tonescale estimates a neutral offset, a neutral gain and a neutral gamma from the input data and uses these estimated values to transform the input image data.
Description


CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This is a U.S. original patent application which claims priority on Great Britain patent application No. 0120489.0 filed Aug. 23, 2001.



FIELD OF THE INVENTION

[0002] The invention relates to digital image processing, in particular to a method of enhancing the neutral tonescale of images that are encoded in a non-logarithmic space.



BACKGROUND OF THE INVENTION

[0003] A typical digital imaging system captures an image and after various manipulations, both automatic and interactive, displays an output image on a monitor, or prints out a hard copy of the image. Digital manipulations include algorithms to restore image degradations or to enhance image quality, for example by correcting for image balance or tonescale problems or for improving sharpness or reducing image structure artifacts. This invention is concerned with correcting the tonescale of images which, for example, have been captured in sub-optimal illumination conditions, or which have suffered a subsequent tonescale degradation.


[0004] There are a number of algorithms that attempt to correct digitally the tonescale of poor quality images by means of operator intervention, for example U.S. Pat. No. 5,062,058, which requires operator selection of highlight and shadow points. There are some circumstances in which user or operator intervention is appropriate, but generally this introduces a subjective variation in resulting image quality, along with reduced productivity.


[0005] Additionally, there are a number of algorithms that attempt to overcome these problems by automatically correcting the tonescale of poor quality images, such as ‘Auto-levels’ in Adobe Photoshop and U.S. Pat. No. 4,984,071. These generally include a linear tonescale correction in addition to a color balance correction, based on the determination of highlight and shadow points from a cumulative histogram. Hence the methods are based on gain and offset, and are typically aggressive, producing an unnatural tonescale with clipping in the shadow and highlight portions of the tonescale.


[0006] Other algorithms further modify the image tonescale by non-linear means. U.S. Pat. No. 5,812,286, for example, pins the ‘black’ and ‘white’ point of each color and uses the median to modify a gamma value which provides a power-law transformation to the image.


[0007] U.S. Pat. No. 5,265,200 describes a method for automatic image saturation, gamma and exposure correction. The method disclosed for neutral tonescale transformation is implemented using a best-fit function, typically a second-order polynomial, which when applied to the image histogram, drives the histogram towards that of an ‘ideal’ image without artifacts that result from the capture process.


[0008] Both of these algorithms may, however, produce a tonescale that appears to be unnatural, and may produce either clipping or loss of detail in the shadow and highlight portions of the tonescale, or unacceptably low contrast in the central portion of the tonescale. Additionally, where there are large regions of the image that are relatively low in contrast or detail these areas will typically bias the analysis resulting in poor tonescale correction.


[0009] U.S. Pat. No. 5,822,453 describes a method for estimating and adjusting digital contrast in a logarithmic metric. This method computes a standard deviation for a histogram of sampled pixel values, more specifically those pixels that relate to high contrast (edge) information in the scene. The use of edge pixels minimizes effects of overemphasis of large flat regions or areas of texture in the scene. A target contrast is estimated by comparing the standard deviation with a predetermined contrast, and thereby produces a final reproduction tonescale transformation curve within pre-specified contrast limits. This method is logarithmically based and the algorithm is fairly highly tuned to that metric, making it inappropriate for the general non-logarithmic usage described earlier.


[0010] The invention aims to provide a method to digitally restore or enhance the neutral tonescale of images that does not depend on manual adjustments of brightness or contrast. The invention also aims to provide a method of restoring tonescale which provides a tonescale of natural appearance, and which minimizes clipping in the highlight portions of the image, whilst optimizing mid-tone contrast.



SUMMARY OF THE INVENTION

[0011] Analysis of the image provides a variety of parameters that are used to adaptively drive the tonescale transformation. These parameters include a neutral offset, a neutral gain and a neutral gamma (power law adjustment). The aim of these parameters is to provide a transformation that drives the statistics of each scene closer to the aim statistics averaged over all scenes encoded in the required metric.


[0012] According to the present invention there is provided a method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of:


[0013] estimating a neutral offset from the input image data,


[0014] estimating a neutral gain from the input image data,


[0015] estimating a neutral gamma from the input image data,


[0016] and using the estimated values to transform the input image data.


[0017] The invention further provides a method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of:


[0018] estimating a neutral offset from input image data,


[0019] estimating a neutral gain from the input image data,


[0020] estimating a neutral gamma from the input image data,


[0021] using the estimated values to calculate a shaper LUT, the shaper LUT being a function of the estimated values,


[0022] creating a tonescale transformation LUT from the estimated values and the shaper LUT and


[0023] using the tonescale transformation LUT to transform the input image data.


[0024] The invention yet further provides a method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of:


[0025] estimating a neutral offset from the input image data, estimating a neutral gain from the input image data,


[0026] estimating a neutral gamma from the input image data,


[0027] calculating a shaper LUT using the estimated values,


[0028] creating a tonescale transformation LUT from the estimated values and the shaper LUT,


[0029] calculating the first differential of the tonescale transformation LUT to produce a second LUT,


[0030] performing a neutral and color difference rotation on the input image data,


[0031] using the tonescale transformation LUT to perform a neutral transformation of the rotated input image data,


[0032] using the second LUT to perform an adaptive saturation transformation of the rotated input image data, and


[0033] rotating the transformed input data to provide the output image data.


[0034] The transformation may be applied analytically, using a power-law equation which is a function of the above variables; for example the output code value v′(i,j) for every input code value v(i,j) at pixel position (i,j) in the image may be defined as:




v′(i,j) = ((v(i,j) − neutral offset) × neutral gain)^(neutral gamma)
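
By way of illustration only, a minimal Python sketch of this analytic form, assuming floating-point image data held in a numpy array in the metric's code-value range (the function and parameter names are illustrative and not taken from the specification):

import numpy as np

def neutral_tonescale(v, neutral_offset, neutral_gain, neutral_gamma):
    # v'(i,j) = ((v(i,j) - neutral offset) * neutral gain) ^ neutral gamma
    # Negative values are clipped to zero before the power so the result stays real.
    shifted = np.clip(v - neutral_offset, 0.0, None)
    return (shifted * neutral_gain) ** neutral_gamma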



[0035] In one realization of the algorithm there is benefit in automatically generating a LUT to provide the transformation which includes shoulder shaping adaptive to the gain and gamma parameters, and which maintains the range of the image data to lie within that range defined by the metric.


[0036] The tonescale may be applied to each separation of a color image, but preferred performance is achieved by applying the tonescale to a neutral signal derived from the image and modifying the chrominance components as described in U.S. Pat. No. 6,438,264, the contents of which are herein incorporated by reference. The neutral tonescale restoration may alternatively be applied to a low-pass filtered form of the image, and the complementary high-frequency image information subsequently added, so as to minimize the impact of the tonescale transformation on image noise.


[0037] This invention is based on a non-linear viewing adaptation model that is related to the human visual system's adaptation to viewing conditions, including image-adaptive highlight shaping as a preferred feature, thus providing a more natural tonescale than the prior art. This preferred feature has the further advantage of providing a tonescale function in the highlight portions of the image that minimizes clipping, whilst optimizing, on an image-adaptive basis, the rendition of highlight detail. The highlight shaping also enables an optimum highlight tonescale without unduly impacting the required tonescale correction for the mid-tone and shadow portions of the image.


[0038] A further advantage of the method is to provide a method of tonescale restoration that includes analysis and transformation of the image such that the statistics of the image pixel values are explicitly related to the population aims of the required metric encoding (like sRGB or ROMM), with a rendering model such as is required in ROMM. This enables the algorithm to be easily modified, by means of parameter changes that correspond to the population aims in the required metric. The tonescale restoration can therefore be applied in the native metric of the image data, minimizing metric transformations and hence also minimizing implementation time and potential quantization artifacts. Alternatively the tonescale restoration may be applied in an optimum image metric for a particular system or set of restoration algorithms.


[0039] The image analysis makes use of averages of the upper and lower portions of the image histogram, thereby reducing impact of outliers, for example noise, dropout, scratches or dust, on the analysis. A further preferred feature is the use of selected image pixel values, for example those that occur at image edges, in the analysis thereby minimizing the effects of overemphasis of large flat regions or areas of low-contrast texture in the scene.







BRIEF DESCRIPTION OF THE DRAWINGS

[0040] The invention will now be described, by way of example, with reference to the accompanying drawings, in which:


[0041] FIG. 1 illustrates the preferred method of creation of a tonescale transformation LUT, including a highlight shaper;


[0042] FIG. 2 is a flow chart illustrating a method according to the invention;


[0043] FIG. 3 is a flow chart illustrating a second method according to the invention;


[0044] FIG. 4 is a flow chart illustrating a third method according to the invention; and


[0045] FIG. 5 is a perspective view of a computer system for implementing the present invention.







DETAILED DESCRIPTION OF THE INVENTION

[0046] The invention provides a method to automatically restore or enhance the neutral tonescale of images that are encoded in a non-logarithmic metric (that is, a linear or gamma space such as sRGB or ROMM). The method does not depend on manual adjustments of brightness or contrast.


[0047] Although the principles of this invention can be employed for neutral tonescale correction or enhancement in any non-logarithmic space, this description and the reduction to practice relate to corrections applied in a ROMM encoding and to providing the reference rendering required by ROMM. The usage has been primarily as part of a digital algorithm for the restoration of faded images, in particular faded reversal images. The definition of the ROMM metric and encoding is described in "Reference Input/Output Medium Metric RGB Color Encodings (RIMM/ROMM RGB)", presented by K. E. Spaulding, G. J. Woolfe and E. J. Giorgianni at the PICS 2000 Conference, Mar. 26-29, 2000, Portland, Oreg. The principle of a fixed viewing adaptation model, referred to as Lateral-Brightness Adaptation, is described in Appendix D, pp. 473 of "Digital Color Management", E. J. Giorgianni & T. E. Madden. A simplification of the principles, and the inclusion of the model in a ROMM path for color reversal, is described in U.S. Pat. No. 6,424,740, "A method and means for producing high quality digital reflection prints from transparency images". In this application the simplified viewing adaptation model is defined as a set of transformations in tristimulus (linear) space:




X′ = (αX)^γ

Y′ = (αY)^γ

Z′ = (αZ)^γ  (1)



[0048] where the contrast adaptation to dark surround is modeled by γ, and the brightness adaptation is modeled by α. In U.S. Pat. No. 6,424,740 the values for α and γ are optimized experimentally and the same values are used for all images; that is, α and γ are not image-dependent variables. This application further teaches the need for the compression of highlight and shadow portions of the tonescale such that the tristimulus values of the transparency can be adequately represented in the final print:




X′ = (H(X)·L(X)·α·X)^γ

Y′ = (H(Y)·L(Y)·α·Y)^γ

Z′ = (H(Z)·L(Z)·α·Z)^γ  (2)



[0049] where H(Y) and L(Y) are highlight and lowlight shapers, each a function of Y, and designed for use with a particular α and γ. Typically H(X)=H(Y)=H(Z), and L(X)=L(Y)=L(Z).


[0050] The present invention takes several significant steps beyond the prior art. First, it broadens the concept of the viewing adaptation model for use as an image-adaptive tonescale restoration model, hence equations (1) may be rewritten in the form:




v′(i,j) = ((v(i,j) − neutral offset) × neutral gain)^(neutral gamma)



[0051] where v′ and v are the restored and input code values for each pixel position (i,j) in an image encoded in a non-logarithmic (e.g. linear or gamma) metric. Secondly, it automatically estimates for each image, by means of image statistics, the neutral offset, neutral gain (related to α) and neutral gamma (related to γ) corrections required to transform the image towards the expected aim points averaged over all images defined in that metric. The correction could be applied analytically, as in the equation above. However, in one implementation of this method, the application of the gain and gamma corrections is achieved by means of a scene-adaptive lookup table that broadly takes the form of the viewing adaptation model, but where α, γ, and a neutral offset are scene dependent variables.


[0052] Thirdly, in the most general implementation, the method includes the highlight and lowlight shapers of equations (2). In this invention α and γ are scene-dependent variables and this introduces a difficulty, namely that an automatic method is required to generate the highlight and lowlight shapers, which also need to take different forms as a function of α and γ. U.S. Pat. No. 6,424,740 does not teach a form of the function for L(X) and H(X). Nor does it teach a method to automatically derive these shapers as a function of α and γ, since only one set of optimized (not scene-dependent) values for α and γ is required in the method it describes. The final significant step this invention takes over the prior art is the method to automatically generate a highlight shaper as a function of the other parameters of tonescale correction.


[0053] A method to automatically generate a low-light shaper is not described here—it was found experimentally to be of less significance than the highlight shaper—but a modification of the principles described here to generate a lowlight shaper would be obvious to a person skilled in the art.


[0054] The invention will now be described in more detail.


[0055] The description below provides detail of an example of a reduction to practice of this invention, namely to provide the neutral tonescale restoration component of a dyefade restoration algorithm.


[0056] While the use of the method in different image metrics such as linear, tristimulus, sRGB or ROMM will produce different values for offset, gamma and gain, these can be shown to be equivalent, and the resultant restoration transform will produce broadly similar image quality in the different metrics.


[0057] FIGS. 2, 3 and 4 are flow charts illustrating methods of the invention. Where similar steps take place in the different methods the same reference number is used in each figure.


[0058] In step S1 the image data is sub-sampled, typically by 4:1 in each dimension.


[0059] Step S1 is optional, to achieve implementation efficiency. The image data may also be prefiltered or digitally filtered as part of the sub-sampling process to minimize the impact of aliasing artifacts on the image analysis.


[0060] In step S2 the neutral offset is estimated.


[0061] The method chosen to estimate the neutral offset is to calculate the mean, or other statistical average, of a percentage of pixels having code values corresponding to the darkest portions of the scene, for each of the separations of the image. The percentage of pixels will typically be in the range 0.01 to 0.5%. The neutral offset is defined as a function of these means. In this implementation of the invention the neutral offset is given by the lowest value of the means scaled by an optimisable black offset correction factor.


[0062] The black offset correction factor will generally lie in the range 0.5-1.0. It has been found that a factor of approximately 0.95 produces optimum results in this application.
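
A hedged sketch of this estimate (step S2) in Python; the 0.1% sampling fraction is one value within the quoted 0.01-0.5% range, and the function and variable names are assumptions rather than part of the specification:

import numpy as np

def estimate_neutral_offset(separations, dark_fraction=0.001, black_offset_cf=0.95):
    # separations: sequence of 2-D arrays (e.g. r, g, b), typically already sub-sampled.
    dark_means = []
    for sep in separations:
        values = np.sort(sep, axis=None)
        n = max(1, int(round(dark_fraction * values.size)))  # darkest 0.01-0.5% of pixels
        dark_means.append(values[:n].mean())                 # mean of the darkest code values
    # Lowest separation mean, scaled by the black offset correction factor (about 0.95).
    return black_offset_cf * min(dark_means)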


[0063] In step S3 the neutral gain is estimated.


[0064] The method chosen to estimate the neutral gain is similar to that for estimating the neutral offset as described above. The method chosen is to calculate the means, or other arithmetic average, of a percentage of pixels having code values corresponding to the lightest portions of the scene, for each of the separations of the image. The percentage of pixels will typically be in the range 0.01 to 0.5%. The neutral gain is defined as a function of these means, more specifically in this implementation:
neutral gain = (((1 − gcf) × maxmean) + (gcf × imscal)) / (maxmean − neutral offset)


[0065] where gcf is the gain correction factor, an optimisable parameter; maxmean is the highest value of the separation means calculated above; and imscal describes the way in which the original image pixel values are scaled, i.e. imscal=1.0 for data scaled to lie in the range 0-1, imscal=255 for data in the range 0-255, and so on. Typically the gain correction factor lies in the range 0.25-1.0. It has been found that a factor of approximately 0.9 produces optimum results in this application.
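
Continuing the sketch for step S3, again with illustrative names and a sampling fraction chosen from within the quoted range:

import numpy as np

def estimate_neutral_gain(separations, neutral_offset, light_fraction=0.001,
                          gcf=0.9, imscal=1.0):
    light_means = []
    for sep in separations:
        values = np.sort(sep, axis=None)
        n = max(1, int(round(light_fraction * values.size)))  # lightest 0.01-0.5% of pixels
        light_means.append(values[-n:].mean())                # mean of the lightest code values
    maxmean = max(light_means)
    # neutral gain = (((1 - gcf) * maxmean) + (gcf * imscal)) / (maxmean - neutral offset)
    return (((1.0 - gcf) * maxmean) + (gcf * imscal)) / (maxmean - neutral_offset)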


[0066] In step S4 the neutral gamma is estimated.


[0067] To estimate the neutral gamma the pixel values in each of the image separations are firstly modified by the neutral gain and offset. This is done in accordance with the following equation:




v′(i,j) = (v(i,j) − neutral offset) × neutral gain



[0068] for each separation of the image.


[0069] A neutral signal n(i,j) is estimated by taking a weighted average at each pixel of the separation pixel values. In this implementation, the neutral value at each pixel is simply the sum of the separation values normalized by the number of separations, n=(r+g+b)/3.


[0070] The neutral image is scaled by dividing each pixel value by imscal, such that the pixel values lie in the range 0-1 and can therefore be transformed by a power function which effectively pins the black (pixel value=0.0) and white (pixel value=1.0) points in the tonescale. The mean, or other statistical average, of the neutral pixel values is then calculated. In the simplest case the mean is calculated with equal weighting over all the neutral pixel values. However, only a portion of the neutral pixels may be selected in the calculation of the mean, and the average may be a weighted average. This selection takes place in step S5 and is more fully described below. The neutral gamma is defined as a function of this mean, more specifically in this implementation:
neutral gamma = log(((1 − gamcf) × meanlum) + (gamcf × aimpopmean)) / log(meanlum)


[0071] where gamcf is a gamma correction factor, an optimisable parameter; meanlum is the average of the neutral pixel values, scaled 0-1, as described above; and aimpopmean corresponds to the estimated tonescale centre of the image population encoded in the relevant metric. This would be around 18% in a scene reflectance metric, and estimated to be 0.4 for the ROMM metric scaled 0-1. Typically the gamma correction factor lies in the range 0.25-1.0. A factor of approximately 0.7 produces optimum results in this application.
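
A corresponding sketch for step S4; the optional mask argument stands in for the edge-pixel selection of step S5, and the small clipping constants are an assumption added here to keep the logarithms finite:

import numpy as np

def estimate_neutral_gamma(separations, neutral_offset, neutral_gain,
                           imscal=1.0, gamcf=0.7, aimpopmean=0.4, mask=None):
    # Apply offset and gain, form the neutral as the equally weighted average of the
    # separations, and scale to 0-1 so a power law pins the black and white points.
    corrected = [(sep - neutral_offset) * neutral_gain for sep in separations]
    neu = sum(corrected) / len(corrected)
    neu = np.clip(neu / imscal, 1e-6, 1.0 - 1e-6)
    meanlum = neu[mask].mean() if mask is not None else neu.mean()
    # neutral gamma = log(((1 - gamcf) * meanlum) + (gamcf * aimpopmean)) / log(meanlum)
    return np.log(((1.0 - gamcf) * meanlum) + (gamcf * aimpopmean)) / np.log(meanlum)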


[0072] In step S5 a selection of pixels is optionally chosen to estimate the neutral gamma.


[0073] The estimation accuracy of neutral gamma may be improved by taking the mean, or some other statistical average, of the image data that corresponds to high contrast (edge) information in the original scene. This has the advantage of eliminating from the statistics unwanted emphasis to flat or highly textured portions of the scene. This is similar to the method taught in U.S. Pat. No. 5,822,453.


[0074] There are many ways to implement the selection of the edge pixels, well known in the art, but in this implementation we choose to convolve a 3×3 weighted high-pass filter with a copy of the neutral image, and to measure the standard deviation of the resulting image. A threshold is set, based on that standard deviation, and at those pixels where the deviation of the high-pass filtered image from the mean (zero) is greater in amplitude than the threshold the corresponding pixel values from the full-band analysis image are selected for further statistics.
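
A sketch of one such edge-pixel selection; the filter weights below are illustrative only, since the specification does not give the 3×3 kernel, and scipy is assumed to be available:

import numpy as np
from scipy.ndimage import convolve

def select_edge_pixels(neu, threshold_scale=1.0):
    # Convolve a 3x3 weighted high-pass filter with a copy of the neutral image.
    hp_kernel = np.array([[-1.0, -2.0, -1.0],
                          [-2.0, 12.0, -2.0],
                          [-1.0, -2.0, -1.0]]) / 12.0   # illustrative weights only
    hp = convolve(neu.astype(float), hp_kernel, mode='reflect')
    # Threshold on the standard deviation of the high-pass image; pixels whose
    # deviation from the (zero) mean exceeds it are taken as edge pixels.
    threshold = threshold_scale * hp.std()
    return np.abs(hp) > threshold   # boolean mask into the full-band analysis image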


[0075] In this implementation the selection of pixel values corresponding to image edges is used only in the estimation of gamma. However it is possible to select pixel values corresponding to image edges also in the estimation of gain and offset.


[0076] The estimation accuracy may be further improved by weighting the statistics according to the importance of picture information. In the simplest form the pixel values in the central or near central portions of the image may be given a higher statistical weighting than those at the edge of the image. This method is more fully explained in co-pending application US Serial No. ______, having the same filing date as the present application. In a more sophisticated implementation the statistics could be weighted by scene content, in particular in conjunction with an algorithm to automatically detect the main feature or features of interest in a scene.


[0077] Step S6 is optional in the method shown in FIG. 2 and will be described later.


[0078] In step S9 the estimated values of the neutral offset, the neutral gain and the neutral gamma are used to transform the input image data and provide an output image.


[0079] The equations corresponding to the image transform of FIG. 2 can be written in their simplest form, without highlight and lowlight shapers, as:




x″(i,j) = [α·x′(i,j)]^γ



[0080] where x′(i,j)=x(i,j)−neutraloffset


[0081] α=neutral gain


[0082] γ=neutral gamma


[0083] Equations 2 above may also be expressed in the most general form in terms of the values estimated by the above analysis:




x″(i,j) = [Hα(x′(i,j))·Lα(x′(i,j))·α·x′(i,j)]^γ  (3)



[0084] or, alternatively, in their preferred form, including only the highlight shaper:




x″(i,j) = [Hα(x′(i,j))·α·x′(i,j)]^γ



[0085] Here, for generality, variable x relates to the image metric, for example it equates to Y when the images are encoded in tristimulus. Furthermore, since the preferred implementation of the method is a neutral implementation, with appropriate modification to the chrominance or color difference signals as required, then x relates to a neutral signal, neu, derived from the color signals.


[0086] It will be understood by those skilled in the art that these transformations in their analytical form may be applied directly to the input pixel values to perform the tonescale transformation. However in the preferred embodiment a LUT is created, step S8, to enable the tonescale transformation to be performed. The tonescale transformation takes place in step S9 of FIG. 3. Step S8 is illustrated in FIG. 1.
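
One possible form of such a LUT (steps S8 and S9) is sketched below, assuming data scaled from 0 to imscal, a LUT of 256 entries, and an optional shaper callable built with α equal to the neutral gain; the names and sizes are illustrative, not mandated by the specification:

import numpy as np

def build_tonescale_lut(neutral_offset, neutral_gain, neutral_gamma,
                        imscal=255.0, lut_size=256, shaper=None):
    x = np.linspace(0.0, imscal, lut_size)
    xp = np.clip(x - neutral_offset, 0.0, None) / imscal      # x', scaled 0-1
    h = shaper(xp) if shaper is not None else 1.0             # optional H_alpha(x')
    y = np.clip(h * neutral_gain * xp, 0.0, 1.0) ** neutral_gamma
    return y * imscal                                         # back to code values

def apply_lut(image, lut, imscal=255.0):
    idx = np.round(image / imscal * (len(lut) - 1)).astype(int)
    return lut[np.clip(idx, 0, len(lut) - 1)]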


[0087] A third embodiment of the invention is illustrated in FIG. 4. In step S10 of FIG. 4 a neutral and color difference rotation is performed on the input image data. There are many possible forms of a neutral and color difference rotation from r,g,b color records; for this implementation the following rotation was chosen:
( neu )   (  1/3   1/3   1/3 )   ( r )
( cr1 ) = ( −1/4   1/2  −1/4 ) · ( g )    (4)
( cr2 )   ( −1/2    0    1/2 )   ( b )


[0088] It will be understood that the method could alternatively be applied to each of the r,g,b color records separately.
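
The rotation of equation 4, and the inverse rotation used later in step S14, might be coded as follows (the array layout, with the color records on the last axis, is an assumption of this sketch):

import numpy as np

ROT = np.array([[ 1/3, 1/3,  1/3],    # neu
                [-1/4, 1/2, -1/4],    # cr1
                [-1/2, 0.0,  1/2]])   # cr2
ROT_INV = np.linalg.inv(ROT)

def rgb_to_neu_cr(rgb):
    # rgb: array of shape (..., 3); returns neu, cr1, cr2 stacked on the last axis.
    return rgb @ ROT.T

def neu_cr_to_rgb(ncc):
    return ncc @ ROT_INV.T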


[0089] The present invention also solves the problem of the adaptive highlight shaper, Hα, corresponding to step S6. The adaptive highlight shaper compresses the dynamic range of the brightness-surround function by means of a piecewise function as shown in FIG. 1. The function Hα has three sections: f0, where no shaping is applied; f1, which implements the shoulder shaping; and f2, an optional section which implements a clipping of the function defined in equation 3 at the maximum value permitted by the encoding metric.


[0090] It is required that the function Hα should be continuous, and it is preferred that it should be smooth. Hence the shaping section f1 may take any form that meets these requirements, and which also modifies the tonescale appropriately. In the preferred implementation f1 takes a Gaussian form, of variance σ2. The analytic description is as follows:




f0(x) = 1  for all 0 ≤ x < x0

f1(x) = δ·exp(−(x − x0)²/(2σ²)) − δ + 1  for x0 ≤ x < x1

f2(x) = 1/(α·x)  for all x1 ≤ x



[0091] At x0 and x1 the following conditions are required to hold to provide continuity and smoothness:




f1(x1) = f2(x1)  (a)

df1(x)/dx|x=x1 = df2(x)/dx|x=x1  (b)

f0(x0) = f1(x0) = 1.0  (c)

df0(x)/dx|x=x0 = df1(x)/dx|x=x0 = 0.0  (d)



[0092] The variable δ that satisfies condition (b) is given as:
δ = (σ² / (α·x1²·(x1 − x0))) · exp((x1 − x0)² / (2σ²))    (5)


[0093] Condition (a) can be expanded as follows:
1/(α·x1) = δ·exp(−(x1 − x0)² / (2σ²)) − δ + 1


[0094] Substituting for δ from equation 5 gives:
1/(α·x1) = (σ² / (α·x1²·(x1 − x0))) · {1 − exp((x1 − x0)² / (2σ²))} + 1


[0095] Rearranging the equation above gives:
0 = (σ² / (x1·(x1 − x0))) · {1 − exp((x1 − x0)² / (2σ²))} + α·x1 − 1    (6)


[0096] Root finding methods (such as Newton-Raphson) can be applied to equation 6 in order to solve for σ for a given set of x1, x0 and α. If a solution for σ is successful then a value for δ can be found from equation 5 and a highlight shaper function Hα(x) can be generated. In this implementation of the method, intended for application in a linear metric (such as tristimulus), the values for x0 and x1 are chosen to be equal to the transmittances 10^−0.65 and 10^−0.07572 respectively; then for α=2.0, a solution of σ=0.409909 is obtained. Substituting x0=10^−0.65, x1=10^−0.07572, σ=0.409909 and α=2.0 in equation 5 gives a value of δ=0.59801. In the special case where the highlight shaper (generated with the above parameters) is multiplied by a brightness-surround function with α=2.0 and γ=0.9, the resulting shaped brightness-surround curve closely approximates the shape of the fixed viewing adaptation model described in U.S. Pat. No. 6,424,740.


[0097] When x1 and σ are fixed at x1=10^−0.07572 and σ=0.409909 then highlight shapers, Hα(x), can be generated automatically as a function of α as follows (an illustrative sketch in code follows the three steps):


[0098] 1. Solve equation 6 for x0.


[0099] 2. Find δ by substituting the value for x0 found in step 1 above into equation 5.


[0100] 3. Generate a piecewise shoulder shaper function using x0 and δ calculated from steps (1) and (2) respectively.
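
The three steps above might be realized as in the sketch below; this is illustrative only, using scipy's bracketed root finder in place of Newton-Raphson and assuming an identity shaper when α·x1 ≤ 1 (a case the text does not address):

import numpy as np
from scipy.optimize import brentq

def make_highlight_shaper(alpha, x1=10 ** -0.07572, sigma=0.409909):
    # Steps 1-3 above: solve equation 6 for x0, obtain delta from equation 5,
    # then assemble the piecewise shoulder shaper H_alpha(x).
    if alpha * x1 <= 1.0:
        # 1/(alpha*x) never falls below 1 on (0, x1], so no shoulder shaping is
        # applied (an assumption of this sketch; the text does not cover this case).
        return lambda x: np.ones_like(np.asarray(x, dtype=float))

    def eq6(x0):
        d = x1 - x0
        return (sigma ** 2 / (x1 * d)) * (1.0 - np.exp(d ** 2 / (2 * sigma ** 2))) + alpha * x1 - 1.0

    lo, hi = 1e-6, x1 - 1e-6
    x0 = lo if eq6(lo) >= 0.0 else brentq(eq6, lo, hi)        # step 1 (guarded bracket)
    d = x1 - x0
    delta = (sigma ** 2 / (alpha * x1 ** 2 * d)) * np.exp(d ** 2 / (2 * sigma ** 2))  # step 2, eq. 5

    def H(x):                                                 # step 3
        x = np.asarray(x, dtype=float)
        shoulder = delta * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) - delta + 1.0
        clip = 1.0 / (alpha * np.maximum(x, x1))
        return np.where(x < x0, 1.0, np.where(x < x1, shoulder, clip))

    return H

# For alpha = 2.0 this recovers x0 close to 10**-0.65 and delta close to 0.598,
# consistent with the worked example in paragraph [0096].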


[0101] Where this method is to be implemented directly in a non-linear metric, such as ROMM, the ROMM-equivalent values should be substituted. For example in the preferred implementation, the value x1 is the ROMM value at which highlight clipping occurs, ROMM(10^−0.07572) ≈ 0.9, for ROMM values scaled to lie in the range 0-1.


[0102] The neutral tonescale correction may be implemented by applying a transformation of the type described in equation 3 to a neutral separation of the image only, or alternatively to the r,g,b separations of an image. However, modifications to neutral tonescale also modify color saturation and the perception of noise and sharpness in an image.


[0103] U.S. Pat. No. 6,438,264 discloses a saturation transform adaptive to the neutral tonescale transform. This method, primarily based in a logarithmic metric, applies a transformation to color difference signals, for example cr1 and cr2 in equation 4 above, which is proportional to the first differential of the neutral tonescale transformation. In an implementation of the present invention the principles of the adaptive saturation transform are applied. This application takes place in step S12 of FIG. 4. Specifically the r,g,b separations, appropriately scaled, are rotated to neu, cr1 and cr2 as defined in equation 4. This rotation takes place in step S10 of FIG. 4. Neu, cr1 and cr2 are then transformed, either analytically or by means of look up tables, in steps S7 (for neu) and S13 (for cr1, cr2) respectively, as follows:




neu″(i,j) = F(neu′(i,j)) = [Hα(neu′(i,j))·Lα(neu′(i,j))·α·neu′(i,j)]^γ

where neu′(i,j) = neu(i,j) − neutral offset

cr′(i,j) = cr(i,j)·[1 + ({dF(neu′)/dneu′} − 1)·stf]



[0104] for both cr1 and cr2, and where stf is a saturation transform factor, typically in the range 0.5-1.0.
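
An illustrative sketch of this adaptive saturation transform; here F stands for any callable neutral transformation accepting arrays, its first differential is approximated by central differences rather than taken from a second LUT, and the names are assumptions rather than part of the specification:

import numpy as np

def saturation_transform(cr, neu_prime, F, stf=0.75, eps=1e-4):
    # cr'(i,j) = cr(i,j) * [1 + ({dF(neu')/dneu'} - 1) * stf]
    dF = (F(neu_prime + eps) - F(neu_prime - eps)) / (2.0 * eps)   # numerical dF/dneu'
    return cr * (1.0 + (dF - 1.0) * stf)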


[0105] In step S14 the neutral, cr1 and cr2 separations are rotated back to r,g,b color separations by means of a matrix which is the inverse of the matrix shown in equation 4.


[0106] In a further alternative implementation the transformations defined above are applied to low-pass filtered versions of neu, cr1 and cr2, and the complementary high-pass filtered neu, cr1 and cr2 are subsequently added to the transformed low-pass filtered separations, thus minimizing amplification of noise. Although the method as described above is primarily intended for use as an automatic method to restore or enhance the neutral tonescale of an image, equation 3 in conjunction with an appropriate Hα(x) may also be used with manually selected values for α, γ and the neutral offset.
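
A sketch of that noise-limiting variant, assuming a Gaussian low-pass filter; the filter choice and width are assumptions, not taken from the text:

import numpy as np
from scipy.ndimage import gaussian_filter

def apply_with_noise_protection(channel, F, sigma=2.0):
    # Transform a low-pass filtered copy of the separation and add back the
    # complementary high-pass detail, limiting amplification of noise.
    low = gaussian_filter(np.asarray(channel, dtype=float), sigma=sigma)
    high = channel - low
    return F(low) + high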


[0107] Alternatively, in the case where a manual modification to image contrast is required, maintaining the overall image brightness, it is possible to rearrange equation 3 such that it provides values of α as a function of γ that produce an aim output from equation 3 for an input value corresponding to the average scene reflectance. Other, similar, variations of this principle will be obvious to those skilled in the art.


[0108] The principles of the method used in U.S. Pat. No. 5,822,453, herein incorporated by reference, could be implemented as a post-process to the present invention to achieve second order tonescale corrections, in addition to those implementable by an offset, gain and gamma with appropriate highlight and lowlight shaping, in a non-logarithmic space. Examples of such second order corrections include those required where a scene has multiple illuminants, or for example back-lit or flash-in-the-face scenes.


[0109] This invention, or parts of it, may be incorporated in a broader algorithm for digital image restoration. In one realization the method to estimate the neutral gamma parameter is used in co-pending application GB 0120491.6, having the same filing date as the present application. In an additional realization the method may be used as an extension of the viewing adaptation model used to render images scanned from reversal to be appropriate for ROMM. (see U.S. Pat. No. 6,424,740). In a further realization the method forms a part of an algorithm for the automatic digital restoration of faded images. U.S. Ser. No. 09/650,422 describes a method for the restoration of faded reversal images that depends on the estimate of a relative black point. The application teaches that the dyefade restoration method is further improved by the addition of a neutral tonescale restoration step within the algorithm, but does not teach how that may be achieved. This invention provides the method for that component part of the algorithm.


[0110] The present invention, either as a stand-alone algorithm or a component part of a broader algorithm may be included in a range of applications in which digital images are input, images captured digitally, or film images scanned; digital image processing performed; and digital images output, stored or printed. These applications include digital processing laboratories (mini and wholesale), kiosks, internet fulfillment, and commercial or home computer systems with appropriate peripherals, digital cameras and so on.


[0111] In the above description, preferred embodiments of the present invention were described in terms that would ordinarily be implemented as a software program. Those skilled in the art will readily recognize that the equivalent of such software may also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description has been directed in particular to algorithms and systems forming part of, or co-operating more directly with, the system and method in accordance with the present invention. Other aspects of such algorithms and systems, and hardware and/or software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein, may be selected from such systems, algorithms, components and elements known in the art. Given the system as described according to the invention in the following materials, software not specifically shown, suggested or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.


[0112] Still further, as used herein, the computer program may be stored in a computer readable storage medium, which may comprise, for example: magnetic storage media such as a magnetic disk (such as a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable bar code; solid state electronic storage devices such as random access memory (RAM) or read only memory (ROM); or any other physical device or medium employed to store a computer program.


[0113] The present invention is preferably utilized on any well-known computer system, such as a personal computer. Consequently, the computer system will not be discussed in detail herein. It is also instructive to note that the images are either directly input into the computer system (for example by a digital camera) or digitized before input into the computer system (for example by scanning an original, such as a silver halide film).


[0114] Referring to FIG. 5, there is illustrated a computer system 110 for implementing the present invention. Although the computer system 110 is shown for the purpose of illustrating a preferred embodiment, the present invention is not limited to the computer system 110 shown, but may be used on any electronic processing system. The computer system 110 includes a microprocessor-based unit 112 for receiving and processing software programs and for performing other processing functions. A display 114 is electrically connected to the microprocessor-based unit 112 for displaying user-related information associated with the software, e.g., by means of a graphical user interface. A keyboard 116 is also connected to the microprocessor based unit 112 for permitting a user to input information to the software. As an alternative to using the keyboard 116 for input, a mouse 118 may be used for moving a selector 120 on the display 114 and for selecting an item on which the selector 120 overlays, as is well known in the art.


[0115] A compact disk-read only memory (CD-ROM) 122 is connected to the microprocessor based unit 112 for receiving software programs and for providing a means of inputting the software programs and other information to the microprocessor based unit 112 via a compact disk 124, which typically includes a software program. In addition, a floppy disk 126 may also include a software program, and is inserted into the microprocessor-based unit 112 for inputting the software program. Still further, the microprocessor-based unit 112 may be programmed, as is well known in the art, for storing the software program internally. The microprocessor-based unit 112 may also have a network connection 127, such as a telephone line, to an external network, such as a local area network or the Internet. A printer 128 is connected to the microprocessor-based unit 112 for printing a hardcopy of the output of the computer system 110.


[0116] Images may also be displayed on the display 114 via a personal computer card (PC card) 130, such as, as it was formerly known, a PCMCIA card (based on the specifications of the Personal Computer Memory Card International Association) which contains digitized images electronically embodied in the card 130. The PC card 130 is ultimately inserted into the microprocessor based unit 112 for permitting visual display of the image on the display 114. Images may also be input via the compact disk 124, the floppy disk 126, or the network connection 127. Any images stored in the PC card 130, the floppy disk 126 or the compact disk 124, or input through the network connection 127, may have been obtained from a variety of sources, such as a digital camera (not shown) or a scanner (not shown). In accordance with the invention, the algorithm may be stored in any of the storage devices heretofore mentioned and applied to images in order to restore or enhance the neutral tonescale of images.



Parts List

[0117] 110 computer system

[0118] 112 unit

[0119] 114 display

[0120] 116 keyboard

[0121] 120 selector

[0122] 122 CD-ROM

[0123] 124 compact disk

[0124] 126 floppy disk

[0125] 127 network connection

[0126] 128 printer

[0127] 130 PC card


Claims
  • 1. A method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of; estimating a neutral offset from the input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data, and using the estimated values to transform the input image data.
  • 2. A method according to claim 1 applied to images encoded in a non logarithmic space.
  • 3. A method according to claim 1 wherein the estimated values are used in the following equation:
  • 4. A method according to claim 1 wherein the estimated values are used in the following equation:
  • 5. A method according to claim 4 wherein the function Hα has at least two sections, a first section where no shaping is applied and a second section which implements the shoulder shaping.
  • 6. A method according to claim 5 wherein the second section takes a Gaussian form.
  • 7. A method according to claim 6 wherein the Gaussian form is image adaptive, dependent on the value α.
  • 8. A method according to claim 1 wherein the estimated values are used in the following equation:
  • 9. A method according to claim 8 wherein the function Hα has at least two sections, a first section where no shaping is applied and a second section which implements the shoulder shaping.
  • 10. A method according to claim 9 wherein the second section takes a Gaussian form.
  • 11. A method according to claim 10 wherein the Gaussian form is image adaptive, dependent on the value α.
  • 12. A method according to claim 1 wherein the input image data is sub-sampled before the neutral offset, neutral gain and neutral gamma are estimated.
  • 13. A method according to claim 12 wherein the image data is sub-sampled by 4:1 in each dimension.
  • 14. A method according to claim 12 wherein the image data is pre-filtered to reduce the bandwidth appropriately for sub-sampling.
  • 15. A method according to claim 1 wherein the neutral offset is estimated by calculating the averages of a percentage of pixels having code values corresponding to the darkest portion of the image, for each separation of the input image, the neutral offset being a function of these averages.
  • 16. A method according to claim 15 wherein the percentage of pixels used is in the range of 0.01% to 0.5%.
  • 17. A method according to claim 15 wherein the neutral offset is given by the lowest value of the averages scaled by an optimisable black offset correction factor.
  • 18. A method according to claim 17 wherein the offset correction factor lies in the range of 0.5 to 1.0.
  • 19. A method according to claim 1 wherein the neutral gain is estimated by calculating the averages of a percentage of pixels having code values corresponding to the lightest portions of the image, for each separation of the input image, the neutral gain being a function of these averages.
  • 20. A method according to claim 19 wherein the neutral gain is defined by the following function:
  • 21. A method according to claim 20 wherein the gain correction factor lies in the range of 0.25 and 1.
  • 22. A method according to claim 1 wherein the neutral gamma is estimated using the following equation;
  • 23. A method according to claim 22 wherein the gamma correction factor lies in the range of 0.25 to 1.
  • 24. A method according to claim 22 wherein the neutral gamma is estimated using the average of the image data that corresponds to edge information in the original image.
  • 25. A method according to claim 24 wherein the neutral offset and neutral gain are also estimated using the average of the input image data that corresponds to edge information of the original image.
  • 26. A method according to claim 3 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 27. A method according to claim 4 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 28. A method according to claim 8 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 29. A method according to claim 3 wherein pixel values are weighted by scene content.
  • 30. A method according to claim 4 wherein pixel values are weighted by scene content.
  • 31. A method according to claim 8 wherein pixel values are weighted by scene content.
  • 32. A method according to claim 1 wherein red, green and blue components of the input image are used.
  • 33. A method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of, estimating a neutral offset from input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data, using the estimated values to calculate a shaper LUT, the shaper LUT being a function of the estimated values, creating a tonescale transformation LUT from the estimated values and the shaper LUT and using the tonescale transformation LUT to transform the input image data.
  • 34. A method according to claim 33 applied to images encoded in a non logarithmic space.
  • 35. A method according to claim 33 wherein the estimated values are used in the following equation:
  • 36. A method according to claim 33 wherein the estimated values are used in the following equation:
  • 37. A method according to claim 36 wherein the function Hα has at least two sections, a first section where no shaping is applied and a second section which implements the shoulder shaping.
  • 38. A method according to claim 37 wherein the second section takes a Gaussian form.
  • 39. A method according to claim 38 wherein the Gaussian form is image adaptive, dependent on the value α.
  • 40. A method according to claim 33 wherein the estimated values are used in the following equation:
  • 41. A method according to claim 40 wherein the function Hα has at least two sections, a first section where no shaping is applied and a second section which implements the shoulder shaping.
  • 42. A method according to claim 41 wherein the second section takes a Gaussian form.
  • 43. A method according to claim 42 wherein the Gaussian form is image adaptive, dependent on the value α.
  • 44. A method according to claim 33 wherein the input image data is sub-sampled before the neutral offset, neutral gain and neutral gamma are estimated.
  • 45. A method according to claim 44 wherein the image data is sub-sampled by 4:1 in each dimension.
  • 46. A method according to claim 44 wherein the image data is pre-filtered to reduce the bandwidth appropriately for sub-sampling.
  • 47. A method according to claim 33 wherein the neutral offset is estimated by calculating the averages of a percentage of pixels having code values corresponding to the darkest portion of the image, for each separation of the input image, the neutral offset being a function of these averages.
  • 48. A method according to claim 47 wherein the percentage of pixels used is in the range of 0.01% to 0.5%.
  • 49. A method according to claim 47 wherein the neutral offset is given by the lowest value of the averages scaled by an optimisable black offset correction factor.
  • 50. A method according to claim 49 wherein the offset correction factor lies in the range of 0.5 to 1.0.
  • 51. A method according to claim 33 wherein the neutral gain is estimated by calculating the averages of a percentage of pixels having code values corresponding to the lightest portions of the image, for each separation of the input image, the neutral gain being a function of these averages.
  • 52. A method according to claim 51 wherein the neutral gain is defined by the following function:
  • 53. A method according to claim 52 wherein the gain correction factor lies in the range of 0.25 and 1.
  • 54. A method according to claim 33 wherein the neutral gamma is estimated using the following equation;
  • 55. A method according to claim 54 wherein the gamma correction factor lies in the range of 0.25 to 1.
  • 56. A method according to claim 54 wherein the neutral gamma is estimated using the average of the image data that corresponds to edge information in the original image.
  • 57. A method according to claim 56 wherein the neutral offset and neutral gain are also estimated using the average of the input image data that corresponds to edge information of the original image.
  • 58. A method according to claim 35 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 59. A method according to claim 36 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 60. A method according to claim 40 wherein pixel values in the centre of the image are given a higher statistical weighting than pixel values at the edge of the image.
  • 61. A method according to claim 35 wherein pixel values are weighted by scene content.
  • 62. A method according to claim 36 wherein pixel values are weighted by scene content.
  • 63. A method according to claim 40 wherein pixel values are weighted by scene content.
  • 64. A method according to claim 33 wherein red, green and blue components of the input image are used.
  • 65. A method of processing data from a digital image to enhance the neutral tonescale thereof, the method comprising the steps of estimating a neutral offset from the input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data, calculating a shaper LUT using the estimated values, creating a tonescale transformation LUT from the estimated values and the shaper LUT, calculating the first differential of the tonescale transformation LUT to produce a second LUT, performing a neutral and color difference rotation on the input image data, using the tonescale transformation LUT to perform a neutral transformation of the rotated input image data, using the second LUT to perform an adaptive saturation transformation of the rotated input image data, and rotating the transformed input data to provide the output image data.
  • 66. A method according to claim 65 applied to images encoded in a non logarithmic space.
  • 67. A method as claimed in claim 65 wherein the neutral transformation used takes the form of the following equation;
  • 68. A method as claimed in claim 65 wherein the adaptive saturation transformation takes the form of the following equation;
  • 69. A method according to claim 68 wherein the saturation transform factor is in the range of 0.5 to 1.
  • 70. A method according to claim 65 wherein the input image data is rotated in accordance with the following equation
  • 71. A method according to claim 65 wherein neu, cr1 and cr2 are low pass filtered prior to being transformed, and complementary high pass filtered values of neu, cr1 and cr2 added to the transformed low pass filtered separations.
  • 72. A method according to claim 65 wherein the input image data is subsampled before the neutral offset, neutral gain and neutral gamma are estimated.
  • 73. A method according to claim 72 wherein the image data is subsampled by 4:1 in each dimension.
  • 74. A method according to claim 65 wherein the neutral offset, neutral gain and neutral gamma are estimated using high contrast edge information in the input image.
  • 75. A method according to claim 65 wherein only neutral components of the input image are used.
  • 76. A method of processing digital images wherein at least part of the method of processing data from the image is a method according to claim 1.
  • 77. A method of processing digital images wherein at least part of the method of processing data from the image is a method according to claim 33.
  • 78. A method of processing digital images wherein at least part of the method of processing data from the image is a method according to claim 65.
  • 79. A computer program product for enhancing the neutral tonescale of a digital image comprising a computer readable storage medium having a computer program stored thereon for performing the steps of estimating a neutral offset from the input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data; and using the estimated values to transform the input image data.
  • 80. A computer program product for enhancing the neutral tonescale of a digital image comprising a computer readable storage medium having a computer program stored thereon for performing the steps of estimating a neutral offset from the input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data, using the estimated values to calculate a shaper LUT, the shaper LUT being a function of the estimated values, creating a tonescale transformation LUT from the estimated values and the shaper LUT; and using the tonescale transformation LUT to transform the input image data.
  • 81. A computer program product for enhancing the neutral tonescale of a digital image comprising a computer readable storage medium having a computer program stored thereon for performing the steps of estimating a neutral offset from the input image data, estimating a neutral gain from the input image data, estimating a neutral gamma from the input image data, calculating a shaper LUT using the estimated values, creating a tonescale transformation LUT from the estimated values and the shaper LUT, calculating the first differential of the tonescale transformation LUT to produce a second LUT, performing a neutral and color difference rotation on the input image data, using the tonescale transformation LUT to perform a neutral transformation of the rotated input image data, using the second LUT to perform an adaptive saturation transformation of the rotated input image data; and rotating the transformed input data to provide the output image data.
  • 82. A computer program comprising program code means for performing all the steps of claim 1 when said program is run on a computer.
  • 83. A computer program comprising program code means for performing all the steps of claim 33 when said program is run on a computer.
  • 84. A computer program comprising program code means for performing all the steps of claim 65 when said program is run on a computer.
  • 85. A device for processing data from a digital image to enhance the neutral tonescale thereof, the apparatus including; means for estimating a neutral offset from input image data, means for estimating a neutral gain from the input image data, means for estimating a neutral gamma from the input image data; and means for using the estimated values to transform the input image data.
  • 86. A device for processing data from a digital image to enhance the neutral tonescale thereof, the apparatus including; means for estimating a neutral offset from input image data, means for estimating a neutral gain from the input image data, means for estimating a neutral gamma from the input image data, means for using the estimated values to calculate a shaper LUT, the shaper LUT being a function of the estimated values, means for creating a tonescale transformation LUT from the estimated values and the shaper LUT; and means for using the tonescale transformation LUT to transform the input image data.
  • 87. A device for processing data from a digital image to enhance the neutral tonescale thereof, the apparatus including; means for estimating a neutral offset from the input image data, means for estimating a neutral gain from the input image data, means for estimating a neutral gamma from the input image data, means for calculating a shaper LUT using the estimated values, means for creating a tonescale transformation LUT from the estimated values and the shaper LUT, means for calculating the first differential of the tonescale transformation LUT to produce a second LUT, means for performing a neutral and color difference rotation on the input image data, means for using the tonescale transformation LUT to perform a neutral transformation of the rotated input image data, means for using the second LUT to perform an adaptive saturation transformation of the rotated input image data, and means for rotating the transformed input data to provide the output image data.
  • 88. A system for the processing of digital image data wherein a digital image is input, the image is processed and the processed image stored and/or output, at least part of the processing of the image comprising a method according to claim 1.
  • 89. A system for the processing of digital image data wherein a digital image is input, the image is processed and the processed image stored and/or output, at least part of the processing of the image comprising a method according to claim 33.
  • 90. A system for the processing of digital image data wherein a digital image is input, the image is processed and the processed image stored and/or output, at least part of the processing of the image comprising a method according to claim 65.
Priority Claims (1)
Number Date Country Kind
0120489.0 Aug 2001 GB