METHOD FOR CONVERTING INPUT IMAGE DATA INTO OUTPUT IMAGE DATA, IMAGE CONVERSION UNIT FOR CONVERTING INPUT IMAGE DATA INTO OUTPUT IMAGE DATA, IMAGE PROCESSING APPARATUS, DISPLAY DEVICE

Information

  • Patent Application
  • Publication Number
    20110310116
  • Date Filed
    March 03, 2010
  • Date Published
    December 22, 2011
Abstract
In a method, unit and display device the input image signal is split into a regional contrast signal (VRC) and a detail signal (VD), followed by stretching separately the dynamic ranges for both signals, wherein the dynamic range for the regional contrast signal is stretched with a higher stretch ratio than the dynamic range for the detail signal. Preferably the stretch ratio for the detail signal is near 1, most preferably 1. In a preferred embodiment highlights are identified and for the highlights the dynamic range is stretched to an even higher degree than for the regional contrast signal.
Description
FIELD OF THE INVENTION

The invention relates to a method for converting input image data into output image data.


The invention further relates to an image conversion unit for converting input image data into output image data.


The invention further relates to an image processing apparatus comprising:


receiving means for receiving input image data,


an image conversion unit for converting input image data into output image data.


The invention further relates to a display device comprising an image processing apparatus comprising:


receiving means for receiving input image data,


an image conversion unit for converting input image data into output image data.


BACKGROUND OF THE INVENTION

To enable an acceptable representation of high-dynamic range (HDR) imagery on a display with a dynamic range that is typically several orders of magnitude lower, the dynamic range of recorded video sequences is usually compressed by means of tone-mapping during acquisition and transmission. The dynamic range of many outdoor scenes can be as large as 12 orders of magnitude, whereas most liquid crystal displays (LCDs) merely offer a static contrast ratio of about 3 orders of magnitude. As a result, severe dynamic range compression is required in the early stages of the imaging pipeline to enable a pleasant representation of the scene on an LDR (low dynamic range) display. Using simple techniques usually has the drawback that the contrast of small details can be compromised or even lost.


To address these shortcomings, more advanced adaptive methods have been developed. These methods predominantly compress large-scale contrasts while preserving the contrast of fine details.


This approach performs well as long as the display system's capabilities remain more or less similar to those anticipated during compression in the early stages of the imaging pipeline. However, with new high-dynamic-range display systems, static contrast ratios of up to 6 orders of magnitude can be achieved. Moreover, such display systems may be capable of locally (in time or space) producing a very high peak brightness. For example, this can be achieved by 2D dimmable LED backlights, where the power saved by dimming some LEDs underneath dark image portions may be used to boost other LEDs underneath bright regions. An extension of the input LDR image data into a HDR image signal has been found to often result in an unnatural appearance of the scene.


SUMMARY OF THE INVENTION

It is an object of the invention to provide a method, conversion unit and image processing apparatus that increase the quality of reproduction and provide a more pleasant and natural appearance of images.


To this end the method in accordance with the invention is characterized in that


The input image data is converted into at least two signals, a first signal providing regional contrast data and a second signal providing detail data,


The dynamic range of at least the first signal is stretched, wherein the dynamic range of the first signal is stretched to a higher degree than the dynamic range of the second signal,


The stretched first and second signals are combined in an output signal.


The inventor has realized that the problems arise out of an imbalance between local and regional contrast. Preservation of detail contrast during dynamic range compression during acquisition in combination with an overall dynamic range extension during or prior to display results in an enhancement of fine details relative to regional contrasts in the displayed image. The regional contrast data comprises relatively low spatial frequency information. The detail data comprises higher spatial frequency information.


For considerable extension factors, this results in an unnatural appearance of the scene and could also lead to an undesired amplification of analog and digital noise.


A possible solution would be to use, during range extension, the mathematical inverse of the mapping operator used during range compression to retrieve the original HDR scene. This, however, would require knowledge of the used compression method, which would have to be included in the input signal. However, in practice, we often have to deal with legacy LDR video without knowledge of how its dynamic range was compressed during acquisition and encoding. This ‘perfect’ solution is thus often not practical. Apart from this aspect, the receiving unit would have to be able to match various possible compression methods.


The present invention provides a more balanced LDR to HDR conversion of the input image data into an output signal.


The input signal is split into a first signal providing regional, semi-global data and a second signal providing the details. The first signal can for instance be made by low pass filtering the input signal, including low pass filtering methods which preserve edge features, such as for instance bilateral filtering. The second signal providing details can be made by e.g. subtracting the first signal from the input data signal.


At least the first signal is stretched, i.e. the dynamic range of at least the first signal is extended. The two signals are differently stretched, wherein the second signal is stretched to a smaller degree than the first signal. This reduces the unnatural visible enhancement of fine details relative to regional contrasts, resulting in a more natural appearance of the scene. To some extent noise is also subdued. In preferred embodiments the second signal is not stretched. If during the original compression the details were preserved, the second signal providing detail information need not be stretched. This is a relatively simple embodiment allowing a simplification of the algorithm.


In preferred embodiments the dynamic range of the combined stretched first and second signal is bound by an upper value. This upper value may be lower than the maximum allowable signal on the display. The input image signal is further analyzed to identify groups of pixels forming highlights in the image; the pixel data for said identified groups of pixels are converted into a third signal such that the third signal covers a dynamic range extending above the said upper value to an upper maximum pixel value, and the third signal is combined with the combined stretched first and second signal.


The signal comprising the stretched first and second signal has a dynamic range which is bound by an upper value. In the preferred embodiment, the upper dynamic range of pixel values, from said upper value up to a maximum value, is reserved for displaying highlights.


It has been found that, especially for very high luminance displays, the maximum achievable intensity is so high that the viewer, in a sense, becomes blinded by the light. In moderate cases, the viewer will only perceive the bright spots and will not, or only to a very limited extent, be able to perceive the darker details of the scene. In extreme cases, however, this can be painful or even harmful for the viewer's eyes. By limiting the range to which the combined first and second signal is stretched, this is avoided. However, this does not make full use of the possibilities of HDR displays. In preferred embodiments the maximum luminance is kept below the possibilities of a high luminance device. By identifying highlights in the image and placing their pixel values in the highest part of the dynamic range of the display, these highlights are brought to the forefront without blinding the viewer, thereby providing a very crisp and clear image. In an embodiment the highlights are identified by selecting groups of pixels with pixel values in a range close to or at the upper value of the LDR range, wherein in a neighborhood of a high pixel value pixel the number of high pixel value pixels is below a threshold, i.e. for small groups of high intensity pixels.


Highlights are relatively small groups of high intensity pixels. The dynamic range of the display device above the upper value is populated by the highlights. This has been shown to provide a high quality image wherein, on the one hand, the details are not unnaturally enhanced and no bright blinding spots appear in the image, while, on the other hand, the highlights imaged at the high end of the display range provide for a sparkling and crisp image.


In preferred embodiments the upper value of the dynamic range for the combined stretched first and second signal lies in a range corresponding to light intensities when displayed on a display of 500 to 1000 Nit, and the upper maximum pixel value lies in a range corresponding to light intensities when displayed on a display of above 1000 Nit, preferably above 2500 Nit.


These and further aspects of the invention will be explained in greater detail by way of example and with reference to the accompanying drawings, in which





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 provides a schematic flow chart for an embodiment of the invention;



FIG. 2 illustrates extension of dynamic ranges;



FIG. 3 illustrates a highlight identification algorithm;



FIGS. 4a to 4f illustrate the effects of a dynamic range extension algorithm according to the invention;



FIG. 5 illustrates a mixing map;



FIGS. 6a to 6c further illustrate dynamic range extension according to the invention;



FIGS. 7a to 7d and 8a to 8d provide further examples of dynamic range extension according to the invention;



FIG. 9 illustrates a display device according to the present invention.





The Figures are not drawn to scale. Generally, identical components are denoted by the same reference numerals in the Figures.


DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

It is remarked that examples are given below.


The dynamic range of many outdoor scenes can be as large as 12 orders of magnitude, whereas most liquid crystal displays (LCDs) merely offer a static contrast ratio of about 3 orders of magnitude. As a result, severe dynamic range compression is required in the early stages of the imaging pipeline to enable a pleasant representation of the scene on an LDR (low dynamic range) display. The most straightforward approach to dynamic range compression is by means of global tone-mapping operators. However, the main drawback of these simple techniques is that the contrast of small details can be compromised. To address these shortcomings, more advanced methods have been developed that compress regional (large-scale) contrasts while preserving the contrast of fine details.


On conventional LDR (low-dynamic range) display screens, the contrast of the imagery is usually stretched to the full capabilities of the display device (i.e. 0 to black, 255 to white for an 8-bit system), subject to user preference, sometimes supported by a histogram stretch prior to display. This approach performs well as long as the display system's capabilities remain more or less similar to those anticipated during compression in the early stages of the imaging pipeline. However, in new HDR (high-dynamic-range) display systems static contrast ratios of up to 6 orders of magnitude are achieved. Moreover, such display systems may be capable of locally (in time or space) producing a very high peak brightness. For example, this can be achieved by 2D dimmable LED backlights, where the power saved by dimming some LEDs underneath dark image portions may be used to boost other LEDs underneath bright regions.
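As a concrete illustration of the conventional approach described above, the following Python sketch performs a global min/max contrast stretch to the full 8-bit code range. It is not part of the patent text; the function and parameter names are chosen only for clarity.

```python
import numpy as np

def global_stretch(v: np.ndarray, out_black: float = 0.0, out_white: float = 255.0) -> np.ndarray:
    """Conventional global contrast stretch: map the image's own minimum and
    maximum onto the display's full code range (0..255 for an 8-bit system).
    Assumes a non-constant image (v.max() > v.min())."""
    v = v.astype(np.float64)
    return (v - v.min()) / (v.max() - v.min()) * (out_white - out_black) + out_black
```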


When displaying legacy LDR video directly on a HDR display, an artifact occurs, namely imbalance between local and regional contrast.


Preservation of detail contrast during range compression in combination with a range extension prior to display, results in an enhancement of fine details relative to regional contrasts. For large extension factors, this results in an unnatural appearance of the scene and sometimes an undesired amplification of noise.


In the method in accordance with the invention the input image data is converted into at least two signals, a first signal providing low spatial frequency regional contrast data and a second signal providing high spatial frequency detail data. The dynamic range of at least the first signal is stretched, wherein the dynamic range of the first signal is stretched to a higher degree than the dynamic range of the second signal. The stretched first and second signals are combined in the image output signal.


The first signal provides a regional contrast signal and the second signal provides a detail layer. The two signals are separately stretched, wherein the first signal is stretched more than the second signal. In effect a regional stretch of the regional contrast signal, obtained for instance by low pass filtering, is performed. In addition the local detail is stretched, but to a lower degree. The two signals are combined. This reduces, compared to an overall stretch of the incoming signal, the imbalance between detail and regional contrast. In preferred embodiments the second signal is made by subtracting the first signal from the input image data.



FIG. 1 illustrates a flow diagram for an exemplary algorithm in accordance with the invention.


The algorithm performs dynamic range extension as a dual signal procedure. Initially, regional contrasts are extracted from the input signal Vin by applying, in this example, a low-pass filter 1 to the video, providing a first, regional contrast signal VRC, and extracting a detail layer from the input signal Vin, providing a second detail signal VD. In this example VD is extracted by computing the difference between the input and the regional contrasts in subtractor 2:


In formula:






$$V_{RC} = F_{bil}(V_{in}),$$

$$V_D = V_{in} - V_{RC},$$


where Vin denotes the input video and Fbil denotes the application of a low pass filter, preferably a fast bilateral filter. Preferably bilateral filtering using a bilateral grid as the low-pass operator is executed. This approach provides a computationally efficient approximation to the full bilateral filter. The main benefit of this method is that it provides a cheap edge-preserving blur filter, thus preventing halo artifacts often associated with linear spatial filter kernels. Bilateral filtering using the bilateral grid can effectively be summarized as (1) constructing local histograms, (2) applying a multi-dimensional linear filter kernel to these histograms and (3) slicing (i.e. interpolating) the desired output pixels. Although preferred, it should be noted that the bilateral grid does not represent an essential part of the current invention. Regional contrasts can alternatively be extracted using conventional (banks of) low-pass filters. Instead of using a mathematical algorithm to generate the first and second signal, other methods can also be used, such as for instance predefined special classes, e.g. a dark room interior.
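A minimal sketch of this two-layer split is given below. A plain Gaussian low-pass is used as a stand-in for the preferred edge-preserving bilateral filter, and the filter width sigma is a hypothetical tuning parameter; neither choice is prescribed by the text above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_layers(v_in: np.ndarray, sigma: float = 16.0):
    """Split an input luminance image into a regional contrast layer V_RC
    (low-pass filtered) and a detail layer V_D = V_in - V_RC."""
    v_in = v_in.astype(np.float64)
    v_rc = gaussian_filter(v_in, sigma=sigma)  # stand-in for F_bil(V_in)
    v_d = v_in - v_rc                          # residual fine detail
    return v_rc, v_d
```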


To reduce the imbalance between regional contrast and detail in an image of high luminance, and to maintain a natural balance between the fine detail and regional contrast when applying dynamic range extension, the two signals VRC and VD are mapped separately. One preferred way of doing so is by stretching the regional contrast VRC linearly from the input dynamic range [KLDR-WLDR] to a pre-defined target dynamic range [K0-W0], which could depend on the display capabilities, the human eye capabilities or personal preference:








$$\tilde{V}_{RC} = (V_{RC} - K_{LDR}) \cdot \frac{W_0 - K_0}{W_{LDR} - K_{LDR}} + K_0.$$






Wherein {tilde over (V)}RC is the stretched signal. Such predefined target dynamic range can be set by the manufacturer. W0 defines the upper value of the dynamic range for the combined signal. In FIG. 1 the stretching of the first signal VRC is schematically illustrated by M(VRC) where M stands for a stretching operation of which the above formula is an example in which linear stretching is used. Non-linear stretching using other formulas for mapping are also possible, for this stretching or mapping step, as well as for any other stretching or mapping step.
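A direct transcription of this linear mapping as a Python function is given below; the argument names mirror the symbols in the formula, and concrete values for the input and target ranges are left to the caller.

```python
import numpy as np

def stretch_regional(v_rc: np.ndarray,
                     k_ldr: float, w_ldr: float,
                     k0: float, w0: float) -> np.ndarray:
    """Linearly map the regional contrast layer from [K_LDR, W_LDR]
    onto the target range [K0, W0]."""
    return (v_rc - k_ldr) * (w0 - k0) / (w_ldr - k_ldr) + k0
```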


The stretching operation provides a range extension. Grosso modo the stretching of the dynamic range is a factor







$$\frac{W_0 - K_0}{W_{LDR} - K_{LDR}}$$





being the ratio between the target dynamic range (W0-K0) and the input dynamic range (WLDR-KLDR), i.e. the amount of stretching applied to the first regional contrast signal. The stretching is performed in stretcher 3. The stretcher 3 maps the incoming regional contrast data VRC with an incoming dynamic range (WLDR-KLDR) onto a stretched dynamic range (W0-K0).


In the above, preferably W0<WHDR, where WHDR is the maximum value of the display range, thereby keeping the predefined target dynamic range below the maximum display dynamic range. This prevents large bright areas from being imaged/rendered at unpleasantly high luminances.


Preferably W0 is in the range corresponding to a luminance in the range of 500 to 1000 Nit.


Second, the detail layer signal is enhanced in enhancer 4 by applying an enhancement factor gD that is moderate compared to the stretching factor of the first signal:






$$\tilde{V}_D = g_D \cdot V_D$$


{tilde over (V)}D is the stretched second signal comprising the details. Preferably the gain gD is close to 1, for instance in the range between 1 and 1.2, or simply 1; in the latter case the detail layer data VD is left as it is, without enhancement, which is a simple preferred embodiment. In many legacy compressed LDR signals, the compression has been performed in a way that more or less maintains the contrast in details. Thus, leaving the detail layer unaffected, i.e. applying a gain factor of 1, is often sufficient and reduces the complexity of the algorithm.


Obviously, extension functions M(VRC) and gD other than the above simple linear scaling can be used, such as power functions or S-functions. Finally, an output is constructed by combining the mapped detail and regional contrast layers, i.e. the stretched first and second signal:






$$\tilde{V}_1 = \tilde{V}_D + \tilde{V}_{RC}$$


{tilde over (V)}1 is the combined stretched first and second signal. In this example the combination of the stretched first and second signal is performed in combiner 5, a simple adder.
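Putting the pieces together, the following self-contained sketch runs the dual-layer extension of FIG. 1 up to combiner 5. The default values (an 8-bit input range, a target white of 800, sigma = 16, gD = 1) are illustrative assumptions only, and the Gaussian blur again stands in for the preferred bilateral filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extend_dual_layer(v_in: np.ndarray,
                      k_ldr: float = 0.0, w_ldr: float = 255.0,
                      k0: float = 0.0, w0: float = 800.0,
                      g_d: float = 1.0, sigma: float = 16.0) -> np.ndarray:
    """Dual-layer dynamic range extension: stretch the regional layer strongly,
    apply only a mild gain g_d (often 1) to the detail layer, then add them."""
    v_in = v_in.astype(np.float64)
    v_rc = gaussian_filter(v_in, sigma=sigma)                    # regional layer
    v_d = v_in - v_rc                                            # detail layer
    v_rc_t = (v_rc - k_ldr) * (w0 - k0) / (w_ldr - k_ldr) + k0   # strong stretch
    v_d_t = g_d * v_d                                            # moderate gain
    return v_rc_t + v_d_t                                        # combined signal
```

With gD = 1 the detail layer passes through unchanged, which corresponds to the simple preferred embodiment mentioned above.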


This aspect of the invention improves the displayed image by reducing the visible mismatch between regional contrast and detail contrast after increase of the dynamic range of the input signal.


A further problem occurring in HDR display is that the peak brightness of new HDR displays is very high (e.g., the DR37-P by Brightside/Dolby is reported to have a peak brightness of over 3000 cd/m2). Consequently, stretching the signal during display to the full dynamic range may result in unpleasantly bright scenes for some images. The range to which the input is stretched can be limited, for instance to between 500 and 1000 Nit, to avoid such unpleasant scenes, but in this case the display's capabilities are not fully exploited.


To address this issue, in preferred embodiments of the invention a further step is added to the algorithm. This preferred step is schematically shown in rectangle 6 in FIG. 1.


In order to take full advantage of an HDR display's capabilities, small specular highlights are identified with which the remaining available dynamic range, i.e. the range W0 to WHDR, is populated (highlighting). Preferably bilateral grids are used, also as a form of low pass filter. Since bilateral grids involve constructing local histograms, these histograms can be used directly to identify regions with a small number of bright pixels. The algorithm to perform identification of highlights is in FIG. 1 schematically shown as function FHL (Vin) in identifier 7. The data of the pixels that are identified as belonging to highlights are enhanced by a factor in mapper 8 that brings them into the highest dynamic range, providing a signal VHL highlighting small bright areas. The signals {tilde over (V)}1 and VHL are combined in combiner 9 to provide an output signal Vout. Mapping can be done by simple multiplication or by more complex functions.
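The highlight path (identifier 7, mapper 8 and combiner 9) can be sketched as follows. The patent's preferred embodiment performs a soft, grid-based mix described further below; the hard mask and the simple linear boost used here, as well as the numeric defaults, are simplifying assumptions made only for illustration.

```python
import numpy as np

def add_highlights(v1: np.ndarray, v_in: np.ndarray, highlight_mask: np.ndarray,
                   w_ldr: float = 255.0, w0: float = 800.0,
                   w_hdr: float = 3000.0) -> np.ndarray:
    """Map pixels flagged as highlights into the reserved top range [W0, W_HDR];
    all other pixels keep their value from the combined signal V~1."""
    boosted = w0 + (v_in.astype(np.float64) / w_ldr) * (w_hdr - w0)  # linear boost
    return np.where(highlight_mask, boosted, v1)                     # role of combiner 9
```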



FIG. 2 schematically illustrates the various dynamic range enhancements. The input signal Vin has a dynamic range ranging from KLDR to WLDR. This is mapped into a dynamic range ranging from K0 to W0. Apart from that, pixels belonging to highlights, which include pixels in the highest input range (in FIG. 2 schematically indicated by arrow 7′), are identified by the highlight identifier 7; for these pixels a mapping operation in mapper 8 is performed wherein the data for the highlights is mapped onto a larger dynamic range, covering in particular the highest range of luminance HL, e.g. corresponding to the top part of the full dynamic range of an HDR display device. This highest range of luminance, reserved for highlights, is stretched to a maximum value WHDR. This maximum value lies above the upper value W0 of the target dynamic range, which target range is kept to moderate luminance values to avoid unpleasant viewing conditions. “Ex” in FIG. 2 illustrates schematically the extension of the dynamic range for {tilde over (V)}1, “HL” schematically illustrates the higher dynamic range reserved for the highlights signal VHL.



FIG. 3 illustrates a highlight identification algorithm.


The input signal is sent to the identifier 7. Those areas or blocks with pixels having a luminance I above a threshold value Ithreshold and a number nav of such high intensity pixels below a threshold nthreshold are identified as highlights. Further examples are given below.
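A block-based reading of this criterion is sketched below; the block size and the per-block counting are assumptions chosen only to make the idea concrete (the preferred embodiment uses the local histograms of a bilateral grid instead, as described next).

```python
import numpy as np

def identify_highlights(v_in: np.ndarray, i_threshold: float,
                        n_threshold: int, block: int = 16) -> np.ndarray:
    """Flag bright pixels that sit in blocks containing only a few pixels
    above i_threshold: small bright regions are treated as highlights."""
    h, w = v_in.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = v_in[y:y + block, x:x + block]
            bright = tile > i_threshold
            if 0 < bright.sum() < n_threshold:   # few bright pixels -> highlight
                mask[y:y + block, x:x + block] = bright
    return mask
```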


As an example the following procedure can be followed:


To include highlighting in the processing flow, the intensity of the bilateral grid constructed on the input signal is stretched once to the target dynamic range [K0-W0], resulting in the grid B0, and once to the full dynamic range of the display [KHDR-WHDR], resulting in the grid BHDR. These two grids are adaptively mixed into the final grid Bmapped using a mixing map M prior to slicing (interpolation):






$$B_{mapped} = M \cdot B_{HDR} + (1 - M) \cdot B_0.$$


Note that in this example all the above operations are performed on a grid basis, which is a heavily sub-sampled representation of the image, and hence are numerically inexpensive. The final output on full resolution is constructed by means of slicing into the mapped bilateral grid Bmapped. To create the mixing map M, we adopt the following approach:


1. Construct the regional cumulative histogram by summing the existing local histograms,


2. Establish the brightness Ithreshold above which less than n percent of image pixels reside. In other words the top n percent of luminance values,


3. Count (on a local basis) the number n of pixels with intensities higher than Ithreshold,


4. Apply a morphological dilation filter to create spatial consistency between neighboring bins, resulting in a consistency value C0. If the consistency value is high, relatively large bright areas are present; if the consistency value is small, small bright areas are present,


5. Compute a mixing factor M. The value of the mapping function M is set to 1 for regions where the number of qualifying pixels is below a predefined threshold T (small highlights, and thus to be mixed in) and falls off to 0 above this threshold to prevent large bright image portions from becoming unpleasantly bright (a code sketch of this mixing factor and of the grid blend follows the formula below):







$$M(C_0) = \begin{cases} 1 & \text{for } C_0 \le T \\[4pt] \mathrm{CLIP}\!\left(1 - \dfrac{C_0 - T}{2T},\ [0, 1]\right) & \text{for } C_0 > T. \end{cases}$$
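The mixing factor and the grid blend of steps 1 to 5 can be written compactly as below; C0 is assumed to be the (dilated) per-cell count from step 4 with T > 0, and all operations act on the coarse grid rather than on full-resolution pixels.

```python
import numpy as np

def mixing_factor(c0: np.ndarray, t: float) -> np.ndarray:
    """M(C0): 1 where the highlight count is at or below the threshold T,
    falling off linearly (and clipped to [0, 1]) above it."""
    falloff = np.clip(1.0 - (c0 - t) / (2.0 * t), 0.0, 1.0)
    return np.where(c0 <= t, 1.0, falloff)

def blend_grids(b_hdr: np.ndarray, b0: np.ndarray, m: np.ndarray) -> np.ndarray:
    """B_mapped = M * B_HDR + (1 - M) * B_0, evaluated per grid cell."""
    return m * b_hdr + (1.0 - m) * b0
```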











FIGS. 4a to 4f illustrate the effects of a dynamic range extension algorithm as described above. Shown are (a) a simulated LDR input image, as well as the (b) regional contrast layer and (c) detail layer extracted by means of bilateral grid filtering. The natural appearance of the scene after extension is maintained by (d) extending only the regional contrast VRC to a user or manufacturer defined range [K0-W0]. In (e), the intermediate output (the sum of frames (c) and (d)) is shown. In (f), the final mapped output is shown in which small specular highlights are identified to fill up the remaining available dynamic range.



FIG. 5 illustrates the mixing map computed for the image of FIGS. 4a to 4f. The scale on the right hand side gives the mixing factor M. Some typical areas of mixing factor are indicated by arrows. The bright reflection in the water and the bright areas in the clouds are both correctly detected as small highlights and are mapped on the full dynamic range or near the full dynamic range of the HDR display device. Because the bright area in the sky is relatively large, a smaller weight (smaller mixing factor M) is attributed to this area to prevent it from becoming unpleasant on a high-brightness HDR display. Furthermore, the low-resolution nature of the map is clearly visible from its blocky appearance, because these operations are performed over local histograms, not on full pixel resolution. In this preferred embodiment the mixing map M is applied to the bilateral grids B0 and BHDR. The pixels themselves are constructed by slicing the final grid Bmapped. As a result, only bright pixels in the enhanced area are affected by the highlighting operation, but dark to mid-grey intensities remain unchanged, such that the highlighting procedure is a selective operation to fill up the top part HL of the available dynamic range. The final output for the image of FIG. 4a is shown in FIG. 4f. The bright reflections in the water and in the clouds are now mapped to the peak brightness of the display (i.e. mixing factor M is near 1); these areas are indicated in FIG. 5 by the white arrows, while larger areas in the sky remain closer to the intermediate intensities of FIG. 4e. Again, by preventing over-enhancement of fine detail contrast a more natural appearance of the scene is maintained.



FIGS. 6-8 show further examples of the performance of the proposed dynamic range extension method. FIGS. 6a to 6c form an illustration of dynamic range extension. Shown are (a) the simulated LDR image and the extended output (b) without and (c) with highlighting. Again, this method is designed for giving a high performance on extremely bright HDR displays. Without such displays, we are here limited to simulations of the extension procedure. To this end, an LDR input is simulated and the extension procedure is used to restore the image to the full available range. Obviously, this simulation is imperfect and cannot provide a realistic appearance of the actual HDR display. Nevertheless, FIGS. 6a to 6c illustrate the maintained balance between regional and fine contrasts as well as the selective use of the peak brightness of the display.



FIGS. 7a to 7d show further examples of dynamic range extension. The simulated LDR images 7a and 7c are shown on the left, the extended HDR versions including highlighting, images 7b and 7d, are shown on the right. In the lower example, ovals annotate highlighted areas. This example illustrates that the large white areas in the snowy mountain are not mapped to the peak brightness of the HDR display as this would be unpleasant. Instead, only small specular highlights are mapped to the full brightness in a very selective procedure. In the upper example only the car headlights are mapped to the peak brightness.



FIGS. 8a to 8d provide further examples of dynamic range extension. The simulated LDR images, FIGS. 8a and 8c, are shown on the left; the extended HDR versions including highlighting are shown on the right in FIGS. 8b and 8d.


In short, the invention can be described as providing a method, unit and display device in which the input image signal is split into a regional contrast signal and a detail signal, followed by stretching separately the dynamic ranges for both signals, wherein the dynamic range for the regional contrast signal is stretched with a higher stretch ratio than the dynamic range for the detail signal. Preferably the stretch ratio for the detail signal is near 1, most preferably 1. In a preferred embodiment highlights are identified and for the highlights the dynamic range is stretched to an even higher degree than for the regional contrast signal.


Stretching the regional contrast signal more than the detail signal reduces mismatch between enhancement of fine details relative to regional contrast and provides a more natural look. The more extreme stretching of the dynamic range for highlighted areas maps these highlights in the top part of the dynamic range. This makes the image sparkle without causing large overly bright areas, which would provide for unpleasant viewing.


The methods and system of the invention may be used in various manners for various purposes, such as, for instance, to enable enhancement algorithms and other video processing algorithms.


The invention is also embodied in a computer program comprising program code means for performing a method according to the present invention, when executed on a computer.


The invention can be used in or for conversion units of image signals and devices in which a conversion of image signals is used, such as display devices, in particular in display devices with HDR capability.


In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.


The word “comprising” does not exclude the presence of other elements or steps than those listed in a claim. The invention may be implemented by any combination of features of various different preferred embodiments as described above.


The invention is not limited to the above given examples, but can be executed in various ways.


For example:


The upper value W0 may be made dependent on a number of parameters, the most important of which are


Color:


The maximum saturation level for reflective red and blue colors is relatively low compared to green and yellow. The value for W0 is, in preferred embodiments, made dependent on the color, to avoid parts that start to glow rather than blind.


Ambient illumination level:


In preferred embodiments the display device is provided with a light sensor to sense the ambient illumination level. The output of the ambient illumination sensor determines the upper value W0, wherein the higher the ambient illumination level, the higher the upper value W0 is set. FIG. 9 illustrates such an embodiment. The display device is provided with a display screen 91. The output signal Vout determines the image displayed on the screen 91. The display device is further provided with an ambient illumination sensor 92 for measuring the ambient illumination. An output of this sensor is an input for the stretcher 3 for stretching the dynamic range of VRC. The output of this sensor may also be coupled to identifier 7 and/or the mapper 8 for determining the highlights and/or for providing the dynamic range stretch for the highlights. In this example the output of sensor 92 is fed directly into identifier 7 and/or mapper 8. Within embodiments of the invention the functional parameters of stretcher 3 and identifier 7 and/or mapper 8 may be linked, so that the sensor signal could be sent to only one of the devices. Likewise, there could be a computer program comprising a look-up table in which functional parameters for stretcher 3 (such as the upper value of the dynamic range and/or the distribution over the dynamic range) and/or for identifier 7 and/or mapper 8 are stored as a function of the sensor signal. The output of the sensor is, in such an embodiment, an input for the computer program and the computer program controls the parameters for stretcher 3, identifier 7 and/or mapper 8.
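The text above only fixes the direction of the dependency (brighter surroundings allow a higher W0). A minimal sketch of one possible mapping from the sensor reading to W0 is given below; the linear form, the lux scale and the 500 to 1000 nit end points are assumptions (the end points merely follow the preferred W0 range mentioned earlier), and in practice the relation could equally be stored in a look-up table as mentioned above.

```python
def upper_value_from_ambient(lux: float,
                             w0_min: float = 500.0,
                             w0_max: float = 1000.0,
                             lux_ref: float = 500.0) -> float:
    """Map an ambient illumination reading (lux) to the upper value W0:
    the brighter the surroundings, the higher W0 is set."""
    frac = min(max(lux / lux_ref, 0.0), 1.0)   # normalise and clamp to [0, 1]
    return w0_min + frac * (w0_max - w0_min)
```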


Graphics Detection


In preferred embodiments a graphics detection unit is used to identify graphics (such as logos, subtitles) to exclude them from enhancement and/or highlighting.


The invention is also embodied in various systems:


The image conversion unit can also form part of an image processing apparatus of various kinds.


For instance, the conversion unit for performing the conversion can be part of a display device, as in FIG. 9.


“Conversion unit” is to be broadly interpreted as any means, including soft-ware, hardware or any combination thereof for performing the method of conversion.


The conversion unit can also be part of, for instance, a recording device. One can record an image or video, wherein the recording device is provided with information on the capabilities of the display device. The recording device applies, in real time or off-line, the method according to the invention, matching the dynamic range W0-K0 and/or WHDR-KHDR to the capabilities of the display screen. The improved image or video can then be displayed, either in real time or afterwards.


In a variation to this system, the software may be on some server on the internet. The user sends the image data of images or videos he/she has to an internet site and provides the internet site with details on the dynamic range capabilities of the display device he/she has. This dynamic range information can be explicit, for instance by specifying the dynamic range, or implicit, for instance by specifying the display device he/she has, or even without the user noticing it, since the type of display is automatically checked. At the server it is checked whether, given the capabilities of the display device, applying the method of the invention to the input image data produces an improved image or video. If the answer is positive, the method of the invention is applied to the input image data, and, after having received payment for the service, the improved output image data, matched to the capabilities of the HDR display, is sent back to the user.


This embodiment allows a user to upgrade his/her “old” image or videos, to make full use of the HDR capabilities of his/her newly bought HDR display without forcing the user to buy a specific conversion unit.


In “pay per view” systems, for instance to watch sport, the user may be given the option of buying standard quality, or upgraded quality, wherein the upgraded quality is matched to the dynamic range of the specific HDR display device he/she has.

Claims
  • 1. Method for converting input image data (Vin) into output image data ({tilde over (V)}1, Vout) wherein the input image data (Vin) is split into at least two signals, a first signal (VRC) comprising regional contrast data and a second signal (VD) comprising detail data, the dynamic range of at least the first signal is stretched to provide a stretched first signal ({tilde over (V)}RC), wherein the dynamic range of the first signal is stretched to a higher degree than applied to the second signal, and the stretched first ({tilde over (V)}RC) and second ({tilde over (V)}D) signals are combined in an output signal ({tilde over (V)}1).
  • 2. Method as claimed in claim 1, wherein the second signal (VD) is made by subtracting the first signal (VRC) from the image input data (Vin).
  • 3. Method as claimed in claim 1, wherein the second signal (VD) is not stretched.
  • 4. Method as claimed in claim 1, wherein the upper value of the dynamic range for the combined stretched first and second signal ({tilde over (V)}1) lies in a range corresponding to light intensities when displayed on a display of 500 to 1000 Nit.
  • 5. Method as claimed in claim 1, wherein the dynamic range of the combined stretched first and second signal is bound by an upper value (W0) and the input image signal (Vin) is analyzed (FHL(Vin)) to identify groups of pixels forming highlights in the image and wherein the pixel data for said identified groups of pixels are converted into a third signal (VHL) such that the third signal covers a dynamic range (WHDR-KHDR) extending upwards above the said upper value (W0) to an upper maximum pixel value (WHDR) and wherein the third signal (VHL) is combined with the combined stretched first and second signal ({tilde over (V)}1) in an output signal (Vout).
  • 6. Method as claimed in claim 5, wherein the upper maximum pixel value (WHDR) lies in a range corresponding to light intensities when displayed on a display of above 1000 Nit.
  • 7. Computer program comprising program code means for performing a method according to the present invention, when executed on a computer.
  • 8. Image conversion unit for converting input image data into output image data, comprising a splitter for splitting the input image data (Vin) into at least two signals, a first signal (VRC) comprising regional contrast data and a second signal (VD) comprising detail data, a stretcher (3) to stretch the dynamic range of at least the first signal to provide a stretched first signal ({tilde over (V)}RC), wherein the dynamic range of the first signal (VRC) is stretched to a higher degree than applied to the second signal (VD), and the unit comprises a combiner (5) to combine the stretched first ({tilde over (V)}RC) and second ({tilde over (V)}D) signals in an output signal ({tilde over (V)}1).
  • 9. Image conversion unit as claimed in claim 8, wherein the stretcher (3) is arranged such that the dynamic range of the combined stretched first and second signal is bound by an upper value (W0) and the unit further comprises an identifier (7) to analyze the input image signal (Vin) to identify groups of pixels forming highlights in the image and a mapper (8) for mapping the pixel data for said identified groups of pixels into a third signal (VHL) such that the third signal covers a dynamic range (WHDR-KHDR) extending upwards above the said upper value (W0) to an upper maximum pixel value (WHDR) and a combiner (9) for combining the third signal (VHL) with the combined stretched first and second signal ({tilde over (V)}1) in an output signal (Vout).
  • 10. Image conversion unit as claimed in claim 9 wherein the stretcher (3) is arranged such that the upper value (W0) of the dynamic range for the combined stretched first and second signal ({tilde over (V)}1) lies in a range corresponding to light intensities when displayed on a display of 500 to 1000 Nit.
  • 11. Image conversion unit as claimed in claim 9 wherein the mapper (8) is arranged such that the upper maximum pixel value (WHDR) lies in a range corresponding to light intensities when displayed on a display of above 1000 Nit.
  • 12. Image processing apparatus comprising receiving means for receiving input image data and an image conversion unit for converting the input image data into output image data as claimed in claim 8.
  • 13. Display device comprising an image processing apparatus comprising: receiving means for receiving input image data, an image conversion unit for converting input image data into output image data, as claimed in claim 8 and a display screen (91).
  • 14. Display device comprising an image processing apparatus comprising receiving means for receiving input image data, an image conversion unit as claimed in claim 11, and a display screen, wherein the upper maximum pixel value corresponds to a value at or near a maximum of the dynamic range of the display screen (91).
  • 15. Display device as claimed in claim 13, wherein the display device comprises an ambient illumination sensor (92) providing an output, wherein the output of the ambient illumination sensor (92) is an input for the stretcher (3).
Priority Claims (1)

  Number      Date      Country  Kind
  09154549.1  Mar 2009  EP       regional

PCT Information

  Filing Document    Filing Date  Country  Kind  371(c) Date
  PCT/IB2010/050905  3/3/2010     WO       00    9/1/2011