The invention concerns a method for mapping colors from a source content-dependent color gamut into a target color gamut. Such an operation is called “content-to-device gamut mapping”.
Gamut mapping of the colors of an image has the goal that the mapped colors lie inside the color gamut of a target display device. A second goal is that the mapped colors make efficient and complete use of the color gamut of the target display device. In general, color gamut mapping can be applied to any color values that are defined within a source color gamut in order to transform them such that they are included in a destination color gamut. The source color gamut can be linked to an image capture device such as a camera or a scanner. It can be linked to a reference display device such as a proof view display device used to control the creation or color processing of images. It can also be linked to a predefined, virtual color gamut as defined for instance in a standard such as ITU-R BT.709. The target color gamut can be linked to a specific reproduction display device. It can also be linked to a predefined gamut for transmission, compression or storage purposes. For example, it can be linked to a predefined, virtual color gamut as defined for instance in a standard such as ITU-R BT.2020. It can be linked to a specific medium such as film or paper prints. In the following, we simplify by speaking of a source display device having a source color gamut and a target display device having a target color gamut.
Color gamut mapping is usually carried out in specific color spaces. Some methods use the L*a*b* color space defined by the CIE in 1976. In L*a*b* space, a constant angle in the a*b* plane is assumed to correspond to an identically perceived hue, and the L* coordinate represents the perceived intensity. Unfortunately, this space was shown not to represent hues well, notably in blue tones. Other methods use the JCh color space defined in the CIECAM02 color appearance model published by the CIE in 2002. In JCh space, the h coordinate is assumed to correspond to the hue perceived by the human eye and the J coordinate to the perceived intensity. JCh space was shown to represent hues and intensity better than L*a*b*, but it is more complex to compute.
When doing gamut mapping in L*a*b* color space, the classical approach is shown in
The article entitled “Color reproduction system based on color appearance model and gamut mapping”, by Fang-Hsuan CHENG et al., published in 2000 in the Proceedings of SPIE, Vol. 4080, pages 167-178, discloses a typical gamut mapping scheme using the JCh color space instead of the L*a*b* color space.
A problem addressed by this invention is the increased computational load required when colors are to be mapped, as described above, in a device-independent color space such as, for instance, the L*a*b* color space. This problem is confirmed by Dong-Woo Kang et al. in their publication entitled “Color decomposition method for multiprimary display using 3D-LUT in linearized Lab space”, published in the Proceedings of the SPIE, Vol. 5667, No. 1, pages 354-363. The authors maintain the transformation of color coordinates back from and forward into a device-independent color space but use, instead of the non-linear L*a*b* color space, a linearized device-independent color space, such that the color transformation of color coordinates is computationally less demanding.
Another solution to this problem of increased computational load is to establish a 3D Look-Up-Table (LUT) that maps device-dependent coordinates directly into other, mapped device-dependent coordinates. For this solution, the three processing steps such as shown in
A second problem addressed by this invention arises when the gamut mapping operator depends on metadata, as described for instance in US2010-220237. If the metadata change, the mapping operator or the precalculated LUT needs to be updated or recalculated, which is usually slow. If the update is slow, the frequency of updates is limited. Changing metadata can then be processed only at the frequency of updates of the mapping operator, so that, finally, the frequency of changes in the metadata is limited. As shown in
In a first application of such a color gamut mapping, the source GBD describes the color gamut of a source display device that is capable of reproducing colors from RGB color coordinates used to control this source display device. The target GBD describes the color gamut of a target display device that is capable of reproducing colors from R′G′B′ color coordinates used to control this target display device. The GBD of the target color gamut depends for instance on the settings of the target display device and/or on the viewing conditions. In this case, the goal of classical color gamut mapping is to map the colors that can be reproduced by the source display device into the color gamut of the target display device. This operation is called device-to-device color gamut mapping. Often, the color gamut of the target display device is smaller than the gamut of the source display device; one may then call this operation gamut compression. The inverse case is called gamut expansion. Both cases may apply at the same time if the color gamut of the target display device is smaller than the color gamut of the source display device for colors with a certain hue, luminance or saturation but is larger for other colors.
In a second application of such a color gamut mapping, the source GBD describes the color gamut of the source content itself instead of the color gamut of the source display device. The source content color gamut is usually smaller than the source device color gamut, notably if the content was produced using the source device. The source content color gamut may be larger than the source device color gamut, for instance when the content was produced using other devices. In these cases, the goal of classical color gamut mapping is to map the colors of the source content into the color gamut of the target display device. This operation is called content-to-device gamut mapping. The source GBD might change if the color characteristics of the content change, for instance from one group of images to another one of a video content, i.e. notably from one scene to another scene of this content. In this case, the color mapping operator needs to be updated at each change of the source content color gamut. For example, a new LUT may be calculated for every scene of a film. Since this update is usually slow, the frequency of change of the source GBD should generally be limited.
The invention concerns a method for content-dependent color gamut mapping of colors from a source content color gamut into a target color gamut. This invention concerns notably “content-to-device gamut mapping”.
The color mapping method of the invention as illustrated on
For the purpose of solving the aforementioned problems, a subject of the invention is a method of, in a color mapper, mapping source colors of images of a video content into the target color gamut of a target color device thus resulting in target colors,
wherein said images are grouped into a plurality of groups associated with different source content-dependent color gamuts, each group comprising at least one image,
said gamut mapping method comprising:
The color mapper is generally comprised in a display device, such as a TV set or a tablet, or in a set-top box, a gateway or any other image processing system.
Each group of images corresponds notably to a specific scene of the video content and is associated with a specific source content-dependent color gamut.
The intermediate color space is generally associated with the definition of the intermediate color gamut. This intermediate color space generally corresponds to the color space of a so-called “intermediate” display device, and the intermediate color gamut is then the color gamut of this display device. This intermediate display device may be virtual, as, for instance, a display device defined in a standard.
Preferably, in the mapping method, each group of at least one image is associated with a specific source content-dependent color gamut having boundaries corresponding to upper and lower limits of colors of the at least one image of this group. These upper and lower limits are for instance defined along straight lines distributed over different directions in the intermediate color space.
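By way of a purely illustrative, non-limiting sketch (the function below and its use of Python/NumPy are assumptions, not part of the invention), such upper and lower limits may be obtained as the extreme projections of the group's colors onto a set of direction vectors:

```python
import numpy as np

def directional_limits(colors, directions):
    """Lower and upper limits of a group's colors along a set of directions.

    colors     : (M, 3) array of the group's colors in the intermediate color space.
    directions : (K, 3) array of direction vectors, e.g. unit vectors.
    Returns two (K,) arrays holding, for each direction, the minimum and
    maximum signed projection of the colors onto that direction.
    """
    proj = np.asarray(colors, dtype=np.float64) @ np.asarray(directions, dtype=np.float64).T
    return proj.min(axis=0), proj.max(axis=0)
```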
Preferably, for said mapping of a source color from the source content-dependent color gamut into said intermediate color gamut, said source color is represented in said intermediate color space, said source content-dependent color gamut and said intermediate color gamut are described in said intermediate color space, notably by color coordinates.
Preferably, the shape of said intermediate color gamut is a hypercube when defined in said intermediate color space.
Preferably, said mapping from the source content-dependent color gamut into said intermediate color gamut is performed directly in said intermediate color space.
Preferably, if said intermediate color space is associated with an intermediate display device,
an intermediate forward transform is associated with said intermediate display device and defined such as to be able to transform color coordinates representing any color in said intermediate color space into color coordinates representing approximately the same color in a device-independent color space,
the target color device is modeled by a target inverse transform which is defined to be able to transform color coordinates representing any color in the device-independent color space into color coordinates representing approximately the same color in the target device-dependent color space associated with said target display device, and
the mapping of said intermediate color from said intermediate color gamut into said target gamut comprises:
A subject of the invention is also a color mapper for mapping source colors of images of a video content into the target color gamut of a target color device thus resulting in target colors, wherein said images are grouped into a plurality of groups associated with different source content-dependent color gamuts, each group comprising at least one image, said color mapper being configured:
The subject of the invention is also a method of mapping source colors of images of a video content into the target color gamut of a target color device thus resulting in target colors,
wherein a single intermediate color gamut is defined for the whole video content,
wherein said images are grouped into a plurality of groups, each group comprising at least one image,
said gamut mapping method comprising, for each group of images, the step of defining a source content-dependent color gamut as including all source colors of images of this group,
and for each source color of image(s) of this group, the steps of:
1/ mapping said source color from said source content-dependent color gamut into said intermediate color gamut, resulting in an intermediate color,
2/ mapping said intermediate color from said intermediate color gamut into said target gamut, resulting in a target color.
The subject of the invention is also a color mapper for mapping source colors of images of a video content into the target color gamut of a target color device thus resulting in target colors, wherein said images are grouped into a plurality of groups, each group comprising at least one image, said color mapper being configured:
The method according to the invention may notably have the following advantages with respect to classical color gamut mapping:
The invention will be more clearly understood on reading the description which follows, given by way of non-limiting example and with reference to the appended figures in which:
The functions of the various elements shown in the figures may be provided through the use of a color mapper comprising dedicated hardware as well as hardware capable of executing software in association with appropriate software. Such hardware may notably include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”) and non-volatile storage. Such a color mapper may be called a “color box”.
In the following main embodiment, the invention is described in the context of mapping colors of images of a video content such that these images can be reproduced with the best color quality using a specific target color device, such as a TV set, a color monitor, a color projector or the display device of any mobile device. The colors of the pixels of each image to reproduce are called “source colors”. The target color device has its own target color gamut, which is, of course, device-dependent.
Images of this video content are grouped into groups of at least one image. In each group comprising a plurality of images, those images are temporally adjacent, i.e. successive. A group of images may notably correspond to a scene of this video content. Another possibility is that each group comprises only one image.
With the source colors of the images of each group, a source color gamut is associated in a manner known per se; this source color gamut encloses all source colors of the images of this group and is therefore content-dependent. Each group then has its specific source content-dependent color gamut. When a group corresponds to a scene, the corresponding source content-dependent color gamut corresponds to the content color gamut of this scene. Generally, the source color gamut varies from one group to another group of the video content to map.
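As a minimal, non-limiting sketch of associating such a gamut with a group (the function name and the axis-aligned bounding box are assumptions for illustration; an actual GBD, such as the IEC 61966-12-1 basic profile used further below, is richer), the per-group color bounds may be gathered as follows:

```python
import numpy as np

def group_color_bounds(images):
    """Coarse per-channel bounds enclosing all source colors of a group.

    `images` is an iterable of HxWx3 arrays of RGB values expressed in the
    intermediate color space.  The result is a pair (lower, upper) of
    length-3 arrays; a full gamut boundary description is richer than this
    simple bounding box, which is only an illustrative stand-in.
    """
    lower = np.full(3, np.inf)
    upper = np.full(3, -np.inf)
    for img in images:
        pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
        lower = np.minimum(lower, pixels.min(axis=0))
        upper = np.maximum(upper, pixels.max(axis=0))
    return lower, upper
```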
An intermediate color gamut that does not change through the whole video content is defined. This intermediate color gamut does not change from one group of images to another one. Preferably, the shape of this intermediate color gamut is a cube. Such a shape has the advantage of being very easy to use for the implementation of the mapping method according to the invention. Here, an intermediate color space of an intermediate display device is used to represent this intermediate color gamut.
To implement the mapping method of this main embodiment, the source colors to map are represented in a manner known per se by RGB color coordinates in this intermediate color space.
In a first series of preliminary steps of the method of mapping source colors of images of the video content, the following operations are performed:
In a second series of preliminary steps of the method according to the invention, for each group of images of the video content to map, the following operations are performed:
Thanks to the definition of this gamut operator, the source colors to map do not need to be transformed into device-independent color coordinates in order to be mapped in the intermediate color space.
Then, for this group of images having its own source content-dependent color gamut and for each source color of the images of this group, the mapping method according to the invention is implemented according to the following four steps illustrated on
Steps 2 to 4 above correspond to the mapping of the intermediate colors from the intermediate color gamut common to all groups of images into the target color gamut, resulting in these target colors as defined above. Other methods of mapping of the intermediate colors from the intermediate color gamut into the target color gamut can be used without departing from the invention.
Generally, the source content-dependent color gamuts and their boundary descriptions vary from one group to another group of images.
Preferably, all color transforms and color mapping operators are represented by Look Up Tables (LUT).
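For illustration, the following sketch applies such a precalculated 3D LUT to colors by trilinear interpolation; the LUT resolution, the normalization of inputs to [0, 1] and the interpolation scheme are assumptions made for the example only, not requirements of the invention.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Map colors through a precalculated 3D LUT using trilinear interpolation.

    rgb : (..., 3) array of input colors scaled to [0, 1].
    lut : (N, N, N, 3) array; lut[i, j, k] holds the mapped color for the
          input color (i, j, k) / (N - 1).
    """
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    base = np.minimum(np.floor(pos).astype(int), n - 2)  # lower lattice corner
    frac = pos - base                                     # position inside the cell
    out = np.zeros(np.shape(rgb)[:-1] + (3,))
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                weight = (frac[..., 0] if dr else 1 - frac[..., 0]) \
                       * (frac[..., 1] if dg else 1 - frac[..., 1]) \
                       * (frac[..., 2] if db else 1 - frac[..., 2])
                corner = lut[base[..., 0] + dr, base[..., 1] + dg, base[..., 2] + db]
                out += weight[..., np.newaxis] * corner
    return out
```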
A specific example of the main embodiment above will now be described in reference to
First, an intermediate device-dependent color gamut is chosen such that the boundary of this intermediate device-dependent color gamut is a three-dimensional cube in an intermediate color space. An intermediate forward model is then inferred such that it can transform color coordinates representing any color in the intermediate color space into color coordinates representing approximately the same color in the device-independent color space, here the XYZ color space.
This intermediate color space can be associated with an intermediate display device. In this case, the intermediate color space is output-referred and device-dependent, and the intermediate device-dependent color gamut is the color gamut of this intermediate display device. This intermediate color space can alternatively be associated with an intermediate image-capturing device such as a camera. In this case, the intermediate color space is scene-referred and the intermediate device-dependent color gamut is the capture analysis color gamut of this intermediate image-capturing device according, for instance, to the report “Capture Color Analysis Gamuts” published by Jack Holm (see: http://www.color.org/documents/CaptureColorAnalysisGamuts.pdf).
As described above, images of the video content to map are grouped into a plurality of groups, corresponding, for instance, to the different scenes of this content.
Then, for each group, the mapping method of the example is implemented in four steps similar to the four steps of the main embodiment above:
where the first step is fast, because it is implemented directly in an RGB intermediate color space, and is meant to be reactive to dynamic metadata that may change from one group of images to another group of the video content, while the other three steps 2, 3 and 4 make use, according to the state of the art, of a device-independent color space and may be slower, but without any drawback because, not being meant to be reactive to dynamic metadata, they can be precalculated, for instance into a LUT. If these steps are precalculated in a LUT, there is no need to transform any color coordinates into a device-independent color space.
To implement step 1 above, the gamut boundary description (GBD) that describes the source content-dependent gamut of each group in the intermediate color space is for instance constructed according to the basic profile of the IEC 61966-12-1 “Gamut ID” format. Each GBD of a source content-dependent gamut is described by five colors: red, green, blue, black and white. From this information, we build six triangles that together form the boundary of the gamut:
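The six triangles themselves follow from the cited “Gamut ID” basic profile; the sketch below shows one plausible triangulation, assumed here purely for illustration, in which the white point and the black point are each joined to the three edges of the red-green-blue triangle:

```python
def gbd_triangles(red, green, blue, black, white):
    """One plausible triangulation of a gamut boundary described by five
    colors (an assumption for illustration; the normative construction is
    given by the IEC 61966-12-1 basic profile).  Each argument is a
    length-3 color coordinate in the intermediate color space."""
    upper = [(white, red, green), (white, green, blue), (white, blue, red)]
    lower = [(black, red, green), (black, green, blue), (black, blue, red)]
    return upper + lower  # six triangles forming a closed boundary
```

For instance, gbd_triangles((255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0), (255, 255, 255)) returns the six faces of a double pyramid built over the red-green-blue triangle.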
The GBD for describing the intermediate device-dependent color gamut in the intermediate color space is simply the cube in the device-dependent RGB color space containing all valid RGB values. Preferably, the cube is limited in RGB color space by the coordinates 0 and 255 in all three directions, R, G and B, as for an 8-bit display device.
Therefore, in this non-limiting example of step 1, any source color of the images of a same group, which is represented by its RGB coordinates in the intermediate color space, is mapped to an intermediate color having R″G″B″ coordinates by the following operations:
onto the grey axis giving the anchor point A, the grey axis being the axis between the color (0,0,0) and the color (255,255,255).
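A minimal sketch of such a mapping along straight lines is given below; it assumes a linear rescaling from the anchor point A through the source color, so that the source content-dependent gamut boundary is mapped onto the boundary of the intermediate cube. The helper that returns the distance from A to the source GBD along a given direction, for instance by intersecting the ray with the six triangles above, is left as a parameter, and the linear rescaling itself is an assumption, not the only operator compatible with the invention.

```python
import numpy as np

def map_to_cube(color, source_boundary_distance, cube_max=255.0):
    """Sketch of step 1: rescale a source color along the line through its
    anchor point so that the source content-dependent gamut fills the
    intermediate RGB cube.

    color : length-3 array, RGB coordinates in the intermediate color space.
    source_boundary_distance : callable(anchor, direction) -> float giving
        the distance from the anchor to the source content GBD along
        `direction` (e.g. by intersecting the ray with the GBD triangles).
    """
    c = np.asarray(color, dtype=np.float64)
    # Anchor point A: projection of the color onto the grey axis.
    a = np.full(3, c.mean())
    if np.allclose(c, a):
        return c                      # colors on the grey axis stay in place
    direction = (c - a) / np.linalg.norm(c - a)
    # Distance from A to the cube boundary along the same direction.
    t_cube = np.inf
    for k in range(3):
        if direction[k] > 0:
            t_cube = min(t_cube, (cube_max - a[k]) / direction[k])
        elif direction[k] < 0:
            t_cube = min(t_cube, -a[k] / direction[k])
    # Distance from A to the source content-dependent gamut boundary.
    t_source = source_boundary_distance(a, direction)
    # Linear rescaling: the source boundary is mapped onto the cube boundary.
    return a + (c - a) * (t_cube / t_source)
```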
Other, known gamut mapping algorithms might be used, such as algorithms using a single anchor point for all colors, or algorithms that map along curved lines instead of straight lines.
The outputs of step 1 are R″G″B″ color coordinates representing intermediate colors in the intermediate color space.
As intermediate color space, we use the RGB color space as defined by the ITU-R BT.709 standard. The intermediate color gamut is then the color gamut according to the ITU-R BT.709 standard. Then, in step 2, the application of the intermediate forward color transform is performed according to the following two operations that are well known in the context of this standard:
and equivalently for G″ and B″;
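For illustration only, the two well-known operations may be sketched as follows, assuming values normalized to [0, 1]: a linearisation of the R″G″B″ values by the inverse BT.709 transfer function, followed by the 3×3 matrix from linear BT.709 RGB (D65 white point) to CIE XYZ; the rounded coefficients and the exact numerical form are assumptions of this sketch.

```python
import numpy as np

# Linear BT.709 RGB (D65 white point) to CIE XYZ, coefficients rounded.
RGB709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def bt709_to_xyz(rgb_nonlinear):
    """Sketch of the intermediate forward color transform for a BT.709
    intermediate color space.

    rgb_nonlinear : (..., 3) array of non-linear R''G''B'' values in [0, 1].
    Returns the corresponding CIE XYZ coordinates (Y in [0, 1]).
    """
    v = np.clip(np.asarray(rgb_nonlinear, dtype=np.float64), 0.0, 1.0)
    # Operation 1: invert the BT.709 transfer function (applied to R'' and
    # equivalently to G'' and B'').
    linear = np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1.0 / 0.45))
    # Operation 2: 3x3 matrix from linear BT.709 RGB to CIE XYZ.
    return linear @ RGB709_TO_XYZ.T
```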
In step 3, any known gamut mapping method can be used. This step corresponds to the classical case of device-to-device color gamut mapping in a device-independent color space. The outputs of step 3 are color coordinates representing target colors in the CIE XYZ color space.
In step 4, we describe the target inverse color transform by an ICC profile according to the standard ISO 15076-1:2005 entitled “Image technology colour management—Architecture, profile format and data structure—Part 1: Based on ICC.1:2004-10”. If the maker of the target display device does not deliver the ICC profile, standard color characterization tools can be used to produce it. In order to specify the target display device model, we use the colorimetric intent transforms of the ICC profile. For the target inverse transform, we use the colorimetric rendering intent transform B to A of the ICC profile. The outputs of step 4 are R′G′B′ color coordinates representing target colors in the device-dependent color space of the target display device.
The diagram shown on
The “device dependent” module is configured to apply the LUT sent by the provider to the intermediate colors to map, as a device-dependent operator. The source content-dependent gamut boundary description “content GBD” specific to each group of images is sent to the “content dependent” module and used by this module to define dynamically a “content-dependent” operator adapted to implement step 1 above, directly in the intermediate color space. This content GBD can be sent dynamically since this content-dependent operator is non-complex and operates directly in the intermediate color space. For example, each time a new group of images, notably a new scene, starts, a new “content GBD” is sent. Thus, for a video content, there will generally be more than one “content GBD” sent to the terminal. As shown on
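The division of work inside the terminal can be summarized by the following illustrative sketch, in which the class name, the module interfaces and the callables are assumptions and not features of the invention: the content-dependent operator is cheap to rebuild each time a new “content GBD” arrives, while the device-dependent operator, for instance the 3D LUT application sketched earlier, is precalculated once and left unchanged.

```python
class Terminal:
    """Illustrative sketch of the receiving terminal: a dynamic
    content-dependent module followed by a static device-dependent module."""

    def __init__(self, device_dependent_op):
        # Precalculated operator implementing steps 2 to 4, e.g. a 3D LUT lookup.
        self.device_dependent_op = device_dependent_op
        # Identity until a first content GBD is received.
        self.content_dependent_op = lambda color: color

    def on_content_gbd(self, build_step1_operator, content_gbd):
        # Rebuilding the step-1 operator is cheap, so it can follow scene cuts.
        self.content_dependent_op = build_step1_operator(content_gbd)

    def map_color(self, source_rgb):
        intermediate = self.content_dependent_op(source_rgb)   # step 1, dynamic
        return self.device_dependent_op(intermediate)          # steps 2 to 4, static
```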
The target colors provided by the “device dependent” module are sent to the target “display” device which is also part of the terminal.
It is to be understood that the invention may be implemented in a color mapper using various forms of hardware, software, firmware, special purpose processors, or combinations thereof. The invention may notably be implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”) and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU and/or by a GPU. In addition, various other peripheral units may be connected to the computer platform, such as an additional data storage unit and a printing unit.
While the present invention is described with respect to particular examples and preferred embodiments, it is understood that the present invention is not limited to these examples and embodiments.
Number | Date | Country | Kind
--- | --- | --- | ---
13306869.2 | Dec 2013 | EP | regional
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/EP2014/078192 | 12/17/2014 | WO | 00