IMAGE PROCESSING APPARATUS AND METHOD THEREFOR

Information

  • Patent Application Publication Number: 20150015597
  • Date Filed: July 10, 2014
  • Date Published: January 15, 2015
Abstract
According to one embodiment, there is provided an image processing apparatus including: first circuitry and second circuitry. The first circuitry sets for first color information in a first color gamut a correction quantity of a brightness defined depending on a lightness and a chroma of the first color information on a basis of a difference between a target brightness and the brightness of the first color information. The second circuitry corrects at least one of the lightness and the chroma of the first color information such that the brightness of the first color information is corrected by the correction quantity to obtain corrected color information in the first color gamut.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2013-145829, filed Jul. 11, 2013; the entire contents of which are incorporated herein by reference.


FIELD

Embodiments described herein relate to an image processing apparatus and a method therefor.


BACKGROUND

Methods have been discussed for performing color conversion that takes human visual characteristics into account when color information existing in one color gamut is displayed on a display device having a different color gamut. For example, in order to display color information having a wide color gamut on an image display device having a narrow color gamut, there is a known method that performs color conversion on a color of the wide color gamut, taking the human visual characteristics into account, such that the hue and the gradation property perceived by a human remain constant.


However, this method has a difficulty: although the color information displayed on the image display device having the narrow color gamut appears equivalent to the wide-gamut color information in hue and gradation property, the chroma may decrease together with the brightness as compared with the color information having the wide color gamut.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of an image display device according to a first embodiment;



FIG. 2 is a flowchart of an operation according to the first embodiment;



FIG. 3 is an illustration of a correction method of lightness and chroma according to the first embodiment;



FIG. 4 is an illustration of a correction method of lightness and chroma according to the first embodiment;



FIG. 5 is a diagram of an image display device according to a second embodiment;



FIG. 6 is a flowchart of an operation according to the second embodiment;



FIG. 7 is a diagram of an image display device according to a third embodiment; and



FIG. 8 is a diagram that illustrates a hardware configuration of an image processing device according to an embodiment.





DETAILED DESCRIPTION

According to one embodiment, there is provided an image processing apparatus including: first circuitry and second circuitry.


The first circuitry sets for first color information in a first color gamut a correction quantity of a brightness defined depending on a lightness and a chroma of the first color information on a basis of a difference between a target brightness and the brightness of the first color information.


The second circuitry corrects at least one of the lightness and the chroma of the first color information such that the brightness of the first color information is corrected by the correction quantity to obtain corrected color information in the first color gamut.


Below, a description is given of embodiments of the present invention with reference to the drawings.


First Embodiment


FIG. 1 is a block diagram of an image display device according to a first embodiment.


This image display device includes an image processing apparatus 100 and a display device 120 according to the embodiment. The image processing apparatus 100 includes a color gamut information acquisitor 102, a conversion unit 104, a setting unit 106, and a correction unit 108.


The color gamut information acquisitor 102 acquires color gamut information 103a and 103b from the outside, or holds it internally in advance, and sends the color gamut information 103a and 103b to the conversion unit 104. The color gamut information acquisitor 102 may read the color gamut information 103a and 103b from a storage not shown or acquire the color gamut information 103a and 103b via a user interface. The color gamut information 103a is the color gamut information of an input image signal 101 to be input to the image processing apparatus 100. The color gamut information 103b is the color gamut information of the display device 120 which displays an image. A color gamut of the display device 120 (first color gamut) is narrower than a color gamut of the input image (second color gamut). For example, the color gamut of the display device 120 is encompassed by the color gamut of the input image. In some cases, however, only a part of the color gamut of the display device 120, rather than all of it, is encompassed by the color gamut of the input image.


The conversion unit 104 performs color conversion on the input image signal 101 on the basis of the color gamut information 103a to acquire a converted signal 105a representing lightness, chroma, and hue, and sends the converted signal 105a to the setting unit 106. The conversion unit 104 performs the color conversion on the input image signal 101 on the basis of the color gamut information 103b to acquire a converted signal 105b representing lightness, chroma, and hue, and sends the converted signal 105b to the setting unit 106.


The setting unit 106 calculates a correction quantity of brightness in the converted signal 105b, more specifically, a correction quantity 107 of at least one of the lightness and the chroma, and sends the calculated correction quantity 107 to the correction unit 108.


The correction unit 108 corrects the converted signal 105b in accordance with the correction quantity 107 to obtain a corrected image signal 109. The correction unit 108 outputs the corrected image signal 109 to the display device 120.


The display device 120 displays the corrected image signal 109 input from the correction unit 108. In the embodiment, a case where the display device 120 is a liquid crystal display is described as an example. However, the display device 120 may be a plasma display or a CRT display, or a projection-type device such as a projector.


Next, a description is given of an operation of the image processing apparatus 100 in the embodiment.



FIG. 2 is a flowchart showing the operation of the image processing apparatus 100 in the embodiment.


First, the color gamut information acquisitor 102 acquires the color gamut information 103a of the input image signal 101 and the color gamut information 103b of the display device 120 and sends these to the conversion unit 104 (S201).


The embodiment assumes that the color gamut of the display device 120 is the color gamut defined by ITU-R BT.709, which is a common color gamut for a display device such as an LCD. The color gamut of the input image signal, on the other hand, is assumed to be the color gamut defined by ITU-R BT.2020, which is wider than the color gamut of the display device 120. These exemplary color gamuts are one example of a case where the color gamut of the input image signal 101 is wider than that of the display device 120. The color gamuts of the display device 120 and the input image signal 101 are not limited to ITU-R BT.709 and ITU-R BT.2020.


The conversion unit 104 converts the input image signal 101 into the converted signals 105a and 105b representing the lightness, chroma, and hue (S202).


Specifically, the conversion unit 104 first performs gamma conversion of Formula 1 on a gradation value for each of R, G, and B subpixels of each pixel of the input image signal 101 input in an RGB format.











Rin = (Rin′/255)^γ

Gin = (Gin′/255)^γ

Bin = (Bin′/255)^γ  (1)







Here, Rin′, Gin′, and Bin′ are the gradation values of the R, G, and B subpixels respectively in the input video signal, and each gradation value is represented by 8 bits (0 to 255). Rin, Gin, and Bin are the gradation values obtained by performing the gamma conversion on Rin′, Gin′, and Bin′, and are represented as relative values from 0 to 1. “γ” represents a gamma coefficient.


Here, a configuration for performing the gamma conversion by Formula 1 is shown. As another method, a look-up table in which an input gradation value and a gradation value after the gamma conversion are associated with each other may be prepared in advance, and the gamma conversion operation may be performed by referring to the look-up table. The above gamma conversion is performed on the values of the R, G, and B subpixels of every pixel in the input video signal.
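As an illustration of this step, the following Python sketch applies the gamma conversion of Formula 1 to one pixel; the function name and the gamma value of 2.2 are assumptions for illustration, not values taken from the embodiment.

    import numpy as np

    def gamma_convert(rgb_8bit, gamma=2.2):
        """Formula 1: normalize 8-bit Rin', Gin', Bin' to [0, 1] and raise to gamma."""
        return (np.asarray(rgb_8bit, dtype=np.float64) / 255.0) ** gamma

    # Example: one pixel with gradation values (200, 128, 64)
    print(gamma_convert([200, 128, 64]))   # -> Rin, Gin, Bin as relative values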


The conversion unit 104 receives from the color gamut information acquisitor 102 the color gamut information 103a of BT.2020 as the color gamut of the input image and the color gamut information 103b of BT.709 as the color gamut of the display device.


The conversion unit 104 calculates, from the color gamut information 103b and the input image signal 101, tristimulus values (X709, Y709, Z709) converted on the basis of the color gamut defined by BT.709. It also calculates, from the color gamut information 103a and the input image signal 101, tristimulus values (X2020, Y2020, Z2020) converted on the basis of the color gamut defined by BT.2020.


A calculation formula for the tristimulus values (X709, Y709, Z709) is represented by Formula 2A, and a calculation formula for the tristimulus values (X2020, Y2020, Z2020) is represented by Formula 2B.










(X709, Y709, Z709)^T = M (Rin, Gin, Bin)^T  (2A)

(X2020, Y2020, Z2020)^T = N (Rin, Gin, Bin)^T  (2B)







Here, “M” is a 3×3 color converting matrix representing the color gamut information 103b, that is, the converting matrix for conversion corresponding to the maximum color reproducing area reproduced by the BT.709 color gamut. “N” is a 3×3 color converting matrix representing the color gamut information 103a, that is, the converting matrix for conversion corresponding to the maximum color reproducing area reproduced by the BT.2020 color gamut.


Here, the color converting matrices are held for calculating the tristimulus values, which are calculated for each pixel from Rin, Gin, and Bin. As another method, the relationship between the tristimulus values obtained by the color conversion and Rin′, Gin′, and Bin′ may be held in a look-up table, and the tristimulus values may be found for each pixel from Rin′, Gin′, and Bin′ by referring to the look-up table.


In a case where the input image signal is input in a format other than RGB, such as YUV, the XYZ tristimulus values may be found by referring to a LUT that converts the input signal value of the YUV or the like directly into the XYZ tristimulus values.
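A minimal Python sketch of Formulas 2A and 2B. The matrices below are commonly quoted RGB-to-XYZ matrices for BT.709 and BT.2020 with a D65 white point; they are only stand-ins for M and N, since the embodiment itself states only that M and N are derived from the color gamut information 103b and 103a.

    import numpy as np

    # Stand-in matrices (assumed, D65 white) for M (BT.709) and N (BT.2020).
    M_BT709 = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
    N_BT2020 = np.array([[0.6370, 0.1446, 0.1689],
                         [0.2627, 0.6780, 0.0593],
                         [0.0000, 0.0281, 1.0610]])

    def rgb_to_xyz(rgb_linear, matrix):
        """Formulas 2A/2B: multiply gamma-converted RGB by a 3x3 conversion matrix."""
        return matrix @ np.asarray(rgb_linear)

    rgb = [0.5, 0.3, 0.1]                 # Rin, Gin, Bin after Formula 1
    xyz_709 = rgb_to_xyz(rgb, M_BT709)    # (X709, Y709, Z709)
    xyz_2020 = rgb_to_xyz(rgb, N_BT2020)  # (X2020, Y2020, Z2020)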


Next, the tristimulus values X709, Y709, and Z709 calculated by the color conversion are converted into L*709, a*709, and b*709 of the CIE L*a*b* color space. L*709, a*709, and b*709 are calculated in accordance with Formula 3.











L*709 = 116 × f(Y709/Yw) − 16

a*709 = 500 × {f(X709/Xw) − f(Y709/Yw)}

b*709 = 200 × {f(Y709/Yw) − f(Z709/Zw)}  (3)







Here, f(Y709/Yw) is calculated as shown in Formula 4, and f(X709/Xw) and f(Z709/Zw) are also calculated similarly.











f(Y709/Yw) = 7.787 × (Y709/Yw) + 16/116   (Y709/Yw ≤ 0.008856)

f(Y709/Yw) = (Y709/Yw)^(1/3)              (Y709/Yw > 0.008856)  (4)







Xw, Yw, and Zw represent the tristimulus values of a perfect reflecting diffuser. Further, a*709 and b*709 are converted by Formula 5 into a chroma C*709 and a hue h709.











C*709 = {(a*709)^2 + (b*709)^2}^(1/2)

h709 = tan^−1(b*709/a*709)  (5)







Similarly to Formula 3 to Formula 5, a lightness L*2020, a chroma C*2020, and a hue h2020 are also calculated from the tristimulus values X2020, Y2020, and Z2020.
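The conversion of Formulas 3 to 5 can be sketched as follows; the white point values are taken here as approximately D65 purely for illustration, and arctan2 is used in place of the plain tan^−1 of Formula 5 so that the hue angle lands in the correct quadrant.

    import math

    def f(t):
        """Formula 4: piecewise function used in the L*a*b* conversion."""
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    def xyz_to_lch(x, y, z, xw=0.9505, yw=1.0, zw=1.089):
        """Formulas 3 and 5: tristimulus values -> (L*, C*, h)."""
        fx, fy, fz = f(x / xw), f(y / yw), f(z / zw)
        L = 116.0 * fy - 16.0                        # lightness
        a = 500.0 * (fx - fy)
        b = 200.0 * (fy - fz)
        C = math.hypot(a, b)                         # chroma
        h = math.degrees(math.atan2(b, a)) % 360.0   # hue angle in degrees
        return L, C, h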


The input image color information (lightness L*2020, chroma C*2020, and hue h2020) calculated in this way is sent as the converted signal 105a to the setting unit 106 and the correction unit 108. The display device color information (lightness L*709, chroma C*709, and hue h709) is sent as the converted signal 105b to the setting unit 106 and the correction unit 108.


Next, the setting unit 106 calculates the correction quantity 107 of the lightness and chroma in the converted signal 105b from the converted signals 105a and 105b (S203).


Specifically, first, the setting unit 106 calculates brightness perceived respectively for the input image color information (L*2020, C*2020, and h2020) and the display device color information (L*709, C*709, and h709).


Here, the perceived brightness is described. As is known as the Helmholtz-Kohlrausch effect, human eyes generally perceive a chromatic color as brighter than an achromatic color of the same lightness, and perceive a color as brighter the more vivid it is.


The embodiment assumes that, on the basis of the Helmholtz-Kohlrausch effect, a brightness B* perceived by the human eyes is defined depending on a lightness L*, chroma C*, and hue h of a target object in accordance with Formula 6.






B* = L* + (F(h) + T) × C*  (6)


Here, “F” is a function that outputs different values depending on the hue, and “T” is a constant. In accordance with Formula 6, a brightness B*2020 with which the color information of the input image is perceived is calculated as Formula 7A. Similarly, in accordance with Formula 6, a brightness B*709 with which the color information in the display device 120 is perceived is calculated as Formula 7B.






B*2020(L*2020, C*2020, h2020) = L*2020 + (F(h2020) + T) × C*2020  (7A)

B*709(L*709, C*709, h709) = L*709 + (F(h709) + T) × C*709  (7B)


Further, a difference ΔB* between both perceived brightnesses is calculated as the correction quantity of the brightness. The difference ΔB* between the brightness B*2020 (L*2020, C*2020, h2020) with which the color information of the input image is perceived and the brightness B*709 (L*709, C*709, h709) with which the color information of the display device 120 is perceived is calculated as Formula 8.













ΔB* = B*2020(L*2020, C*2020, h2020) − B*709(L*709, C*709, h709)
    = (L*2020 − L*709) + (F(h2020) + T) × C*2020 − (F(h709) + T) × C*709  (8)
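Formulas 6 to 8 reduce to a few lines of Python; the hue-dependent function F and the constant T are left as parameters because the embodiment does not give their concrete forms.

    def perceived_brightness(L, C, h, F, T):
        """Formula 6: B* = L* + (F(h) + T) * C*."""
        return L + (F(h) + T) * C

    def brightness_difference(lch_2020, lch_709, F, T):
        """Formula 8: correction quantity of the brightness."""
        return (perceived_brightness(*lch_2020, F, T)
                - perceived_brightness(*lch_709, F, T))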







Next, from the difference ΔB* between the perceived brightnesses, correction quantities ΔL* and ΔC* of the lightness and chroma are set with respect to the color information (L*709, C*709, h709) in the display device 120.


For example, if the difference ΔB* (correction quantity of the brightness) between the perceived brightnesses is entirely corrected using the lightness, the correction quantity ΔL* of the lightness and the correction quantity ΔC* of the chroma are calculated as Formula 9 in consideration of Formula 6.





ΔL*=ΔB*





ΔC*=0  (9)


On the other hand, if the difference ΔB* between the perceived brightnesses is entirely corrected using the chroma, the correction quantity ΔL* of the lightness and the correction quantity ΔC* of the chroma are calculated as Formula 10 in consideration of Formula 6.











ΔL* = 0

ΔC* = ΔB*/(F(h709) + T)  (10)







Color information (L*L, C*L, hL) having the difference ΔB* between the perceived brightnesses with only the lightness being corrected is calculated as Formula 11A. Color information (L*C, C*C, hC) having the difference ΔB* between the perceived brightnesses with only the chroma being corrected is calculated as Formula 11B.











L*L = L*709 + ΔB*

C*L = C*709

hL = h709  (11A)

L*C = L*709

C*C = C*709 + ΔB*/(F(h709) + T)

hC = h709  (11B)
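The two endpoints of Formulas 11A and 11B, used below to span the constant-brightness line of FIG. 3, can be written as the following sketch, again with F and T as parameters.

    def lightness_only_point(L709, C709, h709, dB):
        """Formula 11A: absorb the whole brightness difference into the lightness."""
        return L709 + dB, C709, h709

    def chroma_only_point(L709, C709, h709, dB, F, T):
        """Formula 11B: absorb the whole brightness difference into the chroma."""
        return L709, C709 + dB / (F(h709) + T), h709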







Here, FIG. 3 shows, in the color reproducing area represented by the lightness, chroma, and hue, the color information (L*709, C*709, h709) in the display device 120, the color information (L*L, C*L, hL) with only the lightness being corrected, and the color information (L*C, C*C, hC) with only the chroma being corrected. A horizontal axis indicates the chroma and a vertical axis indicates the lightness. A solid line represents a color gamut boundary of the display device, and a broken line represents a color gamut boundary of the input image. The color gamut of the input image is wider than the color gamut of the display device.


In FIG. 3, a line connecting the color information (L*L, C*L, hL) with the color information (L*C, C*C, hC) corresponds to a graph representing a relationship between the lightness and the chroma where the perceived brightness B* represented by Formula 6 is constant. The color information existing on this line has the constant perceived brightness B* represented by Formula 6. For this reason, the color information existing on this line is the same as the color information (L*2020, C*2020, h2020) of the input image in the perceived brightness.


Accordingly, the setting unit 106 sets a correction quantity ΔL*out of the lightness and a correction quantity ΔC*out of the chroma such that the color information after the correction with respect to the color information (L*709, C*709, h709) of the display device 120 is positioned on the line connecting the color information (L*L, C*L, hL) with the color information (L*C, C*C, hC).


For example, the ΔL*out and the ΔC*out may be a correction quantity for correcting only with the lightness as shown by Formula 9 or a correction quantity for correcting only with the chroma as shown by Formula 10.


A correction quantity for correcting both the lightness and the chroma may be set as shown by Formula 12 by dividing the difference ΔB* between the perceived brightnesses between the lightness and the chroma in accordance with a constant “α” (a ratio given in advance). Here, “α” is a constant having a value from 0 to 1. In other words, (L*709, C*709, h709) is corrected to the color information corresponding to the point which internally divides, by the ratio “α” given in advance, the portion from the point (L*L, C*L, hL), which has the same chroma value as (L*709, C*709, h709), to the point (L*C, C*C, hC), which has the same lightness value as (L*709, C*709, h709).











ΔL*out = ΔB* × α

ΔC*out = ΔB* × (1 − α)/(F(h709) + T)  (12)
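A sketch of how the setting unit 106 might compute the correction quantities of Formula 12; α = 1 reproduces the lightness-only case of Formula 9 and α = 0 the chroma-only case of Formula 10.

    def correction_quantities(dB, h709, F, T, alpha):
        """Formula 12: split the brightness correction between lightness and chroma."""
        dL_out = dB * alpha
        dC_out = dB * (1.0 - alpha) / (F(h709) + T)
        return dL_out, dC_out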







The setting unit 106 sends the calculated ΔL*out and ΔC*out as the correction quantity 107 to the correction unit 108.


The correction unit 108 calculates the corrected image signal 109 from the converted signal 105b and the correction quantity 107 (S204).


Specifically, corrected color information (L*out, C*out, hout) is calculated from the correction quantity ΔL*out of the lightness, the correction quantity ΔC*out of the chroma, and the color information (L*709, C*709, h709) in accordance with Formula 13. Here, ΔL*out and ΔC*out satisfy Formula 6 for the difference ΔB* between the perceived brightnesses calculated in accordance with Formula 8. Therefore, the brightness with which the color information (L*out, C*out, hout) is perceived is the same as the brightness with which the color information (L*2020, C*2020, h2020) is perceived. FIG. 3 shows a case where (L*out, C*out, hout) is the intersection between the line connecting the color information (L*L, C*L, hL) with the color information (L*C, C*C, hC) and the color gamut boundary of the display device, but the correction is not limited thereto. A way of calculating this intersection is described in the second embodiment.






L*out = L*709 + ΔL*out

C*out = C*709 + ΔC*out

hout = h709  (13)


The correction unit 108 converts C*out and hout into a*out and b*out in accordance with Formula (14).






a*out = C*out × cos(hout)

b*out = C*out × sin(hout)  (14)


The correction unit 108 converts L*out, a*out, and b*out into tristimulus values Xout, Yout, and Zout in accordance with Formula (15).











Yout = f(Y/Yw) × (1/7.787) × Yw   (f(Y/Yw) ≤ 0.206893)

Yout = (f(Y/Yw))^3 × Yw           (0.206893 < f(Y/Yw))  (15)







Xout and Zout are also calculated similarly to Yout. Here, f(X/Xw), f(Y/Yw), and f(Z/Zw) are calculated as Formula (16).











f(X/Xw) = a*/500 + (L* + 16)/116

f(Y/Yw) = (L* + 16)/116

f(Z/Zw) = (L* + 16)/116 − b*/200  (16)







The correction unit 108 converts Xout, Yout, and Zout into output signals Rout, Gout, and Bout in the RGB format in accordance with the color reproducing area of the display device 120 as shown by Formula (17).










(Rout, Gout, Bout)^T = M^−1 (Xout, Yout, Zout)^T  (17)







Here, “M^−1” is the inverse matrix of the 3×3 color converting matrix M shown by Formula (2A).
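A sketch of how the correction unit 108 could chain Formulas 13 to 17; the inverse branch follows Formula 15 as written, the white point values are assumed to be approximately D65, and the matrix M is the same stand-in used in the earlier sketch.

    import math
    import numpy as np

    def f_inverse(fv, w):
        """Formula 15: recover a tristimulus value from f(value/white)."""
        return (fv / 7.787) * w if fv <= 0.206893 else (fv ** 3) * w

    def corrected_lch_to_rgb(L709, C709, h709, dL_out, dC_out, M,
                             xw=0.9505, yw=1.0, zw=1.089):
        # Formula 13: apply the correction quantities, keep the hue.
        L, C, h = L709 + dL_out, C709 + dC_out, h709
        # Formula 14: chroma and hue -> a*, b*.
        a = C * math.cos(math.radians(h))
        b = C * math.sin(math.radians(h))
        # Formula 16: L*, a*, b* -> f(X/Xw), f(Y/Yw), f(Z/Zw).
        fy = (L + 16.0) / 116.0
        fx = a / 500.0 + fy
        fz = fy - b / 200.0
        # Formula 15: back to tristimulus values Xout, Yout, Zout.
        xyz = np.array([f_inverse(fx, xw), f_inverse(fy, yw), f_inverse(fz, zw)])
        # Formula 17: XYZ -> RGB with the inverse of the display matrix M.
        return np.linalg.inv(M) @ xyz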


Here, a description is given of a correction method in a case where the RGB signal values Rout, Gout, and Bout do not fall within a signal value range 0 to 1.


First, the minimum value of the RGB signal values Rout, Gout, and Bout is represented as Lmin = min(Rout, Gout, Bout), and the input value corresponding to Lmin is represented as Mmin. In other words, Mmin is whichever of Rin, Gin, and Bin corresponds to Lmin. Here, if Lmin falls below 0, the RGB signal values Rout, Gout, and Bout are updated in accordance with Formula (18).


Next, the maximum value of the updated RGB signal values Rout, Gout, and Bout is represented as Lmax = max(Rout, Gout, Bout), and the input value corresponding to Lmax is represented as Mmax. In other words, Mmax is whichever of Rin, Gin, and Bin corresponds to Lmax. Here, if Lmax exceeds 1, the RGB signal values Rout, Gout, and Bout are updated in accordance with Formula (19).


Finally, the RGB signal values (Rout, Gout, Bout) are output as the corrected image signal 109.













Rout = (0 − Mmin)/(Lmin − Mmin) × (Rout − Rin) + Rin
Gout = (0 − Mmin)/(Lmin − Mmin) × (Gout − Gin) + Gin
Bout = (0 − Mmin)/(Lmin − Mmin) × (Bout − Bin) + Bin  (18)

Rout = (1 − Mmax)/(Lmax − Mmax) × (Rout − Rin) + Rin
Gout = (1 − Mmax)/(Lmax − Mmax) × (Gout − Gin) + Gin
Bout = (1 − Mmax)/(Lmax − Mmax) × (Bout − Bin) + Bin  (19)







This correction ensures that, even if the RGB signal values fall outside the range of 0 to 1 and the represented color lies outside the color gamut of the display device, the color can be corrected to the one at the intersection between the color gamut boundary of the display device and the line connecting the original color with the color obtained by correcting the perceived brightness.


Here, the “color obtained by correcting the perceived brightness” is a color whose lightness and chroma are corrected and which exists outside the color gamut of the display device; it corresponds, for example, to a point P1 shown in FIG. 4. The process for fitting the point P1 into the color gamut of the display device is the correction in accordance with Formulas 18 and 19, and the point P1 is corrected into a point P2 as a result of the correction of Formulas 18 and 19. In other words, the “color at the intersection between the color gamut boundary of the display device and the line connecting the original color with the color obtained by correcting the perceived brightness” is the point P2, which is the intersection between the line connecting (L*709, C*709) with the point P1 and the color gamut boundary of the display device. In this way, a point whose lightness and chroma have been corrected so that it falls outside the color gamut can be plotted on the outline of the color gamut of the display device with the gradation property being maintained. That is, when the point internally divided by the “α” described above is positioned outside the color gamut of the display device, correction is made to the color information corresponding to the intersection between the color gamut boundary of the display device and the line connecting the internally divided point with (L*709, C*709, h709).
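One way to read Formulas 18 and 19 in code; the gamma-converted input values Rin, Gin, Bin are passed in so that Mmin and Mmax can be identified, and the function is only a sketch of this clipping step.

    def clip_to_gamut(rout, gout, bout, rin, gin, bin_):
        """Formulas 18 and 19: pull an out-of-range color back toward the
        original color until it lies within the display gamut."""
        out = [rout, gout, bout]
        orig = [rin, gin, bin_]

        lmin = min(out)                        # Formula 18: a component below 0
        if lmin < 0.0:
            mmin = orig[out.index(lmin)]       # input value paired with Lmin
            scale = (0.0 - mmin) / (lmin - mmin)
            out = [scale * (o - i) + i for o, i in zip(out, orig)]

        lmax = max(out)                        # Formula 19: a component above 1
        if lmax > 1.0:
            mmax = orig[out.index(lmax)]       # input value paired with Lmax
            scale = (1.0 - mmax) / (lmax - mmax)
            out = [scale * (o - i) + i for o, i in zip(out, orig)]

        return out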


In the example shown with Formula 12, (L*709, C*709, h709) is corrected to color information corresponding to the point which is obtained by internally dividing a line connecting the color information (L*L, C*L, hL) and (L*C, C*C, hC) by the ratio “α” given in advance, but another method may be used as below. That is, a portion, of a line connecting the color information (L*L, C*L, hL) and (L*C, C*C, hC), included in the color gamut of the display device is identified, and (L*709, C*709, h709) is corrected to the color information corresponding to a point obtained by internally dividing the identified portion by a ratio given in advance. In this case, the point of the color information after the correction exists in the color gamut of the display device, and thus the process in accordance with Formula 18 and Formula 19 is not necessary.


As described above, according to the embodiment, the display device having the narrow color gamut can also display the color having the perceived brightness the same as the color having the wide color gamut.


Second Embodiment

A description is given of a second embodiment. The embodiment has a general configuration the same as the first embodiment, and thus a difference from the first embodiment is described.



FIG. 5 shows a configuration diagram of an image display device according to the embodiment. Components the same as those in FIG. 1 are denoted by the same reference signs. Unlike in FIG. 1, a path is added for feedback from the correction unit 108 to the setting unit 106.


In the first embodiment, the setting unit 106 determines, in a single step, the correction quantity ΔL*out of the lightness and the correction quantity ΔC*out of the chroma on the basis of the difference ΔB* between the perceived brightnesses in accordance with Formula 9, Formula 10, or Formula 12.


In this embodiment, on the other hand, a process is repeated in which the correction unit 108 returns the corrected image signal calculated from the correction quantity 107 set by the setting unit 106 to the setting unit 106, and the setting unit 106 resets the correction quantity in accordance with the value of the corrected image signal 109. This makes it possible, as shown in FIG. 3, to calculate the correction quantity ΔL*out of the lightness and the correction quantity ΔC*out of the chroma that correct the color information (L*709, C*709, h709) to the color information which exists in the color gamut of the display device 120 and has the maximum chroma, that is, the color information at the intersection between the line connecting the color information (L*L, C*L, hL) with the color information (L*C, C*C, hC) and the color gamut boundary of the display device. In this way, the chroma is corrected in priority to the lightness so that the decrease in chroma relative to the color gamut of the input image is restrained as much as possible while the perceived brightness is made the same as in the color gamut of the input image signal. The chroma after the correction may become higher than the chroma in the color gamut of the input image signal, but this is considered to pose no difficulty so long as the perceived brightnesses are the same.


The embodiment uses a binary search method as a method for calculating the lightness correction quantity ΔL*out and the chroma correction quantity ΔC*out such that the corrected color information exists in the color gamut of the display device 120 and the chroma is maximum. However, the algorithm for calculating the correction quantity is not limited thereto, and other search algorithms may be used.


The process in which the conversion unit 104 calculates the converted signals 105a and 105b from the input image signal 101 and the color gamut information 103a and 103b is the same as that in the first embodiment, and the description thereof is omitted.


In the following, a description is given of operations of the setting unit 106 and the correction unit 108. FIG. 6 shows a flowchart of the second embodiment.


First, the setting unit 106 calculates the difference ΔB* between the perceived brightnesses from the converted signals 105a and 105b in accordance with Formula 8, and calculates correction quantity initial values ΔL*(0) and ΔC*(0) of the lightness and chroma respectively from ΔB* as shown in Formula 20, sending them to the correction unit 108 (S501). The embodiment sets the initial values ΔL*(0) and ΔC*(0) such that the difference ΔB* between the perceived brightnesses is entirely corrected using the chroma; the value of ΔL*(0) is therefore zero.











ΔC*(0) = ΔB*/(F(h709) + T)

ΔL*(0) = ΔB* − ΔC*(0) × (F(h709) + T)  (20)







The correction unit 108 corrects the color information using ΔL*(0), ΔC*(0), and a constant β as shown in Formula 21 to calculate the lightness L*(0), chroma C*(0), and hue h(0). Here, β is a constant having a value ranging from 0 to 1.






C*(0) = C*709 + ΔC*(0) × β

L*(0) = L*709 + ΔL*(0) + {ΔC*(0) × (1 − β)} × (F(h709) + T)

h(0) = h709  (21)


Next, similarly to Formulas 14 to 17, the RGB signal values R(0), G(0), and B(0) in displaying the corrected color information (L*(0), C*(0), h(0)) on the display device are calculated from the lightness L*(0), chroma C*(0), and hue h(0) (S502).


Here, whether or not R(0), G(0), and B(0) all fall within a range from 0 to 1 is determined (S503).


In a case where R(0), G(0), and B(0) all fall within a range from 0 to 1, the correction unit 108 outputs R(0), G(0), and B(0) as the corrected image signal 109 to the display device 120 (S504).


On the other hand, in a case where any of R(0), G(0), and B(0) does not fall within a range from 0 to 1, the correction unit 108 sends R(0), G(0), and B(0) to the setting unit 106. The setting unit 106 calculates the correction quantities ΔL*(1) and ΔC*(1) of the lightness and chroma respectively as Formula 22 to send to the correction unit 108 (S505).











ΔC*(1) = ΔB*/(2 × (F(h709) + T))

ΔL*(1) = ΔB*/2  (22)







The correction unit 108, similarly to Formula 21, calculates the lightness L*(1), chroma C*(1), and hue h(1) to calculate, similarly to Formulas 14 to 17, the RGB signal values R(1), G(1), and B(1) in displaying the color information (L*(1), C*(1), h(1)) on the display device 120 (S506).


Here, assuming that the RGB signal values are represented as R(k), G(k), and B(k) in a generalized form using a number k of updates, whether or not R(k), G(k), and B(k) all fall within a range from 0 to 1 is determined (S507).


Depending on the values of R(k), G(k), and B(k), the correction quantities of the lightness and chroma are updated to ΔL*(k+1) and ΔC*(k+1) respectively as shown by Formula 23 and Formula 24.


1. Case where R(k), G(k), and B(k) all fall within a range from 0 to 1 (S508)











ΔC*(k+1) = ΔC*(k) + ΔB*/(2^(k+1) × (F(h709) + T))

ΔL*(k+1) = ΔB* − ΔC*(k+1) × (F(h709) + T)  (23)







2. Case where any of R(k), G(k), and B(k) does not fall within a range from 0 to 1 (S509)











ΔC*(k+1) = ΔC*(k) − ΔB*/(2^(k+1) × (F(h709) + T))

ΔL*(k+1) = ΔB* − ΔC*(k+1) × (F(h709) + T)  (24)







Further, depending on the calculated correction quantities ΔL*(k+1) and ΔC*(k+1), and the constant β, the lightness L*(k+1), chroma C*(k+1), and hue h(k+1) are updated as Formula 25.






C*(k+1)=C*709+ΔC*(k+1)×β






L*(k+1)=L*709+ΔL*(k+1)+{ΔC*(k+1)×(1−β)}×(F(h709)+T)






h(k+1)=h(k)  (25)


Similarly to Formulas 14 to 17, the RGB signal values R(k+1), G(k+1), and B(k+1) in displaying the color information (L*(k+1), C*(k+1), h(k+1)) on the display device 120 are calculated (S510).


Here, values of LDiff and CDiff as the difference values of the lightness and chroma respectively are calculated as shown by Formula 26 to check a magnitude relationship between LDiff and a predefined threshold LTH of the lightness, and a magnitude relationship between CDiff and a predefined threshold CTH of the chroma (S511).






LDiff = L*(k+1) − L*(k)

CDiff = C*(k+1) − C*(k)  (26)


In a case where both conditions of LDiff≦LTH and CDiff≦CTH are met, R(k+1), G(k+1), and B(k+1) are output as the corrected image signal 109 to the display device 120 (S504).


In a case where any condition of LDiff≦LTH and CDiff≦CTH is not met, the correction unit 108 sends R(k+1), G(k+1), and B(k+1) to the setting unit 106, and the setting unit 106 further updates the lightness and chroma.


In the subsequent processes, steps S507 to S511 are repeated. At step S511, in the above described embodiment, the difference of the lightness and the difference of the chroma are compared with the thresholds thereof respectively to determine the conditions. As another method, at step S511, the processes may be repeated until a condition is met that the number k of updates reaches a predefined constant N.


If at step S511, the condition is determined to be met, the correction unit 108 outputs the finally calculated RGB signal values R(k), G(k), and B(k) as the corrected image signal 109.
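A compact Python sketch of the loop of steps S501 to S511; the helper lch_to_rgb (Formulas 14 to 17) is assumed to be available from the earlier sketches, and the stopping test here uses only a fixed number of updates, which the text allows as an alternative to the LDiff/CDiff thresholds.

    def search_correction(L709, C709, h709, dB, F, T, beta, lch_to_rgb, n_updates=16):
        """Binary search over the chroma share of the correction (Formulas 20-25)."""
        denom = F(h709) + T
        dC = dB / denom                            # Formula 20: start with chroma only
        for k in range(n_updates):
            dL = dB - dC * denom                   # keep the perceived brightness constant
            C = C709 + dC * beta                   # Formulas 21 / 25
            L = L709 + dL + dC * (1.0 - beta) * denom
            rgb = lch_to_rgb(L, C, h709)
            in_range = all(0.0 <= v <= 1.0 for v in rgb)
            if k == 0 and in_range:
                return dL, dC                      # step S504: initial guess already fits
            step = dB / (2.0 ** (k + 1) * denom)   # halved step of Formulas 22 to 24
            dC = dC + step if in_range else dC - step
        return dB - dC * denom, dC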


As described above, according to the embodiment, when the color information obtained by converting the input image signal from its wide color gamut into the color gamut of the display device is corrected so that it exists in the color gamut of the display device 120 and is perceived with the same brightness as the color of the wide color gamut, the correction can be performed so as to make the chroma as large as possible.


Third Embodiment

A description is given of a third embodiment. The embodiment has a general configuration the same as the first embodiment, and thus a difference from the first embodiment is described.



FIG. 7 shows a configuration diagram of an image display device according to the embodiment. Components the same as those in FIG. 1 are denoted by the same reference signs and duplicated description is omitted except for extended processes.


In the first embodiment, the color gamut of the display device 120 is a color gamut defined by BT.709 and the color gamut of the input image signal is a color gamut defined by BT.2020, and the color gamut of the display device 120 is far smaller than the color gamut of the input image signal. In this embodiment, a description is given of a case where the color gamut of the input image signal is also a color gamut defined by BT.709 similar to the display device 120.


In the embodiment, a look-up table (LUT) holding unit 110 is added. The LUT holding unit 110 has a LUT 111. The LUT 111 is a table for converting the input image signal having a narrow color gamut into the color information having a color gamut (third color gamut) wider than that.


The LUT 111 may be a LUT created by estimating the color information of an imaging target (e.g., the color of the sea or of a flower) while taking into account the characteristics of the imaging apparatus that captured the input image signal 101, or, when the imaging apparatus is unknown as with a TV broadcast wave, a LUT created by estimating the color information of the imaging target while taking a typical imaging apparatus into account on average. The LUT in this case serves to restore a video to an original color gamut wider than that of the imaging apparatus, the video having been generated in such a manner that an imaging target which originally had a wider color gamut was compressed into a narrower color gamut as a result of imaging. The LUT 111 is not limited to these, and may be any LUT so long as it yields color information obtained by converting the color gamut of the input image into another color gamut.


The conversion of the color gamut may be performed not with the LUT format but with a function for nonlinear conversion.


The conversion unit 104 acquires the color gamut information 103b of the display device from the color gamut information acquisitor 102 to convert the input image signal similarly to Formula 1 to Formula 5 and calculate the color information (L*709, C*709, h709).


The conversion unit 104 acquires the LUT 111 from the LUT holding unit 110 for converting the color gamut of the input image signal.


The conversion unit 104 refers to the LUT 111 for the RGB signal values (Rin, Gin, Bin), which are the signals after the gamma conversion of the input image signal, to calculate the color information (L*LUT, C*LUT, hLUT) of the color gamut different from the color gamut of the input image.


The conversion unit 104 sends the color information (L*709, C*709, h709) as the converted signal 105b and the color information (L*LUT, C*LUT, hLUT) as the converted signal 105c to the setting unit 106 and the correction unit 108.


The setting unit 106 and the correction unit 108 substitute the color information (L*LUT, C*LUT, hLUT) for the color information (L*2020, C*2020, h2020) in the first and second embodiments, and perform processes similar to those of the first and second embodiments. In this way, the correction quantity 107 of the lightness and chroma and the corrected image signal 109 converted from the converted signal 105b are calculated.


As described above, according to the embodiment, the color information obtained by converting the input image signal into the color gamut of the display device is corrected so that it is perceived with the same brightness as the color information obtained by converting the input image signal using the LUT. This allows a user to perceive the same brightness as in the color gamut of the LUT even when the image is displayed on the display device having the narrow color gamut.
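A rough sketch of the third-embodiment flow; modeling the LUT 111 as a Python dict keyed by quantized (Rin, Gin, Bin) triples is purely an assumption, and the helper functions stand for the conversion, setting, and correction steps described in the first and second embodiments.

    def correct_with_lut(rgb_in, lut, rgb_to_lch_709, set_and_correct):
        """Third embodiment: use LUT-derived color information as the target."""
        key = tuple(round(v, 2) for v in rgb_in)       # assumed quantization of the LUT key
        L_lut, C_lut, h_lut = lut[key]                 # converted signal 105c
        L_709, C_709, h_709 = rgb_to_lch_709(rgb_in)   # converted signal 105b
        # The LUT color plays the role of (L*2020, C*2020, h2020) in Formulas 6 to 8.
        return set_and_correct((L_lut, C_lut, h_lut), (L_709, C_709, h_709))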



FIG. 8 is a diagram that illustrates a hardware configuration of an image processing device according to an embodiment of the present invention.


The image processing device of each embodiment can be realized by using a general-purpose computer device as basic hardware as illustrated in, for example, FIG. 8. In this computer device 200, a processor 202 such as a CPU, a memory 203 and an auxiliary storage 204 such as a hard disk are connected with a bus 201, and a storage medium 206 is further connected via an external I/F 205. The external I/F 205 may connect to an image displaying device. Each processing block in the image processing device can be realized by making the processor 202 mounted on the above-mentioned computer device execute a program. Any combination of these blocks may constitute circuitry. At this time, the image processing device may be realized by installing the above-mentioned program in the memory 203 or the auxiliary storage 204 of the computer device beforehand, or may be realized by storing it in the storage medium 206 such as a “CD-ROM” or distributing the above-mentioned program through a network and arbitrarily installing this program in the computer device. Moreover, each buffer in the image processing device can be realized by arbitrarily using the memory 203, the hard disk 204 or the storage medium 206 such as a “CD-R”, a “CD-RW”, a “DVD-RAM” and a “DVD-R”, which are incorporated or attached to the above-mentioned computer device.


Furthermore, the image processing apparatus may include a CPU (Central Processing Unit), a ROM (Read Only Memory) and a RAM as one example of circuitry. In this case, each unit or element in the image processing apparatus as shown in FIG. 1, 5 or 7 can be controlled by the CPU reading a program stored in a storage or the ROM into the RAM and executing it.


Also, the above-stated hardware configuration is one example, and a part or all of the image processing apparatus according to an embodiment can be realized by an integrated circuit such as an LSI (Large Scale Integration) or an IC (Integrated Circuit) chip set as one example of circuitry. Each function block in the image processing apparatus can be realized by an individual processor, or a part or all of the function blocks can be integrated and realized by one processor. The means for integrating a part or all of the function blocks is not limited to the LSI and may be dedicated circuitry or a general-purpose processor.


While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims
  • 1. An image processing apparatus comprising: first circuitry that sets for first color information in a first color gamut a correction quantity of a brightness defined depending on a lightness and a chroma of the first color information on a basis of a difference between a target brightness and the brightness of the first color information; and second circuitry that corrects at least one of the lightness and the chroma of the first color information such that the brightness of the first color information is corrected by the correction quantity to obtain corrected color information in the first color gamut.
  • 2. The apparatus according to claim 1, wherein the first circuitry calculates a relationship between a lightness and a chroma which represent the brightness obtained by correcting the brightness of the first color information by the correction quantity, and the second circuitry corrects the first color information such that a lightness and a chroma of the corrected color information meet the relationship.
  • 3. The apparatus according to claim 1, wherein the correction quantity has a value obtained by subtracting the brightness of the first color information from the target brightness, and a brightness obtained by correcting the brightness of the first color information by the correction quantity matches the target brightness.
  • 4. The apparatus according to claim 2, wherein the second circuitry corrects the first color information such that a value of the chroma of the first color information becomes larger.
  • 5. The apparatus according to claim 2, wherein the second circuitry corrects the first color information to color information corresponding to an intersection between a graph representing the relationship between the lightness and the chroma and a boundary of the first color gamut.
  • 6. The apparatus according to claim 2, wherein the second circuitry finds a portion in which values of the lightness and the chroma are equal to or larger than values of the lightness and the chroma of the first color information in a graph representing the relationship between the lightness and the chroma, and corrects the first color information to color information corresponding to a point which internally divides the portion by a given ratio.
  • 7. The apparatus according to claim 6, wherein when the internal dividing point is positioned outside a boundary of the first color gamut, the first color information is corrected to color information corresponding to an intersection between the boundary of the first color gamut and a line connecting the internal dividing point and a point representing the first color information.
  • 8. The apparatus according to claim 2, wherein the second circuitry finds a portion in which values of the lightness and the chroma are equal to or larger than values of the lightness and the chroma of the first color information in a graph representing the relationship between the lightness and the chroma and which is included in the first color gamut, and corrects the first color information to color information corresponding to a point which internally divides the portion by a given ratio.
  • 9. The apparatus according to claim 1, wherein the second circuitry corrects the first color information such that a hue of the corrected color information is maintained to be the same as the hue of the first color information.
  • 10. The apparatus according to claim 1, wherein the first color gamut is a color gamut of an image display device, a second color gamut is a color gamut of an input image signal and the second color gamut is wider than the first color gamut, the apparatus comprises third circuitry which generates first color information by performing color conversion on the input image signal on a basis of the first color gamut, and generates second color information by performing color conversion on the input image signal on a basis of the second color gamut, and the target brightness is a brightness of the second color information corresponding to the first color information.
  • 11. The apparatus according to claim 1, wherein the first color gamut is a color gamut of an image display device, a second color gamut is a color gamut of an input image signal, the apparatus comprises third circuitry which generates first color information by performing color conversion on the input image signal on a basis of the first color gamut, and generates second color information by performing color conversion on the input image signal on a basis of a third color gamut wider than the first and second color gamuts, and the target brightness is a brightness of the second color information corresponding to the first color information.
  • 12. An image processing method comprising: setting for first color information in a first color gamut a correction quantity of a brightness defined depending on a lightness and a chroma of the first color information on a basis of a difference between a target brightness and the brightness of the first color information; and correcting at least one of the lightness and the chroma of the first color information such that the brightness of the first color information is corrected by the correction quantity to obtain corrected color information in the first color gamut.
  • 13. An image processing apparatus comprising: a processor; and a memory that stores processor-executable instructions that, when executed by the processor, cause the processor to: set for first color information in a first color gamut a correction quantity of a brightness defined depending on a lightness and a chroma of the first color information on a basis of a difference between a target brightness and the brightness of the first color information; and correct at least one of the lightness and the chroma of the first color information such that the brightness of the first color information is corrected by the correction quantity to obtain corrected color information in the first color gamut.
Priority Claims (1)
Number: 2013-145829   Date: Jul 2013   Country: JP   Kind: national