The present application claims priority from Japanese applications JP 2010-263077 filed on Nov. 26, 2010, and JP 2011-208405 filed on Sep. 26, 2011, the contents of which are hereby incorporated by reference into this application.
The present invention relates to a color correction technique for three-dimensional video images.
In generating three-dimensional video images, methods are known in which two cameras or lenses capture video images corresponding to the right and left eyes, and the two video images are displayed on a display device either simultaneously (polarization system, disparity barrier system, lenticular system, etc.) or in time sequence (shutter glasses system). The left and right eyes receive the left and right video images having a disparity, and the viewer's brain merges the left and right disparity into a three-dimensional image, so that the video image is perceived three-dimensionally. In such methods of generating and displaying three-dimensional video images, the color shade, value, etc. of the two video images sometimes differ, even when the same scene is captured, due to individual differences among the plural cameras and lenses, the incident amount and incident angle of light, and so on.
In this way, when there is a difference in color shade, value, etc. between the video images viewed by the left and right eyes, a large color deviation between the left and right images may cause binocular rivalry and make it hard to merge the two video images, and the color deviation between the left and right video images may also cause eyestrain.
As methods of correcting the left and right images for a three-dimensional display, JP-A-2003-061116 discloses a method of correcting a vertical positional deviation between the left and right images, and JP-A-02-058993 discloses a correction method that matches the average brightness levels and dynamic ranges of the left and right eye video signals.
However, the techniques for correcting the left and right video images disclosed in JP-A-2003-061116 and JP-A-02-058993 do not describe correction of the color deviation between the left and right video images. When a video image is taken by plural cameras, a color deviation sometimes occurs due to differences in individual camera characteristics, focus or defocus, and white balance adjustment. In this case, the viewer sees video images having a color deviation between the right and left eyes.
The invention is made in light of the above-mentioned problem, and an object of the invention is to preferably correct the color shade of the video images entering the left and right eyes.
In order to solve the above-mentioned problem, according to an aspect of the invention, a three-dimensional video image processing device includes an input unit that receives a video image, an analysis unit that generates a histogram on a color space for each of the left eye and right eye video images contained in the video image and detects a feature parameter of each video image, and a correction unit that implements a correction on the color space so as to reduce the difference between the feature parameter of the left eye video image and the feature parameter of the right eye video image detected by the analysis unit.
(Technical Effects)
The color shade of the video images entering the left and right eyes is preferably corrected in the three-dimensional video image.
Hereinafter, three-dimensional and two-dimensional are referred to as 3D and 2D, respectively, unless otherwise stated.
The other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
Hereinafter, embodiments of the invention will be described with reference to the drawings.
A first embodiment of the invention will be described with reference to
Referring to
The video image input I/F 100 is an interface for receiving contents such as video images, sound signals and characters from broadcasts, and from player devices and game machines incorporating a disk etc. The video image decoder circuit 101 implements decode processing etc. on the video image to output a video image signal. The 3D video image signal processing unit 102 implements video image signal processing for separating the video images corresponding to the left and right eyes in the 3D video image processing unit 103 when receiving a 3D video image specified by the Side-by-Side (SBS) system, Top-and-Bottom (TAB) system, Frame-Packing (FP) system, Multi-Eye system, etc. The left and right color correction processing unit 104 implements a color correction, i.e. a correction on a color space, so as to reduce the color difference between the video images corresponding to the left and right eyes. The 3D display processing unit 105 switches the video image subjected to the color correction in the left and right color correction processing unit 104 between a left eye video image L and a right eye video image R for output to the display unit 106, and also implements infrared light signal processing etc. for transmitting the switching timing to shutter glasses. When the 3D display is adapted to a polarized glasses system, processing adapted to a disparity barrier installed on the display unit 106 is implemented such that the display pixels are adjusted to display the left and right video images. In a naked eye system, video image signal processing is implemented so as to be adapted to a lenticular lens etc. installed on the display unit 106.
The display unit 106 is a display using a liquid-crystal device, OLED (Organic Light-Emitting Diode), plasma, etc., and provides a 3D display by switching the left and right video images output from the 3D display processing unit 105 at the adjusted timing, by displaying pixels adapted to the installed disparity barrier, or by displaying pixels adapted to the installed lenticular lens.
Next, the following description concerns a configuration for diminishing the color difference between the right and left eye video images in the 3D video image display.
An L-video image hue histogram analysis unit 203 and an R-video image hue histogram analysis unit 204 in the left and right color correction processing unit 104 implement a color space analysis on the left and right video images. A hue correction processing unit 205 extracts feature parameters (hereinafter written as features), such as the peak and hue width of each of the left and right histograms, the barycenter of the histogram, etc., and calculates the differences between the features. Further, a correction that varies the hue of the left and right video images is implemented so that the difference is reduced. In consequence, a left eye video image 206 and a right eye video image 207, both having a small hue difference, are output.
Here, the following description concerns the hue histogram used in the processing in the L-video image hue histogram analysis unit 203, the R-video image hue histogram analysis unit 204 and the hue correction processing unit 205.
In this embodiment, the hue histograms of the left and right eye video images are calculated in the L-video image hue histogram analysis unit 203 and the R-video image hue histogram analysis unit 204. First, a definition of the hue value is shown in
This embodiment shows an example of histogram analysis on the hue of the HSV color space; however, the color may be separated into the HSV/HSB color space components, for example Hue, Saturation/Chroma and Brightness/Lightness/Value, and a histogram distribution may be calculated for each of them as the color analysis. The color analysis may also calculate the histogram distribution on axes representing the HLS/HSI color space (Hue, Saturation, Lightness/Luminance or Intensity), various color spaces using the CIE color coordinate systems (RGB, XYZ, xyY, L*u*v*, L*a*b*), the sRGB color triangle, etc.
In this embodiment, of the histograms represented by the axes of hue, saturation and value in the HSV color space, the hue histogram is analyzed to make the histogram features of the left and right video images close to each other. In consequence, the difference on the color space is reduced. Not only the hue but also the saturation and value may be analyzed for their histogram distributions, and the difference amount of the features may be calculated to reduce the difference amount. This processing may be implemented simultaneously with the histogram analysis of the hue, or may be implemented for a frame at a different time.
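As an illustrative sketch only (no code appears in the specification), the hue aggregation of the HSV analysis described above can be expressed as follows. The 360-bin resolution and the treatment of achromatic pixels (which have no defined hue) are assumptions of this sketch, not requirements of the embodiment:

```python
import numpy as np

def hue_histogram(rgb, bins=360):
    """Aggregate the number of pixels per hue for one eye's video image.

    rgb: H x W x 3 array of floats in [0, 1].
    Returns a histogram over HSV hue angles 0..360 degrees.
    Achromatic pixels (max == min) have undefined hue and are skipped.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    delta = cmax - cmin
    chromatic = delta > 0            # pixels with a defined hue

    hue = np.zeros_like(cmax)
    m = chromatic & (cmax == r)
    hue[m] = (60.0 * (g[m] - b[m]) / delta[m]) % 360.0
    m = chromatic & (cmax == g) & (cmax != r)
    hue[m] = 60.0 * (b[m] - r[m]) / delta[m] + 120.0
    m = chromatic & (cmax == b) & (cmax != r) & (cmax != g)
    hue[m] = 60.0 * (r[m] - g[m]) / delta[m] + 240.0

    hist, _ = np.histogram(hue[chromatic], bins=bins, range=(0.0, 360.0))
    return hist
```

The same aggregation applied to saturation or value axes would realize the alternative analyses mentioned above.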
Next, an example of a left and right color correction processing sequence in the left and right color correction processing unit 104 will be described with reference to
At a step S300, the left and right video images enter the left and right color correction processing unit 104 and the processing starts. At a step S301, the L-video image hue histogram analysis unit 203 aggregates the number of pixels of each hue contained in the left eye video image. Likewise, at a step S303, the R-video image hue histogram analysis unit 204 aggregates the number of pixels of each hue contained in the right eye video image. At a step S302, the L-video image hue histogram analysis unit 203 detects a hue feature of the left eye video image hue histogram. Here, the hue feature is a peak hue position, a peak hue frequency, a distribution width around the peak hue, a barycenter position of the distribution around the peak hue, or a combination of those.
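The feature detection of the step S302 can be sketched as follows. This is only an illustration: the threshold ratio that bounds the distribution around the peak is a hypothetical parameter, and the circular walk outward from the peak is one possible way of measuring the distribution width named above:

```python
import numpy as np

def hue_features(hist, thresh_ratio=0.1):
    """Detect the hue features: peak hue position, peak hue frequency,
    distribution width around the peak, and the barycenter of the
    distribution around the peak.  thresh_ratio is an assumed parameter:
    bins whose frequency exceeds thresh_ratio * peak frequency are
    counted as part of the peak's distribution."""
    bins = len(hist)
    peak_pos = int(np.argmax(hist))
    peak_freq = int(hist[peak_pos])
    thresh = peak_freq * thresh_ratio

    # walk left and right from the peak (hue is circular) to find the
    # contiguous range whose frequency stays above the threshold
    lo = peak_pos
    while hist[(lo - 1) % bins] > thresh and (peak_pos - lo) < bins - 1:
        lo -= 1
    hi = peak_pos
    while hist[(hi + 1) % bins] > thresh and (hi - peak_pos) < bins - 1:
        hi += 1
    width = hi - lo + 1

    idx = np.arange(lo, hi + 1)
    freqs = hist[idx % bins]
    barycenter = float((idx * freqs).sum() / freqs.sum()) % bins
    return {"peak_pos": peak_pos, "peak_freq": peak_freq,
            "width": width, "barycenter": barycenter}
```

The same routine applied to the right eye histogram yields the features whose differences are evaluated in the subsequent steps.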
The following description concerns an advantage of implementing the processing at the step S308. When the difference amount of the features is found to be large at the step S307, there is a probability that the input video image signal is not a 3D video image but has been forcibly separated into halves by a setting of the 3D display device, as if it were 3D specified by SBS, TAB, etc., to generate left and right eye video images for an intended 3D display. In this case, color matching of the left and right video images is not required, and the separation into left and right video images is also not required, since the video image is already broken. Consequently, at a step S308, a control indicating that the left and right video image separation processing should not be implemented is issued to the 3D video image signal processing unit 103.
As a processing when the 3D video image signal processing unit 103 receives the indication of the step S308, the 3D video image signal processing unit 103 may output the video image signal output from the video image decoder circuit 101 without change as the left eye video image output and the right eye video image output, and the 3D display processing unit 105 may implement the same display as the 3D display, i.e. the left and right eye video images alternately, or the left and right video images line by line, to realize a 2D display.
As another example of processing when the 3D video image signal processing unit 103 receives the indication of the step S308, both the 3D video image signal processing unit 103 and the 3D video image color correction processing unit 102 do nothing, the 2D output video image signal from the video image decoder circuit 101 is transmitted to the 3D display processing unit 105 without change, and the output video image signal may be displayed by switching the display processing setting from the 3D display to the 2D display in the 3D display processing unit 105.
At the step S305, when the difference is equal to or larger than a predetermined amount, it may be determined that the video image is entered by an anaglyph system using red and blue glasses, even though the difference amount of the hue histogram features is large, and a processing may be implemented for outputting the left and right video images without implementing the color correction.
When the difference amount of the hue histogram features is large, there is also a probability that the video image has been taken by two cameras whose focus and white balance for the left and right are not matched. In this case, either the left eye or the right eye video image may be selected in consideration of eyestrain etc.; although the spatial effect is lost, the same video image may be output for both the left and right to implement a processing that eliminates the color deviation between the video images viewed by the left and right eyes.
The threshold value of the left and right color distribution difference for determining the anaglyph system mentioned above may be different from the threshold value of the left and right color distribution difference attributed to differences in camera focus and white balance. That is, plural threshold values are provided, and they may be switched in response to the size of the difference amount.
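The switching among plural threshold values can be sketched as below. The concrete threshold values and the three-way classification are hypothetical placeholders for illustration; the specification does not fix them:

```python
def classify_lr_difference(diff, camera_thresh=30.0, anaglyph_thresh=120.0):
    """Switch between plural threshold values in response to the size of
    the left/right difference amount.  The numeric thresholds here are
    hypothetical placeholders, not values from the specification."""
    if diff >= anaglyph_thresh:
        return "anaglyph"            # output L/R without color correction
    if diff >= camera_thresh:
        return "camera-mismatch"     # e.g. focus / white-balance mismatch
    return "correct"                 # ordinary hue correction applies
```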
Next, at a step S309, the difference amount of the features calculated at the step S305 is determined as the correction amount for the hue. For example, the color deviations of the hue peaks of the left and right eye video images calculated at the step S305 (PL0−PR0 and PL1−PR1, where PL0, PR0, PL1 and PR1 are hue positions of peak frequencies in the histograms) and the differences of the peak frequencies (ΔPL0−ΔPR0 and ΔPL1−ΔPR1, where ΔPL0, ΔPR0, ΔPL1 and ΔPR1 are peak frequencies of the histograms) are set as the hue correction amounts. Also at the step S309, the range ΔWn (n is an integer) over which the histogram frequencies equal to or larger than a threshold value set for the histogram are integrated is calculated and set as the correction range of the hue correction.
At a step S310, the hue correction processing is implemented on the left and right eye video images in response to the correction amount determined at the step S309. The hue correction may be applied to at least one of the video images. For example, the hue correction is implemented such that the hues in the range ΔWL0 around the peak hue PR0 of the right eye video image are deviated by |PL0−PR0| in the right direction, and the hues in the range ΔWL1 around PR1 are deviated by |PL1−PR1| in the left direction. In this way, the correction accuracy is increased, and an adverse effect caused by the correction can be kept small, by limiting the correction range to ΔWL0 and ΔWL1 and correcting only the pixels whose hues are present in those ranges.
Further, the pixels in the ranges ΔWL0 and ΔWL1 are corrected, and a processing may be implemented to match the peak frequency and the frequencies around the peak hue.
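The range-limited hue deviation of the step S310 can be sketched as follows, assuming hue is expressed on a circular 0..360 degree scale; the circular-distance formulation is an assumption of this sketch:

```python
import numpy as np

def shift_hue_range(hue, center, width, shift, bins=360):
    """Deviate only the hues inside the range `width` around `center`
    by `shift` degrees, leaving other pixels untouched.  Limiting the
    correction range in this way increases the correction accuracy and
    keeps adverse effects of the correction small."""
    # circular distance of each pixel's hue from the peak hue
    dist = (hue - center + bins / 2) % bins - bins / 2
    in_range = np.abs(dist) <= width / 2
    out = hue.copy()
    out[in_range] = (out[in_range] + shift) % bins
    return out
```

For example, applying this with `center=PR0`, `width=ΔWL0` and `shift=|PL0−PR0|` to the right eye hue plane realizes the first of the two deviations described above.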
When the features of the histogram distributions of the left and right video images are made close to each other by the hue correction mentioned above, the feature of the histogram distribution of the right eye video image may be varied to approach that of the left eye video image. Conversely, the feature of the left eye video image may be varied to approach that of the right eye video image. Alternatively, both the left and right video images may be varied so as to approach the average of the features of the histogram distributions calculated for the left and right video images.
The above-mentioned processing may be implemented either for every frame or for every several frames. When the processing is implemented for every several frames, the same hue correction is applied over those several frames, so that the processing load can be reduced.
As a result of the above-mentioned processing, the deviation amount of the hue between the left and right video images is quantified, so that the hue histograms of the left and right video images can be made to approach each other in response to the deviation amount.
In addition, an example of the hue correction has been described in the above-mentioned processing. However, the color correction of the left and right video images may be implemented on other color spaces in accordance with the difference amount of the color features of the left and right video images.
In addition, when either the left or right eye video image is set as the correction target, the video image entering the processing unit later may be set as the correction target. In this case, the preceding video image is stored in a line memory, the feature of the histogram distribution of the preceding video image is analyzed first, and the hue of the succeeding video image is then corrected in response to the analyzed result of the preceding video image. This processing has an advantage from the viewpoint of high-speed processing. For example, when the left eye video image is transmitted first in the Frame Packing system etc., it is desirable that the hue of the right eye video image be matched to the feature of the histogram distribution of the left eye video image by using the analyzed result of that feature. The same applies when the received video image is stored in the line memory starting from the left eye video image in the SBS system or TAB system.
As an example of high-speed processing in the Frame Packing system, the feature of the histogram distribution of the preceding one-eye video image, for example the left eye video image, is analyzed, and the color correction processing is implemented on the succeeding other-eye video image, for example the right eye video image. At this time, the right eye video image can be processed for every one line or several lines using the histogram analysis result of the preceding left eye video image, so that as soon as that analysis ends, the color correction processing can be implemented on the succeeding right eye video image in real time. If, conversely, the histogram analysis result of the succeeding video image, i.e. the right eye video image, were applied to the color correction of the preceding video image, i.e. the left eye video image, the display processing would be delayed by one frame; implementing the color correction of the succeeding right eye video image by using the result of the preceding left eye video image is therefore faster. In the SBS system, since both the left and right video images enter on the same line or several lines in the memory, the processing can be implemented for both the left and right video images simultaneously: the histogram result of the left eye video image is acquired first by counting the pixels from the left, and the color correction of the pixels counted in the latter half of each line can be implemented in real time in accordance with that result, so that high-speed processing is achieved. In the TAB system, likewise, the histogram analysis of the preceding left eye video image is implemented first, and its result can be used to correct the succeeding right eye video image for every one line.
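The high-speed flow above (analyze the preceding eye's frame, then correct the succeeding eye's lines as they arrive) can be sketched schematically. The `analyze` and `correct` callbacks are hypothetical stand-ins for the histogram analysis and hue correction described in this embodiment:

```python
def pipelined_correction(left_lines, right_lines, analyze, correct):
    """Sketch of the Frame Packing / TAB high-speed flow: the preceding
    left eye video image is analyzed first, then each succeeding right
    eye line is corrected in real time using that analysis result.
    `analyze` and `correct` are assumed callbacks, not functions named
    in the specification."""
    result = analyze(left_lines)          # histogram features of the L frame
    corrected = []
    for line in right_lines:              # correct R lines one by one
        corrected.append(correct(line, result))
    return corrected
```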
In addition, in
A second embodiment of the invention will be described below with reference to
The configuration shown in
In this way, it can easily be determined that the user is watching the 3D display; therefore, the determination processing at the step S306 in the embodiment is unnecessary.
When a 2D display is presented, as in display examples 800 and 801, it can also be checked on the display unit whether a color deviation etc. toward the left or right is present when a video image taken by two cameras unadjusted for focus, white balance, etc. is viewed. The left and right color correction is implemented at the timing when the user switches to the 3D display, so that a good display characteristic can be provided.
The user may be allowed to select one of the following three conditions via the remote control unit 802 and the user I/F 700: (1) the color distribution of the left eye video image is corrected to match the color distribution of the right eye video image; (2) the color distribution of the right eye video image is corrected to match the color distribution of the left eye video image; and (3) the color distributions of both the left and right video images are matched to the average of the features of the color distributions of the left and right video images.
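The three user-selectable conditions can be sketched as a selection of correction targets. The mode names and the plain arithmetic mean are assumptions of this sketch (for circular hue features, a circular mean would be more appropriate):

```python
def correction_targets(mode, feat_l, feat_r):
    """Return the target feature values (for L, for R) given the
    user-selected condition: 'match_right' (condition 1),
    'match_left' (condition 2), or 'average' (condition 3).
    Mode names are hypothetical; a plain mean is used for 'average',
    ignoring hue circularity for simplicity."""
    if mode == "match_right":
        return feat_r, feat_r
    if mode == "match_left":
        return feat_l, feat_l
    if mode == "average":
        avg = (feat_l + feat_r) / 2.0
        return avg, avg
    raise ValueError(mode)
```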
The user I/F 700 may be replaced with a human detecting sensor. For example, when a sensor for detecting a human residing in the viewing range is used, such as an infrared light sensor sensing the body heat emitted from a human body or a camera sensing human motion, the left and right color correction can be implemented only when the sensor determines that a human resides close to the device, and not implemented when no human resides there. At this time, a condition that the 3D system is installed may be added to the condition that a human resides. As a result of such processing, the left and right color correction can be omitted when no viewer is present, so that the processing load can be reduced.
Hereinafter, a third embodiment of the invention will be described with reference to
The configuration in
Here,
The video image taken by the cameras enters the 3D video image processing device, and the histogram analysis, such as of the hue, is implemented on the left and right video images. In this case, the area containing the ball and its color is present in the right eye video image but not in the left eye video image; therefore, the difference amount of the histogram features is detected as large. This, however, is not a hue difference; it arises because the imaging targets of the two video images are intrinsically different. It is apparent that this is not a left and right color difference caused by differences in camera focus and white balance.
The processing starts at a step S1100. The occlusion areas in the left and right video images are detected at a step S1101. At a step S1102, the occlusion areas are set to be eliminated from the histogram analysis. Next, at a step S1103, the occlusion areas are eliminated from the left and right color correction areas. The operation in
As a result of the processing mentioned above, the left and right video image color correction can be implemented on areas other than the occlusion areas, without being subject to the adverse effect of the colors in the occlusion areas. Further, there is a merit in that the correction is unnecessary for the color of an occlusion area visible to only one eye.
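The elimination of the occlusion areas from the histogram analysis (steps S1102/S1103) amounts to masking those pixels out before aggregation, which can be sketched as follows; the boolean-mask representation of the detected occlusion area is an assumption of this sketch:

```python
import numpy as np

def masked_hue_histogram(hue, occlusion_mask, bins=360):
    """Aggregate the hue histogram while eliminating occlusion-area
    pixels (mask == True) from the analysis, so that a region visible
    to only one eye does not inflate the left/right difference amount."""
    visible = hue[~occlusion_mask]
    hist, _ = np.histogram(visible, bins=bins, range=(0.0, float(bins)))
    return hist
```

The same mask would also be applied when writing back corrected hues, so that occlusion-area pixels are excluded from the correction range as in the step S1103.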
A fourth embodiment of the invention will be described below.
An antenna 1200 receives a broadcasting signal via a wireless broadcast wave (satellite broadcasting, terrestrial broadcasting) or a wired broadcasting transmission network such as a cable, and a tuner 1201 tunes to a specific frequency to implement demodulation, error correction processing, etc. When the broadcasting signal is scrambled, a descrambler 1202 decodes the scrambled signal. By the processing mentioned above, a multiplexed signal is restored and enters a multiple separation unit 1203. The multiple separation unit 1203 separates a signal multiplexed into a format such as MPEG2-TS (Transport Stream) into signals such as a video image ES (Elementary Stream), a sound ES, program information, etc. An ES means compressed and encoded image/sound data. A video image decode unit 1204 decodes the video image ES into a video image signal to be output to the 3D video image color correction processing unit 102. The 3D video image color correction processing unit 102 implements the color correction under control of a control unit 1210. The color correction processing is the same as described in the above-mentioned embodiments. When a 3D video image is displayed as a 3D video image, the video image output from the 3D video image color correction processing unit 102 is subjected to the 3D display processing in the 3D display processing unit 105 and displayed as a 3D display on the display unit 106. When a 2D video image is displayed as a 2D display, the 3D display processing is not implemented by the 3D display processing unit 105, and the 2D video image is displayed on the display unit 106. When a 3D video image is displayed as a 2D display, the 3D display processing unit 105 may output, as a 2D video image, the video image of one observing point (for example, either the left or right eye video image in the case of the SBS system) contained in the 3D video image, to be displayed as the 2D display on the display unit 106.
When a 2D video image is displayed as a 3D display, the 3D display processing unit 105 may implement a 2D-3D video image conversion processing to convert the 2D video image into a 3D video image, output it as a 3D video image, and display it as a 3D display on the display unit 106. Various methods of the 2D-3D conversion processing are known; any such 2D-3D conversion processing function may be incorporated in the 3D display processing unit 105. The control unit 1210 controls the 3D display processing of the 3D display processing unit 105. A sound decode unit 1207 decodes the sound ES into a sound signal to be output externally or as a sound output to a speaker 1208. A network 1205 is the Internet or a network group as infrastructure. A network I/F 1206 transmits and receives information via the network, that is, transmits and receives various information, MPEG2-TS, etc. between the Internet and the receiving device. The video image and sound entering the multiple separation unit 1203 may enter the receiving device via either the tuner 1201 or the network I/F 1206.
A 2D/3D determination unit 1209 determines whether the video image is a 3D or 2D video image by using 3D/2D identification information, indicating whether the contents are 3D video image contents, contained in the information received from the antenna 1200 and the network I/F 1206, and 3D system identification information, indicating the system of the 3D video image, such as SBS, TAB, Frame Packing, etc.
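The determination logic of the 2D/3D determination unit 1209 can be sketched as below. The dictionary field names and the system labels are hypothetical stand-ins for the descriptor fields discussed later, not the actual descriptor syntax:

```python
def determine_3d(program_info):
    """Sketch of the 2D/3D determination: read the 3D/2D identification
    flag and the 3D system identification (SBS, TAB, Frame Packing, ...)
    from descriptor-derived fields.  Field names and labels are
    assumptions, not the actual descriptor syntax of any standard."""
    is_3d = program_info.get("is_3d", False)
    system = program_info.get("3d_system")       # e.g. "SBS", "TAB", "FP"
    if is_3d and system in ("SBS", "TAB", "FP", "multi-eye"):
        return system
    return None          # treat as 2D: no L/R separation or color correction
```

The control unit 1210 would use such a result to enable or disable the left and right color correction.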
Various transmission methods for the 3D/2D identification information and the 3D system identification information can be considered. For example, they may be stored as program information, i.e. program specific information and program service information, contained in the MPEG2-TS (Transport Stream) and separated by the multiple separation unit 1203. The program specific information (PSI) is required for selecting a predetermined program, and includes the encode information of the video image, the encode information of the sound and the program composition. The program service information (SI) is various information ruled for convenience in selecting a program; it includes PSI information of the MPEG-2 system specification, an EIT (Event Information Table) containing information regarding a program, such as the program name, the broadcast date and time and the program contents, and an SDT (Service Description Table) containing information regarding an organization channel (service), such as the organization channel name and the broadcast business operator name. The 3D/2D identification information may be stored in a descriptor ruled by PSI or SI, or in a newly added descriptor. When the 3D/2D identification information and the 3D system identification information are stored in these descriptors, it becomes possible to identify whether the contents are 3D program contents in units of programs, or a 3D program service in units of services.
It may be configured such that the 3D/2D identification information is appended to the video image ES at the stage of encoding the video image. For example, when the video image encoding system is the MPEG2 system, the encoding may be implemented such that the above-mentioned 3D/2D identification information and 3D system identification information are appended to a user data area succeeding the Picture header and Picture Coding Extension. In this case, the video image decode unit 1204 can recognize a 3D identification flag in units of frames, or pictures, of the video image, and can also identify the event even when a 2D video image is inserted in the middle of the 3D video image stream.
The 2D/3D determination unit 1209 determines whether the video image is a 3D or 2D video image by using the 3D/2D identification information and the 3D system identification information separated in the multiple separation unit 1203 or extracted in the video image decode unit 1204, as mentioned above. In response to the determined result from the 2D/3D determination unit 1209, the control unit 1210 controls the color correction in the 3D video image color correction processing unit 102.
Here,
The timing for starting and stopping the color correction processing for the left and right video images in the invention may be the timing when the 3D or 2D video image is identified by the above-mentioned method, and either of the identification methods mentioned above may be used. It may be configured that the control unit 1210 determines the processing by combining the determined result of the 2D/3D determination unit 1209 and a user operation signal entered from the user I/F 700. For example, the processing may be configured such that the 3D display processing and the left and right color correction processing are not implemented until a 3D display indication is accepted via the user I/F 700, even when the 2D/3D determination unit 1209 determines that the input video image is a 3D video image. An operation sequence of the 2D/3D determination unit 1209, the user I/F 700 and the control unit 1210 for the above-mentioned case will be described with reference to
The color correction processing for the left and right video images may not be implemented when the video image is identified as a 2D video image, even though information indicating the 3D display is entered via the user I/F 700. An example of an operation sequence of the 2D/3D determination unit 1209, the user I/F 700 and the control unit 1210 for the above-mentioned case will be described with reference to
In the above-mentioned embodiments, the same processing can be implemented for a 3D display system of any of the liquid crystal shutter system, the polarization glasses system and the glasses-free system.
The program to be executed in the 3D display control device may be incorporated therein in advance, provided on a recording medium recording it, or provided for download via a network.
In addition, the embodiments have described the hue analysis and its correction. However, the analysis and correction are not limited to the hue. The color correction of the left and right video images may be implemented on the saturation, value, brightness, etc. in response to the difference amount of the color features of the left and right video images. The analysis and correction may also be implemented in parallel on plural color spaces among the hue, saturation, value, brightness, etc.
It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2010-263077 | Nov 2010 | JP | national |
2011-208405 | Sep 2011 | JP | national |