The present invention relates to an image processing apparatus which divides one field or one frame into a plurality of sub-fields, and processes an input image to display gradation by combining a light emission sub-field which emits light and a non-light emission sub-field which does not emit light, and an image display apparatus using this image processing apparatus.
A plasma display apparatus has the advantage that a slimmer construction and a larger screen can be implemented. In an AC type plasma display panel used for this plasma display apparatus, a front panel, which is a glass substrate on which a plurality of scan electrodes and sustaining electrodes are arrayed, and a back panel on which a plurality of data electrodes are arrayed, are combined so that the scan electrodes and sustaining electrodes are orthogonal to the data electrodes, in order to form discharge cells in a matrix, and images are displayed by selecting arbitrary discharge cells and causing them to emit light by plasma discharge.
As mentioned above, when an image is displayed, one field is divided in a time direction into a plurality of screens whose brightness weights differ (hereafter called “sub-fields” (SFs)), and one field of an image, that is, one frame image, is displayed by controlling the emission and non-emission of light from the discharge cells in each sub-field.
In the case of an image display apparatus using the above mentioned sub-field division, a disturbance of gradation called a “dynamic false contour”, as well as motion blur, is generated, and display quality is diminished when moving images are displayed. In order to decrease the generation of the dynamic false contour, Patent Document 1, for example, discloses an image display apparatus which detects a motion vector whose start point is a pixel in one field of a plurality of fields included in the moving image and whose end point is a pixel in another field thereof, converts the moving image into light emission data of sub-fields, and reconstructs the light emission data of the sub-fields by processing using the motion vector.
In the case of this conventional image display apparatus, a motion vector whose end point is a reconstruction target pixel of another field is selected out of the motion vectors, and is multiplied by a predetermined function to calculate a position vector, and the light emission data of one sub-field of the reconstruction target pixel is reconstructed using the light emission data of the sub-field of the pixel indicated by the position vector, whereby the generation of motion blur and dynamic false contour is prevented.
As mentioned above, according to the conventional image display apparatus, a moving image is converted into the light emission data of each sub-field, and the light emission data of each sub-field is rearranged according to the motion vector. This method for rearranging the light emission data of each sub-field will be described in concrete terms.
First, the above mentioned conventional image display apparatus converts the moving image into light emission data of each sub-field, and as
In the case of displaying the N-2 frame image D1, where one field is constituted by five sub-fields SF1 to SF5, the light emission data of all the sub-fields SF1 to SF5 of the pixel P-10 corresponding to the mobile object OJ becomes a light emission state (hatched sub-fields in
Then the above mentioned conventional image display apparatus rearranges the light emission data of each sub-field according to the motion vector, and as
If a moving distance of five pixels in the horizontal direction is detected as a motion vector V1 from the N-2 frame and the N-1 frame, the light emission data (light emission state) of the first sub-field SF1 of the pixel P-5 in the N-1 frame is moved to the left by four pixels, and the light emission data of the first sub-field SF1 of the pixel P-9 is changed from the non-light emission state to the light emission state (hatched sub-fields in
The light emission data (light emission state) of the second sub-field SF2 of the pixel P-5 is moved to the left by three pixels, the light emission data of the second sub-field SF2 of the pixel P-8 is changed from the non-light emission state to the light emission state, and the light emission data of the second sub-field SF2 of the pixel P-5 is changed from the light emission state to the non-light emission state.
The light emission data (light emission state) of the third sub-field SF3 of the pixel P-5 is moved to the left by two pixels, the light emission data of the third sub-field SF3 of the pixel P-7 is changed from the non-light emission state to the light emission state, and the light emission data of the third sub-field SF3 of the pixel P-5 is changed from the light emission state to the non-light emission state.
The light emission data (light emission state) of the fourth sub-field SF4 of the pixel P-5 is moved to the left by only one pixel, the light emission data of the fourth sub-field SF4 of the pixel P-6 is changed from the non-light emission state to the light emission state, and the light emission data of the fourth sub-field SF4 of the pixel P-5 is changed from the light emission state to the non-light emission state. The light emission data of the fifth sub-field SF5 of the pixel P-5 is not changed.
In the same manner, if a moving distance of five pixels in the horizontal direction is detected as a motion vector V2 from the N-1 frame and the N frame, the light emission data (light emission state) of the first to the fourth sub-fields SF1 to SF4 of the pixel P-0 is moved to the left by four to one pixel(s), the light emission data of the first sub-field SF1 of the pixel P-4 is changed from the non-light emission state to the light emission state, the light emission data of the second sub-field SF2 of the pixel P-3 is changed from the non-light emission state to the light emission state, the light emission data of the third sub-field SF3 of the pixel P-2 is changed from the non-light emission state to the light emission state, the light emission data of the fourth sub-field SF4 of the pixel P-1 is changed from the non-light emission state to the light emission state, the light emission data of the first to the fourth sub-fields SF1 to SF4 of the pixel P-0 is changed from the light emission state to the non-light emission state, and the light emission data of the fifth sub-field SF5 of the pixel P-0 is not changed.
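The shifting pattern described above can be sketched in code. This is a minimal illustrative model, not the apparatus itself: the pixel count and the helper function are assumptions, and in the figures the pixel numbers increase toward the left, so a leftward move increases the pixel index.

```python
# Sketch of the rearrangement above (assumed layout): with a motion of five
# pixels per frame and five sub-fields, SF1 to SF5 are shifted left by
# 4, 3, 2, 1 and 0 pixels respectively. Pixel numbers increase toward the
# left in the figures, so a leftward shift increases the pixel index.

NUM_PIXELS = 12
SHIFTS = [4, 3, 2, 1, 0]               # leftward shift of SF1..SF5

def rearrange(emission):
    """emission[p][k] is True when sub-field k+1 of pixel P-p emits light."""
    out = [[False] * len(SHIFTS) for _ in range(NUM_PIXELS)]
    for p in range(NUM_PIXELS):
        for k, s in enumerate(SHIFTS):
            if emission[p][k] and p + s < NUM_PIXELS:
                out[p + s][k] = True   # move the emission s pixels to the left
    return out

# The mobile object OJ emits in all five sub-fields of pixel P-5:
frame = [[False] * 5 for _ in range(NUM_PIXELS)]
frame[5] = [True] * 5
result = rearrange(frame)
# SF1 now lights at P-9, SF2 at P-8, SF3 at P-7, SF4 at P-6 and SF5 at P-5,
# matching the rearrangement described for the N-1 frame above.
```

With this layout the line of sight of a viewer tracking the object crosses exactly the lit sub-fields, which is the intended effect of the rearrangement.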
As a result of the above mentioned sub-field rearrangement processing, the line of sight of a viewer viewing the display image transiting from the N-2 frame to the N frame moves smoothly in the direction of the arrow AR, and the generation of motion blur and dynamic false contour can be prevented.
However, if the light emission position of a sub-field is corrected by the conventional sub-field rearrangement processing, the sub-fields of a pixel located at a spatially forward position are distributed to a pixel behind this pixel based on the motion vector; therefore, in some cases, sub-fields are distributed from a pixel whose sub-fields should not be distributed. This problem of conventional sub-field rearrangement processing will be described in concrete terms.
On the display screen D4 shown in
Here, according to the conventional image display apparatus, the light emission data of each sub-field is rearranged according to the motion vector, and as
Since the motion vector of the pixels P-8 to P-4 is 0, the light emission data of the first to fifth sub-fields SF1 to SF5 of the pixels P-8 to P-4 does not move to the left. Therefore the light emission data of the first to fifth sub-fields SF1 to SF5 of the pixel P-9, the light emission data of the first to fourth sub-fields SF1 to SF4 of the pixel P-10, the light emission data of the first to third sub-fields SF1 to SF3 of the pixel P-11, the light emission data of the first to second sub-fields SF1 to SF2 of the pixel P-12, and the light emission data of the first sub-field SF1 of the pixel P-13 are not rearranged.
If the light emission data of the first to fifth sub-fields SF1 to SF5 of the pixels P-8 to P-4 is shifted to the left by five to one pixel(s), the sub-field in an area R1 indicated by a triangle in
In other words, in the sub-fields in the area R1, the light emission data of the sub-fields of the tree T1 is rearranged. Originally the pixels P-9 to P-13 belong to the car C1, so if the light emission data of the first to fifth sub-fields SF1 to SF5 of the pixels P-8 to P-4 belonging to the tree T1 is rearranged, motion blur and dynamic false contour are generated in the boundary portion between the car C1 and the tree T1, as shown in
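The way a zero motion vector leaves sub-fields with no rearranged data can be sketched as follows. The layout is an assumption for illustration (a static region with vector 0 next to a region moving five pixels per frame, with pixel numbers increasing toward the left as in the figures); cells that no source pixel writes into after shifting form a triangular non-set area analogous to R1.

```python
# Sketch (assumed layout): pixels 4-8 are static (motion vector 0) and
# pixels 9-15 move at 5 pixels/frame. After shifting, some destination
# cells receive no write at all; these form the triangular non-set area.

NUM_PIXELS = 16
NUM_SF = 5

def shift(k, mv):
    # Earlier sub-fields move farther: for mv = 5 this gives 4, 3, 2, 1, 0.
    return mv * (NUM_SF - 1 - k) // NUM_SF

motion = [0] * NUM_PIXELS
for p in range(9, NUM_PIXELS):
    motion[p] = 5                      # only the moving region shifts

written = set()
for p in range(NUM_PIXELS):
    for k in range(NUM_SF):
        q = p + shift(k, motion[p])    # leftward shift increases the index
        if q < NUM_PIXELS:
            written.add((q, k))

# Cells never written to are the non-set portion (the triangular area):
unset = {(q, k) for q in range(NUM_PIXELS) for k in range(NUM_SF)} - written
```

Under this assumed layout the non-set cells form a triangle tapering from four sub-fields at the boundary pixel down to one, mirroring the shape of the area R1 described above.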
If the light emission positions of the sub-fields are corrected in an area where a foreground image and a background image overlap, using the conventional sub-field rearrangement processing, it becomes unknown which should be arranged: the light emission data of the sub-fields constituting the foreground image or the light emission data of the sub-fields constituting the background image. This problem of conventional sub-field rearrangement processing will be described in concrete terms.
On the display screen D5 shown in
Here according to the conventional image display apparatus, the light emission data of each sub-field is rearranged according to the motion vector, and as
The light emission data of the first to fifth sub-fields SF1 to SF5 of the pixels P-7 to P-9 moves to the left by five to one pixel(s), and the light emission data of the sixth sub-field SF6 of the pixels P-7 to P-9 is not changed.
The value of the motion vector of the pixels P-7 to P-9 is not 0; therefore, for the sixth sub-field SF6 of the pixel P-7, the fifth to sixth sub-fields SF5 to SF6 of the pixel P-8, and the fourth to sixth sub-fields SF4 to SF6 of the pixel P-9, the light emission data corresponding to the foreground image is rearranged respectively. However, the value of the motion vector of the pixels P-10 to P-14 is 0; therefore, for the third to fifth sub-fields SF3 to SF5 of the pixel P-10, the second to fourth sub-fields SF2 to SF4 of the pixel P-11, the first to third sub-fields SF1 to SF3 of the pixel P-12, the first to second sub-fields SF1 to SF2 of the pixel P-13, and the first sub-field SF1 of the pixel P-14, it is unknown which should be rearranged respectively: the light emission data corresponding to the background image or the light emission data corresponding to the foreground image.
The sub-fields in the area R2 indicated by a quadrangle in
Patent Document 1: Japanese Patent Application Laid-Open No. 2008-209671
It is an object of the present invention to provide an image processing apparatus and an image display apparatus, in which generation of motion blur and dynamic false contour can be more reliably prevented.
An image processing apparatus according to an aspect of the present invention is an image processing apparatus which divides one field or one frame into a plurality of sub-fields, and processes an input image so as to display gradation by combining a light emission sub-field where light is emitted and a non-emission sub-field where light is not emitted, the apparatus having: a sub-field conversion unit for converting the input image into light emission data of each sub-field; a motion vector detection unit for detecting a motion vector using at least two input images which have a time lag therebetween; and a regeneration unit for changing the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit into light emission data of the sub-field of the pixel before moving, whereby the light emission data of each sub-field converted by the sub-field conversion unit is spatially rearranged, and the rearranged light emission data of each sub-field of the current frame is generated using sub-fields of at least two frames.
According to this configuration, an input image is converted into light emission data of each sub-field, and a motion vector is detected using at least two input images which have a time lag. Then the light emission data of a sub-field, corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector, is changed into light emission data of the sub-field of the pixel before moving, whereby the light emission data of each sub-field is spatially rearranged, and the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields of at least two frames.
According to the present invention, the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields of at least two frames, hence the light emission data of the sub-fields of another frame can be used for the sub-fields of the current frame where the light emission data is not rearranged, and motion blur and dynamic false contour generated around the boundary of a foreground image and a background image can be more reliably prevented.
The objects, characteristics and advantages of the present invention will become more apparent from the detailed description below and the accompanying drawings.
An image display apparatus according to the present invention will now be described with reference to the drawings. In the following embodiment, a plasma display device is described as an example of the image display apparatus, but the image display apparatus to which the present invention is applied is not limited to this example, and may be another image display apparatus, as long as one field or one frame is divided into a plurality of sub-fields to display gradation.
In the present description, the meaning of “sub-field” includes the “sub-field period”, and the meaning of “light emission of a sub-field” includes the “light emission of a pixel in a sub-field period”. The light emission period of a sub-field means a period of sustained light emission by sustaining discharge, such that a viewer can visually recognize the light emission; it does not include the initialization period and the write period, in which light emission that a viewer can visually recognize is not performed. The non-light emission period just before a sub-field means a period in which light emission that a viewer can visually recognize is not performed, and includes the initialization period, the write period, and any sustaining period in which light emission that a viewer can visually recognize is not performed.
The input unit 1 has, for example, a tuner for TV broadcasting, an image input terminal, and a network connection terminal, and moving image data is input to the input unit 1. The input unit 1 performs a known conversion processing on the input moving image data, and outputs the frame image data after the conversion processing to the sub-field conversion unit 2 and the motion vector detection unit 3.
The sub-field conversion unit 2 sequentially converts one frame image data, that is, image data of one field, into the light emission data of each sub-field, and outputs the result to the image data storage unit 4 and the sub-field regeneration unit 6. In the following description, the image data converted into the light emission data of each sub-field is also called “sub-field data”.
Now a gradation representation method for representing gradation using sub-fields will be described. One field is constituted by K sub-fields, each sub-field is weighted with a predetermined weight corresponding to brightness, and a light emission period is set so that the brightness of each sub-field changes according to this weighting. For example, if seven sub-fields are used and the k-th sub-field is weighted with 2 to the (k-1)th power, the weights of the first to seventh sub-fields are 1, 2, 4, 8, 16, 32 and 64 respectively, and an image can be represented in a 0 to 127 gradation range by combining the light emission state and the non-light emission state of each sub-field. The number of divisions and the weighting of the sub-fields are not limited to the above example, and may be changed in various ways.
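The binary weighting above can be sketched directly; the helper names are illustrative, not part of the apparatus.

```python
# Sketch of the binary sub-field weighting described above: seven sub-fields
# weighted 1, 2, 4, 8, 16, 32, 64 represent gray levels 0-127 by switching
# each sub-field on (light emission) or off (non-light emission).

WEIGHTS = [1, 2, 4, 8, 16, 32, 64]   # SF1..SF7

def to_subfields(level):
    """Decompose a gray level (0-127) into on/off states for SF1..SF7."""
    assert 0 <= level <= 127
    return [bool(level & w) for w in WEIGHTS]

def to_level(states):
    """Recover the gray level from the sub-field on/off pattern."""
    return sum(w for w, on in zip(WEIGHTS, states) if on)
```

Because the weights are powers of two, every level from 0 to 127 has exactly one on/off pattern, and the decomposition round-trips through `to_level`.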
Two frame image data, which are continuous in time, such as the image data of an N-1 frame and image data of an N frame, are input to the motion vector detection unit 3, and the motion vector detection unit 3 detects a motion vector for each pixel of the N frame by detecting the moving distance between these frames, and outputs the result to the motion vector storage unit 5 and the sub-field regeneration unit 6. For this motion vector detection method, a known motion vector detection method is used, such as a detection method using matching processing for each block.
The image data storage unit 4 stores at least the image data of an immediately preceding frame which is converted into the light emission data of each sub-field by the sub-field conversion unit 2. The motion vector storage unit 5 stores at least the motion vector of each pixel of the image data of the immediately preceding frame, detected by the motion vector detection unit 3.
The sub-field regeneration unit 6 changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit 3, into light emission data of the sub-field of the pixel before moving, whereby the light emission data of each sub-field converted by the sub-field conversion unit 2 is spatially rearranged, and the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields of at least two frames. In a plane specified by the direction of the motion vector, the sub-field regeneration unit 6 changes the light emission data, corresponding to a pixel located at a position that is moved two-dimensionally backward, into the light emission data of the sub-field of the pixel before moving.
In concrete terms, for the light emission data of a sub-field which is not rearranged, the sub-field regeneration unit 6 uses the light emission data of the sub-field in the image data of the immediately preceding frame that is stored in the image data storage unit 4. The sub-field regeneration unit 6 includes a first sub-field regeneration unit 61, an overlapping detection unit 62, a depth information creation unit 63, a second sub-field regeneration unit 64, and a combining unit 65.
Just like the rearrangement method depicted in
The overlapping detection unit 62 detects an overlapping of a foreground image and a background image. If an overlapping is detected by the overlapping detection unit 62, the depth information creation unit 63 creates depth information for each pixel where a foreground image and a background image are overlapping, to indicate whether the pixel is the foreground image or the background image. The depth information creation unit 63 creates the depth information based on the magnitude of the motion vector in at least two frames.
Based on the depth information created by the depth information creation unit 63, the first sub-field regeneration unit 61 generates the rearranged light emission data of each sub-field. If an overlapping is detected by the overlapping detection unit 62, for the pixels constituting the foreground image, the first sub-field regeneration unit 61 changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit 3, into the light emission data of the sub-field of the pixel before moving.
Here the above mentioned sub-field rearrangement method will be described in concrete terms. First, the light emission gravitational center value will be described. The light emission gravitational center value is a value generated by normalizing the light emission position of each sub-field within one frame (0 to 1), and the moving distance D [pixels] of each sub-field is given by D = V × G, where V [pixels/frame] denotes the motion vector and G denotes the light emission gravitational center value; thus the moving distance of each sub-field according to the motion vector can be calculated using the light emission gravitational center value of each sub-field.
If 25 (pixels/frame) in the x direction (horizontal direction on the display screen) and 0 (pixels/frame) in the y direction (vertical direction on the display screen) are detected as the motion vector MV of the N-1 frame and N frame at this time, the moving distance values (x, y) [pixels] of the first to fifth sub-fields SF1 to SF5 are (20, 0), (18, 0), (14, 0), (8, 0) and (0, 0) respectively based on the above mentioned D=V×G. These values are not values on the abscissa in
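The calculation D = V × G can be sketched as follows. The gravitational center values below (0.8, 0.72, 0.56, 0.32, 0.0) are assumptions chosen so that a motion vector of 25 pixels/frame in x reproduces the moving distances (20, 0) to (0, 0) given above; the description does not state G explicitly.

```python
# Sketch of D = V x G for the first to fifth sub-fields SF1 to SF5.
# The gravitational center values are assumed, chosen to reproduce the
# example distances (20, 0), (18, 0), (14, 0), (8, 0), (0, 0) for V = 25.

GRAVITY = [0.8, 0.72, 0.56, 0.32, 0.0]       # assumed G for SF1..SF5

def moving_distances(vx, vy):
    """Return the (x, y) moving distance [pixels] of each sub-field."""
    return [(round(vx * g), round(vy * g)) for g in GRAVITY]
```

Rounding to whole pixels reflects that sub-field data can only be rearranged onto discrete pixel positions.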
The second sub-field regeneration unit 64 reads the sub-field data of the immediately preceding frame stored in the image data storage unit 4, and reads the motion vector of each pixel of the image data of the immediately preceding frame stored in the motion vector storage unit 5. The second sub-field regeneration unit 64 then inverts the direction of the motion vector of each pixel of the sub-field data of the immediately preceding frame, re-normalizes the light emission gravitational center values from values 0 to 1 to values 1 to 0, and changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector into the light emission data of the sub-field of the pixel before moving, so as to generate the rearranged light emission data of each sub-field.
Here the light emission gravitational center value is a value (0 to 1) generated by normalizing the light emission position of each sub-field within one frame, and is used for calculating the moving distance of each sub-field. In other words, the moving distance of each sub-field is calculated by multiplying the light emission gravitational center value by the motion vector value. Normally the light emission gravitational center value is normalized with values 0 to 1, but if this value is instead normalized with values 1 to 0, the light emission data of each sub-field is arranged in a vertically inverted state.
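One way to read the re-normalization above, offered as an interpretation rather than the specification's exact formula, is that each gravitational center value G in 0 to 1 is replaced by 1 - G while the motion vector is negated, so the previous frame's sub-field data is pulled backward along the motion. The G values below are illustrative.

```python
# Interpretation sketch of the 1-to-0 re-normalization: G becomes 1 - G and
# the motion vector is negated, so the temporally earliest sub-field now
# moves the least and the latest moves the most (the mirror image of the
# forward rearrangement). The gravitational center values are assumed.

GRAVITY = [0.9, 0.7, 0.5, 0.3, 0.1, 0.0]     # assumed G for SF1..SF6

def backward_distances(vx, gravity):
    """Moving distance of each sub-field of the previous frame after the
    motion vector is inverted and G is re-normalized from 0..1 to 1..0."""
    return [round(-vx * (1.0 - g)) for g in gravity]
```

For a motion of 10 pixels/frame this yields distances from -1 for SF1 up to -10 for SF6, the mirror image of the forward case where the earliest sub-field moves farthest.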
The combining unit 65 combines the rearranged light emission data generated by the first sub-field regeneration unit 61 and the rearranged light emission data generated by the second sub-field regeneration unit 64.
The image display unit 7 has a plasma display panel and a panel drive circuit, and displays moving images by controlling ON or OFF of each sub-field of each pixel of the plasma display panel, based on the rearranged light emission data generated by the sub-field regeneration unit 6.
Now the light emission data rearrangement processing by the image display apparatus configured as described above will be described in concrete terms. First, moving image data is input to the input unit 1, and the input unit 1 performs a predetermined conversion processing on the input moving image data, and outputs the frame image data after conversion processing to the sub-field conversion unit 2 and the motion vector detection unit 3.
Then the sub-field conversion unit 2 sequentially converts the frame image data, for each pixel, into the light emission data of the first to sixth sub-fields SF1 to SF6, and outputs the light emission data to the sub-field regeneration unit 6 and the image data storage unit 4.
For example, it is assumed that the moving image data illustrated in
The image data storage unit 4 stores the sub-field data converted by the sub-field conversion unit 2. The image data storage unit 4 stores the sub-field data of the current frame (N frame) and the sub-field data of the immediately preceding frame (N-1 frame).
In parallel with the creation of the light emission data of the first to sixth sub-fields SF1 to SF6, the motion vector detection unit 3 detects a motion vector of each pixel between two frame image data which are continuous in time, and outputs it to the sub-field regeneration unit 6 and the motion vector storage unit 5.
The motion vector storage unit 5 stores a motion vector for each pixel of the frame image data which the motion vector detection unit 3 detected. The motion vector storage unit 5 stores a motion vector of each pixel of the frame image data of the current frame (N frame), and a motion vector of each pixel of the frame image data of the immediately preceding frame (N-1 frame).
Then the first sub-field regeneration unit 61 changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward, by the number of pixels corresponding to the motion vector, into the light emission data of the sub-field of the pixel before moving, so that the sub-field with more precedence in time moves a further distance, according to the arrangement sequence of the first to sixth sub-fields SF1 to SF6 of the N frame. Thereby the first sub-field regeneration unit 61 spatially rearranges the light emission data of each sub-field which was converted by the sub-field conversion unit 2, and generates the rearranged light emission data of each sub-field.
Then the overlapping detection unit 62 detects an overlapping of a foreground image and a background image for each sub-field. In concrete terms, when the first sub-field regeneration unit 61 rearranges the light emission data of each sub-field, the overlapping detection unit 62 counts the number of times the light emission data is written to each sub-field; if the write count is 0, the sub-field is detected as a non-setting portion where the light emission data is not set, and if the write count is 2 or more, the sub-field is detected as an overlapping portion of the foreground image and the background image.
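The write-count test above can be sketched as follows; the cell layout and function name are illustrative only.

```python
# Sketch of the detection rule: while rearranging, each destination
# (pixel, sub-field) cell is counted. A count of 0 marks a non-setting
# portion; a count of 2 or more marks an overlap of foreground and
# background. The 4x3 cell layout is illustrative.
from collections import Counter

NUM_PIXELS = 4
NUM_SF = 3

def classify(writes):
    """writes: iterable of (pixel, sub-field) destination cells recorded
    during rearrangement; returns the (non_set, overlap) cell sets."""
    counts = Counter(writes)
    cells = [(p, k) for p in range(NUM_PIXELS) for k in range(NUM_SF)]
    non_set = {c for c in cells if counts[c] == 0}
    overlap = {c for c in cells if counts[c] >= 2}
    return non_set, overlap
```

Cells written exactly once are left alone: the rearranged light emission data of the first sub-field regeneration unit 61 is used for them unchanged.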
When neither a non-setting portion, where the light emission data is not rearranged, nor an overlapping portion, where a foreground image and a background image overlap, is detected, the overlapping detection unit 62 outputs the rearranged light emission data generated by the first sub-field regeneration unit 61 to the image display unit 7.
If a non-setting portion where the light emission data is not set is detected by the overlapping detection unit 62, the second sub-field regeneration unit 64 reads the sub-field data of the N-1 frame which is stored in the image data storage unit 4, and reads the motion vector of each pixel of the frame image data of the N-1 frame stored in the motion vector storage unit 5.
Then the second sub-field regeneration unit 64 inverts the direction of the motion vector of each pixel of the frame image data of the N-1 frame, and re-normalizes the light emission gravitational center value of each sub-field of the N-1 frame sub-field data from values 0 to 1 to values 1 to 0. By inverting the direction of the motion vector of each pixel, the light emission data moves in the reverse direction of the rearranged light emission data SF_a shown in
Then the second sub-field regeneration unit 64 changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector into the light emission data of the sub-field of the pixel before moving, so that the sub-field with more precedence in time moves a shorter distance, according to the arrangement sequence of the first to sixth sub-fields SF1 to SF6 of the N-1 frame where the motion vector was inverted and the light emission gravitational center values were normalized with 1 to 0.
The second sub-field regeneration unit 64 rearranges the light emission data of each sub-field according to the motion vector, and after the rearrangement of each sub-field of each pixel in the N-1 frame, the rearranged light emission data SF_b is created as shown in
In other words, the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-16 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-15 to P-11, and the light emission data of the first sub-field SF1 of the pixel P-16 is not changed. The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-15 is changed into the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-14 to P-10, and the light emission data of the first sub-field SF1 of the pixel P-15 is not changed.
The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-14 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-13 to P-9, and the light emission data of the first sub-field SF1 of the pixel P-14 is not changed. The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-13 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-12 to P-8, and the light emission data of the first sub-field SF1 of the pixel P-13 is not changed.
The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-12 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-11 to P-7, and the light emission data of the first sub-field SF1 of the pixel P-12 is not changed. The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-11 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-10 to P-6, and the light emission data of the first sub-field SF1 of the pixel P-11 is not changed.
The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-10 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-9 to P-5, and the light emission data of the first sub-field SF1 of the pixel P-10 is not changed. The light emission data of the second to sixth sub-fields SF2 to SF6 of the pixel P-9 is changed to the light emission data of the second to sixth sub-fields SF2 to SF6 of the pixels P-8 to P-4, and the light emission data of the first sub-field SF1 of the pixel P-9 is not changed.
In this way, the second sub-field regeneration unit 64 spatially rearranges the light emission data of each sub-field of the N-1 frame where the motion vector is inverted, and the light emission gravitational center values are normalized with 1 to 0, and generates the rearranged light emission data SF_b of each sub-field.
Then the combining unit 65 combines the rearranged light emission data SF_a of the N frame created by the first sub-field regeneration unit 61 and the rearranged light emission data SF_b created by the second sub-field regeneration unit 64. In concrete terms, the combining unit 65 sets the light emission data of the sub-fields of the rearranged light emission data SF_b corresponding to the non-setting portion in the sub-fields of the non-setting portion of the rearranged light emission data SF_a detected by the overlapping detection unit 62. The light emission data of the sub-fields in the area R1 of the rearranged light emission data SF_b is set in the light emission data of the sub-fields in the area R1 of the non-setting portion of the rearranged light emission data SF_a.
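The combining step can be sketched as follows; representing a non-set cell with `None` and the 2x2 layout are assumptions for illustration.

```python
# Sketch of the combining unit 65: every cell of the N-frame data SF_a that
# was flagged as a non-setting portion (None here) is filled from the
# matching cell of SF_b, the data regenerated from the N-1 frame.

def combine(sf_a, sf_b):
    """Fill every non-set (None) cell of sf_a with the matching cell of sf_b."""
    return [
        [b if a is None else a for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(sf_a, sf_b)
    ]
```

Cells that already hold rearranged N-frame data are kept, so SF_b only supplies the portions SF_a could not fill.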
By the above mentioned sub-field rearrangement processing, in the rearranged light emission data SF_a of the N frame created by the first sub-field regeneration unit 61, the light emission data of the first to fifth sub-fields SF1 to SF5 of the pixel P-9, of the first to fourth sub-fields SF1 to SF4 of the pixel P-10, of the first to third sub-fields SF1 to SF3 of the pixel P-11, of the first and second sub-fields SF1 and SF2 of the pixel P-12, and of the first sub-field SF1 of the pixel P-13 is set to the light emission data of the same sub-fields of the same pixels in the rearranged light emission data SF_b created by the second sub-field regeneration unit 64.
In other words, the light emission data of the sub-fields belonging to the car C1 is rearranged in the sub-fields of the area R1 indicated by the triangle in
Now a case where the overlapping detection unit 62 detects an overlapping portion of a foreground image and a background image, that is, a portion where light emission data is written to one sub-field two or more times, will be described.
When the sub-fields of the moving image data where a foreground image passes in front of a background image are rearranged as illustrated in
If the overlapping detection unit 62 detects an overlapping portion, the depth information creation unit 63 calculates the depth information, which indicates whether a pixel is the foreground image or the background image, for each pixel where the foreground image and the background image overlap. In concrete terms, the depth information creation unit 63 compares the values of the motion vectors of the same pixel in two or more frames; if the value of the motion vector changes, the pixel is regarded as the foreground image, and if it does not change, the pixel is regarded as the background image. The depth information is created based on this determination. For example, the depth information creation unit 63 compares the motion vector values of the same pixel in the N frame and the N-1 frame.
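The foreground/background determination described above can be sketched as follows. This is an illustrative sketch only, assuming each pixel's motion vector is reduced to a single comparable value; the function name, the string labels, and the 1-D representation are hypothetical.

```python
# Hypothetical sketch of depth information creation by unit 63: a pixel
# whose motion vector changes between the N-1 frame and the N frame is
# treated as foreground; a pixel whose motion vector is unchanged is
# treated as background.

def create_depth_info(mv_prev, mv_curr):
    """Return per-pixel depth information: 'foreground' or 'background'."""
    return ['foreground' if p != c else 'background'
            for p, c in zip(mv_prev, mv_curr)]

# Motion vector values for 4 pixels in the N-1 frame and the N frame.
mv_n1 = [0, 0, 4, 4]
mv_n  = [0, 2, 4, 6]
print(create_depth_info(mv_n1, mv_n))
# ['background', 'foreground', 'background', 'foreground']
```
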
If the overlapping detection unit 62 detects overlapping, the first sub-field regeneration unit 61 changes the light emission data of each sub-field of the overlapping portion into the light emission data of the sub-field of the pixel constituting the foreground image specified by the depth information created by the depth information creation unit 63.
Here the first sub-field regeneration unit 61 rearranges the light emission data of each sub-field according to the motion vector, and, as shown in
First, the first sub-field regeneration unit 61 changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector into the light emission data of the sub-field of the pixel before moving, so that a sub-field earlier in time moves a greater distance, according to the arrangement sequence of the first to sixth sub-fields SF1 to SF6.
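A 1-D sketch of this rearrangement is shown below. The document states only that earlier sub-fields move a greater distance according to the motion vector; the specific linear weighting `mv * (num_sf - k) / num_sf` used here, and the function name, are assumptions for illustration. Destination cells that receive no data remain `None`, corresponding to the "non-setting portion".

```python
# Hypothetical 1-D sketch of the rearrangement performed by the first
# sub-field regeneration unit 61. Sub-field k (0-based, SF1 = k 0,
# earliest in time) of the pixel at position x is moved to position
# x + shift, where the shift decreases for later sub-fields.

def rearrange(sf_data, mv, num_sf):
    """sf_data: per-pixel lists of emission values; mv: motion vector
    in pixels. Returns rearranged data with None for non-set cells."""
    width = len(sf_data)
    out = [[None] * num_sf for _ in range(width)]
    for x in range(width):
        for k in range(num_sf):
            shift = round(mv * (num_sf - k) / num_sf)  # SF1 moves most
            dst = x + shift
            if 0 <= dst < width:
                out[dst][k] = sf_data[x][k]
    return out

# A single lit pixel at x = 0 with 6 sub-fields and a motion vector of 6:
# SF1 lands at x = 6, SF6 at x = 1, tracing the line-of-sight direction.
sf = [[1] * 6] + [[0] * 6 for _ in range(7)]
out = rearrange(sf, 6, 6)
print(out[6][0], out[1][5])  # 1 1
```
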
Then the overlapping detection unit 62 counts the number of times the light emission data is written to each sub-field. In the case of the rearranged light emission data shown in
Then the depth information creation unit 63 compares the values of the motion vectors of the same pixel between the N frame and the N-1 frame before rearrangement; if the values of the motion vectors changed, the pixel is regarded as the foreground image, and if they did not change, the pixel is regarded as the background image. The depth information is created based on this determination. For example, in the case of the N frame shown in
The first sub-field regeneration unit 61 refers to the depth information corresponding to the pixel of the pre-movement sub-field of the sub-field which the overlapping detection unit 62 detected as the overlapping portion. If the depth information indicates the foreground image, the first sub-field regeneration unit 61 changes the light emission data of this sub-field into the light emission data of the sub-field of the pre-movement pixel; if the depth information indicates the background image, the first sub-field regeneration unit 61 does not change the light emission data of the sub-field into the light emission data of the sub-field of the pre-movement pixel.
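This foreground-priority rule can be sketched as follows. It is an illustrative sketch only: the function name, the string depth labels, and the representation of the competing writes as (value, depth) pairs are assumptions not taken from the document.

```python
# Hypothetical sketch of the rule applied when the overlapping detection
# unit 62 finds a sub-field written two or more times: the write whose
# pre-movement pixel's depth information says 'foreground' wins, so that
# the foreground image's light emission data is moved with priority.

def resolve_overlap(candidates):
    """candidates: (emission_value, depth) pairs that all map to the
    same sub-field of the same destination pixel."""
    for value, depth in candidates:
        if depth == 'foreground':
            return value  # foreground data takes priority
    return candidates[0][0]  # all background: keep the first write

print(resolve_overlap([(0, 'background'), (1, 'foreground')]))  # 1
```
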
Thereby as
By the above mentioned sub-field rearrangement processing, the light emission data of the sub-fields of the foreground image is moved with priority in an overlapping portion of the foreground image and the background image. In other words, in the sub-fields in the area R2 indicated by the quadrangle in
In the present embodiment, the depth information creation unit 63 creates, for each pixel, the depth information, which indicates whether the pixel is the foreground image or the background image, based on the magnitude of the motion vector in at least two frames, but the present invention is not limited to this. In other words, if the depth information indicating whether the pixel is the foreground image or the background image is included in advance in the input image which is input to the input unit 1, it is unnecessary to create the depth information. In this case, the depth information is extracted from the input image which is input to the input unit 1. The depth information creation unit 63 may also detect an object from the input image signals, so as to determine the depth information according to the movement of the object. Further, the depth information creation unit 63 may detect characters and, if characters are detected, create the depth information regarding the characters as the foreground image.
Another example of the sub-field rearrangement processing will be described.
On the display screen DP shown in
As
If the light emission data of each sub-field in the N-2 frame, N-1 frame and N frame shown in
In this case, sufficient brightness cannot be provided in the boundary portion between the first image I1 and the second image I2, where motion blur and dynamic false contour are generated, and image quality drops. If the light emission data is moved beyond the boundary of the first image I1 and the second image I2, motion blur and dynamic false contour are generated in the boundary portion, and image quality drops.
If the light emission data of each sub-field of the N-2 frame, N-1 frame and N frame shown in
According to the present embodiment, the boundary between the first image I1 and the second image I2 becomes clear, and motion blur and dynamic false contour, which are generated in a boundary portion where the directions of the motion vectors are discontinuous, can be more reliably prevented.
In the present embodiment, the image data storage unit 4 stores the image data of the N-1 frame converted by the sub-field conversion unit 2, but the present invention is not limited to this; the image data storage unit 4 may store the image data of the N-1 frame before it is converted into sub-field data, which is output from the input unit 1. In this case, when the rearranged light emission data is generated by the second sub-field regeneration unit 64, the sub-field conversion unit 2 reads the image data of the N-1 frame from the image data storage unit 4, sequentially converts the read image data of the N-1 frame into the light emission data of each sub-field, and outputs the converted data to the sub-field regeneration unit 6.
The above mentioned embodiment primarily includes the invention having the following configuration.
An image processing apparatus according to an aspect of the present invention is an image processing apparatus which divides one field or one frame into a plurality of sub-fields, and processes an input image so as to display gradation by combining a light emission sub-field where light is emitted and a non-light emission sub-field where light is not emitted, the apparatus having: a sub-field conversion unit that converts the input image into light emission data of each sub-field; a motion vector detection unit that detects a motion vector using at least two input images which have a time lag therebetween; and a regeneration unit that changes the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit into the light emission data of the sub-field of the pixel before moving, whereby the light emission data of each sub-field converted by the sub-field conversion unit is spatially rearranged, and the rearranged light emission data of each sub-field of a current frame is generated using sub-fields of at least two frames.
According to this configuration, an input image is converted into light emission data of each sub-field, and a motion vector is detected using at least two input images which have a time lag. Then the light emission data of a sub-field, corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector, is changed into light emission data of the sub-field of the pixel before moving, whereby the light emission data of each sub-field is spatially rearranged, and the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields of at least two frames.
In some cases, when the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector is changed to the light emission data of the sub-field of the pixel before moving, light emission data may not be rearranged in an area near the boundary of a foreground image and a background image. However, since the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields of at least two frames, the light emission data of the sub-fields of another frame can be used for the sub-fields of the current frame where the light emission data is not rearranged. As a consequence, motion blur and dynamic false contour, which are generated around the boundary of the foreground image and the background image, can be more reliably prevented.
It is preferable that the above mentioned image processing apparatus further comprises a storage unit that stores image data in the immediately preceding frame converted by the sub-field conversion unit, wherein the regeneration unit uses light emission data of a sub-field of the immediately preceding frame stored in the storage unit, for the light emission data of a sub-field which has not been rearranged.
According to this configuration, the image data in the immediately preceding frame is stored in the storage unit, and the light emission data of the sub-field of the image data in the immediately preceding frame stored in the storage unit is used for the light emission data of a sub-field which was not rearranged, hence the light emission data of the sub-field of the image data in the immediately preceding frame can be used for the sub-field in the current frame where the light emission data was not rearranged, and as a consequence, motion blur and dynamic false contour, which are generated around the boundary of the foreground image and the background image, can be more reliably prevented.
It is preferable that the above mentioned image processing apparatus further comprises a depth information creation unit that creates, for each pixel where a foreground image and a background image overlap, depth information indicating whether the pixel is the foreground image or the background image, wherein the regeneration unit generates the rearranged light emission data of each sub-field based on the depth information created by the depth information creation unit.
According to this configuration, the depth information, to indicate whether the pixel is the foreground image or the background image, is created for each pixel where the foreground image and the background image overlap, and the rearranged light emission data of each sub-field is generated based on the created depth information.
When a foreground image and a background image overlap, the depth information, to indicate whether the pixel is the foreground image or the background image, is created for each pixel where the foreground image and the background image overlap, therefore the rearranged light emission data of each sub-field can be generated based on the depth information, and as a consequence, motion blur and dynamic false contour, which are generated in the overlapping portion of the foreground image and the background image, can be more reliably prevented.
In the above mentioned image processing apparatus, it is preferable that the depth information creation unit creates the depth information based on the motion vectors in at least two frames. According to this configuration, the depth information can be created based on the motion vectors in at least two frames.
In the above mentioned image processing apparatus, it is preferable that when the foreground image and the background image overlap, the regeneration unit changes, for a pixel constituting the foreground image which is specified by the depth information created by the depth information creation unit, the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, into the light emission data of the sub-field of the pixel before moving.
According to this configuration, if the foreground image and the background image overlap, for a pixel constituting the foreground image which is specified by the depth information, the light emission data of a sub-field corresponding to a pixel located at a position that is moved spatially backward by the number of pixels corresponding to the motion vector is changed into the light emission data of the sub-field of the pixel before moving. Hence the line of sight direction of the viewer can move smoothly according to the movement of the foreground image, and motion blur and dynamic false contour, generated in an overlapping portion of the foreground image and the background image, can be prevented.
An image display apparatus according to another aspect of the present invention comprises one of the image processing apparatuses described above, and a display unit which displays images using the rearranged light emission data after correction, output from the image processing apparatus.
According to this image display apparatus, in some cases, when the light emission data of a sub-field, corresponding to a pixel located in a position that is moved spatially backward by the number of pixels corresponding to the motion vector, is changed into the light emission data of the sub-field of the pixel before moving, light emission data may not be rearranged in an area around the boundary of a foreground image and a background image. But since the rearranged light emission data of each sub-field of the current frame is generated using the sub-fields in at least two frames, light emission data of the sub-fields of another frame can be used for the sub-fields of the current frame where the light emission data was not rearranged. As a consequence, motion blur and dynamic false contour, which are generated in an area around the boundary of the foreground image and the background image, can be more reliably prevented.
The embodiments and examples described in the “Best Mode for Carrying Out the Invention” section are merely intended to clarify the technical content of the present invention. The present invention shall not be interpreted narrowly as being limited to these embodiments; numerous modifications and variations can be made within the spirit of the present invention and the scope of the Claims.
The image processing apparatus according to the present invention can more reliably prevent motion blur and dynamic false contour, and is therefore useful for an image processing apparatus which divides one field or one frame into a plurality of sub-fields and processes an input image so as to display gradation by combining a light emission sub-field where light is emitted and a non-light emission sub-field where light is not emitted.
Number | Date | Country | Kind
---|---|---|---
2009-023829 | Feb 2009 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2010/000259 | 1/19/2010 | WO | 00 | 7/25/2011