The present invention relates to a video processing apparatus which processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display, and to a video display apparatus using this apparatus.
A plasma display device has the advantages of being thin and having a wide screen. In an AC plasma display panel used in such a plasma display device, a front panel, which is a glass substrate on which a plurality of scan electrodes and sustain electrodes are laid out, and a rear panel having an array of a plurality of data electrodes are combined such that the scan electrodes and the sustain electrodes are disposed perpendicular to the data electrodes, so as to form discharge cells arranged in a matrix fashion. Any of the discharge cells is selected and caused to perform plasma emission, in order to display an image on the AC plasma display panel.
When displaying an image in the manner described above, one field is divided in a time direction into a plurality of screens having different luminance weights (these screens are called “subfields” (SF) hereinafter). Light emission or light non-emission by the discharge cells of each of the subfields is controlled, so as to display an image corresponding to one field, or one frame image.
A video display apparatus that performs the subfield division described above has a problem where tone disturbance called “dynamic false contours” or motion blur occurs, deteriorating the display quality of the video display apparatus. In order to reduce the occurrence of the dynamic false contours, Patent Literature 1, for example, discloses an image display device that detects a motion vector in which a pixel of one of a plurality of fields included in a moving image is an initial point and a pixel of another field is a terminal point, converts the moving image into light emission data of the subfields, and reconstitutes the light emission data of the subfields by processing the converted light emission data using the motion vector.
This conventional image display device selects, from among motion vectors, a motion vector in which a reconstitution object pixel of the other field is the terminal point, calculates a position vector by multiplying the selected motion vector by a predetermined function, and reconstitutes the light emission datum of a subfield corresponding to the reconstitution object pixel, by using the light emission datum of the subfield corresponding to the pixel indicated by the position vector. In this manner, this conventional image display device prevents the occurrence of motion blur or dynamic false contours.
As described above, the conventional image display device converts the moving image into the light emission data of each subfield and rearranges the light emission data of the subfields in accordance with the motion vectors. A method of rearranging the light emission data of each subfield is specifically described hereinbelow.
First of all, the conventional image display device described above converts the moving image into the light emission data of the subfields, and, as shown in
When displaying the N−2 frame image D1, suppose that one field is constituted by five subfields SF1 to SF5. In this case, first, in the N−2 frame the light emission data of all subfields SF1 to SF5 of a pixel P-10 corresponding to the moving object OJ are in a light emission state (the subfields with hatched lines in the diagram), and the light emission data of the subfields SF1 to SF5 of the other pixels are in a light non-emission state (not shown). Next, when the moving object OJ moves horizontally by five pixels in the N−1 frame, the light emission data of all of the subfields SF1 to SF5 of a pixel P-5 corresponding to the moving object OJ are in the light emission state, and the light emission data of the subfields SF1 to SF5 of the other pixels are in the light non-emission state. Subsequently, when the moving object OJ further moves horizontally by five pixels in the N frame, the light emission data of all of the subfields SF1 to SF5 of a pixel P-0 corresponding to the moving object OJ are in the light emission state, and the light emission data of the subfields SF1 to SF5 of the other pixels are in the light non-emission state.
The conventional image display device described above then rearranges the light emission data of the subfields in accordance with the motion vector, and, as shown in
First, when a horizontal distance equivalent to five pixels is detected as a motion vector V1 from the N−2 frame and the N−1 frame, in the N−1 frame the light emission datum of the first subfield SF1 of the pixel P-5 (in the light emission state) is moved to the left by four pixels. The light emission datum of the first subfield SF1 of a pixel P-9 enters the light emission state from the light non-emission state (the subfield with hatched lines in the diagram). The light emission datum of the first subfield SF1 of the pixel P-5 enters the light non-emission state from the light emission state (the white subfield surrounded by a dashed line in the diagram).
The light emission datum of the second subfield SF2 of the pixel P-5 (in the light emission state) is moved to the left by three pixels. The light emission datum of the second subfield SF2 of a pixel P-8 enters the light emission state from the light non-emission state, and the light emission datum of the second subfield SF2 of the pixel P-5 enters the light non-emission state from the light emission state.
The light emission datum of the third subfield SF3 of the pixel P-5 (in the light emission state) is moved to the left by two pixels. The light emission datum of the third subfield SF3 of a pixel P-7 enters the light emission state from the light non-emission state, and the light emission datum of the third subfield SF3 of the pixel P-5 enters the light non-emission state from the light emission state.
The light emission datum of the fourth subfield SF4 of the pixel P-5 (in the light emission state) is moved to the left by one pixel. The light emission datum of the fourth subfield SF4 of a pixel P-6 enters the light emission state from the light non-emission state, and the light emission datum of the fourth subfield SF4 of the pixel P-5 enters the light non-emission state from the light emission state. Moreover, the state of the light emission datum of the fifth subfield SF5 of the pixel P-5 is not changed.
Similarly, when a horizontal distance equivalent to five pixels is detected as a motion vector V2 from the N−1 frame and the N frame, the light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-0 (in the light emission state) are moved to the left by four, three, two, and one pixels, respectively. The light emission datum of the first subfield SF1 of a pixel P-4 enters the light emission state from the light non-emission state, and the light emission datum of the second subfield SF2 of a pixel P-3 enters the light emission state from the light non-emission state. The light emission datum of the third subfield SF3 of a pixel P-2 enters the light emission state from the light non-emission state. The light emission datum of the fourth subfield SF4 of a pixel P-1 enters the light emission state from the light non-emission state. The light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-0 enter the light non-emission state from the light emission state. The state of the light emission datum of the fifth subfield SF5 of the pixel P-0 is not changed.
As a result of this subfield rearrangement process, the line of sight of a viewer moves smoothly along the direction of the arrow AR when the viewer sees the displayed image transitioning from the N−2 frame to the N frame. This can prevent the occurrence of motion blur and dynamic false contours.
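For reference, the conventional distribution described above can be sketched in code. The following is a minimal sketch, assuming a one-dimensional row of pixels, K subfields per field, boolean light emission data, and a shift rule matching the example above (with K = 5 and a motion vector of five pixels, the first to fifth subfields are shifted by 4, 3, 2, 1, and 0 pixels); the function name and the rounding of the shift amounts are illustrative assumptions, not the literal implementation of the prior-art device.

```python
def rearrange_conventional(sf_data, vector):
    """Prior-art style rearrangement: distribute each pixel's emission
    data rearward (toward the object's previous position).
    sf_data[k][x] is the emission state of subfield k+1 at pixel x."""
    K = len(sf_data)
    width = len(sf_data[0])
    out = [[False] * width for _ in range(K)]
    for k in range(K):
        # Earlier subfields move farther; the last subfield does not move.
        shift = round(vector * (K - 1 - k) / K)
        for x in range(width):
            if sf_data[k][x]:
                dst = x - shift  # rearward along the motion direction
                if 0 <= dst < width:
                    out[k][dst] = True
    return out
```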
However, when the position at which each subfield emits light is corrected by the conventional subfield rearrangement process, the subfields of the pixels that are spatially located forward are distributed to the pixels located behind them, based on the motion vectors. As a result, subfields may be distributed from pixels from which they are not supposed to be distributed. Such problems with the conventional subfield rearrangement process are specifically described below.
In a display screen D4 shown in
The conventional image display device rearranges the light emission data of the subfields in accordance with the motion vectors, and, as shown in
Specifically, the light emission data of the first to fifth subfields SF1 to SF5 corresponding to the pixels P-8 to P-4 are moved to the left by five, four, three, two, and one pixels, respectively, and the light emission data of a sixth subfield SF6 corresponding to the pixels P-8 to P-4 are not changed.
As a result of the subfield rearrangement process described above, the light emission data of the first to fifth subfields SF1 to SF5 of the pixel P-9, the light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-10, the light emission data of the first to third subfields SF1 to SF3 of the pixel P-11, the light emission data of the first and second subfields SF1 and SF2 of the pixel P-12, and the light emission datum of the first subfield SF1 of the pixel P-13, become the light emission data of the subfields that correspond to the pixels constituting the tree T1.
More specifically, the light emission data of the subfields within a triangle region R1, corresponding to the tree T1, are rearranged, as shown in
Moreover, using the conventional subfield rearrangement process to correct the light emission position of each subfield in a region where the foreground image and the background image overlap makes it difficult to determine whether the light emission data of the subfields constituting the foreground image or those constituting the background image should be arranged. These problems of the conventional subfield rearrangement process are specifically described next.
In a display screen D5 shown in
Here, the conventional image display device rearranges the light emission data of the subfields in accordance with the motion vectors, so as to create the light emission data, as follows, after rearranging the subfields of the pixels in the N frame as shown in
Specifically, the light emission data of the first to fifth subfields SF1 to SF5 of the pixels P-7 to P-9 are moved to the left by five, four, three, two, and one pixels, respectively, but the light emission data of the sixth subfield SF6 corresponding to the pixels P-7 to P-9 are not changed.
Because the values of the motion vectors of the pixels P-7 to P-9 are not 0 at this moment, the light emission data of the sixth subfield SF6 of the pixel P-7, the fifth and sixth subfields SF5 and SF6 of the pixel P-8, and the fourth to sixth subfields SF4 to SF6 of the pixel P-9 are rearranged using the light emission data corresponding to the foreground image. However, since the values of the motion vectors of the pixels P-10 to P-14 are 0, it is unknown whether the light emission data corresponding to the background image or those corresponding to the foreground image should be rearranged for the third to fifth subfields SF3 to SF5 of the pixel P-10, the second to fourth subfields SF2 to SF4 of the pixel P-11, the first to third subfields SF1 to SF3 of the pixel P-12, the first and second subfields SF1 and SF2 of the pixel P-13, and the first subfield SF1 of the pixel P-14.
The subfields within a square region R2 shown in
An object of the present invention is to provide a video processing apparatus and video display apparatus that are capable of reliably preventing the occurrence of motion blur or dynamic false contours.
A video processing apparatus according to one aspect of the present invention is a video processing apparatus, which processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display, the video processing apparatus having: a subfield conversion unit for converting the input image into light emission data for each of the subfields; a motion vector detection unit for detecting a motion vector using at least two or more input images that are temporally adjacent to each other; a first regeneration unit for collecting the light emission data of the subfields of pixels that are located spatially forward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, and thereby spatially rearranging the light emission data for each of the subfields that are converted by the subfield conversion unit, so as to generate rearranged light emission data for each of the subfields; and a detection unit for detecting an adjacent region between a first image and a second image contacting with the first image in the input image, wherein the first regeneration unit does not collect the light emission data outside the adjacent region detected by the detection unit.
According to this video processing apparatus, the input image is converted into the light emission data for each of the subfields, and the motion vector is detected using at least two or more input images that are temporally adjacent to each other. The light emission data for each of the subfields are spatially rearranged by collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, whereby the rearranged light emission data for each of the subfields are generated. In so doing, the adjacent region between the first image and the second image contacting with the first image in the input image is detected, and the light emission data are not collected outside this detected adjacent region.
According to the present invention, when collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, the light emission data are not collected outside the adjacent region between the first image and the second image contacting with the first image in the input image. Therefore, motion blur or dynamic false contours that can occur in the vicinity of the boundary between the foreground image and the background image can be prevented reliably.
The objects, characteristics and advantages of the present invention will become apparent from the detailed description of the invention presented below in conjunction with the attached drawings.
A video display apparatus according to the present invention is described hereinbelow with reference to the drawings. The following embodiments illustrate the video display apparatus using a plasma display apparatus as an example; however, the video display apparatus to which the present invention is applied is not particularly limited to this example, and the present invention can be applied similarly to any other video display apparatus in which one field or one frame is divided into a plurality of subfields and gradation display is performed.
In addition, in the present specification, the term “subfield” implies “subfield period,” and an expression such as “light emission of a subfield” implies “light emission of a pixel during the subfield period.” Moreover, the light emission period of a subfield means the duration of light emitted by sustain discharge for allowing a viewer to view an image, and does not imply the initialization period or the write period, during which the light emission for allowing the viewer to view the image is not performed. A light non-emission period immediately before a subfield means a period during which the light emission for allowing the viewer to view the image is not performed, and includes the initialization period and the write period.
The input unit 1 has, for example, a TV broadcast tuner, an image input terminal, a network connecting terminal and the like. Moving image data are input to the input unit 1. The input unit 1 carries out a known conversion process and the like on the input moving image data, and outputs frame image data, obtained after the conversion process, to the subfield conversion unit 2 and the motion vector detection unit 3.
The subfield conversion unit 2 sequentially converts one-frame image data, or image data corresponding to one field, into light emission data of the subfields, and outputs the obtained data to the first subfield regeneration unit 4.
A gradation expression method by which the video display apparatus expresses gradation levels using the subfields is now described. One field is constituted by K subfields. A predetermined weight is applied to each of the subfields in accordance with its luminance, and the light emission period is set such that the luminance of each subfield changes in response to the weight. For instance, when binary weights (powers of 2) are applied using seven subfields, the weights of the first to seventh subfields are, respectively, 1, 2, 4, 8, 16, 32 and 64; thus, an image can be expressed within a tonal range of 0 to 127 by combining subfields in the light emission state and subfields in the light non-emission state. It should be noted that the number of subfield divisions and the weighting method are not particularly limited to the examples described above, and various changes can be made thereto.
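As an illustration of this binary weighting, the following minimal sketch (with assumed names) converts a gradation level into per-subfield light emission data for the seven weighted subfields described above.

```python
WEIGHTS = [1, 2, 4, 8, 16, 32, 64]  # weights of the first to seventh subfields

def gradation_to_subfields(level):
    """Return the emission state (True = light emission) of each subfield."""
    if not 0 <= level <= sum(WEIGHTS):
        raise ValueError("level is outside the tonal range 0 to 127")
    # Each weight is a power of 2, so a bitwise AND selects the subfield.
    return [bool(level & w) for w in WEIGHTS]

# Example: level 45 = 1 + 4 + 8 + 32, so SF1, SF3, SF4 and SF6 emit light.
assert gradation_to_subfields(45) == [True, False, True, True, False, True, False]
```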
Frame image data of two temporally adjacent frames are input to the motion vector detection unit 3. For example, image data of a frame N−1 and image data of a frame N are input to the motion vector detection unit 3. The motion vector detection unit 3 detects a motion vector of each pixel within the frame N by detecting a motion amount between these frames, and outputs the detected motion vectors to the first subfield regeneration unit 4. A known motion vector detection method, such as a detection method using a block matching process, is used as the method for detecting the motion vectors.
The first subfield regeneration unit 4 collects light emission data of the subfields of the pixels that are spatially located forward by the number of pixels corresponding to the motion vectors detected by the motion vector detection unit 3, so that the temporally precedent subfields move significantly. Accordingly, the first subfield regeneration unit 4 spatially rearranges the light emission data of the subfields, which are converted by the subfield conversion unit 2, with respect to the pixels within the frame N, to generate rearranged light emission data of the subfields for the pixels within the frame N. Note that the first subfield regeneration unit 4 collects the light emission data of the subfields of the pixels that are located two-dimensionally forward in a plane specified by the direction of the motion vectors. In addition, the first subfield regeneration unit 4 includes an adjacent region detection unit 41, an overlap detection unit 42, and a depth information creation unit 43.
The adjacent region detection unit 41 detects an adjacent region between a foreground image and a background image of the frame image data that are output from the subfield conversion unit 2, and thereby detects a boundary between the foreground image and the background image. The adjacent region detection unit 41 detects the adjacent region based on the vector value of a target pixel and the vector value of a pixel from which a light emission datum is collected. Note that the adjacent region means a region that includes pixels where a first image and a second image are in contact with each other, as well as peripheral pixels thereof. The adjacent region can also be defined as a region of spatially adjacent pixels in which the difference between the motion vectors of the adjacent pixels is equal to or greater than a predetermined value.
Although the adjacent region detection unit 41 detects the adjacent region between the foreground image and the background image in the present embodiment, the present invention is not particularly limited to this embodiment. Hence, the adjacent region detection unit 41 may detect an adjacent region between the first image and the second image that is in contact with the first image.
The overlap detection unit 42 detects an overlap between the foreground image and the background image. When an overlap is detected by the overlap detection unit 42, the depth information creation unit 43 creates, for each of the pixels where the foreground image and the background image overlap each other, depth information indicating whether the pixel corresponds to the foreground image or the background image. The depth information creation unit 43 creates the depth information based on the magnitudes of the motion vectors of at least two or more frames. The depth information creation unit 43 further determines whether or not the foreground image is character information representing a character.
The second subfield regeneration unit 5 changes a light emission datum of a subfield corresponding to a pixel that is moved spatially rearward by the number of pixels corresponding to the motion vector, to a light emission datum of the subfield of the pixel obtained prior to the movement, so that the temporally precedent subfields move significantly, according to the order in which the subfields of the pixels of the frame N are arranged. Note that the second subfield regeneration unit 5 changes the light emission datum of the subfield corresponding to the pixel that is moved two-dimensionally rearward, to the light emission datum of the subfield of the pixel obtained prior to the movement, in a plane specified by the direction of the motion vector.
A subfield rearrangement process performed by the first subfield regeneration unit 4 of the present embodiment is now described. In the present embodiment, the light emission data of the subfields corresponding to the pixels that are spatially located forward of a certain pixel are collected, based on the assumption that the motion vector does not change in the vicinity of that pixel.
First of all, when a horizontal distance equivalent to five pixels is detected as a motion vector V1 from an N−1 frame and N frame, in the N frame the light emission datum of a first subfield SF1 of a pixel P-5 is changed to the light emission datum of a first subfield SF1 of a pixel P-1 that is located spatially forward by four pixels (to the right). The light emission datum of a second subfield SF2 of the pixel P-5 is changed to the light emission datum of a second subfield SF2 of a pixel P-2 that is located spatially forward by three pixels (to the right). The light emission datum of a third subfield SF3 of the pixel P-5 is changed to the light emission datum of a third subfield SF3 of a pixel P-3 that is located spatially forward by two pixels (to the right). The light emission datum of a fourth subfield SF4 of the pixel P-5 is changed to the light emission datum of a fourth subfield SF4 of a pixel P-4 that is located spatially forward by one pixel (to the right). The light emission datum of a fifth subfield SF5 of the pixel P-5 is not changed. Note that, in the present embodiment, the light emission data express either a light emission state or a light non-emission state.
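The collecting operation described above can be sketched as follows, under the same assumptions as the earlier sketch (a one-dimensional pixel row, boolean emission data, and subfield-dependent shifts); the per-pixel vector array and the fallback at the screen border are assumptions added for illustration.

```python
def rearrange_by_collection(sf_data, vectors):
    """Rearrangement by collection: each target pixel takes the emission
    datum of the pixel located spatially forward (in the motion
    direction) by a subfield-dependent number of pixels.
    vectors[x] is the horizontal motion vector of pixel x, in pixels."""
    K = len(sf_data)
    width = len(sf_data[0])
    out = [[False] * width for _ in range(K)]
    for k in range(K):
        for x in range(width):
            shift = round(vectors[x] * (K - 1 - k) / K)
            src = x + shift  # pixel located spatially forward
            # Fall back to the pixel's own datum at the screen border.
            out[k][x] = sf_data[k][src] if 0 <= src < width else sf_data[k][x]
    return out
```

With K = 5 and a vector of five pixels, the first subfield of a target pixel is collected from four pixels forward, matching the example above.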
As a result of the subfield rearrangement process described above, the line of sight of the viewer moves smoothly along the direction of an arrow BR when the viewer sees a displayed image transiting from the N−1 frame to the N frame, preventing the occurrence of motion blur and dynamic false contours.
As described above, unlike the rearrangement method illustrated in
In so doing, with regard to a boundary between the moving background image and the static foreground image, the light emission data of the subfields within the region R1 corresponding to the foreground image are rearranged, as shown in
Moreover, with regard to the subfields within the region R2 in an overlapping part where the moving foreground image and the static background image overlap each other as shown in
Note that, when the overlap is detected by the overlap detection unit 42, the first subfield regeneration unit 4 may always collect the light emission data of the subfields of the pixels constituting the foreground image, based on the depth information created by the depth information creation unit 43. In the present embodiment, however, when the overlap is detected by the overlap detection unit 42 and the depth information creation unit 43 determines that the foreground image is not the character information, the first subfield regeneration unit 4 collects the light emission data of the subfields of the pixels constituting the foreground image.
In the case where the foreground image is a character moving on the background image, instead of collecting the light emission data of the subfields of the pixels that are located spatially forward, the light emission data of the subfields corresponding to the pixels that are located spatially rearward are changed to the light emission data of the subfields of the pixels obtained prior to the movement, so that the line of sight of the viewer can be moved more smoothly.
For this reason, in the case where the overlap is detected by the overlap detection unit 42 and the depth information creation unit 43 determines that the foreground image is the character information, the second subfield regeneration unit 5 uses the depth information created by the depth information creation unit 43, to change the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector, to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly.
The image display unit 6 includes a plasma display panel, a panel drive circuit, and the like, and controls ON/OFF of each subfield of each pixel on the plasma display panel, to display a moving image.
Next is described in detail a light emission data rearrangement process performed by the video display apparatus configured as described above. First, moving image data are input to the input unit 1, in response to which the input unit 1 carries out a predetermined conversion process on the input moving image data, and then outputs frame image data, obtained as a result of the conversion process, to the subfield conversion unit 2 and the motion vector detection unit 3.
Subsequently, the subfield conversion unit 2 sequentially converts the frame image data into the light emission data of the first to sixth subfields SF1 to SF6 with respect to the pixels of the frame image data, and outputs the obtained light emission data to the first subfield regeneration unit 4.
For example, suppose that the input unit 1 receives an input of the moving image data in which a car C1, a background image, passes behind a tree T1, a foreground image, as shown in
In conjunction with the creation of the light emission data of the first to sixth subfields SF1 to SF6 described above, the motion vector detection unit 3 detects a motion vector of each pixel between two frame image data that are temporally adjacent to each other, and outputs the detected motion vectors to the first subfield regeneration unit 4.
Thereafter, the first subfield regeneration unit 4 collects the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vectors, so that the temporally precedent subfields move significantly, according to the order in which the first to sixth subfields SF1 to SF6 are arranged. Accordingly, the first subfield regeneration unit 4 spatially rearranges the light emission data of the subfields, which are converted by the subfield conversion unit 2, to generate the rearranged light emission data of the subfields.
The adjacent region detection unit 41 detects the boundary (adjacent region) between the foreground image and the background image in the frame image data that are output from the subfield conversion unit 2.
A boundary detection method by the adjacent region detection unit 41 is now described in detail.
With regard to the subfields corresponding to the target pixel, when the difference between the vector value of the target pixel and the vector value of a pixel from which a light emission datum is collected is greater than a predetermined value, the adjacent region detection unit 41 determines that the pixel from which the light emission datum is collected exists outside the boundary. In other words, when the difference diff between the vector value Val of the target pixel and the vector value of the pixel from which the light emission datum is collected satisfies the following formula (1) with regard to each subfield corresponding to the target pixel, the adjacent region detection unit 41 determines that the pixel from which the light emission datum is collected exists outside the boundary.
|diff| > |Val/2| (1)
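Interpreting diff as the absolute difference between the two vector values, the test of formula (1) can be sketched as follows (the function and variable names are illustrative):

```python
def outside_boundary(val_target, val_source):
    """Formula (1): the source pixel is judged to be outside the boundary
    when the vector-value difference exceeds half the target's vector value."""
    diff = abs(val_target - val_source)
    return diff > abs(val_target) / 2
```

In the example that follows, outside_boundary(6, 0) returns True (outside the boundary), while outside_boundary(6, 6) returns False.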
For instance, in
At this moment, the vector values of the pixels P-10, P-8, P-6, P-4, P-2, and P-0 are “6,” “6,” “4,” “6,” “0,” and “0,” respectively. With regard to the first subfield SF1 of the target pixel P-10, the difference diff between the vector value of the target pixel P-10 and the vector value of the pixel P-0 is “6” and Val/2 is “3.” Therefore, the first subfield SF1 of the target pixel P-10 satisfies the formula (1). In this case, the adjacent region detection unit 41 determines that the pixel P-0 exists outside the boundary, and the first subfield regeneration unit 4 does not change the light emission datum of the first subfield SF1 of the target pixel P-10 to the light emission datum of the first subfield SF1 of the pixel P-0.
Similarly, with regard to the second subfield SF2 of the target pixel P-10, the difference diff between the vector value of the target pixel P-10 and the vector value of the pixel P-2 is “6” and Val/2 is “3.” Therefore, the second subfield SF2 of the target pixel P-10 satisfies the formula (1). In this case, the adjacent region detection unit 41 determines that the pixel P-2 is outside the boundary, and the first subfield regeneration unit 4 does not change the light emission datum of the second subfield SF2 of the target pixel P-10 to the light emission datum of the second subfield SF2 of the pixel P-2.
With regard to the third subfield SF3 of the target pixel P-10, on the other hand, the difference diff between the vector value of the target pixel P-10 and the vector value of the pixel P-4 is “0” and Val/2 is “3.” Therefore, the third subfield SF3 of the target pixel P-10 does not satisfy the formula (1). In this case, the adjacent region detection unit 41 determines that the pixel P-4 exists within the boundary, and the first subfield regeneration unit 4 changes the light emission datum of the third subfield SF3 of the target pixel P-10 to the light emission datum of the third subfield SF3 of the pixel P-4.
With regard to the fourth and fifth subfields SF4 and SF5 corresponding to the target pixel P-10 as well, the adjacent region detection unit 41 determines that the pixels P-6 and P-8 exist within the boundary, and the first subfield regeneration unit 4 changes the light emission data of the fourth and fifth subfields SF4 and SF5 of the target pixel P-10 to the light emission data of the fourth and fifth subfields SF4 and SF5 corresponding to the pixels P-6 and P-8.
At this moment, the shift amount of the first subfield SF1 of the target pixel P-10 is equivalent to 10 pixels. The shift amount of the second subfield SF2 of the target pixel P-10 is equivalent to 8 pixels. The shift amount of the third subfield SF3 of the target pixel P-10 is equivalent to 6 pixels. The shift amount of the fourth subfield SF4 of the target pixel P-10 is equivalent to 4 pixels. The shift amount of the fifth subfield SF5 of the target pixel P-10 is equivalent to 2 pixels. The shift amount of the sixth subfield SF6 of the target pixel P-10 is equivalent to 0.
Because the adjacent region detection unit 41 can determine whether or not each of these pixels exists within the boundary, when a pixel from which a light emission datum is collected is determined to exist outside the boundary, the light emission datum of the corresponding subfield of the target pixel is changed to the light emission datum of that subfield of the pixel that is located inside the boundary and proximate to the boundary.
More specifically, as shown in
At this moment, the shift amount of the first subfield SF1 of the target pixel P-10 is changed from 10 pixels to 6 pixels, and the shift amount of the second subfield SF2 of the target pixel P-10 is changed from 8 pixels to 6 pixels.
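One way to realize this fallback is sketched below: the intended source pixel is stepped back toward the target pixel until the test of formula (1) passes, so that the datum is taken from the in-boundary pixel closest to the boundary. The one-pixel stepping and the names are assumptions for illustration, not the literal implementation.

```python
def clamped_source(x, shift, vectors):
    """Collection source for pixel x, clamped to stay inside the boundary."""
    val = vectors[x]
    step = 1 if shift > 0 else -1
    src = max(0, min(len(vectors) - 1, x + shift))
    # Step back toward the target pixel while the source fails formula (1).
    while src != x and abs(val - vectors[src]) > abs(val) / 2:
        src -= step
    return src
```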
The first subfield regeneration unit 4 collects the light emission data of the subfields on a plurality of straight lines as shown in
Note that, in the present embodiment, when the difference diff between the vector value Val of the target pixel and the vector value of the pixel from which the light emission datum is collected satisfies the formula (1) with regard to each of the subfields corresponding to the target pixel, the adjacent region detection unit 41 determines that the pixel from which the light emission datum is collected exists outside the boundary; however, the present invention is not particularly limited to this embodiment.
In other words, when the vector value of the target pixel is small, the difference diff might not satisfy the formula (1) regardless of whether or not the pixel exists within the boundary. Therefore, with regard to each of the subfields corresponding to the target pixel, when the difference diff between the vector value Val of the target pixel and the vector value of the pixel from which the light emission datum is collected satisfies the following formula (2), the adjacent region detection unit 41 may determine that the pixel from which the light emission datum is collected exists outside the boundary.
|diff| > Max(3, |Val/2|) (2)
As shown in the formula (2) above, when the difference diff between the vector value Val of the target pixel and the vector value of the pixel from which the light emission datum is collected is greater than Val/2 or 3, whichever is greater, the adjacent region detection unit 41 determines that the pixel from which the light emission datum is collected exists outside the boundary. In the formula (2), the numerical value “3” compared with the difference diff is merely an example and can therefore be “2,” “4,” “5,” or any other numerical value.
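The relaxed test of formula (2) differs from formula (1) only in the floor value; a sketch, with the floor as a parameter defaulting to the example value 3:

```python
def outside_boundary_v2(val_target, val_source, floor=3):
    """Formula (2): compare against Max(floor, Val/2) so that small vector
    values do not make the boundary test trivially unsatisfiable."""
    diff = abs(val_target - val_source)
    return diff > max(floor, abs(val_target) / 2)
```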
Here, the first subfield regeneration unit 4 rearranges the light emission data of the subfields according to the motion vectors, so that the light emission data are created as follows after rearranging the subfields of the pixels within the N frame, as shown in
Specifically, the light emission data of the first to fifth subfields SF1 to SF5 of the pixel P-17 are changed to the light emission data of the first to fifth subfields SF1 to SF5 of the pixels P-12 to P-16, but the light emission datum of the sixth subfield SF6 of the pixel P-17 is not changed. Note that the light emission data of the subfields corresponding to the pixels P-16 to P-14 are also changed as with the case of the pixel P-17.
Furthermore, the light emission data of the first and second subfields SF1 and SF2 of the pixel P-13 are changed to the light emission data of the first and second subfields SF1 and SF2 of the pixel P-9, and the light emission data of the third to fifth subfields SF3 to SF5 of the pixel P-13 are changed to the light emission data of the third to fifth subfields SF3 to SF5 of the pixels P-10 to P-12, but the light emission datum of the sixth subfield SF6 of the pixel P-13 is not changed.
The light emission data of the first to third subfields SF1 to SF3 of the pixel P-12 are changed to the light emission data of the first to third subfields SF1 to SF3 of the pixel P-9, and the light emission data of the fourth and fifth subfields SF4 and SF5 of the pixel P-12 are changed to the light emission data of the fourth and fifth subfields SF4 and SF5 of the pixels P-10 and P-11, but the light emission datum of the sixth subfield SF6 of the pixel P-12 is not changed.
Moreover, the light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-11 are changed to the light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-9, and the light emission datum of the fifth subfield SF5 of the pixel P-11 is changed to the light emission datum of the fifth subfield SF5 of the pixel P-10, but the light emission datum of the sixth subfield SF6 of the pixel P-11 is not changed.
In addition, the light emission data of the first to fifth subfields SF1 to SF5 of the pixel P-10 are changed to the light emission data of the first to fifth subfields SF1 to SF5 of the pixel P-9, but the light emission datum of the sixth subfield SF6 of the pixel P-10 is not changed.
The light emission data of the first to sixth subfields SF1 to SF6 of the pixel P-9 are not changed either.
As a result of the subfield rearrangement process described above, the light emission data of the first to fifth subfields SF1 to SF5 of the pixel P-9, the light emission data of the first to fourth subfields SF1 to SF4 of the pixel P-10, the light emission data of the first to third subfields SF1 to SF3 of the pixel P-11, the light emission data of the first and second subfields SF1 and SF2 of the pixel P-12, and the light emission datum of the first subfield SF1 of the pixel P-13 become the light emission data of the subfields that correspond to the pixel P-9 constituting the car C1.
Thus, the light emission data of the subfields of the pixels that are located spatially forward are arranged in at least some regions between a region to which the rearranged light emission data generated by the first subfield regeneration unit 4 are output and the adjacent region detected by the adjacent region detection unit 41.
In other words, with regard to the subfields within the triangle region R1 shown in
Subsequently, the overlap detection unit 42 detects an overlap between the foreground image and the background image for each subfield. More specifically, upon rearrangement of the subfields, the overlap detection unit 42 counts the number of times the light emission datum of each subfield is written. When the number of times the light emission datum is written is two or more, the relevant subfield is detected as an overlapping part where the foreground image and the background image overlap each other.
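A minimal sketch of this write counting follows, assuming the rearrangement step reports each (subfield, pixel) write as an event; the event representation is an assumption added for illustration.

```python
from collections import Counter

def detect_overlaps(write_events):
    """write_events: iterable of (subfield_index, pixel_index) tuples,
    one per write performed during rearrangement. Slots written two or
    more times are reported as foreground/background overlaps."""
    counts = Counter(write_events)
    return {slot for slot, n in counts.items() if n >= 2}
```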
For example, when rearranging the subfields of moving image data in which the foreground image passes over the background image as shown in
Next, when the overlap is detected by the overlap detection unit 42, the depth information creation unit 43 creates the depth information for each of the pixels where the foreground image and the background image overlap each other, the depth information indicating whether each of the pixels corresponds to the foreground image or the background image. More specifically, the depth information creation unit 43 compares the motion vector of the same pixel between two or more frames. When the value of the motion vector changes, the depth information creation unit 43 creates the depth information indicating that the pixel corresponds to the foreground image. When the value of the motion vector does not change, the depth information creation unit 43 creates the depth information indicating that the pixel corresponds to the background image. For example, the depth information creation unit 43 compares the vector value of the same pixel between the N frame and the N−1 frame.
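A sketch of this depth-information creation, assuming the per-pixel vector values of the two frames are available as arrays; the labels and names are illustrative.

```python
FOREGROUND, BACKGROUND = "foreground", "background"

def create_depth_info(vec_frame_n_minus_1, vec_frame_n):
    """A pixel whose motion vector value changes between the two frames
    is labeled foreground; an unchanged value is labeled background."""
    return [FOREGROUND if v0 != v1 else BACKGROUND
            for v0, v1 in zip(vec_frame_n_minus_1, vec_frame_n)]
```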
When the overlap is detected by the overlap detection unit 42, the first subfield regeneration unit 4 changes the light emission datum of each of the subfields constituting the overlapping part, to the light emission datum of each of the subfields of the pixels that constitute the foreground image that is specified by the depth information created by the depth information creation unit 43.
Here, the first subfield regeneration unit 4 rearranges the light emission data of the subfields according to the motion vectors, so that the light emission data are created as follows after rearranging the subfields of the pixels within the N frame, as shown in
First, the first subfield regeneration unit 4 collects the light emission data of the subfields of the pixels that are spatially located forward by the number of pixels corresponding to the motion vector, so that the temporally precedent subfields move significantly, according to the order in which the first to sixth subfields SF1 to SF6 are arranged.
At this moment, the overlap detection unit 42 counts the number of times the light emission datum of each subfield is written. With regard to the first subfield SF1 of the pixel P-14, the first and second subfields SF1 and SF2 of the pixel P-13, the first to third subfields SF1 to SF3 of the pixel P-12, the second to fourth subfields SF2 to SF4 of the pixel P-11, and the third to fifth subfields SF3 to SF5 of the pixel P-10, the light emission data are written twice. Therefore, the overlap detection unit 42 detects these subfields as the overlapping part where the foreground image and the background image overlap each other.
Subsequently, the depth information creation unit 43 compares the value of the motion vector of the same pixel between the N frame and the N−1 frame prior to the rearrangement of the subfields. When the value of the motion vector changes, the depth information creation unit 43 creates the depth information indicating that the pixel corresponds to the foreground image. When the value of the motion vector does not change, the depth information creation unit 43 creates the depth information indicating that the pixel corresponds to the background image. For instance, in an N frame image shown in
The first subfield regeneration unit 4 refers to the depth information that is associated with the pixels from which the light emission data of the subfields detected as the overlapping part by the overlap detection unit 42 are collected. When the depth information indicates the foreground image, the first subfield regeneration unit 4 collects the light emission data from those pixels. When the depth information indicates the background image, the first subfield regeneration unit 4 does not collect the light emission data from those pixels.
Consequently, as shown in
As a result of the subfield rearrangement process described above, the light emission data of the subfields corresponding to the foreground image in the overlapping part between the foreground image and the background image are preferentially collected. In other words, for the subfields within the square region R2 shown in
Note that, in the present embodiment, the depth information creation unit 43 creates the depth information for each pixel on the basis of the magnitudes of the motion vectors of at least two frames, the depth information indicating whether each pixel corresponds to the foreground image or the background image; however, the present invention is not limited to this embodiment. In other words, when the input image that is input to the input unit 1 contains, beforehand, the depth information indicating whether each pixel corresponds to the foreground image or the background image, the depth information creation unit 43 does not need to create the depth information. In this case, the depth information is extracted from the input image that is input to the input unit 1.
Next is described the subfield rearrangement process performed when the foreground image is a character.
In
Here, when the boundary between the foreground image and the background image is detected and the light emission data are collected within the boundary, the light emission data of the subfields that are obtained after the rearrangement process are rearranged in the pixels P-3 to P-5, as shown in
In the present embodiment, therefore, when the foreground image represents the character information, the light emission data are allowed to be collected outside the boundary between the foreground image and the background image, and the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vectors are changed to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly.
Specifically, the depth information creation unit 43 recognizes whether the foreground image is a character or not, using known character recognition technology. When the foreground image is recognized as a character, the depth information creation unit 43 adds, to the depth information, information indicating that the foreground image is a character.
When the depth information creation unit 43 identifies the foreground image as a character, the first subfield regeneration unit 4 outputs, to the second subfield regeneration unit 5, the image data that are converted to the plurality of subfields by the subfield conversion unit 2 and the motion vector detected by the motion vector detection unit 3, without performing the rearrangement process.
With regard to the pixels of the character recognized by the depth information creation unit 43, the second subfield regeneration unit 5 changes the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector, to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly.
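The rearward distribution for character pixels can be sketched as follows, under the same one-dimensional assumptions as the earlier sketches; vacating the source slot and the shift rule are assumptions chosen to mirror the distribution examples above, not the literal implementation.

```python
def rearrange_rearward(sf_data, vectors, is_char_pixel):
    """Distribute the emission data of character pixels spatially
    rearward, earlier subfields moving farther."""
    K = len(sf_data)
    width = len(sf_data[0])
    out = [row[:] for row in sf_data]
    for k in range(K):
        shifts = [round(vectors[x] * (K - 1 - k) / K) for x in range(width)]
        # First vacate the source slots of the data to be moved...
        for x in range(width):
            if is_char_pixel[x] and sf_data[k][x] and shifts[x] != 0:
                out[k][x] = False
        # ...then write each datum to its rearward position.
        for x in range(width):
            if is_char_pixel[x] and sf_data[k][x]:
                dst = x - shifts[x]
                if 0 <= dst < width:
                    out[k][dst] = True
    return out
```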
As a result, as shown in
As a result of the subfield rearrangement process described above, when the foreground image is a character, the light emission data of the subfields that correspond to the pixels constituting the foreground image are distributed spatially rearward by the number of pixels corresponding to the motion vector so that the temporally precedent subfields move significantly. This allows the line of sight to move smoothly, preventing the occurrence of motion blur or dynamic false contours, and consequently improving the image quality.
With regard to only the pixels that constitute the foreground image moving horizontally in the input image, the second subfield regeneration unit 5 preferably changes the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit 3, to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly.
In so-called character scroll, where a character moves on a screen, the character usually moves in a horizontal direction and not in a vertical direction. Thus, with regard to only the pixels that constitute the foreground image moving horizontally in the input image, the second subfield regeneration unit 5 changes the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector, to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly. Consequently, the number of vertical line memories can be reduced, and the amount of memory used by the second subfield regeneration unit 5 can be reduced.
In the present embodiment, the depth information creation unit 43 recognizes whether the foreground image is a character or not, using the known character recognition technology. When the foreground image is recognized as a character, the depth information creation unit 43 adds, to the depth information, the information indicating that the foreground image is a character. However, the present invention is not particularly limited to this embodiment. In other words, when the input image that is input to the input unit 1 contains, beforehand, the information indicating that the foreground image is a character, the depth information creation unit 43 does not need to recognize whether the foreground image is a character or not.
In this case, the information indicating that the foreground image is a character is extracted from the input image that is input to the input unit 1. The second subfield regeneration unit 5 specifies the pixels constituting the character, based on the information indicating that the foreground image is a character, the information being included in the input image that is input to the input unit 1. Subsequently, for the specified pixels, the second subfield regeneration unit 5 changes the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector, to the light emission data of the subfields of the pixels obtained prior to the movement, so that the temporally precedent subfields move significantly.
Next is described another example of the subfield rearrangement process for rearranging the subfields in the vicinity of the boundary.
In a display screen D6 shown in
As shown in
In the case where the light emission data of the subfields shown in
In this case, because the light emission data of the subfields corresponding to some of the pixels that constitute the foreground image I1 are moved to the background image I2 side, the foreground image I1 sticks out to the background image I2 side at the boundary between the foreground image I1 and the background image I2 on the display screen D6, causing motion blur or dynamic false contours and deteriorating the image quality.
However, when the light emission data of the subfields shown in
The present embodiment, as described above, can make the boundary between the foreground image I1 and the background image I2 clear, and reliably prevent the occurrence of motion blur or dynamic false contours, which can be generated when performing the rearrangement process on a boundary part where the motion vectors change significantly.
Next is described yet another example of the subfield rearrangement process for rearranging the subfields in the vicinity of a boundary.
In a display screen D7 shown in
As shown in
In the case where the light emission data of the subfields shown in
Furthermore, the light emission data of the first subfields SF1 corresponding to the pixels P-4 to P-6 are changed to the light emission data of the first subfields SF1 corresponding to the pixels P-1 to P-3. The light emission data of the second subfields SF2 corresponding to the pixels P-4 and P-5 are changed to the light emission data of the second subfields SF2 corresponding to the pixels P-2 and P-3. The light emission datum of the third subfield SF3 corresponding to the pixel P-4 is changed to the light emission datum of the third subfield SF3 of the pixel P-3.
In this case, because the light emission data of the subfields that correspond to some of the pixels constituting the first image I3 are moved to the second image I4 side, and the light emission data of the subfields that correspond to some of the pixels constituting the second image I4 are moved to the first image I3 side, the first image I3 and the second image I4 stick out beyond the boundary between the first image I3 and the second image I4 on the display screen D7, causing motion blur or dynamic false contours and consequently deteriorating the image quality.
However, when the light emission data of the subfields shown in
In addition, the light emission data of the first subfield SF1 corresponding to the pixels P-5 and P-6 are changed to the light emission datum of the first subfield SF1 corresponding to the pixel P-4, and the light emission datum of the second subfield SF2 corresponding to the pixel P-5 is changed to the light emission datum of the second subfield SF2 corresponding to the pixel P-4, but the light emission data of the first to fourth subfields SF1 to SF4 corresponding to the pixel P-4 are not changed.
The present embodiment, as described above, can make the boundary between the first image I3 and the second image I4 clear, and prevent the occurrence of motion blur or dynamic false contours that can be generated when the rearrangement process is performed on a boundary part in which the directions of the motion vectors are discontinuous.
A video display apparatus according to another embodiment of the present invention is described next.
Note that the same configurations as those of the video display apparatus shown in
The smoothing process unit 7, constituted by, for example, a low-pass filter, smoothes the values of the motion vectors detected by the motion vector detection unit 3 in the boundary part between the foreground image and the background image. For example, when rearranging the display screen in which the values of the motion vectors of continuous pixels change in such a manner as “666666000000” along a direction of movement, the smoothing process unit 7 smoothes these values of the motion vectors into “654321000000.”
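For instance, a simple moving-average low-pass filter over the vector values produces this kind of ramp; the following is a sketch under that assumption (the exact smoothed values depend on the filter actually used):

```python
def smooth_vectors(vectors, radius=3):
    """Moving-average smoothing of per-pixel motion vector values."""
    n = len(vectors)
    out = []
    for i in range(n):
        window = vectors[max(0, i - radius): min(n, i + radius + 1)]
        out.append(round(sum(window) / len(window)))
    return out

# [6,6,6,6,6,6,0,0,0,0,0,0] is flattened into a ramp such as
# [6,6,6,5,4,3,3,2,1,0,0,0], approximating the "654321000000" profile.
```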
In this manner, the smoothing process unit 7 smoothes the values of the motion vectors of the background image into continuous values in the boundary between the static foreground image and the moving background image. The first subfield regeneration unit 4 then spatially rearranges the light emission data of the subfields, which are converted by the subfield conversion unit 2, with respect to the respective pixels of the frame N, in accordance with the motion vectors smoothed by the smoothing process unit 7. Accordingly, the first subfield regeneration unit 4 generates the rearranged light emission data of the subfields for the respective pixels of the frame N.
As a result, the static foreground image and the moving background image become continuous and are displayed naturally in the boundary therebetween, whereby the subfields can be rearranged with a high degree of accuracy.
It should be noted that the specific embodiments described above mainly include the inventions having the following configurations.
A video processing apparatus according to one aspect of the present invention is a video processing apparatus, which processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display, the video processing apparatus having: a subfield conversion unit for converting the input image into light emission data for each of the subfields; a motion vector detection unit for detecting a motion vector using at least two or more input images that are temporally adjacent to each other; a first regeneration unit for collecting the light emission data of the subfields of pixels that are located spatially forward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, and thereby spatially rearranging the light emission data for each of the subfields that are converted by the subfield conversion unit, so as to generate rearranged light emission data for each of the subfields; and a detection unit for detecting an adjacent region between a first image and a second image contacting with the first image in the input image, wherein the first regeneration unit does not collect the light emission data outside the adjacent region detected by the detection unit.
According to this video processing apparatus, the input image is converted into the light emission data for each of the subfields, and the motion vector is detected using at least two or more input images that are temporally adjacent to each other. The light emission data for each of the subfields are spatially rearranged by collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, whereby the rearranged light emission data for each of the subfields are generated. In so doing, the adjacent region between the first image and the second image contacting with the first image in the input image is detected, and the light emission data are not collected outside this detected adjacent region.
Thus, when collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, the light emission data are not collected outside the adjacent region between the first image and the second image in contact with the first image in the input image. Therefore, motion blur or dynamic false contours that can occur in the vicinity of the boundary between the foreground image and the background image can be prevented reliably.
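For illustration, a minimal one-dimensional sketch of this aspect follows. The linear per-subfield fraction of the motion vector, the function and variable names, and the per-pixel boundary flag are all illustrative assumptions, not taken from the claims or drawings.

```python
def regenerate(sf_data, vector, boundary, num_sf):
    """sf_data[k][x]: converted emission datum of subfield k at pixel x.
    vector[x]: horizontal motion vector at pixel x (pixels per frame).
    boundary[x]: True where the detection unit found the adjacent region.
    """
    width = len(vector)
    out = [row[:] for row in sf_data]  # start from the converted data
    for x in range(width):
        for k in range(num_sf):
            # pixel located spatially forward for subfield k
            # (linear weighting over the subfield position: an assumption)
            src = x + round(vector[x] * (num_sf - 1 - k) / num_sf)
            if not 0 <= src < width:
                continue
            step = 1 if src >= x else -1
            # do not collect across the detected adjacent region
            if any(boundary[i] for i in range(x, src, step)):
                continue
            out[k][x] = sf_data[k][src]
    return out
```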
A video processing apparatus according to another aspect of the present invention is a video processing apparatus, which processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display, the video processing apparatus having: a subfield conversion unit for converting the input image into light emission data for each of the subfields; a motion vector detection unit for detecting a motion vector using at least two input images that are temporally adjacent to each other; a first regeneration unit for collecting the light emission data of the subfields of pixels that are located spatially forward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, and thereby spatially rearranging the light emission data for each of the subfields that are converted by the subfield conversion unit, so as to generate rearranged light emission data for each of the subfields; and a detection unit for detecting an adjacent region between a first image and a second image in contact with the first image in the input image, wherein the first regeneration unit collects the light emission data of the subfields that exist on a plurality of straight lines.
According to this video processing apparatus, the input image is converted into the light emission data for each of the subfields, and the motion vector is detected using at least two input images that are temporally adjacent to each other. The light emission data for each of the subfields are spatially rearranged by collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, whereby the rearranged light emission data for each of the subfields are generated. In so doing, the adjacent region between the first image and the second image in contact with the first image in the input image is detected, and the light emission data of the subfields on the plurality of straight lines are collected.
Thus, when collecting the light emission data of the subfields of the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, the light emission data of the subfields on the plurality of straight lines are collected. Therefore, motion blur or dynamic false contours that can occur in the vicinity of the boundary between the foreground image and the background image can be prevented reliably.
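The geometry of the plurality of straight lines is defined by the drawings rather than by this text. The sketch below shows one possible reading, in which one candidate line is generated per motion vector of the pixel and its neighbors, and the first line that does not cross the adjacent region is used; this interpretation, and every name in it, is an assumption.

```python
def collect_on_lines(sf_data, vectors, boundary, num_sf, x, k):
    """Return the datum for subfield k at pixel x, trying one straight
    line per candidate vector and keeping the first line that does not
    cross the adjacent region (purely illustrative reading)."""
    width = len(boundary)
    candidates = [vectors[x]]                 # the pixel's own line
    if x > 0:
        candidates.append(vectors[x - 1])     # neighbours' lines
    if x + 1 < width:
        candidates.append(vectors[x + 1])
    for v in candidates:
        src = x + round(v * (num_sf - 1 - k) / num_sf)
        if 0 <= src < width and not any(
                boundary[i] for i in range(min(x, src), max(x, src))):
            return sf_data[k][src]
    return sf_data[k][x]                      # fall back to the converted datum
```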
A video processing apparatus according to yet another aspect of the present invention is a video processing apparatus, which processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display, the video processing apparatus having: a subfield conversion unit for converting the input image into light emission data for each of the subfields; a motion vector detection unit for detecting a motion vector using at least two input images that are temporally adjacent to each other; a first regeneration unit for spatially rearranging the light emission data for each of the subfields that are converted by the subfield conversion unit, with respect to the subfields of pixels located spatially forward, in accordance with the motion vector detected by the motion vector detection unit, so as to generate rearranged light emission data for each of the subfields; and a detection unit for detecting an adjacent region between a first image and a second image in contact with the first image in the input image, wherein the light emission data of the subfields of the pixels that are located spatially forward are arranged in at least one region between a region to which the rearranged light emission data generated by the first regeneration unit are output and the adjacent region detected by the detection unit.
According to this video processing apparatus, the input image is converted into the light emission data for each of the subfields, and the motion vector is detected using at least two input images that are temporally adjacent to each other. The light emission data for each of the subfields are spatially rearranged with respect to the subfields of the pixels located spatially forward, in accordance with the motion vector, whereby the rearranged light emission data for each of the subfields are generated. In so doing, the adjacent region between the first image and the second image in contact with the first image in the input image is detected, and the light emission data of the subfields of the pixels that are located spatially forward are arranged in at least one region between the region to which the generated rearranged light emission data are output and the detected adjacent region.
Because the light emission data of the subfields of the pixels that are located spatially forward are arranged in at least one region between the region to which the generated rearranged light emission data are output and the adjacent region between the first image and the second image in contact with the first image in the input image, motion blur or dynamic false contours that can occur in the vicinity of the boundary between the foreground image and the background image can be prevented reliably.
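As a hedged illustration of this gap filling, the sketch below assumes a one-dimensional layout, a known pixel position for the adjacent region, and that the spatially forward pixel is the one at that position; none of these details are fixed by the text.

```python
def fill_between(out, filled, boundary_x, num_sf):
    """out[k][x]: rearranged data; filled[x]: True where the first
    regeneration unit wrote data; boundary_x: pixel position of the
    detected adjacent region (assumed known and one-dimensional)."""
    x = boundary_x - 1
    while x >= 0 and not filled[x]:           # the unfilled gap, if any
        for k in range(num_sf):
            # arrange the data of the spatially forward pixel (taken
            # here to be the pixel at the adjacent region: an assumption)
            out[k][x] = out[k][boundary_x]
        x -= 1
    return out
```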
Moreover, in the video processing apparatus described above, it is preferred that the first regeneration unit collect the light emission data of the subfields of the pixels corresponding to the adjacent region, with respect to the subfields for which the light emission data are not collected.
According to this configuration, because the light emission data of the subfields of the pixels corresponding to the adjacent region are collected for the subfields for which the light emission data are not collected, the boundary between the foreground image and the background image can be made clear, and motion blur or dynamic false contours that can occur in the vicinity of the boundary can be prevented reliably.
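A minimal sketch of this fallback follows, assuming per-subfield bookkeeping of which pixels were collected; the choice of the boundary pixel as the fallback source, and all names, are illustrative assumptions.

```python
def fill_uncollected(out, collected, sf_data, boundary_x, num_sf):
    """collected[k][x]: True where the first regeneration unit collected
    a datum for subfield k at pixel x."""
    width = len(out[0])
    for k in range(num_sf):
        for x in range(width):
            if not collected[k][x]:
                # collect the datum of the pixel corresponding to the
                # adjacent region instead
                out[k][x] = sf_data[k][boundary_x]
    return out
```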
In addition, in the video processing apparatus described above, it is preferred that the first image include a foreground image showing a foreground, that the second image include a background image showing a background, that the video processing apparatus further include a depth information creation unit for creating depth information for each of pixels where the foreground image and the background image overlap each other, the depth information indicating whether each of the pixels corresponds to the foreground image or the background image, and that the first regeneration unit collect the light emission data of the subfields of pixels that constitute the foreground image specified by the depth information created by the depth information creation unit.
According to this configuration, the depth information is created for each of the pixels where the foreground image and the background image overlap each other, so as to indicate whether each of the pixels corresponds to the foreground image or the background image. Then, the light emission data of the subfields of the pixels that constitute the foreground image specified based on the depth information are collected.
Therefore, when the foreground image and the background image overlap each other, the light emission data of the subfields of the pixels constituting the foreground image are collected. As a result, motion blur or dynamic false contours that can occur in the overlapping part between the foreground image and the background image can be prevented reliably.
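For illustration, foreground-priority collection might be sketched as follows, assuming the depth information creation unit supplies a per-pixel boolean map and that the overlapping foreground and background contributions are available separately; all names are assumptions.

```python
def collect_with_depth(sf_fg, sf_bg, is_foreground, num_sf):
    """sf_fg / sf_bg: subfield data contributed by the foreground and
    background layers; is_foreground[x]: depth information (True where
    the pixel corresponds to the foreground image)."""
    width = len(is_foreground)
    # where the layers overlap, the foreground pixel's data win
    return [[sf_fg[k][x] if is_foreground[x] else sf_bg[k][x]
             for x in range(width)]
            for k in range(num_sf)]
```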
Furthermore, in the video processing apparatus described above, it is preferred that the first image include a foreground image showing a foreground, that the second image include a background image showing a background, and that the video processing apparatus further include a depth information creation unit for creating depth information for each of pixels where the foreground image and the background image overlap each other, the depth information indicating whether each of the pixels corresponds to the foreground image or the background image, and a second regeneration unit for changing the light emission data of the subfields corresponding to pixels that have been moved spatially rearward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, to the light emission data of the subfields of the pixels obtained prior to the movement, with respect to the pixels that constitute the foreground image specified by the depth information created by the depth information creation unit, and thereby spatially rearranging the light emission data for each of the subfields that are converted by the subfield conversion unit, so as to generate the rearranged light emission data for each of the subfields.
According to this configuration, the depth information is created for each of the pixels where the foreground image and the background image overlap each other, so as to indicate whether each of the pixels corresponds to the foreground image or the background image. Then, with respect to the pixels that constitute the foreground image specified based on the depth information, the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector are changed to the light emission data of the subfields of the pixels obtained prior to the movement. Consequently, the light emission data for each of the subfields are spatially rearranged, and the rearranged light emission data for each of the subfields are generated.
Therefore, in the pixels that constitute the foreground when the foreground image and the background image overlap each other, the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector are changed to the light emission data of the subfields of the pixels obtained prior to the movement. This allows the line of sight of the viewer to move smoothly as the foreground image moves, preventing the occurrence of motion blur or dynamic false contours that can be generated in the overlapping part between the foreground image and the background image.
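A hedged one-dimensional sketch of the second regeneration unit follows, reusing the linear per-subfield offset assumed earlier. Note that it scatters (pushes data rearward) where the first regeneration unit gathers; all identifiers are assumptions.

```python
def second_regeneration(sf_data, vector, is_foreground, num_sf):
    """Push the foreground pixels' subfield data spatially rearward
    (a scatter), so each rearward pixel takes the datum the foreground
    pixel had prior to the movement."""
    width = len(vector)
    out = [row[:] for row in sf_data]
    for x in range(width):
        if not is_foreground[x]:
            continue
        for k in range(num_sf):
            # rearward means against the direction of movement; the
            # linear per-subfield offset is the same assumption as above
            dst = x - round(vector[x] * (num_sf - 1 - k) / num_sf)
            if 0 <= dst < width:
                out[k][dst] = sf_data[k][x]
    return out
```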
It is also preferred in the video processing apparatus described above, that the foreground image be a character. According to this configuration, for the pixels that constitute the character when the character overlaps with the background image, the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector are changed to the light emission data of the subfields of the pixels obtained prior to the movement. This allows the line of sight of the viewer to move smoothly as the character moves, preventing the occurrence of motion blur or dynamic false contours that can be generated in the overlapping part between the foreground image and the background image.
In the video processing apparatus described above, for the pixels that constitute the foreground image moving horizontally in the input image, the second regeneration unit preferably changes the light emission data of the subfields corresponding to the pixels that have been moved spatially rearward by the number of pixels corresponding to the motion vector detected by the motion vector detection unit, to the light emission data of the subfields of the pixels obtained prior to the movement.
According to this configuration, only with regard to the pixels that constitute the foreground image moving horizontally in the input image, the light emission data of the subfields corresponding to the pixels that are moved spatially rearward by the number of pixels corresponding to the motion vector are changed to the light emission data of the subfields of the pixels obtained prior to the movement. As a result, the number of vertical line memories can be reduced, and the amount of memory used by the second regeneration unit can be reduced.
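The memory saving can be illustrated by processing one scan line at a time, reusing the hypothetical second_regeneration sketch given above; with horizontal-only motion, no output row depends on any other row.

```python
def regenerate_frame_horizontal(frame_sf, vectors, fg_mask, num_sf):
    """frame_sf[y][k][x]: subfield k of pixel (x, y).  With horizontal-
    only motion every scan line is independent, so a single line buffer
    suffices and no vertical line memories are needed."""
    return [second_regeneration(frame_sf[y], vectors[y], fg_mask[y], num_sf)
            for y in range(len(frame_sf))]
```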
In the video processing apparatus described above, it is preferred that the depth information creation unit create the depth information based on the magnitudes of the motion vectors of at least two frames. According to this configuration, the depth information can be created based on the magnitudes of the motion vectors of at least two frames.
A video display apparatus according to another aspect of the present invention has any of the video processing apparatuses described above, and a display unit for displaying an image by using corrected rearranged light emission data that are output from the video processing apparatus.
In this video display apparatus, when collecting the light emission data of the subfields corresponding to the pixels that are located spatially forward by the number of pixels corresponding to the motion vector, the light emission data are not collected outside the adjacent region between the first image and the second image in contact with the first image in the input image. Therefore, motion blur or dynamic false contours that can occur in the vicinity of the boundary between the foreground image and the background image can be prevented reliably.
Note that the specific embodiments and examples provided in the paragraphs describing the best mode for carrying out the invention are merely intended to clarify the technical contents of the present invention, and therefore should not be narrowly interpreted. The present invention may be modified in various ways without departing from the spirit of the present invention and the scope of the claims.
The video processing apparatus according to the present invention is capable of reliably preventing the occurrence of motion blur or dynamic false contours, and is therefore useful as a video processing apparatus that processes an input image so as to divide one field or one frame into a plurality of subfields and combine an emission subfield in which light is emitted and a non-emission subfield in which light is not emitted in order to perform gradation display.
Number | Date | Country | Kind
2008-334182 | Dec. 2008 | JP | national

Filing Document | Filing Date | Country | Kind | 371(c) Date
PCT/JP2009/006986 | 12/17/2009 | WO | 00 | 6/20/2011