The present invention relates to a stereoscopic image display device for deriving a motion vector from stereoscopic image data for allowing a stereoscopic image to be perceived due to binocular parallax and a method of deriving such a motion vector.
In recent years, a stereoscopic image display device has come into wide use that has a function of deriving, by means of block matching or the like, a motion vector representing the moving direction of an object and its moving magnitude between a plurality of frames being the individual still images constituting motion pictures, thereby producing frames to interpolate time-series variation between frames.
As for not only frames imaged by one imaging device but also frames imaged by a plurality of imaging devices, there is also proposed an interpolation technique of deriving a motion vector representing the displacement of an object between equal-time frames, thereby estimating frames that could have been imaged at positions where no imaging device is arranged (e.g. Patent Document No. 1).
Meanwhile, a technique of recording and transmitting stereoscopic image data (image data for the left eye and image data for the right eye), for allowing a stereoscopic image to be perceived due to binocular parallax, by the side-by-side method, the top-and-bottom method, etc. is about to become widespread. Assume that an interframe interpolation technique is adopted to apply an interpolation to the stereoscopic image data recorded by such 3-dimensional recording methods. Then, as a result of comparing partial image data contained in the “left-eye” image data of one of the two objective frames with partial image data contained in the “right-eye” image data of the other frame, an erroneous motion vector straddling the image data for the left eye and the image data for the right eye may be derived. It is noted that an interpolation frame produced on the basis of such an erroneous motion vector may become a source of noise.
Therefore, an object of the present invention is to provide a stereoscopic image display device capable of avoiding the derivation of the erroneous motion vector, thereby deriving a motion vector with high accuracy and a method of deriving such a motion vector.
In order to solve the above-mentioned problems, a stereoscopic image display device of the present invention comprises: an image acquisition unit configured to acquire stereoscopic image data composed of a combination of “left-eye” image data and “right-eye” image data for allowing a stereoscopic image to be perceived due to binocular parallax; and a vector deriving unit configured to: respectively derive correlation values representing correlation strength between each reference-part image data, which constitutes image data in segmented areas of an arbitrary first frame of the stereoscopic image data, and a plurality of objective-part image data, which constitute image data in segmented areas of a second frame different from the first frame of the stereoscopic image data and which are comparative objects to be compared with each reference-part image data; and extract, for each reference-part image data, the objective-part image data having the highest correlation value to the reference-part image data, thereby deriving a motion vector between the reference-part image data and the extracted objective-part image data, wherein the vector deriving unit is configured to derive either the correlation values between the reference-part image data and the objective-part image data, both of which belong to the “right-eye” image data, or the correlation values between the reference-part image data and the objective-part image data, both of which belong to the “left-eye” image data.
The image acquisition unit may further include a left-right judgment unit configured to: judge whether unit image data, which is obtained by compartmentalizing the stereoscopic image data every predetermined number of pixels, belongs to the “left-eye” image data or the “right-eye” image data, based on synchronous signals inputted together with the stereoscopic image data; and affix, to each unit image data, image information representing whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data.
To predetermined unit image data constituting the stereoscopic image data, there may be affixed image information representing whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data, and the vector deriving unit may be configured to judge whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data, based on the image information.
In order to solve the above-mentioned problems, a method of deriving a motion vector comprises the steps of: acquiring stereoscopic image data composed of a combination of “left-eye” image data and “right-eye” image data for allowing a stereoscopic image to be perceived due to binocular parallax; respectively deriving correlation values representing correlation strength between each reference-part image data, which constitutes image data in segmented areas of an arbitrary first frame of the stereoscopic image data, and a plurality of objective-part image data, which constitute image data in segmented areas of a second frame different from the first frame of the stereoscopic image data and which are comparative objects to be compared with each reference-part image data; and extracting, for each reference-part image data, the objective-part image data having the highest correlation value to the reference-part image data, thereby deriving a motion vector between the reference-part image data and the extracted objective-part image data.
The above method may further include the steps of: judging whether unit image data, which is obtained by compartmentalizing the stereoscopic image data every predetermined number of pixels, belongs to the “left-eye” image data or the “right-eye” image data, based on synchronous signals inputted together with the stereoscopic image data; and affixing, to each unit image data, image information representing whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data.
To predetermined unit image data constituting the stereoscopic image data, there may be affixed image information representing whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data, and the method may further include the step of judging whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data, based on the image information.
As mentioned above, the present invention enables a motion vector to be derived with high accuracy by avoiding the derivation of an erroneous motion vector.
Preferred embodiments of the present invention will be described below in detail with reference to drawings. In these embodiments, illustrated dimensions, materials, other concrete numerals, etc. are nothing but exemplifications for facilitating understanding of the invention and are not directed to limit the scope of the invention if not otherwise specified. Note that, throughout this specification and the drawings, elements substantially identical to those in terms of their function and constitution are indicated with the same reference numerals in view of avoiding a redundant description and also elements unrelated to the essentials of the present invention directly are not illustrated in the drawings.
(Stereoscopic Image Display Device 100)
The image acquisition unit 148 acquires, via an antenna 122, stereoscopic image data which has been transmitted from an outside broadcast station 120 and which comprises a combination of “left-eye” image data and “right-eye” image data for allowing a stereoscopic image to be perceived due to binocular parallax. Alternatively, the image acquisition unit 148 may acquire the stereoscopic image data through a recording medium 124, such as a DVD or a Blu-ray disc, or a communication network (e.g. internet, LAN) 126. In addition, the image acquisition unit 148 of this embodiment also acquires synchronous signals outputted together with the stereoscopic image data (in synchronism with the stereoscopic image data). Note that the image acquisition unit 148 may acquire the synchronous signals outputted from an output source independent of that of the stereoscopic image data.
As for the recording method for stereoscopic image data, there are available: Side-by-Side method shown in
Note that the effective image data means image data obtained by excluding a nondisplay area (blank period) from the whole stereoscopic image data. In the Side-by-Side method, there is also available stereoscopic image data where the “right-eye” image data 202b, visible to a viewer's right eye, is arranged on the left side of the effective image data while the “left-eye” image data 202a, visible to a viewer's left eye, is arranged on the right side of the effective image data. Similarly, in the Top-and-Bottom method, there is also available stereoscopic image data where the “right-eye” image data 202b is arranged on the upper side of the effective image data while the “left-eye” image data 202a is arranged on the lower side of the effective image data.
In the stereoscopic image data by the Side-by-Side method, however, it is noted that although each of the “left-eye” image data 202a and the “right-eye” image data 202b has a vertical resolution equal to that of the image data displayed on the display 110, each horizontal resolution is compressed to one-half of that of the image data. Thus, if the number of pixels of the effective image data in full screen is represented by 1920×1080, then the numbers of pixels of the “left-eye” image data 202a and the “right-eye” image data 202b become 960×1080, respectively. Similarly, in the stereoscopic image data by the Top-and-Bottom method, it is noted that although each of the “left-eye” image data 202a and the “right-eye” image data 202b has a horizontal resolution equal to that of the image data displayed on the display 110, each vertical resolution is compressed to one-half of that of the image data. Thus, if the number of pixels of the effective image data in full screen is represented by 1920×1080, then the numbers of pixels of the “left-eye” image data 202a and the “right-eye” image data 202b become 1920×540, respectively.
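The per-eye resolutions described above follow directly from how the two views are packed into one effective frame. A minimal sketch (not part of the patent; the function name and method labels are illustrative):

```python
def per_eye_resolution(full_w, full_h, method):
    """Return the (width, height) of each of the left-eye and right-eye images
    packed into an effective frame of full_w x full_h pixels."""
    if method == "side_by_side":    # horizontal resolution is halved
        return full_w // 2, full_h
    if method == "top_and_bottom":  # vertical resolution is halved
        return full_w, full_h // 2
    raise ValueError(method)

print(per_eye_resolution(1920, 1080, "side_by_side"))    # (960, 1080)
print(per_eye_resolution(1920, 1080, "top_and_bottom"))  # (1920, 540)
```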
In order to display the stereoscopic image data on a display surface of the display 110, there is adopted, for example, the line sequential method, where horizontal-line image data of the “left-eye” image data 202a and horizontal-line image data of the “right-eye” image data 202b are arranged alternately every line. Further, the subject included in the stereoscopic image data can be stereoscopically displayed in either cross viewing (cross method) or parallel viewing (parallel method). Also, by relatively shifting the horizontal position of the subject included in the “left-eye” image data 202a and the horizontal position of the subject included in the “right-eye” image data 202b right and left with respect to each other, it is possible to form an image of the subject at any position behind the display surface, on the display surface, or in front of the display surface. Note that the line sequential method means a method by which a stereoscopic image can be recognized due to binocular parallax since an observer uses polarized glasses to watch a “left-eye” image on alternating lines through the left eye and a “right-eye” image on alternating lines through the right eye.
Hereinafter, the stereoscopic image display device 100 will be described while taking an example of adopting the Side-by-Side method as the recording procedure of the stereoscopic image data and the line sequential method as the displaying procedure. Not limited to this, however, there may be adopted the Top-and-Bottom method as the recording procedure and the frame sequential method as the displaying procedure. Note that the frame sequential method means a method by which a stereoscopic image can be recognized since an observer watches a “left-eye” frame and a “right-eye” frame through an active shutter (electronic shutter) alternately.
Alternatively, the line sequential method may be used as the recording procedure of the stereoscopic image data. In this case, the stereoscopic image data of two lines is collectively recorded into one line memory whose capacity is twice the amount of information included in a single horizontal line of the stereoscopic image data. By this process, the stereoscopic image data is converted into a format similar to that recorded by the Side-by-Side method, which makes it possible to derive a motion vector as mentioned later.
Also, if either the vertical-line alternate method, where the “left-eye” image data 202a and the “right-eye” image data 202b are arranged alternately with respect to each vertical line, or the checkerboard method, where the “left-eye” image data 202a and the “right-eye” image data 202b are arranged in a checkered pattern, is adopted as the recording procedure, then a later-mentioned motion vector can likewise be derived by acquiring the stereoscopic image data at the image acquisition unit 148 and subsequently rearranging the data into the Side-by-Side method or the Top-and-Bottom method. Still further, if the frame sequential method is adopted as the recording procedure, the cycle of the vertical synchronous signals used in the device may be reduced to half, thereby doubling the frame frequency. Such a doubling of the frame frequency allows the stereoscopic image data to be captured in one frame where the “left-eye” image data 202a and the “right-eye” image data 202b are arranged one above the other with a blanking area between them, thereby accomplishing the same conversion.
The scaling unit 150 adjusts the size of stereoscopic image data to make the stereoscopic image data acquired by the image acquisition unit 148 accord with the display size of the display 110.
The image processing unit 152 applies video signal processing, such as RGB processing (γ-correction, color correction), enhance processing and noise reduction processing, to the stereoscopic image data adjusted by the scaling unit 150. The image processing unit 152 outputs the processed stereoscopic image data to the left-right judgment unit 154 and the frequency-converter memory 162 respectively.
The left-right judgment unit 154 judges whether predetermined unit image data constituting the stereoscopic image data belongs to the “left-eye” image data or the “right-eye” image data, based on the synchronous signals inputted together with the stereoscopic image data. Note here that, as the synchronous signals acquired by the image acquisition unit 148 are transmitted through the scaling unit 150 and the image processing unit 152, these signals will be delayed by the time period that the scaling unit 150 and the image processing unit 152 require for processing. Then, with respect to each unit image data, the left-right judgment unit 154 affixes image information representing whether the unit image data belongs to the “left-eye” image data or the “right-eye” image data and further outputs the unit image data to the frame memory 156 and the vector deriving unit 158, respectively. The unit image data and the image information will be described in detail later.
The frame memory 156 comprises a recording medium, for example, RAM (Random Access Memory), nonvolatile RAM, flash memory, etc. and temporarily retains the stereoscopic image data outputted from the left-right judgment unit 154, e.g. one frame of data.
The vector deriving unit 158 compares reference-part image data as the base for comparison with a plurality of objective-part image data as the comparative objects, and derives correlation values representing the respective correlation strengths between the reference-part image data and the objective-part image data. The reference-part image data comprise respective image data positioned in segmented areas of an arbitrary frame (referred to as the “first frame” below) of the stereoscopic image data outputted from the left-right judgment unit 154. The objective-part image data comprise respective image data positioned in segmented areas of a frame temporally succeeding the first frame and retained in the frame memory 156 (referred to as the “second frame” below). Note that the vector deriving unit 158 performs a comparison between two successive frames in this embodiment. However, as the respective times of the first frame and the second frame have only to differ from each other, the same unit may perform a comparison between two discontinuous frames.
For each reference-part image data, subsequently, the vector deriving unit 158 extracts the objective-part image data having the highest correlation value to the reference-part image data. Then, based on the position of one image represented by the reference-part image data in the first frame and the position of another image represented by the extracted objective-part image data in the second frame, the same unit 158 derives a motion vector between the reference-part image data and the extracted objective-part image data.
Based on the motion vector derived by the vector deriving unit 158, the first frame outputted from the left-right judgment unit 154 and the second frame retained in the frame memory 156, the interpolation generating unit 160 generates an interpolation frame for interpolating between the first frame and the second frame and also outputs the interpolation frame to the frequency-converter memory 162. This interpolation frame will be described with reference to
Note that, as the frame frequency is doubled in conversion in this embodiment, the subjects in the interpolation frame 208 are displaced from the subjects in the first frame 204 by half the distances between the respective positions of the subjects in the first frame 204 and the respective positions of the subjects in the second frame 206. Accordingly, if it is required to multiply the frame frequency by n, there are produced (n−1) interpolation frames 208, which are displaced from the positions of the subjects in the first frame 204 by 1/n, 2/n, …, (n−1)/n of the distances from the positions of the subjects in the first frame 204 to the positions of the subjects in the second frame 206.
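The displacement rule above can be sketched as follows, assuming linear motion between the two frames (the function name and one-dimensional positions are illustrative):

```python
def interpolated_positions(pos_first, pos_second, n):
    """Subject positions in the (n - 1) interpolation frames produced when the
    frame frequency is multiplied by n, under the linear-motion assumption."""
    dx = pos_second - pos_first
    return [pos_first + dx * k / n for k in range(1, n)]

print(interpolated_positions(0.0, 10.0, 2))  # [5.0]  (doubling: the midpoint)
print(interpolated_positions(0.0, 10.0, 4))  # [2.5, 5.0, 7.5]
```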
By interposing the interpolation frames 208 between the first frame 204 and the second frame 206 without changing a time interval therebetween, the frame frequency is enhanced to allow the motion of subjects to be recognized smoothly.
However, as the above-mentioned stereoscopic image data comprises not frames of image data imaged by a single imaging device but frames of image data obtained by imaging an identical subject from different points of view, there is a possibility of specific erroneous displaying caused by an erroneous motion vector.
In case of
Meanwhile in case of
In deriving a motion vector, therefore, the vector deriving unit 158 is adapted so as not to compare the reference-part image data with the objective-part image data while straddling both the left-eye image data and the right-eye image data. Such an operation will be described in detail, below.
In addition, the vector deriving part 158 configures objective-part image data in segmented areas each having a predetermined size in the second frame 206, based on vectors (candidate vectors) nominated for the motion vector. The candidate vectors comprise a plurality of prepared vectors which are established based on a maximum range where a dynamic body occupying the reference-part image data can move in one frame and which represent directions and dimensions from the central coordinate of the reference-part image data of the first frame 204 up to the central coordinate of the objective-part image data of the second frame 206.
Here, the objective-part image data determined corresponding to the candidate vectors are defined in respective positions shifted from the reference-part image data by the candidate vectors, each having the same size as the reference-part image data, their number being equal to the number of candidate vectors.
The objective-part image data is part image data as the comparative object in the second frame 206. Based on the candidate vectors, the vector deriving unit 158 configures a plurality of overlapping partial image data as the objective-part image data while shifting, for example, the pixel data serving as the basis of the candidate vectors one by one. For ease of understanding, this embodiment will be illustrated with an example of objective-part image data 224a-224l that do not overlap each other.
As shown in
When deriving the correlation values, the vector deriving unit 158 stores an obtained correlation value to the objective-part image data as being related to the candidate vector, in a not-shown working area (memory), with respect to each reference-part image data. Then, the same unit compares a newly-derived correlation value to the other objective-part image data with the stored correlation value. If such a new correlation value is higher than the stored correlation value, then the latter correlation value and its candidate vector are renewed with the new correlation value and its candidate vector. With this constitution, the correlation value stored at the time of completing the deriving of correlation values to the reference-part image data as the base for comparison represents the highest correlation value to the reference-part image data as the base for comparison. Then, the vector deriving unit 158 defines a candidate vector related to the highest correlation value as the motion vector about the reference-part image data as the base for comparison in the first frame 204.
However, if the reference-part image data 222 is compared with the objective-part image data 224i similar to the reference-part image data 222, then the correlation value is enhanced since the respective image data (the reference-part image data 222, the objective-part image data 224i) contain the character “A” commonly, so that the interpolation frame 208 may be produced based on an erroneous motion vector representing that the reference-part image data 222 has moved to the objective-part image data 224i, as described with
Therefore, according to this embodiment, the vector deriving unit 158 derives a correlation value with use of image information (flag) representing whether the reference-part image data 222 and the objective-part image data 224 are parts of the left-eye image data 202a or parts of the right-eye image data 202b, in other words, whether these data are belonging to the “left-eye” image data 202a or the “right-eye” image data 202b. Note that this image information comprises information that the left-right judgment unit 154 has affixed to respective unit image data, based on the synchronous signal acquired with the stereoscopic image data by the image acquisition unit 148.
Specifically, the left-right judgment unit 154 includes a horizontal dot counter reset by a horizontal synchronous signal contained in the synchronous signals. The count value of the horizontal dot counter is represented by “hcnt”. In this case, the left-right judgment unit 154 affixes image information “L” representing the left-eye image data 202a to unit image data up to the value at the boundary position between the “left-eye” image data 202a and the “right-eye” image data 202b, and affixes image information “R” representing the right-eye image data 202b to the remaining unit image data. Here, since the value at the boundary position varies depending on the width of the blank outside the effective range of the stereoscopic image data and the width of the blank between the right edge of the left-eye image data 202a and the left edge of the right-eye image data 202b, the value at the boundary position is arbitrarily established corresponding to the stereoscopic image data. For instance, the image information comprises one bit of information where “0” and “1” designate “L” and “R”, respectively. Then, when the count reaches the total number of pixels in one line, the left-right judgment unit 154 resets the count value “hcnt” based on the next horizontal synchronous signal. Note that the unit image data designates image data compartmentalized in units of a predetermined number of pixels. In addition, by making the size of the unit image data equal to each size of the reference-part image data and the objective-part image data, it is possible to reduce the computing load on the device.
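The left-right judgment for Side-by-Side data can be sketched as follows. The boundary value 960 for a 1920-pixel effective line, and representing the one-bit image information as a per-position list, are illustrative assumptions, not details fixed by the patent:

```python
def flag_line(line_width, boundary):
    """One bit of image information per pixel position in a line:
    0 ("L") while the horizontal dot counter hcnt is before the boundary
    position, 1 ("R") from the boundary position onward."""
    return [0 if hcnt < boundary else 1 for hcnt in range(line_width)]

flags = flag_line(1920, 960)
print(flags[0], flags[959], flags[960])  # 0 0 1
```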
Alternatively, when the image acquisition unit 148 acquires stereoscopic image data recorded with use of the Top-and-Bottom method, the left-right judgment unit 154 may have a vertical line counter reset by a vertical synchronous signal contained in the synchronous signals. The count value of the vertical line counter is represented by “vcnt”. In this case, the left-right judgment unit 154 affixes image information “L” representing the left-eye image data 202a to unit image data up to the value at the boundary position between the “left-eye” image data 202a and the “right-eye” image data 202b, and affixes image information “R” representing the right-eye image data 202b to the remaining unit image data. Here, since the value at the boundary position varies depending on the width of the blank outside the effective range of the stereoscopic image data, the value at the boundary position is arbitrarily established corresponding to the stereoscopic image data. Also in this case, the image information comprises one bit of information where “0” and “1” designate “L” and “R”, respectively. Then, when the count reaches the total number of lines in one frame, the left-right judgment unit 154 resets the count value “vcnt” based on the next vertical synchronous signal.
As the left-right judgment unit 154 utilizes the synchronous signals included in the existing image system (e.g. television broadcasting) in the judgment between the “left-eye” image data 202a and the “right-eye” image data 202b, it is possible to derive the motion vector with high accuracy without requiring much modification to the image system.
Alternatively, when the image acquisition unit 148 acquires the stereoscopic image data from the recording medium 124 or the communication network 126, there is no need to activate the left-right judgment unit 154, provided each predetermined unit image data (e.g. a specified quantity of pixel data) constituting the stereoscopic image data is provided with the image information representing whether the unit image data belongs to the “left-eye” image data 202a or the “right-eye” image data 202b. In this case, the vector deriving unit 158 may judge whether the unit image data is a part of the “left-eye” image data 202a or a part of the “right-eye” image data 202b, in other words, whether the unit image data belongs to the “left-eye” image data 202a or the “right-eye” image data 202b, based on the image information. Thus, with use of the image information affixed in advance, the stereoscopic image display device 100 can precisely determine whether the unit image data is a part of the “left-eye” image data 202a or a part of the “right-eye” image data 202b.
Based on the image information, the vector deriving unit 158 judges whether each of the reference-part image data and the objective-part image data belongs to the “left-eye” image data 202a or the “right-eye” image data 202b. For instance, if the image information about all pixel data contained in the reference-part image data or the objective-part image data is “L”, it is judged that the relevant data belongs to the “left-eye” image data 202a. Conversely, if the image information is “R”, it is judged that the relevant data belongs to the “right-eye” image data 202b.
Then, the vector deriving unit 158 derives correlation values between the reference-part image data 222 belonging to the “right-eye” image data 202b and the objective-part image data 224a-224l belonging to the “right-eye” image data 202b. Alternatively, the same unit 158 derives correlation values between the reference-part image data 222 belonging to the “left-eye” image data 202a and the objective-part image data belonging to the “left-eye” image data 202a.
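The constraint described above can be sketched as follows: an objective block is compared only when its image information matches that of the reference block, so a candidate straddling the left/right boundary never yields a motion vector. The function name `comparable` and the per-pixel flag lists are illustrative assumptions:

```python
def comparable(ref_flags, obj_flags):
    """Both blocks must lie wholly within one of "L" or "R", and on the same
    side, for a correlation value to be derived between them."""
    ref_set, obj_set = set(ref_flags), set(obj_flags)
    return len(ref_set) == 1 and ref_set == obj_set

print(comparable(["L", "L"], ["L", "L"]))  # True: both wholly left-eye
print(comparable(["L", "L"], ["R", "R"]))  # False: opposite sides
print(comparable(["L", "L"], ["L", "R"]))  # False: straddles the boundary
```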
Therefore, as shown with arrows 230, 232 of
Then, the vector deriving unit 158 also configures the region of the objective-part image data (e.g. 224m-224r) in accordance with the region of the reference-part image data 222b.
Also in the case of
Hereat, as the vector deriving unit 158 does not derive an erroneous motion vector, there is no possibility that the interpolation generating unit 160 generates an interpolation frame based on an erroneous motion vector. In addition, as the stereoscopic image display device 100 allows the display 110 to display the stereoscopic image data subjected to interpolation between the frames, it is possible to make a viewer perceive beautiful, noise-free stereoscopic images.
The frequency-converter memory 162 stores one frame of stereoscopic image data (the first frame 204 or the second frame 206) outputted from the image processing unit 152 and the interpolation frame 208 generated by the interpolation generating unit 160. Then, the frequency-converter memory 162 reads out these stereoscopic image data at double the frame frequency in temporal sequence and outputs the stereoscopic image data to the temporary memory 164.
The temporary memory 164 is formed by a recording medium, for example, RAM, nonvolatile RAM, flash memory, etc. to temporarily retain the frequency-converted stereoscopic image data, which was outputted from the frequency-converter memory 162 and in which the interpolation frame 208 is inserted.
By controlling the read-write address of the stereoscopic image data retained in the temporary memory 164 and having the interpolation frame inserted therein (This data will be simply referred to as “interpolated stereoscopic image data” below), the line sequential unit 166 alternately juxtaposes horizontal line image data of the “right-eye” image data 202b and horizontal line image data of the “left-eye” image data 202a both in the interpolated stereoscopic image data to generate line sequential image data and outputs it to the image output unit 168.
However, as the horizontal resolution of each of the “left-eye” image data 202a and the “right-eye” image data 202b becomes one-half of that of the original stereoscopic image data in the Side-by-Side method, the horizontal resolution has to be doubled. In order to generate the new pixel data accompanying the enlargement of horizontal resolution, it is possible to use linear interpolation or other filtering. Note that the magnification ratio of horizontal resolution is appropriately adjusted corresponding to the recording method of the stereoscopic image data or the ratio of the numbers of horizontal pixels of the input and output formats of the stereoscopic image data. Then, with the enlargement of horizontal resolution, the line sequential unit 166 reduces the vertical resolution to half to generate the line sequential image data and outputs it to the image output unit 168.
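The horizontal doubling by linear interpolation mentioned above can be sketched as follows. Edge handling (repeating the last sample) and the function name are illustrative assumptions; a real implementation might use a longer filter kernel instead:

```python
def double_width(line):
    """Double the horizontal resolution of a half-width line by inserting the
    average of each pair of neighbouring pixels (linear interpolation)."""
    out = []
    for i, v in enumerate(line):
        out.append(v)
        nxt = line[i + 1] if i + 1 < len(line) else v  # repeat the edge sample
        out.append((v + nxt) / 2)  # interpolated pixel between neighbours
    return out

print(double_width([0, 10, 20]))  # [0, 5.0, 10, 15.0, 20, 20.0]
```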
The image output unit 168 outputs the line sequential image data generated by the line sequential unit 166, to the exterior display 110.
As mentioned above, the stereoscopic image display device 100 of this embodiment derives a motion vector only within the “left-eye” image data 202a or only within the “right-eye” image data 202b. Consequently, it is possible to derive a motion vector with high accuracy while avoiding deriving an erroneous motion vector straddling the “left-eye” image data 202a and the “right-eye” image data 202b, allowing the interpolation accuracy between frames and the accuracy of removing film judder to be improved in the speed-up processing of frame frequency, data compression and so on.
(Method of Deriving Motion Vector)
Furthermore, there is provided a method of deriving a motion vector with use of the above-mentioned stereoscopic image display device 100.
When the image acquisition unit 148 acquires the stereoscopic image data and the synchronous signals (YES at step S300), the left-right judgment unit 156 judges whether each unit image data of the stereoscopic image data belongs to the “left-eye” image data 202a or to the “right-eye” image data 202b, based on the synchronous signals acquired at the acquisition step S300, and affixes the image information to each unit image data (S302).
The vector deriving unit 158 divides the effective image data of the first frame 204 to establish a plurality of reference-part image data (S304), and also establishes, in the second frame 206, the objective-part image data determined by the central coordinate of the reference-part image data and the candidate vector (S306).
The vector deriving unit 158 judges whether the reference-part image data and the objective-part image data belong to the same one of the “left-eye” image data and the “right-eye” image data (S308). If the two pieces of image information do not accord with each other (NO at step S308), the related objective-part image data is eliminated from the comparative objects (S310). Thus, if the reference-part image data constitutes a portion of the “right-eye” image data, objective-part image data constituting a portion of the “left-eye” image data is eliminated from the comparative objects. Conversely, if the reference-part image data constitutes a portion of the “left-eye” image data, objective-part image data constituting a portion of the “right-eye” image data is eliminated from the comparative objects. If the two pieces of image information accord with each other (YES at step S308), the vector deriving unit 158 does not execute the eliminating step (S310).
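The eliminating step can be sketched as follows for a side-by-side frame. This is a hypothetical helper, not the claimed implementation: it identifies each block's eye by which half of the frame fully contains it, and drops candidate vectors whose objective block lies in the other eye's half or straddles the boundary between the two halves.

```python
def filter_candidate_vectors(ref_x, block_w, frame_w, candidates):
    """Keep only candidate vectors whose objective block stays in the
    same half (same eye) of a side-by-side frame as the reference block.

    Hypothetical helper: `ref_x` is the left edge of the reference
    block, `candidates` is a list of (dx, dy) displacement vectors.
    """
    half = frame_w // 2
    ref_is_left = (ref_x + block_w) <= half  # block entirely in left-eye half

    kept = []
    for dx, dy in candidates:
        x = ref_x + dx
        if x < 0 or x + block_w > frame_w:
            continue  # objective block falls outside the frame
        target_is_left = (x + block_w) <= half
        target_is_right = x >= half
        # Eliminate candidates in the other eye's half or straddling it.
        if ref_is_left and target_is_left:
            kept.append((dx, dy))
        elif (not ref_is_left) and target_is_right:
            kept.append((dx, dy))
    return kept
```

A left-eye reference block thus never gets compared with right-eye data, which is exactly the condition that prevents the erroneous cross-eye motion vectors described earlier.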
Next, the vector deriving unit 158 judges whether the objective-part image setting step (S306) has been completed for all the candidate vectors (S312). If not completed (NO at step S312), the routine returns to the objective-part image setting step (S306) to establish the next objective-part image data.
When the objective-part image setting step (S306) has been completed for all the candidate vectors (YES at step S312), the vector deriving unit 158 judges whether the reference-part image setting step (S304), where the effective image data is divided to establish the reference-part image data, has been completed (S314). If not completed (NO at step S314), the routine returns to the reference-part image setting step (S304).
When the reference-part image setting step (S304) has been completed for all the part image data obtained by dividing the effective image data (YES at step S314), the vector deriving unit 158 compares each of the plurality of reference-part image data of the first frame 204 with the plurality of objective-part image data of the second frame 206 to derive, for each reference-part image data, a correlation value representing the correlation strength with every objective-part image data. Note that the vector deriving unit 158 does not derive correlation values for the objective-part image data eliminated at the eliminating step (S310).
Then, the vector deriving unit 158 extracts, for each reference-part image data, the objective-part image data having the highest correlation value with the reference-part image data, and establishes, as the motion vector, the candidate vector related to the extracted objective-part image data (S318). The interpolation generating unit 160 generates the interpolation frame 208 interpolating between the first frame 204 and the second frame 206, based on the motion vector derived by the vector deriving unit 158, the first frame 204 and the second frame 206 (S320).
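The derivation loop of steps S304 through S318 can be sketched as follows. This is a minimal illustration, not the claimed implementation: it assumes the reference block lies in the left-eye half of a side-by-side frame, measures correlation strength by the sum of absolute differences (SAD, where a smaller sum means a stronger correlation), and skips objective blocks that would leave the left-eye half, which corresponds to the eliminating step (S310).

```python
import numpy as np

def derive_motion_vector(first, second, ref_y, ref_x, block, search, half):
    """Block matching restricted to the left-eye half of a side-by-side
    frame, assuming the reference block satisfies ref_x + block <= half.

    Hypothetical sketch: the candidate vector with the smallest SAD is
    taken as the motion vector.
    """
    ref = first[ref_y:ref_y + block, ref_x:ref_x + block]
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ref_y + dy, ref_x + dx
            if y < 0 or y + block > second.shape[0]:
                continue
            # Eliminating step: skip objective blocks that are not
            # entirely inside the left-eye half.
            if x < 0 or x + block > half:
                continue
            cand = second[y:y + block, x:x + block]
            sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dx, dy)
    return best_vec
```

Repeating this for every reference block of the first frame yields one motion vector per block, from which an interpolation frame can then be generated.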
The frequency-converter memory 162 stores one frame of the stereoscopic image data generated by the image processing unit 152 and the interpolation frame 208 generated by the interpolation generating unit 160. Then, the same memory outputs these stereoscopic image data to the temporary memory 164 in temporal sequence at double the frame frequency (S322). The line sequential unit 166 alternately juxtaposes horizontal-line image data of the “right-eye” image data 202b and horizontal-line image data of the “left-eye” image data 202a to generate the line sequential image data (S324). The image output unit 168 outputs the line sequential image data to the exterior display 110 (S326).
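The temporal ordering of the doubled-frequency output can be sketched as follows. The function name is hypothetical; the sketch assumes each interpolation frame sits between two consecutive original frames and is therefore emitted immediately after the earlier one, doubling the number of frames per unit time.

```python
def double_frame_frequency(frames, interpolated):
    """Interleave original frames with interpolation frames so that the
    output sequence runs at twice the input frame frequency.

    Hypothetical sketch: interpolated[i] interpolates between frames[i]
    and frames[i + 1].
    """
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if i < len(interpolated):
            out.append(interpolated[i])  # emit right after its source frame
    return out
```

For three original frames and two interpolation frames, the output sequence is original, interpolated, original, interpolated, original.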
As mentioned above, according to the motion-vector deriving method using the stereoscopic image display device 100, it is possible to avoid deriving an erroneous motion vector and to derive a motion vector with high accuracy, allowing a viewer to perceive stereoscopic images free of noise.
Although the preferred embodiment of the present invention has been described hereinabove with reference to the accompanying drawings, it goes without saying that the present invention is not limited to this embodiment. As will be understood by those skilled in the art, various changes and modifications can be made within the scope of the claims, and they also belong to the technical scope of the invention as a matter of course.
Note that the respective processes in the motion vector deriving method do not necessarily have to be executed in the time-series order shown in the flow chart of this specification. These processes may be executed in parallel or may include subroutine processes.
The present invention is applicable to a stereoscopic image display device that derives a motion vector from stereoscopic image data for allowing a viewer to perceive a stereoscopic image due to binocular parallax, and to a method of deriving such a motion vector.
Number | Date | Country | Kind |
---|---|---|---|
2009-263206 | Nov 2009 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2010/062762 | 7/29/2010 | WO | 00 | 6/16/2011 |