The present invention relates to an image processing device, an image pickup device, and an image display device, which are each used to generate a stereoscopic image.
There is known a multi-view image pickup device including a plurality of image pickup means. The multi-view image pickup device realizes sophisticated image pickup, such as taking a stereoscopic image and a panoramic image, by processing images taken by the plurality of image pickup means. In the case of viewing the stereoscopic image with a stereoscopic image display device, a stereoscopic vision can be provided by displaying a left-eye adapted image for a left eye and a right-eye adapted image for a right eye, respectively. Such a stereoscopic vision can be provided with stereoscopic display utilizing a disparity among a plurality of images that are obtained by taking images of one object from different positions.
When images taken by the above-described multi-view image pickup device from two visual points are displayed to be viewed on the stereoscopic image display device, a viewing person sometimes perceives that the spatial distance between the background and the object is narrower, or that the object is thinner, than in the actual scene. Such a phenomenon is attributable to a spatial distortion specific to the stereoscopic image, which distortion is generated in taking and displaying the image, and that phenomenon is one factor causing the viewing person to feel awkward when viewing the stereoscopic image. There is a method to quantify the distortion specific to the stereoscopic image by employing parameters in taking and displaying the image. According to Patent Literature (PTL) 1, the above-described method can not only simplify the configuration of a system for generating the stereoscopic image, but also confirm unnaturalness attributable to a geometrical spatial distortion without needing complex operations when right and left images are taken.
The geometrical spatial distortion can be confirmed with the method according to PTL 1, but PTL 1 does not disclose a method of correcting the spatial distortion in practice. Furthermore, because the spatial distance between the background and the object is enlarged by correcting the spatial distortion, another problem arises in that a thinner appearance of the object is further emphasized.
The present invention has been accomplished in view of the above-mentioned problems, and an object of the present invention is to provide an image processing device, an image pickup device, and an image display device, which can correct a spatial distortion generated in taking and displaying a stereoscopic image, and which can present a high-quality image with a stereoscopic effect.
To solve the above-mentioned problems, the present invention includes technical means as follows.
According to first technical means of the present invention, there is provided an image processing device including an information acquisition unit that obtains disparity information calculated from a stereoscopic image, image-pickup condition information when the stereoscopic image is taken, and display condition information of a display unit that displays the stereoscopic image, and an image processing unit that converts a disparity of the stereoscopic image, wherein the image processing unit converts the disparity in a direction of compressing the disparity or in a direction of enlarging the disparity in accordance with the image-pickup condition information, the display condition information, and the disparity information, which are obtained by the information acquisition unit, such that the direction of converting the disparity is reversed between when a binocular spacing contained in the display condition information is larger than a camera spacing contained in the image-pickup condition information and when the binocular spacing is smaller than the camera spacing.
According to second technical means, in the first technical means, the image processing unit reverses the direction of converting the disparity between when a disparity of an output stereoscopic image output from the image processing device is positive and when the disparity of the output stereoscopic image is negative.
According to third technical means, in the first or second technical means, when a difference between adjacent disparities contained in the disparity information is within a predetermined range and the difference between the adjacent disparities is increased in the disparity information after the conversion, the image processing unit interpolates the disparity such that the difference between the adjacent disparities reduces.
According to fourth technical means, in any one of the first to third technical means, the image processing unit holds a disparity range of the stereoscopic image after the disparity conversion within a predetermined range.
According to fifth technical means, in the fourth technical means, the predetermined range is given as a range between a disparity, which is calculated in accordance with the image-pickup condition information, the display condition information, and the disparity information, and an input disparity.
According to sixth technical means, in any one of the first to fourth technical means, the image processing unit executes the disparity conversion on a disparity smaller than a disparity of a main object that is designated or detected by a predetermined method.
According to seventh technical means, in the sixth technical means, the disparity output through the disparity conversion is held within a range expressed by the disparity, which is calculated in accordance with the image-pickup condition information, the display condition information, and the disparity information, and by the input disparity.
According to eighth technical means, in any one of the first to fourth technical means, the image processing unit converts the disparity of the disparity image such that a disparity of a main object, which is designated or detected by a predetermined method, comes close to 0.
According to ninth technical means, in the eighth technical means, the image processing unit makes the disparity come close to 0 for an object present at a distance of a convergence point that is calculated from the image-pickup condition information of the image pickup unit.
According to tenth technical means, there is provided an image display device including the image processing device according to any one of the first to ninth technical means.
According to eleventh technical means, there is provided an image pickup device including the image processing device according to any one of the first to ninth technical means.
According to the present invention, the spatial distortion generated in taking and displaying the stereoscopic image can be corrected, and the high-quality image with the stereoscopic effect can be provided.
The present invention will be described in detail below with reference to the drawings. It is to be noted that configurations in the drawings are illustrated in an exaggerated manner for easier understanding, and that distances and sizes illustrated in the drawings are different from actual ones.
The storage unit 10 is constituted as a hard disk drive or a recording medium, e.g., a memory card, which stores a stereoscopic image and image-pickup condition information when the stereoscopic image is taken. The information acquisition unit 20 obtains, from the storage unit 10, the stereoscopic image and the image-pickup condition information associated with the stereoscopic image.
The image processing unit 30 executes image processing of the stereoscopic image obtained by the information acquisition unit 20. The display unit 40 obtains the stereoscopic image from the image processing unit 30 and displays the stereoscopic image by a stereoscopic image display method described later.
The information acquisition unit 20 and the image processing unit 30 will be described in more detail below.
The information acquisition unit 20 in this embodiment includes a stereoscopic image acquisition portion 21, a disparity information acquisition portion 22, an image-pickup condition acquisition portion 23, and a display condition holding portion 24.
The stereoscopic image acquisition portion 21 obtains the stereoscopic image from the storage unit 10 and sends the stereoscopic image to the image processing unit 30. The disparity information acquisition portion 22 obtains the stereoscopic image from the storage unit 10, detects a disparity per predetermined unit, such as per pixel, and generates disparity information representing the detected disparity in units of pixels. Here, of a left-eye adapted image and a right-eye adapted image constituting the stereoscopic image, the left-eye adapted image is used as a basis for disparity calculation. In other words, the disparity information corresponding to the left-eye adapted image is calculated. The disparity information acquisition portion 22 sends the calculated disparity information to the image processing unit 30.
The image-pickup condition acquisition portion 23 obtains the image-pickup condition information corresponding to the stereoscopic image from the storage unit 10 and sends the image-pickup condition information to the image processing unit 30.
The camera focal distance df implies a distance between an image pickup element and a lens in the camera, and it is a fixed value in the case of a single focus lens. In the case of a zoom lens, because the focal distance changes depending on a zooming scale, the focal distance df is obtained from a sensor (not illustrated). The pixel pitch Pc is an index representing precision of a light receiving element of the camera, and it implies a distance between adjacent pixels. The camera pixel pitch can be calculated from the number of pixels and the size of the image pickup element, and it is a value specific to each image pickup element.
The display condition holding portion 24 holds the display condition information of the display unit 40, including the binocular spacing de and the display pixel pitch Pd described below.
The binocular spacing de implies a distance between a left eye e1 and a right eye e2 of the viewing person. The binocular spacing de may be set to 50 mm that is a distance between a left eye and a right eye of an ordinary child, or to 65 mm that is an eye-to-eye distance of an ordinary adult. Alternatively, the binocular spacing de may be set by recognizing eyes of the viewing person with a camera (not illustrated) mounted on the display, and by measuring the distance between the left eye and the right eye of the viewing person.
The display pixel pitch Pd implies a distance between adjacent pixels of the display. The display pixel pitch Pd can be calculated from the resolution and the display size, and it is a value specific to each display. It is to be noted that the image-pickup condition information and the display condition information are indicated in a unit representing length, e.g., mm.
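As an illustration, both pitches follow directly from the element and panel specifications. The sketch below is a minimal example only; the sensor width, display width, and pixel counts used are hypothetical values, not ones taken from this description.

```python
# Minimal sketch: deriving the camera pixel pitch Pc and the display
# pixel pitch Pd from hypothetical hardware specifications (all in mm).

def pixel_pitch(width_mm, num_pixels):
    """Distance between adjacent pixels: physical width / pixel count."""
    return width_mm / num_pixels

# Hypothetical sensor: 6.17 mm wide, 4000 pixels across.
Pc = pixel_pitch(6.17, 4000)
# Hypothetical display panel: 1000 mm wide, 1920 horizontal pixels.
Pd = pixel_pitch(1000.0, 1920)

print(f"Pc = {Pc:.5f} mm, Pd = {Pd:.4f} mm")
```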
The image processing unit 30 obtains information from the above-described information acquisition unit 20 and executes processing of the stereoscopic image. The image processing unit 30 is featured in correcting a distortion of a perceived position in the input stereoscopic image in accordance with the image-pickup condition information and the display condition information.
The distortion of the perceived position will be described in detail below.
The distortion of the perceptual distance is now described in more detail. The perceptual distance Ld is calculated using the image-pickup condition information and the display condition information.
By employing the information representing the image-pickup conditions and the display conditions described above, the perceptual distance Ld, i.e., the distance from the eyes of the viewing person to the stereoscopic image, is expressed by the following formula (1):
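Formula (1) itself is not reproduced in this text. As a hedged reconstruction from the surrounding definitions, assuming parallel camera optical axes, a visual distance Ls from the eyes to the display plane (part of the display condition information), and the later convention that a disparity nearer than the display plane is positive, the perceptual distance may take a form such as:

$$L_d = \frac{d_e\,L_s}{\,d_e + \dfrac{P_d}{P_c}\cdot\dfrac{d_c\,d_f}{L_b}\,}$$

Here the second term of the denominator is the parallax reproduced on the display by an object at the image pickup distance Lb; this is offered only as a plausible sketch of formula (1), not as its confirmed form.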
Accordingly, the camera spacing dc and the binocular spacing de affect the relationship between the image pickup distance Lb and the perceptual distance Ld.
When the camera spacing dc is wider than the binocular spacing de, a spatial distortion called the puppet theater effect occurs.
The puppet theater effect implies a phenomenon that the perceptual distance Ld perceived to be located on the side farther than the display plane D is enlarged relative to the image pickup distance Lb. On that occasion, the perceptual distance Ld on the side nearer than the display plane D is compressed. In other words, the spatial distortion occurs such that, in stereoscopic view, an object is perceived to be smaller than the actual size of the object.
When the camera spacing dc is narrower than the binocular spacing de, a spatial distortion called the cardboard effect occurs. The cardboard effect implies a phenomenon that the perceptual distance Ld perceived to be located on the side farther than the display plane D is compressed relative to the image pickup distance Lb. On that occasion, the perceptual distance Ld on the side nearer than the display plane D is enlarged. Stated in another way, in stereoscopic view, an object is perceived to be thinner than the actual thickness of the object, or a spatial distance between the background and the object is perceived to be relatively narrow.
As seen from the above discussion, the image display device 1 can be configured so as to display a stereoscopic image free from the spatial distortion even when the camera spacing dc and the binocular spacing de are not the same, by correcting the disparity such that the image pickup distance Lb and the perceptual distance Ld come closer to a linear relation.
The image processing unit 30 includes a perceived position adjustment portion 32, an object structure correction portion 33, and an image generation portion 31. The perceived position adjustment portion 32 generates a disparity conversion table that is applied to disparity information, and sends the disparity conversion table to the object structure correction portion 33 along with the disparity information.
Given that Zi denotes the input disparity and Zo denotes the output disparity, the disparity conversion table storing the input disparity Zi and the output disparity Zo corresponding to the input disparity Zi is created by employing the formula (2). By employing the disparity conversion table thus created, disparity information holding the image pickup distance Lb and the perceptual distance Ld in linear relation can be generated.
For example, a length between two arbitrary points (A, B) representing the input disparity is different from a length between two points (A′, B′) representing the output disparity that corresponds to the relevant input disparity. In other words, a change amount of the disparity is changed depending on the image pickup distance Lb so as to compress a distance in the depth direction at a position where the distance in the depth direction has been enlarged, and to enlarge the distance in the depth direction at a position where the distance in the depth direction has been compressed, thereby executing the disparity conversion such that the relationship between the image pickup distance Lb and the perceptual distance Ld comes closer to linear. The perceived position adjustment portion 32 supplies the created disparity conversion table and the disparity information to the object structure correction portion 33.
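Formula (2) is likewise not reproduced here, but the table-building step can be illustrated with a short sketch. This is a minimal example under stated assumptions, not the implementation of this description: it reuses the hedged reconstruction of formula (1) above, treats disparities in pixels with positive values in front of the display plane, and all parameter names and values (including the proportionality factor `scale` and the camera-geometry function) are hypothetical.

```python
def build_disparity_table(zi_values, lb_from_zi, de, Ls, Pd, scale):
    """Build a LUT mapping each input disparity Zi to an output disparity Zo
    chosen so that the perceptual distance Ld becomes proportional (linear)
    to the image pickup distance Lb.

    lb_from_zi: camera-geometry function giving Lb (mm) for a disparity Zi;
    de, Ls, Pd: binocular spacing, visual distance, display pixel pitch (mm);
    scale: hypothetical factor mapping scene depth onto perceived depth.
    """
    table = {}
    for zi in zi_values:
        lb = lb_from_zi(zi)      # image pickup distance for this disparity
        ld = scale * lb          # target perceived distance, linear in Lb
        # Invert Ld = de*Ls / (de + Pd*Zo) to obtain the output disparity.
        table[zi] = de * (Ls - ld) / (Pd * ld)
    return table

# Hypothetical parameters (mm); parallel axes give Lb = dc*df / (Pc*Zi).
dc, df, Pc = 65.0, 35.0, 0.0015
de, Ls, Pd = 65.0, 1700.0, 0.5
table = build_disparity_table(range(1, 201),
                              lambda z: dc * df / (Pc * z),
                              de, Ls, Pd, scale=0.2)
```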
Furthermore, the image processing unit reverses the direction of conversion of the disparity, i.e., selectively sets one of the disparity decreasing direction and the disparity increasing direction, between when the disparity of the stereoscopic image output from the image processing unit is positive and when it is negative. In the case of viewing the taken stereoscopic image as it is, the compression and the enlargement of the image pickup distance Lb and the perceptual distance Ld occur reversely on both sides of the display plane. When the stereoscopic image is perceived on the side nearer than the display plane D, the disparity is positive, and when the stereoscopic image is perceived on the side farther than the display plane D, the disparity is negative. Accordingly, the direction of conversion of the disparity is reversed between when the disparity is positive and when the disparity is negative.
In accordance with the disparity conversion table and the disparity information both supplied from the perceived position adjustment portion 32, the object structure correction portion 33 generates disparity information in which the number of disparity gradations is increased near a disparity edge after the application of the disparity conversion table. This implies that, because a disparity difference between objects is increased by the disparity conversion, the increased disparity difference is to be interpolated.
Details of processing executed in the object structure correction portion 33 will be described below.
First, a disparity change point 1104 is detected from disparity information 1100. The disparity change point 1104 implies a point where, in the disparity information 1100, the disparity of a target pixel (disparity change point 1104) is changed by a threshold 1105 or more in comparison with that of a preceding pixel 1103. A region between a pixel 1106 in disparity information 1101 obtained after the disparity conversion corresponding to the target pixel 1104 and a pixel 1107 in the disparity information 1101 obtained after the disparity conversion corresponding to the preceding pixel 1103 is interpolated over a width 1108. The interpolation in this example is performed with linear approximation.
Alternatively, the interpolation may be performed with nonlinear approximation.
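As a rough sketch of this correction step, assuming a one-dimensional row of per-pixel disparities and hypothetical threshold and width values; the linear variant is shown, and a nonlinear profile could replace the ramp.

```python
import numpy as np

def interpolate_disparity_edges(converted, original, threshold, width):
    """Detect disparity change points (steps of `threshold` or more between
    a pixel and its preceding pixel in the original disparity information)
    and replace the corresponding hard step in the converted disparity
    with a linear ramp spanning `width` pixels."""
    out = np.asarray(converted, dtype=float).copy()
    orig = np.asarray(original, dtype=float)
    n = len(out)
    for x in range(1, n):
        if abs(orig[x] - orig[x - 1]) >= threshold:
            lo = max(0, x - width // 2)
            hi = min(n - 1, x + width // 2)
            out[lo:hi + 1] = np.linspace(out[lo], out[hi], hi - lo + 1)
    return out

# Example: a disparity step enlarged by the conversion, smoothed over 6 pixels.
row = np.array([2, 2, 2, 10, 10, 10, 10, 10], dtype=float)
print(interpolate_disparity_edges(row * 2, row, threshold=5, width=6))
```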
In accordance with the stereoscopic image supplied from the stereoscopic image acquisition portion 21 and the disparity information supplied from the object structure correction portion 33, the image generation portion 31 executes processing on, of the left-eye adapted image and the right-eye adapted image constituting the stereoscopic image, the left-eye adapted image. More specifically, pixels of the left-eye adapted image are moved in accordance with the disparity information supplied from the object structure correction portion 33. After moving the pixels, pixels which have not been made correspondent to pixels of the output image are interpolated from nearby pixels.
Here, to take the intrinsic disparity between the left-eye adapted image and the right-eye adapted image into consideration, the value of the disparity information supplied from the disparity information acquisition portion 22 is subtracted from the value of the disparity information supplied from the object structure correction portion 33. A stereoscopic image is generated using the left-eye adapted image after the conversion and the input right-eye adapted image. As a result, the stereoscopic image can be generated in which the distortion specific to the stereoscopic view is corrected. The generated stereoscopic image made up of the left-eye adapted image and the right-eye adapted image is supplied to the display unit 40. A similar operating effect can also be obtained by giving, in the disparity conversion table, the respective amounts through which pixels are to be moved.
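The pixel-movement step can be sketched as follows. This is a simplification under assumptions not stated in the description: integer pixel shifts, no occlusion ordering, and holes filled from the nearest already-filled pixel to the left.

```python
import numpy as np

def regenerate_left_image(left, disp_corrected, disp_original):
    """Shift each pixel of the left-eye adapted image horizontally by the
    corrected disparity minus the original disparity (the subtraction
    described above), then interpolate output pixels that received no
    correspondence from nearby pixels."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    shift = np.rint(disp_corrected - disp_original).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = left[y, x]
                filled[y, nx] = True
        for x in range(1, w):        # fill holes from the left neighbor
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```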
With the image display device according to the present invention, as described above, since, in accordance with the image-pickup condition information and the display condition information read out from the storage unit, the disparity of the stereoscopic image corresponding to the relevant image-pickup condition information is corrected, a more natural stereoscopic image can be displayed even when the camera spacing dc and the binocular spacing de are not the same. With the image display device, particularly, since discontinuity in disparity between objects can be interpolated, the distortion of the stereoscopic space, such as the cardboard effect, can be corrected.
While, in this embodiment, the disparity conversion is performed such that the image pickup distance Lb and the perceptual distance Ld are held in linear relation, the disparity conversion may be performed by setting a main object, and by executing the conversion such that the disparity of the main object becomes a value close to 0. Such a method is advantageous in that an object having been selected as the main object is displayed at a position near the display plane, the position being suitable for stereoscopic view. The main object is designated or detected by a predetermined method. For example, a user may designate the main object with an input device (not illustrated). As an alternative, a face of an object may be recognized through image processing, and the recognized face may be detected as the main object.
While, in this embodiment, the disparity information is generated by the disparity information acquisition portion 22, the disparity information may be read out from a recording medium. In that case, it is required that not only a stereoscopic image, but also the disparity information and the image-pickup condition information both corresponding to the stereoscopic image are recorded on the recording medium. Such a modification eliminates the necessity of complicated calculation executed in the disparity information acquisition portion and contributes to cutting a processing time.
While, in this embodiment, the stereoscopic image and the image-pickup condition information are obtained from the storage unit 10, they may be obtained from an image pickup unit. The image pickup unit is constituted by at least two cameras that take a left-eye adapted image and a right-eye adapted image, respectively. Each of the cameras includes an image pickup lens and an image pickup element, such as a CCD. An image pickup control unit controls, for example, a focus position and a zoom factor of the image pickup lens, and driving of a shutter, etc. Furthermore, the at least two cameras are disposed at a predetermined spacing, and respective optical axes of the cameras are arranged parallel to each other. In such a case, image-pickup condition information related to one of the at least two cameras is obtained as the image-pickup condition information.
While, in this embodiment, the stereoscopic image is output from the image generation portion 31 to the display unit 40, the stereoscopic image may be output to a recording device. The recording device records a stereoscopic image constituted by a left image and a right image, which are supplied from the image generation portion 31.
Moreover, the display unit 40 in this embodiment may be a spectacle type stereoscopic display device displaying an image that is viewed by a person putting on spectacles, or a naked-eye type stereoscopic display device allowing a person to view a stereoscopic image with the naked eyes. In the case of the spectacle type stereoscopic display device, the stereoscopic image may be displayed by a time division method that displays the stereoscopic image by alternately switching over the left-eye adapted image and the right-eye adapted image, or a polarization method that displays the stereoscopic image by superimposing both the images with polarization directions being different from each other. In the case of the naked-eye type stereoscopic display device, the stereoscopic image may be displayed by a parallax barrier method of alternately arranging the left-eye adapted image and the right-eye adapted image on the rear side of the so-called parallax barrier having slit-like openings, or a lenticular method of arranging substantially semi-cylindrical lenses to spatially separate the left-eye adapted image and the right-eye adapted image.
A second embodiment employs a perceived position correction method different from that used in the first embodiment. It is to be noted that components having similar functions to those in the first embodiment described above are denoted by the same reference signs, and duplicate description of those components is omitted unless especially needed.
Although the second embodiment is practiced with the same configuration as that in the first embodiment, the processing executed in the perceived position adjustment portion 32 differs as described below.
The perceived position adjustment portion 32 creates the disparity conversion table that is applied to the disparity information, and sends the created disparity conversion table to the object structure correction portion 33 along with the disparity information. By employing that disparity conversion table, it is possible not only to make the relationship between the image pickup distance Lb and the perceptual distance Ld come closer to linear, but also to provide disparity information that is set in the range allowing the viewing person to see the stereoscopic image.
By employing the image pickup condition information, a maximum value and a minimum value of the image pickup distance Lb can be calculated from a maximum value MAX_DEP and a minimum value MIN_DEP of the disparity in the disparity information. To obviate the influence of noise caused by a failure in calculation of the disparity, the maximum value and the minimum value of the disparity in the disparity information are preferably calculated from disparities occupying a region having a certain area in the disparity information, for example, a region corresponding to 1% of all pixels. From the maximum value MAX_DEP and the minimum value MIN_DEP of the disparity in the disparity information, a maximum value MAX_DIS and a minimum value MIN_DIS of the image pickup distance Lb are calculated using the following formulae (3) and (4), respectively.
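Formulae (3) and (4) are not reproduced in this text. Under the standard parallel-axis triangulation, in which the image pickup distance is inversely proportional to the disparity, a plausible reconstruction is:

$$\mathrm{MAX\_DIS} = \frac{d_c\,d_f}{P_c \cdot \mathrm{MIN\_DEP}}, \qquad \mathrm{MIN\_DIS} = \frac{d_c\,d_f}{P_c \cdot \mathrm{MAX\_DEP}}$$

so that the smallest disparity yields the farthest distance and the largest disparity the nearest.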
Next, a maximum value MAX_C and a minimum value MIN_C of the perceptual distance Ld are calculated using the display condition information and the disparity on the display. A minimum value MIN_E and a maximum value MAX_E of the disparity on the display may be given, for example, as values that are indicated in the 3DC Safety Guidelines published by the 3D Consortium, or as values that are input by the user through an input device (not illustrated). On that occasion, the minimum value MIN_E and the maximum value MAX_E are each represented in units of pixels. When the disparity on the display is a positive value, the object is perceived in a popped-out state, and when it is a negative value, in a receded state in the depth direction.
When a photographed scene is a long-distance view, the photographed object is perceived in a state receded to the farther side in the depth direction by setting the maximum value MAX_E of the disparity on the display to a value near 0.
From the minimum value MIN_E and the maximum value MAX_E of the disparity on the display, the maximum value MAX_C and the minimum value MIN_C of the perceptual distance Ld when the scene is displayed are calculated using the following formulae (5) and (6), respectively.
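Formulae (5) and (6) are likewise not reproduced. Using the same hedged viewing geometry as sketched for formula (1), with a display disparity E in pixels and positive disparity nearer than the display plane, one plausible form is:

$$\mathrm{MAX\_C} = \frac{d_e\,L_s}{d_e + P_d \cdot \mathrm{MIN\_E}}, \qquad \mathrm{MIN\_C} = \frac{d_e\,L_s}{d_e + P_d \cdot \mathrm{MAX\_E}}$$

where Ls is the visual distance from the eyes to the display plane; which disparity extreme pairs with which perceptual extreme depends on the sign convention adopted.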
The disparity is then converted such that the calculated image pickup distance Lb and perceptual distance Ld are matched with each other. Given that the disparity in the input disparity information is Zi and the disparity in the output disparity information is Zo, the disparity conversion is expressed by the following formulae (7), (8) and (9).
The disparity conversion table storing the input disparity Zi and the output disparity Zo corresponding to it is created by employing the above disparity conversion formulae. Through the disparity conversion described above, the relationship between the image pickup distance Lb and the perceptual distance Ld can be made closer to linear, and the disparity information can be generated in the range allowing the viewing person to see the stereoscopic image.
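A minimal sketch of the conversion these formulae describe, under the same hedged geometry as above: the input disparity is mapped to an image pickup distance, that distance is mapped linearly onto the allowed range of perceptual distances, and the result is inverted back to an output disparity. It assumes positive input disparities under a parallel-axis camera model; all names and signatures are hypothetical.

```python
def convert_disparity_ranged(zi, min_dep, max_dep,
                             dc, df, Pc, de, Ls, Pd, min_e, max_e):
    """Sketch of the range-constrained conversion: Zi -> Lb via the camera
    geometry, Lb mapped linearly onto [MIN_C, MAX_C], and the result
    inverted back to an output disparity Zo on the display."""
    # Formula (3)/(4) reconstruction: scene distances at the disparity extremes.
    max_dis = dc * df / (Pc * min_dep)
    min_dis = dc * df / (Pc * max_dep)
    lb = dc * df / (Pc * zi)
    # Formula (5)/(6) reconstruction: perceived-distance limits on the display.
    max_c = de * Ls / (de + Pd * min_e)
    min_c = de * Ls / (de + Pd * max_e)
    # Linear mapping of Lb onto the allowed perceptual range.
    t = (lb - min_dis) / (max_dis - min_dis)
    ld = min_c + t * (max_c - min_c)
    # Invert Ld = de*Ls / (de + Pd*Zo).
    return de * (Ls - ld) / (Pd * ld)
```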
For the output disparity having a larger negative value, i.e., for a disparity causing an object to be perceived on the farther side than the display plane in a direction receding rearward, a distance having been compressed in the depth direction is enlarged to a larger extent. On the other hand, for the output disparity having a larger positive value, i.e., for a disparity causing an object to be perceived on the nearer side than the display plane in a direction popping out forward, a distance having been enlarged in the depth direction is compressed to a larger extent. Comparing a difference B2 between two arbitrary points in the input disparity with a difference B1 between two arbitrary points in the output disparity, B1<B2 holds and a distance in the depth direction is compressed. Comparing a difference B3 between two arbitrary points in the input disparity with a difference B4 between two arbitrary points in the output disparity, B4>B3 holds and a distance in the depth direction is enlarged. Thus, since a change amount of the disparity is changed depending on the image pickup distance Lb such that, for a position where a distance in the depth direction is enlarged, the distance is compressed, and for a position where a distance in the depth direction is compressed, the distance is enlarged, the relationship between the image pickup distance Lb and the perceptual distance Ld is converted to come closer to linear.
As described above, according to this embodiment, the disparity information set in the range allowing the viewing person to see the stereoscopic image can be generated in conformity with the image display device that displays the stereoscopic image.
A third embodiment relates to an image pickup device 2. The image pickup device 2 includes a first image pickup unit 51 and a second image pickup unit 52 in addition to the configuration of the image display device 1 described above.
In more detail, the first image pickup unit 51 and the second image pickup unit 52 are arranged at positions spaced from each other through a predetermined distance. In synchronism with the second image pickup unit 52, the first image pickup unit 51 takes an image at the same time as the second image pickup unit 52 under the same image pickup condition as that for the second image pickup unit 52. The first image pickup unit 51 supplies resulting image data, as image data for a left eye, to the stereoscopic image acquisition portion 21. Furthermore, the first image pickup unit 51 supplies the camera spacing dc, a convergence angle α, the camera pixel pitch Pc, and the camera focal distance df, as the image pickup condition information, to the stereoscopic image acquisition portion.
The perceived position adjustment portion 32 creates the disparity conversion table as described below.
The disparity conversion is executed such that the convergence point F1 is perceived on the display plane, i.e., at the same position as that of the visual distance Ls. In other words, the image processing unit 30 executes a process of bringing close to 0 the disparity of an object located at the convergence-point distance, which can be calculated from the image pickup condition information of the image pickup unit. More specifically, the value of MAX_DIS is converted to F, and the value of MAX_C is converted to the visual distance Ls.
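No formula for the convergence-point distance is given in this text. For two cameras with spacing dc whose optical axes are each toed in by α/2 so that they intersect on the center line between the cameras, a plausible expression is:

$$F = \frac{d_c/2}{\tan(\alpha/2)}$$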
In accordance with the stereoscopic image supplied from the stereoscopic image acquisition portion 21 and the disparity information supplied from the object structure correction portion 33, the image generation portion 31 executes processing on, of the left-eye adapted image and the right-eye adapted image constituting the stereoscopic image, the left-eye adapted image.
In more detail, pixels of the left-eye adapted image are moved in accordance with the disparity information supplied from the object structure correction portion. After moving the pixels, pixels which have not been made correspondent to pixels of the output image are interpolated from nearby pixels. The stereoscopic image is generated by employing the generated left-eye adapted image and the input right-eye adapted image. As a result, the object present at the convergence point is displayed on the display plane, and the stereoscopic image can be generated in which the distortion specific to the stereoscopic view has been corrected. The stereoscopic image constituted by the left-eye adapted image after the disparity conversion process and the input right-eye adapted image is supplied to the display unit 40.
As described above, even for the stereoscopic image read out from the image pickup units having the convergence angle, the image pickup device 2 can display the object, which is positioned at the convergence point, at the visual distance Ls, i.e., on the display plane, in accordance with the image pickup condition information and the display condition information. Furthermore, even when the camera spacing dc and the binocular spacing de are not the same, the disparity of the stereoscopic image corresponding to the relevant image pickup condition is corrected, whereby a more natural stereoscopic image can be displayed.
While an image perceived with a stereoscopic feel can be obtained by converting the disparity through the image processing described above in the first embodiment, a sufficient stereoscopic effect cannot be obtained depending on the position of the object when the disparity conversion is executed within the range between a maximum disparity and a minimum disparity of the taken stereoscopic image. Even in such a case, a satisfactory stereoscopic feel can be provided by executing the disparity conversion described below.
A curve 2101 represents the relationship between the image pickup distance and the perceptual distance of the taken stereoscopic image, and a line 2103 represents an ideal linear relationship between the two distances.
In this embodiment, a disparity conversion method is described below using the perceptual distance and the image pickup distance. As described in detail in the first embodiment, the perceptual distance and the image pickup distance can be calculated from the disparity value, the image pickup condition, and the display condition. The relationship between the image pickup distance and the perceptual distance can be changed by executing the disparity conversion on the stereoscopic image.
While, in the first embodiment, the spatial distortion is corrected for the entire photographed scene, the spatial distortion is corrected in the fourth embodiment such that the perceptual curve is positioned between the perceptual curve 2101 and the line 2103.
That point is described in more detail below.
The line 2102 can be expressed by a linear line, a curved line, or a combination of linear and curved lines, which are each plotted within an area between the perceptual curve 2101 and the line 2103. The line 2102 may be expressed, for example, by two linear lines as denoted by a line 2301.
The line 2102 may also be given as a weighted average of the perceptual curve 2101 and the line 2103.
When the weight applied to the line 2103 is set to be relatively large, the disparity of an object near the background is enlarged, and the disparity of an object in the foreground is compressed. Thus, the spatial distortion is corrected, and the stereoscopic feel comes closer to that perceived with the actual space. With the characteristic calculated using the weighted average, therefore, the user can change the stereoscopic feel of the photographed stereoscopic image in such a way as making it come closer to that of the actual space, or as intentionally distorting a photographed space to emphasize the stereoscopic feel for the foreground.
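A minimal sketch of such a weighted-average characteristic, assuming the two characteristics are available as functions mapping an input disparity to a perceptual distance; the function names and the weight parameter are hypothetical.

```python
def blended_characteristic(perceptual_curve, linear_line, w):
    """Weighted average of the uncorrected perceptual curve (weight 1-w)
    and the fully corrected linear characteristic (weight w): w = 0 keeps
    the photographed depth as-is, w = 1 removes the spatial distortion,
    and intermediate values trade the two against each other."""
    return lambda zi: (1.0 - w) * perceptual_curve(zi) + w * linear_line(zi)
```

Increasing w toward 1 enlarges the disparity of objects near the background and compresses that of foreground objects, matching the behavior described above.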
In particular, when the position of a main object is specified, it is preferable to execute the disparity conversion process on an object that is present on the rear side of a main object 2402, i.e., on an object having a smaller disparity than that of the main object 2402, as denoted by a line 2401.
A disparity value of the main object can be calculated from a disparity histogram. For instance, it is possible to divide an image into a plurality of regions, and to estimate a disparity value of the object from the most frequent value of the disparity among the regions. The disparity value is held without correction for a range between the estimated disparity value of the main object and a maximum disparity value in the photographed scene, while the disparity value is corrected to reduce the spatial distortion for a range between the estimated disparity value of the main object and a minimum disparity value in the photographed scene. On that occasion, because the estimated disparity value of the object is a typical disparity value of the object, processing is preferably executed by setting, as a boundary, a disparity value smaller than the estimated disparity value. In other words, the disparity value is corrected in a manner of changing the correction with respect to a boundary set on the rear side of a position that corresponds to the estimated disparity value.
To explain the above point in terms of distance, the disparity conversion method is changed to be different for an object present between an image pickup position 2402 of the main object and a nearest image pickup position 2408 and for an object present between the image pickup position 2402 of the main object and a farthest image pickup position 2407.
Stated in another way, a line 2401 holds the input disparity without correction on the side nearer than the main object, and corrects the disparity so as to reduce the spatial distortion on the side farther than the main object.
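A minimal sketch of this main-object variant, assuming the disparity map is a NumPy array and a hypothetical region grid for the histogram estimate; the callable `correct` stands for any of the distortion-reducing mappings sketched above, and the margin value is likewise hypothetical.

```python
import numpy as np

def main_object_disparity(disp, grid=8):
    """Estimate the main object's disparity: divide the map into grid x grid
    regions, take each region's most frequent (rounded) disparity, and
    return the most frequent value among those regional modes."""
    h, w = disp.shape
    modes = []
    for i in range(grid):
        for j in range(grid):
            block = np.rint(disp[i*h//grid:(i+1)*h//grid,
                                 j*w//grid:(j+1)*w//grid])
            vals, counts = np.unique(block, return_counts=True)
            modes.append(vals[np.argmax(counts)])
    vals, counts = np.unique(modes, return_counts=True)
    return vals[np.argmax(counts)]

def convert_behind_main_object(disp, correct, margin=1.0):
    """Leave disparities at or above the boundary (the main object and
    everything nearer) uncorrected; apply the distortion-reducing mapping
    `correct` only to smaller disparities, i.e., objects behind the main
    object. The boundary sits slightly behind the estimated value."""
    boundary = main_object_disparity(disp) - margin
    out = np.asarray(disp, dtype=float).copy()
    behind = out < boundary
    out[behind] = correct(out[behind])
    return out
```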
Moreover, it is preferable to define a characteristic of the disparity conversion with a weighted average of the perceptual curve 2101 and the line 2401.
A line 2501 denotes an example of such a weighted-average characteristic.
The above-described method can be combined with any of the methods described above in the second and third embodiments. The method of interpolating the enlarged disparity, i.e., the above-described process executed in the object structure correction portion in the first embodiment, can also be employed in the fourth embodiment.
The above-described embodiments are further applicable to an integrated circuit/chip set that is mounted on an image processing device.
1 . . . image display device, 2 . . . image pickup device, 10 . . . storage unit, 20 . . . information acquisition unit, 21 . . . stereoscopic image acquisition portion, 22 . . . disparity information acquisition portion, 23 . . . image-pickup condition acquisition portion, 24 . . . display condition holding portion, 30 . . . image processing unit, 31 . . . image generation portion, 32 . . . perceived position adjustment portion, 33 . . . object structure correction portion, 40 . . . display unit, 50 . . . image pickup unit, 51 . . . first image pickup unit, 52 . . . second image pickup unit, 500 . . . object-to-object distance, 1001 . . . input disparity, 1002 . . . output disparity, 1003 . . . disparity information, 1004 . . . disparity change point, 1100 . . . disparity information, 1101 . . . disparity information after disparity conversion, 1103 . . . preceding pixel, 1104 . . . disparity change point, 1105 . . . threshold, 1106 . . . pixel, 1107 . . . pixel, and 1108 . . . width.
Number | Date | Country | Kind |
---|---|---|---|
2011-199168 | Sep 2011 | JP | national |
2012-038205 | Feb 2012 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2012/066986 | 7/3/2012 | WO | 00 | 3/4/2014 |