The present invention relates to a calibration data selection device, method of selection, selection program, and three dimensional position measuring apparatus for selecting calibration data for use with a parallax image at the time of measuring a three dimensional position.
A stereo camera is known as a three dimensional position measuring apparatus for measuring three dimensional information of a target object. A pair of view images photographed through its two taking optical systems constitutes a parallax image. According to the parallax between corresponding points in the pair of view images, a three dimensional position of the target object, namely coordinates (Xi, Yi, Zi) of a given point Pi on the target object in a three dimensional space, is obtained.
To measure the three dimensional position with high precision, it is necessary to eliminate from the view images distortion derived from characteristics of the taking optical systems, such as aberration. The view images must also be corrected according to accurate information on the focal length, positional relationship, orientation and the like of the taking optical systems at the time of photography. To this end, before the view images are analyzed, calibration data created according to the characteristics of the taking optical systems are applied to the view images to correct them. When the focus of the taking optical systems is adjustable, the calibration data must be selected according to the focus position at the time of photography and applied to the view images, because the characteristics change with the focus position.
To select the calibration data according to the focus position in this manner, the focus position at the time of photography must be designated. As one known method of designation, the focus position is designated from the step position of a stepping motor that moves the focus lens (Patent Document 1).
To obtain the parallax between the view images, the correlation between pixels in the view images is checked by correlation processing, so as to search for corresponding points, namely the same photographing target points appearing in the view images. The calculation cost of the correlation processing increases with the definition of the view images, and increases considerably even with a slight increase in the definition. A known device therefore exploits the fact that the range resolution becomes higher as the distance to the target object becomes shorter: the view images are split into areas of plural distance regions, and each area is converted so that its definition is lower as its distance is nearer. The range resolution required over the entirety of the view images is thus obtained while the calculation cost is decreased (see Patent Document 2).
To designate the focus position from the step position of the stepping motor as disclosed in Patent Document 1, drive pulses supplied to the stepping motor are counted. However, this is not preferable, because the correct focus position cannot be detected if the stepping motor temporarily steps out, or if a shock to the taking optical systems moves the lens position irrespective of the drive pulses. Although the correct focus position can be detected by an encoder that directly detects the lens position, providing such a mechanism increases the number of parts and the cost, and so cannot be adopted in a stereo camera intended for many users.
Another conceivable method specifies the focusing distance of the taking optical systems by use of the parallax obtained from the view images, either without applying the calibration data to the view images or after applying provisionally suitable calibration data to them, and detects the focus position from the focusing distance. However, this method is inefficient and wastes processing time, because the processing is performed at a higher range resolution than is required merely for selecting the calibration data. The method of Patent Document 2, in which the definition is changed according to the distance region, is effective in decreasing the calculation cost. However, it can be used only for view images of a specific distance distribution, and cannot be applied to view images created in various kinds of scenes.
The present invention has been made in view of the foregoing problems, and has an object to provide a calibration data selection device, method of selection, selection program, and three dimensional position measuring apparatus, capable of selecting suitable data from the calibration data according to a parallax image without wasteful calculation.
In order to achieve the above object, a calibration data selection device according to the present invention includes an image acquisition unit for acquiring a plurality of view images photographed from different points by an imaging apparatus having a plurality of taking optical systems, a calibration data input unit for inputting calibration data corresponding respectively to plural reference focusing distances of the taking optical systems, an image reduction unit for respectively reducing the view images at a first reduction ratio in such a range that a definition of the view images is no less than a definition corresponding to a highest range resolution which is determined from the reference focusing distances corresponding to the calibration data and from set distance regions associated respectively with the reference focusing distances, the highest range resolution being required for determining in which of the set distance regions an object distance to a target object focused by the taking optical systems is included, a distance determining unit for acquiring a corresponding point between the view images reduced by the image reduction unit according to correlation processing, and for determining the object distance to the target object focused by the taking optical systems according to parallax of the acquired corresponding point, and a calibration data selector for selecting calibration data from the plural calibration data in such a manner that the object distance determined by the distance determining unit is within the set distance region.
Preferably, a focus area acquisition unit designates a focus area in the view images. The distance determining unit determines the object distance by use of the parallax of the corresponding point in the focus area designated by the focus area acquisition unit.
Preferably, the distance determining unit operates to acquire a corresponding point in the focus area designated by the focus area acquisition unit.
Preferably, also, a parallax detector detects parallax corresponding to a distance estimated for an in-focus state of the taking optical systems according to a distribution of occurrence of the parallax of the corresponding points acquired by the distance determining unit for the entirety of the view images. The distance determining unit acquires the object distance from the parallax detected by the parallax detector.
Preferably, also, the image reduction unit sets the first reduction ratio in a first direction of arrangement of the taking optical systems in the view images, and sets a second reduction ratio of the view images smaller than the first reduction ratio in a second direction perpendicular to the first direction.
Preferably, a correlation window correction unit adjusts an aspect ratio of a correlation window for use in the correlation processing of the distance determining unit according to the first and second reduction ratios.
Preferably, also, a focal length acquisition unit acquires a focal length of the taking optical systems at which a parallax image has been photographed in an imaging apparatus whose focal length is changeable. The calibration data input unit acquires calibration data for each of plural focal lengths of the taking optical systems according to the focal lengths. The image reduction unit sets the first reduction ratio in such a range that the definition of the view images is no less than the definition corresponding to the highest range resolution which is determined from the reference focusing distances corresponding to the calibration data for the focal length acquired by the focal length acquisition unit and from the set distance regions associated with the reference focusing distances. The calibration data selector selects calibration data corresponding to the object distance determined by the distance determining unit and the focal length acquired by the focal length acquisition unit.
Preferably, also, the image reduction unit includes a reduction ratio determining unit for acquiring imaging resolutions, for the respective reference focusing distances, for measuring a distance from the parallax between the non-reduced view images according to basic information of the imaging apparatus including a base line length, a focal length and a pixel pitch in photography, for acquiring range resolutions for the respective reference focusing distances according to the reference focusing distance corresponding to the calibration data and the set distance region associated therewith, and for determining the first reduction ratio from the imaging resolutions and the range resolutions.
Preferably, also, the reduction ratio determining unit carries out correction so that optical axes of the taking optical systems having a convergence angle are made approximately parallel to one another, to acquire the imaging resolutions.
Also, a three dimensional position measuring apparatus according to the present invention includes a calibration data selection device constructed as described above, an entry unit for applying the calibration data selected by the calibration data selection device to the input view images to correct them, and an arithmetic processing unit for determining three dimensional position information of the target object according to the parallax between the view images corrected by the entry unit.
Also, a calibration data selection method according to the present invention includes an image acquiring step of acquiring a plurality of view images photographed from different points by an imaging apparatus having a plurality of taking optical systems, a calibration data acquiring step of acquiring calibration data corresponding respectively to plural reference focusing distances of the taking optical systems, an image reduction step of respectively reducing the view images at a first reduction ratio in such a range that a definition of the view images is no less than a definition corresponding to a highest range resolution which is determined from the reference focusing distances corresponding to the calibration data and from set distance regions associated respectively with the reference focusing distances, the highest range resolution being required for determining in which of the set distance regions an object distance to a target object focused by the taking optical systems is included, a distance determining step of acquiring a corresponding point between the view images reduced by the image reduction step according to correlation processing, and of determining the object distance to the target object focused by the taking optical systems according to parallax of the acquired corresponding point, and a calibration data selection step of selecting calibration data from the plural calibration data in such a manner that the object distance determined by the distance determining step is within the set distance region.
Also, a calibration data selection program according to the present invention causes a computer to execute the image acquiring step, the calibration data acquiring step, the image reduction step, the distance determining step and the calibration data selection step as described above.
According to the present invention, the respective view images are reduced within a range in which it is still possible to determine which of the set distance regions, determined for the respective reference focusing distances corresponding to the calibration data, includes the object distance. The object distance to the target object is acquired according to the parallax obtained from the reduced view images, and calibration data corresponding to the object distance is selected. Consequently, appropriate calibration data can be selected in a short processing time, without wasteful processing.
A stereo image input unit 11 retrieves a stereo image created by the stereo camera for a target object. The stereo camera, as is well known, includes two taking optical systems on right and left sides, photographs the target object from right and left view points through the taking optical systems, and outputs the stereo image as a parallax image. The stereo image includes a left view image photographed from the left view point and a right view image photographed from the right view point. The stereo image input unit 11 is supplied with the stereo image assigned with tag information indicating a focus area, namely an area in the stereo image on which the stereo camera has focused. Note that the direction of arrangement of the taking optical systems is not limited to the horizontal direction, but can be, for example, a vertical direction. Also, the image can be a parallax image including view images photographed from three or more view points.
A camera information input unit 12 obtains camera information (basic information) of the stereo camera having photographed the stereo image to be input. As the camera information, a base line length as the interval between the right and left taking optical systems, focal lengths and a pixel pitch are input. Note that the values of the camera information may be of low precision, because they are used for determining an estimated focusing distance, which will be described later.
A calibration dataset input unit 13 is supplied with a calibration dataset prepared in advance. The calibration dataset to be input corresponds to the stereo camera having photographed the stereo image to be input, and includes plural calibration data for eliminating the influence of distortion of the taking optical systems and their convergence angle.
The distortion and the like of the taking optical systems differ according to their focus position, namely the lens position. Calibration data are therefore prepared in advance for plural focus positions serving as references. A focusing distance serving as a reference (hereinafter referred to as reference focusing distance) is associated with each of the calibration data, and information of the reference focusing distances is input to the calibration dataset input unit 13 together with the calibration data. The reference focusing distance is the focusing distance of the taking optical systems determined by the corresponding reference focus position, and is thus correlated to that focus position.
Creation of calibration data for continuous focusing distances is not practical. The calibration data are therefore prepared for discrete reference focusing distances, and a set distance region is associated with each of the reference focusing distances, with boundary values set at the median values between adjacent reference focusing distances. In an example of four calibration data C1-C4 corresponding to reference focusing distances of “50 cm”, “1 m”, “2 m” and “5 m”, the set distance region for the calibration data C1 extends up to the distance of “75 cm”, the median value between the reference focusing distances of the calibration data C1 and C2. For the calibration data C2 corresponding to the reference focusing distance of “1 m”, the set distance region is from the distance of “75 cm” to the distance of “1.5 m”, the median value between the reference focusing distances of the calibration data C2 and C3. Similarly, for the calibration data C3, the set distance region is from the distance of “1.5 m” to the distance of “3.5 m”. For the calibration data C4, the set distance region is from the distance of “3.5 m” to infinity.
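For illustration, the boundary values of such set distance regions can be computed as the medians between adjacent reference focusing distances. The following is a minimal Python sketch under that assumption; the function name, the open lower bound of the nearest region, and the use of infinity for the farthest upper bound are illustrative choices, not details fixed by the embodiment.

```python
import math

def set_distance_regions(ref_distances_m):
    """Compute (lower, upper) set distance regions for reference focusing
    distances; each boundary is the median between adjacent distances."""
    refs = sorted(ref_distances_m)
    regions = []
    for i, ref in enumerate(refs):
        lower = 0.0 if i == 0 else (refs[i - 1] + ref) / 2.0
        upper = math.inf if i == len(refs) - 1 else (ref + refs[i + 1]) / 2.0
        regions.append((lower, upper))
    return regions

# The example above: 0.5 m, 1 m, 2 m and 5 m give boundaries at
# 0.75 m, 1.5 m and 3.5 m.
print(set_distance_regions([0.5, 1.0, 2.0, 5.0]))
```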
A method of setting a set distance region is not limited to the above. For example, it is possible to predetermine a set distance region of the calibration data together with the calibration data, and input the set distance region to the three dimensional position measuring apparatus 10 with the calibration data. Also, the set distance region can be input manually.
An arithmetic processing unit 15 for required resolution constitutes an image reduction unit together with an arithmetic processing unit 16 for imaging resolution, a reduction ratio determining unit 17 and an image reduction unit 18. The arithmetic processing unit 15 retrieves the reference focusing distances from the input calibration dataset, and determines a required resolution for each of the reference focusing distances. The required resolution is the range resolution required for finding in which of the set distance regions the object distance to the target object focused by the taking optical systems is included. Here, the range resolution is the length in three dimensional space (laterally and vertically in the plane, and in the depth direction) corresponding to a pitch of one pixel. The determined required resolutions are sent to the reduction ratio determining unit 17.
The arithmetic processing unit 16 determines an imaging resolution, namely the measurement resolution (range resolution) in the depth direction obtained when the three dimensional position is determined by use of the camera information and all the pixels in the input view images. The imaging resolution differs according to the object distance to the target object, even though the base line length, focal lengths and pixel pitch on the image sensor are equal at the time of photography. The arithmetic processing unit 16 determines imaging resolutions in correspondence with the respective reference focusing distances by using each reference focusing distance of the calibration data as the object distance. The imaging resolutions are sent to the reduction ratio determining unit 17.
The reduction ratio determining unit 17 operates according to the required resolutions from the arithmetic processing unit 15 and the imaging resolutions from the arithmetic processing unit 16, and determines a reduction ratio for lowering the definition of the view images in such a range that the definition of the view images is not lower than the definition corresponding to the highest range resolution determined from the reference focusing distances of the calibration data and their associated set distance regions. The reduction ratio is determined so that the range resolution obtained when determining the object distance of the target object from the reduced view images meets the highest of the required resolutions, and so that the greatest possible effect of the reduction is obtained. In the present embodiment, the reduction ratio is expressed as “1/K” with K an integer, and K is chosen for the greatest possible effect of the reduction.
The image reduction unit 18 reduces the respective view images at the reduction ratio determined by the reduction ratio determining unit 17, to lower their definition. In the reduction processing, the view images are reduced so that the ratio of the pixel number after the reduction to the pixel number before the reduction equals the reduction ratio, both in the horizontal direction (direction of parallax) of the view images and in the vertical direction perpendicular thereto. For example, let “1/K” be the reduction ratio; one pixel after the reduction is the average value of an area containing K×K pixels in the view image before the reduction. It is also possible to reduce the view images by thinning out pixels at intervals based on the reduction ratio.
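As a concrete illustration of the K×K averaging just described, the sketch below reduces a grayscale image by an integer factor K with NumPy; it assumes the image is a 2-D array and simply crops any remainder rows and columns, which the embodiment does not specify.

```python
import numpy as np

def reduce_by_average(image: np.ndarray, k: int) -> np.ndarray:
    """Reduce a 2-D image at a reduction ratio of 1/K: each output
    pixel is the average of a KxK block of input pixels."""
    h, w = image.shape
    h, w = h - h % k, w - w % k              # crop so K divides both sides
    blocks = image[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3))          # average over each KxK block

# A 4x4 image reduced at 1/2 becomes 2x2.
img = np.arange(16, dtype=float).reshape(4, 4)
print(reduce_by_average(img, 2))
```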
A first arithmetic processing unit 21 performs first arithmetic processing, which includes the correlation processing and the parallax determination. In the correlation processing, the view images reduced by the image reduction unit 18 are processed to search for corresponding points (pixels) in the right view image that correspond to reference points (pixels) in the left view image. In the parallax determination, the parallax between the reference points and the corresponding points detected by the correlation processing is determined. A distance estimating unit 22 is supplied with the result of the first arithmetic processing. The parallax is obtained in the form of a shift amount (pixel number) between the reference points and their corresponding points.
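The embodiment does not fix a particular correlation algorithm. As one common possibility, a corresponding point can be searched for by block matching along the horizontal (parallax) direction with a sum-of-absolute-differences (SAD) cost; the following naive sketch illustrates that assumption, with hypothetical parameter names.

```python
import numpy as np

def find_disparity(left: np.ndarray, right: np.ndarray, y: int, x: int,
                   window: int = 5, max_d: int = 32) -> int:
    """Return the pixel shift (parallax) of reference point (y, x) in the
    left image by SAD block matching along the same row of the right image."""
    r = window // 2
    ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_d, x - r) + 1):         # candidate shifts
        cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
        cost = np.abs(ref - cand).sum()               # SAD correlation cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```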
A focus area acquisition unit 23 reads and analyzes tag information assigned to an input stereo image, and acquires a focus area. Coordinates of the focus area in the stereo image acquired by the focus area acquisition unit 23 before the reduction are converted by an area converter 24 into coordinates in a reduced stereo image according to the reduction ratio. The distance estimating unit 22 is supplied with the converted focus area.
The distance estimating unit 22 with the first arithmetic processing unit 21 constitutes a distance determining unit. The distance estimating unit 22 operates according to the parallax obtained from the focus area in a reduced view image, determines an object distance to a portion of the target object recorded in the focus area, and outputs this as an estimated focusing distance. For determining the estimated focusing distance, a pixel pitch, focal length, base line length, and reduction ratio of the view image are used together with the parallax from the first arithmetic processing unit 21.
A calibration data selector 26 selects calibration data associated with the estimated focusing distance from among the calibration data of the input calibration dataset. For the selection, the calibration data selector 26 refers to the set distance regions associated with the calibration data, and selects the calibration data whose set distance region includes the estimated focusing distance. Thus, the calibration data associated with the focus position of the taking optical systems at the time of photographing the stereo image is selected.
A calibration data entry unit 31 applies the calibration data selected by the calibration data selector 26 to the view images without reduction, to eliminate the influence of distortion of the taking optical systems and their convergence angle. A second arithmetic processing unit 32 performs second arithmetic processing including correlation processing and parallax determination. These are the same as in the first arithmetic processing, but are performed on the view images without reduction. A 3D data converter 33 is supplied with the result of the second arithmetic processing.
The 3D data converter 33 determines 3D data as three dimensional position information, inclusive of the distance of the target object, from each pixel as a reference point in the left view image and the pixel corresponding thereto in the right view image. An output interface 34 records the 3D data of the stereo image, for example, to a recording medium. The outputting method is not limited to this; the 3D data can be, for example, output to a monitor.
Determination of a reduction ratio is described now. An object distance L from the stereo camera to the measurement point is expressed in the following equation (1):
L = (D·f)/(B·d) (1)
where “D” is the base line length of the stereo camera for photography, “f” is the focal length, “B” is the pixel pitch, and “d” is the parallax.
A length corresponding to the parallax is obtained by multiplying the parallax by the pixel pitch. If the view image is reduced, the length is obtained by using the pixel pitch of the camera information divided by the reduction ratio, namely the effective pixel pitch. Therefore, a relationship of “P = B·d0 = K·B·d1” is satisfied, where “P” is the length corresponding to the parallax, “B” is the pixel pitch of the camera information, “1/K” is the reduction ratio of the view images, “d0” is the parallax before the reduction, and “d1” is the parallax after the reduction.
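In code, equation (1) combined with the relationship P = B·d0 = K·B·d1 reads as follows; a minimal sketch with illustrative parameter values, not values from the embodiment.

```python
def object_distance(baseline_m: float, focal_m: float, pixel_pitch_m: float,
                    disparity_px: float, k: int = 1) -> float:
    """Equation (1): L = (D*f)/(B*d).  For a view image reduced at 1/K,
    the effective pixel pitch is K*B, so L = (D*f)/(K*B*d1)."""
    return (baseline_m * focal_m) / (k * pixel_pitch_m * disparity_px)

# Illustrative values: 60 mm base line, 6 mm focal length, 2 um pixel
# pitch, and a disparity of 10 pixels in an image reduced at 1/18:
print(object_distance(0.06, 0.006, 2e-6, 10, k=18))  # -> 1.0 (meters)
```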
As is well known, the parallax is smaller if the measurement point shifts in the far distance direction, and larger if it shifts in the near distance direction. For a given object distance, let a measurement resolution be the amount by which the distance increases or decreases upon a change of the parallax by one pixel. The measurement resolution R1 on the far distance side and the measurement resolution R2 on the near distance side at the object distance L are expressed by the following equations (2), (2′), (3) and (3′):
R1 = [(D·f)/(B·(d−1))] − [(D·f)/(B·d)] (2)
= [L/(1 − (B·L)/(D·f))] − L (2′)
R2 = [(D·f)/(B·d)] − [(D·f)/(B·(d+1))] (3)
= L − [L/(1 + (B·L)/(D·f))] (3′)
Equations (2′) and (3′) follow from equations (2) and (3) by substituting the parallax d = (D·f)/(B·L) from equation (1). The imaging resolution can thus be determined as the measurement resolutions on the far distance side and the near distance side obtained from equations (2′) and (3′) according to the base line length, focal length, pixel pitch and object distance at the time of photography. By using the respective reference focusing distances as object distances, the imaging resolutions on the far distance side and the near distance side are obtained for each of the reference focusing distances.
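A minimal sketch of equations (2′) and (3′), treating a reference focusing distance as the object distance L; the symbols mirror those defined above, and the optional factor K models a reduced image via the effective pixel pitch.

```python
def imaging_resolutions(baseline_m: float, focal_m: float,
                        pixel_pitch_m: float, distance_m: float,
                        k: int = 1) -> tuple:
    """Equations (2') and (3'): far-side and near-side measurement
    resolutions at object distance L for a one-pixel parallax change."""
    step = (k * pixel_pitch_m * distance_m) / (baseline_m * focal_m)  # B*L/(D*f)
    far = distance_m / (1.0 - step) - distance_m                      # (2')
    near = distance_m - distance_m / (1.0 + step)                     # (3')
    return far, near
```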
On the other hand, for judging whether an object distance to be measured is within the set distance region of given calibration data, let Rf be the difference between the reference focusing distance of the calibration data and the upper limit of its set distance region, and let Rc be the difference between the reference focusing distance and the lower limit. The measurement resolution on the far distance side, obtained in the above manner with the reference focusing distance as the object distance, needs to be Rf or lower, and the measurement resolution on the near distance side needs to be Rc or lower. In other words, for a given reference focusing distance, the difference between the reference focusing distance and the upper limit of the set distance region including it is the required resolution on the far distance side, and the difference between the reference focusing distance and the lower limit is the required resolution on the near distance side. Thus, the required resolutions on the far distance side and the near distance side are obtained for the respective reference focusing distances.
The reduction ratio is determined as the largest of the ratios of the imaging resolution to the required resolution (= imaging resolution/required resolution), comparing resolutions of the same kind at the same reference focusing distance. In short, the ratio of the imaging resolution on the far distance side to the required resolution on the far distance side, and the ratio of the imaging resolution on the near distance side to the required resolution on the near distance side, are obtained for each of the reference focusing distances. The highest of these ratios is determined as the reduction ratio.
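Putting the pieces together, the reduction ratio 1/K is the largest ratio of imaging resolution to required resolution over all reference focusing distances and both sides, with K then taken as an integer. The sketch below assumes the set_distance_regions and imaging_resolutions helpers from the earlier sketches are in scope; the handling of the infinite far-side limit is an illustrative choice.

```python
import math

def determine_reduction_ratio(baseline_m, focal_m, pixel_pitch_m, refs_m):
    """Return the integer K of the reduction ratio 1/K: the largest K such
    that 1/K is no smaller than any imaging/required resolution ratio."""
    refs = sorted(refs_m)
    ratio = 0.0                                 # lower bound for 1/K
    for ref, (lo, hi) in zip(refs, set_distance_regions(refs)):
        img_far, img_near = imaging_resolutions(
            baseline_m, focal_m, pixel_pitch_m, ref)
        if math.isfinite(hi):                   # no far constraint at infinity
            ratio = max(ratio, img_far / (hi - ref))
        ratio = max(ratio, img_near / (ref - lo))
    return max(1, math.floor(1.0 / ratio))      # largest K with 1/K >= ratio
```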
The measurement resolution becomes coarser as the reduction ratio (= “1/K”) decreases. However, by determining the reduction ratio in the above manner, the reduced stereo image meets the required resolution for every one of the reference focusing distances. The value K is set as an integer, with the reduction ratio equal to “1/K”, in order to simplify the reduction processing while minimizing the definition of the stereo image and thus raising the efficiency of the correlation processing.
Note that in the present example the determined reduction ratio maximizes the effect of the reduction while satisfying the required resolutions for all of the reference focusing distances. However, it is not necessary to maximize the effect of the reduction when determining a reduction ratio.
The reference focusing distances corresponding to the calibration data C1-C4 are “50 cm”, “1 m”, “2 m” and “5 m”, as in the example described above. For the required resolutions on the near distance side for the calibration data C2 and C3, the required resolutions of “250 mm” and “500 mm” are satisfied at the respective reference focusing distances even at a reduction ratio as low as “1/32”. For the calibration data C4, however, the required resolution of “1,500 mm” is not satisfied at the reference focusing distance if the reduction ratio is lower than “1/18”. As a result, “1/18” is determined as the reduction ratio, because its corresponding ratio of the imaging resolution to the required resolution is the highest.
The operation of the above construction is now described.
When the calibration dataset and the camera information are input, the reference focusing distances are retrieved in correspondence with the respective calibration data. According to the reference focusing distances, the arithmetic processing unit 15 obtains required resolutions for both the far distance side and the near distance side in correspondence with the reference focusing distances. Also, the arithmetic processing unit 16 obtains imaging resolutions for both the far distance side and the near distance side in correspondence with the reference focusing distances, from the reference focusing distances and the camera information.
A reduction ratio of the view images is determined by the reduction ratio determining unit 17 according to the respective required resolutions and imaging resolutions. At this time, the reduction ratio determining unit 17 obtains the ratio of the imaging resolution on the far distance side to the required resolution on the far distance side, and the ratio of the imaging resolution on the near distance side to the required resolution on the near distance side, for each of the reference focusing distances. The highest of the ratios is determined as the reduction ratio.
When the stereo image input unit 11 inputs the view images, they are sent to the image reduction unit 18 and the calibration data entry unit 31. The view images are reduced in the image reduction unit 18 at the reduction ratio determined by the reduction ratio determining unit 17. The pixel number and the definition of the view images are thereby decreased, and the effective pixel pitch is increased, so that the measurement resolution becomes coarser.
The view images reduced in the above manner are sent to the first arithmetic processing unit 21 and processed in the first arithmetic processing over their entire areas. Corresponding points are searched for by the correlation processing, and the parallax is obtained for the reference points of the detected corresponding points. As the view images are reduced, the correlation processing is completed in a shorter time than correlation processing of the input view images would be. Although no calibration data are applied to the view images in the first arithmetic processing, the corresponding points can be searched for without large failure, because the influence of distortion of the taking optical systems and of their convergence angle is small in the reduced view images. The position information of the obtained corresponding points and information of their parallax are sent to the distance estimating unit 22.
Also, the focus area, acquired by the focus area acquisition unit 23 analyzing the tag information assigned to the stereo image, is converted to coordinates in each reduced view image by the area converter 24, and sent to the distance estimating unit 22.
Upon receiving the result of the first arithmetic processing and the focus area converted in the above manner, the distance estimating unit 22 determines an object distance to the portion of the target object in the focus area, according to the camera information and the parallax of the corresponding points detected in the converted focus area. The object distance is output as the estimated focusing distance.
When the estimated focusing distance is sent to the calibration data selector 26, the calibration data whose set distance region includes the estimated focusing distance is selected from the various calibration data.
When the selected calibration data is sent to the calibration data entry unit 31, the calibration data is applied to the respective view images without reduction, so as to eliminate the distortion of the taking optical systems of the stereo camera used for the photography. As described heretofore, the calibration data are selected according to the estimated focusing distance obtained from the reduced view images, so that suitably selected calibration data are applied to the view images.
The view images after application of the calibration data are processed in the second arithmetic processing by the second arithmetic processing unit 32. 3D data, namely three dimensional position information including the distance of the target object for the pixels in the view images, are determined according to the result of the second arithmetic processing. The 3D data are recorded to a recording medium.
To measure three dimensional position information from stereo images successively photographed by the same stereo camera, common calibration data and camera information can be used, so that only the stereo images need to be input, without inputting the data and information again.
In the above embodiment, the focus area as the portion focused by the stereo camera is specified from the tag information assigned to the stereo image. However, the designation of the focus area is not limited to this method. For example, the focus area may be specified by analyzing the view images, such as by detecting a face area or an area containing a large amount of high frequency components.
Instead of designation by way of the focus area, it is possible to designate parallax according to a distance estimated for an in-focus state of the stereo camera. In such an embodiment, a parallax detector detects the parallax corresponding to the distance estimated for the in-focus state according to the distribution of occurrence of the parallax of the corresponding points acquired over the entirety of the view images, and the distance estimating unit 22 acquires the object distance from the detected parallax.
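One way to realize such a parallax detector is to histogram the disparities of all corresponding points over the view images and take the most frequent value as the parallax of the in-focus distance; the sketch below makes that assumption.

```python
import numpy as np

def dominant_disparity(disparities: np.ndarray) -> int:
    """Return the most frequently occurring disparity over the whole
    view image as the parallax of the estimated in-focus distance."""
    values, counts = np.unique(disparities.astype(int), return_counts=True)
    return int(values[np.argmax(counts)])
```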
Note that only important portions are shown in the drawings.
A second embodiment is described, in which camera information is acquired from calibration data. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description.
In this embodiment, an arithmetic processing unit 51 for camera information is provided in place of the camera information input unit 12 of the first embodiment, and obtains the camera information from the calibration dataset input to the calibration dataset input unit 13.
The arithmetic processing unit 51 obtains the base line length and the pixel focal length from the respective calibration data, and outputs an average base line length and an average pixel focal length, determined by averaging those values, as the camera information. There are fine differences between the calibration data according to the focus position of the taking optical systems, so the camera information obtained from the calibration data is not correct in a precise sense. However, no problem occurs in obtaining an estimated focusing distance from view images of reduced measurement resolution for the purpose of selecting calibration data. Note that median values may be used instead of the average values. Also, for example, the base line length and pixel focal length obtained from the selected calibration data can be used as the basic information in the second arithmetic processing unit 32 and the 3D data converter 33.
A third embodiment is described in correspondence with a stereo camera in which zoom lenses are used as the taking optical systems. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description. In the third embodiment, a construction is described in which a stereo image is photographed with the taking optical systems set at one of two focal lengths, the wide-angle end or the telephoto end. The embodiment can also be applied to other focal lengths, and to three or more focal lengths.
The camera information input unit 12 receives inputs of the base line length, pixel pitch, and focal lengths of the telephoto end and wide-angle end as camera information. Calibration datasets corresponding respectively to the focal lengths of the telephoto end and the wide-angle end are input to the calibration dataset input unit 13.
The arithmetic processing unit 15 determines required resolutions for the far distance side and the near distance side for each of the focal lengths of the calibration data and for each of the reference focusing distances. The arithmetic processing unit 16 determines imaging resolutions for the far distance side and the near distance side for each of the focal lengths according to the camera information and for each of the reference focusing distances. An arithmetic processing unit 54 for a reduction ratio determines a reduction ratio from the required resolutions and imaging resolutions for each of the focal lengths, in a manner similar to the reduction ratio determining unit 17 of the first embodiment. Thus, reduction ratios for the telephoto end and the wide-angle end are determined and written to a memory 54a.
A reduction ratio selector 55 is supplied with a focal length retrieved from the tag information of the stereo image. In response to the input of the focal length, the reduction ratio selector 55 retrieves a reduction ratio from the memory 54a in correspondence with the focal length, and sends the reduction ratio to the image reduction unit 18, the area converter 24 and the first arithmetic processing unit 21. Then the view images are reduced in such a manner as to satisfy the required resolution according to the focal length at which the input stereo image has been photographed, and to maximize the effect of the reduction. An estimated focusing distance is obtained from the view images.
The calibration data selector 26 selects calibration data corresponding to a focal length obtained from the tag information of the stereo image and the estimated focusing distance obtained by the distance estimating unit 22. The selected calibration data is applied to each of the view images.
A fourth embodiment is described, in which the vertical and horizontal reduction ratios of the view images are determined separately from one another. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description.
In this embodiment, a ratio determining unit 61 for a horizontal direction reduction ratio determines a reduction ratio in the horizontal direction (hereinafter referred to as horizontal reduction ratio) in the same manner as the reduction ratio determining unit 17 of the first embodiment.
A ratio input unit 62 for a vertical direction reduction ratio is provided for inputting a reduction ratio in the vertical direction (hereinafter referred to as vertical reduction ratio). The respective view images are reduced by the image reduction unit 18 according to the horizontal reduction ratio from the ratio determining unit 61 in the horizontal direction, and according to the vertical reduction ratio from the ratio input unit 62 in the vertical direction. Similarly, the size of the focus area obtained by the area converter 24 is reduced according to the horizontal reduction ratio in the horizontal direction and according to the vertical reduction ratio in the vertical direction, to adjust its aspect ratio.
A window size correction unit 63 corrects the size of the correlation window used in the correlation processing according to the respective reduction ratios if the horizontal reduction ratio differs from the vertical reduction ratio. The correction is made to satisfy “Wv = Wh·Qv/Qh”, where Wv is the vertical size of the correlation window, Wh is the horizontal size of the correlation window, Qv is the vertical reduction ratio, and Qh is the horizontal reduction ratio.
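The window size correction amounts to one line of arithmetic; the sketch below is illustrative, and the rounding to a whole pixel count is an assumption not specified above.

```python
def correct_window_height(wh: int, qv: float, qh: float) -> int:
    """Wv = Wh*Qv/Qh: scale the vertical window size so the correlation
    window covers the same region of the original, unreduced image."""
    return max(1, round(wh * qv / qh))

# A window 9 pixels wide, with Qh = 1/18 and Qv = 1/36, becomes about
# half as tall as it is wide.
print(correct_window_height(9, 1 / 36, 1 / 18))  # -> 4
```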
A difference in distance in the depth direction is detected as a shift amount in the parallax direction, namely the direction of arrangement of the taking optical systems. The measurement resolution is therefore influenced by reduction in the horizontal direction but not by reduction in the vertical direction. Accordingly, the vertical reduction ratio can be set for further reduction than the horizontal reduction ratio, so that the processing time is further decreased without influencing the measurement resolution.
In the above embodiment, the absolute value of the vertical reduction ratio is input. However, a value of the vertical reduction ratio relative to the horizontal reduction ratio can be input instead. It is also possible to set the vertical reduction ratio automatically for further reduction than the horizontal reduction ratio, instead of inputting it.
A fifth embodiment is described, in which an imaging resolution is determined in consideration of a convergence angle. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description.
In this embodiment, a correction setting unit 67 for correcting the camera information is added to the construction of the first embodiment.
The correction setting unit 67 corrects the pixel pitch used for determining the imaging resolution if a convergence angle θ is given between the taking optical systems 68L and 68R, so that their optical axes PL and PR, which are not parallel, can be treated as approximately parallel.
In the above description, the pixel pitch is corrected. However, the shift amount of a pixel can instead be corrected to determine the imaging resolution. If there is a convergence angle θ, the measurement distance is infinity (L = ∞) on the condition of “d = (f/B)·tan θ”, where “d” is the shift amount of the pixel, “f” is the focal length of the taking optical systems and “B” is the pixel pitch. If the axes of the taking optical systems are parallel without a convergence angle, “d = 0” when the measurement distance is infinity. In short, the shift amount of a pixel is larger with the convergence angle θ than without it, and the view images can be treated as if there were no convergence angle by correcting this amount. Accordingly, the imaging resolution can be determined by use of the corrected shift amount d1 according to “d1 = d0 − (f/B)·tan θ”, where “d0” is the shift amount of the pixel before the correction, and “d1” is the shift amount after the correction.
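The shift-amount correction follows directly from the relationship above; a minimal sketch using the symbols as defined in the text.

```python
import math

def correct_shift_for_convergence(d0_px: float, focal_m: float,
                                  pixel_pitch_m: float,
                                  theta_rad: float) -> float:
    """d1 = d0 - (f/B)*tan(theta): subtract the pixel shift that a point
    at infinity exhibits due to the convergence angle, so the corrected
    shift can be treated as if the optical axes were parallel."""
    return d0_px - (focal_m / pixel_pitch_m) * math.tan(theta_rad)
```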
The above corrections do not strictly eliminate the influence of the convergence angle, but are effective enough for determining the imaging resolution used in obtaining an estimated focusing distance. In a stereo camera for stereoscopic photography to be viewed with human eyes, a convergence angle is normally assigned for easy stereoscopy, and the embodiment is effective in treating stereo images from such a stereo camera.
A sixth embodiment is described, in which an area is designated in which the correlation processing and the parallax determination are performed. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description. In this embodiment, the focus area converted by the area converter 24 is supplied to the first arithmetic processing unit 21, and the correlation processing and the parallax determination are performed only within the focus area of the reduced view images, so that the processing time is shortened further.
In the present embodiment, the area for the correlation processing and the parallax determination is limited to the focus area specified from the tag information. However, it is also possible to utilize the embodiment with the focus area designated by other methods, for example, by analyzing the view images.
A seventh embodiment is described as an example of a focusing distance estimating unit for estimating and outputting the focusing distance at the time of photographing a stereo image. Portions of the embodiment other than those described hereinafter are the same as the first embodiment. Substantially the same elements are designated with the same reference numerals, to omit further description.
To a distance step input port 71, reference focusing distances are input in correspondence with the focus positions at which the taking optical systems of the stereo camera can be set. For example, if the focus is adjusted by moving the taking optical systems of the stereo camera stepwise among focus positions corresponding to object distances of 50 cm, 60 cm, 80 cm, 1 m, 1.2 m, 1.5 m and the like, those object distances are input as the reference focusing distances.
A focus area input port 72 is supplied with area information on where the stereo camera focuses. If the stereo camera is controlled to focus on the center of the image frame, the area information input to the focus area input port 72 is the coordinates of the center area in a view image.
An output interface 73 outputs an estimated focusing distance determined by the distance estimating unit 22, for example by recording this to a recording medium.
The focusing distance estimating unit 70 described above can be used in connection with a stereo camera. In this structure, a memory is provided in the stereo camera for storing the respective reference focusing distances, the camera information and a focus area, so that the information can be retrieved from the memory, or stereo images can be input directly from the memory. When the focusing distance estimating unit 70 is connected with the stereo camera during photography, the focus area determined in the photography can be retrieved even when the focus area changes between shots. It is also possible to incorporate the function of the focusing distance estimating unit 70 in a stereo camera for detecting the focus position, instead of an encoder for detecting the focus position of the taking optical systems.
In the first to sixth embodiments described above, three dimensional position measuring apparatuses have been described as examples. However, a calibration data selection device can be constructed from the functions up to and including the selection of the calibration data. Also, in the embodiments, the reduction ratio is determined within the apparatus, but it is possible to determine a reduction ratio beforehand together with the calibration datasets and input it with the calibration datasets. The structures of the embodiments described above can also be combined, provided no inconsistency arises.
Number | Date | Country | Kind
2010-087519 | Apr 2010 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
PCT/JP2011/058427 | 4/1/2011 | WO | 00 | 9/14/2012