The present invention relates to a distance measuring apparatus and distance measuring method that measure a distance to an object using a photographic image.
It has heretofore been proposed to image a road situation by means of a camera installed in a vehicle, and to support driving and/or control the vehicle based on an image captured thereby.
In this case, it is extremely important to detect an object such as a road traffic sign, notice board, traffic signal, or the like, present in an image captured by the camera by executing predetermined processing on the image, and measure the distance between the detected object and the camera.
In general, the distance between a camera and an object (object distance) can be found by means of equation 1 below.
Object distance=(camera focal length×actual object size)/(pixel pitch×number of object pixels) (Equation 1)
Here, the actual object size is the actual size of an object, the pixel pitch is the size of one pixel of an imaging element (CCD, CMOS, or the like), and the number of object pixels is the number of pixels across which the object is displayed. That is to say, “pixel pitch×number of object pixels” represents the image size of an object. The focal length and pixel pitch are camera specification characteristics, and are normally fixed or known values for a particular camera.
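To make the relationship concrete, the following Python sketch evaluates equation 1 directly; all numeric values are hypothetical and serve only to keep the units consistent (millimeters throughout).

    def object_distance(focal_length_mm, actual_size_mm, pixel_pitch_mm, num_object_pixels):
        # Equation 1: object distance = (camera focal length x actual object size)
        #             / (pixel pitch x number of object pixels)
        return (focal_length_mm * actual_size_mm) / (pixel_pitch_mm * num_object_pixels)

    # Hypothetical values: 35 mm focal length, 600 mm sign diameter,
    # 0.0042 mm (4.2 um) pixel pitch, sign imaged across 50 pixels.
    print(object_distance(35.0, 600.0, 0.0042, 50))  # 100000.0 mm, i.e. 100 m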
The technologies disclosed in Patent Literature 1 and 2 are examples of technologies that measure the distance between a camera and an object using the relationship in equation 1. The technology disclosed in Patent Literature 1 images road signs, traffic signals, or suchlike objects whose sizes have been unified according to a standard, and measures the distance to an object based on the size of an object in an image.
The technology disclosed in Patent Literature 2 images a vehicle number plate, measures the size of characters on the number plate in the image, and measures the distance from the camera to the vehicle by comparing the size of the measured characters with the size of a known character decided according to a standard.
Also, Patent Literature 3 discloses a position recording apparatus whereby accurate position recording of an object can be performed by taking object detection error into account. In Patent Literature 3, a vehicle's own position is measured using a GPS or suchlike positioning apparatus, and the relative positions (relative distance and relative direction) of an object and the vehicle are calculated from a photographic image; error occurs both in measurement of the vehicle's own position and in calculation of the relative positions. A technology is therefore disclosed whereby maximum error is compared among a plurality of points at which an object is detected, and position information of the object captured at the point at which maximum error is smallest is recorded.
However, in the technologies disclosed in Patent Literature 1 and Patent Literature 2, detection error when an object is detected from an image is not taken into account. More particularly, when an object such as a road sign or a number plate of a vehicle ahead is imaged by a vehicle-mounted camera, the object is often tens of meters away from the vehicle-mounted camera, and therefore an object in an image is small in size. As a result, relative error, which is the ratio between image size and error included in image size, is large. As this relative error increases in size, distance measurement accuracy degrades.
On the other hand, in the technology disclosed in Patent Literature 3, object detection error is taken into account, but only maximum error is taken into account as a theoretical value, and actual detection error is not taken into account. Also, since maximum error is fixed for each measurement position, this is in effect the same as selecting an optimal position, and the influence of illumination variation and so forth is not taken into account. That is to say, it is difficult to sufficiently suppress degradation of distance detection accuracy due to object detection error.
It is an object of the present invention to provide a distance measuring apparatus and distance measuring method that sufficiently suppress degradation of distance detection accuracy due to object detection error, and measure the distance to an imaged object with a high degree of accuracy.
One aspect of a distance measuring apparatus of the present invention employs a configuration having: a region image detection section that detects, from a captured image of an object, region images of a plurality of regions that are included in the object and whose sizes are known; a relative error comparison section that uses image sizes of the plurality of regions detected by the region image detection section, and information regarding sizes that are known in the plurality of regions, to select a region image size that minimizes relative error that is a ratio between the image size and error included in the image size; and a distance estimation section that uses the selected region image size to estimate the distance to the object.
The present invention can sufficiently suppress degradation of distance detection accuracy due to object detection error, and measure the distance to an imaged object with a high degree of accuracy.
Now, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
[1] Overall Configuration
First through third region detection sections 101 through 103 detect each region corresponding to a speed limit sign from input image information, count the number of pixels of a detected region, and output the counted numbers of pixels to relative error comparison section 104 as measured image sizes D1 through D3.
Specifically, first region detection section 101 detects the outer circle of the speed limit sign in the input image, and second and third region detection sections 102 and 103 each detect a different region of the sign whose actual size is likewise standardized.
Relative error comparison section 104 uses image sizes D1, D2, and D3 of a plurality of regions detected by first, second, and third region detection sections 101, 102, and 103, and information regarding sizes that are known in a plurality of regions, to select a region image size that minimizes relative errors d1/D1, d2/D2, and d3/D3, which are ratios between image sizes D1, D2, and D3, and errors d1, d2, and d3 included in image sizes D1, D2, and D3.
Distance estimation section 105 uses the image size selected by relative error comparison section 104 to estimate the distance to the object. To be more specific, distance estimation section 105 estimates the distance to the object by applying the image size output from relative error comparison section 104 to the number of object pixels in above equation 1.
[2] Processing Using Relative Error
Here, processing will be described that uses relative error to select an image size of a region to be used in distance calculation.
First through third region true image sizes C1 through C3 are expressed as shown in equations 2 below using measured image sizes D1 through D3 and measured errors d1 through d3.
C1=D1+d1
C2=D2+d2
C3=D3+d3 (Equations 2)
C1 through C3 and d1 through d3 are unknown values. Since C1 through C3 are proportional to a standardized object size, the relationships in equations 3 below apply.
C1=k21×C2
C3=k23×C2 (Equations 3)
Here, k21 and k23 are known constants. That is to say, from any one of C1 through C3, relative error comparison section 104 can calculate the other two. Below, it is assumed that C1 through C3 generally correspond to the same distance Z.
If distances calculated from D1, D2, and D3 are designated Z+z1, Z+z2, and Z+z3, respectively, the relationships in equations 4 below are found from the relationship between object distance and image size. Here, z1, z2, and z3 are distance errors when image size errors d1, d2, and d3 are included.
z1/Z=d1/D1
z2/Z=d2/D2
z3/Z=d3/D3 (Equations 4)
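Equations 4 can be confirmed directly from equation 1. Writing K=(camera focal length×actual object size)/pixel pitch, the true distance is Z=K/C, and the distance calculated from measured image size D is Z+z=K/D. Hence z/Z=(K/D−K/C)/(K/C)=C/D−1=(C−D)/D=d/D, which is exactly the relationship stated in equations 4 for each region.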
This shows that the relative error of the image size of each region is equal to the relative error of the corresponding calculated distance. Therefore, minimizing relative error enables the accuracy of a calculated distance to be improved. However, since C1 through C3 and d1 through d3 are unknown, the true value of relative error cannot be found.
Thus, the present inventor found a method whereby an image size that minimizes relative error is found by using information regarding sizes that are known in a plurality of regions, and the accuracy of a calculated distance is improved by performing distance calculation using that image size. In actuality, in this embodiment, information regarding size ratios that are known in a plurality of regions, such as shown in equations 3, is used as information regarding sizes that are known in a plurality of regions.
The reason for using relative error is as follows. Namely, if selection of an image size that minimizes error is attempted by comparing absolute errors, since absolute error is necessarily smaller for a region with a smaller image size, the smaller the image size of a region, the likelier it is to be selected as an image used in distance calculation. Since distance measuring apparatus 100 uses relative error as in this embodiment, an image size suitable for use in distance calculation can be selected on an equitable basis, regardless of the size of a region.
In this embodiment, the following three methods are included as ways of finding an image size that minimizes relative error.
[2-1] Method 1: Using a Relative Error Sum Minimization Rule
Relative error comparison section 104 uses measured image sizes D1 through D3 output from first through third region detection sections 101 through 103, and first through third region measured errors d1 through d3, to calculate relative error sum d1/D1+d2/D2+d3/D3. Then relative error comparison section 104 finds an image size that minimizes this relative error sum d1/D1+d2/D2+d3/D3, determines that image size to be an image size suitable for use in distance calculation, and sends that image size to distance estimation section 105.
Specifically, relative error comparison section 104 finds an image size that minimizes this relative error sum d1/D1+d2/D2+d3/D3 by means of the following kind of procedure.
(i) First, it is assumed that C2 is a certain value. Normally, given the size relationship among the three regions, the true value C2 can be assumed to lie within the range [D3, D1].
(ii) The assumed C2 is then used in equations 3 to calculate the values of C1 and C3.
(iii) Next, the values of d1, d2, and d3 are calculated using the values of C1 through C3, the values of D1 through D3, and equations 2.
(iv) Relative error sum d1/D1+d2/D2+d3/D3 is then calculated.
Relative error comparison section 104 varies the value of C2 assumed in (i) above within the range [D3, D1], determines a value of C2 that minimizes the relative error sum obtained in (iv) above to be an optimal image size for distance calculation, and outputs that value of C2 to distance estimation section 105.
A more specific example of processing using this relative error sum minimization rule will now be described. First, in step ST 201, relative error comparison section 104 acquires measured image sizes D1 through D3 output from first through third region detection sections 101 through 103.
Next, in step ST 202, relative error comparison section 104 sets variation b, which sequentially varies assumed C2, by dividing the difference between acquired D1 and D3 into N equal parts. That is to say, relative error comparison section 104 sets variation b using D1−D3=N×b.
Next, in step ST 203, n is set to 0, and Emin, which is the minimum value of relative error sum E, is set to ∞, as initial values. Then, in step ST 204, the assumed C2 value is set to C2=D3+n×b. In step ST 205, C1=k21×C2 and C3=k23×C2 are calculated using equations 3.
In step ST 206, measured errors d1 through d3 are calculated using equations 2, and in step ST 207, relative error sum E (=d1/D1+d2/D2+d3/D3) is calculated.
In step ST 208, it is determined whether or not relative error sum E found in step ST 207 is less than minimum value Emin up to that point, and if E is less than Emin (YES), the processing flow proceeds to step ST 209, whereas if E is not less than Emin (NO), the processing flow proceeds to step ST 210.
In step ST 209, Emin is set to E calculated in step ST 207, and C2 at that time is set as the optimal C (Copt) and temporarily stored. In step ST 210, it is determined whether or not n has reached N, and if n≠N (NO), the processing flow proceeds to step ST 211, whereas if n=N (YES), the processing flow proceeds to step ST 212.
In step ST 211, n is incremented, and the processing flow returns to step ST 204.
In step ST 212, Copt stored in step ST 209 is decided upon as C2, and relative error comparison processing is terminated.
In this way, a value of C2 that minimizes the relative error sum—that is, an optimal image size for distance calculation—can be found.
In the above example, a case has been described in which an assumed C2 value is varied within the range [D3, D1], and C2 that minimizes relative error sum d1/D1+d2/D2+d3/D3 is determined to be an optimal image size for distance calculation. Provision may also be made for C1 or C3 to be assumed to be a certain value instead of C2 in the above example, for the same kind of method as above to be used to determine a value of C1 or C3 that minimizes the relative error sum to be an optimal image size for distance calculation, and for this value to be output to distance estimation section 105.
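A minimal Python sketch of this procedure (steps ST 201 through ST 212) follows; the measured sizes and the ratio constants k21 and k23 are hypothetical placeholders, and absolute errors are used in the sum as an assumption, since the text leaves the sign handling implicit.

    def find_optimal_C2(D1, D2, D3, k21, k23, N=100):
        # Method 1: scan assumed C2 over the range [D3, D1] and return the
        # value that minimizes relative error sum d1/D1 + d2/D2 + d3/D3.
        b = (D1 - D3) / N                    # step ST202: variation b
        Emin, Copt = float("inf"), None      # step ST203: initial values
        for n in range(N + 1):               # steps ST204-ST211
            C2 = D3 + n * b                  # assumed true image size C2
            C1, C3 = k21 * C2, k23 * C2      # equations 3
            d1, d2, d3 = C1 - D1, C2 - D2, C3 - D3            # equations 2
            E = abs(d1) / D1 + abs(d2) / D2 + abs(d3) / D3    # step ST207
            if E < Emin:                     # steps ST208-ST209
                Emin, Copt = E, C2
        return Copt                          # step ST212: decided C2

    # Hypothetical measured sizes and size ratios (placeholders only):
    print(find_optimal_C2(64.0, 45.0, 26.0, k21=1.4, k23=0.6))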
[2-2] Method 2: Selecting the most accurate value from existing measured image sizes D1 through D3
In method 1, a method of finding an optimal C2 was described, whereas here, a method will be described whereby an optimal C2 is not found, but the most accurate value is selected from existing measured image sizes D1 through D3.
First, relative error comparison section 104 assumes that d1=0, and sets C1=D1. Relative error comparison section 104 also finds C2 and C3 by using size ratios that are known in a plurality of regions. For example, relative error comparison section 104 uses known ratio C1:C2:C3 to find C2 and C3 from C1, and furthermore finds d2 and d3. Then relative error comparison section 104 finds relative error sum e1=d2/D2+d3/D3 as a relative error sum of the other regions.
Similarly, relative error comparison section 104 assumes that d2=0, and sets C2=D2. Relative error comparison section 104 also uses known ratio C1:C2:C3 to find C1 and C3 from C2, and furthermore finds d1 and d3. Then relative error comparison section 104 finds relative error sum e2=d1/D1+d3/D3 as a relative error sum of the other regions.
In a similar way, relative error comparison section 104 also assumes that d3=0, and sets C3=D3. Relative error comparison section 104 also uses known ratio C1:C2:C3 to find C1 and C2 from C3, and furthermore finds d1 and d2. Then relative error comparison section 104 finds relative error sum e3=d1/D1+d2/D2 as a relative error sum of the other regions.
Relative error comparison section 104 detects the minimum value from among other-region relative error sums e1 through e3 found in this way. The image size of the region whose error was assumed to be 0 when the smallest other-region relative error sum was obtained is then selected as the region image size that minimizes relative error. For example, if e1 is the smallest among other-region relative error sums e1 through e3, measured image size D1 is selected as the region image size that minimizes relative error. Similarly, if e2 is the smallest, measured image size D2 is selected.
Relative error comparison section 104 then determines selected measured image size D1, D2, or D3 to be an optimal image size for distance calculation, and outputs selected measured image size D1, D2, or D3 to distance estimation section 105.
An actual example will now be given. Using measured image sizes D1 through D3 and the known size ratio C1:C2:C3, relative error comparison section 104 calculates other-region relative error sums e1 through e3 as described above; for example, e1=8.71 is obtained by assuming d1=0 and summing the resulting relative errors d2/D2 and d3/D3 of the other regions. The smallest of e1 through e3 then determines which of measured image sizes D1 through D3 is output to distance estimation section 105.
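A Python sketch of method 2 follows. The known size ratio C1:C2:C3 is expressed through the constants k21 and k23 of equations 3, so that C1:C2:C3 = k21:1:k23; the numeric inputs are hypothetical placeholders, and absolute errors are used as an assumption.

    def select_best_measured_size(D, k21, k23):
        # Method 2: for each region i, assume di = 0 (Ci = Di), derive the other
        # true sizes from the known ratio, and compute the relative error sum of
        # the other two regions. Select the measured size whose assumption gives
        # the smallest other-region error sum.
        r = [k21, 1.0, k23]                  # C1:C2:C3 = k21:1:k23 (equations 3)
        sums = []
        for i in range(3):
            scale = D[i] / r[i]              # Ci = Di fixes the overall scale
            C = [rj * scale for rj in r]     # true sizes via the known ratio
            sums.append(sum(abs(C[j] - D[j]) / D[j] for j in range(3) if j != i))
        return D[sums.index(min(sums))]

    # Hypothetical measured sizes and ratio constants (placeholders only):
    print(select_best_measured_size([64.0, 45.0, 26.0], k21=1.4, k23=0.6))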
[2-3] Method 3: Minimizing Maximum Relative Error
First, relative error comparison section 104 assumes that d1=0, and sets C1=D1. Then relative error comparison section 104 finds d2/D2 and d3/D3, and selects the maximum value from among d1/D1, d2/D2, and d3/D3 (the maximum relative error).
Similarly, relative error comparison section 104 selects the maximum relative error from among d1/D1, d2/D2, and d3/D3 when d2=0 is assumed and C2=D2 is set. Also, similarly, relative error comparison section 104 selects the maximum relative error from among d1/D1, d2/D2, and d3/D3 when d3=0 is assumed and C3=D3 is set.
Next, relative error comparison section 104 finds the smallest maximum relative error among the maximum relative errors found for d1=0, d2=0, and d3=0, respectively. The image size of the region whose error was assumed to be 0 when this smallest maximum relative error was obtained is then selected as the region image size that minimizes relative error. For example, if the maximum relative error found for d1=0 is the smallest of the three, measured image size D1 is selected as the region image size that minimizes relative error. Similarly, if the maximum relative error found for d2=0 is the smallest, measured image size D2 is selected.
Relative error comparison section 104 then determines selected measured image size D1, D2, or D3 to be an optimal image size for distance calculation, and outputs selected measured image size D1, D2, or D3 to distance estimation section 105.
A case in which relative error comparison section 104 uses D1=64, D2=45, and D3=26 will now be described. First, when relative error comparison section 104 assumes that d1=0 and sets C1=D1, maximum relative error max(d1/D1, d2/D2, d3/D3)=4.71% is found.
Next, when relative error comparison section 104 assumes that d2=0 and sets C2=D2, max(d1/D1, d2/D2, d3/D3)=max(5.47, 0, 2.12)=5.47% is found. Similarly, when relative error comparison section 104 assumes that d3=0 and sets C3=D3, max(d1/D1, d2/D2, d3/D3)=max(3.59, 1.78, 0)=3.59% is found.
Then, since min(4.71, 5.47, 3.59)=3.59, measured image size D3 is selected as an image size to be used in object distance calculation.
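Method 3 differs from method 2 only in the aggregation rule, so it can be sketched in the same form; the ratio constants below are hypothetical, and the exact percentages 4.71%, 5.47%, and 3.59% in the text depend on the true standardized ratios, which are not reproduced here.

    def select_by_minimax(D, k21, k23):
        # Method 3: for each region i, assume di = 0 (Ci = Di) and take the
        # maximum relative error over all three regions; select the measured
        # size whose assumption minimizes that maximum (a minimax rule).
        r = [k21, 1.0, k23]                  # C1:C2:C3 = k21:1:k23 (equations 3)
        max_errors = []
        for i in range(3):
            scale = D[i] / r[i]
            C = [rj * scale for rj in r]
            max_errors.append(max(abs(C[j] - D[j]) / D[j] for j in range(3)))
        return D[max_errors.index(min(max_errors))]

    # The text's example sizes with hypothetical ratio constants:
    print(select_by_minimax([64.0, 45.0, 26.0], k21=1.4, k23=0.6))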
[3] Effects
As described above, according to this embodiment, by providing region detection sections 101 through 103 that detect, from a captured image of an object, region images of a plurality of regions that are included in the object and whose sizes are known, relative error comparison section 104 that uses image sizes D1 through D3 of a plurality of regions detected by region detection sections 101 through 103, and information regarding sizes that are known in the plurality of regions, to select a region image size that minimizes relative error that is a ratio between the image size and error included in the image size, and distance estimation section 105 that uses the selected region image size to estimate the distance to the object, degradation of distance detection accuracy due to object detection error can be sufficiently suppressed, and the distance to an imaged object can be measured with a high degree of accuracy.
In Embodiment 2 of the present invention, a case is described in which probability density distributions of relative errors d1/D1, d2/D2, and d3/D3 are used. These probability density distributions are found prior to actual distance measurement as prior statistical knowledge.
The configuration of distance measuring apparatus 300 of this embodiment differs from that of distance measuring apparatus 100 of Embodiment 1 in that probability density distribution calculation section 301 is added, and relative error comparison section 104 is replaced by relative error comparison section 302.
Probability density distribution calculation section 301 finds a probability density distribution as prior statistical knowledge prior to actual distance measurement. Probability density distribution calculation section 301 inputs sample image data, performs detection of first through third regions on a given number of samples by means of a predetermined method, and obtains probability density distributions indicating how the relative error values are distributed for each region.
Relative error comparison section 302 uses image sizes D1 through D3 output from first through third region detection sections 101 through 103, and information regarding sizes that are known in a plurality of regions, to calculate relative errors d1/D1, d2/D2, and d3/D3.
These relative errors d1/D1, d2/D2, and d3/D3 can be found, for example, by performing the processing in (i) through (iv) below.
(i) First, it is assumed that C2 is a certain value. Normally, given the size relationship among the three regions, the true value C2 can be assumed to lie within the range [D3, D1].
(ii) The assumed C2 is then used in equations 3 to calculate the values of C1 and C3.
(iii) Next, the values of d1, d2, and d3 are calculated using the values of C1 through C3, the values of D1 through D3, and equations 2.
(iv) Relative errors d1/D1, d2/D2, and d3/D3 are then calculated.
Next, relative error comparison section 302 reads probability densities P1, P2, and P3 corresponding to relative errors d1/D1, d2/D2, and d3/D3 from probability density distributions p1, p2, and p3 found as prior statistical knowledge by probability density distribution calculation section 301. Relative error comparison section 302 then calculates relative error probability density product P1×P2×P3 by multiplying together read probability densities P1, P2, and P3.
Relative error comparison section 302 varies the value of C2 assumed in (i) above within the range [D3, D1], and calculates relative errors d1/D1, d2/D2, and d3/D3 corresponding thereto. Relative error comparison section 302 also reads new probability densities P1, P2, and P3 corresponding to calculated relative errors d1/D1, d2/D2, and d3/D3 from probability density distributions p1, p2, and p3, and calculates new relative error probability density product P1×P2×P3.
Relative error comparison section 302 finds the largest probability density product from among a plurality of probability density products P1×P2×P3 calculated in this way. Then relative error comparison section 302 determines the value of C2 that maximizes the probability density product, that is, the value corresponding to the most probable combination of relative errors, to be an optimal image size for distance calculation, and outputs that value of C2 to distance estimation section 105.
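A Python sketch of this Embodiment 2 selection rule follows, assuming the densities p1, p2, and p3 are supplied as callables. Gaussian densities are used purely as hypothetical stand-ins for the statistics found by probability density distribution calculation section 301, and the three errors are assumed independent, hence the product.

    import math

    def find_most_probable_C2(D1, D2, D3, k21, k23, p1, p2, p3, N=100):
        # Scan assumed C2 over [D3, D1]; for each candidate, compute relative
        # errors via equations 2 and 3, look up their probability densities,
        # and keep the C2 whose error triple has the highest joint density.
        b = (D1 - D3) / N
        Pbest, Copt = -1.0, None
        for n in range(N + 1):
            C2 = D3 + n * b
            C1, C3 = k21 * C2, k23 * C2
            e1, e2, e3 = (C1 - D1) / D1, (C2 - D2) / D2, (C3 - D3) / D3
            P = p1(e1) * p2(e2) * p3(e3)   # relative error probability density product
            if P > Pbest:
                Pbest, Copt = P, C2
        return Copt

    def gaussian(sigma):
        # Hypothetical stand-in for a measured density: zero-mean Gaussian.
        return lambda e: math.exp(-e * e / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

    print(find_most_probable_C2(64.0, 45.0, 26.0, 1.4, 0.6,
                                gaussian(0.03), gaussian(0.05), gaussian(0.08)))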
As described above, according to this embodiment, a region image size that minimizes relative errors d1/D1, d2/D2, and d3/D3 is selected using relative error probability density distributions for a plurality of regions in addition to image sizes D1 through D3 of a plurality of regions detected by detection sections 101 through 103 and information regarding sizes that are known in a plurality of regions. That is to say, whereas in Embodiment 1 an optimal region image size is selected based on a relative error sum, in this embodiment an optimal region image size is selected based on a relative error probability density product. By this means, degradation of distance detection accuracy due to object detection error can be sufficiently suppressed in the same way as in Embodiment 1, and the distance to an imaged object can be measured with a higher degree of accuracy.
If it is difficult to find a probability density distribution directly, a probability density distribution can be found approximately using a relative error maximum value. Specifically, if maximum values g1, g2, and g3 of relative errors d1/D1, d2/D2, and d3/D3 are acquired by means of sampling statistics or theoretical estimation, probability density distributions can be set approximately based on these maximum values.
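For example, the approximation might be realized as follows; the uniform shape and the g values are assumptions made only for illustration, and the original approximation may take a different form.

    def uniform_density(g):
        # Approximate density for a relative error whose maximum magnitude is g:
        # uniform on [-g, g] (one simple bounded choice).
        return lambda e: 1.0 / (2.0 * g) if -g <= e <= g else 0.0

    p1, p2, p3 = uniform_density(0.05), uniform_density(0.08), uniform_density(0.12)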
In Embodiment 3 of the present invention, a method is described whereby a camera parameter such as camera exposure is controlled, and each region of a road sign or the like is detected with a higher degree of accuracy.
The configuration of distance measuring apparatus 400 of this embodiment differs from that of distance measuring apparatus 100 of Embodiment 1 in that region quality determination section 401, camera parameter control section 402, and storage section 403 are added.
Region quality determination section 401 determines the imaging quality of each region output from first through third region detection sections 101 through 103, decides a region that should be re-detected in the next frame, and outputs information indicating a decided region to camera parameter control section 402.
Camera parameter control section 402 estimates optimal imaging conditions for the region that region quality determination section 401 has decided should be re-detected, and sets a camera parameter (for example, aperture, focus, sensitivity, or the like) so that these optimal imaging conditions are achieved.
Storage section 403 performs multi-frame comparisons of regions output from first through third region detection sections 101 through 103, and stores the captured image with the best imaging quality for each region. Here, it is necessary to take into consideration the distance variation that occurs between imaging times; it is desirable to set a short frame imaging interval in order to minimize distance variation between frames.
As described above, the present invention detects a plurality of regions from an image and performs distance measurement using images of the detected plurality of regions, and therefore the higher the imaging quality of each region, the higher is the accuracy of distance measurement. However, imaging conditions for improving imaging quality may differ for each region.
Thus, in this embodiment, region quality determination section 401 determines the imaging quality of a plurality of regions, and decides a region that should be re-detected in the next frame. Then a camera parameter suitable for a region that should be re-detected is set by camera parameter control section 402, and the camera captures a next-frame image. By this means, a high-quality region image is stored in storage section 403 for each region.
Relative error comparison section 104 and distance estimation section 105 use a high-quality region image stored in storage section 403 to perform the processing described in Embodiment 1 or Embodiment 2. By this means, degradation of distance detection accuracy due to object detection error can be suppressed to a greater extent, and the distance to an imaged object can be measured with a higher degree of accuracy.
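The text does not specify the quality measure or the parameter update rule, but the Embodiment 3 control loop can be sketched as follows; the quality score, brightness measure, and exposure heuristic are all hypothetical assumptions made only for illustration.

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        quality: float          # imaging-quality score in [0, 1] (assumed metric)
        mean_brightness: float  # normalized mean brightness (assumed metric)

    @dataclass
    class Camera:
        exposure: float         # relative exposure setting

    def update_camera_for_worst_region(regions, camera, quality_threshold=0.5):
        # Determine the region with the worst imaging quality, decide whether it
        # should be re-detected in the next frame, and adjust a camera parameter
        # toward conditions assumed better for that region.
        worst = min(regions, key=lambda r: r.quality)
        if worst.quality < quality_threshold:
            if worst.mean_brightness < 0.5:
                camera.exposure *= 1.2   # brighten an under-exposed region
            else:
                camera.exposure *= 0.8   # darken an over-exposed region
        return worst

    regions = [Region("outer circle", 0.9, 0.6),
               Region("numerals", 0.4, 0.3),
               Region("inner circle", 0.7, 0.5)]
    cam = Camera(exposure=1.0)
    print(update_camera_for_worst_region(regions, cam).name, cam.exposure)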
In the above embodiments, road signs have been described by way of example, but the present invention is not limited to this, and a vehicle number plate may also be used, for example. Detecting a vehicle number plate enables the distance to a vehicle ahead to be measured, for example.
Also, in the above embodiments, first through third regions are detected, but the present invention is not limited to this, and provision may also be made for four or more regions to be detected, for the image sizes of these regions and known size information to be used to select a region image size that minimizes relative error, and for the selected region image size to be used to estimate the distance to an object. Processing performed when four or more regions are used in this way is basically the same as when three regions are used (as in the above embodiments), the only difference being that the number of regions is increased.
The disclosure of Japanese Patent Application No. 2009-134225, filed on Jun. 3, 2009, including the specification, drawings and abstract, is incorporated herein by reference in its entirety.
The present invention is suitable for use in a distance measuring apparatus that measures distances to road signs, traffic signals, or suchlike objects, for example, whose sizes have been unified according to a standard.