The present disclosure relates to a road surface estimation device, a vehicle control device, and a road surface estimation method.
Devices for estimating a road profile and so on using a computer, based on an image captured by a camera, have been developed in line with increasing computer throughput. One proposed road profile estimation device is configured to roughly estimate the profile of a road on which a vehicle is traveling by checking a road profile estimated on the basis of an image captured by a monocular camera against a road profile in a digital road map (for example, Japanese Patent Unexamined Publication No. 2001-331787).
In studies of image processing technology related to vehicle-mounted stereo cameras, detection of the road surface is an important issue. This is because accurate road-surface detection enables more efficient searching for a driving route and recognition of obstacles, such as pedestrians and other vehicles.
The present disclosure offers a road surface estimation device with improved detection accuracy.
The road surface estimation device of the present disclosure includes a spatial measurement unit, a filter, and a road surface estimator. The spatial measurement unit measures a three-dimensional measurement point cloud on a road surface on the basis of an image received from a stereo camera or a camera capable of three-dimensional measurement. The filter filters the three-dimensional measurement point cloud on the basis of a road surface model created based on map information so as to obtain road surface candidate points. The road surface estimator estimates the road surface on the basis of the road surface candidate points.
A vehicle control device of the present disclosure includes the road surface estimation device of the present disclosure and a controller that controls a vehicle in which the vehicle control device is installed. The controller controls the vehicle according to a road surface estimated by the road surface estimation device.
The present disclosure offers a road surface estimation device with improved detection accuracy.
Prior to describing an exemplary embodiment of the present disclosure, problems in the prior art are described briefly. In general, a road surface has differences in level (height) and dents. To identify a stereoscopic shape of a difference in level, a dent, or the like in the road surface by passive three-dimensional measurement without emission of a laser beam, a plurality of parallax images taken by two or more cameras (stereo cameras) is necessary. Recently, global matching methods, such as Semi-Global Matching (SGM), have been developed to obtain road surface information as a point cloud in a three-dimensional space from images taken by stereo cameras, without using edge information, such as white lines on the road surface. However, the point cloud obtained by SGM contains errors. Due to those errors, the points in the point cloud that are supposed to be distributed on the road surface are scattered in the vertical direction with respect to the road surface. As a result, the stereoscopic shape of the road surface cannot be accurately estimated on the basis of the point cloud.
The exemplary embodiment of the present disclosure is described below with reference to drawings. Same reference marks in the drawings indicate identical or equivalent parts.
Vehicle control device 200 is connected to external image capture unit 110, and includes road surface estimation device 100 and vehicle controller 160. Alternatively, road surface estimation device 100 or vehicle controller 160 may include image capture unit 110.
Road surface estimation device 100 estimates the road surface shape (profile), and includes spatial measurement unit 120, road surface model creator 130, filter 140, and road surface estimator 150.
Image capture unit 110 captures a front view of the own vehicle. For example, image capture unit 110 is a stereo camera including a left camera and a right camera. Alternatively, image capture unit 110 is a camera capable of three-dimensional measurement, such as a TOF (Time Of Flight) camera.
Spatial measurement unit 120 receives, from image capture unit 110, a left image and a right image obtained by capturing the same object with two cameras, i.e., the left camera and right camera. Spatial measurement unit 120 then measures a three-dimensional position of the same object from these images.
u-v coordinate value (ul, vl) of position Q in left image 112 matches x-y coordinate value (ul, vl) of position Q centering on focal point O′ of the left camera. u-v coordinate value (ur, vr) of position R in right image 114 matches x-y coordinate value (ur, vr) of position R centering on focal point O of the right camera.
First, x-y-z coordinate value (x, y, z) of position P of the object centering on focal point O of the right camera is expressed using u-v-disparity coordinate value (ur, vr, d) of position P of the object, distance b between the cameras, and focal length f. Here, d, defined as ul − ur, is the disparity value.
Assume that point Q′ is the point where line segment OS, which is line segment O′P translated in parallel so as to pass through point O, crosses right image 114; point Q′ then has x-y coordinate value (ul, vl) centering on point O. Formula (1) below is derived by focusing attention on triangle OPS.
x : b = ur : d (1)
The same formula is established for the y coordinate (the depth direction in the drawing), yielding y : b = vr : d (2).
Next, u-v-disparity coordinate value (ur, vr, d) of position P of the object is expressed using x-y-z coordinate value (x, y, z) of position P of the object centering on focal point O of the right camera, distance b between the cameras, and focal length f. Formula (3) below is derived from the geometry of the projection through focal point O.
x : ur = y : vr = z : f (3)
From Formula (3), Formula (4) below, which is a conversion equation from x-y-z coordinate value (x, y, z) to u-v-disparity coordinate value (ur, vr, d), is derived: ur = f·x/z, vr = f·y/z, d = f·b/z (4).
Accordingly, a three-dimensional measurement point of an object can be expressed in both the x-y-z coordinate system and the u-v-disparity coordinate system.
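As a concrete illustration of these conversions, here is a minimal Python sketch assuming a pinhole stereo model with baseline b and focal length f expressed in pixels; the function and variable names are hypothetical, not part of the disclosure.

```python
def uvd_to_xyz(u_r, v_r, d, b, f):
    """u-v-disparity to x-y-z, centered on the right camera's focal point."""
    z = b * f / d      # depth from disparity
    x = b * u_r / d    # Formula (1): x : b = u_r : d
    y = b * v_r / d    # Formula (2): y : b = v_r : d
    return x, y, z

def xyz_to_uvd(x, y, z, b, f):
    """x-y-z to u-v-disparity (Formula (4))."""
    return f * x / z, f * y / z, f * b / z
```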
Spatial measurement unit 120 detects the same object from the left image and the right image, and outputs a three-dimensional measurement point cloud of the same object. For example, to detect the same object, disparity information is used, such as a disparity map in which the disparity of a portion corresponding to each pixel of a left image or a right image is mapped. For example, SGM is used for obtaining the disparity map. When image capture unit 110 is a camera capable of three-dimensional measurement, spatial measurement unit 120 may output the results of three-dimensional measurement by image capture unit 110 as they are as the three-dimensional measurement point cloud. As described above, spatial measurement unit 120 measures the three-dimensional measurement point cloud on the road surface of a street based on the image input from image capture unit 110, which is mounted on a vehicle traveling along the street with the road surface in front of the vehicle.
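As one possible realization (not necessarily the configuration of the disclosure), the disparity map can be computed with the SGM-based matcher in OpenCV. The following Python sketch, with illustrative parameter values, outputs measurement points in the u-v-disparity coordinate system described above.

```python
import cv2
import numpy as np

# Illustrative SGM parameters; a real system tunes these per camera setup.
sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

def measure_point_cloud(left_gray, right_gray):
    """Return an (N, 3) array of (u_r, v_r, d) points with valid disparity."""
    # StereoSGBM returns fixed-point disparity scaled by 16.
    disp = sgm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    v, u = np.nonzero(disp > 0)   # keep pixels with a positive disparity
    return np.stack([u, v, disp[v, u]], axis=1)
```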
The three-dimensional measurement point cloud output from spatial measurement unit 120 contains errors, typically due to errors in the disparity information.
In the first exemplary embodiment, filter 140 therefore filters the three-dimensional measurement point cloud output from spatial measurement unit 120 before road surface estimator 150 estimates the road surface, and obtains road surface candidate points as a road surface candidate point cloud included in the three-dimensional measurement point cloud. Filter 140 applies the filtering with reference to road surface model 210, described below.
Road surface model creator 130 creates road surface model 210, which is information indicating the road surface shape (profile). For example, road surface model creator 130 creates road surface model 210 representing a planar or curved surface in a three-dimensional space as information indicating the road surface, based on three-dimensional map information and positional information of the own vehicle. For example, road surface model creator 130 detects the inclination of image capture unit 110 in a direction crossing the road, the inclination along the road, and the inclination in the shooting direction, in accordance with the three-dimensional map information and the positional information of the own vehicle, in order to align the coordinate system of the three-dimensional map with the coordinate system of image capture unit 110. Alternatively, road surface model creator 130 may detect these inclinations in accordance with inclination information input from a tilt sensor that detects the inclination of the own vehicle.
Three-dimensional map information is, for example, information on the longitudinal slope of the road, information on the transverse slope of the road, and road width information. The three-dimensional map information preferably has accuracy higher than general map information used in car navigation systems. The road width information may include a right width that is the width of the road from the center line to the right-hand side, and a left width that is the width of the road from the center line to the left-hand side. Road surface model creator 130 creates road surface model 210 in the form of a quadric surface with respect to the shooting direction of image capture unit 110 on the basis of the three-dimensional map information and the positional information of the own vehicle. For example, road surface model creator 130 creates road surface model 210 within the range of the road width. However, road surface model 210 may also be created outside the range of the road width by extending the transverse slope of the road.
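A minimal sketch of one way road surface model 210 might be represented, assuming the longitudinal slope, transverse slope, and an optional curvature term are taken from the three-dimensional map information at the vehicle position. The names and the simple quadric form are assumptions for illustration; a real implementation would also account for the detected inclination of image capture unit 110.

```python
def make_road_model(lon_slope, lat_slope, lon_curvature=0.0):
    """Return road surface model 210 as a function giving road height y
    at lateral offset x and distance z along the shooting direction."""
    def model(x, z):
        # A simple quadric surface built from the map slopes.
        return lon_slope * z + lon_curvature * z ** 2 + lat_slope * x
    return model
```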
Filter 140 receives road surface model 210 created by road surface model creator 130. Then, on the basis of road surface model 210, filter 140 determines a filter used to decide whether to adopt a three-dimensional measurement point as a road surface candidate point or to eliminate it as an unsuitable candidate point. For example, the filter is characterized by a range defined by a filter width, which is a distance in the normal direction from road surface model 210; a three-dimensional measurement point within this range is adopted as a road surface candidate point.
For example, filter 140 changes the filter width according to an error characteristic of image capture unit 110. When image capture unit 110 is a stereo camera, the three-dimensional measurement point cloud measured by spatial measurement unit 120 contains errors proportional to the square of the distance from image capture unit 110 to each of the three-dimensional measurement points. In other words, the farther an object is from image capture unit 110, the larger the error contained in the three-dimensional measurement point cloud. Accordingly, excessive elimination of faraway points can be suppressed by changing the filter width in accordance with the distance of the object from image capture unit 110. In this example, the filter width is broadened as the object is farther away from image capture unit 110.
Upon receiving road surface model 210 from road surface model creator 130, filter 140 filters the three-dimensional measurement point cloud input from spatial measurement unit 120 so as to obtain the road surface candidate points.
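A minimal sketch of such a filter in Python, assuming points in x-y-z coordinates (y as height, z as distance from image capture unit 110) and a road model callable such as the one sketched above; the width constants are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def filter_candidates(points_xyz, model, base_width=0.1, k=0.001):
    """Keep points within a distance-dependent filter width of
    road surface model 210 as road surface candidate points."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    # Error grows with the square of distance, so the filter width does too.
    width = base_width + k * z ** 2
    on_road = np.abs(y - model(x, z)) <= width
    return points_xyz[on_road]
```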
Road surface estimator 150 estimates the road surface on the basis of the road surface candidate points output from filter 140. As described above, the three-dimensional measurement point cloud contains a larger error as the object is farther from image capture unit 110. Here, the z-coordinate value becomes larger and the disparity coordinate value (disparity value) becomes smaller as the object is farther from image capture unit 110; conversely, the z-coordinate value becomes smaller and the disparity value becomes larger as the object is closer to image capture unit 110. Accordingly, for example, road surface estimator 150 estimates the road surface starting from larger disparity values, which contain less error, and proceeding toward smaller disparity values in the three-dimensional measurement point cloud.
For example, the space is divided into multiple areas in the direction of the disparity coordinate axis. Parameters are calculated using Formula (5) below, starting from the area corresponding to the largest disparity values and proceeding to adjacent areas in turn.
vr = a0 + a1·ur + a2·d + a3·ur² + a4·d² (5)
By using the quadric surface expressed by Formula (5) above, parameters a0 to a4 that minimize the error with respect to the road surface candidate points in the applicable area can be obtained by using, for example, the least-squares method.
After obtaining parameters a0 to a4 for all areas, road surface estimator 150 estimates the road surface by connecting the quadric surfaces specified by these parameters. The estimated road surface is an aggregate of the road surface candidate points; in other words, the estimated road surface is an area within which the vehicle is allowed to travel. Road surface estimator 150 outputs the information (the profile) of the estimated road surface, that is, the information of the connected quadric surfaces, to vehicle controller 160.
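A minimal least-squares sketch of this area-by-area fitting and connection, assuming candidate points given as (ur, vr, d) columns; the area count and minimum-point check are illustrative assumptions.

```python
import numpy as np

def fit_quadric(u_r, v_r, d):
    """Fit Formula (5): v_r = a0 + a1*u_r + a2*d + a3*u_r**2 + a4*d**2."""
    A = np.stack([np.ones_like(u_r), u_r, d, u_r ** 2, d ** 2], axis=1)
    a, *_ = np.linalg.lstsq(A, v_r, rcond=None)
    return a  # parameters a0 to a4 minimizing the squared error

def estimate_road_surface(candidates, n_areas=5):
    """Divide the disparity axis into areas and fit Formula (5) per area,
    starting from the largest (least-error) disparity values."""
    u_r, v_r, d = candidates[:, 0], candidates[:, 1], candidates[:, 2]
    edges = np.linspace(d.max(), d.min(), n_areas + 1)
    params = []
    for hi, lo in zip(edges[:-1], edges[1:]):
        m = (d <= hi) & (d >= lo)
        if m.sum() >= 5:               # need at least 5 points for 5 params
            params.append(fit_quadric(u_r[m], v_r[m], d[m]))
    return params                      # quadric surfaces, near to far
```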
The estimated road surface, that is, the road surface estimated by road surface estimator 150, also covers road surface portions that are not expressed in the three-dimensional map information used for the estimation. Accordingly, an estimated road surface closer to the actual road surface can be obtained, compared to a road surface estimated only on the basis of the map information.
Vehicle controller 160 controls the vehicle on the basis of the estimated road surface. Specifically, vehicle controller 160 controls at least a traveling direction and a speed of the vehicle. For example, vehicle controller 160 controls the own vehicle to avoid an obstacle in accordance with an input from a recognition unit (not illustrated) that recognizes the obstacle in front of the vehicle on the basis of the estimated road surface and the three-dimensional measurement point cloud output from spatial measurement unit 120. Still more, there are cases in which the estimated road surface is rough due to, for example, an unpaved road, a road under construction, or a difference in level or a dent in the road. In these cases, vehicle controller 160 controls the own vehicle to, for example, reduce the vehicle speed or reduce the rigidity of the suspension to absorb impact, in accordance with the profile of the estimated road surface. By controlling the own vehicle on the basis of the estimated road surface, vehicle controller 160 can control the own vehicle in line with the condition of the road surface that the own vehicle will pass soon. Accordingly, the vehicle can be controlled more flexibly, compared to the case of controlling the vehicle in accordance with actual vibration, and the comfort of the own vehicle can thus be improved. Furthermore, the aforementioned recognition unit can identify a three-dimensional object other than the road surface by subtracting the points equivalent to the road surface (e.g., the road surface candidate point cloud output from filter 140) from the three-dimensional measurement points output from spatial measurement unit 120, as sketched below. By identifying a three-dimensional object other than the road surface, road surface estimation device 100 can be applied to the purpose of searching for a driving route or recognizing obstacles.
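A minimal sketch of that subtraction, reusing the filter mask from the filtering sketch above (same hypothetical width constants):

```python
import numpy as np

def identify_objects(points_xyz, model, base_width=0.1, k=0.001):
    """Return points of three-dimensional objects other than the road by
    removing road-surface-equivalent points from the measurement cloud."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    width = base_width + k * z ** 2           # same widths as filter 140
    on_road = np.abs(y - model(x, z)) <= width
    return points_xyz[~on_road]               # complement of the road filter
```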
For example, for each frame of the image captured by image capture unit 110, road surface model creator 130 creates road surface model 210, spatial measurement unit 120 measures the space in front of the vehicle, filter 140 filters the three-dimensional measurement point cloud, and road surface estimator 150 estimates the road surface. This makes it possible to increase the estimation accuracy for three-dimensional measurement points far from image capture unit 110, whose accuracy is lower than that of closer points, as the vehicle drives closer to these faraway points.
The functions of the aforementioned parts of road surface estimation device 100 and vehicle control device 200 may be realized by a computer including CPU 2103, ROM 2104, RAM 2105, storage device 2106, reader 2107, and transmitter/receiver 2108.
Reader 2107 reads a program for executing the functions of the aforementioned parts from a recording medium on which the program is recorded, and stores the program in storage device 2106. Alternatively, transmitter/receiver 2108 establishes communication with a server device connected to a network, downloads a program for executing the functions of the aforementioned parts from the server device, and stores the program in storage device 2106.
Then, CPU 2103 copies the program stored in storage device 2106 to RAM 2105, sequentially reads out commands in the program from RAM 2105, and executes them to achieve the functions of the aforementioned parts. On executing the program, information obtained through the range of processes described in the exemplary embodiments is stored in RAM 2105 or storage device 2106 and used as required. Note that the three-dimensional map information may be stored in any of ROM 2104, RAM 2105, and storage device 2106, either in advance or at the time it is needed.
Vehicle control device 200 according to the first exemplary embodiment is connected to external image capture unit 110, which is a stereo camera or a camera capable of three-dimensional measurement. In contrast, vehicle control device 200A is connected to sensor 115 capable of three-dimensional measurement. Vehicle control device 200A includes road surface estimation device 100A and vehicle controller 160. Alternatively, road surface estimation device 100A or vehicle controller 160 may include sensor 115.
Examples of sensor 115 include a Laser Imaging Detection and Ranging (LiDAR) sensor, a millimeter-wave radar, and a sonar. Sensor 115 outputs a three-dimensional measurement point cloud to filter 140 of road surface estimation device 100A.
Road surface estimation device 100A acquires the three-dimensional measurement point cloud on the road surface of a street from sensor 115, which is mounted on a vehicle traveling along the street with the road surface in front of the vehicle. Therefore, road surface estimation device 100A does not include spatial measurement unit 120, and the spatial measurement step (Step S1200) of the first exemplary embodiment is accordingly omitted.
The road surface estimation device of the present disclosure is preferably applicable to estimation of a road surface from images captured typically by a stereo camera.
This application is a continuation-in-part of the PCT International Application No. PCT/JP2017/023467 filed on Jun. 27, 2017, which claims the benefit of foreign priority of Japanese patent application No. 2016-158836 filed on Aug. 12, 2016, the contents all of which are incorporated herein by reference.
Parent application: PCT/JP2017/023467, filed June 2017; child application: U.S. Ser. No. 16/254,876.