The present disclosure relates to a road surface detection apparatus, a road surface detection method, and a recording medium including a road surface detection program recorded therein.
Conventionally, there is a known road surface grade estimation apparatus which estimates, by using two cameras, a grade of a road surface on which a vehicle drives.
Furthermore, a distance measurement method which uses a time of flight distance measurement method (referred to as a “time of flight (TOF) method” below) is known. By measuring a distance to a road surface, a road surface grade can be estimated. To precisely estimate the road surface grade, it is important to precisely estimate the distance to the road surface.
An object of the present disclosure is to precisely estimate a road surface grade.
One embodiment of the present disclosure is a road surface detection apparatus that includes: an inputter that receives an input of an image signal from an imaging apparatus; a distance measurer that divides a road surface region into a plurality of cells based on the image signal, and derives a distance to a representative point of each of the cells by a time of flight distance measurement method; and an estimator that estimates a gradient of the road surface based on the distances to a plurality of the representative points. In addition, one embodiment of the present disclosure may be any one of a method and a non-transitory tangible recording medium including a program recorded therein.
According to the present disclosure, it is possible to precisely estimate a road surface grade.
Surrounding monitoring system 1 on which road surface detection apparatus 100 according to one embodiment of the present disclosure is mounted will be described in detail below with reference to the drawings. In this regard, the embodiment described below is an example, and the present disclosure is not limited by this embodiment.
As shown in
As shown in
As shown in
Light source 210 is attached so as to be able to emit invisible light having a cycle such as a pulse or a sine wave (e.g., infrared light or near infrared light) to an imaging range.
Image sensor 220 is, for example, a complementary metal oxide semiconductor (CMOS) image sensor, and is attached at substantially the same place as light source 210 such that optical axis A extends substantially toward the rear side of vehicle V.
Road surface detection apparatus 100 is, for example, an electronic control unit (ECU), and includes an input terminal, an output terminal, a processor, a program memory and a main memory mounted on a control substrate to control monitoring of the rear side of vehicle V.
The processor executes programs stored in the program memory by using the main memory to process various signals received via the input terminal, and outputs various control signals to light source 210 and image sensor 220 via the output terminal.
When the processor executes the program, road surface detection apparatus 100 functions as imaging controller 110, distance measurer 120, road surface grade estimator 130 (an example of an "estimator") and storage section 140 as shown in
Imaging controller 110 outputs a control signal to light source 210 to control some conditions (more specifically, a pulse width, a pulse amplitude, a pulse interval and the number of pulses) of emission light from light source 210.
Furthermore, imaging controller 110 outputs a control signal to a peripheral circuit included in image sensor 220 to control some conditions (more specifically, an exposure time, an exposure timing and the number of times of exposure) of reception of return light of image sensor 220.
According to the above exposure control, image sensor 220 outputs an infrared image signal and a depth image signal related to the imaging range to road surface detection apparatus 100 at a predetermined cycle (predetermined frame rate). In addition, image sensor 220 may output a visible image signal.
Furthermore, in the present embodiment, image sensor 220 performs so-called lattice transformation, that is, adds information of a plurality of neighboring pixels to generate image information. In this regard, according to the present disclosure, it is not indispensable to generate the image information by adding the information of a plurality of neighboring pixels.
Distance measurer 120 extracts a region of the road surface on the rear side from the image outputted from image sensor 220, and divides this region into a plurality of cells.
Furthermore, distance measurer 120 derives a distance (see
Road surface grade estimator 130 estimates a grade (an example of a "gradient") of the road surface on the rear side based on the distances to the representative points of a plurality of cells derived by distance measurer 120. More specifically, road surface grade estimator 130 estimates the gradient of the road surface on the rear side relative to the road surface on which the vehicle is currently located.
Storage section 140 stores various pieces of information used for distance measurement processing and road surface grade estimation processing.
Surrounding monitoring system 1 outputs a signal related to the grade of the road surface on the rear side. This information is transmitted to, for example, an advanced driver assistance system (ADAS) ECU. The ADAS ECU automatically drives vehicle V by using this information.
Next, road surface grade estimation processing performed by distance measurer 120 and road surface grade estimator 130 of road surface detection apparatus 100 will be described in detail with reference to the flowchart in
First, in step S1, distance measurer 120 determines a range for estimating the road surface grade in the image received from image sensor 220. This range for estimating the road surface grade is determined based on attachment information such as an attachment position and orientation of imaging apparatus 200, and a field of view (FOV) of light source 210. Furthermore, distance measurer 120 may extract as the road surface a region sandwiched by portions (white lines) of a high brightness based on brightness information of the image signal received from image sensor 220, and determine this region as the range for estimating the road surface grade. Furthermore, this range for estimating the road surface grade may be determined by using all pieces of information including the attachment information of imaging apparatus 200, the FOV of light source 210, and the brightness information of the image received from image sensor 220. In addition, the road surface may be extracted by using a depth image and a visible image.
In subsequent step S2, distance measurer 120 divides the road surface extracted in step S1 into a plurality of cells. The shapes of the cells into which the road surface is divided are set in advance according to the processing contents of the subsequent steps.
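The division in step S2 can be sketched as follows. This is a minimal illustration; the function name, the pixel-coordinate representation of the road surface region, and the cell size are assumptions, since the disclosure only states that the cell shapes are set in advance.

```python
def divide_into_cells(x0, y0, width, height, cell_w, cell_h):
    """Divide a rectangular road-surface region of an image into
    (x, y, w, h) cells of a preset size (hypothetical helper; the
    region and cell dimensions are given in pixels)."""
    cells = []
    for y in range(y0, y0 + height, cell_h):
        for x in range(x0, x0 + width, cell_w):
            # Clip the last row/column so cells never exceed the region.
            cells.append((x, y,
                          min(cell_w, x0 + width - x),
                          min(cell_h, y0 + height - y)))
    return cells
```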
In subsequent step S3, distance measurer 120 decides, for each of the cells obtained by the division in step S2, whether or not the cell includes a target other than the road surface. When, for example, a cell includes a predetermined rate or more of portions in which the brightness is higher than a predetermined value, distance measurer 120 decides that the cell includes a target (e.g., a wheel stopper) other than the road surface.
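The decision of step S3 might be sketched as follows; the per-pixel brightness representation and both thresholds are illustrative assumptions, since the disclosure only gives the decision rule qualitatively.

```python
def includes_non_road_target(cell_pixels, brightness_threshold, rate_threshold):
    """Decide whether a cell includes a target other than the road
    surface: True when the fraction of pixels brighter than
    brightness_threshold reaches rate_threshold (names illustrative)."""
    bright = sum(1 for p in cell_pixels if p > brightness_threshold)
    return bright / len(cell_pixels) >= rate_threshold
```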
When it is decided in step S3 that a cell includes a target other than the road surface (step S3: NO), the processing moves to step S8. In step S8, the cell decided in step S3 to include a target other than the road surface is excluded from the processing targets of the subsequent processing, and the processing moves to step S4.
On the other hand, when it is decided in step S3 that none of the cells includes a target other than the road surface (step S3: YES), the processing moves to step S4.
In step S4, distance measurer 120 sets, for each cell, a predetermined pixel in the cell as the representative point of the cell. In this regard, the term "representative point" in the following description is used to indicate the predetermined pixel and also the actual point on the road surface corresponding to the predetermined pixel. Which pixel in each cell is set as the representative point is set in advance according to the processing contents of the subsequent steps.
In subsequent step S5, distance measurer 120 calculates a distance (referred to as a “reference distance” below) to each representative point in a case where the road surface is assumed to be flat (the road surface is assumed to be a reference road surface) based on information stored in advance in storage section 140.
In subsequent step S6, distance measurer 120 derives a distance (referred to as an “actual distance” below) to the representative point of each cell by the TOF method. An example of processing (distance measurement processing) which is performed per cell in step S6 and derives an actual distance to a representative point will be described in detail with reference to a flowchart in
First, in step S11, distance measurer 120 derives the distance to the road surface in each pixel in each cell by the TOF method.
Hereinafter, an example of distance measurement according to the TOF method will be described. As shown in
Image sensor 220 is controlled by imaging controller 110 to perform exposure at a timing based on emission timings of first pulse Pa and second pulse Pb. More specifically, as shown in
The first exposure starts at the same time as a rise of first pulse Pa, and ends after exposure time Tx set in advance in relation to the emission light from light source 210. This first exposure intends to receive return light components of first pulse Pa.
Output Oa of image sensor 220 resulting from the first exposure includes return light component S0 to which an oblique lattice hatching is applied, and background component BG to which a dot hatching is applied. The amplitude of return light component S0 is smaller than the amplitude of first pulse Pa.
A time difference between the rising edges of first pulse Pa and return light component S0 is Δt. Δt represents a time taken by invisible light to travel back and forth over distance dt between imaging apparatus 200 and target T.
The second exposure starts at the same time as a fall of second pulse Pb, and ends after exposure time Tx. This second exposure intends to receive return light components of second pulse Pb.
Output Ob of image sensor 220 resulting from the second exposure includes partial return light component S1 (refer to a diagonal lattice hatching portion) which is not the overall return light component, and background component BG to which a dot hatching is applied.
In addition, partial return light component S1 observed above is generally given by following equation 1.
S1 = S0 × (Δt/Wa)  (1)
The third exposure starts at a timing which does not include the return light component of first pulse Pa and second pulse Pb, and ends after exposure time Tx. This third exposure intends to receive only background component BG which is an invisible light component irrelevant to the return light component.
Output Oc of image sensor 220 resulting from the third exposure includes only background component BG to which a dot hatching is applied.
Distance dt from imaging apparatus 200 to the road surface can be derived based on the above relationship between the emission light and the return light according to following equations 2 to 4.
S0 = Oa − BG  (2)
S1 = Ob − BG  (3)
dt = c × (Δt/2) = {(c × Wa)/2} × (Δt/Wa) = {(c × Wa)/2} × (S1/S0)  (4)
In this regard, c represents the speed of light.
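Equations 2 to 4 can be illustrated with the following sketch, assuming per-pixel exposure outputs Oa, Ob, Oc and pulse width Wa in seconds; the function name and units are assumptions not taken from the disclosure.

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(oa, ob, oc, pulse_width):
    """Per-pixel TOF distance from the three exposures:
    S0 = Oa - BG (eq. 2), S1 = Ob - BG (eq. 3),
    dt = (c * Wa / 2) * (S1 / S0) (eq. 4).
    The third exposure Oc observes only background component BG."""
    bg = oc
    s0 = oa - bg  # full return light component
    s1 = ob - bg  # partial return light component
    return (C * pulse_width / 2.0) * (s1 / s0)
```

For example, with a 100 ns pulse, a ratio S1/S0 of 0.2 corresponds to a distance of about 3 m, consistent with equation 4.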
In step S12 subsequent to step S11, distance measurer 120 averages the distances to the road surface derived in step S11 for the pixels in each cell, and outputs the averaged result as the distance to the representative point of the cell.
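The averaging of step S12 amounts to the following trivial sketch (the function name is an assumption):

```python
def cell_representative_distance(pixel_distances):
    """Step S12: average the per-pixel TOF distances of a cell and
    output the result as the distance to the representative point."""
    return sum(pixel_distances) / len(pixel_distances)
```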
Another example of distance measurement processing performed per cell in step S6 will be described in detail with reference to a flowchart in
In step S21, distance measurer 120 uses a depth image signal to calculate return light components S0 and S1 of each pixel in each cell by using above equations 2 and 3.
In subsequent step S22, distance measurer 120 integrates return light components S0 and S1 of each pixel in each cell, and obtains integration values ΣS0 and ΣS1 of the return light components.
In subsequent step S23, distance measurer 120 derives distance dt to the representative point of each cell by using following equation 5.
dt = {(c × Wa)/2} × (ΣS1/ΣS0)  (5)
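Equation 5 can be illustrated as follows, assuming the per-pixel return light components S0 and S1 have already been computed by equations 2 and 3 (names are assumptions):

```python
C = 299_792_458.0  # speed of light [m/s]

def cell_distance_by_integration(s0_values, s1_values, pulse_width):
    """Eq. 5: dt = (c * Wa / 2) * (sum(S1) / sum(S0)).
    The return light components of all pixels in the cell are
    integrated before taking the ratio, which suppresses per-pixel
    noise compared with averaging per-pixel distances."""
    return (C * pulse_width / 2.0) * (sum(s1_values) / sum(s0_values))
```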
Back to description of
Next, a specific example of road surface grade estimation performed by road surface detection apparatus 100 according to the present embodiment will be described with reference to
In a specific example described below, when subject vehicle V drives rearward in a situation where there is an uphill on the rear side of subject vehicle V, which is located on a flat road, the road surface grade in the traveling direction of subject vehicle V is estimated.
After extracting road surface 303, distance measurer 120 divides road surface 303 into a plurality of cells.
Subsequently, in this example, there is no target on road surface 303, and therefore distance measurer 120 sets representative points to all divided cells. In addition, representative points may be set only to specific cells used for road surface grade estimation.
Storage section 140 stores, as a look-up table (LUT), height H at which imaging apparatus 200 is disposed and coordinates (Xi, Yi, 0) of each representative point RPi in a case where representative point RPi is on the reference road surface. Distance measurer 120 uses these items of data to calculate reference distance DRPi from subject vehicle V (more specifically, camera O) to representative point RPi. In addition, reference distance DRPi to representative point RPi may be stored in advance in storage section 140.
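Assuming the LUT supplies camera height H and reference-road-surface coordinates (Xi, Yi, 0) of representative point RPi, reference distance DRPi might be computed as the straight-line distance from camera O to the representative point; the Euclidean formula below is an assumption, since the disclosure does not state how DRPi is calculated from the stored values.

```python
import math

def reference_distance(H, xi, yi):
    """Reference distance D_RPi from camera O, mounted at height H
    above the origin of the reference road surface, to representative
    point (Xi, Yi, 0) assumed to lie on that flat surface."""
    return math.sqrt(xi * xi + yi * yi + H * H)
```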
Subsequently, distance measurer 120 uses the depth image to derive actual distance dRPi from subject vehicle V to representative point RPi by the TOF method. In this case, in this example, as described above, the information of each pixel in cell Ci is used to derive dRPi.
Road surface grade estimator 130 calculates height hRPi of each representative point RPi from the reference road surface by using following equation 6, based on reference distance DRPi and actual distance dRPi from subject vehicle V to representative point RPi in each cell Ci. In addition,
hRPi = H × (DRPi − dRPi)/DRPi  (6)
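Equation 6 can be illustrated by the following sketch (the function name is an assumption):

```python
def representative_point_height(H, D_ref, d_actual):
    """Eq. 6: h_RPi = H * (D_RPi - d_RPi) / D_RPi.
    Height of representative point RPi above the reference (flat)
    road surface, where H is the camera mounting height, D_RPi the
    reference distance and d_RPi the measured actual distance."""
    return H * (D_ref - d_actual) / D_ref
```

Note that a measured distance equal to the reference distance yields a height of zero, i.e., the point lies on the reference road surface.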
Furthermore, road surface grade estimator 130 estimates the grade of road surface 303 by using height hRPi of each representative point RPi from the reference road surface. In this example, heights hRP1 and hRP2 of neighboring cells C1 and C2 in the traveling direction of subject vehicle V are used to calculate the gradient, with respect to the reference road surface, of a straight line which connects representative points RP1 and RP2. Furthermore, this gradient is regarded as the grade of road surface 303 in the traveling direction of subject vehicle V.
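The gradient between the two representative points might then be sketched as rise over run; the horizontal separation of RP1 and RP2 is assumed to be known from the cell geometry, which the disclosure does not spell out.

```python
def road_grade(h1, h2, horizontal_separation):
    """Grade of the straight line connecting two representative
    points, as the height difference divided by their horizontal
    separation along the traveling direction."""
    return (h2 - h1) / horizontal_separation
```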
As described above, according to the present embodiment, a road surface captured by the imaging apparatus is divided into cells, and a distance to a representative point of each cell is derived by the TOF method. Furthermore, a road surface grade is estimated based on derived distances to representative points of a plurality of cells.
Consequently, it is possible to precisely estimate a road surface grade.
Furthermore, according to the present embodiment, information of each pixel included in each cell is used to derive a distance to the representative point of each cell, so that distance measurement precision improves. Consequently, it is possible to precisely estimate a road surface grade.
(Modified Example 1)
In addition, in the above embodiment, cells are divided such that cell shapes in the zenith view are square shapes. However, the cell division is not limited to this.
Distance measurement precision of the TOF method lowers as the distance from subject vehicle V to the target (the road surface in this case) becomes farther. On the other hand, when the information of each pixel included in each cell is used to calculate the distance to the representative point of the cell, the distance measurement precision improves more as the number of pixels included in the cell becomes larger. Hence, the cells may all be made the same size in the captured image, or cells farther from subject vehicle V in the zenith view may be made larger (see
Furthermore, in the above embodiment, the representative point is set to the center point of each cell. However, the representative point setting is not limited to this. The representative point may be set to any pixel in each cell. In this case, information of each pixel included in each cell may be weighted to derive a distance to the representative point.
(Modified Example 2)
In the above embodiment, distances to representative points of two neighboring cells in the traveling direction of subject vehicle V are used to estimate a road surface grade. However, the road grade estimation is not limited to this. Data of representative points of three or more continuous cells may be used, or data of representative points of a plurality of non-continuous cells may be used.
Furthermore, for example, reference distance DRPi and actual distance dRPi to the representative point of each cell may be calculated, and then these results may be used to estimate the road surface grade by using a random sample consensus (RANSAC) algorithm. By using the RANSAC algorithm, improvement of the grade estimation precision can be expected. By using reference distance DRPi and actual distance dRPi to the representative point of each cell, improvement of the precision of the control which uses the RANSAC algorithm and reduction of the computation time can also be expected.
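A RANSAC-based estimation along these lines might be sketched as follows; it fits a straight line h = a·x + b to (distance, height) samples of the representative points, robust to outlier cells. The iteration count and inlier tolerance are illustrative assumptions not taken from the disclosure.

```python
import random

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=0):
    """Minimal RANSAC sketch: fit h = a*x + b to (x, h) samples,
    where x is the horizontal distance of a representative point and
    h its height above the reference road surface. Returns (a, b) of
    the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        (x1, h1), (x2, h2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate pair, cannot define a slope
        a = (h2 - h1) / (x2 - x1)
        b = h1 - a * x1
        inliers = sum(1 for x, h in points
                      if abs(h - (a * x + b)) <= inlier_tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best
```

With representative points lying on a 10 % grade plus one outlier cell, the fit recovers the grade while ignoring the outlier.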
In addition, the error distribution with respect to actual distance dRPi is substantially a parallelogram as shown in
In addition, by averaging an estimation value calculated by using the average value, an estimation value calculated by using +σ (standard deviation) of actual distance dRPi, and an estimation value calculated by using −σ (standard deviation) of actual distance dRPi, it is naturally possible to more precisely estimate the road surface grade (see
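The averaging of the three estimation values might be sketched as follows, where `estimate` stands for any function mapping an actual distance to a grade estimation value (an assumed abstraction of the processing described above):

```python
def averaged_grade_estimate(estimate, d_mean, sigma):
    """Average the estimation values computed from the mean actual
    distance and from the mean shifted by +sigma and -sigma, as in
    modified example 2."""
    return (estimate(d_mean)
            + estimate(d_mean + sigma)
            + estimate(d_mean - sigma)) / 3.0
```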
(Modified Example 3)
When there is a target on a road surface, a road surface grade may be estimated, and then the height of the target on the road surface may be further estimated. In this case, by subtracting the height of the road surface at a position at which there is the target from a value obtained by calculating the height from the reference road surface to an upper end of the target, the height of the target may be estimated.
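The height estimation of modified example 3 reduces to the following subtraction (names are assumptions):

```python
def target_height(h_upper_end, h_road_at_target):
    """Height of a target standing on the road: the height of its
    upper end above the reference road surface minus the estimated
    road surface height at the target position."""
    return h_upper_end - h_road_at_target
```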
(Modified Example 4)
In the above embodiment, the height of a representative point of each cell is calculated to estimate a road surface grade. However, the road surface estimation is not limited to this. For example, a coordinate of the representative point of each cell may be calculated to estimate the road surface grade. Furthermore, the calculated height and coordinate of the representative point of each cell may be used for another control.
(Modified Example 5)
The above embodiment has described a case where imaging apparatus 200 is attached to a back surface of the vehicle. However, imaging apparatus 200 is not limited to this. Even when an imaging apparatus installed for use in monitoring surroundings of the vehicle is used as shown in
(Modified Example 6)
In the above embodiment, when each cell includes a target other than a road surface, each cell is excluded from a processing target of distance measurement processing.
However, the processing target exclusion is not limited to this. For example, a pixel corresponding to the target in each cell may be excluded from a computation target for deriving a distance to the representative point.
(Modified Example 7)
The above embodiment has described the example where the distances to all representative points are derived by using the information of each pixel included in each cell. However, the deriving of distances is not limited to this. For example, only the distances to the representative points of cells apart from the vehicle by a predetermined value or more may be derived by using the information of each pixel included in the cell.
While various embodiments have been described herein above, it is to be appreciated that various changes in form and detail may be made without departing from the spirit and scope of the invention(s) presently or hereafter claimed.
This application is entitled and claims the benefit of Japanese Patent Application No. 2017-206031, filed on Oct. 25, 2017, the disclosure of which including the specification, drawings and abstract is incorporated herein by reference in its entirety.
The road surface detection apparatus, the road surface detection method and the recording medium having the road surface detection program recorded thereon according to the present disclosure can precisely estimate a road surface grade, and are suitable for use in vehicles.