The present invention relates to a driving assist apparatus, a driving assist method, and a program with which an object existing around one's vehicle is recognized.
A technique has recently been known which notifies a driver of a danger by recognizing an object such as a vehicle, a human being, or an obstacle existing around the driver's own vehicle and superimposing information of the object over the view by using a semi-transmissive display called an HUD (Head Up Display). A technique is also known which avoids a collision or decreases the impact of a collision by controlling one's vehicle based on the information of the recognized object existing around the vehicle. With these techniques, the object existing around one's vehicle needs to be recognized by using a sensor or a camera, and information of the recognized object needs to be grasped and managed. When detecting the object with the sensor, however, if the object has a high color density, the reflectance of a laser beam or radio wave decreases, and sometimes the object cannot be detected with high precision.
With an apparatus described in Patent Literature 1, an object existing around one's vehicle is recognized by detecting with image processing a portion having a high color density in a captured image acquired from a camera.
The conventional apparatus performs the process of detecting a portion having a high color density over the entire captured image. Accordingly, object recognition takes time, which is a problem.
The present invention has been made to solve the problem described above, and has as its object to provide a driving assist apparatus, a driving assist method, and a program with which a processing time taken for recognition of an object from a captured image can be shortened.
A driving assist apparatus according to the present invention includes:
an image acquisition part to acquire a captured image around a vehicle;
a position information acquisition part to acquire position information of a first object existing around the vehicle and detected by a sensor;
a detection range determination part to determine a detection range of the first object within the captured image based on the position information of the first object; and
an object recognition part to perform image processing on, within the captured image, an image existing in a range other than the detection range of the first object, and to recognize a second object that is different from the first object.
Another driving assist apparatus according to the present invention includes:
an image acquisition part to acquire a captured image around a vehicle;
a position information acquisition part to acquire position information of a first object existing around the vehicle and detected by radiation with a sensor in a first-line direction, and to acquire position information of a second object existing around the vehicle and detected by radiation with the sensor in a second-line direction;
a detection range determination part to determine a detection range of the first object within the captured image based on the position information of the first object, and to determine a detection range of the second object within the captured image based on the position information of the second object; and
an object recognition part to perform image processing on, within the captured image, an image existing in a range other than a composite range of the detection range of the first object and the detection range of the second object, and to recognize a third object that is different from the first object and the second object.
A driving assist method according to the present invention includes:
acquiring a captured image around a vehicle;
acquiring position information of a first object existing around the vehicle and detected by a sensor;
determining a detection range of the first object within the captured image based on the position information of the first object; and
performing image processing on, within the captured image, an image existing in a range other than the detection range of the first object, thereby recognizing a second object that is different from the first object.
A program according to the present invention causes a computer to execute:
a process of acquiring a captured image around a vehicle;
a process of acquiring position information of a first object existing around the vehicle and detected by a sensor;
a process of determining a detection range of the first object within the captured image based on the position information of the first object; and
a process of performing image processing on, within the captured image, an image existing in a range other than the detection range of the first object, and recognizing a second object that is different from the first object.
According to a driving assist apparatus, a driving assist method, and a program of the present invention, a detection range of a first object within a captured image is determined based on position information of the first object existing around a vehicle and detected by a sensor, image processing is performed on a range other than the detection range, and a second object is recognized. Therefore, the processing time taken for recognizing the object can be shortened.
Embodiment 1 will be described hereinbelow with reference to the accompanying drawings.
The sensor 100 is, for example, a laser sensor such as a LIDAR (Light Detection and Ranging), and detects information of a distance from one's vehicle to an object (a vehicle, a human being, an obstacle, and so on) existing around one's vehicle as well as information of a position of that object. The LIDAR is capable of scanning a laser beam in the horizontal direction and acquiring information of the distance from one's vehicle to the object over a wide area of, for example, 190 degrees, with a resolution of approximately 0.4 degree. In the following description, the distance information is acquired only in the horizontal direction. However, the distance information may be acquired also in the direction of height by using other types of sensors such as a PMD (Photonic Mixer Device). The sensor 100 is not limited to a laser sensor, and a radar sensor that uses radio waves, for example, may be employed as the sensor 100.
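For illustration only (this sketch is not part of the original disclosure): assuming the 190-degree field of view and approximately 0.4-degree resolution mentioned above, and using hypothetical function and parameter names, one horizontal scan line could be converted into positions around the vehicle roughly as follows.

```python
import numpy as np

def scan_to_positions(distances_m, fov_deg=190.0, step_deg=0.4, max_range_m=80.0):
    """Convert one horizontal scan line of range readings into (x, y) positions
    around the vehicle (x: forward, y: left), in the vehicle coordinate frame.

    distances_m: one range reading per beam, ordered from the rightmost beam to
    the leftmost; invalid returns (e.g. from dark, poorly reflecting objects)
    are assumed to be 0 or NaN and are discarded.
    """
    d = np.asarray(distances_m, dtype=float)
    angles = np.deg2rad(-fov_deg / 2.0 + step_deg * np.arange(len(d)))
    valid = np.isfinite(d) & (d > 0.0) & (d < max_range_m)
    x = d[valid] * np.cos(angles[valid])   # forward distance from the sensor
    y = d[valid] * np.sin(angles[valid])   # lateral offset to the left
    return np.stack([x, y], axis=1)
```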
When the sensor 100 acquires sensor information, it can sufficiently detect a laser beam reflected by a pale-color (for example, white) object having a low color density, but it cannot sufficiently detect a laser beam reflected by a dark-color (for example, black) object having a high color density.
The camera 200 is an image capture device that captures an image around one's vehicle, and may be a visible-light camera or infrared camera.
The driving assist apparatus 300 is provided with a sensor information acquisition part 301, an image acquisition part 302, a coincidence point detection part 303, a detection range determination part 304, a storage part 305, an object recognition part 306, a model generation part 307, and a control part 308.
The sensor information acquisition part 301 acquires position information and distance information of the object existing around one's vehicle which are detected by the sensor 100, as sensor information. Therefore, the sensor information acquisition part 301 may be expressed as a position information acquisition part or distance information acquisition part.
The image acquisition part 302 acquires an image around one's vehicle, which is captured by the camera 200. “Around one's vehicle” signifies, for example, a range of several tens of centimeters to several tens of meters from one's vehicle.
The coincidence point detection part 303 receives information of the captured image from the image acquisition part 302 and receives the sensor information from the sensor information acquisition part 301. Based on the object position information included in the sensor information, the coincidence point detection part 303 detects which position in the received captured image the position of the object detected by the sensor coincides with. The coincidence point detection part 303 then outputs information of the position that coincides with the object (the coincidence point) to the detection range determination part 304 as coincidence point information. Assume that the sensor 100 and the camera 200 are calibrated beforehand for the process of detecting the coincidence point by the coincidence point detection part 303.
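For illustration only, a minimal sketch of how a sensor-detected position might be projected into the captured image once the sensor 100 and the camera 200 have been calibrated; the pinhole model, the names, and the matrix conventions are assumptions, not part of the disclosure.

```python
import numpy as np

def project_to_image(point_vehicle, R, t, K):
    """Project a 3-D point given in the vehicle frame into pixel coordinates.

    R (3x3) and t (3,) are the pre-calibrated vehicle-to-camera extrinsics and
    K (3x3) is the camera intrinsic matrix (pinhole model).  Returns the pixel
    coordinates (u, v) of the coincidence point, or None if the point lies
    behind the image plane.
    """
    p_cam = R @ np.asarray(point_vehicle, dtype=float) + np.asarray(t, dtype=float)
    if p_cam[2] <= 0.0:               # behind the camera: no coincidence point
        return None
    uvw = K @ p_cam
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])
```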
The detection range determination part 304 determines the detection range of the object within the captured image based on the coincidence point information obtained from the coincidence point detection part 303. In other words, the detection range determination part 304 determines the detection range of the object within the captured image based on the object position information obtained by the sensor 100. How the detection range is determined will be described later.
An edge image expressing the feature of the object is stored in the storage part 305. When the object is, for example, a vehicle, the edge image may express an average contour of the vehicle.
The object recognition part 306 recognizes, by image processing, an object (to be called the second object hereinafter) that is different from the object (to be called the first object hereinafter) existing within the captured image, from a range other than the detection range determined by the detection range determination part 304. When performing the image processing, the object recognition part 306 utilizes the edge image stored in the storage part 305. To recognize an object signifies to identify whether the object is a vehicle, a human being, or an obstacle. To recognize an object may further include identifying the shape and size of the identified vehicle or the like. If a plurality of edge images are prepared in the storage part 305, one for each vehicle model, the vehicle model of the vehicle can also be recognized.
The first recognition part 316 detects a pixel group having a color density equal to or higher than a threshold, from the range other than the detection range of the first object, within the captured image, and calculates a region based on the detected pixel group. How the region is calculated will be described later.
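For illustration only, a rough sketch of the kind of processing the first recognition part 316 is described as performing, assuming a grayscale image in which darker pixels correspond to a higher color density; the names, the threshold value, and the use of a single bounding region are assumptions.

```python
import numpy as np

def dark_region_outside(gray, det_ranges, density_thresh=60, margin=8):
    """Detect dark pixels (colour density at or above the threshold) outside the
    sensor-derived detection ranges and return one enclosing region.

    gray: 2-D uint8 image; lower grey values mean higher colour density.
    det_ranges: list of (x1, y1, x2, y2) detection ranges of sensor-detected objects.
    Returns (x1, y1, x2, y2) with a small margin, or None if nothing remains.
    """
    mask = gray <= density_thresh                 # dark pixel group candidates
    for (x1, y1, x2, y2) in det_ranges:           # exclude ranges already covered by the sensor
        mask[y1:y2, x1:x2] = False
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    h, w = gray.shape
    return (max(int(xs.min()) - margin, 0), max(int(ys.min()) - margin, 0),
            min(int(xs.max()) + margin, w - 1), min(int(ys.max()) + margin, h - 1))
```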
The second recognition part 326 performs image processing on the image within the region calculated by the first recognition part 316, and recognizes the second object existing in the region. The second recognition part 326 carries out matching of the image within the region by using the edge image stored in the storage part 305, so as to recognize the second object existing in the region. More specifically, the second recognition part 326 is capable of recognizing the second object by using a scheme such as HOG (Histogram of Oriented Gradients), which is an image feature extraction scheme that expresses the luminance gradient of an image in the form of a histogram.
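For illustration only, a sketch of a HOG-based comparison between the image in the region and a stored edge image; the choice of library (scikit-image), the parameters, and the cosine-similarity decision are assumptions, not the disclosed implementation.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_similarity(region_gray, edge_template_gray):
    """Compare the image inside the candidate region with a stored edge image
    using HOG features and cosine similarity (1.0 = identical gradient layout)."""
    region = resize(region_gray, edge_template_gray.shape, anti_aliasing=True)
    f_region = hog(region, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)
    f_templ = hog(edge_template_gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    denom = np.linalg.norm(f_region) * np.linalg.norm(f_templ)
    return float(f_region @ f_templ) / denom if denom > 0 else 0.0

# A similarity above a tuned threshold (e.g. 0.7) would mean the image in the
# region matches the stored vehicle contour.
```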
The model generation part 307 calculates, from the position information, within the captured image, of the second object recognized by the object recognition part 306, information of the distance from one's vehicle to the second object and information of the position of the second object by utilizing a motion stereo technique or a depth map technique, and generates the model of the second object existing around one's vehicle. As the model, for example, a wire frame model which expresses an object or the like using only line information; a surface model which expresses an object or the like using surface information; a polygon model which expresses an object or the like using an aggregate of polygonal patches; a solid model which expresses an object or the like as a filled volume close to the actual object; or a model which encloses an object with a square and expresses the object using the 4 points of the square and the nearest neighboring point, a total of 5 points, may be employed. Such a model can retain information of the distance between one's vehicle and the model.
Based on the distances among a plurality of generated models, the model generation part 307 may handle the plurality of models as a single model by grouping them. For example, if the models are close to each other and the objects (vehicle, human being, obstacle, or the like) are of the same type, the models are generated as a single model and the generated single model is managed. This can decrease the number of models to be generated, reducing the required capacity. Whether the models are close to each other is determined depending on whether the distances among the models are equal to or smaller than a threshold.
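For illustration only, a sketch of grouping nearby models of the same type into a single model using a distance threshold; the greedy single-pass grouping and the data layout are assumptions.

```python
import numpy as np

def group_models(models, dist_thresh_m=2.0):
    """Greedily merge models of the same object type whose positions are within
    dist_thresh_m of each other, returning one representative model per group.

    models: list of dicts such as {"type": "vehicle", "position": np.array([x, y])}.
    """
    groups = []                                       # each group is a list of model indices
    for i, m in enumerate(models):
        for g in groups:
            ref = models[g[0]]
            same_type = m["type"] == ref["type"]
            close = np.linalg.norm(m["position"] - ref["position"]) <= dist_thresh_m
            if same_type and close:
                g.append(i)
                break
        else:
            groups.append([i])                        # start a new group
    merged = []
    for g in groups:
        centre = np.mean([models[i]["position"] for i in g], axis=0)
        merged.append({"type": models[g[0]]["type"], "position": centre, "members": g})
    return merged
```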
The control part 308 performs a control operation to display, on a display device such as a navigation screen or HUD (Head Up Display), the model of the object existing around one's vehicle which is generated by the model generation part 307. As the model is displayed on the display device, it is possible to visually make the driver aware of the existence of the object around his or her vehicle. The control operation of the control part 308 is not limited to displaying, and the control part 308 may perform a control operation to notify the driver of the existence of the object around his or her vehicle by sound or vibration. In this case, a model generated by the model generation part 307 is not needed, and it suffices if the position information and distance information of the object around his or her vehicle are obtained. The control part 308 may also transmit a signal to the outside so as to control the driving (for example, braking) of the vehicle based on the position information and distance information of the object existing around one's vehicle.
The hardware configuration of the driving assist apparatus 300 will now be described.
The sensor information acquisition part 301, image acquisition part 302, coincidence point detection part 303, detection range determination part 304, object recognition part 306, model generation part 307, and control part 308 are stored as programs in the storage device 360, and their functions are realized when the processing device 350 reads and executes the respective programs properly. Namely, the functions of the “parts” described above are realized by combining hardware, being the processing device 350, and software, being the programs. In other words, it is possible to say that the processing device 350 is programmed to realize the functions of the “parts” described above. Realization of these functions is not limited to a combination of hardware and software; the functions may be realized by hardware alone by implementing the programs in the processing device 350. In this manner, how the CPU, DSP, and FPGA constituting the processing device 350 perform the processing operations for realizing the functions can be designed arbitrarily. From the point of view of processing speed, for example, the detection range determination processing of the detection range determination part 304, the object recognition processing of the object recognition part 306, and the model generation processing of the model generation part 307 are preferably performed by the DSP or FPGA independently, and the processing of the sensor information acquisition part 301, image acquisition part 302, coincidence point detection part 303, and control part 308 is preferably performed by the CPU independently.
The receiver 370 is hardware that receives the sensor information or the captured image. The transmitter 380 is hardware that transmits a signal from the control part 308. The receiving function of the receiver 370 and the transmitting function of the transmitter 380 may be realized by a transmitter/receiver in which reception and transmission are integrated.
The operation of the driving assist apparatus 300 according to Embodiment 1 will be described.
First, the first recognition part 316 which constitutes the object recognition part 306 recognizes the detection range of the first object in the captured image based on the information from the detection range determination part 304 (step S41). The information from the detection range determination part 304 is, for example, the position information X1′, X2′, Y1′, and Y2′ in the captured image.
Then, the first recognition part 316 extracts a pixel group having a color density equal to or higher than the threshold in the range other than the detection range of the first object (step S42).
It suffices if this region is set so as to include the pixel group having a color density equal to or higher than the threshold.
The second recognition part 326 performs matching processing between the image in the region set by the first recognition part 316 and the edge image (step S45). The second recognition part 326 only needs to perform the matching processing on the image within the set region, instead of on the entire captured image. Therefore, the processing time can be shortened and the processing load of the driving assist apparatus 300 can be decreased.
As a result of performing the matching processing on the image within the set region, if the matching is successful, the second recognition part 326 recognizes that the second object is a vehicle (step S46). If the matching is not successful, the second recognition part 326 cannot recognize that the second object is a vehicle.
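For illustration only, a sketch of matching performed only inside the set region rather than over the entire captured image, using OpenCV template matching as a stand-in for the disclosed matching processing; the library, the score threshold, and the names are assumptions.

```python
import cv2
import numpy as np

def match_in_region(gray, region, edge_template, score_thresh=0.6):
    """Run template matching only inside the previously set region, instead of
    over the entire captured image (steps S45 and S46).

    gray: full grayscale captured image (uint8), region: (x1, y1, x2, y2),
    edge_template: stored edge image of an average vehicle contour (uint8).
    Returns True when the best match score reaches the threshold, i.e. the
    object in the region is recognized as a vehicle.
    """
    x1, y1, x2, y2 = region
    crop = gray[y1:y2, x1:x2]
    th, tw = edge_template.shape
    if crop.shape[0] < th or crop.shape[1] < tw:
        return False                                  # region too small to hold the template
    scores = cv2.matchTemplate(crop, edge_template, cv2.TM_CCOEFF_NORMED)
    return float(scores.max()) >= score_thresh
```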
The control part 308 displays, on the HUD or the like, the model generated by the model generation part 307, thereby notifying the driver of the existence of the first object and the second object existing around the driver's own vehicle. Alternatively, the control part 308 controls driving of the vehicle based on the position information and distance information of the first object and the second object (step S7).
From the foregoing, according to Embodiment 1, the detection range determination part 304 determines the detection range of the first object within the captured image, and the object recognition part 306 recognizes the second object by performing image processing on an image existing in a range other than the detection range of the first object within the captured image. As a result, the range where image processing is performed in the captured image can be narrowed, so that the processing time can be shortened.
Furthermore, the first recognition part 316 of the object recognition part 306 performs image processing on the image existing in the range other than the detection range of the first object within the captured image, that is, performs a process of extracting a pixel group having a color density equal to or higher than the threshold. The second recognition part 326 of the object recognition part 306 performs image processing only on the image within the region based on the extracted pixel group, that is, performs matching processing between the image within the region and the edge image. As a result, the processing time can be shortened, and the processing load of the driving assist apparatus 300 can be decreased. In particular, the matching processing between the captured image and the edge image is very time-consuming if it is performed on the entire captured image. Hence, performing the matching processing only on the image within the set region contributes greatly to shortening the processing time.
In the field of driving assistance technology, after an image is captured with a camera, it is important to recognize an object existing around one's vehicle quickly and to notify the driver or control the vehicle. According to Embodiment 1, the processing time for object recognition is shortened. Therefore, notification to the driver or vehicle control can be performed quickly, so that the safety of the driver can be further ensured.
Regarding the first object, which is not a target of image processing, the position information and distance information of the first object can be detected by the sensor 100. As a result, both the first object and the second object can be recognized with high precision.
Embodiment 2 of the present invention will now be described with reference to the accompanying drawings. In Embodiment 1, the second object is recognized by performing image processing on an image existing in a range other than the detection range of the first object within the captured image. In Embodiment 2, the range where image processing is performed within the captured image is further narrowed by utilizing information of the vanishing point where the lines of the road intersect toward the depth of the image.
The vanishing point notification part 309 notifies an object recognition part 306 of position information of a horizontal line (vanishing line) located at the height that includes the vanishing point in the captured image.
A first recognition part 316 of the object recognition part 306 recognizes the detection range of the first object in step S041, and then extracts a pixel group having a color density equal to or higher than the threshold, in a range other than the detection range of the first object and lower than the vanishing line (step S042).
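For illustration only, a sketch of how the extraction of Embodiment 2 could additionally exclude everything above the vanishing line; the names and the mask-based formulation are assumptions.

```python
import numpy as np

def dark_mask_below_vanishing_line(gray, det_ranges, vanishing_line_y, density_thresh=60):
    """Mask of pixels whose colour density is at or above the threshold, lying
    outside the sensor detection ranges and below the vanishing line.

    vanishing_line_y: row index of the vanishing line in the captured image
    (rows above it are excluded from image processing).
    """
    mask = gray <= density_thresh
    mask[:vanishing_line_y, :] = False          # discard everything above the vanishing line
    for (x1, y1, x2, y2) in det_ranges:         # discard the ranges already covered by the sensor
        mask[y1:y2, x1:x2] = False
    return mask
```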
From the foregoing, according to Embodiment 2, the object recognition part 306 recognizes the second object by utilizing the information of the vanishing point. Since the range of the captured image where image processing is performed can be further narrowed, the processing time can be shortened more than in Embodiment 1.
Namely, since the first recognition part 316 extracts a pixel group having a color density equal to or higher than the threshold from an image existing outside the detection range of the first object and below the vanishing line within the captured image, the range where image processing is performed can be narrowed more than in Embodiment 1, and the processing time can be shortened. The vanishing line is, as described above, a horizontal line passing through the vanishing point where the lines of the road intersect toward the depth. Therefore, by performing image processing on an image existing in the range below the vanishing line, all objects on the road can be detected thoroughly.
In the above description, the image processing, that is, the process of extracting a pixel group having a high color density, is performed on the image existing in the range below the vanishing line within the captured image. However, the image processing is not limited to this. For example, a traffic lane in the captured image may be detected, and the object recognition part 306 may perform image processing on an image included in this traffic lane. Namely, the first recognition part 316 performs the process of extracting a pixel group having a high color density from an image existing outside the detection range of the first object and included in the traffic lane within the captured image. In this way as well, the range where image processing is performed can be narrowed more than in Embodiment 1, and the processing time can be shortened.
Embodiment 3 of the present invention will be described with reference to the accompanying drawings. In Embodiment 1, the sensor information acquisition part 301 acquires sensor information of one line. In Embodiment 3, a sensor information acquisition part 301 acquires sensor information of a plurality of lines. Thus, within the captured image, the range where image processing is performed by the object recognition part 306 is further narrowed.
The sensor control device 400 is realized by, for example, a motor, and performs a control operation to swing a sensor 100 in the vertical direction.
In this case, in regard to the first object, the position information (x1, x2) can be detected by laser beam radiation with the sensor 100 in the first-line direction, but the positions of the second object and the third object cannot be detected. More specifically, the position information acquisition part 301 acquires the position information of the first object detected by radiation with the sensor 100 in the first-line direction, but cannot acquire the position information of the second object and the third object. Therefore, although the detection range determination part 304 can determine the detection range of the first object, it cannot determine the detection ranges of the second object and the third object. Accordingly, the object recognition part 306 must perform image processing on an image existing outside the detection range of the first object within the captured image, as has been described in Embodiment 1.
Meanwhile, in Embodiment 3, since the sensor control device 400 performs a control operation to swing the sensor 100 in the vertical direction, sensor information of 2 lines or more can be obtained. When a laser beam is radiated in a second-line direction, since the lower half of the body of the second object is in a color having a low color density, it reflects the laser beam sufficiently, so that position information (x3, x4) of the lower half of the body of the second object can be detected. Then, the position information acquisition part 301 can acquire the position information of the second object which is detected by radiation with the sensor 100 in the second-line direction.
An object recognition part 306 performs image processing on, within the captured image, an image existing in a range other than a composite range of the detection range of the first object and the detection range of the second object, to recognize a third object.
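For illustration only, a sketch of combining the detection ranges obtained from a plurality of scan lines into a composite range whose complement is then image-processed; the names and the mask-based formulation are assumptions.

```python
import numpy as np

def composite_exclusion_mask(image_shape, det_ranges_per_line):
    """Combine the detection ranges obtained from every scan line (first line,
    second line, ...) into one composite mask; image processing is then applied
    only where the mask is False.

    det_ranges_per_line: one list of (x1, y1, x2, y2) ranges per scan line.
    """
    h, w = image_shape[:2]
    excluded = np.zeros((h, w), dtype=bool)
    for ranges in det_ranges_per_line:
        for (x1, y1, x2, y2) in ranges:
            excluded[y1:y2, x1:x2] = True
    return excluded
```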
In the above manner, according to Embodiment 3, the sensor control device 400 performs a control operation to swing the sensor 100 in the vertical direction, and the sensor information acquisition part 301 acquires sensor information of a plurality of lines. Therefore, the number of objects that exist around one's vehicle and can be detected by the sensor increases, so that, within the captured image, the range where image processing is performed by the object recognition part 306 can be reduced, and the processing time can be shortened more than in Embodiment 1.
In the above description, the sensor control device 400 performs a control operation to swing the sensor 100 in the vertical direction, thereby acquiring sensor information of a plurality of lines. However, sensor information acquisition is not limited to this. For example, sensor information of a plurality of lines may be acquired by utilizing a plurality of sensors whose lines for acquiring sensor information are different.
The first-line direction and the second-line direction have been described as expressing the horizontal direction in the captured image. However, the first-line direction and the second-line direction are not limited to this but may express the vertical direction.
100: sensor; 200: camera; 300: driving assist apparatus; 301: sensor information acquisition part; 302: image acquisition part; 303: coincidence point detection part; 304: detection range determination part; 305: storage part; 306: object recognition part; 307: model generation part; 308: control part; 309: vanishing point notification part; 316: first recognition part; 326: second recognition part; 350: processing device; 360: storage device; 370: receiver; 380: transmitter; 400: sensor control device
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/JP2014/004287 | 8/21/2014 | WO | 00