This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2015-009750, filed Jan. 21, 2015, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a system for obstacle avoidance around a moving object.
For moving objects such as a vehicle, a radio-controlled model car, and a plane, a technique for accurately calculating the distance between obstacles and the moving object to prevent collision would be desirable. For example, a drive assist system for a vehicle visually assists a driver in determining whether the vehicle can pass along a road by displaying the vehicle in a virtual manner on a head-up display or the like.
In general, according to an embodiment, a system for obstacle avoidance around a moving object includes an image capturing unit configured to capture images in a moving direction of the object, a processing unit configured to generate three-dimensional data based on the captured images, determine positions of the obstacles in a three-dimensional space according to the three-dimensional data, and generate an image including marks indicating a region proximate to the obstacles, and a display unit configured to display the generated image.
Hereinafter, the embodiments herein will be described with reference to the drawings.
First, an image processing apparatus and a drive assist system using the same according to the first embodiment will be described with reference to the drawings.
In the present embodiment, contact of a vehicle with an obstacle is predicted in advance using an image processing apparatus.
As illustrated in
An image visualized by the image processing apparatus 100 is displayed on the display unit 200. Obstacle distance map information output from the image processing apparatus 100 is input to an obstacle determining unit 300. The obstacle determining unit 300 determines in advance the possibility of the vehicle contacting an obstacle.
The image acquiring unit 1 includes an image sensor 11 and a distance image sensor 12. The image sensor 11 outputs image data of a surrounding image including obstacles. For example, the image sensor 11 includes a camera that detects light in a visible wavelength range to capture a daytime image, a night vision camera that detects light in a near-infrared or far-infrared range to capture a night image, and the like. The distance image sensor 12 acquires information relating to a distance to an obstacle. The distance image sensor 12 includes, for example, a time-of-flight (TOF) camera that is usable both day and night, a plurality of stereo cameras that are usable both day and night, or the like.
The memory unit 2 includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, and a visualized image region 25.
The image information region 21 stores image information acquired by the image sensor 11. The distance information region 22 stores distance information acquired by the distance image sensor 12. The intermediate image region 23 stores intermediate images acquired through image processing performed by the image processing apparatus 100. The obstacle distance map region 24 stores obstacle distance map information calculated by the image processing unit 3. The visualized image region 25 stores visualized image information calculated by the image processing unit 3.
The image processing unit 3 includes a filter section 31, a restoring section 32, an acquiring section 33, a distance calculating section 34, and a visualization processing section 35.
The filter section 31 removes noise from the image information and the distance information output from the image information region 21, the distance information region 22, and the intermediate image region 23. The restoring section 32 restores three-dimensional information from the acquired image information and the distance information. The acquiring section 33, for example, extracts data corresponding to the ground surface from the restored three-dimensional information and extracts the remaining data as obstacle data. The distance calculating section 34 calculates the shortest distance to an obstacle. The visualization processing section 35 visualizes the positions of obstacles and the positional relationship between the vehicle and the obstacles.
As illustrated in
The vehicle 500 includes a drive assist system that predicts contact with an obstacle in advance. The drive assist system, for example, includes the memory unit 2, the image processing unit 3, the image sensor 11, the distance image sensor 12, the display unit 200, and the obstacle determining unit 300.
The image sensor 11 is disposed at the left-front portion of the vehicle 500 and acquires image information that includes obstacles 600a and 600b. The distance image sensor 12 is disposed at the right-front portion of the vehicle 500 and acquires distance information indicating a distance to an obstacle. The ECU 400 is disposed at the rear portion of the vehicle 500 and includes the memory unit 2 and the image processing unit 3. The obstacle determining unit 300 is disposed in the vicinity of a door 42 at the left-rear portion and receives the obstacle distance map information stored in the memory unit 2.
The display unit 200 is provided in the vicinity of the mirror 41 on the right side and displays the visualized image information stored in the memory unit 2. The display unit 200 also displays information on the possibility of contact with an obstacle, which is output from the obstacle determining unit 300. For the display unit 200, a head-up display (HUD), a monitor inside a vehicle, or the like is used.
Next, image processing of the image processing apparatus 100 will be described with reference to
As illustrated in
The restoring section 32 generates three-dimensional information (three-dimensional data) that includes three-dimensional coordinates (x, y, z) of a point group (point cloud) based on the camera image and the distance information (STEP S5).
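The embodiments do not fix a particular restoration method for STEP S5. When the distance image sensor 12 is, for example, a TOF camera with a known pinhole camera model, the point group can be recovered by back-projecting the distance image; the following minimal Python sketch illustrates this under that assumption (the intrinsic parameters fx, fy, cx, cy are hypothetical, not part of the embodiment):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (e.g., from a TOF camera) into an
    (N, 3) point cloud using an assumed pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # lateral offset per pixel
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop invalid zero-depth pixels
```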
The acquiring section 33 then performs identification of a ground surface (STEP S6), extraction of ground surface data (STEP S7), and extraction of obstacle data (STEP S8).
Next, the identification of the ground surface in STEP S6 will be described in detail with reference to
The acquiring section 33 extracts data corresponding to the ground surface from the acquired three-dimensional data and outputs the remaining data as obstacle data.
As described in , the ground surface is approximated by a plane expressed by the plane equation ax + by + cz + d = 0.
When the image acquiring unit 1 is fixed to a vehicle body or the like, the coefficients a, b, c, and d are uniquely determined, provided that the relationship between the position and angle of the image acquiring unit 1 and the ground surface is always the same. For example, as illustrated in
When the position and angle of the image acquiring unit 1 change (including when the change is large due to vibration of the vehicle or the like), the plane corresponding to the ground surface may need to be identified for each frame based on the acquired three-dimensional data. In this case, the plane containing the largest number of points may be assumed to be the ground surface, and the corresponding plane equation is identified. Since the three-dimensional data includes many points corresponding to obstacles in addition to the points corresponding to the ground surface, it may not be possible to extract the ground surface with a simple least squares method. In this case, it is preferable to use the random sample consensus (RANSAC) algorithm. In the present embodiment, the plane is extracted using RANSAC.
In the plane extraction process, the three-dimensional point group data is first input (STEP S21). A candidate plane is generated by sampling points from the point group (STEP S22), and the plane equation of the candidate plane is calculated (STEP S23). Then, one point is selected from the point group (STEP S24), and the distance d between the point and the candidate plane is calculated (STEP S25).
Then, it is determined whether the distance d is smaller than a threshold th (STEP S26). When the distance d is smaller than the threshold th, 1 is added to a score (STEP S27). When the distance d is equal to or greater than the threshold th, 0 (zero) is added to the score (STEP S28).
Then, it is determined whether the processing with respect to all three-dimensional points has been performed (STEP S29). If the processing for all points has not been performed, the process returns to STEP S24. If the processing for all points has been performed, the score is output (STEP S30). If the score is higher than the previous highest score, the highest score is updated.
Then, it is determined whether the number of sample planes is sufficient (STEP S31). If the number of the sample planes is not sufficient, the process returns to STEP S22. If the number of the sample planes is sufficient, a plane equation of a plane A with the highest score is output (STEP S32).
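For concreteness, a minimal Python sketch of this RANSAC loop is shown below; the threshold th, the number of sampled planes, and the three-point sampling strategy are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np

def ransac_ground_plane(points, th=0.05, n_planes=200, seed=0):
    """RANSAC plane extraction along the lines of STEPS S21-S32: sample
    candidate planes, score each by its number of inliers, keep the best."""
    rng = np.random.default_rng(seed)
    best_score, best_plane = -1, None
    for _ in range(n_planes):                        # STEP S31: enough sample planes?
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)               # normal of the sampled plane
        if np.linalg.norm(n) == 0:                   # skip degenerate (collinear) samples
            continue
        a, b, c = n / np.linalg.norm(n)              # unit normal
        d = -(a * p0[0] + b * p0[1] + c * p0[2])
        # STEPS S24-S28: per-point distance to the plane, thresholded 0/1 score
        dist = np.abs(points @ np.array([a, b, c]) + d)
        score = int((dist < th).sum())
        if score > best_score:                       # STEP S30: update highest score
            best_score, best_plane = score, (a, b, c, d)
    return best_plane                                # STEP S32: plane A with highest score
```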
Since, in many cases, the ground surface occupies a large part of the image captured by the image acquiring unit 1, it is possible to identify the ground surface through the RANSAC method. When a wall surface of a building or the like is incorrectly identified as the ground surface, the misidentification may be easily detected from the positional mismatch between the plane and the image acquiring unit 1 (the ground surface can be assumed not to be perpendicular to the image capturing direction of the image acquiring unit 1). For this reason, the ground surface can be identified by performing the above-described RANSAC process again after removing the plane data incorrectly identified as the ground surface.
The acquiring section 33 uses the plane equation that represents the ground surface to determine whether a point P is in the ground surface. It is determined that the point P is in the ground surface when the coordinates of the point P satisfy the plane equation or when the distance from the plane to the point P is within a certain value. More specifically, the distance h from the plane to the point P may be expressed as:
h = |ax + by + cz + d| / (a^2 + b^2 + c^2)^(1/2) (1)
When the distance h is equal to or less than a certain threshold th (h <= th), the point P is considered to be in the plane. Accordingly, when the point P is in the ground surface, the coordinates of the point P are output as ground surface data. When the point P is not in the ground surface, the coordinates of the point P are output as obstacle data.
As illustrated in
When the distance d is smaller than the threshold th, the coordinate of the point P is added to the ground surface data (STEP Sb). When the distance d is equal to or greater than the threshold th, the coordinate of the point P is added to the obstacle data (STEP Sc).
Then, it is determined whether the processing has been performed with respect to all three-dimensional points (STEP S29). If the processing has not been performed for all points, the process returns to STEP S24. If the processing has been performed for all points, the obstacle data is output (STEP Sd).
The acquiring section 33 calculates the distance to the plane from every extracted three-dimensional point and determines whether each point is in the plane. Since this calculation can be performed independently for every point, it can be parallelized. Therefore, the processing can be accelerated by using a multi-core processor or general-purpose computing on graphics processing units (GPGPU).
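Because each point is tested independently, the ground/obstacle split of STEPS Sa to Sd vectorizes directly. A sketch of the data-parallel form (the array layout is an assumption made for illustration):

```python
import numpy as np

def split_ground_and_obstacles(points, plane, th=0.05):
    """Classify every point at once using equation (1); with a unit
    normal (a, b, c), the denominator of equation (1) equals 1."""
    a, b, c, d = plane
    h = np.abs(points @ np.array([a, b, c]) + d)  # distance of all points in parallel
    ground = points[h <= th]                      # STEP Sb: ground surface data
    obstacles = points[h > th]                    # STEP Sc: obstacle data
    return ground, obstacles
```

The same elementwise pattern maps directly onto a GPGPU kernel or onto the dedicated circuit described next.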
Next, a specific circuit configuration of the acquiring section 33 that determines whether a point P is in the ground surface (plane coincidence calculation processing) will be described with reference to
As illustrated in
The point group data dividing unit 51 receives data of a three-dimensional point group (point cloud), divides the data into a plurality of point data items, each represented by three-dimensional coordinates Pi (x, y, z) (i = 0 to n), and sends each coordinate Pi (x, y, z) to the corresponding one of the arithmetic units U0 to Un.
As illustrated in
The result combining unit 52 receives the data output from each of the arithmetic units U0 to Un and combines the received data into a single unit of data (for example, the 0/1 results are arranged in order from 0 to n to form n-bit data). The digits of the n-bit data may also be summed, and the sum may be output as a score. Using these results, each point in the point group is determined to belong to the ground surface or to an obstacle.
The plane coincidence calculation processing unit 50 includes a plurality (n) of arithmetic units U arranged in parallel. Compared to the case in which a single general-purpose arithmetic device performs the arithmetic processing for all points, the identification of the ground surface and the extraction of obstacles may be accelerated. As a result, the power consumption of the processing may be decreased.
Next, the distance calculating section 34 performs obstacle distance calculation (STEP S9 (refer to
As illustrated in
A distance d between a point P and a point O of the obstacle data is calculated (STEP S114). For example, when the position information is three-dimensional, the distance d between the point P (x0, y0, z0) and an obstacle point O (x1, y1, z1) may be expressed as:
d = [(x0 − x1)^2 + (y0 − y1)^2 + (z0 − z1)^2]^(1/2) (2)
It is determined whether the distance d is less than a shortest distance mini.d (STEP S115). If the distance d is less than the shortest distance mini.d, the distance d is set as the shortest distance mini.d (STEP S116). If the distance d is not less than the shortest distance mini.d, the distance d is ignored (STEP S117). Then, it is determined whether the processing with respect to all points from the point group of the obstacle data has been completed (STEP S118). If the processing has not been completed, the process returns to STEP S113. If the processing has been completed, the shortest distance mini.d to the obstacles is output (STEP S119).
Then, it is determined whether the processing with respect to all small regions has been completed (STEP S120). If the processing has not been completed, the process returns to STEP S112. If the processing has been completed, a map of obstacles that are closest to the image acquiring unit 1 in each small region is output (STEP S121).
In the above step, pseudo code along the following lines may be used when the map is generated.
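Concretely, a minimal Python sketch of STEPS S111 to S121 follows; holding the obstacle data as an (N, 3) array and representing each small region by a single reference point P are both assumptions made for illustration, not the embodiment's actual data layout:

```python
import numpy as np

def obstacle_distance_map(obstacle_points, region_points):
    """For each small region (reference point P), output the shortest
    distance mini.d to any obstacle point O, per equation (2)."""
    dist_map = np.empty(len(region_points))
    for i, p in enumerate(region_points):                # STEP S112: next small region
        d = np.linalg.norm(obstacle_points - p, axis=1)  # STEP S114: distances d
        dist_map[i] = d.min()                            # STEPS S115-S119: shortest mini.d
    return dist_map                                      # STEP S121: map of closest obstacles
```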
In the above calculation, the number of calculations increases as the number of the divided small regions increases and as the number of obstacle points increases. However, because the calculation is simple, it can be accelerated by using dedicated hardware and parallel calculation techniques such as GPGPU. Therefore, it is preferable to use these techniques to accelerate the calculation.
As illustrated in
The point group data dividing unit 51 receives data of the three-dimensional point group (point cloud) corresponding to obstacles, divides the data into a plurality of point data, each corresponding to a three-dimensional coordinate Pi (x, y, z) (i=0 to n), and sends the divided data to the arithmetic units UU0 to UUn, respectively.
As illustrated in
The shortest-distance selecting unit 53 receives the information of the distance d output from each of the arithmetic units UU0 to UUn. The shortest-distance selecting unit 53 selects the shortest distance to the obstacles at the point P.
The shortest-distance calculation processing unit 60 is capable of calculating the shortest distance to the obstacles at the point P quickly.
By using the above-described circuitry while moving the point P (changing the coordinates of the point P), the shortest-distance calculation processing unit 60 calculates the shortest distance to the obstacles for each of the small regions of the area in which the obstacles are to be recognized.
Next, the distance calculating section 34 generates an obstacle distance map using the obtained shortest distance to obstacles as the representative value of each small region (STEP S10 (refer to
As described above, it is preferable that the arithmetic processing in the acquiring section 33 and the distance calculating section 34 be performed by dedicated hardware. However, such arithmetic processing requires a plurality of calculating circuits including multipliers or the like. For this reason, considering the overall balance of the system, including reusability in other calculation processing, implementation area, power efficiency, and the like, the arithmetic processing may instead be performed by software using a digital signal processor (DSP) or a graphics processing unit (GPU).
Next, the visualization processing section 35 performs visualization processing (STEP S11 (refer to
The visualization processing section 35 generates an isoline map based on the obtained obstacle distance map.
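For example, assuming the obstacle distance map is a two-dimensional grid, the isolines can be drawn with a standard contouring routine; the grid spacing and contour levels below are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_isoline_map(dist_map, cell_size=0.1):
    """Render the obstacle distance map as isolines (contour lines of
    equal distance to the nearest obstacle)."""
    h, w = dist_map.shape
    x = np.arange(w) * cell_size                  # meters per small region (assumed)
    y = np.arange(h) * cell_size
    cs = plt.contour(x, y, dist_map, levels=[0.9, 1.8, 3.6])
    plt.clabel(cs, inline=True, fontsize=8)       # label each isoline with its distance
    plt.xlabel("x [m]")
    plt.ylabel("y [m]")
    plt.show()
```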
The visualization processing section 35 may also be applied to three-dimensional data.
The obstacle determining unit 300 determines whether a moving object (a vehicle, a person, or the like) is capable of passing between the obstacles without contact. When the width of the moving object is set as D, the moving object may contact obstacles located within a distance D/2.
For example, when the width of a vehicle serving as the moving object is 180 cm, and 90 cm, half of that value, is set in the visualization processing section 35, a user can recognize whether the moving object can pass between the obstacles. In addition, if the same value is set in the obstacle determining unit 300, a driver can be notified of whether the vehicle can pass between the obstacles.
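The 180 cm example reduces to a one-line test; a sketch (the function name and sample values are illustrative):

```python
def is_passable(clearance_m, vehicle_width_m=1.8):
    """A location is passable when the shortest obstacle distance there
    is at least half the moving object's width D."""
    return clearance_m >= vehicle_width_m / 2.0

print(is_passable(0.95))  # True: 95 cm of clearance exceeds D/2 = 90 cm
print(is_passable(0.80))  # False: the vehicle body would contact an obstacle
```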
As described above, in the image processing apparatus and the drive assist system using the same, the image acquiring unit 1, the memory unit 2, and the image processing unit 3 are included in the image processing apparatus 100. The image acquiring unit 1 includes an image sensor 11 and a distance image sensor 12. The memory unit 2 includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, and a visualized image region 25. The image processing unit 3 includes the filter section 31, the restoring section 32, the acquiring section 33, the distance calculating section 34, and the visualization processing section 35. The drive assist system is mounted on the vehicle 500. The drive assist system includes the memory unit 2, the image processing unit 3, the image sensor 11, the distance image sensor 12, the display unit 200, and the obstacle determining unit 300. The drive assist system visualizes the obstacle information.
With such a drive assist system, a driver is capable of predicting in advance whether the vehicle will contact an obstacle. The driver is capable of driving safely since a passable route can be selected in advance (before reaching an obstacle).
In addition, in the present embodiment, a vehicle driven by the driver is set as the moving object. However, the moving object may be a radio-controlled moving object, an airplane, people or animals in motion, a sailing ship, or the like.
In addition, in the present embodiment, the image sensor 11 is included in the image acquiring unit 1. However, the series of processing may be performed using only three-dimensional shape information, without visualized information. In this case, the image sensor 11 may be omitted and only the distance image sensor 12 may be used.
In addition, to acquire the three-dimensional information, a one-point range finding type TOF sensor or line-type TOF sensor may be used. In this case, the obtained information is point or line information, not an image. However, such information can be considered as image information and such a sensor can be included in the image acquiring unit 1.
Next, an image processing apparatus according to a second embodiment will be described with reference to the drawings.
Hereinafter, with regard to the same component as that in the first embodiment, the same symbol will be used and detailed description thereof will be omitted, and only the different component will be described.
As illustrated in
The memory unit 2a includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, and a passable route information region 26.
The image processing unit 3a includes a filter section 31, a restoring section 32, an acquiring section 33, a distance calculating section 34, and a passability determining section 36.
The passability determining section 36 combines the obstacle distance map generated by the image processing unit 3a with passable route information obtained from published map information or the like, and determines, based on the combined data, whether a route that the moving object is going to use is impassable due to an obstacle.
The passability determining section 36 determines that a route is passable as illustrated in
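A sketch of this combination, assuming the route obtained from map information has been rasterized into cell indices of the obstacle distance map (that rasterization, and the D/2 threshold, are assumptions for illustration):

```python
def classify_route(route_cells, dist_map, half_width=0.9):
    """Mark a route impassable when any cell along it has less
    clearance than half the vehicle width (D/2).
    route_cells: e.g., a list of (row, col) indices along the route."""
    for cell in route_cells:
        if dist_map[cell] < half_width:   # an obstacle blocks this part of the route
            return "impassable"
    return "passable"
```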
A warning unit 700, for example, is disposed on the right-front portion of a vehicle (refer to
More specifically, the warning unit 700 outputs the warning to a driver based on the passable route information stored in the passable route information region 26 when, for example, the route set by a car navigation system or the like is impassable. The warning, for example, may be displayed on a car navigation screen or in the cockpit, and a sound (or a voice) may additionally be generated.
As described above, in the image processing apparatus and the drive assist system using the same, the image acquiring unit 1, the memory unit 2a, and the image processing unit 3a are included in the image processing apparatus 100a. The image acquiring unit 1 includes an image sensor 11 and a distance image sensor 12. The memory unit 2a includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, and a passable route information region 26. The image processing unit 3a includes a filter section 31, a restoring section 32, an acquiring section 33, a distance calculating section 34, a visualization processing section 35, and a passability determining section 36. The drive assist system, for example, includes the memory unit 2a, the image processing unit 3a, the image sensor 11, the distance image sensor 12, the display unit 200, the obstacle determining unit 300, and the warning unit 700. The drive assist system warns a driver of an impassable route.
As a result, in the present embodiment, the same effect as in the first embodiment may be obtained.
Next, an image processing apparatus according to the third embodiment will be described with reference to the drawings.
Hereinafter, with regard to the same component as that in the first embodiment, the same symbol will be used and detailed description thereof will be omitted, and only the different component will be described.
As illustrated in
The memory unit 2b includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, a passable route information region 26a, and an obstacle movement predicting information region 26b.
The obstacle movement predicting information region 26b stores the passable route information and impassable route information output from the passability determining section 36, and information indicating whether an obstacle is a still object or a moving object. If the obstacle is a moving object, information including its proceeding direction, proceeding speed, and the like is also stored. This information is calculated using the image acquiring unit 1 and the image processing unit 3a.
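The embodiment does not detail how the proceeding direction and speed are calculated; one simple possibility is frame-to-frame differencing of an obstacle's centroid, sketched below under that assumption (the speed threshold is likewise illustrative):

```python
import numpy as np

def estimate_obstacle_motion(centroid_prev, centroid_curr, dt, min_speed=0.1):
    """Estimate whether an obstacle is moving, and if so its proceeding
    speed and direction, from its centroid in two consecutive frames
    taken dt seconds apart."""
    velocity = (np.asarray(centroid_curr) - np.asarray(centroid_prev)) / dt
    speed = float(np.linalg.norm(velocity))
    if speed < min_speed:                 # below threshold: treat as a still object
        return False, 0.0, None
    return True, speed, velocity / speed  # moving, speed [m/s], unit direction
```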
The obstacle movement predicting information region 26b outputs the obstacle movement prediction information to an automatic driving controlling unit 800 (STEP S17 (refer to
The automatic driving controlling unit 800 additionally determines whether an obstacle moves (whether the obstacle disappears) based on the obtained passable and impassable routes. For example, if an obstacle does not move for a long period of time, a route along which the vehicle makes a detour and reaches a destination is selected and automatic driving is performed (STEP S18 (refer to
As illustrated above, in the image processing apparatus and the drive assist system using the same, the image acquiring unit 1, the memory unit 2b, and the image processing unit 3a are included in the image processing apparatus 100b. The memory unit 2b includes an image information region 21, a distance information region 22, an intermediate image region 23, an obstacle distance map region 24, a passable route information region 26a, and an obstacle movement predicting information region 26b. The drive assist system, for example, includes the memory unit 2b, the image processing unit 3a, the image sensor 11, the distance image sensor 12, the display unit 200, the obstacle determining unit 300, and the automatic driving controlling unit 800. The drive assist system controls automatic driving of a vehicle.
As a result, in the present embodiment, the same effect as in the first embodiment is obtained.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Exemplary embodiments herein are considered to include the configurations described in the following appendix.
Appendix 1. A drive assist system including an image sensor that acquires an image of the surroundings including obstacles, a distance image sensor that acquires information of a distance to the obstacle, a restoring unit that restores three-dimensional information from the obtained image or the distance information, an acquiring unit that acquires obstacle data from the three-dimensional information, a distance calculating unit that calculates an obstacle distance map regarding a moving object by calculating the shortest distance from a vehicle driven by a driver to the obstacle, a visualization processing unit that visualizes the obstacle distance map by using contour lines, and a display unit that displays the visualized images.
Appendix 2. A drive assist system including an image sensor that acquires an image of the surroundings including obstacles, a distance image sensor that acquires the distance to the obstacle, a restoring unit that restores three-dimensional information from the obtained image or the distance information, an acquiring unit that acquires obstacle data from the three-dimensional information, a distance calculating unit that calculates an obstacle distance map regarding a moving object by calculating the shortest distance from a vehicle driven by a driver to the obstacle, and an obstacle determining device that determines in advance the possibility of the vehicle contacting the obstacle.
Appendix 3. A drive assist system including an image sensor that acquires an image of the surroundings including obstacles, a distance image sensor that acquires the distance to the obstacle, a restoring unit that restores three-dimensional information from the obtained image or the distance information, an acquiring unit that acquires obstacle data from the three-dimensional information, a distance calculating unit that calculates an obstacle distance map regarding a moving object by calculating the shortest distance from a vehicle driven by a driver to the obstacle, a passability determining unit that determines passable and impassable routes, and a warning unit that warns the driver, based on the passable route information calculated by the passability determining unit, when the route to be traveled becomes impassable.
Appendix 4. A drive assist system including an image sensor that acquires an image of the surroundings including obstacles, a distance image sensor that acquires the distance to the obstacle, a restoring unit that restores three-dimensional information from the obtained image or the distance information, an acquiring unit that acquires obstacle data from the three-dimensional information, a distance calculating unit that calculates an obstacle distance map regarding a moving object by calculating the shortest distance from a vehicle driven by a driver to the obstacle, a passability determining unit that determines passable and impassable routes and determines whether the obstacle is a moving object, and an automatic driving control unit that performs automatic driving by additionally determining, based on the obtained passable and impassable routes, whether the obstacle moves over a long period of time and, if the obstacle does not move for a long period of time, selecting a route along which the vehicle makes a detour and reaches a destination.
Appendix 5. A drive assist system according to any one of Appendix 1 to Appendix 4, in which the image sensor includes a camera that detects light in a visible wavelength range to capture a daytime image and a night vision camera that detects light in a near-infrared or far-infrared range to capture a night image.
Appendix 6. A drive assist system according to any one of Appendix 1 to Appendix 4, in which the distance image sensor includes a time-of-flight (TOF) camera that is usable both day and night or a plurality of cameras that are usable both day and night.