The present invention relates to an environment recognition system and an environment recognition method for generating a detailed map of a route of a vehicle.
A self-driving vehicle that drives automatically in places such as roads and warehouses reaches its destination by following a preset route. However, temporary obstacles such as parked vehicles and fallen trees are not included in the map information used to set the route, and, when the map information is old, it may represent road structures differently from the actual ones. Therefore, there is no guarantee that the vehicle can actually follow the route set using the map information. In addition, when the map information used to set the route is a simplified map in which road structures are represented as nodes and links, the areas of the road available to the vehicle (e.g., the road width and the number of lanes) may not be recorded precisely, so that it is not possible to determine in advance where on a wide road the vehicle should drive. Therefore, an actual self-driving vehicle needs to detect the drivable area around the subject vehicle using an onboard camera or an onboard environment recognition sensor such as LiDAR, and determine its course while generating a detailed map around the subject vehicle in real time.
A known vehicle of this type detects information about the road ahead in places where visibility is poor, e.g., before a curve or an intersection, by using an image captured by a traffic camera. For example, PTL 1 discloses a vehicle vision assistance system that obtains an image corresponding to a blind spot from a camera at an intersection (hereinafter referred to as a "traffic camera"). FIG. 7 in PTL 1, for example, discloses a method for complementing the image of a blind spot by converting an image captured by the traffic camera into an image from the viewpoint of an onboard camera, and synthesizing the resultant image with an image captured by the onboard camera.
In that literature, it is assumed that the blind spot of the onboard camera is formed by a fixed obstacle (e.g., a building) whose position relative to the traffic camera remains unchanged. Assuming that a vehicle is at a predetermined position, the area corresponding to the blind spot of that vehicle's onboard camera can be identified from an image captured by the traffic camera. Therefore, it is not particularly difficult for the traffic camera to autonomously transmit the image corresponding to the blind spot of the onboard camera to a vehicle located at the predetermined position as seen from the traffic camera (see, for example, 5235 in FIG. 5 of the same literature).
PTL 1: JP 2009-70243 A
When there is a preceding vehicle ahead of a self-driving vehicle that generates a detailed map in real time using an onboard camera, the road surface ahead of the preceding vehicle becomes a blind spot for the onboard camera. Therefore, there is a problem in that the self-driving vehicle cannot generate a detailed map of the road ahead of the preceding vehicle, and cannot determine its course beyond the preceding vehicle.
It might be possible to apply PTL 1 to this problem if a traffic camera could also transmit the image of such a blind spot autonomously. However, not only do the relative position and relative speed between a preceding vehicle (a moving obstacle) and the subject vehicle change from moment to moment, but the size of the preceding vehicle's body (overall length, width, height, shape, etc.) is also not constant. Therefore, it is extremely difficult for a traffic camera to identify the blind spot of the onboard camera and to transmit the image corresponding to the blind spot to the self-driving vehicle autonomously.
Therefore, an object of the present invention is to provide an environment recognition system capable of generating a detailed map for self-driving by complementing a blind spot of an onboard camera, the blind spot being formed by a moving obstacle such as a preceding vehicle, with an image captured by a traffic camera.
Features of the present invention for solving the above problems are, for example, as follows: an environment recognition system that recognizes an environment external to a subject vehicle by synthesizing an image captured by an onboard camera with an image captured by an external camera, the environment recognition system including: the onboard camera that captures an image of an area in front of the subject vehicle; an onboard communicating unit that communicates with the external camera; a front moving object detecting unit that detects a moving object in front, based on the image captured by the onboard camera; a blind spot extracting unit that extracts a blind spot created by the moving object; an image converting unit that converts an image captured by and received from the external camera into an image from a viewpoint of the onboard camera; an image synthesizing unit that synthesizes the image whose viewpoint has been converted by the image converting unit into the blind spot included in the image captured by the onboard camera; and a detailed map generating unit that generates a detailed map of the area ahead of the subject vehicle based on the image synthesized by the image synthesizing unit.
With the environment recognition system according to the present invention, it is possible to smoothly determine a drivable area of a road surface by complementing a blind spot of an onboard camera, formed by a moving obstacle, such as a preceding vehicle, with an image captured by a traffic camera, and generating a detailed map for self-driving. Problems, configurations, and effects other than those described above will be clarified by the following description of embodiments.
Some embodiments of an environment recognition system according to the present invention will now be explained using drawings and the like. Note that the following description provides specific examples of the content of the present invention, and the present invention is not limited to these descriptions. Various changes and modifications may be made by those skilled in the art within the scope of the technical idea disclosed in the present specification. Across all of the drawings for explaining the present invention, components having the same function are denoted by the same reference numerals, and redundant explanations thereof are sometimes omitted.
To begin with, an environment recognition system 1 according to a first embodiment of the present invention will be explained with reference to the drawings.
<Traffic Camera 2>
The traffic camera 2 is an external camera installed near an intersection, and includes an image capturing unit 20, an image transmitting unit 21, and an on-road communicating unit 22. The image capturing unit 20 is a monocular camera or a stereo camera for capturing an image of a road. The image transmitting unit 21 is a computer that, upon request from the environment recognition system 1, generates transmission data by losslessly compressing an image captured by the image capturing unit 20 together with the camera parameters at the time the image was captured. The camera parameters include internal parameters such as the angle of view, the number of pixels, and the focal length, and external parameters such as the camera position, the height above the road, and the viewing direction. The on-road communicating unit 22 is a communication device for transmitting the transmission data to the environment recognition system 1.
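For illustration only, the transmission data described above can be modeled as a small structure bundling the image with the camera parameters valid at capture time. The following minimal Python sketch uses zlib as one example of a lossless codec; every name and the wire format in it are assumptions of this sketch, not taken from the source.

```python
from dataclasses import dataclass, asdict
import json
import zlib


@dataclass
class CameraParameters:
    # Internal parameters of the image capturing unit 20
    angle_of_view_deg: float
    pixels: tuple            # (width, height)
    focal_length_mm: float
    # External parameters
    position: tuple          # (x, y) installation coordinates on the map
    height_m: float          # height above the road surface
    view_direction_deg: float


def build_transmission_data(image_bytes: bytes, params: CameraParameters) -> bytes:
    """Bundle a losslessly compressed image with the camera parameters at
    capture time (hypothetical wire format: 4-byte header length,
    JSON header, compressed pixels)."""
    header = json.dumps(asdict(params)).encode("utf-8")
    return len(header).to_bytes(4, "big") + header + zlib.compress(image_bytes)
```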
<Environment Recognition System 1>
The environment recognition system 1 is an onboard system for recognizing the external environment of the self-driving vehicle V0; it detects obstacles and generates a detailed map to be used in determining the course of the self-driving vehicle V0. As mentioned earlier, in order to implement self-driving, it is necessary to keep determining the passable area of the road. As a precondition, therefore, the environment recognition system 1 keeps recognizing the shape of the road surface based on the image captured by the onboard camera 10. When a blind spot is formed by a moving obstacle, such as a preceding vehicle V1, the environment recognition system 1 according to the present embodiment complements the blind spot by clipping a part of an image captured by the traffic camera 2 and synthesizing that part with the image captured by the onboard camera 10. The environment recognition system 1 then keeps determining the passable area of the road and creating a detailed map, based on the synthesized image.
As illustrated in the drawings, the environment recognition system 1 includes the onboard camera 10, a front moving object detecting unit 11, an onboard communicating unit 12, a blind spot extracting unit 13, an image converting unit 14, an image synthesizing unit 15, a simplified map 16, and a detailed map generating unit 17.
The onboard camera 10 is a monocular camera or a stereo camera for capturing an image of the direction in which the vehicle is heading. The front moving object detecting unit 11 detects a moving obstacle, such as a vehicle or a pedestrian, from an image captured by the onboard camera 10. The onboard communicating unit 12 is a communication device for communicating with the traffic camera 2. The blind spot extracting unit 13 extracts, from an image captured by the traffic camera 2, the part to be used for complementing the blind spot in the image captured by the onboard camera 10. The image converting unit 14 performs viewpoint conversion on the image extracted by the blind spot extracting unit 13 so that it matches the viewpoint of the onboard camera 10. The image synthesizing unit 15 synthesizes the image captured by the onboard camera 10 with the image corresponding to the blind spot, resulting from the viewpoint conversion by the image converting unit 14. The detailed map generating unit 17 generates a detailed map representing the details of the drivable area based on the image synthesized by the image synthesizing unit 15 and on the simplified map 16. Note that the simplified map 16 records nodes and links indicating road structures, as well as the camera parameters of the traffic cameras 2 at different locations (e.g., where the cameras are installed and the ranges they capture).
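To make the division of labor among these units concrete, the sketch below expresses them as a bundle of callable interfaces; the processing-loop sketch given later in the flowchart section consumes this bundle. All names are illustrative assumptions, not taken from the source.

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Optional, Tuple


@dataclass
class RecognitionUnits:
    """Callable interfaces loosely mirroring units 11-17 (illustrative)."""
    detect: Callable[[Any], List[Any]]                 # front moving object detecting unit 11
    extract_blind_spot: Callable[[Any, List[Any]], Optional[Any]]  # blind spot extracting unit 13
    pick_camera: Callable[[Any, Any], Any]             # camera lookup via the simplified map 16
    request_image: Callable[[Any], Tuple[Any, Any]]    # onboard communicating unit 12
    clip: Callable[[Any, Any], Any]                    # clipping the required region
    convert_viewpoint: Callable[[Any, Any], Any]       # image converting unit 14
    synthesize: Callable[[Any, Any, Any], Any]         # image synthesizing unit 15
    make_map: Callable[[Any, Any], Any]                # detailed map generating unit 17
```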
<Specifics of Processing Performed by Environment Recognition System 1>
Specifics of the processing performed by the environment recognition system 1 in a specific situation will now be explained with reference to the drawings, which contrast the image captured by the onboard camera 10 alone, in which the preceding vehicle V1 hides part of the road surface, with the complemented image in which the blind spot has been filled in.
<Flowchart of Environment Recognition System 1>
Processing by the environment recognition system 1 will now be explained step by step with reference to the flowchart.
To begin with, in step S1, the onboard camera 10 captures an image of the area ahead. In step S2, the front moving object detecting unit 11 detects an obstacle on the road in the direction in which the vehicle is heading, from the image captured by the onboard camera 10; the preceding vehicle V1 is detected as an example. In step S3, it is determined whether the detected obstacle creates a blind spot on the road surface.
If there is a blind spot on the road, the environment recognition system 1 requests a captured image from the traffic camera 2A in step S4. Note that, because the camera parameters (position, orientation, and the like) of each traffic camera are recorded in the simplified map 16, the environment recognition system 1 can request the captured image from the traffic camera 2A, whose viewpoint is closest to the line of sight of the onboard camera 10, based on the current position of the self-driving vehicle V0 and the simplified map 16.
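As a sketch of how that selection might work, assuming the simplified map 16 records each camera's 2-D position and viewing direction (the field names and cost weights below are hypothetical):

```python
import math


def select_traffic_camera(cameras, vehicle_pos, vehicle_heading_deg, max_range_m=200.0):
    """Pick the registered traffic camera whose viewpoint best matches the
    onboard camera's line of sight. `cameras` is a list of entries read
    from the simplified map 16, e.g. {"id": "2A", "pos": (x, y), "dir_deg": d}."""
    best, best_score = None, float("inf")
    for cam in cameras:
        dx = cam["pos"][0] - vehicle_pos[0]
        dy = cam["pos"][1] - vehicle_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_range_m:
            continue  # too far away to cover the road ahead of the subject vehicle
        # Smallest angular difference between the camera's viewing direction
        # and the vehicle's heading; small values mean similar lines of sight.
        ang = abs((cam["dir_deg"] - vehicle_heading_deg + 180.0) % 360.0 - 180.0)
        score = dist + 2.0 * ang  # simple weighted cost combining distance and angle
        if score < best_score:
            best, best_score = cam, score
    return best
```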
In step S5, the environment recognition system 1 receives the captured image from the traffic camera 2A. In step S6, the blind spot extracting unit 13 extracts the part required for synthesis from the received image. In step S7, the image converting unit 14 performs viewpoint converting processing that converts the image extracted by the blind spot extracting unit 13 into an image from the same viewpoint as the onboard camera 10. In this viewpoint converting processing, the image size is first converted so that the internal parameters of the traffic camera 2A are mapped to those of the onboard camera 10, and the viewpoint of the traffic camera 2A's image is then converted so that the external parameters of the traffic camera 2A are matched to those of the onboard camera 10. The viewpoint converting processing is performed using a known method such as the one described in PTL 1. In step S8, the image synthesizing unit 15 synthesizes the image whose viewpoint has been converted by the image converting unit 14 into the image captured by the onboard camera 10.
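The exact conversion of PTL 1 is not reproduced here, but for points on the road surface such a viewpoint conversion is commonly realized with a homography induced by the ground plane. Below is a minimal numpy sketch assuming pinhole models with intrinsic matrices K and world-to-camera extrinsics (R, t) for both cameras, and the road approximated as the world plane z = 0; all of these are assumptions of the sketch.

```python
import numpy as np


def ground_plane_homography(K_src, R_src, t_src, K_dst, R_dst, t_dst):
    """Homography mapping road-surface pixels of the source camera (traffic
    camera 2A) into the destination camera (onboard camera 10). R is a 3x3
    world-to-camera rotation, t a 3-vector translation; the road is the
    world plane z = 0."""
    def plane_to_pixels(K, R, t):
        # For world points (X, Y, 0, 1), the projection reduces to K [r1 r2 t].
        return K @ np.column_stack((R[:, 0], R[:, 1], t))

    P_src = plane_to_pixels(K_src, R_src, t_src)
    P_dst = plane_to_pixels(K_dst, R_dst, t_dst)
    # Source pixels -> plane coordinates -> destination pixels.
    H = P_dst @ np.linalg.inv(P_src)
    return H / H[2, 2]  # normalize the overall scale
```

The resulting matrix could then be handed to a standard resampler such as cv2.warpPerspective to re-render the clipped road-surface region in the onboard camera's image plane before it is pasted into the blind spot.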
Finally, in step S9, if there is no obstacle, the detailed map generating unit 17 generates detailed map data using the image captured by the onboard camera 10; if there is an obstacle, the detailed map generating unit 17 generates the detailed map using the image synthesized by the image synthesizing unit 15 (see the drawings).
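Steps S1 to S9 can be condensed into the following control loop, which consumes the RecognitionUnits bundle sketched earlier; this is a structural sketch of the flowchart, not the actual implementation.

```python
def recognition_cycle(units, frame, vehicle_state, simplified_map):
    """One pass of the flowchart. `frame` is the onboard image from S1;
    all behavior is delegated to the injected `units` callables."""
    obstacles = units.detect(frame)                          # S2: moving objects ahead
    blind = units.extract_blind_spot(frame, obstacles)       # S3: blind spot on the road?
    if blind is None:
        return units.make_map(frame, simplified_map)         # S9: no obstacle
    cam = units.pick_camera(simplified_map, vehicle_state)   # S4: choose request target
    image, cam_params = units.request_image(cam)             # S5: receive captured image
    patch = units.clip(image, blind)                         # S6: extract the needed part
    patch = units.convert_viewpoint(patch, cam_params)       # S7: match onboard viewpoint
    fused = units.synthesize(frame, patch, blind)            # S8: fill the blind spot
    return units.make_map(fused, simplified_map)             # S9: generate detailed map
```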
With the environment recognition system 1 according to the present embodiment described above, even if the vehicle V1 driving ahead of the subject vehicle creates a blind spot on the road surface in the direction in which the subject vehicle is heading, a synthesized image with no blind spot can be created by synthesizing the image captured by the traffic camera 2 with the image captured by the onboard camera 10, so that a detailed map for determining a route can be created smoothly.
An environment recognition system according to a second embodiment of the present invention will now be explained. Redundant explanations of parts that are common with those in the first embodiment will be omitted.
In the first embodiment, the environment recognition system 1 requests transmission of the entire captured image from the traffic camera 2, and the blind spot extracting unit 13 in the environment recognition system 1 extracts the needed area from that image. By contrast, in the present embodiment, the environment recognition system 1 requests the traffic camera 2 to clip the part the environment recognition system 1 needs from the captured image and to transmit only that part. In this configuration, the blind spot extracting unit 13 is not required, and is therefore omitted from the environment recognition system 1 of the present embodiment.
In order to cause the traffic camera 2 to clip the image required by the environment recognition system 1, in the present embodiment, when the front moving object detecting unit 11 detects the vehicle V1, it determines the position and the size of the vehicle V1 from the image captured by the onboard camera 10. The image converting unit 14 then converts the position and the size of the vehicle V1 in the image captured by the onboard camera 10 into coordinates and a size in the image captured by the traffic camera 2. When transmitting a request for an image to the traffic camera 2, the onboard communicating unit 12 also transmits these coordinates and this size as the range where the image is to be clipped, as sketched below.
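One plausible realization of that conversion reuses a ground-plane homography such as the one sketched in the first embodiment, applied in the opposite direction to the corners of the detected vehicle's bounding box; the request then carries the enclosing rectangle. All names and the request format below are assumptions of this sketch.

```python
import numpy as np


def bbox_to_clip_request(bbox_onboard, H_onboard_to_traffic):
    """Map a bounding box (x0, y0, x1, y1) detected in the onboard image
    into a clipping rectangle in the traffic camera's image. The homography
    points from the onboard camera to the traffic camera, i.e., the reverse
    of the direction used for synthesis."""
    x0, y0, x1, y1 = bbox_onboard
    corners = np.array([[x0, y0, 1.0], [x1, y0, 1.0],
                        [x1, y1, 1.0], [x0, y1, 1.0]]).T
    mapped = H_onboard_to_traffic @ corners
    mapped = (mapped[:2] / mapped[2]).T  # perspective divide, one row per corner
    xs, ys = mapped[:, 0], mapped[:, 1]
    # Axis-aligned rectangle enclosing the projected corners (hypothetical format).
    return {"clip_x": float(xs.min()), "clip_y": float(ys.min()),
            "clip_w": float(xs.max() - xs.min()),
            "clip_h": float(ys.max() - ys.min())}
```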
When the on-road communicating unit 22 in the traffic camera 2 receives the information from the onboard communicating unit 12, the image clipping unit 23 clips the image based on the image clipping range received from the environment recognition system 1. The clipped image is then sent to the onboard communicating unit 12 in the environment recognition system 1, via the image transmitting unit 21.
In this manner, the environment recognition system 1 receives the image clipped on the traffic camera 2 side, causes the image converting unit 14 to perform viewpoint conversion so that the image matches the viewpoint of the onboard camera 10, and causes the image synthesizing unit 15 to synthesize the images.
As described above, with the environment recognition system 1 according to the present embodiment, in addition to the effects achieved by the first embodiment, the amount of data transmitted from the traffic camera 2 to the environment recognition system 1 can advantageously be reduced.
An environment recognition system according to a third embodiment of the present invention will now be explained. Redundant explanations of parts that are common with those in the embodiments described above will be omitted.
When the traffic is heavy, the image captured by the traffic camera 2 itself contains passing vehicles that hide parts of the road surface. Therefore, in the present embodiment, an image resulting from deleting the portion P0, which corresponds to such a moving vehicle, from the captured image is generated, and images captured at different points in time are synthesized to complement the deleted portion.
In the environment recognition system 1 according to the present embodiment, the moving object recognizing unit 18a first determines the position and size of the driving vehicle V1 in each captured image received from the traffic camera 2. The moving object deleting unit 18b then deletes the image of the detected vehicle V1, and stores the image, from which the part corresponding to the vehicle V1 has been removed, in the image storage unit 18c. Because a plurality of images is received from the traffic camera 2, the same processing is applied to each of them. The traffic camera image synthesizing unit 18d then generates a synthesized image without the vehicle V1 by combining the images that were captured at different points in time and from which the vehicle V1 has been deleted. The processing after the synthesized image is input to the blind spot extracting unit 13 is the same as in the first embodiment.
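One way to realize the moving object deleting unit 18b and the traffic camera image synthesizing unit 18d together is to mask the detected movers in every frame and composite per pixel over time, so that each road pixel is taken from moments when it was unoccluded. A minimal numpy sketch, assuming the detection masks are already available:

```python
import numpy as np


def synthesize_without_movers(frames, masks):
    """frames: list of HxWx3 uint8 images captured by the traffic camera 2
    at different points in time; masks: matching HxW boolean arrays, True
    where a *moving* object (e.g., vehicle V1) was detected. Parked
    vehicles are never masked, so they survive into the output."""
    stack = np.stack(frames).astype(np.float64)    # T x H x W x 3
    valid = ~np.stack(masks)                       # T x H x W, True = unoccluded
    weights = valid[..., None].astype(np.float64)  # broadcastable over color
    total = weights.sum(axis=0)
    # Average each pixel over only the frames where it was unoccluded.
    composite = (stack * weights).sum(axis=0) / np.maximum(total, 1.0)
    # Pixels never seen unoccluded fall back to the most recent frame.
    never_seen = total[..., 0] == 0
    composite[never_seen] = stack[-1][never_seen]
    return composite.astype(np.uint8)
```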
Even if there is a parked vehicle V2 on the road, the moving object deleting unit 18b does not delete the parked vehicle V2 from the image, because it is not a moving object. Therefore, the parked vehicle V2 remains in the synthesized image from which the vehicle V1 has been deleted, and when the detailed map generating unit 17 generates a detailed map using the synthesized image, the parked vehicle V2 is incorporated into the detailed map. As a result, the spot where the parked vehicle V2 is parked can be excluded from the area where the self-driving vehicle V0 is permitted to drive.
As described above, with the environment recognition system 1 according to the present embodiment, even when the traffic is heavy and each one of the images captured by the traffic camera 2 has many blind spots due to the vehicles, it is possible to generate a captured image from the viewpoint of the traffic camera 2 with no blind spot created by the passing vehicles, by combining a large number of captured images captured at different points in time. In addition, because the parked vehicle V2 remains in the synthesized image according to the present embodiment, the self-driving vehicle V0 determining its course using the synthesized image can drive while avoiding the parked vehicle V2.
Priority claim: JP 2020-130521, filed July 2020 (Japan, national).
International filing: PCT/JP2021/015793, filed April 19, 2021 (WO).