The present disclosure relates to a blind area estimation apparatus, a vehicle travel system, and a blind area estimation method.
A conventional vehicle travel system grasps the position of an object in a predetermined region as object information by means of a road side unit (RSU), which is an apparatus disposed on a roadside, and provides an automatic driving vehicle in the region with the object information (for example, Japanese Patent Application Laid-Open No. 2020-37400). More specifically, a server processes the object information acquired by the RSU and transmits the processed object information to the automatic driving vehicle in the region. The automatic driving vehicle determines a traveling route in consideration of the object information and travels based on the traveling route. According to such a configuration, even an automatic driving vehicle which does not include a sensor for detecting its surrounding environment can travel in the region by automatic driving.
However, the RSU is provided so as to monitor the ground from a height in many cases, so there is a region which the RSU cannot detect due to shielding by an object on the ground, that is to say, a blind area region which is a blind area for the RSU caused by the object. When an obstacle is located in such a blind area region, whose state the RSU cannot grasp, there is a possibility that an automatic driving vehicle traveling through the blind area region collides with the obstacle. Thus, an estimation of the blind area region that can be used in automatic driving, for example, is required.
The present disclosure has therefore been made to solve the problems described above, and it is an object of the present disclosure to provide a technique capable of estimating a blind area region.
A blind area estimation device according to the present disclosure includes: an acquisition part acquiring an object region which is a region of an object based on object information which is information of the object in a predetermined region detected by a detection part; and an estimation part estimating a blind area region which is a region of a blind area for the detection part caused by the object based on the object region.
The blind area region can be estimated.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
The RSU 1 is a blind area estimation device, and generates an object region, which is a region of an object in a predetermined region, and a blind area region, which is a region of a blind area for a detection part of the RSU 1 caused by the object, as described hereinafter. In the present embodiment 1, the predetermined region is a region which is a target of generation of the object region and the blind area region by the RSU 1, that is to say, a generation target region; however, this configuration is not necessary. In the present embodiment 1, the plurality of RSUs 1 are directed in a plurality of directions, respectively; however, this configuration is not necessary, and only one RSU 1 may be provided, for example.
The fusion server 2 generates an integrated object region and blind area region based on object regions and blind area regions generated by the plurality of RSUs 1. The automatic driving vehicle 3 determines a traveling route along which the automatic driving vehicle 3 should perform an automatic driving based on the integrated object region and blind area region generated by the fusion server 2. The automatic driving of the automatic driving vehicle 3 may be an automatic driving of autonomous driving (AD) control or an automatic driving of advanced driver assistance system (ADAS) control.
Configuration of RSU
The detection part 11 is made up of a sensor capable of detecting object information, which is information on an object in the generation target region, and a support circuit of the sensor. In the present embodiment 1, the sensor includes a camera 111, a radio wave radar 112, and a laser radar 113, and the object information is information corresponding to detection results of the camera 111, the radio wave radar 112, and the laser radar 113. The object may be a moving object or a stationary object.
The primary fusion part 12 processes the object information detected by the detection part 11. The primary fusion part 12 includes an object fusion part 121, which is an acquisition part, and a blind area calculation part 122, which is an estimation part. The object fusion part 121 acquires, by calculation, for example, the object region, which is the region of the object in the generation target region, based on the object information detected by the detection part 11. The blind area calculation part 122 estimates, by calculation, for example, the blind area region, which is a region of a blind area for the detection part 11 caused by the object, based on the calculated object region.
The location part 13 acquires a position of the RSU 1 and a direction (an orientation, for example) of the RSU 1. The location part 13 is made up of, for example, a positioning module of a global navigation satellite system (GNSS) such as GPS, a quasi-zenith satellite system such as Michibiki, BeiDou, Galileo, GLONASS, or NavIC, and an orientation measurement means using an inertial principle such as a gyroscope.
The communication part 14 transmits, to the fusion server 2, information on the object region and the blind area region from the primary fusion part 12 and information on the position and the direction of the RSU 1 from the location part 13. The communication part 14 is made up of a general-purpose communication apparatus or a dedicated communication network apparatus, for example.
Herein, in
Assuming that the object 6 has a quadrangular shape in
Next, a calculation of the coordinates of the points A, B, and C is described. For example, assumed as illustrated in
As described above, the blind area calculation part 122 applies the object region including LA, θA, and φA of each point of the object 6 and a height H of the placement reference point O from the ground to the above equations (1) to (5) to estimate the blind area region. The height H may be a fixed value set at a time of placing the RSU 1 or a value appropriately detected by the detection part 11.
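For illustration only, the following sketch (in Python) shows one possible geometric interpretation of the projection described above; it is not a reproduction of equations (1) to (5), and the concrete angle values are hypothetical. Assuming a flat ground and a placement reference point O at height H, each corner point of the object 6, given by its depression angle θ and azimuth φ as seen from O, is projected along the ray from O through the point down to the ground, and the blind area region lies between the object 6 and these projected boundary points.

    import math

    # A minimal sketch (not equations (1) to (5) of the disclosure) of projecting an
    # object corner point onto the ground to obtain a boundary point of the blind area.
    # Assumptions: the placement reference point O of the RSU is at height H above a
    # flat ground, and each corner point of the object 6 is described by its depression
    # angle theta and azimuth phi measured at O.
    def blind_area_boundary_point(H, theta, phi):
        """Return the ground (x, y) position shadowed by one corner point of the object.

        H     : height of the placement reference point O above the ground [m]
        theta : depression angle of the corner point seen from O [rad], 0 < theta < pi/2
        phi   : azimuth angle of the corner point seen from O [rad]
        """
        # The ray from O through the corner point reaches the ground at horizontal
        # distance H / tan(theta) from the point directly below O.
        d = H / math.tan(theta)
        return (d * math.cos(phi), d * math.sin(phi))

    # The blind area region behind the object can then be approximated by the polygon
    # spanned by the object corner points and their projected boundary points.
    corners = [(math.radians(20.0), math.radians(10.0)),   # (theta, phi) of point A (hypothetical values)
               (math.radians(25.0), math.radians(15.0))]   # (theta, phi) of point B (hypothetical values)
    boundary = [blind_area_boundary_point(5.0, th, ph) for th, ph in corners]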
As illustrated in
Flow chart of RSU
Firstly, in Step S1, the detection part 11 takes in the raw data of each sensor and generates object information based on the raw data of each sensor. For example, the detection part 11 identifies the object 6 in a screen at a certain time from an image signal, which is the raw data of the camera 111, to generate a position and a direction of the object 6 as the object information. The detection part 11 also generates point groups, which are the raw data of the radio wave radar 112 and the laser radar 113, as the object information. When the output periods of the sensors differ from each other, the detection part 11 synchronizes the data output from the sensors.
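As an illustrative sketch only (the disclosure does not specify the synchronization method), the following Python fragment aligns timestamped outputs of the camera 111, the radio wave radar 112, and the laser radar 113 to a common reference time by nearest-timestamp matching; the sensor names and data layout are assumptions made for this sketch.

    from bisect import bisect_left

    # A minimal sketch of synchronizing sensor outputs with different output periods,
    # assuming each sensor sample carries a timestamp.
    def nearest_sample(samples, t):
        """samples: list of (timestamp, data) sorted by timestamp; return the sample closest to time t."""
        times = [ts for ts, _ in samples]
        i = bisect_left(times, t)
        candidates = samples[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda s: abs(s[0] - t))

    def synchronize(camera, radar, lidar, t):
        """Collect, for reference time t, the camera/radar/lidar samples closest to t."""
        return {"camera": nearest_sample(camera, t),
                "radio_wave_radar": nearest_sample(radar, t),
                "laser_radar": nearest_sample(lidar, t)}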
In Step S2, the object fusion part 121 performs fusion processing of fusing the object information generated by the detection part 11 to calculate the object region. The fusion processing uses, for example, a known technique of preferentially using the value of a sensor having high reliability, in consideration of the reliability of each sensor under environmental conditions such as temperature and light intensity, when different sensors detect values of the same item. The object fusion part 121 may calculate not only the object region but also a speed and an acceleration of the object 6, for example.
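A minimal sketch of such a reliability-based fusion policy follows, assuming each sensor carries a reliability score per environmental condition; the table values, sensor names, and condition labels are hypothetical and only illustrate the idea of preferring the most reliable sensor for a given item.

    # Hypothetical reliability table: (sensor, condition) -> reliability score in [0, 1].
    RELIABILITY = {
        ("camera", "day"): 0.9, ("camera", "night"): 0.4,
        ("radio_wave_radar", "day"): 0.7, ("radio_wave_radar", "night"): 0.7,
        ("laser_radar", "day"): 0.8, ("laser_radar", "night"): 0.8,
    }

    def fuse_item(measurements, condition):
        """measurements: dict {sensor_name: value} for the same item (e.g. an object position).
        Returns the value of the most reliable sensor under the given condition."""
        best_sensor = max(measurements, key=lambda s: RELIABILITY.get((s, condition), 0.0))
        return measurements[best_sensor]

    # Example: under night conditions the radar position is preferred over the camera position.
    position = fuse_item({"camera": (3.1, 7.9), "radio_wave_radar": (3.0, 8.0)}, "night")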
In the present embodiment 1, the object fusion part 121 estimates in Step S2 whether the object 6 is a moving object or a stationary object. That is to say, the object fusion part 121 estimates whether the blind area region estimated in the following Step S3 is a blind area caused by a moving object or a blind area caused by a stationary object. For example, the object fusion part 121 estimates that the object 6 is a moving object when a suspension time (standstill time) of the object 6 is equal to or smaller than a threshold value, and estimates that the object 6 is a stationary object when the suspension time of the object 6 is larger than the threshold value. Another constituent element of the primary fusion part 12 (for example, the blind area calculation part 122) may instead estimate whether a region is a blind area caused by a moving object or by a stationary object.
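The moving/stationary estimation described above can be sketched as follows; the threshold value is hypothetical.

    # A minimal sketch of the estimation rule described above: an object whose suspension
    # (standstill) time is at or below a threshold is treated as a moving object, otherwise
    # as a stationary object. The threshold value is a hypothetical example.
    SUSPENSION_TIME_THRESHOLD_S = 30.0

    def estimate_object_type(suspension_time_s):
        return "moving" if suspension_time_s <= SUSPENSION_TIME_THRESHOLD_S else "stationary"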
In Step S3, the blind area calculation part 122 calculates the blind area region using the above calculation methods described in
In Step S4, the communication part 14 transmits, to the fusion server 2, the information of the object region and the blind area region, the estimation result indicating whether the object 6 is the moving object or the stationary object, and the information of the position and the direction of the RSU 1 of the location part 13. Subsequently, the operation in
The above operation is performed by each of the plurality of RSUs 1 directed to a plurality of directions, respectively. Accordingly, the primary fusion parts 12 of the plurality of RSUs 1 calculate a plurality of object regions based on object information in a plurality of directions, and the blind area calculation parts 122 of the plurality of RSUs 1 calculate a plurality of blind area regions based on a plurality of object regions.
Description of Transmission Information of RSU
A first column in the table in
A second column in
A third column in
The transmission information from each RSU 1 to the fusion server 2 includes not only the information in
Configuration of Fusion Server
The reception part 21 receives the object region and the blind area region in
The secondary fusion part 22 processes the transmission information from the plurality of RSUs 1. The secondary fusion part 22 includes a coordinate conversion part 221, an integration fusion part 222, and a blind area recalculation part 223. The coordinate conversion part 221 converts the coordinate systems of the object regions and the blind area regions transmitted from the plurality of RSUs 1 into an integrated global coordinate system based on the information on the positions and the directions of the plurality of RSUs 1. The integration fusion part 222 integrates the object regions from the plurality of RSUs 1 whose coordinates have been converted by the coordinate conversion part 221. The blind area recalculation part 223 integrates the blind area regions from the plurality of RSUs 1 whose coordinates have been converted by the coordinate conversion part 221. The transmission part 23 transmits the integrated object region and blind area region to the automatic driving vehicle 3 in the generation target region including the integrated object region and blind area region. Accordingly, the object regions and the blind area regions of the RSUs 1 are substantially transmitted to the automatic driving vehicle 3 in the generation target region.
Flow Chart of Fusion Server
Firstly in Step S11, the reception part 21 receives the object region and the blind area region in
In Step S12, the coordinate conversion part 221 converts the coordinate systems of the object regions and the blind area regions transmitted from the plurality of RSUs 1 into a global coordinate system integrated over the plurality of RSUs 1, based on the information on the positions and the directions of the plurality of RSUs 1.
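A minimal sketch of this coordinate conversion follows, assuming a planar (two-dimensional) conversion in which each region is a polygon of (x, y) vertices in the RSU-local frame and the RSU's global position and orientation from the location part 13 define a rotation and a translation; the disclosure does not restrict the conversion to two dimensions, so this is an illustrative simplification.

    import math

    # A minimal sketch of Step S12, assuming a planar conversion: each region is a list of
    # (x, y) vertices in the RSU-local frame, and the RSU's global position (px, py) and
    # orientation yaw [rad] are used to rotate and translate the vertices into the
    # integrated global coordinate system.
    def to_global(local_vertices, px, py, yaw):
        cos_y, sin_y = math.cos(yaw), math.sin(yaw)
        return [(px + x * cos_y - y * sin_y,
                 py + x * sin_y + y * cos_y) for x, y in local_vertices]

    # Example: a blind area polygon reported by one RSU placed at (100.0, 50.0) facing 90 degrees.
    global_blind_area = to_global([(2.0, 1.0), (6.0, 1.0), (6.0, 3.0), (2.0, 3.0)],
                                  100.0, 50.0, math.radians(90.0))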
In Step S13, the integration fusion part 222 performs fusion processing of integrating the object regions transmitted from the plurality of RSUs 1 for each object 6. The fusion processing is, for example, OR processing of adding (taking the union of) the object regions transmitted from the plurality of RSUs 1 for each object 6.
In Step S14, the blind area recalculation part 223 performs fusion processing of integrating the blind area regions transmitted from the plurality of RSUs 1 for each object 6. The fusion processing is, for example, AND processing of extracting the common part (intersection) of the blind area regions transmitted from the plurality of RSUs 1 for each object 6.
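As an illustration of the OR processing of Step S13 and the AND processing of Step S14, the following sketch operates on an occupancy-grid representation of the generation target region; the grid representation itself is an assumption made for this sketch, since the disclosure only specifies the union and intersection operations.

    import numpy as np

    # A minimal sketch of Steps S13 and S14 on an occupancy-grid representation: each RSU
    # contributes, per object 6, one boolean grid for its object region and one for its
    # blind area region.
    def integrate_object_region(object_grids):
        """OR processing: a cell belongs to the integrated object region if any RSU saw the object there."""
        return np.logical_or.reduce(object_grids)

    def integrate_blind_area(blind_grids):
        """AND processing: a cell remains blind only if it is blind for every RSU."""
        return np.logical_and.reduce(blind_grids)

    # Example with two RSUs and a 4 x 4 grid.
    rsu_a = np.zeros((4, 4), dtype=bool); rsu_a[1:3, 1:3] = True
    rsu_b = np.zeros((4, 4), dtype=bool); rsu_b[2:4, 1:3] = True
    merged_object = integrate_object_region([rsu_a, rsu_b])   # union of the two regions
    merged_blind = integrate_blind_area([rsu_a, rsu_b])       # intersection of the two regions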
For example, as illustrated in
In Step S15 in
Configuration of Transmission Information of Fusion Server
A first column in the table in
Configuration of Vehicle-Side Control Device
The communication part 31 communicates with the fusion server 2. Accordingly, the communication part 31 receives the object region and the blind area region integrated by the fusion server 2.
The location measurement part 32 measures a position and a direction (for example, an orientation) of the subject vehicle in the manner similar to the location part 13 of the RSU 1 in
The control part 33 controls traveling of the subject vehicle based on the object region and the blind area region received by the communication part 31. The control part 33 includes a route generation part 331 and a target value generation part 332. The route generation part 331 generates and determines a traveling route along which the subject vehicle should travel based on the position of the subject vehicle measured by the location measurement part 32, a destination, the object region, the blind area region, and a map of the global coordinate system. The target value generation part 332 generates control target values of, for example, a vehicle speed and a steering angle for the subject vehicle to travel along the traveling route generated by the route generation part 331.
The driving part 34 includes a sensor 341, an electronic control unit (ECU) 342, and an actuator 343. The ECU 342 drives the actuator 343 based on information around the subject vehicle detected by the sensor 341 and the control target values generated by the control part 33.
Flow Chart of Vehicle-Side Control System
Firstly in Step S21, the location measurement part 32 measures and acquires the position and the direction of the subject vehicle.
In Step S22, the communication part 31 receives the object region and the blind area region integrated by the fusion server 2.
In Step S23, the route generation part 331 maps the position and the direction of the subject vehicle measured by the location measurement part 32, the destination, the object region, and the blind area region onto the map of the global coordinate system. The mapping in Step S23 can be performed easily by unifying all the coordinate values into values of the global coordinate system in advance.
In Step S24, the route generation part 331 generates the traveling route along which the subject vehicle should travel based on the map on which the mapping has been performed. For example, firstly as illustrated in
In a case where an object region 54 of a moving object is located on the temporary route 53 as illustrated in
In a case where a blind area region 57 of a moving object is located on the temporary route 53, the route generation part 331 generates a traveling route along which the subject vehicle temporarily stops in front of the blind area region 57 of the moving object and starts traveling when the blind area region 57 is no longer in front of the subject vehicle 51. In a case where a blind area region 58 of a stationary object is located on the temporary route 53 as illustrated in
When there are a plurality of regions including the object region and the blind area region between the subject vehicle and the destination, the route generation part 331 generates a traveling route satisfying the conditions of
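The route adjustment rules described above (a detour around a region caused by a stationary object, a temporary stop in front of a region caused by a moving object) can be sketched as follows; representing regions as circles carrying the moving/stationary estimation and a route as a list of waypoints is a simplification made for this sketch, and the treatment of object regions is assumed to be analogous to that of blind area regions.

    from dataclasses import dataclass

    # A minimal sketch of the route adjustment policy of Step S24, reflecting the rules
    # stated above and in embodiment 2: regions caused by a stationary object are avoided
    # by a detour, while regions caused by a moving object cause a temporary stop until the
    # region is no longer in front of the subject vehicle.
    @dataclass
    class Region:
        center: tuple      # (x, y) in the global coordinate system
        radius: float      # [m], circular approximation made for this sketch
        object_type: str   # "moving" or "stationary"

    def on_route(route, region):
        """True if any waypoint of the temporary route lies inside the region."""
        cx, cy = region.center
        return any((x - cx) ** 2 + (y - cy) ** 2 <= region.radius ** 2 for x, y in route)

    def plan_actions(temporary_route, regions):
        """Return, for each region on the temporary route, the action to take."""
        actions = []
        for region in regions:
            if not on_route(temporary_route, region):
                continue
            if region.object_type == "stationary":
                actions.append(("detour_around", region))     # re-plan the route to avoid the region
            else:
                actions.append(("stop_until_clear", region))  # wait until the region moves away
        return actions

    # Example: a temporary route crossing a stationary object's region and a moving object's blind area.
    route = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
    regions = [Region((5.0, 0.0), 1.0, "stationary"), Region((10.0, 0.5), 1.0, "moving")]
    actions = plan_actions(route, regions)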
In Step S25 in
Conclusion of Embodiment 1
According to the present embodiment 1 described above, the RSU 1 acquires the object region of the object and estimates the blind area region caused by the object. According to such a configuration, even when the automatic driving vehicle 3 does not include a sensor, the automatic driving vehicle 3 can grasp the object region and the blind area region of an object located around the automatic driving vehicle 3, for example. Thus, even when the automatic driving vehicle 3 does not include a sensor, the automatic driving vehicle 3 can plan a traveling route that suppresses a collision with the object and a collision with an obstacle in the blind area region based on the object region and the blind area region. Since it is estimated whether the blind area region is a blind area caused by a moving object or by a stationary object, the automatic driving vehicle 3 can plan an appropriate traveling route according to the type of the object, for example.
In the embodiment 1, the detection part 11 of the RSU 1 in
In the embodiment 1, the primary fusion part 12 is included in the RSU 1 in
In the embodiment 1, various types of GNSS are used as the location part 13 in
In the embodiment 1, the fusion server 2 transmits the object region and the blind area region to the automatic driving vehicle 3, and the automatic driving vehicle 3 generates the traveling route and the control target value based on the object region and the blind area region. In contrast, in the present embodiment 2, a route plan server 8 which is a travel pattern generation device determines a travel pattern of an automatic driving vehicle 9 in the generation target region based on the object region and the blind area region transmitted from the plurality of RSUs 1, and transmits the travel pattern to the automatic driving vehicle 9. The travel pattern is a travel pattern for performing a traveling along the traveling route 56 described in the embodiment 1, and is substantially the same as the traveling route 56. The automatic driving vehicle 9 generates the control target value based on the travel pattern received from the route plan server 8, and travels based on the control target value. The automatic driving of the automatic driving vehicle 9 may be an automatic driving of autonomous driving (AD) control or an automatic driving of advanced driver assistance system (ADAS) control.
Configuration of RSU
A configuration of the RSU 1 according to the present embodiment 2 is similar to the configuration of the RSU 1 according to the embodiment 1.
Configuration of Route Plan Server
The reception part 81 receives transmission information, for example, from the plurality of RSUs 1 in the manner similar to the reception part 21 in the embodiment 1.
The secondary fusion part 82 includes a coordinate conversion part 821, an integration fusion part 822, and a blind area recalculation part 823 similar to the coordinate conversion part 221, the integration fusion part 222, and the blind area recalculation part 223 in the embodiment 1, respectively. The secondary fusion part 82 having such a configuration integrates the object regions transmitted from the plurality of RSUs 1, and integrates the blind area regions transmitted from the plurality of RSUs 1 in the manner similar to the secondary fusion part 22 in the embodiment 1.
For example, the vehicle position acquisition part 83 communicates with each automatic driving vehicle 9 in the generation target region, thereby sequentially acquiring the position, the orientation, and the destination of each automatic driving vehicle 9. The map database 84 stores a map of the global coordinate system in the generation target region.
The travel pattern generation part 85 performs processing similar to that performed by the route generation part 331 included in the automatic driving vehicle 3 in the embodiment 1. Specifically, the travel pattern generation part 85 generates and determines a travel pattern of the automatic driving vehicle 9 based on the position, the orientation, and the destination of the automatic driving vehicle 9 acquired by the vehicle position acquisition part 83, the object region and the blind area region integrated by the secondary fusion part 82, and the map of the map database 84. The transmission part 86 transmits the travel pattern including a list of a time and a target position to the automatic driving vehicle 9.
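A minimal sketch of building such a travel pattern, i.e. a list of times and target positions along the generated route, follows; sampling the route at a constant speed and a fixed time step is an assumption made for this sketch.

    import math

    # A minimal sketch of building the travel pattern transmitted by the transmission part 86:
    # a list of (time, target position) pairs along the traveling route, assuming the vehicle
    # moves along the route at a constant speed and is sampled every dt seconds.
    def build_travel_pattern(route, start_time, speed_mps, dt=0.5):
        """route: list of (x, y) waypoints in the global coordinate system."""
        pattern = [(start_time, route[0])]
        t, pos = start_time, route[0]
        for target in route[1:]:
            dist = math.hypot(target[0] - pos[0], target[1] - pos[1])
            steps = max(1, int(math.ceil(dist / (speed_mps * dt))))
            for k in range(1, steps + 1):
                t += dt
                pattern.append((t, (pos[0] + (target[0] - pos[0]) * k / steps,
                                    pos[1] + (target[1] - pos[1]) * k / steps)))
            pos = target
        return pattern

    # Example: a simple two-segment route traversed at 2 m/s starting at time 0.
    pattern = build_travel_pattern([(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)], start_time=0.0, speed_mps=2.0)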
Flow Chart of Route Plan Server
In Step S31 to Step S34, the route plan server 8 performs processing similar to the processing of receiving the transmission information in Step S11 to the processing of integrating the blind area region in Step S14 in
In Step S35 to Step S38, the route plan server 8 performs processing similar to the processing of acquiring the position and the direction of the subject vehicle in Step S21 to the processing of generating the traveling route in Step S24 in
For example, when the blind area region is estimated to be a blind area caused by a stationary object, the route plan server 8 determines a travel pattern for the automatic driving vehicle 9 to avoid the blind area region. For example, when the blind area region is estimated to be a blind area caused by a moving object, the route plan server 8 determines a travel pattern for the automatic driving vehicle 9 to stop in front of the blind area region and to start traveling when the blind area region is no longer in front of the automatic driving vehicle 9.
In Step S39, the route plan server 8 transmits the travel pattern to the automatic driving vehicle 9. Subsequently, the operation in
Configuration of Automatic Driving Vehicle
The communication part 91 communicates with the route plan server 8. Accordingly, the communication part 91 receives the travel pattern generated by the route plan server 8. The location measurement part 92 measures a position and a direction of the subject vehicle in the manner similar to the location measurement part 32 in the embodiment 1.
The control value generation part 93 generates control target values of, for example, a vehicle speed and a steering angle based on the travel pattern received by the communication part 91 and the position and the orientation of the subject vehicle measured by the location measurement part 92.
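For illustration, the following sketch derives control target values from the next entry of the travel pattern; the proportional steering law and its gain are hypothetical, since the disclosure does not specify the control law.

    import math

    # A minimal sketch of deriving control target values from the travel pattern: the target
    # speed follows from the distance to the next (time, position) entry, and the steering
    # angle target from the heading error toward it.
    def control_targets(pattern_entry, now, pose, steering_gain=1.0):
        """pattern_entry: (time, (x, y)) next target from the travel pattern.
        pose: (x, y, yaw) of the subject vehicle in the global coordinate system."""
        t_target, (tx, ty) = pattern_entry
        x, y, yaw = pose
        dist = math.hypot(tx - x, ty - y)
        dt = max(t_target - now, 1e-3)
        target_speed = dist / dt                                   # reach the target position on time
        heading_error = math.atan2(ty - y, tx - x) - yaw
        heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
        target_steering = steering_gain * heading_error            # hypothetical proportional steering law
        return target_speed, target_steering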
The driving part 94 includes a sensor 941, an ECU 942, and an actuator 943. The ECU 942 drives the actuator 943 based on information around the subject vehicle detected by the sensor 941 and the control target values generated by the control value generation part 93.
Conclusion of Embodiment 2
According to the present embodiment 2 described above, the route plan server 8 can grasp the object region and the blind area region of an object located around each automatic driving vehicle 9. Accordingly, even when the automatic driving vehicle 9 does not include a sensor and a route generation part, the route plan server 8 can plan a travel pattern for suppressing a collision between the automatic driving vehicle 9 and an object, for example, based on the object region and the blind area region. Since it is estimated whether the blind area region is a blind area caused by a moving object or by a stationary object, an appropriate travel pattern can be planned according to the type of the object, for example.
The acquisition part and the estimation part described as the object fusion part 121 and the blind area calculation part 122 in
When the processing circuit 101 is dedicated hardware, a single circuit, a complex circuit, a programmed processor, a parallel-programmed processor, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of these, for example, corresponds to the processing circuit 101. Each function of the respective parts such as the acquisition part may be achieved by separate processing circuits, or the functions may be collectively achieved by one processing circuit.
When the processing circuit 101 is a processor, the functions of the acquisition part etc. are achieved by a combination with software etc. Software, firmware, or a combination of software and firmware, for example, falls under the software etc. The software etc. is described as a program and is stored in a memory. As illustrated in
(HDD), a magnetic disc, a flexible disc, an optical disc, a compact disc, a mini disc, a digital versatile disc (DVD), or a drive device of them, or any storage medium which is to be used in the future.
Described above is a configuration in which each function of the acquisition part etc. is achieved by either hardware or software, for example. However, the configuration is not limited thereto, and a configuration in which a part of the acquisition part etc. is achieved by dedicated hardware and another part is achieved by software, for example, is also applicable. For example, the function of the acquisition part can be achieved by the processing circuit 101 as dedicated hardware, an interface, and a receiver, for example, and the functions of the other parts can be achieved by the processing circuit 101 as the processor 102 reading out and executing the program stored in the memory 103.
As described above, the processing circuit 101 can achieve each function described above by the hardware, the software, or the combination of them, for example. Each embodiment and each modification example can be arbitrarily combined, or each embodiment and each modification example can be appropriately varied or omitted.
While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous modifications and variations can be devised without departing from the scope of the invention.
Foreign Application Priority Data: Japanese Patent Application No. 2020-214993, filed December 2020 (JP, national).