The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2020-061378, filed Mar. 30, 2020 and Japanese Patent Application No. 2021-050544, filed Mar. 24, 2021, the disclosures of which are herein incorporated by reference in their entireties.
The present invention relates to a vehicle control device and a vehicle control method.
Japanese Patent No. 4796400 discloses a vehicle speed control apparatus that includes: a target speed calculation part configured to calculate a target speed value at each location point, based on a location error probability distribution and on data stored in advance representing an acceleration, or the gradient thereof, that does not cause an uncomfortable feeling to the driver depending on the distance from a location point where the speed is to be changed to a target destination, in such a manner that the variation in the speed of the vehicle is controlled along a continuous curve; and a speed control part configured to detect the speed of the vehicle and control the driving torque to control the speed to the target speed value.
Japanese Patent No. 4967840 discloses a collision damage reduction apparatus that includes: an object detection means for detecting a nearby object around an own vehicle; a collision likelihood determination means for determining, on a discrete-time basis, a likelihood of collision of the own vehicle with an object detected by the object detection means; and a collision impact reduction means for performing a control for reducing the impact of a collision based on the collision likelihood determined by the collision likelihood determination means.
The technique disclosed in Japanese Patent No. 4796400 controls the speed of the vehicle based on a location error probability distribution. The technique disclosed in Japanese Patent No. 4967840 determines a likelihood of collision between the own vehicle and an object. Both techniques perform a speed control on the vehicle based on the location error probability distribution or the collision likelihood, and neither speed control is related to an action plan for autonomous driving. There is a problem that, in determining an action plan using sensors, the detection distance of the sensors cannot be extended while maintaining their reliability, because the detectable distances of the sensors are insufficient. Another problem of the conventional techniques is that safety levels are not clearly defined and that the reliability (accuracy, detection probability) of the sensors and the accuracy of the action plan are not quantified.
The present invention has been made in view of such problems, and it is an object of the present invention to provide a vehicle control device and a vehicle control method that can provide safety and comfort.
To solve the above described problems, a vehicle control device according to a certain embodiment of the present invention is a vehicle control device configured to control a vehicle and includes: an action plan creating part configured to create an action plan for autonomous driving of the vehicle; a vehicle behavior control part configured to control at least a speed of the vehicle based on the action plan; and a distance detection part configured to detect an object and output detection information on detection of the object. The action plan creating part is configured to set a maximum deceleration of the vehicle for autonomous driving. The action plan creating part includes a collision probability map setting part configured to, when the distance detection part detects an obstacle, determine a collision probability map, which is a two-dimensional map of a collision probability distribution representing likelihood of a collision between the vehicle and the detected obstacle in a two-dimensional space of a location and a speed. The collision probability map has been created based on a target stop location determined based on a predetermined target collision probability, the maximum deceleration, and the detection information. The action plan creating part is configured to create a current action plan based on the collision probability map, the predetermined target collision probability, and a current location and a current speed of the vehicle.
According to the present invention, it is possible to provide a vehicle control device and a vehicle control method that are capable of providing safety and comfort.
An embodiment of the present invention is described in detail below with reference to the drawings.
The finder 20 is, for example, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging), which measures a distance to a target by illuminating the target with light and measuring the scattered reflected light. For example, two units of the finders 20 are disposed at right and left locations spaced apart from each other in a front part of the own vehicle 1, and three units of the finders 20 are disposed in a rear part thereof (that is, five units in the front and the rear parts in total).
For example, three units of the radars 30 are disposed in the front part of the own vehicle 1, and two units of the radars 30 are disposed in the rear side thereof (totaling five units in the front and the rear parts). The radar 30 detects an object by, for example, an FM-CW (Frequency Modulated Continuous Wave) method.
The camera 40 is, for example, a digital camera which uses a solid-state image sensing device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). The camera 40 is attached to an upper side of a front windshield, a rear surface of a rearview mirror, or the like. The camera 40 captures, for example, an image of an area in front of the own vehicle 1 periodically and repeatedly.
The navigation device 50 includes a GNSS (Global Navigation Satellite System) receiver and map information (a navigation map), and further includes a touch panel type display device, a speaker, a microphone, and the like, which function as user interfaces. The navigation device 50 is configured to determine a current location of the own vehicle 1 with the GNSS receiver and calculate a route from the current location to a destination input by a user. The route calculated by the navigation device 50 is sent to a target lane determining part 110 (described later) of the vehicle control device 100. The current location of the own vehicle 1 can also be determined by an INS (Inertial Navigation System) making use of outputs of the vehicle sensor 60, and may be determined by the INS especially when the GNSS receiver does not receive signals from navigation satellites. In addition, the navigation device 50 is configured to, when the vehicle control device 100 performs a manual driving mode, give guidance on the route to the destination by voice, sound, or navigation display.
The function of determining the current location of the own vehicle 1 may be performed by another device separate from the navigation device 50. In addition, the functions of the navigation device 50 may be realized by a remote terminal device owned by a user, such as a smartphone or a tablet terminal device. In this case, information is communicated between the remote terminal device and the vehicle control device 100 through wired or wireless communication.
The communication device 55 may perform wireless communication using, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), DSRC (Dedicated Short Range Communication), or the like. The communication device 55 may be configured to communicate wirelessly with an information providing server of a system, such as VICS (registered trademark) (Vehicle Information and Communication System), which monitors the traffic status of roads, to receive traffic information indicating the traffic status of a road on which the own vehicle 1 is traveling or is going to travel. The traffic information includes: information on a traffic jam ahead; a time required for passing through a traffic jam area; information on accidents, disabled vehicles, and construction work; information on speed restrictions and lane restrictions; and information on locations of parking areas and on whether a parking area, a highway travel center, or a rest stop is full. The communication device 55 may receive the traffic information by communicating with a wireless beacon installed on a side space of a road or by performing vehicle-to-vehicle communication with another vehicle traveling near the own vehicle 1. The communication device 55 is an example of an "acquisition unit" which acquires information on a traffic jam.
The vehicle sensor 60 includes a vehicle speed sensor to detect a vehicle speed of the own vehicle 1, an acceleration sensor to detect an acceleration, a yaw rate sensor to detect an angular velocity about a vertical axis, and an orientation sensor to detect an orientation of the own vehicle 1. Note that, herein, the vehicle sensor 60 is sometimes referred to generally as a “sensor(s)” in the description of a method of designing an action plan and description of formulas relevant to the action plan.
The HMI 70 includes, as components of the driving operation system, an accelerator pedal 71, an acceleration opening degree sensor 72, an accelerator pedal counter force output device 73, a brake pedal 74, a brake pressing-down amount sensor (or a master pressure sensor or the like) 75, a shift lever 76, a shift position sensor 77, a steering wheel 78, a steering angle sensor 79, a steering torque sensor 80, and other driving operation devices 81.
The accelerator pedal 71 is an operation element to be pressed down by a driver and to receive an instruction by the driver for acceleration (or to be released back by the driver and to receive an instruction by the driver for deceleration). The acceleration opening degree sensor 72 is configured to detect an amount by which the accelerator pedal 71 is pressed down and output an acceleration opening degree signal to the vehicle control device 100.
Here, the acceleration opening degree signal may be directly outputted to the travel drive force output device 300, the steering device 310, or the brake device 320 instead of being outputted to the vehicle control device 100. Similarly, other output signals from the components for the driving operation may also be outputted to the vehicle control device 100 or directly to the travel drive force output device 300, the steering device 310, or the brake device 320. The accelerator pedal counter force output device 73 is configured to output, in response to an instruction from the vehicle control device 100, a force (operation counter force) that acts on the accelerator pedal 71 in a direction opposite to the direction in which the accelerator pedal 71 is pressed down, for instance.
The brake pedal 74 is an operation element to receive an instruction by a driver for deceleration. The brake pressing-down amount sensor 75 is configured to detect an amount by which a driver presses down the brake pedal 74 (or detect a force applied to the brake pedal 74 to press down the brake pedal 74) and output a brake-amount signal corresponding to a detected result to the vehicle control device 100.
The shift lever 76 is an operation element to receive an instruction by a driver for changing a shift position. The shift position sensor 77 is configured to detect a shift position to which a driver shifts the shift lever 76 and output a shift position signal indicating a detected result to the vehicle control device 100.
The steering wheel 78 is an operation element to receive an instruction by a driver for steering the own vehicle 1. The steering angle sensor 79 is configured to detect an operation angle of the steering wheel 78 and output a steering angle signal indicating the detected result to the vehicle control device 100. The steering torque sensor 80 is configured to detect a torque applied to a steering shaft by the steering wheel 78 to be turned and output a steering torque signal indicating a detected result to the vehicle control device 100.
Examples of the other driving operation devices 81 include a joystick, a button, a dial switch, and a GUI (Graphical User Interface) switch. The other driving operation devices 81 are configured to receive instructions for acceleration, deceleration, turning, and the like and output the instructions to the vehicle control device 100.
The HMI 70 includes such elements for non-driving operation as a display device 82, a speaker 83, a contact operation detecting device 84, a content reproduction device 85, various operation switches 86, a seat 88, a seat driving device 89, a window glass 90, and a window driving device 91.
Examples of the display device 82 include an LCD (Liquid Crystal Display) and an organic EL (Electroluminescence) display, each of which may be attached to a portion of an instrument panel or to any portion facing a front passenger seat or a rear seat. The display device 82 may be a HUD (Head-Up Display) that projects an image on a front windshield or other window glass. The speaker 83 outputs voice. In a case where the display device 82 is a touch panel, the contact operation detecting device 84 is configured to detect a contact position (touch position) on a display screen surface of the display device 82 and output the contact position to the vehicle control device 100. In a case where the display device 82 is not a touch panel, the contact operation detecting device 84 may be omitted.
The content reproduction device 85 includes, for example, a DVD (Digital Versatile Disc) player, a CD (Compact Disc) player, a television receiver, a device generating various guidance images, or the like. Each of the display device 82, the speaker 83, the contact operation detecting device 84, and the content reproduction device 85 may be partially or entirely included in the navigation device 50.
The various operation switches 86 are installed at various locations in a vehicle compartment. The various operation switches 86 may include an autonomous driving switchover switch 87 to instruct that autonomous driving should start (immediately or after a predetermined time) or stop. The autonomous driving switchover switch 87 may be a GUI (Graphical User Interface) switch or a mechanical switch. The various operation switches 86 may include switches to drive the seat driving device 89 or the window driving device 91.
The seat 88 is a seat on which the driver sits. The seat driving device 89 may be capable of driving the seat 88 to freely change a reclining angle, a front-rear position, and a yaw angle. The window glass 90 is installed, for example, in respective doors. The window driving device 91 is configured to drive the window glass 90 to open and close. The vehicle compartment camera 95 may be a digital camera utilizing a solid-state imaging element such as a CCD or CMOS. The vehicle compartment camera 95 may be installed on a rearview mirror, a steering boss, or an instrument panel, or at such a position that the vehicle compartment camera 95 is able to take an image of at least a head portion of a driver who is performing driving operation. For instance, the vehicle compartment camera 95 may periodically and repeatedly take an image of the driver.
The vehicle control device 100 includes the target lane determining part 110, an autonomous driving control part 120 (distance detection part), an autonomous driving mode control part 130, a recognition part 140, a switching-over control part 150, a travel control part 160 (vehicle behavior control part), an HMI control part 170, and a storage part 180.
A part or all of the functions of the target lane determining part 110, the respective parts of the autonomous driving control part 120 (distance detection part), and the travel control part 160 may be performed by a processor executing a program (software). Part or all of those functions may be performed by hardware such as an LSI (Large Scale Integration) and an ASIC (Application Specific Integrated Circuit) or may be performed by a combination of software and hardware.
It is assumed in the explanation below that the autonomous driving control part 120 executes the function of each of the parts (to be described in detail hereinafter) by reading an appropriate program as needed from a ROM or an EEPROM (Electrically Erasable Programmable Read-Only Memory) and loading the read program onto a RAM to have the part perform the function. The program for each part may be stored in the storage part 180 in advance or may be stored in another storage medium and read through a communication medium into the vehicle control device 100 as needed.
The target lane determining part 110 may be implemented using an MPU (Micro Processing Unit), for example. The target lane determining part 110 may be configured to divide a route provided by the navigation device 50 into a plurality of sections (for example, divide the route every 100 meters in the vehicle traveling direction) and determine a target lane for each of the sections with reference to precise map information 181. The target lane determining part 110 may be configured to make a decision, for example, on which one of the lanes numbered from the left-most lane in each section the own vehicle 1 should run on. For example, if there is a junction ahead where the current road along which the own vehicle 1 is traveling branches into two roads, or where another road joins the current road, the target lane determining part 110 determines a reasonable target lane so that the own vehicle 1 can run through the junction to run on an intended travel route beyond the junction. The target lane determined by the target lane determining part 110 is stored as target lane information 182 in the storage part 180.
The autonomous driving control part 120 includes the autonomous driving mode control part 130, the recognition part 140, and the switching-over control part 150.
The autonomous driving mode control part 130 may be configured to determine the mode of autonomous driving (autonomous driving mode) according to an operation performed by the driver via the HMI 70, an event determined by the action plan creating part 200, a travel mode determined by a route creating part 145, and the like. The HMI control part 170 is notified of the autonomous driving mode. A limit depending on the performance or the like of the detection device DD (sensors) of the own vehicle 1 may be set for the autonomous driving mode.
In any of the autonomous driving modes, switching to a manual driving mode (an overriding) is possible by operating one or more driving operation components in the HMI 70. The overriding occurs, for example: when the driver of the own vehicle 1 continuously operates a driving operation component of the HMI 70 for a predetermined period of time or longer; when the variation in an amount associated with an operation (for example, an acceleration opening degree of the accelerator pedal 71, a brake pressing-down amount of the brake pedal 74, a steering angle of the steering wheel 78) becomes equal to or larger than a predetermined value; or when the operation of the driving operation system is performed a predetermined number of times or more.
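As a minimal illustrative sketch of such an override check (not part of the disclosed embodiment; the data structure, function names, and threshold values below are assumptions chosen for explanation), the three example conditions could be evaluated as follows:

```python
from dataclasses import dataclass

@dataclass
class DriverInput:
    """One sample of driver operation state (hypothetical structure)."""
    operating: bool   # a driving operation component is being operated
    delta: float      # change in operation amount since the last sample
    timestamp: float  # seconds

def should_override(samples, hold_time_s=2.0, delta_threshold=0.2, count_threshold=3):
    """Return True if any of the three example override conditions holds.

    samples: chronologically ordered DriverInput history.
    The thresholds are placeholders, not values from the specification.
    """
    # Condition 1: continuous operation for hold_time_s or longer.
    run_start = None
    for s in samples:
        if s.operating:
            run_start = s.timestamp if run_start is None else run_start
            if s.timestamp - run_start >= hold_time_s:
                return True
        else:
            run_start = None
    # Condition 2: the variation in the operation amount reaches a threshold.
    if any(abs(s.delta) >= delta_threshold for s in samples):
        return True
    # Condition 3: the operation occurred a predetermined number of times or more.
    presses = sum(1 for prev, cur in zip(samples, samples[1:])
                  if not prev.operating and cur.operating)
    return presses >= count_threshold
```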
The recognition part 140 includes an own vehicle location recognition part 141, an external world recognizing part 142, a human detection part 143 (detection part), an AI (Artificial Intelligence) accelerator 144 (detection part), an action plan creating part 200, and a route creating part 145.
The own vehicle location recognition part 141 is configured to recognize a lane (travel lane) in which the own vehicle 1 is traveling, and a relative location of the own vehicle 1 with respect to the travel lane, based on the precise map information 181 stored in the storage part 180 and on information inputted from the finder 20 (sensor), the radar 30 (sensor), the camera 40 (sensor), the navigation device 50, or the vehicle sensor 60.
The own vehicle location recognition part 141 recognizes the travel lane by comparing a pattern of road partitioning lines (e.g., an arrangement of solid lines and dashed lines) recognized based on the precise map information 181 with a pattern of road partitioning lines near the own vehicle 1 recognized based on images captured by the camera 40. When recognizing the travel lane of the own vehicle 1, the current location of the own vehicle 1 received from the navigation device 50 or a processing result by INS may be taken into account.
The human detection part 143 is configured to detect a human from an image(s) taken by the camera 40. More specifically, the human detection part 143 is configured to detect a specific target object (such as a human or a bicycle) in a specific area, using the AI accelerator 144. The human detection part 143 issues a request for detection of a human to the AI accelerator 144, and the AI accelerator 144 performs AI computation outside the CPU and transmits a result of the detection to the human detection part 143. The AI accelerator 144 is used because high-speed detection is required in detecting a human. The detection may, however, be conducted without using the AI accelerator 144.
For the sake of simplicity of the description, the human detection part 143 has been described as being separate from the camera 40 and the external world recognizing part 142. The human detection part 143 may, however, be any component that is capable of detecting a specific target object. Examples of such a component include: an image processing part that extracts a human or the like from an image taken by the camera 40; and a part that recognizes and detects a human or the like from a profile of an image object in an internal processing of the external world recognizing part 142. In this case, the human detection part 143 is omitted from the recognition part 140.
In addition, as described hereinafter, it is possible to increase the recognition probability of recognizing a human detected by the human detection part 143 by making use of the VICS information received by the communication device 55.
The AI accelerator 144 is a processor dedicated to detection of a human and uses a computation resource(s) other than CPUs. The AI accelerator 144 is used, for example, for accelerating image processing with a processor enhanced by a GPU (Graphics Processing Unit) and for signal processing using an FPGA (Field Programmable Gate Array). The AI accelerator 144 performs AI computation on dedicated hardware (for example, a GPU).
The route creating part 145 is configured to determine, when a lane keeping event is processed, a travel mode according to which the own vehicle 1 should travel, from a plurality of travel modes including: a constant-speed travel, a follow-up travel, a low-speed follow-up travel, a deceleration travel, a curve travel, an obstacle avoiding travel, and the like. The route creating part 145 then generates a route candidate based on the determined travel mode.
The route creating part 145 is configured to evaluate the generated route candidates from, for example, the two viewpoints of planning suitability and safety, and to select a route to be outputted to the travel control part 160. From the viewpoint of planning suitability, a route is highly evaluated when it follows a previously created plan (for example, an action plan) closely and is short in total length. For example, when changing to a right side lane is desired, a candidate route that makes the own vehicle 1 first change lanes to the left and then return to the original lane is poorly evaluated. From the viewpoint of safety, a route is evaluated more highly as, at each point in the route, the distance between the own vehicle 1 and an object (a nearby vehicle or the like) becomes larger and the required changes in acceleration, deceleration, and steering angle become smaller.
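By way of a non-limiting illustration, the two-viewpoint evaluation could be sketched as a simple scoring function; the weights and signal names below are placeholders, not values from the disclosure:

```python
def evaluate_route_candidate(plan_following_score, total_length_m,
                             min_object_gap_m, max_control_change):
    """Toy combined score for one route candidate (higher is better).

    plan_following_score: how closely the candidate follows the previously
    created plan (e.g., the action plan).
    min_object_gap_m: smallest distance to nearby objects along the route.
    max_control_change: largest change in acceleration, deceleration, or
    steering angle required by the route.
    The weights below are arbitrary placeholders.
    """
    planning_suitability = plan_following_score - 0.01 * total_length_m
    safety = min_object_gap_m - 5.0 * max_control_change
    return 0.5 * planning_suitability + 0.5 * safety

# The candidate with the highest score would be output to the travel
# control part 160.
```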
After switching to the manual driving mode by overriding, in a case where an operation on the driving operation component of the HMI 70 has not been detected for a predetermined period of time, the switching-over control part 150 may switch the manual driving mode back to the previous autonomous driving mode. For example, when performing a handover control of shifting from the autonomous driving mode to the manual driving mode at a scheduled termination point of the autonomous driving, in order to notify a driver of a handover request in advance, the switching-over control part 150 may output information on the notification to the HMI control part 170.
The travel control part 160 is configured to control the travel drive force output device 300, the steering device 310, and the brake device 320 in such a way that the own vehicle 1 traces a route generated by the route creating part 145 on time as scheduled.
The travel control part 160 has functions as the vehicle behavior control part that controls at least the speed of the vehicle based on an action plan.
The HMI control part 170 is configured to, when receiving information on an autonomous driving mode communicated from the autonomous driving control part 120, control the HMI 70 in accordance with the type of the autonomous driving mode with reference to mode dependent allowable operation information 184 (described later).
The HMI control part 170 is configured to determine those devices that are allowed to be used (navigation device 50 and part or all of HMI 70) and those devices that are not allowed to be used, by referencing the mode dependent allowable operation information 184 in accordance with the information on the autonomous driving mode received from the autonomous driving control part 120. In addition, the HMI control part 170 is configured to determine, based on the determination result, whether an operation by the driver on a component of the HMI 70 for a non-driving operation or on the navigation device 50 should be enabled or not.
For example, when the vehicle control device 100 is performing the manual driving mode, the driver can operate driving operation components of the HMI 70 (for example, the accelerator pedal 71, the brake pedal 74, the shift lever 76, and the steering wheel 78).
In this case, in order to prevent driver distraction, i.e., to prevent the driver from being distracted by actions other than driving (for example, operations on the HMI 70 or the like), the HMI control part 170 may be configured to perform a control such that an operation by the driver on a part or all of the non-driving operation components of the HMI 70 is not allowed. In this event, in order to make the driver monitor the surroundings of the own vehicle 1, the HMI control part 170 may be configured to make the display device 82 display an image of the surroundings of the own vehicle 1.
The HMI control part 170 may be configured to, when the driving mode is autonomous driving, perform control in such a manner that the restrictions against driver distraction are eased and that operations by the driver on the non-driving operation components, which have not been allowed before, are allowed. For example, the HMI control part 170 makes the display device 82 display a video and makes the speaker 83 output sound.
The storage part 180 may store therein information such as, for example, the precise map information 181, the target lane information 182, the action plan information 183, and the mode dependent allowable operation information 184. The storage part 180 may be a ROM (Read Only Memory), a RAM (Random Access Memory), an HDD (Hard Disk Drive), a flash memory, or the like. A program to be executed by a processor may be stored in the storage part 180 in advance or may be downloaded from an external device via an in-vehicle Internet device or the like. Alternatively, the program may be installed in the storage part 180 by inserting a portable storage medium storing the program into a drive device (not illustrated).
The precise map information 181 may include more precise map information than the navigation map installed in the navigation device 50. The precise map information 181 may include, for example, information on a center portion of a lane or a boundary of the lane. The information on the boundary includes: a type, a color, and a length of a lane mark; a width of a road and a width of a road shoulder; a width of a main lane and widths of other lanes; a position of a boundary, a type of the boundary (guard rail, planted strip, kerbstone); a zebra pattern zone for guidance; and the like. These may be included in the precise map information 181.
The precise map information 181 may also include road information, traffic regulation information, address information (addresses or postal codes), facility information, telephone number information, or the like. The road information includes: information representing a type of a road such as an expressway, a toll road, a national road, and a prefectural road; the number of lanes of the road; a width of each lane; a slope of a road; a location of a road (three-dimensional coordinates including a longitude, a latitude, and an altitude); a curvature of a curve of a lane; a location of a junction and a fork of lanes; and a road sign installed along a road. The traffic regulation information includes information on a lane closed due to construction, a traffic accident, a traffic jam, or the like.
Basic Action Plan
The action plan creating part 200 is configured to set a start point of an autonomous driving, and/or a destination of the autonomous driving. The start point of the autonomous driving may be a current location of the own vehicle 1 or may be a location point at which an instruction of starting autonomous driving has been conducted. The action plan creating part 200 is configured to create an action plan to be used in a road section between the start point and the destination of the autonomous driving. The action plan creating part 200 may be configured to create an action plan to be used in any section.
The action plan is constituted by, for instance, various events that are to be processed in a sequential order. The various events include, for example: a deceleration event to decelerate the own vehicle 1; an acceleration event to accelerate the own vehicle 1; a lane keeping event to have the own vehicle 1 keep on traveling in a travel lane without deviating from the travel lane; a lane change event to change travel lanes; an overtaking event to have the own vehicle 1 overtake a vehicle traveling ahead of the own vehicle 1; a branching point event to have the own vehicle 1 change travel lanes to a lane the driver wants to take or keep on traveling on the current lane without deviating from the current lane; a joining point event to have the own vehicle 1 accelerate or decelerate to change travel lanes so as to join a main travel lane from a joining lane; and a hand-over event to switch from the manual driving mode to the autonomous driving mode at the start point of the autonomous driving or switch from the autonomous driving mode to the manual driving mode at the scheduled termination point of the autonomous driving.
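As a schematic sketch only (the event names mirror the list above; representing the plan as an ordered list is an assumption, not the disclosed implementation), an action plan may be pictured as a sequence of events:

```python
from enum import Enum, auto

class Event(Enum):
    DECELERATION = auto()
    ACCELERATION = auto()
    LANE_KEEPING = auto()
    LANE_CHANGE = auto()
    OVERTAKING = auto()
    BRANCHING_POINT = auto()
    JOINING_POINT = auto()
    HAND_OVER = auto()

# An action plan is the sequence of events to be processed in order, e.g.:
action_plan = [Event.HAND_OVER, Event.LANE_KEEPING,
               Event.LANE_CHANGE, Event.BRANCHING_POINT]
```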
The action plan creating part 200 schedules the lane change event, the branching point event or the joining point event at a location where the target lane determined by the target lane determining part 110 is switched to another lane. Information on the action plan created by the action plan creating part 200 is stored in the storage part 180 as action plan information 183 (described below).
Vehicle Action Plan to be Determined with Reference to Collision Probability
The action plan creating part 200 may be configured to create a sudden braking allowed action plan and a preliminary braking action plan. The sudden braking allowed action plan keeps the set speed set for autonomous driving and permits sudden braking operations in a low collision probability region in the collision probability map 1000, the low collision probability region having collision probabilities lower than a predetermined threshold value that is lower than the target collision probability. The preliminary braking action plan avoids sudden braking operations by repeating short-time braking in a high collision probability region in the collision probability map 1000, the high collision probability region having collision probabilities higher than the predetermined threshold value but lower than the target collision probability.
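A minimal sketch of how the two plan types could be selected from a collision probability read out of the collision probability map (the threshold handling and names are assumptions for illustration):

```python
def select_plan(collision_probability, threshold, target_collision_probability):
    """Choose between the two plan types described above (sketch).

    collision_probability: value read from the collision probability map 1000
    at the vehicle's current (location, speed) cell.
    """
    if collision_probability < threshold:
        # Low collision probability region: keep the set speed; sudden
        # braking is permitted if it becomes necessary.
        return "sudden_braking_allowed"
    elif collision_probability < target_collision_probability:
        # High collision probability region (still below the target):
        # repeat short-time braking to avoid sudden braking later.
        return "preliminary_braking"
    else:
        # At or above the target collision probability, the action plan
        # itself would have to be revised (outside the scope of this sketch).
        return "replan"
```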
A target collision probability setting part 210 sets a predetermined target collision probability.
The instruction achievement probability density distribution estimation part 220 estimates an instruction achievement probability density distribution 1001.
The instruction value offset amount calculation part 230 calculates an instruction value offset amount 1002.
The fusion accuracy reliability estimation part 240 calculates a fusion accuracy reliability-based probability distribution 1003.
The collision probability map setting part 250 determines a collision probability map according to formula (1) described below.
The travel drive force output device 300 is configured to output a travel drive force (a torque) required for a vehicle to travel, to a drive wheel. If the own vehicle 1 is an automobile having an internal combustion engine as a power source, the travel drive force output device 300 may include, for example, an engine (not shown), a transmission (not shown) and an engine ECU (Electronic Control Unit, not shown) to control the engine. Alternatively, if the own vehicle 1 is an electric vehicle having an electric motor as a power source, the travel drive force output device 300 may include a travel motor (not shown) and a motor ECU to control the travel motor (not shown). Alternatively, if the own vehicle 1 is a hybrid vehicle, the travel drive force output device 300 may include an engine, a transmission, an engine ECU, a travel motor, and a motor ECU (all of these are not shown).
When the travel drive force output device 300 includes only an engine, the engine ECU is configured to control a throttle opening degree of the engine, a shift position, and the like in accordance with information received from the travel control part 160 to be described later. When the travel drive force output device 300 includes only the travel motor, the motor ECU is configured to control the duty ratio of a PWM signal given to the travel motor in accordance with information received from the travel control part 160. When the travel drive force output device 300 includes both an engine and a travel motor, the engine ECU and the motor ECU work in cooperation with each other to control the travel drive force in accordance with information received from the travel control part 160.
The steering device 310 includes, for example, a steering ECU and an electric motor (these are not shown). The electric motor is configured to turn the wheels to be steered to change the direction of the wheels by applying a force to a rack-and-pinion mechanism. The steering ECU is configured to drive the electric motor to change the direction of the wheels to be steered, in accordance with information inputted from the vehicle control device 100 or with inputted information on a steering angle or a steering torque.
The brake device 320 may be an electrically driving servo brake device including, for example, a brake caliper, a brake cylinder to apply a hydraulic pressure to the brake caliper, an electric motor to generate the hydraulic pressure in the cylinder, and a braking control part (all of these are not shown). The braking control part of the electrically driving servo brake device is configured to control the electric motor in accordance with the information inputted from the travel control part 160 so that a brake force that is commensurate with a braking operation is applied to each wheel. In addition, the electrically driving servo brake device may include a mechanism to transmit a hydraulic pressure generated by an operation on a brake pedal to the brake cylinder through a master cylinder as a back-up system.
The brake device 320 is not limited to the electrically driving servo brake device described above and may be an electrically controlled hydraulic pressure brake device. The electrically controlled hydraulic pressure brake device is configured to control an actuator in accordance with the information inputted from the travel control part 160 to transmit a hydraulic pressure in the master cylinder to the brake cylinders. In addition, the brake device 320 may include a regenerative brake system using the travel motor that may be included in the travel drive force output device 300.
The autonomous driving control part 120 of the embodiment functions as the distance detection part together with the detection device DD and the vehicle sensor 60. The distance detection part is configured to output detection information based on the information outputted from the detection device DD and the vehicle sensor 60. The detection information includes: a sensor configuration, that is, a combination of sensors, of the plurality of sensors included in the detection device DD and the vehicle sensor 60, which has detected a target object; a detected time period starting from a time at which the sensor combination detected the target object and during which the sensor combination has continuously detected the target object; and a detected distance determined based on the outputs from the sensor combination and the detected time period. The detected distance is determined during traveling based on a result of a measurement process performed in advance, in which measurement process the sensor combination is tested to confirm its ability to detect a target object in a situation where the true distance to the target object is known.
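For illustration, the detection information described above could be represented by a structure such as the following (a sketch; the field names and units are assumptions):

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class DetectionInfo:
    """Detection information output by the distance detection part (sketch)."""
    sensor_configuration: FrozenSet[str]  # e.g. frozenset({"radar", "lidar", "camera"})
    detected_time_period_s: float         # continuous detection duration of the object
    detected_distance_m: float            # distance calibrated via the advance tests

# Example: an object seen by radar and camera for 0.8 s at 62.5 m.
info = DetectionInfo(frozenset({"radar", "camera"}), 0.8, 62.5)
```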
Next, a description will be given of the operations of the vehicle control device 100 as configured above.
First, a description will be given of considerations on the problems of the conventional technique.
There is a problem that, in determining an action plan using sensors, the detection distance of the sensors cannot be extended while maintaining their reliability, because the detectable distances of the sensors are insufficient. This problem is hereinafter referred to as problem 1.
A description will be given of the actions under the two illustrated detection settings: a long-distance detection setting and a high-reliability detection setting.
On the other hand, under the high-reliability detection setting, under which braking operations are to be performed in a distance range where the reliability of the sensors is high, the detection length decreases, as illustrated by mark ×b.
As will be appreciated from the above description, the detection distance of the sensors cannot be extended while maintaining their reliability, because the detectable distances of the sensors are insufficient.
The embodiment of the present invention provides, to solve the problem 1, a method for calculating the reliability and a method for linking the reliability to action plans.
A problem addressed as problem 2 is that safety levels are not clearly defined and that the reliability (accuracy, detection probability) of the sensors and the accuracy of the action plan are not quantified. To solve problem 2, the embodiment of the present invention provides formulas that quantitatively assess the reliability (accuracy, detection probability) of the sensors and the accuracy of the action plan, to clarify a distance range that can be regarded as safe.
The embodiment of the present invention introduces a concept of “collision probability” to create an action plan. This is based on the idea that an action plan is considered as possessing a “collision probability” like a recognition operation possesses a “recognition probability”. According to this idea, the “collision probability” is introduced to the action plan.
The collision probability P(C|St) represented in the collision probability map 901 is calculated on the basis of the detection probability and accuracy of the sensors and on the algorithm and accuracy of the action plan. Dashed region d in the collision probability map 901 represents a region where a collision is unlikely to occur as long as the vehicle travels at a speed and at a location falling in the region. Dashed region e annotated in the tone expression 902 represents a probability of 10⁻⁷, a probability at which a collision is considered unlikely to occur.
As can be appreciated from the collision probability distribution 903, the collision probability P(C|St) when the speed is 0 can be determined according to the standard deviation σ (not shown) of the sensor error. This means that, at a location spaced apart by a distance of 6σ, the collision probability will be 10⁻⁷.
The embodiment of the present invention can estimate the accuracies and detection probabilities of the sensors and the algorithm of the action plan in an integrated manner, and uses the estimated information as data for determining validity of the algorithm.
A collision probability f representing a likelihood that the vehicle 1 will collide with the vehicle traveling ahead is determined from the following three parameters.
The instruction achievement probability density distribution 1001 is one of the three parameters from which the collision probability is determined.
The instruction value offset amount 1002 is one of the three parameters with which the collision probability is to be determined. The instruction value offset amount 1002 is a parameter that represents an offset by which the target stop location should be spaced apart from the vehicle traveling ahead.
The fusion accuracy reliability-based probability distribution 1003 depends on the combination of sensors, such as cameras, radars, and Lidars, the detected time period, and the detected distance. The fusion accuracy reliability-based probability distribution 1003 also depends on the redundancy of the sensors and the below-described detection scheme, which may be, for example, an AND detection scheme.
Hereinbelow, relationships between the instruction achievement probability density distribution 1001, the fusion accuracy reliability, the fusion accuracy reliability-based probability distribution 1003, the collision probability f, and the instruction value offset amount 1002 will be described in detail.
The instruction achievement probability density distribution 1001 represents a distribution, with respect to the target stop location, of the location at which the vehicle 1 actually stops in response to a certain deceleration request.
Assume that the action plan creating part 200 has determined an acceptable maximum deceleration for an action plan in accordance with the current autonomous driving mode and the like. An instruction achievement probability density distribution measured in advance in the above-described measurement process with a deceleration which is equal to or approximately the same as the maximum deceleration is regarded as the instruction achievement probability density distribution 1001 for the action plan to be currently executed.
The fusion accuracy reliability is obtained from: a sensor configuration representing a combination of sensors, of a plurality of sensors, which has detected a target object; and a detected time period during which the sensor configuration has continuously detected the target object. In general, the greater the number of sensors detecting the target object, or the longer the detected time period, the higher the fusion accuracy reliability. The fusion accuracy reliability also varies depending on the combination of the sensors detecting the target object.
For example, when a target object is detected by sensor configuration SC1 (Camera) and the detected time period is D, the fusion accuracy reliability is Low; when a target object is detected by sensor configuration SC6 (Lidar+Camera) and the detected time period is B, the fusion accuracy reliability is Mid; and when a target object is detected by sensor configuration SC7 (Radar+Lidar+Camera) and the detected time period is D, the fusion accuracy reliability is High. The fusion accuracy reliability may be assessed in the three levels Low, Mid, and High.
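A minimal sketch of such a reliability determination (the three quoted entries follow the examples above; every other aspect, including the fallback behavior, is an assumption of this sketch):

```python
# Reliability levels per (sensor configuration, detected-time-period bucket).
# The three entries quoted in the text are included; the remaining entries
# would be filled in from the advance measurement process.
FUSION_ACCURACY_RELIABILITY = {
    ("SC1", "D"): "Low",    # Camera only
    ("SC6", "B"): "Mid",    # Lidar + Camera
    ("SC7", "D"): "High",   # Radar + Lidar + Camera
}

def fusion_accuracy_reliability(sensor_config: str, time_bucket: str) -> str:
    # Fall back to the most conservative level for unmeasured combinations
    # (a design assumption of this sketch, not of the specification).
    return FUSION_ACCURACY_RELIABILITY.get((sensor_config, time_bucket), "Low")
```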
A fusion accuracy reliability-based probability distribution represents how the detected distances measured with a combination of sensors are expected to be distributed with respect to the true value.
When focusing on one of the fusion accuracy reliabilities, e.g., Mid, the larger the deviations of the detected distances from the true distance, the wider the width of the fusion accuracy reliability-based probability distribution 1003 becomes, and the smaller the deviations of the detected distances from the true distance, the narrower the width of the fusion accuracy reliability-based probability distribution 1003 becomes. The fusion accuracy reliability-based probability distribution 1003 and its width are to be obtained in advance through a process of finding, in a situation where the true distance is known, an error distribution according to which the errors of the detected distances are distributed.
When focusing on the cases where the width of the fusion accuracy reliability-based probability distribution 1003 is Middle, the height of the fusion accuracy reliability-based probability distribution 1003 is high when the fusion accuracy reliability is High, and low when the fusion accuracy reliability is Low. The height of the fusion accuracy reliability-based probability distribution 1003 is determined so that the area of the probability distribution is approximately proportional to the detection probability of the sensor configuration. The width and height of the fusion accuracy reliability-based probability distribution 1003 are characteristics to be obtained in advance through a process of finding detection characteristics, in which process the sensor configuration is tested to find whether an object is detected in a situation where the object is known to be present or absent.
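As a rough illustration of a distribution whose width is fixed by the advance error measurement and whose area is approximately proportional to the detection probability, the following sketch uses a Gaussian shape (the Gaussian form itself is an assumption; the text fixes only the width and the area):

```python
import numpy as np

def fusion_distribution(width_sigma_m, detection_probability, x):
    """Fusion accuracy reliability-based probability distribution (sketch).

    width_sigma_m: spread obtained from the advance error-distribution test.
    detection_probability: scales the area of the curve, so a configuration
    that detects the object more reliably yields a taller distribution.
    x: numpy grid of distance errors (m) relative to the true distance.
    """
    gaussian = np.exp(-0.5 * (x / width_sigma_m) ** 2)
    gaussian /= gaussian.sum() * (x[1] - x[0])   # normalize the area to 1
    return detection_probability * gaussian      # area ~ detection probability

x = np.linspace(-20.0, 20.0, 2001)
high = fusion_distribution(1.5, 0.99, x)  # High reliability: narrow and tall
low = fusion_distribution(6.0, 0.70, x)   # Low reliability: wide and short
```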
A collision probability f representing a likelihood of vehicle 1 colliding with a vehicle traveling ahead of vehicle 1 is determined by a relative location relationship between the instruction achievement probability density distribution 1001 and the fusion accuracy reliability-based probability distribution 1003, which relative location relationship is to be determined by the instruction value offset amount 1002. The instruction value offset amount 1002 specifies a distance difference between the locations of apexes of the two distributions of the instruction achievement probability density distribution 1001 and the fusion accuracy reliability-based probability distribution 1003, which distance difference determines the collision probability f.
In other words, the instruction value offset amount 1002 can be determined so that the collision probability f becomes equal to the target collision probability. The collision probability f is obtained by a convolution between the fusion accuracy reliability-based probability distribution 1003 and the instruction achievement probability density distribution 1001.
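One way to realize this convolution-style calculation numerically is sketched below: the probability that the achieved stop location reaches the obstacle is computed from the distribution of the difference of the two error variables, and the smallest offset meeting the target is searched. The grid-based formulation and the function names are assumptions of this sketch, not the disclosed formula:

```python
import numpy as np

def collision_probability(instr_pdf, fusion_pdf, offset_m, x):
    """Collision probability f for a given instruction value offset (sketch).

    instr_pdf: density of the achieved stop location around the target stop
    location; fusion_pdf: density of the obstacle location around the
    detected location; both sampled on the error grid x (m).
    The target stop location sits offset_m short of the detected obstacle,
    so a collision corresponds to (stop error - obstacle error) >= offset_m.
    """
    dx = x[1] - x[0]
    # Density of the difference (stop error - obstacle error): the
    # convolution of instr_pdf with the reflected fusion_pdf.
    diff_pdf = np.convolve(instr_pdf, fusion_pdf[::-1]) * dx
    diff_x = np.arange(diff_pdf.size) * dx + (x[0] - x[-1])
    return float(np.sum(diff_pdf[diff_x >= offset_m]) * dx)

def offset_for_target(instr_pdf, fusion_pdf, x, target=1e-7):
    """Smallest offset whose collision probability is at or below the target."""
    for offset in np.arange(0.0, x[-1] - x[0], x[1] - x[0]):
        if collision_probability(instr_pdf, fusion_pdf, offset, x) <= target:
            return float(offset)
    raise ValueError("target collision probability not reachable on this grid")
```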
The width of the fusion accuracy reliability-based probability distribution 1003 varies depending on the sensor configuration (types and combination of sensors), the detected time period, the distance to the target object, and the like. In general, when the distance to the target object is large, the width of the fusion accuracy reliability-based probability distribution 1003 is wide; and when the distance to the target object is small, the width of the fusion accuracy reliability-based probability distribution 1003 is narrow. When a comparison is made between cases in which the distances to the target objects are the same, for example, in the case of sensor configuration SC1 (Camera), the width of the fusion accuracy reliability-based probability distribution 1003 is wide; in the case of sensor configuration SC3 (Radar), middle; and in the case of sensor configuration SC4 (Radar+Lidar), narrow. In addition, in general, the longer the detected time period, the narrower the width of the fusion accuracy reliability-based probability distribution 1003 becomes.
As described above, the width of the fusion accuracy reliability-based probability distribution 1003 can be obtained in advance through a process of finding an error distribution according to which the errors of the detected distances are distributed in a situation where the true distance is known. In detail, the width of the fusion accuracy reliability-based probability distribution 1003 is to be obtained for each of the combinations of the sensor configuration (types and combination of sensors), the detected time period, and the detected distance. Then, it is possible to construct a table (not shown, detection probability distribution width table) that receives, as inputs, a sensor configuration (types and combination of sensors), a detected time period, and a detected distance and outputs a width of a fusion accuracy reliability-based probability distribution 1003 and to store the detection probability distribution width table in a storage device (e.g., Read Only Memory (ROM)) of the vehicle control device. With this structure, it is possible to estimate the width of the current fusion accuracy reliability-based probability distribution 1003 by referencing the detection probability distribution width table with the combination of the current sensor configuration, the detected time period, and the detected distance.
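A minimal sketch of such a width table (the keys and widths are placeholders; real entries would come from the advance error-distribution measurements and reside in ROM), reflecting the qualitative ordering described above:

```python
# Sketch of the detection probability distribution width table.
# (sensor configuration, time bucket, distance bucket) -> width (m, 1 sigma)
WIDTH_TABLE = {
    ("SC1", "B", "far"):  6.0,   # camera only, far target: wide
    ("SC3", "B", "far"):  3.0,   # radar: middle
    ("SC4", "B", "far"):  1.5,   # radar + lidar: narrow
    ("SC4", "B", "near"): 0.8,   # nearer target: narrower still
}

def distribution_width(sensor_config, time_bucket, distance_bucket):
    """Look up the width of the fusion accuracy reliability-based distribution."""
    return WIDTH_TABLE[(sensor_config, time_bucket, distance_bucket)]
```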
The detection probability distribution width table has been described as taking the three parameters of sensor configuration, detected time period, and detected distance and outputting the width of the fusion accuracy reliability-based probability distribution 1003. However, to reduce the size of the table, the information to be input to the table may be reduced to two parameters: a fusion accuracy reliability (High/Mid/Low) and a detected distance, or a sensor configuration and a detected time period. In addition to reducing the number of parameters, the precision (quantization width) of each of the parameters may be adjusted to reduce the input information. These selections are to be made in a tradeoff between precision and cost.
As the instruction achievement probability density distribution 1001 varies depending on the deceleration requested, the linear relationships between the width of the fusion accuracy reliability-based probability distribution 1003 and the offset amount are obtained in advance for each deceleration and for each fusion accuracy reliability, and are stored as an offset amount characteristic parameter table.
Specifically, the offset amount characteristic parameter table is looked up with the maximum deceleration for the current action plan and the fusion accuracy reliability (High/Mid/Low) to select one of the linear relationships, and the offset amount 1002 is then calculated from the selected linear relationship and the width of the fusion accuracy reliability-based probability distribution 1003.
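A minimal sketch of the table lookup and linear calculation (the slope and intercept values are placeholders, not measured characteristics):

```python
# Sketch of the offset amount characteristic parameter table: for each
# (maximum deceleration bucket, fusion accuracy reliability) it stores the
# slope and intercept of the linear relationship between the distribution
# width and the offset amount. All numbers are placeholders.
OFFSET_PARAMS = {
    ("decel_high", "High"): (1.8, 2.0),   # (slope, intercept_m)
    ("decel_high", "Mid"):  (2.2, 3.0),
    ("decel_high", "Low"):  (2.8, 4.5),
}

def offset_amount(decel_bucket, reliability, width_m):
    """Offset amount 1002 from the selected linear relationship (sketch)."""
    slope, intercept = OFFSET_PARAMS[(decel_bucket, reliability)]
    return slope * width_m + intercept
```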
The calculation method described above calculates the offset amount 1002 using one of the linear relationships selected from the offset amount characteristic parameter table.
As will be appreciated from the above description, the instruction achievement probability density distribution 1001 can be estimated based on the deceleration. In addition, the width of the fusion accuracy reliability-based probability distribution 1003 can be estimated based on the sensor configuration, the detected time period, and the detected distance. The offset amount 1002 that gives the target collision probability can be calculated from the instruction achievement probability density distribution 1001 and the width of the fusion accuracy reliability-based probability distribution 1003. Putting these together, it is possible to create an offset amount table (not shown) that receives a deceleration, a sensor configuration, a detected time period, and a detected distance and outputs a corresponding offset amount 1002, using a Read Only Memory (ROM).
With this configuration, however, the size of the offset amount table increases depending on the precision of the input information. That is, when the quantization steps of the deceleration, detected time period, and detected distance are made small and/or when the number of combinations of sensors is large, the capacity of the offset amount table increases. Whether to calculate the offset amount 1002 from the linear relationships or to look it up from the offset amount table is therefore a selection to be made in a tradeoff between computation and table capacity.
The own vehicle 1 currently traveling cannot know the actual location of the vehicle traveling ahead, and thus cannot know the location of the center axis of the fusion accuracy reliability-based probability distribution 1003. However, as described above, the width of the fusion accuracy reliability-based probability distribution 1003 can be estimated by obtaining the fusion accuracy reliability from a sensor configuration representing a combination of sensors, of a plurality of sensors, which has detected a target object, and a detected time period during which the sensor configuration has continuously detected the target object, and by referencing the data measured in advance. In addition, as described above, the instruction achievement probability density distribution 1001 in the current stopping operation can be estimated based on the maximum deceleration currently set, by referencing the data measured in advance.
Then, under the instruction achievement probability density distribution 1001 estimated in the current stopping operation, an offset amount 1002 that gives the target collision probability can be determined by performing a calculation based on the fusion accuracy reliability, the width of the fusion accuracy reliability-based probability distribution 1003, and the data obtained by measurement in advance, as described above.
The instruction value offset amount 1002 has been described as being determined so that the collision probability, which is obtained by a convolution of the instruction achievement probability density distribution 1001 and the fusion accuracy reliability-based probability distribution 1003 spaced apart by the instruction value offset amount 1002, becomes equal to the target collision probability. However, as described above, when the offset amount table, which receives the deceleration, sensor configuration, detected time period, and detected distance and outputs the corresponding offset amount, is stored in a Read Only Memory (ROM) and looked up while traveling, there is no need to calculate the convolution while traveling, because information on the instruction achievement probability density distribution 1001 and the fusion accuracy reliability-based probability distribution 1003 is embedded in the offset amount table. That is, there is no need to estimate the instruction achievement probability density distribution 1001 and the fusion accuracy reliability-based probability distribution 1003 themselves or to use them in the calculation.
Now, a description will be given of a reason why the instruction achievement probability density distribution 1001 is obtained based on the maximum deceleration currently set. When the deceleration is made large, the instruction achievement probability density distribution 1001 gets wider and thus the offset amount to be set becomes large. Assuming the maximum deceleration for an action plan makes it possible to obtain a safe offset amount suitable to the action plan.
Hereinbelow, a description will be given of the operation described above as a sequence of steps.
Step 1
The target collision probability setting part 210 sets a target collision probability. The target collision probability is a predetermined fixed value determined in advance in a design phase taking into account the detection probabilities of objects to be detected and the severities of collisions with the objects.
Step 2
The fusion accuracy reliability estimation part 240 determines a fusion accuracy reliability based on the current sensor configuration and the detected time period. The fusion accuracy reliability is determined based on the correct detection probability observed in advance with the same combination of sensors, as described above.
STEP 3
The instruction achievement probability density distribution estimation part 220 and an instruction value offset amount calculation part 230 determine candidates of an instruction value table (probability density, offset amount). The instruction achievement probability density distribution estimation part 220 determines a candidate of the instruction achievement probability density distribution 1001 based on the maximum deceleration currently set. As described above, the candidate of the instruction achievement probability density distribution 1001 is determined based on a characteristic that has been measured in advance through a measurement process to measure the stop location of the vehicle 1 to which a certain deceleration instruction has been given.
The instruction value offset amount calculation part 230 determines candidates of the instruction value offset amount based on the fusion accuracy reliability determined by the fusion accuracy reliability estimation part 240 and the candidate of the instruction achievement probability density distribution 1001.
This step corresponds to selecting one of the dash-dot line regions High, Mid, and Low shown in the corresponding figure.
STEP 4
The fusion accuracy reliability estimation part 240 estimates the width of the fusion accuracy reliability-based probability distribution. The instruction value offset amount calculation part 230 selects one of the offset amount candidates determined in STEP 3, based on the estimated width of the fusion accuracy reliability-based probability distribution 1003. For example, when the fusion accuracy reliability is determined as Mid because the target object is detected by sensor configuration SC5 (Radar+Camera) and the detected time period is B, this corresponds to selecting the offset amount 1002 corresponding to reference numeral SC5-B shown in the corresponding figure.
The width of the fusion accuracy reliability-based probability distribution 1003 is obtained, for example, by inputting the current sensor configuration and the detected time period to the detection probability distribution width table and getting the output from the table, as described above. The detection probability distribution width table stores data of the width of probability distribution obtained in advance through a measurement process.
STEP 5
The action plan creating part 200 determines an instruction value table using the offset amount 1002 obtained in STEP 4. As a result, the target location at which the action plan creating part 200 attempts to stop the vehicle 1 is determined.
It should be noted that the flow from STEP 1 to STEP 4 is an example and the flow is not limited thereto. For example, the determination of the offset amount by the above-described STEP 3 and STEP 4 can be executed by storing the offset amount table, which receives the deceleration, sensor configuration, detected time period, and detected distance and outputs a corresponding offset amount, in a Read Only Memory (ROM) and by the action plan creating part 200 looking up the table. In this embodiment, the instruction achievement probability density distribution 1001 and the width of the fusion accuracy reliability-based probability distribution 1003 are reflected in the data stored in the offset amount table, and thus there is no need to obtain the instruction achievement probability density distribution 1001 and the width of the fusion accuracy reliability-based probability distribution 1003 separately while traveling. Hereinbelow, a description will be given of an embodiment under the assumption that an offset amount that gives a target collision probability is directly obtained from the deceleration, sensor configuration, detected time period, and detected distance, in a manner similar to using the offset amount table.
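As a minimal sketch, such an offset amount table could be realized as follows; the keys, the quantization classes, and the stored values are hypothetical, and a real table would be generated offline from the measured distributions and written to ROM.

    # Hypothetical offset amount table: (deceleration [m/s^2], sensor
    # configuration, detected-time-period class, detected-distance class) -> offset [m]
    OFFSET_TABLE = {
        (3.0, "SC5", "B", "near"): 2.4,
        (3.0, "SC5", "B", "far"):  3.1,
        (5.0, "SC5", "B", "near"): 3.6,
    }

    def look_up_offset(deceleration, sensor_config, time_class, distance_class):
        # No convolution is computed while traveling; the two distributions are
        # already reflected in the stored values.
        return OFFSET_TABLE[(deceleration, sensor_config, time_class, distance_class)]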
The collision probability p(C|St) represented by a collision probability map is to be calculated in accordance with the calculation model represented by formula (1). The details of the calculation process related to this calculation will be described later.
p(C|St) = ∫∫ p(C|St+1) p(St+1|αD(St, d̂)) p(d̂|d) dd̂ dSt+1   (1)
where St represents a state (location, speed) at time t,
St+1 represents a state (location, speed) at time t+1, i.e., the next state,
p(C|St+1) represents a collision probability at the next state (location, speed),
p(St+1|αD(St, d̂)) represents a probability of transitioning to the state St+1 due to an action generated when d̂ is observed in the state St (this represents the algorithm and accuracy of the action plan), and
p(d̂|d) represents a probability of observing a state d̂ of a target object when the actual state of the target object is d (this represents the sensor reliability, i.e., detection probability and accuracy).
This flow is repeatedly executed by the autonomous driving control part 120 (see the corresponding figure).
STEP S11
The autonomous driving control part 120 (distance detection part) detects, on the basis of the output from the detection device DD (distance detection part), a distance between the own vehicle 1 and an obstacle to be avoided.
STEP S12
The collision probability map setting part 250 determines a collision probability map based on: a predetermined target collision probability; a maximum deceleration currently set by the action plan creating part 200; a combination of sensors (sensor configuration), of a plurality of sensors included in the detection device DD of the distance detection part, which has detected a target object; a detected distance detected by the sensor configuration; and a detected time period during which the target object has been continuously detected by the sensor configuration. The target collision probability is a predetermined fixed value determined in advance in a design phase taking into account the detection probability of objects to be detected and the severities of collisions with the objects.
This collision probability map has been constructed using an offset amount which is determined so that a collision probability calculated based on overlapping between the instruction achievement probability density distribution 1001 estimated based on the maximum deceleration and the fusion accuracy reliability-based probability distribution 1003 estimated based on the sensor configuration, detected distance, and detected time period is equal to the target collision probability. In other words, the collision probability map is constructed based on the target collision probability, maximum deceleration, sensor configuration, detected distance, and detected time period, using a target stop location which is set so that the collision probability at the target stop location is equal to the target collision probability.
The collision probability map to be obtained here includes discrete grid points each of which corresponds to a location and a speed and for each of which a collision probability is obtained by an approximate calculation. Each of the discrete grid points represents a discrete state.
STEP S13
The action plan creating part 200 creates an action plan based on the collision probability of the own vehicle 1 at the current state (location, speed) on the collision probability map and the target collision probability.
STEP S14
The travel control part 160 controls at least the speed of the own vehicle 1 based on the action plan, completing one cycle of this flow.
The autonomous driving control part 120 executes STEPS S11 to S14 for every predetermined time period.
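The cycle of STEPS S11 to S14 can be summarized by the following sketch; the part interfaces and the period are assumptions for illustration.

    import time

    def autonomous_driving_cycle(detect_distance, set_collision_map,
                                 create_action_plan, control_travel, period_s=0.1):
        while True:
            detection = detect_distance()            # STEP S11: detect the obstacle distance
            cp_map = set_collision_map(detection)    # STEP S12: determine the collision probability map
            plan = create_action_plan(cp_map)        # STEP S13: create an action plan
            control_travel(plan)                     # STEP S14: control at least the speed
            time.sleep(period_s)                     # repeat every predetermined time period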
Hereinbelow, a description will be given of features of the collision probability map to be used in the above-described vehicle control.
It should be noted that the collision probability map 1000 shown in the corresponding figure is an example.
When the detection probability and accuracy of the sensors, the accuracy of the action plan, and the collision probability at the final state are determined, a collision probability is calculated for each state representing the speed of the vehicle and the location of the vehicle relative to the target object. The collision probability map 1000 is created based on the calculated collision probabilities.
For example, a collision probability by human is defined within a range of 0 to 1 expressed in the tone expression 1402. This collision probability by human (see reference numeral i in the corresponding figure) serves as a reference for setting the target collision probability.
A target speed curve can be drawn on the collision probability map 1000 as shown in the corresponding figure.
The collision probability distribution at a speed of 0 corresponds to the fusion accuracy reliability-based probability distribution 1003 and presents a normal distribution like the characteristic 1403 shown in the corresponding figure.
An object of the present invention is to achieve both safety and comfort. In this regard, the embodiment of the present invention sets an appropriate action plan on the collision probability map 1000 illustrated in the corresponding figure.
When the collision probability is lower than a first threshold value lower than the target collision probability (see [Low collision probability region] in the collision probability map 1000 shown in the corresponding figure), the action plan keeps the set speed set for autonomous driving, permitting sudden braking operations.
When the collision probability is lower than the target collision probability but equal to or higher than a second threshold value equal to or higher than the first threshold value (see [High collision probability region] in the collision probability map 1000 shown in the corresponding figure), the action plan starts a preliminary braking operation so that the collision probability does not exceed the target collision probability.
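The threshold logic above can be sketched as follows; the handling of collision probabilities at or above the target collision probability and between the two thresholds is an assumption, since the text defines only the two named regions.

    def choose_plan(collision_prob, target_p, first_thr, second_thr):
        # precondition per the text: first_thr < target_p, first_thr <= second_thr
        if collision_prob >= target_p:
            return "decelerate with maximum deceleration"     # assumed handling
        if collision_prob >= second_thr:
            return "preliminary braking"                      # high collision probability region
        if collision_prob < first_thr:
            return "keep set speed (sudden braking allowed)"  # low collision probability region
        return "preliminary braking"                          # between the thresholds (assumed)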
The shading expression on the collision probability map 1000 shown in the corresponding figure represents the magnitude of the collision probability.
A description will be given of the accuracy of achieving the goal of an action plan.
Obtaining the accuracy of achieving the goal of an algorithm of an action plan corresponds to obtaining p(St+1|αD(St, d̂)) in formula (1).
The vehicle control device 100 obtains this accuracy based on characteristics of the own vehicle 1 measured in advance, as shown in the corresponding figures.
A description will be given of the reliability (detection probability, accuracy) of sensors.
Obtaining the reliability of sensors corresponds to obtaining p(d̂|d) in formula (1).
As can be appreciated from the corresponding figure, the reliability (detection probability, accuracy) of the sensors can be obtained from measurements performed in advance.
Influence of Action Plan and/or Sensors on Collision Risk
A description will be given of influences of an action plan and/or sensors on the risk of collision.
Reference numerals 1702 and 1703 in the corresponding figure indicate such influences of the action plan and of the sensors on the risk of collision.
A description will be given of utilization of the reliabilities (detection probabilities) of plural sensors.
Here, assume that the plural sensors consist of sensor 1 and sensor 2. Here, D1 denotes an event of detecting a target object by sensor 1; D2 denotes an event of detecting a target object by sensor 2; and E denotes a state where a target object is present. p(D1|E) represents a probability of an event that sensor 1 detects a target object under a state where the target object is actually present; p(D2|E) represents a probability of an event that sensor 2 detects a target object under a state where the target object is actually present; p(D1∩D2|E) represents a probability of an event that both sensor 1 and sensor 2 detect a target object under a state where the target object is actually present; and p(D1∩D̄2|E) represents a probability of an event that sensor 1 detects a target object and sensor 2 does not under a state where the target object is actually present. In general, the detection events of the two sensors are not independent, so such a joint probability is not the product of the individual probabilities, as expressed by formula (2).
p(D1∩D2|E) ≠ p(D1|E)p(D2|E)   (2)
In the example of the detection with the AND logic, illustrated in the corresponding figure, it is determined that a target object is detected when both sensor 1 and sensor 2 detect the target object.
A description will be given later of an example of an action plan using the detection with the AND logic (hereinafter referred to as AND detection scheme).
In the example of the detection with the OR logic, illustrated in the corresponding figure, it is determined that a target object is detected when at least one of sensor 1 and sensor 2 detects the target object.
A description will be given later of an example of an action plan using the detection with the OR logic (hereinafter referred to as OR detection scheme).
In the example of the braking according to states of plural sensors, illustrated in the corresponding figure, the braking force is varied according to the detection states of the two sensors.
A description will be given later of an example of an action plan that performs braking according to detection states as HALF-AND detection scheme.
The embodiment utilizes the variations that occur in the detection states of the two sensors to construct an action plan logic that achieves both safety and comfort by varying the braking force according to the detection probabilities.
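As a numerical illustration of the three schemes, the fused probabilities of the braking-trigger conditions can be compared as below. Formula (2) warns that the detection events of the two sensors are generally not independent; independence is assumed here only to keep the example concrete, and the probability values are assumptions.

    p_d1 = 0.95   # p(D1|E), assumed
    p_d2 = 0.90   # p(D2|E), assumed

    p_and = p_d1 * p_d2                # both detect: AND detection scheme brakes
    p_or = p_d1 + p_d2 - p_d1 * p_d2   # at least one detects: OR detection scheme brakes
    p_one = p_or - p_and               # exactly one detects: HALF-AND applies limited braking
    print(p_and, p_or, p_one)          # approximately 0.855, 0.995, 0.14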
Collision Probability that Occurs in Performing Algorithm α
A description will be given of a collision probability and a discomfort probability that occur in performing algorithm α.
In the corresponding figure, algorithm α selects one of actions A1, A2, and A3, which lead to destination states S1, S2, and S3, respectively, according to the observed value.
A correct detection probability p(D|E) of the sensors and a false detection probability p(D|Ē) of the sensors are inputted to the algorithm α.
A collision probability pα(C|E) that occurs in performing algorithm α is represented by formula (3); and a collision probability p(C|Ai) that occurs in selecting action Ai is represented by formula (4). The action Ai that results in the collision probability being closest to Pc is represented in formula (5) using the collision probability pα(C|E) that occurs in performing algorithm α and the collision probability p(C|Ai) that occurs in selecting action Ai. In other words, formula (5) represents algorithm α that identifies the action Ai that results in the total collision probability being closest to the predetermined collision probability Pc. That is, once the targeted, predetermined collision probability Pc has been determined, the measures to be taken in algorithm α are determined.
Collision probability that occurs in performing algorithm α:
pα(C|E) = p(C|Ai)p(D|E) + p(C|A1)p(D̄|E)   (3)
Collision probability that occurs in selecting action Ai:
Action that results in the total collision probability being closest to Pc:
Ai = arg min over Ai of |pα(C|E) - Pc|   (5)
Once the destination states S1, S2, and S3 are determined, corresponding collision probabilities p(C|S1), p(C|S2), and p(C|S3) and corresponding discomfort probabilities p(U|S1), p(U|S2), and p(U|S3) are determined.
Formula (6) represents the discomfort probability p(U|Ai) that occurs in selecting action Ai; and formula (7) represents the discomfort probability pα(C|Ē) that occurs in performing algorithm α.
Discomfort probability that occurs in selecting action Ai:
Discomfort probability that occurs in performing algorithm α:
pα(C|Ē) = p(C|Ai)p(D|Ē) + p(C|A1)p(D̄|Ē)   (7)
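A small numeric sketch of formulas (3) and (7) follows; all probability values are assumptions chosen only to show how the correct and false detection probabilities weight the per-action probabilities.

    # Ai: braking action selected on detection; A1: cruising action
    p_D_E, p_D_notE = 0.95, 0.02     # correct and false detection probabilities (assumed)
    p_C_Ai, p_C_A1 = 1e-5, 0.30      # probabilities attached to each action (assumed)

    # formula (3): probability in performing algorithm alpha when an obstacle is present
    p_alpha_E = p_C_Ai * p_D_E + p_C_A1 * (1.0 - p_D_E)

    # formula (7): probability in performing algorithm alpha when no obstacle is present
    p_alpha_notE = p_C_Ai * p_D_notE + p_C_A1 * (1.0 - p_D_notE)

    print(p_alpha_E, p_alpha_notE)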
A description will be given of assessment of the collision probability that occurs in performing an algorithm, in terms of forming an action-based state transition tree as a network.
First, a description will be given of a state transition tree formed based on actions.
The bullet marks ⋅ in the corresponding figure each represent a state, and the branches represent state transitions caused by actions.
Next, a description will be given of the action-based state transition tree, shown in the corresponding figure.
As can be appreciated from a comparison between the corresponding figures, the action-based state transition tree organizes the states reachable through the actions into a network.
In this way, by creating an action-based state transition tree, it is possible to use the network in the tree to assess the collision probability that occurs in performing an algorithm.
Next, a description will be given of the relationship of an algorithm and an action-based state transition tree.
The algorithm to be applied to the action-based state transition tree need not necessarily be the same at every subtree thereof. For example, when the sensor detects a vehicle entering into the travel lane of the own vehicle to cut in front of the own vehicle or detects the occurrence of an obstacle or falling object in the middle of performing an algorithm on the action-based state transition tree shown in the corresponding figure, the algorithm applied to the subsequent subtrees can be switched to one suitable for the new situation.
A description will be given of the continuousness of the processing to be performed on the collision probability by algorithm α.
In the model shown in the corresponding figure, the collision probability is handled discretely: each action is assumed to lead to a single next state.
In contrast, the model described below handles the collision probability continuously by adding a disturbance to the action.
A description will be given of how a probability in the model shown in the corresponding figure is calculated.
The algorithm α indicated by the dashed frame in the corresponding figure selects an action based on the observed value d̂.
A collision probability p(C|A) denotes the collision probability that occurs in the event of selecting an action A and is given by formula (8). AD denotes an algorithm adopted when an obstacle is detected, and AD̄ denotes an algorithm adopted when no obstacle is detected (formula (9)).
Collision probability in selecting an action:
p(C|A) = ∫ p(s|A) p(C|s) ds   (8)
Algorithm:
AD = αD(d̂), AD̄ = αD̄   (9)
Total collision probability:
p(C|E) = p(C|AD)p(D|E) + p(C|AD̄)p(D̄|E)   (10)
As represented by formula (8), the probability p(s|A), which represents a probability of transition to state s by action A, and the collision probability at state s are each regarded as continuous information; and the collision probability p(C|A), representing a probability of a collision that may occur when action A is selected, is represented by integrating their product over state s. The embodiments of the present application handle the collision probability continuously based on the model in which a disturbance is added to the action A as described above.
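As a minimal sketch, formula (8) can be evaluated numerically by discretizing the state s; the densities below are assumptions for illustration.

    import math

    def p_collision_given_action(p_s_given_A, p_C_given_s, s_grid):
        # formula (8): integrate the product of the transition density and the
        # per-state collision probability over the state s
        ds = s_grid[1] - s_grid[0]
        return sum(p_s_given_A(s) * p_C_given_s(s) * ds for s in s_grid)

    # example: the action A aims at state s0 = 10.0 with a normal disturbance
    s0, sigma = 10.0, 1.0
    p_s = lambda s: math.exp(-(s - s0) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    p_c = lambda s: 1.0 if s > 12.0 else 0.0   # collision assumed beyond location 12.0
    grid = [i * 0.05 for i in range(400)]      # s in [0, 20)
    print(p_collision_given_action(p_s, p_c, grid))   # approximately 0.023 (tail beyond 2 sigma)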
A description will be given of an action plan that is created by putting together the detection error and detection probability of the sensors.
Action Plan Created Taking into Account the Detection Error and Detection Probability of Sensors
A Case in Which an Obstacle Detection Event D Occurs in a State of Obstacle Presence E
This case means “correct detection”, and the action plan will perform a sudden braking operation or a preliminary braking operation according to the observed obstacle distance. In a case where the observed obstacle distance d̂ is far relative to the actual obstacle distance d, a preliminary braking operation will be performed with a certain likelihood of occurrence of a collision (the risk of collision can be relieved by a later action plan). In a case where the observed obstacle distance d̂ is near relative to the actual obstacle distance d, an overly early sudden braking operation will be performed. In a case where the observed obstacle distance d̂ is approximately equal to the actual obstacle distance d, a correct sudden braking operation or a correct preliminary braking operation will be performed in a manner depending on the observed obstacle distance d̂.
A Case in Which an Obstacle Non-Detection Event D̄ Occurs in a State of Obstacle Presence E
In this case, as no obstacle is recognized, the action plan will perform a cruising operation. This means “overlooking” and may possibly result in a collision accident. When the actual obstacle distance d is short, this cruising operation has a possibility of encountering a collision in a short time and will be required to perform a collision avoidance action rapidly. When the actual obstacle distance d is far, this cruising operation has a possibility of encountering a collision at a later time (the risk of collision can be relieved by a later action plan).
A Case in Which an Obstacle Detection Event D Occurs in a State of Obstacle Absence Ē
This means a “false detection”. As an obstacle is recognized falsely, the action plan will perform a braking operation. When the observed obstacle distance d̂ is short, a fruitless sudden braking operation will be performed. When the observed obstacle distance d̂ is far, a fruitless preliminary braking operation will be performed.
A Case in Which an Obstacle Non-Detection Event D̄ Occurs in a State of Obstacle Absence Ē
This means a “correct detection of absence”. In this case, as no obstacle is recognized, the action plan will perform a cruising operation. This is a correct cruising operation.
In the case in which an obstacle non-detection event D̄ occurs, whether the resulting cruising operation is correct thus depends on the actual state, which the own vehicle cannot observe.
The present embodiment restrains the above-described “overlooking” and “false detection”.
AND detection scheme, represented in the corresponding figure, adopts a braking operation only when both of the two sensors detect an obstacle. The individual cases are as follows.
A Case in Which an Obstacle Detection Event D1 and an Obstacle Non-Detection Event D̄2 Occur in a State of Obstacle Presence E
This means a contradiction between the two sensors, one of which fails to detect an existing obstacle. When AND detection scheme is used, even when an obstacle is detected (D1), it is determined that the information from one of the two sensors is inaccurate. In this case, a braking operation could be incorrect and thus may possibly lead to an accident. In view of this, a cruising operation is adopted. When the actual obstacle distance d is short, this cruising operation has a possibility of encountering a collision in a short time and will be required to perform a collision avoidance action rapidly. When the actual obstacle distance d is far, this cruising operation has a possibility of encountering a collision at a later time.
A Case in Which an Obstacle Non-Detection Event D̄1 and an Obstacle Non-Detection Event D̄2 Occur in a State of Obstacle Presence E
In this case, a cruising operation is adopted. This is an “oversight/incorrect cruising operation” selected by falsely determining that no obstacle is present. This could lead to a collision accident. When the actual obstacle distance d is short, this cruising operation has a possibility of encountering a collision in a short time and will be required to perform a collision avoidance action rapidly. When the actual obstacle distance d is far, this cruising operation has a possibility of encountering a collision at a later time.
A Case in Which an Obstacle Non-Detection Event D̄1 and an Obstacle Detection Event D2 Occur in a State of Obstacle Presence E
This means a contradiction between the two sensors, one of which fails to detect an existing obstacle. When AND detection scheme is used, even when an obstacle is detected (D2), it is determined that the information from one of the two sensors is inaccurate. In this case, a braking operation could be incorrect and thus may possibly lead to an accident. In view of this, a cruising operation is adopted. When the actual obstacle distance d is short, this cruising operation has a possibility of encountering a collision in a short time and will be required to perform a collision avoidance action rapidly. When the actual obstacle distance d is far, this cruising operation has a possibility of encountering a collision at a later time.
A Case in Which an Obstacle Detection Event D1 and an Obstacle Detection Event D2 are Detected in a State of Obstacle Absence Ē
In this case, a braking operation is adopted. This case means that an incorrect braking operation is performed as a result of the false detection. When the observed obstacle distance d̂ is short, a fruitless sudden braking operation will be performed. When the observed obstacle distance d̂ is far, a fruitless preliminary braking operation will be performed.
In this way, when AND detection scheme is used, a cruising operation is adopted in all the cases other than the case in which an obstacle detection event D1 and an obstacle detection event D2 occur. In a state of obstacle presence E, that cruising operation is possibly involved in a collision. In a state of obstacle absence Ē, that cruising operation is a correct cruising operation.
OR detection scheme, represented in the corresponding figure, adopts a braking operation when at least one of the two sensors detects an obstacle. The individual cases are as follows.
A Case in Which an Obstacle Non-Detection Event D̄1 and an Obstacle Non-Detection Event D̄2 Occur in a State of Obstacle Presence E
In this case, a cruising operation is adopted. This is an “oversight/incorrect cruising operation” selected by falsely determining that no obstacle exists. This could lead to a collision accident. When the actual obstacle distance d is short, this cruising operation has a possibility of encountering a collision in a short time and will be required to perform a collision avoidance action rapidly. When the actual obstacle distance d is far, this cruising operation has a possibility of encountering a collision at a later time.
A Case in Which an Obstacle Detection Event D1 and an Obstacle Detection Event D2 Occur in a State of Obstacle Absence Ē
In this case, a braking operation is adopted. This case means that an incorrect braking operation is performed as a result of the false detection. When the observed obstacle distance d̂ is short, a fruitless sudden braking operation will be performed. When the observed obstacle distance d̂ is far, a fruitless preliminary braking operation will be performed.
A Case in Which an Obstacle Detection Event D1 and an Obstacle Non-Detection Event D̄2 Occur in a State of Obstacle Absence Ē
In this case, a braking operation is adopted. When the observed obstacle distance d̂ is short, a fruitless sudden braking operation will be performed. When the observed obstacle distance d̂ is far, a fruitless preliminary braking operation will be performed.
A Case in Which an Obstacle Non-Detection Event D̄1 and an Obstacle Detection Event D2 Occur in a State of Obstacle Absence Ē
In this case, a braking operation is adopted. When the observed obstacle distance d̂ is short, a fruitless sudden braking operation will be performed. When the observed obstacle distance d̂ is far, a fruitless preliminary braking operation will be performed.
According to OR detection scheme, a cruising operation is adopted only when an obstacle non-detection event D̄1 and an obstacle non-detection event D̄2 occur simultaneously.
Comparing AND detection scheme with OR detection scheme (see the corresponding figures), OR detection scheme adopts a braking operation in more cases, which is advantageous in safety but disadvantageous in comfort.
According to OR detection scheme, when a contradiction between sensors occurs due to a sensor failure, a braking operation is selected. In a state of obstacle absence Ē, this leads to a compromise in comfort. However, in a state of obstacle presence E, this leads to safety.
HALF-AND detection scheme, represented in the corresponding figure, varies the braking force according to the detection states of the two sensors. The individual cases are as follows.
In the case of AND detection scheme illustrated in the corresponding figure, a contradiction between the two sensors results in a cruising operation. As illustrated in the corresponding figures, HALF-AND detection scheme instead responds to such a contradiction with a braking operation with a limited braking force. As described, with HALF-AND detection scheme, when an obstacle detection event D1 and an obstacle non-detection event D̄2 occur (or when an obstacle non-detection event D̄1 and an obstacle detection event D2 occur), a braking operation with a limited braking force is adopted.
In the case of OR detection scheme illustrated in the corresponding figure and in the case of AND detection scheme shown in the corresponding figure, the outcomes can be compared with those of HALF-AND detection scheme in terms of safety and comfort.
From these results, HALF-AND detection scheme is considered to be superior in safety but inferior in comfort compared to AND detection scheme, and superior in comfort but inferior in safety compared to OR detection scheme. However, comparing them from an overall viewpoint, i.e., by comparing the total numbers of black star marks ★ and white star marks ⋆, HALF-AND detection scheme is considered to provide an improved result compared to AND detection scheme and OR detection scheme.
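The three schemes can be summarized by the following decision-table sketch over the four detection-state combinations of the two sensors; the operation labels follow the descriptions above.

    def select_operation(scheme, d1, d2):
        # d1, d2: True when the corresponding sensor detects the obstacle
        if scheme == "AND":
            return "braking" if (d1 and d2) else "cruising"
        if scheme == "OR":
            return "braking" if (d1 or d2) else "cruising"
        if scheme == "HALF-AND":
            if d1 and d2:
                return "braking"
            if d1 or d2:
                return "limited braking"   # contradiction between the two sensors
            return "cruising"
        raise ValueError(scheme)

    for scheme in ("AND", "OR", "HALF-AND"):
        print(scheme, [select_operation(scheme, d1, d2)
                       for d1 in (True, False) for d2 in (True, False)])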
Descriptions have been given of the action plans using AND detection scheme, OR detection scheme, and HALF-AND detection scheme. Hereinafter, a method for obtaining collision probabilities of states while traveling will be described in detail.
Next, a description will be given of a method for obtaining event occurrence probability. When the event of interest is a collision, the event occurrence probability corresponds to a collision probability. The event occurrence probability is a generalized concept of collision probability.
The event occurrence probability is given by formula (11), reconstructed here from the definitions that follow, in the same form as formula (1):
p(C|Sc) = ∫∫ p(C|Sn) p(Sn|Sc, α(d̂)) p(d̂|Sc) dd̂ dSn   (11)
In formula (11), α denotes an action, Sc denotes a current state, Sn denotes a next state, C denotes an event (collision), d̂ denotes an observed value, α(d̂) represents an action to be performed in response to the detection of the observed value d̂, p(C|S) represents a probability of occurrence of an event (collision) at state S, p(Sn|Sc, α(d̂)) represents a probability of a state transition from state Sc to state Sn being caused by the action α(d̂), which is performed in response to the observation of d̂, and p(d̂|Sc) represents a probability of d̂ being observed at state Sc.
The trace of location and speed (solid bold line) shown in the corresponding figure represents an example of a stopping operation in the two-dimensional space of location and speed.
The collision probability at a state where the speed is vc=0 and the location is at xc, denoted as p(C|vc=0,xc) and shown in the corresponding figure, represents the probability that the own vehicle 1 has stopped at or beyond the location of the target object.
The error distribution of detecting a target object by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), shown in the corresponding figure, is modeled as a normal distribution (see formula (17)).
The probability of detecting a target object by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), shown in the corresponding figure, varies with the distance to the target object (see formula (18)).
p(C|vc=0,xc), which represents the probability of a collision with vc=0, is represented by formula (15), where σd denotes the standard deviation of the normal distribution described above and erf denotes the Gaussian error function. The uncertainty probability p(εα) of the algorithm (deceleration) α has a normal distribution and can be represented by formula (16). In other words, p(εα) represents a probability of the uncertainty εα, which represents a difference between the deceleration actually performed by the vehicle and the deceleration a requested by an instruction. In formula (16), σα is the standard deviation of the normal distribution that represents the uncertainty. The probability (error distribution) p(d̂2|xc,vc) of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), located at state (xc,vc), detecting a target object at location d̂2 is represented by formula (17). The probability p(d̂1=1|xc,vc) of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), located at state (xc,vc), detecting the target object is represented by formula (18). In formula (18), Pmax and Pmin respectively denote the maximum and minimum detection probabilities of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) detecting a target object. In formula (18), ds and de are parameters representing locations that depend on a detection characteristic of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120). The parameters define the locations between which the probability of detecting a target object by the distance detection part varies from Pmin to Pmax (see the corresponding figure).
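The following is a minimal sketch, in Python, of the four quantities just described (formulas (15) to (18)). The source gives only their qualitative shapes, so the exact forms, in particular the linear ramp of the detection probability between ds and de, and all parameter values are assumptions.

    import math

    def p_collision_at_stop(x_stop, x_obstacle, sigma_d):
        # formula (15): cumulative normal of the detection error; approaches 1
        # when the stop location is beyond the obstacle location
        z = (x_stop - x_obstacle) / (sigma_d * math.sqrt(2.0))
        return 0.5 * (1.0 + math.erf(z))

    def p_epsilon_alpha(eps, sigma_a):
        # formula (16): normal density of the deceleration uncertainty
        return math.exp(-eps * eps / (2.0 * sigma_a * sigma_a)) / (sigma_a * math.sqrt(2.0 * math.pi))

    def p_d2_given_state(d2, x_c, x_obstacle, sigma_d):
        # formula (17): error distribution of the detected distance d2, centered
        # on the true distance from the current location x_c to the obstacle
        diff = d2 - (x_obstacle - x_c)
        return math.exp(-diff * diff / (2.0 * sigma_d * sigma_d)) / (sigma_d * math.sqrt(2.0 * math.pi))

    def p_d1_detect(dist, p_max, p_min, d_s, d_e):
        # formula (18): detection probability, Pmax when nearer than d_s, Pmin
        # when farther than d_e, varying in between (a linear ramp is assumed)
        if dist <= d_s:
            return p_max
        if dist >= d_e:
            return p_min
        return p_max + (p_min - p_max) * (dist - d_s) / (d_e - d_s)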
Calculation of Collision Probability p(C|S)
Next, a description will be given of how the collision probability p(C|S) at each state (location, speed) is to be calculated.
1. Assume Grid Points
Here, assume that Gx and Gv denote indices of the grid points; Gxsize and Gvsize denote the sizes of the grid; x denotes the location of a state and v denotes the speed of the state; xmax denotes a maximum value of the location and vmax denotes a maximum value of the speed. A conversion from indices to the state values of the corresponding actual state is represented by formulas (19) and (20) (see the corresponding figure).
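A minimal sketch of the index-to-state conversion of formulas (19) and (20) follows, assuming a uniform grid over [0, xmax] × [0, vmax]; the exact discretization is not specified in the source.

    def grid_to_state(Gx, Gv, Gx_size, Gv_size, x_max, v_max):
        # formulas (19) and (20), assumed form: map grid indices to state values
        x = Gx * x_max / (Gx_size - 1)   # location of grid point Gx
        v = Gv * v_max / (Gv_size - 1)   # speed of grid point Gv
        return x, v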
2. Conditions are Given to Ends of Grid Points of Collision Probability Map
The collision probabilities of the locations beyond the target object are assumed to be 1.
The collision probability at a speed of 0, i.e., p(Gx,0), is given by the cumulative distribution of the detection error of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) and is represented by formula (23), which is based on formula (15).
In formula (23), xo denotes the location of the obstacle; and σd denotes the standard deviation of the error distribution of the target object detection by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120).
3. The Collision Probabilities are Obtained Sequentially from Ends of the Grid Points of the Collision Probability Map.
Hereinafter, the collision probability on a grid point (Gx, Gv) is denoted as p(C|S)=pc(Gx,Gv). The collision probability at a current state Sc, denoted as p(C|Sc), is given by formula (24), and the collision probability at a current grid point (Gxc, Gvc), denoted as pc(Gxc,Gvc), is given by formula (25).
In formulas (24) and (25),
p(d̂|Sc) corresponds to p(d̂1)p(d̂2), where p(d̂1) represents the detection probability of detecting d̂1 and p(d̂2) represents the error distribution of d̂2;
p(Sn|Sc,α(d̂)) corresponds to p(εα), and they each represent an error distribution of action α (deceleration); and
p(C|Sn) corresponds to pc(gxn, gvn), and they each represent the collision probability at the next state (pc(gxn, gvn) represents an approximated value obtained by interpolation).
Descriptions have been given of how the collision probability p(C|S) at each state (location, speed) can be calculated.
Calculation of Collision Probability by Approximation from Next State
Next, a description will be given of how the collision probability of a current state can be obtained from the next state by approximation.
1. Obtain Current Speed and Location Based on the Location of Grid Point
2. Set the Parameters d̂1, d̂2, and εα
3. Obtain Necessary Deceleration
The necessary deceleration a is calculated according to formula (28).
4. Determine a Time Difference to Next State
Firstly, as shown in the corresponding figure, the time ΔTx required to reach the adjacent grid point in the location direction and the time ΔTv required to reach the adjacent grid point in the speed direction are obtained by formulas (29) and (30).
Next, the smaller one of ΔTx and ΔTv is selected as follows:
5. Obtain the Next State
The next state Sn(xn, vn) is obtained by formulas (32) and (33).
6. Obtain the Collision Probability of the Next State by Approximation from the Grid Points in the Vicinities of the Next State
As illustrated in the corresponding figure, the collision probability pc(gxn, gvn) at the next state is obtained by approximation, according to formula (34), from the collision probabilities at the grid points in the vicinity of the next state.
Here, the next state has been determined according to the parameters d̂1, d̂2, and εα set at STEP 2, and the collision probability pc(gxn, gvn) at the determined next state has been calculated.
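A minimal sketch of the approximation of formula (34) follows, assuming bilinear interpolation from the four surrounding grid points, with mx and mv taken as the fractional parts of the next state; the exact interpolation scheme is not given in the source, and the next state is assumed to lie strictly inside the grid.

    import math

    def interpolate_pc(pc_grid, gxn, gvn):
        # formula (34), assumed form: bilinear interpolation of the collision
        # probability at the (generally off-grid) next state (gxn, gvn)
        ix, iv = int(math.floor(gxn)), int(math.floor(gvn))
        mx, mv = gxn - ix, gvn - iv
        return ((1 - mx) * (1 - mv) * pc_grid[ix][iv]
                + mx * (1 - mv) * pc_grid[ix + 1][iv]
                + (1 - mx) * mv * pc_grid[ix][iv + 1]
                + mx * mv * pc_grid[ix + 1][iv + 1])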
7. Obtain the Collision Probability at the Current State by Sweeping Parameters d̂1, d̂2, and εα
The collision probability pc(Gxc, Gvc) at the current state is obtained by accumulating the calculated collision probabilities pc(gxn, gvn) in accordance with formula (35) while sweeping the parameters d̂1, d̂2, and εα.
It should be noted that:
pc(gxn,gvn) in formula (35) is given by formula (34);
mx and mv in formula (34) are determined by the next state (gxn,gvn);
the next state (gxn,gvn) is given by formulas (32) and (33);
formulas (32) and (33) are each a function of deceleration a and ΔT;
ΔT is determined by formula (31), based on ΔTv and ΔTx; and
ΔTv and ΔTx are given by functions of deceleration a, which are represented by formulas (29) and (30). The deceleration a is given by formula (28), representing a function of the uncertainty εα of deceleration a. That is, the next state (gxn,gvn) is obtained by a calculation in which the uncertainty εα is added to the deceleration a.
That is, pc(gxn,gvn) in formula (35) takes a value in which the uncertainty εα of deceleration a is reflected. The summation ΣΣΣ in formula (35) is applied to a term in which pc(gxn,gvn) is multiplied by p(εα), which represents the probability of the uncertainty εα of deceleration a, and the summation is performed with respect to the uncertainty εα. Therefore, the collision probability pc(Gxc,Gvc) of the current state, given by formula (35), takes a value in which the uncertainty εα of deceleration a has been reflected. In this way, the embodiment makes it possible to handle the collision probability continuously.
It should be noted that the term to which the summation ΣΣΣ in formula (35) is applied is multiplied by p(d̂1), which represents a probability of detecting a target object by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120). p(d̂1) is given by formula (18) and presents the characteristic shown in the corresponding figure.
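A condensed sketch of STEPS 2 to 7 (formulas (28) to (35)) follows. The functional forms of formulas (28) to (30) are assumptions inferred from the text (a stopping deceleration with additive uncertainty, and cell-crossing times in the location and speed directions); the sample lists are assumed to carry normalized probability weights, and the non-detection branch is omitted, consistent with the note on p(d̂1) above.

    def sweep_collision_probability(xc, vc, pc_lookup, p_d1, p_d2, p_eps,
                                    d2_samples, eps_samples, dx_cell, dv_cell, offset):
        total = 0.0
        for d2 in d2_samples:                             # sweep the detected distance
            braking_dist = max(d2 - offset, 0.1)          # stop short of the detected obstacle
            for eps in eps_samples:                       # sweep the deceleration uncertainty
                a = vc * vc / (2.0 * braking_dist) + eps  # formula (28), assumed form
                if a <= 0.0:
                    continue
                dT_x = dx_cell / max(vc, 0.1)             # formula (29), assumed form
                dT_v = dv_cell / a                        # formula (30), assumed form
                dT = min(dT_x, dT_v)                      # formula (31)
                xn = xc + vc * dT - 0.5 * a * dT * dT     # formula (32)
                vn = max(vc - a * dT, 0.0)                # formula (33)
                # formula (35): weight the interpolated next-state collision
                # probability by the error distribution of d2 and by p(eps)
                total += p_d2(d2) * p_eps(eps) * pc_lookup(xn, vn)
        return p_d1(xc, vc) * total                       # multiply by the detection probability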
The collision probability map to be set in STEP S12 of the flow according to the embodiment is calculated approximately assuming the situation shown in the corresponding figure.
The term “approximate collision probability map” is used here for the sake of convenience because, unlike the above-described “collision probability map” in which the true location of the target object is known, it is calculated on the assumption that the obstacle is present at the detected distance d̂2 detected by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), as shown in the corresponding figure.
The term d̂2 in the corresponding figure represents the detected distance; the obstacle is assumed to be present at the location xc+d̂2.
The collision probabilities at grid points at which the speed v=0, i.e., at points on one end portion of the approximate collision probability map, are given by formula (23), substituting the value of the detected distance for xo, which represents the true location of the target object. The collision probabilities at points at which the location is xc+d̂2 and at which the speed v≠0, i.e., at points on the other end portion of the approximate collision probability map, are given as 1.
Based on the collision probabilities at the points at which the speed v=0 and the collision probabilities at the points at which the location is xc+d̂2 and the speed v≠0, the collision probability at each of the grid points in the approximate collision probability map is obtained by approximate calculation according to formula (34).
The value of p(d̂1) in formula (35) is given by applying formula (18) to each of the grid points in the approximate collision probability map. In formula (18), xc is substituted with the location given by formula (19).
The value of p(d̂2) in formula (35) is calculated using formula (17), substituting the value of the detected distance for xo, which represents the true location of the target object. The term p(d̂2) in formula (35) corresponds to the above-described fusion accuracy reliability-based probability distribution 1003. As described above, characteristic values representing the fusion accuracy reliability-based probability distribution 1003 can be obtained based on a result of measuring the characteristics of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) detecting a target object in a situation where the true distance is known. The width of the fusion accuracy reliability-based probability distribution 1003 depends on the detected distance: the larger the detected distance, the wider the normal distribution that the fusion accuracy reliability-based probability distribution 1003 presents. Therefore, the parameter σd in formula (17), which represents the normal distribution, is to be substituted with a value obtained from a function of the detected distance detected by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), which function is to be obtained in advance based on measurement values. Note that the value of the parameter σd is also to be used in formula (23), which gives the collision probabilities at locations at which the speed v=0.
The term p(εα) in formula (35) is given by formula (16). The parameter σα of the normal distribution represented by formula (16) is to be obtained in advance through a measurement process. The value calculated by formula (16) using the measured value of σα is used as the value of p(εα).
The term pc(gxn,gvn) in formula (35) is given by formula (34).
The instruction value offset amount (dM), which is used to calculate the approximate collision probability map, has been described as being obtained from the predetermined target collision probability, the instruction achievement probability density distribution 1001, and the fusion accuracy reliability-based probability distribution 1003. The instruction achievement probability density distribution 1001 is to be measured in advance through a measurement process to measure the stop location of the vehicle 1 to which a certain deceleration instruction has been given and can be regarded as a function of deceleration. The fusion accuracy reliability-based probability distribution 1003 is a characteristic to be measured through a process of finding an error distribution according to which the errors of the detected distances are distributed in a situation where the true distance is known. The fusion accuracy reliability-based probability distribution 1003 can be regarded as a function of the detection information (sensor configuration, detected time period, and detected distance) outputted by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120).
It will be appreciated from the above description that the approximate collision probability map can be regarded as a function of the deceleration and the detection information (sensor configuration, detected time period, and detected distance) outputted from the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120).
According to the embodiment, a plurality of approximate collision probability maps are created in advance assuming a plurality of situations and stored in the collision probability map storage part 1010. Then, while the own vehicle 1 is traveling, the collision probability map setting part 250 selects one from the plurality of approximate collision probability maps that has been created for a situation close to the current situation.
As described above, the approximate collision probability map can be regarded as a function of the deceleration and the detection information (sensor configuration, detected time period, and detected distance) outputted from the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120). According to this idea, a table (approximate collision probability map table) that receives a deceleration, a sensor configuration, a detected time period, and a detected distance as input parameters and outputs an approximate collision probability map is created and stored in the collision probability map storage part 1010 in advance. Then, while the own vehicle 1 is traveling, the collision probability map setting part 250 inputs the maximum deceleration set by the action plan creating part 200 and the detection information obtained from the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), i.e., the combination of sensors (sensor configuration) currently detecting an obstacle, the detected time period during which the obstacle has been continuously detected by the sensor configuration, and the detected distance detected by the sensor configuration, to the approximate collision probability map table stored in the collision probability map storage part 1010, and sets the approximate collision probability map outputted from the table as the approximate collision probability map to be used currently.
Using the approximate collision probability map table makes it possible to set the approximate collision probability map in a short time without consuming computation resources.
The greater the information amount of the input parameters to the approximate collision probability map table (i.e., the smaller the quantization steps), the more precise the approximate collision probability maps can be. However, the size of the table increases accordingly. The input information amount is to be determined by a tradeoff between precision and table size (cost).
Incidentally, the table size can be reduced by not storing collision probabilities for all the grid points in the approximate collision probability map. For example, the data format of the approximate collision probability map can be determined such that the collision probabilities in a region where they are higher or lower than the target collision probability by a certain value or more are regarded as a fixed value and are not stored in the collision probability map storage part 1010.
As another embodiment, unlike the above-described embodiment in which an approximate collision probability map table is stored in the collision probability map storage part 1010 and an approximate collision probability map created for a situation close to the current situation is selected while the own vehicle 1 is traveling, the collision probability map setting part 250 may be configured to calculate the approximate collision probability map in a real-time manner based on: the maximum deceleration currently set; the characteristics of the own vehicle 1 measured in advance; and the detection information obtained from the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120), i.e., the combination of sensors (sensor configuration) currently detecting an obstacle, the detected time period during which the obstacle has been continuously detected by the sensor configuration, and the detected distance detected by the sensor configuration. With this embodiment, the collision probability map storage part 1010 can be eliminated. In this embodiment, the collision probability map setting part 250 is configured to calculate the approximate collision probability map in accordance with formulas (16) to (18), (23), and (26) to (35) with the offset amount determined by the instruction value offset amount calculation part 230 according to the above-described STEP 4.
In addition, in the case of the embodiment in which the collision probability map setting part 250 calculates the approximate collision probability map in a real-time manner, the collision probability map setting part 250 may be configured to store the calculated approximate collision probability map in the collision probability map storage part 1010 as an element of the approximate collision probability map table and to read and reuse the stored approximate collision probability map afterward when the collision probability map setting part 250 encounters a similar situation. In this case, the collision probability map setting part 250 may be configured to delete the stored approximate collision probability map from the approximate collision probability map table of the collision probability map storage part 1010 if the frequency of reusing the map is low. With this embodiment, the storage size necessary for the collision probability map storage part 1010 can be reduced.
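The compute-on-miss reuse described above can be sketched as follows; the use of an LRU policy for dropping rarely reused maps is an assumption for illustration.

    from collections import OrderedDict

    class MapCache:
        def __init__(self, capacity, compute_map):
            self.capacity = capacity
            self.compute_map = compute_map    # real-time calculation fallback
            self.cache = OrderedDict()

        def get(self, situation_key):
            if situation_key in self.cache:
                self.cache.move_to_end(situation_key)   # mark the map as recently reused
                return self.cache[situation_key]
            cp_map = self.compute_map(situation_key)    # calculate for the current situation
            self.cache[situation_key] = cp_map
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)          # delete the least recently reused map
            return cp_map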
In addition, even in the case of the embodiment in which the collision probability map setting part 250 calculates the approximate collision probability map in a real-time manner, not calculating collision probabilities for all the grid points in the approximate collision probability map can reduce the necessary computation resources and reduce the storage size for storing the approximate collision probability map in the collision probability map storage part 1010.
The details of how to obtain the approximate collision probability map in the embodiments have been described.
A vehicle control device 100 according to the embodiment includes: an action plan creating part 200 configured to create an action plan for autonomous driving of an own vehicle 1; a vehicle behavior control part 160 configured to control at least a speed of the own vehicle 1 based on the action plan; and a distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) configured to detect an object and output detection information on detection of the object. The action plan creating part 200 is configured to set a maximum deceleration of the own vehicle 1 for autonomous driving. The action plan creating part 200 includes a collision probability map setting part 250 configured to, when the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) detects an obstacle, determine a collision probability map, which is a two-dimensional map of a collision probability distribution representing likelihood of a collision between the own vehicle 1 and the detected obstacle in a two-dimensional space of a location and a speed. The collision probability map has been created based on a target stop location at which the own vehicle 1 is to be controlled to stop, which target stop location is determined based on a predetermined target collision probability, the maximum deceleration, and the detection information. The action plan creating part 200 is configured to create a current action plan based on the collision probability map, the predetermined target collision probability, and a current location and a current speed of the own vehicle 1.
An example of conventional techniques controls the speed of a vehicle with reference to a location error probability distribution, as described in Japanese Patent No. 4796400. Another example of conventional techniques determines the likelihood of a collision of an own vehicle with an obstacle, as described in Japanese Patent No. 4967840. These examples of conventional techniques control the speed of a vehicle based on the location error probability distribution or the likelihood of a collision but do not relate action plans of autonomous driving to that information. In the conventional techniques, there is a problem (problem 1) that in the event of determining an action plan using sensors, the distance to be detected by the sensors may not be enlarged while maintaining the reliability of the sensors due to insufficient detectable distances of the sensors. In addition, another problem (problem 2) of the conventional techniques is an insufficiency of clarification of safety levels and an insufficiency of quantification of the reliability (accuracy, detection probability) of the sensors and the accuracy of the action plan.
In contrast, the embodiment introduces a collision probability map, which is a two-dimensional map of a collision probability distribution representing likelihood of a collision between the own vehicle 1 and the detected obstacle in a two-dimensional space of a location and a speed. The action plan creating part 200 is configured to create a current action plan based on the collision probability map, the predetermined target collision probability, and a current location and a current speed of the own vehicle 1. With this, the action plan creating part 200 can know its own position in the two-dimensional space of location and speed with respect to the points (location, speed) having the target collision probability, and thereby create an action plan taking into account safety and comfort in autonomous driving.
The collision probability map according to the embodiment defines a plurality of grid points in the two-dimensional space. A collision probability has been calculated for each of the plurality of grid points. The collision probability calculated for each of the plurality of grid points represents a probability that the own vehicle 1 will collide with the obstacle when the vehicle behavior control part 160 instructs the own vehicle 1 to decelerate from the location and speed at the grid point, with the maximum deceleration being an upper limit, so as to stop at the target stop location.
With this configuration, the action plan creating part 200 can create an action plan based on the collision probabilities calculated assuming the maximum deceleration currently set.
The collision probability map includes a low collision probability region, in which the collision probability is lower than the first threshold value, and a high collision probability region, in which the collision probability is equal to or higher than the second threshold value but lower than the target collision probability (see the corresponding figure).
With this configuration, the action plan creating part 200 can create a sudden braking allowed action plan and a preliminary braking action plan in the two-dimensional space of the collision probability map. The sudden braking allowed action plan keeps the set speed set for autonomous driving and permits sudden braking operations in the low collision probability region (see the corresponding figure), while the preliminary braking action plan starts a preliminary braking operation in the high collision probability region so that the collision probability does not exceed the target collision probability.
The distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) includes a plurality of sensors. The detection information includes: a sensor configuration representing a combination of sensors, included in the plurality of sensors, that has detected the obstacle; a detected distance detected by the sensor configuration; and a detected time period during which the sensor configuration has continuously detected the obstacle. The target stop location has been determined so that a collision probability calculated by performing a convolution between an instruction achievement probability density distribution 1001 and an error distribution (fusion accuracy reliability-based probability distribution 1003) is equal to the target collision probability. The instruction achievement probability density distribution 1001 represents a probability density distribution of a location at which the vehicle will stop when the vehicle behavior control part 160 instructs the own vehicle 1 to decelerate with the maximum deceleration so as to stop the own vehicle 1 at the target stop location. The error distribution represents a probability distribution whose center location is at the detected distance and which represents a distribution of a difference between a true distance from the vehicle to the obstacle and the detected distance. The instruction achievement probability density distribution 1001 has been estimated with reference to the maximum deceleration, based on vehicle stopping characteristics of the own vehicle 1. The vehicle stopping characteristics have been measured in advance by performing stopping operations on the own vehicle 1 according to deceleration instructions given to the own vehicle 1. The error distribution (fusion accuracy reliability-based probability distribution 1003) has been estimated with reference to the sensor configuration, the detected distance, and the detected time period, based on distance detecting characteristics of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120). The distance detecting characteristics have been measured in advance by performing a measurement of a distance to a known object using the distance detection part in a situation where the true distance to the known object is known.
An error distribution representing a probability distribution of a difference between the true distance to the obstacle and the detected distance detected by the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) can be estimated based on: the combination of sensors (sensor configuration), of the plurality of sensors included in the distance detection part, which has detected an obstacle; the detected distance detected by the sensor configuration; and the detected time period during which the obstacle has been continuously detected by the sensor configuration. The overlapping of the error distribution and the instruction achievement probability density distribution under the maximum deceleration represents a collision probability of the own vehicle 1. By determining the target stop location so that this collision probability becomes equal to a target collision probability equal to or lower than a collision probability predicted in a case of manual driving, it is possible to determine a collision probability map according to which the autonomous driving system can drive the own vehicle 1 more safely than in a case where a human drives the own vehicle 1.
In addition, with the above configuration, the action plan creating part 200 can use a collision probability map in which the target stop location is determined based on the actually measured characteristics of the own vehicle 1 and the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120).
Collision probability pc(Gxc,Gvc) of a grid point (Gxc,Gvc) in the collision probability map that is set by the collision probability map setting part 250 of the vehicle control device 100 according to the embodiment has been obtained by formula (35) where: p(d̂1) represents a detection probability of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) to, at the grid point (Gxc,Gvc), detect the obstacle; p(d̂2) represents a probability of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) to, at the grid point (Gxc,Gvc), detect the obstacle at a distance d̂2; pc(gxn,gvn) represents a collision probability of a point (gxn,gvn) to which the own vehicle 1 will transition in the collision probability map when the own vehicle 1 is instructed by the vehicle behavior control part 160 to decelerate with a deceleration a so as to stop at the target stop location from the grid point (Gxc,Gvc); and p(εα) represents a probability of an uncertainty εα associated with the deceleration a, the uncertainty εα representing a difference between the deceleration a and the deceleration actually performed by the own vehicle 1. The point (gxn,gvn) in the collision probability map is obtained by performing a calculation in which the uncertainty εα is added to the deceleration a. In the collision probability map, the collision probabilities of grid points whose speed is 0 are given based on a result of measuring the characteristics of the distance detection part in advance. In the collision probability map, the collision probabilities of grid points whose speed is not 0 and whose location is at the detected distance are given a predetermined value. The value of pc(gxn,gvn) is approximately calculated based on the collision probabilities of grid points, of the plurality of grid points, in the vicinity of the point (gxn,gvn). The collision probability pc(Gxc,Gvc) for each of the plurality of grid points in the collision probability map is obtained by repeating the summation ΣΣΣ of formula (35) from a corner grid point whose speed is 0 and whose location is at the detected distance, in a direction in which the speed increases and/or the location approaches the own vehicle 1.
The collision probability map obtained in this way includes collision probability values that take the uncertainty εα of the deceleration α into account. The map therefore closely represents the state transitions that occur in actual traveling.
The term to which the summation ΣΣΣ in formula (35) is applied is multiplied by p(d̂1), the detection probability of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120). When the detected distance is far, the detection probability p(d̂1) takes a low value. Consequently, in a region where the detection probability p(d̂1) for a target object is low, i.e., in a region far from the target object, the collision probability pc(Gxc,Gvc) is assessed as low by formula (35). In a region where the collision probability is assessed as low in the collision probability map, the action plan creating part 200 can create an action plan that starts deceleration in a manner that does not impair the ride quality, as described above.
In the embodiment, the probability p(εα) has a normal distribution whose standard deviation is determined based on a characteristic of the own vehicle 1 measured in advance.
With this, the collision probability map setting part 250 can set a collision probability map based on the uncertainty appearing in actual deceleration operations of the own vehicle 1.
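The following Python sketch shows one possible backward sweep in the spirit of formula (35), building on the grid structure described above. It is a hedged illustration under simplifying assumptions: the time step dt, the bilinear interpolation, and the omission of the summation over alternative detected distances d̂2 are choices made here for brevity, not the patented computation.

```python
import numpy as np

def interpolate(pc, x, v, xq, vq):
    """Bilinear interpolation of pc at (xq, vq) from neighboring grid points,
    approximating pc(gxn, gvn) from the surrounding grid values."""
    i = int(np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2))
    j = int(np.clip(np.searchsorted(v, vq) - 1, 0, len(v) - 2))
    tx = float(np.clip((xq - x[i]) / (x[i + 1] - x[i]), 0.0, 1.0))
    tv = float(np.clip((vq - v[j]) / (v[j + 1] - v[j]), 0.0, 1.0))
    return ((1 - tx) * (1 - tv) * pc[i, j] + tx * (1 - tv) * pc[i + 1, j]
            + (1 - tx) * tv * pc[i, j + 1] + tx * tv * pc[i + 1, j + 1])

def build_collision_map(x, v, p_detect, eps_vals, p_eps,
                        pc_speed_zero, pc_at_detected, alpha, dt=0.1):
    """Backward sweep over a (location, speed) grid: x[-1] is the detected
    distance, v[0] is speed 0. Assumes dt is large enough that every
    transition leaves its current location cell (already-computed region)."""
    nx, nv = len(x), len(v)
    pc = np.zeros((nx, nv))
    pc[:, 0] = pc_speed_zero          # boundary: speed 0, measured in advance
    pc[-1, 1:] = pc_at_detected       # boundary: at detected distance, speed != 0
    for i in range(nx - 2, -1, -1):   # location approaching the own vehicle
        for j in range(1, nv):        # speed increasing
            acc = 0.0
            for e, pe in zip(eps_vals, p_eps):  # discretized uncertainty eps_alpha
                a = alpha + e                    # actually realized deceleration
                xn = x[i] + v[j] * dt - 0.5 * a * dt * dt
                vn = max(v[j] - a * dt, 0.0)
                acc += pe * interpolate(pc, x, v, xn, vn)
            pc[i, j] = p_detect[i] * acc         # weighted by p(d_hat_1)
    return pc
```

Here `eps_vals` and `p_eps` discretize the normal distribution of p(εα), and `p_detect` plays the role of the detection probability p(d̂1) at each location.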
The action plan creating part 200 according to the embodiment includes a collision probability map storage part 1010 storing a plurality of collision probability maps calculated based on: a result of a measurement process to measure the stop location of the own vehicle 1 to which a certain deceleration instruction has been given; a result of a measurement process to measure characteristics of the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) detecting a target object in a situation where the true distance to the target object is known; and the predetermined target collision probability. The collision probability map setting part 250 is configured to select, based on the maximum deceleration and the detection information, one of the plurality of collision probability maps stored in the collision probability map storage part 1010 as the collision probability map based on which the current action plan is to be created.
With this configuration, the collision probability map setting part 250 can determine the collision probability map without computing it while the own vehicle 1 is traveling, so that the consumption of computational resources is low.
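A minimal Python sketch of such a selection follows. The keying scheme, the distance quantization, and the class and parameter names are assumptions introduced here for illustration, not part of the claimed storage part.

```python
class CollisionProbabilityMapStorage:
    """Hypothetical store of precomputed collision probability maps,
    indexed by maximum deceleration, sensor configuration, and a
    quantized detected distance."""

    def __init__(self, maps):
        # maps: dict mapping (max_decel, sensor_config, dist_bin) -> 2-D array
        self._maps = maps

    def select(self, max_decel, sensor_config, detected_distance, bin_size=5.0):
        # Quantize the detected distance so nearby detections share a map.
        dist_bin = round(detected_distance / bin_size) * bin_size
        return self._maps[(max_decel, frozenset(sensor_config), dist_bin)]
```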
The collision probability map setting part 250 according to the embodiment may be configured to determine the collision probability map by calculating the collision probability map in a real-time manner while the own vehicle 1 is traveling, based on: parameters representing characteristics of the own vehicle 1 and the distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120); the target collision probability; the maximum deceleration; and the detection information, wherein the parameters have been derived from a result of a measurement process to measure a stop location of the own vehicle 1 to which a certain deceleration instruction has been given and a result of a measurement process to measure characteristics of the distance detection part detecting a target object in a situation where a true distance to the target object is known.
The action plan creating part 200 according to the embodiment may include a collision probability map storage part 1010 storing a collision probability map calculated by the collision probability map setting part 250 in a real-time manner while the own vehicle 1 is traveling. The collision probability map setting part 250 may be configured to, when the collision probability map stored in the collision probability map storage part 1010 corresponds to the predetermined target collision probability, the maximum deceleration, and the detection information, determine the collision probability map stored in the collision probability map storage part 1010 as the collision probability map based on which the current action plan is to be created.
With these configurations, the storage resources required by the collision probability map storage part 1010 for storing collision probability maps can be eliminated or reduced.
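As a hedged sketch of the cached real-time variant described above, the following Python class recomputes a map only when its inputs change. The cache key, the assumption that the detection information is hashable (e.g., a tuple), and the `compute_map` callback are all illustrative choices, not the embodiment itself.

```python
class CachedMapSetter:
    """Hypothetical wrapper: compute the collision probability map on demand
    and reuse it while the target collision probability, maximum
    deceleration, and detection information remain unchanged."""

    def __init__(self, compute_map):
        self._compute_map = compute_map   # e.g., a function like build_collision_map
        self._cache_key = None
        self._cached_map = None

    def get(self, target_pc, max_decel, detection_info):
        key = (target_pc, max_decel, detection_info)  # detection_info: tuple
        if key != self._cache_key:        # recompute only when inputs change
            self._cached_map = self._compute_map(target_pc, max_decel,
                                                 detection_info)
            self._cache_key = key
        return self._cached_map
```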
The distance detection part (detection device DD, vehicle sensor 60, and autonomous driving control part 120) includes a plurality of sensors for detecting obstacles. The detection information includes: a sensor configuration representing a combination of sensors included in the plurality of sensors and having detected the obstacle; and a detected distance detected by the sensor configuration. The action plan creating part 200 may be configured such that when the obstacle has been detected by only a subset of the plurality of sensors and the detected distance is equal to or greater than a predetermined distance threshold value, the action plan creating part 200 regards the obstacle as not existing and creates an action plan for performing a cruising operation as the current action plan, and that when the obstacle has been detected by only a subset of the plurality of sensors and the detected distance is less than the predetermined distance threshold value, the action plan creating part 200 regards the obstacle as existing and creates an action plan for performing a preliminary braking operation as the current action plan.
When a contradiction between sensors occurs due to a sensor failure, this configuration leads to comfort in a state of obstacle absence Ē when the observed obstacle distance d̂ is equal to or greater than the predetermined distance threshold value, and leads to safety in a state of obstacle presence E when the observed obstacle distance d̂ is less than the predetermined distance threshold value.
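A short Python sketch of this decision rule follows. The action labels and the handling of the agreeing-sensors case are assumptions added for completeness; only the two subset-detection branches come from the description above.

```python
def plan_on_sensor_contradiction(detected_by, all_sensors,
                                 detected_distance, dist_threshold):
    """Decision rule when an obstacle is reported by only some sensors."""
    if set(detected_by) == set(all_sensors):
        return "normal_plan"           # assumed: all sensors agree, no contradiction
    if detected_distance >= dist_threshold:
        return "cruise"                # obstacle regarded as absent (comfort)
    return "preliminary_braking"       # obstacle regarded as present (safety)
```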
The above-described embodiment is intended to illustrate the present invention in an easily understandable manner. The present invention is not limited to an embodiment that includes all of the components described. Moreover, a part of the configuration of a certain embodiment may be replaced with a configuration of another embodiment, or a configuration of another embodiment may be added to the configuration of a certain embodiment. Further, a part of the configuration of a certain embodiment may be eliminated, added to, or replaced with another configuration.
The vehicle control device and the vehicle control method of the present invention can be realized by a program that causes a computer to function as the vehicle control device and to execute the vehicle control method. The program may be stored in a computer-readable storage medium.