The subject disclosure relates to mitigating risk for a vehicle entering a road section and, in particular, to a system and method for calculating a risk for an intersection and operating the vehicle according to the calculated risk.
The risk of a vehicle having an accident increases when the vehicle passes through an intersection due to the presence of cross-traffic, changing speeds, changing light conditions, pedestrians crossing at the intersection, etc. Therefore, an autonomous vehicle entering the intersection needs to take additional measurements of the traffic conditions in its environment to ensure that it can pass through the intersection without incident. To be effective, such measurements and the subsequent calculations need to be performed in real time. Accordingly, it is desirable to provide a simple method of assessing the risk to the vehicle of passing through an intersection based on current traffic conditions.
In one exemplary embodiment, a method of navigating a vehicle is disclosed. An image of a roadway is obtained from a sensor. The sensor is focused at a road segment selected from a plurality of road segments of the roadway, wherein the road segment is selected using a machine learning program based on a risk of the road segment. The machine learning program is trained to select the road segment by calculating the risk for each of the plurality of road segments of the roadway, wherein the risk associated with the road segment is based on a hazard probability associated with the road segment and an occupancy probability associated with the road segment, selecting the road segment from the plurality of road segments based on the risk associated with the road segment, and determining a reduction in risk for a road risk model of the roadway due to selecting the road segment.
In addition to one or more of the features described herein, calculating the risk for the road segment further includes calculating a product of the hazard probability for the road segment and the occupancy probability for the road segment. Focusing the sensor further includes performing a random selection process on the plurality of road segments in which a probability of selecting the road segment is based on the risk associated with the road segment. The method further includes alerting a driver of the vehicle when an attention of the driver of the vehicle is not on the road segment. The method further includes directing the attention of the driver to the road segment using a haptic signal. The method further includes comparing the risk of the road risk model to a risk metric and rewarding the machine learning program for selecting the road segment when the risk is less than the risk metric. In an embodiment, the sensor includes a first sensor and a second sensor and the method further includes focusing the first sensor on the road segment while maintaining a wide field of view of the roadway with the second sensor.
In another exemplary embodiment, a system for navigating a vehicle is disclosed. The system includes a sensor and a processor. The sensor is configured to capture an image of a roadway. The processor is configured to focus the sensor at a road segment selected from a plurality of road segments of the roadway using a machine learning program based on a risk of the road segment. The machine learning program is trained to focus the sensor by calculating the risk for each of the plurality of road segments of the roadway, wherein the risk associated with the road segment is based on a hazard probability associated with the road segment and an occupancy probability associated with the road segment, selecting the road segment from the plurality of road segments based on the risk associated with the road segment, and determining a reduction in the risk for a road risk model of the roadway due to selecting the road segment.
In addition to one or more of the features described herein, the processor is further configured to calculate the risk for the road segment by calculating a product of the hazard probability for the road segment and the occupancy probability for the road segment. The processor is further configured to focus the sensor by performing a random selection process on the plurality of road segments in which a probability of selecting the road segment is based on the risk associated with the road segment. The processor is further configured to alert a driver of the vehicle when an attention of the driver is not on the road segment. The processor is further configured to direct the attention of the driver to the road segment using a haptic signal. The processor is further configured to train the machine learning program by comparing the risk of the road risk model to a risk metric and rewarding the machine learning program for selecting the road segment when the risk is less than the risk metric. In an embodiment, the sensor includes a first sensor and a second sensor and the processor is further configured to focus the first sensor on the road segment while maintaining a wide field of view of the roadway with the second sensor.
In yet another exemplary embodiment, a vehicle is disclosed. The vehicle includes a sensor and a processor. The sensor is configured to capture an image of a roadway. The processor is configured to focus the sensor at a road segment selected from a plurality of road segments of the roadway using a machine learning program based on a risk of the road segment. The machine learning program is trained to focus the sensor by calculating the risk for each of the plurality of road segments of the roadway, wherein the risk associated with the road segment is based on a hazard probability associated with the road segment and an occupancy probability associated with the road segment, selecting the road segment from the plurality of road segments based on the risk associated with the road segment, and determining a reduction in the risk for a road risk model of the roadway due to selecting the road segment.
In addition to one or more of the features described herein, the processor is further configured to calculate the risk for the road segment by calculating a product of the hazard probability for the road segment and the occupancy probability for the road segment. The processor is further configured to focus the sensor by performing a random selection process on the plurality of road segments in which a probability of selecting the road segment is based on the risk associated with the road segment. The processor is further configured to direct an attention of a driver of the vehicle to the road segment using a haptic signal when the attention of the driver is not on the road segment. The processor is further configured to train the machine learning program by comparing the risk of the road risk model to a risk metric and rewarding the machine learning program for selecting the road segment when the risk is less than the risk metric. In an embodiment, the sensor includes a first sensor and a second sensor and the processor is further configured to focus the first sensor on the road segment while maintaining a wide field of view of the roadway with the second sensor.
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
Other features, advantages and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
In accordance with an exemplary embodiment, the vehicle 10 generally includes at least a navigation system 20, a propulsion system 22, a transmission system 24, a steering system 26, a brake system 28, a sensing system 30, an actuator system 32, and a controller 34. The navigation system 20 determines a road-level route plan for automated driving of the vehicle 10. The propulsion system 22 provides power for creating a motive force for the vehicle 10 and can, in various embodiments, include an internal combustion engine, an electric machine such as a traction motor, and/or a fuel cell propulsion system. The transmission system 24 is configured to transmit power from the propulsion system 22 to two or more wheels 16 of the vehicle 10 according to selectable speed ratios. The steering system 26 influences a position of the two or more wheels 16. While depicted as including a steering wheel 27 for illustrative purposes, in some embodiments contemplated within the scope of the present disclosure, the steering system 26 may not include a steering wheel 27. The brake system 28 is configured to provide braking torque to the two or more wheels 16.
The sensing system 30 includes sensors or detectors that sense an object 50 in an exterior environment of the vehicle 10 and determine various parameters of the object that are useful in locating the positions and relative velocities of various remote vehicles in the environment of the autonomous vehicle. Such parameters can be provided to the controller 34. The sensing system 30 can include a first sensor and a second sensor. In various embodiments, the first sensor can be used to focus on a selected segment of a road or an intersection while the second sensor is used to obtain a wide field-of-view of the road or intersection. The first sensor can change its field of view from wide to narrow and can also change its orientation. In an embodiment, the sensing system 30 includes one or more digital cameras for capturing one or more images of the road or intersection. In alternative embodiments, the sensing system 30 can include one or more of a radar system, Lidar, etc., for detecting the range, relative velocity, azimuth, and elevation of an object 50 such as a target vehicle, a pedestrian, etc.
The controller 34 includes a processor 36 and a computer readable storage device or computer readable storage medium 38. The storage medium includes programs or instructions 39 that, when executed by the processor 36, operate the vehicle 10 based on outputs from the sensing system 30. The controller 34 can build a trajectory for the vehicle 10 based on the output of the sensing system 30 and can provide the trajectory to the actuator system 32 to control the propulsion system 22, transmission system 24, steering system 26, and/or brake system 28 in order to navigate the vehicle 10 with respect to the object 50.
The computer readable storage medium 38 may further include programs or instructions 39 that, when executed by the processor 36, determine a risk to the vehicle 10 of a traffic scenario, particularly at an intersection. The controller 34 can provide an alert to a driver of the vehicle 10 and/or control an operation of the vehicle based on the risk. In various embodiments, the controller 34 can also accelerate and/or decelerate the vehicle 10, steer the vehicle, apply the brakes, etc. to avoid an incident or collision with the object 50.
The computer readable storage medium 38 may further include programs or instructions 39 that, when executed by the processor 36, focus a sensor on a selected road segment of the roadway based on a risk associated with the road segment. The sensor can be focused using a machine learning program, such as a neural network. The machine learning program can be trained to focus the sensor using real or simulated data, as described herein.
Each road segment is shaded to indicate a range for a hazard probability associated with the road segment. For example, a first leading road segment 314 immediately ahead of the host vehicle 301 has a high hazard probability associated with it, as there is a high probability of collision if a target vehicle is in it. A second leading road segment 316, further from the host vehicle 301, has a medium hazard probability associated with it, due primarily to the additional time available for the host vehicle 301 to react to a target vehicle in it. Similarly, a third leading road segment 318 has a low hazard probability associated with it, and a fourth leading road segment 320 has a very low hazard probability associated with it. In various embodiments, the high hazard probability is between about 0.75 and 1, the medium hazard probability is between about 0.5 and about 0.75, the low hazard probability is between about 0.25 and about 0.5, and the very low hazard probability is between 0 and about 0.25.
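By way of a non-limiting illustration, the following Python sketch buckets a hazard probability into the qualitative categories described above. The function name and the exact boundary handling are assumptions made for illustration; the approximate ranges follow the values given in this paragraph.

```python
def hazard_category(p_hazard: float) -> str:
    """Map a hazard probability in [0, 1] to the qualitative ranges above."""
    if p_hazard >= 0.75:
        return "high"       # about 0.75 to 1
    if p_hazard >= 0.5:
        return "medium"     # about 0.5 to about 0.75
    if p_hazard >= 0.25:
        return "low"        # about 0.25 to about 0.5
    return "very low"       # 0 to about 0.25
```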
A first crossing road segment 402, a second crossing road segment 404, a third crossing road segment 406, and a fourth crossing road segment 408 illustrate the effects of the assumed target vehicle speed on the hazard probability. The first crossing road segment 402 has a high hazard probability because, although this road segment is away from the intersection, a target vehicle in this road segment traveling at its assumed speed is on a collision course with the host vehicle 301.
The second crossing road segment 404 has a medium hazard probability because a target vehicle in this road segment traveling at its assumed speed is less likely to collide with the host vehicle 301. Similarly, the third crossing road segment 406 has a low hazard probability because a target vehicle in this road segment would most likely move out of the path of the host vehicle 301 by the time the host vehicle reaches this road segment. Similarly, the fourth crossing road segment 408 has a very low hazard probability due to its relative inaccessibility to the host vehicle 301.
Once the hazard probability and the occupancy probability are known, a risk can be calculated for each road segment and for the entire intersection. The risk for a selected road segment (nth road segment) is the product of the occupancy probability for the road segment and the hazard probability for the road segment, as shown in Eq. (1):
$\mathrm{Risk}_n = P(O_n) \times P(C_n)$    Eq. (1)
where $P(O_n)$ is the occupancy probability of the nth road segment and $P(C_n)$ is the hazard probability of the nth road segment. The risk for the entire intersection is a summation of the risks for the plurality of road segments of the intersection, as shown in Eq. (2):
$\mathrm{Risk}(\mathrm{Intersection}) = \sum_{n=1}^{N} P(O_n) \times P(C_n)$    Eq. (2)
where N is the total number of road segments. The determined risk for the intersection can be compared to a warning threshold; when the threshold is exceeded, a signal can be sent to the processor or to the driver to prevent the host vehicle 301 from moving into a situation in the intersection in which an accident can occur.
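As a non-limiting illustration, the following Python sketch evaluates Eq. (1) and Eq. (2) for a set of road segments and compares the intersection risk to a warning threshold. The probability values and the threshold are assumptions for illustration and are not specified by the disclosure.

```python
from typing import Sequence

def segment_risk(p_occupancy: float, p_hazard: float) -> float:
    """Eq. (1): the risk of the nth road segment is P(O_n) * P(C_n)."""
    return p_occupancy * p_hazard

def intersection_risk(p_occupancy: Sequence[float], p_hazard: Sequence[float]) -> float:
    """Eq. (2): the intersection risk is the sum of the per-segment risks."""
    return sum(segment_risk(p_o, p_c) for p_o, p_c in zip(p_occupancy, p_hazard))

# Hypothetical values for four road segments of an intersection.
occupancy = [0.20, 0.60, 0.10, 0.05]   # P(O_n)
hazard = [0.90, 0.60, 0.30, 0.10]      # P(C_n)
risk = intersection_risk(occupancy, hazard)

WARNING_THRESHOLD = 0.5  # assumed threshold; not specified by the disclosure
if risk > WARNING_THRESHOLD:
    print(f"Intersection risk {risk:.2f} exceeds the warning threshold")
```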
The inputs for training the machine learning program can come from many sources. A road importance model 902 and an intersection map 904 can be provided from a database or remote server. Camera data 906 and other sensor data 908 can come from the various components of the sensing system 30 such as the digital camera 40 and/or radar, Lidar, etc. The salience 910 of the intersection and the prior focus location data 912 can be stored in the computer readable storage medium 38 or the controller 34, for example.
The road importance model 902 and the intersection map 904 are used to determine a wide field-of-view image 914 of the road intersection with weightings for a hazard probability map. The camera data 906 and other sensor data 908 are used to perform the road segmentation 916 and determine prior vehicle detections 918 of objects in the road, along with their locations and velocities. The salience 910 of the intersection and the prior focus location data 912 are used to create an uncertainty map 920 that can be used to train the machine learning program 922.
The machine learning program 922 can be a neural network, a Gaussian process machine, a support vector machine or other suitable machine learning program. The wide field-of-view image 914, road segmentation 916, prior vehicle detections 918 and the uncertainty map 920 are provided to the machine learning program 922. The machine learning program 922 receives these features and training data 948 and implements a stochastic policy 924 for selecting a road segment for focus by the sensor. The stochastic policy 924 includes randomly selecting a road segment based on the weights of the road segments. The risk for a road segment is used as its weight in the random selection process. The probability of a road segment being selected using the random selection process is related to the risk associated with the road segment. For example, a road segment having a high associated risk is more likely to be selected than a road segment having a low associated risk.
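A minimal Python sketch of such a risk-weighted random selection is shown below. The function name and the example risk values are assumptions for illustration and do not represent the particular stochastic policy 924 itself.

```python
import random
from typing import Sequence

def select_focus_segment(segment_risks: Sequence[float]) -> int:
    """Draw a road segment index with probability proportional to its risk weight."""
    if sum(segment_risks) == 0:
        return random.randrange(len(segment_risks))  # fall back to a uniform choice
    return random.choices(range(len(segment_risks)), weights=segment_risks, k=1)[0]

# Example: the first segment carries most of the risk and is selected most often.
risks = [0.54, 0.36, 0.03, 0.005]
focused_segment = select_focus_segment(risks)
```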
The random selection process outputs a selected road segment 926 for closer focus. A simulation 930 is then performed to determine a road risk model based on the selection of the selected road segment. In the simulation, the first sensor is made to focus on the selected road segment to obtain a narrow FOV camera image 932. A second sensor remains on the wide field-of-view of the intersection in the simulation to obtain a wide FOV camera image 934. In general, the narrow FOV image is a high-resolution image, while the wide FOV image of the intersection is a lower resolution image.
The first sensor observes the narrow FOV to obtain any narrow FOV vehicle detections 936. The second sensor observes the wide FOV to obtain any wide FOV vehicle detections 938. The narrow FOV vehicle detections 936 and the wide FOV vehicle detections 938 are used at an occupancy grid calculator 940 to calculate a focused occupancy grid 942 for the intersection and an unfocused occupancy grid 944 for the intersection. The values of the focused occupancy grid 942 and of the unfocused occupancy grid 944 are used to update a road risk model 946 for the intersection, which can be used as training data 948 in a subsequent iteration of a training step for training the machine learning program 922. The road risk model 946 indicates whether focusing the sensor at the selected road segment, via the focused occupancy grid 942, lowers the risk with respect to the unfocused occupancy grid 944. The machine learning program 922 can be rewarded when focusing the sensor at a selected road segment decreases the risk associated with the road risk model or punished when focusing the sensor increases the risk. In various embodiments, the risk associated with the road risk model can be compared to a risk metric, and the machine learning program can be rewarded when the risk due to focusing the sensor is less than the risk metric.
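The reward logic described in this paragraph might be sketched as follows; the reward magnitudes and the function name are illustrative assumptions rather than values given in the disclosure.

```python
def focus_reward(focused_risk: float, unfocused_risk: float, risk_metric: float) -> float:
    """Reward the policy when focusing the sensor reduces the road-risk-model risk
    or brings it below the risk metric; otherwise penalize it."""
    if focused_risk < unfocused_risk or focused_risk < risk_metric:
        return 1.0   # reward for a risk-reducing focus selection
    return -1.0      # punishment when focusing did not reduce the risk
```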
In box 1008, driver monitoring is performed in order to determine where the driver is focusing their attention. The driver monitoring can be performed using eye sensors, for example, that track the location or orientation of the driver's eyes.
In box 1010, the selected road segment and the driver focus are provided to a disparity mapping module that determines a difference between the selected road segment and the driver's focus or attention. When there is a difference between the driver focus and the selected road segment (i.e., when the driver is not paying attention to the road segment having the greatest risk), an awareness signal can be generated to alert the driver.
In box 1012, the selected road segment is mapped to a haptic actuator. The haptic signal can be assigned to one or more haptic actuators, for example, on a driver's seat. A selected haptic signal can be transmitted in order to focus the driver's attention on a selected location within the roadway. For example, the haptic actuators can include a first vibration device on a left side of the driver's seat and a second vibration device on a right side of the driver's seat. When a selected road segment is on the driver's left side, the first vibration device can be actuated, and when the selected road segment is on the driver's right side, the second vibration device can be actuated. In addition, the intensity of the haptic signal can be high to indicate a high level of risk and concern and can be low to indicate a low level of risk and concern. In box 1014, the mapped haptic signal is sent to a haptics controller to actuate the corresponding haptic actuator.
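A minimal sketch of the disparity check and haptic mapping of boxes 1010 through 1014 is given below. The gaze representation, the bearing convention (negative values to the driver's left), and the intensity cut-off are assumptions made for illustration only.

```python
from typing import Optional

def driver_attention_signal(driver_gaze_segment: int,
                            selected_segment: int,
                            segment_bearing_deg: float,
                            segment_risk: float) -> Optional[dict]:
    """Return a haptic command when the driver's gaze is not on the selected segment."""
    if driver_gaze_segment == selected_segment:
        return None  # driver is already attending to the riskiest road segment
    side = "left" if segment_bearing_deg < 0 else "right"   # assumed bearing convention
    intensity = "high" if segment_risk > 0.5 else "low"     # assumed risk cut-off
    return {"actuator": f"{side}_seat_vibration", "intensity": intensity}

# Example: driver looking at segment 3 while segment 1, off to the left, is riskiest.
command = driver_attention_signal(driver_gaze_segment=3, selected_segment=1,
                                  segment_bearing_deg=-20.0, segment_risk=0.7)
```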
In box 1108, the probability of no occlusion is sent down a processing path for the no occlusion scenario and the probability of occlusion is sent down a processing path for the occlusion scenario. These processing paths are run in parallel with each other. Along the processing path for the no occlusion scenario, in box 1110, a system dynamics model is run on the intersection assuming no occlusion. In box 1112, an updated belief of the state of the intersection is determined based on the system dynamics model. In box 1114, the updated state is multiplied by the probability of no occlusion to generate a weighted no occlusion state.
Similarly, along the processing path for the occlusion scenario, in box 1116, a system dynamics model is run on the intersection assuming there is an occlusion. In box 1118, an updated belief of the state of the intersection is determined based on the system dynamics model. In box 1120, the updated state is multiplied by the probability of occlusion to generate a weighted occlusion state. At summation box 1122, the weighted no occlusion state and the weighted occlusion state are added together to determine the risk associated with the selected road segment in box 1124.
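The occlusion-weighted combination of boxes 1108 through 1124 might be summarized as in the following sketch; the function name and the representation of each branch's updated state as a single risk value are assumptions made to keep the example short.

```python
def segment_risk_with_occlusion(p_occlusion: float,
                                risk_if_occluded: float,
                                risk_if_clear: float) -> float:
    """Weight each branch's updated state (summarized here as a risk value)
    by its probability and sum the two branches."""
    weighted_clear = (1.0 - p_occlusion) * risk_if_clear    # no occlusion path, box 1114
    weighted_occluded = p_occlusion * risk_if_occluded      # occlusion path, box 1120
    return weighted_clear + weighted_occluded               # summation box 1122
```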
While the method has been discussed with respect to a vehicle approaching an intersection, the method can also be applied to other roadways, such as a straight road section, a divided road, an undivided road, or a curved road. The risk can be based on any type of intersection configuration, and the calculations can include the effect of obstructions, such as a building, a hedge, or a hill, on the occurrence of detections at a sensor.
While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.