This application claims priority to and the benefit of Belgian Patent Application No. 2021/5227, filed on Mar. 25, 2021, the contents of which are herein incorporated by reference.
Automated vehicles (also referred to as autonomous vehicles) are robotic platforms equipped with several perception sensors for obtaining raw measurements of the surrounding environment. The raw measurements are further processed by perception systems, which construct a model of the environment that allows the vehicle control and decision-making unit to act accordingly and to maneuver appropriately in traffic.
Existing perception systems for automated vehicles can detect and track elements of the scene and the environment. Those systems detect objects in the scene with an object detection algorithm based on one or more sensors, such as cameras, LiDAR, or radar. Then, the object type and object state are estimated. At the same time, each new object is checked against and associated with previously detected objects.
However, there is no quality assurance system for the observed information at runtime. The best that a detection and tracking system can do is to provide a score representing the uncertainty of the detection and tracking results.
Assessing the quality of the perception systems of automated vehicles at runtime is highly desirable. Perception systems are the starting point for any further interaction of automated vehicles with the environment. Hence, errors in perception systems can propagate to actions taken by the automated vehicles, which can be catastrophic when maneuvering, especially in spaces shared with humans.
Perception systems are imperfect and non-robust. Additionally, state-of-the-art perception stacks in autonomous driving are based on non-explainable architectures such as deep neural networks. Guaranteeing the quality of these perception systems is still a major challenge. Thus, it is vital to assess the quality of automated vehicles' perception systems at runtime. If the quality of these perception systems is degraded, the vehicle control unit should be informed immediately so that it can avoid taking unsafe decisions and actions.
In the real world, at runtime, there is no ground-truth information about the surrounding objects and the environment. Ground-truth is generally understood as the real and exact position and status of the elements of the scene. Without that information, assessing the quality of perception systems at runtime without human supervision is not trivial.
The inventors have now surprisingly found a system and method for analyzing the proper evolution of the driving scene and detecting inconsistencies in the outputs of perception systems of automated vehicles at runtime, in order to increase the safety of automated vehicles and similar robotic platforms.
Safety concerns about the perceived information are identified through the system and method of this invention. Each time a new result from the perception system arrives, it is compared with past results to detect inconsistencies. The new result is also stored for a fixed period of time, for example 2 seconds, for future comparisons. The comparison is done by first propagating the past results over a short period of time into the future, based on different assumptions about the behavior of each object. The propagation computes the boundary of all possible future states of the object. Then, the newly estimated state of the object is checked to see whether it stays within the computed boundary.
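By way of illustration only, the following minimal Python sketch shows this compare-and-store cycle. The function names, the 2-second retention window, and the propagate/contains helpers are assumptions made for this sketch, not the claimed implementation:

    import time

    RETENTION_S = 2.0  # example fixed storage period from the description

    past_results = []  # list of (timestamp, state) tuples kept for RETENTION_S


    def on_new_result(state, now, propagate, contains):
        """Compare a new perception result against propagated past results.

        `propagate(old_state, dt)` is assumed to return the boundary of all
        states the object could reach after `dt`; `contains(boundary, state)`
        checks whether the new state lies within that boundary.
        """
        # Drop past results older than the retention window.
        past_results[:] = [(t, s) for (t, s) in past_results
                           if now - t <= RETENTION_S]

        consistent = True
        for t, old_state in past_results:
            boundary = propagate(old_state, now - t)
            if not contains(boundary, state):
                consistent = False  # new state escaped the reachable set
                break

        # Store the new result for future comparisons.
        past_results.append((now, state))
        return consistent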
Accordingly, the first object of the invention is a computer-implemented method for detecting inconsistencies in the information from the perception sensors (1.1) and the perception systems (1.2), hereinafter jointly referred to as observations, of an automated vehicle (1), the method running on an electronic control unit (1.3) and comprising the steps of:
In another aspect, the inconsistency detection system is free of human supervision and control.
In another aspect, the observed states of the scene are objects, road shapes, or environmental conditions or combinations thereof.
In another aspect, the observed and estimated states of scenes, objects, road shapes, or environmental conditions or combinations thereof are stored and then used to calculate the boundaries of states of scenes and objects in the future or to match current observed states with future observed states.
In another aspect, the observed and estimated states are stored for a fixed period of time, wherein the fixed period is between 0.1 seconds and 10 seconds, preferably between 1 second and 5 seconds, even more preferably between 1.5 seconds and 3 seconds.
In another aspect, the estimated states are updated or stored when new information about them is received.
In another aspect, the boundaries of possible states of an object or a scene are calculated at a given timestamp.
In another aspect, the boundaries are calculated based on one or more of the following parameters and features:
In another aspect, the assumptions are defined for one or more of following object types comprising:
Or one or more of the following scene type classifications comprising:
Or one or more environmental conditions comprising:
In another aspect, the assumptions are a combination of types and conditions comprising:
In another aspect, the calculated boundaries are one or more of the following parameters and features:
In another aspect, the coordinate systems comprise:
In another aspect, the assumptions about the new velocities and positions of the objects, based on the acceleration of the objects, are calculated as follows:
v_max = previous_v + a_max * delta_t
v_min = previous_v + a_min * delta_t
p_max = previous_p_max + previous_v * delta_t + 0.5 * a_max * delta_t^2
p_min = previous_p_min + previous_v * delta_t + 0.5 * a_min * delta_t^2.
In another aspect, the assumptions about the maximum and minimum velocities of the objects are calculated as follows:
v_max = min(previous_v + a_max * delta_t, v_assumption_max)
v_min = max(previous_v + a_min * delta_t, v_assumption_min)
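For illustration, the two formula blocks above can be combined into one Python function. The variable names mirror the formulas; the default acceleration limits and velocity assumptions are placeholder values chosen for this sketch, not values from the invention:

    def propagate_bounds(previous_v, previous_p_min, previous_p_max, delta_t,
                         a_min=-8.0, a_max=4.0,
                         v_assumption_min=0.0, v_assumption_max=50.0):
        """Propagate velocity and position bounds over delta_t seconds.

        Acceleration limits (m/s^2) and velocity assumptions (m/s) are
        illustrative placeholders for a given object type.
        """
        # Velocity bounds from the acceleration assumption, clamped to the
        # assumed maximum and minimum velocities.
        v_max = min(previous_v + a_max * delta_t, v_assumption_max)
        v_min = max(previous_v + a_min * delta_t, v_assumption_min)

        # Position bounds from constant-acceleration kinematics.
        p_max = previous_p_max + previous_v * delta_t + 0.5 * a_max * delta_t ** 2
        p_min = previous_p_min + previous_v * delta_t + 0.5 * a_min * delta_t ** 2
        return v_min, v_max, p_min, p_max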
In another aspect, it is checked
In another aspect, the perceived scene type and environmental conditions, or combinations of types and conditions as described above, are analyzed and matched over the time interval.
In another aspect, a notification is sent when the estimated state of the object stays outside the calculated boundaries, preferably via CAN bus. Actions taken after receiving this signal, such as triggering an emergency maneuver, are optional for the system and not within the scope of the invention.
Another object of the invention is a data processing system for detecting inconsistencies in the observations from perception systems (1.2) and perception sensors (1.1) of an automated vehicle (1), the system running on an electronic control unit (1.3) and comprising means for carrying out the steps of:
Another object of the invention is a computer-readable medium having stored thereon instructions that cause a computer to perform the steps of the inconsistency detection method of the present invention.
Another object of the invention is an AD/ADAS vehicle comprising the data processing system of the invention or the computer-readable medium of the invention.
The perception systems (1.2) of the vehicle interpret the raw information from the sensors (1.1) and extract observations of the scene. Such observations include one or more of the existing elements, their positions, or environmental conditions.
The vehicle central board (1.3) is capable of performing several vehicle processes, such as vehicle control and decision-making units that perform tasks such as path planning. The outputs of the vehicle central board (1.3) are executed by the vehicle actuators (1.4).
The inconsistency detector system (1.5) of the present invention monitors information from the sensors (1.1) and the perception systems (1.2), hereinafter jointly referred to as observations. The inconsistency detector system (1.5) informs the vehicle central board (1.3) about the reliability of those observations.
The system runs on an electronic control unit including one or more processors and a memory. The memory may include one or more instructions which, when executed by the one or more processors, cause the detection of inconsistencies from the input observations received at the electronic control unit.
The system receives observations of the scene and objects in the surrounding environment from one or more sensors (1.1) or from one or more perception systems (1.2) in the vehicle. The system may receive additional input from road information, such as the shape of the road, its curvature, traffic status, or surface condition, or a combination thereof. The system may also receive additional input such as environmental conditions, including the position of the sun, weather, or humidity.
In another embodiment, the system receives the observations at a single time or during an interval comprising consecutive times.
In another embodiment, the system receives the previously stated inputs, observations, and times.
In general, each observation obtained from the real scene observed by the sensors (1.1) or the perception systems (1.2) generates one or several observation states in the inconsistency system, each associated with a time. Each observed state is stored for a fixed period of time; for example, an observed state may be stored for 2 seconds.
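A minimal sketch of such a timestamped store in Python, assuming states are keyed per object and discarded after a fixed retention window (the class name, the per-object keying, and the 2-second default are illustrative assumptions):

    from collections import deque

    class StateStore:
        """Keeps observed states per object for a fixed retention period.

        The 2-second default is an example; the claims allow 0.1 s to 10 s.
        """

        def __init__(self, retention_s=2.0):
            self.retention_s = retention_s
            self.states = {}  # object id -> deque of (timestamp, state)

        def add(self, obj_id, timestamp, state):
            self.states.setdefault(obj_id, deque()).append((timestamp, state))

        def expire(self, now):
            # Discard states older than the retention window.
            for buf in self.states.values():
                while buf and now - buf[0][0] > self.retention_s:
                    buf.popleft()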
At subsequent times, the stored observed states are updated to estimated states. The estimated states are obtained by calculating the boundaries of the possible states of the objects or the scene, hereinafter referred to as state boundaries.
The calculation of the state boundaries is based on one or more of the following parameters and features:
Once the current observations are received, the inconsistency detection system (1.5) evaluates their consistency as shown in
In a first step, the inconsistency detector system (1.5) checks, for each new observed state, whether there exist previously stored estimated states of the same object or scene.
If there are no previously stored estimated states of the same object or full or partial scene, the system does not perform an inconsistency check.
If there are previously stored estimated states of the same object or full or partial scene, the system performs an inconsistency check. The inconsistency check consists of assessing whether or not the current observed state lies within the estimated state boundaries. If the new observed state is outside the calculated boundaries, the inconsistency detection system (1.5) will consider the output of the perception system (1.2) or of the sensors (1.1) inconsistent.
If an inconsistency is detected, the inconsistency detection system sends a notification to the control units in the vehicle so that they can act accordingly and safely. With this signal, the control units can perform appropriate actions to mitigate the inconsistency, such as informing subsequent systems, for example those responsible for planning, decision making, and control of the autonomous vehicle.
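The check-and-notify step just described could look as follows in Python. This is a sketch under stated assumptions: `store`, `propagate`, `contains`, and `notify` are hypothetical helpers, with `notify` standing in for the CAN-bus notification, which is not modeled here:

    def check_and_notify(obj_id, observed_state, now, store, propagate,
                         contains, notify):
        """Inconsistency check for one new observed state.

        `store` is assumed to expose get(obj_id) -> [(timestamp, state)].
        """
        history = store.get(obj_id)
        if not history:
            return  # no stored estimated states: no inconsistency check

        for t, past_state in history:
            boundary = propagate(past_state, now - t)
            if not contains(boundary, observed_state):
                # Observed state lies outside the estimated state boundaries:
                # signal the control units so they can act safely.
                notify("inconsistent observation for object %s" % obj_id)
                return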
In one embodiment, the actions taken by the control system (1.4) that receives the inconsistency signals indicating sensor or perception inconsistencies are not within the scope of this invention.