The present disclosure pertains to methods for evaluating the performance of trajectory planners and other robotic systems in simulated scenarios, and computer programs and systems for implementing the same. Trajectory planners are capable of autonomously planning ego trajectories for fully/semi-autonomous vehicles or other mobile robots. Example applications include ADS (Autonomous Driving System) and ADAS (Advanced Driver Assist System) performance testing.
There have been major and rapid developments in the field of autonomous vehicles. An autonomous vehicle (AV) is a vehicle which is equipped with sensors and control systems which enable it to operate without a human controlling its behavior. An autonomous vehicle is equipped with sensors which enable it to perceive its physical environment, such sensors including for example cameras, radar and lidar. Autonomous vehicles are equipped with suitably programmed computers which are capable of processing data received from the sensors and making safe and predictable decisions based on the context which has been perceived by the sensors. An autonomous vehicle may be fully autonomous (in that it is designed to operate with no human supervision or intervention, at least in certain circumstances) or semi-autonomous. Semi-autonomous systems require varying levels of human oversight and intervention, such systems including Advanced Driver Assist Systems and level three Autonomous Driving Systems. There are different facets to testing the behavior of the sensors and control systems aboard a particular autonomous vehicle, or a type of autonomous vehicle. Other mobile robots are being developed, for example for carrying freight supplies in internal and external industrial zones. Such mobile robots would have no people on board and belong to a class of mobile robot termed UAV (unmanned autonomous vehicle). Autonomous air mobile robots (drones) are also being developed.
In autonomous driving, the importance of guaranteed safety has been recognized. Guaranteed safety does not necessarily imply zero accidents, but rather means guaranteeing that some minimum level of safety is met in defined circumstances. It is generally assumed this minimum level of safety must significantly exceed that of human drivers for autonomous driving to be viable. Thus, the testing requirements for autonomous vehicles are especially challenging.
Sensor processing may be evaluated in real-world physical facilities. Similarly, the control systems for autonomous vehicles may be tested in the physical world, for example by repeatedly driving known test routes, or by driving routes with a human on-board to manage unpredictable or unknown context. Physical world testing will remain an important factor in testing the capability of autonomous vehicles to make safe and predictable decisions. However, physical world testing is expensive and time-consuming. Increasingly, reliance is therefore placed on testing in simulated environments.
Simulation-based testing comes with its own challenges. A high-fidelity simulation requires significant computational resources. As the safety of autonomous systems increases, the number of required simulation instances also increases. For example, human driving is estimated to cause of the order of 10⁻⁶ severe accidents per hour. It has been suggested that autonomous driving systems will need to reduce this by at least three orders of magnitude to be accepted in society, implying a minimum safety level of the order of 10⁻⁹ severe accidents per hour needs to be guaranteed. For an autonomous driving system operating at this level, a ‘naïve’ simulation strategy would therefore require of the order of 10⁹ simulated driving hours on average to encounter a single severe accident. There is therefore a need to greatly increase the efficiency of simulation-based testing.
For mobile robot testing, a test scenario includes a simulated mobile robot (the ego agent) and some physical context that the ego agent is required to navigate. The physical context of a driving scenario generally includes a road layout and may include one or more ‘challenger’ objects (static objects and/or dynamic agents, such as vehicles, pedestrians, cyclists, animals etc.). An instance of the test scenario is realized in a simulation environment, with the robotic system under testing in control of the ego agent. The performance of the robotic system may be assessed in terms of the behavior of the ego agent in the scenario instance and/or in terms of decisions taken internally within the robotic system over the course of the simulation. The robotic system has knowledge of its location at any instant of time, understands its context, and is required to make safe and predictable decisions about how to navigate its environment to reach a pre-programmed destination.
Simulation-based testing may utilize parameterized scenarios, where different instances (or ‘runs’) of the scenario may be realized in a simulation environment with different parameter combinations. It is useful to conceptualize a set of parameter values as a point in a parameter space of the scenario. A scenario is ‘run’ (instantiated) at a given point in the parameter space when an instance of the scenario is realized in a simulation environment with the corresponding set of parameter values.
One approach is based on random sampling of points in the scenario's parameter space, for example using a Monte Carlo search strategy. However, this suffers from the drawbacks noted above. ‘Directed testing’ or ‘directed exploration’ approaches are more targeted, and use the outcomes of previous simulations to guide a subsequent search of the parameter space, with the aim of finding more salient parameter combinations with fewer simulation instances. The search may commence with an initial set of points in the parameter space (e.g. uniformly spaced, or selected via Monte Carlo sampling) and the scenario is instantiated at each of those points. The performance of the robotic system is assessed in each scenario instance, and the results are used to guide the selection of subsequent points in the parameter space.
Directed testing approaches use some defined concept of failure, with the extent of ‘failure’ quantified via an appropriate numerical metric. For example, in a driving context, a simple failure metric might be forward distance (longitudinal distance between the ego agent and a forward vehicle ahead of the ego agent). A failure event occurs when the forward distance falls below some safety threshold, with the severity of the failure increasing as the forward distance decreases. Most directed exploration approaches target the search of the parameter space on the ‘worst’ (most severe) failure instances. A directed testing method uses the performance analysis from previous scenario instances to predict the system's performance at other points in the parameter space, and uses those predictions to guide the selection of subsequent points. In the preceding example, the directed search would attempt to guide the search to region(s) of the parameter space where the forward distance is predicted to be minimized (such that failure on the forward distance metric is maximized).
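By way of a purely illustrative sketch (in Python), a simple forward-distance failure metric of this kind might be computed as follows; the function name and the fixed threshold value are assumptions made for illustration only and do not form part of any rule set described herein:

```python
def forward_distance_metric(ego_pos_m: float, lead_pos_m: float,
                            safety_threshold_m: float = 10.0) -> tuple[float, bool]:
    """Toy forward-distance failure metric (illustrative only).

    Returns the longitudinal headway between the ego agent and the vehicle
    ahead, together with a pass flag. A failure event occurs when the headway
    falls below the (assumed) safety threshold; the severity of the failure
    increases as the headway decreases.
    """
    headway_m = lead_pos_m - ego_pos_m  # longitudinal separation
    return headway_m, headway_m >= safety_threshold_m
```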
A core aim herein is one of providing improved directed testing of robotic systems, in the sense of maximizing the saliency of the results for a given number of simulated runs (or, equivalently, minimizing the number of runs that are required to achieve a given level of saliency).
A directed search method is applied to a parameter space of a scenario for testing the performance of a robotic system in simulation. The directed search method is applied based on multiple performance evaluation rules. A performance predictor is trained to probabilistically predict a pass or fail result for each rule at each point in the parameter space (other possible outcomes are also considered). An overall acquisition function is determined as follows: if a pass outcome is predicted at a given point for every rule, the performance evaluation rule having the highest probability of an incorrect outcome prediction at that point determines the acquisition function; whereas, if a fail outcome is predicted at a given point for at least one rule, then the acquisition function is determined by the performance evaluation rule for which a fail outcome is predicted with the lowest probability of an incorrect outcome prediction.
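The following non-limiting sketch (in Python, with assumed array inputs) illustrates how the determining rule j, and hence the overall acquisition value at a single point x, might be selected:

```python
import numpy as np

def overall_acquisition(pass_predicted: np.ndarray,
                        p_incorrect: np.ndarray) -> float:
    """Overall acquisition value f(x) at a single point x.

    pass_predicted[i] -- True if a PASS outcome is predicted for rule i at x.
    p_incorrect[i]    -- probability that the outcome prediction for rule i at x
                         is incorrect (the rule acquisition function f_i(x)).
    """
    fail_mask = ~pass_predicted
    if fail_mask.any():
        # At least one rule predicts FAIL: take the rule whose FAIL prediction
        # is most confident (lowest probability of being wrong).
        j = np.flatnonzero(fail_mask)[np.argmin(p_incorrect[fail_mask])]
    else:
        # Every rule predicts PASS: take the least confident rule.
        j = int(np.argmax(p_incorrect))
    return float(p_incorrect[j])
```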
A first aspect herein is directed to a computer-implemented method of testing, in simulation, performance of a robotic system in control of an ego agent of a test scenario, the method comprising: determining a first set of points in a parameter space of the test scenario, each point being a set of one or more parameter values for the test scenario; based on the first set of points, generating in a simulation environment multiple first instances of the test scenario with the robotic system in control of the ego agent; assigning to each point of the first set a plurality of performance indicators based on a plurality of performance evaluation rules, thereby generating a training set of points and their assigned pluralities of performance indicators, wherein each performance indicator denotes a pass outcome or a fail outcome; using the training set to train a performance predictor for probabilistically predicting at each point x in the parameter space of the test scenario a pass or fail outcome for each performance evaluation rule; using the trained performance predictor to determine a plurality of rule acquisition functions, each rule acquisition function ƒi(x) denoting at each point x in the parameter space a probability of an incorrect outcome prediction for a performance evaluation rule i of the plurality of performance evaluation rules; selecting one or more second points based on an overall acquisition function defined as ƒ(x)=ƒj(x), wherein if a pass outcome is predicted at a given point x for every rule i then j is the performance evaluation rule having the highest probability of an incorrect outcome prediction at x, and wherein if a fail outcome is predicted at a given point x for at least one rule i, then j is the performance evaluation rule for which a fail outcome is predicted at x with the lowest probability of an incorrect outcome prediction; based on the one or more second points in the parameter space, generating in a simulation environment one or more second instances of the test scenario with the robotic system in control of the ego agent; and providing one or more outputs for evaluating performance of the robotic system in the one or more second instances based on the at least one predetermined performance evaluation rule.
In embodiments, the one or more outputs may be rendered in a graphical user interface.
The one or more outputs may comprise one or more performance scores assigned to each second point for the at least one performance evaluation rule.
The method may comprise updating the performance predictor based on the one or more second points and the one or more performance scores assigned thereto.
The method may comprise: selecting one or more third points in the parameter space based on an updated overall acquisition function defined by the updated performance predictor; based on the one or more third points in the parameter space, generating in a simulation environment one or more third instances of the test scenario with the robotic system in control of the ego agent; and providing one or more second outputs for evaluating performance of the robotic system in the one or more third instances based on the at least one predetermined performance evaluation rule.
The one or more second outputs may comprise one or more performance scores assigned to the one or more third points, and the method is repeated iteratively until the performance predictor satisfies a termination condition, or a predetermined number of iterations is reached.
The performance predictor may comprise a Gaussian score prediction model for each performance prediction rule.
The score prediction model may provide a mean performance score gi(x) and standard deviation gσ,i(x) for each rule, i, at a given point x in the parameter space, wherein the rule acquisition function for rule i is based on gi(x)/gσ,i(x).
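For example, with a pass threshold at zero (an assumption consistent with the normalized scores described later), the probability of an incorrect outcome prediction for rule i reduces to the Gaussian tail mass on the 'wrong' side of the threshold, which depends only on the ratio gi(x)/gσ,i(x). A minimal sketch (in Python):

```python
from math import erf, sqrt

def rule_acquisition(mean_score: float, std_score: float) -> float:
    """Probability that the predicted pass/fail outcome for one rule is wrong,
    under a Gaussian score model with a pass threshold at zero (assumed).

    The predicted outcome is PASS if mean_score > 0 and FAIL otherwise, so the
    probability of an incorrect prediction is Phi(-|mean_score| / std_score),
    where Phi is the standard normal CDF.
    """
    z = -abs(mean_score) / max(std_score, 1e-12)  # guard against zero std
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```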
The robotic system may comprise a trajectory planner for a mobile robot.
The method may comprise using the score classification model and the score regression model to identify and mitigate an issue with the robotic system.
Further aspects herein provide a computer system configured to implement the method of any of the above aspects or embodiments, and computer program code configured to program the computer system to implement the same.
Particular embodiments will now be described, by way of example only, with reference to the following schematic figures, in which:
The described embodiments consider performance testing of a robotic system in the form of an autonomous vehicle stack, or some component (or components) thereof, in parameterized driving scenarios. As will be appreciated, the described techniques are applicable to other forms of robotic systems when tested in simulation based on one or more parameterized test scenarios.
As outlined above, in the field of robotics, directed exploration methods have generally targeted ‘worst-case’ failures. Such methods can be highly effective in quickly locating failure cases (or, more precisely, in reducing the number of non-salient points in the scenario space that need to be considered before a failure case is found. A non-salient point in this context generally means a set of parameter values that do not result in a failure event for the robotic system under testing). However, there are significant drawbacks to this approach. In certain contexts, the worst-case failure is not necessarily the most interesting or salient. For example, in a driving context, there may be certain variations of a scenario in which a collision or other failure event is unavoidable (generally this implies, at the very least, a set of circumstances such that no human driver could reasonably be expected to avoid a collision). A directed exploration approach targeting worst-case failure can easily guide the search to ‘uninteresting’ failure cases. For example, in a cut-in scenario, such a method could, for example, guide the search to a set of parameter values at which another agent cuts in front of the ego agent with virtually no headway, and immediately decelerates. An expensive simulation to verify that the system under testing fails in those circumstances may be entirely redundant. On the other hand, if the system under testing ‘just’ fails in circumstances where it ought to have comfortably passed, that may be far more indicative of a genuine issue in the robotic system that needs to be identified and mitigated.
In the described embodiments, an alternative directed exploration approach is considered. Rather than identifying worst-case failures, the aim is to map out one or more “performance category boundaries” in a scenario's parameter space with sufficient confidence as efficiently as possible (in terms of the number of simulated instances of the scenario that need to be realized).
With binary {PASS, FAIL} performance categories, this equates to estimating the boundary (or boundaries) between pass/fail regions of the scenario's parameter space. Points either side of such a boundary correspond to parameter combinations (variations of the scenario) where the robotic system ‘just’ passes or ‘just’ fails on some predetermined numerical performance metric (or metrics), referred to herein as rules. One form of rule returns a floating point (or other numerical) value (robustness score) at every time step in a scenario, and a pass/fail category is assigned by applying a pass/fail threshold to the robustness score. The result is a time-sequence of pass/fail results over the course of the scenario. These results can be aggregated to provide an overall pass/fail result for a given scenario instance and a given rule (e.g. with an overall fail result if and only if the rule is failed at any point in the scenario). The following examples consider robustness scores that are normalized to a fixed range and failure threshold. The range is chosen as [−1, 1] with the failure threshold at zero. This is a convenient design choice that makes the scores readily interpretable, but it will be appreciated that robustness scores can be represented in other ways.
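A minimal sketch (in Python) of this thresholding and aggregation, assuming scores normalized to [−1, 1] with the failure threshold at zero:

```python
def evaluate_rule(robustness_scores: list[float]) -> dict:
    """Aggregate a time-sequence of robustness scores for one rule.

    Each score is assumed to be normalized to [-1, 1] with the failure
    threshold at zero; the overall result is a fail if and only if the rule
    is failed at any time step of the run.
    """
    per_step = ["PASS" if s > 0 else "FAIL" for s in robustness_scores]
    overall = "FAIL" if "FAIL" in per_step else "PASS"
    return {"per_step": per_step, "overall": overall,
            "overall_score": min(robustness_scores)}
```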
Herein, a ‘performance indicator’ can be a performance class (e.g. PASS, FAIL or “NOT APPLICABLE”—see below), or a robustness score that indicates a performance class (e.g. a numerical score above a threshold might indicate PASS, and FAIL below the threshold).
However, the described embodiments go beyond binary categorization, and introduce a third “INACTIVE” performance category. As described in further detail below, this third performance category is introduced to accommodate rules that are not guaranteed to return a floating point robustness score at every time step, but which might return a ‘Not a Number’ (NaN) at a given time step. In formal terms, a NaN means a non-numerical output of a numerical function. NaNs have been used, for example, to handle values that are undefined in the mathematical sense (such as the logarithm of a negative number). In the present context, NaNs are introduced as a deliberate design choice, to better reflect situations where a rule is simply not applicable. For example, a ‘safe distance’ rule based on longitudinal distance between an ego agent and a forward vehicle does not apply in the case that the ego agent is waiting at a T-junction to turn left or right.
Constructing the rules with NaN outputs makes the scores more informative. However, the benefits go further than this. As described in detail below, NaN outputs are incorporated into the directed search process using separate prediction models for NaN/not-NaN classification and numerical score prediction. This formulation is not only accommodating of non-numerical robustness outputs, but also mitigates issues that could prevent the directed search from converging as intended (this is explained in detail below).
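One possible way of combining the two models at prediction time is sketched below (in Python); the model interfaces (predict_proba, predict) and the 0.5 threshold are assumptions for illustration only:

```python
def predict_outcome(x, nan_classifier, score_regressor,
                    nan_probability_threshold: float = 0.5) -> str:
    """Predict PASS / FAIL / INACTIVE at point x using two separate models.

    nan_classifier.predict_proba(x) is assumed to return the probability that
    the rule is inactive (NaN) at x; score_regressor.predict(x) is assumed to
    return a mean robustness score with a pass threshold at zero.
    """
    if nan_classifier.predict_proba(x) > nan_probability_threshold:
        return "INACTIVE"          # rule not applicable at x
    mean_score = score_regressor.predict(x)
    return "PASS" if mean_score > 0 else "FAIL"
```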
The described directed testing methods are incorporated in a testing pipeline to facilitate rules-based testing of mobile robot stacks in simulated scenarios. The testing pipeline can also be applied to real scenarios, in combination with simulation-based testing. Agent (actor) behaviour in real or simulated scenarios is evaluated by a test oracle based on defined performance evaluation rules. Such rules may evaluate different facets of safety. For example, a safety rule set may be defined to assess the performance of the stack against a particular safety standard, regulation or safety model (such as RSS), or bespoke rule sets may be defined for testing any aspect of performance. The testing pipeline is not limited in its application to safety, and can be used to test any aspects of performance, such as comfort or progress towards some defined goal. A rule editor allows performance evaluation rules to be defined or modified and passed to the test oracle.
A “full” stack typically involves everything from processing and interpretation of low-level sensor data (perception), feeding into primary higher-level functions such as prediction and planning, as well as control logic to generate suitable control signals to implement planning-level decisions (e.g. to control braking, steering, acceleration etc.). For autonomous vehicles, level 3 stacks include some logic to implement transition demands and level 4 stacks additionally include some logic for implementing minimum risk maneuvers. The stack may also implement secondary control functions e.g. of signalling, headlights, windscreen wipers etc.
The term “stack” can also refer to individual sub-systems (sub-stacks) of the full stack, such as perception, prediction, planning or control stacks, which may be tested individually or in any desired combination. A stack can refer purely to software, i.e. one or more computer programs that can be executed on one or more general-purpose computer processors.
Whether real or simulated, a scenario requires an ego agent to navigate a real or modelled physical context. The ego agent is a real or simulated mobile robot that moves under the control of the stack under testing. The physical context includes static and/or dynamic element(s) that the stack under testing is required to respond to effectively. For example, the mobile robot may be a fully or semi-autonomous vehicle under the control of the stack (the ego vehicle). The physical context may comprise a static road layout and a given set of environmental conditions (e.g. weather, time of day, lighting conditions, humidity, pollution/particulate level etc.) that could be maintained or varied as the scenario progresses. An interactive scenario additionally includes one or more other agents (“external” agent(s), e.g. other vehicles, pedestrians, cyclists, animals etc.).
The following examples consider applications to autonomous vehicle testing. However, the principles apply equally to other forms of mobile robot or robotic system.
Scenarios may be represented or defined at different levels of abstraction. More abstracted scenarios accommodate a greater degree of variation. For example, a “cut-in scenario” or a “lane change scenario” are examples of highly abstracted scenarios, characterized by a maneuver or behaviour of interest, that accommodate many variations (e.g. different agent starting locations and speeds, road layout, environmental conditions etc.). A “scenario run” refers to a concrete occurrence of an agent(s) navigating a physical context, optionally in the presence of one or more other agents. For example, multiple runs of a cut-in or lane change scenario could be performed (in the real-world and/or in a simulator) with different agent parameters (e.g. starting location, speed etc.), different road layouts, different environmental conditions, and/or different stack configurations etc. The terms “run” and “instance” are used interchangeably in this context. Herein, a scenario has a minimum of one adjustable parameter such that salient variations of the scenario may be explored using directed testing to iteratively select different values of the scenario parameter (or parameters). A set of (one or more) parameters is referred to, for conciseness, as a point in the parameter space of the scenario.
In the following examples, the performance of the stack is assessed, at least in part, by evaluating the behaviour of the ego agent in the test oracle against a given set of performance evaluation rules, over the course of one or more runs. The rules are applied to “ground truth” of the (or each) scenario run which, in general, simply means an appropriate representation of the scenario run (including the behaviour of the ego agent) that is taken as authoritative for the purpose of testing. Ground truth is inherent to simulation; a simulator computes a sequence of scenario states, which is, by definition, a perfect, authoritative representation of the simulated scenario run. In a real-world scenario run, a “perfect” representation of the scenario run does not exist in the same sense; nevertheless, suitably informative ground truth can be obtained in numerous ways, e.g. based on manual annotation of on-board sensor data, automated/semi-automated annotation of such data (e.g. using offline/non-real time processing), and/or using external information sources (such as external sensors, maps etc.) etc.
The scenario ground truth typically includes a “trace” of the ego agent and any other (salient) agent(s) as applicable. A trace is a history of an agent's location and motion over the course of a scenario. There are many ways a trace can be represented. Trace data will typically include spatial and motion data of an agent within the environment. The term is used in relation to both real scenarios (with real-world traces) and simulated scenarios (with simulated traces). The trace typically records an actual trajectory realized by the agent in the scenario. With regards to terminology, a “trace” and a “trajectory” may contain the same or similar types of information (such as a series of spatial and motion states over time). The term trajectory is generally favoured in the context of planning (and can refer to future/predicted trajectories), whereas the term trace is generally favoured in relation to past behaviour in the context of testing/evaluation.
In a simulation context, a “scenario description” is provided to a simulator as input. For example, a scenario description may be encoded using a scenario description language (SDL), or in any other form that can be consumed by a simulator. A scenario description is typically a more abstract representation of a scenario, that can give rise to multiple simulated runs. Depending on the implementation, a scenario description may have one or more configurable parameters that can be varied to increase the degree of possible variation. The degree of abstraction and parameterization is a design choice. For example, a scenario description may encode a fixed layout, with parameterized environmental conditions (such as weather, lighting etc.). Further abstraction is possible, however, e.g. with configurable road parameter(s) (such as road curvature, lane configuration etc.). The input to the simulator comprises the scenario description together with a chosen set of parameter value(s) (as applicable). The latter may be referred to as a parameterization of the scenario. The configurable parameter(s) define a parameter space (also referred to as the scenario space), and the parameterization corresponds to a point in the parameter space. In this context, a “scenario instance” may refer to an instantiation of a scenario in a simulator based on a scenario description and (if applicable) a chosen parameterization.
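Purely for illustration (in Python), a parameterized scenario description might expose its configurable parameters as named ranges; the parameter names and bounds below are invented:

```python
from dataclasses import dataclass

@dataclass
class ParameterRange:
    name: str
    minimum: float
    maximum: float

# Hypothetical parameter space for a cut-in scenario description: each point
# x in the parameter space is one value per configurable parameter.
CUT_IN_PARAMETER_SPACE = [
    ParameterRange("challenger_initial_speed_mps", 5.0, 30.0),
    ParameterRange("cut_in_lateral_distance_m", 0.5, 3.5),
    ParameterRange("challenger_deceleration_mps2", 0.0, 6.0),
]
```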
For conciseness, the term scenario may also be used to refer to a scenario run, as well a scenario in the more abstracted sense. The meaning of the term scenario will be clear from the context in which it is used.
Trajectory planning is an important function in the present context, and the terms “trajectory planner”, “trajectory planning system” and “trajectory planning stack” may be used interchangeably herein to refer to a component or components that can plan trajectories for a mobile robot into the future. Trajectory planning decisions ultimately determine the actual trajectory realized by the ego agent (although, in some testing contexts, this may be influenced by other factors, such as the implementation of those decisions in the control stack, and the real or modelled dynamic response of the ego agent to the resulting control signals).
A trajectory planner may be tested in isolation, or in combination with one or more other systems (e.g. perception, prediction and/or control). Within a full stack, planning generally refers to higher-level autonomous decision-making capability (such as trajectory planning), whilst control generally refers to the lower-level generation of control signals for carrying out those autonomous decisions. However, in the context of performance testing, the term control is also used in the broader sense. For the avoidance of doubt, when a trajectory planner is said to control an ego agent in simulation, that does not necessarily imply that a control system (in the narrower sense) is tested in combination with the trajectory planner.
To provide relevant context to the described embodiments, further details of an example form of AV stack will now be described.
In a real-world context, the perception system 102 receives sensor outputs from an on-board sensor system 110 of the AV, and uses those sensor outputs to detect external agents and measure their physical state, such as their position, velocity, acceleration etc. The on-board sensor system 110 can take different forms but generally comprises a variety of sensors such as image capture devices (cameras/optical sensors), lidar and/or radar unit(s), satellite-positioning sensor(s) (GPS etc.), motion/inertial sensor(s) (accelerometers, gyroscopes etc.) etc. The onboard sensor system 110 thus provides rich sensor data from which it is possible to extract detailed information about the surrounding environment, and the state of the AV and any external actors (vehicles, pedestrians, cyclists etc.) within that environment. The sensor outputs typically comprise sensor data of multiple sensor modalities such as stereo images from one or more stereo optical sensors, lidar, radar etc. Sensor data of multiple sensor modalities may be combined using filters, fusion components etc.
The perception system 102 typically comprises multiple perception components which co-operate to interpret the sensor outputs and thereby provide perception outputs to the prediction system 104.
In a simulation context, depending on the nature of the testing—and depending, in particular, on where the stack 100 is “sliced” for the purpose of testing (see below)—it may or may not be necessary to model the on-board sensor system 110. With higher-level slicing, simulated sensor data is not required, and therefore complex sensor modelling is not required.
The perception outputs from the perception system 102 are used by the prediction system 104 to predict future behaviour of external actors (agents), such as other vehicles in the vicinity of the AV.
Predictions computed by the prediction system 104 are provided to the planner 106, which uses the predictions to make autonomous driving decisions to be executed by the AV in a given driving scenario. The inputs received by the planner 106 would typically indicate a drivable area and would also capture predicted movements of any external agents (obstacles, from the AV's perspective) within the drivable area. The drivable area can be determined using perception outputs from the perception system 102 in combination with map information, such as an HD (high definition) map.
A core function of the planner 106 is the planning of trajectories for the AV (ego trajectories), taking into account predicted agent motion. This may be referred to as trajectory planning. A trajectory is planned in order to carry out a desired goal within a scenario. The goal could for example be to enter a roundabout and leave it at a desired exit; to overtake a vehicle in front; or to stay in a current lane at a target speed (lane following). The goal may, for example, be determined by an autonomous route planner (not shown).
The controller 108 executes the decisions taken by the planner 106 by providing suitable control signals to an on-board actor system 112 of the AV. In particular, the planner 106 plans trajectories for the AV and the controller 108 generates control signals to implement the planned trajectories. Typically, the planner 106 will plan into the future, such that a planned trajectory may only be partially implemented at the control level before a new trajectory is planned by the planner 106. The actor system 112 includes “primary” vehicle systems, such as braking, acceleration and steering systems, as well as secondary systems (e.g. signalling, wipers, headlights etc.).
Note, there may be a distinction between a planned trajectory at a given time instant, and the actual trajectory followed by the ego agent. Planning systems typically operate over a sequence of planning steps, updating the planned trajectory at each planning step to account for any changes in the scenario since the previous planning step (or, more precisely, any changes that deviate from the predicted changes). The planning system 106 may reason into the future, such that the planned trajectory at each planning step extends beyond the next planning step. Any individual planned trajectory may, therefore, not be fully realized (if the planning system 106 is tested in isolation, in simulation, the ego agent may simply follow the planned trajectory exactly up to the next planning step; however, as noted, in other real and simulation contexts, the planned trajectory may not be followed exactly up to the next planning step, as the behaviour of the ego agent could be influenced by other factors, such as the operation of the control system 108 and the real or modelled dynamics of the ego vehicle). In many testing contexts, the actual trajectory of the ego agent is what ultimately matters; in particular, whether the actual trajectory is safe, as well as other factors such as comfort and progress. However, the rules-based testing approach herein can also be applied to planned trajectories (even if those planned trajectories are not fully or exactly realized by the ego agent). For example, even if the actual trajectory of an agent is deemed safe according to a given set of safety rules, it might be that an instantaneous planned trajectory was unsafe; the fact that the planner 106 was considering an unsafe course of action may be revealing, even if it did not lead to unsafe agent behaviour in the scenario. Instantaneous planned trajectories constitute one form of internal state that can be usefully evaluated, in addition to actual agent behaviour in the simulation. Other forms of internal stack state can be similarly evaluated.
The example of
The extent to which the various stack functions are integrated or separable can vary significantly between different stack implementations—in some stacks, certain aspects may be so tightly coupled as to be indistinguishable. For example, in some stacks, planning and control may be integrated (e.g. such stacks could plan in terms of control signals directly), whereas other stacks (such as that depicted in
It will be appreciated that the term “stack” encompasses software, but can also encompass hardware. In simulation, software of the stack may be tested on a “generic” off-board computer system, before it is eventually uploaded to an on-board computer system of a physical vehicle. However, in “hardware-in-the-loop” testing, the testing may extend to underlying hardware of the vehicle itself. For example, the stack software may be run on the on-board computer system (or a replica thereof) that is coupled to the simulator for the purpose of testing. In this context, the stack under testing extends to the underlying computer hardware of the vehicle. As another example, certain functions of the stack 100 (e.g. perception functions) may be implemented in dedicated hardware. In a simulation context, hardware-in-the loop testing could involve feeding synthetic sensor data to dedicated hardware perception components.
Scenarios can be obtained for the purpose of simulation in various ways, including manual encoding. The system is also capable of extracting scenarios for the purpose of simulation from real-world runs, allowing real-world situations and variations thereof to be re-created in the simulator 202.
In the present off-board context, there is no requirement for the traces to be extracted in real-time (or, more precisely, no need for them to be extracted in a manner that would support real-time planning); rather, the traces are extracted “offline”. Examples of offline perception algorithms include non-real time and non-causal perception algorithms. Offline techniques contrast with “on-line” techniques that can feasibly be implemented within an AV stack 100 to facilitate real-time planning/decision making.
For example, it is possible to use non-real time processing, which cannot be performed on-line due to hardware or other practical constraints of an AV's onboard computer system. For example, one or more non-real time perception algorithms can be applied to the real-world run data 140 to extract the traces. A non-real time perception algorithm could be an algorithm that it would not be feasible to run in real time because of the computation or memory resources it requires.
It is also possible to use “non-causal” perception algorithms in this context. A non-causal algorithm may or may not be capable of running in real-time at the point of execution, but in any event could not be implemented in an online context, because it requires knowledge of the future. For example, a perception algorithm that detects an agent state (e.g. location, pose, speed etc.) at a particular time instant based on subsequent data could not support real-time planning within the stack 100 in an on-line context, because it requires knowledge of the future (unless it was constrained to operate with a short look ahead window). For example, filtering with a backwards pass is a non-causal algorithm that can sometimes be run in real-time, but requires knowledge of the future.
The term “perception” generally refers to techniques for perceiving structure in the real-world data 140, such as 2D or 3D bounding box detection, location detection, pose detection, motion detection etc. For example, a trace may be extracted as a time-series of bounding boxes or other spatial states in 3D space or 2D space (e.g. in a birds-eye-view frame of reference), with associated motion information (e.g. speed, acceleration, jerk etc.). In the context of image processing, such techniques are often classed as “computer vision”, but the term perception encompasses a broader range of sensor modalities.
Further details of the testing pipeline and the test oracle 252 will now be described. The examples that follow focus on simulation-based testing. However, as noted, the test oracle 252 can equally be applied to evaluate stack performance on real scenarios, and the relevant description below applies equally to real scenarios. The following description refers to the stack 100 of
As described previously, the idea of simulation-based testing is to run a simulated driving scenario that an ego agent must navigate under the control of the stack 100 being tested. Typically, the scenario includes a static drivable area (e.g. a particular static road layout) that the ego agent is required to navigate, typically in the presence of one or more other dynamic agents (such as other vehicles, bicycles, pedestrians etc.). To this end, simulated inputs 203 are provided from the simulator 202 to the stack 100 under testing.
The slicing of the stack dictates the form of the simulated inputs 203. By way of example,
By contrast, so-called “planning-level” simulation would essentially bypass the perception system 102. The simulator 202 would instead provide simpler, higher-level inputs 203 directly to the prediction system 104. In some contexts, it may even be appropriate to bypass the prediction system 104 as well, in order to test the planner 106 on predictions obtained directly from the simulated scenario (i.e. “perfect” predictions).
Between these extremes, there is scope for many different levels of input slicing, e.g. testing only a subset of the perception system 102, such as “later” (higher-level) perception components, e.g. components such as filters or fusion components which operate on the outputs from lower-level perception components (such as object detectors, bounding box detectors, motion detectors etc.).
Whatever form they take, the simulated inputs 203 are used (directly or indirectly) as a basis for decision-making by the planner 106. The controller 108, in turn, implements the planner's decisions by outputting control signals 109. In a real-world context, these control signals would drive the physical actor system 112 of the AV. In simulation, an ego vehicle dynamics model 204 is used to translate the resulting control signals 109 into realistic motion of the ego agent within the simulation, thereby simulating the physical response of an autonomous vehicle to the control signals 109.
Alternatively, a simpler form of simulation assumes that the ego agent follows each planned trajectory exactly between planning steps. This approach bypasses the control system 108 (to the extent it is separable from planning) and removes the need for the ego vehicle dynamic model 204. This may be sufficient for testing certain facets of planning.
To the extent that external agents exhibit autonomous behaviour/decision making within the simulator 202, some form of agent decision logic 210 is implemented to carry out those decisions and determine agent behaviour within the scenario. The agent decision logic 210 may be comparable in complexity to the ego stack 100 itself or it may have a more limited decision-making capability. The aim is to provide sufficiently realistic external agent behaviour within the simulator 202 to be able to usefully test the decision-making capabilities of the ego stack 100. In some contexts, this does not require any agent decision making logic 210 at all (open-loop simulation), and in other contexts useful testing can be provided using relatively limited agent logic 210 such as basic adaptive cruise control (ACC). One or more agent dynamics models 206 may be used to provide more realistic agent behaviour if appropriate.
A scenario is run in accordance with a scenario description 201a and (if applicable) a chosen parameterization 201b of the scenario (generally denoted by x herein). A scenario typically has both static and dynamic elements which may be “hard coded” in the scenario description 201a or configurable and thus determined by the scenario description 201a in combination with a chosen parameterization 201b. In a driving scenario, the static element(s) typically include a static road layout.
The dynamic element(s) typically include one or more external agents within the scenario, such as other vehicles, pedestrians, bicycles etc.
The extent of the dynamic information provided to the simulator 202 for each external agent can vary. For example, a scenario may be described by separable static and dynamic layers. A given static layer (e.g. defining a road layout) can be used in combination with different dynamic layers to provide different scenario instances. The dynamic layer may comprise, for each external agent, a spatial path to be followed by the agent together with one or both of motion data and behaviour data associated with the path. In simple open-loop simulation, an external actor simply follows the spatial path and motion data defined in the dynamic layer, which is non-reactive, i.e. it does not react to the ego agent within the simulation. Such open-loop simulation can be implemented without any agent decision logic 210. However, in closed-loop simulation, the dynamic layer instead defines at least one behaviour to be followed along a static path (such as an ACC behaviour). In this case, the agent decision logic 210 implements that behaviour within the simulation in a reactive manner, i.e. reactive to the ego agent and/or other external agent(s). Motion data may still be associated with the static path but in this case is less prescriptive and may for example serve as a target along the path. For example, with an ACC behaviour, target speeds may be set along the path which the agent will seek to match, but the agent decision logic 210 might be permitted to reduce the speed of the external agent below the target at any point along the path in order to maintain a target headway from a forward vehicle.
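A schematic sketch (in Python) of the static/dynamic layer structure described above; the type and field names are assumptions made for illustration, with a behaviour label such as "ACC" attached to a path for closed-loop simulation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentPath:
    waypoints: list[tuple[float, float]]             # spatial path to be followed
    target_speeds_mps: Optional[list[float]] = None  # optional motion data (targets)
    behaviour: Optional[str] = None                  # e.g. "ACC" for closed-loop runs

@dataclass
class DynamicLayer:
    agents: list[AgentPath] = field(default_factory=list)

@dataclass
class ScenarioLayers:
    static_layer: str               # e.g. a reference to a road layout
    dynamic_layer: DynamicLayer     # per-agent paths, motion and behaviour data
```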
As will be appreciated, scenarios can be described for the purpose of simulation in many ways, with any degree of configurability. For example, the number and type of agents, and their motion information may be configurable as part of the scenario parameterization 201b.
The output of the simulator 202 for a given simulation includes an ego trace 212a of the ego agent and one or more agent traces 212b of the one or more external agents (traces 212). Each trace 212a, 212b is a complete history of an agent's behaviour within a simulation having both spatial and motion components. For example, each trace 212a, 212b may take the form of a spatial path having motion data associated with points along the path such as speed, acceleration, jerk (rate of change of acceleration), snap (rate of change of jerk) etc.
Additional information is also provided to supplement and provide context to the traces 212. Such additional information is referred to as “contextual” data 214. The contextual data 214 pertains to the physical context of the scenario, and can have both static components (such as road layout) and dynamic components (such as weather conditions to the extent they vary over the course of the simulation). To an extent, the contextual data 214 may be “passthrough” in that it is directly defined by the scenario description 201a or the choice of parameterization 201b, and is thus unaffected by the outcome of the simulation. For example, the contextual data 214 may include a static road layout that comes from the scenario description 201a or the parameterization 201b directly. However, typically the contextual data 214 would include at least some elements derived within the simulator 202. This could, for example, include simulated environmental data, such as weather data, where the simulator 202 is free to change weather conditions as the simulation progresses. In that case, the weather data may be time-dependent, and that time dependency will be reflected in the contextual data 214.
The test oracle 252 receives the traces 212 and the contextual data 214, and scores those outputs in respect of a set of performance evaluation rules 254. The performance evaluation rules 254 are shown to be provided as an input to the test oracle 252.
The rules 254 are categorical in nature (e.g. pass/fail-type rules). Certain performance evaluation rules are also associated with numerical performance metrics used to “score” trajectories (e.g. indicating a degree of success or failure or some other quantity that helps explain or is otherwise relevant to the categorical results). The evaluation of the rules 254 is time-based—a given rule may have a different outcome at different points in the scenario. The scoring is also time-based: for each performance evaluation metric, the test oracle 252 tracks how the value of that metric (the score) changes over time as the simulation progresses. The test oracle 252 provides an output 256 comprising a time sequence 256a of categorical (e.g. pass/fail) results for each rule, and a score-time plot 256b for each performance metric, as described in further detail later. The results and scores 256a, 256b are informative to the expert 122 and can be used to identify and mitigate performance issues within the tested stack 100. The test oracle 252 also provides an overall (aggregate) result for the scenario (e.g. overall pass/fail). The output 256 of the test oracle 252 is stored in a test database 258, in association with information about the scenario to which the output 256 pertains. For example, the output 256 may be stored in association with the scenario description 210a (or an identifier thereof), and the chosen parameterization 201b. As well as the time-dependent results and scores, an overall score may also be assigned to the scenario and stored as part of the output 256. For example, an aggregate score for each rule (e.g. overall pass/fail) and/or an aggregate result (e.g. pass/fail) across all of the rules 254.
A number of “later” perception components 102B form part of the sub-stack 100S to be tested and are applied, during testing, to simulated perception inputs 203. The later perception components 102B could, for example, include filtering or other fusion components that fuse perception inputs from multiple earlier perception components.
In the full stack 100, the later perception components 102B would receive actual perception inputs 213 from earlier perception components 102A. For example, the earlier perception components 102A might comprise one or more 2D or 3D bounding box detectors, in which case the simulated perception inputs provided to the later perception components 102B could include simulated 2D or 3D bounding box detections, derived in the simulation via ray tracing. The earlier perception components 102A would generally include component(s) that operate directly on sensor data. With the slicing of
Such perception error models may be referred to as Perception Statistical Performance Models (PSPMs) or, synonymously, “PRISMs”. Further details of the principles of PSPMs, and suitable techniques for building and training them, may be found in International Patent Publication Nos. WO2021037763, WO2021037760, WO2021037765, WO2021037761, and WO2021037766, each of which is incorporated herein by reference in its entirety. The idea behind PSPMs is to efficiently introduce realistic errors into the simulated perception inputs provided to the sub-stack 100S (i.e. that reflect the kind of errors that would be expected were the earlier perception components 102A to be applied in the real-world). In a simulation context, “perfect” ground truth perception inputs 203G are provided by the simulator, but these are used to derive more realistic (ablated) perception inputs 203 with realistic error introduced by the perception error model(s) 208. The perception error model(s) 208 serve as a “surrogate model” (being a surrogate for the perception system 102, or part of the perception system 102A, but operating on lower-fidelity inputs). Non-trained perception error models may also be used.
As described in the aforementioned references, a perception error (or performance) model can be dependent on one or more variables representing physical condition(s) (“confounders”), allowing different levels of error to be introduced that reflect different possible real-world conditions. Hence, the simulator 202 can simulate different physical conditions (e.g. different weather conditions) by simply changing the value of a weather confounder(s), which will, in turn, change how perception error is introduced.
The later perception components 102B within the sub-stack 100S process the simulated perception inputs 203 in exactly the same way as they would process the real-world perception inputs 213 within the full stack 100, and their outputs, in turn, drive prediction, planning and control.
Alternatively, such models can be used to model the entire perception system 102, including the later perception components 102B, in which case a model(s) is used to generate realistic perception outputs that are passed as inputs to the prediction system 104 directly.
Depending on the implementation, there may or may not be a deterministic relationship between a given scenario parameterization 201b and the outcome of the simulation for a given configuration of the stack 100 (i.e. the same parameterization may or may not always lead to the same outcome for the same stack 100). Non-determinism can arise in various ways. For example, when simulation is based on PRISMs, a PRISM might model a distribution over possible perception outputs at each given time step of the scenario, from which a realistic perception output is sampled probabilistically. This leads to non-deterministic behaviour within the simulator 202, whereby different outcomes may be obtained for the same stack 100 and scenario parameterization because different perception outputs are sampled. Alternatively, or additionally, the simulator 202 may be inherently non-deterministic, e.g. weather, lighting or other environmental conditions may be randomized/probabilistic within the simulator 202 to a degree. As will be appreciated, this is a design choice: in other implementations, varying environmental conditions could instead be fully specified in the parameterization 201b of the scenario. With non-deterministic simulation, multiple scenario instances could be run for each parameterization. An aggregate pass/fail result could be assigned to a particular choice of parameterization 201b, e.g. as a count or percentage of pass or failure outcomes.
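With non-deterministic simulation, an aggregate result for a given parameterization could be computed along the following lines (a sketch in Python; the run function and the number of runs are assumed):

```python
def aggregate_failure_rate(x, run_scenario_instance, num_runs: int = 20) -> float:
    """Fraction of runs at parameterization x that end in an overall FAIL.

    run_scenario_instance(x) is assumed to realize one simulated instance at x
    and return True for an overall pass, False for an overall fail.
    """
    failures = sum(0 if run_scenario_instance(x) else 1 for _ in range(num_runs))
    return failures / num_runs
```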
A test orchestration component 260 is responsible for selecting scenarios for the purpose of simulation. For example, the test orchestration component 260 may select scenario descriptions 201a and suitable parameterizations 201b automatically, which may be based on the test oracle outputs 256 from previous scenarios and/or other criteria. In the following examples, the test orchestration component 260 implements a directed testing method, so as to guide the search of the scenario's parameter space in a targeted manner.
One example rule considered herein is a “safe distance” rule that applies in a lane following context, and which is evaluated between an ego agent and another agent. The safe distance rule requires the ego agent to maintain a safe distance from the other agent at all times. Lateral and longitudinal distance are both considered and, to pass the safe distance rule, it is sufficient for only one of those distances to satisfy some safety threshold (consider a lane driving scenario, with the ego agent and the other agent in adjacent lanes; when driving alongside each other, their longitudinal separation along the road may be zero or close to zero, which is safe provided a sufficient lateral separation between the agents is maintained; similarly, with the ego agent driving behind the other agent in the same lane, their lateral separation perpendicular to the direction of the road may be zero or close to zero assuming both agents are approximately following the midline of the lane, which is safe provided a sufficient longitudinal headway is maintained). A numerical score is computed for the safe distance rule at a given point in time based on whichever distance (lateral or longitudinal) is currently determinative of safety.
Robustness scores such as the score described below with reference to
The safe distance rule is chosen to illustrate certain principles underpinning the described methodology because it is simple and intuitive. It will, however, be appreciated that the described techniques can be applied to any rule that is designed to quantify some aspect (or aspects) of driving performance, such as safety, comfort and/or progress towards some defined goal, by way of a numerical “robustness score”. A time-varying robustness score over the duration of a scenario run is denoted s(t) and an overall robustness score for a run is denoted y. For example, a robustness scoring framework may be constructed for driving rules that are based on signal-temporal logic.
Longitudinal distance is measured along a road reference line (which could be a straight line or a curve), and lateral separation is measured in the direction perpendicular to the road reference line. Lateral and longitudinal separation (distance between the ego agent E and the challenger C) are denoted by dlat and dlon respectively. Latitudinal and longitudinal distance thresholds (safety distances) are denoted by dslat and dslon.
The safety distances dslat, dslon are typically not fixed, but typically vary as functions of the agents' relative speed (and/or other factors, such as weather, road curvature, road surface, lighting etc.). Expressing the separations and safety distances as functions of time, t, latitudinal and longitudinal “headroom” distances are defined as:
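For example, the headroom distances may be defined as the amount by which each separation exceeds its corresponding safety distance:

\[
D_{lat}(t) = d_{lat}(t) - d^{s}_{lat}(t), \qquad D_{lon}(t) = d_{lon}(t) - d^{s}_{lon}(t).
\]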
The latitudinal separation, latitudinal safety distance, longitudinal separation and longitudinal safety distance are extracted from the scenario ground truth 310 by, respectively, first, second, third and fourth extractor nodes 302, 304, 312, 314 of the computational graph 300 as time-varying signals. The latitudinal and longitudinal headroom distances are computed by first and second computational (assessor) nodes 306, 316, and converted to robustness scores as follows. The following examples consider normalized robustness scores over some fixed range, such as [−1,1], with 0 as the pass threshold.
The headroom distances quantify the extent to which the relevant safety distance is or is not breached: a positive latitudinal/longitudinal headroom distance implies that the latitudinal/longitudinal separation between the ego E and the challenger C is greater than the current latitudinal/longitudinal safety distance, and a negative headroom distance implies the opposite. Following the principles set out above, robustness scores for lateral and longitudinal distance may, for example, be defined as follows:
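One possible form, consistent with the behaviour described in the following paragraph, is a clipped piecewise-linear mapping of headroom onto [−1, 1] (shown here for the longitudinal score; the lateral score is defined analogously):

\[
s_{lon}(t) = \begin{cases} 1, & D_{lon}(t) \geq A_{lon} \\ D_{lon}(t)/A_{lon}, & 0 \leq D_{lon}(t) < A_{lon} \\ D_{lon}(t)/B_{lon}, & -B_{lon} < D_{lon}(t) < 0 \\ -1, & D_{lon}(t) \leq -B_{lon} \end{cases}
\]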
Here, A and B denote some predefined normalization distances (which may be the same or different for the lateral and longitudinal scores). For example, it can be seen that the longitudinal robustness score slon(t) varies between 1 and −1 as Dlon(t) varies between Alon and −Blon. For Dlon(t)>Alon, the longitudinal robustness score is fixed at 1, and for Dlon(t)<−Blon it is fixed at −1. The longitudinal robustness score slon(t) varies continuously over all possible values of longitudinal headroom. The same considerations apply to the lateral robustness score. As will be appreciated, this is merely one example, and a robustness score s(t) can be defined in various ways based on headroom distance.
Score normalization is convenient, because it makes the rules more interpretable, and facilitates comparison of the scores between different rules. However, it is not essential for scores to be normalized in this way. A score could be defined over any range with any failure threshold (not necessarily at zero).
The robustness score s(t) for the safe distance rule as a whole is computed by a third assessor node 308 as:
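Since it is sufficient for either the lateral or the longitudinal distance to be safe, one suitable combination is the larger of the two component scores:

$$
s(t) = \max\bigl(s_{\mathrm{lat}}(t),\, s_{\mathrm{lon}}(t)\bigr).
$$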
The rule is passed when s(t)>0 and failed when s(t)≤0. The rule is ‘just’ failed when s=0 (implying that one of the longitudinal and lateral separations is equal to its safety distance), representing the boundary between PASS and FAIL outcomes (performance categories). Alternatively, s=0 could be defined at the point at which the ego E just passes; this is an immaterial design choice and, for that reason, the terms “pass threshold” and “failure threshold” are used interchangeably herein to refer to the subset of the parameter space where the robustness score y=0.
A pass/fail result (or, more generally, a performance category) may be assigned to each time step of a scenario run based on the robustness score s(t) at that time, which is useful to an expert interpreting the results. However, the example directed search methods below are based on an overall robustness score assigned to a scenario run by a fourth assessor node 309, as follows.
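One suitable definition, taking the overall score to be the lowest robustness value attained over the duration of the run, is:

$$
y = \min_{t} s(t),
$$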
with an overall pass when y>0 and an overall fail when y≤0.
One possible directed search methodology utilizing the overall score y will now be described. The directed search uses a Gaussian process model to build a score predictor, which in turn is used to guide the selection of points (parameter sets) in the parameter space of a scenario.
To facilitate the description, an x superscript is introduced into the above notation to denote a quantity obtained with a given parameter set, e.g. y(x) denotes the overall robustness score obtained on a run characterized by parameter set x (a given point in the parameter space).
The Gaussian process model described herein assumes that the scoring function g(x) varies continuously as a function of the scenario parameters x. That is, it is assumed that a small change in x causes only a small change in the overall score y that is obtained. It will be appreciated that the techniques can be applied with other forms of machine learning model, and the description applies equally to such models.
The score predictor 402 is a machine learning component based on a trainable regression model. A training set 404 comprises training example (x, y) pairs, where x is a parameter set on which a simulated run has been performed and y is the resulting score (for conciseness, the x superscript is omitted in the following description when the meaning is clear).
The score predictor 402 consumes a parameter set x as input, and outputs a predicted score 406. In fact, in the following examples, the output is a probability distribution over possible scores, which provides not only a predicted score 406 but also a confidence level associated with the prediction. Ultimately, the aim is to train the score predictor 402 so that it can accurately predict the overall performance of a given stack 100 (or some part or parts thereof) in a given scenario with a given parameter set x, without having to actually run and score a simulated instance of the scenario with those parameters x. To generate a training example for a given scenario, an instance of that scenario is run with a given parameter set x in the simulator 202, the test oracle 252 is applied to the simulator output, and the overall score y is computed from the oracle output. This requires significant computational resources, both to run the simulation and then to score the simulator output against the rule or rules in question. It is therefore highly desirable to build the training set 404 in a targeted manner, with as few examples as are needed to give sufficient confidence with respect to some analysis objective. In the following examples, that objective is to accurately map out the ‘boundaries’ in the scenario's parameter space between PASS and FAIL regions. Here, the methodology is less concerned with accurately predicting the score y per se; rather, the aim is to accurately predict the sign of the score with confidence (or, more generally, to predict whether or not the score satisfies the pass threshold).
The directed search methodology can be applied with multiple rules simultaneously based on a set of scores
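which, in the notation introduced above, may be written as

$$
\bigl\{\, y^{(x)}_{1},\, \dots,\, y^{(x)}_{N} \,\bigr\},
$$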
where the i subscript denotes the ith rule and N is the number of rules on which the directed search is applied. In this case, a score predictor is constructed and trained for each rule i, as described in further detail below.
At step 412, an initial set of points is selected in the parameter space of the scenario. For example, these could be selected randomly (e.g. based on Monte Carlo sampling), or a ‘grid’ of uniformly spaced points in the parameter space may be chosen initially. The scenario is instantiated at each of these initial points, and the simulator output is analyzed in the test oracle 252 to obtain an overall score for each initial point (step 414).
The training set 404 is initially constructed from these initial points and their computed scores.
At step 416, the score predictor 402 is trained based on the training set 404 to compute a predicted score for any given input x in the parameter space. As noted above, the output of the score predictor is actually a probability distribution 407 over possible scores y given x, denoted by p(y|x) (score distribution).
The method could terminate at this point (decision block 417); otherwise, at step 418, the score distribution 407 is used to determine an “acquisition function”. Details of the acquisition function are described below. For now, suffice it to say that the acquisition function encodes the analysis objective (e.g. the objective to map out PASS/FAIL boundaries with confidence) in precise, mathematical terms. The acquisition function is then used to select (step 420) a next point x in the parameter space in fulfilment of that objective.
At step 422, the scenario is instantiated at the selected next point x, the simulator output is scored, and that point x is added to the training set 404 with its associated score y (step 424). Thereafter, the method is repeated from step 416 onwards. Step 416 is an update/re-training step in subsequent iterations, at which the score predictor 402 is updated or retrained based on the updated training set 404.
The current example considers the selection of a single point based on the acquisition function. A single additional data point is obtained (the next parameter set and its actual score) and the score predictor 402 is updated based on this single additional data point. In other implementations, multiple points may instead be selected at each update step. Multiple points may, for example, be selected in a given iteration based on an acquisition function using a batched greedy optimisation as described at https://emukit.readthedocs.io/en/latest/api/emukit.core.loop.html#emukit.core.loop.candidate_point_calculators.GreedyBatchPointCalculator, and also in sec. 19.4.3 of https://github.com/probml/pml-book/releases/latest/download/book1.pdf, each incorporated herein by reference in its entirety.
Returning to decision block 417, there are various termination criteria that could be applied. For example, the method might terminate (426) following step 416 if it is determined from the score distribution 407 that the analysis objective has been satisfied (e.g. the PASS/FAIL boundary has been mapped with sufficient confidence, as determined, for example, when the misclassification probability has been reduced below some minimum level throughout the parameter space, or throughout some subset of the parameter space that is of interest). Alternatively or additionally, the method may terminate after some defined number of iterations.
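By way of illustration only, the basic loop of steps 412 to 424 may be sketched as follows. The sketch assumes a scikit-learn Gaussian process regressor standing in for the score predictor 402, a hypothetical simulate_and_score function standing in for the simulator 202 and the test oracle 252, and random candidate sampling in place of a full optimization of the acquisition function f(x) = |g(x)|/gσ(x) described below; none of these choices is essential.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def directed_search(simulate_and_score, bounds, n_initial=10, n_iterations=50,
                    misclass_tol=0.05, n_candidates=2000, seed=0):
    """Directed search over a scenario parameter space.

    simulate_and_score(x) is assumed to run a simulated scenario instance at
    parameter set x and return the overall robustness score y (y <= 0 = FAIL).
    bounds is an (n, 2) array of per-parameter lower/upper limits.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]

    # Steps 412/414: initial Monte Carlo sample of the parameter space, scored in simulation.
    X = rng.uniform(lo, hi, size=(n_initial, len(lo)))
    y = np.array([simulate_and_score(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(n_iterations):
        # Step 416: (re)train the score predictor on the current training set.
        gp.fit(X, y)

        # Step 418: evaluate the acquisition f(x) = |mean| / std on candidate points.
        candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
        mu, sigma = gp.predict(candidates, return_std=True)
        f = np.abs(mu) / np.maximum(sigma, 1e-9)  # guard against sigma ~ 0 at explored points

        # Decision block 417: stop once the misclassification probability Phi(-f)
        # is below tolerance at all sampled candidates.
        if norm.cdf(-f).max() < misclass_tol:
            break

        # Steps 420/422/424: pick the argmin of f, simulate and score it, extend the set.
        x_next = candidates[np.argmin(f)]
        X = np.vstack([X, x_next])
        y = np.append(y, simulate_and_score(x_next))

    return gp, X, y
```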
The predicted PASS/FAIL boundary refers to the subset of the parameter space where g(x) is zero or close to zero.
The score distribution 407 is characterized by a mean gμ(x) and a standard deviation gσ(x). For conciseness, the μ subscript is dropped in the following description, and the mean predicted score is denoted simply by g(x).
The acquisition function, denoted by reference numeral 430, is defined based on “misclassification probability”. The misclassification probability for a predicted score is the probability that the sign of the score is wrong (or, more generally, the probability that the predicted score is on the wrong side of the pass threshold compared with the true score that would be obtained by instantiating the scenario at x). This is not the same as the confidence in the predicted score 406 itself: the misclassification probability does depend on the standard deviation gσ(x), but this is not the only factor; it also depends on how close x is to a PASS/FAIL boundary. A relatively high standard deviation (low confidence) at x may still imply a low misclassification probability if the predicted score g(x) is close to 1 or −1; on the other hand, a relatively low standard deviation may still imply a high misclassification probability if the predicted score g(x) is close to zero.
Taking the above factors into account, the acquisition function 430 is defined as:
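For example, in terms of the predicted mean score g(x) and its standard deviation gσ(x):

$$
f(x) = \frac{\lvert g(x) \rvert}{g_{\sigma}(x)}.
$$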
That is, the acquisition function 430 is defined as the modulus of the predicted score divided by its computed standard deviation.
The next point x is selected at step 420 above as follows.
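In symbols:

$$
x_{\text{next}} = \operatorname*{arg\,min}_{x} f(x).
$$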
That is, the next point x is selected as that which minimizes the acquisition function ƒ(x).
As can be seen, the predicted score in the numerator decreases the value of the acquisition function as it moves closer to zero (this is true for both positive and negative scores), and an increase in the predicted standard deviation in the denominator also decreases the value of ƒ(x).
In a Gaussian model, the misclassification probability can be expressed in closed form in terms of the acquisition function ƒ(x).
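For example:

$$
p_{\text{misclassification}}(x) = \Phi\!\left(-\frac{\lvert g(x) \rvert}{g_{\sigma}(x)}\right) = \Phi\bigl(-f(x)\bigr).
$$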
Here, Φ denotes the cumulative distribution function of the standard normal distribution (a Gaussian with zero mean and unit variance). The misclassification probability does not necessarily need to be computed; the method can be applied with the acquisition function ƒ(x) directly. As can be seen, the misclassification probability increases as ƒ(x) decreases, such that the argmin of ƒ(x) (or, equivalently, the argmax of −ƒ(x)) corresponds to the point of greatest misclassification probability.
Points which have already been explored will have gσ≈0, hence ƒ(x) will typically be large and these points will not be chosen. In practice, there can be numerical instabilities when g(x) is also close to zero. This can be mitigated in various ways, for example by sampling from the acquisition function rather than taking its argmin. Another technique excludes points based on a desired accuracy level. For example, a credible interval may be determined for the failure probability (an interval in which the failure probability falls with some given probability, such as 0.95), and points x may be excluded if the credible interval falls within some predetermined range. When calculating the misclassification probability, a small constant may be added to the failure threshold when the mean associated with the credible interval is close to the threshold. This mitigates the effect of g(x) and gσ(x) both being close to zero simultaneously.
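With multiple rules, a component acquisition function of the same form may be defined for each rule, for example:

$$
f_{i}(x) = \frac{\lvert g_{i}(x) \rvert}{g_{\sigma,i}(x)},
$$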
where gi(x) and gσ,i(x) denote the predicted mean and standard deviation for rule i.
Each update is applied to a single rule only. With multiple rules, an overall “PASS” is obtained if and only if all rules are passed. On the other hand, an overall “FAIL” result is obtained if any of the rules is failed. The rule selection mechanism takes this into account, in order to accommodate multiple rules in the directed search process in the most efficient way possible.
At step 442, the method selects which of the rules to update, as follows (index j denotes the selected rule). An overall acquisition function ƒ(x) is determined at this step in terms of the component acquisition functions:
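One form consistent with the rationale set out below is:

$$
f(x) =
\begin{cases}
\min_{i} f_{i}(x), & \hat{p}(x) = \emptyset,\\
\max_{i \in \hat{p}(x)} f_{i}(x), & \text{otherwise},
\end{cases}
\qquad
\hat{p}(x) = \{\, i : g_{i}(x) \le 0 \,\},
$$

where p̂(x) denotes the set of rules predicted to be failed at x, and j denotes the rule attaining the minimum or maximum, respectively.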
The rationale is as follows. If all rules are currently predicted to be passed for a given parameter set x, then a misclassification occurs if any one of those rules is, in fact, failed at that point x in the parameter space. Therefore, in the first case above (all rules predicted to be passed at x), the rule with the highest misclassification probability is selected. This is the rule that is most likely to change the overall outcome when a simulation is actually performed, and therefore the most “promising” in terms of discovering a parameter combination x on which an overall FAIL result is obtained (finding a failure point is the first step towards mapping out the PASS/FAIL boundary, assuming the parameter space contains at least one failure region).
On the other hand, if at least one rule is predicted to be failed at a given x with relatively high confidence, there is little point in incurring the significant expense of evaluating other rules at that point (requiring expensive simulations and analysis): the overall “FAIL” outcome is unlikely to change (a confident predicted failure on rule j at point x implies a confident predicted overall fail). Somewhat counterintuitively, it is therefore the rule that is predicted to be failed with the lowest misclassification probability that is taken to be indicative of the overall probability of misclassification (because an overall misclassification result would require the most confident failure prediction to be wrong). The set p̂ is the subset of rule(s) for which a failure outcome is predicted at x, and the overall acquisition function is defined at that point by the rule for which a failure outcome is predicted with the lowest misclassification probability.
At step 444, the next point in the parameter space is selected in the same manner as above:
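that is, as the point minimizing the overall acquisition function:

$$
x_{\text{next}} = \operatorname*{arg\,min}_{x} f(x),
$$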
with j defined as above. Thus, in addition to selecting the next point x, one of the rules, j, is also selected (as the rule that determines the acquisition function ƒ(x) at the selected point).
At step 446, the scenario is instantiated at the next point x. However, the simulator output need only be scored on rule j, as it is only the score predictor 402-j for rule j that is updated. This requires a simulation, but does not require a multi-rule scoring analysis (which is expensive in and of itself).
In principle, a situation can arise where the next point x has been previously explored, but only evaluated for some different rule k≠j. In this case, the simulation at x does not need to be repeated, and the existing simulator output can be accessed and scored at this point for rule j. In practice, this situation rarely arises, because the probability of selecting exactly the same floating-point value when sampling from a continuous distribution is vanishingly small.
Alternatively, every rule may be scored on every run, and the Gaussian process models for all rules may be updated after the scoring of each run.
As briefly discussed, when a Gaussian process model is used, the score prediction assumes that the scoring function g(x) is continuous over the parameter space. If the rules are constructed in a way that materially breaches this assumption, the resulting discontinuities in the parameter space can prevent the Gaussian process search from converging. These considerations apply to other forms of model, and are not unique to Gaussian processes.
Without careful design, discontinuities in g(x) are more likely to arise in more complex scenarios where certain rules might only be naturally applicable for some part (or parts) of a run.
For example, consider an ego vehicle waiting on a minor road at a T-junction to join a traffic stream on a major road. Before the ego vehicle joins the traffic stream, the “safe distance” rule is not naturally applicable between the ego vehicle and other vehicles in the traffic stream, as that is not a lane following context; the rule only becomes pertinent once the ego vehicle enters the road and joins the traffic stream.
A simple way to address this would be to define a set of circumstances in which a given rule applies, and to set the rule robustness to a default value of 1 whenever the rule is inapplicable. For example, the safe distance rule may only be “activated” when the ego agent crosses the junction line between the major and minor roads. Prior to that, the rule is inactive and set to a default value of 1. This is superficially appealing: generally, the system designer/tester is only interested in failure cases. A “maximum” pass result of 1 is, in that context, the least interesting result, and it would therefore appear natural to extend a score of 1 to circumstances in which the rule is inapplicable. However, in the context of the directed exploration method described above, this can cause significant convergence issues, as will now be illustrated.
The scenario commences with an ego vehicle, E, waiting at a junction on a minor road to join a traffic stream on a major road. In the first instance 502, the parameters x1 are such that the ego vehicle E is never able to find an opportunity to join the major road, and simply waits at the junction for the entirety of the run (this does not necessarily mean that it would have been impossible for any ego vehicle to join the major road safely; it could also be that the ego vehicle is exercising ‘excessive’ caution).
By contrast, in the second instance 504, the ego vehicle E determines that it is able to safely join the traffic stream. This has arisen because the small difference δx1 between the two parameter sets causes, for example, a slight reduction in the speed of the vehicles on the major road and/or a slight widening of a gap in the stream of traffic.
In the first instance 502, the time-varying robustness score for the safe distance rule remains at the default value of 1 for the entire duration of the run, because the rule is never activated. In the second instance 504, the rule is activated when the ego vehicle E joins the traffic stream, and the lowest robustness value attained during the run is 0.2.
Thus, the first parameter set is assigned an overall score of g(x1)=1 whilst the second parameter set is assigned an overall score of g(x′1)=g(x1+δx1)=0.2. Both outcomes are a “PASS” overall on the safe distance rule (as the score never falls below zero). However, a discontinuity has arisen in the scoring function g(x), in that a small change in the scenario parameters has caused a large and sudden change in the score. The pass in the first instance 502 is ‘artificial’ in the sense that the safe distance rule was never actually applicable.
A first line segment 602 denotes the artificial discontinuity created by the default score: on one side of this line, the ego vehicle E never joins the major road and the safe distance rule is never activated; on the other side lies a second region 608, in which the ego vehicle E joins the major road at some point during the run. By contrast, a second line segment 604 denotes the true PASS/FAIL boundary. In a third ‘fail’ region 608A to the left of this line 604, the conditions of the scenario are such that the ego vehicle E joins the major road but fails the safe distance rule; in a fourth ‘pass’ region 608B to the right, the ego vehicle E joins the major road and passes. The third and fourth regions 608A, 608B are subsets of the second region 608. The scoring function g(x) changes continuously across the true PASS/FAIL boundary 604.
The Gaussian process methodology described above is based on an assumption that the scoring function g(x) is continuous. The discontinuity 602 that has been created by artificially setting a rule to one when it is not inherently applicable violates this assumption, and can prevent the directed search from converging. This issue applies to any rule treated in this way, and to other forms of directed search that assume a continuous scoring function.
To address the aforementioned convergence problem, rules are instead assigned a special, non-numerical value (‘not a number’ or ‘NaN’) in conditions where they are not applicable. The ‘min’ function is now defined so that g(x)=nan if the robustness score remains NaN-valued throughout the entire duration of the scenario; if the robustness score is numerical at any point, the overall score is equal to the lowest numerical robustness value. For the safe distance rule example, the robustness score is set to be NaN valued at any time at which the ego vehicle E is waiting at the junction. Once the ego vehicle joins the major road, e.g. by crossing a defined junction line, the robustness score becomes numerical, and is computed as set out above.
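In symbols, the overall score for a given run then becomes:

$$
y =
\begin{cases}
\text{nan}, & \text{if } s(t) = \text{nan for all } t,\\
\min_{t \,:\, s(t) \ne \text{nan}} s(t), & \text{otherwise}.
\end{cases}
$$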
Another benefit of NaN results is that they are more informative to an expert, expressly distinguishing, in the value of the robustness score, between circumstances in which the rule applies and is passed, and circumstances where the rule does not apply.
The directed search method above cannot be directly applied with a scoring function g(x) that can be NaN-valued. However, because the scoring function g(x) now distinguishes explicitly between a ‘PASS’ outcome and a ‘NOT APPLICABLE’ (NaN) outcome, it becomes possible to model the discontinuity 602 explicitly using a hierarchical model.
With a Bayesian hierarchical approach, two models are learned for each rule: a classification model 904, which computes a first probability distribution 903 indicating the probability that the outcome at a given point x is NaN-valued, and a score regression model 902, which predicts the numerical score in the event that the outcome is not NaN-valued.
The score regression model 902 is configured in the same way as the score predictor 402, but now computes a second probability distribution 905.
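This second distribution is the distribution over numerical scores, conditioned on the outcome not being NaN-valued:

$$
p(y \mid x,\; y \ne \text{nan}).
$$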
Any probabilistic classification and regression models can be used in this context. The training of the models 902, 904 is described in detail below.
As in the method described above, an initial set of points in the parameter space is selected, and the scenario is instantiated and scored at each of those points; the resulting examples form an initial training set 906 on which the two models 902, 904 are trained.
The regression model 902 is trained only on the subset 907 of training examples that are numerically valued.
Thereafter, the method proceeds as before, but with a modified acquisition function that is described below. At step 916, the modified acquisition function is determined based on the first and second distributions 903, 905. At step 918, the next point (or points) to explore is selected based on the modified acquisition function and, at step 920, the scenario is instantiated and scored at that (or those) points. The latest result(s) is added to the training set 906, and the method repeats. The classification model 904 is always updated at this point (step 914B). The regression model 902 is only updated (step 914A) if a non-NaN-valued example has been added to the training set 906 (as it is only trained on the subset of numerically-valued examples).
If, following the update of the models at steps 914A, 914B, a termination condition is satisfied (decision block 921), the method terminates (step 922).
When the method eventually terminates at step 922, the final regression and classification models 902, 904 serve to map out not only the PASS/FAIL boundary 604 but also the FAIL/NaN boundary 602B within the parameter space of the scenario.
The modified acquisition function is determined at step 916 as follows. The i subscript is omitted for conciseness; with multiple rules, a modified acquisition function is determined for each rule as set out below.
The scoring function g may now be expressed as a mapping:
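For example, allowing for NaN-valued outcomes:

$$
g : \mathbb{R}^{n} \to \mathbb{R} \cup \{\text{nan}\}.
$$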
Here, n≥1 is the dimensionality of the parameter space, where each parameter set x ∈ ℝn (or, strictly speaking, each parameter tuple) has one or more numerically-valued elements.
Let F denote a failure event and N denote a NaN event. Using logic notation, the approach described herein seeks to classify, with confidence, the cases where F∩¬N holds (the rule is applicable and is failed) versus all other possible outcomes.
As set out above, the first model 902 is a Gaussian Process (GP) regression model that provides p(y|x, y≠nan). This is trained by performing GP regression for the dataset 907, which may be expressed in mathematical notation as {(x, y)|y≠nan}.
The model for F can, in turn, be expressed as:
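One possible expression, obtained by integrating the regression distribution over the range of scores classified as failure, is:

$$
p(F \mid x,\; y \ne \text{nan}) = \int_{-\infty}^{0} p(y \mid x,\; y \ne \text{nan})\, \mathrm{d}y.
$$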
This assumes the pass/fail threshold is defined by g(x)=0. As will be appreciated, the definition can be extended to any threshold by integrating over the range of numerical y values that are classified as failure.
Because p(y|x, y≠nan) is parameterised as a Gaussian distribution with mean gμ(x) and standard deviation gσ(x), this can be written as
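the following closed form, using the standard normal cumulative distribution function Φ:

$$
p(F \mid x,\; y \ne \text{nan}) = \Phi\!\left(-\frac{g_{\mu}(x)}{g_{\sigma}(x)}\right).
$$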
Therefore, the probability-of-misclassification acquisition function can be calculated as follows:
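One definition, consistent with the discussion of misclassification below, is as follows. Writing p(F∩¬N|x) = p(F|x, y≠nan)·p(y≠nan|x) for the probability of the ‘interesting’ outcome (not-NaN and FAIL) at x, and assuming a point is classified according to its most probable category:

- if x is classified as ‘interesting’, i.e. p(F∩¬N|x) > 0.5, the misclassification probability is 1 − p(F∩¬N|x);
- otherwise (x classified as ‘uninteresting’), the misclassification probability is p(F∩¬N|x).

Equivalently, the misclassification probability is min{p(F∩¬N|x), 1 − p(F∩¬N|x)}.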
This, in turn, allows the modified acquisition function to be defined in terms of the first and second distributions 903, 905 as:
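One such definition, writing f̃(x) for the modified acquisition function (an assumed notation), is:

$$
\tilde{f}(x) = \min\bigl\{\, p(F \cap \lnot N \mid x),\; 1 - p(F \cap \lnot N \mid x) \,\bigr\},
\qquad
p(F \cap \lnot N \mid x) = \Phi\!\left(-\frac{g_{\mu}(x)}{g_{\sigma}(x)}\right)\bigl(1 - p(y = \text{nan} \mid x)\bigr),
$$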
where the term p(y = nan|x) is provided by the classification model 904.
The definition of ‘misclassification’ in this context is somewhat nuanced. Misclassification is defined with respect to the ‘interesting’ category (not-NaN and FAIL) vs. all other possible outcomes (the ‘uninteresting’ category): the aim of the directed search is to determine the PASS/FAIL boundary 604 and the FAIL/NaN boundary 602B with sufficient confidence; the search is not concerned with the PASS/NaN boundary 602A. Accordingly, a misclassification occurs if x is classified as ‘interesting’ (not-NaN and FAIL) when the actual outcome is anything else (uninteresting). The acquisition function does not distinguish between different ‘uninteresting’ outcomes; hence, the definition of misclassification probability in the first bullet point above. A misclassification also occurs if the actual outcome for x is ‘interesting’ (not-NaN and FAIL), but it has been classified as ‘uninteresting’ (implying a predicted outcome of anything other than ‘not-NaN and FAIL’); hence the definition of misclassification probability in the second bullet point.
On the other hand, if the most probable outcome for x is ‘not-NaN and PASS’, but the actual outcome is NaN, this is not considered a misclassification (and vice versa). To give an example, suppose x has been classified as PASS, with p(F|x, ¬N) close to zero; if the actual outcome turns out to be NaN rather than PASS, both the predicted outcome and the actual outcome fall within the ‘uninteresting’ category, and no misclassification is deemed to have occurred.
Note that the ‘interesting/uninteresting’ terminology refers to the predicted outcome in a testing context, and reflects the special status of the ‘not-NaN and FAIL’ outcome. A point x with a high p(F∩¬N|x) represents a likely failure of the rule in circumstances where the rule actually applies, which is precisely the kind of case the testing process seeks to identify.
Applying the above definition of the modified acquisition function, the next point x is determined as that most likely to have been misclassified with respect to the FAIL category vs. all other categories:
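that is, in terms of the modified acquisition function defined above:

$$
x_{\text{next}} = \operatorname*{arg\,max}_{x} \tilde{f}(x) = \operatorname*{arg\,max}_{x} \min\bigl\{\, p(F \cap \lnot N \mid x),\; 1 - p(F \cap \lnot N \mid x) \,\bigr\}.
$$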
This considers the selection of a single next point in each iteration for simplicity. Alternatively, multiple points may be selected for the next iteration, e.g. using the batched greedy optimisation described above.
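A minimal sketch of this selection step, assuming the acquisition function in the form given above, with arrays mu, sigma and p_nan evaluated over a set of candidate points (the function name, the candidate-sampling strategy and the small constant guarding against a vanishing standard deviation are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def modified_acquisition(mu, sigma, p_nan):
    """Hierarchical misclassification-probability acquisition (one rule).

    mu, sigma: mean and standard deviation of p(y | x, y != nan) from the
    regression model 902; p_nan: p(y = nan | x) from the classification
    model 904. All arguments are arrays evaluated over the same candidate
    points x.
    """
    p_fail_given_num = norm.cdf(-mu / np.maximum(sigma, 1e-9))  # p(F | x, not-NaN)
    p_interesting = p_fail_given_num * (1.0 - p_nan)            # p(F and not-NaN | x)
    return np.minimum(p_interesting, 1.0 - p_interesting)       # misclassification probability

# The next point to explore is the candidate that maximizes the acquisition:
# x_next = candidates[np.argmax(modified_acquisition(mu, sigma, p_nan))]
```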
It is useful to note that, when p(y=nan|x)=0, the modified acquisition function reduces, as expected, to the simpler, non-hierarchical form described above:
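Specifically, setting p(y = nan|x) = 0 in the expression above gives:

$$
p(F \cap \lnot N \mid x) = \Phi\!\left(-\frac{g_{\mu}(x)}{g_{\sigma}(x)}\right),
\qquad
\tilde{f}(x) = \Phi\!\left(-\frac{\lvert g_{\mu}(x) \rvert}{g_{\sigma}(x)}\right) = \Phi\bigl(-f(x)\bigr),
$$

so that maximizing the modified acquisition function is equivalent to minimizing f(x) = |gμ(x)|/gσ(x), as in the non-hierarchical case.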
Multiple rules can be combined using the rule selection mechanism described above, with a modified acquisition function of the above form determined for each rule.
References herein to components, functions, modules and the like denote functional components of a computer system which may be implemented at the hardware level in various ways. A computer system comprises execution hardware which may be configured to execute the method/algorithmic steps disclosed herein and/or to implement a model trained using the present techniques. The term execution hardware encompasses any form/combination of hardware configured to execute the relevant method/algorithmic steps. The execution hardware may take the form of one or more processors, which may be programmable or non-programmable, or a combination of programmable and non-programmable hardware may be used. Examples of suitable programmable processors include general-purpose processors based on an instruction set architecture, such as CPUs, GPUs/accelerator processors etc. Such general-purpose processors typically execute computer-readable instructions held in memory coupled to or internal to the processor and carry out the relevant steps in accordance with those instructions. Other forms of programmable processor include field programmable gate arrays (FPGAs) having a circuit configuration programmable through circuit description code. Examples of non-programmable processors include application specific integrated circuits (ASICs). Code, instructions etc. may be stored as appropriate on transitory or non-transitory media (examples of the latter including solid state, magnetic and optical storage device(s) and the like). The subsystems 102-108 of the runtime stack 100 may be implemented in any of the above forms of execution hardware.