Autonomous vehicles, such as vehicles that do not require a human driver, can be used to aid in the transport of passengers or items from one location to another. Such vehicles may operate in a fully autonomous driving mode where passengers may provide some initial input, such as a destination, and the vehicle maneuvers itself to that destination. Thus, such vehicles may be largely dependent on systems that are capable of determining the location of the autonomous vehicle at any given time, as well as detecting and identifying objects external to the vehicle, such as other vehicles, stop lights, pedestrians, etc.
Data from one or more of these systems may be used to detect objects and their respective characteristics (position, shape, heading, speed, etc.). These characteristics can be used to predict trajectories of other objects. These trajectories may define what an object is likely to do for some brief period into the future. These trajectories can then be used to control the vehicle in order to avoid these objects. Thus, detection, identification, and prediction are critical functions for the safe operation of autonomous vehicles.
Aspects of the disclosure provide a system for estimating a spacing profile for a road agent, the system comprising: a first module that includes instructions that cause one or more processors to receive data related to characteristics of the road agent and road agent behavior detected in an environment of an autonomous vehicle; initiate an analysis of the road agent behavior; and estimate the spacing profile of the road agent as part of the analysis, the spacing profile including a lateral gap preference and one or more predicted behaviors of the road agent related to changes in lateral gap; and a second module that includes instructions that cause the one or more processors to determine one or more components of an autonomous vehicle maneuver based on the estimated spacing profile; and send control instructions for performing the autonomous vehicle maneuver.
In one example, the initiation of the analysis is based on when the road agent is predicted to interact with the autonomous vehicle. In another example, the initiation of the analysis is based on a lateral gap requirement for the autonomous vehicle in relation to the road agent. In a further example, the initiation of the analysis is based on whether the road agent is performing an overtake maneuver past the autonomous vehicle. In yet another example, the initiation of the analysis is based on an existing lateral gap between the autonomous vehicle and the road agent. In a still further example, the initiation of the analysis is based on a machine learning model.
In another example, the one or more predicted behaviors of the road agent includes an overtake maneuver or a cut-off maneuver in relation to the autonomous vehicle. In a further example, the spacing profile includes a predictability score for the road agent. In this example, the spacing profile includes a predictability score for the autonomous vehicle. In yet another example, the system also includes a third module that includes instructions that cause the one or more processors to receive the control instructions; and control an operational system of the autonomous vehicle based on the control instructions. In this example, the system also includes the operational system. Further in this example, the system also includes the autonomous vehicle.
Other aspects of the disclosure provide for a method for estimating a spacing profile for a road agent. The method includes receiving, by one or more computing devices, data related to characteristics of the road agent and road agent behavior detected in an environment of an autonomous vehicle; initiating, by the one or more computing devices, an analysis of the road agent behavior; estimating, by the one or more computing devices, the spacing profile of the road agent as part of the analysis, the spacing profile including a lateral gap preference and one or more predicted behaviors of the road agent related to changes in lateral gap; determining, by the one or more computing devices, one or more components of an autonomous vehicle maneuver based on the estimated spacing profile; and sending, by the one or more computing devices, control instructions for performing the autonomous vehicle maneuver.
In one example, the initiating of the analysis is based on when the road agent is predicted to interact with the autonomous vehicle. In another example, the initiating of the analysis is based on a lateral gap requirement for the autonomous vehicle in relation to the road agent. In a further example, the initiating of the analysis is based on whether the road agent is performing an overtake maneuver past the autonomous vehicle. In yet another example, the initiating of the analysis is based on an existing lateral gap between the autonomous vehicle and the road agent. In a still further example, the initiating of the analysis is based on a machine learning model.
In another example, the one or more predicted behaviors of the road agent includes an overtake maneuver or a cut-off maneuver in relation to the autonomous vehicle. In a further example, the spacing profile includes a predictability score for the road agent or the autonomous vehicle.
The technology relates to a planning system for an autonomous vehicle that accounts for lane-sharing road agents, such as cyclists. For scenarios where the autonomous vehicle shares lateral space with a road agent, a one-size-fits-all solution may cause the autonomous vehicle to slow more than needed and make less progress along a route, or may not create the best or safest outcomes for every possible scenario. Instead, the behavior of detected road agents may be used in addition to safety parameters/requirements to adapt an amount of lateral spacing needed to provide comfortable forward progress in the autonomous vehicle for a particular scenario.
The vehicle's computing devices may detect a road agent and road agent behavior over time. The vehicle's computing devices may initiate analysis of the road agent behavior based on the detected characteristics and behavior. For instance, the analysis may be initiated when the road agent is predicted to interact with the autonomous vehicle. The analysis of the road agent behavior is for estimating a spacing profile of the road agent. The spacing profile includes a lateral gap preference and one or more predicted behaviors of the road agent related to changes in lateral spacing. In some implementations, the spacing profile further includes a predictability score for the road agent, which may be based on how much behavior to-date has matched previously predicted behavior and/or predictability of environmental factors.
Based on the estimated spacing profile, the vehicle's computing devices may determine an autonomous vehicle maneuver. To perform the determination, the vehicle's computing devices may update one or more constraints based on the spacing profile. The one or more constraints may then be used in the determination of one or more components of a vehicle maneuver, such as speed, path, or route. The vehicle's computing devices may then execute the determined vehicle maneuver by controlling the one or more operational systems of the autonomous vehicle accordingly.
The technology herein may allow for a smoother and safer trip in an autonomous vehicle for a passenger. In particular, more forward progress in the autonomous vehicle may be made than without the navigation system described above. The technology may also minimize unnecessary overreactions and underreactions by the autonomous vehicle to smaller lateral gaps with which cyclists, scooters, motorcycles, pedestrians, runners, and other road agents are comfortable while simultaneously maintaining required levels of safety and compliance.
As shown in
The one or more processors 120 may be any conventional processors, such as commercially available CPUs. Alternatively, the one or more processors may be a dedicated device such as an ASIC or other hardware-based processor.
The memory 130 stores information accessible by the one or more processors 120, including instructions 132 and data 134 that may be executed or otherwise used by the processor 120. The memory 130 may be of any type capable of storing information accessible by the processor, including a computing device-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories. Systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media.
Memory 130 may store various models used by computing device 110 to make determinations on how to control vehicle 100. For example, memory 130 may store one or more object recognition models for identifying road users and objects detected from sensor data. For another example, memory 130 may store one or more behavior models for providing the probability of one or more actions being taken by a detected object. For another example, memory 130 may store one or more speed planning models for determining speed profiles for vehicle 100 based on map information and predicted trajectories of other road users detected by sensor data.
The instructions 132 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. In some implementations, the instructions 132 may include a plurality of modules 180, where each module may include one or more routines of a program that operates independent from other modules. Functions, methods and routines of the instructions are explained in more detail below.
The data 134 may be retrieved, stored or modified by processor 120 in accordance with the instructions 132. For instance, although the claimed subject matter is not limited by any particular data structure, the data may be stored in computing device registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files. The data may also be formatted in any computing device-readable format.
Although
Computing devices 110 may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user input 150 (e.g., a mouse, keyboard, touch screen and/or microphone) and various electronic displays (e.g., a monitor having a screen or any other electrical device that is operable to display information). In this example, the vehicle includes an internal electronic display 152 as well as one or more speakers 154 to provide information or audio visual experiences. In this regard, internal electronic display 152 may be located within a cabin of vehicle 100 and may be used by computing devices 110 to provide information to passengers within the vehicle 100.
Computing devices 110 may also include one or more wireless network connections 156 to facilitate communication with other computing devices, such as the client computing devices and server computing devices described in detail below. The wireless network connections may include short range communication protocols such as Bluetooth, Bluetooth low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, Wi-Fi and HTTP, and various combinations of the foregoing.
In one example, computing devices 110 may be an autonomous driving computing system incorporated into vehicle 100. The autonomous driving computing system may be capable of communicating with various components of the vehicle in order to maneuver vehicle 100 in a fully autonomous driving mode and/or semi-autonomous driving mode. For example, returning to
As an example, computing devices 110 may interact with deceleration system 160 and acceleration system 162 in order to control the speed of the vehicle. Similarly, steering system 164 may be used by computing devices 110 in order to control the direction of vehicle 100. For example, if vehicle 100 is configured for use on a road, such as a car or truck, the steering system may include components to control the angle of wheels to turn the vehicle. Signaling system 166 may be used by computing devices 110 in order to signal the vehicle's intent to other drivers or vehicles, for example, by lighting turn signals or brake lights when needed.
Navigation system 168 may be used by computing devices 110 in order to determine and follow a route to a location. For instance, the navigation system may function to generate routes between locations and plan trajectories for the vehicle in order to follow this route. Although depicted as a single system, the navigation system may actually comprise multiple systems to achieve the aforementioned routing and planning functions. In this regard, the navigation system 168 and/or data 134 may store detailed map information, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, vegetation, or other such objects and information.
In other words, this detailed map information may define the geometry of the vehicle's expected environment including roadways as well as speed restrictions (legal speed limits) for those roadways. Specifically, the map information may include a roadgraph defining the geometry of roadway features such as lanes, medians, curbs, crosswalks, etc. As an example, the roadgraph may include a plurality of points and/or line segments with connections to one another defining the geometry (e.g. size, shape, dimensions, and locations) of the aforementioned roadway features. The roadgraph may also include information which identifies how a vehicle is expected to travel in a given roadway, including direction (i.e., lawful direction of traffic in each lane), lane position, speed, etc. For instance, this map information may include information regarding traffic controls, such as traffic signal lights, stop signs, yield signs, etc. This information, in conjunction with real time information received from the perception system 172, can be used by the computing devices 110 to determine which directions of traffic are oncoming traffic lanes and/or have the right of way at a given location.
Lane portions 251A, 253A, and 255A of road 210 are on a first side of intersection 230, and lane portions 251B, 253B, and 255B of road 210 are on a second side of intersection 230 opposite the first side. Lane portions 252A, 254A, 256A, and 258A of road 220 are on a third side of intersection 230, and lane portions 252B, 254B, 256B, and 258B of road 220 are on a fourth side of intersection 230 opposite the third side. The lanes may be explicitly identified in the map information 200 as shown, or may be implied by the width of a road. Map information 200 may also identify bicycle lanes. As shown, map information 200 may also include stop lines 261 and 263 for road 210. Stop line 261 may be associated with a stop sign 265, and stop line 263 may be associated with a stop sign 267.
In addition to these features, the map information 200 may also include information that identifies the direction of traffic and speed limits for each lane as well as information that allows the computing device 110 to determine whether the vehicle has the right of way to complete a particular maneuver (e.g., to complete a turn or cross a lane of traffic or intersection). Map information 200 may further include information on traffic signs, such as traffic lights, stop signs, one-way sign, no-turn sign, etc. Map information 200 may include information about other environmental features such as curbs, buildings, parking lots, driveways, waterways, vegetation, etc.
Although the detailed map information is depicted herein as an image-based map, the map information need not be entirely image based (for example, raster). For example, the detailed map information may include one or more roadgraphs or graph networks of information such as roads, lanes, intersections, and the connections between these features. Each feature may be stored as graph data and may be associated with information such as a geographic location and whether or not it is linked to other related features, for example, a stop sign may be linked to a road and an intersection, etc. In some examples, the associated data may include grid-based indices of a roadgraph to allow for efficient lookup of certain roadgraph features.
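A roadgraph of the kind described above might be sketched as follows; the field names, linkage scheme, and grid cell size are illustrative assumptions rather than the disclosed schema.

```python
from dataclasses import dataclass, field

@dataclass
class RoadgraphFeature:
    """A roadway feature stored as graph data, linked to related features."""
    feature_id: str
    kind: str                      # e.g. "lane", "stop_sign", "intersection"
    location: tuple                # geographic position (illustrative x, y)
    linked_to: list = field(default_factory=list)  # ids of related features

def build_grid_index(features, cell_size=100.0):
    """Grid-based index of roadgraph features for efficient spatial lookup."""
    index = {}
    for f in features:
        cell = (int(f.location[0] // cell_size), int(f.location[1] // cell_size))
        index.setdefault(cell, []).append(f.feature_id)
    return index

# A stop sign linked to its road, as in the linkage example above.
stop = RoadgraphFeature("sign_265", "stop_sign", (120.0, 40.0),
                        linked_to=["road_210"])
index = build_grid_index([stop])
print(index)  # {(1, 0): ['sign_265']}
```

Looking up the grid cell containing a query point then returns only nearby feature ids, avoiding a scan over the whole roadgraph.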
The perception system 172 also includes one or more components for detecting objects external to the vehicle such as other road agents, obstacles in the roadway, traffic signals, signs, trees, etc. The other road agents may include cyclists, scooters, motorcycles, pedestrians, or runners. For example, the perception system 172 may include one or more imaging sensors including visible-light cameras, thermal imaging systems, laser and radio-frequency detection systems (e.g., LIDAR, RADAR, etc.), sonar devices, microphones, and/or any other detection devices that record data which may be processed by computing devices 110.
When detecting objects, the one or more imaging sensors of the perception system 172 may detect their characteristics and behaviors, such as location (longitudinal and latitudinal distance relative to the vehicle), orientation, size, shape, type, direction/heading, trajectory, lateral movement, speed of movement, acceleration, etc. The raw data from the sensors and/or the aforementioned characteristics can be quantified or arranged into a descriptive function or vector and sent for further processing to the computing devices 110. As an example, computing devices 110 may use the positioning system 170 to determine the vehicle's location and perception system 172 to detect and respond to objects when needed to follow a route or reach a destination safely.
In addition to the operations described above and illustrated in the figures, various operations will now be described. The computing device 110 may detect a road agent, such as a cyclist, in the vehicle's environment and adjust one or more systems of the autonomous vehicle 100 according to the detected road agent. For example, in
At block 402, the vehicle's computing devices 110 may detect a road agent and road agent behavior using the perception system 172. A vicinity of the vehicle 100 may be defined by ranges of the sensors and other detection systems of the perception system 172 of the vehicle 100. Sensor data obtained from the perception system 172 may include object data defining a cyclist 510. The vehicle's computing devices 110 may identify the road agent using the object data along with the characteristics of the road agent. For example, the road agent may be detected having a given location, pose, orientation, dimensions/size, shape, speed, direction/heading, trajectory, lateral movement, acceleration, or other positional characteristics. The characteristics may also include physical characteristics, such as an estimated age, size, hand signals, light signals, type of apparel, type of bicycle, etc. For example, certain age groups, such as children or elderly, may be associated with greater lateral gap preferences and slower reaction times. Certain vehicles, such as road bikes, may be associated with smaller lateral gap preferences and higher speeds.
In addition to detecting the road agent, the vehicle's computing devices 110 may also detect a plurality of objects in the vehicle's vicinity. For instance, sensor data from the perception system 172 may also include characteristics of each object, such as the object's size, shape, speed, orientation, direction, etc. The plurality of objects may include moving and/or stationary objects. In particular, the plurality of objects may include other road users, such as vehicles, bicycles, or pedestrians, may include other types of obstructions, such as buildings, posts, trees, or construction tools, or may include traffic features, such as lights, signs, lane lines, curbs, or rail tracks.
In scenario 500 depicted in
At block 404, the vehicle's computing devices 110 may initiate analysis of the road agent behavior based on the detected characteristics and behavior. For example, the analysis may be initiated when the road agent is predicted to interact with the autonomous vehicle. Interacting with the autonomous vehicle means factoring into the autonomous vehicle planning process or otherwise affecting the operation of the autonomous vehicle. The prediction may include that the road agent is projected to overlap laterally with or overtake the autonomous vehicle, a plurality of characteristics of a scenario that are associated with potential interaction with the autonomous vehicle are detected by the vehicle's computing devices, and/or a set of heuristics are satisfied by detected characteristics of the scenario. The prediction may also be made based on basic requirements, rules, or customs for road agents; for example, the prediction may take into account a lateral gap that at minimum satisfies the local regulatory rules and the operational design domain. When the road agent is predicted to interact with the autonomous vehicle, the analysis of the road agent behavior (which will be described below) may be initiated. The prediction may also include a threshold likelihood, such as a percentage or a score, for the interaction of the road agent with the autonomous vehicle in order to trigger the initiation of the analysis.
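The threshold-triggered initiation described above might be sketched as follows; the threshold value and the particular inputs are illustrative assumptions, not the disclosed implementation.

```python
# Assumed threshold likelihood for triggering the behavior analysis.
INTERACTION_THRESHOLD = 0.6

def should_initiate_analysis(interaction_likelihood: float,
                             projected_overlap_or_overtake: bool,
                             heuristics_satisfied: bool) -> bool:
    """Initiate analysis when the road agent is predicted to interact
    with the autonomous vehicle: a projected lateral overlap or overtake,
    satisfied scenario heuristics, or a likelihood above a threshold."""
    if projected_overlap_or_overtake or heuristics_satisfied:
        return True
    return interaction_likelihood >= INTERACTION_THRESHOLD

print(should_initiate_analysis(0.7, False, False))  # True
print(should_initiate_analysis(0.2, False, False))  # False
```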
In some examples, initiating analysis may include determining a future trajectory of the road agent based on a location of the road agent in relation to the autonomous vehicle, past or current heading of the road agent, or past or current speed of the road agent. For example, the analysis may be initiated when the future trajectory of the road agent overtakes the autonomous vehicle. Overtaking the autonomous vehicle may be characterized by the road agent traveling in a same direction/heading as the autonomous vehicle for a period of time, having a higher speed than the autonomous vehicle, and/or having an increasing amount of lateral overlap with the autonomous vehicle. In the scenario shown in
In other examples, initiating analysis may include using a machine learning model for predicting that the road agent will interact with the autonomous vehicle.
Once initiated, the analysis may be iterated continually until there is no longer potential interaction between the road agent and the autonomous vehicle. For example, there is no longer potential interaction when the road agent is outside a maximum distance from the autonomous vehicle, has turned onto a different street than the autonomous vehicle, or has parked.
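The overtake characterization and the conditions for ending the analysis might be sketched as below; the units, the heading tolerance, and the maximum distance are invented for illustration.

```python
# Illustrative heuristics only; headings in degrees, distances in meters,
# speeds in m/s, and all cutoff values are assumptions.
MAX_INTERACTION_DISTANCE = 50.0
HEADING_TOLERANCE_DEG = 15.0

def is_overtaking(agent_heading: float, av_heading: float,
                  agent_speed: float, av_speed: float,
                  lateral_overlap_trend: float) -> bool:
    """Overtake: same direction/heading as the autonomous vehicle, a
    higher speed, and an increasing amount of lateral overlap."""
    same_direction = abs(agent_heading - av_heading) < HEADING_TOLERANCE_DEG
    return same_direction and agent_speed > av_speed and lateral_overlap_trend > 0.0

def interaction_still_possible(distance: float, same_street: bool,
                               parked: bool) -> bool:
    """The analysis iterates until none of these conditions hold."""
    return distance <= MAX_INTERACTION_DISTANCE and same_street and not parked

print(is_overtaking(90.0, 92.0, 6.5, 4.0, 0.1))      # True
print(interaction_still_possible(60.0, True, False))  # False
```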
The analysis of the road agent behavior is for estimating a spacing profile of the road agent. The spacing profile includes a lateral gap preference and one or more predicted behaviors of the road agent related to changes in lateral spacing. In some implementations, the spacing profile further includes a predictability score for the road agent, which may be based on how much behavior to-date has matched previously predicted behavior and/or predictability of environmental factors.
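A spacing profile of this shape might be represented as follows; the field names and the 0-to-1 score scale are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SpacingProfile:
    """Estimated spacing profile of a road agent."""
    lateral_gap_preference: float                 # preferred lateral gap, meters
    predicted_behaviors: list = field(default_factory=list)  # e.g. "overtake"
    predictability_score: float = 0.5             # assumed 0 (low) to 1 (high)

def update_predictability(profile: SpacingProfile, matched: int, total: int) -> None:
    """Score the agent by how much behavior to-date has matched
    previously predicted behavior."""
    if total > 0:
        profile.predictability_score = matched / total

profile = SpacingProfile(lateral_gap_preference=1.2,
                         predicted_behaviors=["overtake"])
update_predictability(profile, matched=4, total=5)
print(profile.predictability_score)  # 0.8
```

Environmental-factor predictability, also mentioned above, could feed into the same score, e.g. as a weighted combination.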
At block 406, the vehicle's computing devices 110 may estimate a spacing profile of the road agent as part of the analysis. The estimation of the spacing profile may be based on an existing lateral gap between the autonomous vehicle and the road agent or changes to the lateral gap over time. The existing lateral gap and changes to the lateral gap may be determined using detected locations of the road agent. For example, the cyclist 510 in
Other factors for estimating the spacing profile include environment context, road agent gaze/awareness, and road agent characteristics. Environment context may be extracted from map information and may include lane width (since the narrower the lane the more comfortable the road agent might be with a smaller lateral gap), adjacent lane types or boundary types (since a cyclist in or next to a bike lane may be able to react more to the autonomous vehicle nudging in laterally), or speed limit (a basis on which to determine likely speeds of the road agent). Other environmental contexts may be detected using the perception system; for example, traffic density and traffic speed. In the scenario in
Road agent gaze may be detected using the perception system to track a direction and focus of eyes of a human associated with the road agent, and may be used to determine road agent awareness of the autonomous vehicle. Road agent awareness may be defined by how long or often a road agent has looked at the autonomous vehicle within the last few seconds or iterations. In the scenario in
Generalizations of preferences based on one or more physical characteristics of the road agent may also be used to determine the lateral gap preference and the predicted behaviors.
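Such generalizations might take the form of priors keyed on physical characteristics, as in the sketch below; the numeric values are invented for illustration, and real preferences would be estimated from observed behavior.

```python
# Purely illustrative lateral gap priors in meters; these numbers are
# assumptions, not disclosed values.
GAP_PRIOR_METERS = {
    "road_bike": 0.9,   # smaller gap preference, higher speeds
    "cyclist": 1.2,
    "scooter": 1.1,
    "pedestrian": 1.5,
}

def prior_gap_preference(agent_type: str, child_or_elderly: bool) -> float:
    """Generalize a lateral gap preference from physical characteristics."""
    gap = GAP_PRIOR_METERS.get(agent_type, 1.5)  # conservative default
    if child_or_elderly:
        gap += 0.5  # greater gap preference and slower reaction times
    return gap

print(prior_gap_preference("road_bike", False))  # 0.9
```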
Particularly with regard to predicted behaviors, these may include whether the road agent is trying to overtake or cut in front of the autonomous vehicle, which can be predicted based on a difference in speed between the road agent and the autonomous vehicle. Namely, if the autonomous vehicle is slower than the road agent, a predicted behavior may include an overtake maneuver, which may be associated with greater reactions and/or a smaller lateral gap preference. As described above, the cyclist 510 in
Predicted behaviors may be projected from detected behavior. Predicted behaviors may also be based on predicted autonomous vehicle maneuvers and on probable reactions to those maneuvers. For example, in order to maintain a preferred or default lateral gap for a current trajectory of the road agent, it may be extrapolated that the autonomous vehicle would have to nudge into an oncoming lane with traffic. The system may assume that the road agent is aware that the vehicle nudging into the oncoming lane with traffic is unlikely, and may therefore determine that the road agent would be comfortable with a smaller lateral gap with the autonomous vehicle. In the scenario in
At block 408, the vehicle's computing devices 110 may determine an autonomous vehicle maneuver based on the estimated spacing profile. To perform the determination, the vehicle's computing devices may update one or more constraints based on the spacing profile. For example, constraints, such as those related to a vehicle's lateral gap preference, may be added, moved, altered/tuned, or removed. Some constraints may include a permeability feature that may be updated. The constraints may be updated so that the effective vehicle's lateral gap preference may be updated to more closely match the road agent's lateral gap preference, which is determined from real world data in a same or similar manner as described above and may be updated in a same or similar manner in real time. The permeability of a speed constraint may be based on the predictability score of the road agent or of a particular road agent behavior. In particular, a higher permeability may be set originally for a merge constraint based on the road agent behavior because, at the beginning of the overtake maneuver, there is a lower predictability as to when the overtake by the road agent will occur. The merge constraint may be a predicted location for the autonomous vehicle to yield to the road agent. This can allow for the autonomous vehicle to slow down less initially and make more forward progress. In addition, based on a higher predictability score, there may be fewer or more tailored constraints. Based on a lower predictability score, there may be more or more generalized constraints. In some implementations, the one or more constraints may also be updated based on a predictability score of the autonomous vehicle. The vehicle's predictability score may be based on confidence levels with respect to the reactions of the autonomous vehicle to other road objects or road agents, the detection system of the autonomous vehicle, or other factors for maneuvering the autonomous vehicle. 
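The permeability update for a merge constraint might be sketched as below; mapping permeability inversely to the predictability score is an assumption made for illustration.

```python
# Hedged sketch: a merge (yield) constraint whose permeability is set
# from the road agent's predictability score; the inverse mapping and
# the dict representation are assumptions.
def update_merge_constraint(predictability_score: float) -> dict:
    """Lower predictability of when the overtake will occur yields a
    more permeable constraint, so the vehicle slows down less initially
    and makes more forward progress."""
    permeability = max(0.0, min(1.0, 1.0 - predictability_score))
    return {"type": "merge", "permeability": permeability}

print(update_merge_constraint(0.25))  # {'type': 'merge', 'permeability': 0.75}
```

As the overtake becomes imminent and the score rises, the same mapping hardens the constraint.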
The constraints may further change as the road agent trajectory and behavior indicate that the overtake is imminent.
The one or more constraints may then be used in the determination of one or more components of a vehicle maneuver, such as speed, path, or route. In the case of lower predictability score of the road agent or the autonomous vehicle, the vehicle's computing devices may determine a slower speed or more cautious maneuvers according to the one or more constraints. In the case of higher predictability score of the road agent or the autonomous vehicle, the vehicle's computing devices may determine a specific path that takes into account the likely route of the road agent and efficiently navigates through the route. Particularly in the case of a high predictability for an imminent overtake maneuver by the road agent, the vehicle's computing devices may determine a slower speed to prepare to yield to the road agent, as needed. In some implementations, the determination of one or more components includes selecting a component setting that better satisfies the one or more constraints.
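The speed-component determination described above might look like the following; the predictability cutoff and the scaling factors are invented for illustration.

```python
# Illustrative speed-component selection; the 0.5 cutoff and the scale
# factors are assumptions, not disclosed values.
def plan_speed(base_speed: float, predictability_score: float,
               overtake_imminent: bool) -> float:
    """Slower, more cautious speed for a lower predictability score;
    slower still to prepare to yield when an overtake is imminent."""
    speed = base_speed
    if predictability_score < 0.5:      # assumed "lower predictability" cutoff
        speed *= 0.75                   # more cautious maneuvering
    if overtake_imminent:
        speed = min(speed, base_speed * 0.5)  # prepare to yield as needed
    return speed

print(plan_speed(10.0, 0.9, False))  # 10.0
print(plan_speed(10.0, 0.3, True))   # 5.0
```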
In some cases, the resulting one or more components may also be used in other calculations for a next iteration of the analysis, such as an overlap calculation or lateral gap calculation. For instance, the road agent's and/or vehicle's lateral gap preference may be used to update a projected trajectory of the road agent, and in turn update the vehicle maneuver in response to the updated projected trajectory.
System-wise, a first module may be configured for the estimation of the spacing profile, and a separate second module may be configured for the determination of the vehicle maneuver. The separation of the two functions allows components for one module to be designed and shipped separately from the other. As such, the second module may be designed to determine merge constraints with or without a spacing profile, but determine more conservative constraints in the absence of a spacing profile received from the first module.
At block 410, the vehicle's computing devices 110 may execute the determined vehicle maneuver by controlling one or more operational systems of the autonomous vehicle accordingly. The vehicle's computing devices 110 may send instructions to one or more operational systems of the vehicle 100, including the deceleration system 160, acceleration system 162, and steering system 164. In some implementations, a third module separate from the first and second modules may be configured to receive control instructions from the second module and/or execute the control instructions.
In some alternative implementations, determining the autonomous vehicle maneuver may include determining how conservative the autonomous vehicle should be while sharing lateral space with the road agent based on the estimated spacing profile. Greater conservativeness may be associated with a low predictability, with high certainty of more aggressive road agent behavior (such as a cut-in maneuver), or with a high complexity of a vehicle maneuver based on world context. This conservativeness may be a separate determination, such as a conservativeness level or score, or may be intrinsically part of the determination for the autonomous vehicle maneuver.
In further implementations, a same or similar method may be applied to other types of road agents. In some cases, spacing profiles for more than one road agent may be determined in parallel and used in determining the vehicle maneuver.
The technology herein may allow for a smoother and safer trip in an autonomous vehicle for a passenger. In particular, more forward progress in the autonomous vehicle may be made than without the navigation system described above. The technology may also minimize unnecessary overreactions and underreactions by the autonomous vehicle to smaller lateral gaps with which road agents are comfortable while simultaneously maintaining required levels of safety and compliance.
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
The present application claims the benefit of the filing date of U.S. Provisional Application No. 63/236,541, filed Aug. 24, 2021, the entire disclosure of which is incorporated by reference herein.