This disclosure relates generally to environmental perception, and more particularly to automated determination of relative importance of actors in the environment.
Various systems, such as autonomous vehicles (AVs), may sense objects in an environment based on various sensor data. Such systems may include various computing components for locally determining and responding to aspects of the environment. For example, the AV may detect objects in the environment, particularly moving objects, and control navigation of the AV based on the detected objects. The computing components on the AV are generally resource- and time-constrained, such that the AV may apply only a limited amount of processing for detecting objects and modifying its intended navigation within each processing cycle, also termed a “tick.” Meanwhile, different types of environments vary in complexity and may include different numbers and types of objects having various movement characteristics. The processing and other resource capacity of the AV may mean that it cannot fully assess all the detected objects in the environment (also termed “actors”) within a tick. To focus resource usage of the AV (or another system evaluating the environment), the AV may benefit from improved approaches for ranking the relative importance of the actors in the environment so that resources may be effectively directed to actors of higher importance and actors of relatively lower importance may be deprioritized.
The various advantages and features of the present technology will become apparent by reference to specific implementations illustrated in the appended drawings. A person of ordinary skill in the art will understand that these drawings only show some examples of the present technology and would not limit the scope of the present technology to these examples. Furthermore, the skilled artisan will appreciate the principles of the present technology as described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject technology. However, it will be clear and apparent that the subject technology is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
To improve use of the AV's limited computing resources, an actor importance model may be trained to generate a ranked list of the actors perceived within an environment. An “actor” is a detected object within the environment separate from background elements of the environment (i.e., stationary features of the environment, such as buildings, roads, fences, etc.). Actors thus include objects that may move (or are currently moving) within the environment. To determine the relative importance of the various actors in the environment, the AV may apply the actor importance model, which ranks the detected actors in the environment. The ranked actors may then be used to affect how the actors are further processed for further perception, planning, or other tasks by the AV. The actor importance model may be a relatively “lightweight” computer model, including neural network layers and other components, for determining the relative importance of the actors within the resources available for a “tick” of a perception stack of the AV while allowing further processing within the tick based on the rankings. As such, the actor importance model may be considered to guide, filter, or direct which actors receive further processing and evaluation based on their relative importance.
In one embodiment, the actor importance model characterizes each actor in a set of actors with a set of actor features, which may include the position of the actor and the relative distance of the actor with respect to the expected AV movement (e.g., the AV's intent) at one or more times. The AV intent may be similarly characterized to describe the planned movement of the AV across one or more times. The actor features and the AV intent may be used in the actor importance model to generate a representation of the scene (e.g., a scene embedding) describing the overall expected movement of the AV and the overall set of actors.
To determine the ranking of the individual actors, the actor features may be evaluated with respect to the scene representation to determine scores for each actor with respect to the scene, which may then be ordered to determine the ranking. In one embodiment, the scene is represented as a scene embedding and each actor is represented by a respective actor embedding. In one embodiment, the actor importance model includes various models for characterizing the actors and scene as embeddings. These models may be multi-layer perceptrons (MLPs) and generate embeddings for respectively representing the actors, AV intent, and scene. In one embodiment, as the number of actors in the scene is variable, a joint actor embedding is determined based on the actor embeddings and in one example is a sum of the actor embeddings. Then, in one embodiment, the scene embedding is determined from the joint actor embedding and the AV embedding (characterizing the AV intent). The joint actor embedding (formed from the actor embeddings) may thus allow the scene embedding to characterize the AV intent in combination with any number of actors detected in the scene.
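As an illustrative, non-limiting sketch of one way such an arrangement could be implemented (the module names, layer widths, and embedding sizes below are assumptions for illustration only, not requirements of any embodiment), the embedding models may be expressed as small MLPs whose outputs are combined as described above:

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden_dim, out_dim):
    # Small multi-layer perceptron used for each embedding model in this sketch.
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                         nn.Linear(hidden_dim, out_dim))

class ActorImportanceSketch(nn.Module):
    # Hypothetical arrangement of the embedding models; names and sizes are illustrative.
    def __init__(self, actor_dim, av_dim, embed_dim=64):
        super().__init__()
        self.actor_embedder = mlp(actor_dim, 128, embed_dim)      # per-actor embedding
        self.av_embedder = mlp(av_dim, 128, embed_dim)            # AV-intent embedding
        self.scene_embedder = mlp(2 * embed_dim, 128, embed_dim)  # scene from joint actor + AV
        self.scorer = mlp(2 * embed_dim, 128, 1)                  # actor-vs-scene importance score

    def forward(self, actor_feats, av_feats):
        # actor_feats: (N, actor_dim) per-actor features (e.g., flattened across times);
        # av_feats: (av_dim,) features describing the AV intent.
        actor_emb = self.actor_embedder(actor_feats)              # (N, embed_dim)
        av_emb = self.av_embedder(av_feats)                       # (embed_dim,)
        joint_actor_emb = actor_emb.sum(dim=0)                    # order-invariant combination
        scene_emb = self.scene_embedder(torch.cat([joint_actor_emb, av_emb]))
        # Score each actor embedding against the (constant) scene embedding.
        scene_rep = scene_emb.expand(actor_emb.shape[0], -1)
        scores = self.scorer(torch.cat([actor_emb, scene_rep], dim=1)).squeeze(-1)
        ranking = torch.argsort(scores, descending=True)          # actor indices by importance
        return scores, ranking
```

In this sketch, summing the per-actor embeddings yields a single joint representation regardless of how many actors are detected, which is what allows the scene embedding to be formed for a variable number of actors.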
To train parameters of the actor importance model, a training loss may be determined based on the ranking predicted by the actor importance model relative to a training ranking of the actors for a set of training data. Because the number of actors in a particular scene may vary, the training loss may be based on the relative ordering of actors in the predicted ranking compared to the training ranking. As such, the model may learn whether actors were ranked correctly relative to one another, rather than relative to an absolute position, which may differ significantly when training instances contain different numbers of actors. In one embodiment, the error is based on a pairwise comparison of whether the relative ordering of actors (e.g., a pair of actors including a first actor and a second actor) was correctly ranked in the predicted ranking compared to the training ranking. In one embodiment, the final error is a symmetric linear combination of the pairwise errors; the error may also be a sum of the pairwise errors.
As another example, the training loss may also be based on a threshold, such that the training loss may discount or disregard an error cost based on an actor's ranking relative to the threshold. In some embodiments, the threshold may be based, e.g., on resources of the computing device and further processing of the AV perception stack. For example, in some examples the perception stack may be capable of executing further object detection and/or movement prediction for up to 10 actors, such that the training loss threshold may be set at a ranking of 10 to prioritize errors around the rank at which further processing is either applied or foregone and to discount errors for actors below the threshold. That is, the error attributable to actors below the threshold may be reduced or eliminated to focus training on the actors above the threshold. Similarly, when actors are above the threshold (e.g., mis-ordered but both above the threshold), the training error for these actors may be reduced to encourage the training to focus on the errors around the threshold. By applying a relative error and, optionally, focusing error with respect to a threshold, the actor importance model may learn to generate rankings for a dynamic number of actors and focus its parameters on the rankings that may be most important for downstream decisions.
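For illustration, a pairwise training loss with an optional threshold-based discount might be sketched as follows; the logistic form of the pairwise penalty, the function name, and the specific weighting values are illustrative assumptions rather than a required formulation:

```python
import torch

def pairwise_ranking_loss(scores, true_rank, threshold=10, off_focus_weight=0.1):
    # scores: predicted importance scores, shape (N,); higher = more important.
    # true_rank: training rank for each actor, shape (N,); 0 = most important.
    # For every pair where actor i should outrank actor j, penalize the model
    # (via a logistic penalty) when score_i does not exceed score_j.
    n = scores.shape[0]
    i, j = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    should_outrank = (true_rank[i] < true_rank[j]).float()   # i above j in the training rank
    margin = scores[i] - scores[j]
    pair_loss = torch.nn.functional.softplus(-margin)        # small when score_i >> score_j
    # Optional threshold focus: pairs entirely above or entirely below the cutoff matter
    # less, since mis-ordering them does not change which actors pass the cutoff.
    same_side = (true_rank[i] < threshold) == (true_rank[j] < threshold)
    weight = torch.where(same_side, torch.full_like(pair_loss, off_focus_weight),
                         torch.ones_like(pair_loss))
    return (pair_loss * weight * should_outrank).sum()
```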
In this example, the AV management system 100 includes an AV 102, a data center 150, and a client computing device 170. The AV 102, the data center 150, and the client computing device 170 can communicate with one another over one or more networks (not shown), such as a public network (e.g., the Internet, an Infrastructure as a Service (IaaS) network, a Platform as a Service (PaaS) network, a Software as a Service (SaaS) network, another Cloud Service Provider (CSP) network, etc.), a private network (e.g., a Local Area Network (LAN), a private cloud, a Virtual Private Network (VPN), etc.), and/or a hybrid network (e.g., a multi-cloud or hybrid cloud network, etc.).
AV 102 can navigate about roadways without a human driver based on sensor signals generated by multiple sensor systems 104, 106, and 108. The sensor systems 104-108 can include different types of sensors and can be arranged about the AV 102. For instance, the sensor systems 104-108 can comprise Inertial Measurement Units (IMUs), cameras (e.g., still image cameras, video cameras, etc.), light sensors (e.g., LIDAR systems, ambient light sensors, infrared sensors, etc.), RADAR systems, a Global Navigation Satellite System (GNSS) receiver (e.g., Global Positioning System (GPS) receivers), audio sensors (e.g., microphones, Sound Navigation and Ranging (SONAR) systems, ultrasonic sensors, etc.), engine sensors, speedometers, tachometers, odometers, altimeters, tilt sensors, impact sensors, airbag sensors, seat occupancy sensors, open/closed door sensors, tire pressure sensors, rain sensors, and so forth. For example, the sensor system 104 can be a camera system, the sensor system 106 can be a LIDAR system, and the sensor system 108 can be a RADAR system. Other embodiments may include any other number and type of sensors.
AV 102 can also include several mechanical systems that can be used to maneuver or operate the AV 102. For instance, the mechanical systems can include a vehicle propulsion system 130, a braking system 132, a steering system 134, a safety system 136, and a cabin system 138, among other systems. The vehicle propulsion system 130 can include an electric motor, an internal combustion engine, or both. The braking system 132 can include an engine brake, a wheel braking system (e.g., a disc braking system that utilizes brake pads), hydraulics, actuators, and/or any other suitable componentry configured to assist in decelerating the AV 102. The steering system 134 can include suitable componentry configured to control the direction of movement of the AV 102 during navigation. The safety system 136 can include lights and signal indicators, a parking brake, airbags, and so forth. The cabin system 138 can include cabin temperature control systems, in-cabin entertainment systems, and so forth. In some embodiments, the AV 102 may not include human driver actuators (e.g., steering wheel, handbrake, foot brake pedal, foot accelerator pedal, turn signal lever, window wipers, etc.) for controlling the AV 102. Instead, the cabin system 138 can include one or more client interfaces (e.g., Graphical User Interfaces (GUIs), Voice User Interfaces (VUIs), etc.) for controlling certain aspects of the mechanical systems 130-138.
The AV 102 can additionally include a local computing device 110 that is in communication with the sensor systems 104-108, the mechanical systems 130-138, the data center 150, and the client computing device 170, among other systems. The local computing device 110 can include one or more processors and memory, including instructions that can be executed by the one or more processors. The instructions can make up one or more software stacks or components responsible for controlling the AV 102; communicating with the data center 150, the client computing device 170, and other systems; receiving inputs from riders, passengers, and other entities within the AV 102's environment; logging metrics collected by the sensor systems 104-108; and so forth. In this example, the local computing device 110 includes a perception stack 112, a mapping and localization stack 114, a planning stack 116, a control stack 118, a communications stack 120, a geospatial database 122, and an AV operational database 124, among other stacks and systems.
The perception stack 112 can enable the AV 102 to “see” (e.g., via cameras, LIDAR sensors, infrared sensors, etc.), “hear” (e.g., via microphones, ultrasonic sensors, RADAR, etc.), and “feel” (e.g., pressure sensors, force sensors, impact sensors, etc.) its environment using information from the sensor systems 104-108, the mapping and localization stack 114, the geospatial database 122, other components of the AV 102, and other data sources (e.g., the data center 150, the client computing device 170, third-party data sources, etc.). The perception stack 112 can detect and classify objects and determine their current and predicted locations, speeds, directions, and the like. In addition, the perception stack 112 can determine the free space around the AV 102 (e.g., to maintain a safe distance from other objects, change lanes, park the AV, etc.). The perception stack 112 can also identify environmental uncertainties, such as where to look for moving objects, flag areas that may be obscured or blocked from view, and so forth.
The mapping and localization stack 114 can determine the AV's position and orientation (pose) using different methods from multiple systems (e.g., GPS, IMUs, cameras, LIDAR, RADAR, ultrasonic sensors, the geospatial database 122, etc.). For example, in some embodiments, the AV 102 can compare sensor data captured in real-time by the sensor systems 104-108 to data in the geospatial database 122 to determine its precise (e.g., accurate to the order of a few centimeters or less) position and orientation. The AV 102 can focus its search based on sensor data from one or more first sensor systems (e.g., GPS) by matching sensor data from one or more second sensor systems (e.g., LIDAR). If the mapping and localization information from one system is unavailable, the AV 102 can use mapping and localization information from a redundant system and/or from remote data sources.
The planning stack 116 can determine how to maneuver or operate the AV 102 safely and efficiently in its environment. For example, the planning stack 116 can receive the location, speed, and direction of the AV 102, geospatial data, data regarding objects sharing the road with the AV 102 (e.g., pedestrians, bicycles, vehicles, ambulances, buses, cable cars, trains, traffic lights, lanes, road markings, etc.) or certain events occurring during a trip (e.g., an Emergency Vehicle (EMV) blaring a siren, intersections, occluded areas, street closures for construction or street repairs, Double-Parked Vehicles (DPVs), etc.), traffic rules and other safety standards or practices for the road, user input, and other relevant data for directing the AV 102 from one point to another. The planning stack 116 can determine multiple sets of one or more mechanical operations that the AV 102 can perform (e.g., go straight at a specified speed or rate of acceleration, including maintaining the same speed or decelerating; turn on the left blinker, decelerate if the AV is above a threshold range for turning, and turn left; turn on the right blinker, accelerate if the AV is stopped or below the threshold range for turning, and turn right; decelerate until completely stopped and reverse; etc.), and select the best one to meet changing road conditions and events. If something unexpected happens, the planning stack 116 can select from multiple backup plans to carry out. For example, while preparing to change lanes to turn right at an intersection, another vehicle may aggressively cut into the destination lane, making the lane change unsafe. The planning stack 116 could have already determined an alternative plan for such an event, and upon its occurrence, help to direct the AV 102 to go around the block instead of blocking a current lane while waiting for an opening to change lanes.
The control stack 118 can manage the operation of the vehicle propulsion system 130, the braking system 132, the steering system 134, the safety system 136, and the cabin system 138. The control stack 118 can receive sensor signals from the sensor systems 104-108 as well as communicate with other stacks or components of the local computing device 110 or a remote system (e.g., the data center 150) to effectuate operation of the AV 102. For example, the control stack 118 can implement the final path or actions from the multiple paths or actions provided by the planning stack 116. This can involve turning the routes and decisions from the planning stack 116 into commands for the actuators that control the AV's steering, throttle, brake, and drive unit.
The communications stack 120 can transmit and receive signals between the various stacks and other components of the AV 102 and between the AV 102, the data center 150, the client computing device 170, and other remote systems. The communications stack 120 can enable the local computing device 110 to exchange information remotely over a network, such as through an antenna array or interface that can provide a metropolitan WIFI® network connection, a mobile or cellular network connection (e.g., Third Generation (3G), Fourth Generation (4G), Long-Term Evolution (LTE), 5th Generation (5G), etc.), and/or other wireless network connection (e.g., License Assisted Access (LAA), Citizens Broadband Radio Service (CBRS), MULTEFIRE, etc.). The communications stack 120 can also facilitate local exchange of information, such as through a wired connection (e.g., a user's mobile computing device docked in an in-car docking station or connected via Universal Serial Bus (USB), etc.) or a local wireless connection (e.g., Wireless Local Area Network (WLAN), BLUETOOTH®, infrared, etc.).
The geospatial database 122 can store maps and related data of the streets upon which the AV 102 travels. The data stored in the geospatial database 122 may describe spatial information at a relatively “high” resolution (e.g., with “high-definition” (HD) mapping data) relative to the perception and size of components of the AV 102. In some embodiments, the maps and related data can comprise multiple layers, such as an areas layer, a lanes and boundaries layer, an intersections layer, a traffic controls layer, and so forth. The areas layer can include geospatial information indicating geographic areas that are drivable (e.g., roads, parking areas, shoulders, etc.) or not drivable (e.g., medians, sidewalks, buildings, etc.), drivable areas that constitute links or connections (e.g., drivable areas that form the same road) versus intersections (e.g., drivable areas where two or more roads intersect), and so on. The lanes and boundaries layer can include geospatial information of road lanes (e.g., lane or road centerline, lane boundaries, type of lane boundaries, etc.) and related attributes (e.g., direction of travel, speed limit, lane type, etc.). The lanes and boundaries layer can also include 3D (three-dimensional) attributes related to lanes (e.g., slope, elevation, curvature, etc.). The intersections layer can include geospatial information of intersections (e.g., crosswalks, stop lines, turning lane centerlines, and/or boundaries, etc.) and related attributes (e.g., permissive, protected/permissive, or protected only left turn lanes; permissive, protected/permissive, or protected only U-turn lanes; permissive or protected only right turn lanes; etc.). The traffic controls layer can include geospatial information of traffic signal lights, traffic signs, and other road objects and related attributes.
The AV operational database 124 can store raw AV data generated by the sensor systems 104-108 and other components of the AV 102 and/or data received by the AV 102 from remote systems (e.g., the data center 150, the client computing device 170, etc.). In some embodiments, the raw AV data can include LIDAR point cloud data, image or video data, RADAR data, GPS data, and other sensor data that the data center 150 can use for creating or updating AV geospatial data as discussed further below with respect to
The data center 150 can be a private cloud (e.g., an enterprise network, a co-location provider network, etc.), a public cloud (e.g., an IaaS network, a PaaS network, a SaaS network, or other CSP network), a hybrid cloud, a multi-cloud, and so forth. The data center 150 can include one or more computing devices remote to the local computing device 110 for managing a fleet of AVs and AV-related services. For example, in addition to managing the AV 102, the data center 150 may also support a ridesharing service, a delivery service, a remote/roadside assistance service, street services (e.g., street mapping, street patrol, street cleaning, street metering, parking reservation, etc.), and the like.
The data center 150 can send and receive various signals to and from the AV 102 and the client computing device 170. These signals can include sensor data captured by the sensor systems 104-108, roadside assistance requests, software updates, ridesharing pick-up and drop-off instructions, and so forth. In this example, the data center 150 includes one or more of a data management platform 152, an Artificial Intelligence/Machine-Learning (AI/ML) platform 154, a simulation platform 156, a remote assistance platform 158, a ridesharing platform 160, and a map management platform 162, among other systems.
The data management platform 152 can be a “big data” system capable of receiving and transmitting data at high speeds (e.g., near real-time or real-time), processing a large variety of data, and storing large volumes of data (e.g., terabytes, petabytes, or more of data). The varieties of data can include data having different structures (e.g., structured, semi-structured, unstructured, etc.), data of different types (e.g., sensor data, mechanical system data, ridesharing service data, map data, audio data, video data, etc.), data associated with different types of data stores (e.g., relational databases, key-value stores, document databases, graph databases, column-family databases, data analytic stores, search engine databases, time series databases, object stores, file systems, etc.), data originating from different sources (e.g., AVs, enterprise systems, social networks, etc.), data having different rates of change (e.g., batch, streaming, etc.), or data having other heterogeneous characteristics. The various platforms and systems of the data center 150 can access data stored by the data management platform 152 to provide their respective services.
The AI/ML platform 154 can provide the infrastructure for training and evaluating machine-learning algorithms for operating the AV 102, the simulation platform 156, the remote assistance platform 158, the ridesharing platform 160, the map management platform 162, and other platforms and systems. Using the AI/ML platform 154, data scientists can prepare data sets from the data management platform 152; select, design, and train machine-learning models; evaluate, refine, and deploy the models; maintain, monitor, and retrain the models; and so on.
The simulation platform 156 can enable testing and validation of the algorithms, machine-learning models, neural networks, and other development efforts for the AV 102, the remote assistance platform 158, the ridesharing platform 160, the map management platform 162, and other platforms and systems. The simulation platform 156 can replicate a variety of driving environments and/or reproduce real-world scenarios from data captured by the AV 102, including rendering geospatial information and road infrastructure (e.g., streets, lanes, crosswalks, traffic lights, stop signs, etc.) obtained from the map management platform 162; modeling the behavior of other vehicles, bicycles, pedestrians, and other dynamic elements; simulating inclement weather conditions, different traffic scenarios; and so on.
The remote assistance platform 158 can generate and transmit instructions regarding the operation of the AV 102. For example, in response to an output of the AI/ML platform 154 or other system of the data center 150, the remote assistance platform 158 can prepare instructions for one or more stacks or other components of the AV 102.
The ridesharing platform 160 can interact with a customer of a ridesharing service via a ridesharing application 172 executing on the client computing device 170. The client computing device 170 can be any type of computing system, including a server, desktop computer, laptop, tablet, smartphone, a smart wearable device (e.g., smart watch; smart eyeglasses or other Head-Mounted Display (HMD); smart ear pods or other smart in-ear, on-ear, or over-ear device; etc.), gaming system, or other general purpose computing device for accessing the ridesharing application 172. The client computing device 170 can be a customer's mobile computing device or a computing device integrated with the AV 102 (e.g., the local computing device 110). The ridesharing platform 160 can receive requests to be picked up or dropped off from the ridesharing application 172 and dispatch the AV 102 for the trip.
The map management platform 162 can provide a set of tools for the manipulation and management of geographic and spatial (geospatial) and related attribute data. The data management platform 152 can receive LIDAR point cloud data, image data (e.g., still image, video, etc.), RADAR data, GPS data, and other sensor data (e.g., raw data) from one or more AVs 402, Unmanned Aerial Vehicles (UAVs), satellites, third-party mapping services, and other sources of geospatially-referenced data. The raw data can be processed, and map management platform 162 can render base representations (e.g., tiles (2D), bounding volumes (3D), etc.) of the AV geospatial data to enable users to view, query, label, edit, and otherwise interact with the data. The map management platform 162 can manage workflows and tasks for operating on the AV geospatial data. The map management platform 162 can control access to the AV geospatial data, including granting or limiting access to the AV geospatial data based on user-based, role-based, group-based, task-based, and other attribute-based access control mechanisms. The map management platform 162 can provide version control for the AV geospatial data, such as to track specific changes that (human or machine) map editors have made to the data and to revert changes when necessary. The map management platform 162 can administer release management of the AV geospatial data, including distributing suitable iterations of the data to different users, computing devices, AVs, and other consumers of HD maps. The map management platform 162 can provide analytics regarding the AV geospatial data and related data, such as to generate insights relating to the throughput and quality of mapping tasks.
In some embodiments, the map viewing services of the map management platform 162 can be modularized and deployed as part of one or more of the platforms and systems of the data center 150. For example, the AI/ML platform 154 may incorporate the map viewing services for visualizing the effectiveness of various object detection or object classification models, the simulation platform 156 may incorporate the map viewing services for recreating and visualizing certain driving scenarios, the remote assistance platform 158 may incorporate the map viewing services for replaying traffic incidents to facilitate and coordinate aid, the ridesharing platform 160 may incorporate the map viewing services into the ridesharing application 172 on the client computing device 170 to enable passengers to view the AV 102 in transit en route to a pick-up or drop-off location, and so on.
The neural network 200 is a multi-layer neural network of interconnected nodes. Each node may represent data as it is processed from one layer to another according to parameters of the model. For example, in many types of networks, each layer is generated by applying weights to values of a prior layer and in some embodiments may be summed to generate the value for a node of a current layer. Depending on the type of neural network 200, different types of network layers may be used, such as fully-connected layers, regularization layers, recurrent layers, etc. In some embodiments, information associated with the nodes is shared among the different layers and each layer retains information as information is processed. In some cases, the neural network 200 can include a feed-forward network, in which data is fed forward through the layers from the input to output. In some cases, the neural network 200 can include recurrent elements, which can include loops that allow information to be carried across nodes while reading in input.
Information can be exchanged between nodes through node-to-node interconnections between the various layers. Nodes of the input layer 220 can activate a set of nodes in the first hidden layer 222A. For example, as shown, each of the input nodes of the input layer 220 is connected to each of the nodes of the first hidden layer 222A. The nodes of the first hidden layer 222A can transform the information of each input node by applying various parameters (e.g., weights, activation functions, etc.) to the input node information. The information derived from the transformation can then be passed to and can activate the nodes of the next hidden layer 222B, which can perform their own designated functions. Example functions include convolutional, up-sampling, data transformation, and/or any other suitable functions. The output of the hidden layer 222B can then activate nodes of the next hidden layer, and so on. The output of the last hidden layer 222N can activate one or more nodes of the output layer 221, which provides the output of the neural network. In some cases, while nodes in the neural network 200 are shown as having multiple output lines, a node can have a single output, and all lines shown as being output from a node represent the same output value.
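As a brief illustrative sketch (the layer widths are arbitrary placeholders), such a feed-forward arrangement may be expressed as a stack of fully-connected layers in which each layer's output activates the next:

```python
import torch
import torch.nn as nn

# A small feed-forward network: an input layer, several hidden layers, and an
# output layer. Layer widths here are arbitrary placeholders for illustration.
feed_forward = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(32, 32), nn.ReLU(),   # hidden layer -> next hidden layer
    nn.Linear(32, 4),               # last hidden layer -> output layer
)

output = feed_forward(torch.randn(1, 16))   # forward pass on one example input
```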
In some cases, each node or interconnection between nodes can have a set of parameters (e.g., weights) derived from the training of the neural network 200. Once the neural network 200 is trained, it may be applied to new data to predict an output for the new data based on the parameters learned in training from the training data. As such, an interconnection between nodes can represent a piece of information learned about the relationship between the interconnected nodes (e.g., a parameter describing the relationship between them). The interconnection can have a tunable numeric weight that can be learned (e.g., based on a training dataset), allowing the neural network 200 to be trained to output data based on the training data.
In some cases, the neural network 200 can adjust the parameters of the nodes using a training process called backpropagation. A backpropagation process can include a forward pass, a loss function, a backward pass, and a weight update. The forward pass, loss function, backward pass, and parameter/weight update are performed for each training iteration. The process can be repeated for a certain number of iterations for each set of training data until the neural network 200 is trained well enough so that the weights of the layers are accurately tuned.
To perform training, a loss function can be used to analyze error in the output. Any suitable loss function may be used to describe the “error” of the output of the neural network 200 (given its current parameters) with respect to the desired output (e.g., for a supervised training process, the known output values for the training data). Example training losses may include Cross-Entropy loss or mean squared error (MSE).
The loss (or error) will be high for the initial training data since the output values of the model will be much different from the desired (training) output. The goal of training is to minimize the amount of loss so that the predicted output is the same as the training output. The neural network 200 can perform a backward pass by determining which parameters most contributed to the loss of the network and adjusting the weights so that the loss decreases. Various approaches may be used in training to modify the parameters, such as gradient descent.
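A minimal training-step sketch may illustrate the forward pass, loss computation, backward pass, and weight update described above; the mean-squared-error loss and plain stochastic gradient descent used here are stand-in choices, not requirements of any embodiment:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # gradient descent on the weights
loss_fn = nn.MSELoss()                                     # example loss; cross-entropy is another option

def training_step(inputs, targets):
    optimizer.zero_grad()
    outputs = model(inputs)            # forward pass
    loss = loss_fn(outputs, targets)   # compare output to the desired (training) output
    loss.backward()                    # backward pass: attribute the loss to each parameter
    optimizer.step()                   # weight update: adjust parameters to reduce the loss
    return loss.item()

# Example usage with random placeholder data:
# loss_value = training_step(torch.randn(8, 16), torch.randn(8, 4))
```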
The neural network 200 can include any suitable deep network. One example includes a Convolutional Neural Network (CNN), which includes an input layer and an output layer with multiple hidden layers between the input and output layers. The hidden layers of a CNN include a series of convolutional, nonlinear, pooling (for downsampling), and fully-connected layers. The neural network 200 can also include any other deep network other than a CNN, such as a fully-connected network, an MLP, an autoencoder, Deep Belief Nets (DBNs), or Recurrent Neural Networks (RNNs), among others.
As understood by those of skill in the art, computer model (also described herein as machine-learning) architectures can vary depending on the desired implementation. For example, computer model architectures can utilize one or more of the following, alone or in combination: hidden Markov models; RNNs; CNNs; deep learning; Bayesian symbolic methods; Generative Adversarial Networks (GANs); support vector machines; image registration methods; and applicable rule-based systems. Where regression algorithms are used, they may include but are not limited to: a Stochastic Gradient Descent Regressor, a Passive Aggressive Regressor, etc.
Machine-learning models can also use or apply various additional algorithms, such as clustering algorithms (e.g., a K-means clustering algorithm, a Min-wise Hashing algorithm, or a Euclidean Locality-Sensitive Hashing (LSH) algorithm) and/or an anomaly detection algorithm, such as a local outlier factor. Additionally, machine-learning models can employ a dimensionality reduction approach, such as one or more of: a Mini-batch Dictionary Learning algorithm, an Incremental Principal Component Analysis (PCA) algorithm, a Latent Dirichlet Allocation algorithm, and/or a Mini-batch K-means algorithm.
In some embodiments, a computing system 300 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.
Example system 300 includes at least one processing unit (Central Processing Unit (CPU) or “processor”) 310 and a connection 305 that couples various system components, including system memory 315, such as Read-Only Memory (ROM) 320 and Random-Access Memory (RAM) 325, to the processor 310. The computing system 300 can include a cache of high-speed memory 312 connected directly with, in close proximity to, or integrated as part of the processor 310.
The processor 310 can include any general purpose processor and a hardware service or software service, such as service modules 332, 334, and 336 stored in storage device 330, configured to control the processor 310, as well as a special purpose processor where software instructions are incorporated into the actual processor design. The service may include one or more of the methods described herein associated with determining environmental actor importance with an ordered ranking loss. The processor 310 may essentially be a completely self-contained computing system containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, the computing system 300 includes an input device 345, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. The computing system 300 can also include an output device 335, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with the computing system 300. The computing system 300 can include a communication interface 340, which can generally govern and manage the user input and system output. The communication interface 340 may perform or facilitate the receipt and/or the transmission of wired or wireless communications via wired and/or wireless transceivers, including those making use of an audio jack/plug, a microphone jack/plug, a USB port/plug, an Apple® Lightning® port/plug, an Ethernet port/plug, a fiber optic port/plug, a proprietary wired port/plug, a BLUETOOTH® wireless signal transfer, a BLUETOOTH® low energy (BLE) wireless signal transfer, an IBEACON® wireless signal transfer, a Radio-Frequency Identification (RFID) wireless signal transfer, Near-Field Communications (NFC) wireless signal transfer, Dedicated Short Range Communication (DSRC) wireless signal transfer, 802.11 Wi-Fi® wireless signal transfer, WLAN signal transfer, Visible Light Communication (VLC) signal transfer, Worldwide Interoperability for Microwave Access (WiMAX), Infrared (IR) communication wireless signal transfer, Public Switched Telephone Network (PSTN) signal transfer, Integrated Services Digital Network (ISDN) signal transfer, 3G/4G/5G/LTE cellular data network wireless signal transfer, ad-hoc network signal transfer, radio wave signal transfer, microwave signal transfer, infrared signal transfer, visible light signal transfer, ultraviolet light signal transfer, wireless signal transfer along the electromagnetic spectrum, or some combination thereof.
The communication interface 340 may also include one or more GNSS receivers or transceivers that are used to determine a location of the computing system 300 based on the receipt of one or more signals from one or more satellites associated with one or more GNSS systems. GNSS systems include, but are not limited to: the US-based GPS; the Russia-based Global Navigation Satellite System (GLONASS); the China-based BeiDou Navigation Satellite System (BDS); and the Europe-based Galileo GNSS. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
The storage device 330 can be a non-volatile and/or non-transitory and/or computer-readable memory device and can be a hard disk or other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, a floppy disk, a flexible disk, a hard disk, magnetic tape, a magnetic strip/stripe, any other magnetic storage medium, flash memory, memristor memory, any other solid state memory, a Compact Disc (CD) Read-Only Memory (CD-ROM) optical disc, a rewritable CD optical disc, a Digital Video Disk (DVD) optical disc, a Blu-ray Disc (BD) optical disc, a holographic optical disk, another optical medium, a Secure Digital (SD) card, a micro SD (microSD) card, a Memory Stick® card, a smartcard chip, an EMV chip, a Subscriber Identity Module (SIM) card, a mini/micro/nano/pico SIM card, another Integrated Circuit (IC) chip/card, RAM, Static RAM (SRAM), Dynamic RAM (DRAM), Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Flash EPROM (FLASHEPROM), cache memory (L1/L2/L3/L4/L5/L#), Resistive RAM (RRAM/ReRAM), Phase-Change Memory (PCM), Spin-Transfer Torque RAM (STT-RAM), another memory chip or cartridge, and/or a combination thereof.
The storage device 330 can include software services, servers, services, etc., such that, when the code that defines such software is executed by the processor 310, it causes the system 300 to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as a processor 310, a connection 305, an output device 335, etc., to carry out the function.
As further discussed in
While environment of
The actor importance ranking 540 may then be used for further actor perception & prediction 550 or as an input to motion planning 560 in various embodiments. In various embodiments, different or additional types of further processing may use the actor importance ranking 540 output from the actor importance model 530. Each of these further processes, which may be components of the perception stack 112 or the planning stack 116, may be comparatively heavier-weight processes that may benefit from the relative ranking provided by the actor importance ranking 540. For example, further actor perception and prediction 550 may include movement and intention predictions for actors within the environment, for example, to predict whether a car or pedestrian is likely to change speed or direction within the environment. Such prediction algorithms may be relatively resource “heavy” with respect to the computing resources available within a tick, such that these further predictions may be performed for up to a maximum specified number of actors in the tick. The further predictions and actor ranking may also be provided to the motion planning 560. The motion planning 560 may use information about the environment and the importance of the actors in determining further movement of the AV. As with the further perception, motion planning 560 may be relatively resource-intensive, and in some embodiments may be limited in the number of actors that may be evaluated with more extensive processes.
Though one AV intent is shown here, in other embodiments the actor importance model 530 may be executed on the set of actors 510 with multiple alternative AV intents 520. That is, while the actors 510 may be detected in the current tick, the particular actor importance ranking 540 of those actors 510 may be a function of a particular AV intent 520, which may be determined in a prior tick of the motion planning stack and may include more than one AV intent, e.g., to account for alternative possible environmental conditions (e.g., actor movement). In some embodiments, thus, the actor importance model 530 is applied to the actors 510 for each of the possible AV intents 520, such that an actor importance ranking 540 may be generated for each of the AV intents 520. In some instances, this may yield different rankings for various actors, which may affect which actors' likely motion is further predicted, for example to predict the motion of actors that are important to any of the possible AV intents. In one embodiment, the different actor importance rankings (associated with the different AV intents) may be provided to the different systems for further evaluation, which may use the different rankings to determine which actors to further process. In another example, the actor rankings associated with the different AV intents may be combined or weighted to determine a joint actor ranking associated with the group of different AV intents. The joint actor ranking may be determined by combining the respective actor rankings in any suitable way. For example, the joint actor ranking may be an average of the actor rankings, or the rankings may be weighted along various dimensions, such as according to rank within the list (e.g., higher importance ranks may be associated with higher weights in a scalar or non-scalar manner), weighted according to a weight of the respective AV intents (e.g., based on a likelihood or priority of the respective AV intent), and/or by additional means.
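For illustration, one hypothetical way to combine per-intent rankings into a joint actor ranking is a weighted average of the rank positions; the function name, the averaging rule, and the example weights below are assumptions for illustration only:

```python
import numpy as np

def joint_actor_ranking(rankings, intent_weights=None):
    # rankings: list of arrays, one per AV intent; rankings[k][n] is actor n's rank
    # (0 = most important) under intent k.  intent_weights: optional per-intent weights
    # (e.g., intent likelihood or priority).  Both the weighting scheme and the simple
    # weighted average of ranks are illustrative assumptions.
    ranks = np.stack(rankings).astype(float)         # (num_intents, num_actors)
    if intent_weights is None:
        combined = ranks.mean(axis=0)                # plain average of per-intent ranks
    else:
        w = np.asarray(intent_weights, dtype=float)
        combined = (ranks * w[:, None]).sum(axis=0) / w.sum()
    return np.argsort(combined)                      # actors ordered by combined importance

# Example: two alternative AV intents over four actors.
print(joint_actor_ranking([np.array([0, 1, 2, 3]), np.array([2, 0, 1, 3])],
                          intent_weights=[0.7, 0.3]))   # -> [0 1 2 3]
```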
As a result, the actor importance model may be valuable for evaluating the importance of detected actors with respect to one or more AV intents, permitting further analysis and processing based on the importance rankings. Example architectures and processes for embodiments are further discussed in the following FIGS.
The AV features 600 refer to the data describing the AV intent, and the actor features 612 refer to the data describing the various actors for use in the actor importance model. In one embodiment, the AV features 600 and actor features 612 may include information describing a sequence of times projected into the future from the current time. The projected times may allow the actor importance model to represent not only current positions (of objects in the environment, such as the AV and actors), but also potential positions in the future (e.g., as actors may get closer to or farther away from the AV). The AV features 600 may describe the AV intent with respect to the sequence of times, which may be designated T0, T1, T2, etc. The sequence of times may begin with an initial time (e.g., T0, or the current time), and include one or more times projected into the future. For example, T1 may refer to the next tick of the perception/control stack. The amount of time between each of the times may vary in different embodiments, and may be a constant time (e.g., 2 seconds between each time in the sequence), or may be spaced at different or increasing intervals (e.g., 2 seconds between T0 and T1, 4 seconds between T1 and T2, 8 seconds between T2 and T3, etc.). In one embodiment, the sequence of times or total amount of time is the same as the forward-looking planning horizon for which the AV intent is generated (e.g., by the planning stack).
As such, in one embodiment, the AV features 600 describe the AV intent as a sequence of positions intended for the AV to move to at each of the respective sequence of times. The AV features 600 may also include additional information, such as intended speed or other motion at each of the times. In one embodiment, the AV features 600 include a set of features at each of the times in the sequence of times. For example, each time T0, T1, T2, etc. may be associated with an intended position of the AV at the time according to the AV intent. Formally, the AV features 600 in one embodiment are described by a tensor having dimensions T×A, in which T is the number of times in the sequence and A is the set of AV intent features for each time.
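As a concrete, hypothetical illustration of such a T×A tensor (the particular feature set of position and speed, the number of times, and the values are invented for illustration only):

```python
import numpy as np

# Hypothetical AV intent over a sequence of times T0..T3 (seconds into the future).
# Each row holds the intended features at that time; here (x, y, speed) is an
# assumed feature set, giving a T x A tensor with T = 4 and A = 3.
times = [0.0, 2.0, 4.0, 8.0]
av_features = np.array([
    [0.0,  0.0, 10.0],   # T0: current position and speed
    [20.0, 0.0, 10.0],   # T1: intended position two seconds ahead
    [40.0, 2.0,  9.0],   # T2
    [75.0, 6.0,  8.0],   # T3
])
assert av_features.shape == (len(times), 3)   # (T, A)
```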
The actor features 612 may similarly include features describing each of the actors detected in the environment. The actor features 612 may include a set of characteristics for the actors, such as a position, kinematics (e.g., speed and movement direction), and a type. The position may describe the position of the detected actor expected for each time and may vary at each time based on, e.g., the motion information of a previous time. The kinematics may describe movement of the actor in various ways. In one example, the actor's kinematic information may be based on the detected movement in the current tick or may be based on information predicted from a previous tick of the perception stack, which may, for example, describe a more complex expected movement trajectory for the actor. For example, in a previous tick, the perception stack may predict that a car approaching an intersection is expected to slow and stop because the car approaches along a path that has a stop sign or stop light. In some embodiments, the position and kinematics may be determined for further times past the current time (e.g., past T0) by “unrolling” the actor's position and velocity at one time to project the expected position at a subsequent time (e.g., positionT1 = positionT0 + velocityT0 × (T1 − T0)).
The actor features may also include a classification of the actor as determined by a perception model (e.g., a classification model). The classification may describe, for example, that the actor is a vehicle, a heavy vehicle, a pedestrian, an animal, etc. In some embodiments, the classification of the actors may be performed by a relatively “lightweight” classification model (e.g., for actors which are not tracked across ticks or for which more detailed classification has been performed). That is, because one purpose of the actor importance ranking is to prioritize the application of resources for further perception, in one embodiment the classification of actors used as an input to the actor ranking model may be a relatively coarse or “low-granularity” classification that describes types of objects at a high level. For example, humans may all be identified as the class “person”—when a particular actor having the class “person” is determined to have a high actor importance, in one embodiment, additional perception models may be applied to determine a further subtype of the “person” class and the likely behavior of that person for further AV motion planning. For example, the sensor data related to the detected person may then be analyzed by a further model that may distinguish between adults, children, walkers, runners, joggers, and so forth, to determine potential intent or movement of that “person” object. For a “person” object that is relatively low in importance rank (for example, farther away from the AV and moving away from the AV's trajectory), this “person” object may not be considered sufficiently important for further perception processing.
As the number of actors identified in the environment may be variable depending on whether the AV is in a relatively crowded or sparse environment, the actor features 612 may include features for each of the actors N for each of the times T. As such, the actor features 612 may be described as a tensor of N×T×E, where E are the features (e.g., position, kinematics, and type) at each of T times for each actor N. In some embodiments, some actor features may not vary across time (e.g., detected actor type), such that the T dimension describes only features that vary across time (e.g., position and/or kinematics). In other embodiments, the structure of the models consuming the actor features 612 may be more effective or efficient when receiving a tensor in which the actor features are organized by each point in time and the features that do not vary across time are repeated as a “feature” of each point in time.
In one embodiment, the actor features 612 also include an estimated distance between the actor and the AV at the various points in time. That is, as the AV intent and actor positions may be “unrolled” to the future times, the expected distance between the AV and the actors at the future times may be estimated and used as one feature describing each actor. As such, the actor feature tensor may be expanded to N×T×(E+1) in some embodiments. Because AV planning is typically more likely to be affected by objects nearer to the AV, including features that describe the predicted distance between the actor and the AV (at each of the various points in time T0–Tm) may allow the actor importance model to more directly incorporate this information in predicting the actor importance.
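The following sketch illustrates one hypothetical way to assemble such an N×T×(E+1) actor feature tensor by unrolling current positions under a constant-velocity assumption and appending the estimated distance to the AV at each time; the feature layout and the helper name are illustrative assumptions:

```python
import numpy as np

def unroll_actor_features(positions, velocities, types, av_positions, times):
    # positions, velocities: (N, 2) current position and velocity of each actor.
    # types: (N,) numeric class label; av_positions: (T, 2) intended AV position per time.
    # Returns an (N, T, E+1) tensor: per-time position, velocity, type, plus the
    # estimated distance to the AV at that time.  The constant-velocity unroll and
    # the chosen feature layout are illustrative assumptions.
    positions, velocities, av_positions = map(np.asarray, (positions, velocities, av_positions))
    types = np.asarray(types, dtype=float)
    n, t = positions.shape[0], len(times)
    dt = np.asarray(times) - times[0]                      # offsets from T0
    unrolled = positions[:, None, :] + velocities[:, None, :] * dt[None, :, None]  # (N, T, 2)
    vel = np.broadcast_to(velocities[:, None, :], (n, t, 2))
    typ = np.broadcast_to(types[:, None, None], (n, t, 1))
    dist = np.linalg.norm(unrolled - av_positions[None, :, :], axis=-1, keepdims=True)  # (N, T, 1)
    return np.concatenate([unrolled, vel, typ, dist], axis=-1)   # (N, T, E+1)
```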
While distance is one example, further features used for generating the actor embedding may be generated as a function of the actor features and the AV intent features. Formally, these features may be a function ƒ(N(t), A(t)) of actor features N and AV intent features A at a time t. Generally, these features may be used to provide values that may vary across time and may provide a notion of “attention” indicating differences in time for which the actor may be more or less relevant.
The respective movement and trajectories of the AV 715 and detected actors are similar in
Returning to
Rather than directly using the AV embedding 608 for actor importance prediction, in this example, the actor importance model generates a scene embedding 645 to describe the “scene” in which the AV operates, which in some embodiments may describe intended movement of the AV with respect to the overall set of actors in the environment. That is, the scene embedding 645 characterizes the general AV intent and objects in the scene as a whole. To predict individual actor importance, the scene embedding 645 may then be applied to each of the actor embeddings 618 to determine the relative ranking of the respective actor and the output actor importance ranking 660.
In one embodiment, the scene embedding 645 is generated by a scene embedding model 640, which may include multiple layers as discussed above with respect to the other embedding models. The scene embedding model 640 may receive, as inputs, the AV embedding 608 and a joint actor embedding 630. In some embodiments, the scene embedding model 640 may also receive one or more environmental features 620. In this example, the joint actor embedding 630 describes characteristics of the set of actor embeddings 618 as a whole. The number of actors (and hence the number of actor embeddings 618) is variable, such that the joint actor embedding 630 may provide a single embedding to represent the set of actor embeddings 618 for input to the scene embedding model 640. The various inputs may be concatenated for input to the scene embedding model 640.
In one embodiment, the joint actor embedding 630 may be generated by combining the values of the actor embeddings 618, for example by summing each of the positions in the embedding (i.e., the values for each index of the vector across the actor embeddings). In one embodiment, the values of each position in the embedding are averaged across the set of actor embeddings 618. In general, because ranking of the actors is not yet known, the combination of the individual actor embeddings 618 to generate the joint actor embeddings 630 should be invariant to the sequence in which each actor embedding is added. As the joint actor embedding 630 represents the set of actor embeddings 618 and the AV embedding 608 represents the AV intent, the scene embedding model 640 generates a scene embedding 645 that generally represents the intended AV trajectory in conjunction with the set of actors within the environment.
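As a short illustration of this order-invariance property (the embedding sizes and random values below are placeholders), summing or averaging the actor embeddings yields the same joint embedding regardless of the order in which the actors are combined:

```python
import torch

actor_embeddings = torch.randn(5, 64)            # five actors, 64-dim embeddings
joint_sum = actor_embeddings.sum(dim=0)          # sum pooling
joint_mean = actor_embeddings.mean(dim=0)        # mean pooling

# Re-ordering the actors leaves the joint embedding unchanged (permutation invariance).
shuffled = actor_embeddings[torch.randperm(5)]
assert torch.allclose(joint_sum, shuffled.sum(dim=0), atol=1e-6)
assert torch.allclose(joint_mean, shuffled.mean(dim=0), atol=1e-6)
```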
The scene embedding model 640 may also receive one or more environmental features 620 that may describe aspects of the environment that may not be captured by the actors or AV intent. These environmental features 620 may include, for example, weather or other climate information, traffic signal or navigation rules (e.g., stop signs, current traffic signals, etc.), time of day (e.g., daybreak, noon, dusk, evening, etc.) and other characteristics that may affect actor movement or importance. These environmental features 620 may be provided, in addition to the AV embedding 608 and joint actor embedding 630, as inputs to the scene embedding model 640. In some embodiments, these inputs may be concatenated to form a single input feature set (e.g., a combined vector) for processing by the scene embedding model 640.
In general, different processing aspects (e.g., models) or data as shown in
Finally, using the scene embedding 645 to represent the overall scene, the actor-scene importance model 650 may process the actor embeddings (shown here as a list of actors x1 to xn) to evaluate the relative importance of the actors with respect to the scene as represented in the scene embedding 645. In one embodiment, the actor-scene importance model 650 is also a multi-layer computer model, which may include one or more fully-connected layers. In one embodiment, the actor-scene importance model 650 is an MLP. Because the number of actors may be unknown and is variable, the actor-scene importance model may generate an importance score (e.g., an integer or real number) representing the expected importance of an actor with respect to the scene (as characterized by the respective embeddings). The actor-scene importance model 650 may be applied to each of the actors to generate an importance score 655 for each of the actors. For each actor, the actor-scene importance model 650 may receive the respective actor embedding and the scene embedding (e.g., as a pair) and output the actor importance score for that actor. For the scene as a whole, the scene embedding may thus be constant, while the individual actor embeddings are changed to determine the respective actor importance score 655 of the actor with respect to this scene. The actor importance scores 655 may then be ordered accordingly (e.g., from highest to lowest, or lowest to highest), and the position of each actor's importance score in the order may then be used to define the actor importance ranking 660. As the joint actor embedding 630 may combine any number of actor embeddings and the actor-scene importance model 650 may be applied to individual actor embeddings 618, the overall architecture may effectively evaluate any number of actors in the environment.
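For illustration, once per-actor importance scores are available, the ranking and a resource-budgeted subset for further processing may be derived as in the following sketch (the scores and budget are invented example values):

```python
import numpy as np

# Given per-actor importance scores (higher = more important), derive the ranking
# and select a budgeted subset for further processing.
scores = np.array([0.3, 2.1, -0.4, 1.2, 0.8])
order = np.argsort(-scores)                  # actor indices, most important first
ranking = np.empty_like(order)
ranking[order] = np.arange(len(scores))      # ranking[n] = rank position of actor n (0 = top)
budget = 3
selected = order[:budget]                    # actors forwarded for heavier perception/prediction
print(order, ranking, selected)              # [1 3 4 0 2] [3 0 4 1 2] [1 3 4]
```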
The architecture of the actor importance model in various embodiments may also differ relative to
The various parameters of the actor importance model (e.g., parameters of the AV embedding model 605, actor embedding model 615, scene embedding model 640, and actor-scene importance model 650) may be trained with a set of training data to learn parameters that effectively predict rankings consistent with the training rankings in the training data. For each training instance in the training data, a set of actors and an AV intent are used as input, and the actors are labeled with a training ranking for the model to learn towards. The predicted ranking output from the model is compared with the training ranking of the training instance to determine a loss, and the parameters of the model are modified to reduce that loss.
A loss matrix 820 illustrates an example loss according to one embodiment. In this embodiment, the loss function is a pairwise loss, such that the loss is considered between pairs of the actors, for example the pair (x1, x2) or (x3, x7). The example matrix is illustrated as a triangular matrix, such that the lower-left portion of the matrix shows the loss, if any, attributable to each pair. In this example, the loss for a pair is “1” when the pair is improperly ordered (e.g., the relative order of the two actors in the predicted ranking does not match the training ranking) and zero otherwise. The mis-ordered pairs are shown in the loss matrix 820 as having a value of 1. To perform training of the model parameters, the model may learn parameters that reduce the loss function, that is, the number of actor pairs that are mis-ordered in the predicted ranking relative to the training ranking. This loss formulation may provide superior results, particularly given the dynamic number of actors within a given scene. For loss functions that focus on the absolute rank of an actor, a given actor's correct rank may vary significantly depending on the number of actors in the environment, even if nothing else about the environment changes. For example, the same actor, appearing in the same scene with the same AV intent, may have a rank of 2 when 10 actors are detected and a rank of 12 when 100 actors are detected. Loss functions that attempt to reconcile the values of “2” and “12” directly may struggle to effectively learn parameters for the actor. With the relative ranking, however, the model may learn to generate an importance score for the actor that permits effective comparison with the importance scores of other actors, permitting effective dynamic ranking of different sets of actors.
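A minimal sketch of such a pairwise ordering loss follows (Python/PyTorch; the function names are hypothetical). The 0/1 count of mis-ordered pairs corresponds to the loss matrix 820 described above but is not itself differentiable; the sketch therefore also includes a smooth surrogate (a logistic loss on score differences) of the kind that might be used to update model parameters with gradient descent. The surrogate is an assumption for illustration and is not required by any particular embodiment.

    import torch
    import torch.nn.functional as F

    def misordered_pair_count(scores: torch.Tensor, train_rank: torch.Tensor) -> torch.Tensor:
        """Pairwise 0/1 loss: counts pairs whose predicted order (by score) disagrees
        with the training ranking (rank 1 = most important)."""
        n = scores.shape[0]
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                should_outrank = train_rank[i] < train_rank[j]   # i more important than j
                does_outrank = scores[i] > scores[j]             # higher score = more important
                if should_outrank != does_outrank:
                    count += 1
        return torch.tensor(float(count))

    def pairwise_logistic_loss(scores: torch.Tensor, train_rank: torch.Tensor) -> torch.Tensor:
        """Differentiable surrogate: penalize each pair whose score difference is on the
        wrong side, using a logistic (softplus) loss."""
        n = scores.shape[0]
        losses = []
        for i in range(n):
            for j in range(i + 1, n):
                sign = 1.0 if train_rank[i] < train_rank[j] else -1.0
                # Encourage sign * (scores[i] - scores[j]) to be positive and large.
                losses.append(F.softplus(-sign * (scores[i] - scores[j])))
        return torch.stack(losses).mean()

    scores = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
    train_rank = torch.tensor([1, 3, 2])                # actor 0 is most important
    print(misordered_pair_count(scores, train_rank))    # 0 mis-ordered pairs
    pairwise_logistic_loss(scores, train_rank).backward()  # gradients flow back to the model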
In further examples, the loss may also be affected by a threshold penalty. In many applications, due to the limited resources of the AV, only a limited number of actors may be further processed, as also discussed above. For example, a maximum number of actors may be provided for further classification or motion prediction in a tick. Other actors may receive no further analysis or may be provided for a more limited analysis. As such, excluding actors that should have been included in the set provided for further processing may negatively impact those further processes.
In some embodiments, a threshold penalty is used to modify a training loss with respect to a threshold. The training loss may be increased, for example, for actors that are predicted to be below the threshold while having a training rank above the threshold. In other examples, the training loss may be modified based on whether the actors in the pair are above or below the threshold. Because actors above the threshold are generally the ones provided for further processing, in some examples, when both actors in a pair (for a pairwise loss) are on the same side of the threshold (both above or both below), the loss may be reduced, e.g., because the ordering error would not have affected which actors pass the threshold.
As another example, the threshold penalty may be an added error for actors that have a training rank above the threshold but are predicted to be below the threshold. In this example, the threshold penalty may be proportional to the rank difference between the predicted rank (i.e., below the threshold) and the threshold rank. For example, an actor with a training (true) rank of 5 that is predicted at rank 20, with a penalty threshold of 12, has a difference of 8 between the predicted rank and the threshold, yielding a threshold penalty of k*8, where k is a proportionality coefficient. The threshold penalty may be added to the ordering loss for the actor. This penalty may thus accentuate the error for actors that should be included above the threshold and, due to the increased error, encourage training that learns parameters to move such actors above the threshold more quickly.
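One way the threshold penalty could be implemented is sketched below (Python; the function names, the reduced pair weight, and the coefficient k are illustrative assumptions). A lower rank number denotes higher importance, so an actor is "above" the threshold when its rank is less than or equal to the threshold rank. The sketch reproduces the worked example above: a true rank of 5, a predicted rank of 20, and a threshold of 12 yield a penalty of k*8.

    def threshold_penalty(true_rank: int, predicted_rank: int, threshold: int, k: float = 1.0) -> float:
        """Extra penalty for an actor whose training rank is above (better than) the
        threshold but whose predicted rank falls below (worse than) it, proportional to
        the gap between the predicted rank and the threshold."""
        if true_rank <= threshold < predicted_rank:
            return k * (predicted_rank - threshold)
        return 0.0

    # Worked example from the text: true rank 5, predicted rank 20, threshold 12.
    assert threshold_penalty(true_rank=5, predicted_rank=20, threshold=12, k=1.0) == 8.0

    def pair_weight(rank_i: int, rank_j: int, threshold: int, reduced: float = 0.1) -> float:
        """Optional pairwise weighting: reduce the loss when both actors fall on the same
        side of the threshold, since the ordering error would not change which actors pass."""
        same_side = (rank_i <= threshold) == (rank_j <= threshold)
        return reduced if same_side else 1.0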
In the example shown in
In some embodiments, the training ranking may be based on manual or semi-manual review (e.g., by a human). In additional embodiments, the training ranking may be automatically generated based on how further processes (such as the further actor perception and prediction 550 and motion planning 560 discussed above) use the actor importance rankings. For example, in various embodiments the loss function may reduce or eliminate the loss for ranking errors among actors ranked beyond (i.e., as less important than) a threshold set based on the number of actors used in further processes, as just discussed. Likewise, the further processes may also be used to automatically label training data based on the effect of each actor on motion planning. For example, the motion planning may be performed with the complete set of actors and with a counterfactual in which each actor is removed from consideration in motion planning. The change in planned motion between the two motion planning scenarios may then implicitly describe the relative importance of the actor; the actors with a more significant effect (when included vs. not included) may be considered more important to the planning process and used to automatically label the training data with a training ranking accordingly.
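One possible sketch of such counterfactual auto-labeling follows (Python; plan_motion and plan_difference are hypothetical stand-ins for the motion planning 560 and a trajectory-difference metric, and are not part of any particular embodiment). Motion planning is run once with the complete set of actors and once with each actor removed, and the actors are then ranked by how much their removal changes the planned motion:

    from typing import Callable, List, Sequence

    def label_actor_importance(actors: Sequence, plan_motion: Callable, plan_difference: Callable) -> List[int]:
        """Automatically generate a training ranking: actors whose removal changes the
        planned motion the most are considered most important (rank 1)."""
        baseline_plan = plan_motion(list(actors))
        effects = []
        for i, actor in enumerate(actors):
            counterfactual = [a for j, a in enumerate(actors) if j != i]
            alt_plan = plan_motion(counterfactual)                  # plan without this actor
            effects.append((plan_difference(baseline_plan, alt_plan), i))
        # Sort actors by descending effect on the plan; position in the order gives the rank.
        order = sorted(effects, key=lambda t: t[0], reverse=True)
        ranks = [0] * len(actors)
        for rank, (_, actor_index) in enumerate(order, start=1):
            ranks[actor_index] = rank
        return ranks

    # Toy usage with stand-in planning and difference functions.
    actors = ["pedestrian", "parked_car", "cyclist"]
    fake_effect = {"pedestrian": 0.9, "parked_car": 0.1, "cyclist": 0.5}
    plan = lambda acts: sum(fake_effect[a] for a in acts)           # hypothetical "plan" summary
    diff = lambda p1, p2: abs(p1 - p2)
    print(label_actor_importance(actors, plan, diff))               # [1, 3, 2]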
Embodiments within the scope of the present disclosure may also include tangible and/or non-transitory computer-readable storage media or devices for carrying or having computer-executable instructions or data structures stored thereon. Such tangible computer-readable storage devices can be any available device that can be accessed by a general purpose or special purpose computer, including the functional design of any special purpose processor as described above. By way of example, and not limitation, such tangible computer-readable devices can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other device which can be used to carry or store desired program code in the form of computer-executable instructions, data structures, or processor chip design. When information or instructions are provided via a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable storage devices.
Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special purpose processors, etc. that perform tasks or implement abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Other embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network Personal Computers (PCs), minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Various embodiments of claimable subject matter include the following examples.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. For example, the principles herein apply equally to optimization as well as general improvements. Various modifications and changes may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure. Claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim.
As described herein, one aspect of the present technology is the gathering and use of data available from various sources to improve quality and experience. The present disclosure contemplates that in some instances, this gathered data may include personal information. The present disclosure contemplates that the entities involved with such personal information respect and value privacy policies and practices.