Estimating object properties using visual image data

Information

  • Patent Grant
  • 11790664
  • Patent Number
    11,790,664
  • Date Filed
    Wednesday, March 23, 2022
  • Date Issued
    Tuesday, October 17, 2023
  • CPC
  • Field of Search
    • CPC
    • G06V20/58
    • G06V20/584
    • G06V10/803
    • G06T7/70
    • G06T2207/20081
    • G06T2207/30261
    • G06N20/00
  • International Classifications
    • G06V20/58
    • G06T7/70
    • G06N20/00
    • G06V10/80
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
A system is comprised of one or more processors coupled to memory. The one or more processors are configured to receive image data based on an image captured using a camera of a vehicle and to utilize the image data as a basis of an input to a trained machine learning model to at least in part identify a distance of an object from the vehicle. The trained machine learning model has been trained using a training image and a correlated output of an emitting distance sensor.
Description
BACKGROUND OF THE INVENTION

Autonomous driving systems typically rely on mounting numerous sensors including a collection of vision and emitting distance sensors (e.g., radar sensor, lidar sensor, ultrasonic sensor, etc.) on a vehicle. The data captured by each sensor is then gathered to help understand the vehicle's surrounding environment and to determine how to control the vehicle. Vision sensors can be used to identify objects from captured image data and emitting distance sensors can be used to determine the distance of the detected objects. Steering and speed adjustments can be applied based on detected obstacles and clear drivable paths. But as the number and types of sensors increase, so do the complexity and cost of the system. For example, emitting distance sensors such as lidar are often costly to include in a mass market vehicle. Moreover, each additional sensor increases the input bandwidth requirements for the autonomous driving system. Therefore, there exists a need to find the optimal configuration of sensors on a vehicle. The configuration should limit the total number of sensors without limiting the amount and type of data captured to accurately describe the surrounding environment and safely control the vehicle.





BRIEF DESCRIPTION OF THE DRAWINGS

Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.



FIG. 1 is a block diagram illustrating an embodiment of a deep learning system for autonomous driving.



FIG. 2 is a flow diagram illustrating an embodiment of a process for creating training data for predicting object properties.



FIG. 3 is a flow diagram illustrating an embodiment of a process for training and applying a machine learning model for autonomous driving.



FIG. 4 is a flow diagram illustrating an embodiment of a process for training and applying a machine learning model for autonomous driving.



FIG. 5 is a diagram illustrating an example of capturing auxiliary sensor data for training a machine learning network.



FIG. 6 is a diagram illustrating an example of predicting object properties.





DETAILED DESCRIPTION

The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.


A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.


A machine learning training technique for generating highly accurate machine learning results from vision data is disclosed. Auxiliary sensor data, such as radar and lidar results, is associated with objects identified from the vision data to accurately estimate object properties such as object distance. In various embodiments, the collection and association of auxiliary data with vision data is done automatically and requires little, if any, human intervention. For example, objects identified using vision techniques do not need to be manually labeled, significantly improving the efficiency of machine learning training. Instead, the training data can be automatically generated and used to train a machine learning model to predict object properties with a high degree of accuracy. For example, the data may be collected automatically from a fleet of vehicles by collecting snapshots of the vision data and associated related data, such as radar data. In some embodiments, only a subset of the vision-radar association targets is sampled. The fusion data from the fleet of vehicles is automatically collected and used to train neural nets to mimic the captured data. The trained machine learning model can be deployed to vehicles for accurately predicting object properties, such as distance, direction, and velocity, using only vision data. For example, once the machine learning model has been trained to determine an object distance using camera images without a dedicated distance sensor, it may no longer be necessary to include a dedicated distance sensor in an autonomous driving vehicle. When used in conjunction with a dedicated distance sensor, this machine learning model can be used as a redundant or secondary distance data source to improve accuracy and/or provide fault tolerance. The identified objects and corresponding properties can be used to implement autonomous driving features such as self-driving or driver-assisted operation of a vehicle. For example, an autonomous vehicle can be controlled to avoid a merging vehicle identified using the disclosed techniques.


A system comprising one or more processors coupled to memory is configured to receive image data based on an image captured using a camera of a vehicle. For example, a processor such as an artificial intelligence (AI) processor installed on an autonomous vehicle receives image data from a camera, such as a forward-facing camera of the vehicle. Additional cameras such as side-facing and rear-facing cameras can be used as well. The image data is utilized as a basis of an input to a trained machine learning model to at least in part identify a distance of an object from the vehicle. For example, the captured image is used as an input to a machine learning model such as a model of a deep learning network running on the AI processor. The model is used to predict the distance of objects identified in the image data. Surrounding objects such as vehicles and pedestrians can be identified from the image data and their distance and direction are inferred using a deep learning system. In various embodiments, the trained machine learning model has been trained using a training image and a correlated output of an emitting distance sensor. Emitting distance sensors may emit a signal (e.g., radio signal, ultrasonic signal, light signal, etc.) in detecting a distance of an object from the sensor. For example, a radar sensor mounted to a vehicle emits radar to identify the distance and direction of surrounding obstacles. The distances are then correlated to objects identified in a training image captured from the vehicle's camera. The associated training image is annotated with the distance measurements and used to train a machine learning model. In some embodiments, the model is used to predict additional properties such as an object's velocity. For example, the velocity of objects determined by radar is associated with objects in the training image to train a machine learning model to predict object velocities and directions.
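As a rough illustration of this inference step (not part of the patent disclosure), the sketch below assumes a PyTorch model that maps a single camera frame to per-object distance and direction estimates; the function name, tensor shapes, and output encoding are illustrative assumptions.

```python
# Illustrative only: run a trained model on one camera frame to obtain
# per-object [distance, direction, confidence] estimates. The output
# encoding is an assumption, not the patent's format.
import torch

def estimate_object_properties(model: torch.nn.Module,
                               frame: torch.Tensor) -> torch.Tensor:
    """frame: (3, H, W) camera image normalized to [0, 1]."""
    model.eval()
    with torch.no_grad():
        predictions = model(frame.unsqueeze(0))  # add a batch dimension
    return predictions.squeeze(0)                # e.g., (N, 3) per-object rows
```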


In some embodiments, a vehicle is equipped with sensors to capture the environment of the vehicle and vehicle operating parameters. The captured data includes vision data (such as video and/or still images) and additional auxiliary data such as radar, lidar, inertia, audio, odometry, location, and/or other forms of sensor data. For example, the sensor data may capture vehicles, pedestrians, vehicle lane lines, vehicle traffic, obstacles, traffic control signs, traffic sounds, etc. Odometry and other similar sensors capture vehicle operating parameters such as vehicle speed, steering, orientation, change in direction, change in location, change in elevation, change in speed, etc. The captured vision and auxiliary data is transmitted from the vehicle to a training server for creating a training data set. In some embodiments, the transmitted vision and auxiliary data is correlated and used to automatically generate training data. The training data is used to train a machine learning model for generating highly accurate machine learning results. In some embodiments, a time series of captured data is used to generate the training data. A ground truth is determined based on a group of time series elements and is used to annotate at least one of the elements, such as a single image, from the group. For example, a series of images and radar data for a time period, such as 30 seconds, are captured. A vehicle identified from the image data and tracked across the time series is associated with a corresponding radar distance and direction from the time series. The auxiliary data, such as radar distance data, is associated with the vehicle after analyzing the image and distance data captured for the time series. By analyzing the image and auxiliary data across the time series, ambiguities such as multiple objects with similar distances can be resolved with a high degree of accuracy to determine a ground truth. For example, when using only a single captured image, there may be insufficient corresponding radar data to accurately estimate the different distances of two cars in the event one car occludes another or when two cars are close together. By tracking the cars over a time series, however, the distances identified by radar can be properly associated with the correct cars as the cars separate, travel in different directions, and/or travel at different speeds, etc. In various embodiments, once the auxiliary data is properly associated with an object, one or more images of the time series are converted to training images and annotated with the corresponding ground truth such as the distance, velocity, and/or other appropriate object properties.
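The sketch below illustrates one plausible form of this time-series association, assuming per-frame bearings for a tracked object and per-frame radar returns; the matching rule, tolerances, and data layout are assumptions made for clarity rather than the patent's actual procedure.

```python
# Hypothetical association of radar returns with one tracked object across a
# time series: unambiguous frames fix the distance, and ambiguous frames
# (e.g., two nearby cars) reuse the return closest to the established track.
def associate_track_with_radar(track_bearings, radar_returns,
                               max_bearing_error=0.05):
    """track_bearings: {frame_id: bearing_rad} for one tracked object.
    radar_returns: {frame_id: [(distance_m, bearing_rad), ...]}.
    Returns {frame_id: distance_m} ground-truth candidates."""
    distances = {}
    last_known = None
    for frame_id in sorted(track_bearings):
        bearing = track_bearings[frame_id]
        matches = [d for d, b in radar_returns.get(frame_id, [])
                   if abs(b - bearing) < max_bearing_error]
        if len(matches) == 1:
            last_known = matches[0]          # unambiguous frame
        elif matches and last_known is not None:
            # ambiguous frame: keep the return closest to the known track
            last_known = min(matches, key=lambda d: abs(d - last_known))
        else:
            continue                         # no usable return this frame
        distances[frame_id] = last_known
    return distances
```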


In various embodiments, a machine learning model trained using auxiliary sensor data can accurately predict the result of an auxiliary sensor without the need for the physical auxiliary sensor. For example, training vehicles can be equipped with auxiliary sensors, including expensive and/or difficult-to-operate sensors, for collecting training data. The training data can then be used to train a machine learning model for predicting the result of an auxiliary sensor, such as a radar, lidar, or another sensor. The trained model is then deployed to vehicles, such as production vehicles, that only require vision sensors. The auxiliary sensors are not required but can be used as a secondary data source. Reducing the number of sensors has many advantages: it avoids the difficulty of re-calibrating sensors, reduces sensor maintenance and cost, and lowers the additional bandwidth and computational requirements imposed by each extra sensor. In some embodiments, the trained model is used in the event an auxiliary sensor fails. Instead of relying on additional auxiliary sensors, the trained machine learning model uses input from one or more vision sensors to predict the result of the auxiliary sensors. The predicted results can be used for implementing autonomous driving features that require detecting objects (e.g., pedestrians, stationary vehicles, moving vehicles, curbs, obstacles, road barriers, etc.) and their distance and direction. The predicted results can be used to detect the distance and direction of traffic control objects such as traffic lights, traffic signs, street signs, etc. Although vision sensors and object distance are used in the previous examples, alternative sensors and predicted properties are possible as well.



FIG. 1 is a block diagram illustrating an embodiment of a deep learning system for autonomous driving. The deep learning system includes different components that may be used together for self-driving and/or driver-assisted operation of a vehicle as well as for gathering and processing data for training a machine learning model. In various embodiments, the deep learning system is installed on a vehicle and data captured from the vehicle can be used to train and improve the deep learning system of the vehicle or other similar vehicles. The deep learning system may be used to implement autonomous driving functionality including identifying objects and predicting object properties such as distance and direction using vision data as input.


In the example shown, deep learning system 100 is a deep learning network that includes vision sensors 101, additional sensors 103, image pre-processor 105, deep learning network 107, artificial intelligence (AI) processor 109, vehicle control module 111, and network interface 113. In various embodiments, the different components are communicatively connected. For example, image data captured from vision sensors 101 is fed to image pre-processor 105. Processed sensor data of image pre-processor 105 is fed to deep learning network 107 running on AI processor 109. In some embodiments, sensor data from additional sensors 103 is used as an input to deep learning network 107. The output of deep learning network 107 running on AI processor 109 is fed to vehicle control module 111. In various embodiments, vehicle control module 111 is connected to and controls the operation of the vehicle such as the speed, braking, and/or steering, etc. of the vehicle. In various embodiments, sensor data and/or machine learning results can be sent to a remote server (not shown) via network interface 113. For example, sensor data, such as data captured from vision sensors 101 and/or additional sensors 103, can be transmitted to a remote training server via network interface 113 to collect training data for improving the performance, comfort, and/or safety of the vehicle. In various embodiments, network interface 113 is used to communicate with remote servers, to make phone calls, to send and/or receive text messages, and to transmit sensor data based on the operation of the vehicle, among other reasons. In some embodiments, deep learning system 100 may include additional or fewer components as appropriate. For example, in some embodiments, image pre-processor 105 is an optional component. As another example, in some embodiments, a post-processing component (not shown) is used to perform post-processing on the output of deep learning network 107 before the output is provided to vehicle control module 111.


In some embodiments, vision sensors 101 include one or more camera sensors for capturing image data. In various embodiments, vision sensors 101 may be affixed to a vehicle, at different locations of the vehicle, and/or oriented in one or more different directions. For example, vision sensors 101 may be affixed to the front, sides, rear, and/or roof, etc. of the vehicle in forward-facing, rear-facing, side-facing, etc. directions. In some embodiments, vision sensors 101 may be image sensors such as high dynamic range cameras and/or cameras with different fields of view. For example, in some embodiments, eight surround cameras are affixed to a vehicle and provide 360 degrees of visibility around the vehicle with a range of up to 250 meters. In some embodiments, camera sensors include a wide forward camera, a narrow forward camera, a rear view camera, forward looking side cameras, and/or rearward looking side cameras.


In some embodiments, vision sensors 101 are not mounted to the vehicle with vehicle control module 111. For example, vision sensors 101 may be mounted on neighboring vehicles and/or affixed to the road or environment and are included as part of a deep learning system for capturing sensor data. In various embodiments, vision sensors 101 include one or more cameras that capture the surrounding environment of the vehicle, including the road the vehicle is traveling on. For example, one or more front-facing and/or pillar cameras capture images of objects such as vehicles, pedestrians, traffic control objects, roads, curbs, obstacles, etc. in the environment surrounding the vehicle. As another example, cameras capture a time series of image data including image data of neighboring vehicles including those attempting to cut into the lane the vehicle is traveling in. Vision sensors 101 may include image sensors capable of capturing still images and/or video. The data may be captured over a period of time, such as a sequence of captured data over a period of time, and synchronized with other vehicle data including other sensor data. For example, image data used to identify objects may be captured along with radar and odometry data over a period of 15 seconds or another appropriate period.


In some embodiments, additional sensors 103 include additional sensors for capturing sensor data in addition to vision sensors 101. In various embodiments, additional sensors 103 may be affixed to a vehicle, at different locations of the vehicle, and/or oriented in one or more different directions. For example, additional sensors 103 may be affixed to the front, sides, rear, and/or roof, etc. of the vehicle in forward-facing, rear-facing, side-facing, etc. directions. In some embodiments, additional sensors 103 may be emitting sensors such as radar, ultrasonic, and/or lidar sensors. In some embodiments, additional sensors 103 include non-visual sensors. Additional sensors 103 may include radar, audio, lidar, inertia, odometry, location, and/or ultrasonic sensors, among others. For example, twelve ultrasonic sensors may be affixed to the vehicle to detect both hard and soft objects. In some embodiments, a forward-facing radar is utilized to capture data of the surrounding environment. In various embodiments, radar sensors are able to capture surrounding detail despite heavy rain, fog, dust, and other vehicles.


In some embodiments, additional sensors 103 are not mounted to the vehicle with vehicle control module 111. For example, similar to vision sensors 101, additional sensors 103 may be mounted on neighboring vehicles and/or affixed to the road or environment and are included as part of a deep learning system for capturing sensor data. In some embodiments, additional sensors 103 include one or more sensors that capture the surrounding environment of the vehicle, including the road the vehicle is traveling on. For example, a forward-facing radar sensor captures the distance data of objects in the forward field of view of the vehicle. Additional sensors may capture odometry, location, and/or vehicle control information including information related to vehicle trajectory. Sensor data may be captured over a period of time, such as a sequence of captured data over a period of time, and associated with image data captured from vision sensors 101. In some embodiments, additional sensors 103 include location sensors such as global positioning system (GPS) sensors for determining the vehicle's location and/or change in location. In various embodiments, one or more sensors of additional sensors 103 are optional and are included only on vehicles designed for capturing training data. Vehicles without one or more sensors of additional sensors 103 can simulate the results of additional sensors 103 by predicting the output using a trained machine learning model and the techniques disclosed herein. For example, vehicles without a forward-facing radar or lidar sensor can predict the results of the optional sensor using image data by applying a trained machine learning model, such as the model of deep learning network 107.


In some embodiments, image pre-processor 105 is used to pre-process sensor data of vision sensors 101. For example, image pre-processor 105 may be used to pre-process the sensor data, split sensor data into one or more components, and/or post-process the one or more components. In some embodiments, image pre-processor 105 is a graphics processing unit (GPU), a central processing unit (CPU), an image signal processor, or a specialized image processor. In various embodiments, image pre-processor 105 is a tone-mapper processor to process high dynamic range data. In some embodiments, image pre-processor 105 is implemented as part of artificial intelligence (AI) processor 109. For example, image pre-processor 105 may be a component of AI processor 109. In some embodiments, image pre-processor 105 may be used to normalize an image or to transform an image. For example, an image captured with a fisheye lens may be warped and image pre-processor 105 may be used to transform the image to remove or modify the warping. In some embodiments, noise, distortion, and/or blurriness is removed or reduced during a pre-processing step. In various embodiments, the image is adjusted or normalized to improve the result of machine learning analysis. For example, the white balance of the image is adjusted to account for different lighting operating conditions such as daylight, sunny, cloudy, dusk, sunrise, sunset, and night conditions, among others.
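A minimal pre-processing sketch along these lines is shown below; it assumes OpenCV and NumPy, a calibrated camera matrix and distortion coefficients, and a simple gray-world white balance, none of which are specified by the patent.

```python
# Illustrative pre-processing: undo lens distortion, apply a gray-world white
# balance, and normalize pixel values for the downstream network.
import cv2
import numpy as np

def preprocess(image_bgr, camera_matrix, dist_coeffs):
    # remove lens warping (e.g., from a fisheye lens) using camera calibration
    undistorted = cv2.undistort(image_bgr, camera_matrix, dist_coeffs)
    # gray-world white balance: scale each channel toward the global mean
    img = undistorted.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / np.maximum(channel_means, 1e-6)
    # normalize to [0, 1]
    return np.clip(img / 255.0, 0.0, 1.0)
```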


In some embodiments, deep learning network 107 is a deep learning network used for determining vehicle control parameters including analyzing the driving environment to determine objects and their corresponding properties such as distance, velocity, or another appropriate parameter. For example, deep learning network 107 may be an artificial neural network such as a convolutional neural network (CNN) that is trained on input such as sensor data and its output is provided to vehicle control module 111. As one example, the output may include at least a distance estimate of detected objects. As another example, the output may include at least potential vehicles that are likely to merge into the vehicle's lane, their distances, and their velocities. In some embodiments, deep learning network 107 receives as input at least image sensor data, identifies objects in the image sensor data, and predicts the distance of the objects. Additional input may include scene data describing the environment around the vehicle and/or vehicle specifications such as operating characteristics of the vehicle. Scene data may include scene tags describing the environment around the vehicle, such as raining, wet roads, snowing, muddy, high density traffic, highway, urban, school zone, etc. In some embodiments, the output of deep learning network 107 is a three-dimensional representation of a vehicle's surrounding environment including cuboids representing objects such as identified objects. In some embodiments, the output of deep learning network 107 is used for autonomous driving including navigating a vehicle towards a target destination.
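As a hedged sketch of the kind of network described (the patent does not disclose a specific architecture), the PyTorch module below uses a small convolutional backbone and a regression head that emits a distance and direction for a fixed number of object slots; the layer sizes and output encoding are assumptions.

```python
# Assumed architecture for illustration: convolutional backbone plus a head
# that regresses [distance_m, direction_rad] for a fixed number of slots.
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    def __init__(self, num_slots: int = 8):
        super().__init__()
        self.num_slots = num_slots
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_slots * 2)

    def forward(self, x):                       # x: (B, 3, H, W)
        features = self.backbone(x).flatten(1)  # (B, 64)
        return self.head(features).view(-1, self.num_slots, 2)
```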


In some embodiments, artificial intelligence (AI) processor 109 is a hardware processor for running deep learning network 107. In some embodiments, AI processor 109 is a specialized AI processor for performing inference using a convolutional neural network (CNN) on sensor data. AI processor 109 may be optimized for the bit depth of the sensor data. In some embodiments, AI processor 109 is optimized for deep learning operations such as neural network operations including convolution, dot-product, vector, and/or matrix operations, among others. In some embodiments, AI processor 109 is implemented using a graphics processing unit (GPU).


In various embodiments, AI processor 109 is coupled to memory that is configured to provide the AI processor with instructions which when executed cause the AI processor to perform deep learning analysis on the received input sensor data and to determine a machine learning result, such as an object distance, used for autonomous driving. In some embodiments, AI processor 109 is used to process sensor data in preparation for making the data available as training data.


In some embodiments, vehicle control module 111 is utilized to process the output of artificial intelligence (AI) processor 109 and to translate the output into a vehicle control operation. In some embodiments, vehicle control module 111 is utilized to control the vehicle for autonomous driving. In various embodiments, vehicle control module 111 can adjust the speed, acceleration, steering, braking, etc. of the vehicle. For example, in some embodiments, vehicle control module 111 is used to control the vehicle to maintain the vehicle's position within a lane, to merge the vehicle into another lane, to adjust the vehicle's speed and lane positioning to account for merging vehicles, etc.


In some embodiments, vehicle control module 111 is used to control vehicle lighting such as brake lights, turn signals, headlights, etc. In some embodiments, vehicle control module 111 is used to control vehicle audio conditions such as the vehicle's sound system, playing audio alerts, enabling a microphone, enabling the horn, etc. In some embodiments, vehicle control module 111 is used to control notification systems including warning systems to inform the driver and/or passengers of driving events such as a potential collision or the approach of an intended destination. In some embodiments, vehicle control module 111 is used to adjust sensors such as vision sensors 101 and additional sensors 103 of a vehicle. For example, vehicle control module 111 may be used to change parameters of one or more sensors such as modifying the orientation, changing the output resolution and/or format type, increasing or decreasing the capture rate, adjusting the captured dynamic range, adjusting the focus of a camera, enabling and/or disabling a sensor, etc. In some embodiments, vehicle control module 111 may be used to change parameters of image pre-processor 105 such as modifying the frequency range of filters, adjusting feature and/or edge detection parameters, adjusting channels and bit depth, etc. In various embodiments, vehicle control module 111 is used to implement self-driving and/or driver-assisted control of a vehicle. In some embodiments, vehicle control module 111 is implemented using a processor coupled with memory. In some embodiments, vehicle control module 111 is implemented using an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other appropriate processing hardware.


In some embodiments, network interface 113 is a communication interface for sending and/or receiving data including training data. In various embodiments, network interface 113 includes a cellular or wireless interface for interfacing with remote servers, to transmit sensor data, to transmit potential training data, to receive updates to the deep learning network including updated machine learning models, to connect and make voice calls, to send and/or receive text messages, etc. For example, network interface 113 may be used to transmit sensor data captured for use as potential training data to a remote training server for training a machine learning model. As another example, network interface 113 may be used to receive an update for the instructions and/or operating parameters for vision sensors 101, additional sensors 103, image pre-processor 105, deep learning network 107, AI processor 109, and/or vehicle control module 111. A machine learning model of deep learning network 107 may be updated using network interface 113. As another example, network interface 113 may be used to update firmware of vision sensors 101 and additional sensors 103 and/or operating parameters of image pre-processor 105 such as image processing parameters.



FIG. 2 is a flow diagram illustrating an embodiment of a process for creating training data for predicting object properties. For example, image data is annotated with sensor data from additional auxiliary sensors to automatically create training data. In some embodiments, a time series of elements made up of sensor and related auxiliary data is collected from a vehicle and used to automatically create training data. In various embodiments, the process of FIG. 2 is used to automatically label training data with corresponding ground truths. The ground truth and image data are packaged as training data to predict properties of objects identified from the image data. In various embodiments, the sensor and related auxiliary data are captured using the deep learning system of FIG. 1. For example, in various embodiments, the sensor data is captured from vision sensors 101 of FIG. 1 and related data is captured from additional sensors 103 of FIG. 1. In some embodiments, the process of FIG. 2 is performed to automatically collect data when existing predictions are incorrect or can be improved. For example, a prediction is made by an autonomous vehicle to determine one or more object properties, such as distance and direction, from vision data. The prediction is compared to distance data received from an emitting distance sensor. A determination can be made whether the prediction is within an acceptable accuracy threshold. In some embodiments, a determination is made that the prediction can be improved. In the event the prediction is not sufficiently accurate, the process of FIG. 2 can be applied to the prediction scenario to create a curated set of training examples for improving the machine learning model.
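A minimal sketch of this collection trigger is given below; the tolerance values and function name are assumptions chosen for illustration.

```python
# Flag a driving scenario for training-data collection when the vision-only
# distance estimate disagrees with the emitting distance sensor by more than
# an accuracy threshold (thresholds are illustrative).
def should_collect_sample(vision_distance_m: float,
                          radar_distance_m: float,
                          abs_tolerance_m: float = 1.0,
                          rel_tolerance: float = 0.05) -> bool:
    error = abs(vision_distance_m - radar_distance_m)
    allowed = max(abs_tolerance_m, rel_tolerance * radar_distance_m)
    return error > allowed
```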


At 201, vision data is received. The vision data may be image data such as video and/or still images. In various embodiments, the vision data is captured at a vehicle and transmitted to a training server. The vision data may be captured over a period of time to create a time series of elements. In various embodiments, the elements include timestamps to maintain an ordering of the elements. By capturing a time series of elements, objects in the time series can be tracked across the time series to better disambiguate objects that are difficult to identify from a single input sample, such as a single input image and corresponding related data. For example, a pair of oncoming headlights may appear at first to both belong to a single vehicle but in the event the headlights separate, each headlight is identified as belonging to a separate motorcycle. In some scenarios, objects in the image data are easier to distinguish than objects in the auxiliary related data received at 203. For example, it may be difficult to disambiguate using only distance data the estimated distance of a van from a wall that the van is alongside. However, by tracking the van across a corresponding time series of image data, the correct distance data can be associated with the identified van. In various embodiments, sensor data captured as a time series is captured in the format that a machine learning model uses as input. For example, the sensor data may be raw or processed image data.


In various embodiments, in the event a time series of data is received, the time series may be organized by associating a timestamp with each element of the time series. For example, a timestamp is associated with at least the first element in a time series. The timestamp may be used to calibrate time series elements with related data such as data received at 203. In various embodiments, the length of the time series may be a fixed length of time, such as 10 seconds, 30 seconds, or another appropriate length. The length of time may be configurable. In various embodiments, the time series may be based on the speed of the vehicle, such as the average speed of the vehicle. For example, at slower speeds, the length of time for a time series may be increased to capture data over a longer distance traveled than would be possible if using a shorter time length for the same speed. In some embodiments, the number of elements in the time series is configurable. The number of elements may be based on the distance traveled. For example, for a fixed time period, a faster moving vehicle includes more elements in the time series than a slower moving vehicle. The additional elements increase the fidelity of the captured environment and can improve the accuracy of the predicted machine learning results. In various embodiments, the number of elements is adjusted by adjusting the frames per second a sensor captures data and/or by discarding unneeded intermediate frames.
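One way such a speed-dependent capture window could be computed is sketched below; the target distance, bounds, and element density are illustrative assumptions, not values from the patent.

```python
# Hypothetical planner for a time series: slower speeds get a longer window so
# a comparable distance is covered, and the element count scales with the
# distance traveled during the window.
def plan_time_series(avg_speed_mps: float,
                     target_distance_m: float = 300.0,
                     min_seconds: float = 10.0,
                     max_seconds: float = 60.0,
                     elements_per_meter: float = 0.2):
    window_s = min(max(target_distance_m / max(avg_speed_mps, 0.1),
                       min_seconds), max_seconds)
    num_elements = max(1, int(avg_speed_mps * window_s * elements_per_meter))
    capture_rate_hz = num_elements / window_s
    return window_s, num_elements, capture_rate_hz
```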


At 203, data related to the received vision data is received. In various embodiments, the related data is received at a training server along with the vision data received at 201. In some embodiments, the related data is sensor data from additional sensors of the vehicle, such as ultrasonic, radar, lidar, or other appropriate sensors. The related data may be distance, direction, velocity, location, orientation, change in location, change in orientation, and/or other related data captured by the vehicle's additional sensors. The related data may be used to determine a ground truth for features identified in the vision data received at 201. For example, distance and direction measurements from radar sensors are used to determine object distances and directions for objects identified in the vision data. In some embodiments, the related data received is a time series of data corresponding to a time series of vision data received at 201.


In some embodiments, the data related to the vision data includes map data. For example, offline data such as road and/or satellite level map data may be received at 203. The map data may be used to identify features such as roads, vehicle lanes, intersections, speed limits, school zones, etc. For example, the map data can describe the path of vehicle lanes. Using the estimated location of identified vehicles in vehicle lanes, estimated distances for the detected vehicles can be determined/corroborated. As another example, the map data can describe the speed limit associated with different roads of the map. In some embodiments, the speed limit data may be used to validate velocity vectors of identified vehicles.


At 205, objects in the vision data are identified. In some embodiments, the vision data is used as an input to identify objects in the surrounding environment of the vehicle. For example, vehicles, pedestrians, obstacles, etc. are identified from the vision data. In some embodiments, the objects are identified using a deep learning system with a trained machine learning model. In various embodiments, bounding boxes are created for identified objects. The bounding boxes may be two-dimensional bounding boxes or three-dimensional bounding boxes, such as cuboids, that outline the exterior of the identified object. In some embodiments, additional data is used to help identify the objects, such as the data received at 203. The additional data may be used to increase the accuracy in object identification.


At 207, a ground truth is determined for identified objects. Using the related data received at 203, ground truths are determined for the object identified at 205 from the vision data received at 201. In some embodiments, the related data is depth (and/or distance) data of the identified objects. By associating the distance data with the identified objects, a machine learning model can be trained to estimate object distances by using the related distance data as the ground truth for detected objects. In some embodiments, the distances are for detected objects such as an obstacle, a barrier, a moving vehicle, a stationary vehicle, traffic control signals, pedestrians, etc. and used as the ground truth for training. In addition to distance, the ground truth for other object parameters such as direction, velocity, acceleration, etc. may be determined. For example, accurate distances and directions are determined as ground truths for identified objects. As another example, accurate velocity vectors are determined as ground truths for identified objects, such as vehicles and pedestrians.


In various embodiments, vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data. The data may be synchronized at capture time. For example, as each element of a time series is captured, a corresponding set of related data is captured and saved with the time series element. In various embodiments, the time period of the related data is configurable and/or matches the time period of the time series of elements. In some embodiments, the related data is sampled at the same rate as the time series elements.
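A simple nearest-timestamp synchronization of the two series might look like the sketch below; the data layout and tolerance are assumptions.

```python
# Pair each image with the related-sensor sample whose timestamp is closest,
# within a tolerance. Inputs are assumed to be time-sorted (timestamp, data)
# lists; unmatched images are dropped.
import bisect

def synchronize(image_series, aux_series, tolerance_s=0.05):
    aux_times = [t for t, _ in aux_series]
    pairs = []
    for t_img, img in image_series:
        i = bisect.bisect_left(aux_times, t_img)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(aux_series)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(aux_times[k] - t_img))
        if abs(aux_times[j] - t_img) <= tolerance_s:
            pairs.append((img, aux_series[j][1]))
    return pairs
```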


In various embodiments, only by examining the time series of data can the ground truth be determined. For example, analysis of only a subset of vision data may misidentify objects and/or their properties. By expanding the analysis across the entire time series, ambiguities are removed. For example, an occluded vehicle may be revealed earlier or later in the time series. Once identified, the sometimes-occluded vehicle can be tracked throughout the entire time series, even when occluded. Similarly, object properties for the sometimes-occluded vehicle can be tracked throughout the time series by associating the object properties from the related data to the identified object in the vision data. In some embodiments, the data is played backwards (and/or forwards) to determine any points of ambiguity when associating related data to vision data. The objects at different times in the time series may be used to help determine object properties for the objects across the entire time series.


In various embodiments, a threshold value is used to determine whether to associate an object property as a ground truth of an identified object. For example, related data with a high degree of certainty is associated with an identified object while related data with a degree of certainty below a threshold value is not associated with the identified object. In some embodiments, the related data may be conflicting sensor data. For example, ultrasonic and radar data output may conflict. As another example, distance data may conflict with map data. The distance data may estimate a school zone begins in 30 meters while information from map data may describe the same school zone as starting in 20 meters. In the event the related data has a low degree of certainty, the related data may be discarded and not used to determine the ground truth.
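A minimal sketch of such a certainty/conflict gate follows; the field names and numeric thresholds are assumptions for illustration only.

```python
# Accept a measurement as ground truth only when its confidence clears a
# threshold and it does not conflict with a corroborating source (e.g., radar
# distance versus map-derived distance).
from typing import Optional

def accept_ground_truth(value: float, confidence: float,
                        corroborating_value: Optional[float] = None,
                        min_confidence: float = 0.9,
                        max_disagreement: float = 2.0) -> Optional[float]:
    if confidence < min_confidence:
        return None                            # low certainty: discard
    if (corroborating_value is not None
            and abs(value - corroborating_value) > max_disagreement):
        return None                            # conflicting sources: discard
    return value
```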


In some embodiments, the ground truth is determined to predict semantic labels. For example, a detected vehicle can be labeled based on a predicted distance and direction as being in the left lane or right lane. In some embodiments, the detected vehicle can be labeled as being in a blind spot, as a vehicle that should be yielded to, or with another appropriate semantic label. In some embodiments, vehicles are assigned to roads or lanes in a map based on the determined ground truth. As additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.


At 209, the training data is packaged. For example, an element of vision data received at 201 is selected and associated with the ground truth determined at 207. In some embodiments, the element selected is an element of a time series. The selected element represents sensor data input, such as a training image, to a machine learning model and the ground truth represents the predicted result. In various embodiments, the selected data is annotated and prepared as training data. In some embodiments, the training data is packaged into training, validation, and testing data. Based on the determined ground truth and selected training element, the training data is packaged to train a machine learning model to predict the results related to one or more related auxiliary sensors. For example, the trained model can be used to accurately predict distances and directions of objects with results similar to measurements using sensors such as radar or lidar sensors. In various embodiments, the machine learning results are used to implement features for autonomous driving. The packaged training data is now available for training a machine learning model.
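The packaging step could be sketched as below, pairing each selected image with its ground truth and splitting into training, validation, and test sets; the ratios and record format are assumptions.

```python
# Split annotated (image, ground_truth) pairs into train/validation/test sets.
import random

def package_training_data(annotated_samples, val_fraction=0.1,
                          test_fraction=0.1, seed=0):
    samples = list(annotated_samples)
    random.Random(seed).shuffle(samples)
    n_test = int(len(samples) * test_fraction)
    n_val = int(len(samples) * val_fraction)
    return {
        "test": samples[:n_test],
        "validation": samples[n_test:n_test + n_val],
        "train": samples[n_test + n_val:],
    }
```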



FIG. 3 is a flow diagram illustrating an embodiment of a process for training and applying a machine learning model for autonomous driving. For example, input data including primary and secondary sensor data is received and processed to create training data for training a machine learning model. In some embodiments, the primary sensor data corresponds to image data captured via an autonomous driving system and the secondary sensor data corresponds to sensor data captured from an emitting distance sensor. The secondary sensor data may be used to annotate the primary sensor data to train a machine learning model to predict an output based on the secondary sensor. In some embodiments, the sensor data corresponds to sensor data captured based on particular use cases, such as the user manually disengaging autonomous driving or where distance estimates from vision data vary significantly from distance estimates from secondary sensors. In some embodiments, the primary sensor data is sensor data of vision sensors 101 of FIG. 1 and the secondary sensor data is sensor data of one or more sensors of additional sensors 103 of FIG. 1. In some embodiments, the process is used to create and deploy a machine learning model for deep learning system 100 of FIG. 1.


At 301, training data is prepared. In some embodiments, sensor data including image data and auxiliary data is received to create a training data set. The image data may include still images and/or video from one or more cameras. Additional sensors such as radar, lidar, ultrasonic, etc. may be used to provide relevant auxiliary sensor data. In various embodiments, the image data is paired with corresponding auxiliary data to help identify the properties of objects detected in the sensor data. For example, distance and/or velocity data from auxiliary data can be used to accurately estimate the distance and/or velocity of objects identified in the image data. In some embodiments, the sensor data is a time series of elements and is used to determine a ground truth. The ground truth of the group is then associated with a subset of the time series, such as a frame of image data. The selected element of the time series and the ground truth are used to prepare the training data. In some embodiments, the training data is prepared to train a machine learning model to only estimate properties of objects identified in the image data, such as the distance and direction of vehicles, pedestrians, obstacles, etc. The prepared training data may include data for training, validation, and testing. In various embodiments, the sensor data may be of different formats. For example, sensor data may be still image data, video data, radar data, ultrasonic data, audio data, location data, odometry data, etc. The odometry data may include vehicle operation parameters such as applied acceleration, applied braking, applied steering, vehicle location, vehicle orientation, the change in vehicle location, the change in vehicle orientation, etc. In various embodiments, the training data is curated and annotated for creating a training data set. In some embodiments, a portion of the preparation of the training data may be performed by a human curator. In various embodiments, a portion of the training data is generated automatically from data captured from vehicles, greatly reducing the effort and time required to build a robust training data set. In some embodiments, the format of the data is compatible with a machine learning model used on a deployed deep learning application. In various embodiments, the training data includes validation data for testing the accuracy of the trained model. In some embodiments, the process of FIG. 2 is performed at 301 of FIG. 3.


At 303, a machine learning model is trained. For example, a machine learning model is trained using the data prepared at 301. In some embodiments, the model is a neural network such as a convolutional neural network (CNN). In various embodiments, the model includes multiple intermediate layers. In some embodiments, the neural network may include multiple layers including multiple convolution and pooling layers. In some embodiments, the trained model is validated using a validation data set created from the received sensor data. In some embodiments, the machine learning model is trained to predict an output of a sensor such as an emitting distance sensor from a single input image. For example, a distance and direction property of an object can be inferred from an image captured from a camera. As another example, a velocity vector of a neighboring vehicle including whether the vehicle will attempt to merge is predicted from an image captured from a camera.
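A hedged PyTorch training-loop sketch for this step is shown below; the optimizer, loss, and hyperparameters are assumptions, and the data loaders are expected to yield (image, ground-truth) batches produced by the preparation step.

```python
# Train a regression model on annotated images and report validation loss.
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=10, lr=1e-4, device="cpu"):
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.SmoothL1Loss()                 # robust regression loss
    for epoch in range(epochs):
        model.train()
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images.to(device)), targets.to(device))
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch}: validation loss {val_loss:.4f}")
    return model
```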


At 305, the trained machine learning model is deployed. For example, the trained machine learning model is installed on a vehicle as an update for a deep learning network, such as deep learning network 107 of FIG. 1. In some embodiments, an over-the-air update is used to install the newly trained machine learning model. For example, an over-the-air update can be received via a network interface of the vehicle such as network interface 113 of FIG. 1. In some embodiments, the update is a firmware update transmitted using a wireless network such as a WiFi or cellular network. In some embodiments, the new machine learning model may be installed when the vehicle is serviced.


At 307, sensor data is received. For example, sensor data is captured from one or more sensors of the vehicle. In some embodiments, the sensors are vision sensors 101 of FIG. 1. The sensors may include image sensors such as a fisheye camera mounted behind a windshield, forward or side-facing cameras mounted in the pillars, rear-facing cameras, etc. In various embodiments, the sensor data is in the format or is converted into a format that the machine learning model trained at 303 utilizes as input. For example, the sensor data may be raw or processed image data. In some embodiments, the sensor data is preprocessed using an image pre-processor such as image pre-processor 105 of FIG. 1 during a pre-processing step.


For example, the image may be normalized to remove distortion, noise, etc. In some alternative embodiments, the received sensor data is data captured from ultrasonic sensors, radar, LiDAR sensors, microphones, or other appropriate technology and used as the expected input to the trained machine learning model deployed at 305.


At 309, the trained machine learning model is applied. For example, the machine learning model trained at 303 is applied to sensor data received at 307. In some embodiments, the application of the model is performed by an AI processor such as AI processor 109 of FIG. 1 using a deep learning network such as deep learning network 107 of FIG. 1. In various embodiments, by applying the trained machine learning model, one or more object properties such as an object distance, direction, and/or velocity are predicted from image data. For example, different objects are identified in the image data and an object distance and direction for each identified object are inferred using the trained machine learning model. As another example, a velocity vector of a vehicle is inferred for a vehicle identified in the image data. The velocity vector may be used to determine whether the neighboring vehicle is likely to cut into the current lane and/or the likelihood the vehicle is a safety risk. In various embodiments, vehicles, pedestrians, obstacles, lanes, traffic control signals, map features, speed limits, drivable space, etc. and their related properties are identified by applying the machine learning model. In some embodiments, the features are identified in three-dimensions, such as a three-dimensional velocity vector.


At 311, the autonomous vehicle is controlled. For example, one or more autonomous driving features are implemented by controlling various aspects of the vehicle. Examples may include controlling the steering, speed, acceleration, and/or braking of the vehicle, maintaining the vehicle's position in a lane, maintaining the vehicle's position relative to other vehicles and/or obstacles, providing a notification or warning to the occupants, etc. Based on the analysis performed at 309, a vehicle's steering and speed may be controlled to maintain the vehicle safely between two lane lines and at a safe distance from other objects. For example, distances and directions of neighboring objects are predicted and a corresponding drivable space and driving path is identified. In various embodiments, a vehicle control module such as vehicle control module 111 of FIG. 1 controls the vehicle.



FIG. 4 is a flow diagram illustrating an embodiment of a process for training and applying a machine learning model for autonomous driving. In some embodiments, the process of FIG. 4 is utilized to collect and retain sensor data for training a machine learning model for autonomous driving. In some embodiments, the process of FIG. 4 is implemented on a vehicle enabled with autonomous driving whether the autonomous driving control is enabled or not. For example, sensor data can be collected in the moments immediately after autonomous driving is disengaged, while a vehicle is being driven by a human driver, and/or while the vehicle is being autonomously driven. In some embodiments, the techniques described by FIG. 4 are implemented using the deep learning system of FIG. 1. In some embodiments, portions of the process of FIG. 4 are performed at 307, 309, and/or 311 of FIG. 3 as part of the process of applying a machine learning model for autonomous driving.


At 401, sensor data is received. For example, a vehicle equipped with sensors captures sensor data and provides the sensor data to a neural network running on the vehicle. In some embodiments, the sensor data may be vision data, ultrasonic data, radar data, lidar data, or other appropriate sensor data. For example, an image is captured from a high dynamic range forward-facing camera. As another example, ultrasonic data is captured from a side-facing ultrasonic sensor. In some embodiments, a vehicle is affixed with multiple sensors for capturing data. For example, in some embodiments, eight surround cameras are affixed to a vehicle and provide 360 degrees of visibility around the vehicle with a range of up to 250 meters. In some embodiments, camera sensors include a wide forward camera, a narrow forward camera, a rear view camera, forward looking side cameras, and/or rearward looking side cameras. In some embodiments, ultrasonic and/or radar sensors are used to capture surrounding details. For example, twelve ultrasonic sensors may be affixed to the vehicle to detect both hard and soft objects.


In various embodiments, the captured data from different sensors is associated with captured metadata to allow the data captured from different sensors to be associated together. For example, the direction, field of view, frame rate, resolution, timestamp, and/or other captured metadata is received with the sensor data. Using the metadata, different formats of sensor data can be associated together to better capture the environment surrounding the vehicle. In some embodiments, the sensor data includes odometry data including the location, orientation, change in location, and/or change in orientation, etc. of the vehicle. For example, location data is captured and associated with other sensor data captured during the same time frame. As one example, the location data captured at the time that image data is captured is used to associate location information with the image data. In various embodiments, the received sensor data is provided for deep learning analysis.


At 403, the sensor data is pre-processed. In some embodiments, one or more pre-processing passes may be performed on the sensor data. For example, the data may be pre-processed to remove noise, to correct for alignment issues and/or blurring, etc. In some embodiments, one or more different filtering passes are performed on the data. For example, a high-pass filter may be performed on the data and a low-pass filter may be performed on the data to separate out different components of the sensor data. In various embodiments, the pre-processing step performed at 403 is optional and/or may be incorporated into the neural network.
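As an illustration of this kind of filtering (not the patent's implementation), the sketch below splits a one-dimensional sensor signal into low- and high-frequency components with Butterworth filters from SciPy; the cutoff frequency and filter order are assumptions.

```python
# Separate a 1-D sensor signal into low- and high-frequency components.
from scipy.signal import butter, filtfilt

def split_frequency_components(signal, sample_rate_hz, cutoff_hz=5.0, order=4):
    nyquist = sample_rate_hz / 2.0
    b_lo, a_lo = butter(order, cutoff_hz / nyquist, btype="low")
    b_hi, a_hi = butter(order, cutoff_hz / nyquist, btype="high")
    return filtfilt(b_lo, a_lo, signal), filtfilt(b_hi, a_hi, signal)
```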


At 405, deep learning analysis of the sensor data is initiated. In some embodiments, the deep learning analysis is performed on the sensor data received at 401 and optionally pre-processed at 403. In various embodiments, the deep learning analysis is performed using a neural network such as a convolutional neural network (CNN). In various embodiments, the machine learning model is trained offline using the process of FIG. 3 and deployed onto the vehicle for performing inference on the sensor data. For example, the model may be trained to predict object properties such as distance, direction, and/or velocity. In some embodiments, the model is trained to identify pedestrians, moving vehicles, parked vehicles, obstacles, road lane lines, drivable space, etc., as appropriate. In some embodiments, a bounding box is determined for each identified object in the image data and a distance and direction is predicted for each identified object. In some embodiments, the bounding box is a three-dimensional bounding box such as a cuboid. The bounding box outlines the exterior surface of the identified object and may be adjusted based on the size of the object. For example, different sized vehicles are represented using different sized bounding boxes (or cuboids). In some embodiments, the object properties estimated by the deep learning analysis are compared to properties measured by sensors and received as sensor data. In various embodiments, the neural network includes multiple layers including one or more intermediate layers and/or one or more different neural networks are utilized to analyze the sensor data. In various embodiments, the sensor data and/or the results of deep learning analysis are retained and transmitted at 411 for the automatic generation of training data.


In various embodiments, the deep learning analysis is used to predict additional features. The predicted features may be used to assist autonomous driving. For example, a detected vehicle can be assigned to a lane or road. As another example, a detected vehicle can be determined to be in a blind spot, to be a vehicle that should be yielded to, to be a vehicle in the left adjacent lane, to be a vehicle in the right adjacent lane, or to have another appropriate attribute. Similarly, the deep learning analysis can identify traffic lights, drivable space, pedestrians, obstacles, or other appropriate features for driving.


At 407, the results of deep learning analysis are provided to vehicle control. For example, the results are provided to a vehicle control module to control the vehicle for autonomous driving and/or to implement autonomous driving functionality. In some embodiments, the results of deep learning analysis at 405 are passed through one or more additional deep learning passes using one or more different machine learning models. For example, identified objects and their properties (e.g., distance, direction, etc.) may be used to determine drivable space. The drivable space is then used to determine a drivable path for the vehicle. Similarly, in some embodiments, a vehicle velocity vector is predicted. The path determined for the vehicle, based at least in part on the predicted velocity vector, is used to predict cut-ins and to avoid potential collisions. In some embodiments, the various outputs of deep learning are used to construct a three-dimensional representation of the vehicle's environment for autonomous driving which includes identified objects, the distance and direction of identified objects, predicted paths of vehicles, identified traffic control signals including speed limits, obstacles to avoid, road conditions, etc. In some embodiments, the vehicle control module utilizes the determined results to control the vehicle along a determined path. In some embodiments, the vehicle control module is vehicle control module 111 of FIG. 1.


At 409, the vehicle is controlled. In some embodiments, a vehicle with autonomous driving activated is controlled using a vehicle control module such as vehicle control module 111 of FIG. 1. The vehicle control can modulate the speed and/or steering of the vehicle, for example, to maintain the vehicle at a safe distance from other vehicles and in a lane at an appropriate speed in consideration of the environment around it. In some embodiments, the results are used to adjust the vehicle in anticipation that a neighboring vehicle will merge into the same lane. In various embodiments, using the results of deep learning analysis, a vehicle control module determines the appropriate manner to operate the vehicle, for example, along a determined path with the appropriate speed. In various embodiments, the results of vehicle controls, such as a change in speed, application of braking, adjustment to steering, etc., are retained and used for the automatic generation of training data. In various embodiments, the vehicle control parameters may be retained and transmitted at 411 for the automatic generation of training data.
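
One way to picture speed modulation based on a detected lead vehicle is the proportional rule sketched below. The gains, desired gap, and speed limits are placeholder values introduced for illustration; this is not the disclosed vehicle control module.

```python
def speed_command(current_speed, lead_distance, desired_gap=30.0,
                  gain=0.5, max_speed=25.0):
    """Illustrative speed modulation: slow down proportionally when the
    measured gap to the lead vehicle falls below the desired following
    distance; otherwise trend toward the allowed maximum speed.
    Units are metres and metres per second."""
    if lead_distance is None:               # no lead vehicle detected
        return min(current_speed + 1.0, max_speed)
    error = lead_distance - desired_gap     # positive when gap is comfortable
    target = current_speed + gain * error
    return max(0.0, min(target, max_speed))
```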


At 411, sensor and related data are transmitted. For example, the sensor data received at 401 along with the results of deep learning analysis at 405 and/or vehicle control parameters used at 409 are transmitted to a computer server for the automatic generation of training data. In some embodiments, the data is a time series of data and the various gathered data are associated together by a remote training computer server. For example, image data is associated with auxiliary sensor data, such as distance, direction, and/or velocity data, to generate a ground truth. In various embodiments, the collected data is transmitted wirelessly, for example, via a WiFi or cellular connection, from a vehicle to a training data center. In some embodiments, metadata is transmitted along with the sensor data. For example, metadata may include the time of day, a timestamp, the location, the type of vehicle, vehicle control and/or operating parameters such as speed, acceleration, braking, whether autonomous driving was enabled, steering angle, odometry data, etc. Additional metadata includes the time elapsed since sensor data was last transmitted, the vehicle type, weather conditions, road conditions, etc. In some embodiments, the transmitted data is anonymized, for example, by removing unique identifiers of the vehicle. As another example, data from similar vehicle models is merged to prevent individual users and their use of their vehicles from being identified.
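
The sketch below illustrates one possible way to package sensor data and metadata into an anonymized payload for transmission. All field names and the JSON encoding are assumptions made for the example, not the actual transmission format.

```python
import json
import time

def build_training_payload(image_ref, aux_records, vehicle_state):
    """Illustrative packaging of sensor data and metadata for transmission to
    a training server. Unique vehicle identifiers are intentionally omitted so
    the transmitted data is anonymized. `aux_records` is assumed to be a
    JSON-serializable list of distance/direction/velocity samples."""
    payload = {
        "timestamp": time.time(),
        "image_ref": image_ref,          # reference to the captured image data
        "auxiliary": aux_records,        # emitting-sensor measurements
        "metadata": {
            "speed": vehicle_state.get("speed"),
            "steering_angle": vehicle_state.get("steering_angle"),
            "autonomous_enabled": vehicle_state.get("autonomous_enabled"),
            "vehicle_type": vehicle_state.get("vehicle_type"),
        },
    }
    return json.dumps(payload)
```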


In some embodiments, the data is only transmitted in response to a trigger. For example, in some embodiments, an inaccurate prediction triggers the transmitting of image sensor and auxiliary sensor data for automatically collecting data to create a curated set of examples for improving the prediction of a deep learning network. For example, a prediction performed at 405 to estimate the distance and direction of a vehicle using only image data is determined to be inaccurate by comparing the prediction to distance data from an emitting distance sensor. In the event the prediction and actual sensor data differ by more than a threshold amount, the image sensor data and related auxiliary data are transmitted and used to automatically generate training data. In some embodiments, the trigger may be used to identify particular scenarios such as sharp curves, forks in the road, lane merges, sudden stops, intersections, or another appropriate scenario where additional training data is helpful and may be difficult to gather. For example, a trigger can be based on the sudden deactivation or disengagement of autonomous driving features. As another example, vehicle operating properties such as the change in speed or change in acceleration can form the basis of a trigger. In some embodiments, a prediction with an accuracy that is less than a certain threshold triggers transmitting the sensor and related auxiliary data. For example, in certain scenarios, a prediction may not have a Boolean correct or incorrect result and is instead evaluated by determining an accuracy value of the prediction.
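
The comparison-based trigger described above can be pictured with the following sketch: a sample is flagged for transmission when the vision-only distance prediction and the emitting distance sensor measurement disagree by more than a threshold. The specific threshold values are illustrative assumptions.

```python
def should_transmit(predicted_distance_m, measured_distance_m,
                    abs_thresh_m=2.0, rel_thresh=0.10):
    """Illustrative transmission trigger: flag the sample for training-data
    collection when the vision-only distance prediction disagrees with the
    emitting distance sensor by more than an absolute or relative threshold."""
    error = abs(predicted_distance_m - measured_distance_m)
    return error > abs_thresh_m or error > rel_thresh * measured_distance_m
```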


In various embodiments, the sensor and related auxiliary data are captured over a period of time and the entire time series of data is transmitted together. The time period may be configured and/or be based on one or more factors such as the speed of the vehicle, the distance traveled, the change in speed, etc. In some embodiments, the sampling rate of the captured sensor and/or related auxiliary data is configurable. For example, the sampling rate is increased at higher speeds, during sudden braking, during sudden acceleration, during hard steering, or another appropriate scenario when additional fidelity is needed.
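
An illustrative, purely assumed scaling rule for the configurable sampling rate is sketched below: the capture rate rises with vehicle speed and with the magnitude of acceleration or braking, capped at a maximum rate.

```python
def sampling_rate_hz(speed_mps, accel_mps2, base_hz=1.0, max_hz=10.0):
    """Illustrative adaptive sampling: raise the capture rate at higher speeds
    and during hard braking or acceleration so more fidelity is retained when
    the scene changes quickly. The scaling constants are assumptions."""
    rate = base_hz + 0.1 * speed_mps + 0.5 * abs(accel_mps2)
    return min(rate, max_hz)
```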



FIG. 5 is a diagram illustrating an example of capturing auxiliary sensor data for training a machine learning network. In the example shown, autonomous vehicle 501 is equipped with at least sensors 503 and 553 and captures sensor data used to measure object properties of neighboring vehicles 511, 521, and 561. In some embodiments, the sensor data is captured and processed using a deep learning system such as deep learning system 100 of FIG. 1 installed on autonomous vehicle 501. In some embodiments, sensors 503 and 553 are additional sensors 103 of FIG. 1. In some embodiments, the data captured is the data related to vision data received at 203 of FIG. 2 and/or part of the sensor data received at 401 of FIG. 4.


In some embodiments, sensors 503 and 553 of autonomous vehicle 501 are emitting distance sensors such as radar, ultrasonic, and/or lidar sensors. Sensor 503 is a forward-facing sensor and sensor 553 is a right-side facing sensor. Additional sensors, such as rear-facing and left-side facing sensors (not shown), may be attached to autonomous vehicle 501.


Axes 505 and 507, shown with long-dotted arrows, are reference axes of autonomous vehicle 501 and may be used as reference axes for data captured using sensor 503 and/or sensor 553. In the example shown, axes 505 and 507 are centered at sensor 503 and at the front of autonomous vehicle 501. In some embodiments, an additional height axis (not shown) is used to track properties in three dimensions. In various embodiments, alternative axes may be utilized. For example, the reference axis may be the center of autonomous vehicle 501. In some embodiments, each sensor of sensors 503 and 553 may utilize its own reference axes and coordinate system. The data captured and analyzed using the respective local coordinate systems of sensors 503 and 553 may be converted into a local (or world) coordinate system of autonomous vehicle 501 so that the data captured from different sensors can be shared using the same frame of reference.
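
Converting a measurement from a sensor's local coordinate system into the vehicle's reference frame can be sketched as a planar rotation followed by a translation, as below. The mounting-offset parameters and the two-dimensional simplification are assumptions introduced for the example.

```python
import math

def sensor_to_vehicle_frame(x_s, y_s, mount_x, mount_y, mount_yaw_rad):
    """Illustrative 2D transform from a sensor's local coordinate system into
    the vehicle's reference frame: rotate a measured point by the sensor's
    mounting yaw, then translate by the sensor's mounting position."""
    cos_y, sin_y = math.cos(mount_yaw_rad), math.sin(mount_yaw_rad)
    x_v = mount_x + cos_y * x_s - sin_y * y_s
    y_v = mount_y + sin_y * x_s + cos_y * y_s
    return x_v, y_v
```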


In the example shown, fields of view 509 and 559 of sensors 503 and 553, respectively, are depicted by dotted arcs between dotted arrows. The depicted fields of view 509 and 559 show the overhead perspective of the regions measured by sensors 503 and 553, respectively. Properties of objects in field of view 509 may be captured by sensor 503 and properties of objects in field of view 559 may be captured by sensor 553. For example, in some embodiments, distance, direction, and/or velocity measurements of objects in field of view 509 are captured by sensor 503. In the example shown, sensor 503 captures the distance and direction of neighboring vehicles 511 and 521. Sensor 503 does not measure neighboring vehicle 561 since neighboring vehicle 561 is outside the region of field of view 509. Instead, the distance and direction of neighboring vehicle 561 are captured by sensor 553. In various embodiments, objects not captured by one sensor may be captured by another sensor of a vehicle. Although depicted in FIG. 5 with only sensors 503 and 553, autonomous vehicle 501 may be equipped with multiple surround sensors (not shown) that provide 360 degrees of visibility around the vehicle.


In some embodiments, sensors 503 and 553 capture distance and direction measurements. Distance vector 513 depicts the distance and direction of neighboring vehicle 511, distance vector 523 depicts the distance and direction of neighboring vehicle 521, and distance vector 563 depicts the distance and direction of neighboring vehicle 561. In various embodiments, the actual distance and direction values captured are a set of values corresponding to the exterior surface detected by sensors 503 and 553. In the example shown, the set of distances and directions measured for each neighboring vehicle are approximated by distance vectors 513, 523, and 563. In some embodiments, sensors 503 and 553 detect a velocity vector (not shown) of objects in their respective fields of view 509 and 559. In some embodiments, the distance and velocity vectors are three-dimensional vectors. For example, the vectors include height (or altitude) components (not shown).


In some embodiments, bounding boxes approximate detected objects including detected neighboring vehicles 511, 521, and 561. The bounding boxes approximate the exterior of the detected objects. In some embodiments, the bounding boxes are three-dimensional bounding boxes such as cuboids or another volumetric representation of the detected object. In the example of FIG. 5, the bounding boxes are shown as rectangles around neighboring vehicles 511, 521, and 561. In various embodiments, a distance and direction from autonomous vehicle 501 can be determined for each point on the edge (or surface) of a bounding box.
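
For an axis-aligned two-dimensional bounding box expressed in the vehicle's reference frame, the distance and direction to its closest edge point can be computed as in the sketch below. Placing the vehicle origin at (0, 0) and using a 2D simplification are assumptions made for illustration.

```python
import math

def range_bearing_to_box(x_min, y_min, x_max, y_max):
    """Illustrative calculation of the distance and direction from the vehicle
    reference origin (0, 0) to the closest point on an axis-aligned bounding
    box, using point-to-rectangle clamping. Bearing is measured from the
    vehicle's forward (x) axis."""
    nearest_x = min(max(0.0, x_min), x_max)   # clamp origin x into [x_min, x_max]
    nearest_y = min(max(0.0, y_min), y_max)   # clamp origin y into [y_min, y_max]
    distance = math.hypot(nearest_x, nearest_y)
    bearing = math.atan2(nearest_y, nearest_x)
    return distance, bearing
```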


In various embodiments, distance vectors 513, 523, and 563 are data related to the vision data captured at the same moment. The distance vectors 513, 523, and 563 are used to annotate the distance and direction of neighboring vehicles 511, 521, and 561 identified in the corresponding vision data. For example, distance vectors 513, 523, and 563 may be used as the ground truth for annotating a training image that includes neighboring vehicles 511, 521, and 561. In some embodiments, the training image corresponding to the captured sensor data of FIG. 5 utilizes data captured from sensors with overlapping fields of view and captured at matching times. For example, in the event a training image is image data captured from a forward-facing camera that only captures neighboring vehicles 511 and 521 and not neighboring vehicle 561, only neighboring vehicles 511 and 521 are identified in the training image and have their corresponding distances and directions annotated. Similarly, a right-side image capturing neighboring vehicle 561 includes annotations for the distance and direction of only neighboring vehicle 561. In various embodiments, annotated training images are transmitted to a training server for training a machine learning model to predict the annotated object properties. In some embodiments, the captured sensor data of FIG. 5 and corresponding vision data are transmitted to a training platform where they are analyzed and training images are selected and annotated. For example, the captured data may be a time series of data and the time series is analyzed to associate the related data to objects identified in the vision data.
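
A minimal sketch of this field-of-view-based annotation step follows: distance and direction measurements from the emitting distance sensor are kept as labels for a training image only when their bearing falls within the camera's field of view. The camera field of view value, the tuple layout of the detections, and the label schema are assumptions made for the example.

```python
import math

def annotate_training_image(image_id, detections, camera_fov_rad=math.radians(90)):
    """Illustrative ground-truth annotation: keep only measured objects whose
    bearing falls inside the camera's field of view (centred on the forward
    axis) and record their distance/direction as labels for the image.
    `detections` holds (object_id, distance_m, bearing_rad) tuples from the
    emitting distance sensor."""
    half_fov = camera_fov_rad / 2.0
    labels = [
        {"object_id": oid, "distance_m": dist, "direction_rad": bearing}
        for oid, dist, bearing in detections
        if abs(bearing) <= half_fov
    ]
    return {"image_id": image_id, "labels": labels}
```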



FIG. 6 is a diagram illustrating an example of predicting object properties. In the example shown, analyzed vision data 601 represents the perspective of image data captured from a vision sensor, such as a forward-facing camera, of an autonomous vehicle. In some embodiments, the vision sensor is one of vision sensors 101 of FIG. 1. In some embodiments, the vehicle's forward environment is captured and processed using a deep learning system such as deep learning system 100 of FIG. 1. In various embodiments, the process illustrated in FIG. 6 is performed at 307, 309, and/or 311 of FIG. 3 and/or at 401, 403, 405, 407, and/or 409 of FIG. 4.


In the example shown, analyzed vision data 601 captures the forward-facing environment of an autonomous vehicle. Analyzed vision data 601 includes detected vehicle lane lines 603, 605, 607, and 609. In some embodiments, the vehicle lane lines are identified using a deep learning system such as deep learning system 100 of FIG. 1 trained to identify driving features. Analyzed vision data 601 also includes bounding boxes 611, 613, 615, 617, and 619 that correspond to detected objects. In various embodiments, the detected objects represented by bounding boxes 611, 613, 615, 617, and 619 are identified by analyzing captured vision data. Using the captured vision data as input to a trained machine learning model, object properties such as distances and directions of the detected objects are predicted. In some embodiments, velocity vectors are predicted. In the example shown, the detected objects of bounding boxes 611, 613, 615, 617, and 619 correspond to neighboring vehicles. Bounding boxes 611, 613, and 617 correspond to vehicles in the lane defined by vehicle lane lines 603 and 605. Bounding boxes 615 and 619 correspond to vehicles in the merging lane defined by vehicle lane lines 607 and 609. In some embodiments, bounding boxes used to represent detected objects are three-dimensional bounding boxes (not shown).


In various embodiments, the object properties predicted for bounding boxes 611, 613, 615, 617, and 619 are predicted by applying a machine learning model trained using the processes of FIGS. 2-4. The predicted object properties may correspond to those captured using auxiliary sensors as depicted in the diagram of FIG. 5. Although FIG. 5 and FIG. 6 depict different driving scenarios (FIG. 5 depicts a different number of detected objects, in different positions, compared to FIG. 6), a trained machine learning model can accurately predict object properties for the objects detected in the scenario of FIG. 6 when trained on sufficient training data. In some embodiments, the distance and direction are predicted. In some embodiments, the velocity is predicted. The properties may be predicted in two or three dimensions. By automating the generation of training data using the processes described with respect to FIGS. 1-6, training data for accurate predictions is generated in an efficient and expedient manner. In some embodiments, the identified objects and corresponding properties can be used to implement autonomous driving features such as self-driving or driver-assisted operation of a vehicle. For example, a vehicle's steering and speed may be controlled to maintain the vehicle safely between two lane lines and at a safe distance from other objects.


Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.

Claims
  • 1. A method implemented by a processor included in a vehicle, the method comprising: receiving a time series training set comprising a plurality of images captured over a period of time, the images depicting an object proximate to a vehicle and being associated with respective timestamps, wherein the time series training set is associated with label information indicating, at least, respective distances of the object with respect to the vehicle and auxiliary data associated with the vehicle; training a machine learning model based on the time series training set; and providing the machine learning model for execution by one or more other vehicles, wherein the machine learning model is configured to output distance information associated with objects.
  • 2. The method of claim 1, wherein the auxiliary data comprises velocities of the vehicle.
  • 3. The method of claim 1, wherein the auxiliary data comprises headings of the vehicle.
  • 4. The method of claim 1, wherein the trained machine learning model is configured to generate output without using distance information from emitting sensors.
  • 5. The method of claim 1, wherein the machine learning model is configured to output velocity information associated with objects.
  • 6. A system comprising one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors cause the processors to perform operations comprising: receiving a time series training set comprising a plurality of images captured over a period of time, the images depicting an object proximate to a vehicle and being associated with respective timestamps, wherein the time series training set is associated with label information indicating, at least, respective distances of the object with respect to the vehicle and auxiliary data associated with the vehicle; training a machine learning model based on the time series training set; and providing the machine learning model for execution by one or more other vehicles, wherein the machine learning model is configured to output distance information associated with objects.
  • 7. The system of claim 6, wherein the auxiliary data comprises velocities of the vehicle.
  • 8. The system of claim 6, wherein the auxiliary data comprises headings of the vehicle.
  • 9. The system of claim 6, wherein the trained machine learning model is configured to generate output without using distance information from emitting sensors.
  • 10. The system of claim 6, wherein the machine learning model is configured to output velocity information associated with objects.
  • 11. Non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the processors to perform operations comprising: receiving a time series training set comprising a plurality of images captured over a period of time, the images depicting an object proximate to a vehicle and being associated with respective timestamps, wherein the time series training set is associated with label information indicating, at least, respective distances of the object with respect to the vehicle and auxiliary data associated with the vehicle; training a machine learning model based on the time series training set; and providing the machine learning model for execution by one or more other vehicles, wherein the machine learning model is configured to output distance information associated with objects.
  • 12. The computer storage media of claim 11, wherein the auxiliary data comprises velocities of the vehicle.
  • 13. The computer storage media of claim 11, wherein the auxiliary data comprises headings of the vehicle.
  • 14. The computer storage media of claim 11, wherein the trained machine learning model is configured to generate output without using distance information from emitting sensors.
  • 15. The computer storage media of claim 11, wherein the machine learning model is configured to output velocity information associated with objects.
Related Publications (1)
Number Date Country
20220284712 A1 Sep 2022 US
Continuations (2)
Number Date Country
Parent 17249110 Feb 2021 US
Child 17656183 US
Parent 16279657 Feb 2019 US
Child 17249110 US