DATA INTEGRATION FROM MULTIPLE SENSORS

Information

  • Publication Number
    20230399006
  • Date Filed
    August 01, 2023
  • Date Published
    December 14, 2023
Abstract
Disclosed are methods and devices related to autonomous driving. In one aspect, a method is disclosed. The method includes determining three-dimensional bounding indicators for one or more first objects in road target information captured by a light detection and ranging (LIDAR) sensor; determining camera bounding indicators for one or more second objects in road image information captured by a camera sensor; processing the road image information to generate a camera matrix; determining projected bounding indicators from the camera matrix and the three-dimensional bounding indicators; determining, from the projected bounding indicators and the camera bounding indicators, associations between the one or more first objects and the one or more second objects to generate combined target information; and applying, by the autonomous driving system, the combined target information to produce a vehicle control signal.
Description
TECHNICAL FIELD

This document relates to autonomous driving.


BACKGROUND

Autonomous driving uses sensor and processing systems that take in the environment surrounding the autonomous vehicle and make decisions that ensure the safety of the autonomous vehicle and surrounding vehicles. The sensors should accurately determine distances to, and velocities of, potentially interfering vehicles as well as other movable and immovable objects. New techniques are needed to test and train autonomous vehicle systems on realistic driving data taken from actual sensors.


SUMMARY

The disclosed subject matter is related to autonomous driving and in particular to perception systems for autonomous driving and data used to benchmark and train perception systems. In one aspect, a method of generating and applying driving data to an autonomous driving system is disclosed. The method includes determining three-dimensional bounding indicators for one or more first objects in road target information captured by a light detection and ranging (LIDAR) sensor; determining camera bounding indicators for one or more second objects in road image information captured by a camera sensor; processing the road image information to generate a camera matrix; determining projected bounding indicators from the camera matrix and the three-dimensional bounding indicators; determining, from the projected bounding indicators and the camera bounding indicators, associations between the one or more first objects and the one or more second objects to generate combined target information; and applying, by the autonomous driving system, the combined target information to produce a vehicle control signal.


In another aspect, an apparatus is disclosed for generating and applying data to an autonomous driving system. The apparatus includes at least one processor and memory including executable instructions that when executed perform the foregoing method.


In another aspect, a non-transitory computer readable medium is disclosed for storing executable instructions for generating and applying data to an autonomous driving system. When the executable instructions are executed by at least one processor, they perform the foregoing method.


The following features may be included in various combinations. The applying includes training the perception system, using a learning method such as a machine perception system, a computer vision system, a deep learning system, etc., that works by using previously collected road image information and previously collected road target information, wherein the training causes changes in the perception system before the perception system is used in a live traffic environment. The applying includes testing the perception system to evaluate a performance of the perception system without the perception system operating in a live traffic environment, wherein the testing results in the performance being evaluated as acceptable or unacceptable. The one or more first objects are the same as the one or more second objects. The camera matrix includes extrinsic matrix parameters and intrinsic matrix parameters. The intrinsic matrix parameters include one or more relationships between pixel coordinates and camera coordinates. The extrinsic matrix parameters include information about the camera's location and orientation. The camera bounding indicators and projected bounding indicators are two-dimensional. The associations are performed using an intersection over union (IOU) technique. The vehicle control signal includes one or more of vehicle steering, vehicle throttle, and vehicle braking control outputs.


The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of an exemplary system implementing the disclosed subject matter;



FIG. 2 is an example of a flow chart, in accordance with some embodiments;



FIG. 3A depicts an example of a camera image and bounding boxes generated around objects of interest, in accordance with some embodiments;



FIG. 3B depicts an example of a LIDAR image and bounding boxes generated around objects of interest, in accordance with some embodiments;



FIG. 4 is another example of a flow chart, in accordance with some embodiments; and



FIG. 5 depicts an example of a hardware platform, in accordance with some embodiments.





DETAILED DESCRIPTION

Autonomous driving systems may be tested or trained using sensor data gathered from multiple sensors. For example, the autonomous driving system may be trained using data previously collected from multiple sensors in actual driving scenarios. The sensors may include cameras, light detection and ranging (LIDAR) sensors, and/or other sensors. For example, road image information may be collected using a camera during a testing session and stored for later training of an autonomous driving system. This previously collected image data may be referred to as previously collected road image information. Similarly, road target information may be collected using a LIDAR sensor during the testing session and stored for later training of an autonomous driving system. This previously collected target data may be referred to as previously collected road target information.


Data from the sensors may be combined to generate “ground truth” data that can then be used to train the autonomous driving system. The ground truth data can also be used to test the performance of the autonomous driving system without the autonomous driving system driving on roads in real time, instead using the previously collected and stored camera and LIDAR information. “Ground truth” data is data representing objects within the sensors' fields of view that has highly accurate location, size, and velocity information for the vehicles and objects in the driving scenario.


For training or testing an autonomous driving system, a portion of the ground truth data may be provided to the autonomous driving system, and the remaining ground truth data may be used to evaluate the performance during testing or to provide feedback during training. For example, camera and LIDAR data from one vehicle may be provided to the autonomous driving system for training or testing, and camera and LIDAR data from another vehicle may be added to evaluate the autonomous driving system. After training the autonomous driving system, the system may be used to autonomously drive a vehicle in real time with traffic, which may be referred to as a live traffic environment.



FIG. 1 is an example diagram 100 showing vehicles 110, 112, 114, and 116 travelling on road 102 with vehicles 110 and 116 travelling at velocities 134 and 136, respectively. Vehicles 110 and 116 include forward looking cameras 124 and 132 and rear looking cameras 118 and 128. Vehicles 110 and 116 also include forward looking light detection and ranging (LIDAR) 122 and 130 and rear looking LIDAR 120 and 126, respectively.


Cameras 124, 132, 118, and 128 can be any type of video or still camera such as charge-coupled device (CCD) image sensors, complementary metal oxide semiconductor (CMOS) image sensors, or any other type of image sensor. In some implementations, the cameras may capture images over a wide range of distances from the vehicle to which the camera is mounted. For example, the camera may capture images of objects as near as 5 meters from the vehicle to which it is mounted and, in the same image, objects at distances up to or exceeding 1000 meters, as well as all distances in between. Cameras 124, 132, 118, and 128 may be pointed in a fixed direction relative to the corresponding vehicle or may be steerable via mechanical steering of the camera or optical steering.


LIDAR 122, 120, 126, and 130 include laser-based or light emitting diode (LED)-based optical sensors including mechanically or otherwise scanned optical sensors. LIDAR can determine the precise distance to objects within its field of view. The LIDAR scanning then enables a determination of the three-dimensional positions of the objects. For example, the LIDAR can determine the positions of various exposed portions of a nearby vehicle. In some implementations, the LIDAR sensors may be limited to shorter distances than the cameras. For example, some LIDAR sensors are limited to approximately 100 m. In accordance with the disclosed subject matter, the two-dimensional image data from the cameras can be mapped to, or associated with, the three-dimensional LIDAR data.


In the illustrative example of FIG. 1, vehicle 110 is moving with velocity 134. Vehicle 116 is moving with velocity 136, which may be the same velocity as vehicle 110. Vehicles 112 and 114 lie between vehicles 110 and 116 and may move with the same average velocity as vehicles 110 and/or 116 but may move closer to and farther from vehicles 110 and/or 116 over time. In the illustrative example of FIG. 1, vehicles 110 and 116 may be separated by 1000 m on road 102, vehicle 112 may be 50 m in front of vehicle 110 (and 950 m from vehicle 116), and vehicle 114 may be 250 m ahead of vehicle 110 (and 750 m behind vehicle 116). Camera 124 captures images/video of vehicles 112, 114, and 116. LIDAR 122 determines a precise distance (also referred to as a range) to various exposed portions of vehicle 112 and three-dimensional shape information about vehicle 112. In this illustrative example, vehicle 114 is out of range of LIDAR 122 but is captured by camera 124. Rear facing camera 128 captures images of vehicle 114, vehicle 112, and vehicle 110. Vehicles 114 and 112 are out of range of rear facing LIDAR 126, which, in this example, has a range of 100 m.


In the example of FIG. 1, camera 124 captures images of vehicles 112, 114, and 116. Image processing may be used to determine camera bounding boxes of objects in the images. Camera bounding boxes are determined as outlines of the largest portions of an object in view. For example, the camera bounding box for a delivery truck, or box truck, can be an outline of the back of the box truck. If multiple vehicles or objects are in an image, camera bounding boxes for each vehicle or object may be determined. Camera images are by their nature two-dimensional representations of the view seen by the camera.


Bounding boxes may also be determined from LIDAR data. For each object in view of LIDAR 122, a three-dimensional bounding box (also referred to herein as a LIDAR bounding box) may be determined from the LIDAR data. The LIDAR data also includes accurate distance (range) and velocity information between the LIDAR and the in-view portions of each target object. Further image processing may be used to associate the bounding boxes generated from camera 124 with the three-dimensional bounding boxes generated from LIDAR 122 to create ground truth data. For example, an intersection over union (IOU) method may be used to associate the road image bounding boxes with the three-dimensional bounding boxes. Other techniques may be used in addition to or in place of IOU to associate bounding boxes.


As used herein, a bounding box may also be referred to as a bounding indicator that is a boundary related to an object in a sensor result such as a camera image, a LIDAR image, a RADAR image, or another detection result of a sensor. The boundary (e.g., bounding box and/or bounding indicator) may have a shape such as a rectangle, square, circle, trapezoid, parallelogram, or any other shape. Stated another way, a bounding box may be shaped in a manner that is rectangular in some embodiments and that is not rectangular in other embodiments (e.g., as a circle or other non-rectangular shape). In some implementations, the boundary (e.g., bounding box and/or bounding indicator) may not have a named shape, or may follow the boundary of an arbitrarily-shaped object. The boundary (e.g., bounding box and/or bounding indicator) may be a two-dimensional boundary or a three-dimensional boundary of an object. The boundary (e.g., bounding box and/or bounding indicator) may be a voxel or a segmentation mask or map.
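To make the notion of a bounding indicator concrete, the following is a minimal Python sketch of how a two-dimensional camera bounding box and a three-dimensional LIDAR bounding box might be represented. The class names, fields, and corner ordering are illustrative assumptions and are not specified by this disclosure.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class CameraBox2D:
    """Hypothetical 2D bounding indicator in pixel coordinates (axis-aligned)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float


@dataclass
class LidarBox3D:
    """Hypothetical 3D bounding indicator in world coordinates."""
    center: np.ndarray   # shape (3,): x, y, z of the box center
    size: np.ndarray     # shape (3,): length, width, height
    yaw: float           # heading angle about the vertical axis, in radians

    def corners(self) -> np.ndarray:
        """Return the 8 box corners as an (8, 3) array in world coordinates."""
        l, w, h = self.size
        # Corner offsets in the box-local frame, before rotation.
        x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * (l / 2)
        y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * (w / 2)
        z = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * (h / 2)
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about z (yaw)
        return (rot @ np.vstack([x, y, z])).T + self.center
```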


Rear facing camera 128 captures images of vehicles 114, 112, and 110. Image processing may be used to determine camera bounding boxes of objects in the images. In the foregoing example, vehicle 114 is 750 m behind vehicle 116 and thus out of range of rear facing LIDAR 126 on vehicle 116. While camera and LIDAR data are being captured, vehicles 116 and 114 may move closer together and to within 100 m of each other, at which point data from LIDAR 126 related to vehicle 114 may be collected. In some example embodiments, the LIDARs 120, 122, 126, and 130 may operate at ranges beyond 100 m, such as 200 m or longer.


Image processing may also be used to associate the three-dimensional bounding boxes generated from LIDAR 126 to the objects viewable from the back of vehicle 116. In some example embodiments, LIDAR data including three-dimensional bounding boxes generated from LIDAR 126 on vehicle 116 can be combined with the image data from camera 124 and LIDAR sensor 122 on vehicle 110 to create ground truth data. For example, LIDAR data from LIDAR sensor 126 can be combined with the image data from camera 124 to improve the ground truth of the data that includes camera data from camera 124 and LIDAR 122.


For example, with vehicle 116 located 1000 m from vehicle 110, camera 124 having a 1000 m viewing range, and LIDARs 122 and 126 each having a 100 m viewing range, ground truth data can be determined from the position of vehicle 110 out to 100 m away via LIDAR 122, image-only data can be generated at distances between 100 m and 900 m via camera 124, and ground truth may be determined between 900 m and 1000 m by combining camera 124 image data and LIDAR data from LIDAR 126. Image data from camera 128 may also be used in generating ground truth data.


In some example embodiments, the distance between vehicle 110 and vehicle 116 is varied over time as the image and LIDAR data is collected, or the distance is varied over successive runs of data collection. In one example, the distance between vehicles 110 and 116 is varied during a single data collection run. In another example, the distance between vehicles 110 and 116 is one value for one run, the data collection is stopped, the distance is changed, and data collection starts again. The data from the successive runs can be combined later using post processing.


In some example embodiments, bounding boxes generated from camera 124 image data can be associated with bounding boxes generated from LIDAR 122 data. Similarly, bounding boxes generated from camera 118 image data can be associated with bounding boxes generated from LIDAR 120, bounding boxes generated from camera 128 image data can be associated with bounding boxes generated from LIDAR 126 data, and bounding boxes generated from camera 132 image data can be associated with bounding boxes generated from LIDAR 130 data.



FIG. 2 depicts an example of a process, in accordance with some embodiments. At step 205, a LIDAR image is received from a LIDAR sensor. At step 210, a camera image is received from a camera sensor. At step 215, objects are detected in the received LIDAR image. At step 225, objects are detected in the received camera image. Image detection includes identifying objects in the field of view of the camera and identifying objects in the field of view of the LIDAR sensor. Image detection may involve image processing performed by a computer and may include traditional processing, parallel processing or learning techniques.


At step 220, a camera matrix is determined based on the camera image. The camera matrix can be applied to transform three-dimensional (3D) images from the LIDAR sensor into a two-dimensional (2D) projection. For example, the process can determine the precise 3D bounding boxes in world coordinates from the LIDAR image. Then the process can use the camera extrinsic matrix to transform the coordinates from world coordinates into camera coordinates. Finally, the process may use the camera intrinsic matrix to transform the coordinates from camera coordinates to the image plane (also referred to as pixel coordinates), where the coordinates are two-dimensional. The camera matrices can be obtained from offline calibration.
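As an illustration of the projection described above, the following is a minimal sketch assuming a pinhole camera model, an extrinsic matrix given as a 3x4 [R|t] transform from world coordinates to camera coordinates, and a 3x3 intrinsic matrix K. The function names and the final enclosing-rectangle step are illustrative assumptions and not mandated by this disclosure.

```python
import numpy as np


def project_world_points_to_pixels(points_world: np.ndarray,
                                   extrinsic: np.ndarray,
                                   intrinsic: np.ndarray) -> np.ndarray:
    """Project (N, 3) world-coordinate points into (N, 2) pixel coordinates.

    extrinsic: 3x4 matrix [R | t] mapping world coordinates to camera coordinates.
    intrinsic: 3x3 camera matrix K (focal lengths and principal point).
    Assumes all points lie in front of the camera (positive depth).
    """
    n = points_world.shape[0]
    # Homogeneous world coordinates, shape (4, N).
    pts_h = np.vstack([points_world.T, np.ones((1, n))])
    # World -> camera coordinates (3, N), then camera -> image plane.
    pts_cam = extrinsic @ pts_h
    pts_img = intrinsic @ pts_cam
    # Perspective divide by depth to obtain pixel coordinates.
    return (pts_img[:2] / pts_img[2]).T


def project_box_to_2d(corners_world: np.ndarray,
                      extrinsic: np.ndarray,
                      intrinsic: np.ndarray):
    """Project the 8 corners of a 3D LIDAR box and take the enclosing 2D box."""
    px = project_world_points_to_pixels(corners_world, extrinsic, intrinsic)
    x_min, y_min = px.min(axis=0)
    x_max, y_max = px.max(axis=0)
    return x_min, y_min, x_max, y_max
```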


At step 230, bounding boxes are generated from the detected objects in the LIDAR image. At step 240, the 3D LIDAR images with LIDAR bounding boxes are projected into 2D using the camera matrix determined at step 220. At step 235, camera image bounding boxes are generated from the object detection in the camera image. At step 250, the LIDAR projected bounding boxes are associated with the camera image bounding boxes. The association can use an intersection over union (IOU) technique or another technique. At step 255, ground truth data is generated based on association of the projected bounding boxes from the LIDAR image with the camera bounding boxes.


In some example embodiments, bounding boxes are determined without performing LIDAR or camera image detection, or image detection is performed after bounding boxes are determined. For example, LIDAR bounding boxes generated at step 230 may be determined directly from a LIDAR image received at step 205 and/or camera image bounding boxes generated at step 235 may be determined directly from a camera image received at step 210. The bounding boxes may be determined, for example, by determining areas in an image that move together in successive frames without detecting any corresponding objects in those areas. Later, the detection of objects may be performed on the images within the bounding boxes, or in some implementations no object detection is performed.
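One loose interpretation of determining bounding boxes from areas that change between successive frames, without object detection, is sketched below using simple frame differencing and connected-component labeling. This is only a rough illustration under stated assumptions: it ignores ego-motion, and all names and thresholds are chosen arbitrarily rather than taken from this disclosure.

```python
import numpy as np
from scipy import ndimage


def motion_boxes(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 diff_threshold: float = 25.0, min_area: int = 50):
    """Rough sketch: propose 2D boxes around image regions that changed between
    two grayscale frames, without running any object detector (hypothetical)."""
    moving = np.abs(curr_frame.astype(float) - prev_frame.astype(float)) > diff_threshold
    labels, _ = ndimage.label(moving)        # group changed pixels into connected regions
    boxes = []
    for region in ndimage.find_objects(labels):
        ys, xs = region                      # row and column slices of one region
        if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))  # (x_min, y_min, x_max, y_max)
    return boxes
```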



FIG. 3A depicts examples of bounding boxes overlaid on a camera image. Bounding boxes 310, 320, and 330 are generated from LIDAR data and projected into the image. Bounding boxes 305, 315, and 325 are generated from camera data. An association performed using, for example, IOU, has determined that bounding boxes 305 and 310 correspond to the same object—the semi-truck, that bounding boxes 315 and 320 correspond to the same object—a first pick-up truck, and that bounding boxes 325 and 330 correspond to the same object—a second pick-up truck.



FIG. 3B depicts examples of bounding boxes generated from LIDAR data. Bounding boxes 340, 345, and 350 are examples of LIDAR bounding boxes before association with camera bounding boxes is performed.


In some implementations, two vehicles may be used to collect data simultaneously. In the example of FIG. 1, vehicle 116 may be referred to as a target vehicle and is in front of vehicle 110, which may be referred to as a source vehicle. Data from the two vehicles may be used to determine ground truth data for objects in the images from the camera of the source vehicle. In some implementations, the target vehicle has a LIDAR and the source vehicle has cameras. During data collection, the distance between the target vehicle and the source vehicle can be changed dynamically in order to acquire ground truth data at different ranges.


In some implementations, after the data is collected, the LIDAR detection and image detection are performed on the collected data via post processing. In this way, 3D information for each object in the LIDAR data and the position of each object in the image can be determined.


In some implementations, after object detection in the camera images and object detection in the LIDAR images, one or more associations are made between the objects detected in the LIDAR images and objects detected in the camera images.


In some example embodiments, since the bounding boxes for each object are generated in 3D space from LIDAR data, the 3D bounding boxes can be projected into a 2D image plane through the camera extrinsic and intrinsic matrices. In this way, 3D bounding boxes from LIDAR data can be reflected in the image from the camera of the source vehicle. As such, two types of bounding boxes appear in the image: the original bounding boxes from image detection and the projected bounding boxes from the LIDAR data.


The two types of bounding boxes are then associated with one another to determine how similar they are. A technique such as intersection over union (IOU) can be used as a metric to find an association value representative of the similarity between the two bounding boxes.


Intersection over union may be expressed as the area of intersection between two bounding boxes divided by the area of the union of the two bounding boxes. For example, the IOU value for two bounding boxes that overlap exactly is 1, and the IOU value for two bounding boxes that do not overlap at all is 0. Partially overlapping bounding boxes produce IOU values between 0 and 1. The more related or associated the bounding boxes are, the closer the IOU value is to 1.
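A minimal sketch of the IOU metric described above, together with one possible greedy, threshold-based way of associating projected LIDAR boxes with camera boxes (e.g., step 250 of FIG. 2), follows. Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples; the threshold value and the greedy matching strategy are illustrative assumptions only.

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    inter_w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    inter_h = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = inter_w * inter_h
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0


def associate(projected_boxes, camera_boxes, iou_threshold=0.5):
    """Greedily pair each projected LIDAR box with its best-overlapping camera box."""
    pairs, used = [], set()
    for i, pbox in enumerate(projected_boxes):
        best_j, best_iou = None, iou_threshold
        for j, cbox in enumerate(camera_boxes):
            if j in used:
                continue
            score = iou(pbox, cbox)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            pairs.append((i, best_j, best_iou))  # matched pair and its IOU value
            used.add(best_j)
    return pairs
```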


Another example of a technique for determining the relatedness of two bounding boxes is the Dice coefficient. The Dice coefficient is 2 times the area of overlap of the two bounding boxes divided by the sum of the number of pixels in each of the two bounding boxes. Another technique is the generalized intersection over union (GIOU). Other techniques can also be used.
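For comparison, minimal sketches of the Dice coefficient and generalized IOU for the same (x_min, y_min, x_max, y_max) box representation are shown below. These are illustrative implementations of the general formulas, not code from this disclosure.

```python
def dice(box_a, box_b) -> float:
    """Dice coefficient: 2 * intersection area / (area of A + area of B)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    inter = max(0.0, min(ax1, bx1) - max(ax0, bx0)) * max(0.0, min(ay1, by1) - max(ay0, by0))
    total = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0)
    return 2.0 * inter / total if total > 0 else 0.0


def giou(box_a, box_b) -> float:
    """Generalized IOU: IOU minus the fraction of the smallest enclosing box not covered by the union."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    inter = max(0.0, min(ax1, bx1) - max(ax0, bx0)) * max(0.0, min(ay1, by1) - max(ay0, by0))
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    iou_val = inter / union if union > 0 else 0.0
    # Smallest axis-aligned box enclosing both input boxes.
    enc = (max(ax1, bx1) - min(ax0, bx0)) * (max(ay1, by1) - min(ay0, by0))
    return iou_val - (enc - union) / enc if enc > 0 else iou_val
```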



FIG. 4 depicts another process 400, in accordance with some example embodiments. The process includes various features described above. At step 430, the process includes determining three-dimensional bounding boxes for one or more first objects in road target information captured by a light detection and ranging (LIDAR) sensor.


At step 440, the process includes determining camera bounding boxes for one or more second objects in road image information captured by a camera sensor. In some cases, the first objects and the second objects may be entirely different from each other. In some cases, some of the first objects may be the same as some of the second objects. For example, the objects may be other cars on the road, traffic signs, pedestrians, road markers, buildings, curbs, and so on. As another example, the objects may be left and right turn signals of vehicles in front, braking lights of nearby vehicles, and so on.


At step 450, the process includes processing the road image information to generate a camera matrix. In various embodiments, the camera matrix may include intrinsic and/or extrinsic parameters, as described in the present document.


At step 460, the process includes determining projected bounding boxes from the camera matrix and the three-dimensional bounding boxes. In some embodiments, the camera bounding boxes and projected bounding boxes are two-dimensional.


At step 470, the process includes determining, from the projected bounding boxes and the camera bounding boxes, associations between the one or more first objects and the one or more second objects to generate combined target information. The present application discloses using various association techniques such as the IOU technique.


At step 480, the process includes applying, by the autonomous driving system, the combined target information to produce a vehicle control signal. The vehicle control signal may be a signal that is used for navigating the vehicle and/or for controlling another operation of the vehicle. Some examples include: a steering control that controls the direction in which the vehicle moves, a vehicle throttle control that controls the amount of fuel supplied to the engine, a vehicle braking control output that controls the amount of braking applied to reduce the speed of the vehicle, and so on.
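Purely for illustration, a vehicle control signal such as the one described above might be represented by a small structure like the following. The field names, units, and value ranges are assumptions and are not defined by this disclosure.

```python
from dataclasses import dataclass


@dataclass
class VehicleControlSignal:
    """Hypothetical control outputs produced from the combined target information."""
    steering_angle: float  # e.g., radians; sign convention for left/right is assumed
    throttle: float        # e.g., normalized 0.0 (no throttle) to 1.0 (full throttle)
    brake: float           # e.g., normalized 0.0 (no braking) to 1.0 (full braking)
```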


In some embodiments, the foregoing applying operation may include training a perception system using previously collected road image information and previously collected road target information. In some embodiments, the training may result in changes in the perception system (e.g., newly trained decision algorithms or a different flow of logic) before the perception system is used in a live traffic environment. For example, a live traffic environment may present to the perception system patterns and situations that have not previously been seen by the perception system and may result in further training of the perception system. A live traffic environment may also impose processing time constraints on the perception system to process data and produce the vehicle control signal.


In some embodiments, the applying operation may include testing a perception system to evaluate a performance of the perception system without the perception system operating in a live traffic environment. For example, the testing results in the performance being evaluated as acceptable or unacceptable. The evaluation may be performed by a computing system using one or more metrics applied to a response of the perception system to make the evaluation, or a human user may make the evaluation.
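One way such an offline evaluation might be scripted is sketched below, using recall of ground-truth boxes at an IOU threshold as the metric and a fixed acceptance threshold. Both the metric and the threshold values are illustrative assumptions rather than requirements of the disclosure.

```python
def evaluate_perception(predicted_boxes, ground_truth_boxes,
                        iou_threshold=0.5, required_recall=0.9) -> bool:
    """Return True (acceptable) if enough ground-truth boxes are matched by predictions."""
    def iou(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        inter = max(0.0, min(ax1, bx1) - max(ax0, bx0)) * max(0.0, min(ay1, by1) - max(ay0, by0))
        union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
        return inter / union if union > 0 else 0.0

    if not ground_truth_boxes:
        return True  # nothing to detect in this sample
    matched = sum(
        1 for gt in ground_truth_boxes
        if any(iou(gt, pred) >= iou_threshold for pred in predicted_boxes)
    )
    recall = matched / len(ground_truth_boxes)
    return recall >= required_recall
```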



FIG. 5 depicts an example of a hardware platform 500 that can be used to implement some of the techniques described in the present document. For example, the hardware platform 500 may implement the process 400, or other processes described above, and/or may implement the various modules described herein. The hardware platform 500 may include a processor 502 that can execute code to implement a method. The hardware platform 500 may include a memory 504 that may be used to store processor-executable code and/or store data. The hardware platform 500 may further include a communication interface 506. For example, the communication interface 506 may implement one or more wired or wireless communication protocols (Ethernet, LTE, Wi-Fi, Bluetooth, and so on).


Benefits of the disclosed techniques include the following. First, ground truth (3D information) data can be collected and generated for camera-based perception, which can be used for training, testing, and benchmarking. Second, by using two data collection vehicles, ground truth data can be generated for long ranges (e.g., in a radius of around 1,000 meters), which is not limited to the short ranges of typical LIDAR perception. Moreover, ground truth data can not only be generated in front of the source vehicle but can also be generated for areas surrounding the source vehicle by adjusting the relative position of the two vehicles over time during data collection. Furthermore, more than two vehicles (i.e., vehicles like 110 and 116 with cameras and LIDAR sensors) can be used for data collection, so that a longer range of ground truth data can be generated. In this situation, there is one source vehicle and all the others are target vehicles.


It will be appreciated that various techniques for training a perception system for autonomous driving are disclosed. In some implementations, known road and driving data is used for training and testing (benchmarking) for the perception system in the autonomous driving pipeline. Machine learning is one of the technologies used in training the perception system. In some implementations where a different set of computational resources are available, deep learning or computer vision may be used for the training and testing.


Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, semiconductor devices, ultrasonic devices, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of aspects of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.


A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.


The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.


Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.


Only a few implementations and examples are described, and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims
  • 1.-20. (canceled)
  • 21. A system for generating autonomous driving data, comprising: a first vehicle that comprises a first light detection and ranging (LIDAR) sensor, a second vehicle that comprises a second camera sensor, the second vehicle being at a first distance from the first vehicle while the first vehicle and the second vehicle are in motion, and at least one processor, wherein the first LIDAR sensor is configured to capture three-dimensional data of a first moving object positioned between the first vehicle and the second vehicle while the first vehicle and the second vehicle are in motion, wherein the second camera sensor is configured to capture two-dimensional data of the first moving object, and wherein the at least one processor is configured to generate a first set of target data for the first moving object by combining data captured by the first LIDAR sensor of the first vehicle and data captured by the second camera sensor of the second vehicle.
  • 22. The system of claim 21, wherein the first vehicle is positioned in front of the second vehicle while the first vehicle and the second vehicle are in motion, and wherein the first LIDAR sensor comprises a rear facing LIDAR sensor.
  • 23. The system of claim 21, wherein the first vehicle further comprises a first camera sensor, the first camera sensor being a rear facing camera sensor, and wherein the second vehicle further comprises a second LIDAR sensor.
  • 24. The system of claim 21, wherein a second moving object is positioned between the first vehicle and the second vehicle while the first vehicle and the second vehicle are in motion, and wherein the second moving object is positioned outside of a range of the first LIDAR sensor, and wherein the at least one processor is configured to: generate target data for the second moving object based on data captured by the second camera sensor of the second vehicle only.
  • 25. The system of claim 21, wherein a third moving object is positioned between the first vehicle and the second vehicle while the first vehicle and the second vehicle are in motion, and wherein the third moving object is positioned outside of a range of the second camera sensor, and wherein the at least one processor is configured to: generate target data for the third moving object based on data captured by the first LIDAR sensor of the first vehicle only.
  • 26. The system of claim 21, wherein the at least one processor is configured to combine the data captured by the first LIDAR sensor of the first vehicle and the data captured by the second camera sensor of the second vehicle based on: determining three-dimensional bounding information of the at least one moving object based on the data captured by the first LIDAR sensor of the first vehicle; determining two-dimensional bounding information of the at least one moving object based on the data captured by the second camera sensor of the second vehicle; and associating the three-dimensional bounding information with the two-dimensional bounding information.
  • 27. The system of claim 26, wherein the at least one processor is configured to associate the three-dimensional bounding information with the two-dimensional bounding information using one of an intersection over union technique, a dice coefficient, or a generalized intersection over union technique.
  • 28. The system of claim 26, wherein the at least one processor is configured to associate the three-dimensional bounding information with the two-dimensional bounding information based on: determining a camera matrix based on the two-dimensional data captured by the second camera sensor of the second vehicle, and applying the camera matrix to the three-dimensional data captured by the first LIDAR sensor of the first vehicle.
  • 29. The system of claim 21, wherein the first vehicle and the second vehicle are at different distances from each other while the first vehicle and the second vehicle are in motion, and wherein the at least one processor is configured to generate different sets of target data for the first moving object to obtain ground truth data representing the first moving object.
  • 30. The system of claim 21, wherein the first distance between the first vehicle and the second vehicle is greater than a range of the first LIDAR sensor and is also greater than a range of the second camera sensor.
  • 31. The system of claim 21, wherein the at least one processor is configured to generate a vehicle control signal based on the first set of target data.
  • 32. A method for generating autonomous driving data, comprising: operating a first vehicle and a second vehicle at a first distance from each other on a road, wherein the first vehicle comprises a first light detection and ranging (LIDAR) sensor and the second vehicle comprises a second camera sensor, wherein at least one moving object is positioned between the first vehicle and the second vehicle while the first vehicle and the second vehicle are in motion; and generating a first set of target data for the at least one moving object by combining data captured by the first LIDAR sensor of the first vehicle and data captured by the second camera sensor of the second vehicle.
  • 33. The method of claim 32, further comprising: operating the first vehicle and the second vehicle at a second distance from each other on the road; and generating a second set of target data for the at least one moving object by combining data captured by the first LIDAR sensor of the first vehicle and data captured by the second camera sensor of the second vehicle.
  • 34. The method of claim 33, further comprising: obtaining ground truth data representing the at least one moving object by combining the first set of target data and the second set of target data.
  • 35. The method of claim 32, wherein the combining of the data captured by the first LIDAR sensor of the first vehicle and the data captured by the second camera sensor of the second vehicle comprises: determining three-dimensional bounding information of the at least one moving object based on the data captured by the first LIDAR sensor of the first vehicle; determining two-dimensional bounding information of the at least one moving object based on the data captured by the second camera sensor of the second vehicle; and associating the three-dimensional bounding information with the two-dimensional bounding information.
  • 36. The method of claim 35, wherein the associating of the three-dimensional bounding information with the two-dimensional bounding information comprises applying one of an intersection over union technique, a dice coefficient, or a generalized intersection over union technique.
  • 37. The method of claim 35, wherein the associating of the three-dimensional bounding information with the two-dimensional bounding information comprises: determining a camera matrix based on the two-dimensional data captured by the second camera sensor of the second vehicle, and applying the camera matrix to the three-dimensional data captured by the first LIDAR sensor of the first vehicle.
  • 38. The method of claim 37, wherein the camera matrix includes extrinsic matrix parameters and intrinsic matrix parameters.
  • 39. The method of claim 38, wherein the intrinsic matrix parameters include one or more relationships between pixel coordinates and camera coordinates, wherein the extrinsic matrix parameters include information about location and orientation of the second camera sensor.
  • 40. The method of claim 32, further comprising: generating a vehicle control signal based on the first set of target data.
Priority Claims (1)
Number Date Country Kind
201911306101.0 Dec 2019 CN national
CROSS-REFERENCE TO RELATED APPLICATION

This patent document is a continuation of U.S. patent application Ser. No. 16/813,180, filed on Mar. 9, 2020, which claims priority to and the benefit of Chinese Patent Application No. 201911306101.0, filed on Dec. 17, 2019. The entire contents of the aforementioned patent applications are incorporated herein by reference in their entireties.

Continuations (1)
Number Date Country
Parent 16813180 Mar 2020 US
Child 18363541 US