The section headings used herein are for organizational purposes only and should not be construed as limiting the subject matter described in the present application in any way.
Autonomous, self-driving, and semi-autonomous automobiles use a combination of different sensors and technologies, such as radar, image-recognition cameras, and sonar, for detection and location of surrounding objects. These sensors enable a host of improvements in driver safety, including collision warning, automatic emergency braking, lane-departure warning, lane-keeping assistance, adaptive cruise control, and piloted driving. Among these sensor technologies, light detection and ranging (LIDAR) systems play a critical role, enabling real-time, high-resolution 3D mapping of the surrounding environment.
Most commercially available LIDAR systems used for autonomous vehicles today utilize a small number of lasers, combined with some method of mechanically scanning the environment. It is highly desired that future autonomous automobiles utilize solid-state semiconductor-based LIDAR systems with high reliability and wide environmental operating ranges.
The present teaching, in accordance with preferred and exemplary embodiments, together with further advantages thereof, is more particularly described in the following detailed description, taken in conjunction with the accompanying drawings. A person skilled in the art will understand that the drawings, described below, are for illustration purposes only. The drawings are not necessarily to scale; emphasis instead is generally placed upon illustrating principles of the teaching. The drawings are not intended to limit the scope of the Applicant's teaching in any way.
The present teaching will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present teaching is described in conjunction with various embodiments and examples, it is not intended that the present teaching be limited to such embodiments. On the contrary, the present teaching encompasses various alternatives, modifications and equivalents, as will be appreciated by those of skill in the art. Those of ordinary skill in the art having access to the teaching herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the teaching. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be understood that the individual steps of the method of the present teaching can be performed in any order and/or simultaneously as long as the teaching remains operable. Furthermore, it should be understood that the apparatus and method of the present teaching can include any number or all of the described embodiments as long as the teaching remains operable.
The present teaching relates generally to Light Detection and Ranging (LIDAR), which is a remote sensing method that uses laser light to measure distances (ranges) to objects. LIDAR systems generally measure distances to various objects or targets that reflect and/or scatter light. Autonomous vehicles make use of LIDAR systems to generate a highly accurate 3D map of the surrounding environment with fine resolution. The systems and methods described herein are directed towards providing a solid-state, pulsed time-of-flight (TOF) LIDAR system with high levels of reliability, while also maintaining long measurement range as well as low cost.
In particular, the present teaching relates to LIDAR systems that send out a laser pulse of short time duration, and use direct detection of the return pulse, in the form of a received return signal trace, to measure TOF to the object. The present teaching also relates to systems in which the transmitter and receiver optics are physically separated from each other in some fashion.
The transmitter 202 projects light within a field-of-view (FOV) corresponding to the angle between ray A 216 and ray C 218 in the diagram. The transmitter contains a laser array, where a subset of the laser array can be activated for a measurement. The transmitter does not emit light uniformly across the full FOV during a single measurement, but instead emits light within only a portion of the field of view. More specifically, the rays A 216, B 220, and C 218 each form a center axis for individual laser beams that have some divergence, or cone angle, around that axis. That is, ray B 220 is the same as the optical axis 212 of the transmitter 202. In some embodiments, each ray 216, 218, 220 can be associated nominally with light from a single laser emitter in a laser array (not shown) in the transmitter 202. It should be understood that a laser emitter can refer to a laser source with either a single physical emission aperture, or multiple physical emission apertures that are operated as a group. In some embodiments, each ray 216, 218, 220 can be associated nominally with light from a group of contiguous individual laser emitter elements in a laser array (not shown) in the transmitter 202. In a similar ray analysis, the receiver receives light within a FOV corresponding to the angle between ray 1 222 and ray 5 224 in the diagram. The light is collected with a distribution across the FOV that includes (for illustration purposes) light along ray 2 226, ray 3 228, and ray 4 230. More specifically, ray 3 228 forms the center axis 214 for collected light, which has some divergence, or cone angle, around that axis. In some embodiments, each ray 226, 228, 230 can be associated nominally with received light from a single detector element in a detector array (not shown) in the receiver 204. In some embodiments, each ray 226, 228, 230 can be associated nominally with received light from a group of contiguous individual detector elements in a detector array (not shown) in the receiver 204. The single detector elements or contiguous groups of detector elements can be referred to as a pixel.
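For illustration only, the nominal association between emitters and center rays can be sketched as follows. The emitter count and FOV below are hypothetical and not taken from the present teaching; a real transmitter would derive these angles from the laser array geometry and the lens system 208.

```python
def emitter_ray_angles(num_emitters, fov_deg):
    """Nominal center-ray angle (in degrees) for each emitter in a linear
    laser array spanning the transmitter FOV.  The middle emitter fires
    along the optical axis (ray B); the end emitters fire along the FOV
    edges (rays A and C)."""
    half = fov_deg / 2.0
    step = fov_deg / (num_emitters - 1)
    return [-half + i * step for i in range(num_emitters)]

# Hypothetical 11-emitter array spanning a 30-degree FOV.
angles = emitter_ray_angles(11, 30.0)
print(angles[0], angles[5], angles[-1])  # -15.0, 0.0, 15.0 degrees
```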
The design of the transmitter 202, including the laser source (not shown) and the lens system 208, is configured to produce illumination with a FOV having the central axis 212. The design of the receiver 204, including the detector array (not shown) and the lens system 208 positions, is configured to collect illumination with a FOV having the central axis 214. The central axis 212 of the FOV of the transmitter 202 is adjusted to intersect the central axis 214 of the FOV of the receiver 204 at a surface 232 indicated by SMATCH. This surface 232 is smooth. In some embodiments, the surface is nominally spherical. In other embodiments, the surface is not spherical, as it depends on the design of the optical systems in the transmitter 202 and receiver 204, including their relative distortion. Several intersection points 234, 236, 238 along the surface 232 between the illumination from the transmitter 202 and collected light from the receiver 204 are indicated. The letter in each point label corresponds to a transmitter 202 ray, and the number corresponds to a receiver 204 ray. That is, point 234, C1, is the intersection of transmitter 202 ray C 218 and receiver 204 ray 1 222. Point 236, B3, is the intersection of transmitter 202 ray B 220 and receiver 204 ray 3 228. Point 238, A5, is the intersection of transmitter 202 ray A 216 and receiver 204 ray 5 224. Other intersection points 240, 242, 244, 246, 248, 250 are also indicated, following the same naming convention as points 234, 236, 238 along the surface 232. As is clear to those skilled in the art, a complete three-dimensional set of these intersection points can be determined for any particular pair of transmitters 202 and receivers 204, based on their relative center position 206, the directions of their optical axes 212, 214, and their FOVs.
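The intersection points on the surface 232 follow from elementary ray geometry. The sketch below is a 2D simplification of the full three-dimensional case, with a hypothetical baseline and toe-in angles; it computes where a transmitter ray and a receiver ray cross.

```python
import math

def ray_intersection(baseline_m, tx_angle_deg, rx_angle_deg):
    """Intersection of a transmitter ray (from the origin) with a receiver
    ray (from a point offset by the baseline along x), in a 2D plane.
    Angles are measured from the common boresight (z) axis.  Returns the
    (x, z) coordinates of the intersection, e.g. a point such as B3."""
    t = math.tan(math.radians(tx_angle_deg))
    r = math.tan(math.radians(rx_angle_deg))
    if t == r:
        return None  # parallel rays never intersect
    # Transmitter ray: x = z * t.  Receiver ray: x = baseline + z * r.
    z = baseline_m / (t - r)
    return (z * t, z)

# Hypothetical 50 mm baseline; axes toed in by +/- 0.5 degrees.
print(ray_intersection(0.050, 0.5, -0.5))  # intersection near z ~ 2.9 m
```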
The receive and time-of-flight computation electronics 268 receives the electrical detection signals from the detector array 270 and then processes these electrical detection signals to compute the range distance through time-of-flight calculations. The receive and time-of-flight computation electronics 268 can also control the pixels of the detector array 270, in order to select subsets of pixels that are used for a particular measurement. The intensity of the return signal is also computed in electronics 268. In some embodiments, the receive and time-of-flight computation electronics 268 determines if return signals from two different emitters in the laser array 206 are present in a signal from a single pixel (or group of pixels associated with a measurement). In some embodiments, the transmit controller 264 controls pulse parameters, such as the pulse amplitude, the pulse width, and/or the pulse delay.
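As a minimal sketch of the time-of-flight calculation performed in the receive and time-of-flight computation electronics 268 (the specific conversion used by the system is not limited to this form), the range follows from the round-trip time of the pulse:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum

def tof_range_m(round_trip_time_s):
    """Range from a direct-detection time-of-flight measurement: the light
    travels to the target and back, so the one-way distance is c*t/2."""
    return C_M_PER_S * round_trip_time_s / 2.0

# A roughly 667 ns round trip corresponds to a target near 100 m.
print(tof_range_m(667e-9))  # ~100.0 m
```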
The block diagram of the LiDAR system 260 is shown in one of the accompanying drawings.
The parallax between a transmitter and a receiver creates a geometry where the particular pixel that receives a reflected pulse is a function both of the position of the laser being fired (i.e. which laser ray) and of the position within the FOV (i.e. which receiver ray). Therefore, there is no one-to-one correspondence between laser ray and receiver ray (i.e. between laser element and receiver element). Rather, the correspondence depends on the distance of the reflecting target.
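This range dependence can be sketched with a simplified 1D imaging model. The focal length, pixel pitch, and baseline below are hypothetical and serve only to show the pixel index shifting with target distance.

```python
import math

def pixel_for_target(range_m, baseline_m, focal_mm, pitch_um, tx_angle_deg):
    """Detector pixel (column offset from the receiver axis) on which a
    fixed transmitter ray lands, as a function of target distance.
    Illustrates that laser-to-pixel correspondence shifts with range."""
    # Target position for the given transmitter ray and range.
    x = range_m * math.tan(math.radians(tx_angle_deg))
    # Angle of the return as seen from the baseline-offset receiver.
    rx_angle = math.atan2(x - baseline_m, range_m)
    # Image position on the focal plane, converted to a pixel offset.
    image_mm = focal_mm * math.tan(rx_angle)
    return round(image_mm * 1000.0 / pitch_um)

# Same laser; hypothetical 50 mm baseline, 20 mm lens, 25 um pixels.
for r in (1, 5, 20, 100):
    print(r, "m ->", pixel_for_target(r, 0.050, 20.0, 25.0, 0.0))
```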
A LIDAR system that uses multiple laser emitters as shown has many advantages, including a higher optical power density, while still maintaining eye safety limits. In order to have the fastest data acquisition rate (and frame rate), it is preferred that only the pixels 502 corresponding to a single laser (i.e. the sixteen pixels 502 in a particular emitter FOV 504) be utilized during a particular measurement sequence. Since light from an individual laser emitter is reflected back onto only a specific area of the detector at a particular range, the overall system speed can be optimized by proper selection of the detector region so as to activate only those pixels that correspond to a particular emitter.
A challenge with known LiDAR systems is that a projection of the transmitter emitter illumination FOV onto the detector array collection area FOV strictly holds only at one measurement distance, as described above. At different measurement distances, the shape of the transmitter illumination region that is reflected from a target onto the detector array is different. To make clear the distinction between the FOV projection that holds at one distance and the more general overlap condition that holds at other distances, we refer to an image area. The image area, as used herein, is the shape of the illumination that falls on the detector over a range of measurement distances. The size and shape of an image area for a particular system can be determined based on the system measurement range (the range of distances over which the system takes measurements), the relative positions and angles of the optical axes of the transmitter and receiver, and the sizes, shapes, and positions of the emitters in the transmitter and the detectors in the receiver, as well as other system design parameters.
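Under the same simplified 1D geometry (all values hypothetical), the image area of an emitter can be estimated by sweeping the target over the system measurement range and collecting the set of illuminated pixels:

```python
import math

def image_area_pixels(baseline_m, focal_mm, pitch_um, tx_angle_deg,
                      min_range_m, max_range_m):
    """Span of detector pixels (column indices) illuminated by one emitter
    over the full measurement range.  The union over all ranges is the
    emitter's image area; at a single range it reduces to a single spot."""
    def pixel_at(r):
        x = r * math.tan(math.radians(tx_angle_deg))
        rx_angle = math.atan2(x - baseline_m, r)
        return round(focal_mm * math.tan(rx_angle) * 1000.0 / pitch_um)

    near, far = pixel_at(min_range_m), pixel_at(max_range_m)
    return set(range(min(near, far), max(near, far) + 1))

# Hypothetical system: image areas of two adjacent emitters can overlap.
a = image_area_pixels(0.050, 20.0, 25.0, 0.0, 1.0, 100.0)
b = image_area_pixels(0.050, 20.0, 25.0, 0.1, 1.0, 100.0)
print(sorted(a & b))  # pixels shared by both emitters' image areas
```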
One feature of the present teaching is that methods and systems according to the present teaching utilize the known relationship between the optical axes and relative positions of a transmitter and receiver, and predetermine the image area for each emitter in a transmitter. This information can then be used to process collected measurement data, including data that is collected in regions of overlap between the image areas of two different emitters. The processing can, for example, eliminate redundant data points, reduce the impact of noise and ambient light by selecting the best data point(s) in an overlap region, and/or produce multiple returns from different distances along a particular direction. The processing can also be used to improve image quality, including the reduction of blooming.
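One possible sketch of such processing is shown below, assuming predetermined image areas and per-pixel return lists. The field names and the keep-the-strongest-return rule are illustrative only, not a required implementation of the present teaching.

```python
def resolve_overlaps(image_areas, returns_by_pixel):
    """Given predetermined image areas (emitter -> set of pixel ids) and
    per-pixel return data, keep the strongest return seen at each pixel,
    so overlap pixels contribute one point to the combined point cloud.
    A sketch only; a real system may keep multiple returns per direction."""
    point_cloud = {}
    for emitter, pixels in image_areas.items():
        for pixel in pixels:
            for ret in returns_by_pixel.get(pixel, []):
                best = point_cloud.get(pixel)
                # Keep the best return for this pixel (highest SNR).
                if best is None or ret["snr"] > best["snr"]:
                    point_cloud[pixel] = ret
    return point_cloud

image_areas = {"E1": {10, 11, 12}, "E2": {12, 13, 14}}  # pixel 12 overlaps
returns_by_pixel = {12: [{"tof_ns": 300, "snr": 9.0},
                         {"tof_ns": 650, "snr": 4.0}]}
print(resolve_overlaps(image_areas, returns_by_pixel))
```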
In some embodiments, at least one subset of pixel(s) used in conjunction with one laser emitter overlaps with at least one subset of pixel(s) used in conjunction with a different laser emitter. The system includes a processor (not shown) that processes the data obtained from pixels in the overlap region by analyzing and combining data obtained from this overlap region and creating a combined single point cloud based on this processed data. In some embodiments, the processor dynamically selects the illuminated pixels (i.e. pixels in an image area of two or more energized laser emitters) that are associated with a particular laser emitter based on the return pulses contained in the data generated by the illuminated pixels. Various return pulse properties can be used to dynamically select a particular laser, including, for example, return pulse strength, noise level, pulse width, and/or other properties.
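A sketch of one such dynamic selection follows, using a simple illustrative score over return pulse strength, noise level, and pulse width. The weights and field names are hypothetical, not specified by the present teaching.

```python
def select_emitter(candidates):
    """Dynamically associate an illuminated pixel with one of the laser
    emitters whose image areas overlap it, scoring each candidate return
    by strength, noise level, and pulse width (weights are illustrative)."""
    def score(c):
        return (c["peak_amplitude"] / max(c["noise_level"], 1e-9)
                - 0.1 * abs(c["pulse_width_ns"] - c["expected_width_ns"]))
    return max(candidates, key=score)

candidates = [
    {"emitter": "E1", "peak_amplitude": 0.8, "noise_level": 0.05,
     "pulse_width_ns": 5.2, "expected_width_ns": 5.0},
    {"emitter": "E2", "peak_amplitude": 0.3, "noise_level": 0.06,
     "pulse_width_ns": 9.0, "expected_width_ns": 5.0},
]
print(select_emitter(candidates)["emitter"])  # "E1"
```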
In some embodiments of the present teaching, only a portion of the array of pixels is activated for a particular measurement (e.g. not a full row and/or not a full column). In these embodiments, a two-dimensional matrix-addressable detector array can be used. In some embodiments, the two-dimensional matrix-addressable detector array is a SPAD array. In some embodiments of the present teaching, only a portion of an array of laser emitters is energized for a particular measurement. For example, less than a full row and/or less than a full column can be energized. In these embodiments, a two-dimensional matrix-addressable laser array can be used. In some embodiments, the two-dimensional matrix-addressable laser array is a VCSEL array. In some embodiments, the transmitter components are all solid-state, with no moving parts.
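For illustration, matrix addressing can be modeled as a boolean activation mask over the 2D array. The array dimensions and addresses below are hypothetical; they apply equally to detector pixels or VCSEL emitters.

```python
def activation_mask(rows, cols, active):
    """Activation mask for a 2D matrix-addressable array (detector pixels
    or VCSEL emitters): energize an arbitrary subset of elements, not
    necessarily a full row or column.  `active` holds (row, col) tuples."""
    return [[(r, c) in active for c in range(cols)] for r in range(rows)]

# Activate a 2x2 block of a hypothetical 4x6 array for one measurement.
mask = activation_mask(4, 6, {(1, 2), (1, 3), (2, 2), (2, 3)})
for row in mask:
    print("".join("X" if on else "." for on in row))
```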
In a third step 806, a reflected return signal is received by the LIDAR system. In a fourth step 808, the received reflected return signal is processed. In some methods, the processing of the return signal determines the number of return peaks. In some methods, the processing calculates a distance to the object based on time-of-flight (TOF). In some methods, the processing determines the intensity, or the pseudo-intensity, of the return peaks. Various combinations of these processing results can be provided. Intensity can be directly detected with p-type-intrinsic-n-type (PIN) detectors or avalanche photodetectors (APDs). Also or alternatively, intensity can be detected with Silicon Photomultiplier (SiPM) or Single Photon Avalanche Diode (SPAD) arrays, which provide a pseudo-intensity based on the number of pixels that are triggered simultaneously. Some embodiments of the method further determine noise levels of the return signal traces. In various embodiments of the method, additional information is also considered, for example, ambient light levels and a variety of environmental conditions and/or factors.
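The sketch below illustrates these processing results on a sampled return signal trace. The threshold, sample period, and local-maximum peak test are illustrative only, not the specific processing mandated by the method.

```python
def find_return_peaks(trace, threshold, sample_period_ns):
    """Locate return peaks in a sampled return-signal trace: a sample is
    a peak if it exceeds the threshold and both neighbors.  Returns a
    list of (time_of_flight_ns, amplitude) pairs, one per return."""
    peaks = []
    for i in range(1, len(trace) - 1):
        if trace[i] > threshold and trace[i - 1] < trace[i] >= trace[i + 1]:
            peaks.append((i * sample_period_ns, trace[i]))
    return peaks

def pseudo_intensity(triggered_pixels):
    """SPAD/SiPM-style pseudo-intensity: the number of pixels in the
    measurement group that triggered within the same time bin."""
    return len(triggered_pixels)

trace = [0, 1, 2, 9, 3, 1, 0, 5, 12, 6, 1, 0]  # two returns
print(find_return_peaks(trace, threshold=4, sample_period_ns=1.0))
```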
In a fifth step 810, a decision is made about firing the laser to generate another pulse of light. If the decision is yes, the method proceeds back to the second step 804. In various embodiments of the method, the decision can be based on, for example, a decision matrix, an algorithm programmed into the LIDAR controller, or a lookup table. A particular number of laser pulses is then generated by cycling through the loop including the second step 804, the third step 806, and the fourth step 808 until the desired number of laser pulses has been generated, at which point the decision step 810 returns a stop and the method proceeds to the sixth step 812.
The system performs multiple measurement signal processing steps in a sixth step 812. In various embodiments of the method 800, the multiple measurement signal processing steps can include, for example, filtering, averaging, and/or histogramming. The multiple measurement signal processing results in a final resultant measurement from the processed data of the multiple-pulse measurements. These resultant measurements can include both raw signal trace information and processed information. The raw signal information can be augmented with flags or tags that indicate probabilities or confidence levels of data as well as metadata related to the processing of the sixth step.
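For example, a minimal histogramming sketch (bin width and sample values hypothetical) that reduces a multi-pulse measurement to a single resultant TOF, with noise-triggered outliers falling into sparsely populated bins:

```python
from collections import Counter

def histogram_tof(tof_samples_ns, bin_ns):
    """Histogram time-of-flight samples from a multi-pulse measurement
    and return the center of the most-populated bin as the resultant
    TOF, together with the bin count (a crude confidence indicator).
    Filtering or averaging could be used instead of, or before, this."""
    bins = Counter(int(t // bin_ns) for t in tof_samples_ns)
    best_bin, count = bins.most_common(1)[0]
    return (best_bin + 0.5) * bin_ns, count

# Ten pulses: consistent returns near 300 ns plus two noise outliers.
samples = [300.1, 299.8, 300.4, 300.2, 299.9, 300.0,
           300.3, 300.1, 55.0, 812.0]
print(histogram_tof(samples, bin_ns=2.0))  # dominant bin center, count
```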
At a seventh step 814, the system moves to a decision loop that controls the next laser in some firing sequence, and continues to loop through the full list of lasers in the firing sequence until one complete set of measurements for all the lasers in the firing sequence has been obtained. When the method progresses to the second step 804 from the seventh step 814, a new, different laser is fired. The firing sequence determines the particular laser that is fired on a particular loop. This sequence can, for example, correspond to a full frame or a partial frame.
In another possible embodiment, the loops of steps 810 and 814 are combined such that a sub-group of lasers is formed and the firing of the lasers within the sub-group is interleaved. This reduces the duty cycle on any individual laser compared to firing that single laser with back-to-back pulses, while still maintaining a relatively short time between pulses for a particular laser. In this alternate embodiment, the system would step through a number of sub-groups to complete either a full or partial frame.
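A sketch of one possible interleaving follows; the sub-group composition and pulse counts are hypothetical and many other orderings are possible.

```python
def interleaved_firing_order(subgroups, pulses_per_laser):
    """Interleave pulses across each sub-group of lasers: every laser
    gets its required number of pulses, but consecutive firings of the
    same laser are separated by firings of the other sub-group members,
    reducing the per-laser duty cycle versus back-to-back pulsing."""
    order = []
    for group in subgroups:
        for _ in range(pulses_per_laser):
            order.extend(group)  # one pulse per laser, round-robin
    return order

# Two sub-groups of a hypothetical 6-laser sequence, 2 pulses per laser.
print(interleaved_firing_order([["L1", "L2", "L3"],
                                ["L4", "L5", "L6"]], 2))
# ['L1','L2','L3','L1','L2','L3','L4','L5','L6','L4','L5','L6']
```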
In the eighth step 816, the system analyzes the complete data set from the firing sequence and takes various actions on the data in any overlapping pixel regions, for example, the overlap region 710 described above.
In the ninth step 818, the combined 4D information determined by the analysis of the multi-measurement return signal processing is then reported. The reported data can include, for example, the 3D measurement point data (i.e. the three spatial dimensions), and/or various other metrics including number of return peaks, time of flight(s), return pulse(s) amplitude(s), errors and/or a variety of calibration results. In a tenth step 820, the method is terminated.
There are many ways of selecting individual and/or groups of lasers and/or detectors. See, for example, U.S. Provisional Patent Application No. 62/831,668 entitled “Solid-State LIDAR Transmitter with Laser Control”. See also U.S. Provisional Application No. 62/859,349, entitled “Eye-Safe Long-Range Solid-State LIDAR System” and U.S. patent application Ser. No. 16/366,729, entitled “Noise Adaptive Solid-State LIDAR System”. These patent applications are all assigned to the present assignee and are all incorporated herein by reference.
An important feature of some aspects of the present teaching is the recognition that parallax causes the image area of a particular laser emitter (or group of emitters) to distort, relative to the FOV projected at a single target range, if the target extends over a range of distances from the LiDAR. This distortion causes some overlap between the FOVs of adjacent emitters for measurements taken over a range of target distances. This parallax can be characterized based on, for example, a position of an emitter, an angle of the optical axis of illumination from the transmitter, and/or a position of a pixel and an angle of an optical axis of illumination collected by the pixel. The optical axis of the transmitter is not coincident with the optical axis of the receiver. By analyzing and processing the received data using this known parallax, it is possible to account for, and benefit from, the information contained in the regions of overlap. The result is a single, informative, combined data set that is helpful for identifying objects in the three-dimensional space probed by the LiDAR.
While the Applicant's teaching is described in conjunction with various embodiments, it is not intended that the Applicant's teaching be limited to such embodiments. On the contrary, the Applicant's teaching encompasses various alternatives, modifications, and equivalents, as will be appreciated by those of skill in the art, which may be made therein without departing from the spirit and scope of the teaching.
The present application is a non-provisional application of U.S. Provisional Patent Application No. 63/187,375, entitled “Pixel Mapping Solid-State LIDAR Transmitter System and Method” filed on May 11, 2021. The entire contents of U.S. Provisional Patent Application No. 63/187,375 are herein incorporated by reference.