The section headings used herein are for organizational purposes only and should not be construed as limiting the subject matter described in the present application in any way.
Autonomous, self-driving, and semi-autonomous automobiles use a combination of different sensors and technologies such as radar, image-recognition cameras, and sonar for detection and location of surrounding objects. These sensors enable a host of improvements in driver safety including collision warning, automatic-emergency braking, lane-departure warning, lane-keeping assistance, adaptive cruise control, and piloted driving. Among these sensor technologies, light detection and ranging (LiDAR) systems take a critical role, enabling real-time, high-resolution 3D mapping of the surrounding environment.
LiDAR systems need to be able to perform under a variety of conditions, including situations that include combinations of near and far distances of objects and various weather and ambient lighting conditions. It is important that the LiDAR be able to provide accurate object size information in these and other conditions. An adaptive LiDAR system is needed that can advantageously provide improved image and object identification properties as conditions change and evolve.
The present teaching, in accordance with preferred and exemplary embodiments, together with further advantages thereof, is more particularly described in the following detailed description, taken in conjunction with the accompanying drawings. The skilled person in the art will understand that the drawings, described below, are for illustration purposes only. The drawings are not necessarily to scale; emphasis instead generally being placed upon illustrating principles of the teaching. The drawings are not intended to limit the scope of the Applicant's teaching in any way.
The present teaching will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present teaching is described in conjunction with various embodiments and examples, it is not intended that the present teaching be limited to such embodiments. On the contrary, the present teaching encompasses various alternatives, modifications and equivalents, as will be appreciated by those of skill in the art. Those of ordinary skill in the art having access to the teaching herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the teaching. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be understood that the individual steps of the method of the present teaching can be performed in any order and/or simultaneously as long as the teaching remains operable. Furthermore, it should be understood that the apparatus and method of the present teaching can include any number or all of the described embodiments as long as the teaching remains operable.
LiDAR systems for autonomous cars must be able to perform under a variety of driving scenarios. For example, accurate range and image data that can be in the form of a three-dimensional point cloud should be obtained for a reflective traffic cone a few meters away, as well as a vehicle tire lying in the roadway, one hundred fifty meters distant. From an optical perspective, these two scenarios present substantially different characteristics, as the received optical signal amplitude from each object will vary by several orders of magnitude. The traffic cone will result in a very strong optical return, while the distant tire will produce a weak signal. The ratio of the largest to the smallest optical signal that can be reliably received by the LiDAR system is sometimes referred to as the receiver dynamic range.
Direct time-of-flight (TOF) LiDAR systems using single-photon avalanche diode (SPAD) detectors are one type of LiDAR system that exhibits a high dynamic range. SPAD devices are sensitive to single photons (lowest possible optical power) but are not damaged by high optical powers. Damage from high optical power can occur, for example, with conventional avalanche photodiodes (APD). As a result of the high detector dynamic range, whether a SPAD detector receives a single photon, or the full power of the laser transmitter reflected back, the SPAD will continue to function. Commonly SPADs are formed in two-dimensional arrays, and each SPAD detector in the SPAD array of detectors is referred to as a pixel.
One common configuration for LiDAR systems is to use a SPAD focal-plane array made up of groups of pixels, where each group has a common signal output, also referred to as “multi-pixel”. In this configuration, each multi-pixel incorporates several individual SPAD pixels grouped together. A multi-pixel has a single combined received signal output from all of the pixels in the multi-pixel group and this combined signal provides a measure of optical intensity over some optical power range which is not provided in a single pixel output. A SPAD array without multi-pixel configuration would produce a binary image, because an individual SPAD pixel is essentially a digital device with two stable output states. That is, a SPAD pixel either generates a very low quiescent current, or generates a maximum current when operating at saturation because a received photon has triggered an avalanche. The transition between these two states is fast enough to essentially provide a digital output.
In contrast, a 2D SPAD array using a “multi-pixel” can be used to generate a gray scale intensity image which provides additional information about the scene as compared to a binary image. In a multi-pixel, not all SPADs will be at saturation across some range of power levels, giving a stepwise measure of intensity. This allows, for instance, the ability to determine reflectance of an object, and to differentiate lane markings or read signs in some conditions. Said another way, in a multi-pixel configuration, it is possible to determine information about the intensity of a return, in addition to the time-of-flight of the return.
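By way of a non-limiting illustration, the stepwise intensity behavior of a multi-pixel can be sketched as follows. The two-state model of each SPAD and the group size are assumptions chosen for illustration only, not a description of any particular detector design:

```python
# Illustrative sketch (not a specific detector implementation): a
# multi-pixel's combined output is modeled as the count of individual
# SPAD pixels that fired, giving a stepwise gray-scale intensity
# instead of a binary value.

def spad_pixel_output(photons_at_pixel: int) -> int:
    """A single SPAD is effectively digital: 0 (quiescent) or 1 (avalanche)."""
    return 1 if photons_at_pixel >= 1 else 0

def multi_pixel_output(photons_per_pixel: list) -> int:
    """Combined output of a multi-pixel group: sum of fired pixels.

    The count is a stepwise measure of optical intensity, valid until
    every pixel in the group has saturated.
    """
    return sum(spad_pixel_output(p) for p in photons_per_pixel)

# A weak return fires only a few pixels in the group of eight; a strong
# return fires every pixel, at which point the group is saturated and
# the output becomes binary in nature.
weak = multi_pixel_output([1, 0, 0, 1, 0, 0, 0, 0])
strong = multi_pixel_output([3, 2, 1, 4, 2, 1, 1, 2])
```

In this sketch, `weak` evaluates to 2 and `strong` to 8, mirroring the loss of contrast described below once every pixel in the group saturates.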
However, at a high enough received optical power level, every SPAD pixel in a multi-pixel will simultaneously saturate. In this limiting case, range accuracy from determination of time-of-flight is largely maintained, but the imaging performance is degraded by a loss of contrast. As a result, at high input powers, even for a multi-pixel SPAD, the image can become more binary in nature.
Another issue, besides loss of contrast, is a phenomenon referred to as “blooming”, which is characterized by an image of an object being larger than the true object. Thus, a three-dimensional point cloud representation of the object is larger than the true object. As an example, a road sign that uses a retro-reflector material presents a typical case of an object with high reflectivity that often results in blooming with LiDAR systems. In the presence of blooming, reflective road signs can appear larger than normal, or lose their shape, making it difficult, for instance, to distinguish a stop sign from a speed limit sign.
In any imaging lens system, light from an infinitely small point-source (such as a star) will form a spot on the focal plane detector array with some finite radius and distribution of optical energy. Physical properties of light and limits of any real imaging system do not allow for a point-source to result in an infinitely small, imaged spot on the focal plane array. If the image of a point-source like a star is brought to its sharpest focus by a lens system, the physics of diffraction and aberration/distortion inherent in non-perfect lenses, result in the optical energy being distributed over a pixel area of some size.
In optics, the term encircled energy (EE) refers to a measure of concentration of optical energy. Encircled energy is equivalent to the amount of energy within the imaged spot at a given radius, often reported in microns. Typically, one number is reported for EE corresponding to the radius at which 80% of the optical energy is encircled. For typical imaging lenses used in LiDAR systems considered here, the EE is in the range of 5 to 40 microns. The EE radius is larger than the size of a typical individual SPAD pixel in the focal plane detector array, which is ˜10 microns for current generation technology. Also, it is important to note that the EE number corresponds only to 80% of the energy and thus there will be some optical power at larger radii, which can be considered as one type of stray light within a LiDAR optical system.
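As a non-limiting numerical illustration of these encircled energy values, the sketch below assumes a Gaussian point-spread function (the shape of the actual spot is not specified herein and will depend on the lens design), for which the encircled energy has the closed form EE(r) = 1 − exp(−r²/2σ²):

```python
import math

# Hedged sketch: assuming a Gaussian point-spread function with standard
# deviation sigma (an illustrative assumption, not a lens specification),
# the encircled energy at radius r is EE(r) = 1 - exp(-r^2 / (2 sigma^2)).

def encircled_energy(r_um: float, sigma_um: float) -> float:
    """Fraction of total spot energy within radius r_um (microns)."""
    return 1.0 - math.exp(-(r_um ** 2) / (2.0 * sigma_um ** 2))

def ee80_radius(sigma_um: float) -> float:
    """Radius containing 80% of the energy for a Gaussian spot."""
    return sigma_um * math.sqrt(-2.0 * math.log(1.0 - 0.80))

# For sigma = 10 microns, the 80% EE radius is about 17.9 microns --
# larger than a typical ~10 micron SPAD pixel, so the imaged spot
# spreads over several pixels, and 20% of the energy falls outside
# that radius as one type of stray light.
radius = ee80_radius(10.0)
```

The remaining 20% of the energy at radii beyond `radius` is the stray light that, as described below, can trigger avalanches in surrounding pixels.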
For a LiDAR system using a SPAD detector array, which is sensitive enough to detect single photons, the physics that results in light spreading, as characterized by the encircled energy, causes blooming. A highly reflective object which results in a large optical received power will correspondingly have larger amounts of stray light at extended radii than an object of the same size and lower reflectivity. If the stray light is high enough, a larger area of pixels within the detector will receive enough light to trigger an avalanche and result in a TOF detection. This is undesirable because the highly reflective object causes the image to appear larger than the actual object. That is, a processed receive image, which can be a three-dimensional point cloud representation, from a highly reflective object that causes blooming will be larger than the true image size that would be derived from a three-dimensional point cloud representation from the object with no blooming. It should be understood that a three-dimensional point cloud representation from the object with no blooming is considered to be a true three-dimensional point cloud representation of the object.
Conventional Flash LiDAR systems employ an emission source that emits laser light over a wide FOV. Some Flash LiDAR systems are solid-state. Flash LiDAR systems can illuminate the entire scene with a single illumination event. Thus, a Flash LiDAR system that uses a SPAD array, when the entire scene is illuminated at once, can have a significant blooming impact, with images of highly-reflective objects appearing much larger than actual size, at the limit potentially filling the total FOV.
A conventional Flash LiDAR system using a SPAD array, where the whole field-of-view (FOV) is illuminated at one time, could be completely blinded in some cases by blooming. Significant blooming, where every pixel in the array is affected by stray light, could cause the system to report an object as large as the complete FOV, when in reality the object could be a small, highly reflective object like a taillight assembly on a car, or a reflective traffic cone. For autonomous vehicles making use of LiDAR, it is highly undesirable for an object to appear larger than actual as it may result in the vehicle making a path adjustment larger than necessary, or even unnecessary braking or stopping, depending on the range and reported image size of the object as derived from a three-dimensional point cloud representation of that object. In any event, there is a potential for an unsafe condition.
It is highly desirable for a LiDAR system that uses highly sensitive detector arrays to have the ability to determine that optical blooming has occurred, and also to adaptively mitigate the impact of blooming on the reported image and TOF data.
The pulsed TOF LiDAR system of the present teaching uses collimated transmitter laser beams with an illuminated laser FOV being created for each laser's optical beam. The laser FOVs are much smaller in size compared to a conventional Flash LiDAR system. In addition, the pulsed TOF LiDAR systems of the present teaching can use pulse averaging and/or pulse histogramming of multiple received laser pulses to improve Signal-to-Noise Ratio (SNR), which further improves range and performance. Also, these LiDAR systems can employ a very-high single-pulse frame rate, which can be well above 100 Hz.
Portions of the light from the incident optical beams are reflected by the target 106. These portions of reflected optical beams share the receiver optics 112. A detector array 114 receives the reflected light that is projected by the receiver optics 112. In various embodiments, the detector array 114 is solid-state with no moving parts. The detector array 114 typically has fewer individual detector elements than the transmitter array 102 has individual lasers. Each detector in the detector array 114 has a detector FOV at the target plane 110 based on its position, size and the configuration of the receiver optics 112.
The measurement resolution of the LiDAR system 100 is not determined by the size of the detector elements in the detector array 114, but instead is determined by the number of lasers in the transmitter array 102 and the collimation of the individual optical beams. In other words, the resolution is limited by a field-of-view of each optical beam. A processor (not shown) in the LiDAR system 100 performs a time-of-flight (TOF) measurement that determines a distance to the target 106 from optical beams transmitted by the laser array 102 that are detected at the detector array 114.
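The time-of-flight range calculation performed by the processor can be sketched, in a non-limiting manner, as the familiar relation distance = (speed of light × round-trip time) / 2:

```python
# Minimal sketch of the time-of-flight range calculation performed by
# the processor: the measured round-trip pulse delay is converted to a
# one-way target distance.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_range_m(round_trip_s: float) -> float:
    """Convert a round-trip time-of-flight (seconds) to range (meters)."""
    return C_M_PER_S * round_trip_s / 2.0

# A 1 microsecond round trip corresponds to a target roughly 150 m away.
r = tof_to_range_m(1e-6)
```

This simple relation is why, for example, a gating window on the order of a microsecond corresponds to the 150 m class of ranges discussed in the driving scenarios above.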
One feature of LiDAR systems according to the present teaching is that individual lasers and/or groups of lasers in the transmitter array 102 can be individually controlled. Each individual emitter in the transmitter array can be fired independently, with the optical beam emitted by each laser emitter corresponding to a 3D projection angle subtending only a portion of the total system field-of-view. One example of such a LiDAR system is described in U.S. Patent Publication No. 2017/0307736 A1, which is assigned to the present assignee. The entire contents of U.S. Patent Publication No. 2017/0307736 A1 are incorporated herein by reference.
Another feature of LiDAR systems according to the present teaching is that detectors and/or groups of detectors in the detector array 114 can also be individually controlled. This independent control over the individual lasers and/or groups of lasers in the transmitter array 102 and over the detectors and/or groups of detectors in the detector array 114 provides for various desirable operating features including control of the system field-of-view, optical power levels, and scanning pattern. It is also possible in some embodiments to change the laser FOV, making it larger or smaller or moving the relative position in the detector FOV at the target plane 110 by adjusting the relative positions of the transmit optics 104 and laser array 102.
Thus, desired fields-of-views can be established by controlling particular individual or groups of lasers in a transmitter array and/or controlling individual or groups of detectors in a receive array. Various system fields-of-view can be established using different relative fields-of-view for individual or groups of emitters and/or individual or groups of detectors. The fields-of-view can be established so as to produce particular and/or combinations of performance metrics. These performance metrics include, for example, improved signal-to-noise ratio, longer range or controlled range, eye safe operation power levels, and lesser or greater controllable resolutions. Importantly, these performance metrics can be modified during operation to optimize the LiDAR system performance.
LiDAR systems according to the present teaching use an array drive control system that is able to provide selective control of particular laser devices in an array of laser devices in order to illuminate a target according to a desired pattern. Also, LiDAR systems according to the present teaching can use an array of detectors that generate detector signals that can be independently processed. Consequently, a feature of the LiDAR systems of the present teaching is the ability to provide a variety of operating capabilities from a LiDAR system exclusively with electronic, non-mechanical or non-moving parts that include a fixed array of emitters and a fixed array of detectors with both the transmit and receive optical beams projected using shared transmit and receive optics. Such a LiDAR system configuration can result in a flexible system that is also compact, reliable, and relatively low cost.
LiDAR systems of the present teaching also utilize a laser array, transmitter optics, receiver optics and detector array as described in connection with the known system shown in
The LiDAR system FOV 200 shown in
In the embodiment of the LiDAR system of
Various detector technologies can be used to construct the detector array for the LiDAR systems according to the present teaching. For example, Single Photon Avalanche Diode Detector (SPAD) arrays, Avalanche Photodetector (APD) arrays, and Silicon Photomultiplier Arrays (SPAs) can be used. The detector size not only sets the resolution by setting the FOV of a single detector, but also relates to the speed and detection sensitivity of each device. State-of-the-art, two-dimensional arrays of detectors for LiDAR are already approaching the resolution of VGA cameras, and are expected to follow a trend of increasing pixel density similar to that seen with CMOS camera technology. Thus, smaller and smaller sizes of the detector FOV 204 represented by the squares are expected to be realized over time. For example, an APD array with 264,000 pixels (688(H)×384(V)) was recently reported in the literature: “A 250 m Direct Time-of-Flight Ranging System Based on a Synthesis of Sub-Ranging Images and a Vertical Avalanche Photo-Diodes (VAPD) CMOS Image Sensor”, Sensors 2018, 18, 3642.
A controller selects a set of one or more detectors in region 254 that fall within the laser beam FOV 252 of the selected laser. Signals from the selected set of detectors are detected simultaneously and the detected signal provided to the controller and then processed to generate one or more measurement signals. The LiDAR system could measure the TOF independently and simultaneously for all detectors in the selected set or alternatively the system could combine the signal output from selected detectors to form a single measurement. Also, each detector could be either a single pixel or a multi-pixel, as previously described. For long-range operation, including operation at the longest specified range of the LiDAR system, the number of pixels (i.e. individual detectors) used to generate the measurement pulse might be chosen to maximize the SNR at the expense of resolution. For example, the best SNR might correspond to a measurement made by summing or combining in some fashion the received signal from all the detectors in region 254 shown highlighted in
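The SNR-versus-resolution trade-off of combining detectors can be illustrated with the following non-limiting sketch, which assumes equal signal and independent, identical noise at each detector in the selected set (idealizations chosen for illustration only):

```python
import math

# Hedged sketch of the SNR trade-off: summing the outputs of N detectors
# in the selected region adds signal linearly while independent noise
# adds in quadrature, so SNR improves by roughly sqrt(N) at the expense
# of resolution. Equal per-detector signal and independent, identical
# noise are illustrative assumptions.

def combined_snr(per_detector_signal: float, per_detector_noise: float,
                 n_detectors: int) -> float:
    """SNR of the summed measurement from n_detectors identical detectors."""
    total_signal = per_detector_signal * n_detectors
    total_noise = per_detector_noise * math.sqrt(n_detectors)
    return total_signal / total_noise

single = combined_snr(1.0, 0.5, 1)   # SNR of one detector
summed = combined_snr(1.0, 0.5, 16)  # 16 detectors: ~4x SNR, coarser resolution
```

Under these assumptions, summing 16 detectors improves SNR by a factor of four, which is why long-range operation may favor combining all detectors in region 254 over per-detector measurement.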
Blooming can occur with the LiDAR system described by
LiDAR systems according to the present teaching have the capability of determining that blooming is potentially occurring and can then take actions to adapt the system parameters to reduce the impact of blooming on the reported image and TOF data, which can be based on the three-dimensional point cloud representation. LiDAR systems of the present teaching implement a set of decision criteria to determine whether blooming is potentially occurring.
The LiDAR system controller and interface electronics 302 controls the overall function of the LiDAR system and provides the digital communication to the host system processor 314. The transmit electronics 304 controls the operation of the laser array 306 and, in some embodiments, sets the pattern and/or power of laser firing of individual elements in the array 306.
The receive and time-of-flight computation electronics 308 receives the electrical detection signals from the detector array 310 and then processes these electrical detection signals to compute the range distance through time-of-flight calculations. The intensity of the return signal is also computed in the electronics 308. The receive and time-of-flight computation electronics 308 can also control the pixels of the detector array 310 in order to select subsets of pixels that are used for a particular measurement. In some embodiments, the receive and time-of-flight computation electronics 308 determines if return signals from a region of interest at a target plane are indicative of blooming occurring.
In some embodiments, the controller and interface electronics 302 determine if the return signals from a region of interest are indicative of blooming. Also in some embodiments, the host system processor 314 determines if the return signals from a region of interest are indicative of blooming. In some embodiments, the transmit controller 304 controls pulse parameters, such as the pulse amplitude, the pulse width, and/or the pulse delay.
It should be understood that the block diagram of the LiDAR system 300 of
In a second step 404, the system performs an analysis of at least a portion of the obtained TOF, intensity, and/or return pulse characteristic data in order to determine a blooming condition. In some embodiments, in the second step 404, the system performs an analysis of the TOF and intensity data contained in the Region of Interest (ROI) and makes an estimate of whether blooming is occurring or not. The estimate could be a probability, or some other metric, which correlates to the occurrence of blooming. The system then compares the estimate to some blooming criteria which are either determined at the time the system is manufactured or could be adaptively determined during system operation. In a decision step three 406, the system determines if the blooming is occurring based on whether the blooming criteria are met. If the blooming occurrence criteria are not met, the system returns to step one 402, and continues to operate under the standard operating conditions. If the blooming criteria are met, the system will proceed to step four 408, which changes operation to use blooming parameters.
In some embodiments, in step four 408, the system changes operating modes to account for the possibility that blooming is occurring and reports received TOF, intensity, and/or return pulse characteristic data that is processed using blooming mode parameters. In the process flow chart 400, the operating mode is referred to as “blooming mode”. During blooming mode operation, the operating parameters of the system are different from the standard operating parameters. For any region of interest during blooming mode operation, the system performs an analysis of the TOF and intensity data contained in the ROI and makes an estimate of whether blooming is occurring or not in a decision step five 410. If the blooming occurrence criteria are not met, the system will switch back to the standard operating conditions returning to step one 402. If the blooming occurrence criteria are met, the system will continue to operate under blooming mode operating parameters by returning to step four 408.
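The mode-switching loop of flow chart 400 can be sketched, in a non-limiting manner, as a simple two-state machine. The function and threshold names below are hypothetical placeholders; the actual blooming estimate and criteria of steps 404 and 410 may be any suitable metric as described herein:

```python
# Sketch of the mode-switching loop of flow chart 400. The per-frame
# blooming estimate stands in for the ROI analysis of steps 404/410,
# and the threshold stands in for the blooming occurrence criteria;
# both are illustrative placeholders.
STANDARD, BLOOMING = "standard", "blooming"

def next_mode(blooming_estimate: float, threshold: float = 0.5) -> str:
    """Decision steps 406/410: use blooming-mode parameters when the
    estimate meets the criterion; otherwise return to standard operation."""
    return BLOOMING if blooming_estimate >= threshold else STANDARD

mode = STANDARD
for estimate in [0.1, 0.7, 0.8, 0.2]:  # hypothetical per-frame estimates
    mode = next_mode(mode if False else estimate) if False else next_mode(estimate)
```

After the hypothetical estimate sequence above, the system has entered blooming mode on the high estimates and returned to standard operation on the final low estimate, mirroring the loop between steps 402, 408, and 410.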
The third step 508 is a decision step to compare the blooming occurrence set of estimates to the criterion. If the criterion is met, the analysis algorithm reports to the system that blooming mode should be used 510. If the criterion is not met, the analysis algorithm reports to the system that standard operating mode should be used 512. The analysis is then ended. For example, for the embodiment of process 400 of
A first set of criteria can be inferred from the description associated with
One common boundary or edge condition is the situation when a LiDAR system is looking at a “wall”, such that the true image is the same TOF for all detectors looking at the wall, and the wall could be a “white” wall that is reflective enough to lead to near saturation for all detectors. Thus, in this condition, it cannot be known with 100% confidence that blooming has definitely occurred, only that there is a strong possibility. Additional actions, or adaptations, are necessary by the LiDAR system to confirm the blooming and also to determine the extent of the blooming to the degree that this is possible under such conditions. Besides TOF and intensity, other criteria, such as ones based on the pulse width of the received signal between detectors within any defined region, can also be applied.
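One possible blooming criterion of the kind described above can be sketched as follows. The tolerances and thresholds are illustrative assumptions only, and, as the "white wall" edge condition shows, a positive result indicates only a strong possibility of blooming, not a certainty:

```python
# Hedged sketch of one possible blooming criterion: flag a region when
# (nearly) all detectors report the same TOF and near-saturation
# intensity. All thresholds are illustrative assumptions. Note that a
# reflective "white wall" can also satisfy this test, so a positive
# result indicates possible blooming requiring further adaptation,
# not confirmed blooming.

def blooming_suspected(tofs_ns, intensities, saturation_level,
                       tof_tolerance_ns=1.0, saturation_fraction=0.95):
    """Return True when all TOFs agree within tolerance and all
    intensities are near the detector saturation level."""
    same_tof = max(tofs_ns) - min(tofs_ns) <= tof_tolerance_ns
    near_saturated = all(i >= saturation_fraction * saturation_level
                         for i in intensities)
    return same_tof and near_saturated

# Uniform TOF plus saturated intensities across the region of interest:
# blooming (or a white wall) is suspected.
suspect = blooming_suspected([100.0, 100.3, 99.8], [255, 255, 254],
                             saturation_level=255)
```

A per-detector pulse-width comparison, as noted above, could be added as a further criterion to help separate blooming from the wall condition.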
Some embodiments of the adaptive blooming LiDAR systems of the present teaching use a blooming mode of operation where more than one set of system parameters is being employed. Each set of system parameters could be a fixed set, or in various embodiments, one or more sets of parameters could be determined adaptively. The different sets of system parameters used by the LiDAR system could be for multiple performance improvements, including range, resolution, frame rate, intensity, reflectance measurement, stray light correction, and blooming correction among other factors.
For example, in one simple embodiment, a second set of system parameters for blooming mode are used that reduce the laser output power, or energized laser optical transmit power, used for measuring the FOV. In this example, the first set of system parameters would have a higher optical power that enables the longest range for the system. A second set of parameters would lower the optical power so that, in the closer ranges, there was less possibility of saturation of the detector. It should be understood that the sets of system parameters do not need to be used on an equal basis. The second or additional sets of system parameters could be used in only some small percentage of the time. A lower optical power set of system parameters would allow, for instance, less saturation of the intensity image at closer range, thereby allowing for better identification of items like lane markings or text on signs. The intensity, TOF, and/or pulse characteristic data from the second set of system parameters could be compared to the same data from the first set of performance parameters, and areas of significant difference could be identified as possible areas where blooming was occurring.
The above descriptions are illustrative of possible operating modes for which more than one set of system parameters is being used. Another operating mode might involve adaptive changes to the averaging or histogramming parameters used to process the TOF and/or intensity data. One feature of the methods and apparatus of the present teaching is that the system and method are configured so that two (or more) sets of data that are regularly available can be compared against each other. As such, differences in intensity, TOF, and/or pulse width for those two sets of data could indicate potential blooming, from which the system would then take additional actions to confirm and correct the blooming impacts. The two sets of data could come from, for example, two different detector FOVs in the region of interest, two different laser FOVs in the region of interest, two different optical transmit powers of an energized laser in a single laser FOV in the region of interest, as well as various combinations of these and other parameters.
Once the LiDAR system has determined that blooming is possible, then various algorithms can be used to confirm the blooming and reduce the blooming impact on the reported data. One class of algorithms depends on obtaining measurements with a lower optical power illuminating the detectors. By lowering the illumination so that the detectors are not in saturation, the blooming effect is reduced, and it thus becomes possible to distinguish the impacted regions.
One way to reduce illumination on a particular detector is to take measurements with that detector using a laser that has an adjacent FOV to the laser that would be used for a standard TOF measurement. There is some amount of optical power that extends from an adjacent laser into the FOV of the adjacent detector regions due to laser beam divergence and parallax. This optical power provides weak illumination such that the detectors in the blooming region are no longer saturated. The adjacent laser could be from the same laser array, or it could be from a laser array where the FOV of the two laser arrays has been optically interleaved. Measurements with the adjacent laser can be obtained either independently, or simultaneously while scanning is performed for the primary laser. When scanning with the primary laser, data can be obtained for both the detectors within the main FOV of the primary laser as well as adjacent detectors. The LiDAR system performs a comparison of the TOF measurements for each detector obtained with primary and adjacent lasers. For any measurements where the TOF is the same for the two sets of data, a conclusion can be made that the object is “real”.
For any measurements where the TOF is different for the two sets of data, a conclusion can be made that the difference corresponds to the presence of blooming. In areas where blooming is determined, the system can either use the measurement data from an adjacent laser or, if the system is recording multiple echoes, the echo from the first set of data corresponding to the region of blooming can be removed from the data set and the system only reports the other echoes. Alternatively, when blooming is determined, the system can report all of the measurement data without eliminating the blooming impact, and instead issue a flag or error that indicates which data is likely impacted by blooming, and the higher-level system can make a decision how to use the data.
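The primary-versus-adjacent-laser comparison just described can be sketched, in a non-limiting manner, as a per-detector TOF comparison. The function, dictionary layout, and tolerance below are illustrative placeholders:

```python
# Sketch of the primary-vs-adjacent-laser comparison: per-detector TOFs
# obtained under the two illuminations are compared; matching TOFs
# indicate a "real" return, while mismatches are flagged as likely
# blooming (to be removed or reported with a flag). Names, the dict
# layout, and the tolerance are illustrative placeholders.

def classify_returns(primary_tof_ns, adjacent_tof_ns, tolerance_ns=1.0):
    """Label each detector's return as 'real' or 'blooming'."""
    labels = {}
    for det, t_primary in primary_tof_ns.items():
        t_adj = adjacent_tof_ns.get(det)
        if t_adj is not None and abs(t_primary - t_adj) <= tolerance_ns:
            labels[det] = "real"
        else:
            labels[det] = "blooming"  # differing TOF: flag or remove
    return labels

labels = classify_returns({0: 100.0, 1: 100.1, 2: 100.0},
                          {0: 100.2, 1: 150.0, 2: 100.3})
```

In this hypothetical example, detector 1 returns different TOFs under the two illuminations and is flagged as blooming, while detectors 0 and 2 agree and are treated as real returns.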
In another embodiment of the method, the transmitter power (energized laser optical transmit power) is reduced. In this embodiment, measurements are made with the primary laser, with the optical transmit power from the primary laser reduced to a level such that saturation in the region of interest is not occurring. In this case, the LiDAR system must incorporate a laser driver circuit which can alter the current and/or voltage applied to the lasers. One advantage of using the primary laser compared to the adjacent laser, is a more direct control over the optical power used.
One possible disadvantage of using the primary laser compared to the adjacent laser is that lower power measurements cannot be obtained simultaneously with the higher power (primary) measurements. However, in a system that uses multiple laser pulses to generate an average or histogram, the impact could be minimized by using the lower power for either a few or even a single pulse within the overall set of laser pulses. There would be an impact to SNR, but the impact could be small depending on the system parameters. For instance, if 32 laser pulses were used for an average or histogram measurement, and only 1 of those 32 pulses corresponded to the lower power, the impact on SNR and range would be negligible. Also, the lower power pulse could be implemented for only the region of interest, not the full frame, further mitigating any frame rate and range impacts. This method requires a laser driver that can change laser power (voltage/current) on individual lasers within the array in a short time, sometimes even pulse-to-pulse (<5 μsec). The system will then perform a comparison of the TOF and intensity measurements obtained with the standard operating mode optical power and reduced optical power. Any measurements where the TOF is the same can result in a conclusion that the object is “real”. Any measurements where there are different TOFs can result in a conclusion of the presence of blooming. In areas where blooming is present, the system removes the returns corresponding to the bloomed area and/or reports using a flag or error the data points where blooming is likely occurring.
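The negligible SNR impact asserted in the 32-pulse example above can be checked with a short sketch, under the illustrative assumption that SNR scales with the square root of the number of full-power pulses averaged and that the single low-power pulse contributes negligibly to the average:

```python
import math

# Hedged sketch of the averaging trade-off: if SNR scales as the square
# root of the number of full-power pulses averaged (an illustrative
# assumption), replacing 1 of 32 pulses with a low-power pulse costs
# under 2% of SNR.

def snr_fraction(full_power_pulses: int, total_pulses: int) -> float:
    """SNR retained, relative to averaging total_pulses full-power pulses."""
    return math.sqrt(full_power_pulses) / math.sqrt(total_pulses)

impact = 1.0 - snr_fraction(31, 32)  # fractional SNR loss, ~1.6%
```

Under these assumptions the SNR penalty is about 1.6%, consistent with the statement that the impact on SNR and range would be negligible.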
In yet another embodiment of the method, the optical pulse width is analyzed. LiDAR systems often use pulse width as a simple noise filter. For instance, a pulse width that is as long as the gating time of the TOF measurement would not correspond to any real physical object. Similarly, very small pulse widths can often be distinguished as electronic noise. A simple pulse width noise filter that has a maximum and minimum pulse width is easily implemented.

In the case of blooming, pulse width can also be used as a method of determining blooming. A laser pulse that reflects off the object and images back to the corresponding centroid location on the detector array will have the maximum optical power density. In this case, the pulse width received by the detector at the corresponding centroid location will be the maximum, as the rise time, fall time, and length of the pulse will all be maximized by the received power level. In the region impacted by blooming, the light that is being received will, by definition, be outside the centroid of the received optical signal, and will be lower in optical power, even if the detectors are in saturation. In this case, the rise time and fall time can be expected to be slower, and the length of the pulse should also be reduced. So, by comparing the pulse widths of the pixels in the region of blooming, the pixel associated with the true extent of the object will have a longer pulse width than those affected by the blooming. However, many factors can impact the pulse width. One feature of the present teaching is that various adaptive algorithms can be implemented to account for these factors to determine the proper pulse width criteria.
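The two pulse-width uses described above can be sketched as follows: a simple min/max noise filter, and a comparison within a bloom-suspect region that takes the longest-width pixel as the true object location. The bounds and helper names are assumptions for illustration only; a real system would derive the criteria adaptively, as noted above.

```python
def passes_width_filter(width_ns, min_ns=2.0, max_ns=500.0):
    """Simple noise filter: reject pulse widths that are too short
    (electronic noise) or too long (no real physical object could
    produce a return as long as the gating time)."""
    return min_ns <= width_ns <= max_ns


def find_object_extent(widths):
    """Within a bloom-suspect region, take the pixel with the longest
    pulse width as the true object location; the shorter-width
    neighbors are attributed to blooming."""
    center = max(range(len(widths)), key=lambda i: widths[i])
    bloomed = [i for i in range(len(widths)) if i != center]
    return center, bloomed
```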
Another embodiment for an adaptive blooming algorithm according to the present teaching makes use of a matrix-addressable or individually addressable transmitter that has the capability to change the FOV being illuminated. One way blooming can occur is if the transmitter FOV is large enough that a portion of the laser beam impinges on a highly reflective object. A laser beam with a smaller FOV, and thus smaller divergence, will intersect a smaller portion of the scene and so will mitigate the possibility of blooming.
There are engineering tradeoffs between the transmitter FOV and the detector FOV. One tradeoff involves the frame rate and scanning time: a smaller transmitter FOV will slow the frame rate, all other aspects being equal. So, it is not necessarily desirable to always have a small transmitter FOV. One embodiment of the present teaching adapts the FOV when blooming is detected, adjusting the transmitter FOV either for the full frame or, ideally, only for the portion of the FOV affected by the blooming. In the region of blooming, the system then uses the TOF and intensity data associated with the smaller transmitter FOV to reduce the blooming.
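The selective adjustment described above can be sketched as shrinking the illuminated FOV only for the regions where blooming was detected, leaving the rest of the frame, and hence its frame rate, unaffected. The region representation and shrink factor here are invented for illustration and are not part of the present teaching.

```python
def adapt_transmit_regions(regions, bloom_map, shrink_factor=0.5):
    """Reduce the transmitter FOV only where blooming was detected.

    regions:   list of (region_id, fov_deg) pairs for the addressable
               transmitter regions.
    bloom_map: dict mapping region_id -> True where blooming was found.
    """
    new_regions = []
    for region_id, fov_deg in regions:
        if bloom_map.get(region_id, False):
            # Blooming detected: illuminate a smaller portion of the scene.
            new_regions.append((region_id, fov_deg * shrink_factor))
        else:
            # No blooming: keep the standard FOV and full frame rate.
            new_regions.append((region_id, fov_deg))
    return new_regions
```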
Another feature of the present teaching is that the LiDAR system can react and adapt not only to blooming conditions but, in some embodiments, also to non-blooming conditions via a similar set of analysis steps as described herein. In this situation, instead of adapting the system to make a laser region smaller in response to a blooming condition, if there is no blooming at all in the scene, the illuminated laser region can be made larger by transitioning from standard operation to a "no blooming" operating condition.
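This bidirectional adaptation can be sketched as a simple per-frame update: shrink the illuminated region while blooming persists, and widen it back toward the standard ("no blooming") operating condition when the scene is clear. The step size and FOV limits are assumed values for the sketch only.

```python
def update_fov(current_fov, bloom_detected, min_fov=1.0, max_fov=8.0, step=1.0):
    """One adaptation step per frame: narrow on blooming, widen when clear."""
    if bloom_detected:
        # Blooming present: shrink the illuminated region, down to a floor.
        return max(min_fov, current_fov - step)
    # Scene clear: widen back toward the standard operating FOV.
    return min(max_fov, current_fov + step)
```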
While the Applicant's teaching is described in conjunction with various embodiments, it is not intended that the Applicant's teaching be limited to such embodiments. On the contrary, the Applicant's teaching encompasses various alternatives, modifications, and equivalents, as will be appreciated by those of skill in the art, which may be made therein without departing from the spirit and scope of the teaching.
The present application is a non-provisional of U.S. Provisional Patent Application No. 63/312,356, entitled "System and Method for Solid-State LiDAR with Adaptive Blooming Correction," filed on Feb. 21, 2022. The entire contents of U.S. Provisional Patent Application No. 63/312,356 are herein incorporated by reference.
Number | Date | Country
---|---|---
63312356 | Feb 2022 | US