The present invention relates generally to imaging, and more specifically to Light Detection And Ranging (LIDAR)-based imaging.
Flash-type LIDAR (also referred to herein as lidar), which can use a pulsed light emitting array to emit light for short durations over a relatively large area to acquire images, may allow for solid-state imaging of a large field of view or scene.
However, to illuminate such a large field of view (which may include long-range and/or low-reflectance targets in bright ambient light conditions) and still receive a recognizable return or reflected optical signal therefrom (also referred to herein as an echo signal or echo), higher optical emission power may be required. Moreover, higher emission power (and thus higher power consumption) may be required in some applications due to the relatively high background noise levels from ambient and/or other non-LIDAR emitter light sources (also referred to herein as a noise floor).
Power consumption in lidar systems can be particularly problematic in some applications, e.g., unmanned aerial vehicle (UAV), automotive, and industrial robotics applications. For example, in automotive applications, the increased emission power requirements must be met by the power supply of the automobile, which may add a considerable load that automobile manufacturers must accommodate. Also, heat generated from the higher emission power may alter the optical performance of the light emitting array and/or may negatively affect reliability.
Some embodiments described herein provide methods, systems, and devices including electronic circuits to address the above and other problems by providing a lidar system including one or more emitter elements (including one or more semiconductor lasers, such as surface- or edge-emitting laser diodes; generally referred to herein as emitters), one or more light detector pixels (including one or more semiconductor photodetectors, such as photodiodes, including avalanche photodiodes and single-photon avalanche detectors; generally referred to herein as detectors), and a control circuit that is configured to selectively operate subsets of the emitter elements and/or detector pixels (including respective emitters and/or detectors thereof) to provide a 3D time of flight (ToF) flash lidar system with a configurable field of illumination and/or field of view for subranges of a distance range that can be imaged by the lidar system (also referred to as an imaging distance range).
In some embodiments, a lidar system including a receiver (e.g., a detector array) and a transmitter (e.g., an emitter array) operates based on strobe signals that define respective strobe windows (each corresponding to a respective subrange of the distance range), whereby the field of illumination of the emitter element(s) and/or the field of view of the detector pixel(s) can be programmed or varied on a strobe-by-strobe basis for a more efficient and lower power system performance, and/or in response to objects in the field of view (FoV) (such as to reduce multipath reflections from tunnel walls and ceiling).
According to some embodiments of the present invention, a Light Detection and Ranging (LIDAR) system includes an emitter array comprising a plurality of emitter units operable to emit optical signals, responsive to respective emitter control signals; a detector array comprising a plurality of detector pixels operable to be activated and deactivated to detect light for respective strobe windows between pulses of the optical signals and at respective delays that differ with respect to the pulses, responsive to respective strobe signals; and one or more control circuits. The control circuit(s) are configured to output the respective emitter control signals to selectively operate different subsets of the emitter units and/or to output the respective strobe signals to selectively operate different subsets of the detector pixels such that a field of illumination of the emitter units and/or a field of view of the detector pixels is varied based on the respective strobe windows.
In some embodiments, the respective strobe windows may correspond to respective sub-ranges of a distance range. For example, the respective strobe windows may include first and second strobe windows corresponding to different first and second sub-ranges of a distance range, respectively.
In some embodiments, the one or more control circuits may include an emitter control circuit configured to operate a first subset of the emitter units to provide a first field of illumination during the first strobe window, and to operate a second subset of the emitter units to provide a second field of illumination, different than the first field of illumination, during the second strobe window.
In some embodiments, the one or more control circuits may include a detector control circuit configured to operate a first subset of the detector pixels to provide a first field of view during the first strobe window, and to operate a second subset of the detector pixels to provide a second field of view, different than the first field of view, during the second strobe window.
In some embodiments, the detector control circuit may be configured to operate the second subset of the detector pixels with a greater detection sensitivity level than the first subset of the detector pixels.
In some embodiments, each of the detector pixels may include a plurality of detectors, and the detector control circuit may be configured to generate respective strobe signals that activate a first subset of the detectors for the first strobe window, and activate a second subset of the detectors, larger than the first subset of the detectors, for the second strobe window.
In some embodiments, the second field of illumination may include a greater emission power level than the first field of illumination.
In some embodiments, the emitter control circuit may be configured to generate respective emitter control signals comprising a first non-zero peak current to activate the first subset of the emitters for the first strobe window, and comprising a second peak current, greater than the first non-zero peak current, to activate the second subset of the emitters for the second strobe window.
In some embodiments, the first strobe window may correspond to closer distance sub-ranges of the distance range than the second strobe window, and the first field of illumination and/or the first field of view may be wider than the second field of illumination and/or the second field of view.
In some embodiments, the first subset of the emitter units may include one or more of the emitter units that are positioned at a peripheral region of the emitter array, and the second subset of the emitter units may include one or more of the emitter units that are positioned at a central region of the emitter array.
In some embodiments, the first subset of the emitter units may include a first string of the emitter units that are electrically connected in series, and the second subset of the emitter units may include a second string of the emitter units that are electrically connected in series.
In some embodiments, the first subset of the detector pixels may include one or more of the detector pixels that are positioned at a peripheral region of the detector array, and the second subset of the detector pixels may include one or more of the detector pixels that are positioned at a central region of the detector array.
In some embodiments, the different subsets of the emitter units may be operable to provide the field of illumination without one or more lens elements. For example, the emitter array may include the emitter units on a curved and/or flexible substrate, where a curvature of the substrate is configured to provide the field of illumination without the one or more lens elements.
In some embodiments, the respective strobe windows may correspond to respective acquisition subframes of the detector pixels. Each acquisition subframe may include data collected for a respective distance sub-range of a distance range. An image frame may include the respective acquisition subframes for each of the distance sub-ranges of the distance range.
In some embodiments, the image frame may be a current image frame, and the one or more control circuits may be configured to provide the field of illumination of the emitter units and/or the field of view of the detector pixels that varies for the respective sub-ranges of the distance range in the current image frame based on one or more features of the field of view indicated by detection signals received from the detector pixels in a preceding image frame before the current image frame.
In some embodiments, in the preceding image frame, the one or more control circuits may be configured to provide the field of illumination of the emitter units and/or the field of view of the detector pixels that is static for the respective sub-ranges of the distance range.
According to some embodiments of the present invention, a Light Detection and Ranging (LIDAR) system includes at least one control circuit configured to output respective emitter control signals to operate emitter units of an emitter array and/or respective strobe signals to operate detector pixels of a detector array such that a field of illumination of the emitter units and/or a field of view of the detector pixels varies for respective sub-ranges of a distance range imaged by the LIDAR system.
In some embodiments, the detector pixels may be operable to detect light for respective strobe windows between pulses of the optical signals responsive to the respective strobe signals, where the respective strobe windows may correspond to the respective sub-ranges of the distance range.
In some embodiments, the respective strobe signals may operate a first subset of the detector pixels to detect the light over a first field of view during a first strobe window, and may operate a second subset of the detector pixels to detect light over a second field of view, different than the first field of view, during a second strobe window.
In some embodiments, the respective strobe signals may operate the second subset of the detector pixels with a greater detection sensitivity level than the first subset of the detector pixels.
In some embodiments, each of the detector pixels may include a plurality of detectors. The respective strobe signals may activate a first subset of the detectors for the first strobe window, and may activate a second subset of the detectors, larger than the first subset of the detectors, for the second strobe window.
In some embodiments, the respective emitter control signals may operate a first subset of the emitter units to provide a first field of illumination during a first strobe window, and may operate a second subset of the emitter units to provide a second field of illumination, different than the first field of illumination, during a second strobe window.
In some embodiments, the second field of illumination may include a greater emission power level than the first field of illumination.
In some embodiments, the respective emitter control signals may include a first non-zero peak current to activate the first subset of the emitters for the first strobe window, and may include a second peak current, greater than the first non-zero peak current, to activate the second subset of the emitters for the second strobe window.
According to some embodiments of the present invention, a method of operating a Light Detection and Ranging (LIDAR) system includes generating respective emitter control signals to operate different subsets of emitter units of an emitter array to emit optical signals and/or generating respective strobe signals to operate different subsets of detector pixels of a detector array to detect light, such that a field of illumination of the emitter units and/or a field of view of the detector pixels varies for respective sub-ranges of a distance range imaged by the LIDAR system.
In some embodiments, the detector pixels may be operable to detect light for respective strobe windows between pulses of the optical signals responsive to the respective strobe signals. The respective strobe windows may include first and second strobe windows corresponding to different first and second sub-ranges of a distance range, respectively.
In some embodiments, the respective emitter control signals may operate a first subset of the emitter units to provide a first field of illumination during the first strobe window, and may operate a second subset of the emitter units to provide a second field of illumination, different than the first field of illumination, during the second strobe window.
In some embodiments, the respective emitter control signals may operate the second subset of the emitter units with a greater power level than the first subset of the emitter units.
In some embodiments, the respective strobe signals may operate a first subset of the detector pixels to provide a first field of view during the first strobe window, and may operate a second subset of the detector pixels to provide a second field of view, different than the first field of view, during the second strobe window.
In some embodiments, the respective strobe signals may operate the second subset of the detector pixels with a greater detection sensitivity level than the first subset of the detector pixels.
In some embodiments, the LIDAR system may be configured to be coupled to an autonomous vehicle such that the emitter and detector arrays are oriented relative to an intended direction of travel of the autonomous vehicle.
Other devices, apparatus, and/or methods according to some embodiments will become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional embodiments, in addition to any and all combinations of the above embodiments, be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
A lidar system may include an array of emitters and an array of detectors, or a system having a single emitter and an array of detectors, or a system having an array of emitters and a single detector. As described herein, one or more emitters may define an emitter unit, and one or more detectors may define a detector pixel. A flash lidar system may acquire images by emitting light from an array of emitters, or a subset of the array, for short durations (pulses) over a field of view (FoV) or scene, and detecting the echo signals reflected from one or more targets in the FoV at one or more detectors. Subregions of the array of emitter elements are configured to direct light to (and subregions of the array of detector elements are configured to receive light from) respective subregions within the FoV, which are also referred to herein as regions of interest (ROIs). A non-flash or scanning lidar system may generate image frames by scanning light emission over a field of view or scene, for example, using a point scan or line scan to emit the necessary power per point and sequentially scan to reconstruct the full FoV. A non-range-strobing lidar illuminates the whole range of interest and collects echoes from the whole range of interest. An indirect time-of-flight (iToF) lidar measures range by detecting a phase offset of an echo signal with reference to an emitted signal, whereas a direct time-of-flight (dToF) lidar measures range by detecting the time from emission of a pulse of light to detection of an echo signal by a receiver.
The field of view of a lidar system may be referred to herein as including the field of illumination of light or optical signals output from the emitters and/or the field of view or field of detection over which light is collected by the receiver or detectors. In some embodiments, the field of view of the lidar system may include the intersection of the field of illumination of the emitters, the field of view of the detectors, and the temporal ‘strobe’ windows during which collected light can be detected with reference to the emitted optical signals.
Strobing as used herein may refer to the generation of detector control signals (also referred to herein as strobe signals or ‘strobes’) to control the timing and/or duration of activation (also referred to herein as detection windows or strobe windows) of one or more detectors of the lidar system. In some embodiments, the strobe windows may correspond to sub-ranges of the imaging distance range of a dToF lidar system, thus capturing reflected signal photons corresponding to specific distance sub-ranges at each window, to limit the number of ambient photons acquired in each emitter cycle. The reflected signal photons may be distinguished from ambient photons using a correlator circuit configured to output respective correlation signals representing detection of one or more of the photons whose respective time of arrival is within a predetermined correlation time relative to at least one other of the photons, as described for example in United States Patent Application Publication No. 2019/0250257 to Finkelstein et al., the disclosure of which is incorporated by reference herein. An emitter cycle (e.g., a laser cycle) refers to the time between emitter pulses. In some embodiments, the emitter cycle time is set as or otherwise based on the time required for an emitted pulse of light to travel to the farthest allowed target and back, that is, based on a desired distance range. To cover targets within a desired distance range of about 400 meters, a laser in some embodiments may operate at a frequency of at most 375 kHz (i.e., emitting a laser pulse about every 2.66 microseconds or more). Strobing may be advantageous in terms of area or ‘real estate’ on a substrate (e.g., silicon) because the amount of memory needed in a pixel (in terms of bits per pixel, or bits/pixel) may be proportional to the ratio of the imaging distance range to the range resolution.
For example, providing 10 centimeter (cm) resolution over a 400 meter (m) imaging distance range may require (400 m / 0.1 m) = 4000 bins × 10 bits/bin = 40,000 bits/pixel, whereas providing 10 cm resolution over a 10 m imaging distance range may require (10 m / 0.1 m) = 100 bins × 10 bits/bin = 1,000 bits/pixel.
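The per-pixel memory arithmetic and the 375 kHz emitter cycle figure above can be reproduced in a short illustrative sketch (Python; the function names and the 10 bits/bin figure follow the example above and are not part of any particular implementation):

```python
SPEED_OF_LIGHT_M_S = 3.0e8  # approximate speed of light, m/s

def bits_per_pixel(distance_range_m, resolution_m, bits_per_bin=10):
    """Histogram memory per pixel: one bin per range-resolution step."""
    bins = round(distance_range_m / resolution_m)
    return bins * bits_per_bin

def max_pulse_rate_hz(distance_range_m, c=SPEED_OF_LIGHT_M_S):
    """Highest emitter pulse rate that avoids range ambiguity: one
    round trip to the farthest allowed target must fit between pulses."""
    return c / (2.0 * distance_range_m)

print(bits_per_pixel(400, 0.1))  # 40000 bits/pixel for the full 400 m range
print(bits_per_pixel(10, 0.1))   # 1000 bits/pixel for a 10 m strobed sub-range
print(max_pulse_rate_hz(400))    # 375000.0 Hz, i.e., ~2.66 us between pulses
```

The sketch makes explicit why strobing saves pixel area: the bin count, and hence the memory, shrinks linearly with the strobed sub-range.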
In some lidar implementations, different imaging distance ranges may be achieved by using different emitter modules. For example, an emitter module of a lidar system configured to image targets up to a desired distance range may be designed to emit four times the power per solid angle as compared to an emitter module configured to image up to half of the desired distance range, and/or may be configured to emit pulses at half the repetition rate in order to prevent range ambiguity. Such an implementation may also include hardwiring of emitter modules, resulting in less flexible system configurations and a static architecture which cannot respond to conditions on the fly/in real-time. For example, when driving in a tunnel, it may be desirable to reduce the long range field of view at larger angles in order to reduce multipath reflections from the walls and the ceiling of the tunnel, but the field of view should return to its nominal ranges once exiting the tunnel.
Also, some lidar systems may use strobes (also referred to as time gates) and may count the number of photons that arrive within a time gate, but may not directly measure their precise time of flight. Some range-strobing dToF lidar systems can measure time of flight and maintain a FoV (e.g., 30 degrees horizontal by 30 degrees vertical) for all strobe gates.
Some embodiments of the present invention arise from the recognition that, in lidar systems, the power that may be required to image a target scales with the square of the distance range of the target. At the same time, many applications of lidar systems may require a wide field of view for closer ranges, but can provide acceptable performance with a smaller field of view for long ranges, particularly if the smaller field of view may result in reduced power consumption. Thus, some embodiments of the present invention provide lidar systems where the FoV is configured to dynamically vary with distance sub-ranges (or the respective strobe windows corresponding to the distance sub-ranges) of the imaging distance range. For example, some embodiments may operate an emitter array and/or a detector array to provide a wide FoV at short ranges and a narrow FoV at long ranges, in some embodiments in combination with various detection modalities as described herein.
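The square-law scaling noted above implies, for example, that doubling the imaged distance range quadruples the required emission power per solid angle; a minimal illustrative sketch:

```python
def relative_power(target_range_m, reference_range_m):
    """Required emission power scales with the square of the target
    range; returns the power relative to a reference range."""
    return (target_range_m / reference_range_m) ** 2

# Imaging to 400 m takes 4x the power per solid angle of imaging to 200 m,
# which motivates narrowing the FoV for the farther sub-ranges.
print(relative_power(400, 200))  # 4.0
```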
In particular, respective emitter control signals may be generated to operate one or more emitter units of an emitter array to emit optical signals that illuminate different portions or ROIs of a field of view for different image acquisition subframes of the detector array. Additionally or alternatively, respective strobe signals may be generated to operate one or more detector pixels of the detector array to detect light over different ROIs of the field of view for the different image acquisition subframes. Each image acquisition subframe collects data for a different strobe window (and thus, the respective distance sub-ranges corresponding to the different strobe windows). The respective sub-ranges are portions of a distance range that is based on or defined by a time between pulses of the optical signals. In some embodiments, the emitters and/or detectors may be operated to vary the FoV of the lidar system based on one or more targets, features, or other characteristics of a scene imaged thereby, for example, based on information determined from detection signals corresponding to a preceding image acquisition frame.
An example of a LIDAR system or circuit 100 in accordance with embodiments of the present disclosure is shown in
One or more of the emitter elements 115e of the emitter array 115 may define emitter units that respectively emit a radiation pulse or continuous wave signal (for example, through a diffuser or optical filter 114) at a time and repetition rate controlled by a timing generator or driver circuit 116. In particular embodiments, the emitters 115e may be pulsed light sources, such as LEDs or lasers (such as vertical cavity surface emitting lasers (VCSELs) and/or edge-emitting lasers). Radiation is reflected back from a target 150, is collected by collection optics 112, and is sensed by detector pixels defined by one or more detector elements 110d of the detector array 110. The control circuit 105 implements a processing circuit that measures the time of flight of the illumination pulse over the journey from emitter array 115 to target 150 and back to the detectors 110d of the detector array 110, using direct or indirect ToF measurement techniques.
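The direct ToF measurement performed by the processing circuit converts a measured round-trip time into range; a minimal sketch (assuming the usual c·t/2 relation, with c approximated as 3e8 m/s; the function name is illustrative):

```python
def dtof_range_m(round_trip_time_s, c=3.0e8):
    """Direct time-of-flight: the pulse travels emitter -> target ->
    detector, so the one-way range is half the round-trip distance."""
    return c * round_trip_time_s / 2.0

print(dtof_range_m(1.0e-6))  # 150.0 m for a 1 microsecond round trip
```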
In some embodiments, an emitter module or circuit 115 may include an array of emitter elements 115e (e.g., VCSELs), a corresponding array of optical elements 113, 114 coupled to one or more of the emitter elements (e.g., lens(es) 113 (such as microlenses) and/or diffusers 114), driver electronics 116, and (optionally) a safety mechanism. The optical elements 113, 114 can be configured to provide a sufficiently low beam divergence of the light output from the emitter elements 115e so as to ensure that fields of illumination of either individual or groups of emitter elements 115e do not significantly overlap, and yet provide a sufficiently large beam divergence of the light output from the emitter elements 115e to provide eye safety to observers. In some embodiments, one or more of the optical elements 113, 114 may be omitted. For example, the emitter array 115 may be implemented on a curved or flexible substrate, such that a desired field of illumination may be achieved based on the curvature of the emitter array 115 without the use of the lens(es) 113. More particularly, a desired horizontal field of illumination may be provided by the curvature of the emitter array 115 without the lens(es) 113, while a desired vertical field of illumination may be provided by the diffuser(s) 114, or vice versa. Conversely, the desired horizontal and/or vertical fields of illumination may be achieved using the lens(es) 113 without the diffuser(s) 114.
The driver electronics 116 may each correspond to one or more emitter elements, and may each be operated responsive to timing control signals with reference to a master clock and/or power control signals that control the peak power of the light output by the emitter elements 115e. In some embodiments, each of the emitter elements 115e in the emitter array 115 is connected to and controlled by a respective driver circuit 116. In other embodiments, respective groups of emitter elements 115e in the emitter array 115 (e.g., emitter elements 115e in spatial proximity to each other, such as serially connected (i.e., anode-to-cathode) strings of emitter elements 115e), may be connected to a same driver circuit 116. The driver circuit or circuitry 116 may include one or more driver transistors configured to control the pulse repetition rate, timing and amplitude of the optical emission signals that are output from the emitters 115e.
The safety mechanism may be configured to control one or more emitters 115e to immediately reduce or power down the emission power if an object is detected in the field of illumination within a pre-determined distance from the emitter module or circuit 115. For example, the safety mechanism may include a range finder, the control circuit 105 may electronically implement the functionality of the safety mechanism, or the lidar system itself may otherwise have a mechanism to detect an object within the pre-determined distance of the emitter module 115 and power down the emission power of the emitter array 115 sufficiently quickly in response. In some embodiments, the pre-determined distance range may be about 1 meter (m) or less.
The emission of optical signals from multiple emitters 115e (e.g., to illuminate the whole range of interest, over one or more detection windows) provides a single image frame for the flash LIDAR system 100. The maximum optical power output of the emitters 115e may be selected to generate a signal-to-noise ratio of the echo signal from the farthest, least reflective target at the brightest background illumination conditions that can be detected in accordance with embodiments described herein. An optional filter to control the emitted wavelengths of light and a diffuser 114 to increase the field of illumination of the emitter array 115 are illustrated by way of example.
Light emission output from one or more of the emitters 115e impinges on and is reflected by one or more targets 150, and the reflected light is detected as an optical signal (also referred to herein as a return signal, echo signal, or echo) by one or more of the detectors 110d (e.g., via receiver optics 112), converted into an electrical signal representation (referred to herein as a detection signal), and processed (e.g., based on time of flight) to define a 3-D point cloud representation 170 of the scene within the field of view 190. Operations of LIDAR systems in accordance with embodiments of the present invention as described herein may be performed by one or more processors or controllers, such as the control circuit 105 of
In some embodiments, a receiver/detector module or circuit 110 includes an array of detector pixels (with each detector pixel including one or more detectors 110d, e.g., SPADs), receiver optics 112 (e.g., one or more lenses to collect light over the FoV 190), and receiver electronics (including timing circuit 106) that are configured to power, enable, and disable all or parts of the detector array 110 and to provide timing signals thereto. The detector pixels can be activated or deactivated with at least nanosecond precision, and may be individually addressable, addressable by group, and/or globally addressable.
The receiver optics 112 may include a macro lens that is configured to collect light from the largest FoV that can be imaged by the lidar system, a spectral filter to pass or allow passage of a sufficiently high portion of the ‘signal’ light (i.e., light of wavelengths corresponding to those of the optical signals output from the emitters) but substantially reject or prevent passage of non-signal light (i.e., light of wavelengths different than the optical signals output from the emitters), microlenses to improve the collection efficiency of the detecting pixels, and/or anti-reflective coating to reduce or prevent detection of stray light. For example, the receiver optics 112 may include an imaging filter that passes most or substantially all the arriving echo signal photons, yet rejects (a majority of) ambient photons.
The detectors 110d of the detector array 110 are connected to the timing circuit 106. The timing circuit 106 may be phase-locked to the driver circuitry 116 of the emitter array 115. The sensitivity of each of the detectors 110d or of groups of detectors may be controlled. For example, when the detector elements include reverse-biased photodiodes, avalanche photodiodes (APDs), PIN diodes, and/or Geiger-mode avalanche diodes (i.e., single-photon avalanche diodes, or SPADs), the reverse bias may be adjusted, whereby the higher the overbias, the higher the sensitivity. The SPADs 110d of the detector array 110 may be discharged when the emitters 115e of the emitter array 115 fire, and may be (fully) recharged a short time after the emission of the optical pulse.
In some embodiments, at least one control circuit 105, such as a microcontroller or microprocessor, provides different emitter control signals to the driver circuitry 116 of different emitters 115e and/or provides different strobe signals to the timing circuitry 106 of different detectors 110d to vary the field of view of the lidar system 100 based on respective distance sub-ranges corresponding to respective strobe windows. Some embodiments described herein implement a time correlator, such that only pairs of (or more than two) avalanches detected within a pre-determined time are measured. In some embodiments, a measurement may include the addition of a fixed first charge (indicating a count value) onto a counting capacitor, as well as the addition of a second charge (which is a function of the arrival time) onto a time integrator. At the end of a frame, the control circuit(s) 105 may calculate the ratio of integrated time to number of arrivals, which is an estimate of the average time of arrival of photons for the detector pixel. The control circuit(s) 105 collect the point cloud data from the imager module (referred to herein as including the detector array 110 and accompanying processing circuitry), generating a 3D point cloud 170.
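The counting-capacitor and time-integrator readout described above amounts to estimating a mean photon arrival time as a ratio; the following is a numeric stand-in for that analog charge accumulation (illustrative only; the system described here accumulates charges on capacitors, not digital sums):

```python
def average_arrival_time_s(arrival_times_s):
    """Ratio of integrated arrival time (time-integrator charge) to the
    number of arrivals (counting-capacitor charge), read out per frame."""
    if not arrival_times_s:
        return None  # no correlated photon arrivals this frame
    integrated_time = sum(arrival_times_s)  # stand-in for the time integrator
    count = len(arrival_times_s)            # stand-in for the counting capacitor
    return integrated_time / count

print(average_arrival_time_s([10e-9, 12e-9, 11e-9]))  # ~1.1e-08 s
```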
An example of a control circuit 105 that generates emitter and/or detector control signals is shown in
Embodiments of the present invention may thus provide apparatus, systems, circuits, and methods of operation that can modify the field of view of a lidar system as a function of range. For example, a lidar system mounted in a car driving near a peak of a hill may set its field of view for downward portions of the forward range (i.e., negative vertical angles with respect to the horizon) to cover a longer distance range and may set its field of view for upward portions of the forward range (i.e., positive vertical angles with respect to the horizon) to cover a shorter distance range, while when the car is driving on a level road the lidar system may set its field of view for the forward range to be longer.
As a specific example for a system with a 400 meter (m) imaging distance range, during the strobes spanning the first 200 m of the distance range, larger subsets of the emitter units may be operated to illuminate a greater portion or the full FoV, while during the strobes spanning the next 200 m of the distance range a selected smaller subset of the emitter units may be operated to illuminate a lesser portion of the FoV, based on use case. Similarly, larger and smaller subsets of the receivers/detector pixels may be likewise operated to detect greater and lesser portions of the FoV for shorter and farther subranges of the distance range, respectively. Emission power levels and/or detection sensitivity levels may also be varied based on the distance subrange being imaged. For example, as each detector pixel may include multiple photodetectors (e.g., SPADs), the number of photodetectors enabled per detector pixel can be programmed to vary (for example, 1, 2, or 4 enabled photodetectors) on a strobe-by-strobe basis and in conjunction with the FoV settings for the respective distance subranges.
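A per-strobe plan of this kind can be sketched as a lookup table (all names and values here are hypothetical, chosen only to mirror the 400 m example above):

```python
# Hypothetical per-strobe configuration: wide FoV and fewer enabled SPADs
# per detector pixel for near sub-ranges; narrow FoV and more enabled
# SPADs for far sub-ranges.
STROBE_PLAN = [
    # (sub-range in m, emitter subset label, enabled SPADs per pixel)
    ((0, 200), "full_fov_emitters", 1),
    ((200, 400), "center_fov_emitters", 4),
]

def plan_for_subrange(distance_m):
    """Look up the emitter subset and per-pixel SPAD count for a distance."""
    for (near, far), emitters, spads in STROBE_PLAN:
        if near <= distance_m < far:
            return emitters, spads
    raise ValueError("distance outside the imaging distance range")

print(plan_for_subrange(350))  # ('center_fov_emitters', 4)
```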
As noted above, the field of view of the lidar system may include the intersection of the field of illumination of light or optical signals emitted from the emitters, the field of view over which light is detected by the receiver or detectors (also referred to as the field of detection), and the temporal detector strobe windows when or during which light is detected with reference to the emitted pulses of light.
A detector strobe window may refer to the respective durations of activation and deactivation of one or more detectors (e.g., responsive to respective strobe signals from a control circuit) over the temporal period or time between pulses of the emitter(s) (which may likewise be responsive to respective emitter control signals from a control circuit). The time between pulses (which defines a laser cycle or, more generally, the emitter pulse frequency) may be selected or may otherwise correspond to a desired imaging distance range for the lidar system. Each strobe window may be differently delayed relative to the emitter pulses, and thus may correspond to a respective portion or subrange of the distance range. Each strobe window may correspond to a respective image acquisition subframe (or, more particularly, point cloud acquisition subframe; generally referred to herein as a subframe) of an image frame. A subframe may collect data responsive to multiple emitter pulses; for example, there may be about 500, 1000, 2000, or 2500 laser cycles in each subframe. Each subframe may thus represent data collected for a corresponding distance sub-range over multiple laser cycles. A strobe window readout operation may be performed at the end of each subframe, with multiple subframes (each corresponding to a respective strobe window) making up each image frame (for example, 2, 5, 10, 15, 20, 25, 30, or 50 subframes in each frame). That is, each image frame includes a plurality of subframes, each of the subframes samples or collects data for a respective strobe window over the temporal period, and each strobe window covers or corresponds to a respective distance subrange of the distance range. Range measurements and strobe window subrange correspondence as described herein are based on the time of flight of an emitted pulse. Some strobing techniques (e.g., as described in United States Patent Application Publication No. 
2017/0248796) may determine distance based on the strobe window during which an echo is received.
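The time-of-flight bookkeeping that maps strobe windows to distance subranges can be sketched as follows. The round trip to a target at distance d takes 2d/c, so each strobe window is delayed from the emitter pulse by the round-trip time to its subrange's near edge. The function name and the equal-subrange split are illustrative assumptions:

```python
# Illustrative mapping of strobe windows to distance subranges: split
# the imaging distance range into equal subranges and compute each
# strobe window's delay and duration from the round-trip time of flight.

C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def strobe_windows(max_range_m, n_subframes):
    """Split [0, max_range_m] into n_subframes equal distance subranges
    and return each strobe window's (delay_ns, width_ns) relative to
    the emitter pulse."""
    step = max_range_m / n_subframes
    windows = []
    for i in range(n_subframes):
        near = i * step
        delay_ns = 2 * near / C_M_PER_S * 1e9   # round trip to near edge
        width_ns = 2 * step / C_M_PER_S * 1e9   # round trip across subrange
        windows.append((delay_ns, width_ns))
    return windows
```

For a 400 m range split into 10 subframes, each strobe window spans the round-trip time of a 40 m subrange (roughly 267 ns), and successive windows are delayed by that same amount.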
As noted above, the emitter pulse frequency of a lidar system may be selected or may otherwise correspond to the desired imaging distance range. For example, as shown in the timing diagram of
In the example of
In
Still referring to
More generally, while two programmable ROI configurations are provided in the examples of
As shown in
In some embodiments, a first subset of the emitters may likewise be operated for the first subset of the strobe windows to illuminate a first ROI 801, and a second subset of the emitters may be operated for the second subset of the strobe windows to illuminate a second ROI 802. In addition, for some strobe windows of the first subset of the strobe windows (e.g., for strobe windows 810-1 to 810-2), a first subset (910d1 in
While described above primarily with reference to addressing operations for detector arrays 110 to provide FoVs that vary with respective strobe windows/distance sub-ranges, embodiments of the present invention may include similar addressing operations for emitter arrays 115 to provide fields of illumination that vary with respective strobe windows/distance sub-ranges.
For example, as similarly described above with reference to the detector array 110, the field of illumination and power density of the emitter array 115 can be changed or varied on a per-strobe basis, allowing the available power to be controlled and redirected toward the region of interest to increase system efficiency. In particular, as shown in
As shown in
In other words, the field of illumination of the emitter array 115 can be varied by providing a wider ROI pattern 1001 for strobe windows corresponding to first distance sub-ranges (e.g., for closer distances) and providing a narrower ROI pattern 1002a or 1002b for strobe windows corresponding to second distance sub-ranges (e.g., for farther distances) relative to the lidar system. The power output of the emitter array 115 can also be varied for the respective ROI patterns 1001, 1002a, 1002b, for example, by controlling the peak current driven to the emitters 115e (e.g., with greater peak current to achieve the narrower ROI pattern 1002a as compared to the narrower ROI pattern 1002b) and/or by activating fewer or more emitters 115e in the array 115.
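The two power-variation knobs just described (peak drive current and number of active emitters) can be sketched per ROI pattern as below. The function name, current values, and emitter counts are illustrative assumptions, not from the specification:

```python
# Illustrative per-ROI emitter drive settings: output power can be
# raised by driving a higher peak current per emitter and/or by
# activating more emitters. Pattern labels loosely follow the wider
# pattern 1001 and narrower patterns 1002a/1002b discussed above.

def emitter_drive(roi, base_current_a=0.5, n_emitters=1024):
    if roi == "wide":           # wider pattern: close subranges, full FoV
        return {"active_emitters": n_emitters,
                "peak_current_a": base_current_a}
    if roi == "narrow_high":    # narrower pattern, greater peak current
        return {"active_emitters": n_emitters // 4,
                "peak_current_a": base_current_a * 2}
    if roi == "narrow_low":     # narrower pattern, base peak current
        return {"active_emitters": n_emitters // 4,
                "peak_current_a": base_current_a}
    raise ValueError(f"unknown ROI pattern: {roi}")
```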
That is, as shown in the examples of
Further example operations of lidar systems to provide illumination and detection over FoVs with different ROI patterns are described below with reference to
Also, as noted above with reference to
The durations of activation of the SPADs 110d may be equal or unequal. Also, the density of activated SPADs 110d may be scaled such that a sensitivity of the detector array 110 (and/or portions thereof) is varied or optimized for each distance sub-range, e.g., by activating subsets of the SPADs 110d. For example, as noted above with reference to
It will be understood that emitters and/or detectors that are configured to operate according to the examples described herein operate based on respective control signals (such as emitter control signals and detector strobe signals) generated by one or more associated control circuits, such as a sequencer circuit that may coordinate operation of the emitter array and detector array. That is, the respective control signals may be configured to control temporal and/or spatial operation of individual emitter elements of the emitter array and/or individual detector elements of the detector array to provide functionality as described herein.
Embodiments of the present invention may be used in conjunction with operations for varying the number or rate of readouts based on detection thresholds, as described for example in U.S. patent application Ser. No. 16/733,463 entitled “High Dynamic Range Direct Time of Flight Sensor with Signal-Dependent Effective Readout Rate” filed Jan. 3, 2020, the disclosure of which is incorporated by reference herein. For example, a power level of the emitter signals may be reduced in response to one or more readouts that are based on fewer cycles of the emitter signals (indicating a closer and/or more reflective target), or the power level of the emitter signals may be increased in response to one or more readouts that are based on more cycles of the emitter signals (indicating farther and/or less reflective targets).
Likewise, a smaller subset of the detector elements or detector pixels may be activated (e.g., in response to respective strobe signals) in response to one or more readouts that are based on fewer cycles of the emitter signal (indicating a closer and/or more reflective target), or a larger subset of the detector elements or detector pixels may be activated in response to one or more readouts that are based on more cycles of the emitter signal (indicating farther and/or less reflective targets).
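The signal-dependent adaptation described in the two paragraphs above can be sketched as a simple feedback rule. The thresholds, scale factors, and clamping bounds here are illustrative assumptions, not from the specification or the referenced application:

```python
# Illustrative adaptation rule: a readout completed after few emitter
# cycles implies a close and/or more reflective target, so emission
# power and the active detector subset are scaled down; a readout
# requiring many cycles implies a farther and/or less reflective
# target, so both are scaled up.

def adapt(cycles_used, power, detector_fraction,
          low_cycles=200, high_cycles=1500):
    if cycles_used < low_cycles:          # strong echo: reduce power draw
        power *= 0.5
        detector_fraction = max(0.25, detector_fraction * 0.5)
    elif cycles_used > high_cycles:       # weak echo: boost signal
        power = min(1.0, power * 2)
        detector_fraction = min(1.0, detector_fraction * 2)
    return power, detector_fraction       # unchanged in the middle band
```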
Lidar systems and arrays described herein may be applied to ADAS (Advanced Driver Assistance Systems), autonomous vehicles, UAVs (unmanned aerial vehicles), industrial automation, robotics, biometrics, modeling, augmented and virtual reality, 3D mapping, and security. In some embodiments, the emitter elements of the emitter array may be VCSELs. In some embodiments, the emitter array may include a non-native (e.g., curved or flexible) substrate having thousands of discrete emitter elements electrically connected in series and/or parallel thereon, with the driver circuit implemented by driver transistors integrated on the non-native substrate adjacent respective rows and/or columns of the emitter array, as described for example in U.S. Patent Application Publication No. 2018/0301872 to Burroughs et al., the disclosure of which is incorporated by reference herein.
Various embodiments have been described herein with reference to the accompanying drawings in which example embodiments are shown. These embodiments may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the inventive concept to those skilled in the art. Various modifications to the example embodiments and the generic principles and features described herein will be readily apparent. In the drawings, the sizes and relative sizes of layers and regions are not shown to scale, and in some instances may be exaggerated for clarity.
The example embodiments are mainly described in terms of particular methods and devices provided in particular implementations. However, the methods and devices may operate effectively in other implementations. Phrases such as “example embodiment”, “one embodiment” and “another embodiment” may refer to the same or different embodiments as well as to multiple embodiments. The embodiments will be described with respect to systems and/or devices having certain components. However, the systems and/or devices may include fewer or additional components than those shown, and variations in the arrangement and type of the components may be made without departing from the scope of the inventive concepts.
The example embodiments will also be described in the context of particular methods having certain steps or operations. However, the methods and devices may operate effectively for other methods having different and/or additional steps/operations and steps/operations in different orders that are not inconsistent with the example embodiments. Thus, the present inventive concepts are not intended to be limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features described herein.
It will be understood that when an element is referred to or illustrated as being “on,” “connected,” or “coupled” to another element, it can be directly on, connected, or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly on,” “directly connected,” or “directly coupled” to another element, there are no intervening elements present.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present invention.
Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the Figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. The exemplary term “lower” can, therefore, encompass both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The exemplary terms “below” or “beneath” can, therefore, encompass both an orientation of above and below.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “include,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments of the invention are described herein with reference to illustrations that are schematic illustrations of idealized embodiments (and intermediate structures) of the invention. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to limit the scope of the invention.
Unless otherwise defined, all terms used in disclosing embodiments of the invention, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, and are not necessarily limited to the specific definitions known at the time of the present invention being described. Accordingly, these terms can include equivalent terms that are created after such time. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the present specification and in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entireties.
Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, the present specification, including the drawings, shall be construed to constitute a complete written description of all combinations and subcombinations of the embodiments of the present invention described herein, and of the manner and process of making and using them, and shall support claims to any such combination or subcombination.
Although the invention has been described herein with reference to various embodiments, it will be appreciated that further variations and modifications may be made within the scope and spirit of the principles of the invention. Although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation, the scope of the present invention being set forth in the following claims.
This application claims priority under 35 U.S.C. 119 from U.S. Provisional Patent Application No. 62/908,801 entitled “Strobe Based Configurable 3D Field of View LIDAR System,” filed Oct. 1, 2019, with the United States Patent and Trademark Office, the disclosure of which is incorporated by reference herein in its entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/053444 | 9/30/2020 | WO |
Number | Date | Country
---|---|---
62908801 | Oct 2019 | US